Understanding Optimum Link Growth

Link Growth on The Intertubes
Let’s talk about the subject of link growth. For the context of this conversation (and by that, I mean one-way lecture), I am assuming that everyone defines link growth as the rate at which a domain as a whole, and its specific pages, gain new backlinks. More importantly, it covers how quickly search engines discover and “count” these backlinks.

I’ve blogged about link velocity before and generally summarised that it was, of course, a factor in how well your website ranks. However, as with most SEO topics, the devil is in the detail and there are a lot of myths about the detail. So I would like to discuss:

1) What signals do “good” links and “not-so-good” links give to your website?

2) How do domain age and your current backlink count play a part in determining your “optimal” link velocity?

3) Can you be harmed by incoming links?

These are what I believe are some of the most important (definitely not all) factors contributing to link growth/velocity. As I want to have this blog post finished in a short time, I’m going to try and stick to these three core points, although I’m sure I’ll end up going off on a tangent like I usually do. If, however, you think I’ve missed something critical, drop me a comment and I’ll see if I can do a follow-up.

The difference between trust & popularity
When talking about links, it’s important to realize that there is a world of difference between a signal of trust and a signal of popularity. They are not mutually exclusive and to rank competitively, you’ll need signals of both trust and popularity, but for now, realizing they are different is enough.

For instance: Michael Jackson is still (apparently) very popular, but you wouldn’t trust him to babysit your kids, would you? The guy down the road in your new neighborhood might be the most popular guy in your street, but you’re not going to trust him until someone you know well gives him the thumbs up.

So for your site to rank well, Google needs to have a degree of trust in it (e.g. the source of incoming links, domain age, site footprints) to ensure you’re not just another piece of two-bit web-scum, and it needs to know your content is popular (i.e. good content, link velocity, types of links). As I’ve already said, I’m not going to get into a drawn-out debate about content here; I’m just looking at links.

What comes first, trust or popularity?
It doesn’t really make much logical sense that you could launch a website with no fanfare and immediately get a stream of hundreds of low-quality links every week.

This sits well with the original plan of the PageRank algorithm which, let’s not forget, was (originally) trying to calculate the chance that a random surfer clicking around the web would bump into your site. This notion of a random surfer clicking random links gave Google an excellent abstraction from which to work out the whole “page authority” concept that the lion’s share of their algorithm sprang from.
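
To make the random-surfer idea concrete, here’s a minimal sketch of the original PageRank iteration in Python. The toy graph, names and numbers are my own illustration, not anything lifted from Google:

```python
# A minimal sketch of the original PageRank iteration over a toy link
# graph. The damping factor is the classic probability (~0.85) that the
# random surfer keeps clicking rather than jumping to a random page.
def pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping each page to the list of pages it links to."""
    pages = list(graph)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Everyone gets the baseline "random jump" share...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        # ...plus a slice of rank from every page that links to them.
        for page, outlinks in graph.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy web: everything links to "bbc", so the random surfer lands there
# far more often than on the freshly launched site.
toy_web = {
    "bbc": ["news_story"],
    "news_story": ["bbc"],
    "small_blog": ["bbc", "new_site"],
    "new_site": ["bbc"],
}
print(pagerank(toy_web))
```

Run it and the heavily linked-to node soaks up most of the rank, which is exactly the “quality link” effect discussed below: more virtual surfers on a page means more chance one of them clicks your link.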

Nowadays, you’ll hear lots of people trumpeting going after quality (i.e. high-PR links) rather than lots of “low quality” (low-PR) links while trying to remain relevant. From the algorithm-origins point of view, the higher-authority pages simply have more of these virtual random surfers landing on them, so there is more chance of a random surfer clicking your link.

Looking back at “time zero”, when PageRank first started to propagate around the web, apart from internal PR stacking all sites were equal, so PageRank was actually collected through raw numbers of links rather than this “quality” (high-PR) angle, which is really just a cumulative effect of the PageRank algorithm (at least in its original form).

Hopefully, you’re still with me and not bored of going over fundamentals, but without this level of understanding, you’ll have a job getting your head around the more advanced concepts of link growth. Keep in mind that I’m talking about pure PageRank in its original form (I’m sure it’s no longer used by Google at this point, but rather Page Authority); I’m not talking about ranking factors as a whole. To be honest, when I’m ranking websites (which I’m pretty good at), Page Authority normally plays a small role in my decision making; it is, however, useful as an abstract concept when planning linking strategies.

The point I’ve been alluding to here is that for Google to buy into the fact that, yes, your site is getting lots of natural “run of the mill” links, you will first need links from higher Page Authority pages. This line of thinking is, of course, assuming you don’t use a product like Google Analytics – (“Googlebot: Hmm, 58 visitors per month and 1,200 new incoming links per month, makes perfect sense!”).

Google is also pretty good at identifying “types” of websites and marrying this up to trust relationships. For instance, I think most people would like a link from the homepage of the BBC News website: it has whopping high Page Authority and bucketloads of trust. Here’s a question though: is it a “relevant” link? The BBC News website covers a massive variety of topics, as most news sites do, so what is relevant and what is not depends pretty much on the story, and the stories cover all topics. Does a link from the BBC News site mean your site is “popular”? No (although it might make it so). Here’s an excellent question to ask yourself: of these two scenarios, which is more believable:

1) Brand new site launched :: Couple of links from small blogs :: Gets 2,000 links in first month

2) Brand new site launched :: 1 link from BBC News homepage :: Gets 2,000 links in first month

Of course, you’ve hopefully identified situation 2 as the far more likely candidate. Let’s consider what Google “knows” about the BBC website:

Googlebot says:

1) I know it’s a news website (varied topics)

2) I know millions of other sites link to it (it’s incredibly popular)

3) Lots of people reference deep pages (the content is of great quality)

4) I see new content hourly as well as all the syndicated content I’m tracking (Fresh – as a news site should be)

5) It’s been around for years and never tried to trick me (another indicator of trust)

6) If they link to somebody, they are likely to send them lots of traffic (PR)

7) If they link to somebody, I can pretty much be sure I can trust the person they link to

Despite its critics, I’m a big believer in (at least some kind of) TrustRank system. It makes perfect sense, and if you haven’t read the PDF, it’s very much worth doing so. In a hat tip to the critics: it is incredibly hard to prove. Because of the dynamic nature of the web, it is almost impossible to separate the effects of Page Authority, relevance, timing, content and the myriad of other glossary terms you could throw at any argument. However, without leaps of faith no progress would be made, as we’re all building on theory here.
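
For the curious, the core idea of the TrustRank paper is simple enough to sketch in a few lines. This is my own toy illustration of seed-biased propagation, with an invented graph, seed set and decay value; it is not Google’s implementation:

```python
# A toy sketch of TrustRank-style propagation: trust originates only
# from a hand-picked seed set of trusted sites and decays with each
# hop, so pages linked from trusted pages inherit a dampened slice.
def trustrank(graph, seeds, decay=0.85, iterations=50):
    """graph: dict page -> pages it links to; seeds: set of trusted pages."""
    # The reset vector is concentrated on the seeds, not spread uniformly.
    base = {page: (1.0 / len(seeds) if page in seeds else 0.0) for page in graph}
    trust = dict(base)
    for _ in range(iterations):
        new_trust = {page: (1.0 - decay) * base[page] for page in graph}
        for page, outlinks in graph.items():
            for target in outlinks:
                new_trust[target] += decay * trust[page] / len(outlinks)
        trust = new_trust
    return trust

toy_web = {
    "bbc": ["new_site"],        # a trusted seed linking out passes trust on
    "new_site": ["bbc"],
    "spam_blog": ["new_site"],  # a link from an untrusted page passes nothing
}
print(trustrank(toy_web, seeds={"bbc"}))
```

The only real difference from vanilla PageRank is that the reset weight is concentrated on a hand-picked seed set, so trust can only originate from the seeds and fades with every hop away from them.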

Side Note: While I’m talking about experimentation and proof, I’m still chipping away at my SEO Ranking Factors project (albeit more slowly than I’d like) and I’ll be willing to share some scripts for “tracking TrustRank” in the new year – dead useful stuff.

Okay, the point I’m making here is that these high trust/authority sites (whatever you want to call them) are a stepping stone to greater things. I would agree with the whitehat doctrine that, yes (if it’s your own domain at least), you will require links from these sources if you are to rank well in the future. We’ll look at some examples of how to rank without those links later (:

Trust needs to come before mass popularity, and there are other things you may want to consider apart from just scanning websites and looking for as much green bar as possible. There are other mechanisms too, although I don’t believe Google is using them to the full extent it should (even when it plays around with that WikiSearch – must not get started on that).

So, looking at it from a Wikinomics angle: being on the front page of Digg, being popular in Stumble and having lots of Delicious bookmarks could all be signals of trust as well as popularity, although they are less trustworthy because (at the moment at least) they are easier to game. I would expect that, before Google can use these types of signals as strong search factors, there will need to be more accountability (i.e. a mass information empire) for user accounts. This is perhaps one of the things that could make WikiSearch work: being linked to your Google Account, Google can see if you use Gmail, search, docs, video, Blogger, analytics, the list goes on. It’s going to be much harder to create “fake” accounts to boost your popularity.

Domain age and link profiles
Domain age definitely has its foot in the door in terms of ranking; however, having an old domain doesn’t give you a laminated backstage pass to Google rankings. The most sense you’re going to get out of looking at domain age comes from overlaying it with a link growth profile, which is essentially the time aspect of your link building operation.

Your natural link growth should follow an obvious logical curve when averaged out, probably something like this:

[Graph: natural link growth over time – links gained per week rising steadily]

The graph roughly shows that during natural (normalised) organic growth, the number of links you gain per day/week/month will increase (your link velocity goes up). This is an effect of natural link growth, discovery and more visitors to your site. Even if you excuse my horrific graph-drawing skills, the graph is pretty simplified.
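
If you want to play with the shape of that curve, here’s a quick sketch. I’m assuming logistic-style growth purely for illustration; the ceiling, midpoint and steepness values are invented numbers, not measurements:

```python
import math

def weekly_links(week, ceiling=500.0, midpoint=52.0, steepness=0.12):
    """Hypothetical new-links-per-week at a given week of the site's life."""
    return ceiling / (1.0 + math.exp(-steepness * (week - midpoint)))

# Velocity creeps up slowly at first, then compounds as discovery and
# traffic snowball, before levelling off as the site matures.
for week in (1, 13, 26, 52, 104):
    print(f"week {week:>3}: ~{weekly_links(week):.0f} new links/week")
```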

How does this fit into link growth then?
I’ll be bold and make a couple of statements:

1) When you have established trust, even the crappiest of crap links will help you rank (proof to come)

2) The more trustage (that’s my new term for trust over time/age), the greater the “buffer” you have for building links quickly

Which brings us to two further conclusions:

3) Straying outside of this “buffer zone” (i.e. 15,000 low-quality new links in week 1) can see you penalized.

4) If you’ve got great trust, you can really improve your rankings just by hammering any crap links you like at the site.

So, going along with my crap-o-matic graphs:

[Graph: the link-building “buffer zone” widening over time alongside natural link growth]

As I’ve crudely tried to demonstrate in graphical form, your “buffer zone” for links increases almost on a log scale, along with your natural links. Once you’ve established decent domain authority, it’s pretty much fair game with links, within reason.
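
To put the buffer-zone idea in pseudo-practical terms, here’s an illustrative sketch (my own invention, not anything Google has published): treat the tolerated link velocity as your natural velocity multiplied by a factor that grows with trustage, and flag anything that blows past it.

```python
import math

def buffer_limit(natural_links_per_week, trust_age_weeks):
    """Hypothetical max new links/week before a spike looks unnatural."""
    multiplier = 1.0 + math.log1p(trust_age_weeks)  # grows on a rough log scale
    return natural_links_per_week * multiplier

def looks_suspicious(new_links_this_week, natural_links_per_week, trust_age_weeks):
    return new_links_this_week > buffer_limit(natural_links_per_week, trust_age_weeks)

# A week-old site hit with 15,000 links trips the alarm; an aged,
# heavily linked site can absorb the same spike inside its buffer.
print(looks_suspicious(15_000, 20, 1))       # True: far outside the buffer
print(looks_suspicious(15_000, 5_000, 260))  # False: within the buffer
```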

I s’pose you’re going to want some proof for all these wild claims, aren’t you?

Can incoming links harm your website?
The logical answer to this would be “no”. Why would Google have a system in place that penalizes you for bad incoming links? If Google did this, they would actually make their job of ranking decent pages much harder, with SEOs focusing on damaging the competition, rather than working on their own sites. It would be a nightmare, with a whole sub-economy of competitor disruption springing up.

That’s the logical answer. Unfortunately, the correct answer is yes. I’ll say it again for the scan readers:

It is possible to damage the rankings of other websites with incoming links

Quote me if you like.

Now, by “bad links” I don’t mean the local blackhat viagra site linking to you; that will most likely have absolutely no effect whatsoever. The kinds of sites Google classes as a “bad neighborhood” can’t spread their filth just by linking to you, let’s be clear on that. You’re more at risk if someone tricks you into linking to a bad site with some kind of Jedi mind trick.

There are two ways I’ve seen websites’ rankings damaged by incoming links:

1) Hopefully, this one is obvious. I experienced this myself after registering a new domain, putting a site up 2 days later – which ranked great for the first couple of weeks. Then, well… I “accidentally” built 15,000 links to it in a single day. Whoops. I never saw that site in the top 100 again.

2) There is a reliable method to knock pages out of the index, which I’ve done (only once) and seen others do many, many times. Basically, you’re not using “bad” links as such; by this I mean not links from dodgy/blackhat or banned sites, but links from normal sites. Say, for instance, you find a sub-page of a website ranking for a term like “elvis t-shirts” (a random term; I don’t even know what the SERPs are for it) with 500 incoming links to that page. If you get some nice scripts and programs (I won’t open Pandora’s Box here; if you know what I’m talking about then great) and drop 50,000 links over a 2-week period with the anchor text “buy viagra”, you’ll find, quite magically, that you have totally screwed Google’s relevancy for that page.

I’ve seen pages absolutely destroyed by this technique, going from the 1st page to not ranking in the top 500 – inside of a week. Pretty powerful stuff. You’ll struggle with root domains (homepages), but sub-pages can drop like flies without too much problem. Obviously, the younger the site, the easier this technique is to achieve.

You said you could just rank with shoddy links?
Absolutely true. Once you’ve got domain authority, it’s pretty easy to rank with any type of link you can get your hands on, which means blackhat scripts and programs come in very useful. To see this in effect, all you have to do is keep your eye on the blackhat SERPs. “Buy Viagra” is always a good search term for seeing what the BHs are up to. It is pretty common to see Bebo pages, Don’t Stay In pages, or the myriad of other authoritative domains with User Generated Content ranking in the top 10 for “Buy Viagra”. If you check out the backlink profiles of these pages you will see, surprise, surprise, they are utter crap: low-quality links.

The domains already have trust and authority – all they need is popularity to rank.

Trust & Popularity are two totally different signals.

Which does your site need?

We have learned:

1) You can damage sites with incoming links

2) Trust & Popularity are two totally different things – don’t just clump them all in as “Page Authority”

3) You can rank pages on authority domains with pure crap spam links.

I hope you found this discussion insightful. Thank you for taking the time to read it in full.

 
