How to factor trust into your link building strategy (without ripping it up and starting again)

Back in April, Bill Slawski covered a patent apparently updating PageRank, the calculation Google originally used to rank its results (and, as far as we know, still a major feature of the algorithm). Bill notes that the update bears a strong resemblance to the TrustRank patent granted to Yahoo! in 2004…as did the previous continuation patent, granted in 2006.

The MozTrust metric did a great job of illustrating what Yahoo! tried to do with TrustRank:

We determine MozTrust by calculating link “distance” between a given page and a “seed” site — a specific, known trust source (website) on the Internet. Think of this like six degrees of separation: The closer you are linked to a trusted website, the more trust you have.

[Image: how MozTrust works]

Using both MozTrust and Majestic’s equivalent TrustFlow metric to assess offsite factors, we recently found that trust generally correlated better with ranking improvements than relevance did – though we’ve had a strong indication of this for a while, and it seems likely that Google has been using some variant of “TrustRank” to value pages for at least a decade (again, see Bill’s previous post).

Moz’s Domain Authority metric is effectively a combination of MozRank (a proxy for PageRank) and MozTrust, so to use that metric to illustrate what seems to be happening: MozRank (authority) should be a slightly less significant aspect of Domain Authority, with MozTrust taking on greater importance. We’ve always been fonder of the MozTrust metric than Domain Authority because it’s harder to game – we’re taking Moz’s interpretation of which sites are trustworthy on faith, but the crawler can easily determine whether a site has links from those sites or not (and trusted sites are much less likely to be added to a disavow file, which skews Domain Authority massively).
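
To illustrate that relationship, here’s a toy sketch. Moz doesn’t publish how Domain Authority is actually calculated – the blend and the weights below are invented purely to show the shift in emphasis:

```python
# Toy illustration only: Moz doesn't publish Domain Authority's real
# formula, and trust_weight is an invented parameter.
def domain_authority(mozrank: float, moztrust: float,
                     trust_weight: float = 0.6) -> float:
    """Blend a PageRank-style authority score with a trust score.

    The argument above amounts to saying trust_weight is effectively
    rising: the same MozRank now buys less Domain Authority.
    """
    return (1 - trust_weight) * mozrank + trust_weight * moztrust

# Two sites with identical raw link authority but different trust:
print(domain_authority(mozrank=6.0, moztrust=8.0))  # 7.2 - well trusted
print(domain_authority(mozrank=6.0, moztrust=3.0))  # 4.2 - easily gamed links
```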

The patent Bill wrote about, titled “Producing a ranking for pages using distances in a web-link graph”, references many of the central components of MozTrust and shows how the distance (i.e. the number of linking “hops”) between a seed site and your website would be factored in when Google calculates search rankings. As with most patents, it isn’t clear whether the exact techniques outlined are in use (or when they came into use), but there are strong indications that this “TrustRank” approach is in use – or soon will be.
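
To make the “number of linking hops” idea concrete, here’s a minimal sketch of seed distance over a toy link graph. The domains and the seed set are invented for illustration – the patent describes a far more involved shortest-distance computation over the full web-link graph:

```python
from collections import deque

# Toy link graph: each key links out to the listed sites. All domains
# here are invented for illustration - this is not Google's seed set,
# which has never been published.
LINK_GRAPH = {
    "seed-university.edu": ["respected-news.com", "industry-blog.com"],
    "respected-news.com": ["your-site.com", "industry-blog.com"],
    "industry-blog.com": ["your-site.com", "spammy-directory.biz"],
    "spammy-directory.biz": ["your-site.com"],
    "your-site.com": [],
}

SEED_SITES = {"seed-university.edu"}  # assumed trusted seed(s)

def seed_distance(target: str) -> float:
    """Breadth-first search for the shortest number of linking "hops"
    from any seed site to the target - fewer hops implies more trust
    under the TrustRank-style model described above."""
    queue = deque((seed, 0) for seed in SEED_SITES)
    visited = set(SEED_SITES)
    while queue:
        site, hops = queue.popleft()
        if site == target:
            return hops
        for linked in LINK_GRAPH.get(site, []):
            if linked not in visited:
                visited.add(linked)
                queue.append((linked, hops + 1))
    return float("inf")  # no path from any seed: no trust flows

# your-site.com is two hops from the seed
# (seed-university.edu -> respected-news.com -> your-site.com).
print(seed_distance("your-site.com"))
```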

Why using trusted “seed” sites makes sense

There’s a reason it’s called the web: linking chains aren’t linear and no two websites’ link profiles look the same.

[Image: how the link graph looks when trusted seed pages are separated from non-seed pages – via SEO by the Sea]

PageRank “flows” through paid and earned links equally, and anchor text also seems to apply equally to both. It’s likely that devalued links – backlinks from sites Google has determined to be selling links, or to be spammy in some other way – aren’t dialled down link by link: the links don’t pass less PageRank, and the effects of their anchor text aren’t “switched off” individually. Rather, Google simply decides not to trust those sites anymore.

The Beginner’s Guide to SEO encourages us to think about linking as voting for sites we approve of: in this scenario nobody is prevented from casting their ballot, but votes that have been paid for are thrown out as fraudulent and never counted. The fraud will sometimes go undiscovered for months (even years), but once it’s found out, that site’s vote will never be counted again…regardless of whether its future votes are uninfluenced by money.

[Image: TrustRank/MozTrust illustration]

I’d strongly recommend deciding (and writing down) your policies around building links on sites that you suspect of selling links – this will save you time in the long run. Consider adding some steps to your QA process:

  1. Use Bing’s linkfromdomain: search operator (e.g. linkfromdomain:moz.com) to see every website your link target already links to. Are there linked brands that don’t seem to fit the theme of the website? A few quick site: searches will show you how natural those posts actually are.
  2. Read the 10 most recent posts at a minimum. You’re looking for anchor text that doesn’t seem like it should be there; mentions of brands where a competitor could easily be substituted (e.g. “credit card providers like [company name]”); and disclaimers (“sponsored post”, “guest post”, “advertorial post”, “in collaboration/partnership with”). If any of these show up, put the site on a list of sites not to contact in future.
  3. Keep your do-not-contact list in a separate tab from your hitlist and use VLOOKUP to make sure you’re not contacting someone you previously decided not to (a scripted version of this check is sketched after this list). We’ve turned this into a Chrome extension to save time: when we visit a website we know whether we’ve previously worked with a journalist or blogger via the site, who contacted them (and therefore who already has a relationship), and what the content was about – or whether we’ve decided not to work with the site (or added it to a client’s disavow file).
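
Here’s a minimal scripted version of that VLOOKUP check, assuming two one-column CSV exports – the file names and the “domain” column header are placeholders for whatever your own sheets use:

```python
import csv

HITLIST_CSV = "hitlist.csv"                # placeholder file name
DO_NOT_CONTACT_CSV = "do_not_contact.csv"  # placeholder file name

def load_domains(path: str) -> set:
    """Read a CSV with a 'domain' column into a normalised set."""
    with open(path, newline="") as f:
        return {row["domain"].strip().lower() for row in csv.DictReader(f)}

hitlist = load_domains(HITLIST_CSV)
blocked = load_domains(DO_NOT_CONTACT_CSV)

# Anything on both lists should come off the hitlist before outreach starts.
for domain in sorted(hitlist & blocked):
    print(f"Remove from hitlist: {domain}")
```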

If you’ve made contact with a site owner for the first time, they’ve shown interest in the brand and/or content you’d like them to talk about, but now they’re asking for payment – should you walk away? Not necessarily. This call should be made on a case-by-case basis, if only because – assuming you followed a QA process to identify the site and it appeared natural enough in the first place – you don’t want to waste the time already invested. Again, this is where a written process or policy comes in handy: your reply should say that buying links is against your company’s policy (and therefore you couldn’t do it even if you wanted to), but also highlight Google’s policy on link selling. No brand wants to be outed for buying links, since that often leads to manual actions; but no blogger wants to be outed for selling them either, because that would ultimately destroy their source of income (that doesn’t mean your reply should sound threatening!). In my experience around one link in five gets placed for free where payment was initially asked for – I’ll walk away from the other four and place those sites on the do-not-contact list.

Private Blog Networks (PBNs) are gaining in popularity once again because they work – but it’s no coincidence that most of the rhetoric still concerns how to keep them hidden. Between 2012 and 2014, links from a PBN to your website that were uncovered by Google’s Webspam team would usually result in a manual action or algorithmic penalty; it’s since become more likely that those links will simply stop counting (become “devalued”) and your rankings will fall over time. That’s not to say that manual actions are unheard of in 2018, but it’s clear that some brands are getting away with it.

Although Google has previously claimed that it doesn’t use data from disavow files uploaded to Google Search Console to determine rankings, it would provide a way to crowdsource large numbers of seed sites. From the patent’s abstract:

Generally, it is desirable to use a large number of seed pages to accommodate the different languages and a wide range of fields which are contained in the fast growing web contents. Unfortunately, this variation of PageRank requires solving the entire system for each seed separately. Hence, as the number of seed pages increases, the complexity of computation increases linearly, thereby limiting the number of seeds that can be practically used.

Through the disavow links tool Google has access to lists of websites that SEOs don’t think are trustworthy (often because they paid for links from those sites). As Marie Haynes pointed out last year, Penguin is not a machine learning algorithm – but it’s not beyond the realms of possibility that Google is using machine learning to determine which sites can be trusted from disavow file data.
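
To be clear, this is speculation rather than anything Google has confirmed, but a crowdsourced distrust signal could be as simple as counting how often a domain is named across many disavow files. A toy sketch, with invented file contents and an arbitrary threshold:

```python
from collections import Counter

# Invented examples of uploaded disavow files. The real format uses
# "domain:example.com" lines for whole-domain disavows and plain URLs
# for single pages.
disavow_files = [
    ["domain:spammy-directory.biz", "domain:paid-links.net"],
    ["domain:spammy-directory.biz", "http://one-bad-page.com/post"],
    ["domain:spammy-directory.biz"],
]

DISTRUST_THRESHOLD = 2  # arbitrary: named by at least two site owners

counts = Counter(
    line.removeprefix("domain:")
    for file_lines in disavow_files
    for line in file_lines
    if line.startswith("domain:")  # count whole-domain disavows only
)

distrusted = {domain for domain, n in counts.items() if n >= DISTRUST_THRESHOLD}
print(distrusted)  # -> {'spammy-directory.biz'}
```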

How this changes our link acquisition tactics

Any updates Google makes to its algorithms shouldn’t affect the actual process of outreach: we’re still talking to humans and link building is about relationships and mutual benefit (and a completely different kind of trust).

What should change is how we choose our targets.

Links that benefit us most come from seed sites (or sites with links from seed sites). We obviously don’t really know which websites Google trusts but there’s a good chance that websites with a high MozTrust score are trusted by Google too – and they’re more likely to be linked to from trusted seed sites, even if they don’t fall into that category themselves.

Metrics aside, it should be pretty obvious which websites are the most trusted on the web: universities and government organisations (the SEO industry has long been clear that .gov and .edu links are more valuable), plus the national and international press and sites with extremely high readerships. That doesn’t mean that highly trafficked sites whose contributors frequently offer to sell links (Forbes, Huffington Post etc.) are well trusted – they probably aren’t.

Other link targets that should definitely be at the top of your priority list are any publishing sites which compete with your own website in search results – tech companies absolutely want links from techcrunch.com, not just because they’re relevant, but because they’re clearly trusted enough to rank competitively for terms we also want to rank for. The “Producing a ranking for pages using distances in a web-link graph” patent also references the importance of diversity in the themes covered by seed sites – links from trusted sites outside your industry or country are still likely to pass a significant amount of trust and therefore benefit your search rankings.

Ultimately, we want to build evidence of the trust these seed sites place in our websites. Though there’s still a strong argument for a diverse link profile, we’ve found results have been best when we keep returning to the well and getting multiple links from a site we’re confident is trusted. This has always made sense from an efficiency point of view: if we’ve built a relationship with a journalist who writes for TechCrunch, for example, it’s more likely that she will continue to cover our brand and feature our content – it’s usually easier to get a second, third or fourth link from techcrunch.com than it was to get the first.

In reality there’s no way to know whether Google trusts a website – or how much. What’s clear, now more than ever, is that trust has a significant bearing on whether a site will rank. Factoring this into our strategies, using the information available, has been paying dividends for a while.

I’d love to know what you think – is trust a core component in your link acquisition? Tweet me.
