Inventory as an SEO ranking factor
The depth of your product range is a big factor in how your pages rank in search engines.
According to Searchmetrics, ecommerce pages that appear in the top 10 search results have a 30% larger file size than top-ranking pages for other types of query. Speed is clearly important for everyone, but Google will make concessions on page load times for online retailers because searchers prefer results with more products to choose from. A fast ecommerce website with a lot of inventory generally wins.
Searchmetrics’ study highlights a lot of signals that correlate with better rankings – a 25% higher word count than in non-retail SERPs and 73% more internal links, for example – but these are just symptomatic. Nobody is suggesting you need more internal links to improve your search rankings. You need more products.
The challenge for a search engine: a page with more products is more likely to match a visitor’s intent, but which product did the visitor actually intend to find? Could a search engine determine whether that product is on the page, well stocked, and easy to spot within the range?
Resource identification from organic and structured content
Theoretically, Google can now make direct use of a website’s database to determine whether it has the stock or inventory to answer a search query, according to a new patent spotted by Bill Slawski (who else?).
Bill notes some of the advantages this patent provides:
Websites need not generate multiple “optimized webpages” that are optimized for particular instances of queries to ensure the website is identified in a search result. Instead, the underlying capabilities of the website database and the authority of the website are used as metrics to surface websites and databases that are of high quality with respect to a particular query. This reduces the overall cost of website management, and provides users with data that are more likely to satisfy the user’s informational need than the optimized webpages.
Ignoring that the reference to an underlying “authority of the website” contradicts recent claims from several Googlers (no, seriously, just don’t think about it), there are some problems with this approach:
- The ability of a webpage to technically answer a query is not the only factor in ranking that page – a single page answering many potential queries is unlikely to rank highly for any of them. Basics like metadata and <h1> elements would be down-weighted
- A visitor who searched for “black leather jackets” but is served a /leather-jackets/ page with black leather jackets on it would have to click a filter to see exactly what they searched for – unless brands could be more confident that Google can handle their filters, which at present they can’t be
- Google has a problem with software automatically spinning up landing pages based on keyword searches/volumes, which seems to be in vogue again – this is an extreme example of a terrible user experience but these “doorway pages” do seem to get caught up in the Panda algorithm (or a “core algorithm update” as it’s now known)
The assumption here is that searchers are already wedded to their choices. From the patent:
The systems and methods can utilize the conceptual schemas of the databases to provide additional information for queries that may not otherwise be derived from the queries. For example, a user that types in the search query [Brand X cameras under 300] may be searching for Brand X cameras that cost less than $300. The user, however, may not know that the “Q” models of Brand X cameras are prosumer models that each retail in excess of $300. Thus, by use of a product database, the search engine may determine that “Q” model are each in excess of $300. Thus, the search engine may modify the query with an operator that excludes the “Q” models, e.g., [Brand X cameras under 300 OP:NOT(Q)], or, alternatively, modify the query to emphasize resources that include reference to Brand X models that are priced under $300. The search engine thus surfaces fewer resources that include extraneous information, thereby satisfying the user’s informational need more quickly than if the extraneous information were provided.
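The query-rewriting behaviour the patent describes can be sketched in a few lines. This is purely illustrative – the product data, prices, and function names below are invented, and Google obviously hasn’t published an implementation – but it shows the mechanics of excluding model lines a product database says can’t satisfy a price cap:

```python
# Illustrative sketch (not Google's implementation) of the patent's
# query modification: use a product database to exclude model lines
# whose cheapest product already exceeds the searcher's price cap.
# All data and names here are hypothetical.

PRODUCT_DB = {
    # model line -> minimum retail price in the range
    "Q": 349.00,  # "prosumer" models, all over $300
    "R": 199.00,
    "S": 129.00,
}

def rewrite_query(query: str, max_price: float) -> str:
    """Append a NOT-operator for each model line the database says
    cannot match the price constraint in the query."""
    excluded = [model for model, min_price in PRODUCT_DB.items()
                if min_price > max_price]
    for model in excluded:
        query += f" OP:NOT({model})"
    return query

print(rewrite_query("Brand X cameras under 300", 300.0))
# "Brand X cameras under 300 OP:NOT(Q)"
```

The interesting part is that the rewrite depends entirely on the retailer’s database being legible to the search engine – which is the patent’s whole premise.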
What’s to say that a searcher wouldn’t be more likely to buy a camera if it cost $320 but was reduced from $500? Many people would cough up the extra cash if they thought they were getting a great deal, but perhaps a result like this would not be displayed. Google may think that it’s “satisfying the user’s informational need more quickly” whereas it’s actually impeding the user’s ability to research the product they want to buy (which is the one thing Google is fantastic at).
If anything, the camera example from the patent implies webpages that include products that don’t match the query are less likely to be surfaced, making it more important for big websites to have “optimised” landing pages.
Bill has some interesting thoughts on how Google might rank and display results using this method so I suggest reading the whole post, but the patent is an evolution of something we noticed after the first Panda updates.
The size of your inventory is a ranking factor
One of the key signals Google looks at is the number of people who visit your site and then return quickly to Google to search for the exact same thing again. This indicates to Google that the searcher is not satisfied with what they found on your site.
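To make the idea concrete, a “short click” rate could be measured something like this. The 10-second threshold and the metric itself are assumptions for illustration – Google has never published how (or whether) it computes anything like this:

```python
# A toy illustration (assumed metric, not a known Google formula) of a
# "short click" rate: the share of clicks from a results page where the
# searcher bounced back to the SERP within a few seconds.

SHORT_CLICK_THRESHOLD = 10.0  # seconds on site; an assumed cut-off

def short_click_rate(dwell_times: list[float]) -> float:
    """Fraction of visits shorter than the threshold."""
    if not dwell_times:
        return 0.0
    short = sum(1 for t in dwell_times if t < SHORT_CLICK_THRESHOLD)
    return short / len(dwell_times)

# Dwell times (seconds) for six clicks from a SERP to one page:
print(short_click_rate([3.2, 45.0, 8.1, 120.0, 5.5, 300.0]))  # 0.5
```

A page where half of all clicks bounce straight back to the results page is telling the search engine something, whatever the exact formula.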
When Panda was launched it affected a number of key verticals, including voucher sites and car classified websites. The problem with both of these verticals is that they generate a lot of short clicks, and in many cases the affected sites have really struggled to recover from Panda.
Google denied for years that it used short clicks as a way to penalise or reward sites, before Danny Sullivan was able to confirm in 2015 that clicks were at least partially used to determine the quality of results.
Google confirms watching clicks to evaluate results quality. FYI Google still won’t say if clicks used as rank signal pic.twitter.com/jzNGc5reQk
— Danny Sullivan (@dannysullivan) March 25, 2015
For an easy demonstration of why voucher sites have been so badly affected by Panda (/Phantom) updates, search for “nandos vouchers”. Click on the first result. The page lists deals that you’re probably aware Nando’s always offer, and nothing particularly impressive, so the chances are high that you would return to the search page and click on the next result down. And repeat. Because Nando’s don’t do vouchers.
This is obviously an extreme example. But imagine you wanted to buy a jacket: if the first result Google returns is a page with 5 jackets you’re not likely to spend a lot of time with that result before returning to the SERPs and clicking the next listing. Many browsers – people who don’t already have a brand in mind – are likely to spend their time with ASOS because it’s a website that ranks well and has an enormous range of jackets.
Inventory = how to beat Google
Inventory can be broadly taken to mean what a website has available – for ecommerce websites that means product listings or ads for classifieds websites.
Searchers click on a result because they have a reasonable expectation that the website will have what they’re looking for and as a result the websites with the largest range of products tend to win. It’s the internet version of why you’ll jump in your car and drive to a large supermarket rather than visiting the shop on the corner. It’s even likely that you’ve visited that local shop in the past and been left disappointed with the stock available – from now on you’ll go straight to the supermarket. That’s the real life equivalent of short clicks vs. long clicks.
This is also one of the reasons “department store” style retailers dominate search results. It’s partially because enormous sites like tesco.com have 28 products in a niche and furniture specialists like made.com have half that…
…but it’s more to do with the fact that Tesco does such a good job of telling people they have the product range often enough (see the snippet above) that people believe them and click on the result without fear of wasting time.
This goes further. According to Jason Del Rey on Recode 55% of product searchers now start with Amazon. Inventory – and the ease with which it can be accessed – is the reason Google is so scared of Amazon. Shoppers expect that Amazon stocks what they are looking for – and it does. It’s also easier to find, so why trust Google to return the right result on Amazon when you could cut out a click and potentially save yourself the trouble of searching again?
Although there is no previous data to compare against, it’s still encouraging to see that 13% of product searches are started on a retailer’s website. This is only half as many as Google, Bing, Yahoo! etc. combined. Imagine how many customers could be kept from ever using a search engine during a purchase if retailers improved their own internal search functionality.
Niche-specific “Amazons” are surfacing too (Amazon’s “niche” is everything that you can buy, and that’s pretty broad). Skyscanner isn’t far off becoming “Amazon for flights”. See also: Autotrader. From Patrick’s post:
Loads of car classified sites have been hit by Panda in the past two years and the main signal (other than things like duplicate content etc. which are easy to fix) is the fact that none of the mid-sized sites can claim to offer the definitive car search experience for a searcher. No matter how good the site is, there are very few people who will just look at one website and be happy, unless that site is Autotrader.
Skyscanner can’t claim to list more flights than Google does (unless you’re limiting your thoughts to Google’s inferior flight search product), nor can Autotrader claim to have a broader selection of cars. You can use Google to find almost every flight, car or product that anyone has online. But these brands are successfully convincing the public that they can be absolutely trusted to have the answer or product that is searched for, returned without quite so much noise. So cut out the middle man.
I’m waiting for an “Amazon of finance”. Why sift through the nonsense content from banks that Google is forced to return if you can ask MoneySupermarket.com, Money.co.uk or Confused.com a question about your finances and expect that they will have an answer for you that you can trust?
I don’t think I’ll be waiting for long.
The future for Google
In February 2016 an announcement came from Mountain View that would see Google Search Appliance retired. No new hardware would be bought or sold from 2017, with renewals due to end next year. A cloud-based Search Appliance “replacement” is apparently on the horizon (it might not be cheap). From Fortune:
Google’s appliance was intended for companies that want to use Google technology to search internal documents by author name, prices, dates, and other data. Google’s partners made money by integrating the appliance’s search with customer document archives, applications, and websites.
Take a look at some of Search Appliance’s capabilities – it was a decent product plenty of companies invested in (and plenty of companies are being forced to invest in migrating away from it now). Patents like Resource identification from organic and structured content suggest that Google hasn’t abandoned the idea of integrating websites’ own internal search functions with Google.com et al. If it can convince the public that it’s still the best way to search retailers’ inventory, maybe it can “cut out the middleman” (Amazon) and even serve some ads in the process.
For now…some rules to live by
It’s worth noting that the publication of a patent doesn’t mean the techniques mentioned are currently in use (although as Bill points out we’ll know it’s likely to have been implemented once we start to see search results like the one shown in the screenshot above) – the value for us is the context the author provides as to why the techniques could be considered a good idea.
For now, though, there are certainly a handful of rules that brands should follow – rules that will pay off now, regardless of if and when the “Resource identification from organic and structured content” patent is implemented.
- Brands stocking a smaller range of products (or posting a smaller number of classified ads etc.) than competitors have to supplement that range with useful content if they hope to keep users on a webpage for longer. Don’t confuse useful content with boilerplate – the aim is to keep visitors on the page, and if content is easily and willingly ignored it really isn’t useful.
- If the brand with the market-leading range also has the market-leading content then don’t think in terms of trying to outrank them; think in terms of trying to make your webpage safe. A webpage with a small number of products and a little boilerplate copy (or no copy at all) is at risk from the next “core algorithm update”/”phantom”/Panda iteration.
- On the subject of Panda, don’t just list all your colour variants separately and expect it to look like a broader range of products – everything is likely to rank worse than before. It’s fine to list slight product variations separately as long as precautions are taken (canonical tags being the most obvious).
- I’d generally say that sub-category pages with fewer than 10 products are too thin to stand on their own, unless the products are extremely niche. Remember, you’re not trying to convince Google you have a page worth ranking – you’re trying to convince visitors that the variety on offer is interesting enough to spend some time with you. If you have 5 products in a niche, and everyone else has 2, you’re probably going to win.
- Regardless of the size of your inventory you should be using all available Schema.org markup. As per the patent referenced above (and by Bill) it’s likely to become essential for brands with broad product ranges – and right now it’s still an advantage for smaller brands as many huge retailers don’t mark everything up.
- Please, please, please have a site search strategy. If you’re not paying attention to the results you’re choosing to show your own customers, why do you care so much about what Google shows?
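Putting two of the points above into markup terms: a colour-variant page that canonicalises to the main product page, plus basic Schema.org Product data. The URLs, product names, and prices below are invented for illustration – the tags themselves are standard:

```html
<!-- Hypothetical variant page: /jackets/example-jacket-black -->
<!-- Canonical tag pointing colour variants at the main product page -->
<link rel="canonical" href="https://www.example.com/jackets/example-jacket" />

<!-- Basic Schema.org Product markup as JSON-LD -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Leather Jacket",
  "color": "Black",
  "offers": {
    "@type": "Offer",
    "price": "149.00",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

The canonical tag keeps variants from looking like padded inventory; the structured data is exactly the kind of machine-readable product information the patent assumes exists.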
The intention of the patent is to assess whether websites may be capable of answering questions. You can either wait until someone such as myself takes a stab at claiming this is a factor in a “Phantom” update, or you can get ahead of it: audit your website’s internal search function.