The complete technical SEO audit guide

Performing an in-depth technical SEO audit is an extensive process, and some factors can be more fruitful than others, depending on the issues with the website you are auditing.

Although you often end up straying from the checklist, it’s good to have one to ensure that you cover every angle so you can identify errors in a website without missing anything.

This blog will not give you a step-by-step guide to what each aspect of a technical audit is; some prior SEO knowledge is assumed, as I only provide a basic explanation of why you should check each item and how to check it.

The aim of the blog is to be more of a checklist to ensure that you miss nothing when looking at a site's technical SEO.

I’ve sectioned this article to help make it easier to understand:

1. Tools Used

2. Google Search Console
2.1 Structured Data Errors
2.2 International Targeting
2.3 Mobile Usability Issues
2.4 Index Status
2.5 Content Keywords
2.6 Crawl Errors
2.7 Crawl Stats
2.8 Sitemaps

3. Site Crawl
3.1 Meta Issues & H1s
3.2 HTTPS / HTTP Duplication
3.3 Response Codes
3.4 Image ALTs
3.5 Content Duplication
3.6 Canonical Tags

4. On-Site Checks
4.1 Robots.txt
4.2 Page Source
4.3 Internal Linking
4.4 URL Structure
4.5 Breadcrumb Navigation

5. Indexing
5.1 Check Site Is Indexed
5.2 Ranks For Own Name
5.3 Index Status
5.4 Duplicate Pages

6. Site Speed Benchmarking
6.1 Identifying Site Speed
6.2 Identifying Slow Elements

1. Tools Used

Google Search Console
Structured Data Testing Tool
Mobile Friendly Test
Screaming Frog
PageSpeed Insights
WebPageTest
Deep Crawl

2. Google Search Console

2.1 Structured Data Errors
Structured data is often referred to as 'schema' or 'microdata', and it helps improve the way that your page is represented in SERPs.

A collaboration between Google, Bing, Yandex and Yahoo!, it helps search engines understand your content, improves the way that your search results are displayed and forms appropriate rich snippets.

In Search Console you can check your structured data errors; it will give you an explanation of all the items with errors and inform you of what is missing. You can also use the Structured Data Testing Tool to get a more in-depth analysis of your site.
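As a rough illustration (the organisation details below are placeholders rather than markup taken from a real site), a simple piece of JSON-LD added to a page's <head> might look like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Edit",
  "url": "https://www.edit.co.uk/"
}
</script>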

2.2 International Targeting
If your site is international and you serve several countries then you want to ensure that Google know this.

The way to do this is to implement hreflang tags on pages for different countries and languages; this helps Google understand which pages to serve in search results to users who are searching in different languages.

Here is an example of a hreflang tag:
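(The URLs and language codes below are placeholders; swap in the live versions of your own pages.)

<link rel="alternate" hreflang="en-gb" href="https://www.edit.co.uk/" />
<link rel="alternate" hreflang="en-us" href="https://www.edit.co.uk/us/" />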

These should be kept in the <head> tags of relevant pages.

2.3 Mobile Usability Issues
Mobile usability is hugely important, especially with Google showing an increased focus on it in recent months and the large effect it can have on the user experience.

Search Console offers some great guidance on usability issues, showing you precisely which pages are affected so you can identify the problem straight away.

Another tool that Google offers is the Mobile Friendly Test, an easy to use tool that identifies issues with mobile usability on the URL you analyse whilst also showing you exactly how Googlebot sees the page.

2.4 Index Status
Use the Index Status advanced section to find out how many pages on a website are indexed in total, how many are blocked by robots and how many have been removed.
I will touch on this more in the Indexing section of the post, but the number of pages that Google are indexing is important, as you want to match it up with the total number of pages you have in your sitemap.xml.

2.5 Content Keywords
You may be thinking, why are you checking content keywords in a technical SEO audit? Well, it's to check the website's health.

If you have been hacked or if your website is compromised in any way then you will often see terms associated with spam sites here before you even get warned by Search Console. You should always keep an eye on this section of Search Console, whether for your own business or for your clients.

2.6 Crawl Errors
Google Search Console offers very useful insights into your current site errors and URL errors. Here you can easily identify whether a site can communicate with the DNS, your server connectivity status and whether your robots.txt is accessible.

After you have worked through all the site error notifications, move on to the URL errors.

Split by Desktop, Smartphone & Feature phones, you can see your total server errors, soft 404s, not founds and other status code errors, and you will be provided with the offending URLs. These are the most important crawl errors as this is how Google crawlers see your website.

2.7 Crawl Stats
Googlebot provides you with its crawl activity on your site for the last 90 days. This section often gets missed in technical audits, but it's important as it can be one of the first ways of identifying if you have been hit by an update or if there is going to be an issue with indexing.

The main area of focus should be the "pages crawled per day"; you need to make sure that you have no significant drops.

If you want to further increase your knowledge of this and go more in-depth, you need to do server log file analysis.

Using log file analysis can help you further identify errors by comparing how often different search engines visit your site and trying to find an explanation for this.

2.8 Sitemaps
Firstly, you need to ensure that the site does in fact have an XML sitemap; this usually sits at www.edit.co.uk/sitemap.xml. Then you need to check that it has been added and tested in the Sitemaps section.

Once this has been picked up you need to check the submitted pages vs indexed pages. Does it look right? Does it add up to how many pages the site has, and are the pages that should be indexed being indexed?
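For reference, a minimal sitemap.xml looks something like this (the URL is a placeholder, and a real file would list every indexable page):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.edit.co.uk/</loc>
  </url>
</urlset>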

3. Site Crawl

3.1 Meta Issues & H1s
Meta issues are a general bracket for title tags and descriptions, as well as H1 tags. You can easily identify these issues by using the Screaming Frog spider.

These issues are picked up by Google and can not only affect your rankings but also your click-through rate in the SERPs, as the title tag or description may not be optimised for what the user is searching for.

Title tags should typically be between 50 and 60 characters, descriptions 150-160, and H1s have no technical limit, but you should always make them short and descriptive, and don't duplicate anything!
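As a quick sketch (the wording below is placeholder text), these three elements sit in the page source like this, and each should be unique to the page:

<title>Technical SEO Audit Checklist | Edit</title>
<meta name="description" content="A checklist covering every technical SEO factor to review when auditing a website, from crawl errors to site speed.">
<h1>The Technical SEO Audit Checklist</h1>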

3.2 HTTPS / HTTP Duplication
When you have run a crawl in Screaming Frog or IIS you can find how many URLs are sitting on HTTP & HTTPS domains and identify any duplicates.

If there aren’t any duplicates, you need to identify whether different URLs sit on HTTP or HTTPS and whether that is intentional and serves a purpose. You can use this to determine if the migration to HTTPS was a success or a failure.

3.3 Response Codes
When running a site crawl with Screaming Frog you will see a response codes section; the three groups below are the ones you want to pay attention to, and they aren’t all necessarily error codes.

3xx – Typically what you want to look at here are the 301 & 302 codes. There is nothing wrong with 301 redirects, but you need to check that there isn’t a redirect chain or loop with any of them; essentially checking that they only go through one redirect and not multiple.

302 codes, however, are only temporary redirects, although they can be treated as 301s if they are left in place long enough; so if it’s a permanent change, you should ensure that they return a 301 status code.

4xx – 400, 403 and 404 are the most common status codes you will find in this section; each one of these should be carefully investigated.

400 is a bad request, which means users can’t access the page; 403 is forbidden, which means users aren’t authorised to access the page; and 404 means the page has not been found, which often means it has been deleted and not redirected.

5xx – The most common server error you will see in this section is 500, which is an internal server error. This error is server side and shouldn’t necessarily be fixed with a 301; you should investigate why it is happening first.

3.4 Image ALTs
Image ALTs are how search engines identify and rank the images on your site. Although you may not think they are a major factor, their importance depends on the user’s search intent.

A good example SERP is “kitchen design”: after the local pack, the second ranking position is a collection of images. In order to rank for this you would need to have image ALTs related to the search.
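As an illustration (the file name and ALT text are placeholders):

<img src="/images/white-kitchen-design.jpg" alt="Modern white kitchen design with a central island">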

3.5 Content Duplication
Deep Crawl tends to be best for this; it gives you a breakdown of all the crawled URLs and how many of them are duplicate pages. When you click through into this option you can find the primary URLs and the top duplicate URLs.

In order to avoid being penalised by Google Panda, you want to ensure that you have little or no duplicate content across your site, as this can be deemed as low quality.

3.6 Canonical Tags
Rel=canonical tags are an option that you can use to avoid duplicate content; you should have canonical tags across your site on anything that could potentially be deemed duplicate content, such as pages which have a “/” on the end and pages that don’t.

Most crawling software will show you your canonical tags; you need to find out if they have been implemented correctly. You can find a great guide on how to use a canonical tag and what you can use it for in the Moz Blog by Lindsay Wassell.
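As a sketch, if both the trailing-slash and non-trailing-slash versions of a page resolve, each could carry the same tag in its <head> pointing at the preferred version (the URL is a placeholder):

<link rel="canonical" href="https://www.edit.co.uk/blog/" />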

4. On-Site Checks

4.1 Robots.txt
Every site needs a robots.txt; it tells crawlers what to crawl and what not to crawl. Typically this sits at www.edit.co.uk/robots.txt.
If you have found a robots.txt then you need to ensure that it only disallows the correct URLs. A disallow rule stops search engines from crawling a set of pages (note that this isn’t quite the same as a noindex tag, as a disallowed URL can still appear in the index if other sites link to it), for example:

User-agent: *
Disallow: /blog/*
Would stop Google from crawling all blog pages.
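It is also worth checking whether the robots.txt references the XML sitemap; as a sketch (the path is a placeholder):

Sitemap: https://www.edit.co.uk/sitemap.xml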

4.2 Page Source
This can require a lot of prior understanding and knowledge, but opening up the page source and scrolling through it can be the best way to identify issues with a website.

Essentially, viewing the page source is a much easier way of visualising how Google crawls your website.

One of the common problems found in the page source is large pieces of code in the head which could be condensed.

The issue with large amounts of header code is that it takes Google longer to reach the content, almost as if it were below the fold on the front end, and this could potentially impact how well that content can rank.

4.3 Internal Linking
Is the internal linking up to scratch? Does the menu contain all of the site’s top pages, and is it set out in a way which makes it easy for a user to navigate the website? This is a bit of CRO as well as technical SEO.

Internal linking is almost as important as external linking; the structure of your site should mean that your most important page (typically the home page) has the most internal links, with the rest following in descending order.

Site-wide links can be very powerful with the correct anchor text; they can help improve the ranking of a page significantly, as it’s seen as more important.

4.4 URL Structure
Does the URL structure make sense? Are your top pages close to the main domain? These are questions you should ask when looking at a site’s URL structure. You need to find out if every page is under a suitable category and whether the URLs are accurate descriptions of the page, inclusive of target keywords.

4.5 Breadcrumb Navigation
The breadcrumb navigation is a secondary navigation scheme that helps a user navigate the site and return to previous pages easily.
Breadcrumbs also have their own Schema markup; this option doesn’t only aid the user, it can also tidy up your search results.


You should check to see if it is feasible to implement these breadcrumbs if they aren’t already in place.
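As a rough sketch (the page names and URLs are placeholders), breadcrumb markup using the schema.org BreadcrumbList type in JSON-LD looks something like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Blog", "item": "https://www.edit.co.uk/blog/" },
    { "@type": "ListItem", "position": 2, "name": "Technical SEO", "item": "https://www.edit.co.uk/blog/technical-seo/" }
  ]
}
</script>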

5. Indexing

5.1 Check Site Is Indexed
This should be a given, and you may have already verified this with Google Search Console, but you should check that your site is being indexed fully. To do this, type site:www.edit.co.uk into Google, replacing edit.co.uk with your own domain. Are the results what you expected?

5.2 Ranks For Own Name
Do you rank for your own name? Search your brand name in Google and see if you rank first. You should always rank for your own name; it shows trust to the user and means that your brand traffic and reputation are not benefitting someone else.

If you don’t rank for your own name then you need to question why this is the case, and identify what is ranking above you and why. Google should always prefer the actual brand and wants to rank you for your own name, which is why brands aren’t usually overtaken by Wikipedia pages.

5.3 Index Status
Check how well your site is indexing; once you have done the site: search you will see a line that says “About X results (0.53 seconds)”.

This roughly tells you the number of pages that Google have within their index; you need to check if this matches up with the sitemap indexation figures in Google Search Console and the number of HTML pages you saw in your crawl.

5.4 Duplicate Pages

Finding duplicate pages can be done through a Screaming Frog crawl, or by manually checking indexation. You can check the crawl for duplicate titles and descriptions, as this tends to encompass duplicate URL issues.

What you should be looking for is duplication across the HTTPS & HTTP domains, URLs with and without the www prefix, and URLs with and without a trailing slash. Here are a few examples of links that may not contain a noindex tag:

• https://edit.co.uk
• https://edit.co.uk/
• edit.co.uk
• edit.co.uk/

As standard these should all redirect to the same domain, but if they don’t, check to see if they are indexed with a site: search, as it may lead you to further issues.

6. Site Speed Benchmarking

6.1 Identifying Site Speed
Site speed is becoming an increasingly important metric, with Google putting more emphasis on it as a ranking factor due to the large effect it has on usability. You should be fighting an ongoing battle with your site speed, doing whatever you can to improve it.

There are several tools you can use for site speed, but the one you should run first is PageSpeed Insights; this is Google’s own tool and it helps you benchmark your site speed against others through its grading system.

Once you have run the tool it identifies areas that you should fix and areas to consider fixing, but it doesn’t give you an extensive amount of detail.

After that I tend to run WebPageTest; the site doesn’t look like much, but it helps identify individual issues.

It provides you with the load times of every single element of a page and also identifies which items are taking the longest to load, such as HTML, JS, CSS, image, Flash and font elements.

6.2 Identifying Slow Elements
Once you have run both tests you can identify the issues to focus on, and whether there is anything you can do to fix them.

Using the individual breakdown from WebPageTest, you can identify whether there are certain elements you can remove or minify, as you can see the size of each element.
