Site speed has always been important in SEO. It hasn't always been a direct ranking factor, but anyone interested in achieving high conversion rates has always placed importance on good user experience, and how quickly a page loads is a key component of that.
With recent changes to Google's algorithms making speed more of a ranking factor, performance is only going to become more important, particularly given the dominance of mobile devices.
There are many different options when it comes to performance testing. Free tools include WebPageTest.org, GTmetrix, Pingdom and many others, but one of the most popular performance analysis tools available at the moment is Lighthouse from Google.
Lighthouse is popular for several reasons, including the fact that it is developed by Google themselves, so you can get a good idea of what they consider to be important. The reports generated by Lighthouse are also very comprehensive and provide clear feedback on what the issues on a page are and how they should be addressed.
A downside with this tool (as with many other page speed testing tools) is that it can take a while to run, and the manual method of checking individual pages is laborious and doesn’t lend itself well to bulk testing.
Fortunately, it is possible to run Lighthouse via a command line interface which means that you can have it running in the background, checking URLs without requiring you to keep going back to process new URLs manually. We’ll be explaining this in detail throughout this article*.
How Much Time Can You Save by Running Lighthouse on the Command Line?
In considering options for efficiency improvements, it’s always important to weigh up the potential time savings versus the level of effort required for implementation.
When running a Lighthouse report manually there are two time aspects to consider: how long it takes to set a report running (going to the page, opening Chrome developer tools, selecting audit options etc.) and how long the report takes to actually run.
For the 25 reports from the Edit website that I created when writing this post, I went through the process both manually and using the command line to compare the two.
Time taken to run the reports manually
For each of the 25 URLs it took around 20 seconds to go to the URL, select to run only the performance report and start it running. This equates to around 8 minutes in total.
For each URL, I ran the report five times and took an average of how long each run took to complete. The overall average was 36.37 seconds per report, so the completion time for all 25 works out to be around 15 minutes.
Combining these two times, the total amount of time either passive or active was 23 minutes. Let’s compare that to the automated method.
Time taken to run the reports via the command line
Rather than entering each URL separately, we simply populate a text file with a list of the URLs that were tested previously (this process will be explained later).
As with the manual process, I ran each of the 25 URLs through 5 runs and took an average which came out at 12.55 seconds, with the time taken to process all 25 reports being just over 5 minutes.
Overall, the command line approach was around 78% faster, a time saving of 18 minutes. That's a good saving, though hardly earth-shattering; as we start to scale up the number of URLs being tested, however, the savings become considerably more appealing.
This is shown in the example table below based on the quoted run times for each report:
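The scaling is simple arithmetic on the per-URL timings quoted above (roughly 56.37 seconds per manual report including setup, versus 12.55 seconds via the command line); the batch sizes below are illustrative:

```shell
# Project total run time (in minutes) for different batch sizes, using
# the per-URL timings quoted above: ~56.37s manually (20s setup plus a
# 36.37s run) versus ~12.55s via the command line.
for n in 25 100 500 1000; do
  manual=$(awk -v n="$n" 'BEGIN { printf "%.0f", n * 56.37 / 60 }')
  cli=$(awk -v n="$n" 'BEGIN { printf "%.0f", n * 12.55 / 60 }')
  echo "$n URLs: ~$manual min manually vs ~$cli min via the CLI"
done
```

For the 25-URL batch this reproduces the figures above (around 23 minutes manually against roughly 5 via the command line), and by 1,000 URLs the gap has grown to several hundred minutes.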
It’s often not necessary to run such large numbers of Lighthouse audits and we often just take a smaller sample of pages, covering different types to get an idea of common issues. But, if you do have to analyse a large number of URLs, then the prospect of being able to save a couple of days is going to be too good to turn down!
Another benefit of running reports from the command line is that, by default, the report is saved as an HTML file. With the DevTools method, if you want something client-facing you also need to export the data into a JSON file and send that to the client along with a link to the Lighthouse audit viewer.
There are a couple of problems with this. First of all, the JSON files are huge and can’t typically be emailed in large numbers, so an upload solution is required. There is also the extra hurdle of having to drag the files into the viewer to be able to look at them.
An alternative here is to use the Chrome Lighthouse plugin, which offers the same export features as the command line generated report; however, it has processing times similar to the browser-based method.
One of the issues that many SEO agencies face is getting buy-in from internal dev teams to implement changes, and anything we can do to make their lives easier only improves the likelihood that our recommendations will be implemented – and this is one small step in achieving this.
Getting Started – What you need to run Lighthouse from the command line
There are only three things that we are going to need to have installed in order to be able to run Lighthouse from the command line:
- Google Chrome
- Node.js (which includes npm)
- The Lighthouse npm module
The first thing to do is make sure you have Chrome installed. Even though we are going to be batch processing reports in the background, the Chrome browser is still required so that headless instances can be created.
After we make sure that Chrome is installed, the next thing we need to do is download Node.js. Windows users should download the LTS version, which at the time of writing is 8.11.3.
Mac users should download the macOS installer from the Node.js website.
Now that we have Chrome and Node.js installed the final step is to install Lighthouse so we can call it from the command line. This is a very quick and easy process.
Firstly, you need to open a Windows command prompt. You can typically do this by opening the run dialog box and typing cmd as shown below:
This should open up a command prompt window like the one shown in the screenshot below. There are various different methods for opening the command prompt depending on which version of Windows you have.
This article shows different methods for Windows 7, 8.1 and 10. I am using Windows 10, but the methods are pretty much the same across all recent versions of Windows.
* Note: If you find the font size in the command prompt on Windows too small you can right click on the top bar and adjust the size of the text to make it more comfortable to read.
Now that we have a command prompt open, we can install the Lighthouse module by entering the command below and pressing enter:
npm install -g lighthouse
In the command above, we are using npm install to fetch the Lighthouse module from the npm registry.
The -g flag means that the module is installed globally and is not dependent on the directory that you are currently in. You can find out more about this here.
Finally, the lighthouse part tells npm which particular module you want to install. Once you have entered the above command, hit return and you should see a progress bar appear.
As you can see from the final line, the whole process took around 25 seconds to complete, so it is not a long installation. If at any time you want to uninstall Lighthouse, you can simply run the same command in reverse: npm uninstall -g lighthouse
Open Terminal (an easy way, if you don't have a shortcut on your taskbar, is pressing Command + Spacebar and then typing 'Terminal')
On the command line enter the following:
sudo npm install -g lighthouse (you will be required to enter your password)
Note: If you have any problems getting the npm command to run, you may need to open a new command prompt (hat tip to Mike G for that).
Your First Lighthouse Audit Report from the command line
We now have everything we need installed to be able to generate reports from the command line so let’s go ahead and do that!
At first, we will run a basic report and then we will look at some of the many options available for customising what is generated.
To run a basic report, simply enter lighthouse yoururl and hit enter:
* yoururl should be the URL that you want to test, so in my case I would enter lighthouse https://www.edit.co.uk
After you hit enter you should initially see a screen that looks like this:
In this instance you will also get a Chrome window popping up, which will start modifying the display of the page in question (we will talk about how to avoid this in the next section).
After a short delay, the command prompt will start listing what Lighthouse is currently testing, and when it is finished it will show you a link to where your report has been generated.
For a basic run of the tool, the report will be generated in whichever folder you currently happen to be in. In my case I was in the root of my H:\ so that is where the report will be saved. By default, the filename should be the URL of the page tested and the date and time when it was generated.
And that’s it, you have (hopefully) run your first Lighthouse report from the command line! So, let’s take a look at what you actually get…
The contents of your Lighthouse Report
Those reading this article are most likely familiar with Lighthouse reports, and what you get as standard when generating reports via the command line is pretty much the same. Depending on what categories you have specified when running the report, you will get up to 5 areas of analysis:
There are differences in some areas, for example the metrics used in the performance section. Here are the ones that are the same for both methods of generating the reports:
The browser-based report also has a “First Interactive” metric, while the command line version has “First CPU Idle” and “First Contentful Paint”.
One of the most significant differences between the two methods of generating the report is the ability to pass the information on to a third party. With the browser-based approach you can export the data to a JSON file, but unfortunately the file generated is huge and needs to be interpreted with the audit viewer mentioned earlier.
With the Command line version, the report is natively generated in HTML and can immediately be forwarded on. You are also provided with a number of other options as shown below:
This is one of the key benefits of generating reports in this way, as you are able to immediately share the results without having to forward both a large JSON file and instructions for how to view it.
Customizing your reports: Going beyond the basic Lighthouse CLI Report
As mentioned earlier, we have only looked at the basic default settings so far, and this is just the tip of the iceberg when it comes to what you can do with this tool. In this section, we will look at some of the ways that you can customise how your reports are generated. We won't cover all of the available options in this post, but will look at more of what you can do in future posts.
To see a list of all of the configuration options, you can run lighthouse --help from the command prompt:
This will give you a list of all the available options as shown in the screenshot below:
Below we will run through some of these options and what they do. It's useful to note that each flag should be prefixed with a double hyphen (--) and that generally the flags don't need to be in a specific order to work.
Quiet and headless
By default, when you request a report, a Chrome browser will open, and the command line will log each part of the process. If you want the report to run silently you can use the following:
lighthouse https://edit.co.uk --quiet --chrome-flags="--headless"
In the example above, the --quiet flag means that nothing is shown in the command prompt until the process has ended, and the --chrome-flags value runs the process in a headless instance of Chrome so that a browser window isn't opened while the report runs. The latter is particularly useful if you are running a batch of reports in the background, which we will cover later.
Once the process has finished, a report will be saved, with no notification, in the folder you were in when you initiated the command.
If you want to know when your report has finished, you can add the --view flag so that your report opens in a browser window once it has finished running. An example in conjunction with the previous flags would be:
lighthouse https://edit.co.uk --quiet --chrome-flags="--headless" --view
Running Lighthouse from the command line allows you to generate reports in three different formats: HTML, JSON and CSV. The default is HTML, but you can use any of the three, or even generate a report in all of them at once. An example of generating a CSV version of the report that we looked at above would be:
lighthouse https://edit.co.uk --output csv
To generate a report in both csv and html formats you could use:
lighthouse https://www.edit.co.uk --output csv --output html
As with running reports through the browser you can choose which categories you want to report on. If you wanted to only get performance data, you could use the following:
lighthouse https://www.edit.co.uk --only-categories performance
If you wanted to get both performance and SEO you could do:
lighthouse https://edit.co.uk --only-categories performance --only-categories seo
Running Batches of Lighthouse Reports on the Command Line
At this point you might be thinking this is all well and good, but it's not really going to save me any time. So we will now look at running batches of reports, which is where you will hopefully start to see the benefit of running Lighthouse audits from the command line.
For this next part we are going to create two files, one to store our list of URLs and another to hold the script that will generate our reports. The first thing we need to do is choose where we are going to store our files and create a new folder.
There are a few differences here between the Windows and Mac processes which will be covered in the two sections below:
I am going to create a folder called demo at C:\Users\mike.osolinski\Documents\Local\Lighthouse\demo, but you can create a folder anywhere you like and name it whatever you want.
Important: Make sure that the path does not contain any spaces.
Once you have created a folder to hold your files create a text file called urls.txt. This is the file that we will store our list of pages to test in.
Open your text file and enter a list of URLs as shown below:
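For reference, the contents of urls.txt should simply be one URL per line, something like the following (these URLs are purely illustrative; use your own list):

```
https://www.edit.co.uk
https://www.edit.co.uk/blog/
https://www.edit.co.uk/services/
```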
Save and close this file, then create another in the same folder called test.bat (the file name is arbitrary) and open it for editing in your favourite text editor. I use Notepad++, but anything will do; you can just use basic Notepad if you prefer.
In test.bat you should add the following lines of code replacing C:\Users\mike.osolinski\Documents\Local\Lighthouse\demo\urls.txt with the path to your urls.txt file. We will run through what each line of code does below.
@echo off
for /f "delims=" %%a in (C:\Users\mike.osolinski\Documents\Local\Lighthouse\demo\urls.txt) DO (
ECHO Line is: %%a
lighthouse --quiet --chrome-flags="--headless" %%a
)
In the script above, the first command we see is @echo off, which controls what output appears in the command window. If we run the script as is for a single URL, we will see something like the below:
If we remove the @echo off line and re-run the script, then the output will be as follows:
As we can see when the @echo off line is removed the whole of the test.bat file is also echoed to the console. This isn’t a particularly big deal for us as the script is so small, but if you’re running very large scripts then you may want to avoid this.
The next line (see below) is extremely important and key to our script running properly. It’s a bit more involved than the first line, so we’ll break it down into separate parts.
for /f "delims=" %%a in (C:\Users\mike.osolinski\Documents\Local\Lighthouse\demo\urls.txt) DO (
First, with the opening for we are creating a for loop to read through each of the URLs in our text file. The /f switch indicates that we are going to run the loop against the contents of a file, and "delims=" sets the delimiters; leaving it empty, as here, disables the default space and tab delimiters so that each whole line is read as a single value. You can read more about the options for the type of source data and delimiting data here.
Next, we need to declare a container variable to store each value which is where %%a comes in.
Now, we have in (C:\Users\mike.osolinski\Documents\Local\Lighthouse\demo\urls.txt), which is hopefully fairly self-explanatory. This is the path of the file that contains our list of URLs.
Finally, we have DO ( which opens up the block that will contain the action to perform for each iteration of the script. Everything within the opening and closing brackets will be repeated for each value in the text file. In our case we have 2 lines within the DO block:
ECHO Line is: %%a
lighthouse --quiet --chrome-flags="--headless" %%a
The first line here echoes the text "Line is:" followed by the URL value currently being processed, which is stored in the %%a variable discussed earlier. This isn't strictly necessary; it's just a test output to make sure we're storing the correct values in our variable.
The final line should look familiar: we are calling the lighthouse module just as we did in our earlier single test, but rather than using a hardcoded URL, we're passing whatever value is currently in our variable. The loop repeats for each URL in the text file until they have all been processed.
The code for the loop is a little different on the Mac and is as follows:
while IFS='' read -r l || [ -n "$l" ]; do
echo 'Working on report for the site... ' $l
lighthouse --quiet --output html --output csv --chrome-flags="--headless" $l
done < "urls.txt"
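Before pointing the loop at Lighthouse, it can be worth a dry run to confirm that the read loop picks up every line, with echo standing in for the lighthouse command (the urls.txt contents here are illustrative):

```shell
# Create a sample urls.txt, then dry-run the same read loop used above,
# substituting echo for lighthouse so the loop logic can be checked
# without triggering any audits.
printf '%s\n' 'https://www.edit.co.uk' 'https://www.edit.co.uk/blog/' > urls.txt

while IFS='' read -r l || [ -n "$l" ]; do
  echo "Would run: lighthouse --quiet --chrome-flags=\"--headless\" $l"
done < urls.txt
```

If each URL comes back prefixed with "Would run:", the file and loop are behaving as expected and you can swap the echo line for the real lighthouse command.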
You can find out more information about the Mac version of the loop at the URLs below.
https://askubuntu.com/a/975408 and https://bash.cyberciti.biz/guide/$IFS
Important note: after while, IFS= is followed by two single quotes (''), not one double quote (").
This file should be saved as a .sh file called something like runLighthouse.sh in a folder such as 'Documents/Lighthouse/'. (On a Mac you can use ~/Documents to get to your documents folder; the tilde (~) is a shortcut for your home directory.)
You should save the urls.txt file in the same directory as runLighthouse.sh
Testing the Script
Ok, so hopefully we should now be ready to test our script. The first thing you need to do is navigate in the command prompt to the directory that contains your urls.txt and test.bat files. To do this, first navigate to the directory in a Windows File Explorer window and copy the path as shown below:
Once you have the path, open a command prompt as you did previously and navigate to the directory location that you just copied. If you're unfamiliar with navigating directories in the command prompt, it's very simple: you just enter cd followed by the location, so in my case:
Or on a Mac (assuming you have used the same path as in the example above):
You should be able to paste the path by pressing Ctrl + V as you would normally in Windows. If you want to navigate up a folder in the command prompt, simply type cd .. which will move you up one level.
To see the contents of the directory you are currently in, type dir. If you want to clear the current command prompt window, you can type cls; this won't delete anything, but it will clear the window, which can be useful if you have run a lot of commands and are confused about what you are looking at.
Once you’re in the right place in the command prompt you should have a screen that looks like this:
All you should need to do now is run the script for your operating system: enter test.bat on Windows, or sh runLighthouse.sh on a Mac.
Press enter, and your reports should start generating, and you should start getting prompts on the command line like this:
Once all the reports have finished running, the command line should return to showing the directory you are in, and a flashing cursor which indicates that the process has finished.
So that’s it! You’ve run your first Lighthouse batch reporting process!
Hopefully you’ve found this guide useful and that it will save you some time in your performance auditing.
We have barely scratched the surface of the possibilities for automated performance testing. In the next post we will look at exporting the data as JSON and importing it into Excel for bulk analysis, as well as some of the other metrics available beyond what is generated in the HTML reports.
In the meantime, if you have any questions or problems with the processes outlined above, then feel free to get in touch.
* Huge thanks to Mike Gracia who not only very kindly acted as a guinea pig to test this guide and pointed out a few areas where people could get confused, but also provided instructions for the differences between installing on Windows and a Mac and the differences in the batch process.