We have started using Lighthouse to track the improvements we make to our sites. This seems to work quite well for desktop sites, i.e. the values improve over time as we make changes, but for the mobile sites the values remain consistently low. We do repeat the tests and use the best of three, but that does not change the picture.
Below are the results for the New York Times mobile site, which appears to perform badly compared with its desktop site. The other two results are for our own sites, one of them being our main site.
When actually browsing our sites (and the NYT, of course), this apparently bad performance is not noticeable at all.
The test procedure (a rough automation sketch follows the list):
- run the same test three times for each site
- mobile
- no PWA
- incognito mode
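For reference, here is a minimal sketch of how these runs could be scripted with the `lighthouse` and `chrome-launcher` npm packages, taking the best of three runs per site. This is only a sketch under the assumption that the default Lighthouse config (which already applies mobile emulation and throttling) is what we want; adjust the options to match the Lighthouse version in use.

```typescript
// Sketch only: automate "best of three" Lighthouse runs for one URL.
// Assumes the `lighthouse` and `chrome-launcher` npm packages.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function bestOfThree(url: string): Promise<number> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  let best = 0;
  try {
    for (let i = 0; i < 3; i++) {
      const result = await lighthouse(url, {
        port: chrome.port,
        output: 'json',
        onlyCategories: ['performance'],
      });
      // lhr score is 0..1; scale to the familiar 0..100 range
      const score = (result?.lhr.categories.performance.score ?? 0) * 100;
      best = Math.max(best, score);
    }
  } finally {
    await chrome.kill();
  }
  return best;
}

// Replace with the site under test.
bestOfThree('https://www.example.com/').then((s) =>
  console.log(`best mobile performance score: ${s.toFixed(0)}`)
);
```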
Now, while we were initially enthusiastic about Lighthouse's ability to evaluate a site with aggregated figures that are easy for management to digest, we have the impression that these figures are not actually useful: they do not correspond to what users experience, and they do not change even though we make changes.
Also, since this is a Single Page Application, the first load of the page may take some time, but any further navigation is quasi-instantaneous. We could not find a Lighthouse feature that takes this into account.
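Since Lighthouse (lab) only measures the initial load, one option we are considering is timing the in-app navigations ourselves with the User Timing API, so the quasi-instantaneous route changes at least show up in traces and RUM tooling. A rough sketch, where the route-change hooks are placeholders for whatever events the router actually exposes:

```typescript
// Sketch: time SPA route transitions with the User Timing API.
// `onRouteChangeStart` / `onRouteChangeComplete` are hypothetical hooks;
// wire them to the events your router actually provides.
function onRouteChangeStart(route: string): void {
  performance.mark(`route-start:${route}`);
}

function onRouteChangeComplete(route: string): void {
  performance.mark(`route-end:${route}`);
  performance.measure(`route:${route}`, `route-start:${route}`, `route-end:${route}`);

  const entry = performance.getEntriesByName(`route:${route}`, 'measure').pop();
  if (entry) {
    // In a real setup, send this to analytics instead of logging it.
    console.log(`navigation to ${route} took ${entry.duration.toFixed(1)} ms`);
  }
}
```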
I have been using PSI (PageSpeed Insights) for mobile on our sites and it worked well. At least the mobile field score was always better than the lab data, and what convinced me was that the report was consistent for some external sites such as https://covid19.ca.gov/.
As for the tool itself, it works well for the initial load but does not account for single-page apps: CLS is evaluated continuously, so as the user scrolls, CLS keeps changing in ways the lab tool does not simulate. That is where field data differs.
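If it helps, this continuous CLS behaviour can also be observed directly in the field with a `PerformanceObserver` on `layout-shift` entries (roughly what the web-vitals libraries do). A minimal sketch; note that the real CLS metric uses session windows, so this simple running total is only an approximation:

```typescript
// Sketch: accumulate layout shifts in the field as the user scrolls/interacts.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let cumulativeShift = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    // Shifts caused by recent user input are excluded from CLS.
    if (!entry.hadRecentInput) {
      cumulativeShift += entry.value;
    }
  }
});

observer.observe({ type: 'layout-shift', buffered: true });

// e.g. report the accumulated value when the page is hidden.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    console.log(`accumulated layout shift: ${cumulativeShift.toFixed(3)}`);
  }
});
```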
Thanks,