Jay Ghosh

Reputation: 682

Major discrepancies in Google's Lighthouse scores for a PWA application

This might not seem like a technical question, but I'll try to make it so.

I've recently built a ReactJS-based PWA e-commerce application, and while checking its performance with Lighthouse I've noticed a couple of discrepancies that I'd like someone to explain to me:

  1. The mobile score is almost always worse than the desktop score, and not only for my app but for every e-commerce website I've tested (Amazon, Flipkart, eBay, Myntra, etc.). Why is that? From what I understand, the scores depend most on First Contentful Paint and Largest Contentful Paint. If that's the case, wouldn't a phone technically have less area to paint, resulting in higher scores?

  2. The scores fluctuate, and they fluctuate a LOT. I've noticed this only happens for single-page apps, though. It can't be because of minute TTFB differences, since I've seen scores swing 20+ points at a time. If that's the case, how can we trust the scores?

And before you say "check the Diagnostics report and you'll understand what's wrong": my question is not "how do I increase the Lighthouse scores?"; it is specifically about the two questions asked above, and they have nothing to do with the actual scores.

Upvotes: 1

Views: 918

Answers (1)

GrahamTheDev

Reputation: 24865

The mobile score is almost always worse than the desktop score

The mobile test simulates a slow 4G connection and a 4x CPU slowdown to more accurately reflect the reduced power of a mobile phone and the fact that it may not be connected to Wi-Fi.

The mobile score will always be worse than the desktop score unless you serve two completely different websites rather than a responsive site.
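You can see this for yourself by comparing the two presets from the Lighthouse CLI (the URL below is just a placeholder). The default run applies the mobile emulation and throttling described above; --preset=desktop switches to desktop screen emulation with much lighter throttling:

    # Default run: mobile emulation, simulated slow-4G network, 4x CPU slowdown
    lighthouse https://example.com --only-categories=performance --output=json --output-path=./mobile.json

    # Desktop preset: desktop screen emulation and much lighter throttling
    lighthouse https://example.com --preset=desktop --only-categories=performance --output=json --output-path=./desktop.json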

From what I understand, the scores depend most on First Contentful Paint and Largest Contentful Paint. If that's the case, wouldn't a phone technically have less area to paint, resulting in higher scores?

"First Contentful Paint" and "Largest Contentful Paint" are time based - the size does not matter it is when they occur that matters. Obviously with a slower connection and CPU this will always be slower on the mobile test.

The scores fluctuate.

If your score is fluctuating a lot, then you probably have some form of race condition occurring. If you are loading everything asynchronously and the load order affects how the page is displayed, this is to be expected. I would suggest running a performance trace on the page with network throttling applied and seeing whether you get varying results there.

Either that or your server is at capacity and is sometimes slower to respond.
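Either way, the usual defence against run-to-run variance is to take the median of several runs rather than trusting any single score. Here is a minimal sketch using the lighthouse and chrome-launcher npm packages (the URL and run count are placeholders; run it as an ES module, e.g. a .mjs file):

    import lighthouse from 'lighthouse';
    import * as chromeLauncher from 'chrome-launcher';

    const url = 'https://example.com'; // placeholder - use your own page
    const runs = 5;

    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const scores = [];

    for (let i = 0; i < runs; i++) {
      // Reuse the same Chrome instance; each call is a fresh Lighthouse run.
      const result = await lighthouse(url, {
        port: chrome.port,
        onlyCategories: ['performance'],
        output: 'json',
      });
      scores.push(Math.round(result.lhr.categories.performance.score * 100));
    }
    await chrome.kill();

    scores.sort((a, b) => a - b);
    console.log('All scores:', scores);
    console.log('Median score:', scores[Math.floor(scores.length / 2)]);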

What matters in scoring

From what I understand, the scores depend most on First Contentful Paint and Largest Contentful Paint.

As for what matters in scoring, this answer I gave explains the new scoring model. You will see that "Largest Contentful Paint" and "Total Blocking Time" are the most heavily weighted items. The second of those is the clue for single-page applications: if the JavaScript is heavy, the TBT will be higher and can fluctuate a lot depending on when scripts are loaded.
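You can confirm the weighting from any saved Lighthouse JSON report: each entry in the performance category's auditRefs carries the weight it contributes to the overall score. A minimal sketch (the report path is a placeholder):

    import { readFileSync } from 'node:fs';

    const lhr = JSON.parse(readFileSync('./report.json', 'utf8')); // placeholder path

    // Only audits with a non-zero weight contribute to the performance score;
    // you will see LCP and TBT carrying the largest weights.
    for (const ref of lhr.categories.performance.auditRefs) {
      if (ref.weight > 0) {
        const audit = lhr.audits[ref.id];
        console.log(`${ref.id}: weight=${ref.weight}, value=${audit.displayValue}`);
      }
    }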

Also, you ask us not to tell you to check the diagnostics, but I would encourage you to do so anyway - in detail. If Lighthouse complains about a particular item on one run but passes it on the next, that is useful information for investigating the discrepancies between scores and pinpointing the issue.
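One way to do that systematically is to save two runs as JSON and diff which audits fail in one run but not the other. A minimal sketch (the report filenames are placeholders, saved with --output=json):

    import { readFileSync } from 'node:fs';

    // Two JSON reports from separate runs of the same page.
    const [run1, run2] = ['./run1.json', './run2.json'].map(
      (path) => JSON.parse(readFileSync(path, 'utf8'))
    );

    // Collect the ids of audits that scored poorly in a given run
    // (score is null for purely informative audits, so skip those).
    const failing = (lhr) =>
      Object.values(lhr.audits)
        .filter((audit) => audit.score !== null && audit.score < 0.9)
        .map((audit) => audit.id);

    const [first, second] = [failing(run1), failing(run2)];
    console.log('Failing only in run 1:', first.filter((id) => !second.includes(id)));
    console.log('Failing only in run 2:', second.filter((id) => !first.includes(id)));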

Upvotes: 4
