drew moore

Reputation: 32680

"fragmenting" HTTP requests

I have an Angular app pulling data from a REST server. Each item we pull has some "core" data - what's needed to display its basic representation - and then what I call "secondary" data: comments and other things that the user might or might not want to see.

I'm trying to optimize our request pattern to minimize the overall amount of time the user spends looking at a loading spinner. Pulling all (core and secondary) data at once makes the initial request return far too slowly, but pulling only the bare essentials until the user asks for something we haven't requested yet also creates unnecessary load time - at least inasmuch as I could have anticipated what they'd want to see and loaded it while they were busy reading the core content.

So, right now I'm doing a "core content" pull first and then initiating a "secondary" pull at the end of the success callback from the first. This is going to be an experimental process, but I'm wondering what (if any) best practices have been established in this situation. (I'm sure a good answer to that is a google away, but in this instance I'm not quite sure what to google - thus the quotation marks in this question's title)
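To make the pattern concrete, here is a minimal sketch of that two-phase load - core pull first, secondary pull kicked off from the success callback. The endpoints and the `fetchJson` helper are hypothetical stand-ins for `$http.get`, with stubbed responses so the flow can be followed end to end:

```javascript
// Stand-in for $http.get / fetch; URLs and payloads are made up for illustration.
function fetchJson(url) {
  const responses = {
    '/items/core': [{ id: 1, title: 'A' }, { id: 2, title: 'B' }],
    '/items/1/secondary': { comments: ['first!'] },
    '/items/2/secondary': { comments: [] },
  };
  return Promise.resolve(responses[url]);
}

function loadItems() {
  return fetchJson('/items/core').then(items => {
    // Core data is now available for rendering; start the secondary
    // pulls immediately so they arrive while the user reads.
    const secondary = items.map(item =>
      fetchJson(`/items/${item.id}/secondary`).then(extra =>
        Object.assign(item, extra)
      )
    );
    return Promise.all(secondary).then(() => items);
  });
}
```

In a real app you would render as soon as the core promise resolves and let the secondary merges update the view afterwards, rather than awaiting `Promise.all`.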

A more concrete question: Am I better off initiating many small HTTP transactions or a few large ones? My instinct is to do many small ones, particularly if I can anticipate a few things the user is likeliest to want to see first and get those loaded as soon as possible. But surely there's an asymptote here? Or am I off-base in this line of thinking entirely?

Upvotes: 1

Views: 59

Answers (2)

Manube

Reputation: 5242

I use the same approach as you, and it works pretty well for a many-keyed collection of 10,000+ items.

The collection is paginated with ui.bootstrap.pagination, so a maximum of 10 items are displayed at once, and it can be searched by title.

So my approach is to retrieve only id and title for the whole collection, so the search can be used straight away.

Then, as the items displayed on screen are in an array, I place a $watch on that array. The job of the $watch is to fetch the full details of the items in the array (the secondary pull), but of course only when the array changes. So, in the worst-case scenario, you are pulling the full details of only 10 items.

Results are cached for greater efficiency. The app displays results instantly, as the $watch acts as a pre-loader.
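The watch-plus-cache idea can be sketched outside Angular like this. The function names and `fetchDetail` stub are hypothetical; the point is that a change to the visible page triggers detail fetches only for ids not already cached:

```javascript
// Sketch of the $watch-as-preloader idea: whenever the visible page of
// items changes, fetch full details only for ids we haven't cached yet.
const detailCache = new Map();
let fetchCount = 0; // counts simulated HTTP calls, for illustration

function fetchDetail(id) {
  fetchCount++;
  return Promise.resolve({ id, comments: [] }); // stand-in for $http.get
}

function onVisibleItemsChanged(visibleIds) {
  // Worst case: one request per on-screen item (e.g. 10 per page).
  const pending = visibleIds
    .filter(id => !detailCache.has(id))
    .map(id => fetchDetail(id).then(detail => detailCache.set(id, detail)));
  return Promise.all(pending);
}
```

Re-running `onVisibleItemsChanged` with overlapping ids issues no duplicate requests, which is what makes the watch behave like a pre-loader rather than a refetch.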

Am I better off initiating many small HTTP transactions or a few large ones?

I believe large transactions for just a few items (the ones that are clickable on screen) are very efficient.

Regarding the best practice bit: I suppose there are many ways to achieve your goals; however, the technique you are using works extremely well, as it retrieves only what is needed, and only just before it is needed. Besides, it is simple enough to implement.

Also, like you, I would have thought many smaller pulls were surely better than a few large ones. However, I was advised to go for a large pull in a comment to this question: Fetching subdocuments with angular $http

Upvotes: 2

Rias

Reputation: 1996

To answer your question about which keywords to search for, I suggest:

progressive loading

An alternative could be using websockets and streaming loading; Oboe.js does this quite well: http://oboejs.com/examples
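For flavor, the streaming idea that libraries like Oboe.js implement - render each item as soon as it arrives instead of waiting for the whole payload - can be mimicked with an async generator. Everything here is a toy stub, not Oboe's actual API:

```javascript
// Toy illustration of streaming loading: the generator stands in for a
// chunked HTTP response; a real app would use Oboe.js or fetch streams.
async function* streamItems() {
  const items = [{ id: 1 }, { id: 2 }, { id: 3 }];
  for (const item of items) {
    // Simulate network delay between chunks.
    await new Promise(resolve => setTimeout(resolve, 10));
    yield item;
  }
}

async function renderProgressively(render) {
  for await (const item of streamItems()) {
    render(item); // the UI updates per item, with no spinner for the full payload
  }
}
```

The user sees the first item after one chunk's worth of latency instead of three, which is the same spinner-minimizing goal the question is after.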

Upvotes: 0

Related Questions