Reputation: 22191
Suppose a server (or server cluster) launches a web extractor each time a web client calls a specific API. This extractor collects some legally obtainable data for the web client to display.
Should there be a single well-designed, secured database on the server side, with the client forced to call the API every time it wants to display the data in question?
In that case, suppose the client wants to link the retrieved data to its own data, stored in its own database.
Does it make sense for the client to link its records to the retrieved data using only IDs/tokens? That way it would not need to model dedicated database tables for the remote data, but it would need extra requests to resolve the remote data those IDs/tokens point to.
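For illustration, a minimal TypeScript sketch of that ID/token approach, assuming a hypothetical `/items` endpoint: the local record stores only a reference, and the remote data is resolved through an extra API call.

```typescript
// Hypothetical API base URL and endpoint, for illustration only.
const API_BASE = "https://example.com/api";

interface LocalRecord {
  localId: number;
  remoteItemId: string; // ID/token pointing at data owned by the server
  note: string;         // client-side data linked to the remote item
}

interface RemoteItem {
  id: string;
  payload: unknown;
}

// Resolving a local record's remote half costs one extra network call.
async function resolve(record: LocalRecord): Promise<RemoteItem> {
  const res = await fetch(`${API_BASE}/items/${record.remoteItemId}`);
  if (!res.ok) throw new Error(`API returned ${res.status}`);
  return (await res.json()) as RemoteItem;
}
```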
Or should there be a dedicated database on the client side, to avoid the heavy network traffic caused by all these client => server API calls?
I also wonder about this question if the scenario changes and the data becomes strictly confidential, so that no web client is allowed to store it...
Or both? Even if that means redundancy between the two databases.
Upvotes: 0
Views: 66
Reputation: 2939
Your question is really about caching, and a database is not the only way to store cached data. I would choose a database only if the data has a long effective lifetime and is meant to be queried for different representations on the client side. If the data has a short time to live, an in-place caching solution is usually preferable, with the data stored efficiently in serialized form. Whether the cache lives on the client or the server depends on your security requirements and on the direction you want the architecture to evolve.
Upvotes: 1