Reputation: 1127
I am writing an application that needs to fetch data from a third-party website. Unfortunately, a specific piece of information I need (a hotel name) can only be obtained by fetching the webpage with cURL and then parsing it (I'm using XPath) for an `<h1>` DOM element.
Since I'm going to run this script many times a day, and I'll probably have to fetch the same hotel names again and again, I thought a caching mechanism would be good: check whether the hotel has been parsed in the past and, based on that, decide whether to make the webpage request at all.
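For illustration, here is roughly what I have in mind (a minimal Python sketch; the `hotel_cache` dict, `get_hotel_name`, and the XPath are just stand-ins, since the storage backend is exactly what I'm asking about):

```python
import requests
from lxml import html

# Placeholder in-memory cache; whether this should live in a file
# or a DB is the open question.
hotel_cache = {}  # hotel_id -> hotel name

def get_hotel_name(hotel_id, url):
    # Serve from the cache if this hotel was parsed before.
    if hotel_id in hotel_cache:
        return hotel_cache[hotel_id]
    # Otherwise fetch the page and extract the <h1> text via XPath.
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    tree = html.fromstring(response.content)
    name = tree.xpath("//h1/text()")[0].strip()
    hotel_cache[hotel_id] = name
    return name
```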
However, I have two questions. First, is this cache better implemented in a DB (since there will be an ID-to-hotel-name mapping) or in a file? Second, is this "optimization" worth the trouble? Will I gain a significant speed-up?
Upvotes: 1
Views: 32
Reputation: 5252
Go with a DB, because it will give you more flexibility and functionality for data manipulation (filtering, sorting, etc.) out of the box.
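For example, a minimal sketch of such a cache using SQLite (the table name and helper functions are just illustrative):

```python
import sqlite3

# A single-file SQLite database keeps setup trivial while still
# giving you SQL queries over the ID -> name mapping.
conn = sqlite3.connect("hotel_cache.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS hotels (id TEXT PRIMARY KEY, name TEXT NOT NULL)"
)

def cached_name(hotel_id):
    # Return the stored name, or None if this hotel hasn't been parsed yet.
    row = conn.execute(
        "SELECT name FROM hotels WHERE id = ?", (hotel_id,)
    ).fetchone()
    return row[0] if row else None

def store_name(hotel_id, name):
    # Insert or update the mapping after a successful parse.
    conn.execute(
        "INSERT OR REPLACE INTO hotels (id, name) VALUES (?, ?)",
        (hotel_id, name),
    )
    conn.commit()
```

With the primary key on `id`, each lookup is a single indexed SELECT, and you can later add columns (e.g. a fetched-at timestamp for cache expiry) without changing any file format.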
Upvotes: 2