Reputation: 564
I have run into a task and I am not entirely sure what the best solution is.
I currently have one data set in Mongo that I use to display user data on a website; the backend is in Python. A different team in the company recently created an API with additional data that I would like to show alongside the user data. The API's data is paired to my user data (it has specific data per user), so I will need to sync the two up.
I had initially thought of creating a cron job that runs weekly (since the "other" API's data does not update often), pulling the information, and writing it directly into my data set after pairing it up.
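For what it's worth, here is a minimal sketch of what that weekly job might look like, assuming the backend uses `requests` and `pymongo`; the URL, database/collection names, and the `user_id` pairing key are all hypothetical stand-ins for whatever your setup actually uses:

```python
import requests
from pymongo import MongoClient

# Hypothetical names: swap in the other team's real URL and your DB details.
OTHER_API_URL = "https://other-team.example.com/api/user-extras"

def sync_other_api_data():
    client = MongoClient("mongodb://localhost:27017")
    users = client["mydb"]["users"]

    resp = requests.get(OTHER_API_URL, timeout=30)
    resp.raise_for_status()

    # Assumes the API returns a list of records sharing a user_id with our data.
    for record in resp.json():
        users.update_one(
            {"user_id": record["user_id"]},
            {"$set": {"extra_data": record}},
        )

if __name__ == "__main__":
    sync_other_api_data()  # run weekly via cron, e.g. 0 3 * * 0
```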
A coworker has suggested caching the "other" API data and then just returning the "mixed" data to display on the website.
What is the best course of action here? Actually adding the data to our data set would give us a single source of truth, remove the dependency on the other endpoint, and mean less work each time we need the data. And if we end up needing that information elsewhere in the project, it would already be in our DB and we could use it directly without needing to re-organize/pair it.
Just looking for general pros and cons of each solution. Thanks!
Upvotes: 0
Views: 70
Reputation: 9479
Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python app.
Which you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation if the remote API fails. If it is not mission-critical data and the API is reliable, just integrate it in the browser. If they support things like HTTP Cache-Control, all the better: the user's browser will handle the caching for you.
If the API is not scalable/reliable, then consider adding a server-side proxy so that you can catch errors and provide graceful degradation.
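To make option B concrete, here is a rough sketch of such a proxy using Flask, with a simple in-process cache and a stale-data fallback when the remote API is down. The endpoint URL, TTL, and `user_id` field are assumptions, not anything the other team has specified:

```python
import time
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical remote endpoint; replace with the other team's real URL.
OTHER_API_URL = "https://other-team.example.com/api/user-extras"
CACHE_TTL = 60 * 60  # refresh the remote data at most once an hour
_cache = {"data": None, "fetched_at": 0.0}

def fetch_other_api():
    """Return cached remote data, refreshing it when the TTL expires."""
    now = time.time()
    if _cache["data"] is None or now - _cache["fetched_at"] > CACHE_TTL:
        resp = requests.get(OTHER_API_URL, timeout=10)
        resp.raise_for_status()
        _cache["data"] = resp.json()
        _cache["fetched_at"] = now
    return _cache["data"]

@app.route("/users/<user_id>/extras")
def user_extras(user_id):
    try:
        data = fetch_other_api()
    except requests.RequestException:
        # Graceful degradation: serve stale data if we have any,
        # otherwise tell the front-end the extras are unavailable.
        if _cache["data"] is not None:
            data = _cache["data"]
        else:
            return jsonify({"extras": None, "degraded": True}), 503
    # Assumes records carry a string user_id matching the URL segment.
    record = next((r for r in data if r.get("user_id") == user_id), None)
    return jsonify({"extras": record})
```

The point of the proxy is exactly the `except` branch: the website keeps working with slightly stale (or absent) extras instead of breaking when the other team's API has an outage.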
Upvotes: 2