How to do a collaborative API cache?
from MindfulMaverick@piefed.zip to programming@programming.dev on 26 Feb 12:24
https://piefed.zip/c/programming/p/1148123/how-to-do-a-collaborative-api-cache

I’m looking for advice on building a collaborative caching system for APIs with strict rate limits that automatically commits updates to Git, allowing multiple users to share the scraping load and reduce server strain.

The idea is to maintain a local dataset where each piece of data has a timestamp. When anyone runs the script, it only fetches records older than a configurable threshold from the API, serving everything else from the local cache. After fetching new data, the script would automatically commit changes to a shared Git repository, so subsequent users benefit from the updated cache without hitting the server. This way, the same task that would take days for one person could be completed in seconds by the next.

Has anyone built something like this, or know of existing tools/frameworks that support automated Git commits for collaborative data collection with timestamp-based incremental updates?
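For concreteness, here is a minimal sketch of what I have in mind. All names (`CACHE_FILE`, `get_record`, `commit_cache`, the `fetch_fn` callback) are placeholders, and the staleness threshold is arbitrary:

```python
import json
import subprocess
import time
from pathlib import Path

CACHE_FILE = Path("cache.json")  # hypothetical cache file, shared via the Git repo
MAX_AGE_SECONDS = 24 * 3600      # configurable staleness threshold

def load_cache():
    """Read the shared cache file, or start empty if it doesn't exist yet."""
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}

def save_cache(cache):
    """Write deterministically (sorted keys) to keep Git diffs small."""
    CACHE_FILE.write_text(json.dumps(cache, indent=2, sort_keys=True))

def get_record(key, fetch_fn, cache, now=None):
    """Return (data, fetched) -- serve from cache if fresh, else call fetch_fn."""
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry and now - entry["fetched_at"] < MAX_AGE_SECONDS:
        return entry["data"], False       # served from the local cache
    cache[key] = {"data": fetch_fn(key), "fetched_at": now}
    return cache[key]["data"], True       # fetched from the rate-limited API

def commit_cache(message="Update API cache"):
    """Commit and push the cache file so other users can pull the fresh data."""
    subprocess.run(["git", "add", str(CACHE_FILE)], check=True)
    # `git diff --cached --quiet` exits non-zero only when staged changes exist
    if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode != 0:
        subprocess.run(["git", "commit", "-m", message], check=True)
        subprocess.run(["git", "push"], check=True)
```

The open questions for me are around the Git layer: merge conflicts when two users fetch overlapping keys at the same time, and keeping the repo size sane as the cache churns.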

#programming


bricked@feddit.org on 26 Feb 14:52

This could easily be implemented with an SQL-like database. Are you sure you want to use Git for this? The only advantage would be that you get historic data out of the box, but you’ll probably only fetch the latest data anyway.
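To illustrate, the same timestamp-based lookup fits naturally in a single SQLite table. This is a sketch, not anything from the original post; the table and column names are made up:

```python
import sqlite3
import time

# ":memory:" for demonstration; a shared deployment would use a file path
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS api_cache (
        key        TEXT PRIMARY KEY,
        data       TEXT NOT NULL,   -- JSON payload from the API
        fetched_at REAL NOT NULL    -- Unix timestamp of the fetch
    )
""")

now = time.time()
conn.execute(
    "INSERT OR REPLACE INTO api_cache VALUES (?, ?, ?)",
    ("user/42", '{"name": "x"}', now),
)

# Serve only records younger than the staleness threshold
max_age = 24 * 3600
row = conn.execute(
    "SELECT data FROM api_cache WHERE key = ? AND fetched_at > ?",
    ("user/42", now - max_age),
).fetchone()
```

A query returning no row means the record is missing or stale and should be refetched from the API.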

derek@infosec.pub on 26 Feb 15:48

What problem are you attempting to solve?

How do you imagine this solution solves that problem?