A major issue databases face is making large amounts of combined data available to users without locking records. A properly normalized database can make this even harder when the requested dataset is a join across many tables. If the request is slow, you can run into record-locking issues as well as CPU and bandwidth problems.
One solution is to pre-compile a large chunk of the request as JSON, then host it elsewhere. In our case, we have thousands upon thousands of active “orders” that staff need to view, filter, pivot, and edit. Rather than have every user pull the entire dataset from the server, locking up a multitude of tables each time, we build a JSON file and push it to Google Storage.
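A minimal sketch of that build step might look like the following, assuming a hypothetical `fetchActiveOrders()` query helper and an `order-cache` bucket (both names are illustrative, not our actual implementation):

```ts
// Nightly cache-build job: serialize the full dataset once and push it to
// Google Storage so individual users never hit the database for the bulk load.
import { Storage } from '@google-cloud/storage';

interface Order {
  id: number;
  updatedAt: string; // ISO timestamp
  [field: string]: unknown;
}

// Hypothetical helper: pulls the full active-order dataset from the database once.
declare function fetchActiveOrders(): Promise<Order[]>;

const storage = new Storage();
const bucket = storage.bucket('order-cache'); // assumed bucket name

export async function buildDailyCache(): Promise<void> {
  const orders = await fetchActiveOrders();

  const body = JSON.stringify({
    generatedAt: new Date().toISOString(),
    orders,
  });

  await bucket.file('orders/latest.json').save(body, {
    contentType: 'application/json',
    gzip: true, // a ~20 MB JSON file compresses well
  });
}
```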
Our application’s “refresh” option requests a signed link from our server, which keeps track of the latest cache file (generally built at 5 AM). The client then downloads that file directly from Google Storage and loads it into the browser’s local IndexedDB. Once that completes, we simply request records with an updated date greater than 5 AM of the same day, which is generally only 2–3k rows.
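A browser-side sketch of that refresh flow, under the assumption that the server exposes something like `/api/cache-url` (returning the signed link and build time) and `/api/orders?since=...` for the delta:

```ts
// Browser-side "refresh": bulk load from Google Storage, then top up the delta.
interface Order {
  id: number;
  updatedAt: string;
  [field: string]: unknown;
}

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('orders-cache', 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore('orders', { keyPath: 'id' });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

function putOrders(db: IDBDatabase, orders: Order[]): Promise<void> {
  return new Promise((resolve, reject) => {
    const tx = db.transaction('orders', 'readwrite');
    const store = tx.objectStore('orders');
    orders.forEach(o => store.put(o));
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

export async function refresh(): Promise<void> {
  // 1. Ask our server for a short-lived signed link to the latest cache file.
  const { url, generatedAt } = await (await fetch('/api/cache-url')).json();

  // 2. Pull the bulk file straight from Google Storage, bypassing our server.
  const { orders } = await (await fetch(url)).json();

  const db = await openDb();
  await putOrders(db, orders);

  // 3. Fetch only the records changed since the cache was built
  //    (typically 2-3k rows instead of the whole dataset).
  const delta: Order[] = await (
    await fetch(`/api/orders?since=${encodeURIComponent(generatedAt)}`)
  ).json();
  await putOrders(db, delta);
}
```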
In this fashion, we have replaced the entire company’s daily reload with the equivalent of a single user’s request at 5 AM. The bandwidth of a 20 MB file being downloaded thousands of times a day has been offloaded to Google Storage. Security for the file is handled with signed links that expire after 2 minutes and are restricted by CORS to our origin.
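The server side of that security model is small. A sketch using the `@google-cloud/storage` client, again with the assumed `order-cache` bucket and an illustrative application origin:

```ts
// Signed-link issuance and CORS lock-down for the cache bucket.
import { Storage } from '@google-cloud/storage';

const storage = new Storage();
const bucket = storage.bucket('order-cache'); // assumed bucket name

// Issue a V4 signed URL that expires two minutes after it is handed out.
export async function getCacheUrl(): Promise<string> {
  const [url] = await bucket.file('orders/latest.json').getSignedUrl({
    version: 'v4',
    action: 'read',
    expires: Date.now() + 2 * 60 * 1000, // 2-minute lifetime
  });
  return url;
}

// One-time setup: restrict cross-origin reads to the application's origin.
export async function lockDownCors(): Promise<void> {
  await bucket.setCorsConfiguration([
    {
      origin: ['https://app.example.com'], // assumed app origin
      method: ['GET'],
      responseHeader: ['Content-Type'],
      maxAgeSeconds: 300,
    },
  ]);
}
```

With this split, our server only ever signs URLs and serves the small delta query; the heavy file transfer never touches it.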