Cache / queue for high frequency writes


Assume a (PHP) web application with X00,000 requests per minute, and a SQL database with about the same number of entries.

Each request will trigger a few (<5) db reads and one db write (mostly updates). I am mainly worried about performance issues caused by the writes, and am looking for solutions to increase the possible requests per minute.

What solutions would you suggest, how would you go about implementing it, and why?

Things I have thought about include a cache that the writes go to, which is synced to the db at some time interval, a queue such as Gearman, or simply MySQL's INSERT DELAYED.

However, since the load is expected to be constant, INSERT DELAYED will probably not help much when the server is constantly at >90% load, so I am looking for ways to significantly reduce the load compared to issuing single writes.

Thanks


1 Answer

Answered by Marki555:

If you are updating the same table you are also reading from, then yes, those updates will slow down reads, because they invalidate the MySQL query cache and can cause table locking. A common example is a product-details table in which you update visit counts.

A simple approach is to stop updating the main table and instead keep the changing values in a separate table (for example product_views), altering your queries to fetch those values with a JOIN. With this approach your main (and big) table stays mostly static and is unaffected by query-cache invalidation and locking.
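A minimal sketch of that layout, assuming a main products table keyed by prod_id and an illustrative product_views table that holds only the counter:

    -- Small, separate table for the frequently updated counter
    CREATE TABLE product_views (
        prod_id INT UNSIGNED NOT NULL PRIMARY KEY,
        views   INT UNSIGNED NOT NULL DEFAULT 0
    ) ENGINE=InnoDB;

    -- Counter updates touch only the small table
    -- (assumes a row for the product already exists)
    UPDATE product_views SET views = views + 1 WHERE prod_id = 123;

    -- Reads join the counter in; the big products table stays static
    SELECT p.*, v.views
    FROM products p
    LEFT JOIN product_views v ON v.prod_id = p.prod_id
    WHERE p.prod_id = 123;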

A more advanced approach is to use INSERTs instead of UPDATEs. Rather than updating a row on every pageview, INSERT a new row into a temp table (e.g. product_views_temp), so each view is one row. Then schedule a cron job, for example every 5 minutes, to run a query like SELECT prod_id, COUNT(*) FROM product_views_temp GROUP BY prod_id and, based on the numbers, update the main table (this should be possible in one query). The main benefit is that these INSERTs are faster, and you still have the counts in the main table if you need them for whatever reason. A small drawback is that there will be a short delay before visitors see the updated counts.
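A hedged sketch of that cron step, assuming the product_views table from above and a hypothetical product_views_temp log table (MySQL syntax):

    -- One row per pageview, appended by the application on each request
    CREATE TABLE product_views_temp (
        prod_id INT UNSIGNED NOT NULL
    ) ENGINE=InnoDB;

    -- Cron job: fold the accumulated views into the counter table in one statement
    INSERT INTO product_views (prod_id, views)
        SELECT prod_id, COUNT(*) FROM product_views_temp GROUP BY prod_id
    ON DUPLICATE KEY UPDATE views = views + VALUES(views);

    -- Clear the processed rows; on a busy server you would swap tables or
    -- delete only the processed rows so views logged in between are not lost
    TRUNCATE TABLE product_views_temp;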

EDIT: If you can afford to lose the count data for those 5 minutes, you can make it even faster by creating that temp table with the MySQL MEMORY storage engine. MEMORY tables are very fast for INSERTs (but can be a little slower for UPDATEs). When the server crashes or shuts down, the table itself still exists, but its contents are lost.
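Only the table definition changes; for example, using the same illustrative columns as above:

    -- Same log table, but kept entirely in RAM; its contents vanish on restart
    CREATE TABLE product_views_temp (
        prod_id INT UNSIGNED NOT NULL
    ) ENGINE=MEMORY;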

You can also use the MEMORY engine for session data (it does not matter much if users need to log in again after an unexpected server crash, since that should not happen often).
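For completeness, one possible shape for such a session table (the schema here is an assumption, not part of the answer); note that MEMORY tables do not support TEXT or BLOB columns, so the session payload has to fit in a sized VARCHAR:

    -- Hypothetical in-RAM session store; rows disappear on server restart
    CREATE TABLE sessions (
        session_id   CHAR(32)      NOT NULL PRIMARY KEY,
        user_id      INT UNSIGNED  NOT NULL,
        session_data VARCHAR(2048) NOT NULL,
        last_seen    TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP
    ) ENGINE=MEMORY;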