PHP caching

Here is an example: when a user logs in, we check whether their data is already in the cache; if not, it is fetched from the database and written to the cache. After that, all changes happen only in the cache. How do I push updates from the cache back to the database? New values have to be saved periodically, because the cache, as we know, is not permanent.
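For illustration, the read path described above might look roughly like this (a sketch only; it assumes the phpredis extension and a PDO connection, and all key, table and function names are made up):

```php
<?php
// Cache-aside read: check Redis first, fall back to the database on a miss.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

function loadUser(Redis $redis, PDO $pdo, int $userId): array
{
    $key = "user:$userId";
    $cached = $redis->get($key);
    if ($cached !== false) {
        return json_decode($cached, true);        // cache hit
    }
    // Cache miss: read from the database and populate the cache.
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute([$userId]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);
    $redis->set($key, json_encode($user), 3600);  // TTL of one hour, adjust to taste
    return $user;
}
```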

One option: run a daemon (via cron) that walks through all the users and, whenever it sees that a user's data has changed, updates that account in the database. Run the check, say, once every one or two minutes, with some limit on the batch size.

I don't know whether this option will work or whether it is optimal; maybe there are better ideas?
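For concreteness, the cron variant might look roughly like this (a sketch; it assumes the application adds a user's id to a "dirty_users" set in Redis whenever it changes that user's cached data, and all names are illustrative):

```php
<?php
// cron_flush.php - run from cron every minute or two.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

$limit = 500; // cap the batch to avoid a peak load
while ($limit-- > 0 && ($userId = $redis->sPop('dirty_users')) !== false) {
    $data = json_decode($redis->get("user:$userId"), true);
    if ($data === null) {
        continue; // the cache entry has expired, nothing to flush
    }
    $stmt = $pdo->prepare('UPDATE users SET balance = ?, updated_at = NOW() WHERE id = ?');
    $stmt->execute([$data['balance'], $userId]);
}
```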

Update:

The question is more than relevant. One of the suggestions: no cron is needed; instead, an extra field is added to the cached record – a request counter – and, for example, on every 1000th request the data is flushed to the database and the counter is reset. This is a much better solution: it is more flexible, since you can change the number 1000 to taste, and it doesn't create the peak load that a cron sweep does.
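The counter idea might look roughly like this (again only a sketch, with the same assumed $redis/$pdo connections and made-up key and column names):

```php
<?php
// Counter-based write-back: every change goes to the cache, and every
// FLUSH_EVERY-th change is also written through to the database.
const FLUSH_EVERY = 1000;

function saveUser(Redis $redis, PDO $pdo, int $userId, array $data): void
{
    $redis->set("user:$userId", json_encode($data));
    $writes = $redis->incr("user:$userId:writes");
    if ($writes >= FLUSH_EVERY) {
        $stmt = $pdo->prepare('UPDATE users SET balance = ? WHERE id = ?');
        $stmt->execute([$data['balance'], $userId]);
        $redis->set("user:$userId:writes", 0); // reset the counter after the flush
    }
}
```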


Answer 1, authority 100%

My suggestion is NoSQL in SQL: make one table with two fields (key and value) and use that. Such a system also scales easily, although in practice I have not implemented exactly this scheme; I have built projects with Redis alone, Redis + Oracle, and Redis + MySQL. Redis can persist to disk, but it is still limited by RAM.
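Something along these lines, assuming MySQL through PDO (table and column names are made up):

```php
<?php
// "NoSQL in SQL": a single two-field table used as a key-value store.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');
$pdo->exec('CREATE TABLE IF NOT EXISTS kv (
    k VARCHAR(255) PRIMARY KEY,
    v MEDIUMBLOB
)');

// Upsert a value under a key.
$stmt = $pdo->prepare('INSERT INTO kv (k, v) VALUES (?, ?)
                       ON DUPLICATE KEY UPDATE v = VALUES(v)');
$stmt->execute(['user:42', json_encode(['name' => 'Ivan', 'balance' => 100])]);

// Read it back.
$stmt = $pdo->prepare('SELECT v FROM kv WHERE k = ?');
$stmt->execute(['user:42']);
$value = json_decode($stmt->fetchColumn(), true);
```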

There was one project – a personal account area for an ISP's subscribers – where nothing went into Redis except reference data.

If all of your business logic fits into 8 GB of server RAM, or 2×8 GB across two servers, then Redis alone is feasible; then again, with proper design those 16 GB will not overload MySQL either.

Many people naively believe that by adding a cache and stuffing everything into it they will solve all their problems. I don't want to offend anyone, but even if it doesn't fail during testing, sooner or later you will hit an unrecoverable situation (just in case, think through a shutdown and how to keep working without the cache).

For a database-plus-cache scheme I would advise the following (from my experience: roughly 4 projects delivered and 4 in progress now):

  1. Develop a data model for all of your business logic. Try to design it so that insert, update and select operations are not mixed on the same tables. Let me explain: if a table only ever receives inserts, strip it of indexes. Try not to use update at all, or better, limit it to updating a single field (a foreign key) and record the new value as a row in another table (see the sketch after this list).
    For example: some guys built a project that constantly crashed under heavy load (they had a 4-tier architecture with an application server in Delphi, and it failed at the database tier, which could not cope with the flood of large update transactions). They too decided to cache everything, but it was of little use. They blamed their inability to design for NoSQL, yet all the slowness came from updates in the ordinary database, and NoSQL could not handle the volume either (understood correctly: the amount of RAM). I helped them redesign their approach to data storage – their schema was fully normalized – and advised them to get rid of updates and replace everything with inserts. So far it runs without Redis, although Redis is there too, for intermediate stuff (reference data and settings).
  2. Now you will have a clearer picture of how much data is actually being moved around, which is easy to estimate even in bytes, multiplied by activity and the number of users. From that you can see right away what to put in the cache, and when it would overflow and everything would collapse. Maybe only the immutable reference data will fit there, and the associations will not. I have seen guys shove everything from the database into Redis; my theory held – it flew for a while, and then it collapsed. Besides, some things are better optimized in the database itself.
    Those guys came to that decision on the strength of a previous successful project, only they were looking in the wrong place: there they cached not the data from the database but the content already rendered for the user (HTML pages).
  3. Don't use Redis or memcached on their own. At the very least, put a database behind them with index-free tables – inserts will fly. Startup will then take a long time, but it depends on what matters more.
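The sketch promised in item 1 – replacing update with insert – could look like this (purely illustrative table and column names, not taken from any of the projects mentioned):

```php
<?php
// Insert-only storage: instead of overwriting a balance, append a change
// record to an index-light log table and derive the current value on read.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

// Every change is an INSERT, never an UPDATE.
$stmt = $pdo->prepare('INSERT INTO balance_changes (user_id, delta, created_at)
                       VALUES (?, ?, NOW())');
$stmt->execute([42, -150]); // e.g. the user spent 150

// The current balance is an aggregate over the log (and, of course, cacheable).
$stmt = $pdo->prepare('SELECT COALESCE(SUM(delta), 0) FROM balance_changes WHERE user_id = ?');
$stmt->execute([42]);
$balance = (int) $stmt->fetchColumn();
```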

P.S.: I do have the experience, but my answers lack terminology and coherent sentences because my thoughts run ahead of the typed text, so sorry for the writing.


Answer 2, authority 33%

A classic of the genre is memcached. Use it. As for periodic saving: it reduces reliability. If anything happens in the system – the cache goes down, the connection to the database is lost, or something else breaks – all the data is lost, even though you have already assured the user that everything was saved. So it may not be the best idea if the data has any real value for users.
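In other words, write-through: the database stays the source of truth and memcached only speeds up reads, so losing the cache loses nothing. A rough sketch (assumes the memcached PECL extension; names are illustrative):

```php
<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

function updateBalance(PDO $pdo, Memcached $mc, int $userId, int $balance): void
{
    // 1. Persist first: only after this can we tell the user the change is saved.
    $stmt = $pdo->prepare('UPDATE users SET balance = ? WHERE id = ?');
    $stmt->execute([$balance, $userId]);
    // 2. Then drop (or refresh) the cached copy so readers don't see stale data.
    $mc->delete("user:$userId");
}
```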


Answer 3, authority 33%

Your task is very elegantly solved by Redis.

The daemon variant is a crutch that will heavily load the system as soon as the number of users exceeds a certain threshold.

In general, changes should be written not to the cache but to the database (and possibly to the cache as well), and for the cache to do its job you need race-free caching: a single process is responsible for setting and updating a given cache entry (ready-made solutions for this exist on github.com).
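A minimal sketch of that race-free regeneration, assuming phpredis (key names, timings and the lock scheme are illustrative, not a particular GitHub library):

```php
<?php
// Only the process that grabs the lock rebuilds the cache entry;
// everyone else waits briefly and re-reads it.
function getOrRebuild(Redis $redis, string $key, callable $rebuild, int $ttl = 300)
{
    $value = $redis->get($key);
    if ($value !== false) {
        return $value;
    }
    // Try to become the single rebuilder (NX: set only if the lock is free).
    if ($redis->set("$key:lock", 1, ['nx', 'ex' => 10])) {
        $value = $rebuild();             // e.g. a heavy SQL query
        $redis->set($key, $value, $ttl);
        $redis->del("$key:lock");
        return $value;
    }
    // Someone else is rebuilding: wait a moment and read again.
    usleep(100000);
    $value = $redis->get($key);
    return $value !== false ? $value : $rebuild();
}
```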