Suppose, for example, there are 3,000 people on the site at the same time, and every 30 seconds each of them sends at least one request, possibly 5-6. How do I make sure the server does not crash? I suppose I could write the requests to a file and then process them in batches of, say, 300. Of course the data would then update with a delay, but that is acceptable. Or is this the wrong approach? How can I reduce the load on the MySQL server?
Answer 1, authority 100%
First of all, it depends on the nature of the queries: selects, inserts, or updates.
Selects can be optimized by revising the queries, adding indexes (I suspect that is already done), and caching the results. For the cache, memcached will help; just beware of the dogpile effect and put up a semaphore around cache refreshes.
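A minimal sketch of that caching pattern, with plain dicts standing in for memcached (an assumption for illustration; with a real client you would use its atomic `add` as the semaphore, and `expensive_query` is a hypothetical placeholder for the actual SELECT):

```python
import time

cache = {}   # key -> (value, expires_at); stands in for memcached storage
locks = {}   # key -> lock expiry; stands in for the refresh semaphore

def expensive_query(key):
    # Placeholder for the real SELECT against MySQL.
    return f"result for {key}"

def cached_get(key, ttl=30, lock_ttl=5):
    now = time.time()
    entry = cache.get(key)
    if entry and entry[1] > now:
        return entry[0]                      # fresh cache hit
    # Miss or stale: only one caller should recompute (dogpile guard).
    if locks.get(key, 0) > now:
        if entry:
            return entry[0]                  # serve stale data while refreshing
        time.sleep(0.01)                     # no stale copy: wait and retry
        return cached_get(key, ttl, lock_ttl)
    locks[key] = now + lock_ttl              # take the semaphore
    try:
        value = expensive_query(key)
        cache[key] = (value, now + ttl)
        return value
    finally:
        locks.pop(key, None)                 # release the semaphore

print(cached_get("articles:front_page"))
```

The point of serving the stale entry while one worker refreshes is exactly what prevents the dogpile: expired key or not, only a single process hits the database.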
For inserts and updates, it depends on which ACID properties they (and the accompanying selects) must satisfy. If there are no strict requirements (say, a simple counter increment or inserting a stream of log rows), you can accumulate them in an intermediate buffer (memcached or Redis, though files also work) and periodically send them to the database in batches.
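A sketch of that buffering idea for counter increments, using an in-process dict as the buffer (an assumption; in production the buffer would live in Redis or memcached so it survives across requests, and the table/column names here are made up):

```python
from collections import defaultdict

counter_buffer = defaultdict(int)  # page_id -> pending increment count
BATCH_SIZE = 300                   # flush threshold, per the question

def bump(page_id):
    """Record a view without touching MySQL."""
    counter_buffer[page_id] += 1
    if sum(counter_buffer.values()) >= BATCH_SIZE:
        flush()

def flush():
    """Send all buffered increments to the database in one statement."""
    if not counter_buffer:
        return None
    rows = ", ".join(f"({pid}, {n})" for pid, n in sorted(counter_buffer.items()))
    sql = (f"INSERT INTO views (page_id, hits) VALUES {rows} "
           "ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)")
    counter_buffer.clear()
    return sql  # in reality: cursor.execute(sql) on the MySQL connection

bump(1); bump(1); bump(2)
print(flush())
```

Three hits become a single multi-row INSERT instead of three round trips; a cron job or background worker would also call `flush()` on a timer so quiet pages still get written out.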
Then, in some cases, sharding is possible. If the site contains many elements of the same kind (for example, articles or chat rooms), they can be distributed across different servers. Roughly speaking, server A handles articles with even IDs and server B handles the odd ones. Naturally, the servers must be on separate machines.
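The even/odd routing above can be sketched as a tiny shard map (the hostnames are invented for illustration):

```python
# Hypothetical shard map: even article IDs go to server A, odd ones to B.
SHARDS = {
    0: "mysql-a.internal:3306",  # server A (assumed hostname)
    1: "mysql-b.internal:3306",  # server B (assumed hostname)
}

def shard_for(article_id: int) -> str:
    """Pick the database server responsible for a given article ID."""
    return SHARDS[article_id % 2]

print(shard_for(42))  # even ID -> server A
print(shard_for(7))   # odd ID  -> server B
```

With modulo routing, adding a third shard later forces most rows to move; a lookup table or consistent hashing avoids that, but for a fixed two-server split the modulo is the simplest thing that works.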
And upgrading the hardware should not be discounted either. The problem is solved comprehensively; there is no universal "do this and everything will be fine" recipe.
Give more specific examples and you may get more specific answers.
Do you know the word "optimization"? Obviously, the queries need to be optimized. The language is also indicated in the tags. Then read about caching.