Database locking in large loops


There is a script that takes an array of data (say, from a CSV file) and, in a loop, inserts rows from this array into the database as new records (I use Symfony/Doctrine). Besides these inserts, other third-party requests hit the same database. The script runs for 5–6 minutes and is launched from the site's admin panel. Why does the site become inaccessible from that moment until the loop finishes? Only this site's PHP scripts hang; neighboring sites keep working, and static files from this site are still served normally.

Is Doctrine locking the database, or is the problem something else? I've noticed the same behavior on some shared hostings: when you start archiving a large folder from the control panel, the site's pages stop loading for a while and nginx returns a gateway timeout, while everything works fine on the same host under a different account.

Please explain this effect.

Answer 1, authority 100%

It's MySQL — what do you expect from it? The table is locked, and until the transaction commits, nobody else gets access. It may be worth tuning the transaction isolation level (assuming your storage engine supports transactions at all, of course). And nginx has nothing to do with it: the gateway timeout appears simply because the PHP scripts hit a lock wait timeout.
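Besides tuning isolation, the usual mitigation is to not hold one transaction for the whole import. A minimal sketch (again Python/SQLite as a stand-in, not the asker's code): commit in batches, so locks are released between chunks and concurrent readers can interleave. With Doctrine this roughly corresponds to calling `flush()` and `clear()` on the EntityManager every N rows instead of once at the end.

```python
import sqlite3

BATCH = 500  # rows per transaction; smaller batches mean shorter lock hold times

def import_rows(conn, rows):
    """Insert rows, committing every BATCH rows so readers can get in between."""
    for i, row in enumerate(rows, start=1):
        conn.execute("INSERT INTO records (payload) VALUES (?)", (row,))
        if i % BATCH == 0:
            conn.commit()   # release locks; pending readers can run now
    conn.commit()           # commit the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
import_rows(conn, [f"row {i}" for i in range(1234)])
count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 1234
```

The trade-off is that a failure mid-import leaves earlier batches committed, so the import needs to be restartable or idempotent.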

Answer 2

Perhaps, while the data is being inserted, the table is locked for reading, so the PHP scripts cannot fetch anything from the database — which gives the impression that they do not respond at all…

It may also be that the SELECT queries from the site's PHP scripts have a lower priority than the "admin" script's writes.

PS: I've never really worked with nginx, so don't beat me up if this is completely off topic 🙂