This article covers an important technique: how to reduce memory consumption when executing large transactions. Memory consumption can become a serious problem when processing large amounts of data, and the question and answer below show where the memory goes during a bulk SQLite insert and how to rein it in.
I followed examples on the internet and used the following SQLite parameters to improve the speed of insert queries:

PRAGMA journal_mode = OFF;
PRAGMA synchronous = 0;
PRAGMA cache_size = 1000000;
PRAGMA locking_mode = EXCLUSIVE;
PRAGMA temp_store = MEMORY;
This is my code:
tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
pr, err := tx.Prepare("INSERT INTO Table (p1, p2, p3, p4, p5) VALUES (?, ?, ?, ?, ?)")
if err != nil {
	log.Fatal(err)
}
defer pr.Close()
for i := 0; i < maxI; i++ {
	for j := 0; j < maxJ; j++ {
		// ...
		_, err = pr.Exec(param1, param2, param3, param4, param5)
		if err != nil {
			log.Fatal(err)
		}
	}
}
err = tx.Commit()
if err != nil {
	log.Fatal(err)
}
Now the query runs fast but consumes too much RAM: the data is kept in RAM and only written to the database file at the end of execution.
I think it should be possible to flush the data to the database file periodically; this will slightly increase execution time but reduce memory consumption. On each change of "i" a transaction starts, and once all "j" iterations are done, the transaction is committed:
for i := 0; i < maxI; i++ {
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	pr, err := tx.Prepare("INSERT INTO Table (p1, p2, p3, p4, p5) VALUES (?, ?, ?, ?, ?)")
	if err != nil {
		log.Fatal(err)
	}
	defer pr.Close()
	for j := 0; j < maxJ; j++ {
		// ...
		_, err = pr.Exec(param1, param2, param3, param4, param5)
		if err != nil {
			log.Fatal(err)
		}
	}
	err = tx.Commit()
	if err != nil {
		log.Fatal(err)
	}
}
I expect the data to now be written to the file in chunks, with only one chunk's worth of data held in RAM at a time.
But during execution the data is not saved to the file, and RAM keeps filling up. In other words, there is no difference between running the first and the second version.
I thought that when the transaction is committed, the data would be saved to the file and the RAM freed. Please tell me what I am doing wrong.
The parameter of PRAGMA cache_size is the number of pages (generally 4096 bytes per page). With

PRAGMA cache_size = 1000000;

a maximum of about 4 GB of RAM will be allocated to the page cache.
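As a quick sanity check on that figure (assuming the common 4096-byte page size; the actual value for a given database is reported by PRAGMA page_size):

```go
package main

import "fmt"

func main() {
	const cachePages = 1000000 // the PRAGMA cache_size value from the question
	const pageSize = 4096      // bytes per page; typical default, check PRAGMA page_size
	total := cachePages * pageSize
	fmt.Printf("%d bytes = %.2f GB\n", total, float64(total)/1e9)
	// prints: 4096000000 bytes = 4.10 GB
}
```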
The page cache is allocated when needed, up to a maximum size, but is not released until the connection is closed.
Since you are inserting a large number of rows, they land on many different pages, so the cache keeps accumulating pages that have already been written to disk until it reaches its maximum size.
If you want to reduce memory consumption, just reduce the value to something like 1000 (equivalent to 4 MB), or simply remove it. The default cache is 2 MB, which is enough if you are just inserting rows.
Also note that the data does get written to disk when you call COMMIT (or even before the commit if the cache runs out of space). But SQLite keeps a copy of each page in the cache in case it is needed later, to avoid re-reading it from disk.