
Detailed introduction to the InnoDB storage engine in MySQL (code example)

不言 · 2019-02-21 11:33:10

This article gives a detailed, code-illustrated introduction to the InnoDB storage engine in MySQL. I hope readers who need it will find it helpful.

InnoDB belongs to the storage engine layer of MySQL and is integrated into the database as a plug-in. Since MySQL 5.5.8 it has been the default storage engine. InnoDB supports transactions and is designed primarily for OLTP workloads. Its main features include transaction support, row-level locking for high concurrency, foreign keys, automatic crash recovery, and a clustered-index table organization. (Related recommendation: MySQL Tutorial)

System Architecture

The InnoDB storage engine is composed of three parts: the memory pool, background threads, and disk storage.

[Figure: InnoDB system architecture]

Threads

InnoDB uses a multi-threaded model, with several different background threads responsible for different tasks.

Master Thread

The Master Thread is the core background thread. It is mainly responsible for asynchronously flushing data in the buffer pool to disk to keep the data consistent, including flushing dirty pages, merging the insert buffer, and reclaiming UNDO pages.

IO Thread

The InnoDB storage engine makes extensive use of asynchronous IO (Async IO) to handle IO requests. The IO threads are mainly responsible for handling the callbacks of these IO requests.

Purge Thread

After a transaction is committed, the undo log it used may no longer be needed, so the Purge Thread reclaims UNDO pages that have been allocated and used. InnoDB supports multiple purge threads, which speeds up UNDO page reclamation, makes better use of the CPU, and improves storage engine performance.

Page Cleaner Thread

The Page Cleaner Thread takes over the dirty page flushing that used to be done by the Master Thread. Its purpose is to offload work from the Master Thread, reduce blocking of user query threads, and further improve the performance of the InnoDB storage engine.

Memory

InnoDB Storage Engine Memory Structure

[Figure: InnoDB storage engine memory structure]

Buffer Pool

The InnoDB storage engine is disk-based and manages its records in pages. Because of the gulf between CPU speed and disk speed, disk-based database systems typically use a buffer pool in memory to improve the overall performance of the database.

The buffer pool uses the speed of memory to compensate for the impact of slow disks on database performance. When the database performs a read, the page is first read from disk into the buffer pool; the next time the same page is requested, it is served from the buffer pool, which acts as a cache.

For data modifications, the page is first changed in the buffer pool and then flushed to disk by a mechanism called Checkpoint.

The size of the buffer pool directly affects the overall performance of the database. In the InnoDB storage engine it is configured through the parameter innodb_buffer_pool_size, which can be inspected with the SHOW VARIABLES LIKE 'innodb_buffer_pool_size' command:

mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size' \G
*************************** 1. row ***************************
Variable_name: innodb_buffer_pool_size
        Value: 134217728
1 row in set (0.01 sec)
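The value above (134217728 bytes = 128 MB) is the default. Since MySQL 5.7 the buffer pool can also be resized online; a minimal sketch, assuming the server has enough free memory:

```sql
-- Resize the buffer pool online to 1 GB (MySQL 5.7+)
SET GLOBAL innodb_buffer_pool_size = 1073741824;

-- Watch the progress of the online resize
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
```

The resize happens in units of innodb_buffer_pool_chunk_size, so the value actually applied is rounded to a multiple of that chunk size.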

The types of pages cached in the buffer pool include index pages, undo pages, the insert buffer, the adaptive hash index, InnoDB lock information, and data dictionary information. Index pages and data pages make up the largest part of the buffer pool.

Redo log buffer

When page data in the buffer pool is newer than on disk, the new data must be flushed to disk. InnoDB uses the Write-Ahead Log strategy: when a transaction commits, the redo log buffer is written first; the redo log buffer is flushed to the redo log file at a certain frequency, and dirty pages are then flushed to disk according to the checkpoint mechanism.

The redo log buffer does not need to be very large; 8 MB is normally enough for most application scenarios. A flush of the redo log buffer to the redo log file is triggered in the following three situations:

  • The Master Thread flushes the redo log buffer to the redo log file once per second

  • When a transaction commits, the redo log buffer is flushed to the redo log file

  • When less than half of the redo log buffer remains free, the redo log buffer is flushed to the redo log file
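The commit-time flush behavior is controlled by innodb_flush_log_at_trx_commit, and the buffer size by innodb_log_buffer_size. A sketch, assuming a running MySQL server:

```sql
-- Size of the redo log buffer (default 16 MB in MySQL 8.0, 8 MB in older versions)
SHOW VARIABLES LIKE 'innodb_log_buffer_size';

-- 1 (default): write and sync the redo log on every commit (full durability)
-- 0: write and sync roughly once per second (a crash can lose ~1s of commits)
-- 2: write on commit, sync roughly once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```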


Extra memory pool

In the InnoDB storage engine, memory management is performed through a mechanism called a memory heap. When allocating memory for some internal data structures, InnoDB applies to this extra memory pool; when the memory in this area is not enough, it falls back to allocating from the buffer pool.

Lock

The locks supported by InnoDB are:

  • Shared lock and exclusive lock

  • Intention lock

  • Record lock

  • Gap lock

  • Auto-increment lock

Shared lock and exclusive lock

The InnoDB engine implements two standard row-level locks: shared (S) locks and exclusive (X) locks. A shared lock allows the transaction that holds it to read a row; an exclusive lock allows the transaction to write to a row.

If a transaction holds a shared lock on a row, other transactions can still acquire a shared lock on that row, but not an exclusive lock. Once a transaction acquires an exclusive lock on a row, no other transaction can acquire either a shared or an exclusive lock on it.
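These row locks can be taken explicitly. A sketch, assuming a hypothetical table `accounts` with primary key `id` (FOR SHARE is the MySQL 8.0 spelling; earlier versions use LOCK IN SHARE MODE):

```sql
-- Session 1: shared (S) lock - other sessions may also read-lock this row
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;

-- Session 2: exclusive (X) lock - blocks until session 1 commits,
-- because an S lock on a row is incompatible with an X lock on it
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
```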

Intention lock

In InnoDB, intention locks are table-level locks, divided into shared and exclusive:

  • Intention shared lock (IS): the transaction intends to acquire a shared lock on some row

  • Intention exclusive lock (IX): the transaction intends to acquire an exclusive lock on some row

Before acquiring a shared/exclusive row lock, a transaction must first acquire the corresponding intention shared/exclusive lock on the table. Intention locks do not block any other operations on the table; they merely tell other transactions that this transaction is about to take a shared or exclusive lock on some row.
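In MySQL 8.0 the locks a transaction holds, including table-level IS/IX locks, can be inspected through performance_schema. A sketch, assuming a hypothetical table `accounts` and an open transaction:

```sql
-- Session 1: take a row lock (this implicitly takes an IX table lock first)
START TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- Session 2: inspect the locks currently held (MySQL 8.0+)
SELECT object_name, lock_type, lock_mode, lock_status
FROM performance_schema.data_locks;
-- Expect a TABLE lock in IX mode plus a RECORD lock in X mode
```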

Record lock

A record lock acts on the index: it locks the index entry of a record rather than the record itself. If the table has no index, InnoDB creates a hidden clustered index for it, and record locks then lock that hidden clustered index.

Gap lock

Gap locks, like record locks, act on the index. The difference is that a record lock covers a single index record, while a gap lock locks a range between index records. The sole purpose of gap locks in InnoDB is to prevent other transactions from inserting into the gap, thereby preventing phantom reads.

Auto-increment lock

The auto-increment lock is a special table-level lock that applies only to inserts into tables with auto-increment columns. While one transaction is inserting, any other transaction inserting into the same table must wait for that insert statement to finish before it can acquire the lock and perform its own insert.
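How aggressively this lock is held is controlled by the innodb_autoinc_lock_mode parameter. A sketch, assuming a running server:

```sql
-- 0 = traditional: table-level AUTO-INC lock held to the end of each INSERT statement
-- 1 = consecutive: lock only for bulk inserts; simple inserts use a lightweight latch
-- 2 = interleaved (default in MySQL 8.0): highest concurrency, but auto-increment
--     values of concurrent inserts may interleave
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';
```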

Transaction

ACID

Transactions are the most important feature of a database for OLTP. When talking about transactions, we have to mention the four ACID properties:

  • Atomicity: a transaction is the smallest unit of work; it either fully succeeds or fully fails

  • Consistency: before a transaction starts and after it ends, the integrity of the database is not violated

  • Isolation: different transactions do not affect each other; the four isolation levels are RU (read uncommitted), RC (read committed), RR (repeatable read), and SERIALIZABLE

  • Durability: once a transaction is committed, its changes to the data are permanent and will not be lost even if the system fails

InnoDB achieves atomicity, durability, and consistency mainly through the Redo Log, the Undo Log, and the Force Log at Commit mechanism. The Redo Log is used to recover data after a crash; the Undo Log is used to undo the effects of a transaction and also serves multi-version control; Force Log at Commit ensures that the redo log has been persisted by the time a transaction commits. Isolation is guaranteed by locks and MVCC.

Isolation level

In MySQL, there are 4 isolation levels for transactions, which are:

  • Read Uncommitted

  • Read Committed

  • Repeatable Read

  • Serializable

Before understanding the four isolation levels, we need to understand three other terms:

  • Dirty read

Transaction A reads data that transaction B has not yet committed; transaction B then rolls back for some reason. The data transaction A read never truly existed, which can lead to anomalous results.

  • Non-repeatable read

Some data is queried several times within the lifetime of transaction A, while transaction B updates or deletes that data in between. The results of transaction A's successive queries may then differ.

  • Phantom read

The effect of a phantom read is similar to a non-repeatable read. The difference is that a non-repeatable read is caused by other transactions' update and delete operations, while a phantom read is caused by insert operations: during the lifetime of a transaction, rows newly inserted by another transaction show up in its queries.

Read Uncommitted

At this level, transaction A can see the uncommitted data of transaction B. If transaction B then rolls back, what transaction A has read is dirty data; this is the meaning of dirty read.

This isolation level is generally not recommended for use in MySQL InnoDB.

Read Committed

At the Read Committed level, none of the modifications a transaction makes are visible to other transactions until it commits. This solves the dirty read problem, but non-repeatable reads and phantom reads can still occur.

Repeatable Read

The Repeatable Read level guarantees that reading the same record several times within one transaction yields consistent results. In the InnoDB storage engine, this level solves both non-repeatable reads and phantom reads.

The InnoDB engine solves phantom reads with the Next-Key Lock, a combination of a record lock and a gap lock. When InnoDB scans index records, it first places a record lock on the index record, then gap locks on the gaps on both sides of it. Once the gaps are locked, other transactions cannot modify or insert records in them.
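The effect can be observed directly. A sketch under REPEATABLE READ, assuming a hypothetical table `t` with an indexed integer column `k` containing the values 10 and 20:

```sql
-- Session 1: lock the range; next-key locks cover the matching records
-- and the gaps around them
START TRANSACTION;
SELECT * FROM t WHERE k BETWEEN 10 AND 20 FOR UPDATE;

-- Session 2: blocks until session 1 commits, because the gap between 10
-- and 20 is covered by session 1's next-key locks - no phantom row can appear
INSERT INTO t (k) VALUES (15);
```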

Serializable

Serializable is the highest isolation level. It avoids phantom reads by forcing transactions to execute serially: every row read is locked. This can cause many lock-wait timeouts and lock contention problems, so concurrency drops sharply, and this level is not recommended in MySQL InnoDB.
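The isolation level can be set per session or globally. A sketch, assuming a running server:

```sql
-- Inspect the current level (MySQL 8.0 variable name; older versions use tx_isolation)
SHOW VARIABLES LIKE 'transaction_isolation';

-- Change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or just for the next transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```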

Starting a transaction

  • BEGIN, BEGIN WORK, START TRANSACTION

Executing BEGIN does not actually start a new transaction at the engine layer; it only sets a flag on the current thread to mark an explicitly opened transaction.

  • START TRANSACTION READ ONLY

Starts a read-only transaction. When MySQL Server receives any SQL that would change data, it rejects the modification outright and returns an error; the statement never reaches the engine layer.

  • START TRANSACTION READ WRITE

Allows a super user to start a read-write transaction even when the current thread's read-only flag is true.

  • START TRANSACTION WITH CONSISTENT SNAPSHOT

Opens the transaction at the engine layer and immediately creates a readview. This option only takes effect under the RR isolation level; at other levels a warning is reported.

Undo log

When data is modified, a corresponding undo log record is written. If the transaction fails or is rolled back, the recorded undo log can be used to roll the change back. The undo log is a logical log that records the before-image of the data; if another reader needs the row while it is being modified, the previous version can be reconstructed from the version information recorded in the row. In addition, writing the undo log itself generates redo log, because the undo log also needs to be protected by persistence.

Transaction commit

  1. Generate a transaction NO with the global transaction ID generator and add the current connection's transaction pointer (trx_t) to the global commit transaction linked list (trx_serial_list)

  2. Mark the undo. If the transaction used only one undo page and less than 3/4 of that page, mark the page TRX_UNDO_CACHED; if that does not hold and it is an insert undo, mark it TRX_UNDO_TO_FREE; otherwise the undo is an update undo and is marked TRX_UNDO_TO_PURGE. Undo pages marked TRX_UNDO_CACHED are recycled by the engine

  3. Put the update undo into the history list of the undo segment and increment the global rseg_history_len. At the same time, update TRX_UNDO_TRX_NO on the page; if the data was deleted, reset delete_mark

  4. Remove the update undo from update_undo_list; if it is marked TRX_UNDO_CACHED, add it to the update_undo_cached queue

  5. mtr_commit (the undo/redo log is written to the public buffer). At this point the transaction is committed at the file level; even if the server crashes now, the transaction is still guaranteed to be committed after restart. What remains is to update the in-memory state (trx_commit_in_memory)

  6. A read-only transaction only needs to remove its readview from the global readview linked list and then reset the information in the trx_t structure. A read-write transaction first sets its state to TRX_STATE_COMMITTED_IN_MEMORY, releases all row locks, removes trx_t from rw_trx_list and its readview from the global readview linked list. If there is an insert undo, it is removed here; if there is an update undo, the purge thread is woken up to collect the garbage. Finally, the information in trx_t is reset so it can be reused by the next transaction

Transaction rollback

  • If it is a read-only transaction, return directly

  • Decide whether to roll back the whole transaction or only part of it. For a partial rollback, record how many undo logs must be kept and roll back everything beyond that

  • Between the update undo and the insert undo, find the most recent undo record and start rolling back from it

  • For an update undo, clear the delete mark on records that were marked for deletion and roll updated data back to its oldest version; for an insert undo, directly delete the clustered index and secondary index entries

  • Stop when all undo records, or all records down to the specified undo, have been rolled back, and delete the undo log
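At the SQL level, the partial rollback described above corresponds to savepoints. A sketch, assuming a hypothetical table `t` with an integer column `v`:

```sql
START TRANSACTION;
INSERT INTO t (v) VALUES (1);

SAVEPOINT sp1;                -- remember how much undo to keep
INSERT INTO t (v) VALUES (2);

ROLLBACK TO SAVEPOINT sp1;    -- roll back only the undo written after sp1
COMMIT;                       -- the row with v = 1 is committed; v = 2 is not
```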

Index

The InnoDB engine uses a B+ tree as its index structure. In the primary key index, the leaf nodes store the complete row data; in secondary indexes, the leaf nodes store the corresponding primary key values.

[Figure: InnoDB primary (clustered) index; the leaf nodes contain complete data records]

The figure above is a schematic of the InnoDB primary index (which is also the data file): the leaf nodes contain the complete data records. This kind of index is called a clustered index. Because InnoDB's data files are themselves organized by primary key, InnoDB requires every table to have one. If none is specified explicitly, MySQL automatically selects a column that uniquely identifies each record as the primary key; if no such column exists, MySQL automatically generates an implicit field, 6 bytes long and of type long, as the primary key for the InnoDB table.

The data field of an InnoDB secondary index stores the primary key value of the corresponding record rather than its address; in other words, all of InnoDB's secondary indexes reference the primary key as their data. The clustered index makes lookups by primary key very efficient, but a secondary index lookup requires two index searches: first search the secondary index to obtain the primary key, then use the primary key to find the record in the primary index.
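This two-step lookup can be seen with EXPLAIN. A sketch, assuming a hypothetical table:

```sql
-- Hypothetical table: id is the clustered index, idx_name a secondary index
CREATE TABLE user (
  id   BIGINT PRIMARY KEY,
  name VARCHAR(50),
  age  INT,
  KEY idx_name (name)
);

-- Searching by name first finds the primary key in idx_name,
-- then fetches the full row from the clustered index
EXPLAIN SELECT age FROM user WHERE name = 'alice';

-- A covering query can be answered from the secondary index alone,
-- since its leaf nodes already hold name plus the primary key id
EXPLAIN SELECT id FROM user WHERE name = 'alice';
```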


Statement: this article is reproduced from segmentfault.com.