1. Overall optimization idea
First build a script to observe metrics such as the number of queries and the number of connections, determine whether the cause is environmental or lies in the SQL being executed, and then handle it according to the specific cause.
2. Build script observation status
mysqladmin -uroot -p ext \G
This command reports the current number of queries, connections and other status counters. Poll it periodically, redirect the results to a text file, and then process them into charts.
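A minimal polling sketch is shown below. It assumes credentials are configured (e.g. in ~/.my.cnf so the password prompt does not block the loop); the interval, log path and the counters grepped for are arbitrary choices:

```shell
#!/bin/sh
# Append selected status counters to a log every 60 seconds;
# the log can later be processed into charts.
LOG=/tmp/mysql_status.log
while true; do
    date >> "$LOG"
    mysqladmin ext \
        | grep -E 'Queries|Threads_connected|Slow_queries' >> "$LOG"
    sleep 60
done
```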
3. Solutions
1. If queries slow down at regular intervals, consider the cache avalanche problem.
To deal with it, adjust the cache expiration times so that large numbers of keys do not expire at roughly the same moment. Spread the expiration times out as much as possible, or concentrate them at an off-peak time such as the early morning.
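The "spread expirations out" idea is just a base TTL plus a random offset when writing to the cache. A sketch (the redis-cli call, key name and the base/jitter values are hypothetical examples, not from this article):

```shell
BASE_TTL=3600                # one hour base expiry
JITTER=$((RANDOM % 600))     # random 0-599s offset spreads expirations out
TTL=$((BASE_TTL + JITTER))
echo "caching with TTL=${TTL}s"
# e.g. redis-cli SETEX goods:123 "$TTL" "$payload"
```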
2. If queries are slow at irregular times, consider whether the design is unoptimized.
Processing method:
a: Enable profiling to record query operations and obtain statement execution details
show variables like '%profiling%';
set profiling=on;
select count(*) from user;
show profiles;
show profile for query 1;
+--------------------------------+----------+
| Status                         | Duration |
+--------------------------------+----------+
| starting                       | 0.000060 |
| Executing hook on transaction  | 0.000004 |
| starting                       | 0.000049 |
| checking permissions           | 0.000007 |
| Opening tables                 | 0.000192 |
| init                           | 0.000006 |
| System lock                    | 0.000009 |
| optimizing                     | 0.000005 |
| statistics                     | 0.000014 |
| preparing                      | 0.000017 |
| executing                      | 0.001111 |
| end                            | 0.000006 |
| query end                      | 0.000003 |
| waiting for handler commit     | 0.000015 |
| closing tables                 | 0.000011 |
| freeing items                  | 0.000085 |
| cleaning up                    | 0.000008 |
+--------------------------------+----------+
b: Use explain to view statement execution, index usage, scan range, etc.
mysql> explain select count(*) from goods \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: goods
partitions: NULL
type: index
possible_keys: NULL
key: gid
key_len: 5
ref: NULL
rows: 3
filtered: 100.00
Extra: Using index
c: Related optimization techniques
Table optimization and column type selection
Column selection principles:
1: Field type priority: integer > date/time > char/varchar > blob
Reason: integer and time types are fast to operate on and save space.
char/varchar must take character-set and collation conversion into account when sorting, which is slower.
blob columns cannot use in-memory temporary tables.
2: Use just enough; don't be over-generous (e.g., smallint, varchar(N)).
Reason: oversized fields waste memory and affect speed.
varchar(10) and varchar(300) can store the same content, but when querying, varchar(300) consumes more memory, because sort buffers and in-memory temporary tables allocate the full defined length.
3: Try to avoid NULL.
Reason: NULL is unfriendly to indexing; extra bytes are needed to mark it,
so the space occupied on disk is actually larger.
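A hypothetical table definition applying these principles (the column names and sizes are illustrative only, not from this article):

```sql
CREATE TABLE goods (
    gid     INT UNSIGNED NOT NULL AUTO_INCREMENT,   -- integer, increasing primary key
    name    VARCHAR(60)  NOT NULL DEFAULT '',       -- just enough, and NOT NULL
    price   DECIMAL(8,2) NOT NULL DEFAULT 0.00,
    created TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- time type, not varchar
    PRIMARY KEY (gid)
) ENGINE=InnoDB;
```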
Index optimization strategy
1. Index type
1.1 B-tree index (a sorted structure for fast lookup)
Note: MyISAM and InnoDB both use B-tree indexes by default
1.2 Hash index
MEMORY tables use a hash index by default; the theoretical lookup time complexity of a hash is O(1)
Question: since hash indexes are so efficient, why aren't they used everywhere?
a. The result of a hash function is effectively random. If row placement followed the hash, then taking the primary key id as an example, the rows corresponding to growing ids would be scattered randomly across the disk.
b. Range queries cannot be optimized.
c. Prefix indexes cannot be used. In a B-tree, if a column's value is "helloworld", an index on its left prefix still works (queries matching "hello..." can use it), but a hash index cannot, because hash("hello") and hash("helloworld") bear no relation to each other.
d. Sorting cannot be optimized either.
e. Every lookup must go back to the row: the index yields only the row's location, so a table lookup is always required.
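For reference, a hash index is what a MEMORY table gives by default, and it can also be requested explicitly. A sketch (the table and column names are hypothetical):

```sql
CREATE TABLE session_cache (
    sid  CHAR(32) NOT NULL,
    data VARCHAR(255),
    PRIMARY KEY (sid) USING HASH   -- equality lookups only: no ranges,
                                   -- no prefix matches, no sorted scans
) ENGINE=MEMORY;
```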
2. Common misunderstandings about B-tree indexes
2.1 Add indexes to columns commonly used in where conditions
Example: where cat_id=3 and price>100; // query goods in category 3 costing more than 100 yuan
Misconception: index both cat_id and price. In fact, only one of them can be used, because they are independent indexes.
2.2 After creating an index on multiple columns, the index will work no matter which column is queried
Correct answer: a multi-column index only takes effect when the query satisfies the leftmost-prefix requirement (the index is layered, left column first).
Taking index(a, b, c) as an example:

statement                       index effective?
where a=3                       yes
where a=3 and b=5               yes
where a=3 and b=5 and c=4       yes
where b=3 or where c=4          no
where a=3 and c=4               column a uses the index, c does not
where a=3 and b>10 and c=7      columns a and b use the index, c does not
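The index(a, b, c) example can be created and checked like this (t, a, b and c are placeholder names; EXPLAIN's key and key_len columns show how much of the index was actually used):

```sql
ALTER TABLE t ADD INDEX abc (a, b, c);

EXPLAIN SELECT * FROM t WHERE a=3 AND b=5 AND c=4;  -- all three columns used
EXPLAIN SELECT * FROM t WHERE a=3 AND c=4;          -- only the a part is used
EXPLAIN SELECT * FROM t WHERE b=3;                  -- no leftmost prefix: index not used
```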
High performance index strategy
1. For InnoDB, data rows are stored with the index nodes, so node splits are slow. For an InnoDB primary key, try to use an integer type, and an increasing one.
2. The length of the index directly affects the size of the index file, affects the speed of additions, deletions and modifications, and indirectly affects the query speed (taking up more memory).
3. For the values in a column, a left prefix can be intercepted to build the index.
a. The shorter the prefix, the higher the repetition, the lower the discrimination, and the worse the index works.
b. The longer the prefix, the better the discrimination, but the index file grows larger, which affects speed.
So try to find a balance point in length that maximizes performance. A common method: test the discrimination of different intercepted lengths.
Discrimination test:
select count(distinct left(word, 1)) / count(*) from table;
After the test is completed, you can create an index based on the optimal length obtained from the test
alter table table_name add index word(word(4));
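To find the balance point, the same discrimination query can be run for several lengths at once and compared (a sketch; word and table_name are the placeholder names from the example above):

```sql
-- Closer to 1.0 means better discrimination; stop lengthening
-- the prefix once the ratio stops improving noticeably.
SELECT
    COUNT(DISTINCT LEFT(word, 2)) / COUNT(*) AS len2,
    COUNT(DISTINCT LEFT(word, 4)) / COUNT(*) AS len4,
    COUNT(DISTINCT LEFT(word, 6)) / COUNT(*) AS len6
FROM table_name;
```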
Ideal index
1. Frequently queried
2. High discrimination
3. Small length
4. Covers common query fields as far as possible
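Point 4 refers to covering indexes: when the index itself contains every column a query needs, MySQL can answer from the index alone, shown as "Using index" in the Extra column of EXPLAIN (as in the goods example earlier). A sketch reusing the cat_id/price example (the index name is arbitrary):

```sql
ALTER TABLE goods ADD INDEX cat_price (cat_id, price);

-- Both selected columns come from the index, so no table lookup is needed:
EXPLAIN SELECT cat_id, price FROM goods WHERE cat_id = 3;  -- Extra: Using index
```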
The above is the detailed content of Share Mysql optimization ideas. For more information, please follow other related articles on the PHP Chinese website!