
Why is the filtered column value in MySQL explain extended always 100%?



1. Problem

Compared with a plain EXPLAIN, the output of MySQL's EXPLAIN EXTENDED has one extra column, filtered (MySQL 5.7 outputs it by default). It is the ratio, as a percentage, of the rows that survive the filter to the rows that have to be read (the value in the rows column). filtered is said to be a very useful value, because for a join the size of the result set produced by the preceding table directly determines the number of loop iterations. In my environment, however, filtered always came out as 100%, which makes it meaningless.
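To make that relationship concrete, the estimated number of rows a table feeds into the next join stage is roughly rows * filtered / 100. The following is just that arithmetic spelled out with the numbers that appear later in this article, not server output:
  -- rows * filtered / 100 = estimated rows surviving the WHERE filter.
  -- With rows = 996355, a filtered value of 0.1% would mean roughly 1000 rows,
  -- while filtered = 100% means the optimizer assumes all 996355 rows survive.
  select 996355 * 0.1 / 100   as rows_if_filtered_is_0_1,
         996355 * 100.0 / 100 as rows_if_filtered_is_100;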

Looking at the MySQL 5.6 code below, the filtered value is only meaningful for index and full table scans. That is understandable: in the other cases the rows value is already roughly the estimated size of the result set.
sql/opt_explain.cc
  bool Explain_join::explain_rows_and_filtered()
  {
    if (table->pos_in_table_list->schema_table)
      return false;

    double examined_rows;
    if (select && select->quick)
      examined_rows= rows2double(select->quick->records);
    else if (tab->type == JT_INDEX_SCAN || tab->type == JT_ALL)
    {
      if (tab->limit)
        examined_rows= rows2double(tab->limit);
      else
      {
        table->pos_in_table_list->fetch_number_of_rows();
        examined_rows= rows2double(table->file->stats.records);
      }
    }
    else
      examined_rows= tab->position->records_read;

    fmt->entry()->col_rows.set(static_cast<longlong>(examined_rows));

    /* Add "filtered" field */
    if (describe(DESCRIBE_EXTENDED))
    {
      float f= 0.0;
      if (examined_rows)
        f= 100.0 * tab->position->records_read / examined_rows;
      fmt->entry()->col_filtered.set(f);
    }
    return false;
  }

However, when I constructed a full table scan the filtered value was still wrong: it remained 100%, while I expected about 0.1%.
  mysql> desc tb2;
  +-------+--------------+------+-----+---------+-------+
  | Field | Type         | Null | Key | Default | Extra |
  +-------+--------------+------+-----+---------+-------+
  | id    | int(11)      | NO   | PRI | 0       |       |
  | c1    | int(11)      | YES  |     | NULL    |       |
  | c2    | varchar(100) | YES  |     | NULL    |       |
  +-------+--------------+------+-----+---------+-------+
  3 rows in set (0.00 sec)

  mysql> explain extended select * from tb2 where c1 < ...;
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+-------------+
  | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra       |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+-------------+
  |  1 | SIMPLE      | tb2   | ALL  | NULL          | NULL | NULL    | NULL | 996355 |   100.00 | Using where |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+-------------+
  1 row in set, 1 warning (10 min 29.96 sec)

  mysql> select count(*) from tb2 where c1 < ...;
  +----------+
  | count(*) |
  +----------+
  |     1001 |
  +----------+
  1 row in set (1.99 sec)

Tracing with gdb showed that the code takes the expected branch, but the values below are wrong.
  (gdb) p table->file->stats.records
  $18 = 996355
  (gdb) p tab->position->records_read
  $19 = 996355
The tab->position->records_read above should be the estimated number of rows returned; the correct value would be around 1001, not the full table size of 996355.
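Plugging the correct estimate into the formula from the code above shows what filtered should have been in this case; this is just the arithmetic, not actual EXPLAIN output:
  -- f = 100.0 * records_read / examined_rows
  -- with records_read around 1001 and examined_rows = 996355:
  select 100.0 * 1001 / 996355 as expected_filtered;   -- roughly 0.10 (%)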

2. Reasons

Why does this happen? It became clear once I looked at the statistics MySQL collects.
Like other mainstream databases, MySQL collects statistics automatically so that it can generate better execution plans; you can also collect them manually with ANALYZE TABLE. The collected statistics are stored in mysql.innodb_table_stats and mysql.innodb_index_stats.
Reference: http://dev.mysql.com/doc/refman/5.6/en/innodb-persistent-stats.html#innodb-persistent-stats-tables
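For reference, collecting the statistics by hand and checking the related settings looks like this (a minimal sketch using the tb2 table from this article):
  -- Recompute the persistent statistics for tb2 manually:
  analyze table tb2;

  -- The innodb_stats_* variables control how InnoDB samples and auto-recalculates them:
  show variables like 'innodb_stats%';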

But that is not the point. The point is that, looking at these two tables, you will find that MySQL collects very little statistical information.
  mysql> select * from mysql.innodb_table_stats where table_name='tb2';
  +---------------+------------+---------------------+--------+----------------------+--------------------------+
  | database_name | table_name | last_update         | n_rows | clustered_index_size | sum_of_other_index_sizes |
  +---------------+------------+---------------------+--------+----------------------+--------------------------+
  | test          | tb2        | 2015-12-02 06:26:54 | 996355 |                 3877 |                        0 |
  +---------------+------------+---------------------+--------+----------------------+--------------------------+
  1 row in set (0.00 sec)

  mysql> select * from mysql.innodb_index_stats where table_name='tb2';
  +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
  | database_name | table_name | index_name | last_update         | stat_name    | stat_value | sample_size | stat_description                  |
  +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
  | test          | tb2        | PRIMARY    | 2015-12-02 06:26:54 | n_diff_pfx01 |     996355 |          20 | id                                |
  | test          | tb2        | PRIMARY    | 2015-12-02 06:26:54 | n_leaf_pages |       3841 |        NULL | Number of leaf pages in the index |
  | test          | tb2        | PRIMARY    | 2015-12-02 06:26:54 | size         |       3877 |        NULL | Number of pages in the index      |
  +---------------+------------+------------+---------------------+--------------+------------+-------------+-----------------------------------+
  3 rows in set (0.00 sec)
There are really only two important pieces of information here: the total number of rows in the table (n_rows) and the number of distinct values in the indexed columns (n_diff_pfx01). In other words, MySQL keeps no value-distribution statistics for non-indexed columns. In the earlier query, because c1 is not indexed, MySQL has no way to estimate how selective the "c1 < ..." condition is, so filtered falls back to 100%.
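A hedged sketch of the workaround this implies: once c1 is covered by an index, InnoDB keeps a distinct-value estimate for it (an n_diff_pfx01 row for that index in mysql.innodb_index_stats) and the optimizer can probe the index for the range, so the rows/filtered estimates for the c1 predicate become realistic. The index name and the constant below are illustrative only:
  -- Hypothetical index name and constant, for illustration:
  alter table tb2 add index idx_tb2_c1 (c1);
  analyze table tb2;
  explain extended select * from tb2 where c1 < 1000;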

3. Implications

This led me to wonder what consequences MySQL's scant statistics bring.
It is not hard to imagine that, without suitable indexes, MySQL can easily produce badly performing execution plans, for example by getting the join order of a large table and a small table wrong, as in the following.
  mysql> explain extended select count(*) from tb1,tb2 where tb1.c1=tb2.c1 and tb2.c2='xx';
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+
  | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra                                              |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+
  |  1 | SIMPLE      | tb1   | ALL  | NULL          | NULL | NULL    | NULL |   1000 |   100.00 | NULL                                               |
  |  1 | SIMPLE      | tb2   | ALL  | NULL          | NULL | NULL    | NULL | 996355 |   100.00 | Using where; Using join buffer (Block Nested Loop) |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+
  2 rows in set, 1 warning (0.00 sec)
Although tb1 is the small table and tb2 the large one, once the condition tb2.c2='xx' is applied the result set from tb2 shrinks to 0 rows, so scanning tb2 first is the better-performing choice.
For the same query, PostgreSQL produces the better plan: it scans tb2 first and then loops over tb1.

  postgres=# explain select count(*) from tb1,tb2 where tb1.c1=tb2.c1 and tb2.c2='xx';
                              QUERY PLAN
  -------------------------------------------------------------------
   Aggregate  (cost=20865.50..20865.51 rows=1 width=0)
     ->  Nested Loop  (cost=0.00..20865.50 rows=1 width=0)
           Join Filter: (tb1.c1 = tb2.c1)
           ->  Seq Scan on tb2  (cost=0.00..20834.00 rows=1 width=4)
                 Filter: ((c2)::text = 'xx'::text)
           ->  Seq Scan on tb1  (cost=0.00..19.00 rows=1000 width=4)
  (6 rows)
Let's compare the actual execution times.

MySQL took 0.34 s:

  mysql> select count(*) from tb1,tb2 where tb1.c1=tb2.c1 and tb2.c2='xx';
  +----------+
  | count(*) |
  +----------+
  |        0 |
  +----------+
  1 row in set (0.34 sec)

PostgreSQL took 0.139 s:
  postgres=# select count(*) from tb1,tb2 where tb1.c1=tb2.c1 and tb2.c2='xx';
   count
  -------
       0
  (1 row)

  Time: 139.600 ms

The performance difference in the example above is not that large. If the tb2.c2='xx' condition is removed, however, the gap becomes huge.
MySQL took 1 minute 8 seconds:
  mysql> explain select count(*) from tb1,tb2 where tb1.c1=tb2.c1;
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------------------------------------------------+
  | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra                                              |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------------------------------------------------+
  |  1 | SIMPLE      | tb1   | ALL  | NULL          | NULL | NULL    | NULL |   1000 | NULL                                               |
  |  1 | SIMPLE      | tb2   | ALL  | NULL          | NULL | NULL    | NULL | 996355 | Using where; Using join buffer (Block Nested Loop) |
  +----+-------------+-------+------+---------------+------+---------+------+--------+----------------------------------------------------+
  2 rows in set (0.00 sec)

  mysql> select count(*) from tb1,tb2 where tb1.c1=tb2.c1;
  +----------+
  | count(*) |
  +----------+
  |     9949 |
  +----------+
  1 row in set (1 min 8.26 sec)

PostgreSQL took only 0.163 seconds:
  postgres=# explain select count(*) from tb1,tb2 where tb1.c1=tb2.c1;
                                 QUERY PLAN
  ------------------------------------------------------------------------
   Aggregate  (cost=23502.34..23502.35 rows=1 width=0)
     ->  Hash Join  (cost=31.50..23474.97 rows=10947 width=0)
           Hash Cond: (tb2.c1 = tb1.c1)
           ->  Seq Scan on tb2  (cost=0.00..18334.00 rows=1000000 width=4)
           ->  Hash  (cost=19.00..19.00 rows=1000 width=4)
                 ->  Seq Scan on tb1  (cost=0.00..19.00 rows=1000 width=4)
  (6 rows)

  Time: 0.690 ms
  postgres=# select count(*) from tb1,tb2 where tb1.c1=tb2.c1;
   count
  -------
   10068
  (1 row)

  Time: 163.868 ms

However, this performance gap has nothing to do with statistics. The reason is that PG supports nested loop join, merge join and hash join, whereas MySQL only supports nested loop join, and without an index a nested loop join is as slow as a turtle.
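The "Using join buffer (Block Nested Loop)" note in the MySQL plans above is the block nested-loop optimization of that single join algorithm; on MySQL 5.6 it shows up as a flag in optimizer_switch, and there is no hash join or merge join flag to enable. A quick way to check, independent of this article's data:
  -- Returns 1 if the block nested-loop optimization is enabled.
  select @@optimizer_switch like '%block_nested_loop=on%' as bnl_enabled;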

4. Summary

1. MySQL keeps very few statistics, only the table row count and the number of distinct values of indexed columns, so the optimizer often lacks an accurate picture of the data sizes involved and produces poorly performing execution plans.
2. The efficiency of MySQL's joins depends heavily on indexes (both of the previous times I helped someone tune MySQL SQL, the fix was an index). It is not that PG's joins do not benefit from indexes, but PG does not suffer nearly as badly as MySQL when an index is missing. In the example above MySQL needed more than a minute; after adding the index, the execution time of both MySQL and PG immediately dropped below 10 milliseconds. Developers should therefore think through the likely query patterns when designing tables and create exactly the indexes that are needed, no fewer and no more.
3. By contrast, PG collects value-distribution statistics for all columns; besides distinct-value counts it also keeps histograms, most common values and other information, which is what allows the PG optimizer to make sound decisions (see the pg_stats sketch below). Presumably for this reason, the PG community considers its optimizer smart enough that Oracle-style hints do not need to be added to the PG core (hints can be abused and make a system hard to maintain; if you really want them, you can install the pg_hint_plan extension yourself).
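For comparison, PostgreSQL exposes those per-column statistics through the pg_stats view. A quick way to see what the planner knows about tb2's columns (output omitted here):
  -- pg_stats shows, per column, the NULL fraction, estimated distinct values,
  -- most common values and histogram bounds gathered by ANALYZE.
  select attname, null_frac, n_distinct, most_common_vals, histogram_bounds
  from pg_stats
  where tablename = 'tb2';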
