
Repost: Why does MySQL higher LIMIT offset slow the query down

WBOY · Published: 2016-06-07 15:40:51 · Original · 1290 views


Source: http://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down


Scenario in short: a table with more than 16 million records (2 GB in size). The higher the LIMIT offset in a SELECT, the slower the query becomes when using ORDER BY *primary_key*.

So

<code>SELECT * FROM large ORDER BY `id`  LIMIT 0, 30 
</code>

takes far less than

<code>SELECT * FROM large ORDER BY `id` LIMIT 10000, 30 
</code>

Both queries return only 30 records either way, so the ORDER BY itself is not the overhead.
Yet fetching the latest 30 rows takes around 180 seconds. How can I optimize this simple query?

It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and return only LIMIT of them). The higher this value is, the longer the query runs.

The query cannot go right to OFFSET because, first, the records can be of different length, and, second, there can be gaps from deleted records. It needs to check and count each record on its way.
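The counting behaviour described above can be sketched in plain Python (an illustration of the principle, not MySQL internals — the function name and data are made up):

```python
# Illustration only: models the engine walking an ordered index and
# counting off OFFSET rows before it can return LIMIT of them.
def limit_offset(rows, offset, limit):
    """Return (page, rows_examined) for a `LIMIT offset, limit` scan."""
    page, examined = [], 0
    for row in rows:
        examined += 1
        if examined <= offset:
            continue                 # a skipped row still had to be visited
        page.append(row)
        if len(page) == limit:
            break
    return page, examined

ordered_ids = list(range(1, 100001))      # stand-in for the index on id
page, examined = limit_offset(ordered_ids, 10000, 30)
# 10030 rows examined to produce a 30-row page; with offset 0 it would be 30
```

The work grows linearly with the offset, which is exactly why `LIMIT 10000, 30` is so much slower than `LIMIT 0, 30`.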

Assuming that id is a PRIMARY KEY of a MyISAM table, you can speed it up by using this trick:

<code>SELECT  t.*
FROM    (
        SELECT  id
        FROM    mytable
        ORDER BY
                id
        LIMIT 10000, 30
        ) q
JOIN    mytable t
ON      t.id = q.id
</code>
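A runnable sketch of this deferred-join ("late row lookup") trick, using Python's `sqlite3` as a stand-in for MySQL (table name and data are illustrative):

```python
import sqlite3

# In-memory database standing in for the real multi-million-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 20001)])

# Naive version: the engine counts off 10000 full rows before returning 30.
naive = conn.execute(
    "SELECT * FROM mytable ORDER BY id LIMIT 10000, 30").fetchall()

# Late row lookup: the subquery walks only the narrow id index;
# the join then fetches just the 30 full rows that are actually needed.
deferred = conn.execute("""
    SELECT  t.*
    FROM    (SELECT id FROM mytable ORDER BY id LIMIT 10000, 30) q
    JOIN    mytable t ON t.id = q.id
""").fetchall()

# The outer join does not guarantee order, so compare after sorting.
assert sorted(deferred) == naive
```

The saving comes from skipping rows via the compact index rather than via full-width table rows; only the final 30 ids are looked up in the table.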

See this article:

  • MySQL ORDER BY / LIMIT performance: late row lookups

MySQL cannot go directly to the 10000th record (or the 80000th byte, as you're suggesting) because it cannot assume that the records are packed/ordered like that (or that the ids run continuously from 1 to 10000). Although it might be that way in actuality, MySQL cannot assume that there are no holes/gaps/deleted ids.

So, as bobs noted, MySQL has to fetch 10000 rows (or traverse the first 10000 entries of the index on id) before finding the 30 to return.

EDIT: To illustrate my point

Note that although

<code>SELECT * FROM large ORDER BY id LIMIT 10000, 30 
</code>

would be slow(er),

<code>SELECT * FROM large WHERE id >  10000 ORDER BY id LIMIT 30 
</code>

would be fast(er), and would return the same results provided that there are no missing ids (i.e. gaps).
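The equivalence of the two forms can be checked with a small `sqlite3` sketch (again a stand-in for MySQL, with made-up data and deliberately contiguous ids):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO large VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 20001)])

# Offset form: counts off 10000 rows before returning the page.
offset_page = conn.execute(
    "SELECT * FROM large ORDER BY id LIMIT 10000, 30").fetchall()

# Seek (keyset) form: jumps straight into the index at id = 10000.
seek_page = conn.execute(
    "SELECT * FROM large WHERE id > 10000 ORDER BY id LIMIT 30").fetchall()

# Identical pages -- but only because the ids here have no gaps.
assert offset_page == seek_page
```

With gaps (deleted ids), the seek form would return a shifted page, which is why the caveat about missing ids matters.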


Reference:

1. Why paging over long-tail data is complicated to implement (a very good article)

http://timyang.net/data/key-list-pagination/




