Take MySQL as an example. When a table holds a lot of data, pagination with a large offset becomes slow: the bigger the offset, the lower the query efficiency.
For example, if a post has 2 million comments and 10 are shown per page, that is 200,000 pages. How do you fetch page 200,000 efficiently? Is there a good way to handle this requirement?
Example: on NetEase Music, a single song can have more than 1.4 million comments, yet paging through them is very fast.
First, I believe NetEase Music is almost certainly reading this data from a NoSQL store rather than hitting the relational database directly.
Of course, if your table is not accessed very frequently, you can still read straight from the database. MySQL's InnoDB does have an annoying weakness, though: the larger the page offset you query, the worse the performance. So we generally use the id to speed the query up.
Old query style: `LIMIT 100000, 10`. Recommended style: `WHERE id > 100000 LIMIT 10`. This guarantees the index is actually used.
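A minimal sketch of the difference between the two styles, using an in-memory SQLite table as a stand-in for MySQL (the table and column names here are illustrative, not from the original post). Note that `WHERE id > N` only lines up exactly with `OFFSET N` when the ids are contiguous with no gaps, which the answer implicitly assumes:

```python
import sqlite3

# Small in-memory table standing in for the comments table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO comments (id, body) VALUES (?, ?)",
    [(i, f"comment {i}") for i in range(1, 1001)],
)

# Offset pagination: the engine must walk past every skipped row.
offset_page = conn.execute(
    "SELECT id FROM comments ORDER BY id LIMIT 10 OFFSET 500"
).fetchall()

# Keyset pagination: jump straight to the index entry after the last seen id.
keyset_page = conn.execute(
    "SELECT id FROM comments WHERE id > ? ORDER BY id LIMIT 10", (500,)
).fetchall()

print(offset_page == keyset_page)  # both pages contain ids 501..510
```

On a table with millions of rows, the keyset form stays fast at any depth because it is a single index seek, while the offset form degrades linearly with the offset.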
Of course, you can also split the data across multiple tables (sharding) to reduce the row count per table and thereby improve query efficiency.
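One common way to do that split is to route each post's comments to one of N physical tables by hashing the post id. A tiny sketch, with made-up table names and shard count:

```python
# Hypothetical shard-routing helper; the table naming scheme and the
# shard count are illustrative assumptions, not from the original post.
NUM_SHARDS = 16

def comments_table_for(post_id: int) -> str:
    """Pick the physical table that holds this post's comments."""
    return f"comments_{post_id % NUM_SHARDS}"

print(comments_table_for(12345))  # comments_9
```

Since all comments for one post land in the same table, pagination queries never need to touch more than one shard.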
If you use Redis, you would store all the ids in a Redis list, use Redis to fetch a slice of that list, and then take the ids from the slice to MySQL to query the actual rows.
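A sketch of that two-step lookup. No Redis server is available here, so a plain Python list stands in for the Redis LIST; the comments note the equivalent Redis commands (in Redis, `LRANGE`'s stop index is inclusive):

```python
# All comment ids for a post are assumed to have been pushed into a Redis
# list beforehand, e.g.  RPUSH comment_ids:<post_id> 1 2 3 ...
# A Python list stands in for that structure so the logic can run locally.
comment_ids = list(range(1, 2_000_001))  # ids of 2 million comments

def page_ids(page: int, per_page: int = 10):
    """Ids for one page, mirroring LRANGE comment_ids:<post_id> start stop."""
    start = (page - 1) * per_page
    stop = start + per_page - 1          # LRANGE's stop is inclusive
    return comment_ids[start : stop + 1]

ids = page_ids(200_000)  # the 200,000th page from the question
# Those ids then go to MySQL, e.g.
#   SELECT * FROM comments WHERE id IN (...) ORDER BY FIELD(id, ...);
print(ids)
```

The Redis slice is O(per_page) regardless of how deep the page is, and the follow-up MySQL query is a cheap primary-key `IN` lookup on exactly 10 ids.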
Of course, if you also need to sort the results by some other criterion, this approach breaks down.