About the Implementation Principle of MySQL LIMIT

In actual work, we often use the LIMIT clause in MySQL to control the size of data returned by a query, especially in scenarios such as paging and performance optimization. In this article, we will explore in depth the implementation principle of LIMIT in MySQL and how to effectively use this function in different scenarios.

What is LIMIT?

LIMIT is a clause in SQL query statements that limits the number of rows in the query result. In MySQL, the LIMIT clause can also be combined with an offset to support scenarios such as paging queries. The syntax of LIMIT is as follows:

SELECT column1, column2, ... FROM table_name LIMIT [offset,] row_count;

In the above syntax:

  • offset indicates the number of records to skip. It is optional and defaults to 0 if not specified.
  • row_count indicates the number of rows to return.

For example:

SELECT * FROM orders LIMIT 10;     # Return the first 10 rows of the table.
SELECT * FROM orders LIMIT 10, 10; # Return the next 10 rows, starting from row 11.

Implementation of LIMIT in MySQL

How does MySQL implement LIMIT internally? To understand this, we first need to look at how MySQL executes a query. The execution process mainly consists of three parts: the parser, the optimizer, and the executor:

  • Parser: Parses SQL statements into data structures, usually parse trees.
  • Optimizer: Optimizes queries, such as selecting the best execution plan.
  • Executor: Executes queries step by step according to the plan provided by the optimizer.

The processing of the LIMIT clause mainly occurs in two stages: the optimizer and the executor. The following describes these two stages separately.

1. Optimizer stage

During the optimizer phase, MySQL considers LIMIT and OFFSET to optimize the query plan. The query optimizer generates a more efficient execution plan by considering whether to use indexes, when to apply sorting, when to filter, where to apply the LIMIT clause, etc.

  • Utilization of indexes: When a query involves sorting (ORDER BY) and a suitable index is available, the optimizer will try to satisfy the LIMIT while reading the index, which avoids a full table scan plus sort and improves query speed (see the sketch after this list).
  • Subquery optimization: In some cases, when LIMIT is used together with a subquery or derived table, the optimizer may rewrite the query (for example, by merging the derived table into the outer query) so that rows beyond what the LIMIT requires are not processed unnecessarily.
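
To get a rough feel for this, you can inspect the execution plan (a sketch; the table and column reuse the orders example from later in this article). When the ORDER BY can be satisfied from an index, EXPLAIN typically no longer reports "Using filesort", and the scan can stop as soon as enough rows have been found:

-- Illustrative check: with an index on created_at, the plan can read rows in
-- index order and stop after 10 rows; without one, the whole result is sorted first.
EXPLAIN SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;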

2. Executor stage

During the executor phase, MySQL applies the LIMIT clause while reading data row by row. During this process, the executor controls the number of rows returned based on the values of LIMIT and offset.

  • Data truncation: For a LIMIT clause without an offset, the executor stops reading as soon as row_count rows have been produced, which can save a great deal of work.
  • Skipping records: If an offset is present, the executor still reads and discards the first offset rows, and only then starts counting toward row_count until the requirement is met (see the sketch after this list).
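
A rough sketch of the difference, using the orders table from the examples later in this article. Both statements return at most 10 rows, but the amount of work the executor does differs greatly:

-- No offset: the executor stops as soon as 10 rows have been produced.
SELECT * FROM orders LIMIT 10;
-- Large offset: the executor still reads and discards the first 100000 rows
-- before returning the next 10, so the cost grows with the offset.
SELECT * FROM orders LIMIT 100000, 10;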

Performance impact and optimization

When using LIMIT for paging queries, you need to pay attention to performance issues. Generally, a large OFFSET may result in performance degradation because MySQL has to scan and discard a large number of records. In this case, you can consider the following optimization strategies:

1. Index optimization

By designing indexes properly, you can avoid unnecessary full table scans. For example, if the table has an index that the sort (ORDER BY) can use, MySQL can locate the required rows through the index, reducing the amount of data that has to be scanned.

For example, the following paging query benefits from an index on the created_at field:

SELECT * FROM orders ORDER BY created_at DESC LIMIT 1000, 10;
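
A minimal sketch of creating such an index (the index name idx_orders_created_at is illustrative). Whether the optimizer actually uses it depends on its cost estimates, but with the index it can walk created_at in descending order instead of sorting the whole table:

-- Illustrative index for the paging query above
CREATE INDEX idx_orders_created_at ON orders (created_at);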

2. Covering Index

Use covering indexes to speed up queries. When the index itself contains the data columns to be queried, MySQL can directly obtain data from the index without accessing the table, which can improve efficiency.

As an example, you can create an idx_user_id index for the user_id field so that the value of user_id can be directly obtained from the index.


SELECT user_id FROM user_actions WHERE user_id = ? LIMIT 10;
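
A minimal sketch of that index, using the idx_user_id name mentioned above. Because the query selects only user_id, it can be answered entirely from the index, which EXPLAIN typically reports as "Using index":

-- Covering index for the query above: the base table rows never need to be read
CREATE INDEX idx_user_id ON user_actions (user_id);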

3. Subquery and join optimization

In some cases, you can first narrow the result down with a subquery that touches as little data as possible (for example, one that selects only the primary key), and then join back to the table to fetch the full rows, reducing the impact of a large offset.

-- Fetch only primary keys in the subquery, then join back for the full rows
-- (assumes orders has a primary key id and an index on created_at)
SELECT o.* FROM orders o
JOIN (SELECT id FROM orders ORDER BY created_at DESC LIMIT 1000, 10) AS temp ON o.id = temp.id
ORDER BY o.created_at DESC;

4. Other techniques

  • Deferred join: The core idea of a deferred join is to first obtain the primary keys (or candidate keys) of the target records through a simple, efficient query, and then use those keys in a further join to fetch the full rows. This avoids handling a large amount of unnecessary data in the early stage, reducing I/O and CPU overhead, and it avoids accessing the large table repeatedly during paging.
  • Bookmarking: The bookmarking method uses a unique, sequentially comparable field (usually a primary key or timestamp) to mark the starting point of the next page instead of using OFFSET. Even queries deep into the result set keep good performance, because each query is limited to the small range of data after the bookmark. For example, use the unique identifier of the last row of the previous page as the query condition for the next page (see the sketch below).
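
A minimal sketch of the bookmarking idea (often called keyset pagination), assuming orders has a primary key id and an index on (created_at, id); the column and index layout here are illustrative:

-- Page 1: the first 10 rows in reverse chronological order
SELECT id, created_at FROM orders ORDER BY created_at DESC, id DESC LIMIT 10;
-- Next page: instead of OFFSET, start from the last row of the previous page;
-- the two placeholders are that row's created_at and id values
SELECT id, created_at FROM orders
WHERE (created_at, id) < (?, ?)
ORDER BY created_at DESC, id DESC
LIMIT 10;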

Practical advice

  • Reasonable use of LIMIT: Try to avoid excessively large OFFSET values.
  • Make full use of indexes: In large-scale data paging scenarios, well-designed indexes are crucial.
  • Use caching: For repeated identical queries, a cache can avoid redundant computation and data access.
  • Batch processing: For potentially large data processing tasks, consider processing the data in batches and then displaying it page by page.

Summary

In this article, we analyzed how MySQL executes LIMIT. In practice, performance may degrade when the offset is large; in such cases, consider improving it through index design, covering indexes, subquery and join rewrites, or bookmark-based paging.