Original article: https://www.percona.com/blog/mysql-101-how-to-find-and-tune-a-slow-sql-query/

# MySQL 101: How to Find and Tune a Slow MySQL Query

This blog was originally published in June 2020 and was updated in April 2024.

One of the most common support tickets we get at Percona is the infamous "database is running slower" ticket. While this can be caused by a multitude of factors, it is more often than not caused by a bad or slow MySQL query. While everyone always hopes to recover through some quick config tuning, the real fix is to identify and fix the problem query. Sure, we can generally alleviate some pain by throwing more resources at the server, but that is almost always a short-term band-aid and not the proper fix.

## Fixing Slow Queries With Percona Monitoring and Management

So how do we find the queries causing problems and fix them? If you have Percona Monitoring and Management (PMM) installed, the identification process is swift. With Query Analytics (QAN) enabled in PMM, you can simply look at the table to identify the top query:

![image](https://github.com/user-attachments/assets/e8a20cda-fff8-46d4-a91a-385b8ec22997)

When you click on the query in the table, you should see some statistics about that query and also (in most cases) an example:

![image](https://github.com/user-attachments/assets/447ba024-b367-4e41-9eb9-8b2854c210ec)

![image](https://github.com/user-attachments/assets/f4ef1eef-2395-48c8-8cdd-3fa13d2a6a86)

## Fixing Slow Queries Without Percona Monitoring and Management

Now, let's assume that you don't have PMM installed yet (I'm sure that is being worked on as you read this). To find the problem queries, you'll need to do some of the manual collection and processing that PMM does for you. The following is the best process for collecting and aggregating the top queries:

1. Set long_query_time = 0 (in some cases, you may need to rate limit to avoid flooding the log)
2. Enable the slow log and collect for some time (slow_query_log = 1)
3. Stop collection and process the log with pt-query-digest
4. Begin reviewing the top queries in terms of resource usage
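
The collection steps above can be sketched as follows. This is a minimal sketch: the log file path is an assumption, and the rate limiting mentioned in step 1 is available in Percona Server via log_slow_rate_limit:

```
-- Steps 1 and 2: log every query and enable the slow log
SET GLOBAL long_query_time = 0;
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log'; -- assumed path
SET GLOBAL slow_query_log = 1;

-- ...let it collect during a representative workload window...

-- Step 3: stop collection before processing the file
SET GLOBAL slow_query_log = 0;
```

From the shell, `pt-query-digest /var/lib/mysql/slow.log > digest.txt` then produces the aggregated report used in step 4.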

Note: you can also use the Performance Schema to identify queries, but setting that up is outside the scope of this post. Here is a good reference on how to use P_S to find suboptimal queries.

When looking for bad queries, one of the top indicators is a large discrepancy between rows_examined and rows_sent. In a suboptimal query, the number of rows examined will be very large compared with the number of rows sent.
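
As a sketch of that P_S approach, the events_statements_summary_by_digest table tracks both counters per normalized query, so you can surface the worst offenders directly (assuming the performance_schema is enabled, which is the default in modern MySQL):

```
SELECT DIGEST_TEXT,
       COUNT_STAR,
       SUM_ROWS_EXAMINED,
       SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_EXAMINED DESC
LIMIT 10;
```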

Once you have identified your query, it is time to start the optimization process. The odds are that the queries at the top of your list (either in PMM or the digest report) lack indexes. Indexes allow the optimizer to target the rows you need rather than scanning everything and discarding non-matching values. Let's take the following sample query as an example:

```
SELECT *
FROM user
WHERE username = "admin1"
ORDER BY last_login DESC;
```

This looks like a straightforward query that should be pretty simple. However, it is showing up as a resource hog and is bogging down the server. Here is how it showed up in the pt-query-digest output:

```
# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===========
#    1 0xA873BB85EEF9B3B9  0.4011 98.7%     2 0.2005  0.40 SELECT user
# MISC 0xMISC              0.0053  1.3%     7 0.0008   0.0 <7 ITEMS>
```

```
# Query 1: 0.18 QPS, 0.04x concurrency, ID 0xA873BB85EEF9B3B9 at byte 3391
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.40
# Time range: 2018-08-30T21:38:38 to 2018-08-30T21:38:49
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         22       2
# Exec time     98   401ms    54us   401ms   201ms   401ms   284ms   201ms
# Lock time     21   305us       0   305us   152us   305us   215us   152us
# Rows sent      6       1       0       1    0.50       1    0.71    0.50
# Rows examine  99 624.94k       0 624.94k 312.47k 624.94k 441.90k 312.47k
# Rows affecte   0       0       0       0       0       0       0       0
# Bytes sent    37     449      33     416  224.50     416  270.82  224.50
# Query size    47     142      71      71      71      71       0      71
# String:
# Databases    plive_2017
# Hosts        localhost
# Last errno   0
# Users        root
# Query_time distribution
#   1us
#  10us  ################################################################
# 100us
#   1ms
#  10ms
# 100ms  ################################################################
#   1s
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `plive_2017` LIKE 'user'\G
#    SHOW CREATE TABLE `plive_2017`.`user`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT * FROM user WHERE username = "admin1" ORDER BY last_login DESC\G
```

We can see right away the high number of rows examined vs. the rows sent in the digest output above. So now that we've identified the problem query, let's start optimizing it. Step 1 in optimizing the query would be to run an EXPLAIN plan:

```
mysql> EXPLAIN SELECT * FROM user WHERE username = "admin1" ORDER BY last_login DESC\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: user
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 635310
     filtered: 10.00
        Extra: Using where; Using filesort
1 row in set, 1 warning (0.00 sec)
```

The EXPLAIN output is the first clue that this query is not properly indexed. The type: ALL indicates that the entire table is being scanned to find a single record. In many cases, this will lead to I/O pressure on the system if your dataset exceeds memory. The Using filesort indicates that once it goes through the entire table to find your rows, it has to then sort them (a common symptom of CPU spikes).
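
As an aside, the "1 warning" in that output is not an error: EXPLAIN leaves a Note containing the optimizer's rewritten form of the statement, which you can inspect immediately after running the EXPLAIN:

```
mysql> SHOW WARNINGS\G
```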
### Limiting Rows Examined

One thing that is critical to understand is that query tuning is an iterative process. You won't always get it right the first time, and data access patterns may change over time. In terms of optimization, the first thing we want to do is get this query using an index rather than a full scan. For this, we want to look at the WHERE clause: **WHERE username = "admin1"**.

With this column theoretically being selective, an index on username would be a good start. Let's add the index and re-run the query:

```
mysql> ALTER TABLE user ADD INDEX idx_name (username);
Query OK, 0 rows affected (6.94 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> EXPLAIN SELECT * FROM user WHERE username = "admin1" ORDER BY last_login DESC\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: user
   partitions: NULL
         type: ref
possible_keys: idx_name
          key: idx_name
      key_len: 131
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using index condition; Using filesort
1 row in set, 1 warning (0.01 sec)
```
### Optimizing Sorts

So we are halfway there! The type: ref indicates we are now using an index, and you can see the rows estimate dropped from 635k down to 1. This example isn't the best, as the query finds only one row, but the next thing we want to address is the filesort. For this, we'll need to change our username index into a composite index (multiple columns). The rule of thumb for a composite index is to work from the most selective column to the least selective, and then, if you need sorting, keep that as the last field. Given that premise, let's modify the index we just added to include the last_login field:

```
mysql> ALTER TABLE user DROP INDEX idx_name, ADD INDEX idx_name_login (username, last_login);
Query OK, 0 rows affected (7.88 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> EXPLAIN SELECT * FROM user WHERE username = "admin1" ORDER BY last_login DESC\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: user
   partitions: NULL
         type: ref
possible_keys: idx_name_login
          key: idx_name_login
      key_len: 131
          ref: const
         rows: 1
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)
```
165+
166+
And there we have it! Even if this query scanned more than one row, it would read them in sorted order, so the extra CPU needed for the sorting is eliminated. To show this, let’s do this same index on a non-unique column (I left email as non-unique for this demo):
167+
168+
```
mysql> select count(1) from user where email = "[email protected]";
+----------+
| count(1) |
+----------+
|       64 |
+----------+
1 row in set (0.23 sec)

mysql> ALTER TABLE user ADD INDEX idx_email (email, last_login);
Query OK, 0 rows affected (8.08 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> EXPLAIN SELECT * FROM user WHERE email = "[email protected]" ORDER BY last_login DESC\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: user
   partitions: NULL
         type: ref
possible_keys: idx_email
          key: idx_email
      key_len: 131
          ref: const
         rows: 64
     filtered: 100.00
        Extra: Using where
1 row in set, 1 warning (0.00 sec)
```

## In summary, the general process to tune a slow MySQL query is:

1. Identify the query (either manually or with a tool like PMM)
2. Check the EXPLAIN plan of the query
3. Review the table definition
4. Create indexes
   1. Start with columns in the WHERE clause
   2. For composite indexes, start with the most selective column and work toward the least selective column
   3. Ensure sorted columns are at the end of the composite index
5. Review the updated EXPLAIN plan and revise as needed
6. Continue to review the server to identify changes in access patterns that require new indexing
## Struggling with Slow Queries? Percona Monitoring and Management Can Help!

While query optimization can seem daunting, following a process makes it much easier to achieve. Naturally, optimizing complex queries isn't as trivial as the example above, but it is definitely possible when broken down. If you're struggling to find and tune slow queries in your database, Percona Monitoring and Management (PMM) can help! PMM is an open source database observability, monitoring, and management tool for use with MySQL, PostgreSQL, MongoDB, and the servers on which they run. PMM's Query Analytics tool helps you quickly locate costly and slow-running queries so you can address the bottlenecks impacting performance. And remember that Percona engineers are always available to help you when you get stuck! Happy optimizing!

## FAQs

### What defines a query as "slow" in MySQL?
In MySQL, a query is considered "slow" if it takes longer than the configured long_query_time threshold to execute. This threshold can vary depending on the application's requirements and the database environment. MySQL's slow query log helps identify these queries by logging any query that exceeds this execution-time limit, allowing for further analysis and optimization.
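
For example, you can inspect and adjust the threshold at runtime (SET GLOBAL affects only connections opened afterward, and fractional seconds are allowed):

```
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
SET GLOBAL long_query_time = 0.5; -- log anything slower than 500ms
```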

### How do I improve query speed in MySQL?
To improve query speed in MySQL, you can employ various techniques, such as indexing your tables properly, optimizing your queries with appropriate JOINs and WHERE clauses, denormalizing data when necessary, and ensuring your hardware resources (CPU, RAM, and disk I/O) are sufficient for your workload. Additionally, monitoring and tuning MySQL configuration parameters can often yield performance gains.
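
On MySQL 8.0.18 and later, EXPLAIN ANALYZE is also worth knowing: it actually executes the statement and reports real row counts and timings alongside the optimizer's estimates. A sketch against the user table from this post:

```
EXPLAIN ANALYZE SELECT * FROM user WHERE username = "admin1" ORDER BY last_login DESC;
```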

### What common issues cause queries to slow down in MySQL?
Several factors can contribute to slow query performance in MySQL, including a lack of proper indexing, inefficient query patterns (such as full table scans), resource contention (CPU, memory, or disk I/O bottlenecks), poorly designed schemas, locking issues, and suboptimal MySQL configuration settings. High concurrency and large volumes of data can also impact query performance if not properly managed.
