Why are bad row estimates slow in Postgres?


What makes bad row estimates a pain point in SQL query performance? I’m interested to know the internal reasons why.

Often a bad row estimate will still result in the correct plan being chosen, and the only visible difference between a good query and a bad query will be the estimated row counts.

Why is there frequently such a massive performance difference?

Is it because Postgres uses row estimates to allocate memory?


There are 3 answers

D-Shih (BEST ANSWER)

The PostgreSQL optimizer is a cost-based optimizer (CBO): it executes the plan with the smallest estimated cost among the candidate execution plans, and those costs are calculated from the table's statistics.

Why are bad row estimates slow in Postgres?

Because wrong statistics can make the optimizer choose a bad execution plan. Here is an example.

There are two tables: T1 has 20,000,000 rows and T2 has 1,000,000 rows.

CREATE TABLE T1 (
    ID INT NOT NULL PRIMARY KEY,
    val INT NOT NULL,
    col1 UUID NOT NULL,
    col2 UUID NOT NULL,
    col3 UUID NOT NULL,
    col4 UUID NOT NULL,
    col5 UUID NOT NULL,
    col6 UUID NOT NULL
);


INSERT INTO T1
SELECT i,
       RANDOM() * 1000000,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid
FROM generate_series(1,20000000) i;


CREATE TABLE T2 (
    ID INT NOT NULL PRIMARY KEY,
    val INT NOT NULL,
    col1 UUID NOT NULL,
    col2 UUID NOT NULL,
    col3 UUID NOT NULL,
    col4 UUID NOT NULL,
    col5 UUID NOT NULL,
    col6 UUID NOT NULL
);

INSERT INTO T2
SELECT i,
       RANDOM() * 1000000,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid,
       md5(random()::text || clock_timestamp()::text)::uuid
FROM generate_series(1,1000000) i;
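
Since the planner's costs come from table statistics, it can help to look at what PostgreSQL has collected for these two tables. A minimal sketch using the standard pg_class and pg_stats catalogs (the numbers only appear once autovacuum's analyze, or a manual ANALYZE, has run):

-- Table-level estimates the planner starts from (approximate row and page counts):
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname IN ('t1', 't2');

-- Column-level statistics used for selectivity estimates on the join/filter column:
SELECT tablename, attname, n_distinct, null_frac, correlation
FROM pg_stats
WHERE tablename IN ('t1', 't2')
  AND attname = 'id';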

When we join the two tables, we get an execution plan that uses a merge join:

EXPLAIN (ANALYZE,TIMING ON,BUFFERS ON)
SELECT t1.*
FROM T1 
INNER JOIN T2 ON t1.id = t2.id 
WHERE t1.id < 1000000;
"Gather  (cost=1016.37..30569.85 rows=53968 width=104) (actual time=0.278..837.297 rows=999999 loops=1)"
"  Workers Planned: 2"
"  Workers Launched: 2"
"  Buffers: shared hit=38273 read=21841"
"  ->  Merge Join  (cost=16.37..24173.05 rows=22487 width=104) (actual time=11.993..662.770 rows=333333 loops=3)"
"        Merge Cond: (t2.id = t1.id)"
"        Buffers: shared hit=38273 read=21841"
"        ->  Parallel Index Only Scan using t2_pkey on t2  (cost=0.42..20147.09 rows=416667 width=4) (actual time=0.041..69.947 rows=333333 loops=3)"
"              Heap Fetches: 0"
"              Buffers: shared hit=6 read=2732"
"        ->  Index Scan using t1_pkey on t1  (cost=0.44..48427.24 rows=1079360 width=104) (actual time=0.041..329.874 rows=999819 loops=3)"
"              Index Cond: (id < 1000000)"
"              Buffers: shared hit=38267 read=19109"
"Planning:"
"  Buffers: shared hit=4 read=8"
"Planning Time: 0.228 ms"
"Execution Time: 906.760 ms"

But then I update a lot of rows, adding 100000000 to id for every row where id is smaller than 1000000:

update T1
set id = id + 100000000
where id < 1000000;

If we run the same query again, it will still use a merge join, even though there is now a much better option. That happens because the statistics are stale: if the number of changed rows has not reached the autoanalyze threshold, PostgreSQL does not update the statistics automatically (the threshold is mainly governed by autovacuum_analyze_scale_factor, which defaults to 0.1, i.e. roughly 10% of the table's rows have to change).
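You can check the relevant settings and how many rows have changed since the last ANALYZE; a quick sketch using the n_mod_since_analyze counter from pg_stat_user_tables:

SHOW autovacuum_analyze_scale_factor;  -- default 0.1
SHOW autovacuum_analyze_threshold;     -- default 50

SELECT relname, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 't1';

With the stale statistics, the plan stays the same: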

EXPLAIN (ANALYZE,TIMING ON,BUFFERS ON)
SELECT t1.*
FROM T1 
INNER JOIN T2 ON t1.id = t2.id 
WHERE t1.id < 1000000;
"Gather  (cost=1016.37..30707.83 rows=53968 width=104) (actual time=51.403..55.517 rows=0 loops=1)"
"  Workers Planned: 2"
"  Workers Launched: 2"
"  Buffers: shared hit=8215"
"  ->  Merge Join  (cost=16.37..24311.03 rows=22487 width=104) (actual time=6.736..6.738 rows=0 loops=3)"
"        Merge Cond: (t2.id = t1.id)"
"        Buffers: shared hit=8215"
"        ->  Parallel Index Only Scan using t2_pkey on t2  (cost=0.42..20147.09 rows=416667 width=4) (actual time=0.024..0.024 rows=1 loops=3)"
"              Heap Fetches: 0"
"              Buffers: shared hit=8"
"        ->  Index Scan using t1_pkey on t1  (cost=0.44..50848.71 rows=1133330 width=104) (actual time=6.710..6.710 rows=0 loops=3)"
"              Index Cond: (id < 1000000)"
"              Buffers: shared hit=8207"
"Planning:"
"  Buffers: shared hit=2745"
"Planning Time: 3.938 ms"
"Execution Time: 55.550 ms"

When we run a manual ANALYZE T1; (which updates T1's statistics) and then run the same query again, the plan switches to a nested loop join, which is better than the merge join here.
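
Concretely (the same query as above, rerun after refreshing the statistics):

ANALYZE T1;

EXPLAIN (ANALYZE, TIMING ON, BUFFERS ON)
SELECT t1.*
FROM T1
INNER JOIN T2 ON t1.id = t2.id
WHERE t1.id < 1000000;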

"QUERY PLAN"
"Nested Loop  (cost=0.86..8.90 rows=1 width=104) (actual time=0.004..0.004 rows=0 loops=1)"
"  Buffers: shared hit=3"
"  ->  Index Scan using t1_pkey on t1  (cost=0.44..4.46 rows=1 width=104) (actual time=0.003..0.003 rows=0 loops=1)"
"        Index Cond: (id < 1000000)"
"        Buffers: shared hit=3"
"  ->  Index Only Scan using t2_pkey on t2  (cost=0.42..4.44 rows=1 width=4) (never executed)"
"        Index Cond: (id = t1.id)"
"        Heap Fetches: 0"
"Planning:"
"  Buffers: shared hit=20"
"Planning Time: 0.232 ms"
"Execution Time: 0.027 ms"

Small conclusion:

Accurate table statistics help the optimizer compute accurate costs and therefore choose the right execution plan.

Here is a query that shows when a table was last vacuumed and analyzed (manually or by autovacuum):

SELECT
  schemaname, relname,
  last_vacuum, last_autovacuum,
  vacuum_count, autovacuum_count,
  last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'tablename';
Eelke

The row count estimates are used to calculate the cost of the different plans. When those estimates are way off, so are the resulting plan costs, which means the optimizer can end up choosing the wrong plan. For instance, it may scan a whole table because it thinks it needs a significant part of it, while in reality only a few rows are needed, which could have been retrieved much faster using an index.
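
A minimal sketch of that effect, with a hypothetical skewed table (plan choices may vary by version and configuration): only five rows have val = 0, but before ANALYZE the planner has to fall back on default selectivity assumptions and may think far more rows match, favoring a sequential scan; after ANALYZE it knows the value is rare and can use the index.

-- Hypothetical demo table: a million rows, but only five with val = 0.
CREATE TABLE demo AS
SELECT i AS id,
       CASE WHEN i <= 5 THEN 0 ELSE i END AS val
FROM generate_series(1, 1000000) i;

CREATE INDEX ON demo (val);

-- Without column statistics (if autoanalyze has not run yet) the row estimate
-- for val = 0 is only a guess:
EXPLAIN SELECT * FROM demo WHERE val = 0;

ANALYZE demo;

-- With fresh statistics the estimate drops and an index scan becomes attractive:
EXPLAIN SELECT * FROM demo WHERE val = 0;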

Laurenz Albe

The row count estimates influence further decisions by the optimizer, so a bad estimate can lead to a bad plan.

In my experience, the problem typically happens during the decision for the correct join strategy:

  • When row counts are underestimated, PostgreSQL may pick a nested loop join instead of a hash or merge join, but the inner table ends up being scanned more often than PostgreSQL thought, leading to bad performance.

  • Conversely, if PostgreSQL overestimates the row count, it may pick a hash or merge join and scan both tables fully, which can be much slower than a couple of index scans on the inner table (comparing estimated and actual rows makes both cases easy to spot, as sketched below).
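
A quick sketch, assuming the t1/t2 tables from the accepted answer: run the query under EXPLAIN (ANALYZE) and compare the planner's rows= estimate on each join input with the actual rows= value; a large gap on the outer side of a nested loop, or on a hash/merge input, usually points to the misestimate that drove the bad join choice.

EXPLAIN (ANALYZE, BUFFERS)
SELECT t1.*
FROM t1
JOIN t2 ON t1.id = t2.id
WHERE t1.id < 1000000;
-- In each plan node, "rows=" inside (cost=...) is the estimate and
-- "rows=" inside (actual time=...) is what really came back.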