pgr_dijkstra returns "invalid memory alloc request size 1080000000" for large table


Consider PostgreSQL 14.11 (Ubuntu 14.11-0ubuntu0.22.04.1) and the following table

CREATE TABLE test_path (id INTEGER, source INTEGER, target INTEGER, cost NUMERIC);
INSERT INTO test_path (id, source, target, cost) VALUES (generate_series(1, 30000000),  trunc(random()*1500000), trunc(random()*1500000), random());

The query SELECT * FROM pgr_dijkstra('SELECT * FROM test_path', 1, 1234); returns

ERROR:  invalid memory alloc request size 1080000000
CONTEXT:  SQL function "pgr_dijkstra" statement 1

I tried several suggested solutions. In particular, it seems necessary to increase the values of some parameters (a sketch of how to raise them follows the list):

  • work_mem
  • wal_segment_size
  • max_connections
  • shared_buffers
  • maintenance_work_mem
  • effective_cache_size
  • max_wal_size
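For reference, this is a minimal sketch of how such parameters can be raised with ALTER SYSTEM; the values below are placeholders, not recommendations, and have to be adapted to the available RAM:

ALTER SYSTEM SET work_mem = '256MB';              -- placeholder value
ALTER SYSTEM SET maintenance_work_mem = '1GB';    -- placeholder value
ALTER SYSTEM SET shared_buffers = '4GB';          -- takes effect only after a server restart
ALTER SYSTEM SET effective_cache_size = '8GB';    -- placeholder value
ALTER SYSTEM SET max_wal_size = '4GB';            -- placeholder value
SELECT pg_reload_conf();                          -- reloads the parameters that do not need a restart

(wal_segment_size cannot be changed this way; it is fixed when the cluster is initialized.)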

Now the query SELECT name, setting FROM pg_settings WHERE name IN ('work_mem', 'wal_segment_size', 'max_connections', 'shared_buffers', 'maintenance_work_mem', 'effective_cache_size', 'max_wal_size'); produces the following output

 name                 | setting
----------------------+----------
 effective_cache_size | 3276800
 maintenance_work_mem | 30000000
 max_connections      | 100
 max_wal_size         | 30000000
 shared_buffers       | 3276800
 wal_segment_size     | 16777216
 work_mem             | 30000000
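Note that the setting column of pg_settings reports raw values in each parameter's internal unit (kB for work_mem, 8 kB blocks for shared_buffers, and so on), so the numbers above are hard to compare directly. A minimal sketch that also shows the unit and a human-readable value:

SELECT name, setting, unit, current_setting(name) AS pretty_value
FROM pg_settings
WHERE name IN ('work_mem', 'wal_segment_size', 'max_connections', 'shared_buffers', 'maintenance_work_mem', 'effective_cache_size', 'max_wal_size');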

Are these settings still too restrictive? Or is there a better way to use pgr_dijkstra on large tables?

Update

After many tests it seems that the problem is related to the size of the table loaded inside pgr_dijkstra: the query SELECT * FROM pgr_dijkstra('SELECT * FROM test_path LIMIT 22500000', 1, 1234); completes correctly, as does SELECT * FROM pgr_dijkstra('SELECT * FROM test_path LIMIT 26250000', 1, 1234); (although they obviously become slower as the LIMIT increases). For higher LIMIT values, however, the query fails.
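For what it's worth, this is a quick way to check how many edges (and how many bytes on disk) the inner query hands to pgr_dijkstra, since that seems to be what matters here; the exact threshold is only an estimate from the tests above:

SELECT count(*) AS edges, pg_size_pretty(pg_table_size('test_path')) AS on_disk_size
FROM test_path;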
