PostgreSQL 12.3: ERROR: out of memory for query result


I have an AWS RDS PostgreSQL 12.3 instance (t3.small, 2 vCPUs, 2 GB RAM). I have this table:

CREATE TABLE public.phones_infos
(
    phone_id integer NOT NULL DEFAULT nextval('phones_infos_phone_id_seq'::regclass),
    phone character varying(50) COLLATE pg_catalog."default" NOT NULL,
    company_id integer,
    phone_tested boolean DEFAULT false,
    imported_at timestamp with time zone NOT NULL,
    CONSTRAINT phones_infos_pkey PRIMARY KEY (phone_id),
    CONSTRAINT fk_phones_infos FOREIGN KEY (company_id)
        REFERENCES public.companies_infos (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)

There are exactly 137468 records in this table, as counted with:

SELECT count(1) FROM phones_infos;

The ERROR: out of memory for query result occurs with this simple query when I use pgAdmin 4.6:

SELECT * FROM phones_infos;

I have tables with 5M+ records and never had this problem before.

EXPLAIN SELECT * FROM phones_infos;
Seq Scan on phones_infos  (cost=0.00..2546.68 rows=137468 width=33)
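
For reference, that estimate (137468 rows at width 33) works out to only about 4.5 MB of tuple data, so the result set itself is small. The table's actual on-disk size can be checked with the standard pg_total_relation_size function:

SELECT pg_size_pretty(pg_total_relation_size('public.phones_infos'));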

I read this article to see if I could find answers, but unfortunately, as the RDS metrics show, there are no old idle connections that could be eating memory.

As the article suggests checking, shared_buffers seems to be correctly sized:

SHOW shared_buffers;
449920kB
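
That is about 440 MB, roughly a quarter of the instance's 2 GB and in line with the usual RDS default of about 25% of instance memory. The same value can be shown in a more readable unit (a small sketch reading pg_settings, where shared_buffers is stored in 8 kB blocks):

SELECT pg_size_pretty(setting::bigint * 8192) AS shared_buffers
FROM pg_settings
WHERE name = 'shared_buffers';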

What should I try?


1 Answer

Laurenz Albe (Best Answer)

The problem must be on the client side. A sequential scan does not require much memory in PostgreSQL.

pgAdmin will cache the complete result set in RAM, which probably explains the out-of-memory condition.

I see two options:

  • Limit the number of result rows in pgAdmin:

    SELECT * FROM phones_infos LIMIT 1000;
    
  • Use a different client, for example psql. There you can avoid the problem by setting

    \set FETCH_COUNT 1000
    

    so that the result set is fetched in batches.
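
Internally, psql implements FETCH_COUNT with a server-side cursor, and the same batched approach works from any client. A minimal sketch (the cursor name phones_cur is just an example):

BEGIN;
DECLARE phones_cur CURSOR FOR SELECT * FROM phones_infos;
FETCH FORWARD 1000 FROM phones_cur;  -- repeat until no rows come back
CLOSE phones_cur;
COMMIT;

This keeps only one batch of rows in client memory at a time instead of the whole result set.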