In my project I have a db table with ~8M records. Every record contains an integer width:

create table blocks(id bigserial, width int);
insert into blocks(width) values (1),(2),(6),(3),(3),(3),(2),(5),(4),(2);

The frontend displays all these blocks on a single, extremely long web page that loads dynamically on scroll. The widths above would fit into a view whose own width is 10 like this:

0: 1, 2, 6
1: 3, 3, 3
2: 2, 5
3: 4, 2

The sum of the widths of the elements on a single line must be less than or equal to the line width. So for now I keep, on the app's side, a list of line-offset tables (the zero-based index of the first block on each line) for the query; for example, for line width == 10:

0: 0
1: 3
2: 6
3: 8

I'd like to create a temporary table at the moment the cursor is opened, but I can't work out how to calculate and build it. It must take the line width as a parameter and probably needs a rolling sum, but I've had no luck writing the SQL statement.

UPD: I found exactly the same question on dba.stackexchange.com, but the answer there is incorrect; I left a comment under it.

1 Answer

Jeremy:

The most basic way to do this is with a recursive query:

with recursive row_selector(id, width_collect, row_number, counter) AS (
  -- anchor: start at the first block (assumes ids start at 1 and are contiguous)
  select id, width, 0, 0
    from blocks
    where id = 1
  UNION ALL
  -- step to the next id, carrying the running width of the current row,
  -- the current row number and a zero-based block counter
    select b.id,
    case when row_selector.width_collect + b.width > 10
      then b.width                            -- block doesn't fit: start a new row with it
      else
        row_selector.width_collect + b.width  -- block fits: extend the running width
      end,
    case when row_selector.width_collect + b.width > 10
      then row_selector.row_number + 1
      else
        row_selector.row_number
      end,
    row_selector.counter + 1
    from blocks b
    JOIN row_selector on row_selector.id + 1 = b.id
)
select row_number as "row", min(counter) as "offset"
FROM row_selector
group by row_number
order by 1
;
 row | offset
-----+--------
   0 |      0
   1 |      3
   2 |      6
   3 |      8

Basically, we walk through the blocks in id order and increment row_number each time the running width would exceed 10. We also carry a counter of the block position as we go. Grouping by row_number and taking the minimum counter then gives the offset of each row.
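Since the question asks for a temporary table built at the moment the cursor is opened, the query can be wrapped in CREATE TEMP TABLE ... AS. A sketch reusing the recursive query above; the table and column names are arbitrary, and the hard-coded 10 is where the application's line-width parameter would go:

create temp table line_offsets as
with recursive row_selector(id, width_collect, row_number, counter) as (
    select id, width, 0, 0 from blocks where id = 1
  union all
    select b.id,
           case when rs.width_collect + b.width > 10 then b.width
                else rs.width_collect + b.width end,
           case when rs.width_collect + b.width > 10 then rs.row_number + 1
                else rs.row_number end,
           rs.counter + 1
      from blocks b
      join row_selector rs on rs.id + 1 = b.id
)
select row_number, min(counter) as block_offset
  from row_selector
 group by row_number;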

EDIT: The approach outlined above works, but as the comment rightly states, it's very slow for any appreciable number of rows. A faster approach is to create a custom aggregate function.

-- Aggregate state: number of blocks seen so far, running width of the
-- current line, and the offsets collected so far.
create type row_offsets_type AS (counter int, width_cum int, offsets int[]);

-- Final function: discard the bookkeeping fields and return only the offsets.
create function row_offset_final_func(offsets row_offsets_type)
  RETURNS int[]
  IMMUTABLE
  AS $$
SELECT $1.offsets;
$$
LANGUAGE SQL;

-- Transition function: the same rolling-sum logic as in the recursive query,
-- applied once per block.
create function row_offsets_func(offsets row_offsets_type, width int, row_width int)
    RETURNS row_offsets_type
    IMMUTABLE
    AS $$
      select ROW(
        offsets.counter + 1,
        -- block doesn't fit: start a new line with it; otherwise extend the running width
        CASE WHEN offsets.width_cum + width > row_width THEN width ELSE offsets.width_cum + width END,
        -- when a new line starts, record the current block counter as its offset
        CASE WHEN offsets.width_cum + width > row_width THEN array_append(offsets.offsets, offsets.counter) ELSE offsets.offsets END
      )::row_offsets_type;
    $$
    LANGUAGE SQL;

create aggregate row_offsets(width int, row_width int)
(
  SFUNC     = row_offsets_func,      -- state transition function
  STYPE     = row_offsets_type,      -- state type
  FINALFUNC = row_offset_final_func,
  INITCOND  = '(0, 0, {0})'          -- no blocks seen, empty line, row 0 starts at offset 0
);

One pass over blocks in id order then produces the offsets array, which we unnest back into rows:

WITH offsets AS
  (SELECT row_offsets(width, 10 ORDER BY id) FROM blocks)
SELECT nr - 1 AS row_num,   -- WITH ORDINALITY counts from 1, so shift to 0-based row numbers
       offset_num
FROM offsets,
     unnest(row_offsets) WITH ORDINALITY AS a(offset_num, nr);
 row_num | offset_num
---------+------------
       0 |          0
       1 |          3
       2 |          6
       3 |          8

It's still not fast: in an untuned Docker container, the custom aggregate approach took 20s for 100k rows, and I gave up on the recursive query approach after it had run for a couple of minutes.
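If even that is too slow, the same rolling-sum bookkeeping can be done in a single PL/pgSQL loop, which avoids the per-row SQL-function and composite-type overhead of the aggregate and doesn't rely on the ids being contiguous. A rough sketch, not benchmarked against the numbers above; the function name build_line_offsets and its column names are made up:

create function build_line_offsets(row_width int)
returns table(line_no int, block_offset bigint)
language plpgsql
as $$
declare
  w   int;
  cum int    := 0;   -- running width of the current line
  idx bigint := 0;   -- zero-based position of the block being examined
begin
  line_no := 0;
  block_offset := 0;
  return next;                      -- line 0 always starts at block 0
  for w in select width from blocks order by id loop
    if cum + w > row_width then     -- block doesn't fit: start a new line here
      line_no := line_no + 1;
      block_offset := idx;
      cum := w;
      return next;
    else
      cum := cum + w;
    end if;
    idx := idx + 1;
  end loop;
  return;
end;
$$;

select * from build_line_offsets(10);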