I have a PL/pgSQL function like this (thanks to the guy who made this possible):
CREATE OR REPLACE FUNCTION public.split_string(text, text)
  RETURNS SETOF text
  LANGUAGE plpgsql
AS $function$
DECLARE
   pos int;
   delim_length int := length($2);
BEGIN
   WHILE $1 <> ''
   LOOP
      pos := strpos($1, $2);
      IF pos > 0 THEN
         RETURN NEXT substring($1 FROM 1 FOR pos - 1);
         $1 := substring($1 FROM pos + delim_length);
      ELSE
         RETURN NEXT $1;
         EXIT;
      END IF;
   END LOOP;
   RETURN;
END;
$function$
It splits a string on a delimiter, like this:
select * from split_string('3.584731 60.739211,3.590472 60.738030,3.592740 60.736220', ' ');
"3.584731"
"60.739211,3.590472"
"60.738030,3.592740"
"60.736220"
How can I save the results into a temp array or temp table, so I can take the results from there and split these points up again? Like:
"3.584731"
"60.739211"
"3.590472"
"60.738030"
"3.592740"
"60.736220"
and return the values as double precision. And all of this should be done in the function.
If you need the intermediary step:
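The code block for the intermediary step did not survive in this copy. A minimal sketch of what it presumably showed, splitting at ',' only (string_to_table() requires Postgres 14 or later):

```sql
-- Intermediary step: split at ',' only; each row still holds a space-separated pair
SELECT string_to_table('3.584731 60.739211,3.590472 60.738030,3.592740 60.736220', ',');
-- "3.584731 60.739211"
-- "3.590472 60.738030"
-- "3.592740 60.736220"
```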
This is faster than regexp_split_to_table(). Regular expressions are powerful but expensive; test with EXPLAIN ANALYZE.

I first split at ',', and next at ' ' - the reversed sequence of what you describe seems adequate.

If need be, you can wrap this into a PL/pgSQL function:
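The wrapper itself was lost here; a sketch under the assumption of Postgres 14+ (the function name split_points is my own invention; the parameters _delim1 and _delim2 are named as in the text):

```sql
CREATE OR REPLACE FUNCTION split_points(_str text
                                      , _delim1 text DEFAULT ','
                                      , _delim2 text DEFAULT ' ')
  RETURNS SETOF float8
  LANGUAGE plpgsql IMMUTABLE PARALLEL SAFE AS
$func$
BEGIN
   RETURN QUERY
   -- split at _delim1 first, then each piece at _delim2, then cast
   SELECT s2::float8
   FROM   string_to_table(_str, _delim1) s1
        , string_to_table(s1, _delim2) s2;  -- implicit LATERAL
END
$func$;
```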
Or as a plain SQL function:
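A plain SQL equivalent might look like this (again a sketch; split_points_sql is a name I made up):

```sql
CREATE OR REPLACE FUNCTION split_points_sql(_str text
                                          , _delim1 text DEFAULT ','
                                          , _delim2 text DEFAULT ' ')
  RETURNS SETOF float8
  LANGUAGE sql IMMUTABLE PARALLEL SAFE AS
$func$
SELECT s2::float8
FROM   string_to_table(_str, _delim1) s1
     , string_to_table(s1, _delim2) s2
$func$;
```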
Declare it IMMUTABLE (which it is) to allow performance optimizations and other uses. Declare it PARALLEL SAFE (which it is) so it does not disable parallel query plans.

Call it (using the provided defaults for _delim1 and _delim2), or with custom delimiters:
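For illustration, assuming a hypothetical wrapper split_points(_str text, _delim1 text DEFAULT ',', _delim2 text DEFAULT ' '), the calls could look like:

```sql
-- Using the provided defaults for _delim1 and _delim2:
SELECT * FROM split_points('3.584731 60.739211,3.590472 60.738030,3.592740 60.736220');

-- Or with custom delimiters, using named notation:
SELECT * FROM split_points('3.584731 60.739211;3.590472 60.738030'
                         , _delim1 => ';', _delim2 => ' ');
```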
Fastest

For top performance, combine translate() with string_to_table():
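The code for this variant is also missing here; a sketch of the idea (the alias and example values are mine): translate() folds every ',' into ' ', so a single split suffices.

```sql
-- Replace ',' with ' ' via translate(), then split once and cast
SELECT x::float8
FROM   string_to_table(
          translate('3.584731 60.739211,3.590472 60.738030,3.592740 60.736220', ',', ' ')
        , ' ') x;
```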
In Postgres 13 or older, use unnest(string_to_array()) instead of string_to_table().