Using Postgres 9.5, I have a table addresses.
CREATE TABLE addresses (
    id integer PRIMARY KEY,
    address text
);
In that table I have 7.5 million rows. Example:
1, "1600 Pennsylvania Avenue NW, Washington, DC, 20500"
I'm using this table for an autosuggest search in my application so I need to use this type of query:
SELECT * FROM addresses WHERE address LIKE '123 Main St%';
I created this index:
CREATE INDEX address_idx ON addresses (address);
The problem is that this query takes around 1 second, which is too slow for autosuggest.
Here's the query plan:
EXPLAIN SELECT * FROM addresses WHERE address LIKE '123 Main St%';
----
Seq Scan on addresses (cost=0.00..161309.76 rows=740 width=41)
Filter: (address ~~ '123 Main St%'::text)
I tried creating a few types of gin indexes but they either had no effect or made the query slower. I'm not sure if I was using them correctly though.
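For reference, a typical GIN attempt for this kind of query is a trigram index via the pg_trgm extension (a sketch of what such an attempt would look like; the index name is illustrative, and I'm not certain this matches what I ran):

```sql
-- Requires the pg_trgm extension (ships with Postgres contrib).
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN index: supports LIKE '%...%' as well as prefix patterns,
-- but is larger and slower to maintain than a plain btree index.
CREATE INDEX address_trgm_idx ON addresses USING gin (address gin_trgm_ops);
```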
Any ideas on how to create an index that's optimized for this kind of query?
EDIT
The best solution found so far is to use a text range scan:
SELECT *
FROM addresses
WHERE address >= '123 Main St' AND
address <= concat('123 Main St', 'z');
This is an elaboration on the between method and too long for a comment.

If you are using standard ASCII characters, you can use the tilde trick:

SELECT *
FROM addresses
WHERE address >= '123 Main St' AND
      address < '123 Main St~';

The tilde has a larger ASCII value than any other printable ASCII character.

I do note that Postgres should use the index for the LIKE query as well. My guess is that the problem has something to do with type compatibility. Perhaps if you converted the pattern to a varchar(), Postgres would use the index.
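Another possibility worth checking (an assumption about the setup: a database initialized with a non-C locale such as en_US.UTF-8): in a non-C locale, a plain btree index cannot be used for LIKE prefix matches, and building the index with the text_pattern_ops operator class is the standard fix. The index name below is illustrative.

```sql
-- In a non-"C" locale, a plain btree index is not usable for LIKE
-- prefix patterns; text_pattern_ops compares character-by-character
-- and makes the btree usable for left-anchored LIKE.
CREATE INDEX address_pattern_idx ON addresses (address text_pattern_ops);

-- With that index in place, the planner can use an index scan for:
SELECT * FROM addresses WHERE address LIKE '123 Main St%';
```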