I am using the free version of Redis in Docker.
My laptop has 32 GB of RAM. I have a very large dataset, about 11 GB worth of keys, a few million in total. Each key holds 14 fields (as JSON).
I have a single RediSearch index consisting of aliases for all 14 fields. My aggregation is quite simple: get the total of one specific numeric field (say, field c), grouped by three other string fields. Sometimes the aggregation has to cover the whole ~10 GB of keys, but I only ever filter on two specific fields (say, fields a and b). It is very slow, sometimes taking more than 100 seconds.
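In outline, the index and aggregation look something like this (the index name, key prefix, and all field names are placeholders, not my real schema):

```shell
# Hypothetical index over the JSON documents (names are placeholders).
FT.CREATE idx ON JSON PREFIX 1 doc: SCHEMA \
  $.a AS a TAG  $.b AS b TAG \
  $.g1 AS g1 TAG  $.g2 AS g2 TAG  $.g3 AS g3 TAG \
  $.c AS c NUMERIC

# Sum field c, grouped by three string fields, filtered on a and b.
FT.AGGREGATE idx "@a:{foo} @b:{bar}" \
  GROUPBY 3 @g1 @g2 @g3 \
  REDUCE SUM 1 @c AS total_c
```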
What would you advise to get this below 10 seconds for 10 GB of data? Is there anything like MySQL's composite indexes to make the RediSearch query faster?
Thank you so much
Eko
As the Redis documentation notes (https://redis.io/commands/ft.search/#complexity), FT.SEARCH's response time depends on the size of the result set. From my understanding, you are querying 10 GB of keys (I'd guess around 4-5 million) from Redis and then doing some calculations on the results. If that's correct, I believe it's normal for the request to take more than 100 seconds.
Possible solution: use multiple threads, with each thread querying a manageable slice of the keys. When all threads have finished, merge their partial results and do the final calculation.
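A minimal sketch of that fan-out/merge pattern. The per-thread query is stubbed out with an in-memory partition here; in practice each worker would run its own query against Redis for its slice of the data. Field names a/b/c and the grouping key are hypothetical, mirroring the question:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Stand-in records: in reality each worker would fetch its slice from Redis.
records = [
    {"a": "x", "b": "p", "group": ("g1",), "c": 10},
    {"a": "x", "b": "p", "group": ("g1",), "c": 5},
    {"a": "x", "b": "q", "group": ("g2",), "c": 7},
    {"a": "y", "b": "p", "group": ("g1",), "c": 3},
]

def aggregate_partition(partition, a_filter, b_filter):
    """Sum field c per group for one slice of the data (one worker's query)."""
    totals = Counter()
    for rec in partition:
        if rec["a"] == a_filter and rec["b"] == b_filter:
            totals[rec["group"]] += rec["c"]
    return totals

def parallel_totals(data, a_filter, b_filter, workers=4):
    # Split the data into roughly equal slices, one per worker.
    slices = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(aggregate_partition, slices,
                            [a_filter] * workers, [b_filter] * workers)
    # Merge the partial sums from all workers into the final result.
    merged = Counter()
    for part in partials:
        merged.update(part)
    return dict(merged)

print(parallel_totals(records, "x", "p"))  # {('g1',): 15}
```

Note that because the heavy work here is network and server I/O rather than Python computation, threads (rather than processes) are usually sufficient for the fan-out.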
Lastly, providing some code would help readers understand your question better.