workaround for "Too many keys specified; max 64 keys allowed" error in MariaDB / TokuDB?


I need to create more than 64 indexes on a table, but I'm running into the "Too many keys specified; max 64 keys allowed" error. Is there some workaround that would allow me to raise this limit (ideally to 1000 or more) for MariaDB / TokuDB? Or is there a reason why this limit is necessary?

(I've seen this question asked and answered for MySQL, with the answers being either to pass --with-max-indexes=256 to ./configure, or to modify MAX_KEY in one of the header files at compile time. Unfortunately, these answers don't appear to work for MariaDB.)

P.S. Since a typical response is "if you need this many indexes, you're doing something wrong", I'll explain why I want to do this, and I'd appreciate any advice on modifying the design if that's the best "workaround".

My data is stored in 2 tables:

table1 stores 5 columns: (unique key x_position int, column1 string, column2 float, column3 int, column4 tinyint). It can grow as large as 100 million rows.

table2 can conceptually be represented as 4 columns: (foreign key x_position int, sample_id string, value1 tinyint, value2 float). Since there can be up to 5000 unique sample_id values, and a different value1 exists for each (x_position, sample_id) pair, the maximum number of rows is 100 million x 5000 = 500 billion.
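For concreteness, the two tables look roughly like this (types approximated; I'm using SQLite here purely as a quick, runnable sanity check of the shape, not as the target engine):

```python
import sqlite3

# Schema sketch only -- the real target is MariaDB / TokuDB.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (
    x_position INTEGER PRIMARY KEY,   -- unique key
    column1    TEXT,
    column2    REAL,
    column3    INTEGER,
    column4    TINYINT
);
CREATE TABLE table2 (
    x_position INTEGER REFERENCES table1(x_position),  -- foreign key
    sample_id  TEXT,
    value1     TINYINT,
    value2     REAL
);
""")
names = sorted(r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"))
print(names)  # ['table1', 'table2']
```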

The queries I need to do are like:

select  column1, column2, column3... sample_id,
        group_concat(value1)
    from  table1, table2
    where  table1.x_position = table2.x_position
      and  column1 = string1
      and  column2 < float2
      and  ( (sample_id = string1  and  value1 = 1)
         or  (sample_id = string2  and  value1 = 0)
         or  (sample_id = string3  and  value1 = 1)
           )
      and  value2 < float1
    group by  sample_id;
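Here is a tiny runnable version of that query (again SQLite with toy data, joining table1 and table2 on x_position), just to show the intended shape:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table1 (x_position INTEGER PRIMARY KEY, column1 TEXT, column2 REAL);
CREATE TABLE table2 (x_position INTEGER, sample_id TEXT, value1 INTEGER, value2 REAL);
INSERT INTO table1 VALUES (1, 'string1', 0.5), (2, 'other', 0.5);
INSERT INTO table2 VALUES (1, 'string1', 1, 0.1), (1, 'string2', 0, 0.2),
                          (2, 'string1', 1, 0.1);
""")
rows = sorted(con.execute("""
    SELECT t2.sample_id, group_concat(t2.value1)
      FROM table1 t1
      JOIN table2 t2 ON t1.x_position = t2.x_position
     WHERE t1.column1 = 'string1'
       AND t1.column2 < 2.0
       AND ((t2.sample_id = 'string1' AND t2.value1 = 1)
         OR (t2.sample_id = 'string2' AND t2.value1 = 0))
       AND t2.value2 < 1.0
     GROUP BY t2.sample_id
""").fetchall())
print(rows)  # [('string1', '1'), ('string2', '0')]
```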

Instead, I was thinking it would be more efficient to pivot table2 so that its columns are: (foreign key x_position, sample_id1_value1 tinyint, sample_id1_value2 float, sample_id2_value1 tinyint, sample_id2_value2 float, ...)

and then create composite indexes on small subsets of the (sample_id1_value1, sample_id1_value2, ...) columns, based on domain-specific knowledge of which columns will be queried together. This table would have 100 million rows x 10,000 columns (split across several tables to avoid the column limit), which seems better than 500 billion rows. It would also eliminate the need for the "or" and "group by" clauses, allowing queries to be rewritten like:

select  column1, column2, column3...
        sample_id1_value1,
        sample_id1_value2
    from  table1, table2
    where  table1.x_position = table2.x_position
      and  column1 = string1
      and  column2 < float2
      and  sample_id1_value1 = 1
      and  sample_id2_value1 = 0
      and  sample_id3_value1 = 1
      and  sample_id1_value2 < float1
      and  sample_id2_value2 < float1
      and  sample_id3_value2 < float1;
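Generating the pivoted tables by hand obviously isn't practical, so I'd script the DDL, splitting the 2 x 5000 value columns across several tables to stay under the per-table column limit. A sketch (table/column names hypothetical, and the 1000-column cap is just an assumed budget, not an exact engine limit):

```python
def pivot_ddl(n_samples, max_cols_per_table=1000):
    """Emit CREATE TABLE statements for the pivoted design, splitting the
    2 * n_samples value columns across tables so each stays under the cap."""
    cols_per_sample = 2  # value1 (tinyint) + value2 (float)
    per_table = (max_cols_per_table - 1) // cols_per_sample  # reserve x_position
    stmts = []
    for part, start in enumerate(range(1, n_samples + 1, per_table), 1):
        cols = ["x_position INT"]
        for i in range(start, min(start + per_table, n_samples + 1)):
            cols.append(f"sample_id{i}_value1 TINYINT")
            cols.append(f"sample_id{i}_value2 FLOAT")
        stmts.append(f"CREATE TABLE table2_part{part} (\n  "
                     + ",\n  ".join(cols) + "\n);")
    return stmts

ddl = pivot_ddl(5000)
print(len(ddl))  # 11 tables, each with at most 999 columns
```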

Unfortunately the "Too many keys" error is getting in the way of this.
