I have a SQL Compact 4.0 database with a table that has about 20 columns and about 1,000,000 rows.
The problem is performance: because users may filter the results on any of these 20 columns, a query can take a long time to execute (sometimes about a minute).
For testing purposes I created indexes on these columns, and although the performance became incredibly good, I want to know: is there any catch to doing so?
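Roughly what I did, with placeholder table and column names standing in for my real schema:

    -- Hypothetical schema: "Orders" and its columns stand in for my real table.
    -- One single-column index per column that users can filter on.
    CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId);
    CREATE INDEX IX_Orders_Status     ON Orders (Status);
    CREATE INDEX IX_Orders_CreatedAt  ON Orders (CreatedAt);
    -- ...and so on for the remaining filter columns.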
Thanks.
The catch is that inserts (and other table modifications) will take more time, and the indexes will increase the database file size.
If your application mostly queries a huge, rarely modified database, you can live with this trade-off and essentially get the performance improvement for free.
If inserts or other index-maintaining modifications occur frequently, you will have to balance the number of indexes so that the most important tasks perform optimally, while less important tasks run somewhat slower.
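As a sketch (placeholder names again), you could keep indexes only on the columns users filter by most often and drop the rest, so that each modification has fewer indexes to maintain:

    -- Keep indexes for the most frequently used filters...
    CREATE INDEX IX_Orders_Status    ON Orders (Status);
    CREATE INDEX IX_Orders_CreatedAt ON Orders (CreatedAt);

    -- ...and drop indexes on rarely used filter columns; every
    -- INSERT/UPDATE/DELETE then has fewer indexes to update.
    DROP INDEX Orders.IX_Orders_Comment;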
What you cannot have without a catch is both super-fast selects and super-fast inserts.