I'm not sure whether this is the place for this question, but here goes:
I have a read-only database containing many tables, which are accessed and searched by a C# desktop application.
I am looking at indexes, and most of the tutorials and information about them focus on the tradeoff between SELECT performance and INSERT/UPDATE performance when introducing indexes.
My question is: with a read-only database, what would be the downside of putting indexes on every column and every combination of columns? (Assume I also don't care much about the size of the database.)
Or, to put it another way: can you "over-index" a read-only database?
Let's think about what happens when you insert or update a row in an indexed table (assuming standard B-tree indexes). The entry is added to the table itself, and an entry is also made in each of the table's indexes. That's what creates the time/space overhead.
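You can see the space side of that overhead directly. Here's a minimal sketch using Python's built-in `sqlite3` module (the table and index names are invented for the demo): each index is its own B-tree storing its own copy of the keys, so the database grows when indexes are added, even though the table data hasn't changed.

```python
import sqlite3

# Hypothetical demo table; names are made up for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER, name TEXT, price REAL)")
cur.executemany(
    "INSERT INTO items VALUES (?, ?, ?)",
    [(i, f"item-{i}", i * 1.5) for i in range(10_000)],
)
conn.commit()

# Number of database pages before any indexes exist.
pages_before = cur.execute("PRAGMA page_count").fetchone()[0]

# Each index is a separate B-tree holding its own sorted copy of the keys.
cur.execute("CREATE INDEX idx_name ON items(name)")
cur.execute("CREATE INDEX idx_price ON items(price)")
conn.commit()

pages_after = cur.execute("PRAGMA page_count").fetchone()[0]
print(pages_before, pages_after)  # page count grows once the indexes exist
```

On a read-only database you pay that space cost once, up front, rather than on every write.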
To answer your question directly: no. Outside the initial time/space cost of building the indexes, there's no major downside to putting indexes on every column of every table. Remember that when executing a query, the optimiser will typically use only one index per table (though some engines can combine several, e.g. via index intersection or merge). By having lots of indexes and composite indexes available, you're giving the optimiser the widest choice when it decides which index to use.
With that said, it's messy to generate arbitrary indexes without putting any thought into them. If I were you, I'd look at which queries you need to run faster and create indexes accordingly.
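A quick way to drive that process is to ask the database for its query plan before and after adding an index. A sketch with SQLite's `EXPLAIN QUERY PLAN` (table, column, and index names are again invented; other engines have equivalents such as `EXPLAIN` in MySQL/PostgreSQL or `SET SHOWPLAN` in SQL Server):

```python
import sqlite3

# Hypothetical workload: a query we want to speed up.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
cur.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, f"cust-{i % 100}", i * 2.0) for i in range(1000)],
)
conn.commit()

query = "SELECT total FROM orders WHERE customer = ?"

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table
    # or searches it via an index; the detail text is the last column.
    rows = cur.execute("EXPLAIN QUERY PLAN " + sql, ("cust-1",)).fetchall()
    return " ".join(row[-1] for row in rows)

before = plan(query)  # expect a full table scan
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
after = plan(query)   # expect a search using the new index
print(before)
print(after)
```

Running the plans for your real queries like this tells you exactly which indexes earn their keep, instead of guessing.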