Let's say an Oracle database has 40 MB of cache memory. I am the only user of the database, and I want to understand how a query is executed so I can compute the cache hit ratio for that query.
Let's say we have this query:
SELECT column1, count(*)
FROM table1
GROUP BY column1
ORDER BY column1 desc
Assuming that table1
is smaller than 40 MB, the explain plan says:
TABLE ACCESS (FULL)
with a cost of 1330 (I/O?), then SORT (GROUP BY)
with a cost of 1340, and SELECT STATEMENT,
also with a cost of 1340.
I can't understand something: why do SORT
and SELECT STATEMENT
cost 1340 I/O each?
Since the cache is larger than the table, the table access loads the disk blocks into the cache; after that, the sort and select only need to read from the cache, so in my mind they should cost zero I/O.
Also how do I compute the cache hit ratio for that query?
The cost is the optimizer's estimate of the time for the operation, expressed as the equivalent number of single-block reads that would take that amount of time.
So a 100ms operation on a system for which one single block read takes 0.5ms would have a cost of 200.
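That conversion is just a ratio; a minimal sketch of it (using the example's 100 ms and 0.5 ms figures, which are illustrative, not measured):

```python
def plan_cost(operation_time_ms: float, single_block_read_ms: float) -> float:
    """Express an operation's estimated time as an equivalent
    number of single-block reads, as the optimizer's cost does."""
    return operation_time_ms / single_block_read_ms

print(plan_cost(100, 0.5))  # → 200.0
```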
The numbers you describe sound cumulative: the 1340 for the SELECT STATEMENT includes the 1340 for the GROUP BY, which itself includes the 1330 for the table access. So the GROUP BY on its own costs only 10.
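To make the decomposition concrete, here is a small sketch that turns the cumulative costs from the question's plan back into per-step costs (the 1330/1340/1340 values are the question's numbers):

```python
# (operation, cumulative cost), listed child-first as in the explain plan
plan = [
    ("TABLE ACCESS (FULL)", 1330),
    ("SORT (GROUP BY)", 1340),
    ("SELECT STATEMENT", 1340),
]

# Each step's own cost is its cumulative cost minus its child's.
previous = 0
for operation, cumulative in plan:
    print(f"{operation}: {cumulative - previous}")
    previous = cumulative
# TABLE ACCESS (FULL): 1330
# SORT (GROUP BY): 10
# SELECT STATEMENT: 0
```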
The cache hit ratio for the query depends on how much of the table is already in the SGA before the query runs: if none of it is, the buffer cache hit ratio (BCHR) would be 0%; if all of it is, the BCHR would be 100%.
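For reference, the classic instance-wide formula can be queried from V$SYSSTAT like this (a sketch; it gives a rough figure for the whole instance, not for one query — a per-query number would need before/after snapshots of your session's statistics in V$SESSTAT):

```sql
-- Instance-wide buffer cache hit ratio:
-- 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - (phy.value / (db.value + cons.value)) AS bchr
FROM   v$sysstat phy, v$sysstat db, v$sysstat cons
WHERE  phy.name  = 'physical reads'
AND    db.name   = 'db block gets'
AND    cons.name = 'consistent gets';
```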
Note that as a system tuning tool, the BCHR is widely deprecated as the correlation between high BCHR and efficient query plans is very weak. In fact, you can raise your BCHR by making query plans much less efficient.