Query rewrite (again)

One of our customers uses Oracle materialized views and query rewrite as the key component of their summarisation strategy. The front-end query tool is Business Objects (or rather its web version). The great thing about rewrite is that the BO universe designer only needed to include the base-level summaries (for stock, sales, etc.) and let the database work out how to get to the higher levels.
Looking at performance, the database is hardly stressed – many queries complete quickly, the odd difficult query takes a long time – but then that is not too surprising.
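To make that concrete, a base-level summary registered for rewrite might look something like the sketch below. All table and column names here are illustrative, not the customer's actual schema:

```sql
-- Hypothetical base-level summary of sales by store and day.
-- ENABLE QUERY REWRITE is what lets the optimizer substitute this
-- MV for queries written against the base SALES table.
CREATE MATERIALIZED VIEW sales_by_store_day
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT store_id,
       sale_date,
       SUM(sale_amount) AS total_sales
FROM   sales
GROUP  BY store_id, sale_date;
```

A report grouping sales by store and month can then be rewritten against this MV, since the optimizer can roll the daily rows up to months, which is why only the base-level summaries need to be exposed in the universe.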

The previous version of this system used traditional tables for the majority of summaries – these had to be hard-coded into the then OLAP tool. In those days there were about 120 aggregates. I wanted to reduce this to around 20 or 30 when we re-designed the system, but the client was scared of all this new technology and asked us to put 70 MVs in.
I still want to reduce this number, but I need evidence to prove that doing so is sensible. Capturing evidence of rewrite is somewhat tricky: the queries are presented as selects on the base tables, and only the explain plans show which tables are really accessed. Getting at those plans is the hard part, so I have tackled this another way. Each morning, at the end of the batch data load, I capture the logical and physical read statistics from V_$SEGSTAT, which I join to DBA_OBJECTS to obtain the summary table names. At the end of the day I do the same again and then find the difference in read numbers. Tables that have had at least one read are appended to a table_used history table, and I do something similar for the bitmap indexes. In about a month I’ll have enough data to review the existing aggregation tree.
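The snapshot step might be sketched roughly as below. The snapshot table and the owner filter are my invention; the join of V$SEGSTAT to DBA_OBJECTS on the data object id is the standard way to resolve segment statistics to object names:

```sql
-- Hypothetical snapshot of per-segment read statistics.
-- Run once after the batch load and again at end of day;
-- the difference between the two snapshots shows which
-- summary tables were actually read during the day.
INSERT INTO segstat_snapshot (snap_time, object_name, object_type,
                              statistic_name, value)
SELECT SYSDATE,
       o.object_name,
       o.object_type,
       s.statistic_name,
       s.value
FROM   v$segstat   s
JOIN   dba_objects o
       ON o.data_object_id = s.dataobj#
WHERE  s.statistic_name IN ('logical reads', 'physical reads')
AND    o.owner = 'DWH';   -- hypothetical summary-table schema
```

Diffing the morning and evening snapshots per object then gives the "at least one read" test for the table_used history, and keeping OBJECT_TYPE makes it easy to run the same comparison for the bitmap indexes.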