Winding down
In addition to moving my responsibilities to other managers, I have been tying up the odd technical loose end.
During the recent upgrade to 10gR2 for one of our DW customers, we noticed a frequent query for raw sales for a named customer over a rolling 12-week period. The problem was that there were no indexes on either of the raw sales tables (header and line). When the system was originally designed these tables were only used to build sales summary tables, so we expected to read every row in a partition in order to aggregate it, and if you read every row an index is superfluous; but since go-live a new class of users requires access to the raw data, hence our new need to index it. In our testing we reduced a query that table-scanned 9% of the raw data in the two tables (partitioning saved us from scanning 100%) to an index skip scan on the header followed by an index read of the detail table, and the query time dropped from 300 seconds to 10. Of course, we waited a week or so after the upgrade before putting the change in (it is easier to identify the cause of a problem if you keep the number of changes implemented in one day to a minimum).
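For illustration only, a sketch of the sort of indexes involved; all table, column, and index names here are invented, not the customer's. A skip scan can occur when the index's leading column is absent from the query's predicate, so an index led by a low-cardinality column can still serve a lookup on the second column:

```sql
-- Hypothetical names throughout. Locally partitioned indexes, matching
-- the partitioned fact tables, keep maintenance per-partition.
CREATE INDEX sales_hdr_ix
    ON sales_header (sales_week, customer_id)   -- predicate omits sales_week,
    LOCAL;                                      -- so the optimiser may skip-scan

CREATE INDEX sales_line_ix
    ON sales_line (header_id)                   -- supports the read of the
    LOCAL;                                      -- detail rows per header
```

With these in place, a query filtering only on `customer_id` over the 12-week window can skip-scan the header index and then probe the line index, rather than scanning 9% of both tables.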
Another customer moved their telephone call logging to a nice new Linux RAC database and found that they were no longer capturing calls with their high-speed data loader written in Pro*C. The data receipt tables existed and the grants matched the original system, so what was going on? Then the application developers told me that the code to load the two tables was identical except for a user id. I suggested a sloppy coding approach that could cause the problem, and sadly I was right. Suppose the module inserted into table T1, but T1 was actually a synonym for another table, and that synonym resolved differently depending on the user that invoked the module. The same code could then be used to load two tables, and very few would be any the wiser as to how it was done (especially since it was not in the documentation).
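A minimal sketch of the trick, with invented schema, table, and user names: each loading account gets a private synonym called T1 pointing at a different underlying table, so one shared Pro*C module resolves T1 differently per connected user.

```sql
-- As the application schema owner (names are hypothetical):
CREATE TABLE app.calls_feed_a (call_id NUMBER, logged_at DATE);
CREATE TABLE app.calls_feed_b (call_id NUMBER, logged_at DATE);

-- Connected as loader user LOADER_A:
CREATE SYNONYM t1 FOR app.calls_feed_a;

-- Connected as loader user LOADER_B:
CREATE SYNONYM t1 FOR app.calls_feed_b;

-- The single shared module simply does:
INSERT INTO t1 (call_id, logged_at) VALUES (:call_id, SYSDATE);
```

On a migration that copies the tables and grants but not the per-user private synonyms, the shared code suddenly has nothing for T1 to resolve to, which would produce exactly the symptoms described.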
My third loose end comes from a very old application (it's at least 12 years old) that, in small part, creates FoxPro DBF files from Oracle tables (yes, there is still a valid business reason for this). This has for a long while been a little buggy, with unexpected locks on the FoxPro files causing the odd error. One of my team was trawling through the source code looking for something not closing a file when he found the most horrendous code, written by a true non-believer in databases. There is a requirement that the FoxPro file be sorted in key column order. This was achieved by [note to Tom Kyte: please don't read this whilst drinking coffee]
- Selecting the required rows from an Oracle view and writing them out as a CSV file on a UNIX server
- Reading the newly written file into an array on the UNIX server
- Quicksorting the array on the key column
- Sending the array over the wire to an NT server with the FoxPro ODBC driver
- Writing out the DBF file to disk
- Repeat for each DBF file in the set (50 files)
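All of that machinery reimplements what the database will happily do itself. A minimal sketch of the sane alternative, assuming a hypothetical view and key column name:

```sql
-- View and column names are invented for illustration. Let Oracle do
-- the sorting: no intermediate CSV file, no in-memory array, and no
-- hand-rolled quicksort before the DBF file is written.
SELECT *
  FROM sales_extract_v
 ORDER BY key_column;
```

The writer on the NT side then just streams the already-ordered rows straight into the DBF file.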