Leaf Page Scans
Let's say you have an index on columns A and B. If an SQL query specifies WHERE B = 'some value' but includes no search criteria for column A, then it is highly likely that DB2 will asynchronously prefetch the leaf pages of the index and scan them for occurrences of B = 'some value'. If there are thousands of leaf pages in a 3 or 4 level index, this can be a very expensive operation in terms of CPU, and possibly I/O.
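For illustration only (the table and column names here are made up), the pattern looks like this:

CREATE INDEX IX_T_AB ON T (A, B)

SELECT *
  FROM T
 WHERE B = 'some value'

Because the query supplies no predicate on the leading column A, DB2 cannot use the B-tree to navigate directly to the rows where B = 'some value' and may instead read every leaf page of IX_T_AB.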
BPLITX (Bufferpool Logical Index Reads per Transaction)
To compute BPLITX, first find the number of transactions from the Database Snapshot ("dbsnap2.txt") by adding the number of Commit Statements Attempted to the number of Rollback Statements Attempted:
TXCNT = Commits Attempted + Rollbacks Attempted
Next, divide the number of "Buffer pool index logical reads" by the number of transactions to get BPLITX:
BPLITX = Buffer pool index logical reads / TXCNT
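If you are on DB2 9 and prefer SQL to reading the snapshot text file, a query along the lines of the following, against the SYSIBMADM.SNAPDB administrative view, should produce the same numbers. Treat it as a sketch and verify the column names on your release:

SELECT COMMIT_SQL_STMTS + ROLLBACK_SQL_STMTS AS TXCNT,
       POOL_INDEX_L_READS AS BP_INDEX_LOGICAL_READS,
       DOUBLE(POOL_INDEX_L_READS)
         / NULLIF(COMMIT_SQL_STMTS + ROLLBACK_SQL_STMTS, 0) AS BPLITX
  FROM SYSIBMADM.SNAPDB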
The DBA can reduce BPLITX by creating indexes that better match the WHERE clause predicates of SQL statements.
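For example, if the application frequently runs the WHERE B = 'some value' query shown earlier, an index that leads with B (again, illustrative names only) lets DB2 traverse the tree instead of scanning leaf pages:

CREATE INDEX IX_T_B ON T (B)

-- or, if column A is also commonly referenced, carry it as a trailing column:
CREATE INDEX IX_T_B_A ON T (B, A)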
Let's chat for a moment about a reasonable expected value for BPLITX.
We would like most SQL statements to navigate the DB2 B-tree index structure to precisely find the qualifying RIDs (row identifiers) for qualifying rows. This means DB2 will visit the root (top) page, then one, maybe two or three, intermediate pages, then the leaf page that points to the row's RID on the target data page. In sum, one SQL statement might access 3-4 index pages on average.
In addition to SELECT statements, UPDATE and DELETE statements can, and should, also be making efficient use of indexes. So, to find a reasonable expectation for the value of BPLITX, we sum Selects per Transaction (SELTX) and DML per Transaction (DMLTX), then multiply by the average index NLEVELS in the database (or just multiply by 4, since 4 is a common number of levels). To this value, I add 50% as a fudge factor to allow for some sloppy index access.
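Written out in the same style as the other formulas in this post:

Anticipated Max BPLITX = ((SELTX + DMLTX) x average index NLEVELS) x 1.5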
By way of example, then, let's assume your application does an average of 10 SELECTs per Transaction and 2 DML per Transaction. The expected maximum "ideal" value of BPLITX should not exceed:
Anticipated Max BPLITX = ((10 + 2) x 4) x 1.5 = 72
Thus, in this example, if your actual BPLITX was 60, that would be very good since 60 is less than 72. If your actual BPLITX was 384, this would indicate a very high likelihood that DB2 is scanning leaf pages to find RIDs for rows that belong in result sets.
Again, remember this:
Logical Reads = CPU Consumption
If you want to avoid unnecessary hardware upgrades and improve database performance, you need to reduce costs such as Bufferpool Logical Reads per Transaction (described in a previous blog post as the most important cost), Sorts per Transaction, and Bufferpool Logical Index Reads per Transaction.
Shameless Marketing Moment
You can earn a $100 USD Best Buy or Amazon.com Gift Certificate plus other valuable awards, like a cool T-Shirt and coffee mug, by becoming a DBI Certified Performance Hero. DBI will reward you for making performance improvements to your databases. It's free. We're spending over $15,000 on this campaign, which is intended to help companies avoid unnecessary hardware upgrades and lower electric bills. So, do the "green" thing for our environment and your company's profitability while putting some green in your wallet! Learn how easy it is at: www.performance-hero.com
Just for Fun
I had to replace my garbage disposal this past weekend. It was an enlightening experience. Read about it in my DBI blog.
Until next time,
Cheers,
Scott
Scott Hayes
President & CEO, DBI
IBM GOLD Consultant
www.database-performance.info
www.database-auditing.info