Index Usage – 3


Author: 2548d1d6a965 | Published 2015-11-16 17:17, read 500 times

    In my last note on index usage I introduced the idea of looking at v$segstat (or v$segment_statistics) and comparing the “logical reads” statistic with the “db block changes” statistic as an indicator of whether or not the index was used in execution plans. This week I’ll explain the idea and show you some results – with a little commentary – from a production system that was reported on the OTN database forum.
    The idea is fairly simple (and simplistic). If you update a typical index you will traverse three blocks (root, branch, leaf) to find the index entry that has to be updated, so if the only reason you use an index is to find out which index entry has to be updated then the number of “db block changes” for that index will (we hope) be roughly one-third of the number of “session logical I/Os” on the index.
    We can do some testing of this hypothesis with some simple SQL:

    create table t1 nologging as
    with generator as (
            select  --+ materialize
                    rownum id
            from dual
            connect by
                    level <= 1e4
    )
    select
            rownum                                  id,
            trunc(dbms_random.value(0,333333))      n1,
            rpad('x',100)                           padding
    from
            generator       v1,
            generator       v2
    where
            rownum <= 1e6
    ;

    begin
            dbms_stats.gather_table_stats(
                    ownname          => user,
                    tabname          => 'T1',
                    method_opt       => 'for all columns size 1'
            );
    end;
    /

    alter table t1 add constraint t1_pk primary key(id) using index nologging;
    create index t1_i1 on t1(n1) nologging;
    

    So I’ve got a table with a million rows, a primary key, and an index on a column of randomly generated data. Now all I need to do is run the following little script a few thousand times and check the segment stats. I’ve avoided using a pl/sql script because of all the special buffer-handling optimisations that could appear if I did:

    exec :b1 := trunc(dbms_random.value(1,1000001))
     
    update t1
            set n1 = trunc(dbms_random.value(0,333333))
            where   id = :b1;
     
    commit;
    

    There are various ways of checking the segment stats: you could simply take an AWR snapshot (or statspack snapshot at level 7) before and after the test – the results from the “Segments by …” sections of the report should tell you all you need to know – or you could run a simple piece of SQL like the following before and after the test and then do some arithmetic:

    select
            object_name, statistic_name, value 
    from
           v$segment_statistics
    where
           owner = {your user name here}
    and    object_name in ('T1','T1_PK','T1_I1')
    and    statistic_name in (
                  'db block changes',
                  'logical reads'
    )
    and     value != 0
    order by
            object_name,
            statistic_name
    ;
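    Whichever method you use, the arithmetic on the before/after figures is trivial. Here is a minimal sketch of the delta calculation – the snapshot values below are invented purely for illustration:

```python
# Hypothetical before/after captures of (object_name, statistic_name) -> value
# taken from v$segment_statistics around the test run.
before = {('T1_I1', 'logical reads'): 1_200, ('T1_I1', 'db block changes'): 400}
after  = {('T1_I1', 'logical reads'): 61_200, ('T1_I1', 'db block changes'): 21_000}

# Work done during the test = after - before for each statistic
delta = {key: after[key] - before.get(key, 0) for key in after}

reads   = delta[('T1_I1', 'logical reads')]
changes = delta[('T1_I1', 'db block changes')]
print(f'changes/reads = {changes / reads:.2f}')   # close to 1/3 here
```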
    

    I happen to have some snapshot code in a little procedure that does the job I need, so my testbed code looks like this:

    execute snap_my_stats.start_snap
    execute snap_segstat.start_snap
     
    set termout off
    set serveroutput off
     
    variable b1 number
     
    @start_10000    -- invoke my script 10,000 times
     
    spool test
     
    set serveroutput on
    set termout on
     
    execute snap_segstat.end_snap
    execute snap_my_stats.end_snap
     
    spool off
    

    The question is: what do we expect the results to look like, and what do they actually look like? Given that we have 10,000 updates going on we might expect something like the following:

    • T1_PK – index access by primary key, 10,000 * 3 logical I/Os
    • T1 – 10,000 logical I/Os as we find the rows then 10,000 db block changes
    • T1_I1 – index access to find entry to be deleted (10,000 * 3 logical I/Os), repeated to find leaf block for insertion of new entry (10,000 * 3 logical I/Os), with 10,000 * 2 db block changes for the delete/insert actions.
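    The predictions above can be tallied mechanically. A minimal sketch of the arithmetic (plain Python, not Oracle output – the dictionary labels are just names I’ve made up):

```python
updates   = 10_000    # number of single-row update statements
traversal = 3         # root + branch + leaf for a blevel = 2 index

# Predicted segment statistics for the 10,000-update test
predicted = {
    'T1_PK logical reads':    updates * traversal,       # 30,000
    'T1 logical reads':       updates,                   # 10,000
    'T1 db block changes':    updates,                   # 10,000
    # updating the indexed column = delete old entry + insert new entry:
    # two traversals and two block changes per update
    'T1_I1 logical reads':    2 * updates * traversal,   # 60,000
    'T1_I1 db block changes': 2 * updates,               # 20,000
}

for name, value in predicted.items():
    print(f'{name:25s} {value:10,d}')
```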
      Here are a few results from 12.1.0.2 – if I don’t include a commit in the update script:
    12.1.0.2 with no commit
    Segment stats
    =======================
    T1
    logical reads                               20,016
    db block changes                            19,952
     
    T1_PK
    logical reads                               30,016
    physical reads                                  19
    physical read requests                          19
     
    T1_I1
    logical reads                               60,000
    db block changes                            21,616
     
    Session Stats
    =============
    Name                                         Value
    ----                                         -----
    session logical reads                      110,919
    consistent gets                             30,051
    consistent gets examination                 30,037
    db block gets                               80,868
    db block changes                            81,989
    

    Some of the figures match the predictions very nicely – in particular the logical reads and db block changes on the T1_I1 index are amazing (so good I feel I have to promise that I didn’t fake them, or wait until after the test to make my prediction;)
    There are, however, some anomalies: why have I got 20,000 logical reads and db block changes on the table when I did only 10,000 updates? I was surprised by this, but it is something I’ve seen before: Oracle was locking each row before updating it, so generating two changes and two redo entries (op codes 11.4 and 11.5). In the past I’d noticed this as a side effect of setting audit_trail to DB, but it was happening here with audit_trail = none. (Something to add to my “todo” list – why is this happening, and when did it appear?)
    You’ll also notice that the session-level stats for logical reads nearly match the table and index levels (20K + 30K + 60K ≈ 110K) while the db block changes stats are out by a factor of 2. Don’t forget that for each change to a table or index we make a change to an undo block describing how to reverse that change, so the 40,000 data changes are matched by a further 40,000 undo block changes; and on top of this, every time we acquire the next undo block we change our transaction table entry in the undo segment header we’re using, and that accounts for most of the rest. The discrepancy in the number of logical reads is small because, while we keep getting and releasing the table and index blocks, we pin the undo block from the moment we acquire it to the moment it’s full, so we don’t record extra logical reads each time we modify it.
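    That factor-of-2 accounting can be sketched as simple arithmetic (the op codes are the ones discussed above; the undo segment header contribution is the small remainder):

```python
updates = 10_000

table_changes = 2 * updates    # lock row piece (11.4) + update row piece (11.5)
index_changes = 2 * updates    # delete leaf row (10.4) + insert leaf row (10.2)
data_changes  = table_changes + index_changes     # 40,000

# Each table/index change is matched by an undo-block change describing
# how to reverse it.
undo_changes = data_changes                       # another 40,000

accounted = data_changes + undo_changes
print(accounted)   # 80,000 of the 81,989 session-level db block changes;
                   # transaction table updates in the undo segment header
                   # account for most of the rest
```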
    Big observation
    Based on the figures above, we could probably say that, for an index with a blevel = 2 (height = 3), if the number of db block changes recorded is close to one-third of the logical reads recorded, then that index is a good candidate for review as it may be an index that is not used to access data, it may be an index that does nothing except use up resources to keep itself up to date.
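    As a sketch of how that rule of thumb might be applied programmatically (the function name, tolerance, and the second set of figures are my own inventions – the T1_I1 numbers come from the test above):

```python
def maintenance_only_candidate(logical_reads, db_block_changes,
                               blevel, tolerance=0.05):
    """Flag an index whose changes/reads ratio is close to 1/(blevel + 1),
    i.e. roughly one block change per index traversal - the signature of
    an index that is maintained but never used to access data."""
    if logical_reads == 0:
        return False
    cutoff = 1 / (blevel + 1)
    return abs(db_block_changes / logical_reads - cutoff) <= tolerance

# (logical reads, db block changes, blevel)
print(maintenance_only_candidate(60_000, 21_616, 2))   # True  - review this one
print(maintenance_only_candidate(30_016, 0, 2))        # False - used for queries
```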
    Big problem
    Take a look at the statistics when I included the commit in my test case:

    12.1.0.2 with commit
    Segment Stats
    ====================
    T1
    logical reads                               20,000
    
    T1_PK
    logical reads                               30,000
    
    T1_I1
    logical reads                                  512
    db block changes                               160
    
    Session Stats
    =============
    Name                                         Value
    ----                                         -----
    session logical reads                       80,625
    consistent gets                             30,106
    consistent gets examination                 30,039
    db block gets                               50,519
    db block changes                            60,489
    

    Apparently my session has made 60,000 changes – but none of them applied to the table or index! In fact I haven’t even accessed the T1_I1 index! The segment statistics have to be wrong. Moreover, if I commit every update I ought to change an undo segment header block at the start and end of every transaction, which means I should see at least 20,000 more db block changes in the session (not 20,000 fewer); and since I’m not pinning undo blocks across long transactions I should see about 10,000 extra logical reads as I acquire an undo block at the start of each short transaction. The session statistics have to be wrong as well!
    A quick check on the redo stream shows exactly the change vectors I expect to see for these transactions:

    • 11.4 – lock row piece (table)
    • 5.2 – start transaction (update undo segment header)
    • 11.5 – update row piece (table)
    • 10.4 – delete leaf row (index)
    • 10.2 – insert leaf row (index)
    • 5.4 – commit (update undo segment header)
    • 5.1 – update undo block (op 11.1 – undo table row operation)
    • 5.1 – update undo block (op 11.1 – undo table row operation)
    • 5.1 – update undo block (op 10.22 – undo leaf operation)
    • 5.1 – update undo block (op 10.22 – undo leaf operation)
      That’s a total of 10 changes per transaction – which means 100,000 db block changes in total, not 60,000.
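    Tallying the change vectors listed above confirms the total (a sketch – the vector list simply transcribes the bullets):

```python
# Change vectors observed per transaction in the redo stream
vectors = [
    '11.4',   # lock row piece (table)
    '5.2',    # start transaction (undo segment header)
    '11.5',   # update row piece (table)
    '10.4',   # delete leaf row (index)
    '10.2',   # insert leaf row (index)
    '5.4',    # commit (undo segment header)
    '5.1',    # undo record for 11.4
    '5.1',    # undo record for 11.5
    '5.1',    # undo record for 10.4
    '5.1',    # undo record for 10.2
]

transactions = 10_000
print(len(vectors) * transactions)   # 100,000 - not the 60,489 reported
```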
      This anomaly is so large that it HAS to make my suggested use of the segment stats suspect. Fortunately, though, the error is in a direction that, while sapping our confidence, doesn’t make checking the numbers a completely pointless exercise. If the error is such that we lose sight of the work done in modifying the index then the figures remaining are such that they increase our perception of the index as one that is being used for queries as well – in other words the error doesn’t make an index that’s used for queries look like an index that’s only used for self-maintenance.
      Case Study
      The following figures were the results from the OTN database forum posting that prompted me to write this note and the previous one:
    [otn.png – the poster’s report: 26 indexes with their column definitions and segment statistics, ordered by “change percentage”]

    The poster has some code which gives a report of the indexes on a table (all 26 of them in this case) with their column definitions and segment statistics. What (tentative) clues do we get about these indexes as far as this article is concerned?
    Conveniently the code arranges the indexes in order of “change percentage”, and we can see very easily that the first nine indexes in the list show “db block changes” > one-third of “logical reads”, the cut-off point for the article, so it’s worth taking a quick look at those indexes to see if they are suitable candidates for dropping. Inevitably the moment you start looking closely there are a number of observations to add to this starting point.

    1. Look at the number of changes in the first 12 indexes and notice how frequently numbers around 300,000 appear – perhaps that’s indicative of about 300,000 inserts taking place in the interval, in which case the first and 14th indexes (on (zcid) and (ps_spdh) respectively) must be on columns which are very frequently null and are therefore much smaller than the rest of the indexes. Even though the index on (zcid) is reported at 39%, perhaps this is an index with a blevel of 1 (height = 2), in which case its cut-off point would be 50% rather than 33% – which means it could well be used for a lot of queries.
    2. The tenth index, on (dp_datetime), reports a “change percentage” of 26%, which is below the cut-off, but it’s worth noting that there are three other indexes (12, 13 and 21) on the table that start with a column called dp_datetime_date. Is dp_datetime_date the truncated value of dp_datetime, and is it a real column or a virtual column? Given my comments about the optimizer’s clever trick with indexes on trunc(date_column) in the second post in this series, perhaps there’s scope here for getting rid of the dp_datetime index even though the simple numbers suggest that it probably is used for some queries.
    3. Of the three indexes starting with dp_datetime_date, one consists of just that single column – so perhaps (as suggested in the first post in this series) we could simply drop that too. Then, when we look at the other two (indexes 12 and 13), we note that index 13 is subject to five times as much change as index 12 (is that one insert plus two updates, given that an update means two changes?), but fifteen times as much logical I/O. The extra LIO may be because the index is larger (it has many more columns), or it may be because the index is used very inefficiently – either way, we might look very carefully at the column ordering to see if index 13 could be rearranged to start the same way as index 12, and then drop index 12. On top of everything else we might also want to check whether we have the right level of compression on the index – if it’s not very effective until we’ve selected on many columns then it must be subject to a lot of repetition in the first few columns.
    4. I gave a few examples in part one of reasons for dropping indexes based on similarity of columns used – the examples came from this output so I won’t repeat them, but if you refer back to them you will note that the desirability of some of the suggestions in the earlier article is reinforced by the workload statistics. For example, the similarity of indexes 24 and 25, with an exact ordered match on the first four columns, suggests that we consider combining the two indexes into a single index; the fact that both indexes were subject to 2.7 million changes makes this look like a highly desirable target.
      Summary
      There are a lot of indexes on this table but it looks as if we might be able to drop nearly half of them, although we will have to be very careful before we do so and will probably want to make a couple at a time invisible (and we can make the change “online” in 12c) for a while before dropping them.
      Remember, though, that everything I’ve said in this note is guesswork based on a few simple numbers, and I want to emphasise an important point – this note wasn’t trying to tell you how to decide if an index could be dropped, it was pointing out that there’s a simple way to focus your attention on a few places where you’re most likely to find some indexes that are worth dropping. Run a report like this against the five biggest tables or the five busiest tables or the five tables with the most indexes and you’ll probably find a few easy wins as far as redundant indexes are concerned.
      Footnote
      While writing up my comments about the optimizer’s tricks with columns like dp_datetime and a virtual dp_datetime_date I had a sudden sneaky thought about how we could play games with the optimizer if both columns were real columns that were kept in synch with each other. If it works out I’ll write it up in a further blog.
