DB OPTimize

Yes, sometimes it is the database's fault

Your database history is precious. Keeping it is crucial. Guard it as you would a valuable treasure.

And why is that? Because it holds both the normal and the abnormal behaviour of your database, and both of them are equally important.

It will save you, over and over again. That’s a guarantee.

Your database history keeps a record of everything that goes well and everything that goes wrong (which will assist in the detection stage).

And it gives you the ability to return to the normal, desired behaviour (which will assist in fixing the problem).

Oracle changes SQL execution plans. By now it's practically an axiom, a well-known fact. Like knowing the sun will rise in the morning, but with a bad connotation.

It happens all the time, even if you don’t want it to, and it causes severe problems.

The bad effect is widespread. Online systems suddenly get stuck; DW reports suddenly finish hours after they should; night processes leak into the morning hours, overloading the system; and this time it is the database's fault (as opposed to a previous post I once wrote, "No, it is not always the database fault").

How to detect things that suddenly go wrong?

First, keep your history.

Oracle does it by default, keeping 8 days of performance history in its AWR tables at a 1-hour interval (note that using the AWR tables requires the Diagnostics Pack license).

Eight days of history is not enough, as proven many times in my experience. You should increase it to at least 2-3 months.

And why is that? Because, for instance, some processes run only once a month, and in order to compare this month's execution behaviour to last month's, or even to that of two months ago, you have to keep old enough history.

I would say that in most cases a history of 70-100 days is enough.

I also prefer to change the default interval from 1 hour to 15 minutes. This lets you detect issues faster; otherwise you have to wait until a full hour passes before the last hour's history is stored in the database.
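A quick way to see how far back your kept history actually reaches is to look at the snapshots themselves. This is a small sketch against dba_hist_snapshot (same licensing note as above applies):

select min(begin_interval_time) oldest_snapshot,
       max(end_interval_time)   newest_snapshot,
       count(*)                 snapshot_count
from   dba_hist_snapshot;

If the oldest snapshot is only a few days old, your retention is still at (or near) the default.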

So how can you see your current retention period (how long your history is kept) and your snapshot interval?

By running the following query:

select extract(day from snap_interval) * 24 * 60 +
       extract(hour from snap_interval) * 60 +
       extract(minute from snap_interval) interval_in_minutes,
       extract(day from retention) retention_period_in_days,
       extract(day from retention) * 24 * 60 +
       extract(hour from retention) * 60 +
       extract(minute from retention) retention_period_in_minutes
from   dba_hist_wr_control;

This is what you will see if you haven't changed Oracle's default settings:

INTERVAL_IN_MINUTES RETENTION_PERIOD_IN_DAYS RETENTION_PERIOD_IN_MINUTES
60 8 11520


And how do you change it to something else? By using dbms_workload_repository.modify_snapshot_settings:

This will change the interval to 15 minutes and the history period to 70 days (70 days * 24 hours * 60 minutes = 100,800 minutes):

execute dbms_workload_repository.modify_snapshot_settings (interval => 15, retention => 100800);

This is what you see after the change:

INTERVAL_IN_MINUTES RETENTION_PERIOD_IN_DAYS RETENTION_PERIOD_IN_MINUTES
15 70 100800


OK. Now you are set. This was the first step to success. That was easy.

The next steps are to find the abnormal behaviour, and then fix it. This is useful in several situations: someone may be complaining online that things are not working as they should; a complaint may require checking things in the past; the DBA may be proactively looking for performance issues; and so on.

The following query shows the top SQL statements running in your database during the last 5 minutes:

select a.sql_id,
       count(a.sql_id),
       sql_text
from   v$active_session_history a,
       v$sql b
where  sample_time between sysdate - 5/60/24 and sysdate
and    a.sql_id = b.sql_id (+)
and    session_type != 'BACKGROUND'
and    a.sql_id is not null
group  by a.sql_id, sql_text
order  by count(a.sql_id) desc;

After spotting a problematic sql_id, the query below will show you its current performance information, such as: how many executions ran, the average execution time, how many rows are returned on average, when it was last active, etc.

select sql_id,
       plan_hash_value,
       last_active_time,
       a.parsing_schema_name,
       executions,
       trunc(elapsed_time/1000000/nullif(executions,0),5) avg_exec_sec,
       trunc(a.rows_processed/nullif(executions,0),3) avg_rows,
       a.*
from   v$sql a
where  sql_id = '175qmdzg827bw';

Then, you can explore the historical behaviour of the same problematic SQL:

select b.begin_interval_time,
       a.plan_hash_value,
       a.parsing_schema_name,
       a.executions_delta,
       trunc(a.elapsed_time_delta/1000000,4) total_elps_sec,
       trunc(a.elapsed_time_delta/1000000/nullif(a.executions_delta,0),5) avg_elps,
       trunc(a.rows_processed_delta/a.executions_delta,2) rows_avg,
       trunc(a.buffer_gets_delta/a.executions_delta,2) buff_avg,
       a.fetches_delta
from   dba_hist_sqlstat a,
       dba_hist_snapshot b
where  sql_id = '175qmdzg827bw'
and    a.snap_id = b.snap_id
and    executions_delta > 0
order  by a.snap_id desc;

This query shows the performance behaviour of this SQL over time, within each snapshot interval.

As my interval was set to 15 minutes, I have performance statistics for every 15 minutes. For each snapshot (15-minute interval) I can see which execution plan was used, how many executions ran, the average elapsed time, how many rows were returned, how many buffers were read, the total time it consumed, and more.

These are the results for an online query that constantly changed its execution plan (a very undesirable behaviour):

BEGIN_INTERVAL_TIME PLAN_HASH_VALUE PARSING_SCHEMA_NAME EXECUTIONS TOTAL_ELPS_SEC AVG_ELPS ROWS_AVG BUFF_AVG FETCHES_DELTA
25-NOV-15 11.01.00 2391881619 APP_SCHEMA 152 0.3954 0.0026 1 34.48 152
25-NOV-15 10.45.57 2391881619 APP_SCHEMA 120290 159.1953 0.00132 0.99 29.93 120283
25-NOV-15 10.30.53 2391881619 APP_SCHEMA 68461 54.0491 0.00078 1 29.77 68465
25-NOV-15 10.15.51 2391881619 APP_SCHEMA 8098 6.6335 0.00081 1 29.8 8098
25-NOV-15 10.00.49 2391881619 APP_SCHEMA 170 0.0688 0.0004 1 34.06 170
24-NOV-15 20.00.05 3887476682 APP_SCHEMA 81 0.0302 0.00037 1 34.2 81
24-NOV-15 16.00.43 3887476682 APP_SCHEMA 197 0.0687 0.00034 1 34.04 197
24-NOV-15 14.45.36 2688402359 APP_SCHEMA 4 0.0078 0.00196 1 36.25 4
24-NOV-15 14.00.31 2688402359 APP_SCHEMA 81462 88.4826 0.00108 0.99 38.45 81463
24-NOV-15 13.45.29 4279890028 APP_SCHEMA 2251 4381.0747 1.94627 1 329578.94 2264
24-NOV-15 13.45.29 2688402359 APP_SCHEMA 121075 95.0484 0.00078 0.99 38.64 121072
24-NOV-15 13.30.26 4279890028 APP_SCHEMA 7139 13726.4371 1.92273 1 329407.69 7141
24-NOV-15 13.15.23 4279890028 APP_SCHEMA 7870 14370.3959 1.82597 1 0 7870
24-NOV-15 13.00.19 4279890028 APP_SCHEMA 6522 14222.7887 2.18074 0.99 319982.21 6519
24-NOV-15 12.45.16 4279890028 APP_SCHEMA 7429 14519.0715 1.95437 1 0 7432
24-NOV-15 12.30.13 4279890028 APP_SCHEMA 6104 11162.5476 1.82872 0.99 327928.49 6088
24-NOV-15 12.00.09 2483120730 APP_SCHEMA 174 0.4711 0.0027 1 92.99 174
24-NOV-15 11.00.03 2483120730 APP_SCHEMA 96 0.0487 0.0005 1 35.16 96
24-NOV-15 10.00.57 2483120730 APP_SCHEMA 124 0.0503 0.0004 1 30.72 124
24-NOV-15 09.45.56 2483120730 APP_SCHEMA 8 0.0073 0.00091 1 37.37 8
24-NOV-15 09.00.51 2483120730 APP_SCHEMA 24 0.0102 0.00042 1 37.7 24
24-NOV-15 08.45.49 2483120730 APP_SCHEMA 8 0.0316 0.00395 1 39.75 8
24-NOV-15 08.00.45 1816876849 APP_SCHEMA 3 0.0007 0.00026 1 42.33 3
24-NOV-15 07.45.44 1816876849 APP_SCHEMA 2 0.0008 0.00041 1 49 2
24-NOV-15 06.45.38 1816876849 APP_SCHEMA 4 0.0234 0.00586 1 26.5 4
24-NOV-15 06.00.34 2261649723 APP_SCHEMA 1 0.0003 0.00038 1 60 1
24-NOV-15 05.45.32 2261649723 APP_SCHEMA 1 0.0002 0.00028 1 20 1
24-NOV-15 05.00.28 2261649723 APP_SCHEMA 2 0.0009 0.00048 1 27.5 2
24-NOV-15 04.45.27 2261649723 APP_SCHEMA 1 0.0147 0.01473 1 19 1


Now I can easily detect abnormal behaviour of this SQL.

I can tell that a severe performance issue started at 12:30 and ended around 14:00. I can also say that the problem occurred when the execution plan changed, and Oracle chose a bad execution plan for this SQL. During these 1.5 hours, the average execution time went up from a few milliseconds (or less) to around 2 seconds. Multiply this bad average execution time by thousands of executions (as seen in the executions column) and you can clearly see the problem.

Let's put into words what we see in the results above:

At 24-NOV-15 12.45 (i.e. 12:45-13:00, the 24-NOV-15 12.45.16 row above), this SQL was executed 7429 times. Oracle ran it with a particular execution plan: plan_hash_value=4279890028. It returned an average of 1 row per execution, the average execution time was 1.95 seconds, and the total time those 7429 executions consumed between 12:45 and 13:00 was 14519 seconds (~4 hours).

WOW!!! That’s a lot.

In contrast, at 25-NOV-15 10.15 (i.e. 10:15-10:30, the 25-NOV-15 10.15.51 row above), the same SQL was executed 8098 times. This time Oracle ran it with a different execution plan, plan_hash_value=2391881619. It also returned an average of 1 row per execution. However, the average execution time was 0.00081 seconds, and the total time those 8098 executions consumed between 10:15 and 10:30 was only 6.6 seconds.

WOW again. What a huge difference!

Same interval (15 minutes), almost the same number of executions (7429 vs. 8098), but a vastly different average time (1.95 sec vs. 0.00081 sec), and thus a vastly different total time (14519 sec vs. 6.6 sec).
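To get this kind of per-plan comparison at a glance, you can aggregate the same AWR data by execution plan. This is a sketch of a query I'd use for that (it reuses the example sql_id from above; adjust it for your own case):

select a.plan_hash_value,
       sum(a.executions_delta) total_execs,
       trunc(sum(a.elapsed_time_delta)/1000000,2) total_elps_sec,
       trunc(sum(a.elapsed_time_delta)/1000000/nullif(sum(a.executions_delta),0),5) avg_elps_sec,
       min(b.begin_interval_time) first_seen,
       max(b.begin_interval_time) last_seen
from   dba_hist_sqlstat a,
       dba_hist_snapshot b
where  a.sql_id = '175qmdzg827bw'
and    a.snap_id = b.snap_id
and    a.executions_delta > 0
group  by a.plan_hash_value
order  by avg_elps_sec desc;

One row per plan_hash_value, sorted worst-first, immediately exposes which plan is the expensive one and when it was in use.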

This turned into a huge performance issue at my customer's site. The application was stuck and couldn't progress as it should.

Now you are probably asking yourself how to fix it and how to prevent it from occurring again.

Well, this will be discussed in a future post… hold your breath for now 🙂

Comments are always welcome

Merav Kedem,

Founder of

DB OPTimize

Oracle Performance Tuning & DBA Consulting