Each wait event below is listed with its possible causes, recommended actions, and remarks.
Wait Event: db file sequential read

Possible Causes:
- Use of an unselective index
- Fragmented indexes
- High I/O on a particular disk or mount point
- Bad application design
- Index read performance can be affected by a slow I/O subsystem and/or poor database file layout, which results in a higher average wait time

Actions:
- Check indexes on the table to ensure that the right index is being used
- Check the column order of the index against the WHERE clause of the top SQL statements
- Rebuild indexes with a high clustering factor
- Use partitioning to reduce the number of blocks visited
- Make sure optimizer statistics are up to date
- Relocate 'hot' datafiles
- Consider using multiple buffer pools and caching frequently used indexes/tables in the KEEP pool
- Inspect the execution plans of the SQL statements that access data through indexes:
  - Is it appropriate for the SQL statements to access data through index lookups?
  - Is the application an online transaction processing (OLTP) or decision support system (DSS)?
  - Would full table scans be more efficient?
  - Do the statements use the right driving table?
- The optimization goal is to minimize both the number of logical and physical I/Os.

Remarks:
- The Oracle process wants a block that is not currently in the SGA, and it is waiting for the database block to be read into the SGA from disk.
- If the DBA_INDEXES.CLUSTERING_FACTOR of the index approaches the number of blocks in the table, then most of the rows in the table are ordered; this is desirable. However, if the clustering factor approaches the number of rows in the table, the rows in the table are randomly ordered, and more I/Os are required to complete the operation. You can improve an index's clustering factor by rebuilding the table so that rows are ordered according to the index key, and rebuilding the index afterwards.
- The OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING initialization parameters can influence the optimizer to favour nested loops operations and choose an index access path over a full table scan.
- Tuning I/O related waits: Note 223117.1
- db file sequential read reference: Note 34559.1
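The clustering-factor remark above can be checked directly in the data dictionary: compare the index's clustering factor with the table's block and row counts. A minimal sketch (the schema and table names are placeholders):

```sql
-- A clustering factor near BLOCKS is good; a value near NUM_ROWS
-- suggests randomly ordered rows and more I/O per index range scan.
SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
FROM   dba_indexes i
       JOIN dba_tables t
         ON  t.owner = i.table_owner
         AND t.table_name = i.table_name
WHERE  i.owner = 'SCOTT'          -- placeholder schema
AND    i.table_name = 'EMP';      -- placeholder table
```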
Wait Event: db file scattered read

Possible Causes:
- The Oracle session has requested and is waiting for multiple contiguous database blocks (up to DB_FILE_MULTIBLOCK_READ_COUNT) to be read into the SGA from disk
- Full table scans
- Fast full index scans

Actions:
- Optimize multi-block I/O by setting the parameter DB_FILE_MULTIBLOCK_READ_COUNT
- Use partition pruning to reduce the number of blocks visited
- Consider using multiple buffer pools and caching frequently used indexes/tables in the KEEP pool
- Optimize the SQL statements that initiated most of the waits; the goal is to minimize the number of physical and logical reads:
  - Should the statement access the data by a full table scan or index fast full scan? Would an index range or unique scan be more efficient?
  - Does the query use the right driving table?
  - Are the SQL predicates appropriate for a hash or merge join?
  - If full scans are appropriate, can parallel query improve the response time?
- The objective is to reduce the demand for both logical and physical I/Os; this is best achieved through SQL and application tuning
- Make sure all statistics are representative of the actual data; check the LAST_ANALYZED date

Remarks:
- If an application that has been running fine for a while suddenly clocks a lot of time on the db file scattered read event and there has been no code change, check whether one or more indexes has been dropped or become unusable.
- db file scattered read reference: Note 34558.1
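To locate the statements behind most of the multiblock reads, a common starting point is to rank SQL by physical reads and confirm that optimizer statistics are current. A sketch (the schema name is a placeholder):

```sql
-- Top SQL by physical reads; candidates for full-scan tuning.
SELECT *
FROM  (SELECT sql_id,
              disk_reads,
              buffer_gets,
              executions,
              SUBSTR(sql_text, 1, 80) sql_text
       FROM   v$sql
       ORDER  BY disk_reads DESC)
WHERE ROWNUM <= 10;

-- Verify statistics are current on the affected tables.
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT';   -- placeholder schema
```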
Wait Event: log file parallel write

Possible Causes:
- LGWR waits while writing the contents of the redo log buffer to the online redo log files on disk
- I/O wait on the subsystem holding the online redo log files

Actions:
- Reduce the amount of redo being generated
- Do not leave tablespaces in hot backup mode for longer than necessary
- Do not use RAID 5 for redo log files
- Use faster disks for redo log files
- Ensure that the disks holding the archived redo log files and the online redo log files are separate, to avoid contention
- Consider using NOLOGGING or UNRECOVERABLE options in SQL statements

Remarks:
- Reference: Note 34583.1
Wait Event: log file sync

Possible Causes:
- Oracle foreground processes are waiting for a COMMIT or ROLLBACK to complete

Actions:
- Tune LGWR to get good throughput to disk (e.g. do not put redo logs on RAID 5)
- Reduce the overall number of commits by batching transactions so that there are fewer distinct COMMIT operations

Remarks:
- Reference: Note 34592.1
- High waits on log file sync: Note 125269.1
- Tuning the redo log buffer cache and resolving redo latch contention: Note 147471.1
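To judge whether commit batching would help, compare the average log file sync wait with the commit rate. A sketch using the standard V$ views:

```sql
-- Average log file sync wait in microseconds.
SELECT event,
       total_waits,
       time_waited_micro / NULLIF(total_waits, 0) avg_wait_us
FROM   v$system_event
WHERE  event = 'log file sync';

-- Commit rate for comparison.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('user commits', 'user rollbacks');
```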
Wait Event: buffer busy waits

Possible Causes:
- Buffer busy waits are common in an I/O-bound Oracle system. The two main cases where this can occur are:
  - Another session is reading the block into the buffer
  - Another session holds the buffer in a mode incompatible with our request
- These waits indicate read/read, read/write, or write/write contention
- The Oracle session is waiting to pin a buffer; a buffer must be pinned before it can be read or modified, and only one process can pin a buffer at any one time
- This wait can be intensified by a large block size, as more rows can be contained within the block
- This wait happens when a session wants to access a database block in the buffer cache but cannot, because the buffer is "busy"
- It is also often due to several processes repeatedly reading the same blocks (e.g. many sessions scanning the same index or data block)

Actions:
- The main way to reduce buffer busy waits is to reduce the total I/O on the system
- Depending on the block type, the actions differ:
  - Data blocks: eliminate hot blocks from the application; check for repeatedly scanned or unselective indexes; rebuild the object with a higher PCTFREE to reduce the number of rows per block; check for 'right-hand indexes' (indexes that many processes insert into at the same point); increase INITRANS and MAXTRANS and reduce PCTUSED, which makes the table less dense; reduce the number of rows per block
  - Segment header: increase the number of FREELISTs and FREELIST GROUPs
  - Undo header: increase the number of rollback segments

Remarks:
- A process that waits on the buffer busy waits event publishes the reason code in the P3 parameter of the wait event. Oracle Metalink Note 34405.1 provides a reference table; codes 130 and 220 are the most common.
- Resolving intense and random buffer busy wait performance problems: Note 155971.1
Wait Event: free buffer waits

Possible Causes:
- The session is waiting for a free buffer, but none are available because there are too many dirty buffers in the cache
- Either the buffer cache is too small or DBWR is slow to write modified buffers to disk
- DBWR is unable to keep up with the write requests
- Checkpoints are happening too fast, perhaps due to high database activity and undersized online redo log files
- Large sorts and full table scans are filling the cache with modified blocks faster than DBWR can write them to disk
- If the number of dirty buffers that need to be written to disk is larger than the number DBWR can write per batch, these waits can be observed

Actions:
- Reduce checkpoint frequency by increasing the size of the online redo log files
- Examine the size of the buffer cache; consider increasing the size of the buffer cache in the SGA
- Set DISK_ASYNCH_IO = TRUE
- If asynchronous I/O is not available, increase the number of database writer processes or DBWR slaves
- Ensure hot spots do not exist by spreading datafiles over disks and disk controllers
- Pre-sorting or reorganizing data can help

Remarks:
- Note 163424.1
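The writer-process suggestions above can be sketched as follows; the values are placeholders, and note that DB_WRITER_PROCESSES and DBWR_IO_SLAVES are normally alternatives, not used together:

```sql
-- DB_WRITER_PROCESSES is static: change the spfile, then restart.
ALTER SYSTEM SET db_writer_processes = 4 SCOPE = SPFILE;

-- Alternatively, where async I/O is unavailable, use DBWR I/O slaves.
-- ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE = SPFILE;

-- Check how often sessions hit free buffer waits.
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'free buffer waits';
```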
Wait Event: enqueue waits

Possible Causes:
- This wait event indicates a wait for a lock that is held by another session (or sessions) in a mode incompatible with the requested mode.
- TX (transaction lock): generally due to table or application setup issues. It indicates contention for a row-level lock; the wait occurs when a transaction tries to update or delete rows that are currently locked by another transaction. This is usually an application issue.
- TM (DML enqueue lock): generally due to application issues, particularly if foreign key constraints have not been indexed.
- ST lock: database actions that modify the UET$ (used extent) and FET$ (free extent) tables, such as drop, truncate, and coalesce, require the ST lock. Contention for the ST lock indicates multiple sessions actively performing dynamic disk space allocation or deallocation in dictionary-managed tablespaces.

Actions:
- Reduce waits and wait times; the action to take depends on the lock type causing the most problems
- Whenever you see an enqueue wait event for the TX enqueue, the first step is to find out who the blocker is and whether there are multiple waiters for the same resource
- Waits for the TM enqueue in mode 3 are primarily due to unindexed foreign key columns; create indexes on foreign keys (pre-10g)
- To minimize ST lock contention in your database:
  - Use locally managed tablespaces
  - Recreate all temporary tablespaces using the CREATE TEMPORARY TABLESPACE ... TEMPFILE ... command

Remarks:
- The maximum number of enqueue resources that can be concurrently locked is controlled by the ENQUEUE_RESOURCES parameter.
- Reference: Note 34566.1
- Tracing sessions waiting on an enqueue: Note 102925.1
- Details of the V$LOCK view and lock modes: Note 29787.1
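The first step for TX waits, finding the blocker, can be sketched with the standard lock views (BLOCKING_SESSION is available from 10g):

```sql
-- Who is blocking whom.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Lock-level view: holders (BLOCK = 1) and waiters (REQUEST > 0)
-- pair up on the same ID1/ID2 resource.
SELECT sid, type, id1, id2, lmode, request, block
FROM   v$lock
WHERE  block = 1 OR request > 0
ORDER  BY id1, id2;
```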
Wait Event: cache buffers chains latch

Possible Causes:
- This latch is acquired when searching for data blocks. The buffer cache is organized into chains of blocks, and each chain is protected by a child latch when it needs to be scanned.
- Hot blocks are a common cause of cache buffers chains latch contention. This happens when multiple sessions repeatedly access one or more blocks that are protected by the same child latch.
- SQL statements with high BUFFER_GETS (logical reads) per EXECUTIONS are the main culprits.
- Multiple concurrent sessions executing the same inefficient SQL against the same data set.

Actions:
- Reducing contention for the cache buffers chains latch usually requires reducing logical I/O rates by tuning and minimizing the I/O requirements of the SQL involved. High I/O rates can be a sign of a hot block (a highly accessed block).
- Export the table, increase PCTFREE significantly, and import the data. This minimizes the number of rows per block, spreading them over many blocks; it comes at the expense of storage, and full table scans will be slower.
- Minimize the number of records per block in the table.
- For indexes, rebuild them with higher PCTFREE values, bearing in mind that this may increase the height of the index.
- Consider reducing the block size. Starting with Oracle9i Database, Oracle supports multiple block sizes: if the current block size is 16K, you may move the table or recreate the index in a tablespace with an 8K block size. This will also negatively impact full table scans, and multiple block sizes increase management complexity.

Remarks:
- The default number of hash latches is usually 1024.
- The number of hash latches can be adjusted with the parameter _DB_BLOCKS_HASH_LATCHES.
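To identify a hot block behind cache buffers chains contention, one common approach is to find the busiest child latches and then the buffers they protect. A sketch; note that X$BH is visible only when connected as SYS, and the latch address is a placeholder taken from the first query:

```sql
-- Child latches with the most sleeps.
SELECT *
FROM  (SELECT addr, gets, sleeps
       FROM   v$latch_children
       WHERE  name = 'cache buffers chains'
       ORDER  BY sleeps DESC)
WHERE ROWNUM <= 10;

-- Buffers on a given hot child latch, hottest (touch count) first.
SELECT hladdr, obj, dbablk, tch
FROM   x$bh
WHERE  hladdr = '&HOT_LATCH_ADDR'   -- placeholder from query above
ORDER  BY tch DESC;
```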
Wait Event: cache buffers lru chain latch

Possible Causes:
- Processes need this latch when they move buffers according to the LRU block replacement policy in the buffer cache.
- The latch is acquired in order to introduce a new block into the buffer cache and when writing a buffer back to disk, specifically when scanning the LRU (least recently used) chain containing the dirty blocks in the buffer cache.
- Competition for this latch is symptomatic of intense buffer cache activity caused by inefficient SQL statements. Statements that repeatedly scan large unselective indexes or perform full table scans are the prime culprits.

Actions:
- Contention for this latch can be avoided by implementing multiple buffer pools or by increasing the number of LRU latches with the parameter DB_BLOCK_LRU_LATCHES (the default value is generally sufficient for most systems).
- It is also possible to reduce contention by increasing the size of the buffer cache, thereby reducing the rate at which new blocks are introduced into it.
Wait Event: direct path reads

Possible Causes:
- These waits are associated with direct read operations, which read data directly into the session's PGA, bypassing the SGA.
- The "direct path read" and "direct path write" wait events are related to operations performed in the PGA, such as sorting, GROUP BY operations, and hash joins.
- In DSS-type systems, or during heavy batch periods, waits on "direct path read" are quite normal; for an OLTP system, however, these waits are significant.
- These waits can occur during sorting operations, which is not surprising, as direct path reads and writes usually occur in connection with temporary segments.
- SQL statements with functions that require sorts, such as ORDER BY, GROUP BY, UNION, DISTINCT, and ROLLUP, write sort runs to the temporary tablespace when the input size is larger than the work area in the PGA.

Actions:
- Ensure that OS asynchronous I/O is configured correctly.
- Check for I/O-heavy sessions and SQL, and see whether the amount of I/O can be reduced.
- Ensure no disks are I/O-bound.
- Set PGA_AGGREGATE_TARGET to an appropriate value (if WORKAREA_SIZE_POLICY = AUTO), or set the *_AREA_SIZE parameters manually (such as SORT_AREA_SIZE, in which case you must set WORKAREA_SIZE_POLICY = MANUAL).
- Whenever possible use UNION ALL instead of UNION; where applicable, use HASH JOIN instead of SORT MERGE, and NESTED LOOPS instead of HASH JOIN.
- Make sure the optimizer selects the right driving table. Check whether the composite index's columns can be rearranged to match the ORDER BY clause and avoid the sort entirely.
- Also consider automating the SQL work areas using PGA_AGGREGATE_TARGET in Oracle9i Database.

Remarks:
- The default size of HASH_AREA_SIZE is twice that of SORT_AREA_SIZE.
- A larger HASH_AREA_SIZE will influence the optimizer toward hash joins instead of nested loops.
- The hidden parameter DB_FILE_DIRECT_IO_COUNT can impact direct path read performance: it sets the maximum I/O buffer size of direct read and write operations. The default is 1 MB in 9i.
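The PGA sizing advice above can be sketched as follows; the target value is a placeholder, and V$PGASTAT shows how well work areas are fitting:

```sql
-- With automatic work areas, one knob controls PGA sizing.
ALTER SYSTEM SET workarea_size_policy = AUTO;
ALTER SYSTEM SET pga_aggregate_target = 1G;   -- placeholder value

-- Gauge the fit: a low hit percentage or a non-zero over-allocation
-- count suggests the target is too small.
SELECT name, value
FROM   v$pgastat
WHERE  name IN ('cache hit percentage', 'over allocation count');
```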
Wait Event: direct path writes

Possible Causes:
- These waits are associated with direct write operations that write data from users' PGAs to data files or temporary tablespaces.
- Direct load operations (e.g. Create Table As Select (CTAS))
- Parallel DML operations
- Sort I/O (when a sort does not fit in memory)

Actions:
- If the file indicates a temporary tablespace, check for unexpected disk sort operations.
- Ensure DISK_ASYNCH_IO is TRUE. This is unlikely to reduce wait times from the wait event timings, but may reduce sessions' elapsed times (as synchronous direct I/O is not accounted for in wait event timings).
- Ensure that OS asynchronous I/O is configured correctly.
- Ensure no disks are I/O-bound.
Wait Event: latch free waits

Possible Causes:
- This wait indicates that the process is waiting for a latch that is currently busy (held by another process).
- When you see a latch free wait event in the V$SESSION_WAIT view, it means the process failed to obtain the latch in willing-to-wait mode after spinning _SPIN_COUNT times and went to sleep. When processes compete heavily for latches, they also consume more CPU resources because of spinning; the result is a higher response time.

Actions:
- If the time spent waiting for latches is significant, determine which latches are suffering from contention.

Remarks:
- A latch is a kind of low-level lock. Latches apply only to memory structures in the SGA, not to database objects. An Oracle SGA has many latches, which exist to protect various memory structures from potential corruption by concurrent access.
- The time spent on latch waits is an effect, not a cause; the cause is doing too many block gets, and block gets require cache buffers chains latching.
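To determine which latches are suffering from contention, sleeps are the telling statistic (misses that spinning could not resolve). A sketch:

```sql
-- Latches ranked by sleeps; the top entries are the contended ones.
SELECT *
FROM  (SELECT name, gets, misses, sleeps
       FROM   v$latch
       ORDER  BY sleeps DESC)
WHERE ROWNUM <= 10;
```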
Wait Event: library cache latch

Possible Causes:
- The library cache latches protect the cached SQL statements and object definitions held in the library cache within the shared pool. The library cache latch must be acquired in order to add a new statement to the library cache.
- The application makes heavy use of literal SQL; using bind variables will reduce this latch contention considerably.

Actions:
- The latch exists to ensure the application reuses the shared SQL statement representation as much as possible; use bind variables whenever possible in the application.
- You can reduce the library cache latch hold time by properly setting the SESSION_CACHED_CURSORS parameter.
- Consider increasing the shared pool.

Remarks:
- Larger shared pools tend to have long free lists, and processes that need to allocate space in them must spend extra time scanning those free lists while holding the shared pool latch.
- If your database is not yet on Oracle9i Database, an oversized shared pool can increase contention for the shared pool latch.
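The SESSION_CACHED_CURSORS suggestion can be sketched as follows; the value is a placeholder and should be tuned against the session cursor cache statistics:

```sql
-- Cache frequently re-parsed cursors per session; the parameter is
-- static at the system level, so change the spfile and restart.
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;

-- Compare cache hits to overall parse activity.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('session cursor cache hits', 'parse count (total)');
```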
Wait Event: shared pool latch

Possible Causes:
- The shared pool latch is used to protect critical operations when allocating and freeing memory in the shared pool.
- Contention for the shared pool and library cache latches is mainly due to intense hard parsing. A hard parse applies to new cursors and to cursors that have aged out and must be re-executed.
- The cost of parsing a new SQL statement is expensive, both in CPU requirements and in the number of times the library cache and shared pool latches may need to be acquired and released.

Actions:
- Avoid hard parses when possible: parse once, execute many.
- Eliminating literal SQL also helps avoid the shared pool latch. The size of the shared pool and the use of MTS (shared server option) also greatly influence the shared pool latch.
- A workaround is to set the initialization parameter CURSOR_SHARING to FORCE. This allows statements that differ in literal values but are otherwise identical to share a cursor, reducing latch contention, memory usage, and hard parsing.

Remarks:
- Note 62143.1 explains how to identify and correct problems with the shared pool and shared pool latch.
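The CURSOR_SHARING workaround, together with the statistics to measure its effect, can be sketched as:

```sql
-- Let statements differing only in literals share one cursor.
ALTER SYSTEM SET cursor_sharing = FORCE;

-- Measure hard-parse pressure before and after the change.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');
```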
Wait Event: row cache objects latch

Possible Causes:
- This latch comes into play when user processes attempt to access cached data dictionary values.

Actions:
- Contention on this latch is uncommon; the only way to reduce it is to increase the size of the shared pool (SHARED_POOL_SIZE).
- Use locally managed tablespaces for your application objects, especially indexes.
- Review and amend your logical database design; a good example is to merge or decrease the number of indexes on tables with heavy inserts.

Remarks:
- Configuring the library cache to an acceptable size usually ensures that the data dictionary cache is also properly sized, so tuning the library cache tunes the row cache indirectly.
Friday, August 28, 2015
Resolving common Oracle Wait Events using the Wait Interface
http://gavinsoorma.com/wp-content/uploads/2011/12/Resolving-common-Oracle-Wait-Events-using-the-Wait-Interface.htm
Buffer busy wait event
http://adminoracle10g.blogspot.tw/2012/12/buffer-busy-wait-event_28.html
References:
Apress, Troubleshooting Oracle Performance (June 2008)
Oracle Database 11g Performance Tuning Recipes
Buffer busy wait
Oracle
has several types of buffer classes, such as data block, segment
header, undo header, and undo block. How you fix a buffer busy wait
situation will depend on the types of buffer classes that are causing
the problem.
Buffer busy waits usually happen on Oracle 10g and 11g mainly because of insert
contention into tables or indexes. There are a few other, rarer cases of
contention on old-style rollback segments, file header blocks, and freelists.
Before Oracle 10g there was one other major cause: readers waiting for readers,
i.e. one user does a physical I/O of a block into memory and a second user wants
to read that block. The second user waits until the I/O is finished by the first
user. Starting in 10g this wait has been given the name "read by other session";
before Oracle 10g it was also reported as a "buffer busy wait".
How can we find the block contention?
→ To find the block currently suffering from these waits:

SELECT sw.sql_id, sw.p1 "file#", sw.p2 "block#", sw.p3 "class#", event
FROM   v$session sw
WHERE  event IN ('buffer busy waits', 'read by other session');
→ Using the block_id and file_id we can find the segment:

SELECT relative_fno, owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = &FILE
AND    &BLOCK BETWEEN block_id AND block_id + blocks - 1;
→ To find the SQL causing the issue, use the sql_id found in the first query:

SELECT s.p1 file_id, s.p2 block_id, o.object_name obj,
       o.object_type otype, s.sql_id, w.class, event
FROM   v$session s,
       (SELECT ROWNUM class#, class FROM v$waitstat) w,
       all_objects o
WHERE  event IN ('buffer busy waits', 'read by other session')
AND    w.class#(+) = s.p3
AND    o.object_id(+) = s.row_wait_obj#
ORDER  BY 1;

SELECT sql_text FROM v$sqltext WHERE sql_id = '&sql_id' ORDER BY piece;
→ We can also find the block and SQL behind these waits using v$active_session_history:

SELECT p1 file_id, p2 block_id, o.object_name obj,
       o.object_type otype, ash.sql_id, w.class
FROM   v$active_session_history ash,
       (SELECT ROWNUM class#, class FROM v$waitstat) w,
       all_objects o
WHERE  event IN ('buffer busy waits', 'read by other session')
AND    w.class#(+) = ash.p3
AND    o.object_id(+) = ash.current_obj#
AND    ash.sample_time > SYSDATE - &MIN / (60 * 24)
ORDER  BY 1;

&MIN: the number of minutes of history you want to see.

SELECT sql_text FROM v$sqltext WHERE sql_id = '&sql_id' ORDER BY piece;
SELECT
bbw.cnt,
bbw.obj,
bbw.otype,
bbw.sql_id,
bbw.block_type,
NVL(tbs.NAME,TO_CHAR(bbw.p1)) TBS,
tbs_defs.assm ASSM
FROM (
SELECT
COUNT(*) cnt,
NVL(object_name,CURRENT_OBJ#) obj,
o.object_type otype,
ash.SQL_ID sql_id,
NVL(w.CLASS,'usn '||TO_CHAR(CEIL((ash.p3-18)/2))||' '||
DECODE(MOD(ash.p3,2),
1,'header',
0,'block')) block_type,
--nvl(w.class,to_char(ash.p3)) block_type,
ash.p1 p1
FROM v$active_session_history ash,
( SELECT ROWNUM CLASS#, CLASS FROM v$waitstat ) w,
all_objects o
WHERE event IN ('buffer busy waits', 'read by other session')
AND w.CLASS#(+)=ash.p3
AND o.object_id (+)= ash.CURRENT_OBJ#
AND ash.session_state='WAITING'
AND ash.sample_time > SYSDATE - &min/(60*24)
--and w.class# > 18
GROUP BY o.object_name, ash.current_obj#, o.object_type,
ash.sql_id, w.CLASS, ash.p3, ash.p1
) bbw,
(SELECT file_id,
tablespace_name NAME
FROM dba_data_files
) tbs,
(SELECT
tablespace_name NAME,
extent_management LOCAL,
allocation_type EXTENTS,
segment_space_management ASSM,
initial_extent
FROM dba_tablespaces
) tbs_defs
WHERE tbs.file_id(+) = bbw.p1
AND tbs.NAME=tbs_defs.NAME
ORDER BY bbw.cnt
The preceding queries will reveal the specific type of buffer causing the high buffer waits. Your fix
will depend on which buffer class causes the buffer waits, as summarized in the following subsections.
Contention for Segment Header
Every table and index segment has a header block. This block contains the following metadata:
information about the high watermark of the segment, a list of the extents making up the segment,
and information about the free space. To manage the free space, the header block contains
(depending on the type of segment space management that is in use) either freelists or a list of
blocks containing automatic segment space management information. Typically, contention
for a segment header block is experienced when its content is modified by several processes
concurrently. Note that the header block is modified in the following situations:
• If INSERT statements make it necessary to increase the high watermark
• If INSERT statements make it necessary to allocate new extents
• If DELETE, INSERT, and UPDATE statements make it necessary to modify a freelist
A possible solution for these situations is to partition the segment in order to spread the
load over several segment header blocks. Most of the time, this might be achieved with hash
partitioning, although, depending on the load and the partition key, other partitioning methods
might work as well. However, if the problem is because of the second or third situation, other
solutions exist. For the second, you should use bigger extents. In this way, new extents would
seldom be allocated. For the third, which does not apply to tablespaces using automatic segment
space management, freelists can be moved into other blocks by means of freelist groups. In
fact, when several freelist groups are used, the freelists are no longer located in the segment
header block (they are spread on a number of blocks equal to the value specified with the
parameter FREELIST GROUPS, so you will have less contention on them—you are not simply
moving the contention to another place!). Another possibility is to use a
tablespace with automatic segment space management instead of freelist
segment space management.
In short: if your queries show that the buffer waits are caused by contention on
the segment header, there is free list contention in the database, due to
several processes attempting to insert into the same data block; each of these
processes needs to obtain a free list before it can insert data into that
block. If you aren't already using it, switch from manual space management to
automatic segment space management (ASSM); under ASSM, the database doesn't use
free lists. Note, however, that moving to ASSM may not be easily feasible in
every case. Where you can't implement ASSM, increase the number of free lists
for the segment in question; you can also try increasing the free list groups.
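The two remedies above, moving to ASSM or adding freelists, can be sketched as DDL; all object names, file names, and sizes below are placeholders:

```sql
-- Option 1: move the segment to an ASSM tablespace (no freelists).
CREATE TABLESPACE users_assm                      -- placeholder name
  DATAFILE 'users_assm01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

ALTER TABLE orders MOVE TABLESPACE users_assm;    -- placeholder table
-- Remember to rebuild the table's indexes after a MOVE.

-- Option 2: under manual segment space management, add freelists.
ALTER TABLE orders STORAGE (FREELISTS 4);
```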
Contention for Undo Header and Undo Block
Contention for these types of blocks occurs in two situations. The first, and only for undo
header blocks, is when few undo segments are available and lots of transactions are concurrently
committed (or rolled back). This should be a problem only if you are using manual undo
management. In other words, it usually happens if the database administrator has manually
created the rollback segments. To solve this problem, you should use automatic undo management.
The second situation is when several sessions modify and query the same blocks at the
same time. As a result, lots of consistent read blocks have to be created, and this requires you
to access both the block and its associated undo blocks. There is little that can be done about
this situation other than reducing the concurrency for the data blocks, which reduces the
contention for the undo blocks at the same time.
Contention for Extent Map Blocks
As discussed in the section “Contention for Segment Header Blocks,” the segment header
blocks contain a list of the extents that make up the segment. If the list does not fit in the
segment header, it is distributed over several blocks: the segment header block and one or
more extent map blocks. Contention for a segment header block is experienced when concurrent
INSERT statements have to constantly allocate new extents. To solve this problem, you
should use bigger extents.
Contention for Freelist Blocks
As discussed in the section “Contention for Segment Header Blocks,” freelists can be moved
into other blocks, called freelist blocks, by means of freelist groups. Contention for a freelist
block is experienced when concurrent DELETE, INSERT, or UPDATE statements
have to modify the freelists. To solve this problem, you should
increase the number of freelist groups. Another possibility is to use a
tablespace with automatic segment space management instead of freelist
segment space management.
Contention for Data Blocks
All the blocks that make up a table or index and store actual data are called data blocks. Contention for them has two main causes.
The first is
when the frequency of table or index scans on a given segment is very
high. This problem is because of inefficient execution plans causing
frequent table or index scans over the same blocks. Usually it is
because of inefficient related-combine operations (for example, nested
loop joins). Here, even two or three SQL statements executed
concurrently might be enough to cause contention.
The second is
when the frequency of executions is very high. This problem is the
execution of several SQL statements accessing the same block at the same
time. In other words, it is the number of SQL statements executed
concurrently against (few) blocks that is the problem. It might be that
both happen at the same time. If this is the case, take care of solving
the first problem before facing the second one. In fact, the second
problem might disappear when the first is gone.
To solve the first problem, SQL tuning is necessary. An efficient execution plan must be
executed in place of the inefficient one.
To solve the second problem, several approaches are available. Which one you have to use
depends on the type of the SQL statement (that is, DELETE, INSERT, SELECT,1 and UPDATE) and on the type of the segment (that is, table or index).
However,
before starting, you should always ask one question when the frequency
of execution is high: is it really necessary to execute those SQL
statements against the same data so often? Actually, it is not unusual
to see applications that unnecessarily execute the same SQL
statement too often. If the frequency of execution cannot be reduced, there are the following
possibilities.
• If there is contention for a table’s blocks because of DELETE, SELECT, and UPDATE statements, you should reduce the number of rows per block. Note that this is the opposite of
the common best practice to fit the maximum number of rows per block. To store fewer
rows per block, either a higher PCTFREE or a smaller block size can be used.
• If there is contention for a table’s blocks because of INSERT statements and freelist segment
space management is in use, the number of freelists can be increased. In fact, the goal of
having several freelists is precisely to spread concurrent INSERT statements over several
blocks. Another possibility is to move the segment into a tablespace with automatic
segment space management.
• If there is contention for an index’s blocks, there are two possible solutions. First, the
index can be created with the option REVERSE. Note, however, that this method does not
help if the contention is on the root block of the index. Second, the index can be hash
partitioned, based on the leading column of the index key (this creates multiple root
blocks and so helps with root block contention if a single partition is accessed). Because
global hash-partitioned indexes are available as of Oracle Database 10g only, this is not
an option with Oracle9i.
The important thing to note about reverse indexes is that range scans on them cannot
apply restrictions based on range conditions (for example, BETWEEN, >, or <=). Of course, equality predicates are supported.
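The two index remedies above can be sketched as DDL; the index, table, and column names are placeholders:

```sql
-- Reverse key index: spreads sequential keys over many leaf blocks,
-- but range scans cannot use range predicates against it.
CREATE INDEX orders_id_rev ON orders (order_id) REVERSE;

-- 10g+: global hash-partitioned index, giving multiple root blocks
-- and relieving root block contention.
CREATE INDEX orders_id_hash ON orders (order_id)
  GLOBAL PARTITION BY HASH (order_id) PARTITIONS 8;
```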
A buffer
busy wait indicates that more than one process is simultaneously
accessing the same data block. One of the reasons for a high number of
buffer busy waits is that an inefficient query is reading too many data
blocks into the buffer cache, potentially keeping other sessions waiting
to access one or more of those same blocks. Not only
that, a query that reads too much data into the buffer cache may lead to
the aging out of necessary blocks from the cache. You must investigate
queries that involve the segment causing the buffer busy waits with a
view to reducing the number of data blocks they’re reading into the
buffer cache.
If your
investigation of buffer busy waits reveals that the same block or set of
blocks is involved most of the time, a good strategy would be to delete
some of these rows and insert them back into the table, thus forcing
them onto different data blocks.
Check your current memory allocation to the buffer cache, and, if necessary, increase it. A larger
buffer cache can reduce the waiting by sessions to read data from disk, since more of the data will
already be in the buffer cache. You can also place the offending table in memory by using the KEEP pool in the buffer cache. By making the hot block always available in memory, you'll
avoid high buffer busy waits.
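The KEEP pool suggestion can be sketched as follows; the cache size and table name are placeholders:

```sql
-- Size the KEEP pool first, then assign the hot table to it.
ALTER SYSTEM SET db_keep_cache_size = 256M;     -- placeholder size
ALTER TABLE orders STORAGE (BUFFER_POOL KEEP);  -- placeholder table
```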
Indexes that have a very low number of unique values are called low cardinality indexes. Low
cardinality indexes generally result in too many block reads. Thus, if several DML operations are
occurring concurrently, some of the index blocks could become “hot” and lead to high buffer busy waits.
As a long-term solution, you can try to reduce the number of the low cardinality indexes in your
database.
Each Oracle data segment such as a table or an index contains a header
block that records information such as free blocks available. When
multiple sessions are trying to insert or delete rows from the same
segment, you could end up with contention for the data segment’s header
block.
Summary
• data block: if OTYPE = INDEX, the insert index leaf block is probably hot; solutions are to hash partition the index or use a reverse key index. If OTYPE = TABLE, the insert block is hot; solutions are to use free lists or put the object in an ASSM tablespace.
• segment header: if "segment header" occurs at the same time as CLASS = "data block" on the same object, and the object is of OTYPE = "TABLE", this is just confirmation that the table needs to use free lists or ASSM.
• file header block: most likely extent allocation problems; look at the extent size on the tablespace and increase it so there are fewer extent allocations and less contention on the file header block.
• free lists: add free list groups to the object.
• undo header: not enough undo segments; if using old rollback segments, switch to automatic undo management (AUM).
• undo block: hot spot in undo; an application issue.