Oracle8i Designing and Tuning for Performance Release 2 (8.1.6) Part Number A76992-01
Contention occurs when multiple processes try to access the same resource simultaneously. Some processes must then wait for access to various database structures.
This chapter contains the following sections:
Symptoms of resource contention problems can be found in V$SYSTEM_EVENT. This view reveals various system problems that may be impacting performance, such as latch contention, buffer contention, and I/O contention. It is important to remember that these are only symptoms of problems, not the actual causes.
For example, by looking at V$SYSTEM_EVENT, you might notice many buffer busy waits. It may be that many processes are inserting into the same block and must wait for each other before they can insert. The solution might be to introduce free lists for the object in question.
Buffer busy waits may also have caused some latch free waits. Because most of these waits were caused by misses on the cache buffer hash chain latch, this was also a side effect of trying to insert into the same block. Rather than increasing SPIN_COUNT to reduce the latch free waits (a symptom), you should change the object to allow multiple processes to insert into free blocks. This approach effectively reduces contention.
The V$RESOURCE_LIMIT view provides information about current and maximum global resource utilization for some system resources. This information enables you to make better decisions when choosing values for resource limit-controlling parameters.
If the system has idle time, then start your investigation by checking V$SYSTEM_EVENT. Examine the events with the highest average wait time, then take appropriate action on each. For example, if you find a high number of latch free waits, then look in V$LATCH to see which latch is the problem.
For excessive buffer busy waits, look in V$WAITSTAT to see which block type has the highest wait count and the highest wait time. Look in V$SESSION_WAIT for cache buffer waits so you can decode the file and block number of an object.
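As a sketch, the decoding can be done with a query like the following. It assumes the 8i convention that, for the buffer busy waits event, the P1 column of V$SESSION_WAIT holds the file number and P2 the block number; verify the event name and parameter meanings against your release.

```sql
-- For sessions currently in a buffer busy wait, show the file and
-- block being waited for (P1 = file#, P2 = block# for this event).
SELECT SID, EVENT, P1 "FILE#", P2 "BLOCK#"
  FROM V$SESSION_WAIT
 WHERE EVENT = 'buffer busy waits';
```

The file and block numbers returned can then be matched against DBA_EXTENTS to identify the object.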
The rest of this chapter describes common contention problems. Remember that the different forms of contention are symptoms that can be fixed by making changes in one of two places: in the database or in the application.
Sometimes you have no alternative but to change the application in order to overcome performance constraints.
The rest of this chapter examines various kinds of contention and explains how to resolve problems. Contention may be for rollback segments, multi-threaded servers, parallel execution servers, redo log buffer latches, LRU latch, or for free lists.
This section discusses how to reduce contention for rollback segments. The following issues are explained:
Contention for rollback segments is reflected by contention for buffers that contain rollback segment blocks. You can determine whether contention for rollback segments is adversely affecting performance by checking the dynamic performance table V$WAITSTAT.
V$WAITSTAT contains statistics that reflect block contention. By default, this table is available only to the user SYS and to other users who have the SELECT ANY TABLE system privilege, such as SYSTEM. These statistics reflect contention for different classes of blocks.
Use the following query to monitor these statistics over a period of time while your application is running:
SELECT CLASS, COUNT
  FROM V$WAITSTAT
 WHERE CLASS IN ('system undo header', 'system undo block',
                 'undo header', 'undo block');
The result of this query might look like this:
CLASS              COUNT
------------------ ----------
system undo header       2089
system undo block         633
undo header              1235
undo block                942
Compare the number of waits for each class of block with the total number of requests for data over the same period of time. You can monitor the total number of requests for data over a period of time with this query:
SELECT SUM(VALUE)
  FROM V$SYSSTAT
 WHERE NAME IN ('db block gets', 'consistent gets');
The output of this query might look like this:
SUM(VALUE)
----------
    929530
The information in V$SYSSTAT can also be obtained through SNMP.
If the number of waits for any class of block exceeds 1% of the total number of requests, then consider creating more rollback segments to reduce contention.
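The 1% check can be expressed directly in SQL. This is a sketch combining the two queries above; it assumes the lowercase class and statistic names used by the 8i dynamic performance views (the COUNT column of V$WAITSTAT is referenced as a quoted identifier to avoid clashing with the COUNT function).

```sql
-- Ratio of undo-block waits to total data requests; a value over
-- 1% suggests creating more rollback segments.
SELECT ROUND(100 * w.undo_waits / s.total_gets, 2) "UNDO WAIT %"
  FROM (SELECT SUM("COUNT") undo_waits
          FROM V$WAITSTAT
         WHERE CLASS IN ('system undo header', 'system undo block',
                         'undo header', 'undo block')) w,
       (SELECT SUM(VALUE) total_gets
          FROM V$SYSSTAT
         WHERE NAME IN ('db block gets', 'consistent gets')) s;
```

Run this over the same measurement interval as the individual queries so the ratio compares like with like.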
To reduce contention for buffers containing rollback segment blocks, create more rollback segments. Table 21-1 shows some general guidelines for choosing how many rollback segments to allocate based on the number of concurrent transactions on your database. These guidelines are appropriate for most application mixes.
Table 21-1 Choosing the Number of Rollback Segments

Number of Concurrent Transactions (n)    Number of Rollback Segments Recommended
n < 16                                   4
16 <= n < 32                             8
32 <= n                                  n/4
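For example (the segment name and tablespace below are hypothetical placeholders, not from the original text), a new rollback segment can be created and brought online like this:

```sql
-- rbs13 and tablespace rbs are illustrative names.
CREATE ROLLBACK SEGMENT rbs13
  TABLESPACE rbs;

-- A newly created rollback segment is offline until brought online.
ALTER ROLLBACK SEGMENT rbs13 ONLINE;
```

To have the segment acquired automatically at instance startup, it would also need to be listed in the ROLLBACK_SEGMENTS initialization parameter.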
Performance of certain database features may degrade slightly when MTS is used. These features include BFILEs, parallel execution, inter-node parallel execution, and hash joins. This is because these features may prevent a session from migrating to another shared server while they are active.
A session may remain non-migratable after a request from the client has been processed, because these features do not store all of the user state information in the UGA; some of the state is left in the PGA. If a different shared server processed a later request from the client, then the part of the user state stored in the PGA would be inaccessible. To avoid this, individual shared servers often need to remain bound to a user session, which makes the session non-migratable among shared servers.
When using these features, you may need to configure more shared servers. This is because some servers may be bound to sessions for an excessive amount of time.
This section discusses how to reduce contention for processes used by Oracle's multi-threaded server (MTS) architecture:
The following views provide dispatcher performance statistics: V$DISPATCHER provides general information about dispatcher processes, and V$DISPATCHER_RATE provides dispatcher processing statistics.
The V$DISPATCHER_RATE view contains current, average, and maximum dispatcher statistics for several categories. Statistics with the prefix CUR_ are statistics for the current session. Statistics with the prefix AVG_ are the average values for the statistics since the collection period began. Statistics with the prefix MAX_ are the maximum values for these categories since statistics collection began.
To assess dispatcher performance, query the V$DISPATCHER_RATE view and compare the current values with the maximums. If your present system throughput provides adequate response time and current values from this view are near the average and below the maximum, then you likely have an optimally-tuned MTS environment.
If the current and average rates are significantly below the maximums, then consider reducing the number of dispatchers. Conversely, if current and average rates are close to the maximums, then you may need to add more dispatchers. A good rule-of-thumb is to examine V$DISPATCHER_RATE statistics during both light and heavy system use periods. After identifying your MTS load patterns, adjust your parameters accordingly.
If needed, you can also mimic processing loads by running system stress-tests and periodically polling the V$DISPATCHER_RATE statistics. Proper interpretation of these statistics varies from platform to platform. Different types of applications can also cause significant variations in the statistical values recorded in V$DISPATCHER_RATE.
This section discusses how to identify contention for dispatcher processes, how to add dispatcher processes, and how to enable connection pooling.
Contention for dispatcher processes is indicated by either of these symptoms:
V$DISPATCHER contains statistics reflecting the activity of dispatcher processes. By default, this view is available only to the user SYS and to other users who have the SELECT ANY TABLE system privilege, such as SYSTEM. These columns reflect busy rates for dispatcher processes:
IDLE    Displays the idle time for the dispatcher process in hundredths of a second.
BUSY    Displays the busy time for the dispatcher process in hundredths of a second.
If the database is only in use 8 hours per day, then statistics need to be normalized by the effective work times. You cannot simply look at statistics from the time the instance started. Instead, record statistics during peak workloads. If the dispatcher processes for a specific protocol are busy for more than 50% of the peak workload period, then by adding dispatcher processes, you may improve performance for users connected to Oracle using that protocol.
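One way to check the 50% guideline is to compute a busy rate per protocol from V$DISPATCHER. This is a sketch using the NETWORK, BUSY, and IDLE columns of the 8i view; because the columns accumulate since instance startup, for a true peak-period rate you would snapshot the values at the start and end of the peak window and difference them.

```sql
-- Cumulative busy percentage per protocol since instance startup.
SELECT NETWORK "PROTOCOL",
       ROUND(100 * SUM(BUSY) / (SUM(BUSY) + SUM(IDLE)), 2) "% BUSY"
  FROM V$DISPATCHER
 GROUP BY NETWORK;
```

A protocol showing more than 50% busy during the peak workload period is a candidate for additional dispatchers.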
V$QUEUE contains statistics reflecting the response queue activity for dispatcher processes. By default, this table is available only to the user SYS and to other users who have the SELECT ANY TABLE system privilege, such as SYSTEM. These columns show wait times for responses in the queue:
WAIT      The total waiting time, in hundredths of a second, for all responses that have ever been in the queue.
TOTALQ    The total number of responses that have ever been in the queue.
Use the following query to monitor these statistics occasionally while your application is running:
SELECT CONF_INDX "INDEX",
       DECODE(SUM(TOTALQ), 0, 'NO RESPONSES',
              SUM(WAIT)/SUM(TOTALQ) || ' HUNDREDTHS OF SECONDS')
       "AVERAGE WAIT TIME PER RESPONSE"
  FROM V$QUEUE Q, V$DISPATCHER D
 WHERE Q.TYPE = 'DISPATCHER'
   AND Q.PADDR = D.PADDR
 GROUP BY CONF_INDX;
This query returns the average time, in hundredths of a second, that a response waits in each response queue for a dispatcher process to route it to a user process. This query uses the V$DISPATCHER table to group the rows of the V$QUEUE table by MTS_DISPATCHERS parameter value index. The query also uses DECODE to recognize those protocols for which there have been no responses in the queue. The result of this query might look like this:
INDEX    AVERAGE WAIT TIME PER RESPONSE
-------- ------------------------------
       0 .1739130 HUNDREDTHS OF SECONDS
       1 NO RESPONSES
From this result, you can tell that a response in the queue for the first MTS_DISPATCHERS value's dispatchers waits an average of 0.17 hundredths of a second, and that there have been no responses in the queue for the second MTS_DISPATCHERS value's dispatchers.
If the average wait time for a specific MTS_DISPATCHERS value continues to increase steadily as your application runs, then by adding dispatchers, you may be able to improve performance of those user processes connected to Oracle using that group of dispatchers.
Add dispatcher processes while Oracle is running by using the SET option of the ALTER SYSTEM statement to increase the value of the MTS_DISPATCHERS parameter.
The total number of dispatcher processes is limited by the value of the initialization parameter MTS_MAX_DISPATCHERS. You may need to increase this value before adding dispatcher processes. The default value of this parameter is 5, and the maximum value varies depending on your operating system.
See Also: For more information on adding dispatcher processes, see Oracle8i Administrator's Guide and Net8 Administrator's Guide.
When system load increases and dispatcher throughput is maximized, it is not necessarily a good idea to immediately add more dispatchers. Instead, consider configuring the dispatcher to support more users with connection pooling.
The MTS_DISPATCHERS parameter lets you enable various attributes for each dispatcher. Oracle supports a name-value syntax to let you specify attributes in a position-independent, case-insensitive manner. For example:
MTS_DISPATCHERS = "(PROTOCOL=TCP)(POOL=ON)(TICK=1)"
The optional attribute POOL enables the Net8 connection pooling feature. TICK is the size of a network tick in seconds. The default TICK value is 15 seconds.
Multiplexing is used by a connection manager process to establish and maintain connections from multiple users to individual dispatchers. For example, several user processes may connect to one dispatcher by way of a single connection manager process.
The connection manager manages communication from users to the dispatcher by way of the single connection. At any one time, zero, one, or a few users may need the connection, while other user processes linked to the dispatcher by way of the connection manager process are idle. In this way, multiplexing is beneficial as it maximizes use of user-to-dispatcher process connections.
Multiplexing is also used for database link connections between dispatchers. The limit on the number of connections for each dispatcher is platform dependent. For example:
MTS_DISPATCHERS="(PROTOCOL=TCP)(MULTIPLEX=ON)"
This section discusses how to identify contention for shared servers and how to increase the maximum number of shared servers.
Steadily increasing wait times in the request queue indicate contention for shared servers. To examine wait time data, use the dynamic performance view V$QUEUE. This view contains statistics showing request queue activity for shared servers. By default, this view is available only to the user SYS and to other users with the SELECT ANY TABLE system privilege, such as SYSTEM. These columns show wait times for requests in the queue:
WAIT      Displays the total waiting time, in hundredths of a second, for all requests that have ever been in the queue.
TOTALQ    Displays the total number of requests that have ever been in the queue.
Monitor these statistics occasionally while your application is running by issuing the following SQL statement:
SELECT DECODE(TOTALQ, 0, 'NO REQUESTS',
              WAIT/TOTALQ || ' HUNDREDTHS OF SECONDS')
       "AVERAGE WAIT TIME PER REQUEST"
  FROM V$QUEUE
 WHERE TYPE = 'COMMON';
This query returns the results of a calculation that shows the following:
AVERAGE WAIT TIME PER REQUEST
-----------------------------
.090909 HUNDREDTHS OF SECONDS
From the result, you can tell that a request waits an average of 0.09 hundredths of a second in the queue before processing.
You can also determine how many shared servers are currently running by issuing this query:
SELECT COUNT(*) "Shared Server Processes"
  FROM V$SHARED_SERVER
 WHERE STATUS != 'QUIT';
The result of this query might look like this:
Shared Server Processes
-----------------------
                     10
If you detect resource contention with MTS, then first make sure that this is not a memory contention issue by examining the shared pool and the large pool. If performance remains poor, then you may want to create more resources to reduce shared server process contention. Do this by modifying the optional server process parameters as explained under the following headings.
This section explains how to set optional parameters affecting processes for the multi-threaded server architecture. This section also explains how and when to modify these parameters to tune performance.
The static initialization parameters discussed in this section are:
This section also describes the initialization/session parameters:
Values for the initialization parameters MTS_MAX_DISPATCHERS and MTS_MAX_SERVERS define upper limits for the number of dispatchers and servers running on an instance. These parameters are static and cannot be changed while your database is running. You can create as many dispatcher and server processes as you need, but the total number of processes cannot exceed the host operating system's limit for the number of running processes.
You can also define starting values for the number of dispatchers and servers by setting the MTS_DISPATCHERS parameter's DISPATCHERS attribute and the MTS_SERVERS parameter. After system startup, you can dynamically reset values for these parameters using the SET option of the ALTER SYSTEM statement to change the number of dispatchers and servers. If you enter values for these parameters in excess of the limits set by the static parameters, then Oracle uses the static parameter values.
The default value of MTS_MAX_SERVERS depends on the value of MTS_SERVERS. If MTS_SERVERS is less than or equal to 10, then MTS_MAX_SERVERS defaults to 20. If MTS_SERVERS is greater than 10, then MTS_MAX_SERVERS defaults to twice the value of MTS_SERVERS.
When the database starts, MTS_SERVERS determines the number of shared servers created. Oracle does not allow the number of shared servers to fall below this minimum. During processing, Oracle automatically adds shared servers, up to the limit defined by MTS_MAX_SERVERS, if the activity of the requests on the common queue warrants additional shared servers. Therefore, you are unlikely to improve performance by explicitly adding shared servers. However, you may need to adjust your system to accommodate certain resource issues.
If the number of shared server processes has reached the limit set by the initialization parameter MTS_MAX_SERVERS and the average wait time in the request queue is still unacceptable, then you might improve performance by increasing the MTS_MAX_SERVERS value.
If resource demands exceed expectations, then you can either allow Oracle to automatically add shared server processes or add them yourself by altering the value of MTS_SERVERS. You can change the value of this parameter in the initialization parameter file, or alter it using the MTS_SERVERS clause of the ALTER SYSTEM statement. Experiment with this limit and monitor shared servers to determine an ideal setting for this parameter.
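For example (the value 10 is illustrative), the shared server count can be set in the parameter file or adjusted dynamically:

```sql
-- In the initialization parameter file:
--   MTS_SERVERS = 10
-- Or dynamically, while the instance is running:
ALTER SYSTEM SET MTS_SERVERS = 10;
```

Values above the static MTS_MAX_SERVERS limit are capped at that limit.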
This is the first stage in troubleshooting MTS. Performance can degrade if there are not enough shared servers to process all the requests put toward the database.
Check for the initial setting of the maximum number of shared servers. For example:
SHOW PARAMETER MTS_MAX_SERVERS
Check for the highwater mark for shared servers. For example:
SELECT maximum_connections "MAXIMUM_CONNECTIONS",
       servers_started "SERVERS_STARTED",
       servers_terminated "SERVERS_TERMINATED",
       servers_highwater "SERVERS_HIGHWATER"
  FROM V$MTS;
The output is:
MAXIMUM_CONNECTIONS SERVERS_STARTED SERVERS_TERMINATED SERVERS_HIGHWATER
------------------- --------------- ------------------ -----------------
                 60              30                 30                50
Here, SERVERS_HIGHWATER should not be equal to the MTS_MAX_SERVERS parameter. If it is, then the instance has already hit its shared server limit at some point, and requests may have queued.
The other parameters are:
The shared servers are the processes that perform data access and pass this information back to the dispatchers. The dispatchers then forward the data to the client process. If there are not enough shared servers to handle all the requests, then the queue backs up (V$QUEUE), and requests take longer to process. However, before you check the V$QUEUE statistics, it is best to first check whether you are running out of shared servers.
Find out the amount of free RAM on the system. Use ps or another operating system utility to find out how much memory a shared server uses. Divide the amount of free RAM by the size of a shared server. This gives you the maximum number of shared servers you can add to your system.
The best way to proceed is to increase the MTS_MAX_SERVERS parameter gradually until you begin to swap. If swapping occurs due to the shared servers, then back off the number until swapping stops, or increase the amount of physical RAM. Because each operating system and application is different, the only way to find the correct setting for MTS_MAX_SERVERS is through trial and error.
To change MTS_MAX_SERVERS, edit the initialization parameter file: find the MTS_MAX_SERVERS parameter, change its value, save the file, and restart the instance. Remember that setting MTS_SERVERS equal to MTS_MAX_SERVERS should only be done if you are sure that you will be using the machine at full capacity all the time. The general rules are:

MTS_SERVERS should be set slightly greater than the expected number of shared servers needed when the database is at an average load.

MTS_MAX_SERVERS should be set slightly greater than the expected number of shared servers needed when the database is at peak load.
As mentioned, MTS_SERVERS determines the number of shared servers activated at instance startup. When MTS_DISPATCHERS is specified, the default setting for MTS_SERVERS is 1.
To determine the optimal number of dispatchers and shared servers, consider the number of users typically accessing the database and how much processing each requires. Also consider that user and processing loads vary over time. For example, a customer service system's load might vary drastically from peak OLTP-oriented daytime use to DSS-oriented nighttime use. System use can also predictably change over longer time periods such as the loads experienced by an accounting system that vary greatly from mid-month to month-end.
If each user makes relatively few requests over a given period of time, then each associated user process is idle for a large percentage of time. In this case, one shared server process can serve 10 to 20 users. If each user requires a significant amount of processing, then establish a higher ratio of servers to user processes.
In the beginning, it is best to allocate fewer shared servers. Additional shared servers start automatically as needed and are deallocated automatically if they remain idle too long. However, the initial servers always remain allocated, even if they are idle.
If you set the initial number of servers too high, then your system might incur unnecessary overhead. Experiment with the number of initial shared servers and monitor shared servers until you achieve ideal system performance for your typical database activity.
Use values for MTS_MAX_DISPATCHERS and MTS_DISPATCHERS that are at least equal to the maximum number of concurrent sessions divided by the number of connections per dispatcher. For most systems, a value of 1,000 connections per dispatcher provides good performance.
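For example, with a hypothetical peak of 3,000 concurrent sessions and 1,000 connections per dispatcher, at least 3000/1000 = 3 dispatchers are needed:

```sql
-- 3,000 expected sessions / 1,000 connections per dispatcher = 3.
-- MTS_MAX_DISPATCHERS must already be at least 3 (it is static).
ALTER SYSTEM SET MTS_DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';
```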
As mentioned, you can use the SET option of the ALTER SYSTEM statement to alter the number of active shared servers. To prevent additional users from accessing shared servers, set MTS_SERVERS to 0. This temporarily disables additional use of MTS. Resetting MTS_SERVERS to a positive value re-enables MTS for all current users.
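A sketch of disabling and later re-enabling MTS dynamically (the re-enable value is illustrative):

```sql
-- Prevent additional users from being serviced by shared servers:
ALTER SYSTEM SET MTS_SERVERS = 0;

-- Later, re-enable MTS for all current users:
ALTER SYSTEM SET MTS_SERVERS = 5;
```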
This section describes how to detect and alleviate contention for parallel execution servers when using parallel execution:
Statistics in the V$PQ_SYSSTAT view are useful for determining the appropriate number of parallel execution servers for an instance. The statistics that are particularly useful are SERVERS BUSY, SERVERS IDLE, SERVERS STARTED, and SERVERS SHUTDOWN.
Frequently, you cannot increase the maximum number of parallel execution servers for an instance, because the maximum number is heavily dependent upon the capacity of your CPUs and your I/O bandwidth. However, if servers are continuously starting and shutting down, then you should consider increasing the value of the initialization parameter PARALLEL_MIN_SERVERS.
For example, if you have determined that the maximum number of concurrent parallel execution servers that your machine can manage is 100, then you should set PARALLEL_MAX_SERVERS to 100. Next, determine how many parallel execution servers the average parallel operation needs, and how many parallel operations are likely to be executed concurrently. For this example, assume you have two concurrent operations with 20 as the average degree of parallelism. Thus, at any given time there could be 80 parallel execution servers busy on an instance. Therefore, you should set the PARALLEL_MIN_SERVERS parameter to 80.
Periodically examine V$PQ_SYSSTAT to determine whether the 80 parallel execution servers for the instance are actually busy. To do so, issue the following query:
SELECT * FROM V$PQ_SYSSTAT WHERE STATISTIC = 'Servers Busy';
The result of this query might look like this:
STATISTIC             VALUE
--------------------- -----------
Servers Busy          70
If you find that typically there are fewer than PARALLEL_MIN_SERVERS busy at any given time, then your idle parallel execution servers constitute system overhead that is not being used. Consider decreasing the value of the parameter PARALLEL_MIN_SERVERS. If you find that there are typically more parallel execution servers active than the value of PARALLEL_MIN_SERVERS, and the SERVERS STARTED statistic is continuously growing, then consider increasing the value of the parameter PARALLEL_MIN_SERVERS.
Contention for redo log buffer access rarely inhibits database performance. However, Oracle provides methods to monitor and reduce any latch contention that does occur. This section covers:
Access to the redo log buffer is regulated by two types of latches: the redo allocation latch and redo copy latches.
The redo allocation latch controls the allocation of space for redo entries in the redo log buffer. To allocate space in the buffer, an Oracle user process must obtain the redo allocation latch. Because there is only one redo allocation latch, only one user process can allocate space in the buffer at a time. The single redo allocation latch enforces the sequential nature of the entries in the buffer.
After allocating space for a redo entry, the user process may copy the entry into the buffer. This is called "copying on the redo allocation latch". A process may only copy on the redo allocation latch if the redo entry is smaller than a threshold size.
The user process first obtains the copy latch which allows the process to copy. Then it obtains the allocation latch, performs allocation, and releases the allocation latch. Next the process performs the copy under the copy latch, and releases the copy latch. The allocation latch is thus held for only a very short period of time, as the user process does not try to obtain the copy latch while holding the allocation latch.
If the redo entry is too large to copy on the redo allocation latch, then the user process must obtain a redo copy latch before copying the entry into the buffer. While holding a redo copy latch, the user process copies the redo entry into its allocated space in the buffer and then releases the redo copy latch.
If your computer has multiple CPUs, then your redo log buffer can have multiple redo copy latches. These allow multiple processes to copy entries to the redo log buffer concurrently.
On single-CPU computers, there should be no redo copy latches, because only one process can be active at once. In this case, all redo entries are copied on the redo allocation latch, regardless of size.
Heavy access to the redo log buffer can result in contention for redo log buffer latches. Latch contention can reduce performance. Oracle collects statistics for the activity of all latches and stores them in the dynamic performance view V$LATCH. By default, this table is available only to the user SYS and to other users who have the SELECT ANY TABLE system privilege, such as SYSTEM.
Each row in the V$LATCH table contains statistics for a different type of latch. The columns of the table reflect activity for the two different types of latch requests: willing-to-wait requests and immediate requests.
These columns of the V$LATCH view reflect willing-to-wait requests:
For example, consider the case in which a process makes a willing-to-wait request for a latch that is unavailable. The process waits and requests the latch again, and the latch is still unavailable. The process waits and requests the latch a third time and acquires the latch. This activity increments the statistics as follows:

The GETS value increases by one, because one request for the latch (the third request) was successful.

The MISSES value increases by one, because the initial request for the latch resulted in waiting.

The SLEEPS value increases by two, because the process waited for the latch twice, once after the initial request and again after the second request.
These columns of the V$LATCH table reflect immediate requests:

IMMEDIATE_GETS      The number of successful immediate requests for each latch.
IMMEDIATE_MISSES    The number of unsuccessful immediate requests for each latch.
Use the following query to monitor the statistics for the redo allocation latch and the redo copy latches over a period of time:
SELECT ln.name, gets, misses, immediate_gets, immediate_misses
  FROM v$latch l, v$latchname ln
 WHERE ln.name IN ('redo allocation', 'redo copy')
   AND ln.latch# = l.latch#;
The output of this query might look like this:
NAME                     GETS       MISSES     IMMEDIATE_GETS  IMMEDIATE_MISSES
------------------------ ---------- ---------- --------------- ----------------
redo allocation              252867         83               0                0
redo copy                         0          0           22830                0
From the output of the query, calculate the wait ratio for each type of request.
Contention for a latch may affect performance if either of these conditions is true:

The ratio of MISSES to GETS exceeds 1%.

The ratio of IMMEDIATE_MISSES to the sum of IMMEDIATE_GETS and IMMEDIATE_MISSES exceeds 1%.
If either of these conditions is true for a latch, then try to reduce contention for that latch. These contention thresholds are appropriate for most operating systems, though some computers with many CPUs may be able to tolerate more contention without performance reduction.
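A sketch of the two 1% checks in SQL, using the same views as the query above (GREATEST guards against division by zero on latches with no activity):

```sql
SELECT ln.name,
       ROUND(100 * l.misses / GREATEST(l.gets, 1), 3)
           "MISS %",
       ROUND(100 * l.immediate_misses /
             GREATEST(l.immediate_gets + l.immediate_misses, 1), 3)
           "IMMEDIATE MISS %"
  FROM v$latch l, v$latchname ln
 WHERE ln.name IN ('redo allocation', 'redo copy')
   AND ln.latch# = l.latch#;
```

A value above 1 in either column for a latch suggests trying to reduce contention for that latch.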
Most cases of latch contention occur when two or more Oracle processes concurrently attempt to obtain the same latch. Latch contention rarely occurs on single-CPU computers, where only a single process can be active at once.
To reduce contention for the redo allocation latch, you should minimize the time that any single process holds the latch. To reduce this time, reduce copying on the redo allocation latch. Decreasing the value of the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter reduces the number and size of redo entries copied on the redo allocation latch.
On multiple-CPU computers, multiple redo copy latches allow multiple processes to copy entries to the redo log buffer concurrently. The default value of LOG_SIMULTANEOUS_COPIES is the number of CPUs available to your Oracle instance.

If you observe contention for redo copy latches, then add more latches by increasing the value of LOG_SIMULTANEOUS_COPIES. Consider having twice as many redo copy latches as CPUs available to your Oracle instance.
The LRU (least recently used) latch controls the replacement of buffers in the buffer cache. For symmetric multiprocessor (SMP) systems, Oracle automatically sets the number of LRU latches to a value equal to one half the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.
Contention for the LRU latch can impede performance on SMP machines with a large number of CPUs. You can detect LRU latch contention by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider bypassing the buffer cache or redesigning the application.
You can specify the number of LRU latches on your system with the initialization parameter DB_BLOCK_LRU_LATCHES. This parameter sets the maximum value for the desired number of LRU latches. Each LRU latch controls a set of buffers; Oracle balances allocation of replacement buffers among the sets. When selecting a value for DB_BLOCK_LRU_LATCHES, keep in mind that it can range from 1 to twice the number of CPUs.
Free list contention can reduce the performance of some applications. This section covers:
A free list is a list of free data blocks that can be drawn from a number of different extents within the segment. Blocks in free lists contain free space greater than PCTFREE, the percentage of a block reserved for updates to existing rows. In general, blocks included in process free lists for a database object must satisfy the PCTFREE and PCTUSED constraints.
You can specify the number of process free lists with the FREELISTS parameter. The default value of FREELISTS is 1; this is the minimum value. The maximum value depends on the data block size. If you specify a value that is too large, an error message informs you of the maximum value. In addition, for each free list, you need to store a certain number of bytes in a block to handle overhead.
Contention for free lists is reflected by contention for free data blocks in the buffer cache. You can determine whether contention for free lists is reducing performance by querying the dynamic performance view V$WAITSTAT.
Use the following procedure to find the segment names and free lists that have contention:
1. Check V$WAITSTAT for contention on data blocks.

2. Check V$SYSTEM_EVENT for buffer busy waits. High numbers indicate that some contention exists.

3. Look in V$SESSION_WAIT to see, for each buffer busy wait, the values for FILE, BLOCK, and ID.

4. Use the file and block values to identify the segment:

SELECT SEGMENT_NAME, SEGMENT_TYPE
  FROM DBA_EXTENTS
 WHERE FILE_ID = file
   AND block BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS;

This returns the segment name (segment) and type (type).

5. Query the free list setting of that segment:

SELECT SEGMENT_NAME, FREELISTS
  FROM DBA_SEGMENTS
 WHERE SEGMENT_NAME = segment
   AND SEGMENT_TYPE = type;

Here file, block, segment, and type are the values obtained in the preceding steps.
The FREELISTS storage parameter of the ALTER statement for an object lets you modify the free list setting of existing database objects. To reduce contention for the free lists of a table, use the ALTER TABLE statement with a STORAGE (FREELISTS n) clause to add free lists. Set the value of this parameter proportional to the number of processes doing concurrent INSERTs in the steady state.
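For example (the table name and value are hypothetical; in releases before 8.1.6 the FREELISTS setting could only be changed by re-creating the object):

```sql
-- Allow roughly one free list per concurrent inserting process.
ALTER TABLE orders STORAGE (FREELISTS 4);
```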
See Also: For information about using free list groups in a Parallel Server environment, see Oracle8i Parallel Server Administration, Deployment, and Performance.
Copyright © 1996-2000, Oracle Corporation. All Rights Reserved.