Chapter 4. Locks

 
Oracle uses latches to protect data structures that are accessed briefly and intermittently. However, latches are not suitable for protecting resources that may be needed for a relatively long time, such as database tables. In such cases, a lock must be used instead. Locks allow sessions to join a queue for a resource that is not immediately available. This avoids spinning. Locks also allow multiple sessions to share a resource if their activities are compatible.
4.1 Lock Usage
Oracle uses locks for many different purposes. The following are the most important ones to understand for performance tuning.
4.1.1 Transaction Locks and Row-Level Locks
Oracle's much vaunted row-level locks are subtle. When a transaction modifies a row, its transaction identifier is recorded in an entry in the interested transaction list (ITL) in the header of the data block itself, and the row header is modified to point to that ITL entry. Once these changes have been made, no lock is retained. The ITL entry for the uncommitted transaction, together with the row header that references it, constitutes an implicit lock on the row.
When another transaction wants to modify the same row, and sees that an uncommitted transaction has modified that row, that transaction waits, not on a row-level lock, but on the transaction lock for the blocking transaction.
When the blocking transaction commits or rolls back, its transaction lock will be released. Its implicit row-level locks are thereby released, and so the blocked transaction can then proceed. Note that rolling back to a savepoint does not free previously blocked transactions that were waiting for a row-level lock.
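For example, the blocking and blocked sessions can be seen by joining V$LOCK to itself on the TX resource identifiers. The following query is only a sketch, assuming the standard V$LOCK columns; it reports each waiter against the session whose transaction lock it is queued on.

    SELECT holder.sid      AS holding_sid,
           waiter.sid      AS waiting_sid,
           waiter.request  AS requested_mode,
           holder.id1, holder.id2
    FROM   v$lock holder, v$lock waiter
    WHERE  holder.type    = 'TX'
    AND    holder.block   = 1
    AND    waiter.type    = 'TX'
    AND    waiter.request > 0
    AND    waiter.id1     = holder.id1
    AND    waiter.id2     = holder.id2;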

4.1.2 Buffer Locks
Row-level locks protect data integrity at the lowest feasible level of granularity, and remain in force for the duration of a transaction. However, Oracle also needs short-term block-level locks to be in force while accessing or modifying blocks in its cache.
Buffer locks are used to provide simple read/write locking for blocks in the database buffer cache. Although they are often taken for granted and seldom mentioned, buffer locks are essential to data integrity, and can feature prominently in certain performance tuning scenarios.

4.1.3 Data Dictionary Locks
The definitions of database objects in the data dictionary must be protected while they are being referenced. This is necessary to prevent those objects from being dropped, and to prevent their definitions from being changed, while they are being used. Dictionary locks must be held while dependent SQL statements are being parsed or executed, and must be retained for the duration of dependent transactions.
Several types of locks are used for dictionary locking. All of these are covered in some detail later in this chapter. The data dictionary rows themselves are locked with row cache enqueue locks. Dependent SQL statements are protected with library cache pins, and dependent transactions hold DML (Data Manipulation Language) locks. Logically, both DML locks and library cache pins are dependent on the corresponding row cache enqueue locks. However, this dependency is implicit in the code, rather than explicit in the structures.


4.2 Lock Modes
Locks are applied to both compound and simple objects. The classic example of a compound object and its component parts is a table and its rows. A cache buffer is an example of a simple object. Simple objects may only be locked in the following modes:
Exclusive
If a session needs to modify a simple object, then an exclusive lock is required on the resource to prevent any concurrent access.
Shared
If a session needs to inspect a simple object, then a shared lock on the resource is sufficient to ensure that the data structure will not be modified by another session, while allowing concurrent shared access.
Null
If a session has some information cached about an object, then a null mode lock may be held as a placeholder, even when the resource is not actively being used. A null mode lock does not inhibit any concurrent access, but if the resource is invalidated, the null mode lock acts as a trigger for the session to invalidate its private cached information. There is an important difference between holding a null mode lock, and not holding a lock at all.
In addition to the modes above, compound objects may also be locked in the following modes:
Sub-shared
If a session needs shared access to part of a compound object, then a shared lock on the entire compound resource would be unduly restrictive, because it would prevent exclusive access to other parts of the compound resource. In such cases, a sub-shared lock is used instead.
Sub-exclusive
If a session needs exclusive access to part of a compound resource, then a sub-exclusive lock is sufficiently restrictive.
Shared-sub-exclusive
This lock mode is used when a session needs exclusive access to part of a compound resource and shared access to the entire compound resource at the same time.
These lock modes apply both to local locks and to the instance locks that are used between parallel server instances. However, different terminology is used for instance locks. Table 4.1 shows the corresponding lock mode names together with the symbolic and numeric representations used in dumps and wait parameter values.
Table 4.1. Lock Modes

    Local Lock Modes                            Instance Lock Modes
    Name                  Symbol  Number        Name              Symbol  Number
    (No lock)             NLCK    0
    Null                  N       1             Null              NL      0
    Sub-Shared            SS      2             Concurrent Read   CR      1
    Sub-Exclusive         SX      3             Concurrent Write  CW      2
    Shared                S       4             Protected Read    PR      3
    Shared-Sub-Exclusive  SSX     5             Protected Write   PW      4
    Exclusive             X       6             Exclusive         EX      5

It is important to understand which lock modes are compatible with one another. Table 4.2 shows the complete lock mode compatibility matrix.


4.3 Enqueue Locks
Many of Oracle's locks are called enqueue locks. To enqueue a lock request is to place that request on the queue for its resource. So although the word "enqueue" is strictly speaking a verb, it is used adjectivally in the term enqueue lock. It is also used as a noun when referring to a particular enqueue resource, such as the CF (control file) enqueue.
Oracle uses two classes of local locks—those for which the lock and resource data structures are dynamically allocated in the shared pool, and those that use fixed arrays for the lock and resource data structures. Although almost all types of lock requests may be enqueued, the term enqueue should be taken to refer exclusively to those locks that use the fixed arrays for the lock and resource data structures, unless otherwise qualified.
4.3.1 Enqueue Resources
The fixed array for enqueue resources is sized by the ENQUEUE_RESOURCES parameter. The number of slots in this array that are in use varies from time to time, and these can be seen in V$RESOURCE. Each row in V$RESOURCE represents a resource that is currently locked in any mode by one or more sessions. These resources are not persistent: a resource is no longer defined once all locks on it have been released.
Rows in V$RESOURCE are identified by a two-character code representing the type of resource, and two numeric fields used to encode either the resource identity or the activities protected by locks on the resource, depending on the resource type. For example, resources of type TX represent entries in the transaction table of a rollback segment. The high-order two bytes of the first identifier contain the rollback segment number, and the low-order two bytes contain the transaction table slot number, while the second identifier contains the rollback segment wrap or sequence number.
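As a rough illustration, the rollback segment number, transaction table slot, and wrap number can be unpacked from the identifiers with simple arithmetic. This is only a sketch based on the byte layout described above.

    SELECT TRUNC(id1/65536) AS undo_segment,
           MOD(id1, 65536)  AS slot,
           id2              AS wrap
    FROM   v$resource
    WHERE  type = 'TX';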
All enqueue operations access the enqueue resource structure via a hash table. The hash value is based on the resource type and the numeric identifiers. The length of the enqueue hash table is set by the _ENQUEUE_HASH parameter. The default value of this parameter is derived directly from the PROCESSES parameter, as follows:
    45 + 2 * (PROCESSES + PROCESSES/10)
Because _ENQUEUE_HASH is derived directly from PROCESSES rather than from ENQUEUE_RESOURCES, it may be necessary to tune _ENQUEUE_HASH explicitly if ENQUEUE_RESOURCES has been raised significantly from its default value. Otherwise lengthy enqueue hash chains may develop. As with all hash tables, if you have cause to tune the number of buckets, you should make it a prime number (see Hash Tables and Prime Numbers).
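If you need to check the current values of these hidden parameters, a query along the following lines (run as SYS) is a common approach. The X$ column names used here are an assumption and vary somewhat between versions.

    SELECT i.ksppinm  AS parameter,
           v.ksppstvl AS value
    FROM   x$ksppi i, x$ksppcv v
    WHERE  i.indx = v.indx
    AND    i.ksppinm LIKE '\_enqueue%' ESCAPE '\';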
The enqueue hash chains are accessed under the protection of the enqueue hash chains latches. The number of child enqueue hash chains latches is set by the _ENQUEUE_HASH_CHAIN_LATCHES parameter, which defaults to the CPU_COUNT. In a high concurrency environment, sleeps may be recorded against the enqueue hash chains latches if the hash chains are allowed to become unduly long. However, sleeps against these latches should normally be regarded as a secondary result of contention for a higher-level latch, rather than attributed to long hash chains.

Hash Tables and Prime Numbers
Oracle uses hash tables internally so that objects can be located efficiently. For example, a hash table is used to locate database blocks in the buffer cache, and another hash table is used to locate named objects in the library cache.
To locate an object via a hash table, Oracle uses an algorithm to convert the object's name or identifier into a number. That number may be much larger than the size of the hash table, so it is converted to an index into the hash table using a simple modulus function.
Multiple objects may map to the same hash table entry. This is called a hash collision. Oracle normally resolves hash collisions using collision chains. This means that objects that map to the same hash table entry are linked together using a chain of pointers. These objects are said to fall into the same hash bucket.
The performance of hash-based access is sensitive to the length of the hash chains because they must be searched linearly. Therefore hash tables must be large enough to ensure that the average hash chain length remains short.
Long hash chains can also develop if the distribution of objects to hash buckets is uneven. This happens if there is any pattern in the names of the objects being hashed that the hash function is not able to randomize. This is surprisingly common.
By making the number of hash buckets a prime number, you can greatly reduce the risk of any pattern in the hash values resulting in hash collisions once the modulus function has been applied.
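A quick way to see the effect is to hash a set of keys that share a common stride, once into 64 buckets and once into 61 (a prime). The queries below are purely illustrative; they simply use object numbers multiplied by 16 as the patterned keys.

    SELECT MOD(object_id * 16, 64) AS bucket, COUNT(*) AS chain_length
    FROM   all_objects
    WHERE  ROWNUM <= 1024
    GROUP  BY MOD(object_id * 16, 64);

    SELECT MOD(object_id * 16, 61) AS bucket, COUNT(*) AS chain_length
    FROM   all_objects
    WHERE  ROWNUM <= 1024
    GROUP  BY MOD(object_id * 16, 61);

The first query maps every key into just four buckets, while the second spreads the same keys across almost all of the 61 buckets.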
4.3.2 Enqueue Locking
In addition to the enqueue resources, a second fixed array is used for enqueue locking—namely, the enqueue locks themselves. The size of the enqueue locks fixed array is set by the _ENQUEUE_LOCKS parameter, and the active rows can be seen in V$ENQUEUE_LOCK.
An enqueue lock structure is used by each session waiting for or holding a lock on a resource. If one or more sessions are waiting for locks on a resource, then their enqueue lock structures are linked together into a two-way linked list, with the enqueue resource structure as the list header. This linked list is maintained and serviced in the order in which the locks were requested. For example, if a lock is held in shared mode, and the first waiter requires access to the resource in exclusive mode, then other sessions that require shared access must queue for the resource behind the first waiter, despite the fact that their requests are compatible with the mode in which the resource is currently locked.
Similar two-way linked lists are used to link together the enqueue lock structures for sessions holding a lock on the resource, and for sessions waiting to change the mode of the lock that they are holding.
The operation of changing the mode of a lock is called an enqueue conversion. For example, if a transaction holds a lock on a particular table in sub-share mode, and needs to update a row of that table, then the enqueue lock must be converted to sub-exclusive mode. However, if the resource is currently locked in an incompatible mode by another session, then the conversion cannot proceed immediately and the enqueue lock structure is placed in the conversion queue. Enqueue conversions are serviced in order before new enqueue requests.
During enqueue operations, modifications to the enqueue resources and enqueue locks fixed array free lists (see the sidebar, "Fixed Array Free Lists") are made under the protection of the enqueues latch. There is only one enqueues latch, and it is often taken and released twice during the course of a single enqueue operation. However, the relevant enqueue hash chains latch is held for the duration of the operation.

4.3.3 Enqueue Waits
An enqueue wait occurs whenever an enqueue request or enqueue conversion cannot be granted immediately because another session is holding a lock on the resource in an incompatible mode. The blocked process records an enqueue wait. The wait parameters are shown in Table 4.3.
Table 4.3. Wait Parameters (enqueue waits)
    Parameter  Description
    p1         The high-order 2 bytes contain the ASCII codes for the resource type.
               The low-order 2 bytes contain the mode in which a lock is required.
    p2         The id1 identifier for the resource.
    p3         The id2 identifier for the resource.
Whenever a session releases an enqueue lock, it examines the lock request and conversion queues for the resource and, if appropriate, posts the next process that will be able to acquire a lock on the resource.
Processes waiting in an enqueue wait also set an alarm before they begin to wait. The timeout duration is dependent on the type of resource. For most enqueues, the enqueue wait timeout is 3 seconds.
Consecutive waits during a single attempt to acquire an enqueue lock are recorded as separate waits in the session and system wait statistics. However, the enqueue waits statistic in V$SYSSTAT is only incremented by one, after the lock has been acquired, as are the enqueue requests and enqueue conversions statistics. Note also that the enqueue timeouts statistic in V$SYSSTAT does not represent the number of enqueue wait timeouts. Rather, this statistic is incremented when an enqueue request or enqueue conversion is aborted entirely. This can be due to a distributed transaction timeout, but usually relates to locks requested in no-wait mode.
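The resource type and requested mode can be unpacked from p1 while a session is waiting. The arithmetic below simply splits p1 into its bytes as described in Table 4.3; it is a sketch rather than a definitive diagnostic.

    SELECT sid,
           CHR(TRUNC(p1/16777216)) ||
           CHR(TRUNC(MOD(p1, 16777216)/65536)) AS resource_type,
           MOD(p1, 65536)                      AS requested_mode,
           p2                                  AS id1,
           p3                                  AS id2
    FROM   v$session_wait
    WHERE  event = 'enqueue';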

Fixed Array Free Lists
The free slots in each of Oracle's fixed arrays are maintained on a free list. For each of these arrays, there is a free list header pointer that points to one of the free slots in the array. That slot, in turn, holds a pointer to the next free slot in the free list, and so on.
Free slots are always taken from the head of the free list, and are always returned to the head of the free list. This means that the tail of the free list normally remains unused, and the high-water mark is only advanced when necessary. This fact was used by the APT script fixed_table_hwms.sql under Oracle7 to extract the maximum usage of each fixed array from the corresponding X$ tables. This script is redundant in Oracle8, because the same functionality is now provided by the V$RESOURCE_LIMIT view.
The free list for each fixed array must be protected by a latch. For example, the process allocation latch protects the free list for the array of processes, and the session allocation latch protects the free list for the array of sessions.
If V$SYSSTAT shows a significant number of enqueue waits, then a breakdown of the resource types for which these waits have been sustained can be obtained from X$KSQST, or from the APT script enqueue_stats.sql. Unfortunately, X$KSQST does not contain any indication of the duration of the waits, so care is needed when interpreting these figures.
It is sometimes suggested that ENQUEUE_RESOURCES should be increased to combat enqueue waits. But please note that there is absolutely no substance to this suggestion. Oracle will return an ORA-52 or ORA-53 error if it fails to find a free slot in the enqueue resources or enqueue locks fixed arrays respectively. Beyond that, the setting of the ENQUEUE_RESOURCES and _ENQUEUE_LOCKS parameters is unimportant.
The V$RESOURCE_LIMIT view should be used to adjust your settings for the ENQUEUE_RESOURCES and _ENQUEUE_LOCKS parameters to ensure that you will not run out of slots in these arrays. You can afford to be generous, because slots in these arrays only take on the order of 72 bytes and 60 bytes respectively. I like to maintain headroom of at least 20% above the maximum utilization ever recorded.
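For example, something like the following shows how close each array has come to its limit since instance startup (the resource names are those used by V$RESOURCE_LIMIT):

    SELECT resource_name, current_utilization, max_utilization, limit_value
    FROM   v$resource_limit
    WHERE  resource_name IN ('enqueue_resources', 'enqueue_locks', 'dml_locks');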
4.3.4 Deadlock Detection
Oracle performs automatic deadlock detection for enqueue locking deadlocks. Deadlock detection is initiated whenever an enqueue wait times out, if the resource type required is regarded as deadlock sensitive, and if the lock state for the resource has not changed. If any session that is holding a lock on the required resource in an incompatible mode is waiting directly or indirectly for a resource that is held by the current session in an incompatible mode, then a deadlock exists.
If a deadlock is detected, the session that was unlucky enough to find it aborts its lock request and rolls back its current statement in order to break the deadlock. Note that this is a rollback of the current statement only, not necessarily the entire transaction. Oracle places an implicit savepoint at the beginning of each statement, called the default savepoint, and it is to this savepoint that the transaction is rolled back in the first case. This is enough to resolve the technical deadlock. However, the interacting sessions may well remain blocked.
An ORA-60 error is returned to the session that found the deadlock, and if this exception is not handled, then depending on the rules of the application development tool, the entire transaction is normally rolled back, and a deadlock state dump written to the user dump destination directory. This, of course, resolves the deadlock entirely. The enqueue deadlocks statistic in V$SYSSTAT records the number of times that an enqueue deadlock has been detected.
Application developers can eliminate all risk of enqueue deadlocks by ensuring that transactions requiring multiple resources always lock them in the same order. However, in complex applications, this is easier said than done, particularly if an ad hoc query tool is used. To be safe, you should adopt a strict locking order, but you must also handle the ORA-60 exception appropriately. In some cases it may be sufficient to pause for three seconds, and then retry the statement. However, in general, it is safest to roll back the transaction entirely, before pausing and retrying.
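As a sketch of that advice, a transaction might trap ORA-60, roll back, pause, and let the caller retry. The table name and the use of DBMS_LOCK.SLEEP here are illustrative assumptions, not part of Oracle's deadlock handling.

    DECLARE
      deadlock_detected EXCEPTION;
      PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
    BEGIN
      UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;  -- hypothetical table
      UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
      COMMIT;
    EXCEPTION
      WHEN deadlock_detected THEN
        ROLLBACK;            -- roll back the whole transaction, not just the statement
        DBMS_LOCK.SLEEP(3);  -- pause before retrying (requires EXECUTE on DBMS_LOCK)
        RAISE;
    END;
    /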

4.3.5 Blocking Locks
Oracle resolves true enqueue deadlocks so quickly that overall system activity is scarcely affected. However, blocking locks can bring application processing to a standstill. For example, if a long-running transaction takes a shared mode lock on a key application table, then all updates to that table must wait.
There are numerous ways of attempting to diagnose blocking lock situations, normally with the intention of killing the offending session. I will mention just a few.
Blocking locks are almost always TX (transaction) locks or TM (table) locks. When a session waits on a TX lock, it is waiting for that transaction to either commit or roll back. The reason for waiting is that the transaction has modified a data block, and the waiting session needs to modify the same part of that block. In such cases, the row wait columns of V$SESSION can be useful in identifying the database object, file, and block numbers concerned, and even the row number in the case of row locks. V$LOCKED_OBJECT can then be used to obtain session information for the sessions holding DML locks on the crucial database object. This is based on the fact that sessions with blocking TX enqueue locks always hold a DML lock as well, unless DML locks have been disabled.
It may not be adequate, however, to identify a single blocking session, because it may, in turn, be blocked by another session. To address this requirement, Oracle's utllockt.sql script gives a tree-structured report showing the relationship between blocking and waiting sessions. Some DBAs are loath to use this script because it creates a temporary table, which will block if another space management transaction is caught behind the blocking lock. Although this is extremely unlikely, the same information can be obtained from the DBA_WAITERS view if necessary. The DBA_WAITERS view is created by Oracle's catblock.sql script.
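In practice, the row wait columns and V$LOCKED_OBJECT can be combined roughly as follows. This is a sketch only; the bind variable stands for the object number reported in ROW_WAIT_OBJ#.

    -- which object, file, block, and row each blocked session is waiting on
    SELECT sid, row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
    FROM   v$session
    WHERE  lockwait IS NOT NULL;

    -- who holds DML locks on that object
    SELECT session_id, oracle_username, locked_mode
    FROM   v$locked_object
    WHERE  object_id = :row_wait_obj;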
Some application developers attempt to evade blocking locks by preceding all updates with a SELECT FOR UPDATE NOWAIT statement. However, if they allow user interaction between taking a sub-exclusive lock in this way and releasing it, then a more subtle blocking lock situation can still occur. If a user goes out to lunch while holding a sub-exclusive lock on a table, then any shared lock request on the whole table will block at the head of the request queue, and all other lock requests will queue behind it.
Diagnosing such situations and working out which session to kill is not easy, because the diagnosis depends on the order of the waiters. Most blocking lock detection utilities do not show the request order, and do not consider that a waiter can block other sessions even when it is not actually holding any locks. The APT script enqueue_locks.sql shows the locks held and wanted for each resource in order, together with the number of seconds that the lock has been held or wanted. This is intended to supplement other blocking lock detection utilities, such as Oracle's utllockt.sql.
Application developers can greatly reduce the risk of blocking lock problems by adopting an optimistic locking strategy (see the sidebar, "Optimistic Locking"), by avoiding coarse-granularity locking, and by designing their applications to run without DML locks.

4.3.6 Distributed Transactions
For distributed transactions, Oracle is unable to distinguish blocking locks and deadlocks, because not all of the lock information is available locally. To prevent distributed transaction deadlocks, Oracle times out any call in a distributed transaction if it has not received any response within the number of seconds specified by the _DISTRIBUTED_LOCK_TIMEOUT parameter. This timeout defaults to 60 seconds. If a distributed transaction times out, an ORA-2049 error is returned to the controlling session. Robust applications should handle this exception in the same way as local enqueue deadlocks.
Similarly, under release 8.0, parallel transactions, which consist of multiple sibling transaction branches, could deadlock undetectably with other simple transactions. If a simple transaction was blocked by one branch of a global transaction, and was blocking another, then Oracle's normal deadlock detection mechanism in release 8.0 would fail to detect the deadlock. To prevent this, Oracle timed out any enqueue lock acquisition or conversion request in a branch of a parallel transaction as though it were a distributed transaction, and an ORA-99 error was returned. The PARALLEL_TRANSACTION_RESOURCE_TIMEOUT parameter, which defaulted to 300 seconds, was used to control this timeout. In release 8.1, the deadlock detection algorithm has been improved to detect these deadlocks, and so this timeout is no longer required.


Optimistic Locking

Consider an airline seat reservation application. Two different customers may simultaneously ask two different operators whether a seat is available on a particular flight. What should the application do?
The application can use SELECT FOR UPDATE NOWAIT to retrieve the information. This guarantees that if a seat appears to be available, then it has already been locked, and a booking for that seat will be able to be successfully taken. This is called early locking, or pessimistic locking.
The alternative is to defer the taking of a lock until the customer resolves to make a booking. This is called late locking, or optimistic locking.
The choice of either pessimistic or optimistic locking affects the design of both business and application processes. So careful thought is needed. Pessimistic locking should be avoided where possible, despite being slightly easier to implement, because it increases the risk of blocking locks.
4.3.7 ITL Entry Shortages
There is an interested transaction list (ITL) in the variable header of each Oracle data block. When a new block is formatted for a segment, the initial number of entries in the ITL is set by the INITRANS parameter for the segment. Free space permitting, the ITL can grow dynamically if required, up to the limit imposed by the database block size, or the MAXTRANS parameter for the segment, whichever is less.
Every transaction that modifies a data block must record its transaction identifier and the rollback segment address for its changes to that block in an ITL entry. (However, for discrete transactions, there is no rollback segment address for the changes.) Oracle searches the ITL for a reusable or free entry. If all the entries in the ITL are occupied by uncommitted transactions, then a new entry will be dynamically created, if possible.
If the block does not have enough internal free space (24 bytes) to dynamically create an additional ITL entry, then the transaction must wait for a transaction using one of the existing ITL entries to either commit or roll back. The blocked transaction waits in shared mode on the TX enqueue for one of the existing transactions, chosen pseudo-randomly. The row wait columns in V$SESSION show the object, file, and block numbers of the target block. However, the ROW_WAIT_ROW# column remains unset, indicating that the transaction is not waiting on a row-level lock, but is probably waiting for a free ITL entry.
The most common cause of ITL entry shortages is a zero PCTFREE setting. Think twice before setting PCTFREE to zero on a segment that might be subject to multiple concurrent updates to a single block, even though those updates may not increase the total row length. The degree of concurrency that a block can support is dependent on the size of its ITL, and failing that, the amount of internal free space. Do not, however, let this warning scare you into using unnecessarily large INITRANS or PCTFREE settings. Large PCTFREE settings compromise data density and degrade table scan performance, and non-default INITRANS settings are seldom warranted.
One case in which a non-default INITRANS setting is warranted is for segments subject to parallel DML. If a child transaction of a PDML transaction encounters an ITL entry shortage, it will check whether the other ITL entries in the block are all occupied by its sibling transactions and, if so, the transaction will roll back with an ORA-12829 error, in order to avoid self-deadlock. The solution in this case is to be content with a lower degree of parallelism, or to rebuild the segment with a higher INITRANS setting. A higher INITRANS value is also needed if multiple serializable transactions may have concurrent interest in any one block.
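Where a larger ITL really is warranted, it is set when the segment is created or rebuilt. The table name and values below are hypothetical; the point is simply where INITRANS, MAXTRANS, and PCTFREE are specified.

    CREATE TABLE order_lines (
      order_id NUMBER,
      line_no  NUMBER,
      qty      NUMBER
    )
    INITRANS 8
    MAXTRANS 255
    PCTFREE  10;

    -- in release 8.1, an existing table can be rebuilt with a larger ITL
    ALTER TABLE order_lines MOVE INITRANS 8;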

4.4 Row Cache Enqueues
A cache of rows from the data dictionary is kept in the shared pool. This cache serves not only to reduce physical access to the data dictionary tables in the SYSTEM tablespace, but also enables fine-grained locking of individual data dictionary rows. The need for data dictionary locking was introduced at the start of this chapter (see Section 4.1.3).
The locks on the data dictionary rows themselves are called row cache enqueue locks. These locks are implemented in much the same way as general enqueue locks. The cached data dictionary row acts as the resource structure, and enqueue lock structures are dynamically allocated from the shared pool as required. Locks can be requested, converted, and released, and requests can wait and time out, just like the general enqueue locks. However, row cache enqueue locks are not included in V$LOCK. In fact, they are not visible anywhere except in system and process state dumps.
Depending on the operation, some row cache enqueue locks are requested in no-wait mode and an ORA-54 error is returned if the lock is not immediately available. Otherwise, row cache lock requests are enqueued if necessary, and the process waits on a row cache lock wait. The parameters for this wait are shown in Table 4.4.
Table 4.4. Wait Parameters (row cache lock waits)
    Parameter  Description
    p1         A number corresponding to the CACHE# column of V$ROWCACHE,
               representing the data dictionary table for which a row lock is needed.
    p2         The mode in which the lock is already held.
    p3         The mode in which the lock is needed.
The numeric codes used for the lock modes in the parameters for this wait are those for instance locks, rather than local locks, even when running single-instance Oracle. However, this wait is relatively rare in single-instance Oracle, resulting only from resource conflicts, whereas it is routine in parallel server because new lock requests must be coordinated via the distributed lock manager.
Oracle does not expect row cache enqueue lock acquisitions and conversions to block for more than a few seconds. Therefore, row cache lock waits time out every 3 seconds, and if the lock has still not been acquired after 100 timeouts (5 minutes), an internal deadlock is assumed, and the operation is aborted. A message is written to the alert log saying that a process "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK," and a process state dump is written to a trace file. Except for DDL against a long-running, in-use function, procedure, or package, this error should be treated as an Oracle bug and reported to Oracle Support.
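When this wait does occur, the dictionary cache concerned can be identified by joining the wait parameters to V$ROWCACHE, roughly as follows (DISTINCT guards against the subordinate cache rows):

    SELECT DISTINCT w.sid, r.parameter AS dictionary_cache,
           w.p2 AS mode_held, w.p3 AS mode_needed
    FROM   v$session_wait w, v$rowcache r
    WHERE  w.event  = 'row cache lock'
    AND    r.cache# = w.p1;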


4.5 Library Cache Locks and Pins
The library cache is not one cache, but many. It contains the pseudo code for PL/SQL program units. It contains parse trees and execution plans for shareable SQL statements. It also contains abstract representations in a form called DIANA of the database objects referenced by the SQL statements. The information is needed in this form for PL/SQL program unit compilation and SQL statement parsing and execution, despite the fact that the dictionary cache contains the same information in a different form. The library cache also contains control structures such as synonym translations, dependency tracking information, and library cache locks and pins.
Library cache locks are referred to as breakable parse locks in the Oracle documentation. They are applied to the library cache objects for SQL statements and PL/SQL program units, and recursively to the library cache objects for the database objects on which they depend. Library cache locks are held in shared mode during parse operations and are converted to null mode thereafter. If a DDL statement later modifies the definition of a database object, then the library cache information for that database object and all dependent library cache objects is invalidated by breaking the library cache locks.
Library cache locks can only be broken, however, when the library cache object is not also pinned. A pin is applied to the library cache object for a PL/SQL program unit or SQL statement while it is being compiled, parsed, or executed. Pins are normally held in shared mode, but are also held in exclusive mode while the library cache information for the object is being changed. The library cache objects for pipes and sequences are most subject to change. When a library cache object is pinned, pins are applied to all referenced objects in turn. When a pin is applied to the library cache object for a database object, then a corresponding row cache enqueue lock is acquired on the underlying data dictionary row, thereby preventing conflicting DDL.
Every object in the library cache has a handle that acts as the resource structure for library cache locks and pins. The handle, lock, and pin structures are all dynamically allocated within the shared pool. The handle implements two-way linked lists of locks held, locks waited for, pins held, and pins waited for. Sessions waiting for a lock or pin report a library cache lock or library cache pin wait respectively. The parameters for these waits are shown in Table 4.5.

Table 4.5. Wait Parameters (library cache lock and library cache pin waits)
Object type codes: 1 = table, procedure, and others; 2 = package body; 3 = trigger; 4 = index; 5 = cluster; 6 = object; 7 = pipe
If there are multiple readers of a single pipe, then library cache pin waits on the library cache object for that pipe will be routine, but brief. Other than that, library cache waits are relatively rare, although much more likely to be prolonged. These waits time out after three seconds and, if they do time out, deadlock detection is performed. If a deadlock is found, the lock or pin request is aborted and an ORA-4020 error is returned. This error is normally caused by ad hoc DDL. It should not be necessary to code your applications to handle this error.

4.6 DML Locks
Library cache pins and the associated row cache enqueue locks protect object definitions for the duration of parse and execute calls. However, for transactions that consist of a series of statements, equivalent locks need to be held for the duration of the transaction.
More than that, the lock mode may need to be raised partway through the transaction. For example, a table may first be queried, and then updated. This, of course, is why lock conversions are necessary. If the existing lock were to be released, even momentarily, it would be possible for the referenced object to be dropped or changed, and the transaction would then be unable to either proceed or roll back.
The possibility of rollback, particularly rollback to a savepoint, adds another dimension of complexity to dictionary locking. Namely, if a transaction is rolled back beyond the point at which a lock was upgraded, then the lock must be downgraded correspondingly, as part of the rollback operation, in order to reduce the risk of artificial deadlocks.
The requirements of dictionary locking for transactions and, in particular, the maintenance of a history of lock conversions, is provided by DML locks in conjunction with TM enqueues. Every transaction holding a DML lock also holds a TM enqueue lock. The basic locking functionality is provided by the enqueue, and the DML lock adds the maintenance of the conversion history.
The fixed array of DML lock structures is sized by the DML_LOCKS parameter. Its free list is protected by the dml lock allocation latch, and the active slots are visible in V$LOCKED_OBJECT. As with enqueue resources and locks, the number of slots in the DML locks fixed array is unimportant to performance, as long as you don't run out of free slots and get an ORA-55 error. Once again, V$RESOURCE_LIMIT can be used to adjust your setting for DML_LOCKS to ensure that this does not happen. Each slot only takes on the order of 116 bytes, so having a generous number of slots is not a problem.
4.6.1 Disabling DML Locks
DML locks and the associated TM enqueue locks can be disabled, either entirely, or just for certain tables. To disable these locks entirely, the DML_LOCKS parameter must be set to zero. In a parallel server database, it must be set to zero in all instances. To disable such locks against a particular table, the DISABLE TABLE LOCK clause of the ALTER TABLE statement must be used.
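For a single table, the statements look like this (the table name is hypothetical):

    ALTER TABLE order_lines DISABLE TABLE LOCK;

    -- re-enable temporarily for maintenance such as index rebuilds or reference data changes
    ALTER TABLE order_lines ENABLE TABLE LOCK;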
If locks are disabled for a table, then DML statements can still modify the table's blocks, and row-level locks are still held. However, the sub-shared mode table locks normally associated with queries, and the sub-exclusive mode table locks normally associated with DML, are not taken. Instead, transactions against the table are protected from conflicting DDL by simply prohibiting all attempts to take a lock on the entire table, and thus all DDL against the table.
There are two reasons for disabling DML locks and table locks. The first is to avoid the lock acquisition overhead. This is particularly important in parallel server databases where the transactions are short. In such cases, it may take longer to acquire the TM instance lock than to complete the rest of the transaction.
In single-instance Oracle, the lock acquisition overhead is relatively trivial. However, the disabling of table locks should still be considered to efficiently prevent blocking lock problems. A large class of blocking lock problems is caused by attempts to lock an entire table, sometimes for ad hoc DDL such as creating an index, but often for ad hoc DML against a referenced table where the relationship is not supported by a foreign key index.
Foreign keys referring to tiny reference tables are often indexed to prevent such problems. However, the presence of such indexes adds a significant overhead to DML against the main table. It is better to do without these indexes, and prevent blocking locks by disabling table locks. Of course, table locks will need to be enabled temporarily for maintenance tasks such as updating the reference data or rebuilding indexes. However, that is no hardship, as such operations are normally performed during a special maintenance window.
Of course, it is preferable to disable table locks on each table individually, rather than to disable them entirely by setting the DML_LOCKS parameter to zero. If DML_LOCKS is zero, you can create temporary tables but never drop them, and you have to shut down and start up the system twice for maintenance operations such as rebuilding indexes.


4.7 Buffer Locks
A form of enqueue locking is used to protect cached database blocks. For each buffer in the database buffer cache, there is a buffer header. The buffer headers constitute a fixed array in the permanent memory part of the shared pool. These buffer headers act as the resource structures for buffer locks. Sessions manipulate buffer headers, and thus buffers, via dynamically allocated structures known as buffer handles. The buffer handles act as the lock structures for buffer locks.
Buffer locks are taken only in shared and exclusive modes.[1] The buffer headers implement a two-way linked list of the buffer handles for sessions that are using the buffer, and another for the buffer handles of sessions waiting for the buffer. Sessions waiting for a buffer lock report either buffer busy waits, or buffer busy due to global cache waits, or write complete waits. The parameters for buffer busy waits are shown in Table 4.6.
[1] This is a simplification, but adequate for our purpose here.
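By way of illustration, and assuming the conventional meaning of the buffer busy wait parameters (file number, block number, and a reason code), the sessions currently waiting can be examined with a query along these lines:

    SELECT sid, p1 AS file#, p2 AS block#, p3 AS reason_code
    FROM   v$session_wait
    WHERE  event = 'buffer busy waits';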

The timeout for buffer busy waits backs off from 1 to 3 seconds. If a buffer lock for a block that is in cache cannot be acquired within a certain number of timeouts, and if the session is holding buffer locks on one or more other buffers, then a buffer lock deadlock is assumed. The number of timeouts to wait before a buffer lock deadlock is assumed is dependent on the operation being attempted, and whether it is part of a discrete transaction. Because discrete transactions do not hold transaction locks, and thus row-level locks, they must acquire all the buffer locks they need before any modifications can be made, and hold them all until the transaction is ready to make its changes and commit. This means that discrete transactions hold more buffer locks than normal transactions, and hold them for much longer.
If a buffer lock deadlock is suspected, the session that timed out trying to acquire a buffer lock releases the buffer locks that it is holding on other buffers, and immediately enqueues them again, thereby falling to the end of the queue of waiting sessions. It also posts the first process that was waiting for a lock on each of the buffers concerned, and then yields the CPU. Although yielding the CPU does not really constitute a wait, a buffer deadlock wait is recorded and the exchange deadlocks statistic is incremented. Assumed buffer lock deadlocks signal event 370, which can be caught to investigate such problems.
In parallel server databases, buffers can be locked for global cache operations such as writes in response to ping requests, and consistent reads for direct memory transfers by the block server process. If a request for a buffer lock cannot proceed because the buffer is locked for a global cache operation, then a buffer busy due to global cache wait is recorded.
Similarly, when buffer lock requests cannot proceed because the buffers are locked by DBWn as part of a batch of blocks to be written, then write complete waits are recorded. The timeout for these waits is 1 second, and the parameters are as shown in Table 4.7.
Table 4.7. Wait Parameters (write complete waits)
Parameter Description
p1 The file number of the database block.
p2 The block number of the database block in its file.
p3 The reason for the wait. The normal reason code is 1029; however, other values are seen at times.

4.8 Sort Locks
Sort locks apply to the disk space being used for disk sort operations. There are two types of sort locks: temporary table locks and sort segment locks. These correspond to temporary segments in PERMANENT tablespaces and TEMPORARY tablespaces respectively. There are fixed arrays in the SGA for each type of sort lock. Both arrays are sized by the SESSIONS parameter, which allows for the maximum possible usage of sort locks.
Sort locks are used merely to track disk sort space usage, and do not suffer from lock conflicts, waits, or deadlocks. However, you should not confuse sort locks with the ST (space transaction) enqueue, which is extremely prone to lock conflicts, waits, and even deadlocks. Contention for the ST enqueue is often associated with disk sorts, because it is needed for the creation, extension, and deallocation of temporary segments.
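A quick way to see whether the ST enqueue is currently contended is to look for sessions holding or requesting it; for example:

    SELECT sid, lmode, request, ctime
    FROM   v$lock
    WHERE  type = 'ST';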

4.9 Reference
This section contains a quick reference to the parameters, events, statistics, waits, and APT scripts mentioned in Chapter 4.
4.9.1 Parameters
Parameter Description
_DISTRIBUTED_LOCK_TIMEOUT Timeout for assumed deadlocks on distributed transactions. Defaults to 60 seconds.
_ENQUEUE_HASH The size of the enqueue hash table.
_ENQUEUE_HASH_CHAIN_LATCHES The number of latches used for access to the enqueue hash table. Defaults to the CPU count.
_ENQUEUE_LOCKS The number of enqueue lock structures.
DML_LOCKS The size of the DML locks fixed array. Where possible, DML locking should be disabled to reduce locking overheads and the risk of blocking locks.
ENQUEUE_RESOURCES The size of the enqueue resources array.
PARALLEL_TRANSACTION_RESOURCE_TIMEOUT Timeout for assumed deadlocks between the branches of a parallel transaction and another transaction in release 8.0.
TEMPORARY_TABLE_LOCKS This parameter is obsolete in Oracle8. It does still exist in release 8.0, but setting it has no effect.
4.9.2 Events
Event Description
60 This is the enqueue deadlock detection error. In cases of recurrent, mysterious deadlock problems, you may need to take a systemstate dump on this event to diagnose the interactions causing the deadlocks.
370 This event is signaled for assumed buffer cache deadlocks, and can be used for investigating severe buffer locking contention, using processstate dumps.
4020 This is the library cache deadlock detection error. With a systemstate dump on this event, you will be able to see what happened. Without it, you will never know.
4021 This is the library cache assumed deadlock timeout error. This timeout is needed because the library cache deadlock detection mechanism is not exhaustive, lest it be too expensive. Once again, this error is normally caused by ad hoc DDL.

4.9.3 Statistics
Statistic Source Description
enqueue conversions V$SYSSTAT Local enqueue conversions.
enqueue deadlocks V$SYSSTAT Local enqueue deadlocks detected and broken.
enqueue releases V$SYSSTAT Local enqueue releases.
enqueue requests V$SYSSTAT Local enqueue requests.
enqueue timeouts V$SYSSTAT Aborted local enqueue operations.
enqueue waits V$SYSSTAT The number of enqueue operations that waited. Not the number of waits.
exchange deadlocks V$SYSSTAT Number of local buffer deadlocks assumed. The statistic name reflects the fact that index block exchanges are one possible cause of such deadlocks.

4.9.4 Waits
Event Description
buffer busy due to global cache Waits to acquire a local buffer lock on a buffer that is locked for a global cache operation, such as a ping.
buffer busy waits Waits for a local buffer lock on a buffer that is locked in an incompatible mode.
buffer deadlock Assumed deadlocks while waiting for a local buffer lock.
enqueue These are waits for both local and global enqueues.
library cache load lock This wait is seen if two sessions attempt to load (not reload) the library cache information for an object simultaneously. Simultaneous reloads cause library cache pin waits instead.
library cache lock Waits to reference a library cache object that is in flux.
library cache pin Waits to modify a library cache object that is in flux.
row cache lock Waits to obtain either a local row cache enqueue or a row cache instance lock.
write complete waits Waits for a buffer lock on a block that is part of a normal write batch.

4.9.5 APT Scripts
Script Description
enqueue_locks.sql Shows enqueue locks held and wanted in the order requested.
enqueue_stats.sql Shows the breakdown of enqueue gets and waits by type.
fixed_table_hwms.sql Shows the high-water mark usage for the fixed tables under Oracle7. This can be used to check whether your settings for the corresponding initialization parameters are inadequate or overly generous. Under Oracle8, use V$RESOURCE_LIMIT instead.
