Chapter 6. Memory
Many tuning issues involve making decisions about memory allocation. Those decisions are complicated by the fact that Oracle manages much of its memory dynamically. To tune Oracle effectively, you need to understand both what it uses memory for and how it manages that memory.
6.1 The SGA
The System Global Area (SGA), together with the essential background processes, is definitive of an Oracle instance. It is a global area in the sense that it contains global variables and data structures, and it is a system area in the sense that it contains data structures that must be accessible to the entire Oracle instance, rather than just a particular process.
6.1.1 The SGA Areas
The SGA contains four or five main areas:
- The fixed area
- The variable area
- The database block buffers
- The log buffer
- The instance lock database (for parallel server instances)
In terms of memory size, the fixed area and the log buffer should be trivial.
6.1.1.1 The fixed area
The fixed area of the SGA contains several thousand atomic variables, small data structures such as latches and pointers into other areas of the SGA. These variables are all listed in the fixed table X$KSMFSV along with their data types, sizes, and memory addresses, as shown in Example 6.1. The names of these SGA variables are cryptic, and seldom of use to know. However, senior Oracle staff can obtain advanced diagnostic information by joining X$KSMFSV with X$KSMMEM to monitor the values of these variables or to probe the data structures that they point to. X$KSMMEM has one row for every memory address in the SGA, and one non-key column which exposes the contents of the memory locations.
Example 6.1. The Redo Allocation Latch as Seen from X$KSMFSV
SQL> select ksmfsnam, ksmfstyp, ksmfssiz, ksmfsadr
  2> from x$ksmfsv where ksmfsnam = 'kcrfal_';

KSMFSNAM    KSMFSTYP    KSMFSSIZ  KSMFSADR
kcrfal_     ksllt            120  C3F4D13C
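For example, the following query (which, like all X$ queries, must be run as SYS) reads the word of SGA memory at the address recorded for kcrfal_. It is only a sketch: the name of the value column in X$KSMMEM is assumed here to be KSMMMVAL, and should be checked against your release.

    select m.addr, m.ksmmmval
      from x$ksmmem m
     where m.addr = (select f.ksmfsadr
                       from x$ksmfsv f
                      where f.ksmfsnam = 'kcrfal_');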
The size of each component of the fixed area of the SGA is fixed. That is, these sizes do not depend on the setting of any initialization parameter, or on anything else. Thus the offset into the fixed SGA for each variable is fixed, as is the total size of the fixed area itself.

6.1.1.2 The variable area
The variable area of the SGA is made up of the large pool and the shared pool. All memory in the large pool is dynamically allocated, whereas the shared pool contains both dynamically managed memory and permanent memory. The SHARED_POOL_SIZE parameter actually specifies the approximate amount of memory in the shared pool available for dynamic allocation, rather than the total size of the shared pool itself.
The permanent shared pool memory contains a variety of data structures such as the buffer headers, the process, session, and transaction arrays, the enqueue resources and locks, the online rollback segment array, and various arrays for recording statistics.
The sizes of most of these arrays are dependent on the settings of one or more initialization parameters. These initialization parameters cannot be changed without shutting down the instance, and so the sizes of the permanent memory arrays are fixed for the life of each instance. For example, the size of the process array is set by the PROCESSES parameter. If all the slots in this array are in use, then any further attempts to create another process in the instance will fail, because the array cannot be dynamically resized.
For many of the permanent memory arrays, there are X$ tables that export each array element as a row, and certain of the structure members as columns. These X$ tables are sometimes called fixed tables. There are also corresponding V$ views that expose the most useful columns of the X$ tables, but only for the rows representing array slots that are currently in use. For example, the V$PROCESS view is based on the X$KSUPR fixed table, which is in turn based on the process array in memory. V$PROCESS does not include all the rows and columns in X$KSUPR, and X$KSUPR does not expose all the members of the SGA process structure.
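As a quick illustration, the following sketch compares the number of process array slots exported by X$KSUPR with the number of slots in use as shown by V$PROCESS. The difference is the number of free slots still available under the PROCESSES parameter.

    select a.all_slots, u.used_slots, a.all_slots - u.used_slots free_slots
      from (select count(*) all_slots  from x$ksupr)   a,
           (select count(*) used_slots from v$process) u;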


Learning More About the X$ Tables
People often ask how they can learn more about the X$ tables. My first answer is to say that there is not much of use in the X$ tables that is not also visible in the V$ views. Most of the few useful scraps of information that can be gleaned directly from the X$ tables, but not the V$ views, can be readily obtained using scripts such as those referred to in this book.
But, for those with the passion to know and the hours to burn, the APT script fixed_table_columns.sql, which is based on V$FIXED_TABLE, will give you a list of all the X$ tables, their columns, and their data types. You can then use the APT script fixed_view_text.sql, which is based on V$FIXED_VIEW_DEFINITION, to get the SQL statement text for all the V$ view definitions. From this information it is easy to work out which X$ tables and which X$ table columns are visible in a V$ view and which are not. Then, working out what extra information the X$ tables contain is a matter of guesswork, trial, and probably some error.
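For example, the definition of a single V$ view can be fetched as follows; V$PROCESS is used here merely as an illustration.

    select view_definition
      from v$fixed_view_definition
     where view_name = 'V$PROCESS';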
Remember that the X$ tables change significantly from release to release, so scripts should only be based on the X$ tables when it is really necessary.
The size of the variable area of the SGA is equal to the LARGE_POOL_SIZE, plus the SHARED_POOL_SIZE, plus the size of the permanent memory arrays. The total size of the permanent memory arrays can, in theory, be calculated from the settings of the initialization parameters. However, you need to know the formulae used to derive the array sizes from the parameters, the size of each type of array element in bytes, and the sizes of the array headers where applicable. These all change from release to release, and there are also operating system dependent differences. You also need to be aware that each permanent memory array is aligned on a memory page boundary to optimize memory addressing, and so some space is left unused. Fortunately, it is seldom necessary to calculate the permanent memory size precisely. If you really need this information, you can start up a test instance with a dummy SID and measure the permanent memory size, without needing to mount a database.
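If an approximation is good enough, the following sketch sums the shared pool rows of V$SGASTAT and subtracts the SHARED_POOL_SIZE parameter. Because SHARED_POOL_SIZE specifies only the dynamically allocatable memory, the difference is roughly the permanent memory in the shared pool.

    select s.bytes           shared_pool_total,
           p.bytes           shared_pool_size,
           s.bytes - p.bytes approx_permanent_memory
      from (select sum(bytes) bytes
              from v$sgastat
             where pool = 'shared pool') s,
           (select to_number(value) bytes
              from v$parameter
             where name = 'shared_pool_size') p;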
6.1.1.3 The database block buffers
This area of the SGA buffers copies of the database blocks. The number of buffers is specified by the DB_BLOCK_BUFFERS parameter, and the size of each buffer is equal to the DB_BLOCK_SIZE for the database. This area of the SGA contains only the buffers themselves, not their control structures. For each buffer there is a corresponding buffer header in the variable area of the SGA. Similarly, the working set headers, the hash chain headers, and their latches reside in the variable area of the SGA. Therefore, you will notice that the size of the variable area of the SGA will change by approximately 1K for every four buffers in the database block buffers area of the SGA.

6.1.1.4 The log buffer
The size of the log buffer area of the SGA is based on the value specified by the LOG_BUFFER parameter. However, the log buffer will be silently enlarged if an attempt is made to set it to less than its minimum size. The minimum size is four times the maximum database block size supported for the platform. On operating systems that support memory protection, the log buffer is bracketed by two guard pages (or, more correctly, memory protection units) to prevent corruption of the log buffer by errant Oracle processes. Nevertheless, the log buffer area of the SGA should be trivial by comparison with the size of the variable area and the database block buffers. The log buffer is internally divided into blocks. For each log buffer block, there is an 8-byte header in the variable area of the SGA.
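Both the buffer and log buffer sizing can be cross-checked against the figures reported at startup. Database Buffers in V$SGA should equal DB_BLOCK_BUFFERS multiplied by DB_BLOCK_SIZE, while Redo Buffers will exceed the LOG_BUFFER parameter by the rounding described above and, where applicable, the guard pages.

    select name, value from v$sga;

    select name, value
      from v$parameter
     where name in ('db_block_buffers', 'db_block_size', 'log_buffer');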

6.1.1.5 The instance lock database
In parallel server configurations, instance locks are used to serialize access to resources that are shared between instances. This area of the SGA maintains a database of the resources in which this instance has an interest, the processes and instances that may need those resources, and the locks currently held or requested by those processes and instances. The sizes of these three arrays are set by the LM_RESS, LM_PROCS, and LM_LOCKS parameters respectively. The instance lock database also includes message buffers and other structures. This area of memory is required even in single-instance Oracle. However, in this case its size is trivial.
This area is not presently included by Oracle when reporting the composition and size of the SGA at instance startup; however, it can be seen in dumps taken with the ORADEBUG IPC command in svrmgrl.
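The dump can be taken as follows. The output is written to a trace file in the directory named by the USER_DUMP_DEST parameter.

    SVRMGR> connect internal
    SVRMGR> oradebug setmypid
    SVRMGR> oradebug ipc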

6.1.1.6 Overhead
The last small area of the SGA is the shared memory overhead itself. This area contains information about the shared memory segments in use, and the SGA areas and sub-areas that they contain.

6.1.2 Shared Memory
The SGA resides in shared memory on most operating systems. To understand shared memory segments, you need some understanding of memory segments generally, and thus of virtual memory.
6.1.2.1 Virtual memory addressing
Today, virtual memory addressing is so prevalent that the alternative of direct memory addressing is almost only a memory. If you once programmed for the Z80 or 8086 CPUs, then you may remember direct memory addressing. You had to know exactly which memory addresses were available to you, so that you did not reference nonexistent memory or corrupt the BIOS. If you needed to write a big program, bigger than the available memory or address space, then you had to break it into sections that could be loaded or switched into memory as required. In fact, the Oracle two-task architecture was initially adopted for this very reason.
Virtual memory addressing introduces a layer of abstraction between program code and physical memory. All memory references are dynamically translated from virtual memory addresses to physical memory addresses before each instruction is executed. The operating system maintains data structures, called page tables, to support virtual-to-physical memory address translation. The most recently used page table entries are cached in each CPU to optimize address translation. This cache is commonly called a translation lookaside buffer (TLB). To further optimize address translation, TLB lookups are performed in hardware. A TLB miss must be resolved by reference to the page tables in main memory. This operation is also performed by hardware in some cases. If hardware address translation fails, the CPU switches into a special execution context to ensure that a physical memory page is allocated for the virtual page and refreshed from disk if necessary. The page table entry is also copied into the TLB. Such hardware address translation failures are called page faults. If a page has to be read from disk, it is called a hard or major page fault—otherwise, it is a soft or minor page fault. After a minor page fault has been resolved, the CPU switches back into user mode and restarts the current instruction. However, while a major page fault is being resolved, the CPU time may be used to service other processes.
Virtual memory addressing enables programs to run when not all of their program code or data is currently in physical memory. This means that relatively inactive virtual pages can be temporarily removed from physical memory if necessary. If these pages have been modified, they must be saved to a temporary storage area on disk, called a paging file or swap space. The operation of writing one or a cluster of inactive memory pages to disk is called a page out, and the corresponding operation of reading them in again later when one of the pages is referenced is called a page in. Paging is the aspect of virtual memory management that allows large programs to run. It is effective because programs typically use only a small proportion of their virtual memory pages actively at any one time. The set of pages in active use by a process is called its working set.
Virtual memory addressing also enables programs to run from almost any location in physical memory. This means that it is possible to have many programs and their data in memory at the same time, and to switch between them very quickly. CPU time is not wasted while a process performs disk I/O, or waits for user input, or to resolve a page fault.

6.1.2.2 Memory access
Heavy paging activity can have a major impact on system performance, as is discussed later in this chapter. But first, it's important to note that address translation itself and memory access generally, apart from paging, also affect system performance significantly. Main memory access is expensive in terms of CPU cycles. Memory operates at much lower hardware clock speeds than CPUs do, and there is also a recovery time component required after each memory access before that memory bank can service another memory access by either the same CPU or another one. This is why computer manufacturers put so much effort into their CPU caching technology. Not only are page table entries cached in the TLB, but portions of user memory (called cache lines) are cached in a general cache as well. Sophisticated mechanisms are used to maintain consistency between main memory and the CPU caches (called cache coherency mechanisms). Cache lines are retained as long as possible to maximize cache hits, with a distinction often being made between program code and data because of their different locality properties. At the operating system level, scheduling algorithms are biased towards scheduling processes to get a time slice on the CPU on which they ran most recently. This is intended to minimize the probability of hardware address translation failures and CPU cache misses, and thereby to reduce main memory access.
Your control over memory access performance is limited to purchasing decisions. If you are lucky enough to have a say in such matters, here are the guidelines:
1. Reduce the impact of cache coherency mechanisms by buying fewer, faster CPUs.
2. Further reduce the risk of memory access contention between CPUs by buying a large number of small memory boards.
3. Reduce the cost of memory access for each CPU by buying the fastest available memory. However, if fast memory implies only a few large memory boards, and if you expect to scale beyond six CPUs, then prefer slower memory in more, smaller boards.
4. There are pitfalls associated with mixing different types of memory in the same system. Avoid this, unless your hardware vendor assures you that it is OK.

6.1.2.3 Process memory segments
One of the benefits of virtual memory addressing is that processes can use a large virtual memory address space regardless of the physical memory available. This enables process memory to be logically divided into distinct segments based on usage. These segments may be mapped to non-contiguous virtual memory addresses to allow for segment growth. Oracle uses the following segment types, as do programs generally:
Program text
The text segment contains the executable machine code for the program itself, excluding dynamically linked shared libraries. Text segments are normally marked read-only, so that they can be shared between multiple processes running the same program. For example, all Oracle processes execute the same oracle binary, albeit with different personalities. Regardless of how many processes are running in an instance, and regardless of how many instances are running that release of Oracle on the same server, only one copy of the program text is required in physical memory.
Initialized global data
This segment contains global data structures that are initialized by the compiler, such as text strings for use in trace output. Initialized data can theoretically be modified, and thus it is not shared between processes running the same program. Oracle makes little use of this segment.
Uninitialized global data
The uninitialized data segment is normally called the BSS (Block Started by Symbol) segment. This segment contains statically allocated global data structures that are initialized at runtime by the process itself. Oracle makes minimal use of the BSS segment.
Data heap
The heap is available to processes for the dynamic allocation of memory at runtime using the malloc() or sbrk() system calls. Oracle uses the heap for its PGA (process global area), which is discussed later in this chapter.
Execution stack
Whenever a function is called, the arguments and the return context are pushed onto the execution stack. The return context is essentially a set of CPU register values that describe the exact state of the process at the point of the function call. When the function call completes, the stack is popped and the context is resumed so that execution continues from the instruction immediately following the function call. The stack also holds variables that are local to a code block. Stack size is dependent on the depth of function call nesting, or recursion, and the memory requirements of the arguments and local variables. Oracle's stack space requirements are modest given its complexity.
Shared libraries
Shared libraries are collections of position-independent executable code implementing functions that may be required by a number of programs—in particular, collections of system call functions. Shared library segments are marked read-only and shared between all dependent processes, including Oracle processes. No more than one copy of each shared library is required in physical memory. Before a function in a shared library can be called, the process must open the shared library file, and map it into its address space using the mmap() system call. The alternative to using shared libraries is to include the required system call functions in the program text segment itself. This is necessary on operating systems that do not support shared libraries, or where the implementation is problematic. On most operating systems, Oracle uses shared libraries for system call functions but not for the Oracle server code itself. However, Java class libraries are compiled and dynamically linked as shared libraries.
Shared memory segments
Shared memory allows associated processes to cooperatively read and write common data structures in memory. Each process that needs to address a shared memory segment must first attach that segment into its virtual address space. This is normally done using the shmat() system call. Oracle uses shared memory segments for the SGA.
The location of these segments in the virtual address space of a process is operating system specific. Some operating systems reserve certain virtual address ranges for particular types of segments. Others allocate the text, data, and BSS segments at the extremities of the virtual address space range, leaving a contiguous unused address space range in between. The stack and heap are allocated at the opposite ends of this range, and grow towards the center. Other segments, such as shared memory segments, must be located between the stack and heap at a location specified by the program itself.
On such operating systems, it is sometimes necessary to control the address at which the SGA is attached, to prevent address range conflicts between the segments. In some cases, this can be done with the SHARED_MEMORY_ADDRESS and HI_SHARED_MEMORY_ADDRESS parameters, but on other systems it is necessary to use genksms and modify the attach address in the ksms.s file before relinking Oracle. Consult your Oracle installation guide for details.

6.1.2.4 Intimate shared memory
Each segment in the virtual address space of a process requires page table entries to support virtual-to-physical address translation. If two or more processes have mapped a single memory segment into the same location in their virtual address space, then they can theoretically share the same page table entries. This is called intimate shared memory.
Intimate shared memory boosts Oracle performance in several ways. In particular, it greatly increases the TLB hit rate for page table entries and thus reduces main memory access and speeds up execution significantly. In instances with large shared memory requirements and large numbers of processes, it also results in a significant saving in page table memory—on the order of hundreds of megabytes.
Under some operating systems intimate shared memory is used automatically for Oracle because there is no alternative. In some cases, it is not available because either the operating system or the hardware does not support it. However, in other cases, it is dependent on the _USE_ISM parameter or the size of the shared memory segments.
If _USE_ISM is set to TRUE (the default) on an operating system that supports program-selectable intimate shared memory, then Oracle sets a flag to request intimate shared memory from the operating system for its shared memory segments. On some other operating systems, intimate shared memory is only available for segments whose page table is itself an exact number of pages in size, in which case it is used automatically. For example, assuming a 32-bit address space and a 4KB memory page size, one page in the page table can address 4MB of memory. In this case, shared memory segments must be an exact multiple of 4MB in size if intimate shared memory is to be used. It is always possible to ensure this by making small adjustments to the SHARED_POOL_SIZE, DB_BLOCK_BUFFERS, and LOG_BUFFER parameters, and then checking the sizes of the SGA segments using the ORADEBUG IPC command.
A further optimization to address translation is possible on operating systems that allow some segments to use larger than normal memory page sizes. For example, you may be able to use the chatr command to request a large page size for the data or instruction segments for a particular executable. Using a large page size reduces the number of page table entries required for each segment, and thus improves the TLB hit rate for the segment, as well as reducing its load on the TLB. Oracle has some built-in dependencies on its memory page size, so check with Oracle Support as to whether it is safe to use a large page size for Oracle on your platform, before attempting to do so.

6.1.2.5 SGA allocation
When an Oracle instance is started, the sizes of the main SGA areas are first calculated based on the initialization parameters. These are the sizes shown by Oracle when reporting the SGA size. However, before shared memory segments are allocated, the size of each area is rounded up to a memory page boundary.
The areas are then divided into sub-areas, if necessary, so that no sub-area is larger than the operating system limit on the size of a shared memory segment (SHMMAX for System V shared memory under Unix). In the case of the variable area, there is an operating system specific minimum sub-area size, and so the size of the variable area is rounded up further to a multiple of the minimum sub-area size.
Oracle will allocate a single shared memory segment for the entire SGA if possible. However, if the SGA is larger than the operating system limit on the size of a single shared memory segment, then Oracle will use a best fit algorithm to group the sub-areas together into multiple shared memory segments no larger than the maximum size.
Under Oracle7 the variable area of the SGA had to reside in contiguous memory. Therefore, if the operating system did not allow Oracle to specify the virtual memory address at which shared memory segments were to be attached, and thereby to attach them contiguously, then the variable segment had to be small enough to fit in a shared memory segment by itself. This constraint no longer applies in Oracle8, because of the introduction of sub-areas.
It is commonly suggested that the operating system limit on the size of a single shared memory segment should be raised in order to allow Oracle to allocate the SGA in a single shared memory segment if possible. I follow this advice, but only for reasons of manageability. The performance difference is negligible at instance and process startup and is nil otherwise.

6.1.2.6 Paging
The operating system allocates physical memory pages for the SGA and Oracle processes from its page pool. The page pool comprises all physical system memory excluding that reserved for the operating system itself. A page is allocated from the page pool's free list whenever a virtual memory page that is not in physical memory is referenced. Pages are returned to the head of the free list when memory is deallocated.
If the number of pages on the free list falls below a configurable threshold (LOTSFREE in Unix System V based systems) then the operating system begins to look for inactive pages to page out. Pages are regarded as inactive if they have not been referenced for a certain amount of time. Inactive pages are moved to the end of the free list, but if they have been modified then their contents must first be saved to disk. Paging stops as soon as the number of free pages rises back above the threshold.
If the number of pages on the free list continues to fall, then the operating system steps up the pace of paging by regarding pages as inactive more quickly. However, under extreme memory pressure it is possible for the majority of physical memory to remain very active, so that the operating system searches in vain for sufficient inactive pages. In this case, some low-priority processes will be selected and deactivated entirely to ensure that inactive pages will be able to be found. Although many aspects of this operating system paging behavior are highly tunable, such tuning is seldom beneficial.
Heavy paging activity can have a disastrous effect on system performance. High memory usage with intermittent light paging, on the other hand, is of no concern; most systems have plenty of inactive memory that can be paged out with very little performance impact. Consistent light paging, however, is of some concern, because some moderately active pages in the SGA will be paged out repeatedly. Most operating systems provide a mechanism for Oracle to lock the SGA into physical memory so that it cannot be paged out. If paging is consistent, then the LOCK_SGA parameter should be set to TRUE so that the SGA is locked in memory.
On some operating systems, Oracle needs a special system privilege to be able to use this facility.
How do you determine whether your operating system is paging and, if it is, whether it's paging consistently or heavily? If you have plenty of free memory, then your system will not page at all. If free memory seems scarce, then you can monitor the number of pages paged out per second. This metric is available from the Performance Monitor under NT, or from the vmstat command under Unix. If this metric is constantly nonzero, then your system is paging consistently and the SGA should be locked into physical memory if possible. This applies particularly to operating systems with a paged file system buffer cache, such as NT and Solaris.
Note, however, that the page out rate is not a good indication of the intensity of paging activity on operating systems with a paged buffer cache. This is because buffered file system writes are handled by the paging subsystem and thus exaggerate the page out rate. A better indication of the intensity of paging activity on such systems is the scan rate. The scan rate is the number of pages that the operating system has searched per second while looking for inactive pages. The scan rate is reported by vmstat on Unix systems under the sr column heading. Paging may be regarded as light if the scan rate is below 10 pages per second.
If paging activity is moderate or heavy, then memory pressure must be reduced either by reducing the demand for memory, or by buying more. In particular, beware of oversizing the SGA and then locking it into memory.


6.2 The Shared Pool
The part of the SGA that is most commonly oversized is the shared pool. Many DBAs have little understanding of what the shared pool is used for, and how to determine whether it is correctly sized. So they just make it "BIG!" Sometimes that is not big enough, but more often it is wasteful and can also impair performance.
6.2.1 Chunks
To understand the shared pool better, you need to do little more than take a careful look at X$KSMSP . Each row in this table represents a chunk of shared pool memory. Example 6.2 shows some sample rows.
Example 6.2. Sample Chunks in the Shared Pool
SQL> select ksmchcom, ksmchcls, ksmchsiz from x$ksmsp;

KSMCHCOM          KSMCHCLS  KSMCHSIZ
KGL handles recr 496
PL/SQL MPCODE recr 1624
dictionary cach freeabl 4256
free memory free 1088
library cache freeabl 568
library cache recr 584
multiblock rea freeabl 2072
permanent memor perm 1677104
row cache lru recr 48
session param v freeabl 2936
sql area freeabl 2104
sql area recr 1208
...

When each shared pool chunk is allocated, the code passes a comment to the function that is called to perform the allocation. These comments are visible in the KSMCHCOM column of X$KSMSP, and describe the purpose for which the memory has been allocated.
Each chunk is a little larger than the object it contains because there is a 16-byte header to identify the type, class, and size of the chunk and to contain linked-list pointers used for shared pool management.
There are four main classes of memory chunks. These can be seen in the KSMCHCLS column of X$KSMSP.
free
Free chunks do not contain a valid object, and are available for allocation without restriction.
recr
Recreatable chunks contain objects that may be able to be temporarily removed from memory if necessary, and recreated again as required. For example, many of the chunks associated with shared SQL statements are recreatable.
freeabl
Freeable chunks contain objects that are normally needed for the duration of a session or call, and are freed thereafter. However, they can sometimes be freed earlier, either in whole or in part. Freeable chunks are not available for temporary removal from memory, because they are not recreatable.
perm
Permanent memory chunks contain persistent objects. The large permanent memory chunk may also contain internal free space, which can be released into the shared pool as required.
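A simple way to see how much of the shared pool falls into each of these classes is to aggregate X$KSMSP by class, for example:

    select ksmchcls, count(*) chunks, sum(ksmchsiz) bytes
      from x$ksmsp
     group by ksmchcls;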
The APT script called shared_pool_summary.sql shows a useful summary of the type, class, and size of all chunks in the shared pool. Example 6.3 is a sample of its output. The total size of the chunks for each type of memory is also visible in the shared pool rows of V$SGASTAT , except that some of the structures in the main permanent memory chunk are also broken out and shown separately.
Example 6.3. Sample Output of shared_pool_summary.sql
SQL> @shared_pool_summary

KSMCHCOM         CHUNKS  RECR  FREEABL  TOTAL
KGFF heap        6  1296  2528  3824
KGK contexts     2  2400  2400
KGK heap         2  1136  1136
KGL handles      571  178616  178616
KQLS heap        404  87952  524888  612840
PL/SQL DIANA     274  42168  459504  501672
PL/SQL MPCODE    57  14560  88384  102944
PLS cca hp desc  1  168  168
PLS non-lib hp   1  2104  2104
character set m  5  23504  23504
dictionary cach  108  223872  223872
fixed allocatio  9  360  360
free memory      185  614088
kzull            1  48  48
library cache    1612  268312  356312  624624
multiblock rea   1  2072  2072
permanent memor  1  1677104
reserved stoppe  2  48
row cache lru    24  1168  1168
session param v  8  23488  23488
sql area         983  231080  1303792  1534872
table columns    19  18520  18520
table definiti   2  176  176

6.2.2 Free Lists
Free chunks in the shared pool are organized into free lists or buckets, based on their size. The bucket numbers and free chunk sizes are as shown in Table 6.1.

Table 6.1. Shared Pool Free List Buckets

Bucket   Free Chunk Size
5        1040 bytes to 2063 bytes
6        2064 bytes to 4111 bytes
7        4112 bytes to 8207 bytes
8        8208 bytes to 16399 bytes
9        16400 bytes to 32783 bytes
10       32784 bytes and larger

You may notice that the lower bound on the free chunk sizes for each free list is a binary power plus the 16-byte header. The APT script shared_pool_free_lists.sql uses this fact to report the number of chunks and the amount of free space on each free list. Example 6.4 shows some interesting output.
Example 6.4. Sample Output of shared_pool_free_lists.sql
SQL> @shared_pool_free_lists

BUCKET  FREE_SPACE  FREE_CHUNKS  AVERAGE_SIZE  BIGGEST
     0      166344         3872            42       72
     1       32208          374            86       96
     4         928            1           928      928
     6       11784            4          2946     3328

When a process needs a chunk of shared pool memory, it first scans the target free list for the chunk of best fit. If a chunk of exactly the right size is not found, then the scan continues to the end of that free list, looking for the next largest available chunk. If the next largest available chunk is 24 or more bytes larger than required, then that chunk is split and the remaining free space chunk is added to the appropriate free list. If, however, the free list does not contain any chunks of the required size, then the smallest chunk is taken from the next nonempty free list. If all of the remaining free lists are empty, then an LRU chain scan will be attempted, as explained in the next section.
Free list scans, management, and chunk allocations are all performed under the protection of the shared pool latch. Clearly, if the shared pool contains a large number of very small free chunks, as illustrated in Example 6.4, then the shared pool latch will be held for a relatively long time when searching these particular free lists. It is, in fact, normal to have a large number of very small free chunks like this, and this is the major cause of contention for the shared pool latch. DBAs often respond to shared pool latch contention by increasing the size of the shared pool. Unfortunately, this merely delays the onset of shared pool latch contention, and in the end exacerbates it.
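A quick way to gauge this kind of fragmentation is to summarize the free chunks directly. A large chunk count combined with a small average size and a modest largest chunk is the pattern described above.

    select count(*)      free_chunks,
           sum(ksmchsiz) free_space,
           avg(ksmchsiz) average_size,
           max(ksmchsiz) biggest
      from x$ksmsp
     where ksmchcls = 'free';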

6.2.3 LRU Lists
If a process fails to find a free memory chunk of the required size on the shared pool free lists, then it will attempt to remove chunks containing recreatable objects from the shared pool in order to free a large enough chunk.
There are two categories of recreatable chunks—those that are pinned, and those that are not pinned. The concept of chunks in the shared pool being pinned is often confused with the concept of marking the objects that they contain to be kept using the DBMS_SHARED_POOL.KEEP procedure. Keeping applies only to library cache objects, and is a DBA responsibility. However, all chunks are pinned automatically while the objects that they contain are in use. Recreatable chunks cannot be freed while they are pinned. However, unpinned recreatable chunks can normally be freed.
Unpinned recreatable chunks are organized in the shared pool on two lists, each of which is maintained in LRU (least recently used) order. These are called the transient and recurrent LRU lists. Transient objects are unlikely to be required again, whereas recurrent objects may be. The composition of these lists changes rapidly. Chunks are added to the MRU (most recently used) ends whenever they are unpinned, and they are removed from the lists whenever they are pinned again.
Chunks are also removed from the LRU ends of the lists when a process needs to free shared pool memory for a new allocation. Chunks are flushed in sets of eight chunks alternately—first from the transient list, and then from the recurrent list. Chunks are flushed in LRU order regardless of their size. However, some chunks cannot be flushed. For example, chunks containing library cache objects that have been marked for keeping with DBMS_SHARED_POOL.KEEP cannot be flushed. These chunks are instead removed from the LRU lists by being pinned.
The length of the transient and recurrent LRU lists of unpinned recreatable chunks can be seen in X$KGHLU , together with the number of chunks that have been flushed, and the number of chunks that have been added to or removed from the LRU lists due to pinning and unpinning. X$KGHLU also shows the number of times that the LRU lists were flushed completely but unsuccessfully, and the size of the most recent such request failure. All these statistics can be checked with the APT script shared_pool_lru_stats.sql . See Example 6.5 for sample output.
Example 6.5. Sample Output of shared_pool_lru_stats.sql
SQL> @shared_pool_lru_stats

RECURRENT  TRANSIENT  FLUSHED   PINS AND  ORA-4031  LAST ERROR
   CHUNKS     CHUNKS   CHUNKS  RELEASES    ERRORS        SIZE
      121        164   148447   4126701         0           0
Beware how you interpret these figures, because they are only part of the story. The lengths of the LRU lists and the rate of flushing are both heavily dependent on the memory requirements of the application, and on variations in its workload. Neither long nor short LRU lists are necessarily a problem, and the flushing of dead chunks is an important part of healthy memory management. However, based on my experience, if the transient list is more than three times longer than the recurrent list, then the shared pool is probably oversized, and if the ratio of chunk flushes to other LRU operations is more than 1 in 20, then the shared pool is probably too small.
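For reference, shared_pool_lru_stats.sql amounts to little more than the following query. The X$KGHLU column names shown here are assumptions and should be verified against your release.

    select kghlurcr recurrent_chunks,
           kghlutrn transient_chunks,
           kghlufsh flushed_chunks,
           kghluops pins_and_releases,
           kghlunfu ora_4031_errors,
           kghlunfs last_error_size
      from x$kghlu;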

6.2.4 Spare Free Memory
If a large memory request cannot be satisfied either directly from the free lists or from the LRU lists by flushing, then Oracle has one more strategy to try.
Surprisingly, the last resort is not to coalesce contiguous free chunks. When chunks are freed, they may be coalesced with the following chunk, if that chunk is also free. However, Oracle only fully coalesces shared pool free space when the ALTER SYSTEM FLUSH SHARED_POOL command is executed explicitly. So memory allocation requests can and do fail even when the shared pool contains enough contiguous free memory. If that free memory is fragmented into multiple small chunks, then it cannot be used to satisfy large memory allocation requests.
Rather, Oracle's last resort for satisfying large memory allocation requests is to release more memory into the shared pool. Oracle actually keeps aside about half the shared pool memory at instance startup. This memory is then released gradually under memory pressure. Oracle does this to limit fragmentation.
Oracle's spare free memory is concealed in the main permanent memory chunk in the shared pool, together with the fixed tables and other genuine permanent memory structures. This memory is not on the shared pool free lists, and is therefore not available for immediate allocation. It is, however, included in the free memory statistic shown in V$SGASTAT .
Chunks of spare free memory are released into the shared pool when necessary. An ORA-4031 error, "unable to allocate x bytes of shared memory," will not be raised for the shared pool until all of this spare free memory has been exhausted.
If an instance still has a fair amount of spare free memory after it has been working at peak load for some time, then that is an indication that the shared pool is considerably larger than necessary. The amount of spare free memory remaining can be checked with the APT script shared_pool_spare_free.sql .
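Because spare free memory is included in the V$SGASTAT free memory figure but is not yet on the free lists, the difference between that figure and the free chunks in X$KSMSP gives a rough estimate of it. This is only a sketch; free chunks on the reserved list appear under a separate class and are ignored here.

    select v.bytes           sgastat_free,
           x.bytes           free_list_free,
           v.bytes - x.bytes approx_spare_free
      from (select bytes
              from v$sgastat
             where pool = 'shared pool' and name = 'free memory') v,
           (select sum(ksmchsiz) bytes
              from x$ksmsp
             where ksmchcls = 'free') x;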

6.2.5 The Reserved List
Since the introduction of paged PL/SQL code in release 7.3, the vast majority of shared pool memory chunks are less than 5000 bytes in size. So much so, that in a mature instance it would be almost futile to search the shared pool free lists and LRU lists for chunks of that size or greater. So, Oracle does not.
Instead, Oracle reserves part of the shared pool for large chunks. The size of this reserved area defaults to 5% of the shared pool, and may be adjusted using the SHARED_POOL_RESERVED_SIZE parameter. As the parameter name indicates, this memory is taken out of the shared pool. The informal term, the reserved pool, should be thought of as a contraction for a longer term, the reserved part of the shared pool. There is just one shared pool, part of which is reserved for large chunks.
Chunks larger than 5000 bytes are placed into the reserved part of the shared pool. This threshold can be set with the _SHARED_POOL_RESERVED_MIN_ALLOC parameter but should not be changed. Small chunks never go into the reserved pool, and large chunks never go into the rest of the shared pool, except during instance startup.
Free memory in the reserved part of the shared pool is not included on the general shared pool free lists. Instead, a separate reserved free list is maintained. The reserved pool does not, however, have its own LRU lists for unpinned recreatable chunks. Nevertheless, large chunks are not flushed when freeing memory for the general free lists, and small chunks are not flushed when freeing memory for the reserved free list.
Reserved pool statistics are visible in the V$SHARED_POOL_RESERVED view. In particular, the REQUEST_MISSES column shows the number of times that requests for a large chunk of memory were not able to be satisfied immediately from the reserved free list. This metric should be zero. That is, there should be enough free memory in the reserved part of the shared pool to satisfy short-term demands for freeable memory, without needing to flush unpinned recreatable chunks that would otherwise be cached for the long term.
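The key columns can be checked with a query like this:

    select free_space, used_space, requests, request_misses, request_failures
      from v$shared_pool_reserved;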
You can configure your monitoring software to watch the USED_SPACE column of V$SHARED_POOL_RESERVED in an attempt to determine whether the size of the reserved part of the shared pool is appropriate. Alternatively, you can use the APT script reserved_pool_hwm.sql to obtain a high-water mark for reserved shared pool usage since instance startup. This script relies upon the fact that, in the absence of reserved list request misses, the first chunk of the reserved list has never been used, and all other chunks have been. Example 6.6 shows some sample output. In many cases you will find that the reserved pool is scarcely used, if at all, and that the default reservation of 5% of the shared pool for large chunks is unduly wasteful. I recommend that you run this script routinely prior to shutdown, and also check the maximum utilization of other resources as shown in V$RESOURCE_LIMIT.
Example 6.6. Sample Output of reserved_pool_hwm.sql
SQL> @reserved_pool_hwm

RESERVED_SIZE  HIGH_WATER_MARK  USAGE
       256000            15080     6%

6.2.6 Marking Objects for Keeping
In a well-sized shared pool, dead chunks will be flushed out. However, any flushing introduces a risk that valuable objects will be flushed out as well. This applies particularly to recreatable objects that are used only intermittently, but are expensive to recreate, because they are large or require complex processing. You may also not want cached sequences to be flushed out, because this results in the remaining cached sequence numbers never being used.
Of course, the way to mitigate this risk is to mark known valuable objects for keeping in the shared pool using DBMS_SHARED_POOL.KEEP . This procedure loads the object and all subordinate objects into the library cache immediately, and marks them all for keeping. So far as possible, this should be done directly after instance startup to minimize shared pool fragmentation.
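For example, a post-startup script might contain calls like the following. The application object names are hypothetical, and DBMS_SHARED_POOL must first have been created by running dbmspool.sql.

    -- The flag defaults to 'P' for packages, procedures, and functions.
    execute dbms_shared_pool.keep('SYS.STANDARD')
    execute dbms_shared_pool.keep('APP.ORDER_PKG')
    -- The 'Q' flag marks a cached sequence for keeping.
    execute dbms_shared_pool.keep('APP.ORDER_SEQ', 'Q')
    -- Cursors are kept by 'address,hash_value' (from V$SQLAREA) with the 'C' flag.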
It is sometimes mistakenly claimed that large objects such as packages do not have to be marked for keeping, because they will be placed in the reserved part of the shared pool and thus be much less likely to be flushed out. However, most large objects are actually loaded into the shared pool in multiple small chunks, and therefore get no special protection by virtue of their size.
It is also unwise to rely on a high frequency of use to prevent objects from being aged out of the shared pool. If your shared pool is well sized, the LRU lists will be fairly short during periods of peak load, and unpinned objects will age out very quickly, unless they are marked for keeping.
If you don't already have your own scripts to do the job, take a look at APT; it includes a set of scripts that you can use for keeping. The keep_sys_packages.sql script keeps some key packages in the SYS schema. You will need to customize this script to include any other SYS packages that may be required by your application. The keep_cached_sequences.sql script can be used to mark all cached sequences in the database for keeping. And the keep_schema.sql script can be used to mark all candidate objects in your key application schemata for keeping.
Keeping should also be used to protect repeatedly executed cursors, once again, regardless of their size. The APT script keep_cursors.sql marks all cursors that have been executed five or more times for keeping.
For completeness, I should also mention that the X$KSMLRU fixed table can also be used to help you identify additional library cache objects that should be kept. X$KSMLRU records statistics about up to ten shared pool chunk allocations that have required flushes. Not all chunk allocations are captured, however. In fact, only the largest candidate allocation is guaranteed to be captured. Another, most unusual aspect of this fixed table is that it is cleared entirely whenever it is queried, so it should not be queried casually.

6.2.7 Flushing the Shared Pool
The only way to coalesce contiguous free chunks in the shared pool is to explicitly flush the shared pool using the ALTER SYSTEM FLUSH SHARED_POOL command. The question of whether you should, or should not do so, tends to divide DBAs.
In practice, flushing the shared pool can relieve shared pool latch contention and greatly reduce the risk of ORA-4031 errors, with much less immediate impact on performance than is commonly believed, particularly if key objects have been marked for keeping. On the other hand, if all key objects have been marked for keeping, and if your shared pool is not oversized, then you should scarcely need to flush the shared pool, unless your instance has very demanding, long-term uptime requirements.
My personal preference is to flush the shared pool nightly (after backups) and at other times if shared pool free space is becoming too scarce or too fragmented. However, you may need to ensure that flushing the shared pool does not leave unwanted gaps in cached sequences. This can be done either by marking the sequences for keeping, or, in single-instance Oracle, by temporarily unloading the sequences using the ALTER SEQUENCE NOCACHE command. There are APT scripts to do both. The first has already been mentioned, and the second is called nice_shared_pool_flush.sql . The two methods work rather well together. Unloading the sequences does not affect their kept status, but protects them even if they were not kept. Also, using nice_shared_pool_flush.sql before instance shutdown prevents sequence number loss even if a SHUTDOWN ABORT is necessary.
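A minimal sketch of such a routine, using a hypothetical sequence and its original cache size, might look like this:

    -- Unload the cached sequence numbers first.
    alter sequence app.order_seq nocache;
    alter system flush shared_pool;
    -- Restore the sequence's original cache size.
    alter sequence app.order_seq cache 20;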

6.2.8 Heaps and Subheaps
You may have noticed that the names of the X$ tables for the shared pool begin with either KSM or KGH. These are the names for the Oracle memory manager and heap manager modules, respectively. These two modules work together in very close cooperation. The memory manager is responsible for interfacing with the operating system to obtain memory for use by Oracle, and for static allocations of memory. Dynamic memory management is performed by the heap manager. This is why the shared pool is also called the SGA heap.
A heap consists of a heap descriptor and one or more memory extents. A heap can also contain subheaps. In this case, the heap descriptor and extents of the subheap are seen as chunks in the parent heap. Heap descriptors vary in size depending on the type of heap and contain list headers for the heap's free lists and LRU lists. An extent has a small header for pointers to the previous and next extents, and the rest of its memory is available to the heap for dynamic allocation.
Except for the reserved list feature, subheaps within the shared pool have exactly the same structure as the shared pool itself. Memory is allocated in chunks. Free chunks are organized on free lists according to size. And unpinned recreatable chunks are maintained on two LRU lists for recurrent and transient chunks, respectively. Subheaps even have a main permanent memory chunk that may contain spare free memory. Subheaps may also contain further subheaps, up to a nesting depth of four.
The concept of subheaps is important to understand because most of the objects that are cached in the shared pool actually reside in subheaps, rather than in the top-level heap itself. Finding space for a new chunk within a subheap is much like finding space for a new chunk within the shared pool itself, except that subheaps can grow by allocating a new extent, whereas the shared pool has a fixed number of extents. The allocation of new extents for subheaps is governed by a minimum extent size, so it is possible to search for a small chunk in a subheap and fail, because none of the parent heaps could allocate a chunk of the required minimum extent size.

6.2.9 The Large Pool
If the LARGE_POOL_SIZE parameter is set, then the large pool is configured as a separate heap within the variable area of the SGA. The large pool is not part of the shared pool, and is protected by the large memory latch . The large pool only contains free and freeable chunks. It does not contain any recreatable chunks, and so the heap manager's LRU mechanism is not used.
To prevent fragmentation of the large pool, all large pool chunks are rounded up to _LARGE_POOL_MIN_ALLOC, which defaults to 16K. This parameter should not be tuned. It does not affect whether or not certain chunks will be allocated in the large pool. Rather, if a large pool is configured, chunks are allocated explicitly in the large pool based on their usage, and rounded up to the required size if necessary.
It is recommended that you configure a large pool if you use any of the following Oracle features:
- Multi-Threaded Server (MTS) or Oracle*XA
- Recovery Manager (RMAN)
- Parallel Query Option (PQO)


6.3 Process Memory
In addition to the SGA, or System Global Area, each Oracle process uses three similar global areas as well:
- The Process Global Area (PGA)
- The User Global Area (UGA)
- The Call Global Area (CGA)
Many DBAs are unclear about the distinction between the PGA and the UGA. The distinction is as simple as that between a process and a session. Although there is commonly a one-to-one relationship between processes and sessions, it can be more complex than that. The most obvious case is a Multi-Threaded Server configuration, in which there can be many more sessions than processes. In such configurations there is one PGA for each process, and one UGA for each session. The PGA contains information that is independent of the session that the process may be serving at any one time, whereas the UGA contains information that is specific to a particular session.
6.3.1 The PGA
The Process Global Area, often known as the Program Global Area, resides in process private memory, rather than in shared memory. It is a global area in the sense that it contains global variables and data structures that must be accessible to all modules of the Oracle server code. However, it is not shared between processes. Each Oracle server process has its own PGA, which contains only process-specific information. Structures in the PGA do not need to be protected by latches because no other process can access them.
The PGA contains information about the operating system resources that the process is using, and some information about the state of the process. However, information about shared Oracle resources that the process is using resides in the SGA. This is necessary so those resources can be cleaned up and freed in the event of the unexpected death of the process.
The PGA consists of two component areas, the fixed PGA and the variable PGA, or PGA heap. The fixed PGA serves a similar purpose to the fixed SGA. It is fixed in size, and contains several hundred atomic variables, small data structures, and pointers into the variable PGA.
The variable PGA is a heap. Its chunks are visible to the process in X$KSMPP , which has the same structure as X$KSMSP. The PGA heap contains permanent memory for a number of fixed tables, which are dependent on certain parameter settings. These include DB_FILES, LOG_FILES (prior to release 8.1), and CONTROL_FILES. Beyond that, the PGA heap is almost entirely dedicated to its subheaps, mainly the UGA (if applicable) and the CGA.

6.3.2 The UGA
The User Global Area contains information that is specific to a particular session, including:
- The persistent and runtime areas for open cursors
- State information for packages, in particular package variables
- Java session state
- The roles that are enabled
- Any trace events that are enabled
- The NLS parameters that are in effect
- Any database links that are open
- The session's mandatory access control (MAC) label for Trusted Oracle
Like the PGA, the UGA also consists of two component areas, the fixed UGA and the variable UGA, or UGA heap. The fixed UGA contains about 70 atomic variables, small data structures, and pointers into the UGA heap.
The chunks in the UGA heap are visible to its session in X$KSMUP , which has the same structure as X$KSMSP. The UGA heap contains permanent memory for a number of fixed tables, which are dependent on certain parameter settings. These include OPEN_CURSORS, OPEN_LINKS, and MAX_ENABLED_ROLES. Beyond that, the UGA heap is largely dedicated to private SQL and PL/SQL areas.
The location of the UGA in memory depends on the session configuration. In dedicated server connections where there is a permanent one-to-one relationship between a session and a process, the UGA is located within the PGA. The fixed UGA is a chunk within the PGA, and the UGA heap is a subheap of the PGA. In Multi-Threaded Server and XA connections, the fixed UGA is a chunk within the shared pool, and the UGA heap is a subheap of the large pool or, failing that, the shared pool.
In configurations in which the UGA is located in the SGA, it may be prudent to constrain the amount of SGA memory that each user's UGA can consume. This can be done using the PRIVATE_SGA profile resource limit .
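For example, a hypothetical profile limiting each such session to 512K of UGA memory in the SGA might be created as follows. Note that profile resource limits are only enforced if the RESOURCE_LIMIT parameter is set to TRUE.

    create profile mts_users limit private_sga 512K;
    alter user app_user profile mts_users;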

6.3.3 The CGA
Unlike the other global areas, the Call Global Area is transient. It only exists for the duration of a call. A CGA is required for most low-level calls to the instance, including calls to:
- Parse an SQL statement
- Execute an SQL statement
- Fetch the outputs of a SELECT statement
A separate CGA is required for recursive calls. Recursive calls to query data dictionary information may be required during statement parsing, to check the semantics of a statement, and during statement optimization to evaluate alternative execution plans. Recursive calls are also needed during the execution of PL/SQL blocks to process the component SQL statements, and during DML statement execution to process trigger execution.
The CGA is a subheap of the PGA, regardless of whether the UGA is located in the PGA or SGA. An important corollary of this fact is that sessions are bound to a process for the duration of any call. This is particularly important to understand when developing applications for Oracle's Multi-Threaded Server. If some calls are protracted, the number of processes configured must be increased to compensate for that.
Of course, calls do not work exclusively with data structures in their CGA. In fact, the most important data structures involved in calls are typically in the UGA. For example, private SQL and PL/SQL areas and sort areas must be in the UGA because they must persist between calls. The CGA only contains data structures that can be freed at the end of the call. For example, the CGA contains direct I/O buffers, information about recursive calls, stack space for expression evaluation, and other temporary data structures.
Java call memory is also allocated in the CGA. This memory is managed more intensively than any other Oracle memory region. It is divided into three spaces, the stack space, the new space, and the old space. Chunks within the new space and old space that are no longer referenced are garbage collected during call execution with varying frequency based on their length of tenure and size. New space chunks are copied to the old space once they have survived a certain number of new space garbage collection iterations. This is the only garbage collection in Oracle's memory management. All other Oracle memory management relies on the explicit freeing of dead chunks.

6.3.4 Process Memory Allocation
Unlike the SGA, which is fixed in size at instance startup, the PGA can and does grow. It grows by using the malloc() or sbrk() system calls to extend the heap data segment for the process. The new operating system virtual memory is then added to the PGA heap as a new extent. These extents are normally only a few kilobytes in size, and Oracle may well allocate thousands of them if necessary.
There are, however, operating system limits on the growth of the heap data segment of a process. In most cases the default limit is set by an operating system kernel parameter (commonly MAXDSIZ). In some cases that default can be changed on a per-process basis. There is also a system-wide limit on the total virtual memory size of all processes. That limit is related to the amount of swap space[1] available. If either of these limits is exceeded, then the Oracle process concerned will return an ORA-4030 error.
[1] Read "paging file space" for "swap space" throughout this discussion, if that is the term used on your operating system.
This error is only rarely due to the per-process resource limit, and normally indicates a shortage of swap space. To diagnose the problem, you can use the operating system facility to report swap space usage. Alternatively, on some operating systems Oracle includes a small utility called maxmem which can be used to check the maximum heap data segment size that a process can allocate, and which limit is being hit first.
If the problem is a shortage of swap space, and if paging activity is moderate or heavy, then you should attempt to reduce the system-wide virtual memory usage either by reducing the process count or by reducing the per-process memory usage. Otherwise, if paging activity is light or nil, you should increase the swap space or, preferably, if your operating system supports it, you should enable the use of virtual or pseudo swap space.
This operating system facility allows system-wide total virtual memory to exceed swap space by approximately the amount of physical memory that is not locked. Some system administrators are unreasonably opposed to the use of this feature in the mistaken belief that it causes paging. It does not. It does, however, significantly reduce the amount of swap space required on large memory systems. Incidentally, the truism that swap space should exceed physical memory by a factor of at least two is not true. The requirement depends on the operating system, memory size, and memory usage, and many systems need virtually no swap space at all.

6.3.5 Process Memory Deallocation
Oracle heaps grow much more readily than they shrink, but contrary to popular belief they can and do shrink. The session statistics session uga memory and session pga memory visible in V$MYSTAT and V$SESSTAT show the current size of the UGA and PGA heaps respectively, including internal free space. The corresponding statistics session uga memory max and session pga memory max show the peak size of the respective heaps during the life of the session.
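For example, a query such as the following against V$MYSTAT and V$STATNAME reports the current and peak heap sizes for your own session, in bytes.

-- Current and peak UGA and PGA heap sizes for the current session.
select n.name, s.value
  from v$statname n, v$mystat s
 where s.statistic# = n.statistic#
   and n.name in ('session uga memory', 'session uga memory max',
                  'session pga memory', 'session pga memory max');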
The UGA and PGA heaps only shrink after certain operations, such as the merge phase of a disk sort, or when the user explicitly attempts to free memory using the DBMS_SESSION.FREE_UNUSED_USER_MEMORY procedure. However, only entirely free heap extents are released to the parent heap or to the process data heap segment. So some internal free space remains, even after memory has been explicitly freed.
Although it is technically possible to do so, on most operating systems Oracle does not attempt to reduce the size of the process data heap segment and release that virtual memory back to the operating system. So from an operating system point of view, the virtual memory size of an Oracle process remains at its high-water mark. Oracle relies on the operating system to page out any unused virtual pages if necessary. For this reason, operating system statistics about the virtual memory sizes of Oracle processes should be regarded as misleading. The internal Oracle statistics should be used instead, and even these tend to overstate the true memory requirements.
The DBMS_SESSION.FREE_UNUSED_USER_MEMORY procedure need only be used in Multi-Threaded Server applications. It should be used sparingly and only to release the memory used by large package array variables back to the large pool or shared pool. However, that memory must first be freed within the UGA heap, either by assigning an empty array to the array variable, or by calling the DBMS_SESSION.RESET_PACKAGE procedure.
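A minimal sketch of that sequence from SQL*Plus follows. Note that RESET_PACKAGE discards all package state for the session, so it is only appropriate where the application can tolerate losing that state.

-- Free the package state (including large package array variables) within
-- the UGA heap, then release wholly free extents back to the parent heap.
execute dbms_session.reset_package;
execute dbms_session.free_unused_user_memory;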
Please disregard the comments in the DBMS_SESSION package specification to the effect that memory, once used for a purpose, can only ever be reused for the same purpose, and that it is necessary to free unused user memory after a large sort. What is intended is that memory, once allocated to a subheap, is normally only available within that subheap, until the entire subheap has been freed. However, many subheaps, such as the CGA, are freed so quickly that the statement is, at best, misleading. Moreover, it is not normally necessary to free unused user memory after a sort, not even in Multi-Threaded Server applications, because the majority of sort memory is, in fact, freed automatically.

Taking Heapdumps

Oracle Support may sometimes ask you to take heapdumps to help to diagnose a potential memory problem. Heapdumps may be taken in the current process using the ALTER SESSION SET EVENTS command, or in another session using the ORADEBUG EVENT command. Heapdumps are written to a trace file in the process's dump destination directory, and contain largely the same information as the corresponding X$ tables.
The event syntax for heapdumps of the primary heaps is IMMEDIATE TRACE NAME HEAPDUMP LEVEL n. The level number is a bit pattern representing which heaps should be dumped: 1 for the PGA, 2 for the SGA, 4 for the UGA, 8 for the CGA, and 32 for the large pool.
The event syntax for heapdumps of arbitrary subheaps is IMMEDIATE TRACE NAME HEAPDUMP_ADDR LEVEL n, where n is the decimal equivalent of the hexadecimal address of the heap descriptor. Subheap heap descriptor addresses are visible in the KSMCHPAR column of the KSM X$ tables, and in heapdumps of their parent heaps alongside the ds= string.
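For example, the following commands take a level 13 heapdump (PGA + UGA + CGA) of the current process, and then the same dump of another process identified by a hypothetical operating system process ID, using ORADEBUG from a suitably privileged SQL*Plus session.

-- Dump the PGA, UGA, and CGA heaps (1 + 4 + 8 = 13) of the current process.
alter session set events 'immediate trace name heapdump level 13';

-- The same dump for another server process (OS process ID 12345 is hypothetical).
oradebug setospid 12345
oradebug dump heapdump 13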
6.4 Reference
This section contains a quick reference to the parameters, events, statistics, and APT scripts mentioned in Chapter 6.
6.4.1 Parameters
_LARGE_POOL_MIN_ALLOC: Large pool chunk allocations are rounded up to this size. This parameter defaults to 16K, and should not be changed.

_USE_ISM: Intimate shared memory is used by default where possible. However, the implementation is problematic on some operating systems, and so it is sometimes necessary to set this parameter to FALSE.

DB_BLOCK_BUFFERS, DB_BLOCK_SIZE: The product of these two parameters dictates the size of the SGA area for the database block buffers.

DB_FILES, LOG_FILES (prior to 8.1), CONTROL_FILES: These parameters affect the size of the fixed PGA. They should not be any larger than reasonably necessary.

LARGE_POOL_SIZE: Certain demands for large chunks of memory are satisfied from the large pool, if a large pool has been configured. This parameter sets the size of the large pool in bytes.

LOCK_SGA: If operating system paging is consistent, this parameter should be set to TRUE, to prevent the SGA from paging.

LOG_BUFFER: Although the log buffer has a separate area in the SGA, it should nevertheless be trivial in size.

OPEN_CURSORS, OPEN_LINKS, MAX_ENABLED_ROLES: These parameters affect the size of the fixed UGA. They should not be any larger than necessary.

PRE_PAGE_SGA: If set to TRUE, this causes all Oracle server processes to page in the entire SGA on process startup if necessary. This may yield a marginal improvement in performance during the period shortly after instance startup, but only at the considerable cost of slowing down all process startups.

SESSIONS: This is the parameter that has the greatest effect on the total size of the fixed tables in the permanent memory chunk of the shared pool.

SHARED_MEMORY_ADDRESS, HI_SHARED_MEMORY_ADDRESS: On some platforms, these parameters may be used to specify the virtual memory address at which the SGA should be attached.

SHARED_POOL_RESERVED_SIZE: Shared pool chunk allocations larger than 5000 bytes are satisfied from the shared pool reserved list. This parameter sets the size of the reserved list in bytes. The threshold size for reserved list allocation, which is set by the _SHARED_POOL_RESERVED_MIN_ALLOC parameter, should not be changed.

SHARED_POOL_SIZE: This parameter sets the approximate amount of memory in the shared pool available for dynamic allocation, expressed in bytes.

SORT_AREA_SIZE: This parameter can have a big impact on memory usage and performance.

6.4.2 Events
4030: This is the out-of-process-memory error event. To take PGA, UGA, and CGA heapdumps at the exact time of this error, set the following event in your parameter file: event = "4030 trace name heapdump level 13"

4031: This is the out-of-shared-memory error event. If you are struggling with repeated ORA-4031 errors, you may wish to take an SGA heapdump at the exact time of the error by setting the following event in your parameter file: event = "4031 trace name heapdump level 2". In Multi-Threaded Server environments, you may wish to use level 6 instead, to include a UGA heapdump as well.

10235: This event causes the Oracle server code to continually check the integrity of the memory and heap management data structures. This is sometimes necessary to diagnose suspected memory corruption issues. Unfortunately, this event can only be set instance-wide; it cannot be set on a single process. Only set this event under direction from Oracle Support, and then only as a last resort. Even the minimal checking at level 1 has a severe impact on performance.
6.4.3 Statistics
free memory (V$SGASTAT): Free memory in the SGA heap. This includes chunks on the free lists and spare free memory in the permanent memory chunk, but does not include unpinned recreatable chunks.
session uga memory (V$MYSTAT and V$SESSTAT): The current size of the UGA heap for the session, excluding the fixed UGA.
session uga memory max (V$MYSTAT and V$SESSTAT): The UGA heap size high-water mark.
session pga memory (V$MYSTAT and V$SESSTAT): The current size of the PGA heap for the session, excluding the fixed PGA.
session pga memory max (V$MYSTAT and V$SESSTAT): The PGA heap size high-water mark.

6.4.4 APT Scripts
fixed_table_columns.sql: Gets a description of all the X$ tables.
fixed_view_text.sql: Extracts the SQL statement text for all the V$ views.
keep_cached_sequences.sql: Marks all cached sequences for keeping in the shared pool.
keep_cursors.sql: Marks cursors that have been executed five or more times for keeping in the shared pool.
keep_schema.sql: Marks all candidate objects in an application schema for keeping in the shared pool.
keep_sys_packages.sql: Marks some key packages in the SYS schema for keeping.
nice_shared_pool_flush.sql: Flushes the shared pool, but unloads all cached sequences first, to prevent gaps in case they were not kept.
reserved_pool_hwm.sql: Shows the high-water mark usage of the reserved pool. This can be used to check whether the reserved pool is too large.
shared_pool_free_lists.sql: Shows the composition of the shared pool free lists.
shared_pool_lru_stats.sql: Shows key statistics for the shared pool LRU lists.
shared_pool_spare_free.sql: Shows how much spare free memory remains in the shared pool.
shared_pool_summary.sql: Shows a summary of the shared pool by chunk usage, class, and size.
