24 | 06 | 2017


Automatic Memory Management



I know it is boring but here is some history about memory sizing.

In the early days of Oracle 7.2 a DBA could configure sort_direct_writes and sort_write_buffer_size.
When sort_direct_writes was set to true, each sort allocated additional buffers in memory for direct writes to disk, so (private) sort operations no longer went through the (shared) buffer cache.

Even though sort_direct_writes became obsolete in 8.1.X ( sorting into the buffer cache became impossible, so there was no longer any need to set sort_direct_writes ) ... it remained common for a DBA to properly size sort_area_size, sort_area_retained_size and hash_area_size in order to have a significant part of sorting and hash joining done in memory. Less well known were instance initialization parameters like bitmap_merge_area_size. There were lots of parameters to configure a user's available PGA memory.
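Those manual work areas were all set in the init.ora; a minimal sketch ( the values are illustrative only, not recommendations ):

```
# init.ora fragment: manual PGA work area sizing. Values are illustrative.
sort_area_size          = 1048576   # 1 MB per sort work area, per session
sort_area_retained_size = 1048576   # memory retained after the sort completes
hash_area_size          = 2097152   # 2 MB per hash join work area
bitmap_merge_area_size  = 1048576   # memory for merging bitmap index scan results
```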

The SGA was traditionally sized with the shared_pool_size, db_block_buffers, log_buffer and large_pool_size parameters. Less well known were some instance initialization parameters like shared_pool_reserved_size. There were lots of parameters to configure every SGA component.
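Likewise for the SGA; a minimal sketch of such an init.ora ( illustrative values only ):

```
# init.ora fragment: manual SGA sizing. Values are illustrative.
shared_pool_size          = 157286400   # 150 MB for the library cache and dictionary cache
shared_pool_reserved_size = 15728640    # part of the shared pool reserved for large allocations
db_block_buffers          = 12800       # number of buffers; times db_block_size gives the cache size
log_buffer                = 1048576     # 1 MB redo log buffer
large_pool_size           = 33554432    # 32 MB, e.g. for RMAN and shared server
```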

From Oracle 9i R1 onwards a DBA could configure the pga_aggregate_target. If workarea_size_policy was set to auto, parameters like sort_area_size and hash_area_size became obsolete when dedicated connections were used.
There was one PGA memory pool and every session got a part of it. The disadvantage of a large sort area was that it applied at the session level: many allocated sort areas could "eat" too much OS memory. The idea was to control the overall user PGA memory allocations at the instance level with one parameter : pga_aggregate_target.
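Switching to automatic PGA management, and then checking what the instance actually allocates, can be sketched as follows ( the 500M value is illustrative only ):

```sql
-- Enable automatic PGA memory management ( 9i onwards ).
ALTER SYSTEM SET workarea_size_policy = auto;
ALTER SYSTEM SET pga_aggregate_target = 500M;

-- Compare the target with the actual PGA allocation.
SELECT name, value
FROM   v$pgastat
WHERE  name IN ( 'aggregate PGA target parameter',
                 'total PGA allocated',
                 'total PGA inuse' );
```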
There were fewer changes at the level of the SGA in 9i. However, multiple block sizes were allowed. Prior to 9i there was "only" one block size at the database level. From 9i R1 onwards a DBA could choose a block size at the tablespace level. The buffer cache could be divided into a cache for the 8K blocks, a cache for the 16K blocks, a cache for the 32K blocks, and so on. Sizing of these subcaches was done with db_cache_size, db_16k_cache_size and db_32k_cache_size. The db_cache_size parameter was used to configure the buffer cache for the default block size, which often was --- and still is --- 8K. Also in 9i Oracle introduced the possibility to resize online --- without the need to shut down the database instance --- some components of the SGA, assuming the overall memory allocation stayed below sga_max_size.
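Using a non-default block size is a two-step affair: size the matching subcache first, then create the tablespace. A sketch with illustrative names and sizes:

```sql
-- A subcache for 16K blocks ( the default db_block_size is assumed to be 8K ).
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Only after that can a 16K tablespace be created.
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ts_16k01.dbf' SIZE 256M
  BLOCKSIZE 16K;
```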

As with every new release, Oracle 10G came with something new at the level of the SGA and PGA. If workarea_size_policy was set to auto --- which is the default --- the pga_aggregate_target was taken into account ALSO by shared server sessions.

The SGA was sized with sga_target, assuming it was set to a non-zero value. So if sga_target was set to 1G then your SGA was 1G. Your SGA could also be dynamically changed, assuming the overall memory allocation stayed below sga_max_size. If you set sga_target to a non-zero value and nevertheless also configured shared_pool_size, db_cache_size, large_pool_size and java_pool_size, then these values were thresholds below which the components could not be dynamically shrunk by MMAN. MMAN was and still is the background process which, based upon statistical information, resizes various SGA components. So during the lifetime of your instance --- possibly several times a day --- your shared pool is resized. What the actual values of the various SGA components are at a given moment can be retrieved through a simple select from v$sga_dynamic_components. Additional resizing info can be retrieved from v$sga_resize_ops.
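For example:

```sql
-- Current, minimum and maximum sizes of the resizable SGA components.
SELECT component, current_size, min_size, max_size
FROM   v$sga_dynamic_components;

-- The history of the resize operations performed by MMAN.
SELECT component, oper_type, initial_size, final_size, status
FROM   v$sga_resize_ops;
```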

Also with 10G R2 the famous parameter log_buffer became obsolete. ( Oracle clearly forgot to document it )

And with 11G 

If you want, you can configure 1 and only 1 parameter to size both the SGA and the PGA, and to have them resized correctly by the background processes --- MMAN, MMON --- themselves : MEMORY_TARGET. The equivalent of sga_max_size is MEMORY_MAX_TARGET. If you have a non-zero value for MEMORY_TARGET, then the background processes of the Oracle instance will size and resize --- possibly frequently --- the various memory areas for both the PGA and the SGA.
If you define 2G for MEMORY_TARGET, then at a given moment your PGA might be 1G and your SGA 1G, whereas later on your PGA can automatically be resized to let's say 500 Mb and your SGA to 1500 Mb.
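A minimal sketch of enabling it ( the 2G / 3G values are illustrative; memory_max_target is static, so this requires an spfile and an instance restart ):

```sql
-- memory_max_target is the hard ceiling; memory_target can later be
-- resized online up to that ceiling.
ALTER SYSTEM SET memory_max_target = 3G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 2G SCOPE = SPFILE;

-- If sga_target and pga_aggregate_target are left at 0, the instance is
-- free to move memory between SGA and PGA; non-zero values act as minimums.
```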

The philosophy behind it is probably that you have one database server with let's say 8 Gb RAM. You have two instances running on it. You define a MEMORY_TARGET of 2 Gb for the first instance and a MEMORY_TARGET of 3 Gb for the second one, and the various memory components are adjusted automatically by the Oracle instances themselves.

Do I like this new feature ? NO
Why not ? When I configured sga_target in 10G R2 and had to deal with thousands of database sessions, I was forced to configure shared server. I ran several times into ORA-4031 and noticed there was one component in the SGA which always became larger and larger and larger : the KGH: NO ACCESS component. It is said that a memory granule needs some time to transition from the buffer pool ( cached data ) to the shared pool ( cached SQL code, library cache ). v$sgastat tells you how big the KGH: NO ACCESS component is.
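A quick check ( the exact entry name in v$sgastat is 'KGH: NO ACCESS', with a colon, and it lives in the shared pool ):

```sql
-- Granules "in transit" between the buffer cache and the shared pool.
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name = 'KGH: NO ACCESS';
```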
This is actually understandable: caching code is totally different from caching data. And in 11G one given memory granule could be shared pool ( code ) memory, 10 minutes later buffer cache ( DATA ) and another 10 minutes later PGA memory ( sorts, hashing, session cursor state, UGA ).
I doubt this will become my favourite 11G feature. For a busy instance with hundreds of connections, sometimes batch jobs, sometimes OLTP ... I would rather stick with the old-fashioned parameters to correctly size each individual component than use MEMORY_TARGET.


Some dynamic performance views for Automatic Shared Memory Management ( 10G )
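A sketch of the queries I typically run against them ( v$sga_target_advice is assumed to be available; it appeared in 10G R2 ):

```sql
SELECT component, current_size FROM v$sga_dynamic_components;  -- current component sizes
SELECT component, oper_type, status FROM v$sga_resize_ops;     -- resize history
SELECT current_size FROM v$sga_dynamic_free_memory;            -- memory not yet distributed
SELECT sga_size, sga_size_factor, estd_db_time
FROM   v$sga_target_advice;                                    -- what-if sizing advice
```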





Some dynamic performance views for Automatic Memory Management ( 11G )
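Their 11G counterparts can be queried the same way; a sketch:

```sql
SELECT component, current_size FROM v$memory_dynamic_components;  -- SGA and PGA components
SELECT component, oper_type, status FROM v$memory_resize_ops;     -- resize history
SELECT memory_size, memory_size_factor, estd_db_time
FROM   v$memory_target_advice;                                    -- what-if sizing advice
```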