Reputation: 11
Has anybody had practical experience (or read or heard of any practical experiences) with building a good SGA reserve/stock in an Oracle database? For example, 50% of the SGA that is currently never used, but just added as reserve memory for the future.
Could that, in itself (directly!), be somehow bad for database performance? (So, fully setting aside T. Kyte's point about "a huge SGA to mask an underlying problem", and also HugePages, conflicts with other instances, and other indirect problems.)
Thank you.
Upvotes: 1
Views: 808
Reputation: 11403
The only real cost of an SGA larger than you need is the allocation of more memory. If you have plenty of memory and don't need it for the PGA or OS processes then, within reason, overallocating the SGA isn't a bad thing.
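As a starting point, you can see how the SGA is currently carved up and how much of it is sitting unused. A minimal sketch against the standard `v$sgainfo` view (sizes are in bytes):

```sql
-- Current SGA component sizes, largest first.
SELECT name, bytes, resizeable
FROM   v$sgainfo
ORDER  BY bytes DESC;

-- How much SGA memory is currently unallocated and available for growth.
SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Free SGA Memory Available';
```

A large "Free SGA Memory Available" figure is exactly the kind of deliberate reserve the question describes.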
That being said, there are a few caveats:
In older OS's the maximum size of a shared memory segment may be less than your sga_max_size, in which case Oracle will create multiple shared memory segments to satisfy your request. This has historically been viewed as having some performance consequences, though I can't quantify how much. One can easily address it by adjusting the kernel parameter that specifies the maximum shared memory segment size.
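On Linux, the relevant kernel parameters are `kernel.shmmax` (maximum size of one shared memory segment) and `kernel.shmall` (total shared memory, in pages). A sketch of what `/etc/sysctl.conf` might look like on a host with 64 GB RAM (the values are illustrative, not a recommendation):

```
# /etc/sysctl.conf -- illustrative values for a 64 GB host
# kernel.shmmax: max size of a single shared memory segment, in bytes.
# Set it at least as large as sga_max_size so Oracle can use one segment.
kernel.shmmax = 68719476736
# kernel.shmall: total shared memory allowed system-wide, in pages
# (68719476736 bytes / 4096-byte page size).
kernel.shmall = 16777216
```

Apply with `sysctl -p` (or reboot) before starting the instance.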
If your shared pool is allowed to get excessively large, monitoring tools that may scan v$sql, for example, can use a lot more CPU and take a lot more time.
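To gauge how much work such a monitoring scan would actually do, you can count the cached cursors and check the current shared pool size. A sketch using the standard `v$sql` and `v$sgastat` views:

```sql
-- Number of child cursors a tool scanning v$sql must walk.
SELECT COUNT(*) AS cached_cursors FROM v$sql;

-- Current size of the shared pool, in MB.
SELECT pool, ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
GROUP  BY pool;
```

Hundreds of thousands of cursors in an oversized shared pool is where those scans start to hurt.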
If your buffer cache is excessively large, you run the risk of getting into bug territory. I've had sporadic problems with SGAs in the > 1 TB range. Remember, whatever is uncommon is not as well tested and therefore more likely to hide bugs. Also there are some recommended parameter tweaks for extremely large SGAs.
If you are using cooked filesystems for storage and filesystemio_options is set to either ASYNCH or NONE, then your single-block I/O is probably being double-buffered by the OS, which has its own block cache. If this cache is significantly larger than Oracle's own buffer cache, you are getting the benefit of an effectively larger caching mechanism: what Oracle doesn't have in its cache may be in the OS's cache and still avoid a physical I/O to storage. If you enlarge the SGA excessively, you diminish the amount of memory the OS has available for its own block cache. The overall effective caching, which is the larger of the two, can then diminish, which will increase your I/O to storage and negatively impact your performance. This continues until the point where your Oracle buffer cache is as large as or larger than the OS buffer cache used to be; then you're back in OK territory. In fact Oracle recommends using direct path I/O to bypass the OS buffer cache, but this is only helpful (for single-block I/O) if your buffer cache in Oracle takes up the bulk of the available memory on the host. Otherwise it can hurt.
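A sketch of checking and, if appropriate, changing that behavior (the change requires an instance restart, and SETALL is only sensible once Oracle's buffer cache holds the bulk of host memory, as described above):

```sql
-- Check the current setting.
SELECT name, value
FROM   v$parameter
WHERE  name = 'filesystemio_options';

-- Enable direct and asynchronous I/O, bypassing the OS buffer cache.
-- Takes effect after the instance is restarted.
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
```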
Use common sense. Figure a rough estimate of how much buffer cache you need + how much shared pool you need + other pools, then give yourself a buffer. 50% higher isn't unreasonable. Just keep it sane - don't waste memory by allocating 10x more than you need, but you'll regret it if you try to be too stingy.
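Oracle's own advisors can anchor that rough estimate: they predict how performance would change at alternative sizes. A sketch using the standard advisory views (the 8192 block size is an assumption; use your database's default):

```sql
-- Predicted DB time at alternative total SGA sizes.
SELECT sga_size, sga_size_factor, estd_db_time_factor
FROM   v$sga_target_advice
ORDER  BY sga_size;

-- Predicted physical reads at alternative buffer cache sizes.
SELECT size_for_estimate, estd_physical_read_factor
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    block_size = 8192
ORDER  BY size_for_estimate;
```

If the estimated factors flatten out well below your planned size, the extra memory beyond that point is pure reserve, which per the above is fine as long as nothing else needs it.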
Upvotes: 1