This post will point you to the necessary articles and tools, SEG.NLM and MEMCALC, which will help you further tune your NetWare 6.5 server.
Back in May 2005, I posted a detailed blog entry on the memory issues before SP3 and the changes that were made:
Memory Fragmentation on NetWare
Ed Liebing, technical editor at Novell, wrote a great article on the memory enhancements and on understanding memory through Novell Remote Manager (NoRM). Read it for a solid review of how memory works on NetWare 6.5.
Novell has released a new SEG.NLM memory analysis tool.
Download the latest SEG.NLM.
To examine memory on NetWare 6.5 SP3 or SP4, download the latest SEG.NLM and use it to write a SEGSTATS.TXT file.
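If you haven't used SEG before, the console flow is short. Here's a minimal sketch, assuming SEG's default output location (the exact menu option that writes the report is described in the SEG documentation):
LOAD SEG.NLM
# from SEG's console screen, choose the option to write the memory report
# the report is saved as SYS:\SYSTEM\SEGSTATS.TXT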
Next read the Novell Cool Solutions article on Memory Tuning Calculator.
Download the Memory Calculator from http://www.caledonia.net/hamish.html
MEMCALC comes in Windows and Linux versions; the Windows version is a DOS executable. Copy your SEGSTATS.TXT from SYS:\SYSTEM to a DOS folder or directory.
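For example, from a workstation with a drive mapped to the server's SYS: volume (the F: mapping and C:\MEMCALC folder here are just illustrative assumptions):
copy F:\SYSTEM\SEGSTATS.TXT C:\MEMCALC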
Usage:
memcalc segfile | /i
Where segfile is the path and file name of the segstats.txt file
or use /i to enter the figures manually
So from my lab server:
F:\myfiles\>memcalc segstats.txt
NLM Memory = 356843520
NLMHWM Memory = 392794112
DS Memory = 10142080
Phys Memory = 1068937216
UAS Memory = 936566784
Calculating settings based on following values:
Physical memory (Bytes) : 1,068,937,216
NLM Footprint (Bytes) : 356,843,520
NLM High Water (Bytes) : 392,794,112
UAS (Bytes) : 936,566,784
DS Foot print (Bytes) : 10,142,080
Physical memory is less than 2GB - no tuning recommended
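As a quick sanity check on that verdict: 1,068,937,216 bytes / 1,073,741,824 bytes per GB ≈ 1.0 GB of physical RAM, comfortably under the 2GB threshold, so memcalc correctly declines to suggest any tuning on this box.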
From the memcalc readme and the Cool Solutions article:
"In recent NetWare 6.x Service Packs, the memory management with NetWare has undergone a rather radical overhaul to help address limitations NetWare was starting to experience under intensive loads - e.g., Running large databases, multi gigabyte eDirectory trees with millions of objects, multiple Java applications etc.
These changes to memory management were initially quite problematic, but have become much more reliable in the current service packs - except, in my opinion, for the “auto tuning” feature that is enabled by default.
The Auto Tuning feature monitors the memory usage on the server and adjusts two parameters to try and free up more logical address space on the server.
The auto tuning feature operates by lowering the “File Cache Maximum Size” (FCMS) setting, which controls how much memory is available to the server for use as NSS and/or TFS cache. If the FCMS setting reaches its minimum possible value of 1GB, auto tuning will start recommending that the “-u” setting be reduced. The “-u” setting controls how much space is available for the “User Address Space”. This is a logical memory region that is reserved for running protected mode and Java applications.
In my opinion, the memory tuning algorithm is too aggressive, and too simplistic:
- It wants to keep too much memory in the VM pool.
- It’s too keen to drive the FCMS setting down.
- It will only "tune" in one direction, and it "tunes" the server to accommodate one-off memory allocations - an NLM accidentally requesting a 1GB allocation today will mean a server "tuned" for that size memory footprint a year from now. If I remove a large memory footprint NLM from the server, memory will not be “tuned” back towards a larger cache - the memory will be forever reserved for the VM pool.
- The “tuning” can result in even more memory fragmentation than it is designed to prevent. When the FCMS setting is reduced, the complete NSS cache is thrown away (flushed), then it starts growing again. I’ve seen servers with 2-3% of fragmented memory suddenly have 15% or more after being “tuned”.
- The tuning can cause server abends. I’ve seen it cause Poison Pill abends on Cluster nodes, and other abends on standard servers.
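For context on the two settings the quote describes: FCMS is an ordinary SET parameter, while “-u” is passed on the SERVER.EXE command line in AUTOEXEC.BAT on the server's DOS partition. A sketch of where each one lives - the numbers below are placeholders only, so use the values memcalc computes for your server, and check the memcalc readme for the units the “-u” switch expects:
SET FILE CACHE MAXIMUM SIZE = 1073741824
SERVER -u600000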
Read more about it...