
Re: Segfaults

From: <jmruiz(at)>
Date: Wed Sep 27 2000 - 16:05:30 GMT
Hi Bill,

A while ago, we had a problem with the resources of our server.

We have a very heavily loaded server (2 GB of RAM, with up to 800 
Apache processes possibly running simultaneously). It is a 4-way 
PowerPC RS/6000 box. AIX has a parameter to limit the maximum 
number of processes per user, so when the box was heavily loaded, 
only a few free Apache processes were available to fork CGIs. 
Remember that all Apache and CGI processes are handled by the same 
user (normally nobody). So, if you have a per-user memory, CPU, or 
process limit, you can get errors when your server is heavily loaded.
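To see which per-user limits apply, a quick sketch (run these as, or after su'ing to, the server user, e.g. nobody from the example above; -u and -v support varies by shell):

```shell
# Sketch: inspect the per-user limits the Apache/CGI user runs under.
ulimit -u    # maximum number of user processes (where the shell supports -u)
ulimit -v    # maximum virtual memory in KB (where supported)
ulimit -a    # all limits at once
```

If "ulimit -u" is low relative to your expected Apache + CGI process count, that is a likely source of fork failures under load.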

BTW, remember that if you are using a Perl CGI script you need two 
processes: the Perl interpreter and the swish-e executable.

If swish-e does not have enough memory it may print the following 
message to stderr:
swish: Ran out of memory ...
This message should appear in Apache's error_log.
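A quick way to check for it (the log path is an assumption; use whatever your ErrorLog directive points at):

```shell
# Sketch: search Apache's error log for swish-e's out-of-memory message.
# /usr/local/apache/logs/error_log is an assumed default path.
grep 'Ran out of memory' /usr/local/apache/logs/error_log
```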

I have checked Apache on a Sun box to see the size of the httpd 
processes. Here is the output:
# ps -o rss -o comm -fu nobody | sort -u
1544 /usr/local/apache/bin/httpd
1554 /usr/local/apache/bin/httpd

Isn't 11 MB too high for an Apache process?
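To estimate the total memory footprint rather than per-process sizes, the ps output above can be summed (the user "nobody" and the httpd path are the ones from the example; adjust for your install):

```shell
# Sketch: total resident set size (RSS, in KB) of all httpd processes
# owned by the web-server user. "nobody" is an assumed user name.
ps -o rss= -o comm= -u nobody | awk '
  /httpd/ { total += $1 }                     # accumulate RSS column
  END { printf "total httpd RSS: %d KB\n", total }'
```

Multiplying a typical per-process RSS by the maximum number of children lets you sanity-check whether the box can actually hold a fully loaded Apache plus the CGI and swish-e processes.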

I am not a Solaris expert, but you may look at /etc/system to 
check your kernel configuration parameters. You can also see your 
kernel parameters by issuing sysdef -i.
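For example, something like the following (Solaris-only commands; "maxuprc" is Solaris's per-user process limit tunable, named here as an assumption from Solaris documentation):

```shell
# Sketch: look for process-related kernel tunables on Solaris.
sysdef -i | grep -i proc          # current kernel values mentioning "proc"
grep -i maxuprc /etc/system       # any explicit per-user process limit setting
```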

Hope this helps
Received on Wed Sep 27 16:05:56 2000