
Re: Running swish with mod_perl

From: Bill Moseley <moseley(at)not-real.hank.org>
Date: Thu May 02 2002 - 19:14:43 GMT
At 11:23 AM 05/02/02 -0700, Alex Lyons (Serco Assurance Winfrith, Tel 01305-202368) wrote:
>>Doesn't the memory just get shifted to the backend processes?  Kind of like
>>using the standard config of a proxy in front of mod_perl?  Or am I mixing
>>that up with SpeedyCGI?
>
>I suppose so, but the memory has got to get loaded sometime.  So there's a
>single perl/swish-e-lib process (I don't need more than one) sitting on the
>server handling all search requests from the httpds.  Think of it as a
>persistent swish-e process.  If it dies, the FastCGI handler just starts
>another one when the next request comes in.
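
Just so I have the picture right, I imagine that persistent process as a
request loop roughly like the sketch below.  This assumes the standard
CGI::Fast module; run_swish_query() is only a hypothetical stand-in for the
call into the swish-e library:

    #!/usr/bin/perl -w
    # Minimal sketch of a persistent FastCGI search process.
    # run_swish_query() is a hypothetical placeholder for the
    # actual swish-e library call.
    use strict;
    use CGI::Fast;

    # Load the expensive stuff (index handles, etc.) once, here,
    # before the request loop; it stays in memory across requests.

    while ( my $q = CGI::Fast->new ) {
        my $query   = $q->param('query') || '';
        my @results = run_swish_query( $query );  # hypothetical helper

        print $q->header('text/html');
        print "<p>$_</p>\n" for @results;
    }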

You say you only need one FastCGI process.  But that can only process one
request at a time, correct?  

The setup sounds a lot like mod_perl with a reverse proxy, but maybe
lighter weight.
  
Normally under mod_perl you set up a reverse proxy that forwards requests
to the back end mod_perl server.  The idea is that you can have many httpd
processes running on the front end server accepting requests (because they
are "light weight", small processes), and those many process are served by
just a few mod_perl processes on the back end.  It's a memory saving
technique (and a more modular and flexible configuration).  You don't want
people on slow connections holding big processes open.
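
For anyone following along, the front end's side of that setup is usually
just a couple of mod_proxy directives like these (a rough sketch; the port
number and the /search path are made-up examples):

    # Front-end httpd.conf: lightweight server, no mod_perl loaded.
    # Forward anything under /search to the back-end mod_perl server.
    ProxyPass        /search http://localhost:8080/search
    ProxyPassReverse /search http://localhost:8080/search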

So that's similar to what you are describing, right?  You could have a back
end mod_perl server that is only one or two processes.  More processes are
spawned when needed, and processes are restarted if they die.
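
On the back end that spawning and restarting is just Apache's normal
process management; as a rough example, with made-up numbers:

    # Back-end (mod_perl) httpd.conf: keep the pool small.
    # Apache forks more children as load requires, up to MaxClients,
    # and replaces any child that dies.
    StartServers      1
    MinSpareServers   1
    MaxSpareServers   2
    MaxClients        5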

Does FastCGI require mod_fastcgi to be installed in Apache (or something
equivalent for whatever server is running)?  I assume that's similar to the
proxy/mod_perl setup.
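
If I remember the mod_fastcgi docs right, the Apache side is a few
directives along these lines (the module path and script name are only
placeholders):

    # httpd.conf: load mod_fastcgi and register the search script
    # as a managed FastCGI application.  Paths are examples only.
    LoadModule fastcgi_module libexec/mod_fastcgi.so
    AddHandler fastcgi-script .fcgi
    FastCgiServer /usr/local/apache/fcgi-bin/search.fcgi -processes 1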


I guess it's time for me to read the FastCGI site again...


-- 
Bill Moseley
mailto:moseley@hank.org
Received on Thu May 2 19:14:48 2002