On Wed, 26 Mar 2003, Brian Tingle wrote:
> yes, using ab it never happens at -c1; but if I set -c10 and do a new
> search or clear CGI::Cache's cache it happens every time. ~10 to 15
> messages, randomly flipping between the -3 error the -5 error and the
> memory error.
Interesting. Do you have a small test case? Do you get the same thing if
you index something like the Apache docs or /usr/share/doc?
> One of the errors that pops up is
> "err: Ran out of memory (could not allocate 4294967169 more bytes)!"
> Is that coming from SWISH?
Yep, that's a swish-e error message.
This is just a guess, but assuming that happens when it's trying to
uncompress a property, swish is reading a table entry for a given record's
property. The table says where the property is, its size on disk, and its
uncompressed size. Given that huge memory request, it would seem that
swish is reading the wrong uncompressed size from the table for some
reason and then trying to allocate that amount of memory.
In your code I assume you keep the swish handle open between requests?
You might try recreating it on every request (it will be slower), but that
might indicate whether it's a problem with data not getting cleaned up
between requests.
> Could it be a memory thing? The index file is 27M * 10 requests coming in
> at once (all before CGI::Cache has cached anything). The prop file is 9M
> -- I think it only has 2 properties in it.
No, I think it's some other problem because of that error message.
> Does the library try to lock the property file?
I assume you are not running mod_perl on Apache 2.
> >By the way, what kind of requests per second are you seeing with using the
> I'm using CGI::Cache too, so it's hard to say.
Doesn't apache bench report that?
Bill Moseley email@example.com
Received on Wed Mar 26 23:46:16 2003