ok, so I am impatient (no news there). With 2.4.4 and -e it does make
it through eventually. It seems odd that it hangs for about 30 minutes
midway (it doesn't do this on other, wimpier boxes), but it does finish.
I am now trying 2.4.4 with -e and incremental. 2.4.3 would dump core on
this data set under these conditions; if 2.4.4 does too, I will move
forward with debugging that. I sort of need incremental now.
One question about debugging with gdb: what do I do :)? Sorry, I know how
to do gdb swish-e, and then run <switches and whatnot>, but what do I do
after the crash to get more info?
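From what I've read, is it just something like the following? (A sketch of
generic gdb commands, nothing swish-e specific; the switches after "run"
are placeholders for whatever I'd normally pass.)

```
gdb swish-e
(gdb) run <switches and whatnot>
  ... crash happens, gdb stops at the faulting point ...
(gdb) bt              # backtrace: the call stack at the moment of the crash
(gdb) frame 2         # select one of the frames listed by bt
(gdb) info locals     # local variables in the selected frame
(gdb) print some_var  # examine a specific variable by name
```

I'm guessing the backtrace (bt) output is the main thing that would be
useful for debugging, but correct me if there's more to grab.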
Finally, why do I need to use -e when I have so many resources? When
swish-e gave that out-of-memory error, I still had over 2G totally free
according to top. Sorry that I don't understand a lot of this stuff, and
feel free to direct me to RTFM...
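Could it be a per-process limit rather than total RAM? A single process is
bounded by its own address space and by shell resource limits, not by what
top shows free for the whole box. This is what I'd check (generic shell
commands; I'm only guessing this is the cause):

```shell
# Limits that can trigger "out of memory" for one process even when
# the machine has gigabytes free overall:
ulimit -v   # max virtual memory per process, in KB ("unlimited" or a number)
ulimit -d   # max data segment size
ulimit -a   # all limits for this shell at once
```

If swish-e is a 32-bit binary, it would also be capped at a few GB of
address space no matter what top says, which might be why -e (which keeps
the in-memory structures smaller) is needed on a big file set.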
thanks as always,
On Thu, 19 Oct 2006, Bill Moseley wrote:
> On Thu, Oct 19, 2006 at 05:24:26PM -0700, brad miele wrote:
>> This is a new out of memory adventure, and I am getting ready to undertake
>> the gdb stuff, but here is the scenario...
>> I installed 2.4.4 on a new machine (never had swish-e on it) and tried
>> indexing my 900K fileset. The process gets about 2/3 of the way through
>> checking dirs when it fails with:
>> err: Ran out of memory (could not allocate 262144 more bytes)!
> Brad, is this only with 2.4.4? If we knew it was something between the
> two versions it would likely help in tracking it down.
> I would think you would need -e whenever indexing that many files --
> but with 4G ram maybe that's not the issue.
> Bill Moseley
Received on Thu Oct 19 19:25:32 2006