Ah great, this fixed the problem, thanks! I think my script isn't working
quite the way it should in how it indexes the files, but this incremental
indexing works around it.
One more question: is there a way to retrieve a particular document through
the Perl API by passing it the internal file number, or to search by URL? I
understand how to search using keywords, but I didn't know if it supports
searching on other attributes. Thanks!
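For what it's worth, a hypothetical sketch of searching by URL/path with the SWISH::API Perl module: swish-e stores the document path as the swishdocpath property, and (as an assumption here) it only becomes *searchable* if the index was built with "MetaNames swishdocpath" in the config. The index filename and URL below are placeholders.

```perl
use strict;
use warnings;
use SWISH::API;

# Assumes index.swish-e was built with "MetaNames swishdocpath"
my $swish = SWISH::API->new('index.swish-e');

# Query the swishdocpath metaname for a specific URL/path
my $results =
    $swish->Query('swishdocpath=(http://example.com/page.html)');

while ( my $r = $results->NextResult ) {
    # Stored properties can be read back for each hit
    printf "%s  %s\n",
        $r->Property('swishdocpath'),
        $r->Property('swishtitle');
}
```

I don't believe the API exposes lookup by the internal file number directly, so searching on a metaname like this is probably the closest equivalent, but someone on the list may know better.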
Quoting Bill Moseley <email@example.com>:
> On Mon, Aug 30, 2004 at 01:14:11PM -0700, Jason Camp wrote:
> > Ok cool, thanks for your help on this! One more question - I have a
> > spider program that will index documents, and each document it passes
> > to the swish-e program via stdin. It looks like for each document, it
> > rewrites the index each time. Does this make sense? Is there any way
> > to tell it to add a page to the index instead of overwriting it? Or am
> > I missing something? Thanks a lot for your help!
> Sounds like you are missing something. Swish-e mostly doesn't do
> incremental indexing. There's an option to build swish-e with a
> different backend data store (btree) where you can add files to an
> existing index. Old files are not removed, so the index will
> continue to grow (IIRC). Run ./configure --help and look for the
> enable-incremental option.
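> As a rough sketch of what that build looks like (a configuration
> fragment, not a tested recipe - the exact flag name is whatever
> ./configure --help reports on your version):
>
>   ./configure --enable-incremental
>   make
>   make install
>
> After that, the btree backend should let you add documents to an
> existing index rather than rebuilding it from scratch, with the
> caveat above that replaced files are not purged from the index.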
> Bill Moseley
Received on Mon Aug 30 18:38:38 2004