On Wed, Jan 12, 2005 at 07:20:05AM -0800, Walter Lewis wrote:
> I believe the standard practice is to set up a script that generates
> "HTML" pages on the fly (without writing them to the filesystem). These
> are then fed to the spider program (I haven't needed to touch the spider
> code at all.)
> You end up with something like this in the conf (the indexing
> configuration) file:
> IndexDir spider.pl ./NewsDB2.pl
You don't need the spider for this. The spider fetches documents by making
HTTP requests, which is unnecessary when you can read the database directly.
There's an example script called MySQL.pl that fetches the records
from the database and feeds them directly to swish-e via STDOUT.
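For illustration, here's a hedged sketch of that direct approach in Python
(the shipped example is Perl; the record paths and bodies below are made up,
standing in for rows fetched from a real database). It emits documents in
swish-e's -S prog input format: a small header block, a blank line, then
exactly Content-Length bytes of body.

```python
#!/usr/bin/env python3
# Sketch of a swish-e "-S prog" feeder script.  The hard-coded records
# stand in for rows a real script would fetch with a SQL query.

import sys

# Hypothetical rows: (virtual path for the index, HTML body).
RECORDS = [
    ("NewsDB2.pl?id=1", "<html><body>First article</body></html>\n"),
    ("NewsDB2.pl?id=2", "<html><body>Second article</body></html>\n"),
]

def format_doc(path, body):
    """Build one prog-format document: headers, blank line, body.

    Content-Length must be the exact byte count of the body, because
    swish-e reads exactly that many bytes before expecting the next
    header block.
    """
    data = body.encode("utf-8")
    header = (
        "Path-Name: %s\n" % path
        + "Content-Length: %d\n" % len(data)
        + "Document-Type: HTML*\n\n"
    )
    return header + body

for path, body in RECORDS:
    sys.stdout.write(format_doc(path, body))
```

swish-e then runs this script itself when invoked with -S prog, so no
temporary files ever touch the filesystem.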
Bryon, keep in mind that swish-e is not a relational database, so you
won't be able to run the kinds of queries you are used to.
Also be aware that swish-e won't likely scale as well as your
database. Make sure you use -e (economy mode) when indexing, or you may
run out of memory.
Received on Wed Jan 12 07:29:01 2005