Thank you for your reply!
I just tried the spider.pl method you suggested and I added an external link
"http://www.amazon.com" to the list, but the spider still does not index it.
What's more, it still does not index any pages outside the local server.
My spider config file:
@servers = (
    {
        base_url  => '
        email     => 'email@example.com',
        # other spider settings described below
        max_depth => 1,
    },
);
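If it helps to compare: spider.pl reads @servers as a list of hash references, one per site to crawl, and by default it only follows links whose host matches that entry's base_url (or one of its same_hosts aliases). So an external URL such as http://www.amazon.com is only crawled if it gets its own server entry rather than being appended to an existing one. A sketch, with illustrative URLs and settings that are assumptions, not taken from the original message:

```perl
# Hypothetical spider.pl config sketch (not the poster's actual file).
# Each hashref is one site; spider.pl stays on that entry's host unless
# same_hosts widens it, so an external site needs its own entry.
@servers = (
    {
        base_url  => 'http://digital.lib.lehigh.edu/',
        email     => 'email@example.com',
        max_depth => 1,
    },
    {
        # external site listed as its own server entry
        base_url  => 'http://www.amazon.com/',
        email     => 'email@example.com',
        max_depth => 1,
    },
);
1;  # a spider.pl config file must end by returning a true value
```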
My swish config file:
# Use spider.pl as the external program:
# And pass the name of the spider config file to the spider:
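The directives those two comments normally introduce seem to have been lost from the message; a sketch of what they usually look like, assuming the spider config file is named spider.config (the file names here are guesses, not from the original):

```
# Use spider.pl as the external program:
IndexDir spider.pl
# And pass the name of the spider config file to the spider:
SwishProgParameters spider.config
```

With this in place, indexing is run in external-program mode, e.g. `swish-e -c swish.config -S prog`, and swish-e indexes whatever spider.pl emits.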
On Thu, Mar 26, 2009 at 5:23 PM, Peter Karman <firstname.lastname@example.org> wrote:
> Zhou Xiang wrote on 03/26/2009 03:29 PM:
> > Hi David,
> > Thank you for your reply!
> > I tested it again today. It shows that the crawler can only index the
> > webpages within "http://digital.lib.lehigh.edu". It cannot crawl the
> > pages on "rust.cc.lib.lehigh.edu" or any other websites, even though I
> > used URLs instead of queries.
> > Any ideas about it?
> Don't use the old spider.
> Use spider.pl instead with -S prog.
> See this documentation:
> Note that with spider.pl there are 2 config files: 1 for swish-e, and 1
> for spider.pl.
> Your swish-e config file can remain unchanged with the exception of
> MaxDepth 2
> TmpDir /usr/local/swish-e-2.4.5/tmp
> since those are ignored with the -S prog method.
> Peter Karman . peter(at)not-real.peknet.com . http://peknet.com/
Users mailing list
Received on Fri Mar 27 12:50:43 2009