I am having trouble with the following wget command:
wget -nd -r -l 10 http://web.archive.org/web/20110726051510/http://feedparser.org/docs/
It should recursively download all of the documents linked from the original site, but it only downloads two files (index.html and robots.txt).
How can I make the recursive download of this site work?
Answer 1
wget respects the robots.txt standard by default, just as search-engine crawlers do, and archive.org's robots.txt disallows the entire /web/ subdirectory. To override this, use -e robots=off:
wget -nd -r -l 10 -e robots=off http://web.archive.org/web/20110726051510/http://feedparser.org/docs/
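If you do not want to pass the flag on every invocation, the same setting can be made persistent in your wgetrc file (a minimal sketch; ~/.wgetrc is the usual location but may differ on your system):
robots = off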
Answer 2
$ wget --random-wait -r -p -e robots=off -U Mozilla \
http://web.archive.org/web/20110726051510/http://feedparser.org/docs/
This recursively downloads the content of the URL.
--random-wait - vary the time between requests randomly, from 0.5 to 1.5 times the --wait value, to look less like an automated crawler.
-r - turn on recursive retrieving.
-e robots=off - ignore robots.txt.
-U Mozilla - set the "User-Agent" header to "Mozilla". Though a better choice is a real User-Agent like "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729)".
Some other useful options (combined in the example after this list) are:
--limit-rate=20k - limit the download speed to 20 KB/s.
-o logfile.txt - write log messages to logfile.txt instead of the console.
-l 0 - remove the recursion depth limit (the default is 5).
--wait=1h - be sneaky, download one file every hour.
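Putting several of these together, a sketch of a gentler recursive download might look like the following (the log file name is just a placeholder):
$ wget -r -l 0 -p -e robots=off -U Mozilla --limit-rate=20k --random-wait -o logfile.txt \
    http://web.archive.org/web/20110726051510/http://feedparser.org/docs/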