Download URLs listed in a file using curl?

I have a file containing all the URLs I need to download. However, I need to limit the downloads to one at a time; that is, the next download should start only after the previous one has finished. Is this possible with curl, or should I use something else?

Answer 1

xargs -n 1 curl -O < your_files.txt
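
Here -n 1 makes xargs invoke curl once per URL, so the downloads run strictly one after another, and -O saves each file under its remote name. A slightly expanded sketch, assuming your_files.txt holds one URL per line:

# -L follows redirects; -f makes curl exit non-zero on HTTP errors,
# so xargs' overall exit status (123) reflects any failed download.
xargs -n 1 curl -f -L -O < your_files.txt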

Answer 2

wget(1) works sequentially by default, and has this option built in:

   -i file
   --input-file=file
       Read URLs from a local or external file.  If - is specified as file, URLs are read from the standard input.  (Use ./- to read from a file literally named -.)

       If this function is used, no URLs need be present on the command line.  If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved.  If
       --force-html is not specified, then file should consist of a series of URLs, one per line.

       However, if you specify --force-html, the document will be regarded as html.  In that case you may have problems with relative links, which you can solve either by adding "<base href="url">" to the documents
       or by specifying --base=url on the command line.

       If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html.  Furthermore, the file's location will be implicitly used as base href if none was
       specified.
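
In practice, assuming the list is in a file called urls.txt with one URL per line, a single invocation is enough, and wget fetches the URLs one after another:

# Fetch every URL listed in urls.txt, one at a time;
# -c resumes any partially downloaded files on a rerun.
wget -c -i urls.txt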

Answer 3

This can be done with curl in a shell script, something like the following, but you will need to work out the appropriate curl options yourself:

while read -r URL; do
    curl -fL -O "$URL"              # substitute whatever curl options you need
    if [ $? -ne 0 ]; then           # check the exit status if required
        echo "failed: $URL" >&2     # take appropriate action
    fi
done < file_containing_urls
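
For transient network errors, curl can also retry on its own; a minimal sketch using the built-in --retry option (the count of 3 here is arbitrary):

while read -r URL; do
    curl --retry 3 -fL -O "$URL"   # retry each download up to 3 times on transient failures
done < file_containing_urls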

Answer 4

Based on @iain's answer, but using a proper shell script:

while read url; do
  echo "== $url =="
  curl -sL -O "$url"
done < list_of_urls.txt

Because the URL is quoted, this also handles odd characters such as "&" in the URLs...

-O can be replaced with a redirect to a file, or with whatever else suits your needs; see the sketch below.
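
For instance, a sketch that writes each download to a file named after the URL's last path segment (this assumes URLs without query strings; basename is just one way to derive an output name):

while read url; do
  echo "== $url =="
  curl -sL "$url" > "$(basename "$url")"   # redirect instead of -O; output name taken from the URL
done < list_of_urls.txt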
