If you visit the website, the PDF download starts automatically: https://magazines-static.raspberrypi.org/issues/full_pdfs/000/000/308/original/MagPi89.pdf?1576682030
I have been writing to the MagPi magazine, and they explained to me that the number 308 in full_pdfs/000/000/308 is a random number.
Long story short: I have to fix this problem in my downloader https://github.com/joergi/MagPiDownloader/issues/15 without copy-pasting every single issue, so following the redirect automatically would be nice.
But everything I have tried:
curl -JLO https://magpi.raspberrypi.org/issues/89/pdf
and
wget --user-agent=Mozilla --content-disposition -E -c https://magpi.raspberrypi.org/issues/80/pdf
did not work.
Answer 1
Inspecting the html of that site shows that the "download pdf" link uses a meta element with the http-equiv="refresh" attribute to redirect to the real link. While tools like curl or wget can handle standard http redirects, they do not parse or interpret html, so they cannot follow this type of redirect. Since we are working in the shell, one possible solution is to download the page with curl or wget and filter the html to see whether it contains http-equiv="refresh".
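As a minimal sketch of that idea (using grep and sed here, whereas the script further down uses gawk), and assuming the intermediate page contains a single meta refresh element with http-equiv before the content attribute, the flow would be:
#!/usr/bin/env zsh
# the page at .../issues/89/pdf is assumed to contain something like:
#   <meta http-equiv="refresh" content="0; url=https://...MagPi89.pdf?1576682030">
page=$(curl -fs https://magpi.raspberrypi.org/issues/89/pdf) || exit 1
# keep only the url=... part of the refresh element (case-insensitive), then strip the url= prefix
url=$(print -r -- "$page" | grep -oiE 'http-equiv="refresh"[^>]*url=[^">]*' | sed 's/.*[uU][rR][lL]=//')
# save it under its basename with the ?timestamp query removed
[[ -n $url ]] && curl -f -o ${${url##*/}%%\?*} "$url"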
Since the site no longer seems to put new issues on the openly accessible https://www.raspberrypi.org/magpi-issues/ (there is nothing past issue #86), the way your script currently works (essentially a static database of pdf links) seems counter-intuitive. The special-issues download script appears to cover the same set of pdfs as those listed at https://magpi.raspberrypi.org/books, but it is missing the newest ones.
So I had a go at making a more dynamic script. It looks at /books & /issues to see what is available/newest. You are welcome to use any of it if you wish. It uses zsh, gawk & curl:
#!/usr/bin/env zsh
typeset -aU standard book ignore directory
typeset all latest list_books all_books baseurl="https://magpi.raspberrypi.org/"
setopt extendedglob
function books() {
  typeset -aU books filter minus
  typeset i
  >&2 echo "getting list of available books..."
  # scrape the /books page for hrefs of the form /books/<slug>/pdf
  books=( $(2>/dev/null curl -fs ${baseurl}books | gawk -v 'RS=href="' -F '"' \
    '$1 ~ /^[/]books[/][^/]*[/]pdf$/{split($1,a,"/"); print "books/"a[3]}') )
  if [[ -z $books ]]
  then
    >&2 echo "unable to find any books"
    return 0
  fi
  case $1 in
    (list)
      printf '\t%s\n' $books
      return 0
    ;;
    (all)
      >&2 echo "Attempting to download all the books, this may take a while ..."
      >&2 printf '\t%s\n' $books
      get_pdfs $books
      return 0
    ;;
    (*)
      # arguments starting with '-' exclude matching books, anything else includes them
      for i in $@
      do
        case $i[1] in
          (-)
            minus+=( ${(M)books:#*${i:1}*} )
            books=( ${books:#*${i:1}*} )
          ;;
          (*)
            filter+=( ${(M)books:#*$i*} )
          ;;
        esac
      done
      if [[ -z $filter && -n $minus ]]
      then
        filter=( ${books:|minus} )
      else
        filter=( ${filter:|minus} )
      fi
      if [[ -z $filter ]]
      then
        >&2 echo "books: no matches found (try book:list)"
        return 0
      else
        >&2 printf '\t%s\n' $filter
        get_pdfs $filter
      fi
    ;;
  esac
}
function issues() {
  typeset -aU issues
  typeset max i
  >&2 printf '%s' "finding most recent issue # ..."
  # take the first /issues/<number> href on the /issues page and keep its trailing digits
  if ! max=${(M)$(2>/dev/null curl -fs https://magpi.raspberrypi.org/issues | gawk -v 'RS=href="' -F '"' \
    '$1 ~ /^[/]issues.*[0-9]$/{a=$1;exit}
    END{if(a){print a} else{exit 1}}')%%[0-9]#}
  then
    >&2 echo "couldn't determine what number the latest issue is."
    return 1
  fi
  >&2 echo "it's $max"
  if [[ $1 = all ]]
  then
    >&2 echo "Attempting to download all the issues, this may take a while ..."
    get_pdfs issues/{1..$max}
    return 0
  fi
  if [[ -n $latest ]]
  then
    issues+="issues/$max"
  fi
  for i in $@
  do
    if [[ $i -le $max ]]
    then
      issues+="issues/$i"
    else
      >&2 echo "issues/$i is larger than $max, ignoring"
    fi
  done
  if [[ -z $issues ]]
  then
    >&2 printf '\t%s\n' "there are no issues to download"
    return 0
  fi
  >&2 printf '\t%s\n' $issues
  get_pdfs $issues
  return 0
}
function get_pdfs() {
  typeset url i
  for i in $@
  do
    # the .../pdf page redirects via a meta http-equiv="refresh" element; pull the real pdf url out of it
    if ! url=$(2>/dev/null curl -fs "$baseurl$i/pdf" | \
      gawk -v 'RS=http-equiv="[rR]efresh".*[0-9 ;]*[uU][rR][lL]=' -F '"' \
      '$1 ~ /^http.*[.]pdf/{a=$1;exit}
      END{if(a){print a} else{exit 1}}')
    then
      >&2 echo "unable to extract url for $i"
      continue
    fi
    # skip if a file with this name (query string stripped) already exists
    if [[ -e ${directory:+$directory[-1]/}${${url##*/}%%\?*} ]]
    then
      >&2 echo "looks like $i was already downloaded to ${directory:+$directory[-1]/}${${url##*/}%%\?*}"
      continue
    fi
    curl -f --create-dirs -o ${directory:+$directory[-1]/}${${url##*/}%%\?*} $url
  done
}
if ! zparseopts -D 'd:=directory'
then
  return 1
fi
if [[ -z $@ ]]
then
  >&2 printf '\t%s\n' \
    "[-d DIRECTORY]" \
    "[NUMBER] ... download issue by number" \
    "[latest] download most recent issue" \
    "[all] download all issues" \
    "[book:list] list all books" \
    "[book:WORD] ... download books matching WORD" \
    "[book:-WORD] ... don't download books matching WORD" \
    "[book:all] download all books" \
    "... no args specified, nothing to do ... exiting"
  return 0
fi
if [[ -n $directory ]]
then
  if ! mkdir -p $directory[-1]
  then
    return 1
  fi
  >&2 echo "files will be saved in $directory[-1]"
fi
for (( i=1; i<=${#@}; i++ ))
do
  case ${@[i]} in
    (all) all=all ;; #download all standard issues
    (latest) latest=1 ;; #download most recent issue
    (book:list) list_books=list ;; #print a list of books
    (book:all) all_books=all ;; #download all books
    (book:[[:alnum:]-]##) book+=( ${@[i]#*:} ) ;; #download matching books (or books not matching if book:- is used)
    ([0-9]##) standard+=${@[i]} ;; #download standard issue by number
    (*) ignore+=${@[i]} ;; #tell user about unused args
  esac
done
if [[ -n $list_books || -n $all_books || -n $book ]] #book argument was specified - get books
then
  books ${list_books:-${all_books:-$book}}
fi
if [[ -n $standard || -n $latest || -n $all ]] #issue args - get issues
then
  issues $all $standard
fi
if [[ -n $ignore ]]
then
  >&2 printf '\t%s\n' "the following arguments were ignored:" $ignore
fi
return 0
There are some comments near the end of the script that will hopefully explain its usage. A few additional notes:
- Valid issue-related arguments are: latest, all, or a NUMBER. When calling the script you can use shell expansion to specify a range, e.g. from bash/zsh, to download issues 12-15, specify {12..15}, which should expand to the numbers 12 through 15
- Valid book-related arguments are: book:list, book:all, book:WORD to match, or book:-WORD to exclude, e.g. book:{begin,-3rd} will download the "Beginners" series of books except for any "3rd" editions
- The -d DIR option saves the files to the specified directory
- Any combination of arguments should be valid, and invalid arguments are ignored (example invocations follow this list)
- Resuming downloads is not attempted; you could do that by adding the -C - option to the final curl command in get_pdfs() and removing the continue from the earlier file-clash test
I may have missed something, so use at your own risk!
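For instance, assuming the script were saved as magpi.zsh and made executable (the file name here is just a placeholder), calls could look like this:
# download the latest issue plus issues 12 to 15 into ./magpi
./magpi.zsh -d magpi latest {12..15}
# list the available books, then fetch the "Beginners" books except any "3rd" editions
./magpi.zsh book:list
./magpi.zsh book:{begin,-3rd}
# download every issue and every book
./magpi.zsh all book:all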
An example of using curl to "follow" an html redirect can be found in the get_pdfs() function above:
url=$(curl -fs <url_of_document_using_html_redirect> | \
  gawk -v \
  'RS=http-equiv="[rR]efresh" *content="[0-9 ;]*[uU][rR][lL]=' \
  -F '"' \
  '/^http/{print $1;exit}')
We should then be able to download "$url".
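For example, with the same curl call that get_pdfs() uses (zsh syntax; the leading path and the trailing ?timestamp are stripped from the saved file name):
curl -f --create-dirs -o ${${url##*/}%%\?*} $url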