Download an entire website with wget (or other tools), including all of its downloadable content

I'm trying to download Winamp's website in case they shut it down. I need to get everything.

I tried once, and wget downloaded the site itself fine, but when I try to download any file from it, I get a file with no extension or name. How can I fix that?

Answer 1

You probably want to mirror the site completely, though be aware that some links may genuinely be dead. You can use HTTrack or wget:

wget -r http://winapp.com # or whatever

To use HTTrack, install it first:

sudo apt-get install httrack

Then run it with an external-link depth of 1:

httrack --ext-depth=1 http://winapp.com

That will fetch the winapp CDN files, but it won't recurse into files linked from files all over the rest of the internet.
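
If you need finer control over where the mirror goes and what gets crawled, a sketch like the following should work (the output directory and the "+" allow-filter are my assumptions, not part of the original answer):

# -O sets the local output path; the "+" pattern is an allow-filter that
# keeps the crawl on the site's own host (standard HTTrack filter syntax)
httrack "http://winapp.com" -O ./winapp-mirror "+*.winapp.com/*" --ext-depth=1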

Answer 2

This is the most effective and simplest way I've found to create a complete mirror of a website that can be viewed locally, with working scripts, stylesheets, and so on:

wget -mpEk "url"

It's better to use -m (mirror) rather than -r, because it downloads assets intuitively and you don't have to specify a recursion depth; mirroring usually figures out the right depth to return a working site.

The -p -E -k flags ensure you aren't downloading entire pages that merely get linked to (e.g. a link to a Twitter profile pulling in Twitter's code), while still fetching all the prerequisite files (JavaScript, CSS, etc.) the site needs. The correct site structure is preserved as well (instead of sometimes ending up with one big .html file with embedded scripts/stylesheets).
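
For reference, here is the same command spelled out with long option names, which map one-to-one onto -m, -p, -E and -k (the URL is a placeholder):

# mirror + page requisites + .html extension fixup + local link rewriting
wget --mirror --page-requisites --adjust-extension --convert-links "https://example.com"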

It's fast, I've never had to throttle anything to make it work, and the resulting directory looks better than what a plain -r "url" produces, making it easier to see how the site is put together, especially if you're reverse-engineering it for educational purposes.

If your IP ends up blocked by the site, or downloads keep stalling, try running the same command with --wait="duration" enabled. This adds a delay between requests so you don't trip any DDoS flags on their end.
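
A sketch of the same mirror command with a delay added (the 5-second value is an assumption; tune it to the site):

# wait 5 seconds between successive requests to stay polite
wget -mpEk --wait=5 "https://example.com"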

Note that if you're downloading a web app, or a site with lots of JavaScript compiled from TypeScript, you won't get the TypeScript that was originally written, only what was compiled and shipped to the browser. Keep that in mind if the site is script-heavy.

Answer 3

wget -p -k http://somewebsite.com

man wget

-p
--page-requisites
   This option causes Wget to download all the files that are
   necessary to properly display a given HTML page.  This includes
   such things as inlined images, sounds, and referenced stylesheets.

   Ordinarily, when downloading a single HTML page, any requisite
   documents that may be needed to display it properly are not
   downloaded.  Using -r together with -l can help, but since Wget
   does not ordinarily distinguish between external and inlined
   documents, one is generally left with "leaf documents" that are
   missing their requisites.

   For instance, say document 1.html contains an "<IMG>" tag
   referencing 1.gif and an "<A>" tag pointing to external document
   2.html.  Say that 2.html is similar but that its image is 2.gif and
   it links to 3.html.  Say this continues up to some arbitrarily high
   number.

   If one executes the command:

           wget -r -l 2 http://<site>/1.html

   then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded.
   As you can see, 3.html is without its requisite 3.gif because Wget
   is simply counting the number of hops (up to 2) away from 1.html in
   order to determine where to stop the recursion.  However, with this
   command:

           wget -r -l 2 -p http://<site>/1.html

   all the above files and 3.html's requisite 3.gif will be
   downloaded.  Similarly,

           wget -r -l 1 -p http://<site>/1.html

   will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded.  One
   might think that:

           wget -r -l 0 -p http://<site>/1.html

   would download just 1.html and 1.gif, but unfortunately this is not
   the case, because -l 0 is equivalent to -l inf---that is, infinite
   recursion.  To download a single HTML page (or a handful of them,
   all specified on the command-line or in a -i URL input file) and
   its (or their) requisites, simply leave off -r and -l:

           wget -p http://<site>/1.html

   Note that Wget will behave as if -r had been specified, but only
   that single page and its requisites will be downloaded.  Links from
   that page to external documents will not be followed.  Actually, to
   download a single page and all its requisites (even if they exist
   on separate websites), and make sure the lot displays properly
   locally, this author likes to use a few options in addition to -p:

          wget -E -H -k -K -p http://<site>/<document>

   To finish off this topic, it's worth knowing that Wget's idea of an
   external document link is any URL specified in an "<A>" tag, an
   "<AREA>" tag, or a "<LINK>" tag other than "<LINK
   REL="stylesheet">".

  ==================================================================

 -k
 --convert-links
   After the download is complete, convert the links in the document to make them suitable for local viewing.  This affects not only the visible hyperlinks, but any part of the document that
   links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

   Each link will be changed in one of the two ways:

   ·   The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.

       Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif.  This kind of transformation
       works reliably for arbitrary combinations of directories.

   ·   The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to.

       Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.

   Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet
   address rather than presenting a broken link.  The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

   Note that only at the end of the download can Wget know which links have been downloaded.  Because of that, the work done by -k will be performed at the end of all the downloads.

  --convert-file-only
   This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to as the "basename", although we avoid that term
   here in order not to cause confusion.

   It works particularly well in conjunction with --adjust-extension, although this coupling is not enforced. It proves useful to populate Internet caches with files downloaded from different
   hosts.

   Example: if some link points to //foo.com/bar.cgi?xyz with --adjust-extension asserted and its local destination is intended to be ./foo.com/bar.cgi?xyz.css, then the link would be converted
   to //foo.com/bar.cgi?xyz.css. Note that only the filename part has been modified. The rest of the URL has been left untouched, including the net path ("//") which would otherwise be
   processed by Wget and converted to the effective scheme (ie. "http://").

Sorry for my bad indentation :(

Answer 4

If you want to download everything linked from the URL, you can try this:

wget -r -U "BrowserName" "Url"

You may also want to use --wait="duration" to avoid getting your IP blocked. Requesting page after page with no wait period looks suspicious; it's not human-like behavior.
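
Putting both ideas together, a sketch might look like this (the user-agent string and the 2-second wait are illustrative assumptions):

# recursive crawl with a browser-like user agent and a polite delay
wget -r --wait=2 -U "Mozilla/5.0 (X11; Linux x86_64)" "https://example.com"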
