Running headless Selenium via Docker with Python/Scrapy

I am trying to use Scrapy and Selenium on a laptop running Kubuntu, but from the command line only (no X server started).

My first question: do I still need Xvfb at all?

In any case, here is what I am doing right now:

sudo service docker start
sudo service docker status
sudo docker run -it --rm --name chrome --shm-size=1024m -p=9222:9222 --cap-add=SYS_ADMIN  yukinying/chrome-headless-browser --enable-logging --v=10000

# Docker is now running. In a second SSH session I do:

Xvfb :99 &
export DISPLAY=:99

# Still in the second SSH session:

scrapy crawl weibospider

At this point I get a flood of DEBUG messages, lists of option parameters, and so on:

2017-07-09 18:37:23 [easyprocess] DEBUG: param: "['Xvfb', '-help']" 
2017-07-09 18:37:23 [easyprocess] DEBUG: command: ['Xvfb', '-help']
2017-07-09 18:37:23 [easyprocess] DEBUG: joined command: Xvfb -help
2017-07-09 18:37:24 [easyprocess] DEBUG: process was started (pid=5235)
2017-07-09 18:37:26 [easyprocess] DEBUG: process has ended
2017-07-09 18:37:26 [easyprocess] DEBUG: return code=0
2017-07-09 18:37:26 [easyprocess] DEBUG: stdout=
2017-07-09 18:37:26 [easyprocess] DEBUG: stderr=use: X [:<display>] [option]
-a #                   default pointer acceleration (factor)
-ac                    disable access control restrictions
...
2017-07-09 18:38:35 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
Unhandled error in Deferred:
2017-07-09 18:38:35 [twisted] CRITICAL: Unhandled error in Deferred:

2017-07-09 18:38:35 [twisted] CRITICAL: 
Traceback (most recent call last):
  File "/home/spidy/.local/lib/python3.5/site-packages/twisted/internet/defer.py", line 1386, in _inlineCallbacks
    result = g.send(result)
  File "/home/spidy/.local/lib/python3.5/site-packages/scrapy/crawler.py", line 76, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "/home/spidy/.local/lib/python3.5/site-packages/scrapy/crawler.py", line 99, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "/home/spidy/.local/lib/python3.5/site-packages/scrapy/spiders/__init__.py", line 51, in from_crawler
    spider = cls(*args, **kwargs)
  File "/home/spidy/var/scrapy/weibo/weibo/spiders/weibobrandspider.py", line 26, in __init__
    self.browser = webdriver.Firefox()
  File "/home/spidy/.local/lib/python3.5/site-packages/selenium/webdriver/firefox/webdriver.py", line 152, in __init__
    keep_alive=True)
  File "/home/spidy/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 98, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "/home/spidy/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 188, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/home/spidy/.local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 256, in execute
    self.error_handler.check_response(response)
  File "/home/spidy/.local/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 194, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: connection refused

My environment:

  • Python 3.5.2
  • /usr/local/bin/geckodriver
  • Docker version 17.03.1-ce, build c6d412e
  • Mozilla Firefox 54.0
  • Ubuntu 16.04.2 LTS

The script looks like this:

import scrapy
from selenium import webdriver
from pyvirtualdisplay import Display

class WeiboSpider(scrapy.Spider):
    name = "weibospider"

    def __init__(self):
        # Start a virtual framebuffer so Firefox has a display to render into
        display = Display(visible=0, size=(1200, 1000))
        display.start()

        # This is the problematic line:
        self.browser = webdriver.Firefox()

I am out of ideas. What am I doing wrong, or what am I overlooking?

Answer 1

Perhaps the problem is this:

The Docker image provides headless Chrome, i.e. it is meant to be driven with chromedriver: yukinying/chrome-headless-browser

but you are using geckodriver, which drives Firefox: /usr/local/bin/geckodriver
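
If driving that container is what you want, one option is to attach Selenium to the headless Chrome it exposes on port 9222 instead of launching Firefox. A minimal sketch, assuming chromedriver is installed on the host and the container from the question is running with port 9222 published; debuggerAddress is a standard ChromeOptions experimental option, though the keyword argument to webdriver.Chrome may differ between Selenium versions:

from selenium import webdriver

# Attach to the already-running headless Chrome inside the Docker
# container via its remote debugging port, instead of spawning a
# new (Firefox) browser process on the host.
chrome_options = webdriver.ChromeOptions()
chrome_options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")

# chromedriver still runs on the host; it only proxies WebDriver
# commands to the Chrome instance listening on port 9222.
browser = webdriver.Chrome(chrome_options=chrome_options)
browser.get("https://weibo.com")  # illustrative target URL
print(browser.title)
browser.quit()

Alternatively, if Firefox/geckodriver is what you actually want, the Docker container is unnecessary: pyvirtualdisplay starts Xvfb itself, so the spider's Display(...).start() should be enough to run webdriver.Firefox() on a machine with no X server, and there is no need to start Xvfb manually in the shell either.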
