Can't create a crontab job for my Scrapy program

I wrote a small Python scraper (using the Scrapy framework). The scraper needs headless browsing... I am using ChromeDriver.

Since I run this code on an Ubuntu server without any GUI, I had to install Xvfb to be able to run ChromeDriver on the server (I followed this guide).

Here is my code:

import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


class MySpider(scrapy.Spider):
    name = 'my_spider'

    def __init__(self):
        # self.driver = webdriver.Chrome(ChromeDriverManager().install())
        chrome_options = Options()
        chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        self.driver = webdriver.Chrome('/usr/bin/chromedriver', chrome_options=chrome_options)

I can run the above code from the Ubuntu shell, and it executes without any errors:

ubuntu@ip-1-2-3-4:~/scrapers/my_scraper$ scrapy crawl my_spider

Now I want to set up a cron job to run the above command daily:

# m h  dom mon dow   command
PATH=/usr/local/bin:/home/ubuntu/.local/bin/
05 12 * * * cd /home/ubuntu/scrapers/my_scraper && scrapy crawl my_spider >> /tmp/scraper.log 2>&1

But the crontab job gives me the following error:

Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 192, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 196, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
    _inlineCallbacks(None, g, status)
--- <exception caught here> ---
  File "/home/ubuntu/.local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 86, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/crawler.py", line 98, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/scrapy/spiders/__init__.py", line 19, in from_crawler
    spider = cls(*args, **kwargs)
  File "/home/ubuntu/scrapers/my_scraper/my_scraper/spiders/spider.py", line 27, in __init__
    self.driver = webdriver.Chrome('/usr/bin/chromedriver', chrome_options=chrome_options)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
    desired_capabilities=desired_capabilities)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
    self.start_session(capabilities, browser_profile)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
    response = self.execute(Command.NEW_SESSION, parameters)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
  (unknown error: DevToolsActivePort file doesn't exist)
  (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
  (Driver info: chromedriver=2.41.578700 (2f1ed5f9343c13f73144538f15c00b370eda6706),platform=Linux 5.4.0-1029-aws x86_64)

Update

This answer helped me solve the problem (though I don't quite understand why):

I ran echo $PATH in my Ubuntu shell and copied the value into the crontab:

PATH=/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
05 12 * * * cd /home/ubuntu/scrapers/my_scraper && scrapy crawl my_spider >> /tmp/scraper.log 2>&1

Note: since I have already created a bounty for this question, I will happily award it to any answer that can explain why changing PATH solves the problem.

Answer 1

Cron's environment: this is the cause of almost every case of a job failing to run under cron.

Cron always runs with a mostly empty environment. HOME, LOGNAME and SHELL are set, plus a very limited PATH. It is therefore advisable to use full paths to executables, and to export any variables your script needs, when using cron.
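That minimal environment can be approximated by hand (a sketch; the exact default PATH varies by cron implementation and distribution, /usr/bin:/bin is a common value):

```shell
#!/bin/sh
# Start from an empty environment (env -i) and add only the variables
# cron typically provides, then inspect what a job would actually see.
env -i HOME="$HOME" LOGNAME="$LOGNAME" SHELL=/bin/sh PATH=/usr/bin:/bin \
    /bin/sh -c 'env | sort'
```

Any tool outside /usr/bin and /bin (such as scrapy installed under /home/ubuntu/.local/bin) is simply invisible to a job running in this environment.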

You can also:

  • set the same environment variables you use in your interactive shell

  • emulate cron's environment, by temporarily adding the following entry to your crontab and waiting a minute so the cron environment is saved to ~/cronenv (you can then remove the entry):

    * * * * * env > ~/cronenv
    

    then test-run a shell with that environment (by default SHELL=/bin/sh):

    env - $(cat ~/cronenv) /bin/sh
    
  • force the crontab job to run

Also, you cannot use variable substitution as you would in a shell: a declaration like PATH=/usr/local/bin:$PATH is interpreted literally.
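For instance, in a crontab the following two assignments behave very differently (a hypothetical fragment; cron performs no expansion on the right-hand side):

```shell
# Crontab environment assignments are taken literally, with no shell expansion:
PATH=/usr/local/bin:$PATH           # WRONG: PATH now contains the literal text "$PATH"
PATH=/usr/local/bin:/usr/bin:/bin   # correct: write the full value out explicitly
```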

Answer 2

The commands readlink, dirname and cat could not be found, because /bin was not included in the PATH environment variable.

Explanation

unknown error: Chrome failed to start: exited abnormally (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)

Try setting PATH=/usr/local/bin:/home/ubuntu/.local/bin/ and then executing /usr/bin/google-chrome --no-sandbox --headless --disable-dev-shm-usage; you will get:

/usr/bin/google-chrome: line 8: readlink: command not found
/usr/bin/google-chrome: line 10: dirname: command not found
/usr/bin/google-chrome: line 45: exec: cat: not found
/usr/bin/google-chrome: line 46: exec: cat: not found
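The same failure can be reproduced without Chrome at all: run any command that needs coreutils under the short PATH from the question (the env -i call below is only used to build a clean test environment):

```shell
#!/bin/sh
# With only /usr/local/bin and ~/.local/bin on PATH, dirname (which lives in
# /usr/bin or /bin) cannot be resolved: exactly what breaks the
# /usr/bin/google-chrome wrapper script under cron.
env -i PATH=/usr/local/bin:/home/ubuntu/.local/bin/ \
    /bin/sh -c 'dirname /usr/bin/google-chrome' 2>&1 \
    || echo "lookup failed: dirname is not on this PATH"
```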

Answer 3

You can also try this. It makes cron run the command through a new login shell for the user ubuntu:

05 12 * * *   su - ubuntu -c 'cd /home/ubuntu/scrapers/my_scraper && scrapy crawl my_spider >> /tmp/scraper.log 2>&1'
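This can work because su - starts a login shell, which reads the profile files (where directories such as /home/ubuntu/.local/bin are typically appended to PATH), whereas the non-login shell cron spawns does not. The difference is easy to observe (a sketch; the actual PATH values depend on your profile files):

```shell
#!/bin/sh
# A login shell (-l) sources /etc/profile and ~/.profile before running;
# a plain non-login shell, which is what cron uses, skips both.
sh -lc 'echo "login shell PATH: $PATH"'
sh -c  'echo "plain shell PATH: $PATH"'
```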
