
Properly Unit Testing Scrapy Spiders


Scrapy, being based on Twisted, introduces a host of obstacles to writing self-contained unit tests easily and efficiently:

1. You can't call reactor.run() multiple times, because a stopped Twisted reactor can't be restarted (see the sketch below).
2. You can't stop the reactor multiple times either, so you can't blindly call crawler.signals.connect(reactor.stop, signal=signals.spider_closed) on every crawler.
3. The reactor runs in its own thread, so failed assertions never make it back to the main unittest thread: the AssertionErrors are raised, but unittest never learns about them.
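
To see the first obstacle in isolation, here's a minimal sketch (no Scrapy involved): Twisted's reactor is one-shot, and the second reactor.run() raises ReactorNotRestartable.

from twisted.internet import reactor, error

# Stop the reactor as soon as it starts, then try to start it again.
reactor.callWhenRunning(reactor.stop)
reactor.run()  # starts, then stops cleanly

try:
    reactor.run()  # a stopped reactor cannot be restarted
except error.ReactorNotRestartable:
    print("reactor cannot be restarted")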

To get around these hurdles, I created a BaseScrapyTestCase class that uses tl.testing's ThreadAwareTestCase and the following workarounds.

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from tl.testing.thread import ThreadAwareTestCase, ThreadJoiner
from twisted.internet import reactor

class BaseScrapyTestCase(ThreadAwareTestCase):
    in_suite = False

    def setUp(self):
        self.last_crawler = None
        self.settings = get_project_settings()

    def run_reactor(self, called_from_suite=False):
        # When running as part of a suite, only the suite-level call
        # (called_from_suite=True) may actually start the reactor.
        if not called_from_suite and BaseScrapyTestCase.in_suite:
            return
        log.start()
        # Attach reactor.stop only to the last crawler queued, so one
        # reactor run can serve several spiders.
        self.last_crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        reactor.run()

    def queue_spider(self, spider, callback):
        crawler = Crawler(self.settings)
        self.last_crawler = crawler
        # Fire the test's callback (which holds the assertions) once
        # the spider finishes.
        crawler.signals.connect(callback, signal=signals.spider_closed)
        crawler.configure()
        crawler.crawl(spider)
        crawler.start()
        return crawler

    def wrap_asserts(self, fn):
        # Run the assertions in a thread that the test case joins, so
        # AssertionErrors propagate back to unittest.
        with ThreadJoiner(1):
            self.run_in_thread(fn)

You'll use it like so:

class SimpleScrapyTestCase(BaseScrapyTestCase):
    def test_suite(self):
        BaseScrapyTestCase.in_suite = True
        self.do_test_simple()
        self.run_reactor(True)

    def do_test_simple(self):
        spider = Spider("site.com")
        def _fn():
            def __fn():
                # Deliberately failing assertion, to show that the
                # failure actually reaches unittest.
                self.assertTrue(False)
            self.wrap_asserts(__fn)
        self.queue_spider(spider, _fn)
        self.run_reactor()
 

1. Call run_reactor() at the end of each test method.
2. Place your assertions in their own function and pass it to wrap_asserts(), which runs it inside a ThreadJoiner so that unittest learns about assertion failures.
3. If you're testing multiple spiders, just call queue_spider() for each one and run_reactor() once at the end, as in the sketch below.
4. BaseScrapyTestCase keeps track of the crawlers created and makes sure to attach the reactor.stop signal only to the last one.
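
For example, a multi-spider test could look like this (a hypothetical sketch; the spider names and callback names are made up for illustration):

class MultiSpiderTestCase(BaseScrapyTestCase):
    def test_two_spiders(self):
        def _first_done():
            def __asserts():
                self.assertTrue(True)  # real assertions on the first spider's output
            self.wrap_asserts(__asserts)
        def _second_done():
            def __asserts():
                self.assertTrue(True)  # real assertions on the second spider's output
            self.wrap_asserts(__asserts)
        self.queue_spider(Spider("first-site.com"), _first_done)
        self.queue_spider(Spider("second-site.com"), _second_done)
        # reactor.stop gets attached only to the last crawler queued
        self.run_reactor()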

Let me know if you come up with a better or more elegant way of testing Scrapy spiders!



Published at DZone with permission of Kelvin Tan.
