
Web Scraping Tools Open Source: Python Developer Toolkit


Web scraping, as we all know, is a discipline that evolves over time: anti-bot countermeasures grow more complex, and new open source web scraping tools appear to keep up with them.

Let’s look together at the tools no Python web scraping developer should miss.


Scrapy

Web scraping + Python = Scrapy, almost by definition. Born in 2009, it is the most complete framework for web scraping, giving the developer plenty of options to control every step of the data acquisition process.

Open source and maintained by Zyte (formerly known as Scrapinghub), it has the great advantage that there is plenty of documentation, tutorials, and courses on the web to start with. Because it is written in Python, you can create your first spider within minutes.

Another great advantage is its modular architecture, described in the picture below and well explained in the official documentation.

Scrapy architecture as described on their documentation

Let’s briefly summarize the workflow.

  1. The Engine gets the initial Requests from the Spider, passes them to the Scheduler, and then asks for the next Requests to crawl.
  2. The Scheduler returns the next Requests to the Engine, which sends them to the Downloader through the Downloader Middlewares. The Downloader fetches the page and returns a Response, which travels back to the Engine through the same Middlewares.
  3. The Engine then sends the Response to the Spider through the Spider Middlewares, and the Spider returns scraped Items and new Requests.
  4. Finally, the Engine sends the Items to the Item Pipelines and asks the Scheduler for more Requests to crawl.

Most of the magic of Scrapy happens in the two middlewares: in the Downloader Middlewares, you can manipulate Requests and Responses. As an example, you can filter Requests before they are sent to the website, maybe because they are duplicates, or you can modify Responses before the spider processes them.

In the Spider Middlewares, you can post-process the Spider output (Items or Requests) and handle exceptions.
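A downloader middleware is just a class exposing hooks like process_request. As a sketch of the duplicate-filtering idea mentioned above, here is a hypothetical middleware that drops Requests whose URL was already seen; a stand-in exception is defined so the sketch reads without Scrapy installed, but in a real project you would raise scrapy.exceptions.IgnoreRequest.

```python
class IgnoreRequest(Exception):
    """Stand-in for scrapy.exceptions.IgnoreRequest."""


class DedupDownloaderMiddleware:
    """Hypothetical downloader middleware: drop duplicate Requests
    by URL before they reach the Downloader."""

    def __init__(self):
        self.seen = set()

    def process_request(self, request, spider):
        # Returning None tells Scrapy to keep processing the Request;
        # raising IgnoreRequest drops it silently.
        if request.url in self.seen:
            raise IgnoreRequest(f"duplicate: {request.url}")
        self.seen.add(request.url)
        return None
```

It would be enabled in settings.py via `DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.DedupDownloaderMiddleware": 543}` (the module path is hypothetical). Note that Scrapy's scheduler already deduplicates via its dupefilter; this is purely an illustration of the hook.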

Items are the standard output of Scrapy spiders, and the Item Pipelines offer options and functions to manage the output of the scrapers, like file formats, field separators, and so on. This makes Scrapy extremely useful for structured data with several columns per row.
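A pipeline is also just a class with a process_item hook, and Scrapy accepts plain dicts as Items. As a hypothetical example (not part of Scrapy itself), here is a pipeline that normalizes whitespace in every string field before the item is exported:

```python
class CleanFieldsPipeline:
    """Hypothetical Item Pipeline: collapse runs of whitespace in
    every string field of an item."""

    def process_item(self, item, spider):
        # process_item must return the (possibly modified) item,
        # or raise scrapy.exceptions.DropItem to discard it.
        for key, value in item.items():
            if isinstance(value, str):
                item[key] = " ".join(value.split())
        return item
```

It would be enabled with `ITEM_PIPELINES = {"myproject.pipelines.CleanFieldsPipeline": 300}` (hypothetical path); the number sets the order when several pipelines are chained.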

Advanced Scrapy Proxies

A little self-promotion here: this is a Python package for Scrapy, written by me, that handles lists of proxies in several formats and uses them in your Scrapy project. You can load a list from a public URL, from a file on the local machine, or set a single proxy directly in the options. It is far from perfect, but we use it daily in production.
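To show the general technique such a package wraps (this is not the package's actual API, just an illustration), a proxy-rotating downloader middleware boils down to cycling through a list and setting each Request's proxy meta key, which Scrapy's HTTP downloader honors:

```python
import itertools


class RotatingProxyMiddleware:
    """Illustrative sketch of proxy rotation: assign proxies from a
    list to outgoing Requests in round-robin order."""

    def __init__(self, proxies):
        # Cycle endlessly through the configured proxy list
        self._proxies = itertools.cycle(proxies)

    def process_request(self, request, spider):
        # Scrapy's downloader routes the Request through whatever
        # is set in request.meta["proxy"]
        request.meta["proxy"] = next(self._proxies)
        return None
```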

Scrapy Splash

Scrapy is great but has some limitations; the biggest one is that it only fetches static HTML and cannot execute JavaScript.

To overcome this limit, the scrapy-splash plugin adds the ability to make Splash API calls inside your Scrapy project.

Splash is a lightweight browser with an HTTP API, implemented in Python 3 using Twisted and Qt 5.

This downloader middleware modifies the Requests, routing them to a Splash server specified in the Scrapy settings, so the Response contains the result of the JavaScript execution.
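The wiring is mostly configuration. The sketch below follows the settings documented in the scrapy-splash README, and assumes a Splash instance is running locally (for example via `docker run -p 8050:8050 scrapinghub/splash`):

```python
# settings.py -- minimal scrapy-splash wiring
SPLASH_URL = "http://localhost:8050"

DOWNLOADER_MIDDLEWARES = {
    "scrapy_splash.SplashCookiesMiddleware": 723,
    "scrapy_splash.SplashMiddleware": 725,
    "scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware": 810,
}
SPIDER_MIDDLEWARES = {
    "scrapy_splash.SplashDeduplicateArgsMiddleware": 100,
}
DUPEFILTER_CLASS = "scrapy_splash.SplashAwareDupeFilter"

# In the spider, use SplashRequest instead of scrapy.Request:
#     from scrapy_splash import SplashRequest
#     yield SplashRequest(url, self.parse, args={"wait": 2})
```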

Microsoft Playwright

When a real browser is needed to scrape a website, Microsoft Playwright is the newest solution we can rely on.

It is not the only browser automation tool that lets us script a browser and scrape its content (Selenium is another example), but it is the easiest to use and, at the moment, the one with the highest success rate against strong anti-bot software.

Its installation package already includes the most popular browsers, and when the playwright-stealth package is also included in the execution, the browser is almost indistinguishable from a real human installation.
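A minimal sketch with Playwright's synchronous API looks like this. It assumes `playwright` (and optionally `playwright-stealth`) are installed and that browsers have been fetched with `playwright install chromium`; the imports live inside the function so the sketch can be read and imported without those packages present.

```python
def fetch_rendered_html(url: str, stealth: bool = False) -> str:
    """Launch a real Chromium instance, load the page, and return
    the fully rendered HTML (after JavaScript execution)."""
    # Deferred import: real code would place this at module level.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        if stealth:
            # playwright-stealth patches common headless fingerprints
            from playwright_stealth import stealth_sync
            stealth_sync(page)
        page.goto(url)
        html = page.content()
        browser.close()
        return html
```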

Wappalyzer Python

I recently discovered this Python wrapper for Wappalyzer.

Wappalyzer is a tool that discovers the technology stack behind a website, like the anti-bot software in use or the e-commerce platform it runs on.

This Python wrapper allows you to study your target website programmatically from the command line.
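Based on the python-Wappalyzer package's documented usage, a lookup is a few lines; the sketch below assumes the package is installed (`pip install python-Wappalyzer`) and the target site is reachable, so the import is deferred inside the function.

```python
def detect_stack(url: str):
    """Return the set of technology names Wappalyzer detects on
    `url` (sketch; assumes python-Wappalyzer is installed)."""
    from Wappalyzer import Wappalyzer, WebPage

    wappalyzer = Wappalyzer.latest()     # load the fingerprint database
    webpage = WebPage.new_from_url(url)  # fetch the target page
    return wappalyzer.analyze(webpage)   # set of detected technologies
```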

At the moment these seem to me some of the best open source web scraping tools for Python developers, but if something is missing, or you’re using something else and want to share it, feel free to write to us.

This article was kindly provided by Pierluigi Vinciguerra, web scraping expert and founder of Web Scraping Club. Follow this link to see the original post.

Download GoLogin privacy browser here – and enjoy scraping even the most advanced websites with our free plan!


We’d love to hear questions, comments and suggestions from you. Contact us [email protected] or leave a comment below.

Are you just starting out with GoLogin? Forget about account suspension or termination. Choose any web platform and manage multiple accounts easily. Click here to start using all GoLogin features