Amine Mbarki

What would you need in a self-hosted web scraping platform?

Hey! 👋

I love the Crawlee and Apify ecosystem – it's genuinely the best tooling for web scraping out there. But I had a specific use case: a client who required all data processing to happen on their own servers (for compliance reasons).

So I started building Crawlee Cloud – a way to run the same Crawlee/Apify Actors on your own infrastructure when self-hosting is a requirement.

I'm curious if others have similar needs:

  • Have you ever needed to self-host scrapers for compliance or data residency requirements?

  • What features would be essential if you had to run scrapers on your own servers?

  • Any specific integrations that would make self-hosting easier?

Would love to hear about your use cases! 🙏
