Sukrit Kumar Vemula

SCRAPR - The data layer for the agentic web

SCRAPR is a new approach to web data extraction. Instead of relying on fragile DOM selectors or heavy browser automation, SCRAPR looks at how modern websites actually load their data and extracts structured responses directly from those sources. The goal is to make web data pipelines faster, more reliable, and easier to maintain. Right now SCRAPR is at an early MVP stage, and we’re looking for developers, data teams, and AI builders who need clean structured data from websites.

Sukrit Kumar Vemula

I built SCRAPR after running into the same problem again and again:

Getting structured data from websites is still way harder than it should be.

Most tools fall into two buckets:

• Browser automation (Puppeteer / Selenium) — slow and expensive
• Traditional scrapers — fragile and constantly breaking

SCRAPR tries a different approach.

Instead of rendering pages or parsing messy HTML, it focuses on how websites actually load their data and extracts structured responses from there.
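
To make that concrete: once you know the JSON endpoint a page’s front-end actually calls, extraction is a single HTTP request. Here’s a rough Python sketch (illustrative only — the endpoint and response shape are made up, this isn’t SCRAPR’s actual API):

```python
import requests

# Hypothetical data endpoint, discovered from the page's own network calls.
url = "https://example.com/api/v1/products?page=1"

resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
resp.raise_for_status()

data = resp.json()  # already structured: no HTML parsing, no browser
for item in data.get("items", []):  # response shape assumed for the example
    print(item.get("name"), item.get("price"))
```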

The goal is to make web data extraction more reliable — especially for AI pipelines and data workflows.

It’s still early (MVP stage), and I’m looking for builders who want to try it and give feedback.

Dhayalan P

@vemulasukrit Thanks for the insight, Sukrit! Relying on how sites load data (XHR/Fetch) is much more stable for AI pipelines than Puppeteer-heavy stacks.

I’ve worked on various data aggregation projects and would love to put SCRAPR through its paces on some complex sites I've encountered. I'm happy to provide detailed feedback or help contribute to the codebase.

Let me know how I can get started!

Denis Akindinov

How does this engine handle JavaScript-heavy or dynamic content without a browser, and what mechanisms ensure data accuracy when the source website changes its layout?

Sukrit Kumar Vemula

@mordrag For JavaScript-heavy sites, the engine doesn’t use a browser. Instead it looks at the page’s code and finds the API requests the site uses to load its data (like fetch, axios, or GraphQL). Then it calls those data endpoints directly and pulls the real content from there. This makes it much faster and lighter than running a full browser.
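
To sketch the discovery step (a simplified illustration, not the engine’s actual internals), finding candidate endpoints can be as basic as scanning the raw page source for URLs that look like data APIs:

```python
import re
import requests

# Naive pattern for URLs that look like internal data endpoints.
API_PATTERN = re.compile(r"""["'](https?://[^"']*?(?:/api/|/graphql)[^"']*)["']""")

def find_candidate_endpoints(page_url: str) -> set[str]:
    """Fetch the raw page and collect URLs that look like data APIs."""
    html = requests.get(page_url, timeout=10).text
    return set(API_PATTERN.findall(html))
```

The real engine goes well beyond a regex pass, but that’s the core idea: find where the data lives, then query it directly.
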
Arjun Chintapalli

So what happens when the API changes?

Sites like LinkedIn also use server-side rendering and hydration for their pages, so won't this approach fail on most websites?

Sukrit Kumar Vemula

@arjun_chintapalli Good question. If an API changes, SCRAPR isn’t tied to just one extraction path. It can re-analyze how the page delivers its data and adjust instead of relying on a fixed endpoint or selector.

And you’re right that some sites use server-side rendering or hydration. In those cases the content still exists in the page response or in subsequent requests, so SCRAPR can fall back to extracting it from the page structure when needed.
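
For the hydration case specifically, the data is usually already embedded in the HTML as JSON. A minimal sketch of that fallback (my example uses Next.js’s __NEXT_DATA__ convention; this isn’t SCRAPR’s actual code):

```python
import json
import requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

def extract_hydration_payload(page_url: str) -> dict | None:
    """Pull the embedded SSR/hydration JSON out of a server-rendered page."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("script", id="__NEXT_DATA__")  # Next.js convention
    return json.loads(tag.string) if tag and tag.string else None
```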

Paradox_Hash

Looks cool — but how well does it actually handle hard targets like Cloudflare, JS-heavy sites, proxies, and rate limits in the real world?

Sukrit Kumar Vemula

@paradox_hash The engine doesn’t rely on browser automation, so for JS-heavy sites it looks for the actual data endpoints the site calls (APIs, GraphQL, fetch requests) and pulls data directly from those. That avoids a lot of the usual scraping issues. For things like rate limits or protection layers, it behaves more like a normal HTTP client and adapts requests rather than brute-forcing pages.
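
“Behaves like a normal HTTP client” roughly means respecting the server’s signals instead of hammering it. A simplified sketch of that kind of adaptive behavior (illustrative, not SCRAPR’s real retry logic):

```python
import time
import requests

def polite_get(url: str, max_retries: int = 5) -> requests.Response:
    """GET with exponential backoff on 429s, honoring Retry-After when present."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url, timeout=10)
        if resp.status_code != 429:
            return resp
        # Assume Retry-After is given in seconds (it can also be an HTTP date).
        delay = float(resp.headers.get("Retry-After", delay))
        time.sleep(delay)
        delay *= 2
    resp.raise_for_status()  # still rate-limited after all retries
    return resp
```
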
Lev Kerzhner

This is such a clean solution to a problem that's been annoying developers forever. Rooting for you!

Sukrit Kumar Vemula

@lev_kerzhner Thanks! 😊
JooWhan Yohn

This approach is super clever — basically doing what I always do manually in Chrome DevTools Network tab (hunting for those fetch/GraphQL calls) but automated 😮

Does the engine just statically analyze the page source to find those internal API requests, or does it use AI/LLM in some way to detect and reconstruct the right endpoints even on tricky sites?

And how well does it handle completely arbitrary URLs — like, throw any random modern site at it and it still finds the clean data source reliably?

Sukrit Kumar Vemula

@paxhumana Yeah, that’s actually a great way to think about it 😄. It’s basically automating the kind of discovery people usually do manually in DevTools. Under the hood it analyzes how modern sites load their data and figures out the clean data sources from there. It’s not tied to specific selectors or layouts, which helps it stay stable even when sites change their UI.

The goal is that you can throw pretty much any modern site at it and it will still find the structured data without needing manual setup. There are always edge cases, of course, but it works reliably across a wide range of sites.
Robert Alexander

Sounds cool. Would love to try it out for example on https://www.maxxi.art/events/categories/mostre/

Sukrit Kumar Vemula

@rjalex Thanks! That’s a great example site to test on.

Right now we’re rolling out early access gradually while we keep improving the engine, but I’d definitely like to try SCRAPR on pages like that. Sites with event listings and structured content are actually a really interesting use case.

If you’ve requested early access, you should hear from me soon!

Robert Alexander

@vemulasukrit Thanks, can't wait to try it! Your product could close an important gap for organizations like this museum that have a communications dept. publishing material on their website in ways that nobody really controls :)

Joao Seabra

Most scrapers fight the rendered HTML. This goes upstream to where the data actually comes from, am I understanding that right? That's quite interesting.

What gets me most is the stability angle. Anything built on CSS selectors or DOM structure breaks the moment a site redesigns its front-end. If you're anchored to the underlying API calls instead, that problem should mostly disappear.

I'm building an AI platform that pulls structured data into its pipeline, so this is genuinely relevant to me. The edge case I keep running into with this type of approach: sites that sign their internal API requests dynamically, session tokens, HMAC signatures, that kind of thing. How does SCRAPR handle those? That's usually where it gets complicated in my experience.

Sukrit Kumar Vemula

@joao_seabra Yes, you’re understanding it correctly. The main idea is to focus on where the data actually comes from instead of relying on fragile DOM selectors, which is where a lot of traditional scrapers break when the UI changes.

For cases like signed requests, session tokens, or other protections, those are definitely some of the harder scenarios. SCRAPR doesn’t rely on a single rigid method there; it adapts to how the site normally serves its data and works within that flow. The goal is not to bypass a site’s logic, but to make data extraction more stable and reliable compared to approaches that depend purely on the rendered HTML.

Also really cool to hear you’re building an AI platform around structured data pipelines. That’s exactly the kind of use case we’re seeing more of.
Adam

Does this handle things like fingerprinting and bot detection? Awesome that you're coming at this from a new angle!
Sukrit Kumar Vemula

@edgeghost Thanks Adam, appreciate it! SCRAPR mainly focuses on extracting data efficiently without needing a full browser, which already avoids a lot of the typical issues. For things like fingerprinting or bot detection, it really depends on how the specific site is set up. But yeah, the end goal is to make it work for any given website.
Julian Collins

The interception approach is clever, way faster than spinning up a headless browser for every request. Have you thought about a batch endpoint where you can throw a list of URLs at it in one call? Anytime I've built a scraping pipeline for a project, the single-URL-at-a-time loop is where things get slow and annoying to manage.

Sukrit Kumar Vemula

@juelz Thanks Julian, really appreciate that! And yes, that’s a great point: running things one URL at a time can definitely become slow when you’re building pipelines. There’s already support for batch-style requests where you can pass multiple URLs in one call, and I’m planning to expand that further so it works better for larger data pipelines.
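
For anyone curious, a batch call would look something like this (the endpoint and payload shape here are placeholders I made up for illustration, not the final API):

```python
import requests

urls = [
    "https://example.com/products/1",
    "https://example.com/products/2",
    "https://example.com/products/3",
]

# Placeholder endpoint and request shape, for illustration only.
resp = requests.post(
    "https://api.scrapr.example/v1/batch",
    json={"urls": urls},
    timeout=30,
)
resp.raise_for_status()
results = resp.json()  # e.g. {"results": [{"url": ..., "data": ...}, ...]}
```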