We're launching SlothDB — what file format do you struggle with the most?
Hey Hunters!
I'm building SlothDB — an open-source embedded database that lets you query files directly with SQL. No server, no
import step, no dependencies.
You literally just do this:
SELECT * FROM 'sales.csv';
SELECT * FROM read_parquet('events.parquet');
SELECT * FROM 'report.xlsx';
It supports CSV, Parquet, JSON, Excel, Avro, Arrow, and SQLite — all built into the engine.
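Since every format is read by the same engine, mixing sources in one query should also work — hypothetically, assuming SlothDB follows standard SQL join syntax (the file and column names below are made up for illustration):

```sql
-- Hypothetical: join a CSV export against a Parquet event log in one query.
-- Assumes standard SQL joins across file sources; sales.csv, events.parquet,
-- and the order_id/region columns are illustrative, not from the project docs.
SELECT s.region, COUNT(*) AS event_count
FROM 'sales.csv' AS s
JOIN read_parquet('events.parquet') AS e
  ON s.order_id = e.order_id
GROUP BY s.region;
```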
Before we launch, I'd love to hear from you:
1. What file format do you deal with the most at work?
2. What's the most painful part of working with data files?
3. Would GPU-accelerated queries (CUDA + Metal) matter for your use case?
We're open source (MIT) and you can try it right now:
curl -fsSL https://raw.githubusercontent.com/SouravRoy-ETL/slothdb/main/install.sh | bash
Or just: pip install slothdb
GitHub: https://github.com/SouravRoy-ETL/slothdb
Website: https://souravroy-etl.github.io/slothdb/
Would love your feedback before launch day. What would make this useful for YOUR workflow?