
Log Analyzer Pro
Open multi-GB logs in VS Code powered by Rust
3 followers
Stop switching to the terminal to read large logs. This VS Code extension uses a native Rust backend and memory-mapping to open gigabyte-sized files instantly. Includes virtual scrolling, regex search, smart filtering, and live "tail -f" updates.



Hey Artyom!
Great job!
Could I use your plugin for searching large files of a different type?
I'm interested in JSON
@igor_kruze Yes, you can open any text file, including JSON, via the 'Open with Log Analyzer Pro' command. It's particularly useful for the NDJSON/JSON Lines format (one JSON object per line), which is common in log aggregation systems. For pretty-printed JSON, it works as a basic text viewer without JSON-specific features like syntax highlighting or tree navigation.
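To illustrate the distinction (the records below are made-up samples): JSON Lines keeps each object complete on its own line, which is exactly what makes line-oriented indexing, filtering, and tailing work well:

```json
{"ts": "2024-01-15T10:00:01Z", "level": "info", "msg": "request served"}
{"ts": "2024-01-15T10:00:02Z", "level": "error", "msg": "upstream timeout"}
```

A pretty-printed document spreads one object across many lines, so per-line tools can only treat it as plain text.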
This is a brilliant solution to a universal dev pain point. Using a Rust sidecar with memory-mapped I/O to bypass Electron's limits is the perfect technical approach.
A key question for production use: how does the extension handle actively written log files? Does the "Follow Mode" (tail -f) update the index and virtual scrolling in real time without performance degradation, or does it require periodic re-indexing?
@olajiggy321 Follow Mode uses periodic polling with incremental change detection, but does a full re-index when the file changes. It's not true real-time streaming, but it's good enough for most production scenarios.
@olajiggy321
- Polls every 500ms (not true streaming)
- Compares file size first (cheap) and skips re-indexing if unchanged
- When the file grows: full re-mmap + full re-index of the entire file
- For a 10GB file that grows constantly, this means re-scanning the whole file on each refresh; at the indexer's ~500-1000 MB/s, that's roughly 10-20s per re-index

Performance implications:
- Works well for files up to ~5-10GB with moderate write rates
- For extremely high write rates (thousands of lines/sec) or massive files (50GB+), there can be noticeable lag
- The polling model adds up to 500ms of latency before new lines are picked up

Potential improvement: incremental indexing (scan only the appended bytes), but the current implementation is "good enough" for 99% of use cases.
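For the curious, the incremental-indexing idea mentioned above is a fairly small change. Here's a minimal sketch in Rust using plain `std::io` rather than mmap, with a simple newline-offset index; `LineIndex` and its fields are hypothetical names for illustration, not the extension's actual implementation:

```rust
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};

/// Hypothetical newline-offset index: offsets[i] = byte where line i starts.
struct LineIndex {
    offsets: Vec<u64>,
    indexed_len: u64, // bytes scanned so far
}

impl LineIndex {
    fn new() -> Self {
        LineIndex { offsets: vec![0], indexed_len: 0 }
    }

    /// Incremental update: scan only the bytes appended since the last
    /// call, instead of re-reading the whole file on every poll.
    fn update(&mut self, file: &mut File) -> std::io::Result<()> {
        let len = file.metadata()?.len();
        if len < self.indexed_len {
            // File shrank (truncated or rotated): fall back to a full re-index.
            self.offsets = vec![0];
            self.indexed_len = 0;
        }
        if len == self.indexed_len {
            return Ok(()); // cheap size check: nothing new, skip scanning
        }
        // Read just the appended tail and extend the index.
        file.seek(SeekFrom::Start(self.indexed_len))?;
        let mut tail = Vec::new();
        file.read_to_end(&mut tail)?;
        for (i, byte) in tail.iter().enumerate() {
            if *byte == b'\n' {
                self.offsets.push(self.indexed_len + i as u64 + 1);
            }
        }
        self.indexed_len += tail.len() as u64;
        Ok(())
    }
}
```

A 500ms poll loop would just call `update` each tick; ticks where the size is unchanged cost only one metadata call, and growth ticks cost time proportional to the appended bytes rather than the whole file.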
@let_molchanov
Thanks for the exceptionally detailed and transparent breakdown—that level of technical honesty is rare and appreciated. The polling model with full re-index makes perfect sense for the "good enough for 99% of use cases" goal.
I have a small, practical idea related to managing user expectations around that performance trade-off that you could implement on your own.
If you're open to a suggestion, what's the best way to share it? (Email, DM, etc.)