Launched this week

LogiCoal
AI multi-agent coding assistant for your terminal
49 followers
LogiCoal is an AI-powered CLI coding assistant with multi-agent orchestration, smart model routing, and deep codebase understanding. Free for macOS, Windows, and Linux.

piroune_balachandran
@bmooreinsaan Multi-model fact-checking is the hardest part to get right here. Running a second model catches surface-level hallucinations, but models from similar training distributions share blind spots... so the verifier confidently agrees with the same wrong answer. Does LogiCoal use architecturally different models for generation vs verification, or is it same-family with different prompting? That distinction is where the reliability gap lives.
LogiCoal
@piroune_balachandran
Under the hood, verification is conducted by an independent model that checks whether a statement made by one of the generative models was presented as a fact without the context indicating the source of that "fact". When the context doesn't indicate that the correct research was done, the statement is considered unverified, which triggers fact checking that is not allowed to rely on training data and instead has to be verified by a web search, file analysis, etc. Hallucinations are impossible to eliminate entirely from any AI, but that extra level of metacognition I added has eliminated every one I've seen during development and testing.
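A minimal sketch of that kind of verification pass, assuming claims and their in-context sources have already been extracted; the Claim type, verify_externally, and verify_response names are illustrative, not LogiCoal's actual internals:

```python
# Hypothetical sketch: a separate verifier flags statements presented as fact
# without a source in the context, and anything flagged must be confirmed by
# an external check (web search, file analysis, ...) rather than training data.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source: str | None  # where the context says this fact came from, if anywhere

def verify_externally(claim: Claim) -> bool:
    """Placeholder for grounding a claim via web search, file analysis, etc."""
    # A real implementation would call a search tool or read the repo;
    # returning False here forces every unsourced claim to be surfaced.
    return False

def verify_response(claims: list[Claim]) -> list[str]:
    """Return the claims that are presented as fact but can't be grounded."""
    unverified = []
    for claim in claims:
        if claim.source is not None:
            continue                      # context already shows the research
        if not verify_externally(claim):  # not allowed to rely on training data
            unverified.append(claim.text)
    return unverified

# One sourced claim passes through, one unsourced claim gets flagged.
claims = [
    Claim("config lives in settings.toml", source="read_file:settings.toml"),
    Claim("the API defaults to 30s timeouts", source=None),
]
print(verify_response(claims))  # ['the API defaults to 30s timeouts']
```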
Multi-agent CLI assistants tend to break at scale in two places: unsafe tool execution, and context blowups where "autocompact" drops the one file that matters and hallucinations sneak back in.
Best practice is deterministic repo indexing (tree-sitter + ripgrep), incremental retrieval with stable citations to exact lines, and sandboxed command execution with an allowlist + dry-run diffs before apply.
How are you implementing autocompact (summaries vs selective chunk eviction), and what guarantees do you provide that proposed shell commands and patches are reproducible and safe?
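To make the indexing half of that suggestion concrete, here is a rough sketch of deterministic repo search with stable line citations, assuming ripgrep is installed and on PATH; the Citation type and search_repo function are hypothetical, not part of LogiCoal:

```python
# Sketch of "deterministic indexing + stable citations": shell out to ripgrep
# with sorted output and return matches as exact file:line citations, so the
# agent's context points at reproducible locations instead of fuzzy summaries.
import subprocess
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    path: str
    line: int
    text: str

def search_repo(pattern: str, repo_root: str = ".") -> list[Citation]:
    """Deterministic search: ripgrep with sorted paths and exact line numbers."""
    proc = subprocess.run(
        ["rg", "--line-number", "--no-heading", "--sort", "path", pattern, repo_root],
        capture_output=True, text=True,
    )
    citations = []
    for row in proc.stdout.splitlines():
        path, line, text = row.split(":", 2)  # output format is path:line:content
        citations.append(Citation(path, int(line), text))
    return citations

# Each result can be quoted back to the model as "path:line", which keeps
# retrieval incremental and easy to re-verify after the repo changes.
for c in search_repo("def autocompact"):
    print(f"{c.path}:{c.line}  {c.text.strip()}")
```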
LogiCoal
@ryan_thill
Great questions and insights.
Let me start by saying that your best practice suggestion is absolutely correct. I will prioritize adding deterministic repo indexing and sandboxed command execution. Now that you mention it, it seems so obvious that I should have implemented those from the start, but hindsight is always 20/20...
As far as how I implemented autocompact: LogiCoal keeps track of token usage per message, from both the user and any of the model responses. Tool usage and agent sub-sessions operate the same way (but agents get their own context window). When context grows to the autocompact threshold, it is broken into two parts: the most recent portion is preserved exactly as is, and the older portion is summarized. That said, messages persist and aren't deleted, which gives LogiCoal the ability to cherry-pick part(s) of the original context and add them back into the current context as needed (even after multiple autocompacts).
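A minimal sketch of that scheme, assuming a per-message token count and a summarize() hook (all names hypothetical, not LogiCoal's actual code): keep the newest messages verbatim, collapse the older ones into a summary, and persist the originals so they can be cherry-picked back later.

```python
# Hypothetical autocompact sketch: live context is what the model sees,
# archive holds every original message and is never deleted.
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user", "assistant", "tool", ...
    content: str
    tokens: int    # token usage tracked per message

@dataclass
class ContextWindow:
    limit: int
    live: list[Message] = field(default_factory=list)
    archive: list[Message] = field(default_factory=list)

    def add(self, msg: Message, summarize) -> None:
        self.live.append(msg)
        self.archive.append(msg)
        if sum(m.tokens for m in self.live) > self.limit:
            self._autocompact(summarize)

    def _autocompact(self, summarize) -> None:
        # Keep the newest half exactly as-is, summarize everything older.
        split = len(self.live) // 2
        older, recent = self.live[:split], self.live[split:]
        summary = summarize(older)  # e.g. a model call that condenses old turns
        self.live = [Message("system", summary, tokens=len(summary) // 4)] + recent

    def cherry_pick(self, needle: str) -> None:
        # Pull an original message back into live context, even after
        # multiple autocompacts, because the archive keeps everything.
        for msg in self.archive:
            if needle in msg.content and msg not in self.live:
                self.live.append(msg)
```

The design point is that compaction only affects the live view; the untouched archive is what makes cherry-picking after multiple autocompacts possible.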
As far as verifying that proposed shell commands are reproducible: there isn't currently visibility for end users to see an entire command, but that is easy to add and I will make sure it is in the next release. As far as shell commands being safe, I believe that would be addressed by your suggestion that there should be a way to sandbox and/or dry-run commands.
Based on your feedback, I plan on implementing the following in LogiCoal:
Deterministic Repo Tracking (most likely leveraging git)
Sandboxed/dry-run command execution option (most likely targeting any destructive commands)
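As an illustration of the second item, a minimal sketch of a confirmation gate for destructive commands, with the full command always printed first; the DESTRUCTIVE set and run_with_gate are hypothetical, not LogiCoal's shipped behavior:

```python
# Hypothetical sketch: show the full proposed command, and require explicit
# confirmation before anything on a small destructive-command denylist runs.
import shlex
import subprocess

DESTRUCTIVE = {"rm", "mv", "dd", "truncate"}  # commands that can destroy data

def run_with_gate(command: str) -> None:
    argv = shlex.split(command)
    print(f"proposed command: {command}")  # full visibility for the end user
    if argv and argv[0] in DESTRUCTIVE:
        # Destructive commands never run automatically.
        if input("execute destructive command? [y/N] ").strip().lower() != "y":
            print("skipped")
            return
    subprocess.run(argv, check=False)

# run_with_gate("ls -la")         # non-destructive, runs after being shown
# run_with_gate("rm -rf build/")  # prompts before touching anything
```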
Thanks again for your questions and suggestions... It's hard to work in a vacuum, so your perspective is highly appreciated.
Terminal-first is the right call for developers.
Most coding assistants feel built for non-devs. This looks different.
How does it handle context across multiple agents working on the same codebase?