Bring your own LLM: OpenAI-compatible custom models for Pro & Enterprise on CodeCritic

by Alex K

We just shipped Custom AI for CodeCritic on Pro and Enterprise plans.

If you want AI code review on your own LLM endpoint instead of our default platform models, you can now connect a public HTTPS API that speaks the familiar OpenAI-compatible contract (/v1/chat/completions). That matters for teams that care about vendor choice, predictable token economics, or routing reviews through an approved corporate model.

What you get

  • BYO LLM for code review - point CodeCritic at your provider’s base URL, pick a model id, and store your API key securely in Settings → Integrations (encrypted on our side).

  • Same product surface - pasted code, GitHub PR flows, and the API and automation all keep working; when Custom AI is active and valid, reviews run against your endpoint rather than drawing on your usual platform review quota (see live plan details in-app).

  • Enterprise-friendly story - good fit when policy says “self-hosted or contracted LLM only,” as long as the endpoint is reachable over HTTPS and compatible with the OpenAI-style chat API.
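Under the hood, "OpenAI-compatible" just means the endpoint accepts a standard chat completions POST. A minimal sketch of that contract in Python — the base URL, model id, key, and helper names below are placeholder assumptions for illustration, not values from the product:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model_id: str, api_key: str, diff_text: str):
    """Assemble an OpenAI-style /v1/chat/completions request.

    Returns (url, headers, body) so the payload can be inspected
    or sent with any HTTP client.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # bearer-token auth, per the OpenAI shape
        "Content-Type": "application/json",
    }
    body = {
        "model": model_id,
        "messages": [
            {"role": "system", "content": "You are a code reviewer."},
            {"role": "user", "content": diff_text},
        ],
    }
    return url, headers, body

def send_chat_request(url: str, headers: dict, body: dict) -> str:
    """POST the request and read the reply from the standard response shape."""
    req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    # OpenAI-compatible responses put the text at choices[0].message.content
    return payload["choices"][0]["message"]["content"]

# Placeholder values — substitute your provider's real details:
url, headers, body = build_chat_request(
    "https://llm.example.com", "my-review-model", "sk-placeholder",
    "def add(a, b):\n    return a - b  # please review",
)
```

If your gateway answers this shape correctly, it should plug into the Custom AI setting as-is.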

Who it’s for

  • Developers evaluating AI-powered code review tools with flexible LLM backends

  • Teams on Pro or Enterprise who already standardize on OpenAI-compatible gateways (many hosted and cloud providers expose this shape)


Quick note on scope (so expectations stay clear)
Custom LLM targets public HTTPS OpenAI-compatible endpoints. Private networks, raw HTTP, and localhost are out of scope for this release - that keeps the integration safe and supportable at scale.
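You can pre-check those scope rules before saving an endpoint. A rough sketch, assuming the checks described above (the function name is ours, not part of the product, and it checks literal IP addresses only — it does not resolve DNS):

```python
import ipaddress
from urllib.parse import urlparse

def endpoint_in_scope(base_url: str) -> bool:
    """Approximate the release's scope rules: public HTTPS only —
    no raw HTTP, no localhost, no private-network addresses."""
    parts = urlparse(base_url)
    if parts.scheme != "https" or not parts.hostname:
        return False                     # raw HTTP (or no host) is out of scope
    host = parts.hostname
    if host == "localhost":
        return False                     # localhost is out of scope
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True                      # a DNS hostname; resolution checks omitted here
    # Literal IPs must be publicly routable
    return not (ip.is_private or ip.is_loopback or ip.is_link_local)
```

For example, `https://llm.example.com` passes, while `http://llm.example.com`, `https://localhost:8080`, and `https://192.168.1.5` do not.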
