@smalter We run our own GPU servers on CoreWeave for our own fine-tuned models because we've found that, when you have enough data for the task, fine-tuned models outperform the general GPT-4 models on many things.
When we're working on a new problem/feature and we don't have any data, we start with OpenAI models.
Looks super interesting! Will be trying this out later today! Kudos team
So excited to check this out! I've been amassing a mess of notes and research and wondering how I could build something myself. Now I don't have to!
@erik_israni Love that. Lots to dive into! Let us know how it works for you + what we can do to make it even better.