@smalter We run our own GPU servers on CoreWeave for our fine-tuned models because we've found that, when you have enough data for the task, fine-tuned models outperform the general GPT-4 models on many things.
When we're working on a new problem/feature and don't have any data yet, we start with OpenAI models.
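A rough sketch of that routing logic, purely illustrative: the function name, threshold, and backend labels are my assumptions, not their actual setup. The idea is just to fall back to a general hosted model until a task has accumulated enough training examples to justify a fine-tuned model on self-hosted GPUs.

```python
# Hypothetical routing sketch: new tasks with little data go to a general
# model; tasks with enough data go to a self-hosted fine-tuned model.
# FINE_TUNE_THRESHOLD and the backend labels are assumed for illustration.

FINE_TUNE_THRESHOLD = 5_000  # assumed minimum examples per task


def pick_backend(task: str, num_examples: int) -> str:
    """Return which backend a request for `task` should be sent to."""
    if num_examples >= FINE_TUNE_THRESHOLD:
        # Enough data: serve the fine-tuned model on our own GPUs.
        return f"self-hosted:{task}-finetuned"
    # New problem/feature with little data: use a general model.
    return "openai:gpt-4"


print(pick_backend("summarize", 12_000))  # self-hosted:summarize-finetuned
print(pick_backend("new-feature", 40))    # openai:gpt-4
```

In practice the threshold would depend on the task and on eval results rather than a fixed example count, but the shape of the decision is the same.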
Looks super interesting! I'll be trying this out later today. Kudos, team!