Pegasus 1.5 transforms raw video into consistent, structured, timestamped data on the fly. Video becomes a queryable, computable asset shaped by your company's custom requirements. Define a schema of what matters in your domain, point it at any video up to 2 hours, and get back structured, time-based metadata in a single API call. And it's multimodal: pass in an image, and find every time that reference appears in your video. Your video library, finally queryable for humans and agents.
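As an illustration of the "define a schema, make one call" workflow, here is a minimal sketch in Python. The field names, schema format, and `build_analysis_request` helper are hypothetical stand-ins, not the actual TwelveLabs API; the point is the shape of a single request that pairs a video with a custom, domain-specific schema.

```python
# Hypothetical sketch of a schema-driven video analysis request.
# Field names ("video_url", "output_schema", "response_format") are
# illustrative assumptions, not the real TwelveLabs API surface.
import json

def build_analysis_request(video_url: str, schema: dict) -> dict:
    """Assemble one request payload pairing a video with a custom schema."""
    return {
        "video_url": video_url,                  # any video up to 2 hours
        "output_schema": schema,                 # what matters in your domain
        "response_format": "timestamped_json",   # hypothetical flag
    }

# A domain-specific schema: which events to extract, each with timestamps.
schema = {
    "events": {
        "type": "array",
        "items": {
            "label": "string",
            "start_time": "seconds",
            "end_time": "seconds",
        },
    }
}

payload = build_analysis_request("https://example.com/match.mp4", schema)
print(json.dumps(payload, indent=2))
```

The response would then come back as metadata conforming to the schema, with each extracted event anchored to a time range in the video.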
Pegasus 1.5 by TwelveLabs: AI model for transforming video into time-based metadata
Marengo 3.0 is TwelveLabs' most significant model to date, delivering human-like video understanding at scale. A multimodal embedding model, Marengo fuses video, audio, and text into a single representation, powering precise video search and retrieval.
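To make the retrieval idea concrete, here is a toy sketch of how a shared embedding space enables search: a text query and video segments live in the same vector space, and the closest segment by cosine similarity is returned. The embeddings below are hand-made placeholders, not real Marengo output.

```python
# Toy cross-modal retrieval via cosine similarity. The vectors are
# illustrative placeholders standing in for the fused video/audio/text
# embeddings a model like Marengo produces; this is not its actual API.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings for three video segments in the shared space.
segments = {
    "goal_celebration":   [0.9, 0.1, 0.2],
    "halftime_interview": [0.1, 0.8, 0.3],
    "crowd_shot":         [0.2, 0.2, 0.9],
}

# Pretend embedding of the text query "goal", in the same space.
query = [0.85, 0.15, 0.25]

# Retrieval: rank segments by similarity to the query and take the best.
best = max(segments, key=lambda name: cosine(query, segments[name]))
print(best)
```

At scale, the same nearest-neighbor lookup runs over a vector index rather than a Python `max`, but the principle is identical: one embedding space, any modality in, ranked video moments out.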
Marengo 3.0 by TwelveLabs: The most powerful embedding model for video understanding