Meilisearch is a superfast search engine for developers, built in Rust. Our latest release introduces AI-powered semantic and hybrid search, blending full-text, semantic, and vector DB capabilities for smarter, faster results.
Replies
Congratulations on the launch of the product; the demo is absolutely amazing. I love how thoughtful the team is in allowing users to define their own text template for the embedding.
I have several questions:
Is there any plan to give the query a template too?
AFAIK, the maximum number of query words is 10 (link). Is this still enforced for AI search?
Would you mind sharing some tricks on how the team achieved search-as-you-type performance even with AI search?
I am looking forward to your response!
Meilisearch
@topg Thanks, Vu! Indeed, it has been a wonderful addition to the product, and the idea comes from our open-source community!
To answer your questions:
1. Yes, it's in the plan to implement several workarounds for this, such as being able to provide context to a search query that will help personalize the search responses. However, I would be interested in getting more information about your expectations because I'm not sure I fully understand what a template would do in search.
2. Indeed, there is a limit on the full-text search, but no limits on the semantic search.
3. It's absolutely not a secret, and this will become even better in the future. To get minimum latency on semantic search, the goal is to remove the network hop by running the model locally. It works perfectly fine since searches have only a few words/tokens; with most models, it takes 1-5 ms at most.
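For readers wondering what a text template for the embedding does in practice, here is a minimal self-contained sketch. The `{{doc.field}}` placeholder syntax and the example fields are illustrative assumptions, not the engine's exact template language; the point is only that the rendered string, rather than the raw document, is what gets embedded.

```python
# Minimal sketch of rendering a per-document text template into the
# string that is sent to the embedder (placeholder syntax is assumed).
import re

def render_template(template: str, doc: dict) -> str:
    """Replace {{doc.field}} placeholders with values from the document."""
    def substitute(match: re.Match) -> str:
        field = match.group(1)
        return str(doc.get(field, ""))
    return re.sub(r"\{\{\s*doc\.(\w+)\s*\}\}", substitute, template)

template = "A movie titled {{doc.title}}, about: {{doc.overview}}"
doc = {"title": "Dune", "overview": "A noble family battles for a desert planet."}

print(render_template(template, doc))
# Prints: A movie titled Dune, about: A noble family battles for a desert planet.
```

Controlling this rendered string is what lets users decide which fields (and how much surrounding phrasing) shape the embedding for each document.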
@quentin_dq Thanks for the detailed answer, Quentin!
Regarding the first question: I was assuming the AI search query is still restrained by the token limit like traditional search (hence my second question), which made me think about a way to include more context in my query.
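One simple way to act on that idea today, given that the answer above says semantic search has no token limit, is to fold the extra context directly into the query string before it is embedded. A hedged sketch; the helper below is hypothetical and not part of any Meilisearch client:

```python
# Illustrative sketch (not Meilisearch's API): prepend session/user context
# to the query string so the query embedding captures the broader intent.
def build_query(user_query: str, context: str = "") -> str:
    return f"{context}. {user_query}" if context else user_query

print(build_query("lightweight tent", "alpine camping gear for two people"))
# Prints: alpine camping gear for two people. lightweight tent
```

Whether a dedicated query template ends up in the product or not, this keeps the full-text side short while giving the semantic side richer input.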
Rust-based search sounds powerful!
Meilisearch
@shenjun Rust is the best language for building a search engine. It is secure, stable, and performant. 🦀🔥
Agora API
Amazing launch, @quentin_dq and the Meilisearch team, congrats! Your search engine's speed is crucial to our growth at Agora.
Amazing to see the product going to the next level!
Meilisearch
@alessandro_colomboΒ Thanks Alessandro! π
Meilisearch
Excited for the AI roadmap ahead! Great job, team!
Stripo.email
We're thrilled to see Meilisearch AI finally launch! The combination of full-text, semantic, and vector search capabilities offers developers a seamless and fast search solution. Exciting times ahead for building smarter search experiences!
Meilisearch
@marianna_tymchuk Thank you so much!
We're really excited about this step: combining full-text and semantic search in one simple API has been a huge developer-experience improvement, and we're just getting started.
WP Umbrella
Awesome product that we use to power up @WP Umbrella. Congrats to the team for their great work!
Meilisearch
@aureliovolle Thanks, Aurelio! Very happy to power up @WP Umbrella 🔥
Agora
Congratulations on the launch @quentin_dq and the entire team!
We've been using Meilisearch AI at Agora for a few months now. It's the fastest search solution on the market, hands down. Excited to see the team take the product to the next level with this GA release!
Meilisearch
@param_jaggi3 Thanks, Param! Happy to power the search for such an amazing product as @Agora ❤️
Congratulations on this launch, Quentin! It looks amazing!
Meilisearch
@sofi_mohr Thanks, Sofi!
Strapi
This looks awesome, well done, team Meilisearch!
Meilisearch
@vcoisne Thanks for the support, Victor!
ThriveDesk
Best wishes to the Meilisearch team.
We built @ThriveDesk's search and AI features on top of the Meilisearch platform, and we are 100% satisfied with it.
Meilisearch
@ThriveDesk Thanks, Parvez! It's amazing! What upcoming features are you expecting?