simonw 2 days ago

Took me a little poking around to figure out what the underlying search engine was: it's https://typesense.org/ hosted in a Docker container.
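For reference, a self-hosted Typesense instance in Docker typically looks roughly like the following (the version tag, data path, and API key here are placeholders, not the deployment described above):

```shell
# Run Typesense in a Docker container (flags per Typesense's install docs;
# version tag and API key below are placeholders).
mkdir -p /tmp/typesense-data
docker run -d --name typesense \
  -p 8108:8108 \
  -v /tmp/typesense-data:/data \
  typesense/typesense:27.1 \
  --data-dir /data \
  --api-key=xyz \
  --enable-cors
```

Once up, `curl http://localhost:8108/health` should report the node as healthy.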

ij23 3 days ago

Canary is awesome! We use Canary for our doc search at LiteLLM (you can see it here: https://docs.litellm.ai/docs/)

It's really useful to be able to specify the search space for a specific query (for example, Canary lets you run the query "sagemaker" against our docs or against our GitHub issues).

  • metabeard a day ago

    The search modal says, "Search by Algolia".

    • yujonglee a day ago

      click the cute yellow bird next to the search bar.

Onavo 3 days ago

You should add support for tinkerbird, so the index can be statically generated and queried without a backend.

https://github.com/wizenheimer/tinkerbird

  • whilenot-dev 3 days ago

    Just played around with tinkerbird on Tinkerboard[0]... it doesn't seem to give good results with the provided example data. Why do you think support for it would be worthwhile?

    [0]: https://tinkerboard.vercel.app/

    • Onavo 2 days ago

      Getting good results involves tuning, good models, and well-defined prompts; the demo not implementing a good RAG pipeline says nothing about its vector search performance. I suggest reading up on how the technology works.

jgalt212 3 days ago

I have to say Algolia is underwhelming (even after all these years). Perhaps I'm using it wrong, but I often find the comment or story I'm searching for more quickly via a targeted Google search. I should give Bing a try, as I've been getting better finance-related results there lately--especially when trying to locate ratings and/or other docs related to newly issued securities.

  • bryanrasmussen 2 days ago

    I had to use Algolia in a recent e-commerce solution, and I think e-commerce really is the sweet spot for what Algolia offers: quick setup, not a lot of need to mess around with your rankings, etc., with very simple content.

    I'm used to Solr and Elasticsearch for most sites I've run, which tend to be dense information sites where you need to be able to control rankings to get the best results--and HN is much closer to that than to an e-commerce site.

  • lnrdgmz 3 days ago

    Agreed. I dread having to use Algolia search on documentation these days. The search results feel pretty naively selected, and the UI is pretty poor. I get that people want to deploy static sites, but can we please find a way to bring back search _pages_?

    • yujonglee 2 days ago

      > I dread having to use Algolia search on documentation these days.

      agreed.

      > but can we please find a way to bring back search _pages_?

      could you explain what you mean?

TnS-hun 3 days ago

In Firefox the "Search for anything" input does not get focused after opening the search dialog.

  • yujonglee 3 days ago

    nice catch! just downloaded Firefox to test it :) will fix it shortly

pjot 2 days ago

Can you talk about how you implemented search-as-you-type? Doing so with semantic search seems tricky given the roundtrips needed to compute embeddings on the fly (assuming the use of OpenAI embeddings).

  • yujonglee 2 days ago

    sure - implementing a search-as-you-type experience with an ai-powered feature was what i wanted to do as well. it doesn't use embeddings at the moment. when you type a short query like 'openai,' it simply runs a basic keyword query using Typesense. however, if you enter a question-like query, such as 'how to limit api cost,' it transforms it into multiple queries, like 'budget' and 'limit.'

    in the self-hosted version, it uses the CHAT_COMPLETION_MODEL env variable to select the llm. in our cloud version, we use a fine-tuned version of 4o-mini that we will eventually move to a smaller model like llama 8b or even 1b.
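The routing described above can be sketched roughly as follows. All names here are hypothetical, not Canary's actual code, and the stopword stub stands in for the real chat-completion call:

```python
import os

# Hypothetical sketch of the short-query vs. question-query routing
# described above; none of these names come from Canary's codebase.
QUESTION_WORDS = ("how", "what", "why", "when", "where", "can", "does", "is")
STOPWORDS = {"how", "to", "do", "i", "a", "an", "the", "of", "for", "in", "on"}

def is_question_like(query: str) -> bool:
    # Heuristic: several words, or starts with a question word.
    words = query.lower().split()
    return len(words) >= 3 or (bool(words) and words[0] in QUESTION_WORDS)

def expand_query(query: str, model: str) -> list[str]:
    # Stand-in for the real LLM call that rewrites a question into
    # multiple keyword queries (e.g. "budget", "limit").
    return [w for w in query.lower().split() if w not in STOPWORDS]

def route(query: str) -> dict:
    if not is_question_like(query):
        # Short keyword query: send straight to Typesense as-is.
        return {"mode": "keyword", "queries": [query]}
    # Question-like query: model chosen via the env var mentioned above.
    model = os.environ.get("CHAT_COMPLETION_MODEL", "gpt-4o-mini")
    return {"mode": "llm-expanded", "queries": expand_query(query, model)}
```

So `route("openai")` goes straight to keyword search, while `route("how to limit api cost")` takes the LLM-expansion path.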

    • pjot 2 days ago

      Got it! I saw this in the code and assumed you were using embeddings:

          def evaluate(input: shared.EvaluationInput):
              ds = Dataset.from_list(input.dataset)
              metrics = [metric_map[metric] for metric in input.metrics]

              llm = ChatOpenAI(
                  model_name=shared.LANGUAGE_MODEL,
                  base_url=os.environ["OPENAI_API_BASE"],
                  api_key=os.environ["OPENAI_API_KEY"],
              )

              embeddings = OpenAIEmbeddings(
                  model=shared.EMBEDDING_MODEL,
                  base_url=os.environ["OPENAI_API_BASE"],
                  api_key=os.environ["OPENAI_API_KEY"],
              )
      • yujonglee 2 days ago

        that piece of code is for llm response evaluation, but we are not really using it at the moment.

alexbouchard 2 days ago

Been looking for something like this! Doc search just hasn't kept up with what's possible now, and it's such a hassle to get the indexing to work properly. Will try it out!

skeptrune 3 days ago

This is sweet. I do think the styling on the component could be a bit cleaner though.

Fire-Dragon-DoL 2 days ago

Does it have the same API? I've been looking for a way to mock the service in development.

hackernewds 3 days ago

How does it compare to Glean?

  • yujonglee 3 days ago

    Glean is used for searching the workspace (AFAIK, for internal use). Canary is used for searching technical documentation, GitHub issues, etc., and is intended for the users of the project.

    • detente18 2 days ago

      +1 on the GitHub issues. It's very useful to have this on the LiteLLM docs.

      • yujonglee 2 days ago

        nice! lmk if you have any feedback while using it in the litellm docs.

hackernewds 3 days ago

The name Canary is a bit confusing, since a lot of companies already use "canary" to indicate early symptoms of issues (re: canary in a coal mine). However, the app doesn't fulfill this need.

I will give it a try, impressive compression

  • yujonglee 3 days ago

    that's a fair point. I don't think I can rename it at this point, but I'll keep in mind that some people might be confused by the name.

    please do try it out, and come to Discord if you want to chat.