• TehPers@beehaw.org
    7 months ago

    Be careful relying on LLMs for “searching”. I’m speaking from experience here - getting genuinely accurate results out of the current generation of LLMs, even with RAG, is difficult. You might get accurate results most of the time (even 80% or more), but the inaccurate ones are hard to spot, because the model presents hallucinated output with the same confidence as correct output.

    Also, if your LLM isn’t doing retrieval-augmented generation (RAG), then it isn’t actually a search and won’t find results more recent than the data it was trained on.
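
    To make that distinction concrete: RAG adds a retrieval step that pulls relevant documents and pastes them into the prompt, so the model can answer from data newer than its training snapshot. Here’s a rough sketch of that loop - the toy corpus, the keyword-overlap retriever, and the call_llm() stub are all hypothetical placeholders, not any particular library’s API:

        # Minimal retrieve-then-generate sketch. The retriever is a toy
        # keyword-overlap scorer over an in-memory corpus; real systems use
        # vector embeddings and a proper index. call_llm() is a hypothetical
        # stand-in for whatever model API you actually use.

        CORPUS = [
            "Release notes: version 3.2 adds streaming support.",
            "FAQ: streaming requires an API key with the 'stream' scope.",
        ]

        def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
            """Rank documents by words shared with the query, keep the top k."""
            words = set(query.lower().split())
            ranked = sorted(
                corpus,
                key=lambda doc: len(words & set(doc.lower().split())),
                reverse=True,
            )
            return ranked[:k]

        def call_llm(prompt: str) -> str:
            raise NotImplementedError("swap in your model API here")

        def answer(query: str) -> str:
            context = "\n".join(retrieve(query, CORPUS))
            # Without this retrieval step the model can only answer from its
            # (possibly stale) training data, which is the point above.
            return call_llm(f"Context:\n{context}\n\nQuestion: {query}")

    The retrieval step is the only thing tying the answer to current data; skip it and you’re back to plain generation from the training snapshot.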

    • Zworf@beehaw.org
      7 months ago

      I know. But I’m often not looking for perfect accuracy - I just need to know something for myself. Most of what I look up is not critically important. It’s not like I’m writing a PhD dissertation or something.

      I know it can be inaccurate, but I can verify the results (and they usually turn out fine).