I suspect this is the direct result of AI-generated content simply overwhelming any real content.

I tried DDG, Google, Bing, and Qwant, and none of them really help me find the information I want these days.

Perplexity seems to work, but I don’t like the idea of an AI giving me “facts”, since they’re mostly based on other AI-generated posts.

  • kitnaht@lemmy.world · 17 hours ago (edited)

    I think it’s just you. Stable Diffusors are pretty good at regurgitating information that’s widely talked about. They fall short when it comes to specific information on niche subjects, but generally that’s only a matter of understanding the jargon you need to plug into a search engine to find what you’re looking for. Paired with uBlock Origin, it’s all typically pretty straightforward, so long as you know which tool to use in which circumstance.

    Almost always, I can plug an OS error into an LLM and get specific instructions on how to resolve it.

    Additionally, if you learn how to use a model that can parse your own user data, it’s easy to feed in documentation to make it subject-specific and get better results (rough sketch below).
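
    For what it’s worth, here’s a minimal sketch of that kind of setup. It isn’t tied to any particular library: `chunk`, `score`, and `build_prompt` are plain Python, and `call_llm` is a hypothetical placeholder you’d swap for whatever model or API you actually use.

    ```python
    # Minimal retrieval-augmented prompting sketch (illustrative only;
    # not tied to any particular library). call_llm() is a placeholder.

    def chunk(text, size=500):
        """Split a document into fixed-size chunks."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    def score(query, passage):
        """Naive relevance score: number of shared lowercase words."""
        return len(set(query.lower().split()) & set(passage.lower().split()))

    def build_prompt(query, docs, top_k=3):
        """Pick the most relevant chunks and prepend them to the question."""
        chunks = [c for d in docs for c in chunk(d)]
        best = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:top_k]
        context = "\n---\n".join(best)
        return (f"Answer using only the context below.\n\n"
                f"Context:\n{context}\n\nQuestion: {query}")

    def call_llm(prompt):
        # Placeholder: swap in whatever model/API you actually use.
        return f"[model response to a {len(prompt)}-character prompt]"

    if __name__ == "__main__":
        docs = [open(p, encoding="utf-8").read() for p in ["manual.txt"]]  # your own docs
        print(call_llm(build_prompt("How do I reset the device?", docs)))
    ```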

    Honestly, I think the older generation that fails to embrace and learn how to use this tool will be left in the dust, as confused as the pensioners who don’t know how to write an email.

    • Lvxferre@mander.xyz · 17 hours ago

      “Stable Diffusors are pretty good at regurgitating information that’s widely talked about.”

      Stable Diffusion is an image generator. You probably meant a language model.

      And no, it’s not just OP. This shit has been going on for a while, well before LLMs were deployed. Cue the old “reddit” trick that some people used.

        • Lvxferre@mander.xyz · 4 hours ago

          Or, in a deeper aspect: they’re pretty good at regurgitating what we interpret as bullshit. They simply don’t care about the truth value of their statements at all.

          That’s part of the problem: you can’t prevent them from doing it; it’s like trying to drain the ocean with a small bucket. They shouldn’t be used as a direct source of info for anything you won’t check afterwards. At least in kitnaht’s use case, if the LLM is bullshitting it should be obvious, but go past that and you’ll have a hard time.