• self@awful.systems · 2 months ago

      god, so this is actually the best the AI researchers can do with the tools they’ve shit out into the world without giving any thought to failure cases or legal liability (beyond their manager on slackTeams claiming it’s been taken care of)

      so fuck it, let’s make the defamation machine a non-optional component of windows. we’ll just make it a P0 when someone who could actually get us in legal trouble complains! everyone else is a P2 that never gets assigned.

  • Ogmios@sh.itjust.works · 2 months ago

        so this is actually the best the AI researchers can do

        Highly unlikely. This is what corporations’ public-facing products can do.

    • self@awful.systems · 2 months ago

          are there mechanisms known to researchers that Microsoft’s not using that can prevent this type of failure case in an LLM without resorting to whack-a-mole with a regex?

      • linearchaos@lemmy.world · 2 months ago

            Yeah, there’s already a lot of this in play.

            You run the same query multiple times through multiple models and do a web search looking for conflicting data.
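            The cross-checking idea above can be sketched in a few lines — run the same prompt through several models and measure how much they agree. Everything here is hypothetical (the `models` callables stand in for real LLM API wrappers, which this sketch doesn’t assume):

```python
# Sketch of cross-model consistency checking: ask several models the same
# question and flag the answer as unreliable when they disagree.
from collections import Counter

def consistency_check(prompt, models):
    """models: list of callables (stand-ins for LLM wrappers) mapping prompt -> answer."""
    answers = [m(prompt) for m in models]
    top, freq = Counter(answers).most_common(1)[0]
    agreement = freq / len(answers)
    # low agreement -> conflicting data, treat the top answer with suspicion
    return top, agreement

# toy stand-ins for real model calls
models = [lambda p: "Paris", lambda p: "Paris", lambda p: "Lyon"]
answer, agreement = consistency_check("Capital of France?", models)
print(answer, round(agreement, 2))  # Paris 0.67
```

            A real pipeline would also compare against web search results, as described above, rather than only against other models.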

            I’ve had copilot answer a query, then erase the output and tell me it couldn’t answer it after about 5 seconds.

            I’ve also seen responses contradict themselves, with later paragraphs saying there are other points of view.

            It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.
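            A minimal sketch of that “summarize then gate” idea — here a trivial keyword heuristic stands in for a real sentiment/summarization pass, and the term list and function names are made up for illustration:

```python
# Sketch of output gating: before showing a draft answer, check whether it
# paints the subject negatively and suppress it if so. A keyword heuristic
# is a placeholder for a real classifier or a second model pass.
NEGATIVE_TERMS = {"fraud", "criminal", "liar", "scandal"}

def paints_negatively(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & NEGATIVE_TERMS)

def gated_answer(draft):
    if paints_negatively(draft):
        return "I can't answer that."  # dump the draft instead of showing it
    return draft

print(gated_answer("Alice won an award."))  # shown as-is
print(gated_answer("Alice is a fraud."))    # suppressed
```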

        • froztbyte@awful.systems · 2 months ago

              It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

              lol. like that’s a fix

              (Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …)