Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.

LinkedIn thus takes after its parent, which recently revised its Service Agreement to make clear that its Assistive AI should not be relied upon.

LinkedIn, however, has taken its denial of responsibility a step further: it will hold users responsible for sharing any policy-violating misinformation created by its own AI tools.

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will provide features that generate content automatically, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it; LinkedIn disclaims responsibility for the consequences.

The platform’s Professional Community Policies direct users to “share information that is real and authentic” – a standard to which LinkedIn is not holding its own tools.

  • solsangraal@lemmy.zip

    do people share shit on linkedin when it isn’t part of their job to share shit on linkedin?

    • pathief@lemmy.world

      I stopped using LinkedIn because it totally turned into Facebook. Everyone is just posting memes, motivation quotes or soccer.

    • Shdwdrgn@mander.xyz

      I still don’t know why anyone USES linkedin. It was a shit company built by hacking Windows and sending out emails in other people’s names to try to build their user base. The fact that Microsoft actually bought the company that hacked their operating system just shows how little moral value is present in any of this.

      • ryper@lemmy.ca

        I think their mobile apps were in on the contact snooping too, it wasn’t just Windows

    • bassomitron@lemmy.world

      It’s weirdly used as a normal social media platform by a ton of people I’ve worked with over the years. I have no idea why, tbh, but they’re out there.

      • solsangraal@lemmy.zip

        LOL TIL

        whatever. it’s stupid and it sucks balls, but it’s better than instatwitsnapbooktok

        • misk@sopuli.xyz

          It really isn’t. Lots of weirdos came out during the pandemic. It’s pretty much a cringier Facebook now. The only difference being you have to be on FB for some neighbourhood groups and you have to be on LI for your unimportant job at a multi-billion dollar company.

          • solsangraal@lemmy.zip

            sorry, i didn’t mean to give the impression that i give a rat’s ass whether or which social medias are better than others

  • TachyonTele@lemm.ee

    “AI is bullshit, and you’re dumb for using it” is what they are saying. It’s amazing.

    • Grimy@lemmy.world

      “AI can bullshit, and you’re dumb if you don’t verify it.”

      I’m always surprised at the amount of people that expect an algorithm, built by rawdogging literally half the internet, to be an arbiter of truth.

      • TachyonTele@lemm.ee

        I think the massive push for it by every single company gives the layman a picture of “everyone uses it so it must be good”, combined with most people just simply not caring enough to think too much into it.

Kind of an aside, but I’m really hoping for a technology plateau of some sort, with the hopes that people really have a chance to look at everything and ditch all the crap.

        And then another period of growth from there.

  • fluxion@lemmy.world

    If companies don’t trust their own AI on their own sites then they are pushing a shitty unvetted algorithm and hiding behind the word “AI” to avoid accountability for their own software bugs. If we want AI to be anything other than trash then companies need to be held accountable just like with any other software they produce.

  • Jesus@lemmy.world

    Honestly, the AI information might be better than most of the dog shit insights people post on that platform.

  • Imgonnatrythis@sh.itjust.works

    I would like to join politicians and corporations in divorcing the conventional relationship between my actions and their consequences.

Where do I sign up?