• m0darn@lemmy.ca · 22 days ago

    I think it may be possible if you understand the difference between the right to speak and the right to be heard.

    I.e., the right to say something doesn’t create an obligation in others to hear it, nor to hear you in the future.

    If I stand up on a milk crate in the middle of a city park to preach the glory of closed source operating systems, it doesn’t infringe my right to free speech if someone posts a sign at the park entrance that says “Microsoft shill ahead” and offers earplugs. People can choose to believe the sign or not.

    A social media platform could automate the signs and earplugs by allowing users to set thresholds for the discourse acceptable to them on different topics; the platform could then evaluate (through data analysis or crowdsourced feedback) whether comments and/or commenters meet those thresholds.
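
    Roughly, as a sketch in Python (everything here is invented for illustration: the topic labels, the 0-to-1 score, and the idea that a classifier or crowdsourced feedback produces that score are assumptions, not any real platform’s API):

    ```python
    # Hypothetical sketch of per-user, per-topic thresholds.
    # Scores are assumed to come from data analysis or crowdsourced
    # feedback (0.0 = benign, 1.0 = clearly hateful).
    from dataclasses import dataclass, field


    @dataclass
    class UserPreferences:
        # Per-topic tolerance: hide anything scored above the threshold.
        thresholds: dict[str, float] = field(default_factory=dict)
        default_threshold: float = 0.5

        def threshold_for(self, topic: str) -> float:
            return self.thresholds.get(topic, self.default_threshold)


    @dataclass
    class Comment:
        text: str
        topic: str
        score: float  # hatefulness estimate for this comment, 0.0-1.0


    def visible_comments(comments: list[Comment], prefs: UserPreferences) -> list[Comment]:
        # Nothing is deleted from the platform; a comment is simply not
        # shown to readers whose thresholds exclude it (the automated
        # sign-and-earplugs).
        return [c for c in comments if c.score <= prefs.threshold_for(c.topic)]


    # Example: a reader with a strict threshold on one topic.
    prefs = UserPreferences(thresholds={"operating-systems": 0.2})
    feed = visible_comments(
        [
            Comment("Closed source OSes are glorious", "operating-systems", 0.05),
            Comment("(something hateful)", "operating-systems", 0.9),
        ],
        prefs,
    )
    # feed now contains only the first comment for this reader.
    ```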

    I think this would largely stop people from experiencing hate speech (once they had their thresholds appropriately dialed in) and disincentivize hate speech, without actually infringing anybody’s right to say whatever they want.

    There would definitely be challenges though.

    If a person wants to be protected from experiencing hate speech, they need to empower someone (or something) to censor media on their behalf, which is a risk.

    Properly evaluating content for hate speech or otherwise objectionable speech is difficult. Upvotes and downvotes are an attempt to do this in a very coarse way. That system assumes that all users have a shared view of what content is worth seeing on a given topic, and that all votes are equally credible. In a small community of people with similar values who aren’t trying to manipulate the system, it’s a reasonable approach. It doesn’t scale that well.
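
    In code, that vote-based approach boils down to something like this deliberately naive sketch (the names and cutoff are invented), which makes those baked-in assumptions explicit:

    ```python
    # Naive vote-based filtering: every vote counts equally, and one
    # global score decides visibility for every reader, regardless of
    # their individual values.
    def vote_score(upvotes: int, downvotes: int) -> int:
        return upvotes - downvotes


    def is_shown(upvotes: int, downvotes: int, hide_below: int = -3) -> bool:
        # A single community-wide cutoff: workable in a small,
        # like-minded, good-faith community, but it breaks down at
        # scale or under coordinated vote manipulation.
        return vote_score(upvotes, downvotes) >= hide_below
    ```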

    • masterspace@lemmy.ca · 22 days ago (edited)

      I think you misunderstand the point of hate speech laws. It’s not about not hearing it; it’s because people rightly recognize that spreading ideas can itself be dangerous, given how flawed human beings are and how some ideas can incite people towards violence.

      The idea that all ideas are harmless and that spreading them to others has no effect is flat-out divorced from reality.

      Spreading the idea that others are less than human and deserve to die is an act of violence in itself, just a cowardly one, one step removed from action. But it is one that should still be illegal. It’s the difference between ignoring Nazis and hoping they go away, and going out and punching them in the teeth.

      • m0darn@lemmy.ca · 22 days ago (edited)

        I support robust enforcement of anti-hate-speech laws. In fact, I’ve reported hate speech / hate crimes to the police before.

        We’re not talking about laws; we’re talking about social media platform policies.

        Social media platforms connect people from regions with different hate speech laws, so “enforcing hate speech laws” is impossible to do consistently.

        If users commit crimes using the platform, they are subject to whatever laws apply to them.

        I don’t care that it’s legal to advocate for genocide where a preacher is located, or in the corporation’s preferred jurisdiction; I don’t want my son reading it.

        The question was: is there a way a platform can be totally free speech and still stop hate speech? I think the answer is “kinda”.