I saw this article, which made me think about it…

Kids under 16 to be banned from social media after Senate passes world-first laws


Seeing what kind of brainrot kids are watching makes me think it’s a good idea. I wouldn’t say all content is bad, but most kids will get hooked on trash content that is intentionally designed to grab their attention.

What would be an effective way to enforce a restriction with the fewest possible side effects? And who should be the one enforcing that restriction in your opinion?

  • Dave@lemmy.nz · 26 days ago

    > I recall that some years ago Facebook was looking into their algorithm and they found that it was potentially leading to overuse, which might be what you’re thinking of,

    No, it was recent, and it was an opinion style piece not news.

    > but what actually happened is that they changed it so that people wouldn’t be using Facebook as much.

    Can you back this up? Were they forced to by a court, or was this before the IPO, when Facebook was still trying to gain ground and didn’t answer to the share market? I can’t imagine they would be allowed to take actions that reduce profits; companies are legally required to maximise value to shareholders.

    > Anyway, when you say the algorithms are demonstrably unsafe, you know you’re wrong because you didn’t demonstrate anything, and you didn’t cite anyone demonstrating anything. You can say you think they’re unsafe, but that’s a matter of opinion and we all have our own opinions.

    I mean, it doesn’t take long to find studies like “A nationwide study on time spent on social media and self-harm among adolescents”, “Does mindless scrolling hamper well-being?”, or “How Algorithms Promote Self-Radicalization”, but I think this misses the point.

    You’ve grabbed the part where I made a throwaway comment but missed the point of my post. Facebook is one type of social media, and they use a specific algorithm. Ibuprofen is a specific type of drug. Sometimes ibuprofen can be used in a way that is harmful, but largely it is considered safe. But the producers still had to prove it was safe.

    • orcrist@lemm.ee · 26 days ago

      Here’s one example of Facebook adjusting its algorithm several years ago. You can remark that it ought to do more, and I may agree with you, but that’s totally different from saying it doesn’t do anything positive. https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/

      If your argument is that there can be drawbacks to using social media, I think everyone agrees. But remember, we were told horror stories about pinball, pool, comic books, chewing gum, Dungeons and Dragons, the list goes on and on. So with that in mind, I hope you can understand why I’m not convinced by a few studies that social media is net negative in value.

      And the reason we have laws requiring careful drug testing is the damage that was done in the past, proven damage that actually happened, people whose lives were cut short because they were doing things like imbibing radioactive chemicals. Your suggestion that we ought to treat social media the same way is putting the cart before the horse. The burden of proof is on you, not on the social media companies.

      • Dave@lemmy.nz · 25 days ago

        I think we ultimately have different beliefs about how things should work. I think companies should prove their products are safe; you think things should be allowed unless someone can prove they’re not safe.

        I get it, and I think it’s OK to have different opinions on this.