And, if that's the case, most smaller companies will just remove that content the second they get a threat over it. So congrats, Brian, you just created a massive, legally backed heckler's veto to remove any content someone doesn't like. GREAT JOB!
And, also, you still haven't admitted that you lied about me turning off comments. Maybe sit this one out, champ.
... that everyone just automatically assumes that every service must be engaging in bad behavior. This one is totally opt-in, only works if BOTH people opt in & involves encrypted number pairs. There are all sorts of safeguards here. But because people have been burned, they insist this must be bad.
Bluesky folks came up with a fundamentally new and privacy-protecting method of doing this, unlike every other service. Almost every negative response assumes Bluesky is doing this the same bad way everyone else does it. Bad actors have so poisoned the entire market...
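Since the post only gestures at the mechanism, here's a minimal sketch of what "mutual opt-in with encrypted number pairs" can look like in general. This is an illustration only, not Bluesky's actual design; the HMAC key, function names, and in-memory storage are all hypothetical stand-ins for whatever the real service does.

```python
# Hypothetical sketch of mutual opt-in contact matching.
# Not Bluesky's implementation; it just illustrates the idea above:
# nothing matches unless BOTH people opt in, and the server only ever
# handles keyed hashes of number pairs, never a one-sided contact upload.
import hmac
import hashlib

SERVER_KEY = b"example-key"  # hypothetical server-side secret

def pair_token(my_number: str, contact_number: str) -> str:
    """Derive the same token regardless of which side submits the pair."""
    pair = "|".join(sorted([my_number, contact_number]))
    return hmac.new(SERVER_KEY, pair.encode(), hashlib.sha256).hexdigest()

# Server-side: token -> which sides of the pair have submitted it.
submissions: dict[str, set[str]] = {}

def opt_in(my_number: str, contacts: list[str]) -> list[str]:
    """Register an opted-in user's contacts; return any that are now mutual."""
    matches = []
    for contact in contacts:
        token = pair_token(my_number, contact)
        seen = submissions.setdefault(token, set())
        seen.add(my_number)
        if len(seen) == 2:  # both people independently submitted this pair
            matches.append(contact)
    return matches

# Usage: Alice opts in first and gets no match; the pair is only revealed
# once Bob opts in too and submits the same pair from his side.
print(opt_in("alice", ["bob"]))   # -> []
print(opt_in("bob", ["alice"]))   # -> ['alice']
```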
Unless you're trying to argue that machines can reliably determine what is and what is not a true threat, which would be a strange thing to argue, I'm still not sure I see your point.
Unlike a lot of people, I am not reflexively against the idea of an "AI browser," but I remain confused that no one who is offering "an AI browser" has yet explained to me in any compelling way what benefit I get out of an AI browser. It's possible it's there, but then... maybe explain why?
Picking and choosing what to send to a user is a subjective opinion of "this is what I think you'll like." And a subjective opinion of "this is what I think you'll like" cannot violate the law. It is an opinion. It is not, as the old saying goes, an "endorsement."
And, under the First Amendment, for a distributor to be liable for violative content, the distributor has to have *knowledge* that the content is violative. That is, they need to know that the content violates some law. And an algorithm can't really know that.
The sleight of hand being pulled here is that they pretend that hosting or showing you someone else's speech makes the platform liable for whatever harms that speech enables. But that's not how it works, and it's not how it works in areas without 230 either.
If the "harm" is directly from actions by the social media company, Section 230 does not protect them. But, the problem that people have is that the "harm" they're upset about comes from third party *speech* on those platforms, and they don't want to go after the actual speakers.