It's terrifying to know that this is the process for an alert like that going out. I agree with other commenters that this looks like an editorial mistake, but even if not, a bad-actor journalist (or someone who has compromised their Slack account) can get a headline like this on a major wire service in ~10 minutes!? What?!
First, the knocks on post-production completely ignore the history of film development, editing, & printing. The time to an output photo product has only gone down. Drastically.
And this piece ignores the needs of the commercial photographers high-end glass is designed for, who need not a toy but as few pieces of gear as possible to land as many jobs as possible. When you’re working for clients, it’s easier to own less glass and a few pieces of software that will make it look however you want.
Most technical adults I know are in some stage of trying to revert their internet usage back to the offerings this bill would allow through. The less-than-1-million-users carve-out is HUGE, allowing for the power of small forums, federated socials, etc. Seems like a great bill unless you’re trying to exploit young people.
It seems you're implying everyone who disagrees with you is trying to exploit young people. It's the same rhetoric as the politicians who drafted the bill. We need to have open minds in civil society, some humility to anticipate dissenting opinions, and careful fingers to type out comments with.
I have some concerns about this bill and tactic. For example, even if it worked as intended, would we see an uptick of problems in 18 year olds' use of algorithmic social media as they're suddenly away from home/out from under their parents' supervision and given access to a bunch of exploiting content? It's similar to how the first two years of driving are more dangerous regardless of when those 2 years take place.
And as somebody who was 12 when COPPA went into effect, there are unintended consequences of banning minors from platforms:
1.) They lie, so either that's going to be commonly known/accepted OR a giant millstone around the neck of any existing company. This makes it a lot harder to pick out accounts that belong to minors, which makes it harder to both research and protect kids.
2.) Related to that, if you're breaking the rules by being in a space, you are way less likely to speak up. If you're a 15 year old who lies and says you're 18 to be on whatever social media platform, then if somebody harasses you, you're less likely to report it because it would get your account banned.
That's not even mentioning what something like this would do for the edge case of kids who genuinely are artists or content creators.
> 1.) They lie, so either that's going to be commonly known/accepted OR a giant millstone around the neck of any existing company.
You hint at an under-mentioned point here: we want laws that encourage companies to be aware of their users and to protect them, and an unintended consequence of laws that say, "you can continue operating as normal as long as you don't know any of your users are kids" is that companies hear, "don't make any moderation or safety features that might open you up to that kind of accusation."
Of course demanding that companies know all of their users perfectly is an obvious privacy violation with obviously even worse consequences. But even though it's better than hooking up ID verification to social networks, "pretend teenagers don't use the Internet" isn't harmless policy, it's not just that there are popups people click through.
----
I personally feel like this kind of "don't knowingly target" stuff is often counterproductive to keeping kids safe online. It means that when they hang out, almost every space they enter is going to be specifically designed for adults, and will systematically ignore the fact that they exist or might have unique needs -- because ignoring that kids exist and removing safeguards is now the safest thing for the website to do.
On a really small scale, think back to when YouTube got targeted for programming "aimed at kids". One short-term result I saw from that was animators/streamers deliberately trying to make their streams less child-appropriate so they wouldn't be swept up. It's anecdotal and I'd like to see more research on it, but I vaguely wonder if the result of these crackdowns isn't often to make social sites more dangerous for kids.
> But even though it's better than hooking up ID verification to social networks, "pretend teenagers don't use the Internet" isn't harmless policy, it's not just that there are popups people click through.
Related to that, we can't just pretend teenagers have no ability or agency. Honestly, it's a toss-up who would 'win' a cat-and-mouse game between the MN legislature and a group of teens with programming capability. Adolescents are in a developmental stage where they're establishing themselves as individuals away from their parents/adult authorities; it's natural that they're going to seek out spaces that their parents either don't know about or don't want them going to. Our job as adults is to make sure that process is safe for them while still allowing them the autonomy to learn to make good decisions.
> I vaguely wonder if the result of these crackdowns isn't often to make social sites more dangerous for kids.
I wonder this too. I was a very digital kid back in the day before there were regulations against it, and there were opportunity costs to kicking out the under-13s that would be very magnified for 13-to-18s.
It prevents kids from having their own social structures and spaces. For example, 7-10-year-old me ran a curatorial site for the GeoCities kids' neighborhood, and late-elementary-school me also had an IRC channel. It seems bonkers, but there were advantages: since I and some of the other kids could run things ourselves (with some help from trusted adults), it kept creepy adults from ingratiating themselves with the group by providing resources. (Think the stereotypical college kid buying high schoolers booze; if kids can't sign up or learn how things work, then they have to play in adult playgrounds instead of making their own.) It also meant I could kick people and that the conversation was age-appropriate (because that was where I talked about kid stuff, and how dare you be off-topic in my channel [kids make great dictators]).
Related, having an admin/building group of kids is really helpful as a buffer, especially in the teen years. Lots of teens aren't going to tell their parents much, but they will tell other teens, so having some teens around who know how stuff works and give advice is helpful.
You can't legislate for teens without remembering that they have agency and will act independently.
The thing is that there's no benefit to algorithmic social media, so erring on the side of caution is responsible, and the threat we see is clear and extremely plausible. The ideal outcome is that teens won't form a social media addiction.
Algorithmic social media uses the exact same principles as gambling addictions. Teens are much more susceptible to these exploitations; we don't let them sit in casinos all day, and loot boxes have seen some regulation as well.
Social media may be even worse because the gamble is far less tangible: its cost on the surface is only time, and the reward is purely an emotional one.
I would support efforts at restraining exploitative use of algorithmic content for all ages. The main issue is that age-restricting teens specifically is logistically difficult and can create unintended consequences. Teens are going to act on their own, and since many of today's adults (particularly of the generation that tends to legislate in the US) were not raised in a digital world, there will be groups of teens who are better with technology/computers than the adults making the laws. The laws therefore either need to account for that, or require controls so draconian that companies are likely to just ban the kids or stop operating (which opens up the 'what about lying' issue).
You mention casinos. One difference between casinos/alcohol/cigarettes and social media is that the former have easy places to intervene/place responsibility: the point of service is a physical location subject to local law. Digital regulation is a lot murkier and easier to exploit. (And not just by creepy adults: a lot of us kids in the 90s/early 00s picked up on the argument that if we weren't legally old enough to agree to a site's terms, then we couldn't be held legally accountable for things like piracy either.)
The only way to ensure something like this could be enforced would be to require ID for signing up for/into any algorithmic service and HELL no. Not only should we not throw the baby out with the bathwater, we also shouldn't set the cradle on fire.
The author takes a very narrow claim, that fax machines are popular in Japan, and expands it into a weak orientalist takedown of Japan’s objectively advanced society. They even go so far as to acknowledge infrastructure masterworks like bullet trains, only to discount them because of a (still globally used!) piece of office hardware that has fallen out of favor in the US tech sector. Whatever, dude…
And...fax machines are heavily used in the medical and legal professions in the US. I have a doctor's card on my desk right now that has a fax number on it, because medical records and forms often need to be faxed. Anyone who's ever had to file a worker's comp or long term disability sort of deal knows the fun of playing fax tag between the company their employer contracts that to and the doctor's office.
That's not even touching on the cult of the magic signature. A piece of paper is suddenly special because someone scribbled something that looks roughly like their name on it, even though that doesn't prove anything.
The breath-rate application really got my mind turning here: replacing two wearables (breath monitor, fall monitor) with a wireless device, and pulse sensing can already be done by video. Imagine walking into a hospital room or grandma’s home & having full bio monitoring with no tubes or cables…
In 2014, the ResMed office in Ireland was doing this “off label” with their S+ sleep monitor. They were very close, and the goal/target market was for seniors in long term care facilities.
Their current website states "S+ is coming to an end. Support from ResMed for the S+ sleep tracking device and app is coming to an end. We have collaborated with SleepScore Labs™ so you can continue tracking your sleep through their SleepScore Max mobile app only with your prior consent. Learn more here.. If you wish to have your account and personal data deleted, please contact us at customerservice@mySplus.com"
Does this product/service currently exist as a bundle, an easily connected assortment of services, or something that needs to be created? If so, I'd love to know how I can help make this a reality!