When I returned to Duolingo recently -- I used to use it heavily but set it aside for 2 years -- I counted 14 gamification popups in a row after my first lesson in a new language.
14! The damned popups lasted longer than the lesson had!
I switched over to Busuu, which has blatantly copied some of Duolingo's mechanics but at least uses them with a modicum of restraint.
This sort of notification-barrage is a common problem in mobile apps with multiple teams and I really wish it wasn’t. I still use Facebook quite a bit and I’m consistently frustrated by how degenerate the concept of a “notification” has become. Some of the finest engineers I know work at Meta, I know it’s not a technical problem, I think it’s an organizational problem. For example…
Team A ships feature X and sets their KPI to some arbitrary measure of engagement. They miss, obviously, but instead of regrouping and going back to the drawing board, A doubles down and pressures Team B to point towards X in feature Y. A sees some marginal gain in engagement for X, obviously, so the intervention is deemed a success. Six months later, Team A is asked to return the favor and add a modal pointing to new feature Z, per the request of Team B.
I don’t really know what the solution is, other than careful org-wide watchdogging to ensure this sort of user-hostile engagement infighting gets nipped in the bud.
> This sort of notification-barrage is a common problem in mobile apps with multiple teams
That makes me think about how everyone defining an operational alert/warning thinks theirs is very important, leading to so many that users tune them all out and everyone loses.
It’s especially frustrating that DoorDash will happily use notifications both for order status/issues and to spam various deal/promotion offers. On iOS there’s simply no way to turn the latter off so you only get order status notifications.
I ended up disabling notifications completely (and eventually just deleting it)
For the team that worked on a feature for months, it’s the whole world at the time of release. Staying mindful that it is not the end-users’ whole world, but just a tiny, insignificant fraction of it, is something easily lost in denial.
I wonder if lack of consequences might be related to lack of proof. Cheating in the past looked like reusing an essay you found online or paying off someone to write it for you -- methods that offer a definitive way to prove it happened.
AI isn't particularly provable. Worse, a lot of professors are lazy and will rely on tools that claim to detect AI-generated text; and like any tool, those produce false positives. Just imagine being expelled because your writing doesn't contain spelling mistakes, and does use em-dashes, bullet points, and typographic emphasis.
Perhaps the only way to really provide evidence against cheating accusations is to also provide a version history of the document the student worked on over time, plus notes (handwritten, preferably), and so on?
That would exponentially increase the workload of educators though, so it's unlikely to be taken up.
(I'm an instructor in a vocational field - students need to demonstrate their skills to achieve qualification, rather than write essays/reports/etc. - so AI isn't as significant a deal as it is in academic fields.)
A lot of human empathy isn't real either. Defaulting to the most extreme example, narcissists use love bombing to build attachment. Sales people use "relationship building" to make money. AI actually seems better than these -- it isn't building up to a rug pull (at least, not one that we know of yet).
And it's getting worse year after year, as our society gets more isolated. Look at trends in pig butchering, for instance: a lot of these are people so incredibly lonely and unhappy that they fall into the world's most obvious scam. AI is one of the few things that actually looks like it could work, so I think realistically it doesn't matter that it's not real empathy. At the same time, Sam Altman looks like the kind of guy who could be equally effective as a startup CEO or running a butchering op in Myanmar, so I hope like hell the market fragments more.
This is a good point: you can't be dependent on a chatbot in the same way you're dependent on someone you share a lease with. If people take up chatbots en masse, maybe it says more about how they perceive the risk of virtual or physical human interactions vs AI. The people I have met in the past make the most sycophantic AIs seem like a drop in the bucket by comparison. When you come back from that in real life, you remark that this is all just a bunch of text.
I treat AIs dispassionately, like a secretary I can give infinite amounts of work to without needing to care about them throwing their hands up. That sort of mindset is not conducive to developing any feelings. With humans you need empathy to not burden them with excessive demands. If it solely comes down to getting work done (and not building friendships or professional relationships etc.), then that need to restrain your demands is a limitation of human biology that AIs kind of circumvent for specific workloads.
You've just made me realize that I actually do need that as a macro. Probably type that ten times per day lately. Others might include "in one sentence" or "only answer yes or no, and link sources proving your assertion".
No matter how many times I get ChatGPT to write my rules to long-term memory (I checked, and multiple rules exist in LTM multiple times), it inevitably forgets some or all of the rules because after a while, it can only see what's right in front of it, and not (what should be) the defining schema that you might provide.
I haven't used ChatGPT in a while. I used to run into a problem that sounds similar. If you're talking about:
1. Rules that get prefixed in front of your prompt as part of the real prompt ChatGPT gets. Like what they do with the system prompt.
And
2. Some content makes your prompt too big for the context window, where the rules get cut off.
Then it might help to measure the tokens in the overall prompt, have a max number, and warn if it goes over it. I had a custom chat app that used their APIs with this feature built in.
Another possibility is, when this is detected, it asks you if you want to use a model with a larger context window. Those cost more, so it would be presented as an option. My app let me select any of their models to do that manually.
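The budgeting idea above can be sketched in a few lines: estimate tokens in the combined rules-plus-prompt, warn when it exceeds a model's window, and fall back to a bigger-context model. The ~4-characters-per-token heuristic and the model names/limits here are illustrative assumptions, not the original app's values; a real client would use the provider's actual tokenizer and documented limits.

```python
CONTEXT_LIMITS = {          # hypothetical model -> context size in tokens
    "standard": 8_000,
    "large-context": 128_000,
}

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def pick_model(rules: str, user_prompt: str) -> tuple[str, bool]:
    """Return (model_name, fits): the cheapest model whose context window
    holds the combined prompt, or the largest model with fits=False."""
    total = estimate_tokens(rules + "\n" + user_prompt)
    for model, limit in CONTEXT_LIMITS.items():
        if total <= limit:
            return model, True
    return "large-context", False   # even the biggest window overflows

model, fits = pick_model("Always cite sources.", "Summarize this article...")
```

In a chat UI, the `fits=False` case is where you'd surface the "switch to a larger-context (pricier) model?" prompt rather than silently truncating the rules.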
When you click a button in Unity or Roblox or whatever to generate a new texture, the thing that gets generated comes from a model that could not have been built without using IP. But because it all got chucked into a blender and turned into an anonymous slurry -- and because AI is a politically important growth industry -- the people whose work went into the slurry will not benefit, at all. They'll never see a dime, while the companies selling the slurry will get billions. A lot of those people are the exact ones whose job will be replaced, which is extra painful when you know it was your own work that was used to replace you.
Although in a sense it's pointless to bring up because that milk is already spilt, and it ain't gonna get back into the container.
Fun fact since you bring up treasuries, Marjorie Taylor Greene appears to have moved several hundred thousand dollars (probably most of her assets) into treasury bills a few days before the tariffs announcement, pretty much like she knew exactly what was coming. I noticed that tonight with a new site someone else here made: https://www.capitoltrades.com/politicians/G000596
I suspect there's real corruption to be found, anyone who knew the contents of "Liberation Day" could make millions off a very obvious bet. Too bad we don't care about corruption anymore!
I am no fan of the administration or Greene, but the tariff day was well communicated beforehand (though not the magnitude). Getting out of the market before the chaos hit the fan seems like a fiscally savvy move.
> unfathomable that a nation in such a privileged position would decide it's tired of such a systematic set of advantages.
That's just the thing, you've got it exactly backward! The US has been "looted, pillaged, raped and plundered by nations near and far," and it's far less rich than it should be.
Icarus is taking flight, the rest of the universe better watch out. And for those stuck getting carried along toward the sun... tough shit, there's not going to be enough time to convince a population of born and bred narcissists that it's really hot up there before the meltdown.
If you went looking in China, you'd find plenty of really stupid or bad things that happened as a result of the government's ability to easily requisition land for public use. Ruined ecosystems, displaced families, corruption, the works.
But it's quite obvious that any large-scale building would result in at least some of that. There is no way to "figure out" how to avoid all harms or how to avoid at least some broken eggs in the basket. Personally I'm all for North Americans figuring out what a "greater good" is, but the idea that it'll actually happen is laughable; we can't, we won't, we'll simply fester in what we have.
If someone changes and begins to continually insist that something plainly untrue is true, does that mean they possibly still have the values they used to? How long do you keep defending the "well, maybe..." case?
Throw out the Jan 6th example, it's now ancient history. As a party, Republicans are, at this very instant, claiming that judges are acting illegally for... using their constitutionally mandated legal powers. Simultaneously, but separately, the party apparatus is repeating on a daily basis a new conspiracy theory that the judges they don't like are being controlled by some nefarious power.
And it's a very, very well established playbook. We have many examples of countries that dismantled their systems of transition of power and division of power starting with the courts. It's a move that could pretty much make it into a "For Dummies" book.
"The value is still there." I can't see it. But maybe I'm too focused on judging on the entire scope of action and speech, rather than a very narrow bit of speech that isn't at all reflected in actions.