They always show me my total before the card swipe, so as long as the obfuscation works until the card swipe, at least it would prevent dynamic pricing.
I mean, that assumes you can't just assign the highest price to people whose faces aren't recognized.
Part of the point of dynamic pricing is that you don't need specific individual targets to do cluster-based pricing.
So if I am running the dynamic price tuning, then I’ll just jack up prices if faces are obfuscated.
You have to understand that the moment you walk into any private establishment that's a business, you are quite literally walking into a Skinner box at this point.
It isn't just how fast or slow it is. Reading at a slow pace gives you time to think in a way that is flexible from sentence to sentence.
To borrow the analogy from the article, imagine trying to savor a meal where someone else decides when you take each bite. Even at a slow pace, the rigidity of the pacing and your lack of fine control would still keep you from giving each bite its rightful consideration.
That being said, I love audiobooks and think I would struggle to apply this article's advice in my own life. Slowing down your audiobook is still a step in that direction, though I sometimes find that slowing it down causes my mind to wander, and my comprehension goes down rather than up.
No, just the opposite. In general, the things wealthy people buy (luxuries) see much larger swings in demand in response to price changes like added taxes (in economic terms, they have a high "price elasticity of demand"). That's because they are only wants, not needs, and they are usually easy to swap: instead of buying your wife those diamond earrings, you could get her a painting or a trip to Spain. And rich people are often very money savvy.
It's the necessities that people will continue to buy (or at least replace with close substitutes), regardless of what happens to the price.
Obviously, in this case it worked out much differently, but no, in general you can't say that wealthy people don't respond to price changes just because of their wealth.
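Not from any source above, just a toy sketch of what "price elasticity of demand" means, with entirely made-up quantities and prices: the midpoint (arc) formula is the percent change in quantity demanded divided by the percent change in price, and |elasticity| > 1 means demand is "elastic" (luxury-like) while |elasticity| < 1 means "inelastic" (necessity-like).

```python
# Toy illustration of price elasticity of demand (all numbers are made up).
def elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity: % change in quantity / % change in price."""
    dq = (q1 - q0) / ((q0 + q1) / 2)  # percent change in quantity demanded
    dp = (p1 - p0) / ((p0 + p1) / 2)  # percent change in price
    return dq / dp

# A luxury: a ~10% price rise cuts purchases sharply -> |elasticity| > 1
lux = elasticity(q0=100, q1=70, p0=1000, p1=1100)

# A necessity: the same rise barely moves demand -> |elasticity| < 1
nec = elasticity(q0=100, q1=97, p0=10, p1=11)

print(f"luxury: {lux:.2f}, necessity: {nec:.2f}")
```

With these toy numbers the luxury comes out strongly elastic (around -3.7) and the necessity inelastic (around -0.3), which is the distinction the comment is drawing.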
I feel like we watched different videos. It seemed like the AI (or some other monitoring system) recognized a problem with the 18,000-cups-of-water order and quickly handed off to a real human. That instance looked pretty production-ready to me.
I interpreted it as the AI system adding something strange to the order, and when someone saw that, the system was cut off. Otherwise, the next word sounded like a confirmation.
That said, this is not the only video floating around of these types of systems failing to handle edge cases elegantly.
A lot of digital ones are "local" too, in that they are context specific. As long as it stays context specific, your Uber rating is closer to being liked by your local bartender than to the Chinese social credit system. Even your local bartender has a little context leakage.
I agree there is a scarier potential there. And also some do, on occasion, escape their context (mostly credit score). They also have bigger contexts, but not so big that I would jump to the Chinese social credit comparison.
That's absurd. It doesn't pass the sniff test at all that people would react like that to only a 3 percent tax.
I looked it up, and it was a 3 pence tax per pound, when tea was selling for 2 to 3 pence per pound. So yeah, that's effectively a 100-150% tax, combined with the fact that the East India Company was allowed to sell without paying it. That is very unjust, and it threatens their business a lot more than the tax alone would.
One potential issue with that approach is the factors wouldn't stay very constant across generations of AI models.
While a lot of people have used various methods to try to gauge the strength of AI models, one of my favorites is this time horizon analysis [1], which took coding tasks of various lengths, measured how long it takes humans to complete them, and compared that to the chance that the AI would successfully complete the task. They then looked at various thresholds to see how long a task an AI could generally complete at a given success rate. They found that the length of task an AI can complete at a given threshold is doubling about every 7 months.
The reason I found this approach interesting is both that AI seems to struggle with coding tasks as the problem grows in complexity, and that being able to give it more complex tasks is an important metric, both for coding and, more generally, for asking AIs to act as independent agents. In my experience, increasing the complexity of a problem causes a much larger performance falloff for AI than for humans, where the task would just take longer, so this approach makes a lot of intuitive sense to me.
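To make the "doubling every ~7 months" claim concrete, here's a minimal sketch of the kind of trend fit involved. The data points are made up purely for illustration (the real analysis uses measured 50%-success task lengths per model release date); the idea is just that if you fit log2(horizon) against time, the slope is doublings per year.

```python
# Hedged sketch of a time-horizon trend fit. Data points are hypothetical:
# (years since the first measured model, 50%-success horizon in human-minutes).
import math

points = [(0.0, 4.0), (0.6, 8.5), (1.2, 16.0), (1.8, 35.0), (2.4, 70.0)]

# Least-squares fit of log2(horizon) vs. time: slope = doublings per year.
n = len(points)
xs = [t for t, _ in points]
ys = [math.log2(h) for _, h in points]
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

months_per_doubling = 12 / slope
print(f"doubling time ≈ {months_per_doubling:.1f} months")
```

With these toy numbers the fit comes out to roughly 7 months per doubling, matching the headline figure; the exponential framing is why "can do 5-minute tasks" and "can do multi-hour tasks" can be only a couple of years apart.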
I agree Hyundai should fix this for free (it would make up for a small portion of the bad PR from having this issue in the first place), but don't forced recalls usually only apply to defects that cause safety issues?
I'm not sure this would fit the definition of a product safety defect.
I think your take makes more sense in a world where you actually own the car fully and have the freedom to do what you want with it. Even if someone was able to write this patch themselves without the source code, distributing it would require owners to root their devices, which isn't legal in all jurisdictions.
You don't expect Microsoft or Adobe to issue fixes any time someone finds a remote exploit that lets attackers gain control of your system through a security issue in their software? I 100% expect this of my software vendors, even for purchases made in the past. The expectations for software and hardware are certainly very different, but even for hardware we have laws that force companies to fix their products in some situations.