This. Normal AirTags are just fine for tracking your stuff.
> "(thiefs use apps to locate AirTags around, and AirTags will warn the thief if an unknown AirTag is travelling with them, for example if they steal your car)"
The reason this was introduced is exactly because people used AirTags to stalk others. Advertising that your product turns that off is basically targeting that specific demographic.
And you can use hammers to brutally murder people as well as to drive in nails. You can use a screwdriver to grievously wound someone besides using it to repair your glasses. The fact that a tool can be used for bad things does not negate the good things it can be used for. Nor does it mean that the maker is responsible if someone chooses to use it for bad things.
As it's 4 hours off / 1 hour on, the device is not very suitable for stalking someone. Also once the AirTag is back on and the person starts moving, they will be alerted that the AirTag is tracking them.
That’s perfect for finding out where someone lives. Drop it in their bag or jacket at a concert/bar/work/whatever-in-the-evening, and the place they’re likely at in 4 hours is their home.
Not trying to be creepy, I’m just trying to demonstrate how we all need to think like adversaries (e.g., creeps) when designing products.
I once donated an infant car seat to a coworker but forgot I had put an AirTag on it. After she had taken it home, her iPhone told her there was an unknown AirTag, and she texted me. I apologized profusely and she wasn't bothered by it. Nonetheless, had I been nefarious, I would have been able to get her home address.
ADC (Application Default Credentials) is a specification for finding credentials (1. look here, 2. look there, etc.), not an alternative to credentials. Using ADC one can, e.g., find an SA (service account) key file.
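A minimal sketch of what consuming ADC looks like from Python, assuming the google-auth library (the scope is just an example):

```python
# Sketch: google-auth walks the ADC lookup chain for you
# (GOOGLE_APPLICATION_CREDENTIALS env var -> gcloud user credentials
#  on disk -> metadata server when running on GCP) and returns
# whichever credentials it finds first.
import google.auth

credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
print(type(credentials).__name__, project_id)
```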
As a replacement for SA files one can use, e.g., user accounts with SA impersonation, external identity providers, or run on a GCP VM or in GKE and use the built-in identities.
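For the impersonation route specifically, a hedged sketch with the same library (the SA address is made up for illustration):

```python
# Sketch: a user credential (e.g. from `gcloud auth application-default
# login`) impersonating a service account, so no SA key file is needed.
import google.auth
from google.auth import impersonated_credentials

source_credentials, _ = google.auth.default()

target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="my-sa@my-project.iam.gserviceaccount.com",  # hypothetical
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
```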
There's a bit of a... pun(?) in there with its apparent origin as a name in Waluigi: the Japanese word for "bad" is "waru" (noun) or "warui" (adjective), and with the "l" / "r" thing in Japanese pronunciation, "warui" and "Luigi" combine really well.
Easy, from my recent chat with o1:
(Asked about left null space)
> these are the vectors that, when viewed as linear functionals, annihilate every column of A. <…> Another way to view it: these are the vectors orthogonal to the row space.
It’s quite obvious that vectors that “annihilate the columns” would be orthogonal to the column space, not the row space.
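For anyone who wants the one-line check (standard linear algebra, not from the transcript): a left-null vector kills every column of A, so it lives in the orthogonal complement of the column space.

```latex
% y in the left null space of A:
y^{\top} A = 0
\;\Longleftrightarrow\;
y^{\top} a_j = 0 \ \text{for every column } a_j \ \text{of } A
\;\Longleftrightarrow\;
y \in C(A)^{\perp}.
% It is the ordinary null space N(A) that is orthogonal to the row
% space: N(A) = C(A^{\top})^{\perp}.
```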
I don’t know if you think o1 is magic. It still hallucinates, just less often and less obviously.
Sure, average humans don’t do that, but this is Hacker News, where it’s completely normal for commenters to confidently answer questions and opine on topics they know absolutely nothing about.
"On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."
So the result might not necessarily be bad; it's just that the machine _can_ detect that you entered the wrong figures! By the way, the answer is 7.
> Can you please give an example of a “completely illogical statement” produced by o1 model? I suspect it would be easier to get an average human to produce an illogical statement.
Following the trail as you originally did: you do not hire "ordinary humans", you hire "good ones for the job"; going for a "cost competitive" bargain can be suicidal in private enterprises and criminal in public ones.
Sticking instead to the core matter: the architecture is faulty, unsatisfactory by design, and must be fixed. We are playing with partial research results and getting somewhere, even producing some useful tools, but it must be clear that this is not the real thing - especially since this two-plus-year-old boom has brought another horribly ugly cultural degradation ("spitting out prejudice as normal").
> For simple tasks where we would alternatively hire only ordinary humans AIs have similar error rates.
Yes, if a task requires deep expertise or great care, the AI is a bad choice. But lots of tasks don't. And for those kinds of tasks, even ordinary humans are already too expensive to be economically viable.
Do you have good examples of tasks in which dubious verbal output could be an acceptable outcome?
By the way, I noticed:
> AI
Do not confuse LLMs with general AI. Notably, general AI has also been implemented in systems where critical failures would be intolerable - i.e., made to be reliable, or made part of an ultimately reliable process.
Wodehouse: Titanic forces beyond your control such as scheming aunts, accidental engagements, and inability to express your feelings threaten to irrevocably ruin your life forever. It’ll take a Machiavellian mastermind and a series of unlikely coincidences to extricate you from this predicament but you’ll have to pay a price.
Absolute banger.
But the auto-aim on the vertical axis is missing. You should be able to have the crosshair under an enemy and still hit them.
But in any case, nicely done!
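For what it's worth, the classic DOOM behavior is roughly: when the shot lines up horizontally, scan for a target along that line and bend the shot's vertical angle toward it; with no target, fire level. A rough sketch of that idea in Python (all names hypothetical, not taken from this project):

```python
import math
from dataclasses import dataclass

@dataclass
class Enemy:
    x: float
    y: float
    z: float  # height of the enemy's center

def vertical_autoaim(px, py, pz, yaw, enemies, tolerance=math.radians(3)):
    """Return the pitch for a shot: if an enemy sits within a small
    horizontal tolerance of the crosshair, bend the shot up or down
    toward it; otherwise fire level."""
    best, best_dist = None, float("inf")
    for e in enemies:
        dx, dy = e.x - px, e.y - py
        dist = math.hypot(dx, dy)
        # Horizontal check only: is the enemy in front of the crosshair?
        off = abs((math.atan2(dy, dx) - yaw + math.pi) % (2 * math.pi) - math.pi)
        if off <= tolerance and dist < best_dist:
            best, best_dist = e, dist
    if best is None:
        return 0.0  # no target found: shoot straight ahead
    return math.atan2(best.z - pz, best_dist)
```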
Funny enough, when I've tried to introduce (indoctrinate) friends to DOOM, "how do I aim up" has consistently been the biggest hangup.
This makes sense when I try to indoctrinate my teenager who grew up on Halo and Call of Duty. But I began noticing this hangup in the late 90s with friends my own age.