+1 from another happy Whispr Flow power user. I tried 4-5 similar apps and even built one with Assembly AI, but Whispr is a significant upgrade over the rest at correctly recognizing my accent and jargon. Having the custom vocabulary helps.
Do you happen to have a link with a more nuanced technical analysis of that (emergent) behavior? I’ve read only the pop-news version of that “escaping” story.
There is none. We don't understand LLMs well enough to be able to conduct a full fault analysis like this.
We can't trace the thoughts of an LLM the way we can trace code execution - the best mechanistic interpretability has to offer is being able to get glimpses occasionally. The reasoning traces help, but they're still incomplete.
Is it pattern-matching? Is it acting on its own internal goals? Is it acting out fictional tropes? Were the circumstances of the test scenarios intentionally designed to be extreme? Would this behavior have happened in a real world deployment, under the right circumstances?
The answer is "yes", to all of the above. LLMs are like that.
I think the issue is more that a lot of people weren't cynical enough. I knew Bitcoin was a shit currency when I first heard about it, and thought that was all there was to it. I didn't understand that while it was a shit currency, it was a great speculative asset. I thought people would look at it, go "that's dumb", and move on. Apparently I hadn't heard of, or understood, the Dutch tulip mania and similar historical events. I presumed people would be better than they turned out to be, and that cost me a lot of potential capital gains.
It takes no imagination or insight to see reasons why something wouldn’t work. It’s the default mental pathway for every risk-averse beast. Skepticism is not born out of contentment and abundance but out of self-preservation. It’s not correlated with feeling one has enough, but with bitterness and envy of those who took risks and gained an advantage instead of suffering consequences.
People who are content feel less need to take risks by accepting dubious statements without proof. They have what they need so why risk it for more?
Sceptical people will be grounded by what we know to be true. They will explore new ideas but will not be swept up by them. We need people like that or we'll waste our time on flights of fancy. But we need the irrational optimists to explore new ideas too. It's a classic exploration vs exploitation trade-off.
Many people who risked their money on Bitcoin likely had enough, and they bet the extra money they had lying around. Why not place bets on something you think is probable? Is there something morally wrong with making an extra buck? Is it morally superior to just keep your money sitting in a bank account, or what?
I'm pretty sure these peeps who hang out at /r/buttcoin are going to work like regular people to get some fiat currency into their beloved government-blessed bank accounts. So I guess they don't feel like they have enough.
To be honest I don't think the skeptical people thought bitcoin's success was probable and that's why they didn't bet on it. It's not really anything to do with them being content with what they have.
But it could be this too in some cases.
Some people do things unless they find a reason not to, whereas a skeptical person will only do things if they find a reason to.
People who really feel they have enough might not see any reason to spend their time or effort placing bets, even on things they think are probable. But I don't think many people think that way.
These spam repositories have been deleted, but I still had lingering notifications stuck on GitHub, and I couldn't see them in the UI to remove them (but the small blue notification dot was constantly on). The API hack resolved this problem.
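The comment doesn't say which API call it used; for reference, a minimal sketch assuming GitHub's documented `PUT /notifications` mark-all-as-read endpoint, which clears notifications even when their source repository is gone (the token string is a placeholder):

```python
import json
import urllib.request

API = "https://api.github.com/notifications"

def build_mark_read_request(token: str) -> urllib.request.Request:
    # Marks every notification as read, including ones whose source
    # repository has since been deleted and so can't be cleared in the UI.
    body = json.dumps({"read": True}).encode()
    req = urllib.request.Request(API, data=body, method="PUT")
    req.add_header("Accept", "application/vnd.github+json")
    req.add_header("Authorization", f"Bearer {token}")
    return req

if __name__ == "__main__":
    # GitHub responds with 205 Reset Content (or 202 Accepted) on success.
    urllib.request.urlopen(build_mark_read_request("YOUR_TOKEN_HERE"))
```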
> clean/erase/undo/soft-delete/hard-delete mistakes[...] make the change tracking capable of time travel itself [...] Transitioning to an EAV
I just finished building out all of that + more (e.g., data lineage, multi-verse, local overrides, etc), also on PG. Reach out if you want to chat and get nerd sniped!
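Not the parent's actual schema, but a toy sketch of the temporal-EAV idea they describe: each (entity, attribute) write closes out the prior version instead of overwriting it, which gives soft-delete and as-of time travel for free. All names here are invented for illustration:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class EavRow:
    entity: str
    attribute: str
    value: Any
    valid_from: int           # version time the value was written
    valid_to: Optional[int]   # None = current; set = superseded (soft-deleted)

def set_value(rows: list, entity: str, attribute: str, value: Any, now: int) -> None:
    # Close out the current version rather than overwrite it -
    # nothing is ever hard-deleted, so history stays queryable.
    for r in rows:
        if r.entity == entity and r.attribute == attribute and r.valid_to is None:
            r.valid_to = now
    rows.append(EavRow(entity, attribute, value, now, None))

def as_of(rows: list, entity: str, attribute: str, t: int) -> Any:
    # Time travel: return the value that was current at time t.
    for r in rows:
        if (r.entity == entity and r.attribute == attribute
                and r.valid_from <= t and (r.valid_to is None or t < r.valid_to)):
            return r.value
    return None
```

In Postgres the same shape is a table with those five columns and an as-of query filtering on the `valid_from`/`valid_to` range.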
For “Organizational Use”: the people who designed and own Myers-Briggs say this is a bad idea -
> The MBTI® assessment is not intended for use in selection of job candidates, nor for making internal decisions regarding job placement, selection for teams or task forces, or other similar activities. […] Given that it is not appropriate for selection, there have been no meaningful studies evaluating the MBTI’s ability to predict job performance. Established researchers in the field of predicting job performance would not use the MBTI assessment for this purpose.
1) MBTI cannot be used adversarially - if people know certain types are preferred, they will tailor their answers to the test.
2) MBTI is strictly worse than the Big 5 for study design since it doesn't have continuous variables, and its dimensions correlate with the Big 5 anyway.
3) There are huge liability issues in the United States if you say your product will help you hire or promote. Like Q-tips saying you can't ever use them in your ears.
Apparently there's 500 hours of video uploaded to YouTube every minute (30 seconds every millisecond). Assuming 4K@60fps, that works out to 14,929,920,000 pixels per millisecond.
If YouTube wanted to give every incoming pixel its own UUIDv7, they'd see a collision rate just under 0.6%.
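The arithmetic checks out; here is a quick sanity check. A UUIDv7 has a 48-bit millisecond timestamp plus 74 random bits, so every pixel minted in the same millisecond shares its timestamp and the birthday bound applies to the random bits alone:

```python
import math

UPLOAD_HOURS_PER_MIN = 500
video_seconds_per_ms = UPLOAD_HOURS_PER_MIN * 3600 / 60_000   # -> 30.0

PIXELS_PER_4K_FRAME = 3840 * 2160                             # 8,294,400
pixels_per_ms = video_seconds_per_ms * PIXELS_PER_4K_FRAME * 60  # 60 fps

# Birthday bound: n draws from 2**74 equally likely random suffixes.
n = pixels_per_ms
p_collision = 1 - math.exp(-n * (n - 1) / (2 * 2**74))        # ~0.0059
```

With n ≈ 1.49e10 and 2^74 ≈ 1.89e22 values, the per-millisecond collision probability comes out at roughly 0.59% - "just under 0.6%" as stated.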
> Assuming 4K@60fps [...] they'd see a collision rate just under 0.6%
This doesn't detract from your point of collisions like that being viable at that scale, but assuming an average of 4K@60fps is assuming a lot. The average video upload there is probably south of 1080p@30fps.
You're glossing over the fact that they assumed youtube would want to assign a UUID to each pixel in a 4k@60fps video as the use case that this would fail for...
I like UUIDv7s as database IDs since they sort chronologically, are unique, and are efficient to generate. My system chooses the UUIDs; I don't allow externally generated IDs in. If I did, then an attacker could easily force a collision. As such, I only care about how fast I create IDs. This is a common pattern.
If your system does need to worry about UUIDv7s generated by the rest of the universe, you likely also need to worry about maliciously crafted IDs, software bugs, clocks that reset to the Unix epoch, etc. I worry about those more than a bona fide collision.
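For illustration, a minimal UUIDv7 generator sketch following the RFC 9562 layout - 48-bit Unix-millisecond timestamp, 4 version bits, 74 random bits - which is what makes these IDs sort chronologically. (The stdlib only gained a built-in `uuid.uuid7()` in Python 3.14, hence the hand-rolled version here.)

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    # Timestamp-major layout: the 48-bit ms timestamp occupies the most
    # significant bits, so lexicographic/integer order is chronological.
    ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")   # 80 random bits; 74 survive
    value = (ms & (2**48 - 1)) << 80 | rand
    value &= ~(0xF << 76); value |= 0x7 << 76      # version = 7
    value &= ~(0x3 << 62); value |= 0x2 << 62      # variant = RFC 4122/9562
    return uuid.UUID(int=value)
```

Two IDs minted in different milliseconds compare in creation order, which is exactly the database-index-friendly property the parent is relying on.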
Your app must be popular to have an entire universe's worth of users lol
Joke aside, all of this is theoretical. In practical application it's literally impossible to hit, so it doesn't matter whether it's possible or not, since you are not Google scale anyway.
Only if the other side uses your key as theirs, and uses it to store data from many sources. Personally, I don't feel it's worth considering. A primary key under your own control doesn't cost much, and is a better choice.
Another strategy is to pass your values as an array param (e.g., text[] or int[] etc) - PG is perfectly happy to handle those. Using ANY() is marginally slower than IN(), but you have a single param with many IDs inside it. Maybe their ORM didn’t support that.
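A sketch of the difference, using psycopg-style `%s` placeholders and an invented `users` table (psycopg2 adapts a Python list to a PG array, so the `ANY` form really is a single parameter):

```python
def in_query(ids):
    # Classic expansion: one placeholder per id, so the SQL text (and any
    # prepared-statement cache entry) varies with len(ids).
    placeholders = ", ".join(["%s"] * len(ids))
    return f"SELECT * FROM users WHERE id IN ({placeholders})", list(ids)

def any_query(ids):
    # Array form: the SQL is constant and there is exactly one parameter -
    # an int[] - no matter how many ids are passed.
    return "SELECT * FROM users WHERE id = ANY(%s)", [list(ids)]
```

With psycopg2 that would be `cur.execute(*any_query([1, 2, 3]))`; the constant SQL text is also friendlier to statement caching than an `IN` list whose length keeps changing.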