The weird thing about AI is that it doesn't learn over time, only in context. It doesn't get better the way a 12-year-old learning to play the saxophone gets better.
But using it heavily has a corollary effect: engineers learn less as a result of their dependence on it.
Less learning all around equals enshittification. Really not looking forward to this.
This type of thing really incentivizes founding a startup. If you are a very senior developer, who needs the corporate stupid factory? You can do a lot of work with half the people and work for yourself.
A new study urges a move away from metaphysical debates over consciousness and instead toward a systems-theoretical perspective that redefines AI as a structurally unique communicative system.
I am going through Math Academy and I like it very much. I have done advanced technical work in my field but my math background had weaknesses from my public schooling in a large urban area and some experimental math instruction in high school. The ability to do it over is oddly exhilarating.
It's not just professionalism. It's the challenge of removing irritation from one's communications, because irritation generally doesn't get the best cooperation.
If I may ask, were you directly involved in the process? I'm writing a book based on my experiences and would love to hear more about how FANG diligence differs. I can be reached at iain c t duncan @ email provider who is no longer not evil in case you are able and interested in chatting.
My current solution is the Freedom app. I have all social media blocked during work hours and after 10:30 at night. I am mostly susceptible to Reddit, Twitter, and Instagram Reels. I track some issues on Reddit & Twitter that I am genuinely interested in and impacted by. Freedom blocks on both the phone and the laptop.
Last time this didn't work because I kept turning off the Freedom app. (Sigh.) This time I seem to be holding the line, though. I'm getting more done and feel better.
I think this pardon just reflects Trump's transactional politics. Ulbricht has sympathizers in high places now because crypto is all over this administration.
In the long run, letting political influence trump (no pun intended) the criminal justice system is a very bad thing.
By world standards our criminal justice system is a strength of the country. A pity if we lose that.
There are strong signals that continuing to scale up training data is not yielding the same returns (Moore's Law, anyone?), and it's getting harder to find quality data to train on anyway.
Business Insider had a good article recently on the customer reception to Copilot (underwhelming: https://archive.fo/wzuA9), for all the reasons we are familiar with.
My view: LLMs are not getting us to AGI. Their fundamental issues (black box + hallucinations) won't be fixed until there are advances in technology, probably taking us in a different direction.
I think it's a good tool for stuff like generating calls into an unfamiliar API - a few lines of code that can be rigorously checked - and that is a real productivity enhancement. But more than that is thin ice indeed. It will be absolutely treacherous if used extensively for big projects.
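To make that concrete, here is a rough sketch of the kind of LLM-drafted snippet I mean; the endpoint, field names, and fetch_user helper are all hypothetical, just an illustration of a call small enough to verify line by line against the API docs.

    # Hypothetical illustration, not from any real project: an LLM-drafted call
    # into an unfamiliar HTTP API, small enough to check line by line.
    import json
    import urllib.request

    def fetch_user(user_id: int) -> dict:
        # Placeholder endpoint; swap in the real API's base URL and auth.
        url = f"https://api.example.com/v1/users/{user_id}"
        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        # urlopen raises on 4xx/5xx, so reaching json.load means the call succeeded.
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        user = fetch_user(42)
        # The "rigorous check": validate the response shape before trusting it.
        assert isinstance(user, dict) and "id" in user, "response did not match the docs"
        print(user["id"])

The point is that each line maps to something you can verify against the documentation; anything much bigger than this and the cost of verification starts to outgrow the benefit.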
Oddly, I think it will be a more useful tool for free-flow brainstorming and loose associations than for the tasks we are accustomed to using computers for, which require extreme precision and accuracy.
I was an engineer in an AI startup, later acquired.
> Their fundamental issues (black box + hallucinations)
Aren’t humans also black boxes that suffer from hallucinations?
E.g., for hallucinations: engineers make dumb mistakes in their code all the time, and normal people make false assertions about geopolitical, scientific, and other facts all the time; cf. the Dunning-Kruger effect.
And they are black boxes because you can only interrogate the system at its interface (usually voice, or written words and pictures).