With Bambu Lab printers that use the "A1" series nozzle, replacing one is a matter of 3 seconds, with bare hands. That's 10x faster than in the Prusa world (Nextruder included). Given that this is the single most important thing a printer needs from the POV of servicing, it's a huge difference.
How often are you changing nozzles? I don't think I've ever done it, except for swapping out the manufacturer-supplied brass nozzle for a hardened steel one after the printer arrives.
For tabletop gaming I'm switching between nozzles pretty regularly, so I can have fine detail on small miniatures and faster prints for bigger terrain, bases, scatter and grid pieces.
I'm not changing out nozzles daily, but I do sometimes make small parts that need the .2mm nozzle. Not for gaming or miniatures, but typically mechanical interfaces where the .2mm nozzle gives a better result.
But then for most stuff I use a .6mm high flow nozzle.
It's worth noting though that this part of the game is about to change majorly for both brands.
Prusa is widely expected to launch their next-generation tool changer at Formnext in November: a concept where you have a rack of nozzle+heatpipe+filament-tube tools that the print head grabs and heats up inductively. And Bambu is more or less working on something similar that they will probably launch some time next year.
This is totally changing the "nozzle swap" equation. It means purge-free and much faster multi-material printing without outright duplicating print heads like the XL does, and the ability to park and mix nozzle sizes as well.
It'll be cool to see which company pulls it off better. As someone who was never convinced by either Bambu's shoddy and wasteful AMS or Prusa's ridiculously humongous MMU+Buffer approach, this is the leap I've been waiting for for an upgrade.
Edit: Amazing move to downvote a comment that simply and neutrally adds new information to the thread.
Prusa is going to use Bondtech's upcoming INDX system, which swaps out the entire filament path.
Bambu Vortek seems to just be swapping nozzles so, while that should cut down on waste, it's going to be much slower (XL is already much faster than AMS based printers but comes at a substantial price increase).
INDX tool changes are expected to be around 8-12 seconds. Vortek would probably be around 30+ seconds.
Interesting, yeah. I'm mainly interested in making multi-material faster without upping to the size of an XL, so it looks like the Prusa solution would be a better fit for my needs personally.
If Bambu are only swapping nozzles, it also means they still need something AMS-like to swap the filament path, which somehow feels a bit clunky. I think having separate filament paths is overall a cleaner and simpler design.
Maybe because you're incorrect about the AMS being the problem? The AMS merely feeds the correct filament. It's not the fault of the AMS that the printer doesn't have multiple nozzles.
The problem with the AMS is it feeds all the filaments into one nozzle so it requires a purge every time the color changes. There are other filament spool designs without this problem.
EU countries have lower rates of chronic diseases like diabetes and hypertension than the USA. Those are not diseases that the medical system has answers for, so it wouldn't matter how advanced and available the system is.
For example, 40% of people in the USA are obese vs 12% in Switzerland. 50% of people in the USA have hypertension vs 20% of the Swiss.
So what exactly is a medical system supposed to do if half your population is sickly and obese?
I see this 'medical system' stuff even from very educated people, but I feel like I am missing something. Do people think having access to a doctor is going to prevent one from becoming obese? What's the logic?
The difference doesn’t come down to one single factor.
Comments that try to reduce population-scale differences to a single factor, like access to healthcare, are overly reductive. When it comes to obesity (not merely being overweight, but truly past the obesity threshold), you don't need a doctor to inform you that it's unhealthy.
The reductive claims about access to healthcare are also ignoring the fact that people in the US do actually use a lot of healthcare. The rate of GLP-1 use in America for weight loss is around 1 in 8 people, which is significantly higher than anywhere in Europe last time I checked. Obviously the higher obesity rate contributes to higher usage, but it demonstrates that many obese people in the United States are not lacking access to health care.
> Do ppl think having access to a doctor is going prevent one from being obese ? whats the logic.
Doctors can vary in whether or not (and for how long) they advocate trying a healthy diet and exercise before prescribing drugs. In the UK the system is incentivised to avoid drug prescriptions unless necessary, as it reduces the financial burden on the NHS - both for buying the drugs and for managing complications linked to obesity. In the US, pharma companies can offer money and perks to doctors who promote their products.
Some people have no idea how to diet or exercise, or have no idea that they're overweight, or have specialised conditions that make it hard to follow generic advice. These people might find it really valuable to receive individualised advice and education from their doctor.
Also this would often be in the context of the patient coming to the doctor with a complaint. If the doctor says "trying eating healthily and exercising, then come back in a month for a follow-up", some might just do nothing but many people will actually try it.
To me there is a big difference between a "health care system" and a "medical system".
One is only there to try and fix issues, while the other will invest in prevention campaigns and help direct the overall politics around having a healthy population.
To me the recent EPA decision around PFAS is a signal of a deficient "health care system".
Now the definition is so nebulous that no one knows what the other person is talking about. It can include things like the economic system, and I could say 'for me, the stock market is the health care system'.
I realized that I eat way more chocolate than the average Swiss person (I Googled it and it says around 24 grams per day for the average person in Switzerland). I usually eat about 50 grams daily... and 72% dark.
Having a socialized healthcare system incentivizes the government to ban the worst public heath offenders. High fructose corn syrup would have been long gone from most foods in a sane society, for example. Generally, making the government have a vested interest in its citizenry's good health is a good thing.
1) The Amish do not live an 1800s lifestyle. For example, if someone is sick and needs to go to the hospital, they use a phone to call an ambulance to take them there.
2) There are a lot of things wrong with the American health care system, but a lack of care for white males is not actually one of them.
> The Amish do not live an 1800s lifestyle. For example, if someone is sick and needs to go to the hospital, they use a phone to call an ambulance to take them there.
The Amish are very deliberate about what changes they incorporate into their communities. Each community also sets their own rules, so it's poor practice to generalize.
(For example, their attitudes towards electricity are quite complicated and I don't think I could do it justice in a quick post.)
I almost want to disagree with you here but I’m not fully apprised of the greater situation.
My dad is poor and neglectful of himself. He had a stroke. He got ambulanced to the emergency room and spent a good deal of time there.
The hospital discussed billing which was several hundreds of thousands of dollars. Well he can’t afford that. The hospital had us talk to some advisers and they got him on a state Medicaid (?) plan. The plan retroactively paid for it all.
He then got checked out for a variety of other issues including a severe spinal issue and a hip replacement for 0 out of pocket.
It’s great. He’s a changed man who is active and takes care of himself now.
I also had a major medical event and I have since paid tens of thousands out of pocket after insurance. At one point we were investigating if I could essentially quit work for a bit, go on the Medicaid plan, get better, and then go back to my job. That is madness!
States that opted into the ACA Medicaid Expansion and generally fund hospitals have great emergency care for poor people. There's a kind of missing middle where once you're above the income threshold for Medicaid but aren't working for a job that's willing to fund an extremely good health plan you have to deal with all sorts of deductibles and prior authorizations and stuff. Plus, non-emergency care, especially from specialists, has gotten longer and longer wait times unless you're lucky enough to live in a region with mostly healthy people that also aren't the "worried well."
Tl;dr, it's incredibly patchwork, and everyone's experience is going to vary depending on their state's individual social safety net, the overall health of their local population, the particular insurance network and hospital network they have access to, and their individual income.
Also, the US has a federal law that no hospital that accepts Medicare patients is allowed to deny care in the case of an emergency based on someone's ability to pay. That means that a lot of very poor people will get incredibly expensive emergency care for free, while not being able to afford the basic preventative care that would keep them out of a state of medical emergency. That isn't really the hallmark of a particularly functional system.
You have rights in an emergency room under EMTALA
You have these protections:
1. An appropriate medical screening exam to check for an emergency medical condition, and if you have one,
2. Treatment until your emergency medical condition is stabilized, or
3. An appropriate transfer to another hospital if you need it
The law that gives everyone in the U.S. these protections is the Emergency Medical Treatment and Labor Act, also known as "EMTALA." This law helps prevent any hospital emergency department that receives Medicare funds (which includes most U.S. hospitals) from refusing to treat patients.
In the last ten years, Poland's life expectancy has risen by 5 years, while in the same period the US almost stalled. Now (as of the 2025 projection), Poland has slightly better LE overall.
However, approx. 9% of the US population is uninsured, while in Europe (including Eastern Europe) health care systems are universal.
Edit: Even for those insured, the insurance claim rejection rate is 19%.
Yeah, agree. Medicine has caught up in Poland for sure, and it's catching up in Romania and Hungary too. Eastern Europe will have road safety issues for a long time though, so I don't see them reaching Western European LE at birth until like 2050 (as long as WE doesn't crash its LE by Americanizing, which it might do). LE at 60, though, should be around the same level soon enough (road safety does impact older people, but not as much as kids and young men).
Road fatalities in Poland and Czechia were 50 per million inhabitants in 2022 according to this report.
https://road-safety.transport.ec.europa.eu/document/download...
This is already less than Italy and only slightly above the EU average. I don't think that small number affects LE much, though.
No screens is a good assumption for everyone at the time the study covered - TVs were just coming out towards the end, and were expensive enough that not everyone owned one yet.
When I was very young, my stepfather started a trucking company. We didn’t get along terribly well so my mom thought that driving together would solve our problems. We would hotshot recreational vehicles two to a flatbed and haul them from an Amish community east of Chicago to their dealer destination.
So, we got to know some people in the community and learned some things that would be relevant to this. One big one is the Amish view on technology. With 1965 data, especially looking at farmers, you’ll see variations in pest control tech. Amish people are not against all technology but they evaluate it differently.
For the Amish, they look at a technology and ask whether it will pull them together or push them apart. Farm chemicals would increase yields, but dramatically reduce the number of people they could have working on fields. So many colonies avoided highly toxic chemicals like DDT that were released during or after WW2. And because there was some resistance to Amish people, they tend to congregate together and so you’ll have colonies bunched up in areas - some colonies avoided water table contamination through a freak of geology and cousins who shared a belief on technology.
So nutrition does play a role - food in Amish communities is very whole and very close to natural. As an example, my stepfather was quite affable and so we’d take doughnuts to the factory where we picked up RVs. Certain companies have so much sugar in their doughnuts that it felt like giving people drugs. Physical activity is a constant. And their community plays a massive role in life and life expectancy but this data is from 1965 and looks at farmers so chemical use is definitely part of these findings as well.
Great article that fails to mention that local backups were already available, or what this is even about: the state of affairs in iOS vs Android, for both the past feature and the next one. Details of every kind are missing. WTF.
The problem with all that is that it remains possible to create a protocol where N big institutions (governments, large tech companies, big non-profit organizations, and so forth) sign every block, creating a collaborative system perfectly suited for the same task. The system can make progress as long as a given fraction of the participants is available, and there are a number of well-known protocols to do this. This retains many benefits of the blockchain while lacking many of its issues (fast, simple, near zero cost, controllable to a given extent -- no takeover possible, ...).
> The problem with all that, is the fact it remains possible to create a protocol with N big institutions [...] This maintains many benefits of the blockchain and lacks many issues (fast, simple, near zero cost)
That's more or less exactly what this is. Stripe is launching an EVM L1.
The Ethereum Virtual Machine part gives it a mature tech stack with experienced developers and auditors. Plus, well-tested smart contracts that have already processed billions of dollars on other chains can be deployed on Tempo.
The "Stripe L1" part will ensure that it's fast, simple, near zero cost.
If we skipped the whole blockchain part, wouldn’t it be faster, simpler, cheaper? What value does the whole blockchain, EVM, L1 offer? Don’t they fully control the network? Don’t they decide “everything” anyway?
I'd love to understand it. I'm not a hater, just a developer who doesn't quite get this announcement.
Good questions, and they are, or could be, actually rhetorical. Yes, they are the validator and thus they control the transactions. It could be as simple as having a database at the end... Well, I can think of two things:
1- They start by owning all validators, but maybe they expect to open validators to other entities at some point in the future. If those entities don't collude, we could expect some sort of neutrality.
2- Marketing: crypto is at an ATH, so why not get some good marketing for free (or almost).
As for people mentioning costs, that's not particularly relevant. L2s are extremely cheap by most standards, let alone by Stripe's standards, given the horrendous fees they charge.
My questions were not rhetorical. I’m actually interested in the space (fintech, web3, blockchain, etc), but in this space particularly, it’s hard to discern marketing gimmick from use cases where these technologies actually provide real value, so I’m being critical of these announcements while at the same time keeping an open mind.
> signing every block, to create a collaborative system that is perfectly suited for the same task.
Indeed you can! We even have a name for that! It's called a blockchain.
> This maintains many benefits of the blockchain and lacks many issues (fast, simple, near zero cost, controllable to a given extent -- no takeover possible, ...).
Blockchains can do all of these things.
Perhaps you are thinking of "bitcoin", instead of "blockchains"? Bitcoin, something that was created a whole 17 years ago, indeed has many drawbacks compared to modern blockchains.
There is no bright line difference between proof of stake and any other type of consensus, committee or voting body. Proof of work of course is very different.
The difference is enormous: in one case, you don't need any centralized N entities, just a big percentage of the network; whoever wants to participate runs the protocol, and there are no 50 institutions/companies that can block it without reaching the majority of work/stake. In the other case, you are delegating consensus to a fixed set of parties. Now, those of us against the crypto/blockchain shitstorm advertised the alternative of old-style federated consensus with N trusted organizations for years and years. And now, no: you can't say this is a form of blockchain. You are admitting failure and acknowledging that classical consensus was good enough, and even better in most cases.
There's always a comment in any HN blockchain thread where the commenter disproves the need for a blockchain by proposing just to use a blockchain instead.
Your protocol has to use a consensus mechanism if you want to reliably make progress and be able to recover from mistakes. That is exactly what a blockchain solves.
That you _can_ solve it with a blockchain doesn't mean that you can _only_ solve it with a blockchain.
M valid signatures of N authorities is a consensus mechanism that just needs public keys. You don't need a blockchain if you're prepared to trust a set of authorities like stripe and their trusted partners.
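To make the M-of-N idea concrete, here is a toy Python sketch. It uses HMAC tags as a stand-in for real public-key signatures (a real deployment would use something like Ed25519 with published public keys), and all names, keys, and the quorum size are made up for illustration:

```python
import hmac
import hashlib

# Hypothetical authorities and quorum size, purely illustrative.
AUTHORITY_KEYS = {f"authority-{i}": f"secret-{i}".encode() for i in range(5)}
M = 3  # at least 3 of the 5 authorities must sign a block

def sign(name: str, block: bytes) -> bytes:
    """HMAC tag standing in for a real signature from this authority."""
    return hmac.new(AUTHORITY_KEYS[name], block, hashlib.sha256).digest()

def block_accepted(block: bytes, sigs: dict) -> bool:
    """A block makes progress if at least M known authorities signed it."""
    valid = sum(
        1 for name, tag in sigs.items()
        if name in AUTHORITY_KEYS
        and hmac.compare_digest(tag, sign(name, block))
    )
    return valid >= M

block = b"block #42: payments batch"
sigs = {n: sign(n, block) for n in ["authority-0", "authority-1", "authority-3"]}
print(block_accepted(block, sigs))   # True: 3-of-5 quorum reached

sigs["authority-1"] = b"x" * 32      # tamper with one signature
print(block_accepted(block, sigs))   # False: only 2 valid signatures remain
```

The system tolerates up to N - M unavailable or misbehaving authorities, which is the "progress as long as a fraction is available" property the grandparent described.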
This is true for the pre-training step. But what if advancements in the reinforcement learning steps performed later benefit from more compute and more model parameters? If right now the RL steps only help with sampling -- that is, they only make the model more likely to emit one possible reply over another (there are papers pointing at this: if you generate many replies with just the common sampling methods, and you can verify the correctness of a reply, you discover that RL mostly selects what was already potentially within the model's output) -- this would be futile. But maybe advancements in RL will do to LLMs what AlphaZero-like models did for Chess/Go.
It's possible. We're talking about pretraining meaningfully larger models past the point at which they plateau, only to see if they can improve beyond that plateau with RL. Call it option (3). No one knows if it would work, and it would be very expensive, so only the largest players can try it, but why the heck not?
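The papers alluded to above usually quantify "what was already within the model output" with the standard pass@k estimator: given n sampled replies of which c pass the verifier, the unbiased estimate is pass@k = 1 - C(n-c, k) / C(n, k). A quick sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples (drawn from n total, of which c are verified correct) passes.
    This is the metric used when comparing RL-tuned models against
    wide sampling from the base model."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k: a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# If a base model solves a task in 5 of 100 samples, single-shot accuracy
# (pass@1) is low, but sampling 20 replies usually surfaces a correct one:
print(round(pass_at_k(100, 5, 1), 3))    # 0.05
print(round(pass_at_k(100, 5, 20), 3))   # 0.681
```

The RL-as-sampling claim is then: RL mostly moves pass@1 toward the base model's pass@k, rather than raising pass@k itself.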
Indeed, but in some ways the Redis version is a bit too Redis-ish, that is, memory-saving concerns are taken to the extreme instead of having a more balanced approach favoring simplicity. In my YouTube channel's C course, I'm showing something similar to SDS in the latest lessons, and I may use SDS again in a later course in order to show how to integrate back the useful features that diverged. Maybe an SDS3 could be a middle ground between the Redis version, some API errors that should be corrected (but not in Redis: not worth it), and other improvements.
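For readers who haven't seen SDS (Redis's Simple Dynamic String library), the core idea is to keep the used length and allocated capacity alongside the character buffer, so length queries are O(1) and appends can reuse preallocated spare space. A toy Python sketch of the idea (illustrative only, not the real C API; the real SDS packs this header into prefix bytes before the buffer):

```python
class SDS:
    """Toy Simple Dynamic String: tracks used length and allocated
    capacity so appends can often reuse preallocated space."""

    def __init__(self, init: bytes = b""):
        self.len = len(init)
        self.alloc = max(1, 2 * self.len)   # preallocate spare room
        self.buf = bytearray(init) + bytearray(self.alloc - self.len)

    def append(self, data: bytes) -> None:
        needed = self.len + len(data)
        if needed > self.alloc:             # grow, doubling like sdscatlen
            self.alloc = 2 * needed
            new = bytearray(self.alloc)
            new[:self.len] = self.buf[:self.len]
            self.buf = new
        self.buf[self.len:needed] = data
        self.len = needed

    def value(self) -> bytes:
        return bytes(self.buf[:self.len])   # O(1) length, no strlen scan

s = SDS(b"Hello")
s.append(b", world")
print(s.value())   # b'Hello, world'
```

The memory-saving extremes mentioned above come from the real library using multiple header sizes (8/16/32/64-bit length fields) to shave bytes off small strings, which is exactly the kind of complexity a teaching version can drop.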
I always thought that the POSIX threads semantics forcing the thread that acquires the lock to be the same thread that releases it are too strict and not needed. In certain use cases they force you to redesign the code in more complicated ways.
It isn't too strict. Releasing a pthread_mutex has the semantics of a release memory barrier, which means that any writes on that thread will be visible to other threads that issue a subsequent acquire memory barrier (e.g. by acquiring the mutex).
If you want this behavior, it's relatively simple to implement your own mutex on top of futex, but no one is going to expect the behavior it provides.
I used to think that way, but I've learned that there's one good reason why the API is designed that way: priority inheritance. Priorities are bound to threads, and when a high priority thread wants to lock a currently occupied mutex, we want to bump the priority of the current holder. The POSIX requirement makes that easy --- the thread that locked the mutex must be the one holding it.
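Python's threading module happens to offer both semantics, which makes the trade-off easy to demonstrate: a plain `threading.Lock` may be released by a thread other than the acquirer (the looser behavior the parent comment wants), while `RLock` enforces POSIX-style ownership. A minimal demo:

```python
import threading

# A plain Lock permits release from a different thread.
lock = threading.Lock()
lock.acquire()                              # main thread takes the lock

t = threading.Thread(target=lock.release)   # another thread releases it
t.start()
t.join()
print(lock.locked())                        # False: cross-thread release worked

# RLock enforces pthread_mutex-style ownership.
rlock = threading.RLock()
rlock.acquire()
refused = []

def bad_unlocker():
    try:
        rlock.release()                     # wrong thread: raises RuntimeError
    except RuntimeError:
        refused.append(True)

t2 = threading.Thread(target=bad_unlocker)
t2.start()
t2.join()
print(refused)                              # [True]: ownership enforced
rlock.release()                             # the owner can release normally
```

Note the cost of the looser Lock semantics: since no thread "owns" the lock, the runtime cannot know whose priority to boost, which is exactly the priority-inheritance argument above.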
I'll cover on my YouTube channel why this is wrong, but TL;DR: you need to evaluate quality, not process. AI can be used in diametrically different ways, and the only reason this policy can be enforced is that it's obvious when code is produced via a solo flight of some AI agent. For the same reason, it's not a policy that will improve anything.
If you look at how Quality Assurance works everywhere outside of software it is 99.9999% about having a process which produces quality by construction.
Respectfully, Mr. Redis, sir, that's what's going on. I don't see any reason to make a video about it. From the PR that's TFA:
"In a perfect world, AI assistance would produce equal or higher quality work than any human. That isn't the world we live in today, and in many cases it's generating slop. I say this despite being a fan of and using them successfully myself (with heavy supervision)! I think the major issue is inexperienced human drivers of AI that aren't able to adequately review their generated code. As a result, they're pull requesting code that I'm sure they would be ashamed of if they knew how bad it was.
The disclosure is to help maintainers assess how much attention to give a PR. While we aren't obligated to in any way, I try to assist inexperienced contributors and coach them to the finish line, because getting a PR accepted is an achievement to be proud of. But if it's just an AI on the other side, I don't need to put in this effort, and it's rude to trick me into doing so.
I'm a fan of AI assistance and use AI tooling myself. But, we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code."
"I don't see any reason to make a video about it". This sentence is so deeply wrong that it's difficult to know where to start arguing against it.
"The disclosure is to help maintainers assess how much attention to give a PR."
For the same reason, we should ask contributors how many years they have been writing software, and in the specific language, as that is also correlated with the quality of the produced code.
"we need to be responsible about what we're using it for and respectful to the humans on the other side that may have to review or maintain this code"
Yes, by producing great code and documentation, regardless of the process.
There's a world of difference between giving feedback and coaching to a human that might be able to learn from that feedback and use it to do better, and giving feedback and coaching to a LLM that has a human acting as its go-between.
If research continues over the next few decades, these LLMs (and other code-generation robots) may well be able to retrain themselves in real time. However, right now, retraining is expensive (in many ways) and slow. For the foreseeable future, investing your time in feedback and coaching meant to develop a human programmer into a better one, but directing it at an LLM, is a colossal waste of your time.
To understand why this is too optimistic, you have to look at areas where AI is already almost human-level. Translations are more and more done exclusively by AI, or with massive AI help (with the effect of destroying many jobs anyway). Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if most of the time this is NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively by AI, and even better 90% of the time, since most humans doing a given job are not excellent at what they do. This will be fine if governments react immediately and the system changes. Otherwise there will be a lot of people to feed without a job.
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can't be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it's certainly nowhere near developed enough to create anything more sophisticated: sophisticated enough both to read as human-made and to carry the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
It's kinda hilarious to see "simple task ... like translation". If you are familiar with the history of the field, or if you remember what automated translation looked like even just 15 years ago, it should be obvious that it's not simple at all.
If it were simple, we wouldn't need neural nets for it; we'd just code the algorithm directly. Or, at least, we'd be able to explain exactly how they work by looking at the weights. But now that we have our Babelfish, we still don't know how it really works in detail. This is ipso facto evidence that the task is very much not simple.
I use AI as a tool to make digital art but I don't make "AI Art".
Imperfection is not the problem with "AI Art". The problem is that it is really hard to get the models not to produce the same visual motifs and cliches. People can spot AI art so easily because of the motifs.
I think Midjourney took this to another level with their human feedback. It became harder and harder not to produce the same visual motifs in the images, to the point that it is basically useless for me now.
> But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
Isn't that only true if the false positive rate of the verifier is not much higher than the failure rate of the AI? At some point it's like asking a human to double-check a calculator.
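The comparison can be made concrete with some purely made-up rates: if the AI mistranslates a clause with probability p and the human reviewer misses such a mistake with probability m, the residual error of "AI draft + human check" is roughly p*m, to be weighed against a human translator's own per-clause error rate h. All numbers below are hypothetical, just to show where the crossover sits:

```python
# Hypothetical per-clause error rates, purely for illustration:
p = 0.02   # chance the AI mistranslates a given clause
m = 0.30   # chance the human reviewer misses that mistranslation
h = 0.01   # chance an unaided human translator gets the clause wrong

residual = p * m          # AI draft + human verification
print(round(residual, 4))  # 0.006: better than the human-only rate of 0.01

# But if the reviewer rubber-stamps (miss rate 0.9), checking adds little:
print(round(p * 0.9, 4))   # 0.018: now worse than human-only
```

So "a human in the loop" only helps while the reviewer stays genuinely engaged, which is the calculator-double-checking worry in the parent comment.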
> any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct
I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.
> But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct.
The usual solution is to specify one language as binding, with that language taking priority if there turn out to be discrepancies between the multiple versions.
You might still need humans in the loop for many things, but it can still have a profound effect if the work that used to be done by ten people can now be done by two or three. In the sectors that you mention, legal, graphic design, translation, that might be a conservative estimate.
There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?
IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.
> Book and music album covers are often done with AI (even if this is most of the times NOT advertised)
This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".
It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.
Look at your examples. Translation is a closed domain; the LLM is loaded with all the data and can traverse it. Book and music album covers _don't matter_ and have always been arbitrary reworkings of previous ideas. (Not sure what “ebook reading” means in this context.) Math, where LLMs also excel, is a domain full of internal mappings.
I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.
This is not inherent to AI, but to how the models were recently trained (by preference agreement of many random users). Look for the latest Krea / Black Forest Labs paper on AI style. The "AI look" can be removed.
Songs right now are terrible. For videos, things are going to be very different once people can create full movies on their computers. Many will have access to the ability to create movies, a few will be very good at it, and this will likely change many things. Btw, this stupid "AI look" is only transient and in no way necessary. It will be fixed, and AI image/video generation will be impossible to stop.
The trouble is, I'm perfectly well aware I can go to an AI tool, ask it to do something, and it'll do it. So there's no point me wasting time e.g. reading AI blog posts, as they'll probably just tell me what I've just read. The same goes for any media.
It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.
And unless it's immediately obvious to consumers a high level of work has gone into it, it'll all be tarred by the same brush.
Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.
Either people will spaff out easy to generate media (which will therefore have no value due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but again unless it's immediately obvious this has been done, it will again suffer the tarring through association.
The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song produced by a megastar is a hit vs when it's performed by an unsigned artist. Or, as in the famous experiment, the same world class violinist gets paid about $22 for a recital while busking vs selling out a concert hall for $100 per seat that same week.
This is the issue AI, no matter how good, will have to overcome.
I mean, test after test has shown that the vast, vast majority of humans are woefully unable to distinguish good AI art made by SOTA models from human art, and in many/most cases actively prefer it.
Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.
Reminds me of the issue with bad CGI in movies. The only CGI you notice is the bad CGI, the good stuff just works. Same for AI generated art, you see the bad stuff but do not realize when you see a good one.
Care to give me some examples from YouTube? I am talking about videos that people on YouTube connected with for the content in the video (not AI demo videos).
> Translations are more and more done exclusively with AI or with a massive AI help
As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.
Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.
An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.
As someone whose native language isn't English, I disagree. SOTA models are scary good at translations, at least for some languages. They do make mistakes, but at this point it's the kind of mistake that someone who is non-native but still highly proficient in the language might make - very subtle word order issues or word choices that betray that the translator is still thinking in another language (which for LLMs almost always tends to be English because of its dominance in the training set).
I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators do.
The point is that you don't have to "trust" me; you need to argue with me, and we need to discuss the future. This way, we can form ideas we can use to judge whether one politician or another is right when we are called to vote. We can also form stronger ideas to try to influence other people who right now have only a vague understanding of what AI is and what it could be. We will be the ones who vote and choose our future.
Life is too short to have philosophical debates with every self promoting dev. I'd rather chat about C style but that would hurt your feelings. Man I miss the days of why the lucky stiff, he was actually cool.
Sorry boss, I'm just tired of the debate itself. It assumes a certain level of optimism, while I'm skeptical that meaningfully productive applications of LLMs etc. will be found once hype settles, let alone ones that will reshape society like agriculture or the steam engine did.
Whether it is a taxi driver or a developer, when someone starts from flawed premises, I can either engage and debate or tune out and politely humor them. When the flawed premises are deeply ingrained political beliefs it is often better to simply say, "Okay buddy. If you say so..."
We've been over the topic of AI employment doom several times on this site. At this point it isn't a debate. It is simply the restating of these first principles.