Hacker News | xhevahir's comments

I can't tell whether you're serious or not. Your criterion for an "impressive" AI tool is that it be able to write and scan poetry in ancient Greek?

AI looks like it understands things because it generates text that sounds plausible. Poetry requires the application of certain rules to that text, and the rules for Latin and Greek poetry are very simple and well understood. Scansion is especially easy once you understand the concept, and you actually can, as someone else suggested, train a child to scan poetry by applying these rules.

An LLM will spit out what looks like poetry, but will violate certain rules. It will generate some hexameters but fail harder on trimeter, presumably because it is trained on more hexametric data (epic poetry: think Homer) than trimetric (iambic and tragedy, where it’s mixed with other meters). It is trained on text containing the rules for poetry too, so it can regurgitate rules like defining a penthemimeral cæsura. But, LLMs do not understand those rules and thus cannot apply them as a child could. That makes ancient poetry a great way to show how far LLMs are from actually performing simple, rules-based analysis and how badly they hide that lack of understanding by BS-ing.
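To make the "simple, rules-based analysis" point concrete: a dactylic hexameter is six feet, the first five each a dactyl (long-short-short) or spondee (long-long), the last a long syllable plus a final syllable of either quantity. A toy checker fits in a few lines (the syllable quantities are marked by hand here, just as a human scanner has to determine them):

```python
# Toy dactylic-hexameter checker: 'L' marks a long syllable, 'S' a short one.
# Feet 1-5 are each a dactyl (L S S) or a spondee (L L); the sixth foot is a
# long syllable plus one final syllable of either quantity (syllaba anceps).
def is_hexameter(quantities):
    i = 0
    for _ in range(5):
        if quantities[i:i + 3] == ['L', 'S', 'S']:
            i += 3
        elif quantities[i:i + 2] == ['L', 'L']:
            i += 2
        else:
            return False
    # exactly two syllables must remain, the first of them long
    return len(quantities) - i == 2 and quantities[i] == 'L'

# Aeneid 1.1 ("Arma virumque cano...") scans dactyl dactyl spondee
# spondee dactyl spondee:
print(is_hexameter(list("LSSLSSLLLLLSSLL")))  # True
print(is_hexameter(list("LSSLS")))            # False
```

A child can apply these rules once taught; an LLM that has memorized the rules' wording still produces lines that fail the check.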


This is not a useful debate; it's like arguing over whether a submarine can swim.

LLMs are simple: it doesn't take much more than high-school math to explain their building blocks.
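For instance, the core building block, scaled dot-product attention, is just dot products, a rescale, exponentials normalized to sum to 1, and a weighted average. A toy sketch with hand-made vectors (nothing here is learned; the numbers are fabricated for illustration):

```python
from math import exp, sqrt

# Scaled dot-product attention: each query is dotted with every key,
# the scores are turned into weights that sum to 1 (softmax), and the
# values are averaged with those weights.
def attention(Q, K, V):
    d = len(Q[0])
    outputs, all_weights = [], []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in K]
        exps = [exp(s) for s in scores]
        weights = [e / sum(exps) for e in exps]      # softmax: rows sum to 1
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])  # weighted average of values
        all_weights.append(weights)
    return outputs, all_weights

Q = [[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 1.0, 0.0]]
K = [[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
V = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], [9.0, 10.0, 11.0, 12.0]]
out, w = attention(Q, K, V)
print([round(sum(row), 6) for row in w])  # each weight row sums to 1.0
```

Stack that with some matrix multiplications and a normalization step and you have the transformer block; the mystery is in the scale, not the math.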

What's interesting is that they can remix the tasks they've been trained on very flexibly, creating new combinations they weren't directly trained on: compare this to earlier, smaller models like T5 that had a few set prefixes per task.

They have underlying flaws. Your example is more about the limitations of tokens than "understanding", for example. But those don't keep them from being useful.


> those don't keep them from being useful.

They do stop it from being intelligent though. Being able to spit out cool and useful stuff is a great achievement. Actual understanding is required for AGI and this demonstrably isn't that, right?


I don't care if people want to debate over the semantics of intelligence to be honest.

Similarly, most AGI discussions are just people talking past each other and taking pot shots at predicting the future.

I've come to accept some topics in this space just don't invite useful or meaningful discussion.


The fact that he's a very eclectic thinker and not very systematic, although that's one of the things that a lot of people admire about him. His religious commitments, as well, I would guess. But also he had some very odd ideas--like refusing to get a tumor removed from his face. He also was not the best at communicating his ideas.


Right, politicians and officials working on behalf of the tax-filing lobby could introduce lots of changes to the tax code with a view to making this software useless.


The point of open-sourcing from a dying ship is that the groups that can modify this software and resell it all start from it as a baseline. Is TurboTax all lean, mean code that can be offered at a low enough price, while still meeting profit expectations, if it needs drastic changes?


What is this repo's marketing budget by any chance?

Intuit's was big enough to pervert American tax policy for decades.


Intuit can spend all the money they can convince investors to lose relative to last year and expectations, but they'll be facing a yggdrasil of companies to buy out, each starting from a turn-key solution, and all their costs fighting the OMB will amount to nothing if they screw one buyout up and a new round of $5 filers gets an updated software drop.

The companies Intuit will have to buy out don't have to make any profit per filer, they just have to take filers away from Intuit.


I mean… in some sense, it might be nice if the company doing your tax preparation is not too lean and mean; their whole point is to eat the hit if they screw it up, right? The math is not actually hard.

But, realistically, I guess if a self-service tax prep company messed up your taxes, they’d make sure you end up in arbitration.


The form you sign to authorize efiling says "I declare that I have examined a copy of the income tax return ... and to the best of my knowledge and belief, it is true, correct, and complete." If you think Intuit is going to cover you, you haven't really seen the things they do.


This attitude is depressingly common in lots of professional, white-collar industries I'm afraid. I just came from the /r/law subreddit and was amazed at the kneejerk dismissal there of Dario Amodei's recent comments about legal work, and of those commenters who took them seriously. It's probably as much a coping mechanism as it is complacency, but, either way, it bodes very poorly for our future efforts at mitigating whatever economic and social upheaval is coming.


This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.

LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment, and we need to come to terms with it as the work continues to develop; we need to adapt, and quickly, in order to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.


I don't think it is only (or even mostly) not wanting to accept it, I think it is at least equal measure just plain skepticism. We've seen all sorts of wild statements about how much something is going to revolutionize X and then turns out to be nothing. Most people disbelieve these sorts of claims until they see real evidence for themselves... and that is a good default position.


Hedging against the possibility that they get displaced economically before it happens is always prudent.

If the future didn't turn out to be revolutionary, at worst you've done some "unnecessary" work, but at least you might have acquired some skills or value. In the case of most well-off programmers, I suspect buying assets/investments that can afford them at least a reasonable lifestyle is a likely option too.

So the default position of being stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome, and plan for that.


> One should always assume the worst possible outcome, and plan for that.

Maybe if you work e-commerce or in the military.

But how do you even translate this line of thought for today?

Are your EMP defenses up to speed?

Are you studying Russian and Chinese while selling kidneys in order to afford your retirement home on Mars?

My point being, you can never plan for every worst outcome. In reality you would have a secondary data center, backups and a working recovery routine.

None of which matters if you use autocomplete or not.


> If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest through the trees because their heads are buried in the sand.

Look, we see the forest. We are just not impressed by it.

Having unlimited chaos monkeys at will is not revolutionizing anything.


Lawyers don't even use version control software a lot of the time. They burn hundreds of paralegal hours reconciling revisions, a task that could be made 100x faster and easier with Git.
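The kind of revision reconciliation a paralegal does by hand is a solved problem in software; even Python's standard library can produce a word-level redline between two drafts. A sketch with a made-up contract clause:

```python
import difflib

# Two drafts of a (hypothetical) contract clause, diffed word by word:
# the same primitive that Git and redlining tools are built on.
draft_1 = "The licensee shall pay within 30 days of invoice."
draft_2 = "The licensee shall pay within 45 days of receipt of invoice."

redline = list(difflib.unified_diff(
    draft_1.split(), draft_2.split(),
    fromfile="draft_1", tofile="draft_2", lineterm=""))
for line in redline:
    print(line)
```

Every changed word shows up as a `-`/`+` pair, instantly, for documents of any length.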

There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.


> billing more hours is better, actually

The guiding principle of biglaw.

Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.


“First thing we do, let’s kill all the lawyers”

History majors everywhere are weeping.


Many of us would prefer to see the technological leaps be evenly distributed (even clean drinking water that does not need to be boiled before consumption is not a baseline in 2025). So if you want to adapt to your new and improved position where you are just pushing buttons, fine; but some of us are actually interested in how computers work (and are really uninterested in most companies' bottom lines). It's just how it is ;)


I think many people just settled in while we had no real technological change for 15 years. Real change, not an update to a web framework.

When I graduated high school, I had never been on the internet, nor did I know anyone who had. The internet was this vague "information superhighway" that I didn't really know what to make of.

If you are of a certain age, though, you would think a pointless update to React was all the change that was ever coming.

That time is over and we are back to reality.


> you are ... le LUDDITE

Or maybe they just know the nitty-gritty inherent limitations of technology better than you.

(inb4: "LLMs can't have limitations! Wait a few years and they will solve literally every possible problem!")


Friendly reminder that people like you were saying the exact same thing about metaverse, VR, web3, crypto, etc.


Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or Nuclear Weapons from Nuclear Automobiles.

If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), then you need to dig into the object-level facts and form an opinion.

In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.


I didn’t buy the hype of any of those things, but I believe AI is going to change everything, much like the introduction of the internet. People are dismissing AI because its code is not bug-free, completely dismissing the fact that it generates PRs in minutes from a poorly written text prompt. As if that’s not impressive. In fact, if you put a human engineer on the receiving end of the same prompt, with the same context as what we’re sending to the LLM, I doubt they could produce code half as good in 10x the time. It’s science fiction coming true, and it’s only going to continue to improve.


Again, there were people just as sure about crypto as you are now about AI. They dismissed criticism because they thought the technology was impressive and revolutionary. That it was science fiction come true and only going to continue to improve. It's the exact same hype-driven rhetoric.

If you want to convince skeptics, talk about examples: vibe code a successful business, show off your success with using AI. Telling people it's the future and that if you disagree you have your head in the sand is wholly unconvincing.


As someone who gleefully followed along as the Web3 hype train derailed, an important distinction is that crypto turns every believer into a salesperson, by design. There were some that were truly passionate about the potential applications for blockchain technology, but by and large they were drowned out by people who, having poured $10k into the memecoin of the week, wanted to see the price of that coin rise.

This doesn't feel like that. The applications of generative AI have become self-evident to anyone that's followed their rise. Specific applications of AI resemble snake oil, and there are hucksters who pivoted from crypto to AI, but the ratio of legit use cases to scams isn't even close.

If anything, the incentives for embellishment have flipped since crypto. VC-funded AI companies will dreamily fire press releases about AI taking us to Mars, but it doesn't have the pseudo-grassroots quality of cryptocurrency hype. The average worker is incentivized to be an AI skeptic. The rise of generative AI threatens workers in several fields today, and has already negatively impacted copywriters and freelance artists. I absolutely understand why people in those fields would respond by calling AI use unethical and criticize the shortcomings of today's models.

We'll see what the next few years hold. But personally, I foresee AI integration ramping up. Even if the models themselves completely stagnate from this point on, there's a lot of missing glue between the models and the real world.


You don't have to be able to vibe code an entire business from scratch to know that the technology behind AI is significantly more impressive than VR, crypto, web3 etc. What the free version of ChatGPT can do right now, not just coding; would've been unimaginable to most people just 5 years ago.

Don't let people and companies using AI lazily to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's Veo 3: most people in the world right now won't be able to tell you that its output is AI-generated and not real.


The value of those was always far-fetched, and required a critical mass adopting them before becoming potentially useful. But LLMs' value is much more immediate and doesn't require any change in the rest of the world. If you use one and are amplified by it, you are... simply better off.


In my small-minded opinion, LLMs are the better version of code completion. Search and time savings on an accelerated course.

They can’t write me a safety-critical video player meeting the spec with full test coverage using a proprietary signal that my customer would accept.


Frankly, I disagree that LLMs' value is immediate. What I do see is a whole lot of damage they're causing, just like the hype cycles before them. It's fine for us to disagree on this, but to say I'm burying my head in the sand, not wanting to accept "the future", is exactly the same hype-driven bullshit the crypto crowd was pushing.


That's why it's what I define as immediate value. It's undeniably incredibly amplifying to me, whether you or others agree or not. No network effect required. It doesn't matter whether I convince anyone else of the value, I can capture it all on my own. Unlike ponzi-schemes like web3 or VR experiences that require an entire shift in everyday life and an ecology of supporting software.

I don't need to convince anyone that LLMs are enabling me to do a lot more. This is what makes this hype different. It has bones. Once you've found a way to leverage it, it's undeniably helpful regardless of your prior disposition. Everyone else can say they're not useful and it rings hollow because it obviously is to me. And thus probably useful to everyone else too.


Ah yes, please enjoy living in your moment and anticipating your entirely new world. I also hear all cars will be driving themselves soon and Jesus is coming back any day now.


I found it mildly amusing to contrast the puerile dismissiveness with your sole submission to this site: UK org's Red List of Endangered & Extinct crafts.


Adapt to your manager at bigcorp who is hyping the tech because it gives him something to do? No open source project is using the useless LLM shackles.


As if you'd know if they did.


Why would we not? If they were so effective, their effectiveness would be apparent, inarguable, and those making use of it would advertise it as a demonstration of just that. Even if there were some sort of social stigma against it, AI has enough proponents to produce copious amounts of counterarguments through evidence all on their own.

Instead, we have a tiny handful of one-off events that were laboriously tuned and tweaked and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.


I think the main reason might be that when the output is good the developer congratulates themselves, and when it's bad they make a post or comment about how bad AI is.

Then the people who congratulate the AI for helping get yelled at by the other category.


As long as the AI people stay in their lane and work on their own projects, they're not getting yelled at. This is ignoring that AI has enough proponents to have enough projects of significant size. And even if they're getting shouted at from across the fence, again, AI has enough proponents who would brave getting yelled at.

We'd still have more than tortured, isolated, one-offs. We should have at least one well-known codebase maintained through the power of Silicon Valley's top silicon-based minds.


I think it's pretty reasonable to take a CEO's - any CEO in any industry - statements with a grain of salt. They are under tremendous pressure to paint the most rosy picture possible of their future. They actually need you to "believe" just as much as their team needs to deliver.


Just a grain? I say take it with a gargantuan train loaded with salt on all cars. An entire salt mine's worth. Markets, and CEOs, are downright insane, and they are the only ones who stand to profit from this situation, and they have everything to gain.


IMO it is a mixture of stupidity and denial.

I am not a software engineer but I just can't imagine my job is not automated in 10 years or less.

10 years is about the time between King – Man + Woman = Queen and now.
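That King – Man + Woman = Queen trick is just vector arithmetic over word embeddings. A toy sketch makes the mechanism clear (these 3-d vectors are hand-made for illustration; real embeddings are learned from text):

```python
# Hand-made toy "embeddings": the dimensions loosely encode
# (royalty, maleness, person-ness). Real word2vec vectors are learned.
vectors = {
    "king":  [0.9,  0.9, 1.0],
    "queen": [0.9, -0.9, 1.0],
    "man":   [0.1,  0.9, 1.0],
    "woman": [0.1, -0.9, 1.0],
    "apple": [0.0,  0.0, 0.0],
}

def analogy(a, b, c):
    """Return the word closest to vec(a) - vec(b) + vec(c), excluding inputs."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    def dist(word):
        return sum((t - v) ** 2 for t, v in zip(target, vectors[word]))
    return min((w for w in vectors if w not in (a, b, c)), key=dist)

print(analogy("king", "man", "woman"))  # queen
```

Subtracting "man" removes the maleness component and adding "woman" swaps it, landing nearest "queen". That was state of the art a decade ago.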

I think what is being highly underestimated is the false sense of security people feel because the jobs they interface with are also not automated, yet.

It is not hard to picture the network of automation that once one role is automated, connected roles to that role become easier to automate. So on and so on while the models keep getting stronger at the same time.

I expect we will have a recession at some point and the jobs lost are gone forever.


Lawyers say those things and then one law firm after another is frantically looking for a contractor to overpay them to install local RAG and chatbot combo.


In most professional industries getting to the right answer is only half the problem. You also need to be able to demonstrate why that is the right answer. Your answer has to stand up to criticism. If your answer is essentially the output of a very clever random number generator you can't ever do that. Even if an LLM could output an absolutely perfect legal argument that matched what a supreme court judge would argue every time, that still wouldn't be good enough. You'd still need a person there to be accountable for making the argument and to defend the argument.

Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.

This is why LLMs could be useful in one industry and a lot less useful in another.


Isn't this also kind of just ... a reddit thing?


"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair


> Considering our intelligence stems from our ability to use bayesian inference and generative probabilities to predict future states...

Sounds like a pretty big assumption.


It's the Bayesian Brain Hypothesis and Predictive Coding, both thoroughly researched theories that line up with empirical evidence. [1]

[1] https://www.cell.com/trends/neurosciences/abstract/S0166-223...
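For readers unfamiliar with the claim: the core mechanism these theories posit is ordinary Bayesian updating, where a prior belief is revised by the likelihood of incoming sensory evidence. A one-function toy version (the rain numbers are made up for illustration):

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule for a binary hypothesis H given one observation."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Weak prior that it will rain (0.2), then you see dark clouds, which
# are far likelier under rain (0.9) than under no rain (0.2):
p = posterior(0.2, 0.9, 0.2)
print(round(p, 3))  # 0.529
```

Predictive coding layers this update hierarchically, with each level predicting the one below and propagating only the errors, but the arithmetic at each step is just this.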


It's a popular view but it's massively controversial and far from being a consensus view. See here for a good overview of some of the problems with it.

https://pubmed.ncbi.nlm.nih.gov/22545686/

(You should be able to find the PDF easily on scihub or something)


Better check to make sure your laptop's hardware, particularly WiFi, is compatible. Most aren't.


You can install/run Guix using a nonfree kernel to get around that.

https://wiki.systemcrafters.net/guix/nonguix-installation-gu...


I don't think Liar's Poker is a business book in the sense that this author means. These books are supposed to impart specific, practical insights that you can apply to your own work. Lewis in his books just wants to tell a story that will entertain and be interesting to a general audience.


I really doubt live poker is as big as it was twenty-five years ago, when Phil Hellmuth was a household name and Hollywood was casting Matt Damon in movies about the sport.


Live poker is significantly more popular now than ever. Every major tournament has seen record participants, Vegas has bigger poker rooms than ever before, and I'd say anecdotally local poker clubs are packed compared to anytime I can recall.


That's a shame; the coverage is 100x worse than it was.

The ESPN2 streams suck: they seem like they don't know what table they're watching, the commentary is usually below-basic pop culture and memery, and the WSOP commentators are equally childish and unprofessional.

The poster was right, though; it seems far from what it was as far as general non-poker interest goes. Maybe the increased size of the poker halls and tournament attendance is evidence of another effect; gambling tends to go up in poor economies.

My $0.02: I've seen a lot of my favorite casinos close their poker rooms or convert them in the past five years. My neighborhood games have all mostly dried up, and the cohort I network with about poker stuff is essentially still just enjoying 10-20-year-old Poker After Dark eps. The coverage sucks and only the huge games or private tables are worth watching, and that's a whole other cash grab. The personalities are largely non-existent, and the ones that try to angle don't do that great a job.

It all sounds like sour-grapes nostalgia, and maybe it is to a degree, but it's a common opinion that poker is in a rut lately.


ESPN2? I thought the live coverage is only on PokerGO for the last few years, with the packaged shows broadcast later on CBS Sports channel?


We must have been frequenting very different households.


It was people of a certain age and mindset I think.

At the peak before Black Friday, it was pretty routine for 3 or 4 people I knew from work to be on Full Tilt at the same time, and I only really knew about 15 people at this company.


> Informal relationships: mapped through authoritative journalistic sources, these include ideological affinities, mentoring relationships, and membership in patronage networks.

So a key part of this is impressionistic stuff: labels like "soft conservative," "liberal," and so on. Doesn't sound very rigorous.


It doesn’t need to be for this type of analysis. Given the information available, you’re looking for ways to bend the odds, not necessarily “know” with precision. Facebook makes billions with similar techniques.


Seems like Protestants and people in majority-Protestant countries are struck by the bells-and-smells but I think the Catholic Church isn't especially distinctive in this regard. Catholics favor a pretty muted look to things compared with the Eastern churches.


No, it's the opposite. The Eastern churches are very threadbare. Catholics have icons, statues, and every other kind of art, whereas the Eastern church went through iconoclasm. A lot of the Eastern stuff today is due to influence from the West. This is especially true of the Eastern Catholics.

For example, the Kerala church was so against statues and images that basically all the art we have from them is crosses. This was characteristic of the Church of the East. The Eastern Orthodox went through iconoclasms, and some even have issues with statues still.

