massung's comments | Hacker News

I’m no expert either, so I hope someone can corroborate or correct me…

My understanding, though, is that these steps are really the very beginning: using a quantum computer with quantum algorithms to prove that it’s possible at all.

Once proven (which is maybe what this article is claiming?), the next step is actually building a computer with enough qubits, entangleable pairs, and low enough error rates that it can be used to solve larger problems at scale.

Because my current understanding of claims like these is that they are likely true, but only at a tiny scale.

It’d be like saying “I have a new algorithm for factoring numbers that is 10,000x faster than the current best, but it can only factor numbers up to 103.”


Go Forth and prosper.


Great post. I feel obligated to reply with a similar post a friend wrote a while ago that’s probably made the rounds here as well: https://prog21.dadgum.com/116.html


> Any app that ever claimed to tell you what "Hemingway would say about this blog post" would evidently be lying — it'd be giving you what that specific AI model generates in response to such a prompt.

First, 100% agreed.

That said, I found myself pondering Star Trek: TNG episodes with the holodeck, and recreations of individuals (e.g. Einstein, Freud). In those episodes - as a viewer - it really never occurred to me (at 15 years old) that this was just a computer's random guess as to how those personages from history would act and what they would say.

But then there was the episode where Geordi had the computer recreate someone real from their personal logs to help solve a problem (https://www.imdb.com/title/tt0708682/). In a later episode you find out just how very wrong the computer/AI's representation of that person really was, because it was playing off Geordi, just like an LLM's "you're absolutely right!" etc. (https://www.imdb.com/title/tt0708720/).

This is a long-winded way of saying...

1. It's crazy to me how prescient those episodes were.

2. At the same time, the representation of the historical figures never bothered me in those contexts. And I wonder whether it should bother me in this (LLM) context. Maybe it's because I knew - and I believed the characters knew - it was 100% fake? Maybe some other reason?

Anyway, your comment made me think of this. ;-)


I wonder if there's a difference between "asking for critique" and "acting the part". I generally have no problem with - and even get fooled by - watching a movie about a famous person, even though it's not actually that person. Rami Malek is not Freddie Mercury; Timothée Chalamet is not Bob Dylan. But we (or at least I) watch them and, to some degree, buy into their depiction, as if I'm actually seeing the real person. I have to remind myself the actor's version is not the actual person.

It feels easier to portray famous characters how we'd think they'd act, but harder to portray how we'd expect them to critique something. I don't know if those are just points on a spectrum from easy to hard, or if one requires a level deeper than the other.


I think the core difference there is that the holodeck character feels like a character that is playing a person (because it is of course) whereas the LLM feels more like someone lying to you about who they are.

When watching a play, the actor pretends to be a specific character, and crucially the audience pretends to believe them. If an LLM plays a character, it's very tempting for the audience to actually believe them. That turns it from a play into a lie.


In that context, the computer was solving for a faithful representation. In our case, the computer is solving for the most likely sequence of words to appear in conversations with a similar context - which is not remotely the same thing.


> In that context, the computer was solving for a faithful representation.

Was it, though?

They had Newton (died 1727) playing poker (invented at some point during the early 19th century), repeating the myth that the apple fell on his head and then reacting insulted when Data says "that story is generally considered to be apocryphal".

More generally:

In TNG, Holo-Moriarty claimed to be sentient and to have experienced time while switched off despite Barclay saying that wasn't possible, much like LLMs sometimes write of experiencing being bored and lonely between chat sessions despite that not being possible given how they work.

In DS9, there was a holo-village made out of grief, and when it got switched off to reveal the one real person who had made it, while the main cast treated all the holograms as people, that creator himself didn't. Vic Fontaine was ambiguous, being a hologram who knew he was a hologram but still preferring to keep his (fake) world to its own rules and eventually kicking Nog out of the fake world when it was becoming clear Nog was getting too swept up in the fantasy.

In Voyager, the Doctor was again ambiguously person and/or program, both fighting for his moral rights as an author in a lower-stakes echo of TNG's "The Measure of a Man", and also Janeway being unsure if he was stuck in a loop or making progress with grief about the death of Ensign Never-Before-Mentioned-In-This-Show.


> Seasoned Rust coders don’t spend time fighting the borrow checker...

Experienced Rust coders aren't going to find themselves in various borrow checker (and lifetime) pitfalls that newbies do, sure.

That said, the borrow checker and lifetimes do cause problems for even experienced Rust programmers. Not because they don't understand memory management, lifetimes, etc., but because they don't - yet - fully understand the problem being solved.

All programs are an evolutionary process of developing a solution to a problem (or many problems). You think one thing, code it up, realize you missed something or didn't fully grok the issue, pivot, etc.

Rust does a great job in the compiler of letting the user know if they've borked something. But oftentimes a refactor/fix in C/D/Zig due to learned (or new) requirements is just a tweak, while in Rust it becomes a major overhaul because now something needs to be `mut` or have a lifetime added to it. I - personally - consider that "fighting" the borrow checker, regardless of how helpful (or correct) it also is. A minimal sketch of what I mean follows.
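
To make that concrete, here's a minimal, hypothetical Rust sketch (the `Cache` type and its methods are invented for illustration): a read-only lookup gains an "insert on miss" requirement, and what would be a one-line tweak elsewhere becomes a `&mut self` signature change that ripples into every caller.

    use std::collections::HashMap;

    struct Cache {
        entries: HashMap<String, String>,
    }

    impl Cache {
        // v1: read-only lookup; callers only ever needed `&Cache`.
        fn get(&self, key: &str) -> Option<&String> {
            self.entries.get(key)
        }

        // v2: the new "insert on miss" requirement forces `&mut self`.
        // Every caller holding `&Cache` must now hold `&mut Cache`, and any
        // reference returned by `get` can no longer stay alive across this
        // call - each conflict is a fresh borrow-checker error to resolve.
        fn get_or_insert(&mut self, key: &str, default: &str) -> &String {
            self.entries
                .entry(key.to_string())
                .or_insert_with(|| default.to_string())
        }
    }

    fn main() {
        let mut cache = Cache { entries: HashMap::new() };
        println!("{}", cache.get_or_insert("answer", "42"));

        // let first = cache.get("answer");      // immutable borrow...
        // cache.get_or_insert("other", "x");    // ...conflicts with `&mut self` here
        // println!("{:?}", first);
    }

The compiler is right to flag all of this - but walking every caller from `&self` to `&mut self` is exactly the kind of overhaul that feels like "fighting".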


I’m sure your question was rhetorical and sarcastic (that the app exists makes me sick).

It absolutely is not. AI counts as a medical device: it’s used to help diagnose and inform medical treatments. However, this is a loophole, because (quoting fda.gov):

> FDA does have regulatory oversight over devices intended for animal use …. Pre-market Approval is Not Required: The FDA does not require submission of a 510(k), PMA, or any pre-market approval for devices intended for animal use.

The FDA will only step in after complaints or enough pets start dying.


I've personally found that the time taken to think through a discussion follows an inverted Gaussian curve:

- on the left tail are people who know little-to-nothing about (or have little experience with) the given topic and need a chunk of time

- then, as knowledge and experience increase, less time is needed, eventually bottoming out at what appears to be instant understanding + the ability to communicate effectively about it

- but then something interesting happens when they get even more experience + knowledge: they now know about all the edge cases, things that go wrong, etc. and once again take more time to think through the topic

I've also found that most everyone is the same in this regard. Every once in a while (like any normal distribution) there's an outlier on one side of the spectrum or the other, but for the most part, everyone is the same.

Where people tend to differ is in their coping skills in such situations. Early in my career I had to learn to ask people to explain their thinking. Later it was me slowing down and realizing there's likely more to it than I think (and for those behind me).

Now it's me telling those at the low point of the curve - the fast thinkers - to slow down, because while they may be right, and -maybe- they've thought it through, that's probably not the case.

TL;DR: to anyone who thinks they are a slow thinker - you probably aren't (like imposter syndrome); you just need to learn to slow the room down. Doing so will help you, others behind you, and those in front of you.


First, this is great information in an area I know very little about.

But I’m curious - from your experience - how do you know the OP isn’t pretending, in order to learn about new avenues to block or attack, or to track down people who are trying to circumvent?

I don’t mean that as a “be careful”. You’re the expert compared to me and for all I know these are unblockable. Or maybe those doing the blocking would already know about them? So I’m interested in just understanding more.


By that logic, if modern LLMs had existed in the 80s, you’d have never learned Haskell, OCaml, Rust, Go, Erlang, … and all the cool concepts and ideas that came with them. You’d still be programming BASIC and Fortran, simply because that’s all the models knew.

AI may be helpful at times, but to limit yourself to only the knowledge and experience you already have is… short-sighted at best.


You've got a point here; however, I would just flip his argument: it's best to rely on LLMs where they have a lot of training-data exposure, and there Python etc. dominates over Delphi.

I, for example, find LLMs not useful for coding on the 6510 or 68000, especially in assembly, when developing a demoscene production.

x86 has become pretty useful lately, but still, for bit manipulation on certain machines you'd better take your time and triple-check your code rather than rely on the LLM.

I would love to see a change here.


Unfortunately this is what AI is leading to. People will stop learning new languages and companies will stop developing new ones because AI is now supposed to write code.


I agree. I envision that we will reach a state where the tooling generates executables directly.

As the next step of low-code/no-code tooling, the agent will do the actions for us.

We are already seeing this on SaaS offerings.


I tried to make some LLMs write (GW-)BASIC and they failed miserably. Maybe they were only trained on some modern BASIC that doesn't look like BASIC at all? I could not convince them to use line numbers at all. Maybe with a lot of context they could do it, but my prompts did not work, even when I made it clear I wanted line numbers.

(Free)Pascal seems to work great though. I think enough of that is in training data that it can be used as well as any language. There isn't much special to consider to get it right. It is not like figuring out how to do Rust or C++.


You might be right. Have you seriously considered that you're wrong, though? What if you're investing in a dead craft and it never pays off? Have you engaged with that idea and rejected it?


There are two sides to your question, I think:

- professionally (for money)

- personally (for knowledge’s sake)

Regarding the former, I’m nearing retirement age, so personally I don’t care as much; I’m no longer “investing in a dead craft”. Assuming it is dead (I don’t think it is).

Re the latter, I have rejected it. I love problem solving, and I consider programming a tool I use to solve problems. Regardless of whether it’s an LLM or my old C textbook, if I limited myself to only what came before me, then I couldn’t possibly improve on the current situation. My solutions would be in a perpetual state of stagnation. I can’t speak for others, but that sounds boring AF to me.


So then it's a life stage thing. You're already well established in your career, and you'd rather some intellectual engagement. There's nothing wrong with that.

A 22 year old fresh out of undergrad almost certainly wants actual money far more than they want intellectual engagement. Most of them are better served by picking up a boring workhorse language that they can reliably get paid to write. Inevitably some will speciate into more esoteric fields, but that's the exception, not the rule.


People wanting shortcuts and to do less work/thinking is about to become the major force in society over the next generation.


Cheaper but worse unfortunately will win in most cases.

At least it’ll eventually become easier to distinguish oneself with something better. You’ll just always be slower.


Shh! Too many secrets!


Seatech Astronomy

