Hacker News
A.I. can now write its own computer code – that’s good news for humans (nytimes.com)
80 points by flippyhead on Sept 10, 2021 | 126 comments



Luckily for anyone in the software writing profession, writing the actual code is the easy part. Anyone who watches the video in its entirety will be made painfully aware of this.

That aside - what I'd actually prefer is something that does the opposite of this: rather than writing code, I'd like it to help me decide what I should be building, not just hand me an implementation.

Example:

> "Make twitter"

- "OK. Do you mean a short-message-sending service like Twitter.com when you say 'twitter'?"

> "Yes"

- "OK"

- "How many users are you thinking this will have?"

> "20 million"

- "Alright, where are these users located?"

> "All over the world?"

- "Will they all be on simultaneously?"

> "Yes"

- "OK, what are your latency requirements?"

> ...

The end of this discussion could be a fully architected design in the abstract, with recommendations on specific technologies to use, the tradeoffs and the costs, if applicable.

A plus if said architecture could be specified in a way that makes it easy to deploy. This logic could be used for high level implementation designs, and even UI/UX.


  aicoder< What is a User?

  nocoder> a User registers with a unique email and a password of more than 16 characters, but not the weird ones just the normal ones. Oh, and on registration give the user a unique id.

  aicoder< So, email the user their unique id?

  nocoder> No, no, it's our little secret.

  aicoder< Would you like to design the registration form now?

  nocoder> God no, just make a standard form. With client-side validation. And server-side validation too, just to be safe. And give it some flair, we're a cool company after all.

  aicoder< (I don't get paid enough for this shit.)

  nocoder> Wait, what?

  aicoder< Would you like to add the flair now?


That's Clippy, not AI.


There isn't a difference.


aicoder< Time to go on strike.


It could even design all the dark UI patterns and dopamine feedback loops and freemium economics and user segmentations and privacy invasions and personal data exploitations and penis-swastika logos for you!


Why does it have to be penis swastikas? And furthermore, why has no one investigated the possibility of vagina swastikas? Aren't we being a bit sexist in the midst of all of this hyper-racist fascist ideology?


People like drawing penises and swastikas. I think it's because they're simple shapes, but also taboo. A bot might very well imitate that.


Slack's new logo is a penis swastika

https://boingboing.net/2019/01/16/slacks-new-logo-is-a-playd...

How Slack’s New Logo Became a Lightning Rod For Everything Bad On The Internet

https://www.buzzfeednews.com/article/nicolenguyen/slack-new-...

The Penis Swastika Mistake

https://www.urbandictionary.com/define.php?term=The%20Penis%...

>The rookie mistake of trying too hard when designing a logo and you end up making one that has either a penis, a swastika, or both, included in your design.

...Or one swastika and four penises.


Well Tesla’s logo is based off an IUD…


> penis-swastika

Well, my curiosity quickly led to the discovery that penises and swastikas have a large overlap with furries, by way of the Rule34 site. Which finally made me realize that furries are probably the catch-all category on Rule34: making another ‘rule 34’ pic? Well, just slap furries on it, and you've got that audience covered at almost no additional cost. The same way incest titles are everywhere in porn, since it costs nothing to phrase every single title that way regardless of actual content.


I love designing programs, programming, and everything around it; it's my job. I'm constantly jumping back and forth between worrying that I won't be able to do this in the foreseeable future and realizing that these systems don't solve the problems I'm solving.


These systems don't solve the problems you're solving?

Not yet, but give it a couple years.

You will join the taxi drivers, and so will I.


"insert a Russian book at one end and come out with an English book at the other," Doctor Dostert predicted that "five, perhaps three years hence, interlingual meaning conversion by electronic process in important functional areas of several languages may well be an accomplished fact."[1]

-IBM Press Release 1954 regarding the 701 translator

Predicting the problem will be solved in a few short years is the easy part. Execution to realize those predictions is much harder.

[1] https://www.ibm.com/ibm/history/exhibits/701/701_translator....


It would be easy to translate any book into any other book (if every language was a one-to-one variant of each other with no differences in slang or metaphor or cultural backdrop that converts simple turns of phrases into a medley of aphorisms each deserving of their own short stories.)

Other than that all you need is a camera, a computer and a dictionary.


If that were true, we should expect to be able to feed that initial quote through multiple translations with minimal meaning loss. Here is the result of translating that entire phrase from English -> Greek -> Serbian -> Creole -> English with a popular translating tool:

"put a Russian book at one end and an English book at the other, ”predicted Dr. Dostert."

It completely loses its original meaning, not to mention dropping the latter half of the quote.

I think the constraint of "if every language was a one-to-one variant" is too restrictive to be of use in the real world. This is hard precisely because that rule rarely holds. Language is more about communicating concepts than just words, and translating concepts is much harder because you need to understand context.
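To make the concepts-vs-words point concrete, here's a toy word-for-word "translator" (the vocabulary and glosses are mine, purely for illustration) showing how context-free dictionary substitution mangles an idiom:

```python
# Toy word-for-word translator: maps each English word to a German-ish
# gloss with no awareness of context or idiom.
DICTIONARY = {
    "it": "es", "is": "ist", "raining": "regnend",
    "cats": "katzen", "and": "und", "dogs": "hunde",
}

def translate_word_for_word(sentence: str) -> str:
    """Substitute each word independently; unknown words pass through."""
    return " ".join(DICTIONARY.get(w, w) for w in sentence.lower().split())

print(translate_word_for_word("It is raining cats and dogs"))
# -> "es ist regnend katzen und hunde" -- a literal animal downpour.
# A German speaker would say "es regnet in Strömen" ("it rains in streams"):
# the idiom has to be translated as a concept, not word by word.
```

A real system does better only to the extent it has seen whole phrases in context rather than isolated dictionary entries.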


OK, I didn't think I was being subtle, but I was being exceedingly sarcastic in my reply. I don't think any language is one-to-one with any other language, and expecting to be able to translate directly with a simple dictionary is patently absurd.


Complications don't start at phrases.

Being intimately familiar with a couple languages, I find it a bit frustrating how often single words can't be translated perfectly because of small differences in meaning. Google Translate also often gets confused with false friends.


Yes, and crossing language families often results in words your native language treats as atomic being divided into multiple more specific concepts (or vice versa). For example, Japanese has no word for foot: they just say leg and use context to infer which part is meant. On the other hand, they have three words for specific parts of the back/hip area that we would usually just call the back or lower back in English. This kind of thing causes no end of confusion.


Er. Doesn't that quote support their argument, rather than counter it?

At this point, language isn't a barrier. I read and write Russian without knowing how to read or write Russian. I regularly talk to my Russian friends in Russian thanks to Google Translate.

It may have taken a bit longer than the 1954 prediction said, but it did happen.


Do you buy books translated with Google?

67 years have passed and we still need translation for many applications.

That’s not an indictment of the technology; only of the extreme optimism.


That's an economics issue, not a technology one. Books are mass-market items, where it makes sense to translate once and sell the translation thousands of times. Putting a whole book into translation software would generally be more laborious than tracking down an edition in the language of your choice.

But in 1954 there were no web pages and reading articles or papers published in foreign languages was extremely rare. Nowadays, these are all translated instantaneously for free by modern web browsers with reasonable fidelity. And odds are these make up the overwhelming majority of the foreign language material a modern person consumes.


Google Translate often fails spectacularly, DeepL is a lot better but it fails too.

I imagine the pair of languages plays a part, and of course the prose.

Regardless, there will always be ambiguities, and many sometimes-important subtleties are bound to be lost in translation, even when a professional human does it.

(Also your prediction being half a dozen decades late isn't particularly useful.)


It would be helpful if your downvoters would explain their reasoning.

To corroborate your point about Google Translate: that's exactly what the writers used to create the gobbledygook language for the malfunctioning robot in The Good Place. They literally used the tool to translate his dialogue from one language to the next so it would be semi-incomprehensible gibberish.


Mind you, those taxis were all supposed to have been driving themselves since 2018.


Self driving cars have been in development for ~30 years. We’re remarkably close to taxis driving themselves.


In California maybe. We're a lot farther off from self driving taxis in places where it snows, like Toronto or Chicago.


Give it a few decades.


the correct timeline


There are two things that I want to address here.

For one, I still call a taxi when I need one. I'm happy to pay a bit extra to get a professional driver. I don't want to bash people using other services or the people providing them, but it's not for me.

Similarly, my clients call me and my colleagues because they have a problem that needs solving, and we solve it partly with programming because that is how we can solve it exactly, reliably and freely. We don't typically use low-code tools because they can trap us and they don't scale with our ability and understanding, and the productivity they promise holds only for narrow uses. Our clients don't use them because learning and using them has taken (and would take) too much time, with mediocre to really bad results; they want the problem taken care of and are willing to make that trade.

The second point is that I simply refuse to stop adapting and learning. I'm happily adopting technology into my repertoire when the tradeoff is worth it. Analyzing and understanding those tradeoffs is part of the job. Expanding knowledge is part of the job. And this was always true for anyone who works in software related fields, our community always has had to adapt, adopt and evolve, balancing pragmatism and curiosity.


You mean we'll become taxi drivers, or we'll join them in unemployment due to self-driving cars? If the latter, that's great because it means I'm good for at least 20 more years of this career.


The taxi drivers haven't been replaced by computers yet either.


Yawn. Seriously, don't worry about it.


First they came for Lee Sedol, and I did not speak out— Because I was not a professional game player.

Then they came for the taxi drivers, and I did not speak out— Because I was not a taxi driver.

Then they came for the frontend programmers, and I did not speak out— Because I was not a frontend programmer.

Then they came for me—and there was no one left to speak for me.


> they came for me

They tried, but there was no taxi in sight.


Neither of you is going to be a taxi driver because taxi driving is easier to automate than software engineering.

Part of me is pessimistic about AI programming tools, part of me thinks they’ll only enhance the agency of existing programmers.

Either way taxi driving is probably easier to automate and only requires modifications to Tesla’s self driving tech.


> pessimistic about AI programming tools

I suspect that these tools (if they do anything at all) will just make it harder to learn programming as a newcomer, just as all the advances in programming during my lifetime have. IDEs are great, until they do something you didn't expect, and then you have to understand what it is they're automating in order to figure out how to get them to do what you really want. Try explaining a Java classpath or dependency problem to somebody who's never opened a command-line terminal before. Docker is great, until it expects to find something that you happen not to have installed. What will probably happen here is that you and I will be fine, because we'll be able to effortlessly wield these new tools, since they're just doing quickly what we used to do slowly, but new students of programming will have an even steeper hill to climb than we did.


What you describe is an interpolation. It is a solution for what already exists. You can interpolate this a long way, bisecting the various axis ranges that specify a problem (short messages using text, using emojis only, using pictures, using videos, using...). This is exactly the stuff NNs are good for - after all, they are universal approximators for functions with bounded domain and bounded codomain.

At some point of time you will need to extrapolate. Twitter was an extrapolation, Google was an extrapolation as well.

Can a system trained to provide you with the system design of a Twitter clone help you with the design of, say, Medium?
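The interpolation/extrapolation distinction above can be sketched numerically. A minimal toy example, assuming nothing beyond NumPy: fit a cubic to sin(x) on a bounded domain, then query it inside and outside that domain:

```python
import numpy as np

# Fit a cubic to sin(x) on [0, pi] -- a bounded training domain.
xs = np.linspace(0, np.pi, 50)
coeffs = np.polyfit(xs, np.sin(xs), deg=3)
model = np.poly1d(coeffs)

# Interpolation: inside the training domain the fit stays close.
interp_err = abs(model(np.pi / 2) - np.sin(np.pi / 2))

# Extrapolation: outside the domain the cubic diverges badly,
# since nothing constrains it beyond the training range.
extrap_err = abs(model(3 * np.pi) - np.sin(3 * np.pi))

print(f"interpolation error: {interp_err:.4f}")  # small
print(f"extrapolation error: {extrap_err:.1f}")  # huge
```

The analogy is loose, of course: "Twitter clone to Medium" is extrapolation in design space, not along a numeric axis, but the failure mode is the same kind.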


> Twitter was an extrapolation, Google was an extrapolation as well.

Were they?

Twitter: "an absolutely minimal MySpace clone. Let users write posts to feed using SMS."

Google: "Like AltaVista, but without all the bullshit - just the search box. For weighting, have it use this funny algorithm we figured out the other day."

I'm not denying both were innovative - I just think the innovation, the "extrapolation", didn't happen at the technical/tactical level we're discussing here. All the pieces were already there, and used for similar things. The innovation was in figuring out that this particular shape of service is something people might want to use, and that it could make money.


The extrapolations there are "limit on the post size" and "this funny algorithm".

That "just the search box" thing would not have gone anywhere without that algorithm. I guess the "search box only" design decision was the kind of thing that happened to ARM: its founder said that ARM was able to give its chip developers the one thing that neither Intel nor IBM could - the absence of money.


> The extrapolations there are "limit on the post size" and "this funny algorithm".

I disagree. "Limit on the post size" was not arbitrary, it was a technical limitation - Twitter was created as "a social network, but for SMS". SMS was a well-known technology back then. As for "this funny algorithm", in context of the dialogue 'endisneigh posted, the algorithm is just an input. "Hey, AI, use that thing as a ranking function". The "extrapolative" work would be the invention of the algorithm itself.


I don’t agree that writing code is the easy part. Making changes to and maintaining a huge code base is no joke.


That kind of makes the point. Writing the code is the easy part. Designing a system that's easy to maintain is hard. But a lot of that is decided before you write your first line.


Good point. AI not only has to write the code, but also has to understand what existing code does, and then change it on request. If it doesn't understand its own code -- some human has to do that stuff.

And maintenance is 90% of the work.


That's a perfect example!

There is such a huge code base because writing the code was easy to do.


It’s no joke, but writing the code is not the hard part. Reading the code, understanding it, thinking through the states and control flow - those are the hard parts of software maintenance. I do it for a living; 80% of my billable time is reading and testing code. Once I know what to write, I've reached the easy part.


This is circling towards what we do as computer programmers - specify system behaviour in minute detail.

You can move it to a higher level, and there are usually tradeoffs there, and perhaps AI can help mitigate some of those. But in the end, you need a way to specify the system behaviour in explicit detail. Which is where software developers come in.

That's not going to go away because AI comes along, AFAICT, though it may (as it already has many times) change form or grammar here and there.

That interactive prompt system is going to get boring fast, so we'll script it. And then build on that and ...


I have an idea and all the details. Are you available to do the easy part?


Whenever a smart business-career friend comes to me, tail wagging, telling me about this "idea that will change everything" while being unable to tell me what his value in the partnership would be, I would like to be allowed to slap them once.


maybe - what's your idea and the details? If your details leave any room for ambiguity then we'll have to call off the engagement :)


No. You do not even have proper authorization to ask. "Before the law sits a gatekeeper."

https://www.kafka-online.info/before-the-law.html


Just because it's easy (relative to everything else; it also depends on what you mean by "all the details") doesn't mean it doesn't take time, and therefore money.

You might argue, "well, then generating the code is still a win," and it might be, but it's a micro-optimization. If the AI can do the easy part but not the hard part, it's akin to shaving seconds off an operation that takes hours. It's focusing on the wrong thing. If the AI could instead do the hard part, you would save a lot more effort and therefore money. Once that's done, by all means automate the easy part too, but until then the priorities are off.

You might still say this is worth it: saving those seconds still means you don't have to pay for them, even if the majority of the cost is still there. This is possibly true and all well and good. I don't really care, because I'll still have a job doing the hard part. The issue is that when these AIs are mentioned, the "hard part" is always glossed over. The AI is sold as a thing that will automate all development, where it really should be sold as a thing that shaves a little off the total cost of development while the hard, and therefore expensive, parts remain.

Sure, there are some development tasks that are fairly trivial, and some companies that do mainly these might go out of business. But you still have the stories of Oracle selling a website to a government agency for $100 million, because it has to interact with a slew of legacy systems and deal with ambiguous tax codes/regulations/requirements. There's a lot of tech out there with these complexities, and that's not going to be automated by these AIs until they tackle "the hard part".

A note on "all the details": if you truly have all the details (refined unambiguous requirements, detailed architecture with all the use cases and edge cases outlined and documented, technology tradeoffs investigated and documented etc etc) then great, it would be quick, easy and cheap to implement then. Unfortunately, what is more common, is that a non-developer will say this and "all the details" really isn't all the details at all and just the tip of the iceberg.


Joke's still on you, because that’s still the easy part.

The hard part is still running the service, dealing with scalability problems, security, bugs, customer support, new features, content moderation at scale, legal compliance, meeting service level agreements, supporting multiple platforms, etc…

If only architecting the platform and writing the code was all we needed to do… every developer would be doing it.


Now this I'll pay money to use. What will happen to software architects though?


This is a really interesting idea. It would be very useful to be able to lego together some components and get a rough estimate of how it would scale and what the technical performance would be (to first order).


Yeah, an AI boss, coworker/partner, friend or all in one would be great.


We’re going towards hive processing: everything in the room will communicate in ways that seem unrealistic today, and the room will load-balance the processing, routing and prioritizing what gets done when. IoT, but on steroids. We’re not even a fraction of the way to where we can go. I definitely think something like what you are talking about will be a thing some day; programmers will be people who know how to explain things really well so an AI can do exactly what the summoner wants. Anyway, it’ll happen. Not sure when, but it will happen within my lifetime.


Wouldn't it be easier to just have people open source a generic architected 20 million simultaneous user system with low latency and kubernetes/helm charts as a github project?


That's an interesting use of the word "easy".


> "Make yourself"

Would this violate something in computer science theory?


That's what folks at hyperc.com are aiming to solve.


What if, instead of "make Twitter", you asked it to make money? Let the computer decide what to make, and collect the profit.


"Computer, please conquer Australia for me by dinnertime."


I've been evaluating OpenAI Codex for weeks now. It cuts my coding time roughly in half, largely by producing syntactically correct code and reducing lookups to docs/Stack Overflow/etc.

At the same time, it's hopelessly wrong or broken about 1/3 of the time.

On balance - it is revolutionary. For real world use - it is still very experimental.


Can we ask Codex to write in languages like Haskell, where it's hard to make "small" mistakes because the types catch you? Or does it circumvent that?


If you want to auto-write Haskell, use MagicHaskeller:

http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.ht...

And while we're at it, if you want to auto-write Prolog, use my own Louise:

https://github.com/stassa/louise


Considering Haskell is much denser than most languages, it should be harder to create those small mistakes, because the produced code is much shorter. But I haven't tested it, so take this with a grain of salt.


The semantics of "AI that can write its own computer code" is hurting my head. What is the line between an "AI" code generator that inputs natural language and outputs code in a high-level software language, versus a high-level language compiler that inputs structured instructions and outputs machine/bytecode? At some point, these "AI" / autocoding systems are just a higher level of code abstraction, right? If you showed a modern python script to an assembly developer from 1975, they'd probably call that a code-generating AI. Or am I missing something?


I think it's good to remember that a common theme in the history of AI development is that whatever advances AI tools demonstrate come to be considered "non-AI", even if they were seen as AI-worthy before.


With a caveat that typically "it was seen as AI-worthy" only because people expected the full solution, which would require an AI, but learned to accept a partial but much simpler one.

For example, free-form document search used to be considered an AI problem, because people assumed the program needed to understand the documents and the query the way a librarian or an archivist does. It turns out that some fuzzy matching and a silly graph algorithm can get you 90% there - so search is now called "non-AI". But it's not people's understanding of AI that changed. It's that the AI part of search is the unsolved remaining 10% (that would also make it not suck).

Same story with machine translation. It was considered AI, now it's 90% solved with clever algorithms and a big corpus. Definitely not AI - but again, only because the AI part is the remaining 10% that would make machine translation not suck. Note how business and legal language is still translated by humans - that's because the 90% that was solved is good enough only for casual use, where people are willing to tolerate mistakes and to meet the machine halfway.

Point is, I'd argue people still have the same vague definition of AI as they always had - a program that is smart in general sense, that can navigate its environment and figure stuff out on its own. A program that, if you squint hard enough, could be considered a person[0]. Problems that transitioned from "AI-complete" to "doesn't need AI after all" are ones for which we found good enough solutions that didn't need the AI.

--

[0] - At least to sensibilities trained on science fiction, which very much broadens what you'd consider a sapient life form, a person.


I guess there needs to be some "design" involved in the process. A Python program determines the sequence of machine code instructions uniquely (given some assumptions about repeatable builds, or let's just say on a given computer without updates and so on); an AI, on the other hand, needs some freedom to choose which specific program it produces. Let's say the spec says "feature X has to be user-configurable": the AI has to decide whether that should go in a config file or in a GUI element.


Nobody tell them compilers have been writing their own computer code based on higher-level instructions for half a century ;)


Back in the '80s there was a software product that advertised "no more programming!" You just wrote a specification in their specification language :-)

I stopped seeing the ads for it after a few months.


When I think about the work I do as a fairly blue-collar front-end engineer writing React and Swift code, it's interesting to consider how this could fit in and help.

The problem areas it seems to excel at are somewhat self-contained, in contrast to the code I write, which is usually all about integrating multiple systems and bodies of knowledge (user device, network, data schema, industry practices, product requirements, etc.).

I too rarely, to my occasional regret, get the chance to write a purer function whose purpose can be explained as concisely as in the miraculous Codex demos. Helper functions ("count the words", etc.) are sprinkled throughout my code, for sure, but are mostly provided to me by the platforms I inhabit.

Codex's ability to explain a piece of code in plain English seemed exciting at first, but the kind of "other people's code" I am usually puzzling over has so many tentacles into the specific "business rules" and arcana of the service I'm writing to. How would Codex know about all that?

Of course, Codex has already blown my mind several times, so I am quite open to it someday being able to ingest an entire set of interrelated codebases and break them down for me succinctly. That doesn't even seem far-fetched, based on what we've seen to this point.

The thing that is ringing a bell for me the most is the idea of it being able to understand APIs and generate correct code for them. That could be a neat learning tool and save some boilerplate. Kind of like scaffold-generation code, but on steroids ...


I remember "The Last One", a program generator written in 1981. So the idea of programs writing programs is nothing new.

Improvements by purposeful self-modification would be a different matter...

https://en.wikipedia.org/wiki/The_Last_One_(software)


They have been saying our jobs are obsolete because of AI for longer than that anyway. The stuff that Codex generates is not the work we do, and as such it's pretty useless. I see AI more in front-end code: have designers think up things and have the AI generate the React that gives the best possible (machine-learned) user experience from those designs (like: this looks like a form with a list right next to it; that will work best with this HTML on mobile and desktop). At least that seems possible now, whereas AI writing code that adds value beyond that rather does not.


I think I saw a working product that does exactly that, but I can't remember the name.


There was at least one posted here about two years ago, but it was not good.


I am not looking forward to the day I have to fix or enhance legacy code written by some AI that is long gone and no one knows how anything works...


Then you put another AI on top of that, designed to fix legacy code bugs.

And another AI on top of that, and…


I tried codex last weekend.

It's not writing code the way a human engineer writes code. Codex cannot produce meaningful code beyond instructive "A does B". Even something similar like "A and B do C and D respectively" will likely confuse Codex.

As we all know how hard it is to express oneself in human language, for Codex to get near the performance of a human engineer is no less difficult than inventing an AGI.

Good luck with that in 20 years


There are two main things I've seen code AI do: automate generation of boilerplate and inline code libraries. It doesn't look like this AI does anything different from that either. Those are nifty, but hardly game-changing.


Here's a pretty impressive demo.

https://www.youtube.com/watch?v=SGUCcjHTmGY

I think they said it handles something like 37% of requests.

By the way, don't watch it if you're worried about losing your job to a computer.


Watched it, very impressive for AI but not at all worried about losing my job :)


Hi! What is your job and why are you not worried?

I am also looking for reasons not to be worried!


I'm curious: if you're worried based on that video, why?


I wrote what I wrote before watching it. After watching it, I am not afraid for the coming 5-10 years. OK. But what then? Why am I afraid, and what am I afraid of?

In recent years, "AI" has started to make surprising leaps every few years. What we have now is the "child of a new species". It's still a child, but it can grow. The species we are seeing scares me, as an overpaid code monkey: I can compete with a child of it, but I could not compete with an adult of it. Imagine this system, but more advanced, more tuned to your specific domain.

The systems we work with are all trapped in mind boggling complexity, but what if AI starts to untangle this, what if AI starts to truly become the only human-machine interface to produce software?


Eventually you’ll need better control systems.

As AI becomes more advanced it becomes harder to understand and then control. Everything that isn’t AGI is still easy to control.

My vision would be a brain-computer interface which lets you perceive the data processed by the AI the way you'd perceive sounds or colors. It would be like synesthesia, where you perceive multiple modes of qualia correlated with one another.

You’d have an extremely high-level “programming language” driven by your thoughts; the brain-computer interface automatically interprets them into machine instructions.

In this scenario the AI doesn’t become a human machine interface. The AI is the machine and the brain-computer interface is the human-machine interface you described.

In theory you could use this to access cloud compute and delegate tasks to machines like you would today with a computer. I suspect this will help us compete with and survive artificial general intelligence.


It seems to me that the problem, from a programmer's standpoint, isn't that the job will disappear but that the definition of the job will change quite a lot.

I always think of the example of supermarket cashiers. Formerly a fairly skilled job but now merely providing cheap meat-robot manipulators for a scanner. The person is still there but has a job concentrated down to the few things a person does better, and those things aren't always the fun things.


I don't see a world where this happens. Not because the AI will never be smart enough, but instead because if the system is smart enough to turn this into a low skill job, then you could just have domain experts use it - which is the holy grail of programming tools.


I wonder if “coding/coder” will go the same way as “computing/computer” [1]

[1] https://en.m.wikipedia.org/wiki/Computer_(occupation)


The job has already changed; we have been using AI to write code for decades. Most people no longer write much code based on their own understanding; rather, they google for code libraries or snippets, include them, and then fiddle until the code does what they want. I don't see how this situation is any different from that.



In the late 1990s you had people getting paid 6-figure salaries to just "program" in HTML; that was, of course, before the dotcom bubble burst. This seems to be a modern response to the same kind of developer that's been popping up for the last 10 years. All the fuss over the newest shiny tools will never replace good fundamentals, and most of this fear comes from the fact that people know their fancy frontends/APIs are glorified glue code in messy codebases. There are still systems today that will require knowledgeable people to maintain for decades to come. The MS Word example is the best part, because it'll reduce my free IT consultancy to parents/friends.

Look at your work as a craft, invest time into getting incrementally better and you shall know no fear.


This is pretty amazing. The guys are absolutely right - it's still early days for this tech and the sky would seem to be the limit.

Coders are always climbing the learning ladder and should add co-working with a code-writing AI to their toolkit, especially if it truly is 'open' (CoPilot will be a paid service I believe?).

The long term possibilities for eliminating many types of labour seem enormous. It is not so easy yet to understand what forms of labour will be not only resilient in the face of this developing tech, but even 'antifragile' to it. If these are few (could by definition be an oxymoronic assumption), how will the relative returns on labour vs returns on asset ownership diverge? Will a fundamental revision of socio-economic systems be required?


>> The guys are absolutely right - it's still early days for this tech and the sky would seem to be the limit.

It depends on what you mean by "early days". If it's about Codex itself, then that's probably right; it's a new-ish system. However, the task of creating a computer program by another computer program, variously known as "automatic programming", "program synthesis", "program induction", "inductive programming" and so on, is not new. Rather, it goes back to the 1970s and probably earlier still. Very briefly, these terms describe a constellation of approaches that automatically create a program according to a specification.

Placing Codex in the context of this earlier work, it is an example of a program synthesis system that composes programs from incomplete specifications, variously given as natural language specifications or what we can call "code snippets" (i.e. you start writing code and the system completes it; I don't know the proper term for this kind of specification). As such, it is one in a long line of systems that predate it, and it is not even the most impressive of those systems (37% accuracy is nothing to write home about).

The reason that you misidentify it as something new, by your "early days" turn of phrase, is a peculiar tendency of deep learning researchers to omit any references to relevant prior work, for reasons that are difficult to discern. Sometimes the reason seems to be honest-to-god lack of familiarity with approaches other than deep learning (even other neural network approaches). Sometimes it seems to be more of an attempt to claim well-trodden territory as brand new and trumpet a small step forward (or not even that much forward) as the breaking of new ground. Sometimes there seems to be an attempt to avoid unwanted comparisons to different approaches with different trade-offs, which could make the deep learning system look more lackluster than desired.

In any case, there you have it. Codex is not any kind of breakthrough or innovation. What is new is the sudden interest in program synthesis. Perhaps this is a positive thing and program synthesis approaches will finally begin to be adopted in the industry. But knowing how these things work, and how irrational the choice of what gets hyped and what doesn't is, I'd guess probably not.


I expect an increased upward wealth transfer, as has already been happening in most countries for a long time.

The real problem facing our elites will be how to both control their immiserated and angry population, and the AIs now running most of the economy.

I propose cyborgization for such people and will advise the rich people I know to invest in brain computer interfaces and related tech.

We are going to have to think long and hard about the social contract we live under.

How do we deal with never before seen unemployment rates? Is liberalism able to cope with that? How do we stop something much worse from exploiting the coming automation crisis?


That is a very impressive demo! However, I don't see this as a job eliminator. I see this as a turbo button for some development tasks, like starting out and scaffolding an application. Notice that they still had to define "functions" that the neural net could then leverage, and they had to speak in precise ways in a given order; it is a different way of coding, but still coding.


My printer can print art.

That makes it an artist.


but if the printer could print art without you doing anything, then that would be what the title is implying...


i like this comparison a lot


I’ve been using Copilot the last few weeks and it’s like having a lunatic babble in your ear while you’re programming. Once in a while though, it has moments of startling clarity - just right enough to make me pause, and just wrong enough to make me burst out laughing.


Everyone except me seems to have gotten access to Codex :)

Can someone who has access try asking it to do this with Python/Pandas:

I have a data frame with columns [name, date, win] which is sorted by date in increasing order within each name. The win column is Boolean. Now add a new column “days_to_win” which is the number of days until the NEXT win=True for the name, not counting the current date. If there is no next win for the name, set it to 9999.

This should be done without any for loops with pure pandas functions/methods.

I will be impressed if Codex can do this.
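For reference, here is one hand-written way to do it in pure pandas, with no explicit loops (a sketch against a tiny made-up data frame, not Codex output; column names per the spec above):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["a", "a", "a", "b", "b"],
    "date": pd.to_datetime(
        ["2021-01-01", "2021-01-03", "2021-01-07", "2021-01-02", "2021-01-05"]
    ),
    "win": [False, True, True, False, False],
})

# Date of each row's own win, NaT for non-win rows.
win_date = df["date"].where(df["win"])
# For each row, the next win date strictly after it: shift up by one
# within each name, then backward-fill within each name.
next_win = win_date.groupby(df["name"]).shift(-1).groupby(df["name"]).bfill()
df["days_to_win"] = (next_win - df["date"]).dt.days.fillna(9999).astype(int)
# name "a": [2, 4, 9999]; name "b": [9999, 9999]
```

The shift(-1) is what implements "not counting the current date": a row that is itself a win still looks forward to the following win.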


Worth remembering that just a few years before Deep Blue beat Kasparov, grandmasters were saying that was indefinitely far off. And that in the face of a clear steady ratings trend. You need to account for the rate of progress, not just the current abilities.

(Yes, there's more to software development than coding.)

(Didn't read the article, it's nytimes.)


An AI can brute-force a winning plan on a chess board. Chess masters study books and patterns. Deep Blue was fed books and other players' moves. When Kasparov was beating Deep Blue, they had to quit and feed Deep Blue more of Kasparov's games.


While this is impressive, building software is about a larger understanding of the environment, the intent of the users, etc. Coding is just translating the solution from the developer's head into something the machine can execute.

This is coding using natural language. It associates natural-language text with code, but it doesn't really understand anything in the way humans understand the larger context.


so.. give it to the noob and see how it goes! :D

Code "generators" have been with us for a long time, yet their only "success" is that they are good at generating "boilerplate", which by definition is a sign of bad programming.


Good luck with that.

Simple information theory arguments assure this will not work out so well.


I’d love to hear more about that. Another poster in this thread asked if you could post those arguments; I’d also like to hear them. Information theory is a topic I’ve worked on before, while designing an AI.


Interesting. Could you provide such an argument?


On a side note, I saw this on Gartner's 2021 Hype Cycle; it was listed with a 5-year horizon, fwiw.


AI cannot “write its own code.”

Also a human with the social skills of AI is not qualified to make software.


How long before people use it to pass leetcode-level interview questions?


The quality of the code it produces is not very good.

I have been trying this out, I can share a bit of my experiences/thoughts below:

-------

I've been writing JavaScript/TypeScript fulltime for ~6-7 years now. Day in, and day out, and I happen to love + ardently follow the progress of the language.

In the middle of my functions using "const" and "for (let thing of things)", it will try to autosuggest code snippets using "var" and "for (var i = 0; i < things.length; i++) { var thing = things[i]; }".

There are two problems I see here:

  1. Languages evolve. Newer language features that devs should be taking advantage of don't get suggested because training data doesn't exist on it yet.

  2. The code quality isn't great, as you have to assume the majority of programmers are not producing the world's best code, and so that is the data it was trained on.

I saw the same thing in Java. Using JDK 16, it would never suggest autocompletions for records, multi-line strings, or pattern matching.

If I had accepted its suggestions, what I would have wound up with was code that TECHNICALLY worked, but was very low quality and used dated techniques.

Many things it suggests can now be solved in a few lines using recent language features that there isn't enough training data on, so it will never suggest them.


Code that uses older constructs is not always bad code. It almost never means the author is an idiot. Code that is poorly readable or badly designed is bad code.

There's not necessarily an advantage to rewriting older code with newer constructs, because the old code works.

You're kind of implying that you don't write bad code. But by your reasoning, all your code is bad, because a newer way will eventually replace it.

So many opinions about code are wrong and I think you take something too seriously when it doesn't really matter. If someone uses var rather than let, that doesn't make them an idiot, it just means they're using an older construct. The difference is so unimportant that it rarely makes a difference to code understandability or readability.

Most developers go through your phase of thinking other people are idiots because they don't know something you do. But in the grand scheme of things, it makes no serious difference to code quality. That pedantic person just wastes everybody's time in a code review and spins their wheels when they could be learning to write more understandable code.


> I have been trying this out, I can share a bit of my experiences/thoughts below

How do I try it? Any instructions on setting it up?


It is explained in the video...


Leetcode interview questions will adapt to ask for the wrong answer. You'll have to write broken code to prove you can code better than the correct code AI can spit out.


But can it reverse a binary tree
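For the record, the meme this refers to is inverting (mirroring) a binary tree, which is only a few lines anyway; a minimal sketch in Python, with a hypothetical Node class (not from any particular library):

```python
# Hypothetical minimal binary-tree node, just for illustration.
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right


def invert(node):
    """Mirror the tree in place: recursively swap left and right subtrees."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node


# Before:   1        After:   1
#          / \               / \
#         2   3             3   2
root = invert(Node(1, Node(2), Node(3)))
```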


Just don't teach it Lisp.


The title is misleading.


Okay, so something that doesn't understand the domain "writes" lots of garbage copy-and-paste code, and then an expert is called in to "fix" it. This situation seems very familiar.

Let me explain where this leads. I work in industrial automation, where the code is regularly poorly created by "local" experts and people from outside the field. This always fails, flails and bails after a certain complexity is reached.

Then externals are brought in to "fix it", and the created mess is used as negotiating mass. After all, the project was "well under way" and is "almost completed". You are also given only a very small "time frame", as the "software is almost done". When this happens, take a peek at the software and then just stand up and leave. Otherwise you will spend days, unpaid, educating the project management about the most basic elements of software development, which will then instantly be forgotten as post-traumatic memory suppression kicks in.

Nothing can help these companies, because they are not even able to perceive the problem. They do not understand the value of abstraction, nothing about abstraction layers; often they miss even the difference between the complexity of performing a task (e.g. writing code) and the complexity residing inside the domain (e.g. writing device-controlling software). These are projects where taking an instance of the same project with different parameters can easily result in the same amount of time being spent again.

You can easily identify the presence of such companies by looking at the tools. Usually they have been deformed by "Project Management" to be easier, aka more like Excel.

I'm not kidding. This is how they plan all projects, this is where they expect to find difficulties, this is how they perceive the world, and this is how they want to solve it. An instance is a copy-and-pasted cell.

When they try to "advance the field" and make their projects less of a death march, no-code tools and soon AI code generators are their way to go. And more Excel. Expertise dependency is to be avoided like the devil, thus all tooling has to be usable by non-experts. If you have been in this long enough, you can read their thoughts from across the hallway:

"We can do this cheaper.

No need for someone to study this.

Why are my projects failing.

I need to get out of this field.

I need to promote somebody who has no idea into my place, so I can get out or up while I still can."

So with every project in these companies you get a new liaison face, because nobody wants to manage software projects.

These are not software companies, and they cannot build and ship complex software. But they have to, because it's part of their business now. They are stuck in a procedural, copy-pasted world, in the late '70s, where the sun never set on non-object-oriented code.

These are companies where the "software team" is the only thing growing exponentially.

By now I honestly appreciate the scammers milking the idiots with coding tools for every cent, but I think if somebody out there tried to solve the situation by forcing education upon the "local" experts, that would help a lot.


It writes JavaScript; that doesn't count.

I doubt it read any API specs and implemented the code for that matter.

The sentence he wrote in this case is probably a programming language that got transpiled to JS.

disclaimer: Only saw the picture because the article is behind a paywall.


Codex has the capacity to write decent Python code. A large part of getting great results from it is writing clear, well-separated prompts. Also, taking it off streaming mode to get best-of-N results, and penalizing repetition, improves the output dramatically. Even better if you can give further hints (e.g. "import pandas") before submitting.

Of course, it's merely trained on code it's seen on GitHub, so it certainly has a particular smell to it (disclaimer: I focus on Data Science related code, which is not always of the highest quality and has its share of cargo-culting).

Most of the demos you will likely have seen are in streaming mode, with vague prompts and a high temperature setting.
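Those settings map onto the beta-era `openai` completions API roughly like this (a sketch only: the engine name and the example prompt are assumptions from the 2021 beta, and the actual network call is left commented out since it needs an API key):

```python
# Hypothetical parameter set for a Codex completion request via the
# `openai` Python package (2021 beta-era names; engine name is an assumption).
params = dict(
    engine="davinci-codex",   # assumed Codex beta engine name
    prompt=(
        '"""\n'
        "Load trades.csv and total the profit per ticker.\n"
        '"""\n'
        "import pandas as pd\n"   # hint: nudge the model toward pandas
    ),
    max_tokens=256,
    temperature=0.2,          # low temperature: fewer rambling completions
    stream=False,             # off streaming mode...
    n=3,                      # ...so several candidates can be returned
    best_of=5,                # sample 5 server-side, keep the best 3
    frequency_penalty=0.5,    # penalize repetition
)
# response = openai.Completion.create(**params)
```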


> I doubt it read any API specs and implemented the code for that matter.

That should be easier than an unbounded philosophical discussion with an eight-year-old whose sensemaking vocabulary is still in formation.



