
You can't necessarily judge the future of a technology by its past. Consider transportation. Imagine it's 1936: automobiles have been around for 50 years, but there are still plenty of people getting around by horse. Some people claim that in another 50 years, by 1986, horses will hardly be used for transportation compared to cars; others say that horses have been used for thousands of years and there's no way they'll ever go out of style.

Programming languages exist today because computers can't handle ambiguity and don't understand software design. In another 50 years, machines will be a lot smarter, more able to handle ambiguity, and better than people at designing workflows and catching potential errors. Like the horse, no doubt some people will still prefer to do things the old way, but there's a good chance this will be limited mostly to academic exercises.

All they're saying here is that the tools we have will progress a lot in the next 50 years. There are some obvious problems with the way we design software right now which are due to human limitations. The only way to fix those is to remove a lot of direct control from humans and give it to AI programmers. Manually writing JavaScript in 2066 will be like manually carving arrowheads today: still effective but not something you would do for a serious purpose.



Your example actually cuts the other way. Imagine it's 1966 and someone tells you that cars, trains, and planes will "have to be dramatically different in 50 years," yet, lo and behold, a trip from NYC to LA takes about the same amount of time now as it did back then, and the Northeast Regional is a hair slower than the Metroliner used to be.


I was focusing on 50 years into the development of the technology as a rough analogy. By 1966 it was much more mature, but look at how much things have changed. A mechanic from 1966 would find today's cars completely unrecognizable. They might appear somewhat similar from the outside, but on the inside they're basically just giant computers. We now have cars with varying levels of self-driving capabilities, drones replacing pilots, traditional pilots being essentially babysitters for autopilot systems, and hyperloop designs. I'd say those are much bigger changes than 1936-1986.


Cars are not "basically just giant computers" on the inside. Computers are used to control various engine parameters, and aspects of the transmission and suspension, but all the parts that make the car go are just refined versions of what existed in the 1960s. Okay, so now we use computers to control valve timing instead of mechanical means. But the principles of what valves are and how they make the engine work are very similar to 1966.

And that computing horsepower mostly goes toward fuel efficiency and safety. Which is nice, but almost certainly not the kind of progress people in the 1960s thought we'd make in automobiles over 50+ years.

> traditional pilots being essentially babysitters for autopilot systems

The first autopilot takeoff/cruise/landing happened in 1947.

> hyperloop designs

But we don't have hyperloops.

> I'd say those are much bigger changes than 1936-1986.

By 1986 we had fully-digital fly-by-wire aircraft. Our big achievement since then has been about a 20% improvement in fuel efficiency.


The concept of which change is more or less significant can be pretty subjective. I'm not talking about things like fuel efficiency, although those are some really interesting facts. Autopilot in 1947! I didn't know that one. Yes, cars and jets still use the same basic architecture for what makes them move, but the control mechanisms for that architecture have completely changed.

To bring your comparison closer to the subject at hand, this article has nothing to do with the design of computers themselves. We could use the same basic von Neumann architecture in 50 years and still get rid of traditional programming languages as a primary method of designing software, just as we use the same basic engine designs from 50 years ago but use entirely different methods of designing and controlling them now.

Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow. Now computers are heavily involved in the design process and most of what was done manually by engineers is now written into software. The same situation will happen 50 years from now for people who design software.

Take an extreme example like game creation. In 1966, you could make a computer game, but you were doing manual calculations and using punch cards. Now you download Unity and almost everything but the art design and game logic is done for you. Game design moved quickly toward these kinds of automated systems because they tend to have highly reusable parts and rely mostly on art and story for what separates them from the competition. But there's no reason why this same concept wouldn't apply to tools used for any kind of program.

The horse to car comparison was only meant to show that the development of a technology in the first 50 years (or any arbitrary number) will not necessarily look like the next 50 years. Well-established tools quickly fall out of use when a disruptive technology has reached maturity, even if that tool has been used for thousands of years. Right now, software design is difficult, buggy, and causes constant failures and frustrations. Once we have established and recorded best practices that can be automated instead of manually remaking them every time, there will be no need for manual coding in traditional programming languages. Machines are getting much better at understanding intent, and this will be built into all software design.


"Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow."

Send them to the "PCs for seniors" course at the local library to learn the basics of clicking around on a computer. Then a one- or two-week training course on whatever software is used to design planes these days.

Getting up to date on a modern "workflow" is not going to be a major hurdle for someone smart enough to design a jet. Heck, it's very likely there's someone who started designing jets in 1966 and still designs them today. (Post-retirement consultancy.)


My point was not that they wouldn't be able to learn it, only that the tools and methods of design have changed and become much more automated. That process has not stopped; it has only accelerated. The people in this article are saying that the process of making software in 50 years will be very different from the modern method. It will rely heavily on automation, and what is now done manually by writing in programming languages will be integrated into systems in which the intent of the designer is interpreted by a machine.

You can see it in IDEs today. They already analyze and interpret code, and this is extremely primitive compared to what we will have in 50 years. The progress of machine intelligence is clear and doesn't require any major breakthroughs to continue for the foreseeable future. It will be as irresponsible for most people to write everything manually in 50 years as it is not to use a debugger today. No doubt there will be people doing things the same way, just like we have traditional blacksmiths today, but we will not have billions of people typing into terminals in 50 years.

The criticism is against the idea that in the future everyone will need to learn how to code in the same way everyone needs basic arithmetic. That is not a plausible version of the future. It's trending the other way: more automation, more code reuse, less manual entry.


"Now you download Unity and almost everything but the art design and game logic is done for you."

Yes, Unity helps to visually organize your game's data, and there are built-in and downloadable components (which are all created by coders) that can be used to plug into your game, but it's just another set of abstractions. Most of the time you will be writing your own components in a traditional coding language or delving into others' component code to adapt it to actually make your game function. There ARE game creation systems intended to require no coding, but they come with the expected limitations of visual coding that people are bringing up in this thread. No, Unity doesn't really fall into this category, barring a few limited game domains.

Perhaps in 50 years every domain will be "mapped" in this way, with predefined components that work with each other and can be tweaked as needed, but I don't see how that could eliminate coding, or even displace it that much. Two reasons I think coding is here to stay:

1) Any sufficiently complex system needs its organization to be managed. At a certain complexity, whatever system is replacing coding will become something that looks a lot like coding. At that level of complexity, text is easier to manage than a visual metaphor.

2) Most pieces of software need custom components, even if only to stand out. Those game creation systems with no coding? No one is impressed by the games that are created in those systems. Not because the system cannot produce something worthwhile, but because with everything looking the same, the value of that output drops substantially.

I think coding will only go away when programming does: when the computer is as intelligent and creative as we are. And that's a point I do not want to think about too much.


I think we'll reach that point in 50 years because we already have computers with certain types of intelligence that exceed ours. Translating human intent into machine language does work with coding, but we have to admit that it's not ideal. There are too many mistakes and vulnerabilities. Even the smartest people create bugs.

This is like the shift in transportation. A lot of people love driving and mistrust autonomous vehicles, but the tech is almost to the point where it's safer than human drivers. In most situations, it already is.

Another comparison would be SaaS. For a lot of companies, it's about risk mitigation. Moving responsibilities away from internal staff makes business sense in many cases.

This is a criticism of the idea that we need to make coding a basic life skill that everyone should focus on. It looks a lot like denial to some people.

Let's go back to transportation. Imagine people pushing the idea that commercial driving needs to be taught in every high school because driving is such a big employment area. Some people might say that autonomous vehicles look like a big threat to those job prospects, so maybe it's not such a good idea to focus on those particular skills.

Coding is great and provides a lot of opportunities to the people it attracts, but it's a pretty specialized skill that, in all likelihood, is going to be increasingly displaced by more natural and automatic interfaces this century.


Well, it's dramatic in the little things, but not so much in the big things.

Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

Cars are much safer in collisions than they used to be.

Most cars now have air conditioners. I've driven in a car without AC in Arizona in July; believe me, AC can be a really big deal.

Most cars now have automatic transmissions, power steering, and power brakes.

And cars get much better fuel economy.

Driving from NYC to LA takes less time due to interstates and higher road speeds (and cars that can comfortably handle those speeds). Not half the time, but still a significant improvement.

And yet, most cars are not dramatically different as far as the experience of driving them is concerned. Nothing in the last 50 years looks revolutionary. It's been an accumulation of improvements, but there has been no game changer.

I suspect that the next 50 years in computing will be similar.


>Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

I'm curious what your definition of tune-up is, because I don't believe there exists a car that can go that far unmaintained without doing lasting damage to various systems.

After a quick Google, my impression is that most 2016 cars have a first maintenance schedule around 5k-6k miles. Some as low as 3,750.


I don't think an oil change is a tune up. Maybe it is. My Honda has 80k miles on it and has had oil changes plus tire replacements. That is it. Compare that to a 1970s car and what it would need in its first 80k miles.

For even lower maintenance, look at electric cars. I think Teslas have very low maintenance requirements for the first few years.


> I don't think an oil change is a tune up. Maybe it is.

It's not.

I have a couple of 60s Mustangs and several newer cars. My original '65 needs ignition service (what most people call a "tune up") every couple of years (of very modest usage). My '66, converted to electronic ignition, gets about twice as long (and 10x as many miles) before needing ignition service. They both end up fouling plugs because of the terrible mixture control and distribution inherent in their carbureted designs.

My wife's 2005 Honda CR-V gets about 100K to a set of plugs. (Fuel injection, closed loop mixture control, and electronic ignition are the key enhancements that enable this long a time between tune-ups.)

My diesel Mercedes and Nissan LEAF obviously never get tune ups.


> My diesel Mercedes and Nissan LEAF obviously never get tune ups.

You don't do valve adjustments on the Mercedes?


No. I have the OM606 engine. Hydraulic lifters eliminate the need for mechanical valve adjustments as on the older diesels.

About the only thing I've done abnormal on the car in 7 years is replace two glow plugs. (And when the second one went, I actually replaced the 5 that hadn't been changed yet, since they are cheap and I didn't want to take the manifold off again to change #3...)


Actually, the Nissan Leaf can, although I'd be really concerned about the brakes at that point.


There are currently no signs that what you think will happen will happen. Soft AI is the only place where anything is moving on that front and the movement is infinitesimally small. Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.


Years don't make progress on their own; people working during those years push progress forward. The estimated population of ancient Babylon at its height was 200,000. Let's imagine that 1% of them were working on developing writing for at least 2 hours every week, and that those who came after them were able to maintain that level of work for the 1000 years until ancient Greece: over 200 million hours of work. That's less time than the official Gangnam Style video has been watched on YouTube.
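
For the curious, here's that back-of-the-envelope estimate as a quick Python sketch (every input is an assumption from the paragraph above, not historical data):

    # Rough sanity check of the "over 200 million hours" figure.
    population = 200_000    # ancient Babylon at its height
    fraction = 0.01         # assume 1% worked on developing writing
    hours_per_week = 2
    weeks_per_year = 52
    years = 1000            # Babylon to Archaic Greece

    total_hours = population * fraction * hours_per_week * weeks_per_year * years
    print(f"{total_hours:,.0f}")  # 208,000,000 -- "over 200 million hours"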

In 50 years, 99%+ of all the work ever done by civilization will be done after 2016.


As long as we're criticizing the analogies in the discussion (rather than the actual arguments), I'd say the hours spent do not have a consistent quality vis-a-vis solving hard problems. The fact that there are more absolute hours available does not mean that there are more hours available for solving hard AI problems. There are very likely fewer. And there has been virtually NO progress on the hard AI front.


Hard, human-level AI would help this a lot, but it isn't necessary. All that's required for traditional programming to become obsolete is for computers to be much better at understanding ambiguity and to have a robust model of the flow of programs. With today's neural networks and technology, I have no doubt it would be possible to design something that would create good code based on all the samples on GitHub. Not easy by any means, or someone would have done it, but it doesn't require any breakthroughs in computer science, just lots of data and good design. The tools referenced in the articles are primitive working versions of this.


There's an important distinction though between being able to write a compiling (or even functional) program and being able to write a program that serves a particular purpose.


I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.


> I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.

Creating clear and accurate design documents is so much harder and more specialized a skill than programming that many places that do programming either avoid it entirely or make a pro-forma gesture (often after-the-fact) in its direction.

(I am only about half-kidding on the reasoning, and not at all about the effect.)


"creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set."

This is exactly what programmers do today. We just call the "design document" a "program".

Over time, our design documents become higher and higher level, with the programmer having to specify fewer details and leaving more of the work of sorting out the actual details to the computer.
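
As a minimal illustration of that climb (a Python sketch with made-up data, not anything from the article), both "design documents" below produce the same result, but the second specifies far fewer details and leaves the algorithm to the machine:

    # Two "design documents" for the same task: order people by age.
    people = [("Ada", 36), ("Grace", 85), ("Alan", 41)]

    # Lower level: the programmer spells out every step (insertion sort).
    by_age = list(people)
    for i in range(1, len(by_age)):
        j = i
        while j > 0 and by_age[j][1] < by_age[j - 1][1]:
            by_age[j], by_age[j - 1] = by_age[j - 1], by_age[j]
            j -= 1

    # Higher level: the programmer states the intent and the runtime
    # sorts out the details (which algorithm, how to compare, etc.).
    by_age_declarative = sorted(people, key=lambda p: p[1])

    assert by_age == by_age_declarative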


Yes, exactly! That's what the article is claiming.


Why do you assume that this design document would be simpler to create than the traditional computer program? Because otherwise, this is exactly what happens now.


There are some fairly aspirational claims about how it might be different in this paper, which is a great read:

http://shaffner.us/cs/papers/tarpit.pdf

There has already been some significant progress on this front. E.g., SQL and logic programming let you describe what you want to happen, and let the computer figure out some of the details. Any compiler worth using does this, too. Smarter machines and smarter programs will mean smarter programming languages.
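
For instance, here's a minimal sketch using SQL through Python's standard-library sqlite3 module (the table and figures are made up for illustration). The query states what result is wanted; the engine decides how to scan, group, and aggregate:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 30.0), ("bob", 15.0), ("alice", 20.0)])

    # Declarative: describe the result, not the procedure. The engine
    # picks the execution plan -- the "details" are left to the computer.
    for row in conn.execute("SELECT customer, SUM(total) FROM orders "
                            "GROUP BY customer ORDER BY customer"):
        print(row)  # ('alice', 50.0) then ('bob', 15.0)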


Design is always going to be a part of creating something. What this article is arguing is that manual typing of text by humans using traditional programming languages will not be the primary means of implementing those designs in the future. We don't yet know how to make computers into good designers, but we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.


My question is: how does drawing rather than writing simplify anything? I.e., what is the gain from moving from traditional programming to some sort of theoretical picture programming? Is it that you can draw lines between things rather than just assuming that the line from one symbol points to the next symbol on the line? Does that simplify things, or make them more complicated?

> we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.

I disagree. Maybe you know, but I haven't seen any indication of the sort.


Drawing rather than writing is just one method. A lot of it will likely be conversational. I could imagine a designer with an AR overlay speaking to a computer which offers several prototypes based on an expressed intent. The designer chooses one and offers criticism, just as a boss would review an alpha version and suggest changes. The machine responds to the suggestion and rewrites the program in a few fractions of a second. The designer continues the conversation, maybe draws out some designs with a pencil, describes a desire, references another program which the machine analyzes for inspiration, and the machine adjusts the code in response. This is just one of many possible examples.

The point is that software design is trending toward more automation. Coding is not a new essential skill that everyone will need in the future. Human-machine interactions are trending toward natural and automated methods, not manual code entry. Most people need to learn to be creative, think critically, and analyze problems, not learn the conventions of programming languages.


Analogies are always a rabbit hole. Haha.


> for us to go from writing with only consonants to using vowels for the first time

Speaking as someone who has studied cuneiform and Akkadian, I would say that this claim isn't true. Here's a vowel that predates the period that you mentioned[0].

[0] https://en.wikipedia.org/wiki/A_(cuneiform)


> Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.

Where did you get this idea? Babylonian writing fully indicated the vowels. It always has. You're thinking of Egyptian / Hebrew / Arabic writing.

Even where a semitic language was written in cuneiform, vowels were always indicated, because the cuneiform system didn't offer you the option to leave them out. https://en.wikipedia.org/wiki/Akkadian_language#Vowels

(Old Persian was written in repurposed cuneiform, and therefore could have omitted the vowels, but didn't.)


>It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time

Yeah, and it took "us" 60 years from discovering flight to landing a rocket on the moon. It took "us" 60 years from the first computer to globally live video streaming in your pocket. Time is a pointless metric when it comes to technology. You don't know what someone is cooking up in some basement somewhere that will be released tomorrow and shatter your concept of reality.


I wonder if 'leisure-person years' is a better metric of progress (where 'leisure' is defined as the number of hours you can spend neither raising/searching for food nor sleeping).

It'd be really hard to identify, though.



What do you mean by computers handling ambiguity? At the end of the day, for an idea to become crystallized it needs to be free from ambiguity. That is the case even in human interactions. When using ambiguous language, we iterate over ideas together to make sure everybody is on the same page. If by handling ambiguity you mean that computers can go back and forth with us to help us remove ambiguity from our thoughts, then they are basically helping us think, or in some sense doing the programming for us. That is a great future indeed! A future where AIs are actually doing the programming in the long run! But with this line of thought we might as well not teach anything to our kids, because one day computers will do it better. Especially if we've already established that they can think better than us :)


Let's teach our kids the higher-level stuff that doesn't ever get old: thinking clearly, engaging in creativity, solving problems, whether through code or whatever means appeals to them. Let's give them options and opportunities, not just mandate memorizing specific facts. Let's teach kids computer science instead of just programming, creative writing instead of just grammar, mathematics instead of just algebra. Let's engage their imagination, not just their instinct to conform to expectations!


The best "programming" curricula aimed at general education teach (elements of both) generalized problem solving and computer science with programming in a particular concrete language or set of languages as a central component and vehicle for that (and often incidentally teach elements of a bunch of other domains through the particular exercises.)

This is particularly true, e.g., of How to Design Programs [0].

[0] http://www.ccs.neu.edu/home/matthias/HtDP2e/


Let's teach them computer science with programming as a fantastic way to concretely demonstrate its abstract ideas. (The same goes for math vs. arithmetic!)


Yes, definitely. Too often the application of an idea is taught without understanding of the idea itself. Then we get standardized testing and focus not even on the application but on the ways the application of the idea will be stated on a test. We still need the conceptual framework to learn anything lasting!



