
There's a serious gap in the writer's understanding of computation and programming. It's like the author is suggesting that "eventually we won't need writing: it will be replaced by writing-thinking or picture-writing". It's completely absurd. Specific, complex ideas can only be described and communicated in text, not pictures. Blueprints, for example, have a pictorial element to them, but their fundamental value is our ability to use the formal language to analyze what's on the plan and whether it is correct or not. To the degree that a picture or a motion graphic can formally accomplish this, it is because it is supported by a specific language under the covers. Not the other way around.



Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative. Given the ease of preparing written text compared to drawing schematics, nobody would go to the trouble of drawing them if that weren't the case.


Of course, all of the pieces of information that blueprints and schematics are conveying are 2D layouts. Once you're out of the realm of things whose forms can be reproduced at reduced scale, or simplified to functional equivalents that lie on a plane, the types of useful visual representations of things are sharply reduced; they become extremely stylized, and symbolic. At that point, you've basically arrived at language again.


An interesting angle on the topic comes from my father, who at one time was a project manager and designer in the construction industry. In the days before computers he would painstakingly hand-draw the design that was reproduced as blueprints; that was the role of the "draftsman".

But the drawing wasn't the source of stress; rather, it was the project "specification" that he sweated over. The issue was that the spec was a legal, text-format document detailing the size of beams, type of wire, plumbing, fixtures, etc. He had to ensure that beams were sufficient to support the structure, electrical wiring was safe and up to code, etc. A mistake could expose the contractor and himself to legal liability if a component failed, so an accurate spec was a task he took seriously.

Of course the subject of program specifications is commonly discussed, though it often doesn't have the same significance that my father experienced. I guess in most cases program crashes don't have the same impact that a roof caving in would entail. In situations where crashing can't be tolerated, the spec will mean a whole lot more.


I work in the same construction design industry. The drawings themselves are also contractually binding. Many smaller jobs forgo the written specifications altogether.


My father had mostly worked on larger projects, tract houses and the like. Of course times change; my recollection is of how things were a long time ago. My comment was just illustrating an instance where relying on a text description was still important even though there was a graphic format as well.

Your info was relevant to the idea that at some level of complexity it becomes necessary to use text vs. only graphic presentation. Maybe in construction that occurs when there are more than a few elevations to juggle, but you probably know much more about it than me.


If you had a blueprint of the whole of New York City, you surely would need some tool to abstract away the maze of individual lines and be able to refer to / work with concepts like "Central Park", "Harlem", or "the Brooklyn Bridge".

It is not about how much more information we can convey, but how much less data must be expended to present a tractable model of reality to the human operator. Conveying more detail is worse than useless; it results in information overload and cognitive stagnation.

Historically, the way it happened in computer programming is that those tools are text based. This has as much to do with the early use of computers as clerical aids for processing business data as with the early synergies between computation and linguistics. Maybe it can be done graphically, but it will require millions of man-hours to accomplish. And almost nobody wants to invest in doing so because of the opportunity cost.


Of course, there's the ability to zoom and pan to get the appropriate level of detail. There's a reason Google Maps isn't a text adventure.


In Google Maps, the ability to zoom relies heavily on an (unacknowledged) property of the problem domain: planar geometry. If every relevant detail is nicely clustered together and, more importantly, every irrelevant detail is nicely clustered far away from wherever you are zooming in, then sure!

If, on the other hand, you cannot ever be 100% sure that fixing one stop light in Brooklyn won't cause a bunch of sewage lines to flush out onto the street in Long Island, then zooming does more harm than good. At the end of the day, you need the map to conform to the realities of the territory. If reality gets in the way of that pretty abstraction of yours, then the abstraction, not reality, is wrong. And when that is the case, you need to start over and make a better map.

Text-based toolchains are, for all their limitations, a (sufficiently) reality-conformant map. That does not mean there cannot be others, but as of today I do not know of any suitable candidate.
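
To make the zooming point concrete, here is a minimal sketch (Python, names and numbers invented purely for illustration): zooming is essentially a bounding-box filter, and it only helps when relevance correlates with spatial proximity.

    # Toy "zoom": keep only the features whose coordinates fall inside the viewport.
    # This works for a map because the nearby things are the relevant things.
    def zoom(features, x_min, y_min, x_max, y_max):
        return [f for f in features
                if x_min <= f["x"] <= x_max and y_min <= f["y"] <= y_max]

    city = [
        {"name": "stop light, Brooklyn", "x": 10, "y": 20},
        {"name": "sewage line, Long Island", "x": 90, "y": 40},
    ]

    # Zooming in on Brooklyn silently drops the Long Island line: harmless on a map,
    # misleading if the two are causally coupled, as in the example above.
    print(zoom(city, 0, 0, 50, 50))

In a codebase the "coordinates" are call graphs and data flow, which do not cluster on a plane, so the same filter hides exactly the dependencies you needed to see.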


When I write "2.5 mm", this is not narrative. If you want to explain "2.5 mm" without using text, how would you do that? The only way to do it is to use something literal from the real world. That's what we're talking about when we're comparing blueprints to programming. I think the word is "literal". You can't avoid the need for text when it's precision we're after.


> Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative.

Blueprints don't change as much as software does. It's not generally interesting to diff, fork, reformat, or patch a blueprint.


Hmm, I don't think you've ever worked on designing a building. Being able to diff two sets of plans would be hugely beneficial.


Graphics can be useful in some domains but nothing beats text in the general case.


Just imagine a compiler that scans your diagram written by hand on a piece of paper, translates it into an AST, then interprets it or even produces an executable.
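
Something like this toy pipeline, assuming a hypothetical recognizer has already turned the scanned boxes and arrows into nodes (the structure below is invented for illustration):

    # Hypothetical output of the diagram scanner: boxes become operations,
    # arrows become operands. This stands in for the real shape-recognition step.
    ast = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))

    def interpret(node):
        # Walk the AST recovered from the diagram and evaluate it.
        kind = node[0]
        if kind == "num":
            return node[1]
        if kind == "add":
            return interpret(node[1]) + interpret(node[2])
        if kind == "mul":
            return interpret(node[1]) * interpret(node[2])
        raise ValueError("unrecognized shape: " + kind)

    print(interpret(ast))  # 14

The hard part, of course, is the scanning step that this sketch assumes away.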


And then realize that your diagram was misinterpreted and you have a big bug in said executable.


It used to be that CPUs were designed with schematics (drawings). Today, they seem to be designed with text (VHDL or Verilog). I wonder why?


Basically all other electronics is developed with schematics though.


Why do you think textbooks (and ancient works) are written in text, not comics?


Why do you think Euclid drew diagrams and didn't write everything out in text?


I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs. Even when "hieroglyphics" is used as a term of abuse for programming language syntax -- it ends up pretty popular.

I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work -- if only it refrained from doing the f*cking stupid things that LabView did (such as the strongly typed editor that would automatically propagate any type error it found, but not your fixes).

In my current C++ work, I would dearly love a graphical tool that showed me where any given value came from, much like LabView does by its very nature.


"I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egytian hieroglyphs."

My understanding is linguistics research has pretty thoroughly debunked this idea.

Don't remember the experimental design (was a long time ago, sorry), but I believe a study showed Chinese readers basically translate the characters back into the sounds of spoken language in their heads, before any processing of meaning takes place. In other words, pictographic mnemonics may be helpful when first learning the characters, but play no role for a fluent reader.

I suspect a similar thing will be true of programming for a long time to come. Even if you try to replace keyboard characters with other icons, it will just be substituting one arbitrary association between symbols and meaning for another. (Which is basically what language boils down to, anyway.)


> I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work

That's funny. I came away with the opposite opinion. Text is much better at describing details, and it's much more easily consumed by various things: people, editors, analysis tools, web apps, test engines, code generators, code transformation tools... I could go on.

Languages like LabView never have a complete toolchain (Prove me wrong by posting a small piece of editable LabView in a reply to this HN comment). They work well as domain specific languages, but that's about it.


> I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs.

Based on these two sentences, I'm confident that you don't know the first thing about Chinese characters or Egyptian hieroglyphics.


I think we can distinguish them. Ideograms and hieroglyphs have very, very specific rules about how they can recombine, and that has nothing to do with their pictorial aspects. It has to do with semantic / grammatical aspects.


As someone who is awful at Pictionary, I hope so as well. Just today, I defined a class with 4 functions. I had another function that created an instance of the class and called one of the functions. It changed a variable that would show up in the web browser formatted by CSS. And I can't even draw a dog in Pictionary...
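
Roughly what that looks like in code (every name below is hypothetical), and it's hard to imagine drawing it instead:

    class Widget:
        # A class with four functions, as described above (names invented).
        def __init__(self):
            self.status = "idle"

        def load(self): ...
        def validate(self): ...
        def render(self): ...

        def update_status(self, value):
            # In the real code this value ends up in the browser, styled by CSS.
            self.status = value

    def run():
        w = Widget()              # the other function: create an instance...
        w.update_status("done")   # ...call one of its functions, changing a variable
        return w.status

    print(run())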


Emojis = "picture-writing"


And with emojis you can describe how to build a bridge precisely?




