I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs. Even when "hieroglyphics" is used as a term of abuse for a programming language's syntax -- the language ends up pretty popular.
I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work -- if only it refrained from doing the f*cking stupid things that LabView did (such as the strongly typed editor that would automatically propagate any type error it found, but not your fixes).
In my current C++ work, I would dearly love a graphical tool that showed me where any given value came from, much like LabView does by its very nature.
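What I'm after is basically value provenance. A rough sketch of the idea in plain C++ (the Traced wrapper and the origin labels are made up purely for illustration, not from any existing tool):

    #include <iostream>
    #include <string>

    // A tiny value-provenance wrapper: every Traced<T> remembers the
    // expression it was built from, so you can ask "where did this value
    // come from?" without a graphical editor.
    template <typename T>
    struct Traced {
        T value;
        std::string origin;  // human-readable provenance

        Traced(T v, std::string label) : value(v), origin(std::move(label)) {}

        friend Traced operator+(const Traced& a, const Traced& b) {
            return Traced(a.value + b.value,
                          "(" + a.origin + " + " + b.origin + ")");
        }
        friend Traced operator*(const Traced& a, const Traced& b) {
            return Traced(a.value * b.value,
                          "(" + a.origin + " * " + b.origin + ")");
        }
    };

    int main() {
        Traced<double> rate(0.07, "config:tax_rate");
        Traced<double> price(19.99, "db:item_price");
        Traced<double> total = price + price * rate;

        std::cout << "total = " << total.value
                  << "  from " << total.origin << "\n";
        // Prints: total = 21.3893  from (db:item_price + (db:item_price * config:tax_rate))
    }

Even something this crude answers "where did total come from?" the way LabView's wires do implicitly.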
"I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egytian hieroglyphs."
My understanding is that linguistics research has pretty thoroughly debunked this idea.
Don't remember the experimental design (was a long time ago, sorry), but I believe a study showed Chinese readers basically translate the characters back into the sounds of spoken language in their heads, before any processing of meaning takes place. In other words, pictographic mnemonics may be helpful when first learning the characters, but play no role for a fluent reader.
I suspect a similar thing will be true with programming for a long time to come. Even if you try to replace keyboard characters with other icons, it will be just substituting one arbitrary association between symbols and meaning with another. (Which is basically what language boils down to, anyway.)
> I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work
That's funny. I came away with the opposite opinion. Text is much better at describing details, and it can be consumed by all sorts of things: people, editors, analysis tools, web apps, test engines, code generators, code transformation tools... I could go on.
Languages like LabView never have a complete toolchain (prove me wrong by posting a small piece of editable LabView in a reply to this HN comment). They work well as domain-specific languages, but that's about it.
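To underline the point about text being consumable: here is a complete, editable "tool" pasted right into a comment as plain text -- a crude scanner that prints every line of a source file mentioning a given identifier. (Nothing here comes from LabView or any particular toolchain; it's just an illustration of how cheap text-based tooling is.)

    #include <fstream>
    #include <iostream>
    #include <string>

    // Usage: scan <file> <identifier>
    // Prints every line of <file> that mentions <identifier>.
    int main(int argc, char** argv) {
        if (argc != 3) {
            std::cerr << "usage: " << argv[0] << " <file> <identifier>\n";
            return 1;
        }
        std::ifstream in(argv[1]);
        if (!in) {
            std::cerr << "cannot open " << argv[1] << "\n";
            return 1;
        }
        std::string line;
        int lineno = 0;
        while (std::getline(in, line)) {
            ++lineno;
            if (line.find(argv[2]) != std::string::npos)
                std::cout << lineno << ": " << line << "\n";
        }
    }

Try shipping, diffing, or code-reviewing the LabView equivalent of that in a comment box.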
I think we can distinguish. The ideograms and hieroglyphs have very, very specific rules about how they can recombine, and that has nothing to do with their pictorial aspects. It has to do with semantic and grammatical aspects.