
That was a sad thing to read; the author is so clueless they don't even realize that the reasons they imagine for something being broken are wrong.

Back when UNIX was born the <tab> character was a first-class citizen on every computer on the planet, and many languages used it as part of their syntax. Static binaries were invented when Sun and Berkeley co-developed shared libraries and there needed to be binaries that you knew would work before shared libraries were available (during boot before things were all mounted, during recovery, etc.).

It always amazes me when someone looks at computer systems of the 70's through the lens of "today's" technology and then projects a failure of imagination onto those engineers back in the 70's. I pointed out to such a person that the font file for Courier (60-75K, depending) was larger than the entire system memory (32KW, or 64KB) that you could boot 2.1BSD in.

Such silliness.



The most important thing about UNIX - no matter how simplistic (or not) it might appear or how significant (or not) the perceived flaws might seem - is that a move to UNIX back in the 70s-80s was liberating, with its simplicity and human friendliness, for so many of those coming from the world of closed-off, proprietary operating systems, walled gardens, development tools and kernel APIs.

Writing a mere string out to a file on a non-UNIX was nowhere near as easy as 'fd = open ("a file", O_WRONLY); write (fd, p_aString, strlen (p_aString)); close (fd);' on UNIX. Many systems required either a block-oriented or a fixed-record file (with the record structure defined first) to be opened, the block or the record to be written out and then the file to be closed. Your record-oriented file has grown very large? Brace yourself for a coffee break after you invoke the "file close" system call on it. Did your process get killed off or just die mid-way through? Well, your file might have been left open and would have to be forcefully closed by your system administrator, if you could find one. Your string was shorter than the block size, and now you want to append another string? Read the entire block in, locate the end of the string, append a new one and write the entire block back. Wash, rinse and repeat. Oh, and quite a few systems out there wouldn't allow one to open a file for reading and writing simultaneously.
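For anyone who never had to do it the hard way, here is the quoted UNIX sequence as a complete, compilable C program - a minimal sketch, with O_CREAT/O_TRUNC added so it works on a file that doesn't exist yet, and error handling mostly omitted for brevity:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *p_aString = "hello, world\n";   /* any old string */
        int fd = open("a file", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        write(fd, p_aString, strlen(p_aString));    /* bytes, not records */
        close(fd);                                  /* no coffee break required */
        return 0;
    }

No record formats to declare, no block sizes to care about - the file is just a stream of bytes.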

Flawed make? Try to compile a 100-file project using JCL or DEC's IND with a few lines of compilation instructions. Good luck if you want expandable variables; chances are there wouldn't be any supported. You want to re-compile a kernel? Forget about it, you have to "generate" it from the vendor-supplied object files after answering 500+ configuration-related questions and then waiting for a day or two for a new "system image" (no, there were no kernels back then outside UNIX) to get linked.

Awkward UNIX shell? Compared to the crimes a number of vendors out there committed, even JCL was the pinnacle of "CLI" design.

No matter how perfect or imperfect some things were in UNIX back then, hordes of people ended up running away screaming from their proprietary systems to flock to UNIX, because suddenly they could exchange source code with friends and colleagues who could compile and run it within minutes, even if some changes were required. Oh, they could also exchange and run shell scripts someone else wrote, etc. In the meantime, life on other planets was difficult.


I remember people saying Unix was the gold standard of user-hostile operating systems.

That was well before I met AS/400 and MVS.

And then I had contact with a Burroughs A-series and its appropriately named OS, MCP.

OTOH, I love the 3270s.


TRON's MCP was named after Burroughs', I'm quite sure. Guess who TRON/Alan Bradley was based on? :)


I think it's for Alan Kay. Bonnie MacBird is his wife.


> It always amazed me when someone looks at computer systems of the 70's through the lens of "today's" technology and then projects a failure of imagination on the part of those engineers back in the 70's

True enough, but as a younger programmer, I find it pretty reasonable to look back at computer systems of the 70s and wonder if we can do better today. I feel a little bit gross every time I have to write a bash shell script (or edit config files that aren't JSON/XML/YAML, for that matter), and I don't think that's a bad impulse. That something so inelegant and unsafe is still in widespread use in 2017 really ought to be a scandal. Even if the author didn't frame the issue in the most charitable way for the earlier trailblazing generations, he's calling attention to the right issues.

In other words, if you couldn't justify something being designed a certain way de novo, why be content with the existing design!?


JSON doesn't have comments so it's a bad choice for human-editable config. YAML doesn't have an end marker so you can never be sure if you've got the entire file. XML is a huge pain to edit by hand if the schema is complicated, and overly verbose if it isn't. None of them are even close to being safe (for example https://arp242.net/weblog/yaml_probably_not_so_great_after_a...). All of those choices fail your "elegance" test.

TOML is my preferred config file language option where I have a choice - https://github.com/toml-lang/toml - but I suspect that suffers a lot of the same problems.


I will capitalize on this derailment to promote luxem, my flexible and minimal JSON alternative: https://github.com/rendaw/luxem#what-is-luxem


The issue with all of these, of course, is that in order to get a system running you have to configure multiple "independent" tools, processes and daemons. Think setting up a web application - you have to configure the web application to listen on a certain port/UNIX socket, then configure your web server to go find it. You then need to scale this up across logical servers separated by a network - your web servers need to communicate with your database, they need some sort of authentication key/password, etc etc. You're never just configuring one thing.

The modern solution would be that there needs to be a network configuration tool which generates specific configurations for each component, is capable of encoding arbitrary invariants, and works consistently. Configuration also needs to be "push" based on events - when a DNS server dies, it should be able to figure out "we need at least 2 DNS servers, we have 1, fire up a new one - then update all systems to know about the new one".

Configuration management systems for Linux, by and large, suck. They're very good at starting from an Ubuntu Server install and building on that, and then get more and more fragile as the system lives on. Some of them (Saltstack, for example) do have some degree of event management - you can run certain commands on certain things happening, but it's not declarative or reactive in the way you'd hope - e.g. you can't just say "this system knows about all DNS servers" and expect it to work. The Docker/Kubernetes ecosystems claim to solve the network configuration problem (in a really awkward roundabout way), but not really intra-system configuration, and it still takes a lot of manual work.

NixOS gets a lot closer - but it needs to be expanded with a constraints solver and an event processing system. It's Turing-complete, so you can encode pretty much whatever you want into it, while still being a reasonable configuration language (basic use of it looks a lot like JSON).

But the point is - the formats individual components use for configuration should be more-or-less irrelevant. They could be completely opaque, so long as it's possible to get from the network config to the individual component's config and it's possible to update that config on-the-fly. In fact, it'd be more useful to standardise on one library which can handle all that for you.


I agree with most of your post, but this is still more complex than just adding a constraints solver and an event processing system. Different things don't just depend on each other; they also require different strategies for dealing with failures. Trying to squeeze everything into a single model will not work well. Maybe something like supervision trees for services might solve that, where supervisors for each service are part of the package and handle everything from automatic configuration to failures in any way they need.



Still does not allow tabs for indentation - same problem as `make` but inverted


>JSON doesn't have comments so it's a bad choice for human-editable config (...)

What about Lua tables?


I second this, I love using Lua tables as a data description language.


Just add comments to JSON, Douglas Crockford can eat his heart out.


>True enough, but as a younger programmer, I find it pretty reasonable to look back at computer systems of the 70s and wonder if we can do better today. I feel a little bit gross every time I have to write a bash shell script (or edit config files that aren't JSON/XML/YAML, for that matter), and I don't think that's a bad impulse. That something so inelegant and unsafe is still in widespread use in 2017 really ought to be a scandal.

As a relatively young programmer i look at the computer systems of the 70's and think to myself "what the fuck are programmers thinking when writing today's programs". I mean we got amazing tools and a plethora of amazing languages, an insane amount of memory and an idiotic amount of processing power, chips specialized for sound and even ones for visuals. And yet everything lags.

As for shell scripts, they are fine. Not that you have to use them, as you can use python, scheme, anything (even C with tcc).

As for XML... oh god... it is the worst, the absolute worst, way to do config files. Jesus help, is it awful. Bad for the user, bad for the processor. Simple key–value pairs (as in "power_level: 9000") are good enough for almost all configuration. Hell, even the INI format is faaaaaaaaaaaaaaaaaaaaaar better than XML (and YAML, JSON (although these are themselves better than XML)).
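To show how little machinery that takes, here is a rough C sketch of reading such key-value pairs - the file name and behaviour are made up for the example, and a real parser would at least handle quoting and report errors:

    #include <stdio.h>
    #include <string.h>

    /* Sketch: read "key: value" lines from a config file and print them.
       Lines starting with '#' and lines without a ':' are skipped. */
    int main(void)
    {
        char line[256];
        FILE *f = fopen("app.conf", "r");        /* hypothetical config file */
        if (!f)
            return 1;
        while (fgets(line, sizeof line, f)) {
            char *sep = strchr(line, ':');
            if (line[0] == '#' || !sep)
                continue;
            *sep = '\0';
            char *key = line;
            char *value = sep + 1;
            value += strspn(value, " \t");        /* trim leading blanks */
            value[strcspn(value, "\r\n")] = '\0'; /* strip the newline */
            printf("%s = %s\n", key, value);
        }
        fclose(f);
        return 0;
    }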

I look at the 70's and think "haven't we learned any fucking thing ?".

PS On topic, the Unix philosophy is still valid. For a plethora of reasons. I typed too much already but if anyone's interested in my "opinion" i'll write it later today. PPS The author of this piece doesn't even know what "the Unix philosophy" exactly is.


XML is nasty, but at least it's nasty in standardised ways. I've definitely worked with worse (e.g. Cadence DEF files).

The lag should be regarded as intolerable, but instead it's routine. I shouldn't have to wait for computers like this.

Shell is a nice idea, but UNIX should either have banned whitespace in filenames or made the shell handle it properly. It's amazing how many things blow up on encountering a space, and you can upset people even further by putting newlines in your file names.
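A small C sketch of where those blow-ups usually come from (the filename here is made up): anything that pastes a name into a command string and hands it to the shell gets word-split, while passing the name as a single argv element survives spaces just fine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const char *name = "my report.txt";      /* filename with a space */
        char cmd[256];

        /* Fragile: the shell re-parses the string and sees two arguments,
           "my" and "report.txt". */
        snprintf(cmd, sizeof cmd, "wc -l %s", name);
        system(cmd);

        /* Robust: the filename travels as one argv element, space and all. */
        execlp("wc", "wc", "-l", name, (char *)NULL);
        perror("execlp");
        return 1;
    }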


ftp://ftp.slackware.org.uk/slackware/slackware64-14.2/ChangeLog.txt

Just ctrl+f for "xml". (edit: nvm, only 3 reports are bugs. although the libxml2 report has 3 somewhat serious bugs)

"XML is a well established standard" is usually the reason people choose XML for anything. I guess i would add "Nobody got fired for choosing XML" to the list of such sayings, if i had such a list.


> what the fuck are programmers thinking when writing today's programs

The difference is that in the old days, you adapted to the computer. Now, the computer must adapt to you.

Take a look at MS's latest text-editor program's release notes (https://code.visualstudio.com/updates/v1_10). They have to build a program that accommodates so many different existing must-have features (from a potential user's POV), and for each feature, accommodate almost every flavor of configuration that has ever existed for any other text editor whose style the potential user is now "locked in to". Otherwise the user will refuse to adopt.

In the 70's, I assume the audience was a pretty tight knit community of computer engineers, coming from a time of very frustrating tooling where it wouldn't be uncommon to lose a day's work because the 'Save' function failed or something like that. In this environment, you learned the crazy unintuitive keymap for vi and called it a blessing.

Now, you yell at your editor if it doesn't remind you to save your work when you close it. Or you yell at it when it does remind you; too many pop-ups! The point is that user preferences are more fragmented and expectations are higher for features, and willingness to learn details and gotchas of a program are lower, which has created the bloated program for the simple things like text editors.


As a contrast, I use sublime. It has (almost) everything I desire. And it saves everything i write, even in the case of a power outage.

Another example would be Acme[0].

>The point is that user preferences are more fragmented and expectations are higher for features, and willingness to learn details and gotchas of a program are lower, which has created the bloated program for the simple things like text editors.

I agree. People are people and people will always be subjective. But there should be a distinction between feature bloat and node.js bloat. Still, ofc, to each his own. (I measure how good it is for me by how much hair i pull out by using it, be it by lack of features, bugs, performance, or anything)

[0]https://www.youtube.com/watch?v=dP1xVpMPn8M and https://www.usenix.org/legacy/publications/library/proceedin...


This might sound odd.

Instead of writing bash scripts, learn awk.

Awk is old, much older than bash, and has largely gone unchanged since '89.

But, despite the fact it was born in an era where we were still nailing down PLT and what was important in a language, it ticks a lot of boxes.

Having written some sizeable (5000+ LOC) programs in it, it kind of feels like a dynamic mash-up of C and Lisp.

Try Awk, and see if quick scripts are still as painful.


Yup, XML is ugly to write by hand, but I'd argue it is nearly as legible as YAML/JSON, with the great-great advantage of being able to be structurally validated.


> wonder if we can do better today

Cough... systems... cough...

We can do better. Plan 9 existed. Apple's Newton had a unique take on data files. AS/400s map all storage to memory addresses, giving a single address space for everything. With Smalltalk, applications would just be new classes you import into your system.

The key to Unix's success seems to be not doing better, but doing as little as possible while still remaining useful.

And yes, it's a bit disappointing that, when we did better, we were less successful. Today, all the dominant OSs are children of either Unix or VMS.

And MVS, which went nowhere, but in the sense it's always been there and, it seems, always will be.


I understand why they wonder but it should be a 10 minute thought exercise in most cases.

You have to define "better". Is it synonymous with "easier", "type safe", "less resource intensive", "object oriented", "functional", "faster", or some combination thereof?

How will you maintain backwards compatibility? Because, despite the fact that you may have just graduated, there are scripts and programs that rely on scripts that go back decades and people are NOT going to rewrite them just because of your new whiz-bang solution.

You may use it and if you represent a large enough organization, it might be viable for that use case but if you're only shopping a solution in search of a problem, you may as well not waste your time.


Plan 9 is not "better enough". The other ones had fundamental non-technical problems that hurt their adoption (Plan 9 had those at the beginning too).

Back to Plan 9: you can expect something that is just slightly better than your current options, and that stays that way, to steadily gain adoption until it's popular. But that requires it being no worse than the current options for any popular use case. Plan 9 may have a great architecture (I'm not sure about that anymore), but OSes compete on many different fronts, like portability, performance, software availability...

If the Plan 9 architecture were something amazingly better that solved a lot of problems, you could be sure some people would have adopted it, and then it would start improving on the other dimensions. But it's not. It's better. It will make your life easier. Yet it will bring no order of magnitude change. So, people simply take a deep breath and write (again and again) the 3 or 4 lines of code required to solve their problem in Unix, instead of changing everything.


I'll say that if you could run a web browser on Plan 9 and maybe there was a POSIX or a Linux compat layer, it could've become popular. It'd be a boost to porting efforts if it was a usable desktop OS. Give me the best OS, but if I have to write a whole desktop/server stack on it to make use of it, I'll stick with what I have at hand.

A plethora of software is available for Unix, and that's why it's still here. Had I the chance to switch to, say, HaikuOS without a big/complete productivity loss, I would.


Your stated reasons aren't valid. I think they ported Mozilla until 2002 (Russ Cox, maybe?). There was also a compat layer: APE.

As a counter to your argument: a system that has little capability compared to the well-established alternatives, starts from nothing, yet takes over the world: Linux.

I really believe there was no greater reason for Plan 9's failure to catch on (or at least its ideas) than my peers not being interested in their own education.


I read the Plan 9 papers and was excited to play with it. I had a .edu affiliation and it was still $500 for a tape and license and a letter saying what you planned to do with it... or some such. I had no budget, so we used Linux.

They did open it up eventually, after Linux had quite the following. If it had been open/available, it would have had a chance.


It sounded fine the first time I heard it, but I don't buy the "it wasn't free and Linux was available" argument any more. If more Linux users had read the papers like you, and done so early enough, they might have had an aggregate influence on Linux's design when it was still malleable. A Plan9-like Linux would have been just as good as Plan9 eventually.


Plan 9 had fundamental technical problems too. Like most of the systems of that time, it was designed with small, single-building kinds of networks in mind and didn't offer any value once large planet-scale networks came to life. Also, everything could have been implemented on top of any Unix system, but it wasn't interesting enough for anyone to bother.


Didn't Plan 9 eventually get some mechanisms to facilitate interactions between such local-only networks around the world? 9grid and such.

In any case, Plan 9 is still ahead on quite a few fronts, with federated authentication that is simple(-ish) and works, a versioned filesystem by default, a unified namespace (the new interpretation of "filesystem") to access any resource you might want to use (instead of, today: DNS + TCP ports + HTTP URL's), and security by only exposing the necessary resources to processes, instead of exposing everything and then making up contrived access restriction schemes on top. (Smells of capabilities by Pike's own admission, I believe). That last part was mostly thanks to namespaces which Linux has picked up and popularised in a simple form with Docker... after 20+ years.

The important bits of the new architecture were also about the parts that were left out. No disk usage utility, based on the rapidly decreasing price of a gigabyte. Go and try to make contemporary sysadmins, who still partition their drives to this day, accept this highly disruptive simplification. Gotta create ourselves our busywork!

In the end, Plan 9 was made by /educated/ people with /resources/, and much of today's software isn't. No surprise it's better made.

(edit: some structure added to my rambling.)


Most of them were fixed in Inferno, but by then most had stopped paying attention.

To the point that even on HN many seem to be unaware that the line ended with Inferno, not Plan 9.


Personally, I tend to slot Inferno closer to Java than to Plan9, even though I'm clear about the lineage. In that sense, the line of "real" operating systems ended at Plan9.


For me it is Inferno.

If you look at the architecture of mobile OSes, they were headed in the right direction.


"if you couldn't justify something being designed a certain way de novo, why be content with the existing design!?"

Please, no. Don't convert system programming/engineering to the mess that is web development today.


I'm sorry, Docker is already here, even in enterprise flavor.


> why be content with the existing design!?

Because backwards compatibility is more important. We can't just throw everything away every couple of years. Besides, who is to tell that the newer design will be better? Judging by the history of our industry, I have some serious doubts.


> Besides, who is to tell that the newer design will be better?

As much as it's popular to think otherwise, this isn't true. Taken as a whole, newer things are more often than not better than older things.

For example, cryptography has been on a huge march upward since the days of Unix crypt().


I have a shining counterexample to your thesis:

    https://arstechnica.com/apple/2015/01/why-dns-in-os-x-10-10-is-broken-and-what-you-can-do-to-fix-it/
In almost all my experiences reimplementing established software in a commercial environment, "building a better wheel" has met with failure. Or at the very least it took some time to get it back to a level our customers were happy with.

This idea that it's easier to start from scratch than maintain old code is almost always borne of hubris.

The thing that many people miss is that software in active use is a "living thing" that has undergone many evolutionary iterations between developer and user before it's been gotten right.

There is so much implicit knowledge & experience embedded in the code itself. When you say you want to replace something with a "new" version you're effectively doing something similar to sacking an experienced engineer and replacing them with a graduate.

That's not to say that good enough is good enough. But it's important to remember that improving on "good enough" is hard. If you don't have a commercial mandate or other good reason for improving it it just may not be worth your time.


Apple is in the vanguard of only one thing: marketing.


What a lame and supercilious comment.


Even the great Butler Lampson has a slide set online which includes the item

Design (Apple's forté)

This is arguably a slightly more defensible version of the GP.


I'm embarrassed to say I hadn't heard of him before now. Reading his Bio he does seem like something of a heavyweight!

I have some sympathy for our friend's point of view here, even if it is orthogonal to what I was saying, particularly regarding more recent developments, but I have no time for this kind of snarky, pugnacious sniping.

I would suggest that Apple's real strength (or should I say Steve Jobs') was "Product Development" which yes includes a large component of marketing and I really don't see that as a bad thing. It's not like what was being marketed was a substandard product? People paid a premium and got a premium device.

That you don't get the same quality any more is no reason to decry the brand itself as only being good at marketing. Their devices are even today physically manufactured to the highest standards, but historically they have developed superbly engineered products. Yes this cannot be credited to Jobs, but he did identify and harness the talent.

This topic isn't black and white and perhaps there are many shortcomings to Apple, and to Jobs, but this kind of single-line snark does nobody any justice, in particular the commenter himself.


Cryptographic algorithms are, when they are shown to be better than old algos, better.

Many algorithms are shown to be not better than old algos, hence the frequent admonitions (see Schneier for multiple examples) against roll-your-own crypto. I'm trying to remember the name of a much-touted encrypted messaging application being advertised for use by Arab Spring activists which turned out to have massive vulnerabilities.

There's a list (which doesn't seem to include the one I'm thinking of) here: http://www.pcworld.com/article/2843372/popular-messaging-app...

The reality is that encryption advocates strongly encourage people to use tried and tested mechanisms.

Worse: cryptosystems, inclusive of the algorithm and all the support infrastructure around it are very frequently worse than old systems, and reveal very, very badly broken implementations, sometimes years down the road.

https://security.stackexchange.com/questions/18197/why-shoul...

ftp://ftp.pgpi.org/pub/pgp/7.0/docs/english/IntroToCrypto.pdf


Cryptography benefits mostly from huge computing resources we have today and fundamental advances (math). Current crypto algorithms were impossible/impractical with hardware from just few decades ago. Yet we still suck at designing crypto systems.


In finance we call the driver of this "survivorship bias."

IOW, your contention is true/truthy if you ignore the fact that there is a large selective pressure weeding out the "not better" new things.


In computing industry, there's little pressure for weeding out "not better" new things. Unix is a great proof of that. Solutions survive on basis of popularity, not technical soundness, and many things get continuously forgotten and reinvented a decade later.


Often for good reasons! In my career, I've seen the pendulum swing from storage on every device, to diskless devices sucking from the network, back to storage. Many times. Never mind the philosophy folks quote; it's about technology, or rather the relative speed/latency of network versus local storage. It's true in lots of things. Folks 'rediscover' messaging systems for redundancy and failover every 10 years or so.


Exactly. Backwards compatibility is king. There have been plenty of research operating systems which never made it to popularity; not just Plan 9 but things like BeOS and Nemesis. The network effects driving people to consolidate on one system are huge.

Browser design is a mess, but that's because it's a battlefield between competing demands. We're very lucky it's remained open and competitive. There's an alternate universe somewhere where IE6 won and you have to use ActiveX controls to enter your government ID before you can post to websites. (Oh wait, that's South Korea in this universe http://www.bbc.co.uk/news/technology-29617196)

You can write a better operating system, but unless it runs a browser and you can give it away for free it will have a very limited audience.


As a sysadmin I just want to say I love bash. I'm not a programmer, but I get shit done in it all the time, and it's easy to understand and fix errors later. I have read every single "why bash is bad" article and I find all of them wanting for better arguments.

On the Windows side, PowerShell.

And keep in mind, many of these scripts have been running for 10+ years. When's the last time you wrote a piece of software that worked for 10 years with minimal or no update?


> and wonder if we can do better today.

No, we can't do better today. We may start something better today and enjoy it in 40 years, but for the time being, these are the tallest giants' shoulders we have.


Bash is not bad at all; it's unrestrained and arcane at times, agreed, but it does have the nature of a true programming language, just like powershell does. Now .bat scripts, that's truly inelegant and unsafe.


While it's certainly not the worst, and certainly can be used for many things, bash is definitely pretty damn bad. You've got horrible string escaping nonsense ($IFS, ...), an utter lack of any sensible types, constant incompatibilities due to command-line tools supporting different arguments on different platforms, ...

While Perl or similar are not without many faults, at least they solve a lot of this. Think of even writing something like a JSON parser in bash... it's horrifying.


You could use something like jq for parsing JSON. And while I agree with you regarding some of your points, they could be easily refuted by pointing fingers at other things: yes, string escaping can give a grown man nightmares, but hey, it's not like working with strings in C; and once you get the hang of it, it is pretty simple. Lack of typing can hurt, sure - but it's not BASIC; with a little imagination you can even have strict types.

And cli tools having different args on different platforms? come on, it's not even bash's fault...

Basically what i'm saying is: everything you can do in $LANG, you can do in bash. And it's not that hard. It just has a loooong learning curve and seems to provide just enough things to shoot yourself in the foot. But then again, you can be stupid in any language...


I agree that CLI tools having different args on different platforms is certainly not bash's fault, but I think it hurts Bash much more than other languages because, since Bash is such a limited ("awful") programming language, the de-facto way of doing most anything in bash is to call out to these command-line tools for everything.

For instance, above you suggest using "jq" to parse JSON. I totally agree with you that if you actually needed to deal with some JSON data in Bash for some reason, then a tool like "jq" is definitely the way to do it, because it avoids trying to do actual programming in Bash.

In any "more reasonable" scripting language (perl/python/ruby/...), you can very easily open files, parse them, extract data, etc., all within the context of the language, without relying on all these external tools like awk/sed/grep/etc. And you have basic types like arrays and maps at your disposal that often make what you're trying to do a lot easier.

I can't imagine trying to actually write a JSON parser in pure bash. (Well, I can... but uggggh...)


> In other words, if you couldn't justify something being designed a certain way de novo, why be content with the existing design!?

Not justifying something designed this way today, and keeping a false traditionalist position to retroactively self-justify it, is no worse than berating something designed at the state of the art of its time with no contextual understanding of why it was done that way... it is quite possible to evolve designs within a tradition of understanding their origins, rather than being a smug neophyte who self-admittedly responds out of one's own ignorance yet chooses to berate anyway, 'because who even knows why, but it seems old, man'.

see also: just about any incremental improvement in any long running system ever, over time, across all systems, from the dawn of time, up until today


Plain text config files are a lot easier to write and edit than XML or JSON - the KISS principle applies.


bash vs yml has always been there. you're really saying "declarative vs imperative" - not news. this is relevant, though, because Unix has always been about tools and composition, which is indeed dependent on imperative expression. if you feel dirty describing how (imperative), then maybe you're in the wrong business and should stick to the declarative world.


I had much the same reaction. I learned Unix on a PDP-11 running sixth edition AT&T. I was also using the University's UNIVAC 1108 and the elegance, simplicity and orthogonality of the Shell vs the EXEC-8 command language was a joy to behold. Sure, there is much about Unix that could be better and I sometimes regret that Plan 9 didn't succeed more. But it's like Kirk said to Saavik, "You have to learn WHY things work on a starship". Unix mostly does things for a reason, even if you don't see it immediately.

But, having said that, I really hate how TAB is used in Makefiles. The first screen editor I used was the Rand Editor and it treated TAB as a cursor motion character. It was a handy way to move horizontally on a line by a fixed amount, something not easy to do in, say, vi. This editor converted all tabs to spaces on input and then re-inserted them on output, which mostly worked, but it could mess up some files that were fussy about tabs.


I think it is this ambiguity of tab that was the source of confusion. Certainly on DEC machines at the time, when you typed 'tab' in an editor that was inserting text, it was because you wanted an ASCII TAB character in your text. What was more, the program could just echo the TAB character to the terminal you were using and the terminal would "do the correct thing" for displaying that tab. Back in the 90's when 'model/view' controllers were the rage, it was couched as separating presentation from the semantics.

The challenge came when there were devices which operated incorrectly when presented with a TAB character, or which confused command data with actual data. That became the "killer" issue when computers started substituting for typewriters, because a typist knows that if you hit the tab key the carriage slides over to the next tab stop set on the platen bar, but the paper is "all spaces". When you try to emulate that behavior in a computer, the TAB becomes a semantic input into the editor - "move to the next tab stop" rather than "stick a tab in" - and "move to the next tab stop" could be implemented by inserting a variable number of spaces.
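As a concrete version of that last point, here is a minimal sketch in C (in the spirit of expand(1); tab stops every 8 columns are an assumption, not a rule) of turning "move to the next tab stop" into a variable number of spaces:

    #include <stdio.h>

    /* Copy stdin to stdout, expanding each TAB to spaces up to the next
       multiple-of-8 column. */
    int main(void)
    {
        const int tabstop = 8;
        int c, col = 0;

        while ((c = getchar()) != EOF) {
            if (c == '\t') {
                do {
                    putchar(' ');
                    col++;
                } while (col % tabstop != 0);
            } else {
                putchar(c);
                col = (c == '\n') ? 0 : col + 1;
            }
        }
        return 0;
    }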

Computer engineers knew it was "stupid" to try and guess what you wanted with respect to tabs, so they wrote a program for formatting text called 'troff' (which had similarities to the program on RSX11, TENEX, and TOPS called RUNOFF).

It is always interesting to look back, though; if you had told a system designer in the 70's that they would have gigabytes of RAM and HD displays in their pocket when their grandchildren came to visit, they would have told you that people always overestimate how fast change will happen.


I worked on a product at AT&T Bell Labs back in the 80's/90's that had a config file that had tabs and spaces mixed on the same line -- and it mattered.


If you think about Makefile syntax being an earlier member of the same family as Python, the major difference is that tab may have been a more ubiquitous indent convention at the time (but I really don't know).

I still prefer to indent C with tabs, and I'm not the only one. It's really not hard to handle tabs well, as a code editor or as a programmer. You can view them as whatever width you like. You can have them marked to distinguish them from spaces (e.g. vim's "set list"). They've been around for longer than you have, so there's no good excuse for forgetting about them.


One of the nice things about indenting with actual tab characters is that you separate the notion of 'indentation' from the actual indentation. I set my tab stops to 4, which tightens the code; others set it to 8, still others set it to 2. If the code has tab characters it just works; if the code has spaces there is no way to have your most comfortable indentation level be used.


Yes, yes, yes. I'm saddened that tabs have essentially 'lost' - they seem to me in every way superior to spaces, but I would never use them in a new project simply because it would be likely to throw off others, and perhaps tooling. Github, for example, does not play nicely.


The proper way is tabs to the point of indent, and spaces thereafter. Even Github will work if you do that.


Github works better if you do that, agreed. But IIRC (I don't do this any more, so this is from memory) it uses eight space tabs as a default, and can only be changed on the 'raw' pages.


Spaces are superior to tabs in only one (important) way: They're a constant width.


Bam-bam-bam-bam. Sorry, can't hear you over all the whitespace I have to type.


Every editor worth its salt does this for you. Maybe try upgrading from ed.


"This is a good decision because we've forced every editor to handle this good decision well."


If only there was a way not to have to rely on editors all behaving the same with leading spaces. Oh wait, there is a way: indent with tabs.


I also prefer tabs. I'm just saying that "I have to wear out my spacebar" is not a good argument.


Yeah, indenting with spaces wastes so many punch cards! Oh, wait.


Actually, a punch card used to represent a single line.

But you'd be wasting holes.


Not that I've ever used one, but I wouldn't expect someone to indent any code on a punch card.

I just felt the comment was fitting, since it must have been back when they were still in use that you couldn't automate indentation.


In FORTRAN 77, it's a syntax error to put a statement before column 7.


Interesting! Can you explain why that is the case? Were the first 6 characters reserved for the line number or goto marks?


Yes. Columns 1 to 5 are for labels. Column 6, if it has any character there (usually "$"), marks a continuation from the previous line.

I think the column 6 rule and line continuations were removed for FORTRAN 95.


They're superior only in C-like languages when you indent in constant-width levels, and then only if you use very short names in your code.

Try using tabs to indent e.g. arguments to function one under another, and weep.


> Try using tabs to indent e.g. arguments to function one under another, and weep.

Where the terms are distinguished in a programming context, that's alignment, not indentation; tabs for indentation, spaces for alignment is a common suggestion.


Sure, as long as you don't mix them on one line (i.e. first indent, then align), because then differences between tab widths will still break alignment.


Not if you do it right:

   \t\tsome_func(arg1,
   \t\t          arg2,
   \t\t          argN)
will look right no matter what the tab width is.


I just realised my invalid belief was caused by experiencing dumb editors in the past, which wouldn't recognise aligning and would turn your example into:

  \t\tsome_func(arg1,
  \t\t\t   arg2,
  \t\t\t   argN)
I.e. they would replace all sequences of tab-width spaces with a tab character.


First indent and then align will retain alignment with different tab widths, because you only align things that are at the same indent level, so different tab widths in a tab indent / space align combination just move the whole aligned block left or right without affecting alignment.


I stand corrected.


> Try using tabs to indent e.g. arguments to function one under another, and weep.

I despise that formatting, and use:

    fn_with_many_args(
        very_long_arg1,
        very_long_arg2,
        very_long_arg3,
        very_long_arg4
    )
which aligns fine with `\t\t`; so I shan't be weeping today, sorry! :)



I wrote my own version of Make for VAX/VMS and MSDOS, back in 1983 or so. I hated the tab mechanism of the Unix make, so I didn't use it, and none of my users (of which there were apparently thousands) ever complained. I got a few "thank you"s for it.

TAB was a stupid choice even in the 1970s.


Tabs are actually more eco-friendly than spaces.

Just think of the energy lost sending 4 characters instead of 1 in every uncompressed JSON/XML actually moving around, because lazy people don't bother to suppress them before sending it over the network...


and imagine how annoying it gets when you are punching a punchcard.


We would punch up a program card to put on the drum in our IBM 029 card punches to set the tab stops, and let the tab key do its job.


Why the tab in column 1? Yacc was new, Lex was brand new. I hadn't tried either, so I figured this would be a good excuse to learn. After getting myself snarled up with my first stab at Lex, I just did something simple with the pattern newline-tab. It worked, it stayed. And then a few weeks later I had a user population of about a dozen, most of them friends, and I didn't want to screw up my embedded base. The rest, sadly, is history.

— Stuart Feldman


Stuart was also a FORTRAN programmer :-)


Source: The Art of Unix Programming


Sure, there are some errors where lore overrides historical fact. An article ranging over 30 years of history via hops between minute details is probably permitted a few factual errors. His tone certainly could be moderated if he wants to provoke less visceral rejections when he is wrong, though.

But I'd submit that the central thesis - that it is somewhat ridiculous to adhere to many of these conventions simply because they are established - is a very good one.

We have learned so much about how to make computer systems better and almost none of it is applied to Linux. The Linux kernel in particular is the work of brilliant people using archaic methodologies and adopting conventions that have long since ceased to make sense.


IMO, the main takeaway from the article isn't that "those 70's developers lacked imagination", but that UNIX is far from perfect.

The author even states that UNIX was amazing when it came out, but that doesn't mean all its ideas make sense today.


"they lacked imagination and like, I feel that we, young people today, like, a new generation, could do better"


It certainly ought to be uncontroversial to suggest that the general arc of progress tends upward, and that software is no exception. That has been a constant refrain for millennia.

Saying that they lacked imagination is diminutive and self-serving, but certainly we can do better than those who came before us, and those who come after us will do better than us.


Is that supposed to be a quote from the article? 'cause I can't find it.


use your imagination


Chuck I'm really glad you frequent HN. I love reading your comments and your insight is always valuable. -- A fan.


The fact that it was awesome in the 70's does not change the fact that it is broken today.


I don't think this is a fair criticism of the text.

However good Unix might have been in the 1970s, why am I still running it on my mobile phone? How good is it today? It seems that the major argument in its favor is legacy.

(Honestly, it seems to me that most of his criticisms could be moved away from stepwise. I'm guessing most of the configuration on my phone is stored in binary files. Someone could write the object-wise commandline utils he wants, and then create the architecture to make them more efficient. We could move there. Should we?)


Copying an .editorconfig in every new project solves any problems with remembering to use tab characters.


You know that thing about how old ideas in science can only really be replaced when their supporters die?

Unix greybeards, this is about you.


Sure, but the follow-up systems would be worse. I haven't seen a single actual improvement in software in 10 years.

It's all churn and bloat.


There is good and bad, but you certainly must be blind if you think the progress of technology has been relatively stagnant since the introduction of unix. Just a couple things that come to mind:

* The internet (early 90s)

* Wireless networking (early 2000s)

* Highly portable, small, and lightweight computers (early 2010s)

and all the software that supports them. These "follow-up systems" are certainly better than their predecessors, and there are literally countless more examples (no, everything shouldn't be written in asm and C).


Those are all new(ish) applications for software; the underlying software can still suck even if being used in revolutionary new applications (which I think is more what the grandparent meant).


I think there is some truth to that. And I also think there is a tendency to look backward through time and come away with the wrong impression. It is starting with people new to web design wondering how anyone thought HTML was a good idea. So perhaps it's a 30 year window for it to start.

In my experience it gets worse and worse over time.


Not just tab, but Vertical-Tab.





