These are all fantastic photos. One thing that stood out to me is how prevalent the AT&T/Western Electric blue is everywhere.
Much of my career has been spent inside telephone company central offices, where I've seen equipment from dozens of different vendors. The 5ESS is still the sharpest and cleanest-looking hardware I've ever seen. Bright white with that distinctive blue accent trim. [1]
It's really a timeless look, and shows just how much effort they put into "irrelevant" details like that. A lot of vendors aim for a unified style, but it's usually boring and hard to distinguish from their peers. But AT&T stuff was distinctive, and I don't think it suffered from the passage of time. In other words, it doesn't look retro-futuristic. It just looks nice.
Look at the 8th photo [2]. You see the cabinets in the background, the keys on the console, the keys on the terminal, the platen knob, and the drawers on the desk. All matching and designed.
[1] (I give Nortel some "anarchy" credit for their courageous use of brown and green. It's fugly in an endearing sort of way :)
[2] https://2.bp.blogspot.com/-vaO-bHVmCQ4/XciOziQqrqI/AAAAAAAAS...
There's no switching hardware in these photos. It's an IBM System/360 installation, and everything is IBM System/360 Blue [1], not AT&T/WE. The same branding is still used today.
Thanks. I'm realizing this as well from the sibling comment's photo and my reply to it. AT&T's blue is not as vivid as shown in these photos and the ones you linked. It's more muted/desaturated.
I know there's no telco switching stuff here, but I had (wrongly) assumed that, since it was Bell Labs, it was their own hardware using the same scheme.
I stand by my admiration of that 5e, even though it appears the photos here aren't representative of it. :)
Bell Labs was not just about telco stuff; it was quite a revolutionary R&D unit in many ways. Seems to me a lot of people who worked there either worked - or could have worked - at MIT, Stanford, Berkeley, etc.
A read of the Wikipedia page [1] shows just how varied the research there was, but I'm not sure it's up there today so perhaps it's slipped from our minds.
One interesting thing I found out about the place is that I believe some university faculties have tried to model their research buildings on the physical layout of Bell Labs: everybody gets an office, but common areas are lengthy corridors away, meaning there is a good chance of serendipity and meeting colleagues when you're not heads-down. I have a hazy recollection this is how MIT built one of their newer buildings, but can't find a source for it now.
I would say that some big core router manufacturers have built equally cool-looking designs, for things like the Juniper T640, the Cisco CRS-1, and similar. Also the big Infinera DWDM platforms. The aesthetic is best when people who really care about good-quality singlemode fiber do careful cable management on the front, line-card side.
My least favorite old telecom equipment is probably the Nortel DS3 microwave radio systems, full rack sized, which were the ugliest shade of dark brown.
There is a whole book online about IBM's design in that era:
The Interface: IBM and the Transformation of Corporate Design, 1945–1976, by John Harwood
My father used to be a programmer back in the 1960s for the IBM 700 series mainframes, writing programmes and simulations in FORTRAN IV (and I think some assembler, too) for our now-defunct shipbuilding industry (Sweden). The stories he told me about how debugging back then included manually going through series upon series of punchcards (often by laying them out on the floor of his house and crawling along while reading) humble me somewhat, considering how far we've come in an industry where debugging something is as easy as pressing a button and getting instant feedback. Back then, the results of a computation had to be processed overnight, even for relatively simple programmes.
One thing we seem to have regressed on, though, is the professional stature of our industry - back then private offices were the norm, and the Orwellian open-office hellscape of today was nowhere to be found.
In the old days, the first step was think through the logic very carefully, because every mistake took a long time for the cards to get collected and run through the machine. Most people started by scratching their head and looking into space for a long time. Then they started drawing some sketches of flow (maybe with flowcharts, but often not doing that formally). Then they wrote out the code on coding sheets that had columns matching the columns on cards. Then they spent time reading the code and thinking like the machine. Only then were cards punched. And those cards were certainly checked for typos before submitting the job.
This was all because the turnaround was so slow.
Another fun card fact was that most people drew diagonal lines on their card decks, so you could more easily put them in order if they got scrambled.
The consequence of all of this is that debugging cards was actually not all that common, because you were doing checks at all the preliminary steps.
This was all quite fun, actually. Think of driving a standard-shift car, compared with an automatic.
> Most people started by scratching their head and looking into space for a long time. Then they started drawing some sketches of flow (maybe with flowcharts, but often not doing that formally). Then they wrote out the code on coding sheets that had columns matching the columns on cards. Then they spent time reading the code and thinking like the machine.
This sounds a lot like how I work when writing new code that's not just a quick throwaway script but built to last. If implementing a new feature takes me a week, then most of it is usually spent just reading the existing code and thinking about how the other parts of the application and environment are going to interact with the new feature.
It takes more time initially, but it saves me a ton of pain down the line, and luckily my managers appreciate that trade-off as much as I do.
Ah yes. I remember an article talking about how Jean Ichbiah programmed, and his card decks were always right at first run even when writing awfully complex programs :)
Interesting. But then, what were those consoles with lots of lights and buttons for? They definitely look like they display the CPU state, allow control over it, and in general are hardware precursors of today's debuggers.
From the article text, presumably these do not involve programming? I'm old by HN standards but this predates me by a couple eras. I guess "Tape Librarian" is obvious enough, but what does the day-to-day life of a Computer Operator look like?
I worked with mainframes and superminis back in the 70s. My first job was night computer operator.
Computer operator: scheduling and monitoring jobs, managing the print spool (queue), printing reports, stripping (removing carbon paper) and collating, sometimes swapping disk packs, loading/unloading tapes, running backups. And making sure the computer didn’t crash or halt. The minis and mainframes I worked on generally ran one job at a time, and the jobs might have a sequence, reading from and writing to disk or tape.
We kept transistor radios on top of the computer cabinets. Each job made a distinctive sound, as did runaway programs (infinite loops, for example). A halted computer did not make a sound so the radio was an early warning that something wasn’t right.
Tape librarian: magnetic tapes degrade over time from use, stretching, humidity. The tape librarian cataloged the contents and sequence of tapes, how many times they were used, their age. When a tape had too many errors or got to end of life the tape librarian would schedule a duplicate job.
I remember something that happened fairly often. The nightly jobs were on a schedule with estimated run time and special instructions. The jobs came with disk files or sometimes written job control language (JCL) that specified environment variables, required disk packs, resource requirements (temp disk space, memory, tapes). The JCL language was terse and arcane and the programmers and managers who submitted jobs sometimes messed it up. I would have to correct that and sometimes write the JCL if the job didn’t come with a file, or the instructions said “Same JCL as last run.”
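To give a flavor of how terse and arcane it was, here's a minimal sketch of what one of those decks might have looked like - the job, program, and dataset names are all invented, and exact parameters varied by shop and OS:

    //NIGHTLY1 JOB (ACCT01),'PAYROLL RUN',CLASS=A,MSGCLASS=X
    //* Invented example - real decks' names and parameters varied by shop
    //STEP1    EXEC PGM=PAYROLL
    //TRANS    DD  DSN=PROD.PAYROLL.TRANS,DISP=SHR
    //MASTER   DD  DSN=PROD.PAYROLL.MASTER,DISP=OLD
    //REPORT   DD  SYSOUT=A
    //WORK     DD  UNIT=SYSDA,SPACE=(CYL,(10,5),RLSE)

Every statement begins with // in column 1, the fields are positional and comma-sensitive, and nothing about it is self-documenting - easy to see how a submitted job could arrive with broken JCL.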
Mostly the job was rote and I could tape boxes of paper together end to end and sleep while everything churned away. The most exciting night was a meltdown. I could do my Friday shift anytime over the weekend. I went in on Saturday afternoon and found the computer room very hot. The AC had failed and the backup AC didn’t switch on. The computer room and tape vault were climate-controlled. I could hear one of the disk drives (the size of a small refrigerator) whining. I ran around shutting everything down, but one of the GE disk drives had overheated and the disk platters had melted. Most of the drives were HP and they survived, but the cheaper GE drives were all damaged. The AC supplier’s insurance had to pay for the damage.
It’s hard to imagine today how big, slow, expensive, and delicate computers were back then. Hardware failures were common. RAM was measured in KB or low MB. Disk storage was expensive and slow.
The job I’m describing was for a school district. They had two HP 3000 superminis (16-bit), two HP 2000 timesharing systems, and an IBM S/370. The HPs were identical, so we could use one as a hot spare if necessary, though back then HP stood for rock-solid reliability. The HP 3000 has a long history; I wouldn’t be surprised to find some still in use, and I know there are emulators for it to run old application code (COBOL, Fortran, BASIC). The HP 2000 systems were common timesharing systems used in public schools for admin work and student accounts for learning programming. My first exposure to programming was HP 2000 timeshared BASIC on a Teletype Model 33 in my high school.
After the school district I got a job as a junior programmer at Blue Ribbon Sports, now known as Nike. They had several DEC PDP-11s and two DECSYSTEM-20s, all in secure, climate-controlled rooms. The programmers did not have access to the computer room, and we didn’t have admin (“wheel”) accounts; we had to ask the operators to do system admin things for us. I had an ADM-3A terminal at home with an acoustic coupler, then Nike let me take a VT-100 terminal home so I could work more. We had remote work way back then in 1979.
> We kept transistor radios on top of the computer cabinets. Each job made a distinctive sound, as did runaway programs (infinite loops, for example). A halted computer did not make a sound so the radio was an early warning that something wasn’t right.
The different components would make different electromagnetic interference patterns. You could tell what the computer was doing (or not doing) from the static.
This is also how some of the first computer generated music was produced.
I remember listening to my TRS-80 that way -- easy to monitor outer/inner loops, different sounds for BASIC vs. asm code, etc. The downside was that often other people had no choice -- the TRS-80 was so noisy that nobody in the house could watch (OTA broadcast) TV if I had the computer on.
Can confirm. I did some work on a TRS-80 in a Radio Shack store (inventory on cassette tapes!). The “trash 80” as it was called would interfere with the stereo equipment in the store, the manager would have to turn the little computer on the counter off to demo a receiver.
We had a couple of computer operators at my first job (school district) - one per shift, though I'm not sure if there were just two shifts or an overnight shift too. They didn't program the mainframe; they ran and supervised jobs. For us, most of the effort was around running the print jobs for things like report cards and whatnot: grab the paper from the warehouse, load the paper, collate if needed, etc. Run the backups, change the tapes. Yell at the student interns when they ran the ping of death and borked the mainframe. Etc.
> Computer Operations Supervisor
Supervises the computer operators; basically the manager. Handles shifts when a Computer Operator is out sick, presumably. Not sure we had a dedicated computer operations supervisor, because we didn't have that many operators.
> Tape Librarian
Organizes the tapes (reels are a lot harder to manage than the little cartridges we had). Physically delivers tapes to computer operators, or maybe loads them, based on requests, etc.
> Data Control Supervisor
Supervising the Data Control people --- you didn't list them, so I don't have to guess what they did. :) My guess, though: running reports, checking consistency, that kind of thing.
Computer Operator is probably close to being extinct but that was actually my first IT job, in 2010! It was for a very old school credit union, and I basically managed all their AIX systems and ran the nightly batch jobs for all the daily financial transactions.
They changed the title to 'UNIX Administrator' while I was there so that should give you an idea.
Around the turn of the millennium I worked at a major visual effects house in LA as a sysadmin - and with no mainframes in sight, we still had basically all of these roles in the shop.
There was an entire tape operations team whose job it was to load in tapes of assets from clients (textures, models, frames) for a project (IIRC they were staged on NetApp filers for the most part) and load out tapes of final rendered frames (and new or transformed assets) for delivery to the next step of the post-production workflow. (Even if WAN links suitable for pushing many, many terabytes of data had been economical at the time - and they weren't - the internet wasn't really trusted for shipping this data around back then. The ludicrous powers ascribed to movie hackers in the 90s-2000s give you a real glimpse of the paranoia in the industry at the executive level. Napster scared the SHIT out of them.)
The operators/ops supervisors were basically the render management department. Their role was to ensure jobs ran (i.e. to render frames) and produced (at least superficially) valid output. "Data control supervisor" didn't exist by that name but the job of managing compute and storage capacity over time definitely did. It might even have been in my department (systems) but I was just a puppy then and really wasn't paying enough attention.
Fun story, one fine spring the entire render management department was out for various reasons but an urgent job had come in - the South Park movie was (IIRC) 6 weeks from release and Cinesite (IIRC?) didn't have the capacity to render out all the final frames in time. The call went out internally for volunteers to learn how to run jobs, and I gave up a weekend to babysit renders. Every day new assets would be delivered and rendered frames picked up (on huge Ampex DST cassettes - hundreds of gigabytes each! A lot in 1999, anyway.)
My task was to get shots rendered at output resolution as efficiently as possible, and to preview each shot to make sure there were no obvious errors (black frames, e.g. - software wasn't perfect.) There was a bit of an art to it, since the frames could be rendered on almost any idle CPU in the facility via our in-house distributed queuing system (race, props to erco) and capabilities of the host systems were variable. If you weren't on top of things, the shot might get hung up on some frames that ended up enqueued somewhere shitty where they'd never finish, etc. The South Park frames were pathological - out of a desire for verisimilitude, the construction paper textures were HUGE, and everything was modeled in 3D (Alias) with lighting and shadows and very small Z depths between layers. A lot of the shots I drove were from the "Blame Canada" sequence (though without sound, I didn't know it until I saw it in the theater a few weeks later) - with a huge number of characters on screen, that meant gigs of texture per frame, as every character (and usually the background) was made of polygons each mapped with a particular paper texture. No wonder Cinesite ran out of time and had to go hit up other shops at the last minute.
In conclusion, batch processing: still a thing.
(Epilogue: sadly, I didn't make it into the credits - those were already done, I guess? - but for giving up my weekend I did score an invite with a +1 to the wrap party. Isaac Hayes rocked. Mary Kay Bergman held my +1's hair while she barfed in the ladies' room. And as far as I could tell, Matt and Trey never bothered to show up. A+, would give up weekend again.)
Adding to what others have said: the Computer Operator profession - responsible for loading and scheduling work on computers - is dead, as it was replaced by "operating systems".
The picture with the tape library makes me wonder why we only have the option of the small tape cartridges of today. It would be super cool to have a horizontal arrangement of, say, 20 such “tape wheels”, each with its own R/W head. Imagine the capacity such a unit would achieve with today's tape medium! An LTO-8 cartridge holds 12 TB of uncompressed data; a wheel that big could hold something like 100 TB. If you put 10 of them in a horizontal row, with 10 heads, you get a PB archival machine...
A standard-length ANSI-standard 9-track is 2,400 ft long and 0.0015 in thick.
So naïvely assuming you could spool about 2,400 ft * (0.0015 in / 5.6 µm) worth of LTO-8 tape on a reel, we have
(0.0015 in / 5.6 µm) × (2400 ft / 960 m) = (38.1 µm / 5.6 µm) × (731.52 m / 960 m) ≈ 6.80 × 0.762 ≈ 5.18
so your imaginary LTO-8 reel could store around five times as much data as an ordinary LTO-8 cartridge, or about 60 TB.
Sounds promising until you realize that, even assuming impossibly efficient cylinder-packing and no additional overhead for automation, a single 9-track reel takes about as much space to store as five LTO cartridges…
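To put rough numbers on that footprint point (the reel and cartridge dimensions here are my assumptions: a standard 10.5 in reel about 1 in thick, and the 102 × 105.4 × 21.5 mm LTO form factor):

    V_{reel} \approx \pi \, (5.25\ \mathrm{in})^2 \, (1\ \mathrm{in}) \approx 87\ \mathrm{in}^3
    V_{LTO} \approx 4.02 \times 4.15 \times 0.85\ \mathrm{in}^3 \approx 14\ \mathrm{in}^3
    V_{reel} / V_{LTO} \approx 6

so under naive bounding-box packing a single reel really does take about five or six cartridges' worth of shelf space.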
This has come up before here and on Twitter and nobody has ever known, but I intend to keep asking: does anyone know where this Bell Labs facility was in Oakland, CA? The only mentions of it anywhere on the web lead back to this page. I can find no other contemporaneous documentation of it, and I even searched the Oakland Public Library periodicals archive.
Then you may be interested to know that 80+% of the outfits I saw would fit right in at a high school as of two years ago. About 50% of that 80% - or 40% overall - would show up only when students were dressing up for some reason, whether on a game/concert day or just because they felt like it.
I remember raised-floor machine rooms being distinctly colder than office space. But more importantly, none of those shoes are barnyard acid resistant.
When I started at the phone company back in the day, it was post-breakup. The waste that went on when there was no competition was sickening. Many of my co-workers told me about all the abuse and theft that comes with being a monopoly - with the cost being passed down to the customer and the taxpayer.
The world of the Internet and the openness of IP protocols also disrupted them. They finally had to deal with technology that wasn't dictated and controlled by them.
We lost lots of good-paying jobs, we lost the rollout of ISDN, which would have sped up the offering of high-speed network access, and we lost hundreds of millions spent on basic research.
We gained a bunch of openness, mobility, and access to a wider breadth of services. It's still a mixed bag.
Yes, it was. Bell was operating under a consent decree dating from 1956 which prohibited it from doing much beyond phones and telegrams. Bell didn't want to piss off the government, so it played it safe and declined to enter the software business when Unix was first developed, allowing it to spread on a more informal basis, including the source code. It did enter the software and Unix business when the consent decree expired, leading to System III and System V, which were closed-source.
The modern commercial Internet and non-monopoly/non-government ISPs would not have become possible without the 1984 breakup. Companies like Sprint and MCI benefited greatly from it.
As did all of the early core ISPs, like UUNet and its competitors, in the pre-1997 period.
Large monopolists like Bell are basically non-governmental tax authorities. Tax-funded research can be effective, but it can also be horribly ineffective.
It is worth observing that the quality of DoD acquisitions and public-funded R&D also declined around the time of Bell's most recent breakup.
Might there be other forces at play than the dissolution of a single company?
Monopolies stamp out competition, resulting in less innovation.
You might instead ask if society was better off before the invention of the teenager. That was emblematic of the shift toward short-term-profit-driven thinking we now call neoliberalism. That economic philosophy also happens to stifle innovation of the Bell Labs type by not financing long term projects.
Today monopolies exist in spirit if not in technicality. Does society benefit? I'd say that looking back longingly at Bell Labs means no, it doesn't.
"One day I took a camera to work and shot the pictures below. I had a great staff, mostly women except for the programmers who were all men. For some reason only one of them was around for the pictures that day."
Lots of women in IT (or 'DP' for 'data processing') back then.
I started in 1990 on IBM mainframes, and a lot of what is depicted looks like my first data centers. I saw the last of the punch cards, watched round tapes give way to square tapes, and handled more greenbar paper than I'd care to admit.
The Computer History Museum in Mountain View is nearby, and I love making a visit. The era that interests me most is the '60s, when they started to scale up from the tube era. It's just amazing considering the time and effort required compared to today.
If the same kind of pictures were taken today in the US (and most other Western countries), obesity would be the rule, with a few exceptions. Seems like the '60s didn't have this problem yet.
Me too, but if you put yourself in the photographer's shoes, I guess you wouldn't just walk around taking pictures of random stuff in your office (it's just the equipment you work on, after all, who would ever care about that?), but rather the people you interact with every day. Reading the article's text, the photographer was clearly fond of the people he worked with. I'm also glad it turned out to be this, instead of that. I've seen plenty of photos of old equipment; it's cool to see candid shots of people actually performing the work that was done on the equipment.