Longer than that - in 2005 I was at a network hardware startup and we had vendor-locked (ahem, _qualified_) SFPs back then. Probably started back in 2001 when they were introduced.
Things have probably changed since I last talked to my friends at a large state radio/tv broadcaster, but for long haul they used either MADI over fibre, or AES50 into boxes from NetInsight along with SDI for the video feeds. This works so well that you can put the input/output converters in a venue hosting a live music event and do the program audio mix in a control room at broadcast HQ 100s of kilometers away.
At 100s of km, you’d be pushing the limits for actual live sound, though. Light covers roughly 300 km per millisecond in vacuum, and ordinary fiber is about a third slower than that, so you’re looking at roughly 1 ms round trip per 100 km, i.e. a few ms at several hundred km. If a musician can hear themselves through monitors at too much more latency than that, it could start to get distracting.
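Back-of-the-envelope in Python, assuming a group velocity of about 2e8 m/s in standard single-mode fiber and ignoring equipment, buffering and codec delay (which adds more in practice):

```python
# Rough fiber propagation latency: a sketch only, assuming ~2.0e8 m/s
# (about 2/3 c) in standard single-mode fiber, ignoring gear and codec delay.
FIBER_KM_PER_MS = 200_000 / 1_000  # ~200 km of fiber per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay there and back over the given fiber distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (100, 300, 500):
    print(f"{km:>4} km: ~{round_trip_ms(km):.1f} ms round trip")
# -> 100 km: ~1.0 ms, 300 km: ~3.0 ms, 500 km: ~5.0 ms
```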
As I understand it, the sound for the audience in the venue and the monitors for the artists were run locally by a separate mixer. The audio backhauled to HQ was for the live broadcast.
Firefighters practically salivate at the possibility of smashing the windows of a car parked in front of a hydrant they need to access for supply. This isn't much different.
So? In both cases, the cars are impeding their urgent work. It's not like firefighters are just going around smashing windows on cars that aren't doing something very, very wrong.
I've talked to some who said it was one of the job perks. I don't blame them at all; it seems necessary and fun. It's one of those "instant justice" kind of things.
I can, and it would require porting my entire CI workflow to the machine executor. We use a lot of docker-executor-specific stuff, and if I'm going to spend the time to port several thousand lines of CI config, I can just port it to a different CI platform instead.
Oh, gotcha. You have CI for multiple architectures already and you want to add arm64? I was thinking it was brand new code you wanted to write, but I get why you wouldn't want to rewrite everything.
Just amd64 right now; we want to add support for arm64.
Sadly we adopted CircleCI early on and make heavy use of their support for multiple containers - if you specify a list of containers in a Docker job, it will execute the tests in the first container and connect the other containers as "data" containers (think redis, mysql, etc.) for use in tests.
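For anyone who hasn't used it, a minimal sketch of what that looks like in .circleci/config.yml - the image names and test command here are just placeholders:

```yaml
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/base:stable    # primary container: all steps run here
      - image: redis:7             # secondary containers start alongside it
      - image: mysql:8             # and are reachable from the primary on localhost
        environment:
          MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    steps:
      - checkout
      - run: make test             # tests see redis/mysql on their default ports
```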
After finishing Severance, I went looking for a keycap set mimicking the Data General Dasher keyboard that the series based its terminals on. Thankfully there's an MT3 version of it[0] which looks and feels amazing. Really love the profile.
Depending on your projector and the amount of space, you can fit one or two digital heads in there. We modded our 1950s-era Bauer U2s to add DTS readers upstream of the analog sound readers, but at the expense of the 70mm magnetic sound heads. Took about an afternoon.
Was that ever released for film use? I don't recall seeing any prints referring to ProLogic or ProLogic II during my time as a student film projectionist. If so, it was highly backwards compatible since our decoder was only SR (analog 4.0 matrix).
The prints would generally come with SR (analog 4.0) and SR-D (Bitmaps between the sprockets). Most of the time you'd also get DTS CDs and about 40-50% of the prints we got (in Stockholm) had SDDS, though I think there were maybe like 5 SDDS theatres in Sweden.
Have they gotten faster at applying updates? It would take something like 45 minutes to an hour for any change to go live back in 2014, when Fastly was already doing sub-minute updates for CDN changes.