Does anyone have any insights into the proprietary graphics buses that were being created leading up to the VESA Local Bus (as referred to in the article)? I was not aware of anything between 16-bit ISA and the addition of VLB.
Practically every Unix workstation had a different solution.
Part of their performance lead was a proprietary bus that was much faster than ISA.
Technically they weren't graphics buses, but since SCSI and networking were built in, graphics cards were the only things that really needed the higher bandwidth. A typical Sun SPARCstation would have a graphics card and maybe a serial-port card or something else that didn't care about the bandwidth of SBus.
People completely forget this, but from the late 80s to the mid-90s (when PCI started becoming widely available), if you didn't want to shell out for a Unix workstation and you stuck a fast Radius or SuperMac video card in your Macintosh II, your desktop publishing/graphics editing/visualization workflow was astronomically better than on a PC, even if the PC's 486 was faster than the 68020/68030 in your Mac. When PCI came out, Apple switched immediately.
Intel probably looked at NuBus, SBus, and all of the others and went "well, shit, if we don't do something about this the Pentium won't matter, because video cards will be stuck on either ISA or the jank-ass VLB".
Well, VLB wasn't limited to graphics... it was just a fast bus. As opposed to the much later AGP, which AFAIK was graphics only.
But MicroChannel was IBM proprietary. I don't know if anybody else had enough market share or enough of the full stack to make a proprietary bus viable; IBM was making graphics cards and motherboards (and CPUs, sometimes), and selling enough units that it was worthwhile for add-in makers to support MCA.
> VLB wasn't limited to graphics... it was just a fast bus.
VLB wasn't limited to graphics, but it had issues which made it difficult to use in other applications. Still, there were a handful of SCSI and Ethernet cards made to the standard.
The physical size (Very Long Bus!) meant that it was best suited to cards which were already going to be large (e.g. graphics cards with lots of memory chips), and the tight coupling to the 486's local bus meant that it was hard to use with anything other than an 80486 CPU -- which inherently discouraged its use for peripherals that weren't firmly aimed at the consumer market.
Ultimately I think the story here is less "Intel undercut a standards process" and more "Intel realized that the standards process had produced a horrible design". We should be glad that they hedged their bets; PCI was far superior.
Intel Architecture Labs is responsible for essentially the entire I/O architecture of virtually all computers (not just x86) for the last ~three decades: USB, SATA, PCI and PCIe, plus PCI-for-graphics (AGP). Notably, all of these were largely developed in-house at Intel and then either gifted to standards bodies more or less finished, or Intel created an industry consortium around them.
Wonder what made Intel keep its damn paws firmly on Thunderbolt up until the USB4 days. To this day it still takes a truckload of hacks and workarounds to get old PCs upgraded with Thunderbolt 2/3 cards.
How much of that is because the tech was better versus because it was Intel pushing for it? There were competing standards such as (off the top of my head) SCSI (isn't that what SATA basically is anyway?), FireWire, and PCI-X.
> SCSI (isn't that what SATA basically is anyway?),
Nope, there are a lot of differences (Wikipedia has the details). Most notably, SATA was designed to be cheap from the start, including keeping the controller simple, while SCSI demands a fairly intelligent controller, which costs more.
> SCSI (isn't that what SATA basically is anyway?),
Ehh, ever since ATAPI, both old 'IDE'-style and SATA hosts have been able to carry SCSI commands. The speed you get out of a SATA cable, wire for wire, is a win over any SCSI cable I've ever seen, let alone the LVD-vs-HVD business and everything else you had to worry about.
> FireWire
arguably had the right ideas at the wrong time; the extra power delivery is something we're only now finally seeing in USB. However, FireWire was still relatively expensive.
> PCI-X
PCI-X was still a parallel bus with lots of data lines, which causes its own problems. Aside from multiple cards potentially jamming each other up (less of an issue with PCI Express, since it's point-to-point rather than shared lines), there's the challenge of routing that large number of traces on a board as the signalling frequency scales up.
Note that AGP was initially basically just PCI 2.1, with the bus conflict resolution system ripped out, the connector flipped around, and a few minimal tweaks. It could, in principle, have been used for something other than a video card.
The crucial part about it was that it was a dedicated link to one device. AGP was initially created not because PCI bandwidth was running out, but because PCI is a shared bus, and the kind of transfers video adapters liked to do played havoc with the system that negotiated who had right of way, resulting in all kinds of problems when other cards had to wait for their turn for much longer than their driver developers expected.
The fact that it only ever connected one device to the host made it much easier to evolve, as future cards and hosts could just negotiate to do something different than what AGP 1.0 defined, if they found they both supported a faster version. When bandwidth demand rapidly rose with ever faster 3d accelerators, this was very beneficial.
In the PC space, as a disjointed semi-chronological overlapping timeline:
In the beginning, we had ISA. We had ISA because it was cheap enough for IBM's PC, not because it was good.
MicroChannel was a thing, largely limited to IBM, starting in 1987. It worked well.
EISA was a thing. It was not ever particularly common or cheap. It had 32-bit width and 8.33MHz bus speed, in 1989. EISA's main features were that it was solid, and that it was not MicroChannel. It was backwards-compatible with ISA cards.
Then VLB happened. It was fast, 32-bit, often flaky, and cheap. It was very popular for all kinds of PC accessories -- not just video cards.
Then, of course: Everything performance-oriented shifted to 32-bit PCI almost overnight (including some things outside of the PC space).
But there was also a time when we had PCI-X (which is absolutely not an abbreviation for PCI Express). PCI-X was 64 bits wide at up to 133MHz (though 66MHz was more typical); rough bandwidth math for all of these is sketched after the timeline. Like EISA, it never became common or cheap.
And eventually, we had AGP -- but only for graphics.
And there was also PCI-X 2.0, which, like the previous version, was 64 bits wide, but could operate at up to 533MHz. It was theoretically excellent, but essentially never really existed: widespread PCI Express adoption was right around the corner by then.
And now, of course: We have PCI Express, which we've been successfully flogging along in various incarnations for a couple of decades -- a damned eternity in computer years.
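For a sense of scale across that whole timeline, here's a rough back-of-the-envelope comparison of peak bandwidth. It's a naive width-times-transfer-rate calculation that ignores wait states, arbitration, and protocol overhead, so treat the numbers as ceilings rather than real throughput:

```python
# Naive peak-bandwidth comparison for the buses in the timeline above.
# Width x transfer rate only -- ignores wait states, arbitration, and
# protocol overhead, so real-world throughput was always lower.

buses = [
    # name, data width (bits), transfer rate (million transfers/sec)
    ("ISA (16-bit)", 16,   8.33),  # real ISA needed 2+ clocks per transfer,
                                   # so its actual ceiling was roughly half this
    ("EISA",         32,   8.33),
    ("VLB",          32,  33.0),   # ran at the 486 bus clock, usually 33MHz
    ("PCI",          32,  33.0),
    ("AGP 1x",       32,  66.0),
    ("PCI-X",        64, 133.0),
    ("PCI-X 2.0",    64, 533.0),   # 533 MT/s via double/quad pumping
]

for name, width_bits, mtps in buses:
    mb_per_s = (width_bits / 8) * mtps  # bytes per transfer x Mtransfers/sec
    print(f"{name:12s} ~{mb_per_s:5.0f} MB/s peak")
```

Even as ceilings, the jump from EISA's ~33 MB/s to VLB/PCI's ~133 MB/s makes it obvious why everything performance-oriented moved almost overnight.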
This is called outdoor air temperature reset. It's been commonly used in medium to large buildings (e.g. multi-unit residential buildings) going back 100-plus years.
Originally there might have been an operator making the adjustment daily; later it was mechanical automation, and later still electronic controls.
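As a sketch of what any of those implementations (operator, mechanical, or electronic) actually computes, here's a minimal linear reset curve. The setpoints below are made-up illustrative numbers, not from any particular system:

```python
# Minimal outdoor-reset sketch: supply water temperature is interpolated
# linearly from outdoor temperature. All setpoints are hypothetical.

def supply_setpoint(outdoor_f,
                    outdoor_design=0.0,   # coldest design day, deg F
                    outdoor_warm=65.0,    # no heat needed above this
                    supply_max=180.0,     # supply water temp on the design day
                    supply_min=110.0):    # supply water temp at the warm cutoff
    """Return the target supply water temperature for a given outdoor temp."""
    if outdoor_f <= outdoor_design:
        return supply_max
    if outdoor_f >= outdoor_warm:
        return supply_min
    fraction = (outdoor_warm - outdoor_f) / (outdoor_warm - outdoor_design)
    return supply_min + fraction * (supply_max - supply_min)

for t in (-10, 0, 20, 40, 60):
    print(f"outdoor {t:4d}F -> supply {supply_setpoint(t):5.1f}F")
```

Colder outside means hotter supply water, so the building sheds and receives heat at roughly matching rates year-round.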
The system in Toronto takes cold water from deep in Lake Ontario and runs it through heat exchangers that feed the district cooling loop downtown.
The lake water continues out of the heat exchangers into the intake of the water treatment system, feeding drinking water to the city. The water eventually returns to the lake after being used by people and flowing through the sewers and wastewater treatment system (returned at a shallower level).
I really enjoy typing on the Unicomp buckling-spring keyboard; I've had an older model for a long time.
It seemed very well built on the outside; however, eventually a number of keys stopped working. Taking it apart, I found that the plastic rivets holding the assembly together had failed.

It was repairable by drilling out the rivets and replacing them with machine screws. Although original Model Ms were built the same way, the design or manufacture of these rivets on my unit was completely insufficient. I'd expect better for a keyboard in this price range, and it seems to be a common problem based on scouring the forums. I'd hope this would be redesigned in newer models, but I don't hold out much hope.
I wonder if the "New" Model M has the same problem, or if they've finally updated/replaced their tooling. AFAIK they were/are still using the old machines from the 80s and 90s.
The high temperature limit switch should never be tripping, even when the furnace runs continuously.
This indicates you have a problem with temperature rise across the heat exchanger, probably from inadequate airflow: ducts sized too small, furnace too large, fan speed set too low, a plugged filter, etc. You may be able to mitigate the problem by increasing the fan speed, limiting heat output by running stage 1 only, putting in a less restrictive filter, adding another cold-air return, and so on.
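To see why airflow drives the rise, the usual sensible-heat approximation for air is BTU/h ≈ 1.08 × CFM × ΔT. A quick sketch with hypothetical furnace numbers:

```python
# Temperature rise across the heat exchanger from the sensible-heat
# approximation for standard air: BTU/h = 1.08 * CFM * deltaT.
# The output and airflow figures below are hypothetical examples.

def temp_rise(output_btuh, cfm):
    """Approximate temperature rise (deg F) for a given airflow."""
    return output_btuh / (1.08 * cfm)

output = 80_000  # e.g. a 100k BTU/h input furnace at ~80% efficiency
for cfm in (800, 1000, 1200, 1400):
    print(f"{cfm} CFM -> {temp_rise(output, cfm):.0f} F rise")
```

If the rise works out above the range printed on the furnace's rating plate (often somewhere around 40-70°F), the limit switch will eventually trip.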
Usually a failed limit switch means it had been cycling, which points to a bigger problem with airflow. However, I have seen a high temperature limit switch fail without cycling -- it just fell apart with age (40-year-old furnace).
The point is, yes, his grease flew out and dirt got in. But his CV joints aren't being used; they're just free-spinning all day, not transmitting any torque. So yes, his torn CV boot didn't give him trouble in a Toyota 4x4. It wouldn't have given him trouble in a Dodge 4x4 either.

Yours will not (and did not) click or grind because it is not being used. You effectively owned a rear-wheel-drive truck with a switch/lever to make it 4x4. 99.9% of the time those trucks are in RWD mode, which means your CV shaft is not being used. At most it just free-spins, transmitting no torque.
How long can it last like that? Basically forever.
I would hazard a guess that Toyota and Lexus have very similar real-world reliability - most of them share platforms. However, this article shows that Lexus owners perceive their cars to be more reliable than Toyota owners perceive theirs.
> However, this article shows that Lexus owners perceive their cars to be more reliable than Toyota owners perceive theirs.
Maybe they get better maintenance than their Toyota cousins on average?
BMW often ranks high, but there's a bias when asking a BMW owner (who often likes cars) how good their car is, compared to a Dacia owner who often just needs to get from A to B and sees a car as strictly transport.
I think it's also worth looking at MikroTik. I run a hAP ac2 for travel; it's small, has dual Wi-Fi radios, and has enough power for good speeds over an IPsec VPN. It also supports WireGuard.

They also have a tiny, less powerful unit (the mAP) that may fit some travel use cases.
MikroTik's product names are sufficient to uniquely identify them and are easily searchable. That differs from many hardware vendors, so your question was reasonable, even though the prior poster had already given you what you asked for.
I played MP3s on a 486DX2... I think at 40MHz (but it could have been after a 100MHz upgrade). The biggest issue was bitrate: I think it struggled with 128kbps but was fine with anything encoded below that. It certainly could not play 160kbps. 96kbps MP3s were common enough on the internet back then with dialup, although they sounded awful.
Did any of these make it onto the market?