I want to like this — I think having ground-based alternatives to GPS and other space-based PNT systems is a very good thing! But after reading the paper at https://www.nab.org/bps/Broadcast_Positioning_System_Using_A... and other BPS information on the NAB's website, I think the NAB is being wildly optimistic about BPS:
• ATSC 3.0's physical layer can already transmit GPS time in a way that receivers could get it back out. What BPS brings to the table is a requirement and specification for accurately and consistently filling in the physical layer preamble fields containing the time data, along with a new physical layer pipe (think "low-level data stream") that contains additional information about the transmitter and, optionally, its neighboring transmitters.
• BPS is capable of producing time fixes when the receiver only has a lock on one source. This isn't surprising at all — GPS receivers can do the same thing. But either type of receiver with only one source would see a clock offset proportional to the path delay, which it wouldn't be able to compute and back out without knowing its position.
• BPS is only designed for 2-D position fixes. While that's a reasonable design decision (the vertical position error would be massive), it also makes BPS less useful for the NAB's "indoor positioning for first responders" use case, especially in areas with multi-story buildings.
• The need to receive and process/decode multiple, most likely non-adjacent 6 MHz channels for positioning increases receiver complexity and cost.
• The NAB claims that 1 kilometer of separation between two BPS transmitters is "sufficient for useful position determination." I don't buy it, especially in the face of poor transmitter geometry.
• They note that 16 TV stations in the New York City area broadcast from One World Trade Center, so for the purposes of BPS, they're effectively one station. This kind of transmitter colocation is incredibly common, both in urban areas (ten TV stations broadcast from Sutro Tower in San Francisco) and in more rural areas (six TV stations in the Roanoke-Lynchburg DMA broadcast from towers within ~1 mile of each other on the ridgeline of Poor Mountain). Even if every ATSC TV station became an ATSC 3.0 transmitter with BPS, bad transmitter geometries would destroy BPS's position accuracy in lots of markets (see the rough numerical sketch after this list).
• What's the business case for broadcasters? BPS won't be free for broadcasters to implement, and there doesn't seem to be a path to it generating revenue except for a hand-wavy "maybe one day televisions will be able to determine their locations without Internet connections using BPS, and then broadcasters can do location-targeted advertising with those TVs!"
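To put rough numbers on the geometry and single-source points above, here's a minimal sketch. The transmitter layouts, separations, and path length are all invented for illustration, and the DOP math is just the standard pseudorange linearization, not anything from the NAB's paper:

    # Rough sketch: why transmitter geometry matters for a 2-D pseudorange
    # fix, plus the bias you eat with a single source. All coordinates and
    # distances below are made up for illustration.
    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def hdop(receiver, transmitters):
        """Horizontal dilution of precision for a 2-D pseudorange fix with
        an unknown receiver clock bias (three unknowns: x, y, bias)."""
        rx = np.asarray(receiver, dtype=float)
        rows = []
        for tx in transmitters:
            los = np.asarray(tx, dtype=float) - rx
            rows.append(np.append(los / np.linalg.norm(los), 1.0))
        G = np.vstack(rows)          # geometry matrix
        Q = np.linalg.inv(G.T @ G)   # error scale factors
        return float(np.sqrt(Q[0, 0] + Q[1, 1]))

    receiver = (0.0, 0.0)

    # Three transmitters ~20 km away, spread across different azimuths:
    spread = [(20_000, 0), (-10_000, 17_000), (-10_000, -17_000)]

    # Three transmitters ~20 km away but clustered within ~1 km of each
    # other (think One World Trade Center or Sutro Tower):
    clustered = [(20_000, 0), (20_500, 400), (19_800, -300)]

    print(f"HDOP, well-spread transmitters: {hdop(receiver, spread):10.1f}")
    print(f"HDOP, clustered transmitters:   {hdop(receiver, clustered):10.1f}")

    # Single-source timing case: with one transmitter and an unknown
    # receiver position, the time fix is biased by the full one-way delay.
    path_m = 30_000.0
    print(f"Unmodeled one-way delay over {path_m / 1000:.0f} km: "
          f"{path_m / C * 1e6:.0f} microseconds")

With the clustered layout, the geometry matrix is nearly rank-deficient, so the HDOP (and with it the position error for a given ranging error) explodes. That's the quantitative version of the colocation complaint.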
My uncharitable take is that BPS will never be a usable standalone PNT system. A timing system in the "rebroadcasts GPS" sense? Maybe. Standalone positioning? No way. Broadcasters implementing BPS (or ATSC 3.0 at all) without being forced to by the government? I don't see it.
My uneducated guess is government funding, plus becoming part of a new "essential backbone" infrastructure, thus guaranteeing incentives to stay operational for a longer period of time.
I was merely a Kickstarter backer of CastAR and have no insider knowledge, but my guess is that what happened here was a failed moonshot.
After raising $1 million on Kickstarter, Technical Illusions/CastAR received an investment from Andy Rubin's Playground Global. CastAR later announced that they would refund Kickstarter backers' money and give everyone who had backed at a level that included a pair of CastAR glasses a voucher for the retail glasses, whenever those shipped. Somewhere along the way, CastAR also changed course from "AR glasses tethered to a computer/phone" to "standalone AR glasses." Then they acquired Eat Sleep Play, a game studio in Salt Lake City. Then they went bankrupt.
Instead of releasing a product (even a rough, beta-quality product!) for their Kickstarter backers and iterating from there to a retail-quality product, it seems they took their VC money and went straight for a moonshot standalone product with first-party games available out of the gate. The moonshot was expensive, Playground Global declined to invest further, and here we are.
That sounds pretty plausible. I was deeply disappointed when they announced the pivot to standalone glasses. I didn't want standalone glasses; I wanted the original device as pitched on Kickstarter. I figured that even if none of the VR/AR mechanisms worked at all, I'd still have been thrilled to have a tiny set of display glasses with HDMI inputs.
As a backer, I'm pleased that they refunded everyone (and I can confirm having received a full refund). But I'm still sad to see this result; I had a lot of hopes for this, and it looked much more promising to me than any of the alternatives, right up until today.
I have an HP MicroServer N40L that I bought several years ago, and it's almost a doorstop now. Its CPU (dual-core 1.5 GHz AMD Turion II Neo) is slow and doesn't support AES-NI. It maxes out at 8 GB of RAM (16 GB of RAM if the stars align and it likes the RAM you bought). It has one GigE port, and SATA ports are limited to 3 Gbps (SATA II). Expansion is limited to an eSATA port, USB 2.0 ports, a low-profile PCIe 2.0 x1 slot, and a low-profile PCIe 2.0 x16 slot.
It's okay as a NAS that mostly sits idle and occasionally serves up unencrypted data at GigE speeds or less. For more demanding tasks, it's woefully underpowered.
I have one of those. Finding the RAM is pretty easy; I'm running 16 GiB of ECC RAM. Cheap, too, since it takes ECC DDR3. You can put a NIC in the PCIe slot. The SATA ports are fine for spinning disks. You can also use the optical drive bay, which is SATA as well.
Of course it always depends on the use-case, but for most people at home it's sufficient. I use it as a Minecraft and media server.
At ${PREVIOUS_JOB}, I changed offices and discovered my window wasn't the only thing I lost in the move: SSH sessions kept dying with MAC errors. It happened with multiple computers with different NICs and running different OSes, so it wasn't computer-related. I tried swapping the patch cable, but that didn't change anything.
On a whim, I took the faceplate off the box containing my (100 Mbps) Ethernet and phone jacks and discovered that both drops were provided over a single four-pair cable originally installed in the late 1980s. More alarmingly, the outer jacket of the cable had been cut off about a foot from the punchdowns on the backs of the keystone jacks, and the entire foot of conductors emerging from the jacket was untwisted and balled up in the box.
Cutting off about 10 inches of each wire, twisting the pairs back together, and punching the wires back down onto the backs of the jacks fixed the SSH problem...
While getting my degree, I worked as a sysadmin assistant for the CS department. This was back in the 10BASE2 days. It wasn't unheard of to find a professor who had rearranged their office and needed a longer patch cable to connect their workstation, so they grabbed some 75 ohm coax, used that, and then wondered why the networking wasn't so good.
Android 4+ supports IPv6 on both the cellular and wireless Ethernet interfaces. It only supports stateless address auto-configuration (SLAAC), however, and does not support DHCPv6 at all. (There's a 3+ year old Android DHCPv6 bug that was declined/closed late last year for, IMHO, unconvincing reasons.)
The ~5 years between AMD's release of the Athlon 64 and Intel's Nehalem were truly AMD's glory days on the server. We had two clusters, one with pairs of 2.2 GHz Opteron 248 CPUs in its compute nodes and another with pairs of 3.4 GHz "Nocona" Xeons in its compute nodes. The Opteron nodes completely wiped the floor with the Xeon nodes in everything we threw at them, despite the Xeons enjoying a >50% clock speed advantage and a newer manufacturing process (90 nm vs. 130 nm).
Intel's "Core 2" CPUs scrapped the Pentium 4's NetBurst architecture in favor of an evolution of the Pentium M architecture (which was, in turn, an evolution of the Pentium III architecture), and Intel was competitive with AMD on the desktop again. Nehalem brought on-die memory controllers and QPI (Intel's HyperTransport-alike) in late 2008, which made Intel the performance champion on multi-socket servers. AMD's Bulldozer architecture was dead on arrival in 2011, and AMD never recovered from that.
Maybe AMD will pull a rabbit out of their hat with Zen...
At ${PREVIOUS_JOB}, I ran a stratum-1 NTP server that got a timecode and PPS signal from a GPS clock. I could tell if someone had left the door to the lab open by what PLL frequency "ntpdc -c kerninfo" reported on that server — the various oscillators in the server would drift above/below their ideal frequencies depending on how hot/cold they were, and the room would get a few degrees colder with the door open.
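If you wanted to automate that "thermometer," here's a quick sketch; the "pll frequency" field-name parsing is based on typical ntpdc kerninfo output, so adjust it if your ntpdc prints something different:

    # Poll the kernel PLL frequency that "ntpdc -c kerninfo" reports and
    # log it, so you can watch it wander with room temperature.
    import re
    import subprocess
    import time

    while True:
        out = subprocess.run(["ntpdc", "-c", "kerninfo"],
                             capture_output=True, text=True).stdout
        match = re.search(r"pll frequency:\s*([-\d.]+)", out)
        if match:
            print(f"{time.strftime('%H:%M:%S')}  "
                  f"{float(match.group(1)):+.3f} ppm")
        time.sleep(60)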
There was no real need for my stratum-1 server, mind you: the university's networking/telecommunications group already ran its own pair of stratum-1 NTP servers, plus four stratum-2 servers. I ran mine simply because the hardware was available and I had an interest in it. You see, the GPS clock and its predecessors (a WWVB clock and a pair of GOES clocks) were relics of a time long past...
[insert wavy flashback transition here]
The lab I mentioned was a seismological observatory. In the days before cheap, high-resolution A/D converters and computers with massive amounts of storage existed, almost everything was analog. Gloriously, unashamedly analog.
Seismograph stations sent data from their seismometers back to the lab over a "dry loop" — a leased line with no dial tone or voltage on it — from AT&T. To do this, the seismometer's output signal was greatly amplified, then frequency modulated onto a relatively low frequency carrier (1–2 kHz). The signal then traveled through AT&T's network all the way to the lab, where the signal was demodulated.
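For the curious, here's a toy numerical sketch of that scheme. The sample rate, carrier frequency, and deviation are values I made up, and the "discriminator" is the textbook Hilbert-transform trick rather than whatever hardware the lab actually used:

    # Toy sketch of FM telemetry over a voice-grade dry loop.
    # All parameters are invented for illustration.
    import numpy as np
    from scipy.signal import hilbert

    fs = 8_000    # sample rate, Hz (comfortably above the audio carrier)
    fc = 1_500    # carrier in the 1-2 kHz range mentioned above
    dev = 125     # peak frequency deviation, Hz (assumed)
    t = np.arange(0, 10, 1 / fs)

    # Stand-in for the amplified seismometer output, normalized to +/-1.
    ground_motion = np.sin(2 * np.pi * 0.5 * t)

    # FM: the message sets the instantaneous frequency, so integrate it
    # to get the carrier phase.
    phase = 2 * np.pi * np.cumsum(fc + dev * ground_motion) / fs
    on_the_wire = np.cos(phase)   # what traveled through AT&T's network

    # Lab-side discriminator (idealized): recover the instantaneous
    # frequency from the analytic signal, remove the carrier, rescale.
    inst_freq = np.diff(np.unwrap(np.angle(hilbert(on_the_wire)))) \
        * fs / (2 * np.pi)
    recovered = (inst_freq - fc) / dev   # approximately ground_motion[:-1]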
Okay, great, we're getting the signals back at the lab, but how do we store these waveforms? The answer is a giant drum, a motor, some paper, and a pen or stylus — a drum recorder.
"Helicorder" was Teledyne Geotech's brand name for their line of drum recorders, but it was so popular that "helicorder" has become a generic name for "drum recorder" in the seismological community, much like "Xerox" became synonymous with "photocopier." (If you ever look at earthquake records on-line, check for "heli" in the URL.)
A piece of paper is wrapped around the drum. A pen/stylus rests on the paper and deflects side-to-side depending on the polarity and magnitude of the input signal. A big positive voltage makes the pen move really far to one side, and a small negative voltage makes the pen move not so far to the other side. The pen is also attached to a threaded shaft that rotates, slowly moving the pen from one side of the paper to the other. The drum itself also rotates, and the rotation speed of the drum and the shaft was usually selectable — most people had recorders with small drums set to record ~24 hours of data per piece of paper, with wider drums set to record for a proportionately longer amount of time. (The higher the drum speed, the better the record quality, but then you had to change the paper more often.)
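If it helps to see the time-to-paper mapping as code, here's a small sketch; the drum speed, line spacing, and pen gain are made-up numbers:

    # Sketch of mapping a continuous trace onto helicorder-style lines.
    # Drum speed, line spacing, and pen gain below are invented.
    import numpy as np

    fs = 10                 # samples per second of the incoming trace
    minutes_per_line = 15   # one drum revolution's worth of data (assumed)
    mm_per_line = 10.0      # spacing between successive lines on the paper
    pen_gain_mm = 3.0       # pen deflection for a full-scale input (assumed)

    def to_paper(trace):
        """Return (x, y) paper coordinates for each sample: x is the
        fraction of the way around the drum, y is that line's baseline
        plus the pen's deflection."""
        seconds = np.arange(len(trace)) / fs
        seconds_per_line = minutes_per_line * 60
        line = seconds // seconds_per_line
        x = (seconds % seconds_per_line) / seconds_per_line
        y = line * mm_per_line + pen_gain_mm * np.asarray(trace)
        return x, y

    # One hour of quiet background noise becomes four 15-minute lines.
    x, y = to_paper(0.2 * np.random.randn(3600 * fs))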
So we can record the signal onto paper, but we're missing a very important thing: time. We need to know exactly when stations saw ground motion in order to locate earthquakes and other seismic events. Enter the GPS clock (and the WWVB clock and GOES clocks before it). The GPS clock received a very accurate time signal and was configured to output a very simple timecode known as "slow code." Slow code works as follows (a small sketch of the logic follows the list):
• At exactly the start of the 0th second of every minute, generate a voltage pulse for some amount of time, usually 2 seconds.
• At exactly the start of the 0th second of the 0th minute of every hour, generate a voltage pulse for some longer amount of time, usually 4 seconds.
• At exactly the start of the 0th second of the 0th minute of the 0th hour of every day, generate a voltage pulse for some even longer amount of time, usually 6 seconds.
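Here's the small sketch of that logic I mentioned; it's my own reconstruction in Python, not the clock's actual firmware or output interface:

    # Given the current UTC time, how long should the slow-code pulse that
    # starts at second 0 of this minute be?
    from datetime import datetime, timezone

    def slow_code_pulse_seconds(now: datetime) -> int:
        """Pulse length in seconds, per the minute/hour/day rules above."""
        if now.hour == 0 and now.minute == 0:
            return 6    # day mark
        if now.minute == 0:
            return 4    # hour mark
        return 2        # minute mark

    # Example (arbitrary date): the mark at 00:00 UTC is the 6-second day mark.
    print(slow_code_pulse_seconds(
        datetime(1995, 6, 15, 0, 0, tzinfo=timezone.utc)))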
This slow code would be added to the signal being recorded by the drum recorder, adding precisely timed "bumps" to the record. When the paper was changed on the drum every 24 hours, someone would write or stamp several pieces of information on the paper: the seismograph station's name, the date, and the time of the first time mark:
Note the column of time marks between the stamped dates. The narrow marks are minute marks, the slightly wider marks are hour marks, and the widest mark (five lines above the little earthquake) is the day mark.
[insert wavy flash-forward transition here]
Eventually, everything at the seismological observatory went digital, and the seismograph stations were upgraded with digitizers that had their own GPS clocks for timestamping data. The WWVB clock and GPS clock sat unused until I cleaned them up and reconfigured them to serve up time for ntpd to consume.
I don't know if they inject ads, but they have the capability to do it. When I connect to an "xfinitywifi" SSID, the first HTTP page load I do will have a little "You're using XFINITY Wi-Fi. Isn't it totally awesomesauce?!"-style pop-up appear for a few seconds in the lower right corner of the browser window.