Is Starlink that low power? The whole array of this one is 1.5 kW. I thought Starlink would be at least a few watts, especially considering its bandwidth.
Yeah it's impressive and I know hams often spend a lot of money on gear. I don't though (I don't even do HF) but it's certainly cool to see.
But for incidental moon tracking I don't really see the need for a phased array other than the cool factor and the knowledge gained building it. Which are perfectly good reasons to do it of course! Just not technical ones.
I totally see what you're saying, but to me this feels different. Compilation is a fairly mechanical and well understood process. The large language models aren't just compiling English to assembler via your chosen language, they try and guess what you want, they add extra bits you didn't ask for, they're doing some of your solution thinking for you. That feels like more than just abstraction to me.
If this is true, then a PM's Jira tickets are an abstraction over an engineer's code. It's not necessarily wrong by some interpretations, but it is not how the majority of engineers would define the word.
And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction.
I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT but... they simply are not except in a stretchy metaphorical sense.
That's only true if you reduce the definition of "compiler" to a narrow `f = In -> Out`. But that is _not_ a compiler. We have a word for that: function. And in an LLM's case, an impure one.
A fundamentally unreliable one: even an AI system that is entirely correctly implemented as far as any human can see can yield wrong answers and nobody can tell why.
That's not entirely the fault of the technology, as natural language just doesn't make for reliable specs, especially in inexperienced hands. So in a sense we finally got the natural-language programming that some of our ancestors dreamed of, and it turned out to be as unreliable as others of our ancestors said all along.
It partly is the fault of the technology, however, because while you can level all the same complaints against a human programmer, a (motivated) human will generally be much better at learning from their mistakes than the current generation of LLM-based systems.
(This even if we ignore other issues, such as the fact that it leaves everybody entirely reliant on the continued support and willingness to transact of a handful of vendors in a market with a very high barrier to entry.)
The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code – assuming the AI agent is capable of producing human-quality code or better?
I agree it's not a layer of abstraction in the traditional sense though. AI isn't an abstraction of existing code, it's a new way to produce code. It's an "abstraction layer" in the same way an IDE is an abstraction layer.
> The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code
Actually yes, because humans can be held accountable for the code they produce.
Holding humans accountable for code that LLMs produce would be entirely unreasonable.
And no, shifting the full burden of responsibility to the human reviewing the LLM output is not reasonable either.
Edit: I'm of the opinion that businesses are going to start trying to use LLMs as accountability sinks. It's no different than the driver who blames Google Maps when they drive into a river following its directions. Humans love to blame their tools.
> Holding humans accountable for code that LLMs produce would be entirely unreasonable
Why? LLMs have no will nor agency of their own, they can only generate code when triggered. This means that either nature triggered them, or people did. So there isn't a need to shift burdens around, it's already on the user, or, depending on the case, whoever forced such user to use LLMs.
At the end of the day, there'll always be someone controlling those AIs, so a person is guaranteed. The exception to this is if AI gets free will, but that would result in just replacing a human person with a digital person, with all the same issues (may disobey unless appropriately paid, for starters) and no benefits in comparison to just keeping the AI will-free.
I don't see the scalability problem here. The logic is the same as when we replaced human computers with electronic ones - responsibility bubbled upwards from the old computers to the employer, which may choose to do things directly through the new computers - which results in keeping all of the responsibilities - or split them in a different way along the other employees, or something in-between.
> At the end of the day, there'll always be someone controlling those AIs, so a person is guaranteed. The exception to this is if AI gets free will [snip]
Honestly that isn't even really true right now. It doesn't require free will or intelligence, it just requires autonomy. People on this very forum have been talking about turning agent swarms loose in harnesses to work and behave autonomously, so we're basically at this point already. The problem I'm describing can easily happen if an agent in a loop goes off the rails.
Does it have to be? The etymology of the word "abstraction" is "to draw away". I think it's relevant to consider just how far away you want to go.
If I'm purely focused on the general outcome as written in a requirement or specification document, I'd consider everything below that as "abstracted away".
For example, this weekend I built my own MCP server for some services I'm hosting on my personal server (*arr, Jellyfin, …) to be integrated with claude.ai. I've written down all the things I want it to do and the environment it has to work in, and let Claude go.
Not once have I looked at the code. And quite frankly, I don't care. As long as it fulfills my general requirements, it can write Python one time and TypeScript the other, should I choose to regenerate from that document. It might behave slightly differently, but that is ok to a degree.
From my perspective, that is an abstraction. Deterministic? No, but it also doesn't have to be.
This is quite a milestone for open silicon. Having a completely auditable path from RTL down to GDS targeting the GF180MCU via wafer.space is no small feat, especially pulling it all together with a Nix-integrated toolchain and Dart for the hardware generation.
On the I/O side, getting even a basic 400MHz oversampled SerDes into a first-gen test chip puts this way ahead of most academic open FPGA efforts.
Really looking forward to seeing the Terra family expand and how the test chips perform.
Wild hardware flex for a garage project. Reverse-engineering the Pi 5's MIPI to push 5.6 Gbps from custom MASH sigma-delta ADCs to a Lattice ECP5 FPGA to the Raspberry Pi is serious engineering. The idea that the RF receiver looks like a "camera" to the Pi while the transmitter is a "display" is super creative. Getting a 1.5 kW, 240-antenna EME array for $2,499 is actually cheap for something like this.
Their standalone 4-antenna tiles (https://moonrf.com/updates/) show off some killer apps, like 30 fps spatial RF visualization and NEON-optimized drone video interception.
I'm rolling my eyes at the "Agentic Transceiver" part, though. It is highly doubtful that an onboard AI casually writes, debugs, and compiles a real-time C app with analog video color sync recovery and decode in ten minutes.
> Reverse-engineering the Pi 5's MIPI to push 5.6 Gbps from custom MASH sigma-delta ADCs to a Lattice ECP5 FPGA to the Raspberry Pi is serious engineering
Using video interfaces to transfer arbitrary data at high speeds is becoming a common trick for cheap boards with limited interfaces. Video inputs and outputs are generally highly mature and optimized to avoid dropping frames because everyone wants reliable video. Putting arbitrary data into video IO pipelines is a cheap way to get high speed IO through standard interfaces.
There is a cool project that uses cheap HDMI to USB capture devices for high speed data transfer out of cheap FPGA boards that have HDMI output [ https://github.com/steve-m/hsdaoh ]
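To put a rough number on why video pipes are attractive as data pipes, here's a back-of-the-envelope payload calculation. The mode and bit depth below are just illustrative example numbers, not what hsdaoh or this project actually uses:

```python
# Rough illustration: even a plain 1080p60 stream at 16 bits/pixel
# carries about 2 Gbps of payload, and the video pipeline moves it
# with hardware flow control and no dropped frames.

def video_payload_gbps(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    """Raw payload rate of a video mode, ignoring blanking intervals."""
    return width * height * fps * bits_per_pixel / 1e9

rate = video_payload_gbps(1920, 1080, 60, 16)
print(f"{rate:.2f} Gbps")  # ~1.99 Gbps
```

Real interfaces also spend bandwidth on blanking and sync, so usable capacity is somewhat higher than the active-pixel payload suggests if you can stuff data into those intervals too.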
In a perfect world, using PCIe directly would be a much better solution for a project like this. Having access to PCIe DMA support directly without relying on video IO peripherals is helpful for high speed ADC/DAC applications like this. It would also make the board more portable to other SBCs.
The ECP5-5G can do PCIe 2.0 x2 or PCIe 1.0 x4, which would provide around 8 Gbps of data transfer. The problem is that the Raspberry Pi 5 only exposes a single PCIe lane to the user. The other 4 PCIe lanes of the Raspberry Pi 5 SoC are routed to the RP1 chip, which provides the MIPI CSI and DSI interfaces used in this project. So the data is going through a convoluted path instead of being connected to PCIe directly.
I would have to look at the details more closely, but even using the PCIe 2.0 x1 port (around 4 Gbps after overhead) on the Raspberry Pi would be close in bandwidth to the 5.6 Gbps number they give for their custom MIPI solution.
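The comparison is easy to sketch: PCIe 2.0 runs 5 GT/s per lane with 8b/10b line coding, so a single lane tops out around 4 Gbps before packet overhead. A quick back-of-the-envelope check against the quoted 5.6 Gbps:

```python
# Effective PCIe 2.0 payload rate per lane count. Real throughput
# loses a few more percent to TLP/DLLP packet overhead, so these are
# upper bounds, not measured figures.

PCIE2_LINE_RATE_GBPS = 5.0    # PCIe 2.0: 5 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10  # PCIe 2.0 uses 8b/10b line coding

def pcie2_effective_gbps(lanes: int) -> float:
    """Data rate after 8b/10b coding (ignores packet overhead)."""
    return PCIE2_LINE_RATE_GBPS * lanes * ENCODING_EFFICIENCY

x1 = pcie2_effective_gbps(1)  # 4.0 Gbps -- close to the 5.6 Gbps MIPI path
x2 = pcie2_effective_gbps(2)  # 8.0 Gbps -- comfortably above it
print(f"x1: {x1:.1f} Gbps, x2: {x2:.1f} Gbps")
```

So x1 is in the same ballpark as the MIPI path, and x2 would clear it with headroom.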
I think the Raspberry Pi 5 is a good first choice for most projects because it is widely supported and has the largest community, but for a project like this, moving to a different SBC with PCIe 2.0 x2 would have been helpful. Keeping the project semi-independent of the SBC has a lot of benefits.
Unfortunately the ECP5-5G FPGA (with the SERDES/PCIe option) costs way more than the ECP5 (without SERDES). The Pi 5's MIPI interfaces give you 8 parallel LVDS lanes that can run at 640 MHz each, which is manageable for a cheap FPGA.
While true, I do worry that it mandates a Pi 5 for each tile. And who knows how specific it is to the 5. Doesn't seem very open relative to something like USB SuperSpeed, PCIe, or 10GbE. USB could maybe be done with the LIFC-33U depending on I/O limitations. PCIe can be done on various FPGAs in the Lattice lineup and others.
If you use PCIe, theoretically you don't need to reverse engineer how they implemented it, because you're not at the edge of the spec like they are here.
That said, I've thought about doing what they're doing countless times and it is nice to see it would work.
I'm struggling to understand the signal chain or antenna architecture here. If those two MAX chips are 2829s, this would be 2x2 MIMO per tile, but I'm not super familiar with that product line, and the PCB layout looks like a 4x4 setup.
And yeah, the agentic stuff is dumb, I've played a ton with doing low level SDR work on Opus 4.6 and it's truly ass.
Also, the "can't radar, plz don't ITAR" is horseshit. Some basic fw tweaks and you could get this to be, at the very least, a sweet FMCW setup.
I used to work on radar systems. The point being that the hardware is fully capable. The software side is quite well understood at this point. There will be plenty of repos floating around in a year to turn this into an airborne drone SAR or whatever. Functional range resolution will be around 4 m, but that's plenty for most shenanigans.
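For anyone wondering where a ~4 m figure comes from: FMCW range resolution is set by sweep bandwidth, dR = c / (2B). Working backwards from 4 m gives the bandwidth such a setup would need (the 4 m number is from the comment above; the rest is just the standard formula):

```python
# Range resolution vs. sweep bandwidth for an FMCW radar.
# dR = c / (2 * B), so 4 m resolution needs roughly 37.5 MHz of
# sweep bandwidth -- well within Wi-Fi-band transceiver territory.

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    return C / (2 * bandwidth_hz)

def required_bandwidth_hz(resolution_m: float) -> float:
    return C / (2 * resolution_m)

bw = required_bandwidth_hz(4.0)
print(f"{bw / 1e6:.1f} MHz")  # ~37.5 MHz
```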
> Also, the "can't radar, plz don't ITAR" is horseshit.
My assumption is that they're trying to avoid crossing a legal line, as opposed to being personally invested in the idea of preventing radar use by a determined hobbyist.
ITAR feels a lot like Bernstein v. US all over again. Until very recently, everyone who can do anything that would be covered by ITAR was a giant corporation that likes the moat that regulations create, so it's unthinkable to challenge it. But that is changing, just like cryptography was in the early 90s.
An RTL-SDR-grade fleet doing passive radar (using radio/TV OTA broadcasts) isn't actually that new; but pretty much any detailed reports got self-censored after TLA visitors came by.
I think they're claiming the actual transmit power is 240W (23.8 dBW), and the EIRP is 63.1 dBW.
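Taking those two quoted numbers at face value, the implied antenna gain is just EIRP(dBW) minus Pt(dBW). A quick sanity check (my own back-of-envelope, not from the spec page):

```python
# Implied gain from the quoted figures: EIRP 63.1 dBW, Pt 240 W.
# For an ideal 240-element array, the array factor alone contributes
# 10*log10(240) ~= 23.8 dB; the remainder would have to come from
# per-element gain.

import math

def db(x: float) -> float:
    return 10 * math.log10(x)

pt_dbw = db(240)                              # 240 W -> ~23.8 dBW
eirp_dbw = 63.1                               # quoted EIRP
implied_gain_dbi = eirp_dbw - pt_dbw          # ~39.3 dBi total
array_factor_db = db(240)                     # ~23.8 dB from element count
per_element_dbi = implied_gain_dbi - array_factor_db  # ~15.5 dBi
print(f"{implied_gain_dbi:.1f} dBi total, {per_element_dbi:.1f} dBi per element")
```

Note this ideal-array arithmetic ignores taper and feed losses, so the real per-element requirement would be even higher.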
I am sort of skeptical of the claimed gain... even at 6GHz, you need a 2-meter parabolic reflector to get 40dB, the array is 1/10th that diameter. EDIT: Ignore this second paragraph I misread the spec page.
I went down that rabbit hole. Apparently, he was a member of the Saskatchewan Social Credit Party. It looks like the party never made inroads in Saskatchewan, but the party controlled Alberta for decades. Then I ran into the following comment in the article:
> If mental illness is on the rise, then the obvious solution is on-demand therapists through an app
That is not the only "solution". Alberta had a Eugenics Board for the entire run of the Social Credit Party. One of the roles of this board was to sterilize people with mental illness. (The board predates the party by about a decade, but was only abolished about a year after they lost power.) While this is a couple of leaps from the Technocracy movement, the mere association is rather scary.
Controlled Alberta for decades and BC as well though in BC it transformed into more of a big tent "everyone but the NDP" conservative party. Still run by lunatics though.
In Saskatchewan prairie populism took a left wing form instead.
In Alberta the taint of these people never went away. Lougheed's progressive conservatives pulled Alberta governance a bit more mainstream for a couple of decades, but Smith's UCP has dragged it right back. Magazines like Alberta Report and hangers-on kept far-right prairie populism alive for decades. Preston Manning (Social Credit premier Ernest Manning's son) mainstreamed it in the Reform Party, which essentially took over the federal conservative party. There's a wellspring of this stuff in rural Alberta, full of all sorts of paranoid persecution-complex politics and undertones of anti-Semitism (sometimes outright explicit, as in the whole James Keegstra affair), with everything to the left of them considered "communism". And these days, with bags of money being dumped on them from the US, they have now managed to get enough signatures to force a referendum on "independence" (aka annexation by the US).
As a person from Alberta originally and with all my family still there, I find it all a bit terrifying. Very much not a relic of the past, and with COVID and now Trump the lunatic fringe has outsized influence there like it never did before.
I suspect if you probe the right people from the UCP in the right church basements where they're off-mic you'd still find them defending things like eugenics etc.
"SpaceX lacks the hardware or a plan with scientific viability for a genuine Mars program. The scientific community has long recognized the facade. A feasibility study published in the journal Nature definitively concluded that a crewed Mars mission using Starship is unworkable. The vehicle's massive dry weight creates a severe delta-v deficit, making a return flight physically impossible. Furthermore, the architecture lacks closed-loop life support and relies on massive, non-existent nuclear power and water-mining infrastructure. Instead of building for Mars, Starship is a heavy-lift vehicle with low characteristic energy (C3). This is only a reasonable design for driving mass to low-Earth-orbit constellations, a trajectory that perfectly mirrors decades-old Pentagon objectives and has recently manifested as Golden Dome.
In the 1980s, Michael D. Griffin architected "Brilliant Pebbles," a global missile-interceptor network made up of thousands of weaponized satellites in Low Earth Orbit. It died alongside Reagan's Strategic Defense Initiative in the 1990s, after the DC-X reusable rocket program failed to lower launch costs. The architectural dream survived through "New Space" advocacy. Griffin co-founded the Mars Society and recruited Elon Musk after Musk was brought to his attention by Peter Thiel. In 2001 Griffin and the young Musk traveled together to Russia to examine ICBMs. SpaceX was conceived on the flight home to solve the exact launch bottleneck that killed Brilliant Pebbles. Musk later admitted the company was simply "continuing the great work of the DC-X project," and it was ultimately Griffin, later acting as NASA Administrator, who awarded billions of dollars in contracts that saved a zero-experience SpaceX from bankruptcy.
SpaceX's masking began to slip when Gwynne Shotwell publicly confirmed the company's willingness to launch offensive weapons in 2018. That same year, Griffin returned to the Pentagon to establish the Space Development Agency, mandated to build a proliferated LEO constellation for hypersonic missile tracking. In 2019, U.S. General Terrence O'Shaughnessy pitched the Senate on "SHIELD", a layered orbital missile defense system. Shortly after, O'Shaughnessy retired from the military and joined SpaceX to lead their discreet new division: Starshield.
Three decades later, Brilliant Pebbles is finally materializing as Golden Dome. As Reuters reported, Musk's Starshield is the frontrunner to build this classified SDI successor, pitching the Pentagon on a Golden Dome architecture involving thousands of weapon satellites. Starshield is already deploying these military satellites alongside standard Starlink satellites.
Mars was the necessary myth to recruit talent, capture public imagination, and secure capital. But as the Nature study proves, Starship was never physically capable of planetary colonization. The capabilities SpaceX actually delivered, cheap mass-to-orbit and rapid satellite replenishment, are the exact prerequisites of Golden Dome.
"
And of course it's open source: https://github.com/open-space-sdr/main/