Hacker News | ianhowson's comments

This is the best idea to come out of this whole scheme. Space solar panels are super cheap and efficient? Prove it! Launch them and transmit the energy down.

This is orders of magnitude easier than the original proposal -- and yet still nonsensical.


Beaming power down to Earth from space-based solar collectors is a concept that's been around for a while.

"Dr. Glaser is best known as the inventor of the Solar Power Satellite concept, which he first presented in the journal Science for November 22, 1968 (“Power from the Sun: It’s Future”). In 1973 he was granted a U.S. patent on the Solar Power Satellite to supply power from space for use on the Earth."

One thing that always struck me was that you wouldn't want to be living near the "collectors". A very small angular error in beaming could result in being literally microwaved.

https://nss.org/in-memoriam-peter-e-glaser-1923-2014/


> "A very small angular error in beaming could result in being literally microwaved."

One of the SimCity games had this as an occasional disaster event. You had to make sure your ground collector stations weren't too close to the rest of the city or risk setting your buildings on fire.


Reminds me of the malfunctioning space laser news story from Robocop.

https://www.youtube.com/watch?v=4CrTJEY8zNs

Such fun.


We could use the energy to power a data center on earth!


Your idea lacks depth.

... could we instead beam the energy down to a data center on the sea floor?


An old idea, impractical though maybe not absolutely impossible. Somehow they made it worse.


I had a good laugh.

- You can't build 40MW of solar panels for $2M, even with theoretical maximum efficiency. You can't even build the cabling and regulators at that price.

- You need battery storage -- not as backup, but as your primary source. It is going to cost more than $2M. Batteries are heavy and will cost a lot to launch. This isn't even solved on the ground yet.

- You need a heat transport medium to move heat into your massive radiator. Either you use water or you use air or you use heatpipes (metal). You have to pay for the cost and weight and launch expense. This is probably half the weight of the rack and I haven't bothered to do the math about how you transport heat into a 500 foot solar sail.

- Let's not even talk about how you need to colocate multiple other racks for compute and storage. There aren't any 1 TB/s orbital link technologies.

- Rad shielding? It doesn't work, but I'll let this slide; it seems like the least problematic part of the proposal.

- 15 year lifetime? GPUs are obsolete after 12 months.

I don't want to be the guy who shoots stuff down just for fun, but this doesn't even pass the sniff test. Maybe you can get 10x cheaper power and cooling in space. Still doesn't work.
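Back-of-envelope on two of those points, with my own (optimistic) assumptions for panel cost per watt and radiator temperature:

```python
# Rough sanity check on the $2M / 40 MW claim and the radiator size.
# All numbers here are my assumptions, not from the proposal.
P_WATTS = 40e6          # claimed 40 MW
BUDGET = 2e6            # claimed $2M
COST_PER_WATT = 0.25    # optimistic $/W for space-rated panels alone

panel_cost = P_WATTS * COST_PER_WATT
print(f"Panels alone: ${panel_cost/1e6:.0f}M vs ${BUDGET/1e6:.0f}M budget")

# Radiator area needed to reject 40 MW via the Stefan-Boltzmann law,
# ignoring sunlight and Earth IR hitting the radiator (so this is a floor).
SIGMA = 5.67e-8         # W/m^2/K^4
EMISSIVITY = 0.9
T_RAD = 300.0           # radiator temperature in K, roughly room temp
flux = EMISSIVITY * SIGMA * T_RAD**4   # W/m^2 radiated
area = P_WATTS / flux
print(f"Radiator area at {T_RAD:.0f} K: about {area/1e3:.0f} thousand m^2")
```

Even with those generous numbers, the panels alone blow the budget and the radiator is on the order of a hundred thousand square meters.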


Also: repairs. Every time I read someone’s story about large-scale ML training, a bunch of it is about identifying failing or flaky equipment and fixing it. That’s not so easy in space.


Nonsense, it's right there in the acronym: Space Reliability Engineer (or I guess one could also just leave "Site" as is, since space is for sure a site). That PagerDuty rotation is gonna be hell


At this point as long as you say AI 3 times someone will give you money.

I guess you need connections too, and maybe a previous exit.

This idea in particular doesn't make any sense... Currently. Maybe in a decade or so with better technology.

Although the prospect of polluting the stars themselves with a bunch of computers generating AI slop... We paved paradise to put up a parking lot.


It makes sense if you want to put a nearly unkillable AI in orbit to control life on earth :D


Where do I buy some of these 12 month old obsolete GPUs?


eBay works for me


What are you searching for?


Either this is performance art, or interest rates are too low.


Or it's a cover for putting something else up in orbit secretly.


Good point about rad shielding... how can you even trust your calculations when everything is being grilled by charged particles?


GPUs are not obsolete after 12 months. Look at how Nvidia is stagnating for their 50 series lineup.

The biggest problem is software. The CUDA stack is not maintained forever and certainly less than 15 years.


If you have continuous sunlight, can't you get away with no battery?

Not arguing with your overall point - this company looks ridiculous.


For continuous sunlight you need to either go for a polar orbit or go very far out in space. Most launch centers and providers are not well situated for polar orbits because it's not a common use case, so you sacrifice launch mass. The same goes for distant orbits: you sacrifice launch mass to go further, and if you're far away you also get latency issues.

So it skews the economics pretty harshly. I think OP is right - you need good batteries somehow.
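To put a number on the battery question, here's a quick sketch of how much of each orbit is dark, using a simple cylindrical-shadow model at worst-case beta angle (altitudes are my picks):

```python
import math

# Fraction of a circular orbit spent in Earth's shadow, cylindrical-shadow
# model, worst case (orbit plane contains the sun vector). Illustrative only.
R_EARTH = 6371e3  # m

def eclipse_fraction(altitude_m: float) -> float:
    r = R_EARTH + altitude_m
    # Shadowed while within arcsin(R/r) of the anti-solar point.
    return math.asin(R_EARTH / r) / math.pi

print(f"550 km LEO: {eclipse_fraction(550e3):.0%} of each orbit in shadow")
print(f"GEO:        {eclipse_fraction(35786e3):.0%} (and only near equinoxes)")
```

In LEO you're dark for roughly a third of every orbit, so without batteries you lose a third of your compute or carry a third more panels than you can use.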


I think the proposal suggested an orbit where the solar panels are always in sun and always properly aligned and always clean due to space gophers.

But more seriously, GPU loads are super spiky. Ground-based power grids and generators and batteries have trouble keeping up with them. You can go from 1MW idle to 50MW full power in 10ms. Unbuffered solar cells are right out.


> "GPU loads are super spiky... You can go from 1MW idle to 50MW full power in 10ms."

That sounds like something that could be addressed in software, if necessary? Cap/throttle the GPUs according to the available power, and ramp power up/down gradually if spikiness is the issue.
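Something like a slew-rate-limited power cap would do it; a toy sketch with made-up numbers (none of this is a real GPU management API):

```python
# Ramp-limited power capping: move the GPU power cap toward available
# power, but never faster than the supply/buffer can follow.
MAX_RAMP_W_PER_S = 2e6  # assumed allowable slew: 2 MW/s

def next_power_cap(current_cap_w: float, available_w: float, dt_s: float) -> float:
    """Step the power cap toward available_w, slew-rate limited."""
    max_step = MAX_RAMP_W_PER_S * dt_s
    delta = available_w - current_cap_w
    return current_cap_w + max(-max_step, min(max_step, delta))

# Demand jumps from 1 MW to 50 MW; the cap ramps over tens of seconds
# instead of the 10 ms spike the parent describes.
cap = 1e6
for _ in range(5):
    cap = next_power_cap(cap, 50e6, dt_s=1.0)
print(f"Cap after 5 s: {cap/1e6:.0f} MW")
```

The tradeoff is that your very expensive GPUs spend the ramp time idle-ish, which is exactly the economics problem again.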


> You need battery storage

How so? Is it not possible to position the satellite in an orbit that keeps it in perpetual sunlight?


I thought rad shielding works?

Just that it tends to involve heavy AF materials like water


Most comments on this page are about the problem with heat. You're saying the problem is battery storage.

... couldn't you just merge both problems into a solution - your radiators ARE your power source?


Temperature isn't a power source; heat flowing across a temperature gradient can be. But that brings us back to the first problem - how to make it flow.
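Even granting a Carnot-ideal converter between guessed hot and cold temperatures, the recoverable fraction is small:

```python
# Carnot limit on recovering work from waste heat flowing from the hot
# electronics loop to a cold radiator. Temperatures are illustrative guesses.
T_HOT = 360.0   # K, coolant loop hot side
T_COLD = 280.0  # K, radiator cold side

eta_carnot = 1.0 - T_COLD / T_HOT
print(f"Carnot efficiency: {eta_carnot:.1%}")
# ~22% at the theoretical best, far less in practice -- and the radiator
# still has to dump the remaining ~78%, so the radiator problem stays.
```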


1. They're overpriced. A quick Amazon search will show comparable kits for half of the price. You can buy a whole prebuilt ebike for the same price.

2. They're dangerous. FWD ebikes have a tendency to break traction on corners.

3. They tend to destroy the dropouts in your fork.

4. Installation is much more complex than "pop off your bike's front wheel". I can't even find the axle compatibility specs on their website. You know that there are many standards for front axles, right?

Do not recommend.


Irrelevant.

1. Downstream of the mains power supply are DC-DC converters that run the router hardware. Those contain the filters and capacitance you think you're fixing. Nothing in that router actually cares about mains power quality. They absolutely do not care about perfect sinusoids.

2. If you were seeing insufficient power to the router, you would observe crashes and faults -- not slowdowns.

3. Two different routers showed the same behavior, which suggests that the fault lies outside the router+power supply and has more to do with something common (e.g. network, laptop).

The dip shows a reduction in voltage, and a larger one than I would like, but without a scale on either time or voltage, it's difficult to guess if it actually matters. I would suspect not, since the device does boot successfully. Again, the voltage doesn't matter, since the router runs off its internal DC-DC supplies, not the external power supply.

I'm happy that the capacitor and new supply has fixed the issue, but I'm unconvinced by the explanation. Check grounding between inverter and laptop.


Generic issues like brownouts and crashes I can believe as a power fault. Slowdowns? Not likely.

Possibly he's not describing the problems right. I can certainly believe a shitty enough power supply would cause problems.


When I hooked a Raspberry Pi (model B maybe?) up to an analog input on my Marantz NR1605, the UI of the receiver got noticeably laggier until it was barely responsive. Disconnecting made the UI perk back up. I noticed the Pi had a 2.5V DC bias on its audio out. I didn't investigate further.

As for the article, I'm left with more questions. Surely the solar inverter wasn't running all the time in series, like a double conversion UPS, right? So how would the mains waveform have been significantly affected in normal operation?

Also those scope traces, what's the scale? Are we talking 100mV dip, or a 1V dip? And is that a storage scope, or is that one cycle of the supply's ongoing ripple? And the complete lack of any dip on the "fixed" trace with extra capacitance makes me wonder if they even got the triggering right.


I think the inverter is always running in between the batteries and the AC outlets. It's a dedicated "solar circuit".

I have questions. What's the point of a dedicated solar circuit? They are leasing from their "local lines company". If the lines company is in the leasing business, this system must be grid-tied right? Why the dedicated solar circuit?

Also, is it common for full sine wave inverters to produce power less clean than the grid? Maybe when the batteries are low? Curious.


There are oscillators out there that slow down as they brown out. That would cause those symptoms. A lot of logic families have slower edges and longer delays too, not that there would be much asynchronous logic in a router.

It could also be affecting the analog circuitry if the droop is too bad or it's browning out. That could mean loss of gain, SNR, etc., which could cause packet loss and retransmissions.


Are any of those oscillators running off mains power rather than off a lower, separately-regulated supply?


That actually happened at my job a few years ago, where some extra power draw off a power supply that was also feeding a PC would slow down the code running there. It does sound crazy...


I’m super fascinated by the rare occasion that digital things degrade in an analog fashion. Usually they just work or don’t.


Yeah, I've lost two modem/routers to power outage incidents, despite them being on a reasonable quality surge protector.

But that was an all-or-nothing failure mode in which they would power up but never do anything else. Performance changes is a claim that requires much stronger evidence.


My neighbour's internet was terrible last year. He replaced a bunch of stuff, and went as far as switching ISPs to my recommended provider. Nothing worked.

Finally a savvy helpdesk person had him move his ADSL router. I believe he moved it off a cheap power board and plugged it directly in to the wall.

All problems immediately solved.


If it’s being moved inside the house, it could also have been a different phone jack that made the difference, or a different cable connecting to the jack.


It wasn't that. It was the power board.


ADSL is particularly sensitive to electrical noise. I'm afraid to set up my HF ham radio again.


I agree that you'd probably see crashes, but it still could be power supply related. Crappy SMPS designs do tend to shovel some noise from the primary into the secondary side as well as adding their own. And there's a lot of noise generally coming from an inverter like the one described. It might just be adding noise to the line and crapping on the SNR occasionally. ADSL is quite robust, but it does slow down horribly if you bend it hard with noise. I used to be able to slow my line down keying my amateur radio transmitter back in the day.
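Rough illustration of how SNR loss eats sync rate, using per-tone Shannon capacity (the tone count and symbol rate are ballpark ADSL figures, not a real DSL model):

```python
import math

# Per-tone capacity: bits ~= log2(1 + SNR). DMT caps at 15 bits/tone.
TONES = 224          # rough ADSL downstream tone count
SYMBOL_RATE = 4000   # DMT symbols per second

def sync_rate_bps(snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)
    bits_per_tone = min(15, math.log2(1 + snr))
    return TONES * bits_per_tone * SYMBOL_RATE

clean = sync_rate_bps(40)   # quiet line
noisy = sync_rate_bps(20)   # inverter hash on the line
print(f"{clean/1e6:.1f} Mb/s -> {noisy/1e6:.1f} Mb/s")
```

A 20 dB hit roughly halves the rate rather than killing the link, which matches the "slowdowns, not crashes" symptom.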

About the best thing they did was adding a choke on it.

On top of that, it's a crappy 40MHz analogue scope. You're not going to see anything useful.


A fancy modern turbo'ing CPU which has power availability feedback loops might just slow down in this scenario, but I don't think anyone has put anything remotely that fancy into a low power SoC that these routers were using.

So yeah, seems unlikely the only impact would be a sluggish dashboard. Maybe the device was churning on error re-transmissions from the brownout? Like the CPU itself was OK but the ethernet ports weren't?


> Maybe the device was churning on error re-transmissions from the brownout

I think this is more likely. Two different routers impacted. Crappy grounding or induced noise causing high BER on the links.


How fancy do we really have to get? A Raspberry Pi can detect an inadequate power supply and slow down.

Granted, cheap consumer devices are much simpler than that, but it's still something that can be added to the SoC.
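For reference, the Pi firmware reports exactly this behavior via the `vcgencmd get_throttled` bitmask; a quick decoder, assuming the documented bit layout:

```python
# Decode the Raspberry Pi `vcgencmd get_throttled` bitmask, which is how
# the firmware reports "slowed down because of bad power".
FLAGS = {
    0: "under-voltage now",
    1: "ARM frequency capped now",
    2: "currently throttled",
    3: "soft temperature limit active",
    16: "under-voltage has occurred",
    17: "ARM frequency capping has occurred",
    18: "throttling has occurred",
    19: "soft temperature limit has occurred",
}

def decode_throttled(raw: str) -> list[str]:
    value = int(raw.partition("=")[2], 16)  # e.g. "throttled=0x50005"
    return [name for bit, name in FLAGS.items() if value & (1 << bit)]

print(decode_throttled("throttled=0x50005"))
```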


I think the original Broadcom chips were overstock in the normal market because they were somewhat a nuisance: unnecessary checkboxes that rewarded you with further testing/diagnostic complications in return for paying too much.


They were overstock because they were underpowered, overpriced garbage targeting the set-top market and those manufacturers weren't interested.

Eben Upton started Raspberry Pi to help Broadcom dump stock they couldn't get rid of otherwise.

Look at every single Pi to come out: each has been faster than what came before it, but within a matter of weeks half a dozen competitors have better boards with faster processors for cheaper - ones without nonsense like the RPi Foundation repeatedly fucking up the power supply so vendors could milk people with "Pi compatible" USB power bricks.

A Pi 5 16GB costs $120. Plus case ($10) plus power supply ($12) plus video adapter ($10)...$152. That is absurd.


A huge chunk of the value of an RPI is the ecosystem & support. The actual hardware in & of itself is consistently mediocre to poor for the price these days.

I think RPI is starting to lose sight of that with the Pi 5 and especially the 16GB models. It's starting to just be expensive, and the better support for an ARM SBC loses a lot of value when it's butting up against x86 mini-PCs like the many N100-based options at around the $150 price point[1]. They aren't really any less efficient, and x86 software support is still consistently better than ARM software support.

1: eg https://www.amazon.com/Beelink-Intel-N100-Computer-Desktop-D... - $170 for a complete unit w/ 16GB RAM & 500GB SSD.


It's funny that people were once proud of the pragmatic origin story and now apparently ashamed.


Cool, I was going to research what a 16GB ram OrangePi would cost but their website is down for me.


Well, there are encrypted CNIs like Weave. I've used Calico over ZeroTier to similar effect. The network is 'encrypted' and there isn't much effort required past initial configuration.

But that's not really the issue. You still have a big plaintext network with a bunch of random stuff talking, no mutual auth and no security controls other than segmentation. That's the tricky problem that mTLS and service meshes attempt to solve.


First, I'll respond to this w.r.t. the k8s CNI specifically: all inter-node traffic is encrypted; the only plaintext is on localhost. If you're worried about network snooping on localhost, you've got bigger problems. As for security controls, that's what Network Policies are for.

Outside of k8s (where one has greater control over how Wireguard specifically is deployed), again, there is no plaintext outside of localhost. Wireguard is mutual auth; I'm not sure why you think it isn't. Wireguard + firewall is a security control since, well, you have mutual auth, so rules can be applied per-client.
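On the k8s side, the kind of Network Policy control I mean is e.g. a default-deny baseline (namespace name here is hypothetical):

```yaml
# Minimal default-deny ingress policy for a namespace: every pod is
# selected, and with no ingress rules listed, all inbound is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

You then add narrow allow rules per workload on top of that baseline.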


Further, it's not that the magnet 'hurts' the pacemaker or device in any way. The presence of a magnetic field is used as a control signal to change the operating parameters of the device.

This is critically important for a device that may not have any remote control functionality or easy way to reprogram it other than 'cut up the patient'.

(Keep in mind that these things were designed decades ago, and 'RF remote control of pacemaker' probably introduces more problems than it solves, even today.)


Has much changed in the pacemaker world since this article was published, warning of the potential for hackers controlling pacemakers remotely?

https://www.cbc.ca/news/world/cybersecurity-heart-devices-im...


There's now FDA guidance on cybersecurity risks, but in a nutshell, it says "you must consider cybersecurity risks in your risk management plan." Manufacturers were (usually) doing this anyway but the probability and impact have been reassessed since the various lawsuits.

I don't work on pacemakers, specifically, but I'm confident you can no longer say "it has RF remote and nobody can buy the programming interface, therefore it's secure", because that's what went wrong before.

Instead, the risk management plan will look something like:

1. "magnet only with big disclaimers" (because you need the emergency cut-out switch; 'patient died because malfunction' is a worse outcome than 'maybe someone put a big-ass magnet on someone's chest and they got mostly non-life-threatening symptoms')

2. optionally, some form of 'secured' RF interface. Don't presume SSL. This is risk management, remember, so it's probably enough to say "hold a magnet on and then we enable unencrypted unauthenticated comms". Or maybe they do use SSL and just put way bigger micros in there. I don't know.

(edit: someone below who knows more than I do says "use Bluetooth", possibly triggered by magnet taps. Makes sense to me.)


I lived in the Gold Coast for a year while my job was in Sydney. I commuted by air once a week. Total commute time and housing expenses decreased.


Powertrains are much, much better.

- You can shift under load (usually)

- Shifts are a little faster

- Clutches give less chainslap

- Narrow-wide rings hold the chain freakishly well


At least here in NorCal, it's not trail damage that's the issue. It's:

* people riding beyond their ability (too fast, too technical) and getting injured

* batteries overheating and stranding the rider somewhere they can't climb out of, requiring heli rescue and risking wildfires

I also don't like to be overtaken by someone who isn't suffering as much as I am [1] and it makes me feel bad.

[1] Not really, I also ride a road e-bike which strokes my fragile ego.


I'm personally convinced that there's no justification for banning eMTBs (or any other negative sentiments toward them really) beside gatekeeping.

There's nothing about eMTBs allowing people to venture onto more difficult trails than with normal MTBs. Most terrain that is too hard on a normal MTB will be even harder on an eMTB, because you lose some agility and the ability to carry the bike over unrideable sections.

I often see people complain about "how fast eMTBs go in some climbs, which is dangerous to bystanders". This is the most ridiculous argument, completely unrelated to eMTBs. Regardless of the terrain (uphill, downhill, flat, ...) I always slow down to ~walking speed when passing someone if the trail isn't wide enough to put at least ~1.5 meter between me and them. Meanwhile the same people who complain about eMTBs being dangerous bomb down the hill past hikers at 25+ km/h. Anyway, a long rant to basically say that any argument about speed is only ever relevant if brought up in the context of educating people to be more courteous riders regardless of the bike they ride.

Finally, you're orders of magnitude more likely to be stranded because you punctured and forgot your repair kit (or suffered whatever other damage to your bike that can't be fixed trailside) than due to an overheating battery or whatever improbable scenario people come up with to justify hating on eMTBs.


I generally agree with you. I don't personally find eMTBs to be bothersome, and fully expect to be riding one when I'm not physically able to ride an acoustic MTB. And I definitely agree with you re. bystanders, where regular MTBs are fast enough to be dangerous.

The only point where I disagree is:

> There's nothing about eMTBs allowing people to venture in more difficult trails than with normal MTBs

If you want to go down the hill, you need to climb up the hill. This is a skill and fitness gate, normally; you're not going to tackle a difficult or extended descent if you don't think you can climb out again.

Soquel Demo Forest, one of the very popular Bay Area spots, bans eMTBs. It might be reasonable in this case due to the elevation profile. From the car park you need to climb for 20-40 minutes. You descend. You then have another long climb back to the carpark. It's a worst-case scenario for batteries, and because the area is so popular, it tends to attract people riding beyond their limits already.

I don't personally care, and I don't want to tell people not to enjoy an activity, but I can understand the reasoning in some cases.


It won't work with the LG Ultrafine 5k, which requires a Thunderbolt connection to run at full resolution, and behind-the-scenes is a tiled pair of DP1.2 displays.

It ought to work with a DisplayPort 1.4 5k, though there aren't many of those.


Thanks. So as far as I can tell, that ridiculously expensive switch is still the only option (if even, I wasn't able to fully confirm), and nowhere near worth the cost vs. just physically swapping the cable between computers.

