Hacker News | sliken's comments

Only if you are in a hurry. Say an advanced civilization has been around for 1M years (a head start of only ~0.07%). It might well be worth sending out millions of drones at 1% of the speed of light to the most promising areas; their advanced sensors, telescopes, and science would likely let them pick the most likely stars based on metal content, a stable vicinity (i.e., stable for 1B years), water, temperature, etc.
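A rough sketch of the timescales implied by a 1%-of-c cruise speed (straight-line distances; the target list is just illustrative):

```python
# At 1% of light speed, a distance of d light years takes d / 0.01 = 100 * d years.
speed_c = 0.01  # fraction of light speed, from the comment above

for target, distance_ly in [("Proxima Centauri", 4.25),
                            ("a 100 ly survey radius", 100),
                            ("a 1,000 ly patch of the galaxy", 1000)]:
    years = distance_ly / speed_c
    print(f"{target}: {years:,.0f} years")
```

So even a thousand-light-year patch is reachable in ~100k years, a blink for a million-year-old civilization.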

Not to mention they could send probes closer to and further from the galactic center, taking advantage of the faster and slower rotation rates to see new stars.

As for the nuclear fission blast, I have my doubts. Ham radio folks brag about 1,000 miles per watt, in a lossy atmosphere, with multiple bounces that each reflect less than 1% of the signal, using advanced things like tubes or transistors and a copper cable thrown over a tree branch.

Using that 1 watt per 1,000 miles, the largest nuclear explosion would be detectable at 22 light years, and a clear line of sight through space is going to transmit quite a bit better than bouncing off the atmosphere and then off the ground several times.

An advanced civilization could build, say, a square-kilometer array (which we lowly humans have managed) and would understand nuclear bombs well enough to know their likely signature, decay rate, shape of the curve, etc. Much like how astronomers use supernovas as standard candles for distance, despite wildly different redshifts.

Seems quite reasonable for a civilization to keep track of anything going on in their fraction of the galaxy.

"People for some reason refuse to comprehend just how hard it is to send a speck of dust over light years of distance" It's only hard if you are in a hurry; in fact, we've had 3 rocks come through our solar system from well more than a light year away.


Send one a day; that way you only have to communicate with the nearest probe.

The launch system isn't consumed by launch, so launch them as often as necessary to keep the communications gap as small as needed.

Not like a 1 gram probe is going to be expensive compared to the launch system.


Dunno. For training, maybe; for inference, PyTorch and llama seem more important.


Great justification for switching to GrapheneOS: more secure, more control, Google has to ask permission to install things, and the Play Store is optional.


Unless you're against giving your money to Google and depending on their hardware and software.


Right, but the system needed maintenance, and people would still need to ration water and not water their lawns even if the water sold to outsiders (less than 1%) stopped.


From the article it sounds more like some townsfolk don't like not being able to water their lawns. Said folks targeted the people buying water, despite them accounting for less than 1% of the water used. Not to mention they apparently provide 45% of the town's tax revenue with their water purchases.

So an irrational decision fueling conflict.


> apparently they are providing 45% of the towns tax revenue with their water purchases

"Revenue for the water sales to rural residents totaled $43,000 per year, about 15% of total revenue."

This means that 'total revenue' is about $287k. I would guess that's the revenue of the water system, not the town's entire tax base. Still a significant figure, but not 45% of the town's tax revenue.
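The arithmetic behind that estimate, using the figures from the quoted article:

```python
water_sales = 43_000   # $/year from water sales to rural residents
share = 0.15           # "about 15% of total revenue"

total_revenue = water_sales / share
print(f"${total_revenue:,.0f}")   # ~$286,667
```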


24TB drives are quite available, $300 on newegg.

Buy a Data60 (60-disk chassis) and add 60 drives. Buy a 1U server (2 for redundancy). I'd recommend 5 stripes of 11 drives (55 total) with 5 global spares. Use RAIDZ3, so 8 data disks per 11-drive vdev.

Total storage should be around 8 * 24 * 5 = 960TB, likely 10% less because marketing uses 10^12 bytes instead of 2^40 for drive sizes, and another 10% less because ZFS doesn't like to get very full. So something like 777TB usable, which easily fits 650TB.

I'd recommend a pair of 2TB NVMe drives with a high DWPD rating as a cache.

The disks will cost ~$18k, the Data60 is 4U, and the server to connect it is 1U. If you want more space, upgrade to 30TB drives ($550 each) or buy another Data60 full of drives.
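A quick sketch of the capacity math above (the two ~10% haircuts are rough rules of thumb, not exact ZFS overheads):

```python
data_disks_per_vdev = 8   # RAIDZ3: 11 drives, 3 parity, 8 data
vdevs = 5
drive_tb = 24             # vendor "marketing" TB

raw_tb = data_disks_per_vdev * drive_tb * vdevs   # 960 TB of data disks
after_units = raw_tb * 0.9    # ~10% lost to 10^12-byte vs 2^40-byte sizing
usable_tb = after_units * 0.9 # keep ~10% free so ZFS stays happy
print(raw_tb, usable_tb)      # 960 777.6
```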


There are also enterprise SSDs in existence now which pack more than 200TB into a single NVMe drive. $$$$$, though (for the foreseeable future?)

Kioxia LC9 SSD Hits 245.76TB of Capacity in a Single Drive - https://news.ycombinator.com/item?id=44643038 (22 days ago, 7 comments)

-> https://www.servethehome.com/kioxia-lc9-ssd-hits-245-76tb-of...

SanDisk's "reply": Sandisk unveils 256 TB SSD for AI workloads, shipping in 2026 - https://news.ycombinator.com/item?id=44823148 (10 days ago, no discussion)

-> https://blocksandfiles.com/2025/08/05/sandisk-pre-announces-...


Sure, you could. The design would go something like:

We need a bigger memory controller.

To get more traces to the memory controller, we need more pins on the CPU.

Now we need a bigger CPU package to accommodate the pins.

Now we need a motherboard with more traces, which requires more layers, which requires a more expensive motherboard.

We need a bigger motherboard to accommodate the 6 or 8 DIMM sockets.

The additional traces, longer traces, extra motherboard layers, and so on make the signalling harder, likely requiring ECC or even registered ECC.

We need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). All are larger and more expensive, use more than 2x the power, and are likely to live in a $5k-$15k workstation/server, not a $2k Framework Desktop about the size of a liter of milk.
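The bandwidth those extra channels buy can be sketched roughly. Assuming DDR5-6000 and a 64-bit (8-byte) channel throughout; real server parts often run lower transfer rates, so treat these as ceilings:

```python
mt_per_s = 6000          # DDR5-6000 transfer rate, assumed for all parts
bytes_per_transfer = 8   # 64-bit channel
per_channel = mt_per_s * bytes_per_transfer / 1000   # 48 GB/s per channel

for name, channels in [("desktop", 2), ("Threadripper", 4), ("Siena", 6),
                       ("Threadripper Pro", 8), ("EPYC", 12)]:
    print(f"{name}: {channels * per_channel:.0f} GB/s peak")
```

The jump from a 2-channel desktop (~96 GB/s) to a 12-channel EPYC (~576 GB/s) is exactly what all the extra pins, traces, and layers are buying.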


> We need a more expensive CPU, more expensive motherboard, more power, more cooling, and a larger system. Congratulations you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel).

This is the real story, not the conspiracy-tinged market-segmentation one. Which is silly, because at the levels where high-end consumer/enthusiast Ryzen (say, the 9950X3D) and the lowest-end Threadripper/EPYC (most likely a previous-gen chip) truly overlap in performance, the former will generally cost you more!


Well, sort of. Apple makes a competitive Mac mini and MacBook Air with a 128-bit memory interface, decent design, solid build, nice materials, etc., starting at $1k. PC laptops can match nearly any single aspect, but rarely match the quality of the build, keyboard, trackpad, display, aluminum chassis, etc.

However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), or Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra is pretty quiet and cool, even in a tiny Mac Studio MUCH smaller than any Threadripper Pro build I've seen.

It does mystify me that x86 has a hard time matching even a Mac mini Pro on bandwidth, let alone the models with 2x or 4x the memory bandwidth.
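Taking the base/Pro/Max/Ultra multipliers above at face value, on an assumed ~120 GB/s base (LPDDR5X-7500 over a 128-bit bus, roughly the plain M4). Actual Apple parts also vary the transfer rate between tiers, so these are ballpark figures, not spec-sheet numbers:

```python
base_bus_bits = 128
mt_per_s = 7500   # LPDDR5X-7500, assumed

base_gbs = base_bus_bits / 8 * mt_per_s / 1000   # 120 GB/s

for tier, mult in [("base", 1), ("Pro", 2), ("Max", 4), ("Ultra", 8)]:
    print(f"{tier}: {base_gbs * mult:.0f} GB/s")
```

Even the 2x "Pro" tier lands well above any dual-channel x86 desktop.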


> It does mystify me that x86 has a hard time matching even a Mac mini Pro on bandwidth, let alone the models with 2x or 4x the memory bandwidth.

The market dynamics are pretty clear. Having that much memory bandwidth only makes sense if you're going to provide an integrated GPU that can use that bandwidth; CPU-based laptop/desktop workloads that are that bandwidth-hungry are too rare. The PC market has long been relying on discrete GPUs for any high-performance GPU configuration, and the GPU market leader is the one that doesn't make x86 CPUs.

Intel's consumer CPU product line is a confusing mess, but at the silicon level it comes down to one or two designs for laptops (a low-power and a mid-power design) that are both adequately served by a 128-bit memory bus, and one or two desktop designs with only a token iGPU. The rest of the complexity comes from binning on clock speeds and core counts, and sometimes putting the desktop CPU in a BGA package for high-power laptops.

For Intel to make a part following the Strix Halo and Apple strategy, Intel would need to add a third major category of consumer CPU silicon, using far more than twice the total die size of any of their existing consumer CPUs, to go after a niche that's pretty small and very hard for Intel to break into given the poor quality of their current GPU IP. Intel doesn't have the cash to burn pursuing something like this.

It's a bit surprising AMD actually went for it, but they were in a better position than Intel to make a part like Strix Halo from both a CPU and GPU IP perspective. But they still ended up not including their latest GPU architecture, and only went for a 256-bit bus rather than 512-bit.


Yes, but that platform has in-package memory, which is a higher degree of integration than even "soldered". That's the kind of platform Strix Halo is most comparable to.

(I suppose that you could devise a platform with support for mixing both "fast" in-package and "slow" DIMM-socketed memory, which could become interesting for all sorts of high-end RAM-hungry workloads, not just AI. No idea how that would impact the overall tradeoffs though, might just be infeasible.

...Also if persistent memory (phase-change or MRAM) can solve the well-known endurance issues with flash, maybe that ultimately becomes the preferred substrate for "slow" bulk RAM? Not sure about that either.)


Dunno, it's a nice, quiet, small machine using standard parts (power supply, motherboard, etc.).

If you want the high memory bandwidth, get the Strix Halo; if not, get any normal PC. Sure, Apple has the bandwidth as well, but also soldered memory.

If you want DIMMs and the memory bandwidth, get a Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). But be prepared to at least double your power, quadruple your space, double your cost, and still not have a decent GPU.


Nice is subjective. The Fractal cases he compares it to look nicer to me.

Quiet? A real PC with bigger fans = more airflow = quieter

Smaller - yes, this is the tradeoff

GPU is always best separate, that is true since the ages.

"double the power" oh no from 100W to 200W wowwww

"quadruple your space" - not a problem

