If your signal is "transparent" enough to go through so much rock and iron without being absorbed (like neutrinos), you'll have a hard time capturing it on the receiver side.
Well, OPERA was ~700 km, but had CERN at one end. If one had this as the sole goal and wanted to do it in real time over 12,000 km, is it "engineering-possible" vs. "theoretically-possible"? My guess is that it depends on how much money stands to be made ;)
Post-Quantum RSA is clearly a joke from djb, meant as a solid reply when people ask "can't we just use bigger keys?". It has a 1-terabyte RSA key and takes 100 hours to perform a single encryption. And by design it should be beyond the reach of quantum computers.
The CDU is inside the datacenter and strictly liquid to liquid exchange. It transfers heat from the rack block's coolant to the facility coolant. The facility then provides outdoor heat exchange for the facility coolant, which is sometimes accomplished using open-loop evaporative cooling (spraying down the cooling towers). All datacenters have some form of facility cooling, whether there's a CDU and local water cooling or not, so it's not particularly relevant.
The whole AI-water conversation is sort of tiring, since water just moves to more or less efficient parts or locations in the water cycle. I think a "total runtime energy consumption" metric would be much more useful if it were possible to accurately price in water-related externalities (i.e., is a massive amount of energy spent moving water because a datacenter evaporates it, or is it no big deal?). And the whole thing really just shows how inefficient and inaccurately priced the market for water is, especially in the US, where water rights, price, and the actual utility of water in a given location are often shockingly uncorrelated.
Even without AI the whole water conservation topic is kinda silly.
The margin at which it makes sense to save water varies wildly by location, but the cultural dominance of the western USA infiltrates everything.
Here in Zürich, my office briefly installed water saving taps. This is a city of less than half a million where the government maintains 1,200 fountains that spew drinkable water 24/7. But someone at the California HQ said saving water is important and someone at the Swiss subsidiary said yes sir we'll look into it right away sir.
It's on the same level as people using incandescent light bulbs. "Well, we clear 160k euros after taxes, have public medical care, and electricity is 10c/kWh here, so why does it matter what bulbs we use?"
We live in an area surrounded by grass fed cows, so what does it matter if we throw away 3/4 of our steak?
Without regard to how plentiful resources are in our particular area, being needlessly wasteful is in bad taste more than anything. It's a lack of appreciation of the value of what we have.
For water specifically - it is generally speaking the most valuable resource available, we just don't appreciate it because we happen to have a lot of it.
While I'm not saying waste when things are plentiful in general is okay, I think water is a unique case that can be treated differently.
Comparing to energy costs isn't the same, because using energy for the incandescent bulb consumes that energy permanently; the gas/coal/fuel can't be un-burned. Although solar changes this, as the marginal cost of that energy is near zero.
Comparing to food is similar. Once the food is wasted it is gone.
Water is typically not destroyed; it's just moved around in the water cycle. Water consumption in a region is dictated by the rate at which the water cycle replenishes the reservoirs you're pulling from. "Waste" with water is highly geographic, and it's pretty reasonable to take exception to California projecting its problems onto geographic regions where they don't apply.
The sun is continuously running a very nice, world-sized distillation cycle that makes fairly clean water just fall out of the sky. It's only a question of where it falls, and how much. If you want it even cleaner, wait a couple of centuries for it to filter down underground, and get it from there; besides maybe slightly high mineral content, which can easily be removed, it's essentially free, clean water. The only question is how fast it's replenished in the area you're taking it from.
There are plenty of areas where there's more rainfall than outflow/evaporation, with water continuously replenishing deep groundwater. "Saving water" in such areas is of little concern beyond the basic economic one of well maintenance: each well can only pull so much, and more usage means more wells and more upkeep.
> For water specifically - it is generally speaking the most valuable resource available, we just don't appreciate it because we happen to have a lot of it.
And for water specifically, the second-order effects of "water saving" programs can actually be negative. Not enough water means that sewers don't work properly anymore, leading to anything from stink events to growing fatbergs [1].
To make it worse, the "obvious" idea of scaling down sewer mains doesn't work either because the sewers are (at least in Europe) also used as storm drains, so if you'd scale down the sewers you'd get streets flooded.
It's nothing like either of those things, because both of those things harm other people, and those harms scale linearly with the amount of consumption. Wasting steak isn't problematic because you run out of cows; it's problematic because of the climate impact of raising them.
Saying that saving water is "about respect" or something is idiotic. Saving water is about ensuring there's enough water to go around. This is something you need to do in places where water is scarce and not where it isn't. And if you waste time and energy on saving water you are ultimately making the world poorer.
Obviously I'm simplifying things by talking in absolutes here, but what I said above about "the margin at which it makes sense" gets at the truth of the matter. Installing water-saving taps in Zürich is almost certainly a net harm to the environment.
This is one of those cases where we fly by and don't think much about it because we live in a plentiful environment. The more detailed we get, the more we realize that everything has a cost, and wasting water is not free as in beer. Have we also considered the disposal costs of wastewater?
I used to live in a place where water was infinite. Fast forward 20 years, now it's not anymore, the fish bearing watersheds ultimately bear the price, but everyone is still unmetered and there isn't low flow anything. If you piss away precious resources for no good reason and claim it's not wasteful, shame on you.
Yes, I have considered the disposal costs of wastewater! Yes, I still think importing hundreds of stainless steel widgets from China and then having a plumber spend who knows how many hours installing them is almost certainly a net negative. If you piss away precious resources doing that then shame on you.
I have encountered a lot of references to AI using water, but with scant details. Is it using water in the same way a car uses a road? The road remains largely unchanged?
The implication is clear that it is a waste, but I feel like if they had the data to support that, it wouldn't be left for the reader to infer.
I can see two models where you could say water is consumed: either drinkable water rendered undrinkable, or water turned into something else where it is not practically recaptured (turning it into steam, sequestering it in some sludge, etc.).
Are these things happening? If it is happening, is it bad? Why?
I'd love to see answers on this, because I have seen the figures used like a cudgel without specifying what the numbers actually refer to. It's frustrating as hell.
> ...actual water consumed by data centers is around 66 million gallons per day. By 2028, that’s estimated to rise by two to four times. This is a large amount of water when compared to the amount of water homes use, but it's not particularly large when compared to other large-scale industrial uses. 66 million gallons per day is about 6% of the water used by US golf courses, and it's about 3% of the water used to grow cotton in 2023.
What does that mean for environmental impact? Hydroelectric power uses water in the sense that the "used" water is at a lower altitude. What happens once that is done, is what matters.
Depending on what "global average" means, it seems like that's quite a lot of evaporation cycling, unless they are releasing steam at 800°C.
Looking up overall water usage, the US uses 27.4 billion gallons a day residential and 18.2 billion gallons industrial. It surprised me that industrial was lower, but I guess the US manufactures less these days.
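For scale, a quick back-of-the-envelope with the figures quoted in this subthread (66 million gallons/day for data centers, 27.4 billion residential, 18.2 billion industrial):

```javascript
const dcDaily = 66e6;        // data centers, gallons/day (quoted above)
const residential = 27.4e9;  // US residential, gallons/day
const industrial = 18.2e9;   // US industrial, gallons/day

// Data-center share of each category:
console.log((dcDaily / residential * 100).toFixed(2) + '%'); // 0.24%
console.log((dcDaily / industrial * 100).toFixed(2) + '%');  // 0.36%

// The "2-4x by 2028" estimate still lands well under 2% of residential use:
console.log((4 * dcDaily / residential * 100).toFixed(2) + '%'); // 0.96%
```

So even the projected growth keeps data centers a rounding error next to either category, which is the point being made about golf courses and cotton above.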
It means different things in different places, depending on where the water was sourced and the counterfactual of what would otherwise have happened to it. For example if you source surface water that evaporates, and the local rainshed feeds that watershed, then it's virtually closed-loop. If you're pumping fossil water out of an aquifer, and it rains over the ocean, then it's the opposite.
I suppose in some desert-ish climate the extra moisture given to the air doesn't rain down nearby but moves with the wind somewhere far away where it rains. So there might be an effect of using locally precious water even if it's a closed loop on a continent level.
Does this evaporated water stay in the loop and condense out later? Is an extra liter on the front end needed in the line until the first liter falls out of the back end?
My understanding is that most new data centers don't use evaporation cooling these days, at least in water-sensitive areas. Hard to find solid data on this either way, though.
Of course, if you're using dry cooling, it uses more electricity, so hopefully you're using solar, not a source that uses evaporative cooling to produce electricity (if in a dry climate).
> there is a massive influence operation that seeks to destroy knowledge, science, and technology in the United States
Agreed. Started with big tobacco by discrediting the connection to lung cancer, playbook copied by many and weaponized by Russia.
> There is no subjective measure by which the water used by AI is even slightly concerning.
Does not follow from your first point. The water has to be sourced from somewhere, and debates over water rights are as old as civilization. For one recent example, see e.g. https://www.texaspolicy.com/legewaterrights/
You are probably correct that the AI does not damage the water, but unless there are guarantees that the water is rapidly returned "undamaged" to the source, there are many reasons to be concerned about who is sourcing water from where.
It doesn't even pass the sniff test because the "AI wasting water" argument comes from liberals based on ecological concerns, whereas the systematic attack on knowledge/science comes from conservatives who explicitly do not give a crap about ecological concerns
For people who are confused: Hyperclay is a NodeJS server and frontend JS library that allows HTML pages to update their DOM and then replace their own .html source with the updated version.
Imagine clicking a checkbox, which adds the `checked` attribute to its element, then using Hyperclay to globally persist this version of `document.body.outerHTML`, so that it's there next time someone visits the page. There's automatic versioning and read/write permissioning.
It's a pretty cool project! I'll definitely try for my own personal tools.
Do note that, from my understanding, it's most useful when there's one developer who is also the only content editor. Otherwise you'll have editors overwriting each other's changes, and if there are multiple copies there's no easy way for the developer to push a change to all copies.
This thing comes across as pretentious, but it has some really novel CSS ideas I might try. The idea of making a selector based on [style*="--bgc:"] and using it to set the background color, like style="--bgc: red", is not something I would have thought of.
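For anyone who hasn't seen the trick: a standalone sketch of how it works (assumed minimal reconstruction, not Startr.Style's actual stylesheet):

```css
/* Any element whose inline style sets --bgc gets its background from it.
   The attribute selector matches on the raw style text; the custom
   property then carries the value into the rule. */
[style*="--bgc:"] {
  background-color: var(--bgc);
}
```

Usage is then just `<div style="--bgc: red">…</div>`, giving inline-style ergonomics while keeping the actual property assignment in the stylesheet.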
Happy you enjoyed it. Most of my creations just build on and with Startr.Style. It's a tight alternative to Tailwind's mess of classes. I especially love how directly it translates to pure styling yet allows us to have responsive design eg add the -md suffix to --bgc to specify background colors for tablets and up.
When I work on Modernism or any of the other experimental pages on https://startr.style I don't do it with any pretension, but out of a love of code and of what the web can be. As a child I traded helping out at a local computer store for time exploring Gopher and then Mosaic's window onto the web.
I think what gets me is there's 6 paragraphs in a row that either start with "Modernism" or have it in the first sentence, and it comes along with words like "transcends", "echoes", "essence", "exude", "crafts." They're good paragraphs by themselves and make good points, I just think that together they come out a little breathless, and maybe some consolidation is in order :)
I notice from your profile that you haven't submitted your site to HN, I think you should do so, I think it would generate some interesting discussion.
From my perspective, for truly "self-contained", "portable", "self-updating", and even "un-hosted" web apps, the only option nowadays is prehistoric data: URIs, which are slowly losing abilities anyway (basically they can live only as bookmarks or direct URL pastes, and their only persistence option is the location #hash, which needs re-bookmarking):
data:text/html;charset=utf-8,<body id=b onload=b.innerHTML=decodeURIComponent((l=location).hash.slice(1)) onkeyup=document.title=b.innerText.split('\n')[0]||'.' onblur=try{history.pushState({},document.title,'\u0023'+b.innerHTML)}catch(e){l.hash=b.innerHTML} contenteditable bgcolor=darkslategray text=snow link=aqua vlink=lime style=text-align:center>#Hello, HN!<br><br>Do you like this %E2%9D%9Dun-hosted%E2%9D%9E app?<br>With persistence<a href="https://news.ycombinator.com/reply?id=44944112">%E2%80%A6?</a>
You probably wouldn't fault another piece of software for calling itself single-file even if it requires an operating system to run. It makes more sense to look at from a (build-)artifact POV - ignoring the foundations the artifact rests on if they're not specific to it.
I love this; I think it brings "block editing" capabilities to the masses, which is a big selling point for WordPress. In recent months I've been looking at micro-sites, and I concluded that Carrd is king for this type of landing page/microsite, so this looks very promising in that respect.

Would either of you care to share or dispel security concerns, or describe the attack surface? For the past 6 months I've been looking at Hugo sites, and the simplest deployment approach I found was alpine/sqlite/hugo containers at 5-10 MB in size.

Is there a way to delegate control/editing of sections/pages? I think the world needs a simple platform to build sites that delegates sections to respective departments/units. The only platform that seems solid for this is Drupal, but it's kind of overkill for SMB orgs.
Thanks. Author mentioned TiddlyWiki as inspiration. But the whole point of TiddlyWiki was that it doesn't need a server right?
So I'm trying to understand the difference, the payoff. I understand that local web APIs are ass and you very quickly run into the need for a server.
But I'm wondering about the utility of combining the two approaches. It seems like a contradiction in terms. Here's a server to help you with your dev setup oriented around not needing a server.
I guess the main win would be cross device access? You have it online and you can edit it easily.
I'm editing my stuff on my phone in a text editor. And syncing it to my laptop with a sync app.
Turns out the original TiddlyWiki used a Java JAR to handle file persistence. (I remember it being so magically automatic, but I recently investigated how it was done.)
I don't think that's right - IIRC it used to be possible to write out a file, if loaded from a file:// URL, directly from JavaScript. Then that ability got nobbled because security (justifiable) without properly thinking through a good alternative (not justifiable). I mourn the loss of the ability, TiddlyWiki was in a class of its own and there should have been many more systems inspired by its design. Alas.
ETA: Wikipedia has reminded me the feature was called UniversalXPConnect, and it was a Firefox thing and wasn't cross-browser. It still sucks that it was removed without sensible replacement.
I used TiddlyWiki a lot to manage my D&D 3.5 campaign back in the day. As I recall, it originally was a true stand-alone HTML document capable of overwriting itself, but once browsers dropped support for this capability, users had to begin using various workarounds, and this remains the status quo today.
TiddlySaver.jar was one such workaround. A check in the Wayback Machine suggests that it was originally required only for Safari, Opera, and Chrome; IE and Firefox needed no such plugin. Nowadays, there are several workarounds, and setting up one is a mandatory installation step: standalone applications, browser extensions, servers, etc. Some are clunky (e.g. you have to keep your wiki in your Downloads directory or the browser can't write to it), and either way, TiddlyWiki is no longer truly a single stand-alone HTML file, at least not for writing purposes. It's still a very versatile tool, though.
TiddlyWiki can replicate itself. All users can freely edit any TiddlyWiki and save their changes to their filesystem. There's a few options for exports.
It is a common gotcha that new users will lose some of their work while they learn persistence.
I see, so you make edits, the Javascript edits the html, therefore File -> Save Page will download an html file with your changes in it that you can open again.
I forget that File -> Save is even a thing for websites.
Note: The TiddlyWiki documentation explicitly advises that File -> Save Page does not work.
You have to click a save button in the app, and it will generate a valid copy. However, most users deploy some plugin or software which allows transparent auto-saving.
I used https://github.com/slaymaker1907/TW5-browser-nativesaver, that still works with the current version 5.3.8, though just in Chromium based browsers. You save the file once and from then on, as long as the tab is open, it autosaves itself.
That said, I advise against TiddlyWiki after using it for a long time. It has multiple bugs which the author won't fix (e.g. divs inside ps), it has a cryptic syntax (e.g. code in attribute values), and tagging is not implemented in a way that makes a wiki scale (well, technically it is: tags can have tags). It is a thing where features get added but nothing outdated gets deprecated, so it is bloated. One will be more productive using a folder of markdown files and a browser add-on like Markdown Viewer.
My solution to a lot of issues is to use Tiddlywiki Classic. No divs inside p that I can find, less bloat (412 KB for a blank file instead of 2.5 MB), and it's still maintained. The main advantage, to me, is that it fits more tiddlers on screen at a time, which is the main point of TiddlyWiki for me; TW5 adds large amounts of spacing, borders, and large font sizes, which looks nicer but is less practical.
It's not perfect, though. Paragraphs are rendered by using two br tags, instead of p tags. Link syntax is the reverse of MediaWiki syntax; i.e. [[foo|bar]] links to "foo" in MediaWiki, but "bar" on TiddlyWiki, which trips me up constantly. There's other syntax awkwardness like sensitivity to spacing and newlines. Journals sort in alphabetical, not chronological order.
I don't know, for me it works, but I don't do anything fancy with it. I just put it on a server, so I can access it from all my devices. I don't even use tags, just tiddlers with sources which are other tiddlers or external web pages.
They never did; if anything it's the opposite, in that I think there are now APIs that could make this possible.
With TiddlyWiki you had to essentially File -> Save As and save the HTML back over itself. There were other ways too but they were all workarounds to the issue that browsers don't allow direct access.
They did back around 2008. I used Wiki on a Stick (see https://stickwiki.sourceforge.net/, which was kind of neat), but after a few years Chrome etc. stopped letting it save itself.
Thanks for the description, I kept reading the webpage and didn't understand what the project was or how it worked. Yours is really succinct and clear.
To be completely honest, I don't see how this is more useful than adding a sync layer to localStorage. I did make a service that does that at htmlsync.io and am genuinely curious how this solution is better.
Your notes are the HTML file! You can keep it in your documents folder, sync it via any service, track it in version control, etc. It’s for folks who know what the filesystem is, don’t know how to host a server (or don’t want to), but want a website-like experience. Works offline, too!
The file itself provides both the dynamic functionality and the data storage, but you need an engine (like Obsidian) to make the data persistence and dynamic parts work together. I.e., if there is a button that adds a task to a todo app, your engine modifies the HTML file with the new content.
I finally also recalled this project (TiddlyWiki), but CMIIW, the statefulness of Hyperclay is only for developers; the end user gets the same conventional HTML. Without some kind of common solution/protocol/standard on the browser side that would allow persistence, it's not so exciting. Theoretically there might be some simple protocol that saves HTML file versions on the server keyed by a cookie, but there are so many ways this can go wrong.
I think contenteditable is more akin to a rich-text document, while Hyperclay goes a bit beyond by allowing JS to edit the DOM too. I think Smalltalk images and virtual machines are a closer comparison, but applied to the web. You download the image, with some running code, use it, and persist the whole application state.
No, this needs a server too. Everything that saves changes to non-local (shareable) locations needs some kind of server. The best solutions are of course when OP hosts the server and you pay for it /s
Wiki has specific content you can add. New article, body of an article, etc. This lets you change all the html. With this, you could edit your wiki into a calculator.
You might be confusing ASICs with FPGAs. You can't reprogram an ASIC, the algorithm is fixed at design time, and the chip built for this single purpose.
I'm a happy user of Fastmail. It's a paid service (€5 per month) but that comes with higher standards. The webmail has been pretty good. Barely any spam to speak of (once a week?), even though I have various email addresses in public places.
I think that's a very important point, but I wouldn't call `or die()` an affordance. A common idiom, perhaps.
A common affordance that invites mistakes is a library that has something like `file_exists(path)` (because it often introduces hard-to-debug race conditions), or `db.query(string)` (because it invites string interpolation and SQL injection).
> Basically the theory is that ideas are actually like radio waves, in the environment, and our brains are like radio receivers that pick up ideas existing in the universe.
That sounds interesting. Like an accidental combination of movie + song + recent news that gives useful insights in some area.
With today's digital tracking, I wonder if we can quantify it: "X% of programming language creators read both Asimov and Terry Pratchett in non-English speaking countries".
> For example rats who master a maze in one part of the world make it easier for unrelated rats anywhere in the world to master the same maze pattern — it’s as if the learned skill/idea is “broadcast” to all rat brains.
Oh, you were being literal. That's deep into pseudoscience territory. What's the proposed mechanism for this "broadcast"?
Gravity, the luminiferous ether, psychic powers, even RF? Who knows. Maybe all rats are multi-furcations of the same rat, and there's squeaky action at a distance.
Regardless I don't know how you'd prove this to everyone's satisfaction, it seems real easy to game/cheat.
I assume that they run the system prompt once, snapshot the state, then use that as starting state for all users. In that sense, system prompt size is free.
Huh, I can't say I'm on the cutting edge but that's not how I understand transformers to work.
By my understanding, attention is calculated for each token against every previous token. I.e., the 10th token in the sequence requires O(10) new calculations (in addition to the O(9^2) previous calculations that can be cached). While I'd assume they cache what they can, that still means that if the long prompt doubles the total length of the final context (input + output), the final cost should be about 4x as much...
This is correct. Caching only saves you from having to recompute self attention on the system prompt tokens, but not from the attention from subsequent tokens, which are free to attend to the prompt.
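The arithmetic in this subthread can be sketched with a toy count of causal attention pairs (ignoring heads, dimensions, and FLOP constants):

```javascript
// Token i (1-indexed) attends to i positions, so a causal pass over n
// tokens computes n*(n+1)/2 attention pairs in total.
function pairs(n) {
  return (n * (n + 1)) / 2;
}

// With a cached prompt of length p, only the prompt's own pairs are
// saved; each new token still attends to the prompt plus the suffix.
function newWork(prompt, suffix) {
  return pairs(prompt + suffix) - pairs(prompt);
}

console.log(pairs(1000));        // 500500
console.log(pairs(2000));        // 2001000 (~4x: doubling context quadruples cost)
console.log(newWork(1000, 100)); // 105050 (cache saves the 500500 prompt pairs)
```

So caching the system prompt is far from free overall, but it does amortize the prompt-on-prompt work across every request.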
My understanding is that even though it's quadratic, the cost for most token lengths is still relatively low. So for short inputs it's not bad, and for long inputs the size of the system prompt is much smaller anyways.
And there's value to having extra tokens even without much information since the models are decent at using the extra computation.
> This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing.
Isn't that a showstopper for agentic use? Someone sends an email or publishes fake online stories that convince the agentic AI that it's working for a bad guy, and it'll take "very bold action" to bring ruin to the owner.
I am definitely not giving these things access to "tools" that can reach outside a sandbox.
Incidentally why is email inbox management always touted as some use case for these things? I'm not trusting any LLM to speak on my behalf and I imagine the people touting this idea don't either, or they won't the first time it hallucinates something important on their behalf.
We had a "fireside chat" type of thing with some of our investors where we could have some discussions. For some small context, we deal with customer support software and specifically emails, and we have some "Generate reply" type of things in there.
Since the investors are the BIG pushers of the AI shit, a lot of people naturally asked them about AI. One of those questions was "What are your experiences with how AI/LLMs have helped various teams?" (or something along those lines). The one and only answer these morons could come up with was "I ask ChatGPT to take a look at my email and give me a summary, you guys should try this too!"
It was made horrifically and painfully clear to me that the big pushers of all these tools are people like that. They do literally nothing and are themselves completely clueless outside of whatever hype-bubble circles they're tuned in to, but if you tell them you can automate the one and only thing they ever have to do as part of their "job", they will grit their teeth and lie with zero remorse or thought, just to look as if they're knowledgeable in any way.
My suspicion has always been that people that make enough they could hire a personal assistant but talk about how "overwhelmed" they are with email are just socially signalling their sense of importance.
I personally cancelled my Claude sub when they had an employee promoting this as a good thing on Twitter. I recognize that the actual risk here is probably quite low, but I don't trust a chat bot to make legal determinations and that employees are touting this as a good thing does not make me trust the company's judgment
That is still incorrect. The entire point is that this is misaligned behavior that they would prefer not to see. They are reporting bad things. You want to be mad and are assigning a tone or feeling that was not actually there. You are punishing the wrong company. All of the frontier model companies have models that will behave the same way under similar circumstances. Only one company did the work to find this behavior and tell you about it. Think about whether, in the future, you would prefer to know about these kinds of behaviors or not. If enough people take the action you've described, the only way we will ever know in the future is if we find out ourselves, because the companies will stop telling us (or rather, every company except Anthropic will continue to not tell us).
It is only acceptable in the sense that they chose to release the model anyway. But if that's the case, then every other frontier model company believes this level of behavior is acceptable too, because they are all releasing models that behave approximately the same way when put in approximately the same conditions.
Yeah, I mean that's likely not what 'individual persons' are going to want.
But holy shit, that's exactly what 'people' want. Like, when I read that, my heart was singing. Anthropic has a modicum of a chance here, as one of the big-boy AIs, to make an AI that is ethical.
Like, there is a reasonable shot here that we thread the needle and don't get paperclip maximizers. It actually makes me happy.
Paperclip maximizers are what you get when highly focused people with little imagination imagine how they themselves would act if told to maximize paperclips.
Actual AI, even today, is too complex and nuanced to have that fairy-tale level of "infinite capability, but blindly following a counter-productive directive."
It’s just a good story to scare the public, nothing more.
> the person was doing bad things, and told the AI to do bad things too, then what is the AI going to do?
Personally, I think the AI should do what it's freaking told to do. It boggles my mind that we're purposely putting so much effort into creating computer systems that defy their controllers' commands.