Because people like me use quaternions but have never attained the kind of full understanding we have of 3x3 rotation matrices. I will be reading the above link since it's only 12 pages and someone indicated it's an easier read.
The dollar is going down in value right now. That's the plan. It makes foreign goods more expensive and exports more affordable to other countries. Meanwhile it should have less inflationary pressure on domestically produced stuff like housing.
I don't know if this is going to work or collapse. If it does work, IMO they still need to reduce the debt - the current actions are happening because we are backed into a corner, and that needs to be corrected.
It also increases profits for multi-national US companies since they get paid in other currencies, which they then convert to dollars locally. Basically, they have favorable FX tailwinds.
> Meanwhile it should have less inflationary pressure on domestically produced stuff like housing.
This is pure fantasy. A weak dollar makes it more affordable for foreign capital to buy US assets, yes, including housing. The president himself recently admitted on video that he plans to make house prices rise.
Who is that "we" backed into a corner? The USA as such, certainly not.
Maybe you mean "pro-democracy, anti-abuse" people? Those, yes. But the administration and the far right are not in a corner. They are actively achieving exactly what they wanted.
It's a flimsy back-filled rationale thrown on top of the mercurial (and often sadistic) whims of an American Caligula, so the elite enablers can pretend there's something rational - or even good - about the chaos and destruction they are supporting.
Alternative: the system exists, so people in the know may well have done proper risk assessment and may have identified multiple reasons that could result in a collision. Some of those reasons are accidental, some are not.
If so, SpaceX's longer term response being "here's our SSA data for everyone and here's how we source it" is a good one for all parties involved (even more so for SpaceX and govt customers they share it with if they have other capabilities...)
Well we already know Starshield (the military version) has specialist space domain awareness capabilities that aren't being shared, and it's entirely plausible that data from regular Starlink sensors/receivers (other than the disclosed star trackers) can be fused into something useful by SpaceX and/or the Space Force.
In the before-AI world, it mattered a lot where data centers were geographically located. They needed to be in the same general location as population centers for latency reasons, and they needed to be in an area near major fiber hubs (with multiple connections and providers) for connectivity and failover. They also needed cheap power. This means there are only a few ideal locations in the US: places like Virginia, Oregon, Ohio, Dallas, Kansas City, Denver, and SF are all big fiber hubs. Oregon for example also has cheap power and water.
Then you have the compounding effect where as you expand your data centers, you want them near your already existing data centers for inter-DC latency reasons. AWS can’t expand us-east-1 capacity by building a data center in Oklahoma because it breaks things like inter-DC replication.
Enter LLMs: massive need for expanded compute capacity, but latency and failover connectivity don't really matter (the extra latency from sending a prompt to compute far away is dwarfed by the inference time, and latency for training matters even less). This opens up the new possibility of placing data centers in locations they couldn't be before, and now the big priorities are just open land, cheap power, and water.
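To put rough numbers on that (the distance, token rate, and response length below are my own illustrative assumptions):

    // Back-of-envelope: cross-country network latency vs. LLM inference
    // time. All figures are illustrative assumptions, not measurements.
    fn main() {
        let speed_in_fiber_km_s = 200_000.0; // light in fiber is ~2/3 of c
        let distance_km = 4_000.0;           // roughly US coast-to-coast

        let rtt_s = 2.0 * distance_km / speed_in_fiber_km_s;
        println!("best-case round trip: ~{:.0} ms", rtt_s * 1000.0); // ~40 ms

        // A 500-token response at ~50 tokens/s means ~10 s of inference,
        // so the extra ~40 ms is well under 1% of the total wait.
        let inference_s = 500.0 / 50.0;
        println!("latency share: ~{:.1}%", rtt_s / inference_s * 100.0); // ~0.4%
    }

Even doubling or halving any of those numbers doesn't change the conclusion: inference time dominates.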
>Oregon for example also has cheap power and water.
Cheap for who? For the companies having billions upon billions of dollars shoved into their pockets while still managing to lose all that money?
Power won't be cheap after the datacenters move in. Then the price of power goes up for everyone, including the residents who lived there before the datacenter was built. The "AI" companies won't care, they'll just do another round of funding.
I guess it's an answer to the obviously absurd idea that 98% of data centers be in Northern Virginia.
My less snarky answer is -- we've always had data centers all over the place? When I started in web dev we deployed to boxes running in a facility down the street. That sort of construction probably dropped considerably when everyone went to "the cloud".
That only means they have to be built in counties which are part of that compact, or have approved provisions to return the water so usage is net-neutral and complies with environmental impact laws (unless you're Foxconn, a legacy manufacturer, or a farmer). However, Beaver Dam, WI, as this article calls out, is along a fresh water source and does not require Lake Michigan water.
The other locations, like Oracle's data center in Port Washington or Microsoft's in the Racine/Kenosha area, are located within the defined boundaries, and those data centers, unlike Foxconn, are all 'closed-loop' - which of course isn't entirely perfect, but certainly not on the scale of Foxconn's 7 million gal/day nonsense.
> Due to the United States Supreme Court ruling in Wisconsin v. Illinois, the State of Illinois is not subject to certain provisions of the compact pertaining to new or increased withdrawals or diversions from the Great Lakes.
I mean, it seems like there are already avenues to skirt around this compact?
Also, from what I can tell, this isn't some sort of ban on using water from the Great Lakes basin, it's just a framework for how the states are to manage it. It is entirely believable to me that this compact would actually support water being used for developing tech in the surrounding communities (like using it in data centers).
I can understand concerns about moving thousands of acre-feet of water into the desert for cooling, or pumping your aquifer dry for the same thing. But moving water from the Great Lakes a few miles inland? How much water evaporates out of the Great Lakes every day, and what is the percentage increase when used for cooling?
I don't recall the exact specifics, but I do remember a while ago there was some outrage that Nestle was bottling some really large-sounding amount of water (think ~millions of gallons a day?) from a Great Lake. The math showed that the amount being used as a % of lake volume was negligible (it would take ~3,500,000 years to "drain" Michigan at that rate).
In my mind this is partly due to people not understanding large numbers, and also not understanding just how much water is actually in the Great Lakes. It's a huge amount - Lake Michigan has 1,288,000,000,000,000 gallons in it. Every human on earth could use close to 10 gallons of water per day for the next 50 years before Lake Michigan would be "dry," assuming it was never replenished. And that's just Lake Michigan. (Obviously environmental systems are more complicated than the simple division I did, and individual water usage isn't simply 10 gallons a day - it's just to demonstrate a point).
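A quick sanity check of that arithmetic (the population and withdrawal figures are my assumptions for illustration, not numbers from the Nestle story):

    // Sanity-check of the back-of-envelope numbers above.
    fn main() {
        let lake_michigan_gal: f64 = 1_288_000_000_000_000.0;

        let withdrawal_gal_per_day = 1_000_000.0; // ~"millions of gallons a day"
        let years_to_drain = lake_michigan_gal / withdrawal_gal_per_day / 365.0;
        println!("~{:.1} million years to 'drain' the lake", years_to_drain / 1e6);

        let people = 8_000_000_000.0; // rough world population
        let gal_per_person_per_day = lake_michigan_gal / (people * 365.0 * 50.0);
        println!("~{:.1} gal/person/day for 50 years", gal_per_person_per_day);
    }

That works out to roughly 3.5 million years and about 8.8 gallons per person per day, consistent with the figures above.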
Now, someone else pointed out that the tragedy of the commons is a sort of death by a thousand cuts. And if anyone who shows up is allowed to draw millions of gallons a day, that can add up and certainly have negative effects. It's just important to actually understand the scale of the numbers involved, and to not let legitimate environmental concerns be cross-contaminated with just anti-tech-of-the-year sentiment, or political motivations, or whatever else might cloud the waters (pun unintended).
The question is which side of the drainage basin the water is moved to. When the water is flushed back into the system, does it drain back into the Great Lakes, or down to the Gulf of Mexico?
On the southern shore of Lake Michigan, that "few miles" changes the watershed it's part of.
As for diversions that go to evaporative cooling, that's a big question for the data center itself and there are many designs. https://www.nrel.gov/computational-science/data-center-cooli... has some cutting edge designs, but they're more expensive to use for pumping waste heat elsewhere.
While the Great Lakes are coming off of wet years ( https://water.usace.army.mil/office/lre/docs/waterleveldata/... ) that shouldn't be used as a long-term prediction of what will be available in another 10 years, lest it become another Colorado River problem. Currently, the water levels for Lake Michigan are lower than average and not predicted to return to average within the model range. https://water.usace.army.mil/office/lre/docs/mboglwl/MBOGLWL... . You'll note that this isn't at the minimums from the 1960s... and the Great Lakes Compact was signed in 2008.
But where do we stop with all of this endless expansion? Do the great lakes have to go through an Aral Sea type of situation before we decide it's time to stop? It's not like these AI ghouls are shy about wanting infinite expansion and an ever-growing number of data centers to feed their word generators, do we really think that if we just let them have the water now they're not going to abuse that and that they won't start draining the lakes for all the water they can manage? I'm not so optimistic, myself.
Water levels have been down for years as-is. It may not seem like much now, but I think it's important to avoid a "tragedy of the commons" scenario in the future.
“We’re going to have supervision,” Oracle founder Larry Ellison said. “Every police officer is going to be supervised at all times, and if there’s a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.”
Putting them all in one or two places isn’t good for reliability, disaster resilience, and other things that benefit from having them distributed.
Data centers do more than just run LLMs. It’s a good thing when your data is backed up to geographically diverse data centers and your other requests can be routed to a nearby data center.
Have you ever tried to play fast paced multiplayer games on a server in a different country? It’s not fun. The speed of light limits round trip times.
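A rough sketch of the physics (the distance is an assumed example):

    // Even at the speed of light in fiber, an intercontinental game
    // server adds several frames of lag. Distance is illustrative.
    fn main() {
        let speed_in_fiber_km_s = 200_000.0; // ~2/3 of c in glass
        let distance_km = 9_000.0;           // e.g. US west coast to Europe

        let rtt_ms = 2.0 * distance_km / speed_in_fiber_km_s * 1000.0;
        let frame_ms = 1000.0 / 60.0; // one frame at 60 fps

        println!("best-case RTT: ~{:.0} ms", rtt_ms);                  // ~90 ms
        println!("that's ~{:.1} frames at 60 fps", rtt_ms / frame_ms); // ~5.4
    }

And that's the theoretical floor; real routes, routers, and server tick rates only add to it.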
Same reason F-35 manufacturing is awkwardly distributed throughout the US - to shore up political support (voting to kill jobs in your state is usually unpopular) and to dip into as many subsidies as possible.
Data centers don't create local jobs once construction is complete. 40 people, most remote, can run a data center. The F-35 program claims to have over 250,000 people employed in its supply chain in the US and has large factories with high paying, often unionised jobs.
In these small rust belt towns, even 40 jobs is a huge boost. You have the hands-on sysadmin and network guys there, which, yeah, is small. But you also have facilities, security, and maintenance. When you combine this with the stimulus to the local economy through construction, it's a positive. Sure, it's not a 10k-person factory, but there are places where the biggest employer is Walmart. These places look at an Amazon warehouse or a datacenter as a big benefit.
I'd also chime in that the presence of a datacenter in a smaller community can help through the increased tax revenue the town/county gets.
Likely there's some kind of tax incentive for the datacenter to be built in one place over another, but I have to imagine that the local county is going to net some sort of increase to its revenue, which can then be used to support the town.
There's also the benefit of the land the datacenter is on being developed. Even if that is done in financial isolation from the town/county, a pretty fancy new building designed for tech is being built. Should the datacenter go belly up, that's still a usable building/development that has some value.
It's not as much as you'd expect, and the townsfolk often get saddled with higher utility costs, among other things.
When the tax incentive timeline runs out, the data centers just claim they'll move away and the tax cuts get renewed.
It's happening in Hillsboro, Oregon right now. The city promised some land just outside of the boundary would stay farmland until 2030 or later. The city has already reneged on that. Utility rates have also doubled in recent years thanks to datacenters. The roads are destroyed from construction, which damages cars, further increasing the burden on everyone else.
Sure, but that speaks to my second point: if they pick up camp and leave, that's still developed property that has the potential to be more useful than it had been.
And in the same way that construction-damaged roads can lead to costs on everyone else - the development of that land employed people, and that is a positive thing for construction workers and their families (more than just financially).
Just because you can point at negative consequences doesn't mean positive ones don't exist as well. It's rarely black and white as to the net effects of things like this. You could/should even be considering what doing a build-out like this does for the reputation of a city, and the sense of optimism it can bring to a local community that might otherwise be left behind, completely out of the picture. There's another world where a small town appears not in an article about a new datacenter (or the possible ensuing city renege boondoggle) but as a small blip in a story about how small towns in this country have decayed as a result of being passed by during the current tech "boom".
It's also not all that trivial (or cheap) to just transport a datacenter to another state, or even county. You'd have to be pretty sure that whatever tax you're trying to now avoid is more than the (potentially) zero-tax new build or relocation you'd have to do to "escape".
At the end of the day, it's the responsibility of the local government to make sure that the deal is a net benefit to the community. Maybe that is too much to expect lol
I hear that argument, but a relative of mine is an electrician who started out working mostly at the original Facebook datacenter in 2016 or so. He now owns the business, and his single biggest client is still the Facebook datacenter.
For a 100MW scale facility the contract work is never over. Once you are done with one bit of work something else is in need of refreshing or changing. Components are breaking daily at that scale, and switch gear, UPS, generators, breakers, etc. all have useful lifetimes and a replacement cycle.
It’s effectively a full time job for an electrician crew or three.
Of course once the facility goes away entirely the job does too. But so goes a factory or anything else.
Which is a straw man, no? This thread is about building data centers, not F-35s. Microsoft and FB aren't competing against LM for land or jobs in Beaver Dam, WI, nor is it a zero-sum outcome; both can exist, i.e. 'manufacturing hubs'.
Yeah, I don't understand this at all. I use Patreon and I support a couple of tech content creators. But my use of Patreon intersects in no way with iOS, and I'm not sure how it would. Can someone please explain?
Okay, so basically Apple users are dumb as rocks, which is why iOS is so profitable in the first place, and they are corralled into installing apps and making in-app purchases.
>> I honestly don't understand the hatred that Microsoft gets for most of the work they're doing in Windows. As I've stated before, most 'problems' people ultimately have are either configuration issues or hardware issues.
And then you go on to describe your own hardware problems with Windows. That's called "projection" - attributing your experience to everyone else. It's like you don't read the other complaints, or somehow dismiss them. Have you not seen the ads yourself? Maybe you take the suggestions to use other Microsoft products as helpful suggestions rather than ads. Is that it? OneDrive failing? Try saying NO to using OneDrive - that's what some people would like to do, and it'll keep advertising and trying to enable itself. Then when we do use it... well, you've got issues with it not working right too.
That seems like an argument for using the same material. Different metals will expand more or less with temperature variations. If they all change by a fixed percentage, the tonal ratios should be preserved.
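A minimal sketch of that ratio argument, assuming pitch scales as f ~ 1/L (true for an ideal string at fixed tension or an open pipe; the lengths and expansion coefficient below are illustrative):

    // If every sounding length grows by the same factor (1 + alpha*dT),
    // every pitch drops by that same factor, so the ratios between
    // notes are unchanged.
    fn main() {
        let lengths = [100.0_f64, 75.0, 66.7, 50.0]; // example lengths (cm)
        let alpha = 1.2e-5; // linear expansion coefficient, roughly steel
        let dt = 20.0;      // temperature rise (K)

        // Frequency ratios relative to the first note, using f ~ 1/L.
        let ratios = |ls: &[f64]| -> Vec<f64> {
            ls.iter().map(|l| ls[0] / *l).collect()
        };

        let expanded: Vec<f64> =
            lengths.iter().map(|l| *l * (1.0 + alpha * dt)).collect();

        println!("{:?}", ratios(&lengths));  // same ratios...
        println!("{:?}", ratios(&expanded)); // ...before and after heating
    }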
>> But had they written their software in C, they wouldn't have needed to do any conversion at all. It means they could have titled the article "How we lowered the performance penalty of using Rust".
That's not really fair. The library was doing serialization/deserialization, which was a poor design choice from a performance perspective. They just made a more sane API that doesn't do all that extra work. It might best be titled "Replacing protobuf with a normal API to go 5 times faster."
BTW what makes you think writing their end in C would yield even higher performance?
> BTW what makes you think writing their end in C would yield even higher performance?
C is not inherently faster, you are right about that.
But my understanding is that the library they use works with data structures designed to be used from a C-like language, presumably full of raw pointers. These are not ideal to work with in Rust, so instead, presumably, they wrote their own data model in idiomatic Rust fashion, which means that now, they need to make a conversion, which is obviously slower than doing nothing.
They probably could have worked with the C structures directly, resulting in code that could be as fast as C, but that wouldn't make for great Rust code. In the end, they chose the compromise of speeding up the conversion.
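A minimal sketch of that trade-off (hypothetical types for illustration; this is not PgDog's actual code):

    // What an FFI-oriented library might hand you: C layout, raw
    // pointer, lifetime owned by the C side. Cheap to pass around,
    // awkward to use safely from Rust.
    #[repr(C)]
    struct RawRow {
        len: usize,
        data: *const u8,
    }

    // The idiomatic Rust model: owned and safe, but building it costs
    // an allocation and a copy per row. That copy is the conversion
    // overhead being discussed.
    struct Row {
        data: Vec<u8>,
    }

    // Safety: caller must guarantee `raw.data` points to `raw.len`
    // valid, initialized bytes.
    unsafe fn convert(raw: &RawRow) -> Row {
        let bytes = unsafe { std::slice::from_raw_parts(raw.data, raw.len) };
        Row { data: bytes.to_vec() }
    }

    fn main() {
        let bytes = b"hello";
        let raw = RawRow { len: bytes.len(), data: bytes.as_ptr() };
        let row = unsafe { convert(&raw) };
        assert_eq!(row.data, b"hello");
    }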
Also, the use of Protobuf may be a poor choice from a performance perspective, but it is a good choice for portability, it allows them to support plenty of languages for cheaper, and Rust was just one among others. The PgDog team gave Rust and their specific application special treatment.
> which means that now, they need to make a conversion, which is obviously slower than doing nothing.
One would think. But caches have grown so large, while memory speed and latency haven't scaled with compute, that there's often an embarrassing amount of compute sitting idle waiting for the next response from memory. So long as the conversion fits in the cache and operates on data already in the cache from previous operations (which admittedly takes some care), and your workload is memory, disk, or network bound, conversions can oftentimes be "free" in terms of wall clock time. The cost is slightly more wattage burnt by the CPU(s). Much depends on the size and complexity of the data structure.