I honestly feel that "uninitialized by default" is strictly a mistake, a relic from the days when C was basically cross-platform assembly language.
Zero-initialized-by-default for everything would be an extremely beneficial tradeoff IMO.
Maybe with a __noinit attribute or somesuch for the few cases where you don't need a variable to be initialized AND the compiler is too stupid to optimize the zero-initialization away on its own.
This would not even break existing code; it would just lead to a few easily fixed performance regressions. But it would make it significantly harder to accidentally introduce undefined and difficult-to-spot behavior (because code very often assumes zero-initialization and gets it purely by chance, and this is most likely to happen in the edge cases that might not be covered by tests under a memory sanitizer, if you even have those).
The only problem with vendor extensions like this is that you can't really rely on them, so you're still kinda forced to keep all the (redundant) zero initialization; solving it at the language level is much nicer. Maybe with C2030...
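To sketch what I mean (the __noinit spelling is hypothetical; the closest existing equivalent I'm aware of is building with -ftrivial-auto-var-init=zero and opting out per variable with the uninitialized attribute on recent GCC/Clang):

    #include <stddef.h>

    /* Hypothetical spelling of the opt-out; maps onto the existing
       GCC/Clang attribute where available. */
    #if defined(__GNUC__) || defined(__clang__)
    #define __noinit __attribute__((uninitialized))
    #else
    #define __noinit
    #endif

    int sum(const int *xs, size_t n)
    {
        int total = 0;               /* would be zero anyway under zero-init-by-default */
        __noinit char scratch[4096]; /* hot path: explicitly opt out of the zeroing     */

        for (size_t i = 0; i < n; i++)
            total += xs[i];

        (void)scratch;               /* only here to show the annotation */
        return total;
    }

With a language-level default, only the annotated buffer would skip the implicit zeroing; everything else would have well-defined contents.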
There are many low-level devices where initialization is very expensive. It may mean that you need two passes through memory instead of one, making whatever code you are running twice as slow.
I would argue that these cases are pretty rare, and you could always get nominal performance with the __noinit hint, but I think this would seldom even be needed.
If you have instances of zero-initialized structs where you set individual fields after the initialization, all modern compilers will elide the dead stores in the typical cases already anyway, and data of relevant size that is supposed to stay uninitialized for long is rare and a bit of an anti-pattern in my opinion anyway.
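For instance, in a (made-up) case like this, -O2 on current GCC/Clang will typically fold the zero init and the member stores together and drop the dead zero stores:

    struct packet {
        int  type;
        int  flags;
        char payload[64];
    };

    struct packet make_packet(int type)
    {
        struct packet p = {0};  /* explicit zero init of the whole struct        */
        p.type  = type;         /* overwrites part of that zeroing...            */
        p.flags = 1;            /* ...so the compiler drops the dead zero stores */
        return p;               /* payload stays zeroed, which is the intent     */
    }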
meh, the compiler can almost always eliminate the spurious default initialization because it can prove that the first use of the variable is the real initialization setting it. The only time the redundant initialization will be emitted by an optimizing compiler is when it can't prove it's redundant.
I think the better reason to not default initialize as a part of the language syntax is that it hides bugs.
If the developer's intent is that the correct initial state is 0, they should just explicitly initialize to zero. If they haven't, then they must intend that the correct initial state is the dynamic one in their code, and the compiler silently slipping in a 0 in cases the programmer overlooked is a missed opportunity to detect a bug due to the programmer under-specifying the program.
It only works for simple variables, where initialisation to 0 is counterproductive because you lose a useful compiler warning (about using an uninitialised variable).
The main case is arrays. Here it's often impossible to prove whether some part of it is used before initialisation, so there is no warning. It becomes a tradeoff: potentially costly initialisation (arrays can be very big) or potentially using random values other than 0.
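A (made-up) example of why the array case is hard: whether scratch[] is read before it is written depends entirely on the runtime contents of idx, which the compiler cannot see:

    #include <stddef.h>

    double gather_sum(const size_t *idx, size_t n)
    {
        double scratch[4096];            /* zeroing this on every call has a real cost */

        for (size_t i = 0; i < n; i++)
            scratch[idx[i]] = (double)i; /* writes only the slots named by idx         */

        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += scratch[idx[i]];      /* reads the same slots, but proving that
                                            statically is generally impossible         */
        return sum;
    }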
Fair point, though compilers could presumably do much better warning there on arrays-- at least treating the whole array like a single variable and warning when it knows you've read from it without ever writing to it.
C has pointers. It's often very difficult or impossible to deduce whether an array was written to or not.
It's possible in some cases (a local array and no pointers of the same type in scope), though, so yeah, a warning would be useful in those cases.
In recent years I've come to rely on this non-initialization idiom. Both because as code paths change the compiler can warn for simple cases, and because running tests under Valgrind catches it.
C++26 has everything initialized by default. The value is not specified, though. Implementations are encouraged to use something weird to detect use before explicit initialization.
Depends on the boundary. I can give a non-Linux, microkernel example (but one that was/is shipped on tens of millions of devices):
- prior to 11.0, Nintendo 3DS kernel SVC (syscall) implementations did not clear output parameters, leading to extremely trivial leaks. Unprivileged processes could easily retrieve kernel-mode stack addresses, making exploit code much easier to write, example here: https://github.com/TuxSH/universal-otherapp/blob/master/sour...
- Nintendo started clearing all temporary registers on the Switch kernel at some point (iirc x0-x7 and some more); on the 3DS they never did that, and you can leak kernel object addresses quite easily (iirc by reading r2). This made an entire class of use-after-free and arbwrite bugs easier to exploit (call SvcCreateSemaphore 3 times, get the sema kernel object address, use one of the now-patched exploits that can cause a double-decref on the KSemaphore, call SvcWaitSynchronization, profit)
more generally:
- uncleared padding in structures + copy to user = infoleak
so one at least ought to be careful when crossing privilege boundaries
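The padding case, in deliberately simplified (hypothetical) form; copy_to_user here is just a stand-in for whatever mechanism copies kernel data across the privilege boundary:

    /* stand-in declaration for the real kernel primitive */
    extern long copy_to_user(void *to, const void *from, unsigned long n);

    struct reply {
        char status;   /* 1 byte; on most ABIs 3 padding bytes follow */
        int  value;
    };

    long handle_request(void *user_buf)
    {
        struct reply r;                               /* padding keeps stale stack bytes */
        r.status = 0;
        r.value  = 42;
        return copy_to_user(user_buf, &r, sizeof r);  /* leaks those padding bytes       */
        /* common fix: memset(&r, 0, sizeof r); before filling the fields,
           since assigning members one by one leaves the padding untouched. */
    }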
Would you rather have an HFT trade go correctly and a few nanoseconds slower or a few nanoseconds faster but with some edge case bugs related to variable initialisation?
You might claim that you can have both, but bugs are more inevitable in the uninitialised-by-default scenario. I doubt that variable initialisation is the thing that would slow down HFT. I would posit it is things like network latency that would dominate.
> Would you rather have an HFT trade go correctly and a few nanoseconds slower or a few nanoseconds faster but with some edge case bugs related to variable initialisation?
As someone who works in the HFT space: it depends. How frequently and how bad are the bad-trade cases? Some slop happens. We make trade decisions with hardware _without even seeing an entire packet coming in on the network_. Mistakes/bad trades happen. Sometimes it results in trades that don't go our way or missed opportunities.
Just as important as "can we do better?" is "should we do better?". Queue priority at the exchange matters. Shaving nanoseconds is how you get a competitive edge.
> I would posit it is things like network latency that would dominate.
Everything matters. Everything is measured.
edit to add: I'm not saying we write software that either has or relies upon uninitialized values. I'm just saying in such a hypothetical, it's not a cut-and-dried "do the right thing (correct according to the language spec)" decision.
> We make trade decisions with hardware _without even seeing an entire packet coming in on the network_
Wait what????
Can you please educate me on high-frequency trading? I don't really understand the point of it: let's say one person has created an HFT bot, then why the need for other bots, other than them having different trading strategies? And I don't think these are profitable / I wonder how they compare in the long run with the Boglehead strategy??
This is a vast, _vast_ over-simplification: The primary "feature" of HFT is providing liquidity to the market.
HFT firms are (almost) always willing to buy or sell at or near the current market price. HFT firms basically race each other for trade volume from "retail" traders (and sometimes each other). HFTs make money off the spread - the difference between the bid & offer - typically only a cent. You don't make a lot of money on any individual trade (and some trades are losers), but you make money on doing a lot of volume. If done properly, it doesn't matter which direction the market moves for an HFT, they'll make money either way as long as there's sufficient trading volume to be had.
But honestly, if you want to learn about HFT, best do some actual research on it - I'm not a great source as I'm just the guy that keeps the stuff up and running; I'm not too involved in the business side of things. There's a lot of negative press about HFTs, some positive.
The point is that there are security implications to not zeroing out memory, even if it costs performance. Making an argument that it’s too performance sensitive to do anything doesn’t actually hold water.
Zero initializing often hides real and serious bugs, however. Say you have a function with an internal variable LEN that ought to get set to some dynamic length that internal operations will run over. Changes to the code introduce a path which skips the setting of LEN. Current compilers will (very likely) warn you about the potentially uninitialized use, valgrind will warn you (assuming the case gets triggered), and failing all that the program will potentially crash when some large value ends up in LEN-- alerting you to the issue.
Compare with default zero init: The compiler won't warn you, valgrind won't warn you, and the program won't crash. It will just be silently wrong in many cases (particularly for length/count variables).
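Roughly the shape of that hazard (a made-up sketch, with a lowercase len):

    #include <stddef.h>

    double checksum(const double *samples, int kind)
    {
        size_t len;     /* meant to be assigned on every path below */
        double sum = 0.0;

        if (kind == 0)
            len = 100;
        else if (kind == 1)
            len = 1000;
        /* a later change adds kind == 2 but forgets to set len:
           - uninitialized: -Wmaybe-uninitialized / valgrind flag it, or a huge
             garbage value makes the loop crash, so the bug gets noticed
           - zero-init by default: len == 0, the loop runs zero times, and the
             function quietly returns 0.0 -- wrong, and nothing complains */

        for (size_t i = 0; i < len; i++)
            sum += samples[i];
        return sum;
    }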
Generally, the attention to exploit safety can push us in directions that are bad for program correctness. There are many places where exploit safety is important, but also many cases where it's irrelevant. For security it's generally 'safe' if a program erroneously shuts down or does less than it should, but that is far from true for software generally.
I prefer this behavior: use of an uninitialized variable is an error which the compiler will warn about; however, where the compiler cannot prove the variable is never used uninitialized, its behavior is implementation defined and can include trapping on use, initializing to zero, or initializing to ~0 (the complement of zero) or some other likely-to-crash pattern. The developer may annotate with _noinit, which makes any use UB and avoids the cost of inserting a trap or ~0 initialization. ~0 init will usually fail, but seldom in a silent way, so hopefully at least any user reports will be reproducible.
Similar to restrict, _noinit is a potential footgun, but its usage would presumably be quite rare and only in carefully maintained performance-critical code. Code using _noinit, like code using restrict, is at least still more maintainable than assembly.
This approach preserves the compiler's ability to detect programmer error, and lets the implementation pick the preferred way to handle the remaining error. In some contexts it's preferable to trap cleanly or crash reliably (init to ~0 or an explicit trap), in others it's better to be silently wrong (init to 0).
Since C99 lets you declare variables wherever, it is often easy to just declare a variable where it is first set, and that's probably best, of course... when you can.
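i.e. prefer the second shape over the first whenever control flow allows it (trivial made-up example):

    /* C89 habit: declaration and first assignment are far apart, so a new
       code path can slip in between and read the variable too early. */
    static int parse_c89(const char *s)
    {
        int value;
        if (s == NULL)
            return -1;
        value = (int)(s[0] - '0');
        return value;
    }

    /* C99 habit: declare where the value is known; there is no window in
       which the variable exists but holds garbage. */
    static int parse_c99(const char *s)
    {
        if (s == NULL)
            return -1;
        int value = (int)(s[0] - '0');
        return value;
    }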
The problem is the somewhat low atmospheric CO2 concentration; this is why all the "let's just pollute now and remove the CO2 from the atmosphere with some futuristic tech!" approaches are also kinda doomed, because even if you had some workable process that did not cause excessive costs by itself (like this one possibly), you still need to process millions of cubic meters of air, every year, just to compensate for a single (!!) car.
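Rough numbers behind that, under very generous assumptions (400 ppm inlet, 100% single-pass capture, a conservative ~2 tonnes of CO2 per car per year, all figures approximate):

    CO2 per m³ of air ≈ 400e-6 (volume fraction) × (44/29 mass ratio) × 1.2 kg/m³ ≈ 0.7 g
    one car, ~2 t/yr  ≈ 2,000,000 g / 0.7 g per m³ ≈ 3 million m³ of air per year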
Sure. But that's still very much a lower bound, and it makes a bunch of idealizing assumptions that are hopelessly overoptimistic (assuming your intake gets the full 400ppm of CO2, and you manage to extract all of it in one go).
Even from those numbers, you already get up to a football stadium of processed air per hour for every small town. For a big city, you need to process that football stadium worth of air every second.
Building infrastructure of that magnitude is a major commitment, and if most nations can not be arsed to replace a small number of fossil power plants per country, I honestly don't see us building large air processing plants in every single town in a timely manner (that are extremely likely to be less profitable than replacing the power plants).
It converts CO2 and H2O into ethane and ethylene; oversimplifying, the output is natural gas. What do you do with the natural gas?
You can put it in pipes and send it to a central location, but you need pumps and the pipes are a nightmare.
You can store it in a local tank, but you need a pump again, and you can burn it, but that releases the CO2 again. Using a solar panel and a battery is easier and more efficient.
(Do they also need some water pipes?)
For a distributed production, solar panels are much better.
Pipes and pumps may work in a centralized setup, but I'm still not convinced it's better than biodiesel or ethanol.
Photosynthesis is very inefficient, so there is a lot of room for improvement. But plants are like self-building robots, and they store the output in grains that are easy to transport.
Earth's atmosphere is 5.15×10^18 kg and at atmospheric pressure density is 1.293 kg/m³. The whole thing would be more like 4 billion billion cubic meters. So a billion AC units could have the whole thing cleaned up in just 200 years.
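Filling in the per-unit throughput that implies (rough check):

    4×10^18 m³ / 10^9 units ≈ 4×10^9 m³ per unit
    200 years ≈ 1.75×10^6 hours  →  ≈ 2,300 m³/h per unit,
    roughly the airflow of a central residential AC (~1,200 CFM) running nonstop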
Which would suggest that maybe as much as 0.1% to 1% of earth's atmosphere has ever passed through an air conditioner.
This just has me picturing a scene where global warming is solved not by cleaning it up, but by leaving tons of window air conditioners everywhere, troll physics style, "to cool down the outside"
> The Stanford team's passive cooling system chills water by a few degrees with the help of radiative panels that absorb heat and beam it directly into outer space. This requires minimal electricity and no water evaporation, saving both energy and water. The researchers want to use these fluid-cooling panels to cool off AC condensers.
Outer space is like really really cold. What we need is a huge heat pump in outer space that pumps the planets heat out into deep space. All we need is a space-elevator style tube and we're good to go!
You would need GIANT radiators. Space is cold, but there is also almost no cold material to transfer heat to. So even with a space elevator... not so easy.
I was going to say that surely it's a larger percentage, especially including all the commercial and industrial AC units running non-stop.
Then I remembered that my dad didn't have indoor plumbing in his house for most of his childhood, and that 200 years is a much longer time than my first gut instinct.
It would not hurt, but this just makes no (economic) sense currently, and that's not gonna change any time soon.
Right now we don't have any CO2 scrubbing process without significant maintenance or operating costs, so this would add significant cost to all those ACs. Furthermore, the effect is marginal: with emissions of >6 tons of CO2/year/human, you would have to scrub a lot of air (>10m³/min with cost-free 100% efficiency, which is a pipe dream) to compensate (for a single human); running the ACs on full flow all the time might not even be worth it, depending on how efficient the scrubbing is and how clean the source of electricity is.
You might say scrubbing clean 10m³/min of air for every human sounds kinda feasible, but just compare the realistic cost of such a setup to the options that are currently implemented, and how much popular resistance/feet dragging they already meet (renewables, nuclear power, electrification, CO2 taxation).
As a general benchmark, I would suggest that until the scrubbing technology in question has managed to be installed at most major stationary sources of CO2 (coal/gas power plants, etc.), it is not even worth discussing it for distributed air scrubbing.
You have to start somewhere. Even a not-great solution can set the precedent, with goals to gradually increase the efficiency. Mandates can do a lot — just look at the catalytic converter. Put it on all HVAC systems and _something_ will happen, even if a small effect given the HVAC itself is contributing way more CO2.
We need solutions all across the board, and if you start requiring small scrubbers to function, that can start to provide scale effects that can translate to bigger systems.
If we were serious about CO2 capture, then the place to start would be big producers (like coal plants): Because that way you need much less scrubbing efficiency and can tolerate much greater overhead while still being effective.
If a technology is not good enough for at least serious trials in that (much simpler and more forgiving) use case, then there is no point in discussing it for small environmental air scrubbing. That is akin to talking about electrifying passenger planes before having a single electric vehicle on the roads.
Catalytic converters only have to convert a tiny part of the output, and they convert it into more stable forms.
The problem with CO2 is that it's the most stable form.
Also, if you want to absorb the CO2: for 1 pound of fuel you get like 3 pounds of CO2. You can absorb it into a solid, and the density is like 3 times the density of the fuel. So with a lot of approximations you need a container that has the same volume as the fuel tank to store the CO2, or even bigger if you absorb it in a liquid, or much much much bigger as a gas. And you must empty/exchange the container when you refuel. And then you realize that it's better to use an electric car.
> I will wager that the cost of compliance will dramatically increase, however.
What makes you confident in that assumption? Because I would wager that cost of compliance is gonna be pretty much negligible, because manufacturers have most of the numbers already anyway, and this pales in comparison to EMF testing, too.
Estimates for effects of the regulation are right there-- they hope for total savings of 8TWh by 2030, but mainly from longer product lifetime through devices staying functional for longer on old batteries and easier battery repairability.
> Estimates for effects of the regulation are right there-- they hope for total savings of 8TWh by 2030, but mainly from longer product lifetime through devices staying functional for longer on old batteries and easier battery repairability.
That's an interesting point I didn't realize. I still don't see how they can get to those numbers, do they quote the calculations somewhere? I toned down the claim to be less hyperbolic based on your feedback here.
I wonder how often a recent generation phone is replaced due to battery life issues, especially considering the 'smart charging' features that phones have now which makes battery wear a fraction of what it was previously (such as charging to 80% max, 'smart' slow charging at night instead of fast charging, etc)
I agree with you that battery tech is basically getting better by itself.
Anecdotally, I bought at least one phone where slightly easier/cheaper repairability of screen + battery would have made me keep the old one.
I think making these metrics clear on product packaging is also becoming more important, because the improvements in phones have slowed down already, and longer lifetimes should be a consequence/benefit (but this is against manufacturer interests).
Product packaging? Where do you see the package for a phone? I've never seen the box for a phone before buying it; even in the 90's when I sold phones at RadioShack they were kept under lock and key. In fact, every phone I have purchased in the last 15 years or so has been second hand via eBay or direct from Google and shipped. Even at the Apple store you can't see a package.
I phrased that poorly, I meant "mandatory labeling" rather than physical packaging.
Screen/battery fixability was a major criterion when selecting my last phone, and this was really hard to gauge; I'm hopeful that regulation like this is gonna help.
Estimates for the numbers are right there (which I really appreciated).
I think the main expected gains here are less from the estimated 0.2% electricity savings and more about longer average product lifetimes (thanks to better repairability).
If a manufacturer gets his phone to last for a day of heavy use, there is little motivation to improve efficiency past that benchmark, I think (and this labeling provides that motivation).
If only the EU was like "you can save money and our environment by buying Chinese EVs instead of smokey German, Italian & French diesels" in the same spirit. Oh well.
It's pretty easy to regulate things that aren't made by your domestic companies.
This is oversimplifying; European EV tariffs are company specific and (formally) aim to counteract state subsidies.
From an average voter's perspective, "sacrificing" local industry for a (temporary?) 20% discount on EVs is not too popular anyway, and neither is it gonna meaningfully save the planet IMO.
Tariff levels are basically the same as US import tariffs on pickup trucks, so not especially high, either.
I personally think tethered caps are fine; I'm pretty sure they objectively reduce littering / lead to fewer cleanup requirements behind tourists and the like.
Do you think that the cost/benefit tradeoff in untariffed Chinese cars would be significantly better than tethered caps or deposits on bottles, or banning throwaway plastic straws?
Because this is far from clear to me; sure, introducing more, cheaper electric cars would help much more than reducing plastic waste, but the cost/risk to local industry is also MUCH higher, and a situation like the one with agriculture (a whole industry sector running basically on subsidies, in every industrialized country) is worth trying to avoid, too.
The choice isn't between a Chinese EV and an Italian diesel car. There are plenty of EVs by EU based manufacturers, including affordable ones from e.g. Renault/Dacia.
Sacrificing the European automakers for a temporary discount would be very foolish.
The tariffs on Chinese EVs are very unserious at the same time as subsidies are being withdrawn while the alleged deadline for phaseout of ICE is still in place.
VW showed which side they were going to bet on with Dieselgate and should get no further sympathy.
> everyone who could afford an EV probably already has one
Obviously not: this depends on the price of EVs, which is a constantly moving target and is determined by .. the import tariffs I just mentioned. Not to mention that cars have a long product lifecycle. I could afford an electric car, I have a space to park it, but for the time being I'm using my elderly petrol car because my annual mileage is low.
Sort of. They came out with some tiny city cars that are cheaper but everything else has maintained and increased price.
Where's my cheap all-electric mini SUV with AWD [1] that I can take for a holiday when I rent a cabin in the woods, without worrying whether I can make it up there and make it back home?
Where are the charging stations on the way to the mountains and back? How much of my weekend do i need to sacrifice for charging time instead of hiking?
When I last bought a car 5 years ago, the used car market for EVs was very small and EVs were very expensive. Since then, they became much cheaper, there are a lot of new models and a lot of used cars on the market.
I'm not planning to buy a new car though, as mine is only 8 years old and still working fine. I'll check again when repairs start to get more expensive, maybe in a few years.
A lot of people also just keep their cars for a decade or more, and buy cheaper used cars mainly-- you can not expect such a market to completely switch in a fraction of product lifetime (especially while new tech is still rapidly improving).
Not even China will switch overnight. I'm asking if China is better at EV incentives.
I'm driving a 15 year old car :) I want to replace it some time in the next 2-3 years. Right now I wouldn't consider an EV or PHEV because I don't think I can charge one regularly and the price premium is not worth it to me, especially compared to the hassle.
Nice page! I'm extremely happy about efforts like this. You might argue that the EU is a sprawling, wasteful bureaucracy and you would not be wrong, per se, but they made a lot of useful laws that just simply make the world a better place.
Having standardized chargers for phones and laptops is SUPER nice and would never have happened without intervention IMO.
The only equivalent for US "useful, average-citizen friendly legislation" that I recently heard about was the standardization of power tool batteries pursued by DOGE-- which turned out to be an April Fools' hoax when I just looked it up :(
And yet, only US has right-to-repair laws for cars. When is EU going to fix that?
I can't get manuals and software access to fix a new car made in EU.
I don't really care that I can't fix my all-glued-up phone for <1000 EUR but I do care that I have to spend thousands on car repairs that I could do myself.
This law has more to do with the environment/energy usage than with the consumer. And the US consumer cares a lot less about energy usage, since they're much richer in both energy and money than the EU.
If they paid German gas and electricity prices for example while having European wages, they'd care a lot more about energy consumption, believe that.
I think regulation like this is just strictly good (even from US perspective/priorities), because you can not realistically "vote with your wallet" for environment-friendly products when relevant info is obfuscated, falsified or not available at all.
Just ignoring energy efficiency/repairability labelling is always an option for consumers on the other hand.
> If they paid German gas and electricity prices for example while having European wages, they'd care a lot more.
I'm not so sure on this; I think environmental concerns are mainly culture driven, because even after all the price increases over the last decade, electricity especially is still dirt cheap compared to e.g. rent, basically everywhere.
>I think regulation like this is just strictly good (even from US perspective/priorities)
I never said it's bad, I was just answering why the EU is pushing for this when the US isn't: because in the US energy affordability is not as big of an issue for consumers.
The article talks about ecodesign requirements as well, such as spare parts needing to be available for some years after the product isn't sold, the freedom to have 3rd parties repair devices and so on. It's not only a matter of energy but consumer protection as well
I just checked the el. prices for Germany on [1] and [2] and I see something like 9-12 euro ct/kWh, which is about $0.10/kWh. In NY state [3], where I live, the prices right now are $0.25/kWh, so 2.5x as much. Average salaries are $57,198 for Germany [4] and $61,984 for the US [5]. Maybe I'm missing some details about it, but I don't think it's about affordability and energy cost. My take is it's a lot more about top-down politics.
That's a good point. The numbers go the opposite way if you check household prices. Are prices in Germany anomalous due to them putting their eggs in the Russian gas basket? Countries with nuclear power plants seem to have lower electricity prices (who would've known).
In Germany, I think a big cost driver is infrastructure buildout, from switching coal plants to renewables as well as building new gas turbines, more so than gas price itself (which is <20% of electricity). But the country is already >60% renewables for electricity, so there is at least something to show for it.
France basically invested 40 years ago and are still reaping the spoils; I'd expect prices there to rise significantly once a majority of nuclear reactors reaches end-of-life.
From a household perspective the cost of electricity feels pretty marginal to me, anyway.
> this law has more to do with the environment/Energy usage than with the consumer
Not sure about the distinction there, improving the environment and energy usage is benefitting the consumer, because the consumer is also the citizen living there
> You might argue that the EU is a sprawling, wasteful bureaucracy and you would not be wrong, per se, but they made a lot of useful laws that just simply make the world a better place.
The EU bureaucracy itself is significantly lighter and more purposeful than any of the underlying individual states' bureaucracy, though that might simply be a function of youth and restricted scope.
> The EU bureaucracy itself is significantly lighter and more purposeful than any of the underlying individual states' bureaucracy
The EU ""states rights"" (subsidiarity) is a lot stronger and more real than the corresponding structures in the US. It also doesn't really do direct enforcement - there's no EU federal police checking tablets, it's all done through national level enforcement.
For Japan, I think allowing income tax redirection to a "home town" is a really good model to keep infrastructure funded in more rural areas that suffer from brain drain/exodus.
I'm not saying that you need laws to make the world better, just that some of them do.
This belongs in that category in my opinion because it is something that costs very little in absolute terms (manufacturer has to run some tests and print some numbers that they probably already had), but it makes the whole system work better because it enables people to vote with their wallet, and gets inefficient products eliminated because people can spot them before sale.
No company would advertise a "20% below average battery lifetime" without regulation like this, which is why objectively bad devices can still get sold easily on unregulated markets.
Sure. But even if this were a full tradeoff between product performance and energy use (which it really isn't; you can typically eliminate a lot of power consumption without impacting performance at all in domestic appliances), then that would already be an improvement:
Without any energy labeling, the manufacturer has basically no incentive to provide any comparable information on energy use (especially if his device is subpar at it), and it's impossible for consumers to make informed decisions when buying (and very easy for extremely wasteful trash-products to get sold).
It also would be very appealing to just waste a ton of electricity for marginally better performance, because wasted electricity costs the device manufacturer nothing.
> Maybe with big enough batteries that can capture this energy, it could become a viable solution?
No, it could not. The problem is that lightning strikes are so short that their middling amount of energy still results in an insane amount of electrical power (for a very short time). And electrical power is the primary driver of cost in most components here.
Capturing lightning is like building literally a hundred electrical substations just to run them for 50 microseconds a day, 10 days per year. Our planet simply does not have the lightning density for this to ever work out.
All that (very expensive!) capture infrastructure would basically sit uselessly for almost all the time (even in the middle of a lightning storm!).
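For scale (rough numbers; a strike is often quoted at a few GJ delivered over tens of microseconds):

    peak:    ~5 GJ / ~50 µs  ≈ 10^14 W, i.e. on the order of 100 TW per strike
    average: ~5 GJ / 1 day   ≈ 60 kW, even if you somehow captured one full strike per day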
You do have a good core point, which is that foreign assets can often just be taken from you in a crisis (even if you're a state actor-- see Russia for a recent example).
But things are not as simple as you portray them. Let's assume that all of Norway suddenly became a country full of completely useless rent-seekers living off of their wealth fund. Nationalizing their US assets does not just mean that you are unlikely to see any assets that you hold in Norway ever again (or any exports of value, as you correctly identified).
It also means that every other nation is gonna become very "careful" in dealing with you, by reducing trade/cooperation/investments, because holding any US asset has suddenly become a liability (because it could be seized).
But you don't have to take my word-- just look at which nations do this kind of "nationalizing foreign-held assets on a whim", and note where you would put them on a "failed-state kleptocracy" leaderboard-- you will note that the correlation is quite high, and getting the US high on that leaderboard seems not very desirable to me.
You omitted the part where the US becomes self-sufficient via AI (no more need for foreign labour) and does not care whether other nations become "careful" in dealing with them.
On what time horizon does this happen? Because this sounds like a wishful utopia to me.
I would expect current progress in AI to deliver the equivalent of a veritable army of consultants for very cheap, available to basically everyone within a decade or so. But that is not gonna make foreign labor worthless (or labor in general-- maybe a lot of white-collar/creative work, we'll see). But trade is always gonna have value until the earth is perfectly homogeneous, simply because it allows you to get value from both being better at things than other nations (=> export) AND from being worse (=> import).
If you are gonna go full autarky, you are going to be left behind by countries that don't, because all the spread-out efforts will struggle to compete with nations that put actual focus on things, and in-housing everything will drive up costs and prices tremendously.
My estimation is that it is one or two generations away.
Ray Kurzweil thinks about the timing more than me, has a pretty good track record in terms of his predictions, and estimates it to happen around 2045.
> trade is always gonna have value
We don't trade much with apes and birds, do we? And we don't let them invest in our stock markets. We also don't pay them dividends for the land we took from them.
As soon as one country achieves way higher intelligence than the rest of the world, things might change in a fundamental way.
I personally don't buy the whole singularity argument at all; I see no good examples of interesting intellectual tasks that scale well with the number of people thrown at them, and I see the whole AI thing developing exactly the same way-- exponentially increasing demands on resources for smaller and smaller gains in utility, without any run-away self improvement at all.
> We don't trade much with apes and birds, do we? And we don't let them invest in our stock markets. We also don't pay them dividends for the land we took from them.
This sounds immensely misanthropic to me; if we hit a scenario like that, where a majority of US "entities" (?) share this kind of outlook on other humans, I strongly doubt that you (or I) are gonna be part of the "we" in that world, and I'd consider this more of a "may god have mercy" worstcase for our species than anything to be helped along.
True that an offshore wind turbine can produce 15MW. But it can cost $100m+ just for 1 turbine (built and installed). If drones are going up anyway (to protect a city/citizens from strikes), then electricity generation is effectively free, and the marginal cost is equal to the hardware required to capture it (maybe relatively low).
You don't just need to cover the 350km² with drones though, you also need buffering and/or transmission capabilities for absurdly high amounts of power (=> but low amounts of energy).
If you wanted a single buffer for the whole 350km², you'd need transmission capability from any point (or any drone launch station) to your central buffer in the terawatt range (currently our highest power grid links are in the ~10GW range, so this is pure fantasy already). Utilization (~ capacity factor) for the lightning capture infrastructure would also be abysmally low. You'd basically need to build a ~10TW (generous estimate!) system, where costs in a lot of components directly scale with power, just to get ~10MW of sustained power out.
There is no way you are ever gonna compete with that $100M wind turbine; you could literally have cheap, high-field, room temperature superconductors and be gifted several warehouses worth of supercapacitors, and the whole lightning capture boondoggle still would not make any economic sense.
Lightning is ~5GJ per strike. That means you'd need ~4 lightning strikes per hour just to keep up with a single large offshore wind turbine (15MW with 40% capacity factor).
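Checking that arithmetic with the same numbers:

    15 MW × 0.4 capacity factor ≈ 6 MW sustained
    6 MW × 3,600 s              ≈ 21.6 GJ per hour
    21.6 GJ / 5 GJ per strike   ≈ 4.3 strikes captured in full, every hour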
There is also no realistic way to scale the whole thing up to significant levels of power; with the wind turbines, you just build several hundred to get into the GW range. There's simply not enough lightning to achieve that.
And the whole power buffering infrastructure that you would need would be an underutilized waste of (expensive) components.
There's never been any serious attempt at harvesting lightning at scale because a single glance at the numbers reveals how (economically) pointless an exercise it is.