It seems like the traditional way to develop good judgement is by getting experience with hands-on coding. If that is all automated, how will people get the experience to have good judgement? Will fewer people get the experiences necessary to have good judgement?
Compilers, for the most part, made it unnecessary for programmers to check the assembly code. There are still compiler programmers who do need to deal with that, but most programmers get to benefit from just trusting that the compilers, and by extension the compiler programmers, are doing a good job.
We are in a transition period now. But eventually, most programmers will probably just get to trust the AIs and the code they generate, maybe doing some debugging here and there at most. Essentially, AIs are becoming the English-to-code compilers.
In my experience, compilers are far more predictable and consistent than LLMs, making them suitable for their purpose in important ways that LLMs are not.
I honestly think people are massively panicking over nothing with AI. Even wrt graphic design, which I think people are most worried about, the central skill of a graphic designer is not the actual graft of sitting down and drawing the design; it's having the taste and skill and knowledge to make design choices that are worthwhile and useful and aesthetically pleasing. I can fart around all day on Stable Diffusion or telling an LLM to design a website, but I don't know shit about UI/UX design or colour theory or simply what appeals to people visually, and I doubt an AI can teach it to me to any real degree.
Yes, there are now likely going to be fewer billable hours and perhaps less joy in the work, but at the same time I suspect that managers who decide they can forgo graphic designers and just get programmers to do it are going to lose a competitive advantage.
There are about 37 billion tons of CO2 emissions per year. If you could get the price down to $100/ton, you could get the world to net zero for $3.7t/year. US GDP is about $25 trillion/year; world GDP is about $96.5 trillion. So it would cost about 3.8% of world GDP per year. Global military spending is about $2.2t/year, so it would be higher than that, but somewhat theoretically possible. If you substantially reduced emissions, it might be feasible to use carbon capture for the rest.
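A minimal back-of-envelope sketch of that arithmetic, using only the figures quoted above (nothing here is a forecast, just the multiplication spelled out):

```python
# Back-of-envelope cost of capturing all annual CO2 emissions,
# using the rough figures quoted above.
emissions_tons_per_year = 37e9        # ~37 billion tons of CO2 per year
capture_cost_per_ton = 100            # hypothetical target price, $/ton
world_gdp = 96.5e12                   # ~$96.5 trillion/year
global_military_spend = 2.2e12        # ~$2.2 trillion/year

total_cost = emissions_tons_per_year * capture_cost_per_ton
print(f"Annual capture bill: ${total_cost / 1e12:.1f} trillion")                            # ~$3.7T
print(f"Share of world GDP:  {total_cost / world_gdp:.1%}")                                 # ~3.8%
print(f"Multiple of global military spending: {total_cost / global_military_spend:.1f}x")   # ~1.7x
```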
That is the price of a ton of coal. Buy it from China BEFORE it gets burned and you have a sensible strategy.
Sucking it back from the atmosphere AFTER burning takes 10x more energy, which is complete insanity. "Getting the price down" and "when it scales" reflect an utter misunderstanding of the situation. Scale doesn't defeat the laws of physics.
So long as they have coal power plants and need the electricity, unless it gets prohibitively expensive, they will just mine more coal. The way to reduce emissions is to replace high-emission infrastructure.
The political and logistical problems are a far bigger challenge than energy production. Just think of all the concrete that China and India will need to pour in the coming decades and the CO2 emissions of that.
No. If cheap energy production weren't a huge challenge, we wouldn't have the CO2 challenge either. It is essentially the same challenge. You can say it's "96% the same challenge" if you want to take concrete into the equation.
No politics or logistics can change the laws of physics. Burning coal with one hand and capturing it with the other at 10x the cost just doesn't make sense, neither from the engineering perspective nor from the economic one.
I saw a graph of that the other day and supposedly concrete was only about 3% of emissions globally.
Even if India and China triple that, we’re still coming out ahead by focusing on the reduction of fossil fuels for power generation, international transport, and heating buildings.
Why should anyone stop China and India from making lives better for their people? Seems like the classic American mindset where they get to pour as much concrete as they’d like but when other countries try to do it, they start talking of the environment.
> That is the price of a ton of coal. Buy it from China BEFORE it gets burned
And then that money funds more mines to get coal out of the ground faster.
> Sucking it back from the atmosphere AFTER burning takes 10x more energy which is complete insanity.
Part of the plan needs to be capture at power plants. Another part of the plan needs to be heavy taxes for releasing CO2. If someone needs the convenience for some use case, let them pay the capture price.
No it doesn't. You're not turning it back into fuel. You need to bottle it up or react it into a non-gas, both of which use much less energy than you get from combustion.
Well, try to write down a concrete chemical reaction to achieve this and you will be disappointed. Light atoms like carbon don't like to stay close to each other at room temperature. You need to pour a lot of energy into the chemical bonds between them to make that happen.
So no, it's still an energy-bound problem and we still burn coal to get energy.
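For a sense of the magnitudes being argued about here, a rough, hedged comparison: the heat-of-combustion figure is standard chemistry, but the capture-energy numbers are assumed order-of-magnitude values in the spirit of commonly cited ranges, not measurements:

```python
# Rough energy bookkeeping per ton of CO2. All capture figures below are
# assumed order-of-magnitude values for illustration, not authoritative data.
HEAT_OF_COMBUSTION_KJ_PER_MOL = 393.5    # C + O2 -> CO2, standard enthalpy
CO2_MOLAR_MASS_KG_PER_MOL = 0.044

kj_per_kg_co2 = HEAT_OF_COMBUSTION_KJ_PER_MOL / CO2_MOLAR_MASS_KG_PER_MOL   # ~8,900 kJ/kg
heat_released_gj_per_ton = kj_per_kg_co2 * 1000 / 1e6                       # ~8.9 GJ per ton of CO2
print(f"Heat released while making 1 t of CO2 from pure carbon: ~{heat_released_gj_per_ton:.1f} GJ")

# Assumed capture-energy ranges (GJ per ton of CO2), for comparison only:
point_source_capture_gj = (3.0, 4.5)     # concentrated flue gas at a power plant
direct_air_capture_gj = (5.0, 10.0)      # dilute CO2 pulled back out of ambient air

print(f"Assumed point-source capture energy: ~{point_source_capture_gj} GJ/t")
print(f"Assumed direct air capture energy:   ~{direct_air_capture_gj} GJ/t")
```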
One of the options is just injecting CO2 into very deep caves/water. And it will then react with many rocks all by itself!
The company that Microsoft is buying capture from is using lots of energy to remove CO2 from rocks, but that's because they're working to pull more CO2 from normal air. When you have an exhaust pipe it's already concentrated and you can separate it out much more easily.
Injection of CO2 into water happens naturally in the ocean. Unfortunately ocean didn't respond to any of the VC calls.
Injecting CO2 into rocks "all by itself" is extremely slow because it's exothermic and because the surface area of the rock is too small. You need to crush the rock (energy) and heat it up (energy).
That's not fast enough and also risks making the ocean too acidic.
> Injecting CO2 into rocks "all by itself" is extremely slow
By injection I meant injection. Drilling a deep hole and pushing the CO2 out the other end.
Only the reaction happens by itself.
With the right structure underground, the CO2 can spread over an enormous surface area at a high concentration that promotes reactions. But even if that takes too long, each well doesn't need to operate forever.
I'm consistently hearing two claims here. One is that we'll have energy to do carbon capture because we may need to overbuild renewables by 3-5x to account for intermittency and then most of the time we'll have a lot of surplus power, and the other is that the only solution is for people to drastically reduce energy consumption.
Clearly at least one of these is wrong, because we can't simultaneously have a big surplus and have to reduce consumption, so which one is it?
Why can't we have an energy surplus and also need to reduce consumption of carbon-intensive goods? It's not clear to me why they'd be mutually exclusive.
If, e.g., smelters still need to use coal, then an energy surplus doesn't help them. If carbon capture is expensive even with virtually free power (because of wages and infrastructure), the capture cost is reflected in the price of steel, and demand for steel is elastic, then we'd both capture more carbon and reduce steel consumption.
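A toy illustration of that price/quantity logic with a constant-elasticity demand curve; the prices and the elasticity are invented for the example, not estimates for the steel market:

```python
# Hypothetical: a capture fee raises the steel price, and elastic demand
# shrinks the quantity sold. All numbers are made up for illustration.
base_price = 700.0       # $/ton of steel (hypothetical)
capture_fee = 140.0      # $/ton added by paying for capture (hypothetical)
elasticity = -1.5        # assumed constant price elasticity of demand

price_ratio = (base_price + capture_fee) / base_price
quantity_ratio = price_ratio ** elasticity

print(f"Price rises by {price_ratio - 1:.0%}")                        # +20%
print(f"Quantity demanded falls to {quantity_ratio:.0%} of before")   # ~76%
```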
> Why can't we have an energy surplus and also need to reduce consumption of carbon-intensive goods? It's not clear to me why they'd be mutually exclusive.
The vast majority of carbon-intensive goods are related to energy production. Even when people talk about things like transportation and agriculture and construction, a major proportion of their CO2 emissions are from burning fuel.
> If e.g. smelters still need to use coal, then an energy surplus doesn't help them.
Smelters are using coal for heat. Burning it directly on site is more efficient than burning it in a power plant, losing most of the heat to conversion inefficiency, losing some of the electricity to distribution and then turning what's left back into heat.
If you had cheap electricity that didn't come from burning coal they could just use electric heat. At which point there would be no need to reduce steel consumption.
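A rough sketch of that efficiency chain; the individual efficiencies below are assumed round numbers for illustration, not figures for any particular plant or smelter:

```python
# Coal heat delivered to the process: direct firing on site vs. the
# coal -> power plant -> grid -> electric heat chain.
# All efficiencies are assumed round numbers, not data for a real plant.
direct_firing = 0.80        # furnace efficiency when burning coal on site
plant_efficiency = 0.38     # thermal -> electric at a coal power plant
grid_delivery = 0.95        # fraction of electricity surviving transmission
electric_heating = 0.99     # resistive/arc heating is nearly lossless

via_grid = plant_efficiency * grid_delivery * electric_heating
print(f"Direct coal firing: ~{direct_firing:.0%} of the coal's heat reaches the process")
print(f"Via the grid:       ~{via_grid:.0%}  (roughly half as much per ton of coal)")
```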
Electricity infrastructure used to be defined by the factories that run from (say) 9AM to 5PM. The grid has to be sized mostly for their needs, and baseload power (fossil, atomic, hydro) is sized for it; those plants are slow and costly to spin up and down. You see this reflected in things like utility "time of use" plans, where they offer you dirt-cheap energy at 2AM if you're willing to pay a penalty at 3PM. They'd love for you to sop up the glut by running a Bitcoin miner or chilling your house to 15C overnight.
Renewables move on a dime by comparison. If we need n GW of power at the peak time of 5PM, depending on the yield factors of local solar/wind/tidal/etc, we may end up with an infrastructure that generates 3n or 5n at other times of day. A lot of thinking has gone to batteries/molten salt/pumped hydro as ways we can store that surplus for later needs, but we can also direct the glut into processes that are energy-intensive and only economically viable in a power-too-cheap-to-meter scenario.
The CO2 scrubbers could be a viable sink for that excess power once we've got enough grid-scale storage.
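A minimal sketch of the surplus implied by that kind of overbuild; the 3n/5n factors come from the comment above, while the demand level and "good hours" figure are assumed placeholders:

```python
# If the renewable fleet is sized so that it still covers peak demand "n"
# in poor conditions, it can produce 3n-5n in good conditions.
# The overbuild factors are from the comment above; the rest is assumed.
peak_demand_gw = 100.0        # hypothetical "n"
good_hours_per_day = 6        # assumed hours/day at full fleet output

for overbuild in (3, 5):
    surplus_gw = (overbuild - 1) * peak_demand_gw        # output beyond what demand can absorb
    surplus_gwh_per_day = surplus_gw * good_hours_per_day
    print(f"{overbuild}n build-out: ~{surplus_gwh_per_day:,.0f} GWh/day of otherwise-curtailed energy")
```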
You're just making the "we're going to have an energy surplus" case.
We already have storage technologies that could compete with present-day energy prices if charging them was near-free. They're currently not competitive because it isn't, but in your scenario during off-peak it would be. So why would anybody have to reduce consumption then? Buy a battery, charge it when power is dirt cheap and use as much as you do now for no more than you pay now.
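A toy version of that battery arithmetic; the prices, round-trip efficiency, and amortized hardware cost are invented placeholders:

```python
# Effective cost of shifting cheap off-peak power to peak hours with a battery.
# All numbers are hypothetical placeholders.
off_peak_price = 0.02                 # $/kWh during the glut (assumed "dirt cheap")
peak_price = 0.30                     # $/kWh at the evening peak (assumed)
round_trip_efficiency = 0.90          # assumed battery round-trip efficiency
amortized_battery_cost = 0.05         # $/kWh delivered, hardware amortization (assumed)

delivered_cost = off_peak_price / round_trip_efficiency + amortized_battery_cost
print(f"Peak-hour kWh via the battery: ~${delivered_cost:.3f}")   # ~$0.072
print(f"Buying it at peak instead:      ${peak_price:.2f}")
```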
Isn't the cost of that plus the cost of operating the fossil fuel emitting infrastructure significantly higher than the cost of other known alternative power generation methods?
The article provides no historical context. Startups are risky and fail all the time. Are more startups than normal failing? How many startups can cut back and survive without more funding?
Indeed; this is the bulk of AWS / Azure / Google Cloud's business model, the so-called "kill zone" adjacent to their infrastructure offerings. They let startups conduct the risky, expensive R&D to find product-market fit, then step in to clone whatever worked, with massively more operational resources and better tooling.
Would love some talks covering where the areas adjacent to the cloud offerings are. Are you thinking of AI inference startups, or more dev tooling?
The oddest trend I have noticed (may be selection bias) is how many people build dev tools these days... to the point even non-devs are starting to talk about their no-code builder startups... it's getting crazy.
Self-driving cars haven't really failed. It looks like Cruise and Waymo are about to get permissions to operate 24/7 in SF.
I think the greater issue is that it is difficult to assess time-to-market. You have to assess both what is useful and how long it might take to build. Some things are just gonna need a lot of R&D to work. AI hype makes people underestimate how difficult it is to actually get things to work.
This is a problem not only for regulators, but pretty much any profession we might want to improve.
Our labor pool is vast, but fixed. There is some fungibility among the individual workers... a welder can be re-trained as a teacher, or a miner can be re-trained to become a web developer. But it isn't perfectly fungible; the 45-year-old taxi cab driver can't re-train to be an oncologist, and he certainly can't re-train to be a pharmaceutical engineer.
We might even say that they're not even mostly fungible... not for the professions we really want the most.
So how many "better regulators" do you need? Do you need two more, nationwide? We could probably find those. Do you need 24? A little more difficult, but as long as you are willing to wait 18 months or so, doable.
Do you need 300? 600? 800? Exactly how many at the FDA have to be superior regulators there? Worse still, even if the number is low...
What incentivizes someone who could be the superior regulator, but has chosen to be whatever-else-it-is-that-they-are? Do we need to offer more money? How much more? Million dollar salaries for one or two is feasible, but not for 800. Worse, even if we can afford it, offering that much money doesn't just attract them, it attracts many more inferior regulators.
Can you tell the difference? If three people show up for the $500,000/year regulator job, and two are schmoozing assholes, and the third is hyper-competent, what hiring system can you devise to reliably pass on the two and pick the other?
If the signal-to-noise ratio is too low there, the noise can likely overwhelm even a good hiring system.
And if it's bad for regulators, then this is simply a losing strategy for jobs like teachers, where we don't need a few dozen "better teachers" but hundreds of thousands of them. Police, etc.