
.. you are commenting on an article about how non-carbon-emitting energy options are beating out polluting alternatives, aided by exactly these taxes, so obviously yes, they are working exactly as intended: price signals for the market to get carbon out of the energy system

The purpose of the tax is not to raise money to plant trees, it’s to raise the cost of emissions so that markets move away from them
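To make that concrete, here is a minimal sketch of the mechanism, with made-up numbers (not the UK’s actual carbon price or auction bids):

    # Toy merit-order comparison. A carbon price added to each bid can
    # flip which generator wins the auction, without the tax revenue
    # being spent on anything. All numbers are illustrative only.
    CARBON_PRICE = 50.0  # hypothetical, per tonne CO2

    bids = {
        # name: (bid per MWh, tonnes CO2 emitted per MWh)
        "gas": (60.0, 0.4),
        "offshore_wind": (70.0, 0.0),
    }

    for name, (bid, intensity) in bids.items():
        print(name, bid + intensity * CARBON_PRICE)
    # gas: 60 + 0.4 * 50 = 80, offshore_wind: 70 -> wind clears first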


TFA's claim is offshore wind prices are 40% cheaper than gas.

The parent comment stated "actual cost has to price in the impact of using it". Most people would agree on this. However, for both claims to be true, the collected tax revenue must be spent offsetting the impact of that gas usage - not simply reducing gas usage (i.e. the gas that is consumed isn't being compensated for).

If the UK government is spending that tax revenue on anything it wants, then it's not the actual cost, is it?


Sorry I don’t follow. Why would the taxes need to be spent offsetting anything? The carbon reduction already happened, because the taxes made this auction choose lower emission alternatives.

If you then also spend the taxes on some form of offsets (if we pretend for the sake of argument that those work) you would have reduced emissions twice. One time seems plenty to say they are doing their job.


The most popular UK electricity retailer is Octopus Energy which is specifically focused on variable prices and flexible consumer demand. By what metric do you mean variable rate retailers are not popular?

Intermittency is already handled by the price mechanisms; prices are set quarter-hourly, and if you’re not available when there is high demand you don’t get paid.

The marginal price windfalls happen specifically when you’re able to deliver at a low cost when demand is high in the same ISP (imbalance settlement period).
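Roughly, and with made-up prices and volumes (this glosses over the details of imbalance settlement), the payout per settlement period looks like this:

    # Toy quarter-hourly settlement: you are paid for what you deliver
    # in each 15-minute period at that period's price. If you are not
    # available when demand (and price) spikes, you earn nothing then.
    periods = [
        # (price per MWh, MWh delivered in that 15 minutes)
        (40.0, 0.25),
        (250.0, 0.0),  # price spike, but you were not available
        (55.0, 0.25),
    ]
    revenue = sum(price * delivered for price, delivered in periods)
    print(f"revenue: {revenue:.2f}")  # 40*0.25 + 55*0.25 = 23.75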

This just seems like data-free fear mongering.


I’m baffled that any other language would be considered - the only language that comes close to English in number of speakers is Mandarin, and Mandarin has nearly half a billion fewer speakers than English.

We should be happy there is a language that has emerged for people to communicate globally without borders, and support its role as the world’s second language rather than work to re-fracture how people communicate.


    > "I’m baffled that any other language would be considered"
There are direct trains between French and German cities, where additional announcements in French may be appropriate (and perhaps also English).

For local/regional trains, I wouldn't expect any language other than German.


I would say that for long-distance trains, English and the local language alone should be enough.

For international trains, we should have all the languages of all traversed countries, plus English. So for example a train from Paris to Frankfurt should have announcements in French, German and English (and that is actually the case for that train; I’ve ridden it).

But the Berlin-Warsaw train, for example, has only English announcements besides the local language of whichever country the train is in (so no Polish while it is in Germany, and no German while it is in Poland). I consider this to be wrong: it should have announcements in Polish, German and English for the whole route.


Agree with your last point. That's a weird choice. At least the stops on either side of the border are guaranteed to have people who natively speak the other language.

I seem to recall lines in Belgium that do announcements in 4 languages: French, Flemish, German, and English.


I take trains like those for work, not to France but to Amsterdam, and I don’t speak German, French or Dutch.. if we want a train system that all Europeans can use, there need to be announcements and signs in the language 50% of EU citizens speak.


I tried Bazel, Buck2 and Pants for a greenfield monorepo recently, Rust and Python heavy.

Of the three, I went with Buck2. Maybe that’s just circumstance, with its Rust support being good and it not being built to replace Cargo?

Bazel was a huge pain - it broke all standard tooling by taking over Cargo’s job, and was then unable to actually build most packages without massive multi-day patching efforts.

Pants seemed premature - front page examples from the docs didn’t work, apparently due to breaking changes in minor versions, and the Rust support is very very early days.

Buck2 worked out of the box exactly as claimed, leaves Cargo intact so all the tooling works.. I’m hopeful.

Previously I’ve used Make for polyglot monorepos.. but it requires an enormous amount of discipline from the team, so I’m very keen for a replacement with fewer footguns


You’re covering a lot of ground here! The article is about producing container images for deployment, and has no relation to Bazel building stuff for you - if you’re not deploying as containers, you don’t need this?

On Linux vs Win32 flame warring: can you be more specific? What specifically is very very wrong with Linux packaging and dependency resolution?


> The article is about producing container images for deployment

Fair. Docker does trigger my predator drive.

I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very un-Bazel-like! That belongs in the monorepo for sure.

> What specifically is very very wrong with Linux packaging and dependency resolution?

Linux userspace for the most part is built on a pool of global shared libraries and package managers. The theory is that this is good because you can upgrade libfoo.so just once for all programs on the system.

In practice this turns into pure dependency hell. The workaround is to use Docker, which completely nullifies the entire theoretical benefit.

Linux toolchains and build systems are particularly egregious at just assuming a bunch of crap is magically available in the global search path.

Docker is roughly correct in that computer programs should include their gosh darn dependencies. But it introduces so many layers of complexity, each of which gets solved by adding yet another layer. Why do I need estargz??

If you’re going to deploy with Docker then you might as well just statically link everything. You can’t always get down to a single exe. But you can typically get pretty close!


> I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.

Not every dependency in Bazel requires you to "first invent the universe" locally. There are lots of examples of this, like toolchains, git_repository, http_archive rules and so on. As long as they are checksummed (as they are in this case) so that you can still output a reproducible artifact, I don't see the problem.
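For example, a checksummed external archive looks roughly like this (Starlark; the name, URL and hash below are placeholders):

    # WORKSPACE excerpt. The archive is fetched from an external URL,
    # but the sha256 pins the exact bytes, so the build stays
    # reproducible. Name, URL and hash are placeholders.
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "some_dep",
        urls = ["https://example.com/some_dep-1.0.tar.gz"],
        sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
    )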


Everything belongs in version control imho. You should be able to clone the repo, yank the network cable, and build.

I suppose a URL with checksum is kinda sorta equivalent. But the article adds a bunch of new layers and complexity to avoid “downloading Cuda for the 4th time this week”. A whole lot of problems don’t exist if the binary blobs exist directly in the monorepo and local blob store.

It’s hard to describe the magic of a version control system that actually controls the version of all your dependencies.

Webdev is notorious for old projects being hard to compile. It should be trivial to build and run a 10+ year old project.


If you did that, Bazel would work a lot better. Most of the complexity of Bazel is because it was originally basically an export of the Google internal project "Blaze," and the roughest pain points in its ergonomics were pulling in external dependencies, because that just wasn't something Google ever did. All their dependencies were vendored into their Google3 source tree.

WORKSPACE files came into being to prevent needing to do that, and now we're on MODULE files instead because they do the same things much more nicely.

That being said, Bazel will absolutely build stuff fully offline if you add the one step of running `bazel fetch //...` in between cloning the repo and yanking the cable, with some caveats depending on how your toolchains are set up, and of course the possibility that every mirror of your remote dependency has been deleted.


Making heavy use of remote caches and execution was, iirc, one of the original design goals of Blaze (Google’s internal version), first and foremost in an effort to reduce build time. So kind of the opposite of what you’re suggesting. That said, fully air-gapped builds can still be achieved if you just host all those cache blobs locally.


> So kind of the opposite of what you're suggesting.

I don’t think they’re opposites. It seems orthogonal to me.

If you have a bunch of remote execution workers then ideally they sit idle on a full (shallow) clone of the repo. There should be no reason to reset between jobs. And definitely no reason to constantly refetch content.


Also, it is possible to air-gap Bazel and provide the files offline, as long as they have the same checksums.


> Energy Dome expects its LDES solution to be 30 percent cheaper than lithium-ion.

Grid-scale lithium is dropping in cost by about 10-20% per year, so with a construction time of 2 years per the article, lithium will be cheaper by the time the next plant is completed
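Back of the envelope: if lithium falls at rate r per year, then after a 2-year build it costs (1 - r)^2 of today’s price when the plant comes online:

    # If grid-scale lithium falls r per year, a plant that takes 2
    # years to build competes against lithium at (1 - r)**2 of today's
    # price. Energy Dome's "30% cheaper" target is 70% of today's cost.
    for r in (0.10, 0.20):
        print(f"{r:.0%}/yr decline -> {(1 - r) ** 2:.0%} of today's cost")
    # 10%/yr -> 81%, 20%/yr -> 64%: at the faster rate, lithium
    # undercuts the 70% target before construction even finishes.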


LDES: Long-Duration Energy Storage

Grid energy storage: https://en.wikipedia.org/wiki/Grid_energy_storage


Metrics for LDES: Levelized Cost of Storage (LCOS), Gravimetric Energy Density, Volumetric Energy Density, Round-Trip Efficiency (RTE), Self-Discharge Rate, Cycle Life, Technical Readiness Level (TRL), Power-to-Energy Decoupling, Capital Expenditure (CAPEX), R&D CapEx, Operational Expenditure (OPEX), Charging Cost, Response Time, Depth of Discharge, Environmental & Social Governance (ESG) Impact


Li-ion and even LFP batteries degrade; given a daily discharge cycle, they'll be at 80% capacity in 3 years. Gas pumps and tanks won't lose any capacity.
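Taking that 3-year/80% figure at face value, the implied per-cycle fade is tiny but it compounds:

    # Daily cycling for 3 years is ~1095 cycles; ending at 80% capacity
    # implies a per-cycle retention of 0.8 ** (1 / 1095).
    cycles = 3 * 365
    per_cycle = 0.80 ** (1 / cycles)
    print(f"{per_cycle:.5f}")  # ~0.99980, i.e. ~0.02% lost per cycle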


Lithium burns toxic. Carbon based solid-state batteries that don't burn would be safe for buses.

There are a number of new methods for reconditioning lithium instead of recycling.

Biodegradable batteries would be great for many applications.

You can recycle batteries at big box stores; find the battery recycling box at Lowe’s and Home Depot in the US.


These are LCOE numbers we are comparing, so that is factored in.

The fact that pumps, turbines, and rotating generators don’t fail linearly doesn’t mean they are not subject to wear and eventual failure.


Can you give an example of a chip with software-defined IO coprocessors that is 1/4 the price? The pricing I’m getting on the RP2350 is €0.60 per chip.

When I’ve compared to other dual-core SoCs with programmable IO, like NXP with FlexIO (~€11) or ESP32 chips with RMT (~€1), they are much more expensive than the RP2350.. is there a selection of programmable IO chips I’m missing?


That's the thing: with proper dedicated peripherals you don't need the software-defined coprocessors.

Sure, they are great if you want to implement some obscure-yet-simple protocol, but in practice everyone is using the same handful of protocols everywhere.

Considering its limitations, betting on the PIO for crucial functionality is a huge risk for a company. If Raspberry Pi doesn't provide a well-tested library implementing the protocol I want (and I don't think they do this yet), I wouldn't want to bet on it.

I think they are an absolutely amazing concept in theory, but in practice it is mostly a disappointment for anything other than high-speed data output.


In Cortex M33 land $15 will get you an entire NXP (or STM) dev board. An MCX-A156 will set you back about $5 which is about on par with an STM32H5. You can go cheaper than that in the MCX-A lineup if you need to. For what I'm working on the H5 is more than enough so I've not dug too deep into what NXP's FlexIO gives you in comparison. Plus STM's documentation is far more accessible than NXP's.

Now the old SAM3 chip in the Arduino Due is a different beast. Atmel restarted production and priced it at $9/ea. For 9k. Ouch. You can get knockoff Dues on Aliexpress for $10.

Edit: I'm only looking at single core MCUs here. The MCX-A and H5 lineups are single-core Cortex M33 MCUs. The SAM3 is a single core Cortex M3. The RP units are dual core M33. If the RP peripherals meet your needs I agree that's a great value (I'm seeing pricing of $1+ here).

Edit2: For dual core NXP is showing the i.MX RT700 at around $7.


I asked Gemini the other day to research and summarise the pinout configuration for CANbus outputs on a list of hardware products, and to provide references for each. It came back with a table summarising pin outs for each of the eight products, and a URL reference for each.

Of the 8, 3 were wrong, and the references contained no information about pinouts whatsoever.

That kind of hallucination is, to me, entirely different than what a human researcher would ever do. They would say “for these three I couldn’t find pinouts” or perhaps misread a document and mix up pinouts from one model for another.. they wouldn’t make up pinouts and reference a document that had no such information in it.

Of course humans also imagine things, misremember etc, but what the LLMs are doing is something entirely different, is it not?


Humans are also not rewarded for making pronouncements all the time. Experts actually have a reputation to maintain and are likely more reluctant to give opinions that they are not reasonably sure of. LLMs trained on the typical written narratives found in books, articles etc can be forgiven for thinking that they should have an opinion on anything and everything. Point being that while you may be able to tune it to behave some other way, you may find the new behavior less helpful.


Newer models can run a search and summarize the pages. They're becoming just a faster way of doing research, but they're still not as good as humans.


These are, as far as I know, well tracked datasets. For the US, the Bureau of Labor Statistics tracks each annual cohort of new companies and the attrition over time: https://www.bls.gov/bdm/us_age_naics_00_table7.txt

So for example the first chunk there, cohort of companies that started March 1994, there were 13% still operating 10 years later in 2024.
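The survival rate is just survivors divided by the starting cohort; with placeholder counts (the real figures are in the linked BLS table):

    # Cohort survival: establishments still operating N years on,
    # divided by the cohort's starting count. Counts are placeholders;
    # the real figures are in the linked BLS table.
    initial = 569_000    # hypothetical cohort size at birth year
    surviving = 74_000   # hypothetical count still open N years later
    print(f"survival: {surviving / initial:.0%}")  # -> 13%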


30 years later....

