First: look at the CO2 graph for China. That's not falling by any reasonable definition of the word; that's called "flat", even if it is technically lower than before.
Second: it's normal for graphs to fall off in their last 1-5 data points because measurements are still coming in.
Third: it's using Chinese data. China has, by the way, publicly declared that they're lying about all public economic data. Please don't use Sinopec data to argue your point.
I just started on an open-source, open-weight supervised learning model to recognize Japanese kanji characters drawn on the screen.
I have a working prototype written in Julia, which is a very simple neural network. The input is in vector format, so traditional convolutional neural networks don't work out of the box, but I swapped the convolution layer for a path simplification algorithm and it worked extremely well. Around 20 samples per character (from a set of only 5 hiragana during the prototype phase) were enough to get 100% accuracy on a test collection of 5 samples per character after only 30 iterations of training.
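The comment doesn't name the simplification algorithm, but a common choice for stroke data is Ramer–Douglas–Peucker, which drops points that lie close to the chord between a stroke's endpoints. A minimal sketch in Rust (the stroke coordinates here are invented for illustration, not from the actual dataset):

```rust
// Ramer–Douglas–Peucker path simplification: recursively keep only the
// interior point farthest from the chord, if it is farther than `epsilon`.
fn rdp(points: &[(f64, f64)], epsilon: f64) -> Vec<(f64, f64)> {
    if points.len() < 3 {
        return points.to_vec();
    }
    let (x1, y1) = points[0];
    let (x2, y2) = points[points.len() - 1];
    let (dx, dy) = (x2 - x1, y2 - y1);
    let norm = (dx * dx + dy * dy).sqrt().max(f64::EPSILON);

    // Find the interior point with the largest perpendicular distance
    // to the chord from the first to the last point.
    let (mut idx, mut dmax) = (0usize, 0.0f64);
    for (i, &(x, y)) in points.iter().enumerate().skip(1).take(points.len() - 2) {
        let d = (dy * (x - x1) - dx * (y - y1)).abs() / norm;
        if d > dmax {
            dmax = d;
            idx = i;
        }
    }

    if dmax > epsilon {
        // Split at the farthest point and simplify both halves.
        let mut left = rdp(&points[..=idx], epsilon);
        left.pop(); // drop the split point, duplicated in `right`
        let right = rdp(&points[idx..], epsilon);
        [left, right].concat()
    } else {
        // Everything in between is within tolerance: keep only endpoints.
        vec![points[0], points[points.len() - 1]]
    }
}

fn main() {
    // A noisy hand-drawn stroke with one sharp corner.
    let stroke = [
        (0.0, 0.0), (1.0, 0.1), (2.0, -0.1), (3.0, 5.0),
        (4.0, 6.0), (5.0, 7.0), (6.0, 8.1), (7.0, 9.0),
    ];
    let simplified = rdp(&stroke, 1.0);
    println!("{:?}", simplified); // 4 of the 8 points survive
}
```

The appeal over a convolution layer is that the output is a short, roughly scale-invariant list of corner points, which suits vector stroke input directly.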
I plan on working with free and open data, which I don't think exists for Japanese kanji characters (at least not in vector format; KanjiVG only has one sample per character and I need dozens), so I also built a crowdsourcing web site to collect data from random people on the internet.
I am planning to run some more experiments with my prototype model before I release the crowdsourcing site on an actual server, though.
“Good” or “bad” is not contrary to science. For example, scientists will evaluate the risks versus the benefits of a cancer treatment to determine whether the benefits are worth the risks. They do the same for vaccine efficacy, etc.
Scientists are also humans with their own value judgments, which are sometimes deeply flawed (see e.g. Richard Lynn and his race science) and sometimes yield revolutionary insights that expand our shared empathy for the world around us (see e.g. Jane Goodall).
Often when I hear a statement like this I see it as a thought-terminating cliché. The value judgment of a scientist is often disregarded only when it is contrary (or inconvenient) to the speaker's argument.
I would argue the opposite. The number one frustration I have with climate change is the continued and persistent inaction by our world leaders. I would argue that modeling out worst-case scenarios is more likely to reach our leaders and finally break this decades-long inaction.
I think the effect climate skeptics have on climate policy is generally overstated. Corporations with a vested interest in being able to continue releasing massive amounts of CO2 into our atmosphere have much more say over climate policy than climate skeptics do. Now, these companies often do weaponize climate skeptics in order to lobby governments into continued inaction, but that behavior will continue regardless of how scientists frame their climate models.
That has nothing to do with science communication and whether or not scientists should avoid sounding too alarmist when publishing their models for the purpose of getting their desired climate policy enacted by governments.
GMO was the first technology where I figured that out. It was heavily pushed around 2007 as a solution to world hunger, but at the same time it was very easy to see that hunger was a problem of distribution, not technology. Even back in 2007 we produced much more food than was required to feed the world population. Furthering the obviousness of the lie, in reality GMO was (and still is) mostly used for growing animal feed or cosmetic products. And on top of that, the parties pushing this technology the hardest were large monopolies with patents to protect and herbicides to sell.
Now it's 20 years later. The technology is mature and many of the patents have expired, but GMO has done absolutely nothing to solve world hunger.
But where is the genetically engineered heart muscle that sits in the engine bay of my car, running on nutrient solution, excreting CO2 and urine while driving my car with a hydraulic motor?
I think your blame is misplaced here. LLMs have poisoned the web; any reasonable person is right to be extremely wary of any content they come across on the web in 2026.
The LLMs (or rather the AI companies pushing them) did this. The people who are complaining are reacting very predictably. Between the proliferation of AI-generated content and people complaining about (potentially) false accusations, the former annoys me way more.
Unlike witches, LLM content exists. And worse, it has proliferated to the point where you are more likely to spot LLM content in the wild than human-written content.
Witch hunts are bad because they target innocent people and burn them at the stake. When the whole internet has been filled with LLM content, it is not unreasonable (expected, even) that you start suspecting everything of being an LLM, because, most likely, it is.
I know this is nitpicky, but whenever I see Go code with those capitalized function or variable names, I know: “aha, these were imported from another package, or will be exported later”, and I think to myself: “why? Oh why is that relevant information for me at this point in the code?” I just think about what a weird, ill-thought-out design decision that was, just to save authors from writing an “export” keyword, and I go on to judge the rest of the language, predicting it must have more weird design decisions in it.
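For readers who haven't seen the rule being complained about: in Go, visibility is determined by the first letter of an identifier rather than by an `export` keyword. A minimal sketch (the function names are invented for illustration):

```go
package main

import "fmt"

// ParseStroke starts with a capital letter, so it would be exported
// (visible to importing packages) if this lived in a library package.
func ParseStroke(s string) string { return "parsed: " + s }

// normalize starts lowercase, so it is private to this package.
func normalize(s string) string { return s }

func main() {
	fmt.Println(ParseStroke(normalize("kanji")))
}
```

The upshot is the one described above: capitalization encodes visibility, so every call site carries that information whether or not the reader wants it there.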
I used Julia a lot when I was studying statistics (which I dropped out of) back in 2015, but I recently (like last weekend) came back to it to write a prototype of a supervised learning model, and I have to say, coming back to it was pure joy. And my model prototype was indeed fast enough for me.
Now I will probably rewrite the model in Rust if I want to do anything with it (mostly for the WebAssembly target, as I want this thing to run in browsers), but I will for sure be using Julia for further experimentation. Lovely language.
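For what it's worth, the Rust-to-browser path is well trodden; a sketch of the usual toolchain setup (assuming an existing Cargo project, no specific crate implied):

```shell
# Bare compilation to a .wasm module:
rustup target add wasm32-unknown-unknown
cargo build --release --target wasm32-unknown-unknown

# Or, with JavaScript bindings generated for direct use in a browser:
cargo install wasm-pack
wasm-pack build --target web
```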
Funny you should say that... there was recently a very interesting announcement for a Julia-to-WASM compiler and a full-stack signals-based web framework:
I am actually on the lookout for a low-level language that compiles to WebAssembly, to write a (relatively small) supervised learning model that I plan to make fast enough for 5-year-old phone CPUs. I have a working prototype in Julia and was planning on (eventually) rewriting it in Rust, mostly for the WebAssembly target. I come from a high-level language background, so the thought of rewriting in Rust is a little daunting. So I was excited to learn about Mojo and to find out whether they had a WebAssembly target in their compiler.
But then I read this:
> AI native
> Mojo is built from the ground up to deliver the best performance on the diverse hardware that powers modern AI systems. As a compiled, statically-typed language, it's also ideal for agentic programming.
Well, no thank you. I'm aware of the irony here, but I want nothing to do with a language made for robots.
I’ve written Python for the past 25 years or so. I dig it. But I don’t think I’ve started a new Python project since starting to experiment with Rust. A lot (not all!, but a lot) of Rust patterns look a lot like Python if you squint at it just right. I also think that writing lots of Rust has made me better at writing Python. The things Rust won’t let you get away with are things you shouldn’t be doing almost anywhere else.
Go on, give it a shot. It stops being intimidating soon! And remember that the uv we all love was heavily influenced by Cargo.
I actually have written Rust, but it has been a minute. I think my last project (a backend for a massive online multiplayer theremin jam session [site no longer up, but the HN discussion still exists: https://news.ycombinator.com/item?id=10875211]) was 10 years ago.
I remember Rust very fondly, in fact. And I had the same experience as you: learning Rust made me a better JavaScript programmer. Let's see if a little neural network can be as fun.
I can only go get coffee waiting for my Python test suite to finish so many times per day. I write Rust because the strict type system accelerates the iteration speed for producing correct code more than any other language in its class.
Good call. It’s not the first language I think of for most things but there’s no great reason why not to. I probably reach for Rust first because I’m more familiar with it and the projects I want to work on were already written in it.
Mojo's communication has suffered from targeting VCs rather than users. They never had a clear "Mojo extends Python" MVP, or even a strategy to get to an MVP anytime soon. And the language started developing before AI agents were a thing; it has more to do with building around state-of-the-art LLVM tooling than with AI agents. But I guess "easier lifetime semantics than Rust and native access to MLIR intrinsics" doesn't raise money...
https://www.carbonbrief.org/analysis-chinas-co2-emissions-ha...
https://www.carbonbrief.org/analysis-indias-co2-emissions-in...