The entire enclosure was shaken around hard enough to tear bits off the PCB through sheer inertia and crack the CPU (hence the need for the recovery process described).
How are these produced? I assume they're not actually digging a giant trench and taking a section, but are the drawings based on measurements of a specific individual in some way?
They usually are. It's a process akin to archaeology: they have to carefully wash away the dirt from the root system, measuring as they go. The problem with this method is that it's hard to reconstruct the entire 3D structure of bigger plants like trees, so a lot of the root drawings on the site don't accurately show how deep they go. It's much easier with small plants, where the researcher can control the soil used.
Modern methods like X-ray CT or ground-penetrating radar can do it nondestructively in the field, but they're usually expensive to set up compared to just sending some grad students to dig.
> you'd get an accurate image of a very distorted root system
At the very least, you've taken a 3D system and reduced it to 2D. Additionally, you're exposing not only the root system but the entire microbiome around it to light and, unless you were incredibly meticulous about sealing, almost certainly to oxygen.
A few ways. This particular project is doing it by hand, which is very tedious.
The traditional way of transplanting large trees while keeping the root system intact is with a hydrovac: a machine the size of a jet engine that liquefies the soil with water and then vacuums it up. [1]
More recent developments have tried using an AirSpade, which uses compressed air instead of water to blow the soil apart and then suck it up without making a slurry. That's better because the soil can be redeposited in the same hole rather than discarded. [2]
I'm not sure that either of these methods count as traditional.
Air spades in particular are primarily used for rootwork, not transplanting. Bareroot methods are used for smaller trees. Bare rooting leaves roots in a very vulnerable state, so doing it on larger trees you intend to move and keep alive is a serious logistical challenge.
The most traditional method I can think of is "ball and burlap" where root balls are cut free in the field, and retrieved later in the season for final packaging.
Baidu claims state-of-the-art performance on their own OmniDocBench (although some recent models like GPT-5 and Qwen3 are not evaluated) and strong results on olmOCR-Bench and Ocean-OCR-Bench.
I would love to hear more about the solutions you have in mind, if you're willing.
The particular challenge here, I think, is that the PDFs come in every flavor and format (including scans of paper), so you can't know where the grades are going to be or what they'll look like ahead of time. For this I can't think of any mature solutions.
The trouble is getting people to use your API - in this case med schools, but it can be much, much worse (more and smaller organizations sending you data, and in some industries you have a legal obligation to accept it in any format they care to send).
There is! The company I work for uses a weird version of Azure DevOps for <governance> reasons, and pip can authenticate and install packages from its artifact feeds while uv cannot. We use uv for development speed (installing internal packages from source) and then switch to pip for production builds.
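For what it's worth, the pip side of this works through pluggable keyring auth: Microsoft's artifacts-keyring package answers credential lookups for dev.azure.com feeds. A rough sketch of the mechanism, assuming keyring and artifacts-keyring are installed (the <org> and <feed> parts of the URL are placeholders, not real values):

    # Sketch of how pip-style keyring auth resolves Azure Artifacts credentials.
    # Assumes `pip install keyring artifacts-keyring`; <org>/<feed> are placeholders.
    import keyring

    feed = "https://pkgs.dev.azure.com/<org>/_packaging/<feed>/pypi/simple/"

    # pip consults keyring for index credentials; with artifacts-keyring
    # installed, dev.azure.com URLs resolve to an Azure DevOps token.
    cred = keyring.get_credential(feed, None)
    if cred is not None:
        print("would authenticate as:", cred.username)

pip then installs from the feed by pointing --index-url at that same URL.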
The advantage of MicroPython is that you don't have to deal with all the poorly maintained toolchains and UART and flashing and whatnot; for a novice working on their own, that stuff is a nearly insurmountable barrier. That the syntax is Python doesn't make a whole lot of difference.
I agree though, it probably shouldn't be the first choice for a professional application.
It's actually a great first choice for a professional application, in that you can get a prototype up and running much faster than with a native SDK, iterate quickly, and try things out on a REPL. In fact, it's used in industrial settings, including in medical devices and energy distribution.
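To give a flavor of that iteration speed, this is the kind of thing you can paste straight into the REPL (a minimal sketch; pin 25 is an assumption based on the Raspberry Pi Pico's onboard LED, so adjust for your board):

    # Blink an LED from the MicroPython REPL.
    # Pin 25 = onboard LED on a Raspberry Pi Pico (assumption; boards differ).
    from machine import Pin
    import time

    led = Pin(25, Pin.OUT)
    for _ in range(10):
        led.toggle()     # flip the LED state (rp2 port)
        time.sleep(0.5)  # half-second blink period

No toolchain, no flash cycle: edit, paste, observe.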
MicroPython's a bytecode interpreter, so other than the existing Python ecosystem being a huge boon (popularity being a form of strength), you could get many of the same benefits and more from wasm.
You can actually opt in to native compilation at the function level, so it's not just a bytecode interpreter. You can also compile it yourself with additional functionality written in C/C++ and use Python just for the glue that isn't performance-sensitive.
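Concretely, the emitter is chosen per function with a decorator. A small sketch (the checksum function is just an illustration; @micropython.viper is the stricter, faster variant with its own typing rules):

    import micropython

    # Default path: compiled to MicroPython bytecode and interpreted.
    def checksum_bytecode(data):
        s = 0
        for b in data:
            s = (s + b) & 0xFF
        return s

    # Opt-in path: the same function emitted as native machine code.
    @micropython.native
    def checksum_native(data):
        s = 0
        for b in data:
            s = (s + b) & 0xFF
        return s

Same source, different emitter; nothing else in the program has to change.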
A nearly all-plastic (">90%") bike is interesting, and I guess it makes sense if you're a plastics company that wants to make a bike, but the end product does not seem very compelling to me: 17 kg, 1200 EUR, one size, proprietary parts, and only 50% recycled. A comparable aluminum bike beats it on every metric except maybe fatigue life(?).
My first bike, bought with my first salaries (about 2-3 months' worth), just turned 20 years old. It's a basic aluminum hardtail MTB. Still going strong - I do about 2-3k km per year.
Post-consumer aluminum has been in common use for wheels for ages, and some major brands (like Trek) are also transitioning their aluminum frames to use recycled material.
You can recycle via depolymerization (see the various plastic-to-oil conversion refineries), although that's a more expensive process than simply melting and recasting.
"A regular e-bike battery can take several hours to charge completely, but the H2’s hydrogen cylinder requires just six minutes at a hydrogen filling station." Of course the company wanted to run the filling stations.