Hard to believe it's only 50 years old, given how mechanical and simple the cube is compared to other gadgets of its time.
I remember, 15 years ago, finding many variants of the cube with different sizes and numbers of dimensions (yes, there are video games of 4D and even 5D Rubik's cubes).
Hate to say it, but it works for me. I am using the largest regular* font size on an iPhone SE and there's no issue on the exact same page in the Weather app.
Perhaps OP is using one of the extra-large font sizes hidden behind the "Larger Accessibility Sizes" toggle. It can be expected that users at that font size value accessibility over aesthetics. Since screen space is limited, there is of course a point where things start to look broken.
The Weather app could include a TV- or clock-like layout for extra-large text sizes, but that doesn't really fly with Apple's UI/UX consistency.
> Perhaps OP is using one of the extra-large font sizes hidden behind the "Larger Accessibility Sizes" toggle.
They are. FTA:
“Update 23 Jun 2024 4:11 PM
Several readers on Mastodon told me the alignment was fine on their phones and suggested Larger Text/Dynamic Type as the reason the charts on my phone are messed up. They were absolutely right. I bumped up my text size so long ago I’d forgotten all about it, but I should’ve known to look into that before posting. Interestingly, moving my text size down just one tick got all the bar ends to match up. The colors still seem a little off to me, but I’ll need to zoom in and look more carefully.”
It's not hijacked, the formulation is the same. For any layer there is a state-space formulation
h' = Ah + Bx
y = Ch + Dx
where x is the input, y is the output, and h is the state. They use "h" instead of "s" for the state variables because they're called "hidden states" in the literature. edit: it is obnoxious that they've flipped the convention for A/B/C/D, which is the one thing controls people agree on (we can't even agree on the signs and naming of transfer function coefficients!).
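For concreteness, the discrete-time version of that recurrence can be sketched in a few lines of plain Python (the matrix values here are arbitrary, purely for illustration, not from any particular model):

```python
# Minimal sketch of a discrete-time linear state-space layer with a
# 2-dimensional state and scalar input/output:
#   h[t+1] = A h[t] + B x[t]
#   y[t]   = C h[t] + D x[t]
A = [[0.9, 0.1],
     [0.0, 0.8]]   # state transition
B = [1.0, 0.5]     # input  -> state
C = [1.0, -1.0]    # state  -> output
D = 0.0            # direct feedthrough

def run_ssm(xs):
    """Scan the recurrence over a sequence of scalar inputs."""
    h = [0.0, 0.0]
    ys = []
    for x in xs:
        y = sum(c * hi for c, hi in zip(C, h)) + D * x
        ys.append(y)
        h = [sum(a * hi for a, hi in zip(row, h)) + b * x
             for row, b in zip(A, B)]
    return ys

ys = run_ssm([1.0, 0.0, 0.0, 0.0])  # impulse response
```

The hardware-friendly trick the paper exploits is that, since the recurrence is linear, this sequential scan can be re-expressed as a parallel prefix computation instead of a step-by-step loop.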
Where this diverges from dynamical systems/controls is that they're proving that when x/h/y are represented with finite precision numbers, the model is limited in the problems it can represent (no surprise from controls perspective), and they prove this by using an equivalence to the state-space formulae that's consistent with evaluating it on massively parallel hardware.
The classical controls theory is not super applicable here, because what controls people care about (is the system stable, is its rise/fall time in bounds, what about overshoot, etc) is not what ML researchers care about (what classes of AI problems can be modeled and evaluated using this computational architecture).
Interesting. This is how I sometimes feel about physics.
The field has appropriated and redefined so many terms of common language that it's now hard to talk in plain language with someone formally trained in physics about physical phenomena.
For example, everyone has some sort of intuitive idea about what energy is. But if you use that word with a physicist, watch out: for them it means something super specific within the context of assumptions and mathematical models, and they will assume you don't know what you are talking about because you are not using the definitions from their models.
I mean, if you like state space models, then you should read the paper on Mamba if you haven’t already! Because it quite literally uses state spaces… and you will probably think it’s a really cool application of state spaces!
Apologies if you know the following already, but maybe others reading your comment feeling similarly will not be familiar and might be interested.
At least intuitively, I like to motivate it this way: pick your favorite simple state-space problem. Say a coupled spring system of two masses, maybe with some driving forces. Set it up. Perturb it. Make a bunch of observations at various points in time. Now use your observations to figure out the state-space matrices.
There’s fundamentally not really anything different (in my opinion) about using Mamba (or another state-space model) as a function approximation of whatever phenomenon you are interested in. Okay, Mamba has more moving parts, but the core idea is the same: you are saying that, on some level, a state space is an appropriate prior for approximating the dynamics of the quantities of interest. It turns out to be pretty remarkable how many things this works out quite well for. For instance, I use it to model the 15-min interval data for heating, cooling, and electricity usage of a whole building given 15-min weather data, occupancy schedules, and descriptions of the building characteristics (e.g. building envelope construction, equipment types, number of occupants, etc.).
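The "perturb it, observe it, recover the matrices" exercise can be sketched end to end in the simplest possible setting: a scalar, fully observed system, where recovering A and B reduces to ordinary least squares. All the numbers below are made up for illustration.

```python
# Toy system identification: a scalar state-space system
#   h[t+1] = a*h[t] + b*x[t]
# with the state fully observed (y = h). Recovering (a, b) from a
# recorded trajectory is then a least-squares regression of h[t+1]
# on the pair (h[t], x[t]).
a_true, b_true = 0.9, 0.3

# "Perturb it": drive the system with some inputs, record the trajectory.
xs = [1.0, 0.0, -0.5, 2.0, 0.0, 1.0]
hs = [0.0]
for x in xs:
    hs.append(a_true * hs[-1] + b_true * x)

# "Figure out the matrices": solve the 2x2 normal equations for (a, b).
Shh = sum(h * h for h in hs[:-1])
Sxx = sum(x * x for x in xs)
Shx = sum(h * x for h, x in zip(hs[:-1], xs))
Shy = sum(h * hn for h, hn in zip(hs[:-1], hs[1:]))
Sxy = sum(x * hn for x, hn in zip(xs, hs[1:]))
det = Shh * Sxx - Shx * Shx
a_hat = (Shy * Sxx - Sxy * Shx) / det
b_hat = (Sxy * Shh - Shy * Shx) / det
```

With noiseless data and enough excitation the fit is exact; real identification problems add noise, partial observation (you see y, not h), and higher dimensions, but the core regression is the same.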
In the United States, the National Weather Service defines a blizzard as a severe snow storm characterized by strong winds causing blowing snow that results in low visibilities. The difference between a blizzard and a snowstorm is the strength of the wind, not the amount of snow. To be a blizzard, a snow storm must have sustained winds or frequent gusts greater than or equal to 56 km/h (35 mph), with blowing or drifting snow which reduces visibility to 400 m (0.25 mi) or less, and it must last for a prolonged period of time, typically three hours or more.
> Do you take external DDR RAM on an application-class ARM core? You're an MPU.
The actual difference IMHO is the existence of an MMU. (Which is, fdpic/linux-nommu efforts notwithstanding, the general "Linux" condition.)
(Ex.: the SOPHGO SG2000 RISC-V chip comes with integrated DDR3 RAM, but is still solidly an MPU. I believe there are some ARM Cortex-A with integrated RAM too, can't think of any off the top of my head. [Ed.: nevermind, the SG2000 is dual RISC-V/ARM])
Then the 8-bit ATmega328P in the Arduino Uno is an MPU[1]. Except it's solidly an MCU. I'd say if it uses external RAM as its main memory and external storage as its main storage, then it's an MPU. If the RAM is all internal, and the storage is all (or can be all) internal, it's an MCU.
That’s a gimmick. I’m talking about how a given chip is supposed to be used in real products, whether the vendor supplies Linux builds in one way or another, etc.
It all comes down to cost. These BGA chips have pins under the device, unlike older packages like QFP or DIP. The pattern of missing pins is designed so that fewer signal layers are needed to bring out all the signal connections in a typical PCB process. Devices with full grids would need more layers or tighter tolerances.
The appeal of the STM32MP1 is that you can put DDR memory with the chip on a 4-layer board. ST even provides the layout.
I miss this kind of stuff in computer programming languages.
In hardware design, verification is done simultaneously with design, and semiconductor companies would be bankrupt if they did not verify the hell out of their designs before committing millions in manufacturing these chips.
Even among hobbyists this is getting traction with Yosys. Perhaps it's time for programmers to adopt this kind of tooling so there will be less buggy software released...
There are probably orders of magnitude in difference between hardware design output and programming output. At least at the time, seL4's verification was quite impressive for a codebase on the order of 10,000 lines. But we should work towards the goal of improved formal modelling and checking all the same.
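As a toy software analogue of that kind of checking (nothing to do with seL4's proof machinery or Yosys specifically; `min_branchless` and the 6-bit width are invented for illustration): verify a bit-twiddling implementation against a plain specification over its entire input space, the way an equivalence checker exhausts a small state space.

```python
# Toy 'equivalence check': verify a branchless min() against the obvious
# specification by exhausting every input pair of a small bit width.
BITS = 6
MASK = (1 << BITS) - 1

def min_spec(a, b):
    # The specification: plain, obviously-correct minimum.
    return a if a < b else b

def min_branchless(a, b):
    # Classic bit trick for BITS-wide unsigned ints.
    diff = (a - b) & ((1 << (BITS + 1)) - 1)  # keep the borrow bit
    sign = (diff >> BITS) & 1                 # 1 iff a < b
    return b ^ ((a ^ b) & (-sign & MASK))     # select a when a < b, else b

mismatches = [(a, b)
              for a in range(1 << BITS)
              for b in range(1 << BITS)
              if min_spec(a, b) != min_branchless(a, b)]
```

Exhaustive enumeration only scales to tiny spaces, of course; real hardware flows use BDD/SAT-based equivalence checking and bounded model checking to get the same guarantee symbolically.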
>The Lower Mainland is a geographic and cultural region of the mainland coast of British Columbia that generally comprises the regional districts of Metro Vancouver and the Fraser Valley.
No, it isn't. There are a lot of people of Chinese ethnicity in Vancouver but it is nowhere near the dominant fraction. But there is a very nice Chinatown (great food).
The ethnic 'whites' are about 50%; ethnic Chinese is now ~25%. It was quite a bit lower when I was still living in Canada, so it was definitely on the rise, but the increase seems to have leveled off.
Some externally sourced data that more or less confirms this:
Mildly interesting: I opened the titular image in a new tab hoping to find a higher-resolution one, only to see that NASA.gov is using WordPress, and the original image file is a whopping 102 megapixels, with all EXIF info intact, taken by a Fujifilm GFX100S. A fitting tool to take a photo of an object worth $1B.
This image is something that makes me grateful for technology: a probe was sent into our solar system, rendezvoused with a rock, collected a sample, and returned to Earth, and I am able to view a high-resolution image on my phone almost as soon as the canister was opened. The chain of technology that makes this work, in space and on my phone, is too long to figure out in full, but bravo to all the engineers, from NASA to the camera makers to WordPress and everything on every layer in between that they are built on. I salute you all!
Interesting how that image is so 'scale free' it could be almost any size. The only giveaways are the screw heads visible elsewhere outside of the cylinder and even those could be in quite a range.
> is a whopping 102 megapixels, with all EXIF info intact, taken by a Fujifilm GFX100s
I jest, but it seems like that camera is able to take photos up to 400 MP by automatically combining 16 RAW photos taken in succession ("Pixel Shift Multi Shot"). So they did slack on the job just a tiny bit ;)
That's a lot of hot pixels for base ISO! Could be the focus stacking intensified them.
That camera can actually get a bit more resolution out of sensor movements, though the lens might be at its limit already, or focus was still a little off. edit: it's probably the glovebox glass.
To be fair, Apollo engineering failed badly on similar problems (with moon samples),
- "But in spite of all this beautiful complexity, there were just basic, fundamental mistakes,” Dr. Degroot said."
- "NASA officials were well aware that the lab wasn’t perfect. Dr. Degroot’s paper details many of the findings from inspections and tests that revealed gloveboxes and sterilizing autoclaves that cracked, leaked or flooded."
(The goal in that case wasn't to protect the sample integrity, but to contain alien pathogens).
late edit: Another related example (lunar soil),
- "Although this material has been isolated in vacuum-packed bottles, it is now unusable for detailed chemical or mechanical analysis—the gritty particles deteriorated the knife-edge indium seals of the vacuum bottles; air has slowly leaked in."
I think they were trying to avoid the scenario: move fast, break things, contaminate $1B sample.
Or another scenario, which is a possibility when personalities in charge don't get enough sleep: move fast, break things, snort lines of $1B asteroid dust on Rogan.
It's got to get to orbit first; you have to launch enough of them to refuel the first one. I have no doubt they will get them to work, but it's not an option yet.
I'm pretty sure they could get enough methane out of Elon Musk running his mouth to send a probe to Pluto. Probably have enough left over to catch up with Voyager 1.