
Are you using an external provider for image generation or running something locally?


In Europe, this is mandated by law.

https://en.m.wikipedia.org/wiki/ECall


In Spain, they will soon require the V16 [0] too. Originally it was just a flashing light, but it was later upgraded to use mobile networks to send an emergency call to the authorities. Which proves that, technically, it is possible to have something like ECall without deep integration into the car.

There is no need for an always-on connection to the network; it is enabled in emergencies only. Thus no remote hacking of the SIM/baseband is possible at random times, and it doesn't broadcast its presence until used. A mechanic or the user can check the battery periodically and replace it if it's low, just like refilling wiper fluid. The car could even cut all other electrical systems after deploying the integrated, autonomous V16-like device.

Making their ECall implementations complex and privacy-invading was the car manufacturers' own choice.

[0] https://en.wikipedia.org/wiki/V16_warning_beacon_lights


This uses the LTE-M network, which is pretty cool. It doesn't do anything until you turn it on.


Yes and no. ECall is mandated, but telemetry is not. And ECall only makes a call in case of a crash.


ECall could/should use completely separate communications infrastructure from the entertainment/vehicle-connectivity functionality.


That's difficult because both will need to use the cell network.

But I guess you mean on the car side, like two modems? Yeah, that would be nice, or at least mandating the option to turn all manufacturer telemetry completely off. The EU never bothered to do this for computers and phones either, though.


GSM SIM cards, by design, can be remotely updated to do whatever the service provider wants (or is asked "nicely", or paid, to do).

https://www.thalesgroup.com/en/markets/digital-identity-and-...


I know, and they can be hacked through SMS messages too.

But the SIM card doesn't have access to the car's telemetry. Probably even the whole baseband module doesn't; it just gets the data to transmit when an accident happens.

The worst you could do by hacking the SIM is make the modem send nearby cell data somewhere, which is serious enough because a rough location can be derived from it. But it's a far cry from what these manufacturers collect.


Or if the service provider is served a warrant, or if they're hacked.


Just enable it by default and put a warning in big red letters that disabling the cellular modem will also disable the ECall system, which means you might not get the emergency assistance you need and could die.


Those shouldn't have to be linked. You should be able to turn off manufacturer telemetry while keeping ECall. ECall doesn't even talk to the manufacturer; it just calls the emergency services directly.

Of course, when the modem is on you can't be sure it isn't doing that, but that's what we have laws for.


AFAIK, ECall uses a voice call with data encoded on the audio channel. This is not what telemetry does, so it should be doable to keep the voice channel and disable any data connections. (But that's probably very hard to check for someone with no background in embedded systems.) Plus, in theory, an emergency call can be made without even using a SIM card, right (if the network allows it)?
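
Just to illustrate the idea, not the actual protocol: the real MSD format (EN 15722) and the in-band modem are far more involved than this. The field layout and the naive two-tone encoding below are made-up assumptions, a toy sketch of "a small crash report carried as audio over a voice call":

    # Toy illustration only: field names and the 2-tone FSK scheme are hypothetical.
    import struct
    import math

    def build_msd(vin: str, lat: float, lon: float, timestamp: int) -> bytes:
        """Pack a minimal crash report into a fixed-size binary blob."""
        return struct.pack(">17sddI", vin.encode("ascii"), lat, lon, timestamp)

    def bytes_to_audio(payload: bytes, sample_rate: int = 8000, baud: int = 400):
        """Naive FSK: 1300 Hz for a 0 bit, 2100 Hz for a 1 bit."""
        samples = []
        samples_per_bit = sample_rate // baud
        for byte in payload:
            for bit in range(8):
                freq = 2100.0 if (byte >> (7 - bit)) & 1 else 1300.0
                for n in range(samples_per_bit):
                    samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
        return samples

    msd = build_msd("WVWZZZ1JZXW000001", 47.3769, 8.5417, 1700000000)
    audio = bytes_to_audio(msd)
    print(f"{len(msd)} bytes -> {len(audio) / 8000:.2f} s of in-band audio")

The point being: such a payload needs nothing but the voice channel, so in principle it is independent of any packet-data connection the manufacturer uses for telemetry.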


I recently installed NetBSD on a ThinkPad T430s; it is surprisingly snappy and usable, except when Firefox is running.

With i3wm, tmux, and Helix with clangd, it is a capable development machine with a surprisingly modern feel; only the compile times give away that the machine is from the early 2010s. Even WiFi works, and pkgsrc is fast and well stocked with the newest software.


It's funny to hear someone describe my daily-driver personal laptop the same way as a 380Z. I use Slackware with Xfce4, and except for the lower screen resolution (1360x768) I have never noticed that it's not "modern." I even have a new battery, so it gets 6-9 hours of use again.

Like you said, the giveaway is the poor performance, but if you're a systems developer that usually isn't a problem anyway. Emacs, C, assembly, some Chisel and Forth are all that I write on it.


A T42 with Slackware and Notion WM was my daily driver until around 2015. I never really felt a need to get something newer, and only did because I wanted something smaller/lighter for portability, so I got a cheap $250 11e ThinkPad. I used the T42 regularly until it finally died in 2021 or '22. I miss the 4:3 screen and tiny trackpad. In just shy of 20 years, the only issue I had with the T42 was a dead fan. Now I have an X13 with 16 cores and NVMe, still running Slackware and Notion, and it flies. That $250 11e is still going strong; it has survived 3 bike crashes, and it shows.


I used a T42p (not just a measly T42, that's for the proles) as a daily machine until I was forced to move away from 32-bit machines around 2021. Were it not for that, I'd still be using it daily, since the combination of the 1600x1200, (more or less) 4:3 screen and the keyboard is hard to beat. I got three of them in nearly new condition for free about a decade earlier; all of them still work, one with a new translucent trackpad cover (made from an old mobile phone screen protector) because the old one was totally worn through. I'm now using a P50 which, while far better in other respects, lacks the 4:3 screen and 'suffers' from the modern Lenovo keyboard.


>not just a measly T42, that's for the proles

What do you think the 'p' in T42p really stands for? The T42p didn't exist when I bought my T42 back in '01 or '02. The X13's keyboard is almost as enjoyable to use as the T42's was for the first decade of its existence, and it wins hands down over a T42 keyboard with a decade of use and tired springs. I am pretty happy with their current keyboards.


I still rock a Latitude E6220 from 2011 or so after my main one broke and I just never replaced it.

It had 4 GB of RAM; I upgraded it to 8 GB for $10 via Amazon. Running Alpine with awesomewm, it works perfectly fine: Firefox runs fine, compiling works fine, VLC works fine, etc.

Even Windows 10 ran fine on it. Really, CPUs haven't changed that much in the last 10 years; the focus has all been on graphics.


The T430 is a modern machine with a Core i CPU (and the modern Lenovo keyboard...). The s variants are always a bit slower with their low-power cores, but it's still very much a modern thing - you'd expect it all to work.


cnlohr is in a league of his own; this guy is bordering on genius.

Dissatisfied with the state of the vendor SDK for the CH32V003 microcontroller (an ultra-cheap RISC-V MCU), he created his own [1], which is a pleasure to use. He also has a header-only RISC-V emulator that runs Linux (and Doom!) [2], and he hacked an ESP32 into emitting valid LoRa frames with clever use of aliasing [3].

[1] https://github.com/cnlohr/ch32v003fun

[2] https://github.com/cnlohr/mini-rv32ima

[3] https://www.youtube.com/watch?v=eIdHBDSQHyw


It's an interesting question, but only looking at energy expenditure probably does not cut it.

I hike a lot and started running races in alpine terrain (e.g. this one [1]) about a decade ago. I think that if you take into account fatigue (mental, muscles, tendons) and exposure to the elements (sun, rain, wind, snow), a light running pace can definitely make you arrive at your destination less exhausted than walking the same distance, provided that you can move confidently in the given terrain.

So it is a multi-dimensional optimization problem (as opposed to only optimizing for energy expenditure) and very dependent on how comfortably you can move at the given velocities.

[1] https://www.youtube.com/watch?v=o-12ghcODMM
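
To make the "multi-dimensional" point concrete, here is a toy sketch comparing walking vs. a light run on more than just energy expenditure. All weights and per-hour "cost" numbers are invented for illustration; nothing here is measured data:

    # Toy model: total exhaustion as a weighted sum of several per-hour costs.
    def exhaustion(hours, energy_per_h, strain_per_h, exposure_per_h,
                   w_energy=1.0, w_strain=1.0, w_exposure=1.0):
        return hours * (w_energy * energy_per_h
                        + w_strain * strain_per_h
                        + w_exposure * exposure_per_h)

    distance_km = 20
    walk = exhaustion(distance_km / 5, energy_per_h=300, strain_per_h=1.0, exposure_per_h=2.0)
    run = exhaustion(distance_km / 8, energy_per_h=550, strain_per_h=1.8, exposure_per_h=2.0)
    # Which option "wins" depends entirely on the weights and on terrain/fitness.
    print(f"walking: {walk:.0f}  light run: {run:.0f}")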


Having done some light ultra events myself, I have definitely found that going faster can be less tiring overall because you reduce the total duration - less sleep deprivation, less time on your feet, etc.


Less time on your feet, but under a bigger strain and with less efficiency, which will make you more tired. It's the same for all of these.

> (sun, rain, wind, snow)

You are not in the sun for as long, but the time you are in the sun is spent at a less efficient energy expenditure. You will sweat more because you are not in the ideal window.

If you look at ultra events with 100+ miles and tons of elevation gain, even the winners have an average pace of a fast walking speed. There are some hikers who only hiked these events and got very good results.

The more you weigh, the further the efficiency drops. Even the top runners walk the steep uphills. And in long races with lots of elevation gain, at the end they are walking all the hills. It's exactly because it becomes so inefficient, even for lightweights.


> If you look at the Ultra events with 100+ Miles and tons of elevation gain, even the winners have an avg pace of a fast walking speed. There are some hikers who only hiked these events and got very good results.

Sorry, but this is totally incorrect. The record for the Western States 100, which has nearly 19,000' of elevation gain, is just over 14hrs for men, and about 15.5hrs for women.

Fast hiking in that terrain would be 4 mph, which translates to 25 hrs minimum.

The world's best ultra runners can maintain 6+mph average in insane terrain.


> The record for the Western States 100, which has nearly 19,000' of elevation gain, is just over 14hrs for men, and about 15.5hrs for women.

The Tor des Géants (78,700' over 205 mi) men's record is 66:44, just over 3 mph, and that doesn't touch the Barkley Marathons, with 60,000' of gain and under 2 mph, though that's more to do with the rough terrain than the elevation.

190' per mile isn't that much, especially since the Western 100 ends 5,000' lower than it starts.
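
A quick sanity check of the paces quoted in this sub-thread (a rough sketch; the distances and record times are taken from the comments above, not independently verified):

    # Quick pace arithmetic for the figures quoted in this sub-thread.
    def avg_mph(miles: float, hours: float) -> float:
        return miles / hours

    # Western States 100: ~100 mi, ~19,000' gain, men's record just over 14 h
    print(f"WS100 record pace:    {avg_mph(100, 14.2):.1f} mph")     # ~7.0 mph
    print(f"WS100 hiked at 4 mph: {100 / 4:.0f} h")                  # 25 h
    print(f"WS100 gain per mile:  {19000 / 100:.0f} ft")             # 190 ft/mi

    # Tor des Géants: ~205 mi, ~78,700' gain, men's record 66:44
    print(f"Tor des Géants pace:  {avg_mph(205, 66 + 44/60):.1f} mph")  # ~3.1 mph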


Try walking up a rocky mountain pass and measure your horizontal speed.

3 mph at the Tor des Géants means running every runnable segment.


There certainly are "ultra" routes that essentially require just speed hiking. But the majority of 100+ mile ultras are not like this, and allow capable runners to move much faster than "speed hiking" pace.


OP said "Ultra events with 100+ Miles and tons of elevation gain" (emphasis mine).

"Tons" is subjective, but their statement is objectively true for a bunch of ultra marathons - Tor des Géants and Barkley were just the ones of the top of my head. He is not "totally incorrect" as you said.


4 mph hiking is pretty fast. I can do maybe 3.5 without starting to jog.


Very true. First thing I thought when I read this - how much weight are we carrying? What are the water requirements?

Then again, the opposite example (let's say a super hot desert with intense sun, but you can wear a shade hat or something) might flip the result: your feet might still be more tired because walking is slower (and more time on them), but the environment is a huge factor.


I'm no athlete, just get around town on foot a lot. Even on sidewalks with no baggage, running at a medium pace with a springy step seems less tiring.

If I'm wrong about that, at least it's obviously easier to run downhill. Walking downhill can even take more effort than on the flat. But when running, by definition both feet leave the ground at some point, so you have less friction and can let gravity do its thing.


Running downhill is very bad for your knees.


I know from personal experience that it is tiring for the knees.

What makes you think that it is "bad"?


That's my experience too. When I hike in the mountains, there are places where I can run for a while without getting tired and gain a lot of time - usually the slightly downhill sections. I wouldn't want to fall there, though.


I agree, but I have seen an implementation of a middle ground (Scaled Agile Framework, SAFe) that combined the worst of both worlds: quarterly, one-week, all-hands-on-deck mammoth PI (program increment) planning sessions, planned down to the story point (whatever unit that was, apparently it has magical self-calibrating properties), that were thrown out of the window 4 weeks into the PI.


Combining the worst of both worlds is definitely a nutty thing to do! You need to use critical thinking and analysis when coming up with new processes. It's definitely possible to make use of the best of both worlds. Though you still need to synthesize and re-evaluate the whole.

Though two wrongs don't make a right, two rights can sometimes make a wrong when combined.


> There was no incentive to carefully consider where to place a sticker. Players didn’t have enough agency to advance the plot through their individual action. This limited creativity to simple patterns like tiling similar stickers or forming lines.

Ah, the game turned into a big corp job!


STM64 subsystem for STM32


You have it backwards.


My thoughts exactly. It makes the engineer in me go mad when I see projects like Energy Vault [1] getting massive funding that could be used to develop technologies that make sense. Thankfully, there are some people who see through the charade [2].

If you are into this kind of thing and looking for an even more stupid idea for storing energy, I present to you StEnSEA [3]. Rolls right off the tongue, right? It is a hollow concrete sphere that is lowered to the bottom of a lake. Pumps then remove the water from it, creating a vacuum. The energy is reclaimed by letting the water back in and using the pumps as generators. Curiously absent from all documentation of this project is the amount of energy stored. I did a back-of-the-envelope calculation a while back and it comes out to about 3.8 kWh, for a multi-million-euro prototype!

[1] https://www.energyvault.com/ev1

[2] https://www.youtube.com/watch?v=iGGOjD_OtAM

[3] https://www.iee.fraunhofer.de/de/projekte/suche/2013/stensea...
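
For the curious, a minimal sketch of that back-of-the-envelope number, assuming the prototype is a fully evacuated 1.5 m-radius sphere at 100 m depth (the figures mentioned further down the thread), fresh water, and no losses:

    import math

    # Energy stored by fully evacuating a submerged sphere: E = rho * g * h * V
    rho = 1000                      # kg/m^3, fresh water
    g = 9.81                        # m/s^2
    h = 100                         # m, prototype depth
    r = 1.5                         # m, prototype sphere radius

    V = 4 / 3 * math.pi * r ** 3    # ~14 m^3
    E = rho * g * h * V             # joules
    print(f"{E / 3.6e6:.1f} kWh")   # ~3.9 kWh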


> 3.8kWh

That seems very low. Their website mentions 20 MWh+.

Though my back of the envelope agrees with yours.


If I understand it correctly, ~20 MWh is for the full-size model at 700 m depth and with a radius of 15 m (the prototype's radius is 1.5 m, and it sits at a depth of 100 m).

And building concrete spheres that can withstand the pressure of a 700 m water column is probably an interesting design challenge on its own.
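
Scaling the same formula to those full-size numbers (again a rough sketch, assuming the sphere is pumped completely empty, which is optimistic):

    import math

    rho, g = 1000, 9.81                   # fresh water, m/s^2
    V_full = 4 / 3 * math.pi * 15 ** 3    # 15 m radius -> ~14,000 m^3
    E_full = rho * g * 700 * V_full       # 700 m depth
    print(f"{E_full / 3.6e9:.0f} MWh")    # ~27 MWh, same ballpark as the ~20 MWh claim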


For anyone interested in the structure of the software that ran Ingenuity (and some hardware design aspects, such as the use of commercial off-the-shelf parts), there is an awesome and critically underwatched video of Timothy Canham explaining everything:

https://www.youtube.com/watch?v=mQu9m4MG5Gc&t=7s


I always thought it was crazy that NASA uses an FPGA/microcontroller/cell-phone-SoC setup, which makes total sense to me, while mundane industrial things like the traffic lights at an intersection need a giant cabinet with shelves of crazy Siemens controllers. It feels like such overkill in comparison; you can see them on this channel: https://www.youtube.com/watch?v=udpB-en9KKM . The guy is always arguing with the commenters that it's all needed for safety. I never could figure out why some company doesn't come up with a NASA-sized solution to control the world's intersections instead.


It's not needed for safety, but for liability. The equipment is costly because it needs to be certified, and every piece of equipment inside needs to be certified as well. Engineers in this field are averse to new technology because they are liable if something goes wrong.


It's all about SIL ratings and flexibility. Industrial control is basically Lego bricks, with a mile of paperwork for the safety-critical bricks.


In fact, NASA wouldn't use that sort of SoC for anything critical like the rover itself. Ingenuity was always a marginal experiment, with a correspondingly high appetite for risk and the ability to accept budgetary compromises.


Sure, it was a test platform to prove that they can use a less rad-hardened design by taking the more novel approach. I'd imagine future rovers will definitely use something similar.


I'm curious, what do they use then? It seems like using an FPGA for sensor gathering and some guidance loops, with a microcontroller for logic control, is sensible. I've run into scenarios where just the sensor gathering/IO can take up a significant portion of the microcontroller's time slice.

