
As a non-native English user, I recently switched my browser's default search engine back from DuckDuckGo to Google after unsatisfying experiences with the former.

The reason is that Google supports my quick checks of English usage, points that may seem minor to native speakers, such as

"as a result" vs "as the result"

or

"looking for the X factor" (I feel like I can write this phrase but not quite sure I understand what X factor really is or if there is anyone really using this term)

or

"someone advocating for implementing" (I want to use this phrase to indicate some colleague but I want to know if saying this is natural enough (or has more search results))

It's silly, but I do rely on Google for this. Sometimes I feel guilty about it because I know these search requests cost energy and increase carbon emissions.

DuckDuckGo is nowhere near Google's performance for this. It also frequently returns NSFW websites on the first result page.


If that is the case, then you only need to set up a packet monitor between the computer and the ISP router to observe such a magic packet.

Or will you claim that all machines that are capable of such tasks are already compromised?
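
For concreteness, here is a minimal sketch of such a monitor using libpcap, assuming it runs on a separate tap machine sitting between the computer and the router; the interface name "eth0" and the magic byte pattern are placeholders I made up for illustration (build with -lpcap):

    #include <pcap.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical marker to look for; a real probe would need the actual pattern. */
    static const unsigned char MAGIC[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                          const u_char *bytes) {
        (void)user;
        /* Naive scan of the captured bytes for the marker. */
        for (unsigned int i = 0; i + sizeof(MAGIC) <= hdr->caplen; i++) {
            if (memcmp(bytes + i, MAGIC, sizeof(MAGIC)) == 0) {
                printf("possible magic packet, %u bytes captured\n", hdr->caplen);
                return;
            }
        }
    }

    int main(void) {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* "eth0" is an assumption; use whichever interface sits on the tap. */
        pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, -1, on_packet, NULL);   /* capture until interrupted */
        pcap_close(p);
        return 0;
    }

Whether this would catch anything is exactly the question raised in the reply below: a monitor built on the same compromised chipset might never see the packet in the first place.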


Sorry for the slightly late reply. Just suppose that enough resources are used to "convince" hardware manufacturers to add a small change to their firmware, something like "if a packet contains this exact magic word, don't count it; pass it on along with the payload, and possibly send a copy to this other address, again without counting it", where "not counting" also means not signaling that it is going through the hardware: no management interface would see it, and the LEDs on network hardware panels wouldn't even blink. In other words, to actually see that packet one would have to be on the other side.

Admittedly it's absurdly complicated to do that at a global level, but let's say someone in the right place manages it. The next level would be doing the same at the iron level on computers, so that each subsystem could talk to the others and to the outside world without administration tools noticing, because it would all go through a covert channel set up by closed software. That would be the perfect weapon for building pervasive surveillance that no security software at any privilege level, not even a debugger, would detect.

The only way to discover that something fishy is going on would be to sniff inter-chip communications locally and to put digital analyzers on the network cables, with appropriate software. Network analyzers could fail if they use the same network chipsets, as would a normal packet monitor.


While your question is pretty general, your perspective is quite limited to the C language. Some other languages, such as Go, can have dynamically sized stack allocations. (Actually, in Go the user doesn't even have to care whether the stack or the heap is used.)

Take Linux, a giant user of the C language, as an example. Allocating a large chunk of memory on the stack is just not useful, even given an unlimited amount of memory. You note the most critical reason yourself in the post: "assuming that data doesn't need to be returned outside of the current stack frame". However, large data structures are almost always for others to use. Just consider sys_clone (where the kernel eventually builds a body for the new thread) or sys_mmap (where the kernel manipulates existing virtual memory area structures from elsewhere). Allocating those on the stack seems pointless.
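
A minimal C sketch of that constraint, with made-up names: anything that has to outlive the call can't sit in the callee's frame, so it has to come from the heap (or from storage the caller already owns).

    #include <stdlib.h>

    struct big { char payload[1 << 20]; };   /* ~1 MiB, larger than typical stack limits */

    struct big *make_on_stack(void) {
        struct big b;           /* lives only in this frame...                        */
        return &b;              /* ...so returning its address is undefined behaviour */
    }

    struct big *make_on_heap(void) {
        return malloc(sizeof(struct big));   /* outlives the call; the caller must free() it */
    }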


What I actually had in mind was Rust. Broadly speaking Rust requires all stack-allocated values to be sized at compile time. Also, thanks to borrow-checking, code is encouraged to stay fairly stack-oriented, so at least I personally end up with lots of function-local data structures that get passed by reference down to other logic. So I think "pointless" is overstating it.

However, the bigger question wasn't about the size limit but about being runtime-dynamic. Why can't a function's argument have a dynamic size, with the stack frame simply allocating the needed space at call time? Even if we're not talking about large structures, this would be useful for things like ad-hoc union types (as opposed to enums, where all variants have to be known at compile time and are packed into a single bit-width regardless of the value). Maybe you could even get rid of some of the code duplication that happens when generic functions get reified.

I assume there's a good reason, I just don't see what it is.


Let's look at this question from the compiler's point of view: what assembly sequence are you going to generate for that dynamically sized argument at runtime?

For compiled languages this is handled by the calling convention, and most architectures (ARM, PPC, RISC-V, ...) have register-based calling conventions. Only when all of the argument registers are consumed does the function call pass the remaining arguments on the stack.
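
As a rough C illustration (function names made up): a "dynamically sized" value isn't passed by value at all; a register-sized pointer plus a length go through the calling convention, and even a C99 VLA only grows the frame by a size that arrives as an ordinary argument.

    #include <stddef.h>

    /* The data stays wherever it already lives; only a pointer and a length
       travel through the calling convention (typically two registers). */
    long sum(const int *data, size_t n) {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += data[i];
        return total;
    }

    /* C99 VLAs do give runtime-sized stack allocation inside a frame,
       but the size still arrives as a normal register-sized argument. */
    long scratch(size_t n) {
        int tmp[n];             /* the frame grows by n * sizeof(int) at call time */
        for (size_t i = 0; i < n; i++)
            tmp[i] = (int)i;
        return sum(tmp, n);
    }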


Why did this post get downvoted? These are fair questions to ask about "given enough eyeballs, all bugs are shallow", especially for browser plugins.


Yeah I'd go as far as to say "How are you sure that open code is what's in the actual package" is the most important question to ask here.



I just finished "Drive to Survive" season 3 within two days of its release. While the documentary focuses on the dramatic politics in F1 and the mentality of the drivers, I have always wanted to know more about the engineering teams involved.

Obviously the teams are mostly mechanical engineers, which is not my main interest.

All the teams are constantly collecting data from every part of the car during the races. There must be many sensors throughout the cars, right? They must need software engineers to provide good monitoring and analysis tools, and embedded-systems software engineers to do the firmware work.

Can anyone share some F1 experience?


F1 and automotive more broadly make use of all sorts of software to help their engineering.

The domains of Computer aided engineering are massive.

There’s FEA optimisation to design stiffer, lighter parts.

There’s CFD to design more aerodynamic surfaces optimising for a whole host of goals (drag reduction, downforce, air management, cooling)

There’s kinematic software to analyse the chassis-suspension system to understand behaviour in various scenarios (cornering/braking/accelerating)

There is multidisciplinary software that ties all this together (how does my suspension system in cornering impact my downforce, and vice versa? It's a tightly coupled relationship).

The non-linear dynamics of suspension and tyres are pretty complicated (grad-level mechanical engineering topics).

All of this is supported with modelling software, some of it third party, some of it homegrown.

All of the engineering simulation ultimately is piped into lap time simulation for a given track layout and variables can be tweaked to drive the optimal setting for that given track, car, driver, weather, etc.

The dance of F1 / race car design is: given the regulations, time, budget, and other constraints, how do you optimise for a host of non-linear, chaotic, dynamic bits of physics, while keeping the car drivable for the specific driver?

Few people (Adrian Newey, Ross Brawn, et al.) can map out a vehicle concept completely and then translate that into the efforts of hundreds of engineers to ultimately have a manufactured car ready to race.


Thanks for covering these aspects of software in the automotive industry. It makes me more curious, since you mentioned the multi-disciplinary nature of the domain: how can they tune so many metrics and still optimize the whole car? Are racing events a huge part of the optimization journey, or a small part?


Not at all exhaustive but the question reminded me of a couple of episodes of Talk Python To Me

- https://talkpython.fm/episodes/show/281/python-in-car-racing

- https://talkpython.fm/episodes/show/296/python-in-f1-racing


You may be interested to know that Palantir works with Ferrari on exactly this kind of problem (sensors collecting data and making sense of it) - https://www.palantir.com/solutions/auto-racing/


Red Bull are a kdb shop: https://youtu.be/QxfdFWKo_pQ

I've seen NumPy code, only slightly blurred out, in Mercedes videos on YouTube in the past.


I doubt that this is true. In the era of gathering and hunting, the diversity of food was based on the nearby ecosystem, which offered more variety than a modern supermarket does.


There is no way that's true. No matter how far a person wandered, they're not going to come across bell peppers, lettuce, carrots, potatoes, and any meat they could want.


Your perspective is skewed. Your knowledge only includes the local plants that we have turned into our international foraging choices. Bell peppers are just one modern variety out of the ~30 species of chilies we've discovered. Same for carrots and potatoes and lettuce. I look at the variety of edible vegetation I can get at the grocery store, and the only thing I really think is that it's a good day when I don't have to walk 20 miles to the fruit trees to pick the fruit before all the animals do.

Meat? Tons of variety: there are something like twenty different cat- to dog-sized animal species in my Montana wilderness before I even get to the large herbivores, which have less variety. And meat is special anyway; the mink or weasel you can kill is chock-full of 2-5 years' worth of resources collected from various animal and vegetable sources.


Have you ever eaten a pine cone?

Pine needles "contain more Vitamin C than an orange" (I didn't see it specified if that's by gross weight or dry mass) and make decent tea, and many cultures still boil and eat or cook with pine cones.

Modern crops are amazing, but we are literally surrounded by new and forgotten sustenance. It's not always pretty, tasty, or toothsome, but it's there if you have the time and will to turn it into food.

I'm also reminded of villages that "modernized" by trading their old-fashioned cast-iron cookware in for aluminum and began to suffer from anemia as a result. Iron is abundant in the crust, and eating mineral-rich clay is still practiced by humans and animals around the planet, but their solution was to put cast-iron charms in the new pots for luck. It's possible they were in iron-poor areas, but if they were swayed to abandon their traditions by the appeal of commercial marketing, it seems likely they might also have eschewed any dirt-eating practices they once had.


Highly sceptical of this if we're talking about the non-animal part of the diet.

The Hadza ate only 4 non-animal foods, which is far less than what's available at a supermarket.

Hunter-gatherers would've had a more diverse diet when it came to meats, eating the entire animal including most organs, but that's more cultural than a matter of what's specifically available at the supermarket (it's pretty easy to get organ meat at the supermarket, and especially at the butcher, but most people don't opt in).


Right, but worth noting that raw beef, for example, contains a lot more nutrients than cooked beef[1], and includes most of the essential vitamins you would need to live.

Other parts of the animal, like the liver, would provide sources for A and B12, etc.

    > Because of its content in highly valuable
    > nutrients such as iron, zinc, selenium, fatty
    > acids, and vitamins, meat is a unique and
    > necessary food for the human diet in order
    > to secure a long and healthy life, without
    > nutritional deficiencies.[2]
[1]: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1365-2621....

[2]: https://www.sciencedirect.com/science/article/abs/pii/S03091...


The book Sapiens discusses this at length. Our early hunter-gatherer ancestors had a much more varied diet compared to agricultural societies.


This year's Dorito season is going to be plentiful.


They also didn't live to a ripe old age.


That's a bit of a myth. Life expectancy was vastly lower than it is now, but that is because of infant mortality, death in childbirth, lack of emergency medicine, etc. Those who survived all that lived about as long as people do today, roughly 70-80 years:

https://www.jstor.org/stable/25434609?seq=1


correlation_bias.jpeg


Sure, yet still somehow a stronger argument than the comment I was responding to.


A blacklist-to-blocklist PR may be OK for you, but a master-to-main PR is not OK at all for many others. Slavery is a definite evil, but these branch-renaming PRs just went too far.

As someone who is not a U.S. citizen, the impact of political correctness seems ridiculously huge to me. I know about slavery and how bad it is. It appeared in my culture, and it is still there in some form too. However, the ENGLISH word "master" doesn't mean much to me. It is just a multi-purpose symbol, like many other words. When its meaning is simply "the main branch", why bother with the renaming?

You claim that you are just spending your energy on something else, but in reality you are giving up the right to use your own language and to maintain the meaning of a well-understood and frequently used word.


You mentioned both Arch and the AUR in the target-user part, which makes me wonder whether you were mainly an Arch user before, and what triggered you to start this project.

As a satisfied Arch user, I always find that the AUR already includes something I need. Better yet, sometimes I find it already in the community repo.


> You mentioned both Arch and the AUR in the target-user part, which makes me wonder whether you were mainly an Arch user before,

Before Bedrock Linux I was mainly a Debian user.

I did briefly run Arch Linux before working on Bedrock, but I found the churn bothersome. While I was running Arch, it updated AwesomeWM from 2.X to 3.X, which changed AwesomeWM's configuration format and functionally broke it on my system. Arguably, this wasn't a mistake on the part of the Arch Linux developers; the expectation is that the user reads about updates before applying them. Had I done this diligence, I could have withheld the AwesomeWM update. However, I didn't feel like I was able to apply this diligence with the expected regularity.

Personally, I prefer Debian's pattern of only releasing security updates with any regularity, only making breaking changes every few years which can be applied when I have time to dedicate to understanding and handling them. Bedrock lets me get _most_ of my system from Debian, but still get newer packages from Arch or rare packages from the AUR when the trade-off of new-ness vs churn is worthwhile for me.

> and what triggered you to start this project.

I didn't actually set out with the goal to combine distros. Rather, initially (circa 2008) I worked on a sandbox technology. My aim was to fluidly transition resources between security contexts, minimizing user friction while maintaining permissions segregation. I realized only afterward that the technology I developed could be used to fluidly combine features from different distros. Once it occurred to me I could do this, I started seeing use cases everywhere, and pivoted direction to what became Bedrock.

> As a satisfied Arch user, I always find that the AUR already includes something I need. Better yet, sometimes I find it already in the community repo.

In that case, I fully encourage you to stick with Arch rather than switch to Bedrock.


I believe you are referring to Vietnam.


Yes. I don't know how to extend this answer to be more informative. Yes.

