cool_dude85's comments

Diesel is less fuel efficient than regular gasoline except when you measure by volume. It gets fewer miles per unit of energy in the fuel.

Can you source that? Diesel is only 13% more energy dense than gasoline [1] so the difference between the two fuels isn't huge.

I suspect that modern (last five years) turbocharged gasoline engines are probably approaching diesel thermal efficiency, but I don't think that it's correct to say that they generally surpass it. The gasoline Ford EcoBoost is 33% thermally efficient while a BMW N47 turbo-diesel is 42% thermally efficient, as an example [2].

[1] https://afdc.energy.gov/fuels/properties [2] https://en.wikipedia.org/wiki/Brake-specific_fuel_consumptio...
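
For anyone who wants to sanity-check the miles-per-unit-of-energy claim, here is a rough back-of-the-envelope sketch in Python. The energy contents are approximate (gasoline ~33.7 kWh/gal, diesel ~37.8 kWh/gal, i.e. roughly 12-13% denser), and the MPG figures are made-up placeholders rather than measurements; plug in real EPA numbers for comparable cars before drawing conclusions.

    # Approximate energy content per US gallon, in kWh.
    ENERGY_KWH_PER_GAL = {"gasoline": 33.7, "diesel": 37.8}

    def miles_per_kwh(mpg, fuel):
        # Convert volumetric efficiency (miles per gallon) to energy efficiency.
        return mpg / ENERGY_KWH_PER_GAL[fuel]

    # Placeholder MPG figures for two hypothetical comparable cars.
    for fuel, mpg in [("gasoline", 32), ("diesel", 42)]:
        print(f"{fuel}: {mpg} mpg -> {miles_per_kwh(mpg, fuel):.2f} miles/kWh")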


The fundamental difference in how the engine operates (throttling only fuel rather than both air and fuel) accounts for a large part of the fuel economy savings.

Fuel is sold by volume, which is why volumetric fuel efficiency is the metric that matters to the consumer.

Fuel is sold by volume and fuel type; diesel is about 25% more expensive per gallon than regular gasoline where I am.

Correct - where I am it is cheaper most of the year, a bit more expensive in the winter.

And it is 10% cheaper than gasoline where I am (South Africa).

Yes, but measuring miles per volume of fuel and setting increasing targets has been a big focus of reducing petroleum dependency since the 70s.

The focus has more recently shifted to reducing overall emissions of CO2 and other harmful gases and particulates, which makes diesel much less appealing.


I don't think any car buyer has ever looked at Calories per litre of fuel as a relevant metric for purchasing.

People who buy cars almost exclusively care about the cost of fuel to move between A and B.


The NSA lacked legal authority to do this bulk collection prior to the Snowden leaks, and yet that didn't stop them from collecting. Why would I believe that their lack of legal authority today would stop them?

Because it's not possible for them to get the same easy access anymore?

It was certainly easy in a world where everything wasn't encrypted; that's not the case anymore.


Can someone who knows a bit more about this help me understand how structures like this are produced? Is there some kind of computer search, perhaps guided? Is this a clever combination of sub-structures, timing mechanisms, etc. that are then fit together like Legos?

Basically, for this specific structure, they had to develop their own "sub-structures" on the 1D line. These sub-structures are known to create one little thing going diagonally (and then leave a bunch of debris behind, but that doesn't matter too much for this first step; they called this custom part "the fuse"). Then there is a known technique where taking "diagonally moving objects" created on the same y-coordinate and placing them at the "right x positions" makes them collide in a way where you can "program" the creation of diagonally moving objects at arbitrary positions on the screen (this is called a "binary construction arm"). And once you can create those anywhere on the screen, you've basically won: there's another technique to turn arbitrary positions into arbitrary shapes (the "extreme compression construction arm", or ECCA), and it's "just" a matter of making the ECCA clean up all of the debris and build a new fuse, moved over.

Of course, the "just" here does the heavy lifting and represents over two years of exploration, writing algorithms for how to clean up everything, and so on.


I believe this one is a deliberate construction, they knew the evolution of the pieces and gradually put it together.

There are search programs too, for smaller patterns. This construction is just too big, and with such a long period the search space would be enormous.

I got involved in this stuff years ago when I modified a search program for Life to search any CA rule. That’s how we found the HighLife rule and others like Day and Night.


Right. Interesting small patterns can be found using clever search algorithms. There's also the approach of running trillions of random 'soups' and scanning the results for interesting patterns. These small patterns are then pieced together to build the larger structures.
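
To make the "soup search" idea a bit more concrete, here is a minimal sketch in Python/NumPy, assuming the standard B3/S23 Life rule on a small toroidal grid. The grid size, fill density, and step limit are arbitrary placeholders, and real soup searchers (apgsearch and friends) are far more sophisticated about censusing what each soup settles into.

    import numpy as np

    def step(grid):
        # Count the eight neighbours of every cell, wrapping around the edges.
        neighbours = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # B3/S23: birth on exactly 3 neighbours, survival on 2 or 3.
        alive = (neighbours == 3) | ((grid == 1) & (neighbours == 2))
        return alive.astype(np.uint8)

    def run_soup(size=32, fill=0.4, max_steps=2000, rng=None):
        # Start from a random "soup" and iterate until the whole grid repeats.
        rng = rng or np.random.default_rng()
        grid = (rng.random((size, size)) < fill).astype(np.uint8)
        seen = {}
        for t in range(max_steps):
            key = grid.tobytes()
            if key in seen:
                return t - seen[key]   # period of the cycle the soup fell into
            seen[key] = t
            grid = step(grid)
        return None                    # did not settle within max_steps

    # Scan many soups and flag any that settle into an unusually long period.
    for i in range(1000):
        period = run_soup()
        if period is not None and period > 2:
            print(f"soup {i}: settled into period {period}")

A real search would analyse what each stabilized soup contains (gliders, oscillators, rare still lifes) rather than just its period, but the overall loop is the same.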

50 Shades is decidedly not a fanfic for the exact reason that it couldn't be sold as one.

Quoting Wikipedia:

“The Fifty Shades trilogy was developed from a Twilight fan fiction series originally titled Master of the Universe and published by [E. L.] James episodically on fan fiction websites under the pen name ‘Snowqueen Icedragon’.”


Exactly so. It was not able to be published in its initial state as a Twilight fanfic due to copyright and had to be re-worked so as not to infringe.

Not sure if it really looms large in the minds of present-day Australians, but they did vote for a left winger in the 70s and got a coup for their troubles.


And despite said coup continued to brown nose the regimes behind it!

>Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.

Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?


I've been programming for 20 years and GPT-4 (the one from early 2023) does it better than me.

I'm the guy other programmers I know ask for advice.

I think your metaphor might be a little uncharitable :)

For straightforward stuff, they can handle it.

For stuff that isn't straightforward, they've been trained on pattern matching some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.

--

To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)

I have tried it, and it was a humbling experience.

https://news.ycombinator.com/item?id=45930151


Now tell the AI to distill a bunch of user goals into a living system that has to evolve over time, integrate with other systems, etc., and then deliver and support that system.

I use Claude Code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously, some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me, and it does effectively do a lot of work that would have gotten farmed out to junior engineers.

Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means the AI brain worm got you or it says something about your abilities.


Right, but this is akin to arguing that the table saw also does not do x/y/z. I don't know why we only complain about AI and how it does NOT do everything well yet.

Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems very particular to AI products and nothing else.


I'm OK pondering the right use for the tool for as long as it'll take for the dust to settle. And I'm OK too trying some of it myself. What I resent is the pervasive request/pressure to use it everywhere right now, or 'be left out'.

My biggest gripe with the hype, since there's so much talk of craftsmanship here, is this: most programmers I've met hate doing code reviews, and a good proportion prefer rewriting to reading and understanding other people's code. Now suddenly everyone is to be a prompter and astute reviewer of a flood of code they didn't write, and now that you have the tool you should be faster faster faster or there's a problem with you.


Well, that's the issue. The table saw is a tool; we can very clearly agree it's good at cutting a giant plank of wood but horrible at screwing in a bolt. A carpenter can do both, but not a table saw. We never try to say the table saw IS the carpenter.

All this hype and especially the AGI talks want to treat the AI as an engineer itself. Even an assuredly senior engineer above is saying that it's better than them. So I think it's valid to ask "well can it do [thing a senior engineer does on the daily]" if we're suggesting that it can replace an engineer.


I'm not complaining about it, I said in my post that it's a huge time saver. It's here to stay, and that's pretty clear to see. It has mostly automated away the need for junior engineers, which just 5 years ago would have been a very unexpected outcome, but it's kind of the reality now.

All that being said:

There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.

There's also a techbro segment of the population, who are selling snake oil about AGI being imminent, so fire your whole team and hire me in order to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Nevermind the fact that the outcome of such a program would be a smoldering tire fire, they'll be onto the next grift by then.

As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.


Junior engineers will be fine; OpenAI is actually choosing to hire juniors now because they just learned all their theory and structure, and are way more willing to push the LLMs to see what they can do.

Bad code is bad code. There’s been bad code since day one; the question is how fast are you willing to fail, learn, fail again, learn more, and keep going.

LLMs make failing fast nearly effortless, and THAT is power that I think young people really take to.


Failing is part of the learning process if you learn from it. Otherwise it's just failing.


AI doesn't program better than me yet. It can do some things better than me, and I use it for that, but it has no taste and is way too willing to write a ton of code. What is great about it compared to an actual junior is that if I find out it did something stupid, it will redo the work super fast and without getting sad.


Too willing to write a ton of code - this is absolutely one of the things that drives me nuts. I ask it to write me a stub implementation and it goes and makes up all the details of how it works, 99% of which is totally wrong. I tell it to rename a file and add a single header line, and it does that - but throws away everything after line 400. Just unreliable and headache-inducing.


For me, AI is definitely a table saw. YMMV.


>Wild - whoever did this should lose their job.

Why's that? Because a guy who's apparently friends with the owner of the company that produces these things told you that it saves emissions? Doesn't it seem reasonable to verify these claims?


No, that doesn't seem reasonable at all if it's been proven to work _really well_ in several configurations and there's no particular reason to expect that the results would be drastically different in other very similar configurations.


Who proved it works really well in several configurations?


And how do you codify the threshold for what "very similar" configurations don't need to be tested and those that do?


That's what regulatory exemption procedures exist for, and it would be the logical next step if you had convincing hard data.

Every single regulatory process has them, so the fact that this very ranty article omits any mention of an attempt to use them is highly suspect.

I've worked with plenty of systems where for all sorts of reasons exemptions are granted for the express purpose of promoting innovation or recognizing a special circumstance.


Of course we should verify such claims.

Just as we should also verify claims that every regulation that has ever been written into law is by definition Good (tm) and can never be questioned.

It's possible for the friend of the company owner to astroturf an online forum to get a good regulation eliminated, just because it didn't benefit him.

It's also possible for such wealthy individuals to astroturf in favour of bad regulations, just because it would benefit them.


The null hypothesis is that interventions are just as likely, if not more likely, to cause harm as to do good.


Aren't regulations a form of intervention?


Yeah, that's my point.


Ah, I read it backwards, since companies selling things to make trucks "better" is also an intervention.


Verifying is great!

How many types of truck engine do you reasonably need to test with? The number should fit on one hand. And really you should only need to do the full test with one model and limited verifications with others. That'll get it down from $27M to $200k, which would be a far more reasonable requirement.


Some kind of testing should be required, but $27M seems egregious.


Yeah, why the certification process costs so much is one question I have. Would this even be a conversation if the cost of the test were more reasonable?


Most likely it costs a lot because demand isn't frequent enough for more than one company to offer the service, so there is essentially no competing supply. However, since it is a regulatory requirement, whatever demand does appear is nearly price-insensitive.


Having done UL certification before, this is exactly how it is.

During the process we forgot/missed that the product serial needed a single letter appended to the end to denote that it was the UL compliant version. We caught this after paying $15k for just recertification with new parts, no testing, only paperwork.

We went back to UL and told them about the mistake. They charged us $5k to open a new case just to append a "-5" to the name of the product on a handful of documents.

It's a total fucking racket.


>As an employee you don't have financial risk tied to the company

Is your livelihood, housing, ability to put food on the table for your family etc. not a risk by your understanding? Or are you only willing to accept certain types of financial risk as "risk"?

Here's an illustrative question: John Q. Billionaire owns shares in a passive index fund such that he practically has the same exposure to Amazon's stock price as if he owned $10 million in stock. I will potentially be homeless if I lose my 45k per year job at Amazon. Who has more risk?


This drives me nuts. You move to another city, risk your livelihood on a new job that you don't know is gonna work out for you. Your kids go to a new school, your partner has to either move to find a new job or make it work long distance for a while… Your whole life changes on what is essentially a bet; you have no security whatsoever. And people say the risk is not yours.

Also - and I think this is the main thing - you have NO SAY in any of it after you sign that contract. An owner DECIDES to close shop. To fire people. You risk being fired for whatever reason comes to the mind of your boss, manager, director, owner.

But yeah, no risk. No risk at all.


Then don’t do that? In 2020, when I had the “opportunity” to work at Amazon in a role that would eventually require relocating after COVID, there was no way in hell I was going to uproot my life to work for Amazon and that was after my youngest had graduated. I instead interviewed for a “permanently remote role” [sic]. If that hadn’t been available, I would have kept working local jobs for less money.

Anyone with any familiarity about tech knew or should have known what kind of shitty company Amazon was. At 46, I went in with my eyes wide open. I made my money in stock and cash and severance and kept it moving when Amazon Amazoned me. It was just my 8th job out of now 10 in 30 years.


If you're a billionaire and you invest in index funds, the risk of becoming homeless is really low, sure. The system works in such a way that the more money you have, the easier it is to make more money. So if you're stuck at the bottom, you're really stuck.

And I believe there is a huge shift of wealth going on, to a very small number of insanely rich people. And that is a very big problem.


I don't think the billionaire is at risk of becoming homeless even if they invest all $10 million in a single company and it goes belly up.


It seems like both authors of this paper were hobbyists (though, to be fair, trained mathematicians/statisticians: one has a master's and the other a PhD).


I think the risk is that there is some systematic difference between those who chose to participate and the overall population of public Montessori kids. For instance, maybe those with high incomes disproportionately chose to participate, and Montessori strengthens learning for this group, but if we could measure the whole population the result would be more mixed. It can't be a fully randomized trial if there's some kind of opt-in provision (which is not to say that an opt-in provision is bad, or that a study that isn't fully randomized is irrelevant).


This is a speculative criticism about a hypothetical problem. How random was the study?


https://www.pnas.org/doi/10.1073/pnas.2506130122

You could have answered your own question by reading the abstract, which makes it clear that the OP's conjecture was correct: the lotteries were random or somewhat random, but the groups that consented to the study were notably different, with the treatment group being richer, more educated, and whiter. They did of course attempt to control for this; whether the controls were adequate or omitted other underlying differences is another question.


> You could have answered your own question by ...

Who cares? It's not about me or someone else (or you), it's about the issues at hand. If the commenter wants to make a claim, they are welcome to.

People on HN can't read a study without finding one of the few methodological flaws they are aware of - as if that's some form of serious analysis.

