
If you sell a physical thing, some percentage of them will have defects. That's just a fact of manufacturing.

It seems unfair to move to "not recommended" over a single instance of hardware failure, especially if the manufacturer made it right. And repairability is one of their core values!

At most this should've triggered a "this happened to me, keep an eye out in case this is a pattern" note in the review, rather than a move to "not recommended."


If you got food poisoning from a restaurant, would you recommend it to your friends? After all, food-borne pathogens and poor hygiene are just a fact of life.

How about if they gave you a voucher for a free drink to say sorry?

Reviewing products is like interviewing people. You have to go by what you see on the day. You can't review (or interview) based on what could have happened; only on what did.


Yes, if it happened once. If I get food poisoning every time, probably not. Perfection is impossible; I am reasonable and mindful of the challenges of consistency.

Hardware device arrives damaged or non-functional? I’m just going to call and ask for another one. If it’s a critical need (I cannot wait for a return-and-delivery cycle), I’m buying more than one upfront. Spares go in inventory.


> If you got food poisoning from a restaurant, would you recommend it to your friends? After all, food-borne pathogens and poor hygiene are just a fact of life.

There are standard practices that avoid the vast majority of food poisoning. Poor hygiene is not a fact of life, it's a failure of process in a restaurant.

There are no known standard practices that avoid all faulty electronics at anything like a reasonable price. From the sounds of it, this unit worked initially but failed over time. That's what warranties are for; it's why they exist. As a society we've decided that it's kind of okay if _some_ products fail early, as long as the companies make it right when they do. And it doesn't sound like the company had any intention of doing otherwise here.

There is no corresponding societal understanding for your analogy.


I agree; it's the primary way I consume front-page content. There is some contact information on the site:

"If you have any questions or requests, please mail me at wayne@larsen.st"


Can you recommend another comprehensive design system? As an engineer, that's the most valuable thing about MD3: the Figma design kit and per-component design guidelines. It lets me offload a ton of work I'd otherwise have to do myself (poorly) or outsource to a designer.

I haven't seen another design system that is as comprehensive as Material. Express seems like an evolutionary refresh with some things I could use right away, but otherwise most of the content is MD3. It's valuable to me as part of the larger ecosystem.



I am not aware of a better alternative. It is a good question; one would be quite helpful!

What I did in the past (with M3) was add some additional design tweaks (in Flutter), like giving buttons an elevation. That worked because I had the designer on my side and because the app came from Flutter's M2 style, which had similar aspects. But it is cumbersome to argue against a Google guideline armed only with usability knowledge and test results, and what can be done frankly varies by component, which means the adapted design can easily become inconsistent if one is not careful.


> The burden of proof lies with the manufacturer to present sound, robust, transparent, third-party audited evidence.

Waymo releases its safety data: https://waymo.com/safety/impact/, which is backed by public reporting requirements.

To say that it is wholly insufficient to make any safety claims on 50M publicly driven miles is ridiculous. At the very least, it appears sound, robust, transparent, and able to be validated.

> https://waymo.com/blog/2024/12/new-swiss-re-study-waymo

Is Swiss Re a valid third party? They also address peer review and external validation on the safety impact page above.

I can understand being skeptical because of Cruise, and especially because of claims made by Tesla, but there is a preponderance of supporting data for Waymo.

Given all of this evidence, you would still conclude Waymo is unsafe?


I think I was quite clear on my position.

> In the case of Waymo, we have some tentative supporting evidence from this and other studies Waymo has run. However, that is still insufficient, even ignoring the lack of audits by non-conflicted parties, to strongly conclude Waymo is safer than a human. The evidence is promising, but it is only prudent to wait for further confirmation.

You are not making a distinction between concluding something is unsafe and not being able to conclude it is safe. It is standard practice not to presume safety and to require that positive evidence of safety be presented. Failure to demonstrate statistically sound evidence of danger is not proof of safety. Failure to disprove X is not proof of X. This is a very important point for avoiding fallacious conclusions on these matters.

To address your specific points: yes, the data is promising, but it is insufficient.

Traffic fatalities occur on the order of 1 per 60-80 million miles. Waymo has yet to accumulate even one expected traffic fatality's worth of mileage. They appear to be on track to do better, but there is not enough data yet.
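
As a back-of-the-envelope illustration (treating the midpoint of that range, roughly 1 fatality per 70M miles, as an assumed human baseline), even a fleet that is exactly as safe as human drivers would have close to even odds of showing zero fatalities over 50M miles:

  import math

  # Assumed baseline: ~1 fatality per 70M miles (midpoint of the 60-80M range above)
  human_rate_per_mile = 1 / 70e6
  waymo_miles = 50e6

  expected = human_rate_per_mile * waymo_miles   # ~0.71 expected fatalities
  p_zero_at_human_rate = math.exp(-expected)     # Poisson P(0 events), ~0.49

  print(f"expected fatalities at the human rate: {expected:.2f}")
  print(f"chance of zero fatalities even at human-level safety: {p_zero_at_human_rate:.0%}")

So the fatality record alone cannot yet distinguish "safer than human" from "about the same as human"; that is what "not enough data" means here.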

The reports Waymo presents are authored by Waymo. Even the Swiss Re study was done in cooperation with Swiss Re, not as an independent study by Swiss Re. The studies are fairly transparent: they point to various public datasets, there are fairly extensive public reporting requirements, and Waymo has not demonstrated clear malfeasance, so we can tentatively assume they are “honest”. But we have plenty of examples of bad actors such as Cruise, cigarette companies, VW, etc. who have done end-runs around these types of basic safeguards.

Waymo's operational domain is not equivalent to the standard human operational domain. They attempt to account for this in their studies, but it is a fairly complex topic with poor public datasets (which is why they cooperated with Swiss Re), so the correctness of their analysis has not been borne out yet. When Waymo incorporates freeways into its public offering, that will enable a less complicated analysis, which would lend greater confidence to its conclusions.

Waymo is still in “testing”. As their processes appear to be good, we should assume that their testing procedures are safer than what should be expected from actual deployment or verification procedures. That is not a negative statement. In fact, it would be problematic if their “testing” procedures were less safe than, or even only as safe as, their deployment procedures. That is just how testing is. You can and must apply more scrutiny to incomplete systems in use, and you must prevent increased risk while under that scrutiny; otherwise you are almost certainly going to be worse off in deployment, where there is less scrutiny. We have yet to see how this will translate to deployment, so we will need to wait and see whether safety under test carries over to safety in release. This is analogous to the improved outcomes for patients in medical studies even when they are given the placebo, because they simply get more care in general while in the study.

Anyway, Waymo appears to be doing as well, and as honestly, as can be determined by a third-party observer. I am optimistic about their data and outcomes, but it is only prudent to avoid over-optimism in safety-critical systems and to not accept lazy evidence or arguments. High standards are the only way to safety.


Assertion: "50M miles shows that Waymo is safer than humans".

Counter-point: "That's false because Cruise had an accident for which they were at fault".

OP: "The existence of a case or some cases where a self-driving car caused injury has zero value. What matters is the rate of cases per mile driven."

You: "You do not get to counter-argue."

Yes, they do. OP's point is valid. One can't refute the original assertion by citing one accident by another company. It's a logical fallacy (statistically speaking) and a straw man (Waymo can't be safe because other self-driving cars have been found at fault). The validity of the original claim has nothing to do with an invalid counter-claim.

> However, that is still insufficient, even ignoring the lack of audits by non-conflicted parties, to strongly conclude Waymo is safer than a human.

When you have a large, open, peer-reviewed body of evidence, then yes, that's exactly what you get to claim. To reject those claims just because Waymo was involved is ad hominem. It's not how science works. It's not how safety regulations or government oversight work. If you think the evidence is insufficient, you can attack their body of work, but you don't get to reject the claim because they haven't met some unspecified and imaginary burden of proof.


There are tons of AI/ML use cases where 7% is acceptable.

Historically speaking, if you had a 15% word error rate in speech recognition, it would generally be considered useful. 7% would be performing well, and <5% would be near the top of the market.
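
(For reference, word error rate is just word-level edit distance - substitutions plus deletions plus insertions - divided by the number of reference words. A minimal sketch, with a made-up example utterance:)

  def word_error_rate(reference: str, hypothesis: str) -> float:
      # Standard WER: word-level edit distance / number of reference words.
      ref, hyp = reference.split(), hypothesis.split()
      # Dynamic-programming edit distance over words.
      d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
      for i in range(len(ref) + 1):
          d[i][0] = i
      for j in range(len(hyp) + 1):
          d[0][j] = j
      for i in range(1, len(ref) + 1):
          for j in range(1, len(hyp) + 1):
              cost = 0 if ref[i - 1] == hyp[j - 1] else 1
              d[i][j] = min(d[i - 1][j] + 1,         # deletion
                            d[i][j - 1] + 1,         # insertion
                            d[i - 1][j - 1] + cost)  # substitution
      return d[len(ref)][len(hyp)] / len(ref)

  # One wrong word in a ten-word utterance -> 10% WER
  print(word_error_rate("please call the office before noon on friday this week",
                        "please call the office before noon on thursday this week"))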

Typically, your error rate just needs to be below the usefulness threshold and in many cases the cost of errors is pretty small.


I also have little idea what this project does or wants to do. Let's just talk about the homepage, especially the above-the-fold portion.

> "Glamorous Toolkit is the Moldable Development Environment"

So it's some sort of an IDE? What does moldable mean?

> "Make systems explainable through contextual micro tools"

What is a "system" in the context of an IDE? "Contextual micro tools" also sounds completely abstract.

> "Each problem about your system is special. And each problem can be explained through contextual development experiences. Glamorous Toolkit enables you to build such experiences out of micro tools. Thousands of them ... per system. It's called Moldable Development."

... this does not help at all. Just more words without meaning.

Next, there's the video. Why would somebody with zero context so far sit through a 46-minute, low-quality video?

tudorgirba - if this is your project, you really need to focus on getting the top half of the page right. People won't watch your video, and no one will read your book, if you can't give them a hook they understand.

Use words and phrases with concrete, well-understood meanings, narrowed down with adjectives where needed:

  * Don't say "micro tool".  Like Posix utilities?  What is a tool? What makes it micro?
  * Don't say "contextual development".  Isn't all development contextual?
  * "moldable" - no one knows what this means, don't force them to try and figure it out.
  * Don't say "system", it is too abstract.
For example, "Glamorous Toolkit is an IDE for literate programming with first class support for interactive visualizations". If you can't get that sentence right, people just won't invest in learning more about your platform.


Thanks for taking the time to provide this detailed feedback!

I agree that the message is not yet clear for most. We can see it in these threads quite well. Now, this is not the first message we have tried, and we will keep trying :).

The sentence you provide is certainly interesting because it is relatable. The problem is that it talks about a fraction of what we want to convey.

At this point in time, as we do not know how to convey the idea succinctly, we are looking for people who will take the time to look at the more elaborate explanations. It turns out that such people exist. It seems to me that you might be inclined to look at it, too.

Please do let me know if you do. I would offer to show you around. And who knows, perhaps you can contribute a better presentation for what this is. What do you think?


> The problem is that it talks about a fraction of what we want to convey.

That's ok! Hint at it and let people discover it instead of trying to force them from the get-go. Utilize progressive complexity; start simple, from first principles, and add complexity in bite-sized chunks. Show, don't tell.

No one wants to have to learn an entire philosophy before they can start using a tool.

For inspiration, perhaps review how other very deep programs present themselves, for example orgmode.org. One caution there: orgmode itself is famously obtuse for beginners.

Lastly, it is a bold statement to say something like "we have discovered a new development methodology, and have designed this toolkit around that philosophy".

Such a statement requires a ton of evidence that such a methodology is useful, and currently there simply is not enough.


Thanks for the suggestion.

orgmode is certainly interesting, but again, its goal is a (small) subset of what is achievable with GT :). And as you say, even that is hard for beginners.

> Lastly, it is a bold statement to say something like "we have discovered a new development methodology, and have designed this toolkit around that philosophy". Such a statement requires a ton of evidence that such a methodology is useful, and currently there simply is not enough.

I am well aware of what that statement says. I did not utter it during the first 10 years of this journey. But by now, I do believe we have the evidence, and a good deal of it is even available publicly and freely. Of course, there is still this little issue of people actually taking the time to evaluate that evidence. If people are not going to look at the evidence, it's never going to be enough. And that's just fine, because eventually some people will look at it :).


How can you even remotely be sure of that given the results of the 2016 and 2024 elections?


Especially considering how they've always got someone to blame for the issues that they cause. It's an ancient playbook:

1. Identify a problem

2. Blame $group of people for the issue and rally citizens to you

3. Oust previous administration/rulers with rallied citizens

And then to stay in power

1. Create or identify a problem

2. Blame $group of people for the problem

3. Rally citizens to ostracize the new group of people

4. Back to 1

The issues never need to be resolved, as long as the citizens blame someone and feel empowered to lash out at whoever is being blamed.


> How can you even remotely be sure of that given the results of the 2016 and 2024 elections?

No one can be sure of anything, but the specific years you picked happen to have had possibly two of the least electable candidates on the ballot.

Biden was not even a particularly good candidate in 2020, but he was able to win. I think that’s indicative of how low the bar is.


> What about the former domestic producers who could not compete, and do not have the skills or capital to find a new job which pays as well?

What about all of the domestic producers of finished goods who now cannot afford their materials and supplies? What about the domestic supplier to that finished-goods producer whose orders get cut because the finished-goods producer needs to cut production? What about the domestic consumer who'd like to buy those finished goods but can't afford them because prices overall have increased? What about the barber shop by the finished-goods plant that has to shut down because the plant cut its workforce by 50%?

100+ years of studying tariffs have shown that they are effective when very narrowly targeted. Otherwise they almost never achieve their stated goal of actually increasing domestic production.

Should we tax goods shipped from Mississippi to NY to save jobs in NYC? After all, wages in NYC are way higher than in MS, and trade with MS pulls down wages in NYC.


We'll see if the analogy holds. Every team has the ability to use bats like this.

If no other team sees an advantage from using torpedo bats, it would be a lot like the brotherly shove.

But first we'll have to see if this is a passing fad. In baseball, pitchers evolve pretty quickly and usually lead the batter-pitcher arms race.

I'm guessing it will spread pretty quickly through the league and end up being used by a minority of hitters, and the advantage will flatten out. So a .210 hitter may hit .230. That is a big difference, no doubt, but compare that to the era when leading batters were hitting .330.


It's the strength, size, and technique of multiple people working together.

You'd think it'd be easy to watch game footage and just replicate what the Eagles do, but other teams haven't been able to get the formula right.

This is the reason that banning it is controversial. Why make it illegal when most teams can't make it work well?

