
I suppose in the beginning, it was about finding ways to measure how effective different altruistic approaches actually are and focusing your efforts on the most effective ones. "Effective" here essentially means how much impact you achieve per dollar spent. One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate actually ends up being used to fix some problem, and how much ends up being absorbed by the foundation itself (salaries, etc.) with nothing to show for it.
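
As a toy illustration of that metric, the naive calculation looks something like this (all names and figures are hypothetical):

    # Toy illustration of the overhead metric described above;
    # all charity names and figures are hypothetical.
    charities = {
        "Charity A": {"donated": 1_000_000, "spent_on_programs": 850_000},
        "Charity B": {"donated": 1_000_000, "spent_on_programs": 400_000},
    }

    for name, c in charities.items():
        ratio = c["spent_on_programs"] / c["donated"]
        print(f"{name}: {ratio:.0%} of each donated dollar reaches the problem")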

They might have lost the plot somewhere along the line, but the effective altruism movement had some good ideas.


“Measurable altruism” would have been a better name


> One of the more convincing ways of doing this is looking at different charitable foundations and determining how much of each dollar you donate to them actually ends up being used to fix some problem and how much ends up being absorbed by the charitable foundation itself (salaries etc.) with nothing to show for it.

Color me unconvinced. This will work for some situations, but at this point the metric is well known enough to have become a target, and a measure that becomes a target ceases to be a good measure (Goodhart's Law).

The usual way to look at this is to look at the percentage of donations spent on administrative costs. This makes two large assumptions: (1) administrative costs have zero benefit, and (2) non-administrative costs have 100% benefit. Both are wildly wrong.

A simple counterexample: you're going to solve hunger. So you take donations, skim 0.0000001% off the top for your time because "I'm maximizing benefit, baby!", and use the rest to purchase bananas. You dump those bananas in a pile in the middle of a homeless encampment.

There are so many problems with this, but I'll stick with the simplest: in 2 weeks, you have a pile of rotten bananas and everyone is starving again. It would have been better to store some of the bananas and hand them out over time, but that requires space and maybe even refrigeration to hold inventory, which costs money, and that money is not directly fixing the problem.

There are so many examples of feel-good world saving that end up destroying communities and cultures, fostering dependence, promoting corruption, propping up the institutions that cause the problem, etc.

Another analogy: you make a billion dollars and put it in a trust for your grandchild to inherit the full sum when they turn 16. Your efficiency measure is at 100%! What could possibly go wrong? Could someone improve the outcome by, you know, administering the trust for you?

Smart administration can (but does not have to) increase effectiveness. Using this magical "how much of each dollar... ends up being used to fix some problem" metric is going to encourage ineffective charities and deceptive accounting.


That's fair enough; there are problems with this way of thinking. I suppose the takeaway should be "Don't donate to charities where close to your whole donation will be absorbed by administrative costs." There definitely are black sheep that act this way, and they probably served as the original motivation for EA. It's a logical next step to come up with a way to systematically identify these black sheep. That is probably the point where this approach should have stopped.


This is a super fair summary and has shifted my thinking on this a bit, thanks.


> It seems that you're saying that a therapist will be "rubbish unless they use basic Cognitive Behavioural Therapy concepts" ? i.e. that this is the only valid approach to therapy?

I believe the parent poster is saying that CBT is the only form of therapy you can trust an LLM to pull off because it's straightforward to administer.


Computerised CBT is already being delivered, even by systems quite a bit less sophisticated than LLMs. Resourcing constraints have made it very popular in the UK.


In that case, the questionable statement is the assumption that an LLM can pull off any form of therapy at all.


With many drugs that are used both therapeutically and recreationally, the average recreational dose is indeed much larger. Ketamine is an exception: therapeutic doses are actually quite high. The common mistake is to equate intravenous dosage with intranasal dosage, when bioavailability differs significantly between these routes of administration. And that's not even considering that most reported recreational dosages are wrong due to cutting agents.
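
For a rough back-of-the-envelope illustration of why the routes aren't comparable (the ~45% intranasal figure below is an assumption for the sketch, not clinical guidance):

    # Rough sketch: systemically available dose = administered dose * bioavailability.
    # The intranasal figure is an assumption for illustration, not clinical guidance.
    BIOAVAILABILITY = {"intravenous": 1.00, "intranasal": 0.45}

    def available_dose_mg(administered_mg: float, route: str) -> float:
        return administered_mg * BIOAVAILABILITY[route]

    print(available_dose_mg(100, "intranasal"))   # ~45 mg reaches circulation
    print(available_dose_mg(100, "intravenous"))  # 100 mg reaches circulation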

There certainly is recreational abuse with very large dosages, but I don't think it's fair to claim that the majority of users fall into this category.


I think we have seen a general trend towards centralized platforms on the internet. Where you had many individual niche sites before, now you have a few all-encompassing platforms. There are some exceptions, but I generally find that many of those platforms want to maximize your time on the platform itself. As a consequence, they do what they can to keep you from leaving the platform via a link to some other website.


Yes, there is additional context that is not explicitly stated in the question. It is clear that you are looking for a job to earn money and live your life, and everyone already knows this, so there is no need to talk about it. The real question is: why did you apply here, out of all the places you could have applied to?


It's not like the US doesn't have a problem with affordable housing, so I don't see how this plays any role in the divide.

Germany has plenty of applied research organizations, from universities (e.g. RWTH) to institutes like Fraunhofer. The funding schemes behind these organizations are horrible, and I would argue that in many ways they are machines for burning up potential. Even so, Germany has been doing okay on the publicly funded AI research front, but that is beside the point: the US isn't leading because of publicly funded AI efforts, but because of privately funded ones.


This seems like the classic shifting of goalposts to determine when AI has actually become intelligent. Is the ability to communicate not a form of intelligence? We don't have to pretend like these models are super intelligent, but to deny them any intelligence seems too far for me.


My intent was not to claim communication isn’t a sign of intelligence, but that the appearance of communication and our tendency to anthropomorphize behaviors that are similar to ours can result in misunderstandings as to the current capabilities of LLMs.

glenstein made a good point that I was commingling concepts of intelligence and consciousness. I think his commentary is really insightful here: https://news.ycombinator.com/item?id=42912765


AI certainly won't be intelligent while it gives episodic responses to queries, with no ability to learn from or even remember a conversation unless it is fed back in as context. This is currently the case for LLMs. Token prediction != intelligence, no matter how intelligent the output may seem. I would say adaptability is a fundamental requirement of intelligence.
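
To make the "fed back in as context" point concrete, here's a minimal sketch of a stateless chat loop; complete() is a hypothetical stand-in for any LLM API call:

    # Minimal sketch: the model keeps no state between calls, so the whole
    # conversation must be re-sent as context on every turn.

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a real LLM API call."""
        return "(model output)"

    history = []

    def chat_turn(user_message: str) -> str:
        history.append(f"User: {user_message}")
        context = "\n".join(history)   # the only "memory" the model ever sees
        reply = complete(context)
        history.append(f"Assistant: {reply}")
        return reply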


> AI certainly won't be intelligent while it gives episodic responses to queries, with no ability to learn from or even remember a conversation unless it is fed back in as context.

Thank God no one at the AI labs is working to remove that limitation!


And yet, it is still a current limitation and relevant to all current claims of LLM intelligence.


The guy in Memento is clearly still an intelligent human despite being unable to form new memories. These arguments always strike me as coming from a "humans are just special, okay!" place. Why are you so determined to find some way in which LLMs aren't intelligent? Why gatekeep so much?


I mean, humans have short-term and long-term memory; short-term memory is just our context window.


I keep seeing this argument, but I don't buy it at all. I want a phone with an AGI, not a phone that is only an AGI. Often it's just easier to press a button than to talk to an AI, regardless of how smart it is. I have no interest in natural language being the only interface to my device; that sounds awful. And in public, I want to preserve my privacy: I do not want everyone listening in on what I'm doing.

If we can create an AGI that can literally read my mind, okay, maybe that's a better interface than the current one, but we are far away from that scenario.

Until then, I'm convinced users will prefer a phone with AI functionality over an AI with phone functionality. And it's easier for a phone company to create such a phone than it is for an AI company.


This is clearly a joke, but just for completeness: this would be a terrible idea, as dosing is very important with this drug. Too little doesn't make sense; too much is extremely dangerous.


It is true that Germany (and Europe as a whole) suffers from a less-than-ideal investment and innovation landscape. The companies you mentioned, however, worked on products that barely make any sense. It is clear that those kinds of companies will not (and should not) survive outside of a 0% interest rate environment.


I think other countries would help these companies pivot to defense/military applications. Munich is basically gridlocked during rush hour; a flying ambulance or police car sounds good to me. And such an application would very likely be a successful export product.


We already have Airbus Helicopters development and production in that area (Donauwörth).

