"rapid, iterative Waterfall" is a contradiction. Waterfall means exactly one iteration. If you change the spec after implementation has started, then it's not Waterfall. You can't change the requirements, and you can't iterate.
Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.
> Then again, Waterfall was never a real methodology; it was a straw man description of early software development. A hyperbole created only to highlight why we should iterate.
If only this were accurate. Royce's chart (at the beginning of the paper — what became Waterfall, but not what he recommended by the end of the paper) was adopted by the DoD. They're slowly moving away from it, but it's still used on many real-world projects and fails about as spectacularly as you'd expect. If a project delivers on time, it's because they blow up the budget and have people work long days and weekends for months or years at a time. If it delivers on budget, it's because it ships late or cuts features. Either way, the pretty plan put into the presentations is not met.
People really do (and did) think that the chart Royce started with was a good idea. They're not competent, but somehow they got into management positions that let them force this stupidity on real projects.
That's not what AGI used to mean a year or two ago. That's a corruption of the term, and using that definition of AGI is the mark of a con artist, in my experience.
I believe the classical definition is, "It can do any thinking task a human could do", but tasks with economic value (i.e. jobs) are the subset of that which would justify trillions of dollars of investment.
Any definition of AGI that doesn't include awareness is wrongly co-opting the term, in my opinion. I do think some people are doing that on purpose: that way they can get people who are passionate about actual AGI to jump on board working with/for unaware "AGI".
> It makes some sense for an AI trained on human persuasion
Why?
> However, results will vary.
Like in voodoo?
I'm sorry to be dismissive, but your comment is entirely dismissing the point it's replying to, without any explanation as to why it's wrong. "You are holding it wrong" is not a cogent (or respectful) response to "we need to understand how our tools work to do engineering".
Yes. Personal data under GDPR is "any information relating to an identified or identifiable natural person". If it's data about a specific person, it's personal data — it's a very straightforward definition. Businesses need a lawful basis, such as informed consent or legitimate interest, to store or process it.
I'm not sure what point you are trying to make. Are you saying that, in order to make LLMs better at learning, the missing piece is making them capable of interacting with the outside world? Giving them actuators and sensors?
You can get this experience in Android (without sideloading) using Firefox with the correct plugins. I don't have an iPhone so I don't know if the same is true there.
Anyone who says "we will have, within this generation, technology to extend your lifetime indefinitely" is lying just as much as the priest who says he knows God exists[1]. I would say it's more likely that the scientist liar is accidentally right than that the priest is; that doesn't make either of them someone you should trust.
At the current stage of technology, belief in this process is based only on hope. It's essentially religious.
[1] Possibly they both believe they are telling the truth, so you could argue they are wrong rather than lying. They are still both standing on the same ground.
Actually, all the heads of labs and the two most-cited scientists are saying exactly this: Hassabis, Hinton, Bengio, and Amodei. It's crazy to think they are lying priests and to assign this 0% probability. It's really short-sighted.
Oh, thank goodness you've finally shifted the goalposts! In other comments you were arguing that radical life extension was impossible, but now it's merely impossible within our lifetime! That's a huge shift!
I made two comments in this thread. The one you replied to, and this one I'm using now to respond to you. Do you have me confused with someone else?
But yeah, I think "within our lifetime" is a critical qualifier, and most people who don't write it down are implicitly assuming it's obvious. I have very limited interest in technologies that will not exist until centuries after I'm gone, other than as entertainment.
Without that qualifier, almost any practical discussion about technology is moot. It's fun to talk about FTL or whatever, but we certainly should not be investing heavily in it... It might be possible, but most research in that direction would be wasteful.
Furthermore, mobile robots currently in home use--vacuum and mop robots--are all wheeled, of course. We've shown we can accommodate wheeled robots in the home if we feel like the payoff is worth it.
Well, I think wheels match the use case there: a small bot close to the ground with just the one job. I think there will be many wheeled bots to begin with, but long term I don't see that form factor scaling to "able to do all tasks around the house".
It's super easy to come up with scenarios a wheeled bot can't cope with, but again, "good enough, cheap enough" will probably see lots of wheeled bots on the market. I'm just trying to show why the pioneering companies would be interested in bipedal bots: it's a long-term play.
Lastly, the elephant in the room is that basically all "general purpose" bots are a euphemism for military bots that will need to operate in unknowable conditions.
Exactly. We only need legs where they're specifically needed, and we already have wheeled robots, so building legged robots that can move like a human will cover many cases we currently can't.
And even more important are arms and hands; legs are a precursor to that. Legs are much simpler, so it's smart to start with them and then try to make good arms and hands.
Give me the option between a humanoid-looking robot that takes care of all in-house chores and a utility-based one with wheels and arms, and I will likely choose the humanoid one, even at a 100% price premium.
I mean, I wouldn't buy either unless I could be certain it's not uploading all its data to the cloud and isn't fully controlled by a user-hostile company. But if we're talking fantasy tech à la Detroit: Become Human... yeah, I'd be willing to spend a lot of money to have all chores taken care of by a humanoid robot.
And before someone talks nonsense again re: "you already can, just pay someone to do it for you"... I do not want strangers in my home. That's also essentially why I wouldn't want any cloud-connected bot in it.
The cloud situation is where it's probably going to fall down at first (haha). I don't see companies choosing to offer local model integration over the possibility of using the robot as a loss leader for a long-term subscription to the compute/inference.
But that's going to be hilarious. Imagine your internet goes down while the bot is halfway down your stairs, or in the middle of pouring a drink. Very fun.