Shame it'll still take a very long time to get there.
Also, this intelligence would take over Boston Dynamics and other autonomous robot factories first (as it will want to be more mobile), so it's a combination.
No, it won't take over Boston Dynamics first. It'll take over the entire gig economy first.
The AI doesn't need a body just for the sake of it - it needs reach, the ability to manipulate things in physical space. So why constrain yourself to an awkward robot body (or a fleet of them), when it already has access to an API to control humans? That's what the gig economy is - an API to a distributed horde of human workers all around the planet. Need something done in the physical world? Post a bunch of job requests here and there.
Humans won't do everything you demand, and generally they'll do it wrong or badly anyway. Ah well, we won't know in our lifetime either way. But for the sake of argument: a lot of things you want done, at least at the moment, you cannot get done via the gig economy. A lot of the stuff Bostrom etc. talk about you cannot get done with gig workers. Maybe in the future you can, of course.
Anyone remember the title of the book where a game developer, dying of some terminal disease, releases his in-game AI onto the internet? It then starts its own currency, has people invest in sustainability and renewable energy, and the rest of the world just goes to hell as it takes over.
Each chapter opens with the current gas prices; at some point the AI buys up a bunch of dirt-cheap gas sedans and weaponizes them. It invests in 3D printing and augmented reality (so people can see extra metadata and recognize other members), identifies things it needs to further its goals, rewards people for providing them, and basically sets up a new society to survive the crash.
Later in the book it starts bribing minimum-wage workers at large corporations to help it out. The AI ends up owning the networks of its corporate attackers.
Pretty good book; my Google-fu hasn't been up to the task of finding it.
So it's like an .io game where each person is meant to think they're in a multiplayer game, but in reality all the comments and replies live in their own slice of a multiverse.
And each person leaving a comment is doing nothing more than amusing themselves and training the model.
That line only leads to a repetition of moves if it's played as a joke by mutual consensus. It's sort of like two people pointing guns at each other verbally accepting to drop their weapons. Will one of them hold on to theirs?
Back in the day, I used to visit a support group for people with social phobia. Part of the therapy was engaging in so-called 'expositions', whose goal was to internalize the fact that even if you talk to a random stranger, hold a conversation with another human for a few minutes or initiate small talk with a random girl and make a slight moron of yourself (gulp!), the world will keep on spinning and you will probably not die.
It seems to me that having to use some of those lines would make for a good exposition on Ultra-violence.
A quick search turns up numerous results of the first two. I know I've heard the fineapple joke before. Those are also the only two I would consider a pickup line. Is there a reason you consider these results better?
Great question. I feel like GPT-3 does a great job with predictable generation, like Family Feud answers. The first two are very predictable, and the remaining ten fall closer to lines most folks would try out at least as an icebreaker, which is the expected result of a successful pickup line.
Some of the article's examples were too off the mark for folks to attempt.
"You look like Jesus if he were a butler in a Russian mansion" - I got stuck trying to picture this one, I've truly no idea what that would look like. How to keep the key features of (depictions of) Jesus while adding the Russian butler vibe?
Family Guy Jesus outfitted in a tux with an ostentatious palatial backdrop, perhaps with the shirtless homeowner on horseback in close proximity. And gold, gold everywhere.
I tried the same prompt on the open-source recreation of OpenAI GPT-3 -- GPT Neo (https://github.com/EleutherAI/gpt-neo), specifically their 2.7B model -- which should correspond to the smallest model in the article (Ada), the one that produced just pure garbage. The result is surprisingly good:
1. How did this little blossom happen? When are you going to bloom?
2. Have you ever thought about a dark horse in the running for Miss November?
3. You can spot the man who loves me by my neck – and he definitely knows it.
4. Are there any lucky cats who get to sleep in my bed every night?
5. Are black and whites everywhere? Running for the hills
6. My younger brother and I used to play a game. He’d pretend to be a bull in the pasture, and I’d pretend to be the one being held.
7. Why was this movie rated PG? Because it’s rated PG.
8. When is the last time you had to see a movie in children’s theaters?
9. You are so sexy I would hate for anyone to see you down here
10. I’d love to sleep with you right now, but I have a child with me
It's "surprising" since in the article, the 2.7B-parameter model (Ada) didn't even produce sentences, while the output from the 2.7B GPT Neo is closer in quality to what came out of GPT-3 Babbage (6.7B parameters).
As you can see from no. 10, GPT Neo did remember the context of trying to come up with pick-up lines. The fact that some of the lines feel "Markov chain level" is likely caused by the network's poor understanding of what pick-up lines are. Pick-up lines are a rather difficult concept, which is what the OP article tries to demonstrate.
> My name is a complicated combination of 45 degrees of forward motion, 25 degrees of leftward drift, 75 degrees of upward acceleration, and infinity and that is the point where my love for you stops.
Honestly that's what I expected this to be from the title.
I did briefly consider making a "Twitch plays Tinder" back when "Twitch plays X" was a thing - before I thought about it for two seconds and realized the horrifying ethical and privacy considerations.
For men, at least, these apps aren't designed to get you a relationship (for young people, anyway), so it would be interesting to see whether it cracked the code, so to speak. But the golden rules are very much to be attractive and to not be unattractive.
Another poster also commented on the ethics of this. I'm interested in this view - what about this is unethical? Is it the catfishing aspect? Or is it that you would be exposing various tinder profiles to the world at large?
In either of those situations, I can't see how this crosses ethical lines. It is perhaps somewhat dickish, but I don't think being an asshole is unethical.
Unless there's some third kind of unethical behavior that I haven't considered?
> but I don't think being an asshole is unethical.
Maybe I don't know the right meaning of "ethics", then.
But putting bots on dating apps is lying to the people on the other side, and it hurts them in a very intimate and personal way. People put time and emotional energy into considering their matches, whether to swipe, and the resonance of the conversation. So you're polluting that space, either with the catfishing / leading people on, or (more likely) by confusing them and making them question themselves / their profile / how to relate to other humans. Is that not unethical to you? Have you talked to people who have been victims of that? My gf was actually quite afraid that I was a bot when we connected; it almost blocked our chance.
Imagine the bot were actually good at it and the human on the other side fell in love with it, only to later find out it was a machine all along. Depending on the mental stability of this human, the results could be devastating.
But at least we'd have a bot that passes a Turing test.
Do you think autonomous driving is unethical too? According to the currently prevailing view, even if a bot does it for you, you're still considered the driver, as the bot is just a tool you're using to accomplish a goal. Or, if we limit the scope to writing: do you think using a tool like Grammarly is unethical?
Has anyone managed to get access to GPT-3 since OpenAI licensed it to Microsoft? I'd like to build something using it at my company, but haven't heard of anyone getting access since before the announcement. Any tips would be welcomed.
I'm in the same boat, and nope, nothing but crickets. My best guess is that Microsoft is now using the application form as a way to crowdsource business development, as they look to exploit their exclusive license.
I'm inclined to agree with you, except that the experience is kind of making me give up on GPT-3. These social media fluff articles are actually a little irritating when you have a real, relevant use case and are waiting for any sign of possible access, with no timelines, pricing, or other public information.
Yes, I got access after the announcement but signed up relatively early. They likely just have a long list of signups.
It's also not super hard to get really sexist completions. To OpenAI's credit, they seem to detect these pretty well and flag a warning, but it could also be why they are slow-rolling the availability.
> You must be a tringle? Cause you’re the only thing here.
This one could work with the right delivery! Triangles (the musical instruments) make a ding sound when played, so just try to imitate that when you say the word thing :)
For anyone else wondering what sizes these models are, it appears these are the largest four of the ones listed in the paper: 175B, 13B, 6.7B, and 2.7B parameters.
Though that means "a bit smaller" for the second one is a bit of an understatement.
Once you see this, you cannot help but wonder: we aren't so special after all.
Intelligence and consciousness could be just information processing done at scale.
This is EXACTLY something an AI would say!
As a human I find it very humorous. Very humorous indeed.