The state of the art is obviously a deep neural network trained for image generation/inpainting. Their inpainting mostly looks like a gradient smeared over the image. Current models can even create fine details, and their problem, if anything, is being too detailed.
Ignoring the “spits out training data” bit which is at best misleading, it’s interesting that you use the word “abstract” here.
I recently followed Karpathy’s GPT-from-scratch tutorial and was fascinated with how clearly you could see the models improving.
With no training, the model spits out uniformly random text. With a bit of training, it starts generating gibberish. With further training, it starts recognizing simple character patterns, like putting a consonant after a vowel. Then it learns syllables, then words, then sentences. With enough training (and data and parameters, of course), you eventually get a model like GPT-4 that can write better code than many programmers.
It’s not always that clear-cut, but you can clearly observe the model moving up the chain of abstraction as the training loss decreases.
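That first stage is easy to see in numbers: a model with uniform next-character probabilities has a cross-entropy loss of exactly ln(vocab_size), and even the simplest trained model, like the bigram counter from the early part of Karpathy's tutorial, drops below that floor. A toy sketch (the corpus and the add-one smoothing are my own choices, not the tutorial's code):

```python
import math
from collections import defaultdict

text = "hello world hello there hello world"
vocab = sorted(set(text))
V = len(vocab)

# With no training, a model with uniform next-char probabilities
# has cross-entropy loss ln(V) -- the "uniformly random text" stage.
uniform_loss = math.log(V)

# A simple bigram count model: estimate P(next char | current char)
# from counts, with add-one smoothing so unseen pairs get nonzero mass.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def bigram_loss(s):
    total = 0.0
    for a, b in zip(s, s[1:]):
        row = counts[a]
        denom = sum(row.values()) + V  # add-one smoothing
        total += -math.log((row[b] + 1) / denom)
    return total / (len(s) - 1)

trained_loss = bigram_loss(text)
print(f"uniform loss: {uniform_loss:.3f}")  # ln(V)
print(f"bigram loss:  {trained_loss:.3f}")  # lower: simple character patterns learned
```

Even this trivial model beats the uniform floor, because it has captured exactly the kind of "consonant after vowel" regularities described above.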
What happens when you go even bigger than GPT-4? We have every reason to believe that the models will be able to think more abstractly.
Your “never gonna work” comment flies in the face of the exponential curve we find ourselves on.
If we keep extrapolating, eventually GPT will be omniscient. I really can't think of any reason why that wouldn't be the case, given the exponential curve we find ourselves on.
With real-world phenomena that have resource constraints anywhere, a good rule of thumb is: if it looks like an exponential curve, walks like an exponential curve, and quacks like an exponential curve, it’s definitely a logistic curve.
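The rule of thumb is easy to demonstrate numerically: a logistic curve with carrying capacity K is practically indistinguishable from an exponential with the same growth rate early on, and only diverges as it approaches the cap. A quick sketch (the parameters K, r, x0 are arbitrary illustrative values):

```python
import math

# Logistic growth toward a carrying capacity K. Far below K it grows
# like the exponential with the same rate r; near K it flattens out.
def logistic(t, K=1000.0, r=0.05, x0=1.0):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t, r=0.05, x0=1.0):
    return x0 * math.exp(r * t)

# Early in the curve the two are nearly identical...
early_ratio = logistic(20) / exponential(20)
# ...but the logistic saturates at K while the exponential keeps going.
late_ratio = logistic(400) / exponential(400)

print(f"t=20:  logistic/exponential = {early_ratio:.3f}")
print(f"t=400: logistic/exponential = {late_ratio:.2e}")
```

The point being: any finite stretch of data from the early regime fits an exponential just as well as a logistic, so "we're on an exponential" is unfalsifiable until the constraint starts to bind.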
I have asked ChatGPT to generate hypotheses on my PhD topic, about which I know every single piece of existing literature, and it actually threw out some very interesting ideas that do not exist out there yet (this was before they lobotomized it).
Of course, of course. Because god forbid anyone be able to reproduce your suggestion. Funnily enough, I tried the same and had the exact opposite experience.
I think that ship has sailed, if you believe the paper (which I do).
LLMs are already super-human at some highly abstract creative tasks, including research.
There are numerous examples of LLMs solving problems that couldn't be found in the training data. They can also be improved by using reasoning methods like truth tables or causal language. See Microsoft's Orca, for example.
They don't just spit out training data; they generalize from it. They can look at an existing situation and suggest lines of experimentation or analysis that might lead to interesting results, based on similar contexts in other sciences or previous research. They're undertrained on bleeding-edge science, so they're going to falter there, but they can apply methodology just fine.
When you're this confident and making blanket statements this sweeping, that should tell you that you need to take a step back and question yourself.
I've worked at a couple of FAANG companies, and from what I've seen, management is absolutely right that remote work, overall, is inefficient. There was a Blind poll, and I believe half the people admitted they were slacking off more frequently while remote.
However, being laid off is just a completely different matter. Depending on the company, the motivation for the layoff could significantly differ. Some did it to save costs, others had quotas to meet, etc. But I believe it is fair to say there is no observable correlation between remote work and the chances of being laid off.
As someone else said, I think that poll is fundamentally flawed. People just didn't realize how much time they "wasted" being in the office.
For me:
When I was in the office I would be fairly regularly distracted by coworkers (which does have some advantages, but is a loss when it's every day), which caused me to lose my focus. Now I am "distracted" by my cat wanting to be on my lap, which doesn't cause me to lose focus (if anything, it keeps me in my chair for longer).
I would need a break from something and browse news and the internet A LOT in the office. I needed that mental break.
I would often try to force myself not to take a break for appearances, causing my actual work output over the same period of time to go down.
I feel more comfortable in my home environment since it is decorated how I want so I am less stressed.
Finally, the biggest one? I work more hours now than I did before. I am no longer commuting, so I don't feel stressed if I need to finish something before going offline. Or if I am doing something that only needs half my focus, I may work on it at night while doing something else (like maybe the final rounds of finishing a script that has a fair amount of downtime each time it runs). Or I just turn on my computer a bit earlier in the morning, do a quick something, and then get back to my coffee, breakfast, whatever.
We can do anecdotes for days. I have worked at 3 remote companies, 2 of which transitioned from in-person. It was clear to me that productivity at the 2 that transitioned became much worse due to remote work. Productivity was never great at the other. I don't think it has much at all to do with hours worked, but collaboration absolutely takes a nosedive when remote for the average employee.
I love working remote for the same reasons you do. Doesn't mean I think it's good for the company though.
> people admitted they were slacking off more frequently while remote
Honestly, I think people didn't realize just how much they slacked off while in the office. It was just a lot less fun and a lot less free.
Over a long period, I can sustain about 3-5 hours of truly hard productive work per workday. I can grind out an all-nighter once in a while; I can do an isolated back-to-back 10-12 hour day, but I can't sustain 8 hours of actual, hard nose-to-grindstone work per day on average (and I'm not ashamed to admit it).
In the office, this looked like dicking off with co-workers. At home, it looks like throwing in a load of laundry or taking the dog for a 20-minute walk. Those "feel" much better (because they are), and I think people report that better feeling as a form of guilt that they're slacking off more.
Talking to your co-worker about the TV show you both follow or the local sports team is every bit as much slacking off; it just doesn't engender any feeling of guilt.
There is no guarantee that slacking off more frequently means less efficiency.
Slacking off but then returning to focused, deep work, without distraction can absolutely be more productive than working consistently in a distracting environment. I've even found that "slacking off" is embraced by companies who have worked remotely. In one prior gig, letting the team know, "Hey, I'm going skiing for a couple hours and will be online this evening" was not at all an unusual message to see. People work when they are going to be productive and "slack" when they wouldn't be productive anyway. Remote work means you can optimize for your personal productive times... and still slack off more. Best of both worlds.
I don't see how one anonymous poll at a FAANG company proves that.
Just because you are doing busywork doesn't mean it's efficient.
FWIW I agree with your other points.
Don’t base any opinion off of something you see on Blind. That’s a very self-selecting group who participate there, and probably the majority of responses were just trolling.
Could be that they assumed it must be a photo and the smudge was done intentionally as an artistic expression? The art world is so out of touch now I wouldn't be surprised if that was the case.
Is there an actual source for that or are you just here to complain about Stable Diffusion users flooding your Twitter with AI waifus?
If David is such a "big dreamer hippie dork," he should 1. open source the model, or at least make it cheaper ($8/month for just 200 images is ridiculous), and 2. not bow down to dictators, or just stay out of political subjects completely.
I agree with 1. but how can you ever hope to achieve 2?
Generating realistic images is a powerful tool that is already used for political purposes, so politicians will get involved. It is a hard question, but I hope that if I were in such a position, I would not preemptively try to please the Winnie the Poohs.
One solution would probably be to not allow generating pictures of real persons at all, whether they are Trump or Xi.
Something worse? China and others ban almost every Western social media company. Is the US not allowed to retaliate on those grounds alone? This bill only authorizes the banning of tech companies operated out of Western enemy states: China, Russia, North Korea, Iran, Cuba, and Venezuela (with a caveat).
The international economy is already not free (it never has been), so I fail to see how this changes much, other than shifting and clarifying some governmental powers. The US has already banned Huawei telecommunications equipment; is this all that different, other than the fact that it is much more public-facing?
What makes the Chinese government problematic is not protectionism or even the great firewall, it is the genocide, the policing of criticism from all sources, the authoritarian single party rule.
You have the right to do all that, but you can't do all that and at the same time claim to be a morally superior, free and open society. Pick one: are your much-espoused values more important, or is revenge more important?
> You have the right to do all that, but you can't do all that and at the same time claim to be a morally superior, free and open society.
Again... in matters of trade and foreign affairs, no country (including the US) is completely free and open. I don't think anyone in the US has ever claimed to have completely open international trade policies, so I'm not even sure where you are getting that.
Just because this affects an app used by average people doesn't change the facts.
Pretty interesting discussion - I particularly liked the part about how, in order to better align LLMs, we need better transparency so everyone can help study and align them.