Both Intellect-2 and zero-data reasoning work on LLMs. ("Zero-data reasoning" is quite a misleading name for the method; it's not very ground-breaking.) If you want to see a major leap in LLMs, you should check out what InceptionLabs did recently to speed up inference by 16x using a diffusion model. (https://www.inceptionlabs.ai/)
Our algorithms for time-series reinforcement learning are abysmal compared to inference models. Despite the explosion of the AI field, robotics and self-driving are stuck without much progress.
I think this method has potential, but someone else needs to boil it down a bit and change the terminology because, despite the effort, this is not an easily digested article.
We're also nowhere close to getting these models to behave properly. The larger the model we make, the more likely it is to find loopholes in our reward functions. This holds us back from useful AI in a lot of domains.
In a few European countries, a "master's in computer science" is just a normal "engineering" degree with a focus on software for the specialty credits. I can call myself an "engineer", even though my software profession does not value the distinction.
Though I'm sceptical it would help. API design is generally not taught in university courses, and perhaps shouldn't be (it's too specific).
I instead feel that GDPR has already done a lot of heavy lifting. By raising the price of "find out", people got a bit more careful about the "fuck around" part. It seems to push companies to take it seriously.
Step two is forcing companies to take security breaches and security disclosures seriously, which the CRA (Cyber Resilience Act) may help with... at the cost of the swamp of bureaucratic overhead that comes with it, of course.
I mean, do you trust that the chemical industry will self regulate and keep dangerous chemicals out of your drinking water?
Then why do we trust software companies to keep you and your data safe?
We will get more regulations over time no matter how much we complain about it because people are rather lazy at the end of the day and more money for less work is a powerful motivator.
Yeah. It's unfortunate that companies cannot responsibly do the proper thing on their own.
Though I want to point out that the CRA may put an unfathomably high bureaucratic load on software companies. If it were just about security disclosure, it would be quite manageable.
As formulated in its current form, the CRA holds the general software development industry to current industrial-automation and automotive standards, which is absurd.
You can always wait another year. And then another. When is it time? 50 years? 100 years? "We will maximise good by donating it all in 1000 years" is arguably not a charity at all; it's just a massive pile of money that isn't used for anyone except paying the people sitting on it.
Even if you drip-feed donations to charity over 100 years, the sums may be insufficient to reach a usable scale.
- A big investment in research.
- A concentrated push to vaccinate against a disease so it goes away for good.
- An infrastructure investment that lifts a community out of poverty.
These themselves produce "good over time," perhaps even faster than the money in the fund rises in value. It's a balance, but immortal trickle donations likely sit quite far toward one end of that scale.
Economies of scale could vastly benefit a lot of charity work, but few charities can attain sufficient scale to achieve that. There is an unfortunate amount of overhead and administration in charities that do not directly benefit the cause.
In that sense, I suspect targeted and planned large investments into charities with scalable plans are a lot more efficient than years of trickle donations.
It's a small button in the top right corner. Very out of the way unless interacted with. I use Zed because it's faster and cleaner than VSCode. I don't want AI in my editor.
The anxiety does hit you back when in bed trying to sleep. I notice a vast difference in my ability to fall asleep if I've gone on a walk with a podcast in my ear vs just silently walking with my thoughts.
Wow, thank you for saying exactly this, this "deferred anxiety" probably does partly explain sleep issues I've had the last 3-4 years. I agreed with GP's comment and have more issues with screens, but didn't even notice this difference in how I take my walks now.
Sometimes my inner thoughts can crowd aside the podcast and I'll get home and realize I didn't hear anything from the podcast, but more often it keeps me distracted the whole time. I think unplugging from podcasts on walks and in the car is definitely worth a try.
Walking and movement are supposed to be a massive help for sleep, but in periods where I have more anxiety, this effect is stronger than the benefits of the walk itself. I keep quite decent hygiene around my phone use, but buying in-ear headphones was quite a mistake on my part, as I found this difference quite soon after.
At least according to "Irresistible" by Adam Alter, it's also close to the definition of behavioral addiction: more broadly, when we routinely and maladaptively engage with a certain behavior to avoid emotional discomfort that gets worse due to the avoidance.
LLMs are not good enough to replace programmers, authors, or journalists (and I suspect they never will be, since they still rely on accurate, human-written sources to produce anything of value).
However, AI art generators in their current form may render all artistic jobs unlivable within 20 years. Learning to draw is one of the most time-intensive skills to master. A master's degree in CS is sufficient to secure a good job, but five years of experience in art makes you a "novice". AI art is just good enough to devalue art as a whole, making it an infeasible profession to pursue, as it's already near the minimum wage on average.
In 20 years, there may not be any new professional digital artists. All art will become AI art. Do we like that world? Cheap, corporate, lazy, with no sign of effort or dedication.
I want LLMs to go away as well, but at the very least, there will always be a market for real text, and always be people able to produce it.
That’s both true and false. AI can definitely replace the "realization" part of art in some cases, not the "creativity/thinking" part. Is AI art good? Sometimes, sometimes not. Is it good enough when you just want an illustration, with images serving no artistic purpose? Often yes. AI can also definitely replace part of the "writing code" bits. Can it replace all of it? No, some parts are still too complex for AI to grasp. Is it good enough for MVPs, throwing projects into the wild and seeing if they work? Of course yes.
I've used AI art a lot for tabletop RPGs. The level of actual creative control isn't great, even for what should be an easy case of one character in profile against a blank background. Even if you know how to use it well, you're wrestling the systems involved to try and produce consistent output or anything unusual. And that's fine for Orc #3 or Elf Lord Soandso, who are only going to be featured for fifteen minutes at a time and in contexts where you can crop out bad details or use low-effort color grading to get a unified tone.
But for a graphic novel? What? I can't imagine giving up that level of creative control, even as someone who sucks at actual drawing. You'll never be able to get the kind of framing, poses, and structuring you want, doubly so the second you want to include anything remotely original. It's about the absolute worst case for actually using these generation tools.
AI art is not limited to writing a prompt and hoping for the best. There's a multitude of ways to control the generation: img2img, ControlNet, Openpose, InstantID and several other techniques. You can train LoRAs on your characters for consistency.
It takes seconds to generate a panel for a comic: you make a sketch, then generate hundreds of candidates, pick the best one, maybe correct flaws in Photoshop, and it's still faster and cheaper than drawing it yourself from scratch. It's just another workflow for an artist. I use Blender to model rough sketches of 3D scenes, then use ComfyUI to render high-quality images with lots of detail.
For webnovels I've found it useful as someone who has borderline aphantasia. But in this case, the webnovels would normally have no graphics whatsoever in their chapters aside from the webnovel's cover art (which is usually done by an actual graphic designer).
It's very obvious that they're AI generated, and the authors are typically upfront about it. I still feel a bit of an ick when I see them, and Patreon discussions for the creators I follow also have similar sentiments. Not sure if it's truly a tolerable use-case for AI, but thought I'd throw it out there.