6 months ago for $575, I picked up a 15" 1080p IPS display laptop with an AMD Ryzen 7 6800H (8 cores / 16 threads), 32 GB of DDR5 RAM, a Radeon 680M iGPU that can use up to 8 GB of VRAM and a 1 TB NVMe SSD, with a backlit keyboard, a bunch of USB ports and an HDMI port. It weighs the same as a MBP and comes with a 2 year manufacturer warranty. It's upgradable to 64 GB of RAM and a 2 TB SSD. It has Windows 11 but all of the parts are compatible with Linux if you want to go down that route.
It's from a brand I'd never heard of, the Nimo N155, but I took a gamble and so far I couldn't be happier. The only problem now is there are major shortages and prices are jacked up because of the RAM situation. The same model is $700 today and much harder to find; even their official site is out of stock on this model.
High quality for their time. The toilet bowl was very heavy for its screen size, and had minimal volume for battery. The G3 iBook lacked rigidity, and had a tendency to damage the mainboard if picked up from a corner. The G4 iBook had grounding issues, and would occasionally get spicy with two-prong outlets. All three of these issues were directly related to the plastic chassis. All three were great laptops for their day; none would be acceptable in this decade.
There’s nothing wrong with plastic as a material, but there’s a lot wrong with many of the designs of mid-tier laptops that happen to use plastic. The plastic isn’t as much a cause of their problems as it is a signature feature of all hastily assembled corner-cut devices.
It's made of metal and is sturdy. I've taken it on 2 trips (including international) and it's all good and still feels like new, but to be fair, I don't abuse it. For traveling I put it into a regular backpack that has a laptop sleeve; I don't use extra packing.
The trackpad is of course not as good as Apple's, but it's good enough that it's not in the way and feels ok to use.
The brightness and battery life both fall into the same category: neither has negatively impacted me in my day to day. For example, a few hours of dev work in the park with the sun out hasn't been a problem for either battery life or visibility.
You're right that I don't value battery life as a top tier feature. ~5 hours of "real work" is enough, because if you need extended battery life for doing intensive tasks away from human civilization you can always keep a power bank on hand. If you're not out in the middle of nowhere, access to a power outlet is readily available.
> A much larger laptop with less than half the number of display pixels is not really the same market. And how's that battery life?
Yes, the display isn't as good but the Neo with 512 GB of storage is already $700 and has half the storage of the other laptop. The Neo also has 8 GB of RAM vs 32 GB. Big differences IMO.
Battery life is "good enough" but not great. It really depends on what you're using it for. If you're doing CPU bound tasks a lot, it's not going to last as long. I guess a takeaway is I was never in a situation where I had to change my behavior because of the battery life. Unless you're planning to be out in the middle of nowhere without a power bank for an extended period of time doing workload intensive tasks, it's fine.
Likewise, the display being only 1080p isn't as bad as you would think. I'd be surprised if anyone is running their 13" Neo at 2408 x 1506 at native scaling. That would be 219 PPI. For reference I run a 4k 32" monitor at native 1:1 scaling and that's 138 PPI. It would be bonkers to consider using 219 PPI from a normal viewing distance. Most scaled resolutions with the Neo would be effectively 1080p resolution but with sharper text.
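For anyone who wants to check the math, PPI is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch in Python:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count over diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

neo = ppi(2408, 1506, 13)   # ~218.5, the ~219 PPI figure above
mon = ppi(3840, 2160, 32)   # ~137.7, the ~138 PPI figure above
```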
You're not in the market for a netbook-type machine if this is the case.
> but with sharper text.
Text huh? Sounds important.
> Battery life is "good enough" but not great.
So, do you want a lightweight client / light productivity machine with tons of battery life, great text, and a kickass trackpad? Or an affordable workstation replacement? Different markets.
What you’re missing is that the target market for this device — the casual laptop user — DGAF about memory or storage if it’s at the expense of the directly observable user experience.
Few people want or need 32 GB of RAM, or give a shit about what it even means. Most people just want to run MS Word and Google Chrome and maybe TurboTax.
Sure but if people want a device for only casual browsing and are ok with 256 GB of storage and 8 GB of memory they can get a Chromebook for half the price of the Neo. Not all of them are bad, there's tons in the $300 range with good enough specs for casual usage.
If you want to spend ~$600-700, the laptop I mentioned fits the bill for casual use, a development workstation, media editing and casual gaming at a directly comparable price to the Neo. I replied initially because you wrote nothing good exists in the $600-700 range.
Again, this device isn’t for someone who’s buying based on specs. Nor is it for somebody who’s buying based on price.
It’s for somebody who goes to the store, puts their hands on the keyboard, uses the touchpad, looks at the screen, feels the chassis, and then makes their decision. This is how regular people purchase these commodity items. Most people have no clue what the difference between storage and memory is. They just want to know: will it run [software]? That’s all the specs they need to know. Maybe the battery life as well.
If you haven’t already, go put your hands on one of these at the store. There’s no $600 laptop that feels like it.
> It’s for somebody who goes to the store, puts their hands on the keyboard, uses the touchpad, looks at the screen, and feels the chassis, and then makes their decision.
We might live in different areas of the world. Every person I know who isn't into tech has never walked into a store by themselves and bought a laptop based on feel or a hunch.
They always get a recommendation from someone who is into tech, either for a specific model to buy online or someone to go with in real life at a store to help them make a purchase.
I don't blame them either, I wouldn't make a big purchase with no information and trust the sales floor to give high quality personalized advice.
Yep I had a GeForce 750 Ti (2 GB) and I was able to run a ton of things on Windows without any issues at all.
As soon as I switched to Linux I had all sorts of problems on Wayland where as soon as that 2 GB was reached, apps would segfault or act in their own unique ways (opening empty windows) when no GPU memory was available to allocate.
Turns out this is a problem with NVIDIA on Wayland. On X, NVIDIA's drivers act more like Windows. AMD's Linux drivers act more like Windows out of the box on both Wayland and X. System memory gets used when VRAM is full. I know this because I got tired of being unable to use my system after opening 3 browser tabs and a few terminals on Wayland so I bought an AMD RX 480 with 8 GB on eBay. You could say my cost of running Linux on the desktop was $80 + shipping.
tmux by itself lets you create any number of sessions, windows and panes. You can arrange them for anything you want to do.
Having a pane dedicated to some LLM prompt split side by side with your code editor doesn't require additional tools, it's just a tmux hotkey to split a pane.
There are also plugins like tmux-resurrect that let you save and restore everything, including across reboots. I've been using this setup for like 6-7 years; here's a video from ~5 years ago but it still applies today: https://www.youtube.com/watch?v=sMbuGf2g7gc&t=315s. I like this approach because you can use tmux normally, there's no layout config file you need to define.
It lets me switch between projects in 2 seconds and everything I need is immediately available.
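Concretely, the workflow above is just a handful of stock tmux commands (session names here are made up):

```shell
# one session per project; -A attaches if it already exists
tmux new-session -A -s myproject

# inside tmux, split the current window into side-by-side panes
# (same as the default "prefix + %" key binding)
tmux split-window -h

# jump to another project's session (default "prefix + s" shows a picker)
tmux switch-client -t otherproject
```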
Why do you often need to re-prompt things like "can you simplify this and make it more human readable without sacrificing performance?". No amount of specification addresses this on the first shot unless you already know the exact implementation details in which case you might as well write it yourself directly.
I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.
I sometimes use AI for tiny standalone functions or scripts so we're not talking about a lot of deeply nested complexity here.
> I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.
Are you stuck entering your prompts in manually or do you have it set up as a feedback loop, like "beautify -> check beauty -> if not beautiful enough, beautify again"? I can't imagine why everyone thinks AIs can just one shot everything like correctness, optimization, and readability; humans can't one shot these either.
I do everything manually. Prompt, look at the code, see if it works (copy / paste) and if it works but it's written poorly I'll re-prompt to make the code more readable, often ending with me making it more readable without extra prompts. Btw, this isn't about code formatting or linting. It's about how the logic is written.
> I can't imagine why everyone thinks AIs can just one shot everything like correctness, optimization, and readability; humans can't one shot these either.
If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning. If not, we're admitting it's providing an initial worse result for unknown reasons. Maybe it's to make you as the operator feel more important (yay I'm providing feedback), or maybe it's to extract the most amount of money it can since each prompt evaluates back to a dollar amount. With the amount of data they have I'm sure they can assess just how many times folks will pay for the "make it better" loop.
Why do you orchestrate the AI manually? You could write a BUILD file that just does it in a loop a few times, or I guess if you lack build system interaction, write a python script?
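For example, a minimal sketch of that loop in Python, where `ask_llm` is a hypothetical stand-in for whatever model CLI or API you'd actually call:

```python
def refine(code: str, ask_llm, rounds: int = 5) -> str:
    """Re-prompt the model against its own output a fixed number of times.

    ask_llm is a hypothetical callable (prompt -> response); wire it up
    to your model of choice.
    """
    for _ in range(rounds):
        code = ask_llm(
            "Can you simplify this and make it more human readable "
            "without sacrificing performance?\n\n" + code
        )
    return code
```

You could also have each round run your tests and stop early once they pass and the diff stabilizes.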
> If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning.
This is the wrong way to think about AI (at least with our current tech). If you give AI a general task, it won't focus its attention on any of these aspects in particular. But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.
People who feel like AI should just do the right thing already without further prompting or attention focus are just going to be frustrated.
> Btw, this isn't about code formatting or linting. It's about how the logic is written.
Yes, but you still aren't focusing the AI's attention on the problem. You can also write a guide that it puts into context for things you notice that it consistently does wrong. But I would make it a separate pass, get the code to be correct first, and then go through readability refactors (while keeping the code still passing its tests).
I have zero trust in any of these tools and usually I use them for one-off tasks that fit well with the model of copy / pasting small chunks of code.
> But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.
I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.
Make (or whatever successor you are using; I'm sure no one actually uses nmake anymore) is pretty reliable at filling in templates that feed into prompts. And AI is pretty efficient at writing makefiles, lowering the effort/payoff threshold.
> I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.
I don't think this would work very well right now. I find that the AI is good at writing code, or maybe optimizing code, or maybe making the code more readable (that isn't one I do often, but optimization all the time), but if I ask it to do it all at once it does a worse job. But I guess you could imagine a wrapper around LLM calls (ClaudeCode) that does multiple rounds of prompting, starting with code, then improving the code somewhat after the code "works". I kind of like that it doesn't do this though, since I'm often repairing code and don't want the diff to be too great. Maybe a readability pass when the code is first written and then a readability pass sometimes afterwards when it isn't in flux (to keep source repository change diffs down?).
There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to not prompt merely WHAT you want, but also HOW you want it done (you can get insanely detailed or just vague enough); in some cases the WHY is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you know. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it.
I have one-shot prompted projects from an empty folder to a full featured web app with accounts, login, profiles, you name it. Insanely stable, maybe an oops here or there, but for a non-spec single prompt shot, that's impressive.
When I don't use a tool to handle the task management, I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify the technology you want to use and the design patterns.
Besides having a script to run `myip`, `myip --local` or `myip --public`, the post and video go over using strace to monitor network traffic to make sure the command to help get your local IP address isn't accessing the public internet.
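For reference, the generic shape of that strace check looks like this (the `myip` name comes from the post; substitute any command you want to audit):

```shell
# log only network-related syscalls made by the command (and its children);
# a truly offline code path should show no connect() to a public address
strace -f -e trace=network -o /tmp/myip-net.log myip --local
cat /tmp/myip-net.log
```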
Being able to scale an image without losing quality is going to be handy. I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality and having to delete the layer, then re-import the image to get the original quality back.
It's because each transform was "destructive" (like filters used to be by default). What link & vector layers do instead is store a transform matrix, so each transform just updates the matrix instead of actually re-rasterizing the layer each time.
We were hoping to expand that feature to all layer types for 3.2, but we ran out of time to properly test it for release. It'll likely be finished for the next minor release.
I can't speak for all of us, but generally no (in terms of GenAI at least). There are concerns about generated code not being compatible with GPL, and honestly a lot of the drive-by GenAI coded merge requests tend to not work.
I see you are getting downvoted but I don't blame you for this question. I've been curious about what developers of established products are doing with LLM assisted coding myself.
Like most of us, they're certainly using AI-assisted auto-complete and chat for thinking things through. I highly doubt they're vibe coding, which is how I interpret the parent's question and probably why they are being downvoted.
This is insulting to our craft, like going to a woodworkers convention and assuming "most of [them]" are using 3D-printers and laser cutters.
Half the developers I know still don't use LSP (and they're not necessarily older devs), and even the full-time developers in my circle resist their bosses forcing Copilot or Claude down their throats and in fact use zero AI. Living in France, I don't know a single developer using AI tools, except for drive-by pull-request submitters I have never met.
I understand the world is nuanced and there are different dynamics at play, and my circles are not statistically representative of the world at large. Likewise, please don't assume this literally world-eating fad (AI) is what "most of us" are doing just because that's all the cool kids talk about.
Your IDE either uses an LSP or has its own baked-in proprietary version of an LSP. Nobody, and I mean nobody, working on real projects is "raw dawgin" a text file.
Most modern IDEs support smart auto-complete, a form of AI assistance, and most people use that at a minimum. Further, most IDEs do support advanced AI assisted auto-complete via Copilot, Codex, Claude or a plethora of other options, and many (or most) use them to save time writing and refactoring predictable, repetitive portions of their code.
Not doing so is like forgoing wheels on your car because technically you can just slide it upon the ground.
The only people I've seen in the situation you've described are students at university learning their first language...
I write code exclusively in vim. Unless you want to pretend that ctags is a proprietary version of an LSP, I'm not using an LSP either. I work at a global tech company, and the codebase I work on powers the datacenter networks of most hyperscalers. So, very much a real project. And I'm not an outlier, probably half the engineers at my company are just raw dawgin it with either vim or emacs.
Ctags are very limited and unpopular. Most people do not use them, by any measurement standard.
Using a text editor without LSP or some form of intellisense in 2026 is in the extreme minority. Pretending otherwise is either an attempted (and misguided) "flex" or just plain foolishness.
> probably half the engineers at my company are just raw dawgin it with either vim or emacs
Both vim and emacs support LSP and intellisense. You can even use copilot in both. Maybe you're just not aware...
When your language has neither name-mangling nor namespaces, a simple grep gets you a long way, without language specific support. My editor (not sure if it counts as an IDE?) uses only words in open documents for completions and that is generally enough. If I feel like I want to use a lot of methods from a particular module I can just open that module.
I don't use an IDE under the common definition. All my developer friends use neovim, emacs, helix or Notepad++. I'm not a student. The people I have in mind are not students.
Your ai-powered friends and colleagues are not statistically representative. The world is nuanced, everyone is unique, and we're not sociologists running a long study about what "most of us" are doing.
> forgoing wheels on your car
Now you're being silly. Not using AI to program is more akin to not having a rocket engine on your car. Would it go faster? Sure. Would it be safer? Definitely not. Do some people enjoy it? Sure. Does anyone not using it miss it? No.
I didn't say using different technology was cheating, and metal tools are certainly part of woodworking for thousands of years so that's not really comparable.
It's also very different because there's a qualitative change between metal woodworking tools and a laser cutter. The latter requires electricity and massive investments.
Many years ago I tested a native OS/2 image editor with this feature. It also made it possible to undo an individual transform or effect in the current stack while leaving the rest untouched. Will that be possible in Gimp as well?
Yes, it's planned for transform tools and already possible with filters. Technically our transform tools are already capable of this (they use GEGL operations the same as our non-destructive filters). We just need to tweak it to not immediately commit the transform, and then implement a UI.
When does the final calculation happen then, at file save/export? That would be unexpected. Or does it end up in the final format? That's going to be a nightmare, because then you can't use GIMP to redact data anymore.
That's up to you. Right now filters work the same way - you can merge them automatically on creation, merge them at some point while working, or merge them on export. For formats like PSD, we'll eventually add the option to export as non-destructive filters as well.
We don't want to take away choices - we just want to add more options for people's workflows.
> I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality
Maybe it's because I grew up with Paint Shop Pro 6 and such, but that seems completely normal and expected to me.
I was using Photoshop. I don't remember exactly when, but non-destructive scaling has probably been available for 15-20 years; I don't remember not having it. Glad to see GIMP is moving in this direction.
> I always found it odd that scaling down an image now and then scaling it back to its original size 2 seconds later with the same tool resulted in a loss of quality
I'm honestly baffled at your surprise... say, if you crop an image, and 2 seconds later you enlarge it to its original size; do you expect to get the initial image back? Or a uniform color padding around your crop?
Scaling is just cropping in the frequency domain. Behaviour should be the same.
From a developer perspective you're obviously correct, but from a user perspective it doesn't make sense that the tool discards information, especially when competing tools don't do that.
Of course as a developer that makes it all the more impressive - kudos to the team for making such big progress, I can't wait to play around with all the new improvements!
Cropping IS a destructive operation. If the program isn't throwing information away, then it doesn't actually do cropping, but some different operation instead.
From a user perspective I wouldn't like it, if I were to crop something and the data would be still there afterwards. That would be a data leak waiting to happen.
I genuinely can't empathize with this objection. To me it's basically the same as arguing against Undo/Redo in a text editor because someone could come along and press Undo on my keyboard after I've deleted sensitive data.
What percentage of users sends around raw project files from which they've cropped out sensitive data to users who shouldn't see that data, vs. what percentage of users ever wants to adjust the crop after applying other filters? The latter is basically everyone; the former I'm guessing is at most 1%.
Going by your text editor analogy, we are arguing against implementing undo/redo as a "non-destructive delete" based on adding backspace control characters within the text file. I want infinite undo/redo, but I also want that when I delete a character it is really gone, not hidden!
Sorry, but I still don't see it - the text editor analogy is stretched far too thin. If I share a project file, I want the other user to see all this stuff. If I don't want them to see all this stuff, I send them an export.
It would be a true shame if every useful feature was left out due to 1% of use cases becoming slightly different.
label = key:gsub("on%-", ""):gsub("%-", " "):gsub("(%a)([%w_']*)", function(f, r)
  return f:upper() .. r:lower()
end)

if label:find("Click") then
  label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
elseif label:find("Scroll") then
  label = label:gsub("(%a+)%s+(%a+)", "%2 %1")
end
I don't know Lua too well (which is why I used AI) but I know programming well enough to know this logic is ridiculous.
It was to help convert "on-click-right" into "Right Click".
The first bit of code to extract out the words is really convoluted and hard to reason about.
Then look at the code in each condition. It's identical. That's already really bad.
Finally, "Click" and "Scroll" are the only 2 conditions that can ever happen and the AI knew this because I explained this in an earlier prompt. So really all of that code isn't necessary at all. None of it.
What I ended up doing was creating a simple map and looking up the key, which had an associated value. No conditions or swapping logic needed, and way easier to maintain. No AI used; I just looked at the Lua docs on how to create a map in Lua.
IMO the above is a lot clearer on what's happening and super easy to modify if another thing were added later, even if the key's format were different.
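For illustration, the map approach looks something like this (the exact key set here is an assumption on my part):

```lua
-- Direct lookup table: no parsing, no word swapping.
local labels = {
  ["on-click-left"]   = "Left Click",
  ["on-click-right"]  = "Right Click",
  ["on-scroll-up"]    = "Scroll Up",
  ["on-scroll-down"]  = "Scroll Down",
}

local label = labels["on-click-right"]  -- "Right Click"
```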
Now imagine coding a whole app or a non-trivial script written like the first section of code. You'd have thousands upon thousands of lines of gross, brittle code that's a nightmare to follow and maintain.
My main ones are Sony MDR-V6s, which I've had for 10 years. They are the best headphones I've ever owned and they sound just as good today as they did a decade ago. The model was originally introduced in 1985, and the wire never tangles.
The other pair is a crappy $8 earbuds / mic combo that is maybe 7 years old and works just fine.
I have wireless earbuds that I occasionally use since the Pixel 9a has no 3.5mm jack. They are worse in every way that I care about. I have to babysit them to make sure they are charged.
Sure the wired earbuds get tangled sometimes but it's not a big deal to address that. I also think wired is an advantage for portable usage. For example, for running or doing any activity the wire ensures if they fall out of your ear you won't lose them. They also don't need a case so you can stuff them anywhere without a bulge.
I really admire Carmack and followed everything id software since the beginning.
They really did put a lot of things out in the open back then but I don't think that can be compared to current day.
Doom and Quake 1 / 2 / 3 were all on the cusp of what computing could do (a new gaming experience) while also being wildly fun. Low competition, unique games and no AI is a MUCH different world than today, where there's high competition, not so unique games and AI digesting everything you put out to the world only for it to be sold to someone else to become your competitor.
I'm not convinced what worked for id back then would work today. I'm convinced they would figure out what would work today but I'm almost certain it would be different.
I've seen nothing but personal negative outcomes from AI over the last few years. I had a whole business selling tech courses for 10 years that has evaporated into nothing. I've open sourced everything I've done since day 1, thousands of stars on some projects, people writing in saying nice things, but I never made millions, not even close. Selling courses helped me keep the lights on but that has gone away.
It's easy to say open source contributions are a gift, and deep down I do believe that, but when you don't have infinite money like Carmack and DHH, the whole "middle class" of open source contributors has gotten their lives flipped upside down by AI. We're being forced out of doing this because it's hard to spend a material amount of time on this sort of thing when you need income at the same time to survive in this world.
Yes, I remember writing a VB6 driven editor. I was so happy when I got find and replace to work.
I still have the marketing page copy from 2002:
<UL>
<LI>Unlimited fully customizable template files</LI>
<LI>Fully customizable syntax highlighting</LI>
<LI>Very customizable user interface</LI>
<LI>Color coded printing (optional)</LI>
<LI>Column selection abilities</LI>
<LI>Find / Replace by regular expressions</LI>
<LI>Block indent / outdent</LI>
<LI>Convert normal text to Ascii, Hex, and Binary</LI>
<LI>Repeat a string n amount of times</LI>
<LI>Windows Explorer-like file view (docked window)</LI>
<LI>Unlimited file history</LI>
<LI>Favorite groups and files</LI>
<LI>Unlimited private clipboard for each open document</LI>
<LI>Associate file types to be opened with this editor</LI>
<LI>Split the view of a document up to 4 ways</LI>
<LI>Code Complete (ie. IntelliSense)</LI>
<LI>Windows XP theme support</LI>
</UL>
I went all-in developing that editor. It had a website and forums but it wasn't something I sold, you could download it for free. Funny how even back then I tolerated almost no BS for the tools I use. I couldn't find an editor that I liked so I spent a few weeks making one.
Fast forward 20 years and while I'm not using my own code editor, the spirit of building and sharing tools hasn't slowed down. If anything I build more nowadays because the older I get, the more I want to use nice things. My tolerance has gotten even stricter. It's how I ended up tuning my development environment over the years in https://github.com/nickjj/dotfiles.
This is definitely aging me, but I'm still disappointed that all caps didn't win. That style made it so much easier to visually parse tags when scanning through the HTML code. I admit that syntax highlighting has mostly done away with that benefit, and now that I'm used to the lower case I don't mind it anymore, but the uppercase always felt better to me. Even reading that example above it feels more natural. Style is a hard thing.
Is it though?