I don't have much to say about this post other than to vigorously agree!
As an engineer who's full-stack and has frequently ended up doing product management, I think the main value I provide organizations is the ability to think holistically, from a product's core abstractions (the literal database schema), to how those are surfaced and interacted with by users, to how those are talked about by sales or marketing.
Clear and consistent thinking across these dimensions is what makes some products "mysteriously" outperform others in the long run.
It's one of the core ideas of Domain-Driven Design. In the early stages of the process, engineers should work closely with stakeholders to align on the set of terms (primitives, as another commenter put it), define them, and put them in neat little contextual boxes.
If you get this part right, then everything else becomes an implementation effort. You're no longer fighting the system; you flow with it. Ideas become easier to brainstorm, and the cost of changes is immediately visible.
DDD suggests continuous two-way integration between domain experts <-> engineers, to create a model that makes sense for both groups. Terminology enters the language from both groups so that everyone can speak to each other with more precision, leading to the benefits you stated.
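To make those "neat little contextual boxes" concrete, here's a minimal sketch in Python, assuming a hypothetical subscription-billing domain (all names invented for illustration). The point is that the types and verbs come straight from the shared vocabulary, not from engineering jargon:

```python
# Hypothetical subscription-billing domain; names come from the
# ubiquitous language agreed with domain experts, not from engineering.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class SubscriptionStatus(Enum):
    ACTIVE = "active"
    LAPSED = "lapsed"  # the business says "lapsed", so the code does too


@dataclass
class BillingPeriod:
    start: date
    end: date


@dataclass
class Subscription:
    customer_id: str
    status: SubscriptionStatus
    current_period: BillingPeriod

    def lapse(self) -> None:
        """Mark the subscription lapsed -- the exact verb the business uses."""
        self.status = SubscriptionStatus.LAPSED
```

When the schema, the UI copy, and the sales deck all say "lapsed", nobody has to translate between them.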
We ended up with something like five microservices - that, in principle, could have been used by anyone else in the company to operate on the Domains they were supposed to represent and encapsulate. This one holds Users and User data! This one holds Products, and Product interaction data!
Nobody touched any of those except us, the engineers working on this one very specific product. We could have - should have - just put it all on one service, which would have also allowed us to trivially run database joins instead of having to have services constantly calling each other for data, stitching it together in code.
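For what it's worth, here's a rough sketch of that join point, using the Users and Product-interaction domains from the anecdote (table names are made up). In the microservice version you'd call the Users service, call the Products service, and stitch the results together in code; in one database it's a single query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE product_interactions (
        user_id    INTEGER REFERENCES users(id),
        product_id INTEGER,
        viewed_at  TEXT
    );
""")

# Two network calls plus in-code stitching collapse into one join:
rows = conn.execute("""
    SELECT u.name, pi.product_id, pi.viewed_at
    FROM users u
    JOIN product_interactions pi ON pi.user_id = u.id
""").fetchall()
```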
Subdomains shouldn't be engineering-driven. That's putting the cart before the horse. A subdomain is more like: this barely has anything to do with that, other than data transmission (not transformation).
How you implement it, however, is an engineering question. Microservices are not the only abstraction tool that exists; they're kinda the worst one. You have procedures/classes, files/modules, packages/libraries, processes and IPC. A network call is for when you have no other choice.
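To make the lower rungs concrete, here's a minimal sketch (hypothetical names) of the same kind of boundary expressed as a plain module rather than a service:

```python
# users.py -- the "Users" subdomain as a module. The boundary is the
# module's public interface, enforced by convention, not a network hop.
__all__ = ["get_user"]

_FAKE_DB = {1: {"id": 1, "name": "Ada"}}  # stand-in for real storage


def _load(user_id: int) -> dict:  # private: leading underscore
    return _FAKE_DB[user_id]


def get_user(user_id: int) -> dict:
    """The one entry point other modules call -- an in-process 'API'."""
    return dict(_load(user_id))  # return a copy so callers can't mutate state
```

If the boundary turns out to be wrong, moving a function is a refactor; moving a service is a migration.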
Now, how do you get your company to hire (or do the hiring yourself) in such a way that you can basically just have a team of people like this working with PMs to build their ideas?
I like doing this FS journey myself, but I'm stuck "leading teams" of FS/BE/FE mixes, trying to get them to build things that I clearly understand and could build myself given enough time. But all I have is a team of FE or BE people, or even FS people, who can't just do the above. You need to be very involved with these people to get them to do this, and it just doesn't scale.
I've recently tried AI (Claude specifically) and I feel like I can build things with Claude much quicker than with the FE/BE/FS people I have. Even including all the frustrations that Claude brings when I have to tell it that it's bullshitting me.
I have exactly the same experience as you. I tried educating people, but all those developers (and beyond, up to stakeholders), no matter their seniority, do not want to get involved in the domain more than the bare minimum. That naturally leads to me micromanaging everything, which doesn't scale and eventually leads to burnout. As soon as I stop micromanaging, everything starts breaking down pretty fast. I wrote a book per project trying to get everyone on the same level, but nah (more than 3,000 pages in the last decade, 20+ projects). Tried everything in hiring too; found almost nobody during all that time.
I have now left my previous job and will devote time to trying AI, because I concluded it can't be worse than that.
Reading this thread brought back fond memories of sitting with front-line staff and just chatting with them while watching them work from the corner of my eye. My gimmick was to turn up for morning tea (the staff were older ladies that took homemade cakes to work), and by lunchtime have some frustration of theirs resolved.
It’s such a great feeling when you can make someone’s work better, for the life of me I can’t understand why others wouldn’t jump at the opportunity!
Sadly at current $dayjob, the devs are held at arm's length from the customer. On purpose!
Same here. No matter how hard I try, and no matter the approach, from coaching, to sharing videos, to pointing out how it can benefit them personally, to showing exactly how it creates results, there is simply no interest. People don't care.
It's even worse than that: even the owner of the company I worked for didn't care that his own company's product would be mediocre, while loudly proclaiming that quality was the goal. It turns out it was the goal only as long as it was incidental and free (no such thing, but it looks that way if you're not deeply involved), and because it sounds good. As soon as reputation collides with immediate profit, profit always wins.
Yuup. I find that most of the time, business decision makers actually have no clue about quality. Especially with software products: if it looks like it works in the demo and looks pretty, then the quality must be good, right? And those engineers are just being pedantic, because they're engineers.
That’s something I relate to as well. I like working at different abstraction levels throughout the system.
The only way to cope was to let things go and pick my battles.
I always think about the joke where a sailor goes down to the dock and asks the dockworkers if they speak French, English, or German; the dockworkers only shake their heads no. Later the dockworkers chat, and one says to the other that he could learn languages so he would be able to talk with the sailor. The other replies that the sailor knew three, and it didn't help him.
Everything is too recent; nobody can give sure advice on how to deal with your situation. From my view as a full-stack engineer who has worked with LLMs for the past 3 years, your generated product is probably crap, and your only way to assess it is by asking Claude or ChatGPT if it's good, to which it'll probably say yes to make you feel good.
Now go ahead and publish it. If your app brings in revenue, then you built something quicker. A Claude-generated prototype is as much a product as some PowerPoint slides.
Huh, my experience has been generally the opposite - most FS/BE/FE folks want to understand the business, and while a good PM will enhance that, the median PM is actively detrimental.
Frankly if the people you have aren't good enough then you need to get good at training, get better in your hiring (does your hiring process test the skills you want? Without being so long that anyone decent is going to get a better offer before they reach the end of it?), or maybe your company is just not offering enough to attract decent talent. There are plenty of better-than-AI programmers out there, but even in this job market they still have decent options.
Yes, and conversely, in cases where the initial model misjudged future needs, the most disastrous projects are those where the requirements or the technical design fly in the face of the original model. When this is solved sloppily, the slow degeneration often begins: from an application that makes sense to a spaghetti mess that is only navigable by people who were around when those weird bolt-ons happened. Usually not only the code but also the UI reflects this, as even massive UI overhauls (like Atlassian's in 2025) tend to just sweep everything awkward under a rug -- those things are still necessary to manage the complexity, but now they're hidden under ••• → Advanced Settings → Show More → All Settings.
I don't suppose you have any tips on how to get this going in an org? I love where I work and I love the products we make, but my team (phone apps) are treated very often like an afterthought; we just receive completed products from other teams, and have to "make them work." I don't think it's malicious on the part of the rest of the teams, we're just obviously quite a bit smaller and younger than the others, not to mention we had a large departure just as I arrived in the form of my former boss who was, I'll fully admit, far more competent in our products than I am.
I've worked on learning all I can, and I have a much easier time participating in discussions now; however, we still feel a bit siloed off.
I find this perspective bizarre. Though I'm not happy about it all being centralized, the closest thing we have these days to the very niche phpBB forums of the 2000s is various subreddits focused on very specific topics. Scrolling through the front page is slop, sure, but whenever I'm looking for perspectives on a niche topic, searching for "<topic> reddit" is the first thing I do. And I know many people without any connection to the software industry who feel the same way.
Perhaps swearing at the LLM actually produces worse results?
Not sure if you’re being figurative, but if what you wrote in your first comment is indicative of the tone with which you prompt the LLM, then I’m not surprised you get terrible results. Swearing at the model doesn’t help it produce better code. The model isn’t going to be intimidated by you or worried about losing its job—which I bet your junior engineers are.
Ultimately, prompting LLMs is simply a matter of writing well. Some people seem to write prompts like flippant Slack messages, expecting the LLM to somehow have a dialogue with you to clarify your poorly-framed, half-assed requirement statements. That’s just not how they work. Specify what you actually want and they can execute on that. Why do you expect the LLM to read your mind and know the shape of nginx logs vs nginx-ingress logs? Why not provide an example in the prompt?
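For example, here's a sketch of what "provide an example in the prompt" can look like in practice. The log line below is a typical combined-format nginx entry; the exact shape of your logs is precisely the thing the model can't guess, so paste a real one:

```python
# A hypothetical, well-specified prompt: states the task, the output
# contract, and includes a real sample of the input data.
SAMPLE_LINE = (
    '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] '
    '"GET /api/v1/items HTTP/1.1" 200 2326 "-" "curl/8.4.0"'
)

prompt = f"""Write a Python function parse_line(line) that parses nginx
access-log lines into a dict with keys: ip, timestamp, method, path,
status, bytes. Return None for lines that don't match.

Here is one real line from my logs:
{SAMPLE_LINE}
"""
```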
It’s odd—I go out of my way to “treat” the LLMs with respect, and find myself feeling an emotional reaction when others write to them with lots of negativity. Not sure what to make of that.
I would like to propose a moratorium on these sorts of “AI coding is good” or “AI coding sucks” comments without any further context.
This comment is like saying, “This diet didn’t work for me” without providing any details about your health circumstances. What’s your weight? Age? Level of activity?
In this context: What language are you working in? What frameworks are you using? What’s the nature of your project? How legacy is your codebase? How big is the codebase?
If we all outline these factors plus our experiences with these tools, then perhaps we can collectively learn about the circumstances when they work or don’t work. And then maybe we can make them better for the circumstances where they’re currently weak.
I feel like diet doesn't work as an analogy. We know that the only way to lose weight is a caloric deficit. If you can't manage that, it doesn't matter what you eat; you won't lose weight. If you're failing to lose weight on a diet, you are eating too much, full stop.
Whereas measuring productivity and usefulness is way more opaque.
Many simple software systems are highly productive for their companies.
Meridian | Founding Engineers (Product, Infra) | NYC, New York (In-person) | https://careers.meridian.tech | Full-time
Meridian develops software to accelerate the next generation of companies building in the physical world across aerospace, defense, automotive, robotics, and more. We automate the administrative work of quality and compliance to help our customers go to market faster, scale their production, and increase their pace of innovation.
Meridian is 3 months old. We’ve already signed paying customers, built and launched our product, and raised an oversubscribed pre-seed round.
For our first three hires, we’re looking for world-class generalist engineers who can ship great product experiences fast while laying the foundations for a platform that will scale to large and complex enterprises in the future. We're offering competitive salaries and above-market equity.
We're building an in-person engineering team that prides itself on shipping excellent products for a user segment (quality engineers in manufacturing) that's been sorely neglected in the past. We ship with speed and quality, own a large product surface area, and are relentlessly customer-focused.
To apply, send us your resume and anything else you’d like to careers@meridian.tech.
I think the utility of generating vectors is far, far greater than all the raster generation that's been a big focus thus far (DALL-E, Midjourney, etc). Those efforts have been incredibly impressive, of course, but raster outputs are so much more difficult to work with. You're forced to "upscale" or "inpaint" the rasters using subsequent generative AI calls to actually iterate towards something useful.
By contrast, generated vectors are inherently scalable and easy to edit. These outputs in particular seem to be low-complexity, with each shape composed of as few points as possible. This is a boon for "human-in-the-loop" editing experiences.
When it comes to generative visuals, creating simplified representations is much harder (and, IMO, more valuable) than creating highly intricate, messy representations.
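As a toy illustration of what "as few points as possible" buys you: a flat-style icon as an SVG with two primitives and a handful of coordinates, editable by hand or in any vector tool, with no upscaling or inpainting required (shapes invented for the example):

```python
# Write a two-shape "flat vector art" icon; every coordinate is visible
# and hand-editable, unlike pixels in a raster you'd have to inpaint.
svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="50" cy="40" r="25" fill="#f4b400"/>   <!-- sun -->
  <path d="M10 80 L50 55 L90 80 Z" fill="#4285f4"/> <!-- hill -->
</svg>"""

with open("icon.svg", "w") as f:
    f.write(svg)
```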
Have you looked at https://www.recraft.ai/ recently? The image quality of their vector outputs seems to have gotten quite good, although you obviously still wouldn't want to try to generate densely textured or photographic-like images like Midjourney excels at. (For https://gwern.net/dropcap last year or before, we had to settle for Midjourney and create a somewhat convoluted workflow through Recraft; but if I were making dropcaps now, I think the latest Recraft model would probably suffice.)
No, I actually was referring to their native vector AI image generator, not their vectorizer, although the vectorizer was better than any other we found, which is why we were using it to convert the Midjourney PNG dropcaps into SVGs.
(The editing quality of the vectorized ones was not great, but it is hard to see how it could be good given their raster-style appearance. I can't speak to the editing quality of the native-generated ones, either in the old obsolete Recraft models or the newer ones, because the old ones were too ugly to want to use, and I haven't done much with the new one yet.)
Hm... I was definitely under the impression that it is generating SVGs natively, and that was consistent with its output and its recent upgrades like good text rendering, and I'm fairly sure I've said as much to the CEO and not been corrected... But I don't offhand recollect a specific reference where they say unambiguously that it's a SVG generator rather than vectorizer(raster), so maybe I'm wrong about that.
For me, it's based on the facts that vector generation is much harder than raster, that Recraft has raised just over $10M (not that much in this space), and that their API has no direct vector generation.
There is also the possibility of using these images as guidance for rasterization models: generate easily manipulable and composable images as a first stage, then add detail once the image composition is satisfactory.
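One way that staging could look, sketched with the diffusers library (the model IDs are real public checkpoints, but the workflow itself is an assumption, not a tested recipe): rasterize the simple vector image, extract its edges, and let a ControlNet-conditioned model add the detail.

```python
import cairosvg
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Rasterize the composed vector image, then reduce it to an edge map
# that will steer the diffusion model's layout.
cairosvg.svg2png(url="icon.svg", write_to="icon.png",
                 output_width=512, output_height=512)
edges = cv2.Canny(cv2.imread("icon.png", cv2.IMREAD_GRAYSCALE), 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any SD 1.5 checkpoint
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

# The composition comes from the vector stage; the detail from the prompt.
image = pipe("a detailed painted landscape, golden sun over a blue hill",
             image=control).images[0]
image.save("detailed.png")
```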
My little project for the highly intricate, messy representation ;) https://github.com/KodeMunkie/shapesnap (it stands on the backs of giants, original was not mine). It's also available on npm.
I agree; that's the future of these video models. For professional use you want more control, and the obvious next step towards that is to generate the full 3D scene (in the form of animated Gaussian splats, since that's more AI-friendly than mesh-based 3D). That also helps the model be more consistent, and it gives the user more control over the camera and the scene.
I couldn't agree more. I feel that the block-coding and rasterized approaches that are ubiquitous in audio codecs (even the modern "neural" ones) are a dead-end for the fine-grained control that musicians will want. They're just fine for text-to-music interfaces of course.
I'm working on a sparse audio codec that's mostly focused on "natural" sounds at the moment, and uses some (very roughly) physics-based assumptions to promote a sparse representation.
Install Cursor (https://cursor.com), go into Cursor Settings and disable everything but Claude, then open Composer (Ctrl/Cmd + I). Paste in your exact command above. I bet it’ll do something pretty close to what you’re looking for.
I've completely switched over to Cursor from Copilot. Main benefits:
1. You can configure which LLMs you want to use, whereas Copilot just supports OpenAI models. I just use Claude 3.5 for everything.
2. Chatting with the LLM can produce file edits that you can directly apply to your files. Cursor's experimental "Composer" UI lets you prompt to make changes to multiple files, and then you can apply all the changes with one click. This is way more powerful than just tab-complete or a chat interface. For example, I can prompt something like "Factor out the selected code into a new file" and it does everything properly.
3. Cursor lets you tune what's in LLM context much more precisely. You can @-mention specific files or folders, attach images, etc.
Note I have no affiliation whatsoever with Cursor, I've just really enjoyed using it. If you're interested, I wrote a blog post about my switch to Cursor here: https://www.vipshek.com/blog/cursor. My specific setup tips are at the bottom of that post.
Many AI-generated images you encounter are low-effort creations without much prompt tuning, created using something like DALL-E or Llama 3.1. For whatever reason, the default style of DALL-E, Llama 3.1, and base Stable Diffusion seems to lean towards a glossy "photorealism" that people can instantly tell isn't real. By contrast, Midjourney's style is a bit more painted, like the cover of a fantasy novel.
All that being said, it's very possible to prompt these generators to create images in a particular style. I usually include "flat vector art" in image generation prompts to get something less photorealistic; I've found that's closer to the style I want.
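For instance, a minimal sketch using the OpenAI images API as the backend (any prompt-driven generator works the same way; the style keywords are the part that matters):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.images.generate(
    model="dall-e-3",
    prompt="flat vector art, a lighthouse on a cliff at dusk, "
           "minimal palette, no gradients, no photorealism",
    size="1024x1024",
)
print(resp.data[0].url)  # URL of the generated image
```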
If you really want to go down the rabbit hole, click through the styles on this Stable Diffusion model to see the range that's possible with finetuning (the tags like "Watercolor Anime" above the images): https://civitai.com/models/264290/styles-for-pony-diffusion-...
Ah, my mistake. "Meta AI" can generate both text and images, but apparently text prompts are handled by Llama 3.1 while image prompts are handled by Emu. I initially struggled to find the name of the image generation model.