* You can't blame AI if your production token is on the same machine as the staging/development environment.
* You can't blame AI if you didn't know that the production API token gave access to all APIs.
If this is the level of operational thinking going into this app, then I'm sorry, no AI agent or platform can prevent this from happening.
Everything else in this "post mortem" is performative at best.
The only real question one could ask Railway is why API endpoints that can affect production are exposed at all. Maybe those operations should only be possible through the platform itself?
I find HN a poor place to ask for learning resources. I previously asked for help learning how Claude works, but got no responses.
Maybe one pointer for others: people are genuinely curious about learning new things, but as experts we choose not to engage with these types of posts. Why is that?
OP, in your case you need to move from theory to actually building your own agent and making it do things. Start by solving small problems, then make them more complex over time.
The headline does seem flashy, but AI didn't really solve this, imo.
They just fixed their technology choices and got the benefits.
There are existing Go ports of JSONata, so in theory this could have been achieved with those libraries too. Nothing is written about why the existing libraries weren't good enough and why a new one needed to be written. Usually you do some due diligence in this area, but there's no mention of it in this post.
To measure the real efficiency, gnata should have been benchmarked against the existing Go libraries. For all we know, the AI implementation is much slower.
The benchmarks in the blog are also weird. The measurement is done inside the app, but you're meant to measure calls to the library itself (e.g. the JS version in its own isolated benchmark vs. the Go version in its own isolated benchmark). So we don't actually know what the real performance of the AI-written version is.
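To illustrate what an isolated measurement looks like, here's a minimal Go sketch using the standard library's `testing.Benchmark`, which runs a function with an auto-scaled iteration count until the timing is stable, independent of any surrounding application code. Note that `evalExpr` is a hypothetical placeholder, not the actual API of jsonata-go or gnata; in a real comparison you'd call each library's evaluate function here against identical inputs.

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// evalExpr is a placeholder standing in for a JSONata evaluation call.
// In a real comparison you would call the jsonata-go port in one
// benchmark and the gnata equivalent in another, on identical inputs.
func evalExpr(input string) string {
	return strings.ToUpper(input)
}

func main() {
	// testing.Benchmark runs the closure with an auto-scaled b.N,
	// isolated from any app-level overhead (HTTP handlers, logging, etc.)
	// that would contaminate an in-app measurement.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			evalExpr(`{"name": "example"}`)
		}
	})
	fmt.Printf("%d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```

Running the same harness once per library, on the same machine and inputs, is what would let the ns/op numbers be compared directly.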
The only benefit, again, is that they replaced a bad technology choice with, based on what we can observe, a less bad one. Then it's layered with clickbait marketing titles for others to read.
I expect we'll see more posts like this in the future.
> There's existing golang versions of jsonata, so this could have been achieved with those libraries too in theory
The only one I found (jsonata-go) is a port of JSONata 1.x, while the gnata library they've published is compatible with the 2.x syntax. I guess that's why.
Looking at the releases, JSONata's 2.1.0 release from July 2025 added the `?:` and `??` syntax, and before that there hadn't been a syntax addition since January 2020's 1.8.0 release, which added `%`.
I think Sora was technically impressive as a concept. The way it was managed as a product wasn't good.
There didn't seem to be any marketing for it. I can't even remember an ad for it, or any content creators actively pushing Sora.
To get access to Sora I believe you needed to be on a paid plan?
It's really difficult to get user generated content going when it's behind a paywall.
It's also hard to tell whether this means OpenAI is in trouble, or whether this was just a badly managed product that deserved to be killed. Given the negative sentiment toward OpenAI, folks might assume the former.
> In the last 60 days I have written over 600,000 lines of production code — 35% tests — and I am doing 10,000 to 20,000 usable lines of code per day as a part-time part of my day while doing all my duties as CEO of YC.
LOC will never be a good metric of software engineering. Why do we keep accepting this?
I can generate 1 million LOC if I really wanted to.
As long as LOC is the main metric for these setups, they will never be successful.
LOC is a very weak proxy for "how many new features" I've built, and they don't have any other metric that can be measured as easily. But it causes serious issues: equating LOC with productivity leads inevitably to utter bloat that no agent or human can rectify in a meaningful timeframe. I'm pretty sure this 600,000 LOC could be shrunk to 60k for the same feature set, with better readability and performance.
I think the advice is good but maybe the title could be improved.
> But for Engineer A’s work, there’s almost nothing to say. “Implemented feature X.” Three words.
To me, this is the main problem. Engineer A is unable to describe the impact of their work: how it affected the business. Your manager isn't responsible for promoting your work; you are.
> Engineer B’s work practically writes itself into a promotion packet: “Designed and implemented a scalable event-driven architecture, introduced a reusable abstraction layer adopted by multiple teams, and built a configuration framework enabling future extensibility.” That practically screams Staff+.
Maybe it's just the narrow focus of the article, but if promotion only looks at complexity and not at quality of delivery and impact on the business, then this isn't a good engineering team to be on.
There are many cases where simplicity is celebrated and recognized. It's up to the engineer to know the impact of their work; if they can't do that, then that's on them.
> It's up to the engineer to know what the impact of their work is, if they can't do that then that's on them.
… the impact of my work is more often than not opaque to me, the person doing the work. More often than not I'm not the one setting the priorities, and far more often than not the real-world impact, like "we brought in $X M with that feature you wrote", is simply not visible to me because "that's not what engineers do".
I would love to know these things, I'd love to have that level of visibility, but finance at tech companies is nearly always a black box. Best I get as an engineer is that I know how much cloud compute costs, so I can figure out the expense side of stuff.
If anything, I usually have to go for far more intangibles: "this internal manager was happy", "this adjacent team had all their wishes and desires fulfilled", etc.
Otherwise, stuff feels like it plays out like the bits you quoted from TFA.
> If you are a senior engineer, the bottom line is that I wouldn’t recommend the jump to management right now. I would wait a couple of years to see how things will look like.
> BUT, and it’s a big but - if your gut tells you to do it (and not your brain), if it’s truly a path you want to pursue - then go for it!
It feels like the purpose of this article was more to get the sponsored segment out than to actually give useful advice. How is this the conclusion?
> For my friend specifically, staying on the IC track, becoming a Staff engineer and switching companies would have given him ~20-30% more than the EM promotion he was offered.
Company promotions don't give a higher salary bump than moving companies does. The friend could simply be at a company that pays less across all roles. Additionally, that visualisation shows a low-high range and doesn't account for outliers; Staff engineer salaries tend to have high outliers, while EM salaries do not.
If anyone wants some advice from an engineering director:
* If you only want to become an EM for the money, you probably won't like it. It's the same as an engineer who only codes for the money. The more you like something, the more you'll want to learn it.
* The EM title means different things at different companies. At some, it's only/mostly about line-management duties. At others, you're expected to do project and stakeholder management. At others still, you're also expected to do operations, budgeting, and technical and business strategy. As you can see, it's different from an IC role building software; the focus shifts to the things around building software.
* Being hands-on is one thing, but what distinguishes one EM from another is engineering empathy. If you're an EM on a team and haven't done a PR (with or without AI), then you have zero empathy for your engineers, because you have no idea what it takes to build a feature for your team. Using LLMs can help, but you need to develop that empathy regardless.
* AI/LLMs will change two main things: an EM's ability to be more hands-on, and the way EMs design team processes. Just as they change engineers' ability to code, the EM needs to think holistically about how the development process will change and adapt accordingly. Do you have a path for the team to use AI agents? Do you have ways to reduce meetings and achieve the same level of alignment with LLMs? This is the type of thing EMs will, and should, be thinking about.
* The career path of an EM is largely dependent on the growth of the company. You will only get "stuck" if your company is not growing. If a company grows, there will be a need to hire engineers, then someone who manages those engineers, and eventually someone who manages those managers.
* One more thing about EM careers: advancement also depends on how well you fit into the business. At small companies, being more hands-on as an EM serves you better. At larger companies, fitting in with the company's values, culture, and leadership principles matters more.
I really don't appreciate the author's lack of understanding of how engineering leadership works, and the general gatekeeping in this article. Sure, AI is changing things, but there's no need to steer people away and gatekeep roles like this article implies.