Well, I feel a bit reluctant to share this, because it is so sad, but the following was recorded not long ago ... I wouldn't be surprised if they took it down
Sure, but all the same, given what you've said, isn't it a bit weird that Kramnik himself has rushed to raise drug-taking suspicions? Is he trying to divert attention? Regardless of whether he has anything to do with this, it would have been classy of him to stay quiet about it for a while.
I think that Cursor is doing the same. A couple of weeks ago they removed the limit of 500 premium model requests per month on the $20 plan. It seemed like this was going to be good for users, but in fact it's worse: my impression is that the limit is now effectively much lower, and you can no longer check in your account's dashboard how many of these requests you've made over the last month.
Actually yes! I saw this post some months ago and thought to myself: "Wow, this is really close to what we've been building". Kiro uses three files though: requirements, design, and then tasks. The requirements doc is a set of statements that define all the edge cases you might not have originally thought of. Design looks at what is currently in the code, how the implementation differs from the requirements, and what technical changes need to happen to resolve the difference. Then tasks breaks the very large end-to-end development flow into smaller pieces that an LLM can realistically tackle. The agent then keeps track of its work in the tasks file.
Realistically, I don't think Harper's claim that "I get to play cookie clicker" is achievable, at least not for nontrivial tasks. Current LLMs still need a skilled human SDE in the loop. But Kiro does help that loop run much more smoothly, and on much larger tasks than a traditional AI agent can tackle.
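For what it's worth, the three-file flow described above might look something like this (a hypothetical sketch; the directory layout and file contents here are illustrative, not necessarily Kiro's exact format):

```
project/
└── specs/feature-x/
    ├── requirements.md   # edge-case statements, e.g. "WHEN input is empty THEN return a validation error"
    ├── design.md         # gap analysis: how the current code differs from the requirements, and what changes are planned
    └── tasks.md          # the end-to-end flow broken into small checklist items the agent checks off as it works
```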
Thank you, I will certainly check this out, because this is something I've been doing, sort of, manually, but I am still struggling to find the right workflow.
This recent OpenAI presentation might resonate too then:
Prompt Engineering is dead (everything is a spec)
In an era where AI transforms software development, the most valuable skill isn't writing code - it's communicating intent with precision. This talk reveals how specifications, not prompts or code, are becoming the fundamental unit of programming, and why spec-writing is the new superpower.
Drawing from production experience, we demonstrate how rigorous, versioned specifications serve as the source of truth that compiles to documentation, evaluations, model behaviors, and maybe even code.
Just as the US Constitution acts as a versioned spec with judicial review as its grader, AI systems need executable specifications that align both human teams and machine intelligence. We'll look at OpenAI's Model Spec as a real-world example.
Have you considered a fourth file for Implemented such that Spec = Implemented + Design?
It would serve both as a check that nothing is missing from Design and as an index: where to find things in the code, and what architecture / patterns already exist that should be reused where possible.
And what about coding standards / style guide? Where does that go?
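To make the idea concrete, a hypothetical Implemented file might map each requirement to the code that satisfies it and the pattern it uses (this format is entirely made up for illustration):

```
# implemented.md (hypothetical)
REQ-012  "empty input returns a validation error"  ->  src/parser.ts: validateInput()      [guard-clause pattern]
REQ-013  "requests retry on timeout"               ->  src/net/client.ts: fetchWithRetry() [reuses backoff helper]
```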
That is interesting. So far we are just using the task list to keep track of which tasks have been implemented. In the long run I expect there will be an even more rigorous mapping between the actual requirements and the specific lines of code that implement them. So there might be a fourth file one day!
In Stack Overflow's survey [1] it ranks below 40th place in popularity, near the bottom, below the likes of Prolog, Apex, and MicroPython, and just above COBOL.
If that isn't obscure (never mind building all your tech around it, as the article describes), then I guess nothing is.
They want to destroy the so-called "deep state" by scaring and scarring officials out of their wits into doing their bidding, "the art of the deal"; trimming the budget is just the excuse.
https://youtu.be/6_anRV5ZFWY?si=csobK4NBKKT-tE8H