Sorry, but if you don't trust your coworker to review and shape the AI's output, why would you trust them with actually writing the whole document?
And if you think that at this point you could have done it yourself, then why don't you? The only thing that matters is that the document is sound; if it takes you too long to verify it, then you need to trust your colleague: that was their job.
Signals of competence and diligence help build and reinforce trust.
Crafting a message for a known friend or coworker almost always comes through in how it is written and structured, because it weighs the arguments against the context of the business needs, communication norms, shared understanding of what's important, all the implicit contracts about how we work together, the long-term vision we shared over beers, the Teams message the CEO sent three days ago, etc.
In a pure design doc, like a wiring diagram plus three code snippets, this is a non-issue, so feel free to ignore what I said (though keep it in mind).
In a doc for communication, especially of ideas, this is paramount.
The issue isn't using AI tools to write these "RFC"-style docs. The issue is that in the very likely event that the output contains none of those very important bits (because how could it?), we are in a situation where either 1) this person I trust has lost some of that trust by not acknowledging any of the above, addressing it, or structuring it in a useful way, or 2) that person didn't try.
This is why communication is a valuable skill. It's always been implicit that effort slowly adds those and many more features of a good doc. Now it's explicitly not doing that, while still feigning effort with lots of formatting, etc. It moves the "add the important intangibles" work from the writer to the reader, and like in code review, that's laziness. We explicitly did not hire an AI; we explicitly hired a person, and that person should be filtering the world's noise through their valuable experience, and at least telling us that they did. "I reviewed this and stand by it" is a very low bar to clear, so I don't understand why there can be any pushback.
Is there a 3rd option here I'm missing?
EDIT: I can temper this a little. This is how I like to work. There might be a cadre of devs who are comfortable slinging 10-page unreviewed documents at each other. I'm fine with their existence; I just think it's better to carefully review text from a close coworker because they deserve that time, and so I expect the writer to do at least one review pass themselves out of courtesy. I don't think any of this is arduous. If my boss told me to spend more time reviewing than the author was willing to spend writing, I would either get comfortable with reduced dev output from this new DDoS, or find a new job.
While I understand perfectly the uneasiness that comes with reading something an LLM at least contributed to, my point is that any lack of trust is on your side until you can show that the document contains mistakes or omits important information. The person who produced the document is the one responsible for it; they still need to know its content and review it accurately.
However, there's another aspect that irks me, and it's the idea that the prompt was much shorter than the document itself. If that's the case, then the problem isn't so much the use of LLMs as the fact that you consider it obvious that your documents are mostly fluff that can be compressed to a bullet list. If the final document is much harder to verify than the information it contains, then you're wasting time and resources to obfuscate rather than clarify.
> The person who produced the document is the one responsible for it; they still need to know its content and review it accurately.
That's exactly what I'm saying, so we agree perfectly there.
> However, there's another aspect that irks me, and it's the idea that the prompt was much shorter than the document itself.
That's not part of my argument, it's an assumption on your part.
> If the final document is much harder to verify than the information it contains ...
This is precisely the problem. LLMs generate far too much text for their information content, and it often contains subtle errors, etc. (I don't need to rehash two whole HN threads here; the point is made). If a coworker sent me the bullets, I'd be happier. Because the cost of generating arguments is now effectively zero, it's imperative to use restraint and get back to a concise, high-SNR message, precisely because otherwise it becomes a DDoS on the reader.