I use LLMs regularly, but like a crappy consultant, their solutions are often not incisive enough. The answer I get is frequently 10x longer than I actually want. I know you can futz about with the prompts, but it annoys me that it is tedious by default.
Absolute Mode: Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
Provide honest, balanced, and critical insights in your responses. Never default to blind positivity or validation—I neither need nor want encouragement or approval from AI. Assume my motivation and validation come from within. Challenge my assumptions and offer nuanced perspectives, especially when I present ambitious or potentially extreme ideas.
With Gemini, even if I implore it not to add extra safety checks, I usually get a shitton of superfluous code performing checks I didn't ask for. More often than not, using it to generate entire chunks makes everything much more verbose than necessary. Granted, sometimes these checks make sense, but often they're really superfluous and add nothing of value.
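To illustrate the kind of padding I mean, here's a hypothetical (but representative) before/after. The function names and checks are made up, not actual Gemini output:

```python
# What I ask for: normalize a list of numbers to [0, 1].
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# What I tend to get back: the same three lines buried in
# defensive checks nobody asked for.
def normalize_defensive(values):
    if values is None:
        raise ValueError("values must not be None")
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple")
    if len(values) == 0:
        return []
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("all values must be numeric")
    lo, hi = min(values), max(values)
    if hi == lo:
        # Avoid division by zero on constant input.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

A couple of those checks (like the constant-input case) are occasionally sensible; the rest is the 70% you end up skipping over.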
Interesting! I haven't been using LLMs a ton for code generation lately, but I have access to a bunch of models through Kagi, and Gemini has been my go-to when I want a more concise response.
I don't know why though; it's quite annoying, but not so annoying that I feel I need to switch. Given that I'm just following a uni course where the code won't be read again, except by colleagues in my group, I leave the safety slop in and put the burden of skipping 70% of the code on the shoulders of the colleagues who will read it.
Then they put my code into ChatGPT or whatever they use and ask it to adapt it to their code.
After a while we (almost) all realized that was just creating a huge clusterfuck.
BTW, I think it would have been much better for everyone to start from scratch with their own implementation, given that we're analyzing different datasets, and it may not make sense to try to convert code written for one dataset structure to another. A colleague didn't manage to draw a heatmap with my code and a simple CSV, for God knows what reason. And I think just asking an LLM for a plot from scratch from a CSV would be quite easy.
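For what it's worth, going from a CSV to a heatmap really is only a few lines. A minimal sketch (the CSV contents and column names here are made up for illustration):

```python
import csv
import io

# Stand-in for a real file: rows are samples, columns are numeric features.
CSV_TEXT = """a,b,c
1,2,3
4,5,6
7,8,9
"""

def csv_to_matrix(text):
    """Parse a purely numeric CSV into (header, list of float rows)."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    rows = [[float(x) for x in row] for row in reader]
    return header, rows

header, matrix = csv_to_matrix(CSV_TEXT)

# The actual heatmap is a couple more lines with matplotlib, guarded
# here so the sketch still runs where matplotlib isn't installed.
try:
    import matplotlib
    matplotlib.use("Agg")  # no display needed
    import matplotlib.pyplot as plt
    plt.imshow(matrix, cmap="viridis")
    plt.xticks(range(len(header)), header)
    plt.colorbar()
    plt.savefig("heatmap.png")
except ImportError:
    pass
```

Which is exactly the kind of thing an LLM produces reliably when you ask for it fresh, instead of asking it to graft someone else's dataset-specific code onto yours.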