Long ago I was active in experimental software engineering research. Publish or perish was brilliantly solved by the most successful researchers in the field, who walked around with stellar bibliometrics, published 50+ papers a year, and secured endless rounds of funding with their status. The trick was simple: batteries of cheap students, postdocs, and junior researchers/engineers operating at varying degrees of independence who code and run the experiments/simulations. The students/postdocs got their papers, and the scientists could salami-slice the hell out of their "research", each paper in a topic citing all of their previous papers, because of course your previous work is related work for your current work (and nobody filters out self-citations anyway). The quicker you could go through this loop of idea -> experimental validation -> results -> next idea, the higher the publication throughput. The slowest link in the chain was, of course, transforming an idea into experimental results, hence the hierarchical structure of cheap workers in the research group.
With AI, dishing out massive amounts of research in these simulation-heavy fields is trivial, and it doesn't even require empire building anymore, where you have to work your way through funding for your personal army. Just give an LLM the right context and examples, and you can prompt your way through a complete article, experimental validation included. That's the real skill/brilliance now. If you have the decency to read and refine the final outcome, at least you can claim you retained some ethical standard. Or maybe you have AI review it (spoiler alert: program committees do that already), so that it comes up with ideas, feedback, and suggestions for improvements. And then you implement those. Or actually, you have the AI implement those. And then you review it again. Or the AI does. Maybe you put that in an adversarial for loop and collect your paper just in time for the submission deadline -- if you don't already have an agent set up doing that for you.
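For the avoidance of doubt about how little human effort is left in that loop, here's a minimal, tongue-in-cheek sketch of it. The `generate`, `review`, and `revise` callables are hypothetical stand-ins for whatever LLM calls you would wire up; nothing here refers to a real API.

```python
# A tongue-in-cheek sketch of the adversarial write/review loop described above.
# The three callables are hypothetical stand-ins for LLM-backed helpers, not real APIs.
from datetime import datetime
from typing import Callable

def produce_paper(
    idea: str,
    generate: Callable[[str], str],      # drafts the article, "experimental validation" included
    review: Callable[[str], str],        # a second LLM plays program committee
    revise: Callable[[str, str], str],   # the "author" LLM addresses the reviews
    deadline: datetime,
    max_rounds: int = 10,
) -> str:
    draft = generate(idea)
    for _ in range(max_rounds):
        feedback = review(draft)
        if not feedback or datetime.now() >= deadline:
            break                        # out of complaints, or out of time: ship it
        draft = revise(draft, feedback)
    return draft                         # submit just in time for the deadline
```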
Measuring the actual impact of research outside of bibliometrics has always been next to impossible, especially for high-velocity domains like CS. We're at an age where, barring ethical standards, the only deterrent preventing researchers from using an army of LLMs to publish in their name is the fear of getting completely busted by the community. The only currency here is your face and your credibility. Five years ago you still had to come up with an idea and implement/test it; then it just didn't work and kept not working despite endless re-designs, so eventually you cooked the numbers so you could submit a paper with a non-zero chance of getting published (and accumulate a non-zero chance of not perishing). Now you don't even need to cook the numbers, because the opportunity cost of producing a paper with an LLM is so low that you can effortlessly iterate and expand. Negative results? Weak storyline? Uninteresting problem? By sheer chance some of your AI-generated stuff will get through. You're even in the running for the best paper award if the actual reviewers use the same LLM you used in your adversarial review loop!
This is just trying too hard. "Servant Leadership" is a buzzword invented to divert the general opinion from the power mechanics that hierarchical organizations are founded upon, i.e., the boss (sorry, leader) commands and the direct reports execute. Being "servant" basically just means being a decent human being: putting people in the right conditions to carry out their duties, not coming up with unrealistic expectations, and doing the required 1:1 coaching/mentoring for career development.
Hand-holding employees, as this "blocker removal" interpretation of servant leadership seems to imply, is just the pathway to micromanagement. It's ok to shield your juniors from the confusing world of corporate politics, but if your direct reports need you to do a lot of the sanitization/maturation of work items and requirements, then why should you even trust their outputs? At that point you're basically just using them as you would prompt an AI agent: double- and triple-checking everything they do, checking in 3 times a day, etc.
This "transparent" leadership is the servant leadership, or what it's intended to be anyway in an ideal world. Some elements of it are easily applicable, like the whole coaching/connecting/teaching, but they also are the least measurable in terms of impact. The "making yourself redundant", i.e., by avoiding being the bottleneck middle-man without whose approval/scrutiny nothing can get done is fantasy for flat organizations or magical rainbowland companies where ICs and managers are on the exact same salary scale. And it will continue to be as long as corporate success (and career-growth opportunities) is generally measured as a factor of number of reports / size of org. managed.
"Servant leadership" is not a buzzword but it's been misused and abused by Big Corporations to the point that it basically lost its meaning [1].
For me - personally - the idea is about being less of a boss and more of a nightwatchman or janitor.
I believe in agency and ownership, and - in a sane environment - people can be left alone with clear objectives. It's more about removing obstacles.
I'll give you a simple example.
Once a week a maid comes to our apartment. Despite a clear power imbalance (it's easier to find a new maid than a senior engineer) and her being used to staying invisible and prioritizing not disturbing the tenants, for me it's the other way around. I'm super happy to hastily finish a call or leave my room if she feels the need to disturb me, and if she needs an extra pair of hands I'm happy to help her with anything. After all, I'm more interested in the final result than in feeling important.
We have a bucket of tasks that have to be performed which slightly exceeds her capacity, and she has full rights to prioritize. It took me a while, but I eventually convinced her that it's ok to skip things - like cleaning the windows - if she's feeling under the weather or it's cold outside, rather than faking it.
Most of the pointy-haired bosses I've worked with in corporate environments would probably prepare a list of requirements and walk through the apartment with a checklist every time she finished, giving her a full, harsh performance review.
But that doesn't build trust or a long-term relationship.
And after some time she developed what people around here call ownership - and sometimes I feel she cares about the household more than I do.
I forget where I read it (Steve McConnell?), but the best analogy I've heard for a boss/project leader is to think of your job as moving a house and the boss's job as being a few streets ahead taking down telephone wires so you aren't slowed down.
The fantastic element that explains the appeal of games to many developers is neither the fire-breathing monsters nor the milky-skinned, semi-clad sirens; it is the experience of carrying out a task from start to finish without any change in the user requirements.
That is always option #1, if only it were that easy. 3+ years and countless applications later, I'm still in search of this somewhere else. All I've gotten is bait-and-switches in places that just want a one-man army, or "can't see how you can do this if you don't have management experience" (i.e., direct reports, of which I have a whopping zero).
Interesting perspective on this bureaucratization of existing processes. It seems like overkill, but if it feeds into the so-beloved quantitative aspect of impact it can work, I suppose. In my context, I'm not sure the interactions I have are structured enough that I can basically run a "ticket center" with a log of all the small help I give around. It's also a bit unpredictable: if I'm only reached out to occasionally, it becomes hard to commit to a metric like "x tickets per quarter".
This is all over mainstream news. Why is nobody discussing it here?
Is it a typical fake article with an untraceable source? All you read is the usual AI mumbo-jumbo, "scientists from Fudan University in China", but there are never any specifics, i.e., author names or a link to a technical paper/report.
Let's take a concrete example: the business dev org wants to automate some manual process to cut some cost. This will come at a controlled risk, with a business case looking like "X $ saved at a Y% decrease in output quality". The problem would be much easier if Y could be easily translated into money, but for compliance-heavy domains where things like reputational damage or regulatory fines are at stake, it can't. You could model this, but then you have the same problem, just one level down: who signs off that the model is good enough?
The decision is essentially a negotiation of which X and Y are acceptable. This negotiation is always stonewalled by some compliance function that wants Y to be zero. They never own the costs (nor the X benefit), so why would they care? The payoff matrix then looks like this (a rough sketch in code follows the list):
- We do what Dev says, Dev is right (the Y% decrease in quality isn't significant): we saved $X, Dev gets promotions, good job.
- We do what Dev says, Dev is wrong: regulatory fines, reputational damage, big panic, some head gets cut.
- We do what Compliance says, Compliance is right (hard to actually verify since, well, you just don't do anything): they saved the day from incompetence and greedy risk appetite.
- We do what Compliance says, Compliance is wrong: the cost benefit is never realized, a bunch of man-hours go down the drain; not Dev's fault though (they tried), and Compliance is fine because, well, better safe than sorry, right?
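To make the asymmetry explicit, here is the same matrix written out as a small Python dictionary. The entries just paraphrase the four cases above (no real numbers are assumed); the point is the "blame" field, which never lands on Compliance.

```python
# The payoff matrix above as data, keyed by (whose call we follow, whether they turn out right).
# Entries paraphrase the four cases; note that blame never lands on Compliance.
payoff = {
    ("dev", "right"):        {"outcome": "$X saved, Dev gets promotions",            "blame": None},
    ("dev", "wrong"):        {"outcome": "fines, reputational damage, heads roll",   "blame": "Dev"},
    ("compliance", "right"): {"outcome": "status quo kept (impossible to verify)",   "blame": None},
    ("compliance", "wrong"): {"outcome": "savings never realized, hours wasted",     "blame": "nobody"},
}

for (call, reality), result in payoff.items():
    print(f"follow {call}, {call} is {reality}: {result['outcome']} (blame: {result['blame']})")
```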
All of these decisions are tracked in meeting minutes, but even in retrospect, how do you verify that a decision was good? The alternative was simply never taken; you just maintained the status quo.
This is a rather simplified version, but the core still holds: the people who own the costs and the people who own the risks are put in a cage fight to negotiate, with the only escalation path being higher-level decision forums that at some point can't be bothered because the Xs and Ys are too small/irrelevant for them.
The only way out I've seen is falling back to Big4/MBB consultants, where a double standard suddenly becomes super evident. Now compliance presents a much lower bar to clear: who cares if they fuck it up, it will be their fault. The idea is that potential regulatory/reputational damage can and will be deflected onto them. After all, it's part of their job and they have large enough shoulders (political connections) to see it coming, mitigate it, etc.
There is of course value in stopping stupid decisions, and every org should have some form of control to keep a director from going completely crazy. But when you implement such independent compliance functions, how do you get anything done other than maintaining the status quo?
Think about developing a new automation project/feature that reduces some cost while increasing some risk in a controlled way. If your decision body consists of people who only own costs vs. people who only own risks, you'll be deadlocked forever.