
>Or, when people become familiar with many instances of X they seek out the "best" instances of X

I think you're saying the same thing as the GP? Ulysses is a book for lit nerds, which I suppose the Modern Library board were.

Looking at the list, there are hardly any books from after the mid-20th century. That makes me think the board comprised primarily old lit nerds, who stopped reading long before voting. The list is also super ethnocentric, which makes me more dubious still about the claims for "best" anything.


According to the NYT[1], between 1950 and 2018, 95% of published English-language fiction authors were white. That Top 100 Novels list contains at least 3 black authors: Ellison, Wright, and Baldwin. Considering that the percentage of black authors for the period 1900-2000 was probably even lower than for 1950-2018, and that there are actually only 75 unique authors on that list, on its face I don't see bias by the voters. The bias is in the disproportionate share of published white authors.

[1] https://www.nytimes.com/interactive/2020/12/11/opinion/cultu...


^^ came to say this.


Came to say: Compilers: Principles, Techniques, and Tools after 1988.


The analysis mentions the correlation of the played moves with engine choices is ~95% for both players. But I recall a credible-seeming YouTube analysis from last year's Hans Niemann cheating scandal which said the best players only average a ~70-75% correlation.

https://youtu.be/jfPzUgzrOcQ?t=222

I'm trying to reconcile these two "facts". Does anyone know if the 2024 championship games simply played out along very well-established lines?


You can’t compare those because they’re two different events. The World Chess Championship is unique among chess events because of the very long time controls (120 minutes per side, additional 30 minutes after 40 moves, plus 30 seconds per move starting from move 41) and the huge amount of prep time the players get to face only one opponent.

The prep time means players can stay within the top engine line for many, many moves because they’ve memorized it completely. The generous time controls mean the players have a lot of time to calculate the best move once they’re out of the prepared line. Lastly, the large time increment after 40 moves (30 minutes plus 30s per move) means the players should be able to solve for draws or mates in the endgame. This is part of the reason Ding’s decisive blunder was so shocking: he had plenty of time but moved too quickly, not realizing his bishop could be trapped in the corner and traded off into a losing pawn endgame after he offered the rook trade.


I think those are two different definitions. In the video, the engine correlation is the fraction of moves that matched a chess engine's top move, as defined here: https://en.chessbase.com/post/let-s-check-engine-correlation... The accuracy metric in the article is defined a bit differently, according to how Lichess computes it: https://lichess.org/page/accuracy
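For a concrete sense of the difference: the Lichess accuracy metric is not a match rate against the engine's top move at all, but a function of how much win probability each move gives up. A rough sketch of that calculation follows; the constants are approximations of the published Lichess formulas, so treat them as illustrative rather than exact:

```python
import math

def win_percent(centipawns: float) -> float:
    # Logistic mapping from engine evaluation (centipawns) to win %,
    # approximating Lichess's published conversion.
    return 50 + 50 * (2 / (1 + math.exp(-0.00368208 * centipawns)) - 1)

def move_accuracy(win_before: float, win_after: float) -> float:
    # Accuracy decays exponentially with the win-% lost by the move,
    # again using approximate Lichess constants.
    diff = win_before - win_after
    raw = 103.1668 * math.exp(-0.04354 * diff) - 3.1669
    return max(0.0, min(100.0, raw))

# A move that loses no win probability is ~100% accurate;
# a move that gives up 10 points of win % scores around 64%.
print(move_accuracy(55.0, 55.0))  # ≈ 100.0
print(move_accuracy(55.0, 45.0))  # ≈ 63.6
```

So two players can both post high "accuracy" while matching the engine's literal first choice far less often, which would explain the gap between the ~95% and ~70-75% figures.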


The scandal was a big nothing in the end (Niemann didn't cheat at the time, though he had admitted to doing so as a younger player), and the video lacks credibility in that regard.

It's not clear where your 70-75% claim comes from, but you would expect a higher accuracy in classical vs speed games for instance.


I count 8 colours on this website: the 4 mentioned, plus white (appearing in the hue slider), light pink (background of the code box), and two shades of gray (appearing in the up/down widget).

Something something glass houses?


Well these are the colors they define:

    --primary-color: hsl(var(--hue), 50%, 90%);
    --pre-primary-color: hsl(var(--hue), 50%, 95%);
    --secondary-color: hsl(var(--hue), 50%, 10%);
    --tertiary-color: hsl(var(--tertiary-hue), 80%, 20%);
    --accent-color: hsl(var(--accent-hue), 80%, 20%);
As I see it, that's either 3 colors or 5 colors, but not 4.


White and grey are shades, rather than typical colours. Any hue with 0 saturation will be a shade between white and black.

(Yes, probably there are arguments against this, but in terms of colour theory for dummies.)
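The "shades, not colours" claim is easy to check numerically: at zero saturation, every hue collapses to the same neutral grey, with R = G = B = lightness. A minimal sketch using Python's standard colorsys module (whose HLS values all range over [0, 1]):

```python
import colorsys

# With saturation fixed at 0, the hue value is irrelevant:
# every conversion yields the same neutral shade (R = G = B = lightness).
for hue in (0.0, 0.33, 0.66):
    r, g, b = colorsys.hls_to_rgb(hue, 0.5, 0.0)  # args: (hue, lightness, saturation)
    print(hue, (r, g, b))  # → (0.5, 0.5, 0.5) for every hue
```

Which matches the point above: white, black, and the greys between them sit outside the hue wheel entirely.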


So is a change of hue required in order to say there's been a change of color? I'm not offering saturation in this question, since "oh that's just a less saturated red" seems similar enough to the notion of "oh that's just a lighter red" that the two ought to be in the same "not color" boat...


Assuming the hue and saturation components stay the same, changes in lightness would result in various shades of the same “color”. I think. Maybe. The problem is that in real life changes in lightness result in changes in saturation as well, so things get wonky.


I think that’s more a question of semantics than colour theory, but maybe? Is pink just light red? Feels like the answer should be no, and that undermines my basic premise above, so I find myself doing mental gymnastics trying to answer your question, which more often than not is a surefire way to tell that I’m wrong.


> But if your team members close twice the tickets you did, they will have trouble justifying you are contributing as much as them.

The metrics make reporting to higher ups easier, no doubt. But the situation you describe is a classic sign of a shit manager: one who cannot justify their decisions except via reference to made up metrics.


Unfortunately, a lot of things boil down to metrics, even at companies with great engineering cultures.

If you have four L4 engineers on the team, all of whom are performing at the level described in the career profile as L5, but only the budget to promote two of them, how do you pick which two? What if they have different managers, each of whom sincerely believes their report is the one delivering essential value?

If you have an organization with forced bucketing where X% of your team need to be given a subpar rating, how do you decide which one? If you don't have an obvious low performer you'd better have metrics.

This system is soul crushing but it exists all over the industry.


> how do you pick which two?

You (= the hypothetical manager; please excuse the second person) use your managerial skills to make a decision, which considers metrics and other contributing factors. Then you write a justification which you defend, to higher ups and to those who weren't promoted. Because that's your job.


> Then you write a justification which you defend, to higher ups and to those who weren't promoted.

What happens next is this manager gets a low performance rating themselves, for making decisions not backed by metrics. So next year they conform.


This "don't make a decision unless it is 100% derived from metrics" mentality I just don't get. A robot could do that. Why is your company out there trying to hire/promote smart managers with good judgment if they don't let those managers apply their brains and judgment? "If employee's measured results > threshold, then reward employee" can be done by a computer. No need for a human manager.


People create process because of the principal agent problem.

The upper managers do that because they think the lower ones are lying or incompetent. A traceable process doesn't lie.

And yeah, it's stupid, and it makes the problem worse. It's the reason nonetheless.


> And yeah, it's stupid, and it makes the problem worse. It's the reason nonetheless.

While that's true, it's also a difficult problem to solve. In tiny organizations like startups where the CEO personally knows everyone and what they do, it's easy.

But as soon as you grow beyond that (and I've been in a number of startups that cross that gap), how do you objectively but fairly handle this? There is no easy answer.

You could go with fully empowering all managers to do as they wish. Trust them to hire, fire and promote correctly. This is great, until you hire some bad managers. And as you grow, it is 100% guaranteed at some point you'll hire bad managers. So then they ruin it for everyone, hiring and promoting their buddies.

And that's how you end up with more objective metrics. Take away some of that freedom, make everyone measure and justify actions based on metrics. It's terrible, but probably better than the alternative.


Yes, the Nuremberg defense - "I was just following orders" - is one approach.

It's a lot easier than applying back pressure, fighting for your reports, or quitting in solidarity.

"Sorry, Hugo and Maryna, you two only got the Fields medal while Anton and Alain got a Nobel Prize, so we'll have to let you go for your under-performance."


Here's how this works in practice:

* Corporate says "here are the buckets. They should match at the VP level since that's a large pile of people"

* VPs tell their Directors to match these buckets, who recurse further

* L1/2 Manager Alice says "my team is too small, this isn't how statistics work, I want an exception"

  * Problem #1: the teams with actual low performers will often make similar claims
* If the claim actually gets escalated all the way to the VP, the VP says "tough, fit the buckets".

* Alice is now a troublemaker in VP/Director's eyes

* If Alice and everyone who feels the same way quits in protest, nothing changes except that the org is full of yes men, none of whom are even trying to push for changes in the system any more.


So it's better that Alice stays because ... why?


Because Alice is a good manager who cares about their reports and is otherwise supporting them, advocating for them, pushing for changes to team culture, etc.?

The fact that they can't control this one thing does not mean that they should just abandon the whole company. If Alice finds a company where they can get similar compensation for similar workload without the forced bucketing, perhaps that's a good idea for their mental health, but Alice leaving is a large negative for the team.


I wrote 'applying back pressure, fighting for your reports, or quitting in solidarity'. Alice leaving was the third of these.

'advocating for them' and 'pushing for changes' are parts of the first two.

When back pressure and fighting for your reports does not work, what do you do then?

As you wrote it, Alice leaving is a large negative for the company too, making it full of yes men, unable to change away from a collision course.


>When back pressure and fighting for your reports does not work, what do you do then?

Continue fighting the battles you can win. Do your job and do it well. Changing jobs is hard, stressful, unavailable to many people for a variety of reasons, and not guaranteed to improve things. Particularly once you start becoming senior and in management.

If I left a job every time I was faced with a bad situation, I would never have built up the soft skills or connections to be any good at any company. Particularly as a first-level manager, where 80% of your job is delivering messages you had no say in but have to own anyway.


The comment I replied to at https://news.ycombinator.com/item?id=42039995 was only about following orders, with zero mention of fighting any sort of battle.

My response was meant to be interpreted as doing something other than appeasement, which includes "fighting the battles you can win."


Nah, it's better in the long term if she goes to work somewhere better.

But changing jobs doesn't happen immediately, and "somewhere better" may be very hard to find.


> If you have an organization with forced bucketing where X% of your team need to be given a subpar rating, how do you decide which one? If you don't have an obvious low performer you'd better have metrics.

If you’re a manager in this type of system, your job is to reach out constantly and find folks who are low performers and get them into your department. They will fill the bottom of your team rating chart. At that point, they can be managed out (ideally in a humane way) or just held onto to fill that cellar dweller role while not slowing others down (some people are ok with this as long as they get paid).

I would never choose to work in an environment like that, but some people find themselves there without better options (e.g., being location-bound due to family, etc.).


Wow, I never saw this type of advice before, but I like it. In short: If you are required to do stack ranking, where at least one person must get a shitty score/grade, then recruit someone internally who is below average and will take the hit. Brutal, but practical.


Or externally! I posted an idea here a while ago, where I thought I'd start a staffing company called "Scapegoat Consultants" and we would offer your team a "low performer" that you could hire and then fire after a year, to protect the rest of your team from stack-ranking. Our consultant will join your team and do as little as you want, or even nothing at all! We'd guarantee that they will at least not actively make your code base worse, but that's it. After a year of this, you can easily make the case that our recruit was a low-performer and manage them out. Don't worry, he won't mind--his job was to be the low performer, and we'll hire him out to the next BigTech company who struggles with stack ranking.

It used to be tongue in cheek, but maybe the industry actually needs something like this.


Cynical, but probably the most humane take I’ve seen here so far.


That's the standard strategy to survive stack ranking.

Have you heard any story about someone who was hired into some megacorp just to be put on a PIP or fired for low performance before they had any chance to even do anything? Stack ranking is the most common reason those things happen.


“Hire to fire”. Not a new idea. I have been hearing it for at least 5 years now.


> If you have an organization with forced bucketing where X% of your team need to be given a subpar rating, how do you decide which one?

Easy. You quit, and find a better job.

That practice is so toxic that it's sufficient to condemn the organisation as unworthy of any buy-in whatsoever. Just leave.


In defense of stack ranking, it does solve a very common problem -- managers who never fire people who deserve to be let go.

This ultimately rots an organization from the inside, as it leads to attrition of higher performers because they're forced to work with useless people.

You see this a lot in companies that rarely fire people, because managers optimize for accumulating direct report count (whether or not those direct reports are doing valuable work).


Companies need to do much better about letting managers go. I get it, they are hard to find, and those that actually have any engineering management skill at all are even harder to find. And every time you hire a new one, you're taking a risk that they'll be an absolutely terrible manager. A terrible manager can cause a huge swath of destruction.

But the answer can't be an army of useless middle managers diluting the impact of the people who actually do want to help the company, and providing cover for people like them who are just phoning it in.


Absolutely. I'd like to see companies get more serious about deriving manager requirements from span of control.

As well as regularly rotating managers, like the military does (e.g. 3 year reassignment).


> This ultimately rots an organization from the inside

Hum... So instead you decide to immediately rot the organization from the inside.

I can see how it avoids that one problem. The important problem is the waiting, right?


Try working for IT in a utility, insurance company, or other stable business. You'd be amazed how high the bar for termination is.


No disagreement here. But you are falling for a very bad logical fallacy.


Nah, you make sure X% of your team is staffed with losers. It's a nutty system, I know. But I'd imagine that's how things worked at companies that had stack ranking. Managers probably traded low performers like baseball cards.


> If you have an organization with forced bucketing where X% of your team need to be given a subpar rating, how do you decide which one? If you don't have an obvious low performer you'd better have metrics.

This is a case where you're forced to rate people who are up to par as subpar - the rating system is simply bullshit. You should be allowed to rate people according to their actual performance.

Metrics don't solve the underlying problem which is that the rating system sucks. Having a random number generator called "metrics" to "make decisions" isn't good either.


> If you have an organization with forced bucketing where X% of your team need to be given a subpar rating, how do you decide which one?

I think it's Joel Spolsky who tells of a manager asking him to do that for his team after everyone had gone all in with overtime to get something shipped on time. To his great credit, he refused, and the manager saw sense.


Pfff, what kind of problem question is that. Manager promotes the ones who go with him for a smoke or do some other regular informal activity together, obviously. :)


Dice.


I've had experience with internal "support" that marks tickets as closed without actually fixing the problem. Sometimes the reason for closing suggests they haven't even read the email that opened the ticket.

Think something like "Tool $X is missing on machine $Y. Please can you install it, according to $POLICY it is meant to be on all prod machines." Then the ticket gets closed with "The policy is correct. $X must be on all prod machines, we cannot change this." Without installing the tool.

Then when the annual anonymous "rate your satisfaction with these services" survey came round, they wondered why the ratings were so bad - I made sure in the open text feedback not to go after the poor employee but to raise concerns about the performance of the team manager. I won't take credit for it, but I'm told things at $COMPANY have got better since.


> a shit manager

This isn't about a singular individual; it's about a group of professionals. You have to deploy systems thinking. If you give a cohort a tool that allows and incentivizes them to do worse at their job, the average person in that group will perform worse.

I like my boss; I have also built a skill set and frugality such that I don't worry about ever again working for someone I don't respect. But I still care about what's going on at large, and about trends. I don't want downward pressure on the average. Not only will that slowly seep into affecting me, I also care about the lives of people at points in their career where they don't have employment opportunities that allow them to avoid bad management.


>the situation you describe is a classic sign of a shit manager

Well then it means the vast, vast, vast majority of companies with a coherent corporate structure are shit. Welcome to reality


> The metrics make reporting to higher ups easier, no doubt.

It's not about being "easy". It's about being objective, verifiable, and demonstrably unbiased. It's about justifying how you rank performance in a way that's impossible to refute.

> But the situation you describe is a classic sign of a shit manager:

It's not, and frankly this "shit manager" accusation is an infantile remark that screams a failure to understand what it means to perform well.


> I did not read it, its behind a paywall

Browse to the article, click reader mode, click refresh. Might need to be in a private window, in case of cookie shenanigans.


In my experience, there is not much technical growth as you go upward because there's not that much need for technical depth. What most companies need is armies of low and intermediate programmers churning out various kinds of CRUD apps. There's a bit of scope to be a "senior" grunt, and there may even be some very small number of "architects" above that but generally what's needed is people to manage the grunts and senior grunts.

Further technical growth requires something like a PhD, and even then, that just makes you a grunt on a new (=academic) ladder, which has the same structure as before.


We tried various forms of this during the pandemic; systems like gather.town and so on. They were universally awful. I think travel is simply the price of good Science.


Great? Please block your way to irrelevancy, Google, and take your browser with you.


Sounds great in principle. In practice, it's the stuff of nightmares. This is because the web version commits every keystroke of your online contributors, making it very difficult for you to actually merge your local commits (they need to stop typing!).


Yup. When I was using it, I don't think it was literally every keystroke, but it was granular enough that if your contributors were working on the document, it was a nightmare to get anything pushed: it kept changing under your feet and causing conflicts. Finish a merge, and another one is waiting.
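One mitigation, assuming you have normal push access to the same remote: rebase your local commits onto the remote's moving tip instead of merging, and retry the push until it lands. A rough sketch (the branch name `master` is an assumption, not something from the tool being discussed):

```shell
# Replay local commits on top of whatever the web editor has pushed,
# then try to push; if the remote moved again in the meantime, repeat.
git fetch origin
git rebase origin/master   # local commits are rewritten onto the remote tip
git push origin master     # may be rejected if new upstream commits appeared
```

This at least avoids piling up a merge commit per sync attempt, though it doesn't help if the remote advances faster than you can complete a rebase-and-push cycle.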


I was just about to comment something like: worked fine for me... But then I realised that the only time I did this I was 6+ timezones away from my collaborators.


I actually never experienced this issue, that sounds annoying. But most papers I work on have like 2 coauthors, where one of them is usually in a different time zone, so that might be why :)


In my recent experience, it only seemed to make a commit when you did a `git pull`.

