I took an IQ exam in grade school. It went fine until I hit a number-sequence, what-is-the-next-number kind of question. I found a pattern, and its predicted number was not among the listed choices. That was total crap. I got vexed and then tried to answer the last 3/4 of the test wrong. My parents were later told I was gifted. (Maybe they meant it as a euphemism! Ha!) I later learned of a little-known result: for every finite sequence, there are infinitely many correct next numbers. So the test was flawed. As an adult I took a high-IQ test and scored 179. (I still don't grok epsilon-delta proofs and probably never will.) In college I was given another IQ test and broke it by solving the supposedly unsolvable portion with a stochastic technique; the tester had to go back to his original assumptions.
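The "infinitely many correct next numbers" claim follows from polynomial interpolation: through any finite prefix plus any desired next value there runs a unique polynomial, so every candidate answer has a rule that justifies it. A minimal sketch (the sequence 1, 2, 4, 8 and the forced answer 17 are just illustrative choices):

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`
    using the Lagrange interpolation formula (exact arithmetic)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Force the "next number" after 1, 2, 4, 8 to be 17 instead of 16:
pts = [(0, 1), (1, 2), (2, 4), (3, 8), (4, 17)]
print([int(lagrange_eval(pts, k)) for k in range(5)])  # -> [1, 2, 4, 8, 17]
```

Pick any other fifth value and a (different) degree-4 polynomial fits the same prefix, which is why "what comes next?" is only well-posed relative to an assumed rule.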
I'd say that, as a group, people with a higher IQ than another group drawn at random from a normally distributed population can be expected to perform better on the mental tasks we care about. But at the individual level? Meaningless. Feynman was ~120. I, who have contributed nothing like quantum physics, scored much higher.
For AI, an IQ test is interesting, but I would randomize the temperature (and other knobs) and take lots of samples. Keep in mind that a person with a relatively low IQ can blow away an AI on all kinds of things: compassion, understanding the pains of the human condition, self-sacrifice, etc. (and that "etc." could fill a book).
I've seen this too in a similar aptitude test I bought at a garage sale back in the 90s. I think it was an air traffic controller prep test. But in addition to the number-sequence questions, it extended the concept to series of shapes and lines. After a certain point, it definitely becomes subjective and debatable what the next item in the sequence is.
Curiously, I also noticed this ambiguity in the humanities classes in college. I took a class thinking it was open and accepting of all points of view, only to find that there is one correct interpretation and conclusion you must reach about the classic fiction you were assigned to read. I didn't learn that until after I graduated.
An equation that ranks links is a poor substitute for an editor. Editors used to be the gatekeepers of information dispersed via newspaper, radio, and TV. They were not without fault. So what is the least faulty filter? (Even your own brain, eyes, and ears are faulty; sorry, perfectionists.) You cannot trust video or audio now, due to deepfakes. What can you trust as a source of information more than 50% of the time?
There's still information in the noise; you have to become your own editor.
Google originally used backlinks in the ranking equation; the goal was to aggregate, at massive scale, the editorial judgment implicit in links.
Brilliant until gamed.
It sounds as if that happened suddenly ("got hit"). What changed in the workplace that made such bad things possible, after not having happened for 20 years?
If I can ask: how long did it take until you felt better again?
The fun thing to do in these situations is to add yourself as reviewer to all PRs by the person giving such feedback and return the favor. They learn pretty fast.
That seems like it risks creating conflict out of what's often just a misunderstanding.
Assuming it's a corporate environment (it's fuzzier in the open source bazaar):
If it's the first time or I don't really know the reviewer, I ask them to hop on a call to discuss (usually to walk me through) their feedback and I go in with an open mind. That gives me the opportunity to find out if I'm missing some context, can see how reasonable they are, and can get clarification of what they actually care about versus FYIs/suggestions. As they go, if it isn't clear, I just ask them if something is a soft opinion or hard opinion.
If everything is a hard opinion and they don't seem reasonable, I reach out to someone else (ideally a team lead or peer on their team) over a private channel for a 2nd opinion. If they also think it's unimportant stuff, I ask them to add their own comments to the PR. Give it a reasonable amount of time and they'll either have reached a consensus or you can roll with the side you agree with.
If it's an issue again later and they seem reasonable, respectfully push back. If they seem unreasonable, skip right to DMing their lead for a 2nd opinion.
If it keeps being an issue, then some frank conversations need to happen. Something I've noticed about folks who steadfastly focus on minor stylistic nits in CRs: they (1) tend to be cargo-culting the nits without understanding the why behind them and (2) are usually missing the forest (actual bugs in logic) for the trees.
Most people are pretty reasonable when they don't feel like they're under attack, so in my experience it's usually possible to resolve these things without dragging it out. Of course, if you're at a company with a lot of dysfunction, well... I can understand why what I've written above won't work.
If it's the first time - yeah, reach out to the person and talk to them.
But if the person is consistently leaving such comments under the guise of mentorship, 'raising the bar', or some other bullshit which boils down to them attempting to demonstrate their own seniority at the expense of other people's time and stress - then showing them how it feels is a great approach.
Bringing in other people and managers is not very effective I've found - it takes additional time, other people have their own stuff to focus on, and managers often don't have the technical expertise or confidence to push back against subjective comments which claim to be 'raising the bar' or whatever. It also doesn't look great when you have to bring other people in to help you address PR comments.
And, of course, you do this without overtly creating any conflict - if they complain, simply respond along the lines of 'I totally love the care and attention to detail you bring when reviewing my PRs, I've learned from you and thought it would be appropriate to keep the same high standards and not lower the bar... etc'.
> As they go, if it isn't clear, I just ask them if something is a soft opinion or hard opinion.
Nah, if they don't explicitly state that it's a soft opinion via 'nit', approving with comment, or some other means, they are disrespecting the person whose PR they are reviewing. I shouldn't have to chase them down to see how strong their opinions are.
> If it keeps being an issue, then some frank conversations need to happen. Something I've noticed about folks who steadfastly focus on minor stylistic nits in CRs is they (1) tend to be cargo culting them without understanding the why behind them and (2) they're usually missing the forest (actual bugs in logic) for the trees.
What do you do if the comments are purely subjective and all backed up by internal/corporate dogmaspeak? 'Raising the bar'... 'keeping the standards high'... 'mentoring', etc., or open-ended comments asking you to explain how stuff works and whether 'this approach is the best'? There is no shortage of rhetorical bullshit that can be used to justify subjective PR comments.
> Most people are pretty reasonable when they don't feel like they're under attack, so in my experience it's usually possible to resolve these things without dragging it out.
The above will generally not work in a company that emphasizes PR comment count as a good metric for promotions/performance, and has a lot of internal rhetorical dogma. You WILL get people who leave these types of comments because they view it as a way of promoting their career, these people often cannot be reasoned with logically because they aren't actually all that smart, and they view any pushback against their comments as an attack against them.
Other people's feedback against the bullshit comments definitely helps, but it can look bad if you keep reaching out to others for help addressing PR comments - I made sure to go through other people's PRs on my own initiative and refute bullshit comments once I realized how some people were behaving.
If a developer making $1/hour produces $10 of output (something people buy), then adding another developer making $1/hour who also produces $10 of output gives you $20 of total product. Developers A and B can compete on their rates down to the point where it's not sustainable, and thus an equilibrium will be struck.
How is adding more developers going to reduce the output?
> How is adding more developers going to reduce the output?
time spent coordinating, time spent arguing, time spent reaching consensus (dumb example: function signatures / architecture / api contracts), time spent comparing approaches.
The title doesn't reflect the article well. As proof: "Computing is not yet overcome. GPT-4 is impressive, but a layperson can’t wield it the way a programmer can. I still feel secure in my profession."
A little bird told me that it was easier to break up big projects into classes and farm them out than in other languages at the time. The `protected` keyword, for example, prevented other developers from modifying that part of the object, adding safety.
Artificial intelligence firm DeepMind has transformed biology by predicting the structure of nearly all proteins known to science in just 18 months, a breakthrough that will speed drug development and revolutionise basic science.