If you offer them a better margin to mine in a more ethically palatable way, plus any up-front resources to do so, then it's reasonable to assume that they will.
I think this quickly gets into the details though. How much safety is required and what does it cost? Are there alternative materials that cost less than ethical cobalt? What age restrictions should be put on the labour involved and what will those children do instead (both with their time and to earn money)? Where will the adult workers come from to replace those kids and what training do they need?
Places where the rule of law (or the lack thereof) already lets this happen are unlikely to change simply because margins increase. Most likely they'll just pocket the extra profit with no improvement in conditions.
These tech companies aren't direct buyers any more than you or me. Ore is sold to smelting companies, who sell ingots to sheet metal companies, who sell it to electronics component manufacturers, who sell it to contract manufacturers, who sell it to the giants. And that's the legal supply chain - add in some shell companies, forged documents, and resellers and it gets more complicated.
> These tech companies aren't direct buyers any more than you or me
Well, they're (a) further up the supply chain than we are, and (b) have the resources to understand and influence their supply chain. You can be pedantic about the word "direct" if you like but I don't think that's useful.
But why draw the line there? If we are making the whole chain liable then why not make the phone case manufacturers and app development shops liable too? They are only one hop down the chain and have the resources too.
All of the products I can buy may or may not contain this unethical cobalt. I don't know which, and my personal buying choice doesn't affect anything.
What are you proposing, that everyone with a smartphone or a computer be sued? How will that work?
Vendors that don't get sued can compete better in the market and survive longer, and their prices won't go up (except perhaps slightly, as rents, if they have few competitors and those competitors have been forced to raise their prices after losing a lot of lawsuits), right?
[EDIT] I don't really give a fuck about DVs, but I'd love it if some of the people DVing my comments on this thread would explain how I've misunderstood orthodox "right"-wing approaches to market-based regulation and commons management, since I don't think I'm claiming anything particularly radical here—quite small-c conservative, actually—and would like to know whether and how I'm missing the mark.
> All of the products I can buy may or may not contain this unethical cobalt. I don't know which, and my personal buying choice doesn't affect anything.
All of the stock these corporations can buy may or may not contain this unethically-sourced cobalt. They don't know which, and their corporate buying choices don't affect anything.
Judging from these comments, the bar seems to be set at "if you can't prove the product doesn't contain unethically-sourced materials, don't buy it". That standard would apply equally well to end users. Of course you can't simply trust that your suppliers aren't lying to you, or that their suppliers aren't lying to them, so you have to be personally involved in auditing the entire process from mining to final production and delivery.
Or we could just be reasonable and agree that it's sufficient to avoid knowing or reckless involvement with unethical suppliers, and hold those who actually endanger their workers or lie about the sources of the materials they're selling responsible for their own crimes.
hey, great job. You've turned a discussion about child exploitation into a discussion about whether some random person on the internet will stop buying electronics.
> I will say though, the problem is one of “standardization” across an organization where it’s too big for everyone to fit in a room.
I think you've got a lot of this right (disclaimer: we've built the product I think you're describing)
I don't think the most important problem is standardisation though; it's observability/instrumentation, i.e. if you don't measure what's working, you can't improve things.
The very best tech companies measure quite a lot, and often look back at their hiring processes in the event of a mis-hire to figure out what went wrong and how they can avoid the same happening in future... but even then they only do that in exceptional cases because it's done fairly manually. That means they have low statistical significance and a stuttering cycle of learning.
I believe they should be constantly looking at what's working well, for every hire. So that's what we built.
Once your hiring pipeline is trivially visible, a lot of these questions go away. You can see what's working well and try new things in safety, you can optimise with your eyes wide open.
One thing we did straight away was to deprioritise CVs and replace them with written scenario-based questions relevant to the job. Managed properly, that takes your sift stage from a predictive power of around r=0.3 to one we typically find above r=0.6. Far fewer early false negatives makes your hiring funnel (a) less leaky, (b) more open to pools of talent previously ruled out by clumsy CV sifting, and (c) potentially shorter, as the improved sift accuracy allows companies to consider dropping their phone interview stage(s).
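To make the effect of sift validity concrete, here's a toy simulation (purely illustrative numbers, not our data): candidates' true performance and sift scores are drawn as correlated normals at a given r, the sift keeps the top ~20% by score, and we count how many of the genuinely strong candidates get screened out.

```python
import math
import random

def simulate_sift(r, n=100_000, seed=0):
    """Estimate the false-negative rate of a sift whose score correlates
    with true on-the-job performance at level r (both standard normal).
    'Strong' candidates and the sift cut-off are both the top ~20%."""
    rng = random.Random(seed)
    cutoff = 0.84          # z-score for roughly the 80th percentile
    strong = rejected_strong = 0
    for _ in range(n):
        true_perf = rng.gauss(0, 1)
        # Construct a score correlated with performance at exactly r.
        score = r * true_perf + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        if true_perf > cutoff:
            strong += 1
            if score <= cutoff:   # a strong candidate the sift rejects
                rejected_strong += 1
    return rejected_strong / strong

print(f"false-negative rate at r=0.3: {simulate_sift(0.3):.0%}")
print(f"false-negative rate at r=0.6: {simulate_sift(0.6):.0%}")
```

The exact rates depend on where you put the cut-off, but the direction is robust: doubling the sift's correlation with performance cuts the share of strong candidates lost at the first stage substantially.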
Our NPS rating for HR teams is currently running at 85, and MRR churn is under 1% so there's clearly some value to the approach.
Presumably there's more to this than comes across in your comment.
After all, you don't avoid the unconscious bias of a single mind by adding more minds. That just gives you three sets of unconscious bias and adds biases caused by group dynamics.
Do you have a link? I may be googling the wrong terms.
You can still avoid most of the effects of a worst-case bias by adding two additional measurements.
Rather than one person for 30 minutes (a single measurement), with three people who all had the same experience you are less likely to have all three form an impression unconnected with the substance of the conversation.
Each bias skews the same direction for each person, but not every bias is in the same direction for individual people. (Some people are biased in favor of Harvard/Ivy League graduates. Other people are biased against those exact same candidates. Bias is not by definition unidirectional for all people.)
The YC partners are trying to be similarly biased against entrepreneurs who (they believe) will not be successful in the program.
They are much less likely to be similarly biased against irrelevant factors like accents, mannerisms, backgrounds, etc.
> They are much less likely to be similarly biased against irrelevant factors like accents, mannerisms, backgrounds, etc.
They're not less biased, they just average out their biases over the group.
Your assumption is that three people chosen from a fairly homogenous pool are going to cancel out each other's biases, which is... optimistic.
I don't know from this conversation what they're actually doing, but what they should be doing is using a diverse set of opinions to create a fixed set of questions and a fixed marking scheme, and then sticking to it for that round of interviews. Then looking back over time at every interview question and analysing how well it predicted later outcomes.
If you think they're sub-optimizing because of biases and a poor process, maybe that represents an opportunity for you or someone else to use your method to outcompete them.
Their track record suggests they're doing pretty well.
So your argument is that they should be above examination of their interview process because their investments are doing well? Come on, you're just arguing for the sake of it now.
Multiple independent assessments are great at reducing random noise. Bias is noise, sure, but it's by definition not random so you need other forms of intervention to counter it.
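A toy simulation (illustrative numbers only, with an assumed shared bias of -1.0) makes the distinction concrete: averaging three raters shrinks the random noise by roughly a factor of √3, but a bias shared across the raters passes through the average untouched.

```python
import random
import statistics

def rating(true_score, shared_bias, noise_sd, rng):
    """One interviewer's rating: true signal, plus a bias common to all
    raters (e.g. an accent penalty), plus that rater's personal noise."""
    return true_score + shared_bias + rng.gauss(0, noise_sd)

rng = random.Random(1)
true_score, shared_bias, noise_sd = 7.0, -1.0, 1.0
trials = 10_000

# One interviewer vs. the average of a three-person panel.
solo = [rating(true_score, shared_bias, noise_sd, rng) for _ in range(trials)]
panel = [statistics.mean(rating(true_score, shared_bias, noise_sd, rng)
                         for _ in range(3)) for _ in range(trials)]

# The panel's spread (noise) is smaller, but both means still sit a full
# point below the true score: the shared bias survives averaging intact.
print(f"solo : mean {statistics.mean(solo):.2f}, sd {statistics.stdev(solo):.2f}")
print(f"panel: mean {statistics.mean(panel):.2f}, sd {statistics.stdev(panel):.2f}")
```

That's the sense in which more raters fix noise but not bias: you need a different intervention (structured questions, diverse panels, outcome analysis) for the systematic component.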
A huge part of it is the discussion that happens afterwards between the different observers, which puts the onus on them to check for biases and to map their signals onto objective parameters.
If I understand you correctly I think this is misleading.
Discussing candidates after an interview allows social dynamics within the group to distort the signal so you reduce the value of taking independent data points. Not only will it not reduce bias in the way you seem to suggest, but you'll also lose some of your ability to reduce random noise as the noise from more dominant interviewers will be amplified.
I don't have time to dig out citations, but a good starting point would be "What Works - Gender Equality By Design" by Iris Bohnet. She's one of the world's leading academics studying how biases are affected by different hiring techniques.
You seem to have made some (incorrect) assumptions based on very little text. Let me explain the process in somewhat more detail.
In my last company ($100B, publicly traded, extremely data driven), we interviewed candidates in groups of two (occasionally more) against clearly defined criteria, looking for signals in either direction.
During the interview, each interviewer looks for evidence to gather those signals, the stronger the better. The purpose of the process is for all the interviewers to gather signals on, ideally, every criterion, preferably strong in either direction, though of course bound by the reality of limited time.
Once the interview is over, each interviewer independently jots down the signal strength and the related evidence on the scorecard and suggests an outcome.
Later, during calibration, the signals and the evidence are presented to the interviewing peer group (recruiter, hiring managers, interviewers from other rounds), which pretty much disallows any unconscious bias such as "I don't think Alice would be a good team lead (because she is a woman, and women are not good managers)" or "We should not hire Amit (because he is Indian, and Indians write poor code)".
Again, the examples are deliberately in-your-face, but unconscious bias is unconscious; it only surfaces when you have to defend your perspective to external parties with supporting evidence, which does not happen if there is only a single interviewer.
Think of it as rubber-ducking for interviews and biases: a way to keep your own unconscious bias as an interviewer in check.
> Later, during calibration, the signals and the evidence are presented to the interviewing peer group (recruiter, hiring managers, interviewers from other rounds), which pretty much disallows any unconscious bias such as "I don't think Alice would be a good team lead (because she is a woman, and women are not good managers)" or "We should not hire Amit (because he is Indian, and Indians write poor code)".
You've explained that your interview process has a predetermined scoring system which is a good start. I'm curious what the effect of this calibration stage is... did your company do predictivity and bias analysis on it?
Where should that threshold be? Does it move every year due to inflation?
I think a more reliable distinction would be whether a person's income comes predominantly from their labour or predominantly from what they own.
> That’s a weird characterization, given that the trend has been in the opposite direction:...
I don't think you've adequately supported that criticism.
The article you're citing discusses the "top 4%" of income earners. That's a very different thing to "billionaires" of which there are 607 [Edit: in the USA].
The photo of a Clinton is also a prime example of a left party gone centre-right based on support from the billionaires (and the rest of the 4%).
Since the US has a FPTP system, the first layer of fighting is over both parties' nominations, pitting the poor majority against a (centrist) rich minority; that fight would have a clearer outcome if money in politics had clearer rules.
How about putting the re-usable item in a wrapper so that the same rule can apply?