Scientists' and doctors' views on questions of "ought" can also be influenced by their backgrounds in weird ways. A dermatologist might insist that people always wear long-sleeve shirts and pants outside. Even if moderate sun exposure statistically causes people to live a bit longer on net, dermatologists see lots of people who die of skin cancer, and no more people dying of heart attacks than anyone else sees, so the former are a lot more salient to them.
As the famous saying goes, "nothing is as important as you think it is when you're thinking about it".
Every scientific discipline inculcates values that may differ from those held by the general public, ranging from how important it is to give credit to the originators of ideas, to the relative importance of different species, to whether naturalness is valuable or not. This isn't bad; it's probably necessary. But it's something that has to be accounted for.
I see all the time PhDs who, when dealing with matters in their own field, assiduously adhere to reason, facts, the scientific method, evidence, etc.
But step outside that field, and they latch on to something that emotionally appeals to them, and all that reason, facts, etc., flies right out the window.
Critical thinking is domain dependent. For example, just because you can kill it in whiteboard interviews doesn't mean you'll be able to navigate relationship issues. Many PhDs I've talked to remind me of the RPG character who maxes out one line of development at the expense of all the others. The counter-example would be the "T-shaped" development profile, which is the ideal imo.
I often wonder whether the epistemology we are "born with" can actually be improved very much (in a general sense), or whether the best we can do is teach domain-specific techniques to override our defaults on those topics.
And it would be great if the public debate was about what outcome is better given two policies produced by honest but differently focused scientists.
It would also be good if all those models were used as a basis to discuss divergent interests, and if politics remained politics: a tool to select policies that maximize a majority's (but not everyone's) best interests.
I feel, instead, that the issue is that it has become unacceptable to disclose what someone's own best interests are and to defend them.
Instead, everyone produces fallacious models showing that their preferred policy is in everyone's best interests.
Politics is a fundamentally alien experience to most STEM people.
Politics is manifestly not about the public good - not even with compromises.
It's mostly about a few unpleasant - sometimes charismatic - individuals pursuing power for their own personal benefit, and covertly that of their funders and sponsors.
Science has nothing to contribute to this, because the entire business is so hopelessly toxic and corrupt that rational debate about policy doesn't even get to first base.
If you want rational policy you don't want politics. It's perfectly possible you don't even want democracy.
What you want is administration - a very different process where mature competent executives and managers who do not have Dark Triad personality traits make intelligent, informed, and compassionate decisions in the public interest based on evidence, expert guidance, and their own good instincts. And their power is strictly limited to the lightest possible enforcement required to do the job.
No country has ever been run like this, but a few have managed to operate like this in selected subfields.
> What you want is administration - a very different process where mature competent executives and managers who do not have Dark Triad personality traits make intelligent, informed, and compassionate decisions in the public interest based on evidence, expert guidance, and their own good instincts.
Which is basically inhuman. For an administration of any sort, you need a selection process and the selection process will then be gamed by exactly the people who seek power.
Which is why we have large bodies of political representatives: people will not stop seeking to acquire power and act in their interests and the interests of their backers, but they will be counteracted by people doing exactly the same but pursuing different ends. That’s a very human thing.
I only have some experience in low-level politics in a parliamentary system based on proportional representation, though I know a number of mid-level politicians and kind of know some top-level politicians in my home country.
Based on what I have seen, most politicians genuinely believe they are working for the common good. Their understanding of what the common good is obviously varies.
In a system like that, politics is mostly about playing the long game. You build a personal brand that gets you elected and re-elected and gives you influence within your party. You build and maintain your networks, you collect political capital by supporting others' goals, and you spend the capital to advance your own goals. You try to find a balance between short-term and long-term goals, because your opponents today may be your allies tomorrow, and you don't want to alienate them by playing too hard.
Unpleasant and toxic people may sometimes survive in politics. They are rarely successful, because politics is all about people skills, at least in the system I know of. (Such people are more common in administrative positions, because their status is more secure in a meritocratic hierarchy.) People focused on specific topics often also have a hard time in politics, because they have a tendency to become unpleasant when things don't go their way in the niche they are interested in.
The public image of politics can be toxic, because the current publicity game requires it. The same politicians often appear quite different behind closed doors, where they are allowed to speak off the record.
The purpose of this particular device was to permit a vote in the National House of Representatives to be taken in a minute or so, complete lists being furnished of all members voting on the two sides of any question. Mr. Edison, in recalling the circumstances, says: "Roberts was the telegraph operator who was the financial backer to the extent of $100. The invention when completed was taken to Washington. I think it was exhibited before a committee that had something to do with the Capitol. The chairman of the committee, after seeing how quickly and perfectly it worked, said: 'Young man, if there is any invention on earth that we don't want down here, it is this. One of the greatest weapons in the hands of a minority to prevent bad legislation is filibustering on votes, and this instrument would prevent it.' I saw the truth of this, because as press operator I had taken miles of Congressional proceedings, and to this day an enormous amount of time is wasted during each session of the House in foolishly calling the members' names and recording and then adding their votes, when the whole operation could be done in almost a moment by merely pressing a particular button at each desk. For filibustering purposes, however, the present methods are most admirable." Edison determined from that time forth to devote his inventive faculties only to things for which there was a real, genuine demand, something that subserved the actual necessities of humanity.
https://www.gutenberg.org/cache/epub/820/pg820.txt
I agree completely: much of science now is highly politicized. We have seen what happens to professors or research scientists who take a position that counters popular opinion on a sensitive topic - they have protests against them, they are spat upon and assaulted, and they lose their jobs.
All of the other scientists and researchers see this and adjust their own behavior and areas of study to avoid being mobbed.
I have had to become increasingly cautious and wary of trusting published science, especially in social areas now.
I think you can rationally assert that the feeling of loving your wife or paragliding is more important to you than the risk of missing out or the risk of death.
It is a preference, and that is what forms the basis of interests.
> dying for your freedom
Is somewhat different; it's already the execution of a policy (fight) for a preference. And I think it can be a perfectly reasonable outcome, as it's not even hard to find chapters in history during which conditions made it the only acceptable choice.
True, but you meet the same difficulties. A preference doesn't really have to be rational. Fund the park or the bath, paint the city hall blue or yellow. Especially in a democracy there often is no pure rational solution.
This is what the article is getting at, there are questions to be answered before you ever get to scientists evaluating evidence for and against different policies aimed at solving a problem.
What problem are we solving, what is or isn't a problem, what constitutes a better outcome, and what types of policies are allowable may be informed by science but are often (and often appropriately or at least inevitably) decided by culture, law, art, force majeure, etc.
> Even if moderate sun exposure causes people to statistically live a bit longer on net
This reminded me that the weight range we term "overweight" has lower all-cause mortality than the weight range called "normal weight". (There is another tier above "overweight", "obese", which has high mortality.)
I'm pretty sure the study you are thinking of [1] only applies to ages > 65 [2], for which there are a lot of possible explanations. Other studies have found that past a certain age no lifestyle choices matter, including smoking and diet, since it takes too long for them to influence when you die.
If you look at [1], a ton of the studies they looked at adjusted for preexisting illness like hypertension and diabetes. It's basically like saying people who fall off sky scrapers don't die of cancer.
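The skyscraper analogy describes adjusting away a mediator: if excess weight raises mortality mostly through conditions like hypertension, then comparing only people with the same hypertension status erases the very effect you wanted to measure. A toy simulation (all probabilities made up purely for illustration) shows the pattern:

```python
import random

random.seed(0)

# Toy causal model with entirely made-up probabilities:
# overweight -> hypertension -> death (no direct weight -> death path).
def simulate(n=100_000):
    people = []
    for _ in range(n):
        overweight = random.random() < 0.5
        hypertension = random.random() < (0.40 if overweight else 0.10)
        died = random.random() < (0.20 if hypertension else 0.05)
        people.append((overweight, hypertension, died))
    return people

def death_rate(group):
    return sum(died for _, _, died in group) / len(group)

people = simulate()

# Crude comparison: overweight people really do die more often here.
crude_over = death_rate([p for p in people if p[0]])
crude_norm = death_rate([p for p in people if not p[0]])

# "Adjusted" comparison within the hypertension-free stratum: the
# mediated effect vanishes, and being overweight looks harmless.
adj_over = death_rate([p for p in people if p[0] and not p[1]])
adj_norm = death_rate([p for p in people if not p[0] and not p[1]])
```

In this toy model the crude death rate is clearly higher for the overweight group, while the hypertension-adjusted rates are nearly identical - exactly the artifact the parent comment is pointing at.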
It is also important to remember the associated costs and decrease in quality of life associated with being overweight.
That said, it is possible the lower end of the overweight BMI range could be moved up a point or two, but that's probably less than the variance between populations anyway.
Reminds me of the stereotype of dentists giving out toothbrushes on Halloween.
Maybe if your life is about preventing cavities you think that's what kids should always have in mind, but life would suck if before every action we asked "will this contribute to tooth decay?" - much more than it would suck with a few cavities.
> As the famous saying goes, "nothing is as important as you think it is when you're thinking about it".
That seems like a very dangerous saying! As a scientist, I am biased, but I think it's also important to remember that nothing is as simple as you think it is when you're not thinking about it.
As a (former) scientist, I should also admonish that it is generally in the professional scientist's best interest to prolong the perceived complexity of any given thing to keep the grants coming. Even if they're not influenced by an interest outside of their scope, it's a mental habit that can be hard to break.
Perhaps wisdom starts by accepting the conjunction of both the saying and your retort, which is not so far from a better-known saying, "the devil is in the details".
Scientists often think about one topic at length, and therefore tend to overestimate its importance. This is a bias that needs to be corrected for. Assuming that an unfamiliar topic is simpler than it really is is a separate, unrelated bias.
I remember an ad on the radio from the American Podiatrist's Council or some such organization, recommending that everyone get their yearly foot exam.
Sure, it was trying to stir up business; but I think that ad mainly existed because podiatrists really did, in good faith, think everybody should get their foot examined once a year.
Fun fact: incidence of skin cancer in notoriously cloudy and rainy WA is substantially higher than in almost always sunny HI or CA or TX. I bet the vast majority of "scientists and doctors" don't even know this.
The rates are extremely similar once you control for race (which the page you linked allows you to do, by the way). White people get skin cancer at much higher rates than non-white people, because melanin acts as permanent mild SPF, and white people have less skin melanin than non-white people: that's what makes them white, after all. Since Washington is more white than Hawaii, California, and Texas, if you don't control for race it looks — counterintuitively — like Washington is more risky than the others in terms of skin cancer. But it's not, it's just that you're selecting different population demographics.
Also notable is that sunniness doesn't really determine UV exposure; partial cloud cover can actually result in more UV-B exposure, counterintuitively: https://www.drgurgen.com/are-the-suns-uv-rays-really-stronge... And even full cloud cover doesn't completely block UV. So WA and CA aren't as different as you might think.
Meanwhile, as one might expect, most states that get freezing cold for months at a time have lower skin cancer rates than those that don't when controlling for race: if you don't go outside for more than a few minutes a day, while bundled up under multiple layers, for months at a time, well — you see less skin cancer. Some heavy farming states seem to have more skin cancer, but again that makes sense since farm workers are outside a lot.
Vermont and New Hampshire seem like odd exceptions to this rule — but I suspect there's also just some other selection bias at play. UV exposure from the sun causes skin cancer at high rates in white people; trying to make assumptions about states isn't necessarily an easy thing to do, since many other factors are at play when considering who lives where and how much they go outside.
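The point about controlling for race can be sketched numerically. Assuming purely illustrative, made-up per-group rates (not real data), two states with identical group-level risk can still show very different crude rates if their demographic mix differs:

```python
# Hypothetical illustration: per-group skin cancer incidence is identical
# in both states, but the states differ in demographic mix, so the crude
# (unadjusted) state-level rates diverge anyway.
rate_per_100k = {"white": 30.0, "nonwhite": 5.0}  # assumed, illustrative

def crude_rate(white_fraction):
    """Population-wide rate as a weighted mix of the two group rates."""
    return (white_fraction * rate_per_100k["white"]
            + (1 - white_fraction) * rate_per_100k["nonwhite"])

# A whiter state vs. a less white state, same per-group risk everywhere:
wa_like = crude_rate(0.75)  # 23.75 per 100k
hi_like = crude_rate(0.25)  # 11.25 per 100k
```

The whiter state's crude rate is more than double the other's even though no individual faces different risk, which is why stratifying by race (as the linked page allows) changes the picture so much.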
That could have something to do with it, but as someone who lived in WA for quite some time - you don't go outside without clothing for ~7 months in any given year in WA either because it's raining and unpleasant. So that's where this hypothesis falls apart a little. My hypothesis is that constant relatively low level exposure to UV is more beneficial than acute exposure for just a few days / weeks in a given year. I don't have any data to support this hypothesis though.
There was (and maybe still is) a Solstice parade where people ride their bikes through Seattle completely (or partially) naked. So that's only partially true.
I know a lot of the time I'm guilty of quickly skim-reading articles shared on HN. I think this one is worth reading more thoroughly than normal; I found doing so rewarding. I was particularly pleased that a conclusion I was forming as I read through was then voiced near the bottom. Namely:
> The science policy scholar Daniel Sarewitz goes so far as to argue that science usually makes public controversies worse. His argument rests on the logical incompatibility of policy expectations and what science actually does — namely, that decision makers expect certainty, whereas science is best at producing new questions. That is, the more scientists study something, the more they uncover additional uncertainties and complexities.
I'm not sure I've ever seen this basic contradiction put so cogently. We want policy (politics) to create certainty and stability. "Science" increases our risk of the unknown by making us more aware of it.
Doubtful. There is a way to project confidence while acknowledging fundamental uncertainty, and I think it's probably more effective and sustainable than outright lying. Being opaque about difficulty creates distrust in authority in the long run, because eventually you accumulate enough fuckups. I think the biggest error in modern leadership practice is confusing certainty for confidence.
Anyways your statement is literally begging the question:
["people want certainty and stability" is] not going to change. Certainty and stability is what people want.
Ahem, need - at least from our leaders. It may be that creating certainty amid uncertainty is itself the core of leadership. The stories we are most certain of are the ones we use to run our lives, to make decisions, and take action.
One heuristic for this is the 40-70 rule for decision making: in order to make a decision you should have no less than 40 percent of the information you would prefer to have, and you shouldn't wait to make the decision once you have 70 percent of the information you would prefer to have.
I'm sympathetic to this. There is a strong argument to be made that this is a need.
> It may be that creating certainty amid uncertainty is itself the core of leadership.
I would agree with this without reservation.
But the phenomenon here is being driven by what people want, not what people need. If they're benefiting from the certainty they get, that's just a coincidence.
Wanting and needing are different things, and while people may need some certainty, they want much more than they need, and they're getting more than the optimal amount.
I remember a story, maybe from The Sea Around Us: politicians were to decide how many fish could be caught the next year. The scientists said "please reduce the catch by x%, or we believe the results will be terrible." The politicians heard the advice, started talking, and finally settled on a reduction smaller than half of x.
> "Science" increases our risk of the unknown by making us more aware of it.
I'm not sure I understand the phrase "our risk of the unknown". The risk something poses to us is surely the same whether or not we are aware of it—just that our mitigation strategies, and even the awareness that we need to mitigate, change in response to increased knowledge.
Perhaps the idea is that people are more hesitant to make decisions when they become aware that what they had previously taken as absolute is in fact conditional? That seems like a good thing (although I agree that it can be taken to paralysing extremes).
It's worded a bit ambiguously, but I think it means we have to deal with the knowledge of how much we don't know about something when making policy: every new thing you don't know is a point of contention that can be argued over. In some cases that's beneficial, because it keeps us from making a mistake; in others it's detrimental, because it keeps us from making a beneficial change. But if all policy decisions start tending towards infinite argumentation as more and more open questions get linked to the topic, that's also a problem.
> Perhaps the idea is that people are more hesitant to make decisions when they become aware that what they had previously taken as absolute is in fact conditional?
Most people are horrible at dealing with uncertainty when making decisions. I don't know why this is. But taking an uncertain landscape, making a decision and then projecting certainty works better than conveying the risks for communicating with everyone but people used to making executive decisions.
So if you have two politicians, one who channels a scientist's healthy (and realistic) scepticism and one who takes a random position and blasts it, the latter will tend to be more popular.
> But taking an uncertain landscape, making a decision and then projecting certainty works better than conveying the risks for communicating with everyone but people used to making executive decisions.
I think it depends on what 'better' means. It works better in the senses of getting things done, and of popularity. But, unfortunately, the things that get done are those that are some weighted combination of (a) rewarding in the short-term and (b) in the interests of the person who's good at projecting an aura of confidence.
If the 'right' decision tends to align with the interests of the decision-maker, then it's great to have that decision-maker pushing it through. But, when the decision-maker's interests are not those of the general public, paralysis might be better than populist marching into short-term gratification.
(On the other hand, I also recognize that not making any decision until you know it's the right one is just a long-winded way of never making any decision. Making decisions about whether and how to make decisions is just as complicated as the non-meta decisions themselves ….)
The best the scientific process can do is establish facts about repeatable processes. Even then, it isn't carving facts in stone, but a case of increasingly accurate approximations of reality.
The reason that it is impossible for mere fact to end political dispute is that facts are only one element in policy. Values are more important in the long run and facts can only be used to improve policy around shared values.
People don't agree on values and they never fully will.
> The best the scientific process can do is establish facts about repeatable processes.
And if everyone had the time and resources to discover and digest every fact, facts might be definitive.
But everyone doesn't have time and resources. To compensate, we rely on others to curate facts for us. When we encounter an internally consistent subset of facts that suits our ideals and our interests, we adopt that point of view.
There are infinitely many subsets of curated facts that can be presented as internally consistent. That's why there are so many different points of view.
To complicate things further, it is difficult to get a man to understand something when his salary depends upon his not understanding it.
That's an interesting topic that sometimes gets explored in sci-fi. If an AI is created that is able to learn all knowledge and form it into a single consistent model of reality but it ends up making conclusions we don't want to hear, what are the consequences for humanity and living things in general?
The Hitchhiker's Guide to the Galaxy is almost exactly what Mountain_Skies is describing.
If I remember correctly, Peter Watts has a somewhat more realistic take on this in his Rifters trilogy (under the Novels section here: https://rifters.com/real/shorts.htm), where there's a brain in a box that sifts through loads of information and gives advice to political leaders. The trilogy as a whole is more... deep-sea cyberpunk than particularly centered on the brain in a box, though.
There are problems both on the producing side and the consuming side. Any scientist knows that research is often driven by politics, money or ego.
The bigger problems are probably on the consumer side, though, and not only in under-educated social groups. Here's an interesting post I stumbled across recently: "People who trust science are less likely to fall for misinformation -- unless it sounds sciency" (https://digest.bps.org.uk/2021/08/10/people-who-trust-scienc...)
This confirms my sense that we easily glom on to things that "sound right", and if you're a scientist or engineer "sciency" statements sound right to us. Do we really have the time to dive in deep enough to figure out whether it's pseudoscience?
That article is really interesting and something that, as an academic scientist, is pretty obvious. In theory, we have peer-review to keep ourselves honest, but as the article points out (maybe unintentionally), bad science gets through that process (really, as long as the "data" look believable enough and the "experiments" are methodologically sound and the interpretation of the "data" is also reasonable, it's reasonably undetectable to peer-reviewers).
There are a lot of problems with the way science happens, and a lot of bad data is never found out, because it can be hard to show, definitively, that it was fabricated or manipulated without someone from the lab in question speaking up about it (which probably doesn't happen enough).
At a certain point, though, there's too much information and data, good or otherwise, for any one person to parse. That's sort of why we have journalists: to acquire first-hand accounts and deliver them to a broader audience. The problem really arises when journalists significantly editorialize or disregard conflicting information. The Wakefield paper is a great example of that. Some journalists-cum-pundits' gross negligence with regard to the retraction of that paper constitutes misinformation, but it has proven very difficult to discredit because those actors abuse what we've all agreed is the role of the journalist: to give us valid accounts of actual events.
The study discussed in the article you cited even used a real, but heavily criticized (unknown to the participants) scientific study in their experiment. I think the question is "what is the amount of effort that it is reasonable to expect a lay-person to put into the in/validation of information presented to them?" with the caveat that trust is generally earned over time, but once earned, can be abused (and I use the word abuse very purposefully here, because it is a violation of one's relationship in a harmful manner). Should one be expected to more rigorously critique the statement of a trusted peer because of the potential for abuse?
I am immediately suspicious of any scientist (or expert in a subject) who doesn't say "well, actually, it's sort of complicated..." because it always is. Everything is complicated and confounded by innumerable factors, and requires years of study to even see the full shape of.
But, politics and media do not currently thrive on presenting complexity, so if you're going to run the risk of asking a scientist their opinion, you either select scientists who are willing to dumb down the science, or you ignore everything they say that doesn't match what you already believed.
I have a cognitive bias poster on my wall that categorizes about 1/4 of researched biases as "Too Much Information" [1].
In the context of Democracy, the trouble with "actually, it's sort of complicated", is that there is absolutely no way that all citizens can approach everything this way and still have time for, e.g. the Pursuit of Happiness.
In other words, trust (and trustworthiness) is key. We must delegate to someone, whether expert or not.
Traditionally, organizations with cult-like properties envelop people in a kind of "information bubble" that makes deciding who to trust a tractable problem.
The culture of science does this to some extent, but is unable to compete well for a number of reasons.
Yes, but delegation is always accompanied by principal-agent problems (like moral hazard). Lobbyists ensure that all but the most morally unimpeachable representatives will inevitably place moneyed interests before their electorate.
Is there political counterflow that compromises integrity in some parts of science? Of course there is. For millennia this has been so. From Galileo to Darwin to Haber to Einstein, the political or religious disruptions arising from scientific theory or experiment or technology often prompt someone to argue that governmental policy should not change in the light of new facts or a new interpretation -- not if that change disrupts cherished societal values or impedes vested interests.
It sounds like the author's argument is that political winds WITHIN scientific communities are fomenting bias in their work because they're not satisfied with publishing papers but now want to effect political change. Therefore they tolerate no dissent from an official party line.
Perhaps climate change warrants such circumspection, but I know of no other scientific subdiscipline that does. As such, the rise in politics in ONE scientific subject doesn't justify a book that seems to tar and feather ALL of science.
I suspect a book that attacks only the climate science community was deemed too narrow to attract a broad audience. And of course, exploring that topic wouldn't compellingly break new ground either.
Finally, it seems to me that this is exactly the WRONG time in history to be impugning science or scientists. Without concrete and viable suggestions of how to redress the forces that have broadly compromised scientific integrity (which I doubt the author proposes), chinking away at science's armor can only aid the cause of anti-science, and feed the rising barbarian horde.
> It sounds like the author's argument is that political winds WITHIN scientific communities are fomenting bias in their work because they're not satisfied with publishing papers but now want to effect political change. Therefore they tolerate no dissent from an official party line.
> Perhaps climate change warrants such circumspection, but I know of no other scientific subdiscipline that does.
Anthropology is especially notorious for this, far more so than climate science (!), but you could describe all of the social sciences this way.
To me this article seems to miss the point. Obviously science can't provide objective answers to subjective matters.
The problem we have in society right now regarding science, as I see it, is that we have a significant group of people who disagree about the objective parts.
The people who think the world is flat, the people who think the world's age is measured in thousands of years, the people who think all the world's top climate scientists are part of a massive conspiracy, the people who think COVID is a massive conspiracy, etc.
Our science problem as a society is that those people are mainstream. They're politicians, they're pundits, they're your next door neighbor, and they're all objectively wrong but they use the appearance of science to intentionally spread lies.
Alas, the people most willing to draw bold conclusions from science also put the least effort into being scientific.
Ted Cruz is a good example. He argues that he's a scientist not because he has done science, but because his parents have done science. Thus, he's a "legacy" scientist, and because they emanate from him, his arguments must be scientific.
The man isn't this stupid. But in endorsing such nonsense, his supporters are willing to be. "We don't need no stinkin facts!"
How can reason hope to overcome willful illogic of this magnitude?
One thing that I have been increasingly aware of is people confusing science with utilitarianism. Science investigates truths about the world we live in. Utilitarianism incorporates science, but is built on subjective moral and ethical assumptions.
I see a lot of people who disagree with utilitarian proposals labeled as "anti-science".
2500 years later, Aristotle is still correct, and the three modes of rhetoric must work in concert or you're doomed to fail.
Logos, Ethos, Pathos: you need all three. "This is true. I am trustworthy. This true thing is important."
It is very frustrating, especially to many of my fellow scientists, that bellowing "THIS IS TRUE" as loudly as possible is insufficient. But it is insufficient.
In your role as a scientist, shouldn't providing evidence of truth (or falsehood) be your main concern?
Statements about importance are value judgements that no doubt you have made, but are not really science and are basically political.
An easy example, "covid restrictions curb covid" may be true and you can provide experimental evidence. "We need to impose restrictions" makes a judgement about the overall situation under the restrictions being preferable to the one without them. This is not a scientific call, this is a societal judgement.
People too often pretend that an action immediately follows from some identified causal relationship. Smoking kills you does not immediately imply you should quit smoking, without the intermediate step of "you value your later life and health more than the ongoing benefits you get from smoking" (disclaimer, nonsmoker)
The science community as a whole needs to do the work of all three. The work of an individual scientist may be able to focus on just the one, and their lives are easier if they work in a context that gives them that specificity of responsibility.
But eventually, someone needs to do the work to convince people "science is trustworthy." And for a variety of reasons (both objective and social), scientists should be on the front lines of that work.
The question of "this is important" is tricky because it must involve a two-way discussion with the audience. But there are cases where a scientist must be making that case, e.g. in order to procure funding.
> The science community as a whole needs to do the work of all three.
This is exactly the source of the trouble. Hearing scientists project their value system onto everybody else destroys their credibility. If people felt that scientists were sticking to the facts and avoiding ideology and values, science would be better received all around.
Whether or not the Earth is getting warmer, and the cause of that warming, are scientific questions. The discussion of possible remediations and their costs is also science. Which remediation to choose, if any, is a question of values and outside the scope of science.
Also important for science is being clear about levels of uncertainty and ranges of possible outcomes. Recent global temperatures have little uncertainty. Distant-past temperatures have a much higher level of uncertainty, and predictions about ranges of possible distant-future temperatures have yet more uncertainty. I never hear much about this from scientists or the media that cover them though.
> Whether or not the Earth is getting warmer, and the cause of that warming, are scientific questions. The discussion of possible remediations and their costs is also science. Which remediation to choose, if any, is a question of values and outside the scope of science.
So how do you propose scientists "stick to" the discussion of remediations of "we can remediate X by doing Y at expense Z" without it being portrayed maliciously as pesky interfering scientists saying "we should do Y"?
The problem of lack of trust is hardly the sole responsibility of scientists here.
Do you think discussing uncertainty more is going to raise trust or just be more fodder for the people with non-scientific reasons to oppose action?
I propose scientists "only stick to the science" if and only if everyone else in the world sticks to their wheelhouse as well. Lotta citizens, politicians, and pundits out there without much training in making good ethical decisions either!
As long as the output of a scientist will be interpreted through other people's political lenses, it's fair game for them to frame it politically too.
> This is exactly the source of the trouble. Hearing scientists project their value system onto everybody else destroys their credibility.
> sticking to the facts and avoiding ideology and values
I think a few different things are being confused here, but as far as value is concerned, the choice to do science, to take scientific findings into account and to decide what to investigate are themselves the result of value judgements. There is no fact-value dichotomy. There is no "clean" separation between fact and value (as if value were a dirty word contrary to fact).
This fact-value dichotomy can be tied to the materialist worldview which denies objective value because it presumes a metaphysics that renders the world a kind of theater of senseless extension in space. Any value must therefore be a matter of subjective projection (and therefore delusion, putting to one side materialism's inherent inability to account for subjectivity). But this metaphysics is, to put it gently, problematic. The wish to separate fact from value (which I take to occupy one order, not two) is no doubt further encouraged by liberalism's pretensions to neutrality and the inherent tension within Lockean liberalism between science and liberty.
But I do agree that the actual deciding of policy is not to be left to scientists but to politicians and the like. Scientists are specialists who can supplement our knowledge in specific ways that generalists can then take into account along with other data and understanding when making judgements.
The problem is when people aspire to be scientists because they want to be authority figures rather than find out facts about the world. And then once they become an authority figure as they always wanted they just push their agenda because they never cared about the facts in the first place.
Of course lots of scientists care about facts first, but the scientists with an agenda are much juicier for journalists to interview and write articles about, so they are the ones we see.
(I believe in climate change, I am pro vaccine. Just noting this here since many will think that me having the above view means I am a climate change denier and anti vax)
The problem of correct motivation (and here I would argue understanding is the aim of science, not fact collection per se) is a separate problem, but it, too, is a matter of value judgement (what is the correct aim and motivation for science?). The concern for corruption cannot erase the essential value-ladenness of all activity, including scientific activity. Science is both shaped by and shapes value judgements and is itself suffused by them. You cannot escape from value. If you judge something to be valuable, you will act and shape reality according to that understanding of what is valuable. This is the essential nature of practical reason. All action is determined by value judgements.
> what is the correct aim and motivation for science
I think you misunderstood. I am not saying that they try to discover facts to support them using science; I am saying that they want the credibility of a scientist. You know how people start to listen a lot to the views of a Nobel Laureate regardless of whether it is their field of expertise or not? That sort of authority is very attractive to a lot of people, and they will work really hard to get it. I am saying lots of people go into science because they want that authority; they don't care at all about doing science.
No, I understood you. What I'm saying is that a) your concern for bad motives is separate from the question of whether science is value-laden (which it is), and b) the question of "what is the correct aim and motivation for science?", meant rhetorically, demonstrates that your judgement about what is a bad motive for entering a scientific field itself involves a value judgement.
> demonstrates that your judgement about what is a bad motive for entering a scientific field itself involves a value judgement.
Yes, I judge people who try to corrupt our view of science. I never denied that.
> your concern for bad motives is separate from the question of whether science is value-laden (which it is)
Science having value is exactly why I don't want people to corrupt it. If you agree with me that science has value then you should agree that we should try to stop people from corrupting it.
If you argue that we can't judge who is corrupting, then I'd argue that you are so far out in the clouds with your definitions that we could just as well say a random YouTube commenter is also doing science, that this is a good thing, and that we can't really say YouTube commenters are worse scientists than the people at universities, since that is just a value judgement.
W.r.t. value, the only point I was making is that there is no fact/value dichotomy. It doesn't follow that I am therefore arguing that one cannot make value judgements. On the contrary, if no fact/value dichotomy exists and value is a matter of fact, then it follows that we can indeed make value judgements on par with factual claims.
But what I was addressing in an earlier post was the suggestion that there is a fact/value dichotomy and the notion that problems occur in science when value mingles with fact. I rejected this claim by arguing that there is no such dichotomy and by implication that the diagnosis is incorrect. Questions about corruption are fine as far as they go, but they are not relevant to this thread because they do not address the question of fact/value dichotomy and they presume value judgement.
> The discussion of possible remediations and their costs is also science. Which remediation to choose, if any, is a question of values and outside the scope of science.
This is where I've noticed mainstream political discussion often go off the rails. The best example is when "because science" is used to end conversation on the idea that 100% of the population should wear masks. Science, at least in this case, is clearly not prescriptive, so it can't be applied as a single justification like that. Perhaps science confirms that masks reduce transmission, therefore 100% of the population wearing masks is certainly one valid design. But one could come up with multiple other designs that would be equally confirmed by science to be effective at reducing transmission. So reducing transmission is not the hard part. The hard part is all the other variables that cause consequences in economics, mental health, other areas of healthcare, etc. Each model needs to be tested for its utility across a variety of factors, but that idea is lost with the "science" cancel cudgel.
I felt it was more that what people had problems with was the non-negotiable nature of logical thinkers and their plans (add air quotes as necessary). People expected to be involved but weren't, and the only plans that worked were ones that ignored them. No one liked that.
It just doesn't work. Usually only the scientists know the truth, and when they try to get people to help, no one cares; it took Rachel Carson to show how to use emotion to let people 'understand'.
> But eventually, someone needs to do the work to convince people "science is trustworthy."
It's harder than that though.
Science is complicated, frequently messy and often adversarial, and that is how science makes progress.
There is no single narrative for scientific truth -- even when there is overwhelming consensus, science places high value on the coherent arguments at the margins.
This scientific embrace of complexity has been weaponized against us.
"Society" prefers a clear narrative they can comprehend, and science does not always provide it.
The media used to be relatively responsible stewards of the narrative, but that is very much no longer the case. (not to blame the media -- the media is us)
For an example of how science can end up with competing ideas and differing conclusions from experiments, see the little battle between Steve Mould and Mehdi "ElectroBOOM" Sadaghdar. They each made a few videos trying to explain the Mould effect (the effect where a chain dropped from a jar rises up from the lip as it falls). Each of them makes compelling arguments as to why they're right and the other is wrong, but if not for them competing we'd just be assuming that Mould's original explanation was correct.
This question is not tricky. It is weighty and scary.
If it was tricky, there would be a puzzle you could solve to answer it.
Instead as you say it must first involve listening because people have the freedom to choose what is important to them.
Otherwise, you might assume that people place the highest value on sustaining human life. This can fail if you go to New Hampshire and encounter a person who values liberty more highly than life.
> But eventually, someone needs to do the work to convince people "science is trustworthy."
Yes, and science as an institution has been failing pretty badly at that. I posted this on HN just last week, but it's worth repeating:
1. Science journalism is almost universally terrible, so people already get sold half truths and sometimes even outright falsehoods from allegedly reputable sources. Messaging needs dramatic improvement.
2. The replication crisis has shown that up to 50% of published results in medicine can't be replicated (and up to 66% in social sciences), and there are virtually no incentives to replicate or publish negative results, and too many incentives to data mine/p-hack and publish sensationalized results (in fact, results that fail replication get cited more). There are now some efforts towards correcting this, but it's only just beginning.
I think there's another part that is also underappreciated, which is that the honesty of the public faces of science should be above reproach, and if someone in a public-facing position loses their credibility by violating public trust, they should no longer be the public face.
The best recent example is Dr. Fauci. He has openly admitted to lying to the public on multiple occasions, such as about whether the public should wear masks and about vaccination levels for herd immunity. It doesn't matter whether you think he did the right thing in those cases; he has unquestionably violated public trust and eroded trust in science as a result.
>In your role as a scientist, shouldn't providing evidence of truth (or falsehood) be your main concern?
Strictly speaking no, but that's more of a long digression on epistemology than what I think you mean. (but think of indiana jones here: "archaeology is concerned with _fact_, not _truth_ ...")
My role as a scientist is to work diligently to understand, as best as we are able, the natural world.
Yet, "I will always be conscious that my skill carries with it the obligation to serve humanity by making the best use of the Earth's precious wealth."
The point, I think, is that laypeople and scientists alike fall further and further into the fallacy of appeal to authority: 'a scientist said it's true, therefore it is true!' (for simplicity, true == a fact correctly expressed).
That thing may be true, but the rising disdain for not accepting a conclusion when the evidence isn't presented alongside is worrisome. The difference between science and religion is that science draws conclusions from reproducible research. Yet - especially now - many people take the naked word of 'a scientist' the same way a religious fanatic takes the word of their spiritual leader.
This definition gave me a lot of clarity: an expert is someone who has, can get, can make, or can cause to be made - and presents - evidence that supports their conclusions.
Statements of importance are indeed value judgements. That's why you have to argue that something is universally good (a virtue) in order to get around political gridlock. This does happen quite a lot, but you don't always hear about it. The US happened to approve a $1 billion funding request for Israel's Iron Dome. The vote was 420 to 9.
But that's not a publicly salient culture issue in the US. It's a simple matter of: do you want AIPAC backing you or your opponent in the next race? Easy calculation, no higher principles necessary.
More concretely, there are also concerns with one side judging their own arguments (be they of any of the big three types) as valid without concern for their ability to convince others.
If you treat logos, pathos, and ethos as a self-certified checklist, you are doomed to fail as well. You must provide arguments of each three types that are convincing to others. Above all else, preaching to the choir is (IMHO) the biggest problem in my political environment (USA).
There are many other problems as well, but I think the topic of conversation in this thread is the oratory skills of policy makers, which might explain a few downvotes.
Was going to comment that it is not science that solves political issues, but rather, engineering. Even logos, ethos, and pathos together do not prevail over trebuchet.
I was thinking about this the other day; politics, war and climate. Americans (of which I am one) are very good at looking at political threats in terms of "others" and versus. Democrats vs. Republicans. America vs. ISIL, etc, etc. When it comes to fighting a more amorphous, but equally viable threat, like climate change, we have a hard time grasping the threat. It causes fire damage to California, Oregon and Washington; flood and wind damage to Louisiana, Mississippi, South Carolina, North Carolina, Georgia, Florida and Arkansas (at least); a migrant crisis at our southern borders; and shipment and food supply issues. If a foreign threat were causing these issues, no political force would stop "either side" from stumping to stop it. Yet, here we are, waiting it out. The Republicans denying and the Democrats in endless debate, unable to take action. Both sides unable to take action against an ongoing asymmetric war, waged upon us by a new form of enemy that we have created over the last century.
Well said! As a Canadian living in the US, I encountered the "us vs. them" culture as very subtle but real. It's like everything is a football game here, and that is the only way to frame things such that it gets attention.
"This is true" obviously needs to come from someone trustworthy or it is irrelevant.
This is why I categorically ignore everything from Dr Fauci. He's known to have lied to the public before to influence behavior. Since I'd have verify his claims from some other source anyway, there's no sense using him as a source at all.
He said that masks wouldn't be helpful on March 8, 2020. A month later, New York had a thousand detected covid deaths a day, so people really should have started to wear masks on March 8, 2020, since deaths lag infections by about a month. If they had been giving proper advice instead of telling people that covid wasn't a big deal at the time, then maybe that disaster could have been averted from the start. At the time the danger of covid spread was well documented, based on how quickly it got into Italy, but at the time it was the Democratic party line to downplay the disease to avoid racism against the Chinese, so that was the line Fauci took.
> "There's no reason to be walking around with a mask," infectious disease expert Dr. Anthony Fauci told 60 Minutes.
> While masks may block some droplets, Fauci said, they do not provide the level of protection people think they do. Wearing a mask may also have unintended consequences: People who wear masks tend to touch their face more often to adjust them, which can spread germs from their hands.
What I find ironic is that the people who criticize Fauci for lying about the need for masks are often somehow still anti-mask.
Like, they acknowledge Fauci lying about not needing masks, which would imply that they should be wearing masks, but will now refuse to wear a mask because they think Fauci is lying about needing masks.
I'm sort of tired of being downvoted for saying this, but the data on masks other than well-fitted N95s is pretty shoddy. Lots of p-hacking, lots of motivated reasoning, and the results pre-2020 differ in notable ways from those post-2020.
The situation gets even murkier when you talk about mask _mandates_ instead of individual decision making. The argument that mask mandates are helpful is tough to support in the face of the differences in the delta-variant curve, for example, in different counties in California.
Just to say it: even daring to compare the results in contra costa county and san diego county, california (which have different mask requirements) got me shadow-banned on reddit. The reasoning here is mostly political, not scientific/rational. No one cares what the science says.
A lot of the problem I think comes from messaging and deliberate bad-faith interpretations of messages.
Claims that "Masks slow the spread of COVID" gets interpreted as "Masks stop the spread of COVID", and so when we have mask mandates and yet COVID still spreads, people use that as evidence that masks are worthless.
It's interesting that people can draw opposite conclusions from the same scenario. COVID has continued to spread despite mask mandates. Some claim that means the masks are worthless. Others (including me) would claim that, despite how bad it is, the spread would be even worse without them.
> individual decision making
In most cases, I agree that people should be able to make their own health choices. You wanna eat McDonald's for every meal and walk less than 50 steps a day? Go for it. Hell, snort a few lines of cocaine for dessert if you want to.
But when it comes to a pandemic, it's different. Sure, the vaccines are 95+% effective, and masks might be X% effective, and social distancing is Y% effective, and so on...but when >30% of the population has zero interest in doing any of that, then you can take every protective measure you can (Besides just staying in your house) and still get the disease from some asshole at the grocery store that doesn't care if they spread it.
Also, consider last year's toilet paper shortage and the brief gas shortage a few months ago. Individuals will often act irrationally in their own interests rather than in what's good for everyone as a whole.
To think of it another way, when at a pizza party, you will have some people who take 3 slices of pizza because there might not be enough for everyone so they want to make sure they get their share. Others might only take a single slice because there might not be enough for everyone so they want to make sure as many people get some.
Individual decision making only makes sense if people aren't selfish.
I agree that the data on masks can be interpreted both ways. This suggests to me the effect is small and probably second- or third-order (i.e. masks encourage more distancing, and that's actually what matters).
It isn't just that COVID continues to spread despite mask mandates. It's that the curves look nearly identical in areas with and without mask mandates. And, to show their effectiveness, epidemiologists have resorted to pretty serious P-value hacking.
Separately, I find it hard to get worried for my personal safety because of the 30 percent of people refusing to vaccinate themselves. It's just not that hard to avoid the sorts of places where such people are likely to be. And, being vaccinated and healthy makes it less of an issue for me than, say, the risk of a car accident. Sure, I could pass it on to someone else if I get it, but with reasonable precautions I don't think that's likely at all.
> He said that masks wouldn't be helpful on March 8, 2020. A month later, New York had a thousand detected covid deaths a day, so people really should have started to wear masks on March 8, 2020, since deaths lag infections by about a month.
I don't see how that is necessarily a lie. It could have been the best public health recommendation he could make at the time based on the available information. Research into how best to use masks is ongoing, so I would not expect today's masking recommendations to be the same as tomorrow's.
No, watch the interview. He actually says that wearing masks could actually be worse than not wearing them. In a later interview, he admits the suggestions against wearing masks was a lie because they were afraid of PPE shortages.
It's also not the only lie he's told and admitted to. See his claims about herd immunity, where the numbers kept going up, and when he was questioned on this, he outright said he just gave out numbers that he thought the public would accept at the time.
I don't know which interview you're talking about, but in any event you don't know what his motivation was for saying what he said when he said it. Perhaps he was wrong, perhaps he changed his mind, perhaps he was rationalizing on the fly, or perhaps he lied.
It's clear that what he and other scientists said about masks early in the pandemic certainly changed over time. I think science is like that. People who like simple, certain, unchanging answers can get them from religion or ideology. People who don't mind complexity, nuance, and change are more comfortable with science.
Assuming Fauci was lying because what he said then isn't what he is saying now just isn't logical.
The point of these arguments is to show that public health pronouncements can be political in nature and our chief authorities are not afraid to lie in order to achieve what they believe is a greater good.
It is not the specific content of the lie that is the issue, but the lack of integrity on display. It is used as a retort to "official X declared Y", and is meant to undermine the integrity of official pronouncements in general. There are many who bristled at these initial claims by pointing out (correctly) that promoting "noble lies" is terrible for public health officials and doing so would come back to bite them. For some reason, the medical profession seems to accept noble lies as being justified when the rest of society does not. This goes back to the old saw of doctors lying to their patients about their own health. It's a blemish on the profession, and one that needs to be erased and apologized for ASAP, and IMO, Fauci belongs to that old school and doesn't really get it -- and probably never will.
Also, a protip about your finding it "ironic" that people acknowledge masks were lied about yet don't want to wear masks themselves: the literal meaning of "irony" refers to saying something but meaning the opposite. For instance "Sure, I trust you", when the speaker clearly doesn't. There is also situational irony, which would be when the opposite of what is intended happens. E.g. trying to kill someone by giving them a poison that ends up curing them. So in this case, the irony would be telling a "noble lie" with the intention of saving lives but actually causing more lives to be lost -- that would be the true irony here.
The "I am trustworthy" part is totally taken for granted and too many supposedly trustworthy people lie straight to the public face just to maintain the narrative they are pursuing.
Ethos is the hardest claim to assert. You have to live that one out; your community must, quite literally, bear witness to your actions which earn trust.
It seems that many like to believe and follow leaders who affirm their beliefs even when they are caught lying repeatedly about important things. Posturing loudly and demonstrating confidence seems to trump all of the other principles for many.
It's similar but more emphasized with chimpanzees.
The problem is that we're not chimpanzees. We are smart enough to make very powerful technology, but not smart enough to use it sustainably.
This requirement places too high a burden on one person. Only a saint (or God) could achieve the standard claimed here. Objectivity on topics should be sufficient. Scientists who lose their objectivity cease to contribute usefully to the discussion.
Only God could be Objective; that whole Descartes' Demon thing makes True claims based on empirical observations a lot more fraught than you'd really like it to be, even in the best of cases where everyone agrees on: what language game we're playing today, what "an observation" is, that it was faithfully reported, and so on.
Nobody's perfect; what builds the sort of trust that permits action is a history of publicly doing one's best:
I would like to add something that’s not essential to the science, but something I kind of believe, which is that you should not fool the layman when you’re talking as a scientist. I’m not trying to tell you what to do about cheating on your wife, or fooling your girlfriend, or something like that, when you’re not trying to be a scientist, but just trying to be an ordinary human being. We’ll leave those problems up to you and your rabbi. I’m talking about a specific, extra type of integrity that is not lying, but bending over backwards to show how you’re maybe wrong, that you ought to do when acting as a scientist. And this is our responsibility as scientists, certainly to other scientists, and I think to laymen.
Do that, and you'll earn your Ethical appeal easily enough.
Ethos is very hard, but pathos is harder, especially in secular society.
Let me be clear, I think arguments about ethos generate the most argumentation and heat right now, but the silent killer is really pathos. People feel comfortable arguing ethos. Most people will not argue pathos openly though.
We have a crisis of pathos in our culture. How do you claim that something is important without an appeal to authority? You can't. We live in a culture that is fragmenting its sources of authority; different groups of people have different sources of authority.
Here's a good example. The external dialogue that a lot of conservatives give on climate change is that scientists can't be trusted. The internal dialogue that a lot of them (though not all) engage in goes something like, "The earth doesn't matter. God is going to come back and set things right. So we don't need to worry about it anyway."
Nah, the Pathetic appeal is easy. It isn't sustainable, but the simplest form of Pathos is "Hey, fuck THOSE guys, right? You're better than they are, you're special! Git 'em!", and we're practically swimming in it right now.
I don't know. Ostensibly it looks like pathos is easy right now. Politics seems super intense right now, but it also seems super fake. Mostly I think shows of pathos are fake right now. Getting people to actually do things and act seems like the hardest thing right now. Posting on social media doesn't count.
> too many supposedly trustworthy people lie straight to the public face just to maintain the narrative
This is particularly apropos in the wake of Russell Brand's recent video [1] on FB's fact checkers for Covid-19 vaccine information: funded by Big Pharma, holding a huge financial stake in Big Pharma, and intentionally hiding that funding rather than adding a notification that the fact checkers are in fact funded by, and invested in, Big Pharma.
I am starting to read completely disinterested 3rd party sources more, because weighing the pros and cons of a thing is more challenging when the supposed experts are deliberately concealing material relationships. Some old emeritus professor, already retired, with several PhDs and just a passing bit of information, often gives great criticism of a thing without having any stake in the outcome.
> It is very frustrating, especially to many of my fellow scientists, that bellowing "THIS IS TRUE" as loudly as possible is insufficient. But it is insufficient.
While I understand your argument, I believe there's something more going on.
Taking COVID, for example. From the pro-science side, we get:
"COVID is a dangerous virus. We should take X, Y, Z actions" from somebody with a PhD in public health, medicine, or similar.
And from the anti-science side, we get:
"It's fake. It's the flu. X wasn't perfect, therefore X, Y, Z are all a plot to make you magnetic" from somebody on YouTube with no credential beyond being on Youtube.
I guess what I'm saying is there's something fundamentally broken about how people process information. All the Logos, Ethos, and Pathos in the world doesn't help when a significant portion of the population is brainwashed.
COVID is actually a great example of some of what TFA is about.
Many of the "facts" about COVID are not actually in much dispute. The people who view it as "a dangerous virus" and those who see it as "the flu" are much less far apart than they appear.
Nobody has seriously disputed the R0 values for COVID, nor the (rather) broad range of deaths per 100k cases. What isn't agreed upon is what the implications for policy should be.
Those who view it as a dangerous virus point to its transmissibility and capacity to cause hospitalization, and therefore its implications for public health issues.
Those who view it as "the flu" point to its relatively low death rate compared to other global pandemics of the past, particularly when adjusted for demographics, and correctly note that a given individual's chance of dying from COVID is extremely low (at least in countries with roughly adequate health care systems).
Both can claim "facts" on their side. The question is what policy consequences follow, and that is where the major differences lie. Depending on your perspective, the deaths (and long term illness) caused by COVID are variously worse, better or about the same as the damage caused by policies to contain it. Since making this assessment necessarily involves subjective judgements and questions of morality, it can't be settled by an appeal to science let alone mere facts.
None of this is to say that there are not relatively fact-ignorant people making various cases for certain policy approaches. But we could ignore those people, and the core debate would remain, and it's not a debate about truth or science, but policy.
There are substantial differences (and, I would argue, motivated reasoning) on the subject of mask mandates. To me, that's the area where the political desire to "look like we're doing something" has trumped following the data.
On vaccines, case rates, etc. I agree with what you said.
> I guess what I'm saying is there's something fundamentally broken about how people process information.
As far back as you can go, yes. 2.5M years. But I think the point is not to call what exists "broken," but to understand and then use it to the advantage of humanity. Cult-like behavior is human behavior--i.e. the rule, not the exception. Now how do we use this to advance the species?
For context: I grew up Mormon, which I consider "cult-lite mainstream" on the cult<->religion spectrum. Having gone through that and left the church, I've spent some time deconstructing what influenced me to think the way I did as a believing member.
The best advice I can give: (a) make friends with people who are inside information bubbles [1]; (b) remember that people are motivated primarily by feelings and needs [2], despite what they say; (c) recognize that people often cover what they're really feeling with political, philosophical, and ideological language, which is very difficult for outsiders to decrypt and leads to significant misunderstandings, which in turn further separate people and prevent the friendship that's needed in the first place.
> And from the anti-science side, we get: "It's fake. It's the flu. X wasn't perfect, therefore X, Y, Z are all a plot to make you magnetic" from somebody on YouTube with no credential beyond being on YouTube.
Then why do we see articles saying that YouTube etc. bans people with great credentials who are anti-vax? There are people with great credentials on every side of every argument. Great credentials don't stop you from being an idiot or being wrong. If you believed the first person with great credentials you saw, and that person happened to be anti-vax, would you be anti-vax? Sounds like it to me. I'm pro-vaccine, but I am strongly against blindly listening to people with credentials.
Sorry, should have been more clear... I'll take Fauci's word over somebody on YouTube (regardless of that YouTuber's purported credentials). Fauci has a career in public health. Random YouTuber, I have no idea if they even have the credentials they claim.
And there's a big difference between a scientist saying "people with previous COVID infections have more anti-bodies than those with vaccinations" and Joe Rogan saying "COVID is fake/just the flu".
The first is potentially true, and most of us aren't qualified to either verify it or derive any course of action from it. Regardless of its truth, vaccination is probably the appropriate action for any individual (better safe than sorry), and using the statement as an argument to avoid vaccination is bad policy.
The second is an outright fabrication, yet we still have a significant portion of the population believing that crap.
There are a lot of weird claims in this post. Firstly, Fauci has publicly admitted to lying to the public on multiple occasions. I'm not sure why his credibility is unimpeachable simply because he's a bureaucrat in public health. Trusting him over a YouTuber "regardless of their credentials" seems like a significant overstatement of trustworthiness.
Secondly, Rogan has never said COVID was fake or just the flu, and the reason people assert that COVID is fake is down to motivated reasoning, which is the same reason Fauci thinks he was justified in lying to the public.
Getting people to change their behaviour starts by not dismissing them or dehumanizing them, and trying to steelman their position so they and you fully understand why they want to dismiss COVID. When that's done, it's often clear where you can compromise. Sadly, that's not what we see going on.
> Logos, Ethos, Pathos: you need all three. "This is true. I am trustworthy. This true thing is important."
This ivermectin thing basically proves that logos isn't useful. Ethos + Pathos alone can convince a large population of people.
The Apple "Reality Distortion Field" was never about logic. It was about making people feel good about buying Apple products. That's fine, because Apple has decent enough products (I don't like them myself, but I can see why some others would like them).
But today, we can apply the "Reality Distortion Field" to any subject. Most recently: ivermectin.
-------------
What I don't get: why are people choosing to push snake oil (like ivermectin), instead of pushing the drugs that do work (3 different vaccines, dexamethasone, and monoclonal antibodies)?
Society has developed working treatments for COVID19: dexamethasone cut the death rate in half IIRC, and monoclonal antibodies cut it in half yet again. And yet, people are seeking treatments that straight up have no evidence of working.
I can put papers up for the efficacy of dexamethasone + monoclonal antibodies, and how this cocktail saves the lives of countless people across this country. But then I'm suddenly left in a "Russell's teapot" scenario where I'm apparently supposed to prove a negative when discussing ivermectin (even as countless papers fail to distinguish ivermectin from the null hypothesis).
People's brains turn off. Because today's reality distortion fields / marketing / propaganda are much, much stronger than logic.
---------
Let me tell you how to do things in today's world.
1. Automatically find the people who have the poorest logic. Use ads, memes, and other such "low-quality" discussion points to find the lowest functioning brains. For example, clickbait headlines or "Nigerian Prince" scams. The dumber the argument, the better.
2. Reasonable people will ignore you. The only people who will interact with you are people with weaker argument skills. Spend as much time convincing _this_ group of your benefits.
3. Make it fun: give them memes to share with their friends. Even if it's a bad / crappy argument, that's okay. That's what memes are about.
4. Sit back and relax as your crowd automatically spreads whatever argument you want amongst their friends and family. Now they're doing the hard work for you.
5. Bonus points: get enough people moving as a crowd, and even smart people start to get drawn into the masses. You'll start finding apologists who make better arguments on behalf of you. Keep up with the meme culture and pick/choose the best arguments. Crowdsource your marketing: the memes that become popular are the arguments you want to use.
At no point is "working" on logos actually beneficial to building a RDF (reality distortion field). You can build ethos + pathos simultaneously by just seeding opinions into a crowd through meme culture.
Bonus points #2: Use really, really bad arguments (The world is flat. Let's go to Mars. 9/11 was a hoax. Hydroxychloroquine can save you from COVID19) as practice. The better you get at seeding bad arguments, the better you get at seeding any argument.
> I'm apparently supposed to prove a negative when discussing ivermectin
i wonder if the bias towards ivermectin is because you can take it yourself, whereas for dexamethasone + monoclonal antibodies you probably have to be in serious condition (in a hospital) before you can get them...?
is it possible there are some psychological issues around "going to the doctor" vs "self help/healing"?
> is it possible there are some psychological issues around "going to the doctor" vs "self help/healing"?
It's not psychological. It's simply marketing.
It's no secret that ivermectin / hydroxychloroquine makers are benefiting from this snake-oil bullcrap. It's no different from essential oils or other such snake oil products.
We just didn't care about essential oils 3 years ago because it's fine for idiots to waste their own money on snake oil. But when the masses are tricked into distrusting COVID19 precautions and start spreading the virus around even more, it's a bigger deal.
> It's no secret that ivermectin / hydroxychloroquine makers are benefiting from this snake-oil bullcrap. It's no different from essential oils or other such snake oil products.
interesting... if that's true, shouldn't there be legal repercussions?
> We just didn't care about essential oils 3 years ago because it's fine for idiots to waste their own money on snake oil.
slightly off-topic, but as someone who's suffered severe allergic reactions to people using "essential oils" it baffles my mind that these (and supplements) aren't more strictly regulated...
To your point, American anti-intellectualism continuously undermines the "I am trustworthy" aspect of science. For a country whose current rise to prominence was so clearly aided by technology, and whose future relevance depends on technical innovation in an uncertain global future...
We are so lucky that COVID is only going to kill a million Americans or so (700k and counting, probably 850k by Jan 1st); if it had the death rate of smallpox or the black death... ye gods. Our lack of respect for science is a huge part of vaccine hesitancy and of our key role in continuing the pandemic.
The news media are terrible because for decades they have reduced science and technology either to simplistic caricatures, star trek gobbledygook, or even worse the WELL THEY SAID THIS, NOW THEY SAY THAT.
The general media's constant use of the high school nerd trope for all science and technology proficiency has been a long-term failure in the progress of our civilization. The only reason it hasn't been worse is that such proficiency generally leads to anything from upper-middle-class money to vast wealth.
But that media trope has always colored all science and technology policy as culturally "well, that's what the nerds say, and they aren't cool or fun".
Possibly it goes back to how most ultra-rich """elite""" made their money in America: by control and exploitation. People that have influence and position distrust science, because it is complicated and unknown and preys on their paranoia, and so often for industrialists produces complications to their overall plan of "make money by selling products, offloading the actual environmental cost/impact of those products on ... anyone else".
Maybe I'm paranoid, but it does seem with the mastery of social media propaganda/manipulation by monied interests, sowing distrust in science is now at an all time high.
So the business-friendly right will distrust science because it threatens their money and power. The apathetic center won't like science because it isn't cool. The left... well, scientists are generally white and male so they are distrusted by identity activists, and the rest of the left is too disorganized to rally around science policy effectively.
I am convinced scientific results should not be publicized, publicly promoted, or broadly advertised until at least a decade after publication. That's about the timescale it takes specialist communities to cross-check or falsify results.
Of course, this is exactly the opposite of how science works today. There are poorly written press releases galore, and flashy but tenuous results that seldom hold up to scrutiny. There's a reason a journal like Nature is routinely mocked in scientific circles.
Why give a fixed timeline? Just say results are probably real after two independent replications, and there should be significant pop-science hesitation about publicizing anything that hasn't been replicated yet.
My problem with this article is that it doesn’t know how to draw the line between what is science and what is politics.
Science absolutely exists as a thing on its own, “pure” if you will. And alongside it and with it also comes politics, yes! But that’s a feature, not a bug— both play a role. A very different role.
Science is for everything descriptive and inferential. Politics is for everything prescriptive.
Unless of course we're up front about what our goal/premise is. Then it's absolutely appropriate.
For example: "If we want to minimize the burden of disease from X in Y conditions, we should take Z course of action."
Another example: rather than saying "you should not urinate on the electric fence", a good scientist will say "if you want to minimize the likelihood of unnecessary suffering, you should not urinate on the electric fence."
> Science absolutely exists as a thing on its own, “pure” if you will.
Interesting. I thought it did a great job of illustrating that while the quoted is true, it represents an "ideal" (or, pejoratively put, a "fantasy"): doing any real amount of such ideal science means coordinating with the real world, and that context means it never ends up anywhere near pure and unsullied. It's like the fact that adding an even number and an odd number will always produce an odd number.
Well, science itself is independent of political motivations. Some alien civilization running the same experiments in a galaxy far, far away will get the same results and, eventually, reach the same conclusions about how the world works.
Politics, being concerned with the allocation of limited resources, does two things to complicate this ideal picture:
1) It decides "which science" gets done right now, which means the progress of science may not be uniform in all areas.
2) It applies research findings improperly to serve non-scientific goals.
Notwithstanding, the science remains just science. And crucially, trying very hard to stop or limit #2 from happening, in particular, is essential in making better _political_ decisions.
Political decisions are inherently about how to manage conflict between social groups, and while it's not really my place to judge the quality of any given political decision, it's hard not to judge the quality of a political decision which is not based in the physical reality that we all share.
Politicians have a lot of overlap in their beliefs, and anything inside that overlap gets waved through the legislative process without much comment. We're all left to focus on the remainder, where there isn't much consensus.
"Settling Politics" amounts to everyone agreeing about what to do on every matter all the time. Otherwise politics just isn't going to be settled.
If the question is why science hasn't settled a particular debate, the answer is typically that the facts aren't really in question, and there is a secret debate about who is going to have to wear the costs.
"Settling Politics" amounts to everyone agreeing about what to do on every matter all the time. Otherwise politics just isn't going to be settled.
But it's a misconception to think that the goal of politics is to "be settled". The goal of politics is to make decisions on how to rule the land right now; politicians don't get to wait 15 years and then try again (yes, some do, but that's a different problem).
> there is a secret debate about who is going to have to wear the costs.
It's very sad that this has become a "secret" debate, because this is the heart of politics. I haven't decided if I blame politicians or the sensationalist media more for this, but these debates should not be secret. This is not just a problem in the US, in Europe I see the same: we are stuck with career politicians that are too chicken to either state their real beliefs in public, or to act on those beliefs in the senate.
Scientists see a block of wood, and observe that it has fixed dimensions, is made from a dead tree, and weighs a certain amount.
Politicians see a block of wood and ask, how do we use that wood? Do we build a house, make a fence, or carve it into an ax handle?
So, it's really apples and oranges. Scientists use the scientific method to determine the truth. Politicians are not seeking the truth, but are using consensus to govern the actions of their constituents. The consensus can be based on science and truth, or on irrationalities such as fear and hope.
The most recent issue of Scientific American has a piece that talks about how detrimental increasingly restrictive abortion laws are. That is a great example of science being politicized. From a science standpoint they could just as easily have talked about the tens of millions of babies that have been killed by means of abortion. But the bottom line is that a science magazine should not be expressing an opinion on a controversial moral issue as though it is a matter to be determined by science.
Science is conducted by people, all of whom have preferences. Until the day arrives that science is explored only by preferenceless beings, science and politics will continue to have a significant overlap, much of it at the invitation of scientists.
Good example. I wonder if those in power who make money from coal plants know that they are causing some harm along with some good (providing power) and just prefer to deal with the pollution issues after they are dead :)
I don't know how objective science is, because we bring all of our biases to the table when we interpret data. A young earth creationist, an old earth creationist, a fanatical darwin evolutionist, and an ardent panspermia UFOlogist are all going to view the same data very differently, even if they have identical educational credentials. The holes in evidence for each don't matter, because they are the persuaded.
If we think about truth bending to subjective reality, I am certain that some of the churchmen & scientists opposing Galileo's embrace of Copernican heliocentrism were completely certain geocentrism was the truth, because their own astronomical observations were subject to slavish obedience to doctrine. This doctrine in their heads influenced their own observations. Narrative overwhelmed the senses.
It is the mad among us that have the rare ability to completely discount all external input and see a thing for what it truly is. And that is why we call them crazy.
> I don't know how objective science is, because we bring all of our biases to the table when we interpret data
You are conflating "science" and "scientist" here. The way science overcomes bias is two-fold: 1) by training scientists to be aware of their own biases, and 2) by having multiple scientists, each with their own biases, try to reproduce the same experimental results.
Your artificial example is just that: they all fail 1).
> Attempts to scientifically “rationalize” policy, based on the belief that science is purified of politics, may be damaging democracy.
Could anyone explain to me what the phrase "may be damaging democracy" means? I've been hearing it a lot over the last several years, and I've been wondering what democratic ideal people who use this phrase have in mind. It looks as if independent opinion-making based on one's trusted sources "may be damaging democracy", because most of us are not equipped to identify fake news; open and free exchange of opinions on social media "may be damaging democracy" for the same reason plus due to the tendency of falling into warring tribes; and now this tagline from the article claims that bowing to the experts may also be damaging democracy. Is there anything that doesn't, and what does it all mean?
IMO, it means something like undermining the trust in elected government and law makers.
There's a lot of policy making for which there's no solid evidence, because –crudely said– social sciences are far too sloppy. However, I don't think much policy is based on it. But in the public on-line debate (whether that's run by trolls or not), many appeal to science in their arguments. Perhaps that has some impact?
The article itself has the same vibe as online debates. The first example of biased science is medical experiments using men, and some totally unnecessary anthropomorphization of the reproductive process, which sounds more like virtue signalling than pertaining to the topic. Sapir-Whorf-like arguments complete the picture.
Other arguments in the article point out that scientists can have a rather limited vision, resulting in sub-optimal solutions. But as I said above, I don't think much policy making is largely based on science, and the examples given point as much in the direction of tunnel vision by policy makers as the scientists they consult.
> Is there anything that doesn't
Where trust is absent, everything appears hostile. That would make a good fake Latin quote...
A practical example are the Jan 6 riots in DC and the attacks on the integrity of voting. Both peaceful transitions of power and trustworthy elections are bedrocks of democracy and so attacking them weaken democratic societies. At some point a coup succeeds and democracy is dead.
Weren't the rioters convinced that they were saving the democracy that was about to be undermined by what they considered to be a stolen election? If so, weren't they participating in the democracy?
As a thought experiment, consider protests against fraudulent elections in other countries (Russia and Belarus come to mind, but I am sure I saw some other countries in newspaper headlines recently). Would you consider those protests to be an exercise in democracy?
What saddens me about that business, is that the criticism against certain voting machine companies in the US is considered "anti-democratic conspiracy theories" while the same criticism against the very same companies in other countries is considered "pro-democratic".
The thing is, these companies are running elections like a black box. Princeton University has already shown it is easy to hack Diebold machines, for example, and we had very suspicious cases in other countries. In a state election in Brazil, for instance, voting machines registered more votes than voters, and the numbers of "invalid" votes, absentees, and "blank" votes were all identical; when someone asked for a recount, the government's response was to say the machines must be trusted and to fine the shit out of the candidate who complained, for "education purposes."
Democracies don't die from a single stolen election. The remedy is in the courts to determine if the election was stolen and then in the legislature to fix whatever problems allowed it. Convincing people that democracies must be saved by violent revolt is what kills democracy. The true enemy is lawlessness, not the other half of the country.
Fraudulent elections are happening in countries that are, in fact, no longer democracies and have not been for some time.
I'm open to arguments that the U.S. is no longer a functioning democracy, but the ability to still have mostly peaceful transitions of power and for election audits to come back clean reassures me a little bit.
Or a coup does not succeed and the powers that be start looking for ways how to actually strengthen the trust in democracy.
For example, Swiss constitution of 1848 was a result of a civil war. Its provisions made the division of powers between the federation and the cantons clear and Switzerland has never had a violent crisis again.
I take it to mean damaging to democratic countries or, in a more abstract sense, damaging to the credibility of the democratic form of government in general. A good example of this is the current effectiveness of disinformation in democratic societies. Authoritarian countries do not suffer nearly as greatly from this.
So, instead of the abstract textbook word "democracy" one could rephrase this more concretely as "damaging to the present political situation in our country"?
When put like this, this statement sounds truly conservative (small c, no necessary relation to parties); but I am hearing this phrase from people who are fine with the disruption of the political arena in the name of what they consider to be progress.
Science shouldn't be settling political disputes. Politics is the realm of people's most base instincts and desires - try to step in that and like the man wrestling a pig you'll end up covered in filth and the pig will be deliriously happy.
I doubt the wisdom of people who say - I want to stop global warming so I studied climatology. You're going to open yourself up for a heartache if the plan is to study and learn enough so that you become a voice of authority and then use that voice to control what happens.
It's not like people are terribly confused about what the least damaging course of action to take is. If resources were unlimited there would be little disagreement about what to do. Resources are limited though so we need to choose who gets what they want and who does not. There are messy compromises that can be made. Tricks that can be played. Sometimes there's power enough so that only one side sacrifices. In all of that though Science can inform what choices are available but when it starts trying to make them it will get slapped down.
Alvin Weinberg wrote along these lines years ago in his Science and Trans-Science paper [1] from 1972. Once I understood the concept, I see it everywhere these days.
The distinction exists, but it's also rhetorically insignificant. Nobody is conflating the academic research complex with the scientific method. But just as other definitions have changed over time, everyone knows that when someone refers to problems with science they mean problems with the academic research complex and related topics.
Put another way in modern colloquial English what you call science is almost always called the scientific method. What everyone else calls science is really Academia. When people start publicly calling for other forms of reasoning and problem solving to be taught then your argument will be convincing.
Critical thinking is at an all-time low in Western society.
Not enough people are being trained to think critically nowadays, and the societal pressure that used to push officials to vet and verify facts and positions is diminishing. Until that changes, we will fall further and further into the cult of celebrity/personality, which is fundamentally appeal to authority.
A person reading this MIT article should ask "Why should I believe what this article says?" If the answer is "because it's MIT! Duh!", then that's the problem in a nutshell. MIT is not a substitute for their own critical thinking faculties, nor is an MIT professor/academic/spokesperson/author the absolute arbiter of truth.
I'd actually say critical thinking has seen considerable improvement in the past few years. For one thing, people no longer unconditionally trust the press and government institutions like they used to in, say, the '00s, and instead see them for what they are: little more than Pravda and Politburo. Wars are also a lot less feasible now (thanks, ironically, to the cancer that is social media) than they used to be - people will bypass the media and shit all over the military industrial complex saber rattling. The establishment's control over narratives is at historic lows - that's why you're seeing increasing censorship - "consent of the governed" is getting a lot harder to manufacture. I'm pretty sure NYT would not be able to sell the Iraq war to the public in 2021. I like that. I wish there was a less society-damaging way to achieve the same effect, but there doesn't seem to be one.
I've seen a lot more blanket skepticism and cynicism, but these new views are often as uncritical as the credulity they replaced (and as easily exploited).
I haven't really seen an increase in people doing the hard work of gathering facts, considering new perspectives, and challenging their own assumptions or prior beliefs. It's more like switching from uncritical consumption of NYT to uncritical consumption of YouTube.
I did like your idea that this could be a good thing though, not sure I believe it yet.
If all we get from this is the impossibility of droning children in the Middle East, it's a worthwhile tradeoff in my view. But I feel like we'll get more than that, in time.
We've kept a lot of drones in the region, and have continued using them (though at a reduced pace). I hope you're right that it will become an impossibility but I really doubt it.
These are all very good points. The ability for misinformation to propagate means that manufacturing consent has become a lot harder. As we've seen with COVID, that has some downsides, but it also has upsides because selling war is going to be a lot harder. Every tool has light and a dark side.
Think of it this way: the establishment is merely no longer the sole source of misinformation. Misinformation was always easy to propagate. Easier, in fact, I'd say. See e.g. the Iraq war - 100% disinfo, parroted uniformly and enthusiastically by the entirety of the mainstream press.
Except in the past you could create (via mass media controlled by, quite literally, five people) an airtight, completely impenetrable narrative and feed it to the public, and now the public can get both the information, and conflicting disinformation elsewhere. Oops. Bet the CIA did not think of that when they helped create Twitter and Facebook.
I didn't read the article in its entirety and this comment isn't directed at the content of the article.
Science itself has been politicized where the pursuit of truth has become secondary to the pursuit of funding and the alignment with the political agendas du jour. When I was younger my faith and trust in science was quite high. As I have aged and seen more of humanity and how it permeates all aspects of our existence I don't trust science like I did when I was younger. Now all I see are the motivations of those who are doing the "research".
Having studied statistics in college with the express intent of understanding how it is used for scientific studies, I am well aware that with enough data you can get any statistical result you want. Even better, vaguely word it so it resonates with mainstream media and still gives the authors an out with their peers.
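The "any result you want" failure mode is easy to demonstrate: run enough comparisons on pure noise and some will clear an uncorrected significance threshold by chance alone. A minimal sketch in plain Python (the crude standardized-difference statistic and the 1.96 cutoff are my own illustrative choices, not from any particular study):

```python
import random

# Simulate "testing until something looks significant": compare many pairs
# of groups drawn from the SAME distribution (so every true effect is zero)
# and count how many clear a naive p < 0.05 threshold with no correction
# for multiple comparisons. Roughly 5% will "succeed" by chance alone.

random.seed(0)

def standardized_diff(a, b):
    """Crude standardized mean difference (illustrative, not a real t-test)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    return abs(ma - mb) / ((va / n + vb / n) ** 0.5)

trials = 1000
hits = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if standardized_diff(a, b) > 1.96:  # roughly a two-sided p < 0.05
        hits += 1

# With 1000 null comparisons, expect on the order of 50 spurious "findings".
print(f"{hits} of {trials} null comparisons look 'significant'")
```

Run a study with enough subgroups and outcome measures, report only the comparisons that cross the line, and you have a publishable "effect" manufactured from noise.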
> Even better, vaguely word it so it resonates with mainstream media
This explanation doesn’t resonate with me. Research most often has incremental results that need to be carefully qualified. Isn’t the far bigger problem that mainstream media takes subtle research results and “simplifies” them for the public by adding certainty and often mis-interpreting the results completely?
Political agendas have recently been systematically trying to erode trust in science. (Because science and truth do threaten some politicians.) The idea that science can't be trusted, as the high-level summary, is exactly what some people want, and it seems to be working. But what is the alternative? We have nothing better. The point of science is to try to protect against motivation and agenda, and it does work sometimes. Even when people are motivated, when the methods are reproducible and the results are peer-reviewed, that does help filter out some of the badness. And if it's not enough: what should we do to improve it?
Media distortion is definitely a problem too, but it's not the only problem.
Solar geoengineering is the hobbyhorse I usually use as an example of this. It's an open secret (see e.g. https://www.nature.com/articles/d41586-021-01243-0) that most scientists in the field won't research it, partly due to safety concerns but partly because they think that decarbonization is the right policy and they don't want to risk "detracting from efforts to rein in greenhouse-gas emissions". Maybe that's the right judgment, but it's hardly free from motivation and agenda.
I think that you will find that science has been pretty much that way since the beginning. Can't do science if you don't have funding. If anything, this is simply more transparent these days.
Grant review sessions most definitely have some amount of cronyism, and that famous luminary in a field will almost certainly get funding even if their grant application is worse than that of somebody with fewer citations or younger and with less of a track record. But there's awareness of it at least. And you'll find scientists on Twitter being very open about it, whereas before you'd mostly only find it at the pub during conferences.
There's a great passage from ET Jaynes that I'm having difficulty finding right now. He was a physicist from the 1940s on, and as a young graduate student he talks about how he had to be very careful about what he studied, because if it had the potential to contradict one of the big names of the field, then it could tank his entire career before getting started. If he hadn't "played ball" early on, we'd never have gotten his later Bayesianism.
Science is always a human process, so there will be politics to some degree. Politics can only be minimized, never eliminated entirely.
Science used to be a self-funded hobby of the upper classes. Then came the tenure system, where people at least had safe and stable employment. Nowadays there are legions of precariously employed scientists desperately trying to hawk their science.
There's also just an extreme naivety when it comes to the definition of "politics." If you genuinely think that your position is morally and factually correct, you'll pursue that position.
Now, imagine that your position truly is objectively factually and morally correct. (I understand things generally don't work this way, but stick with me for the sake of example.) All you have to do is find a vocal group of people who disagree with the morally and factually correct position, and you can now call it a "political stance" which someone can be "biased towards." (A simple example might be flat-earthers.)
Now of course, this same process can work out in nearly any permutation: the mainstream group could be pursuing an idea which they believe is morally and factually correct, but of course simply be wrong. And then the protesting group, calling out the politicization of the issue, could then be the correct group.
It must also be stated that the closer you get to the hard sciences, the less any of this politicization works. It's worth noting that things such as semiconductors and computer chips all rely on science, and no one is politicizing whether they exist or work. What does get politicized is a bit more predictable: questions about the causes of society's ills (i.e., the social sciences), questions about the roles of men and women, questions about public policy.
I suppose the point I'm trying to make is that it's quite easy to call one man's truth a "political" ideology. If you poll enough people, nearly anything is "just a point of view" which could be construed as political.
I don't see it as politicized science, but as abuse of science by politics. Just as when religious speakers abuse science for their purpose, science doesn't become religious, analogously when politicians abuse science for their purpose, science doesn't become political. And this abuse isn't new, politicians did it for ages, it just hit the fan now.
While I see that funding will tend to constrict the research around the desires of the funders, it's worth considering that 'science' has also expanded into areas that don't deserve the title.
The softer disciplines are particularly susceptible to the style of the day but lust after the cloak of correctness (and status) that hard sciences wear.
> Having studied statistics in college with the express intent of how it is used for scientific studies I am well aware that with enough data you can get any statistical result you want.
Just doing what a good pseudoscientist does: first dismiss any rigorous logical tools...
I'm starting to see anti-scientific-method memes in the dark corners of the internet. I mean this literally: they suggest that the scientific method is a tool of the progressive left, and that the proper thing to do is reject it. They specifically call out the "I fucking love science" language used in insipid places like Facebook, and more broadly must be referring to phrases that get cast about such as "follow the science."
Of course nearly everything is politicized currently, and science has certainly been politicized in the past, (Galileo, Scopes Monkey Trial, Creationism in schools, etc.) but that doesn't make this latest round any less troubling.
I want to be clear here that I'm not suggesting that science is "left" or "right," or that the left is blameless here. (They're definitely not blameless.) That said, I do think it's very interesting to see a direct attack on the scientific method itself, which, at least to my eyes, seems new. For example, in the old Creationism debates of the 90s and early 2000s, it was the certainty of specific scientific evidence that was brought into question. Perhaps that was only because the debates had to happen in the public eye and have some air of validity, though.
The quotes you provide are not anti scientific method. They're meant to mock unthinking obedience to the authority figure science has become. The scientific method is great!
'Follow the science' means don't question me I'm on the side of science.
Quotes like 'I fucking love science' belie a tendency to get so caught up in science's many miracles that one can be led along by corrupt institutions claiming scientific credibility.
The two quotes are used in the 'dark corners' you mention to mock naivety in the belief of purity in science.
I don't disagree with the criticism that people claiming "I fucking love science" and "follow the science" are often (usually?) themselves not particularly scientific. On lousy image boards, very clever points do exist, but they're mixed in seamlessly with very poor points.
Yes, when a museum puts out a poster saying the scientific method, rational thinking, is akin to whiteness... you can be well certain that the very concept of science is being undermined as well as weaponized. I mean, someone unironically worked on this poster, its content, and its layout, probably spending several tens of hours looking at it on their computer screen, and then said to themselves and their peers, "this is fine".
That's also the reason for the pushback from the right. It was actually the regressive left that started poking at it and opened Pandora's box without thinking about the consequences of doing so. Now the whole scientific method is up for debate.
Not just the scientific method, but base reality itself.
When I was growing up the general idea was that there was a base reality and that we can perceive it to varying degrees based on our senses and other tools (e.g. electron microscopes). Postmodernism questions even the idea of a base reality and when you squeeze that lemon weird things shoot out like "lived experience" being more important than what actually happened.
Interestingly, to these same people if your "lived experience" is something you felt in church then you're just a moron who doesn't follow the science. But if your lived experience is within a set of approved narratives, it's 100% fact and we are allowed to riot over it.
I also see stuff on the left calling objectivity a tool of "white supremacy." So yeah, very dangerous. The problem with science is that it shows the truth that both sides wish wasn't there.
Are you aware of how often science and medicine has been used specifically to repress non-white populations? I'm not saying that's an inherent trait of science, but it can easily be employed in that way if people aren't actually aware of the biases people hold implicitly.
The history of psychology pseudoscience being used to validate racial bias is undeniable. I'm sure there are groups on the far left throwing the baby out with the bathwater, but by and large the left complains about pseudoscience being used as a tool of white supremacy.
> The history of psychology pseudoscience being used to validate racial bias is undeniable.
At the time it was just called "science". Then new people entered the field, called the old things pseudoscience, and started the cycle anew. I'd argue that not much has improved: the social sciences were bad back then and are just as bad today, and most of what they preach today will be seen as totally wrong, ignorant, and dangerous by the scientists of the future, just as we view those of the past.
This makes it sound like the scientific method is somehow soft or relative. It isn't. Short circuiting the scientific method was pseudoscience then, it's pseudoscience now.
And in the middle you have the people who say they "believe the science" and are religiously unwilling to accept anything that isn't orthodoxy. Less dangerous in most cases, but kind of ironic.
That is post-modernism [1], the tendencies of our era. Boghossian's commentary [2] published in 1996 is still relevant these days, though it would need to be put in context.
> Postmodernists like to respond [...] that both claims can be true because both are true relative to some perspective or other, and there can be no question of truth outside of perspectives.
> Postmodernists like to respond [...] that both claims can be true because both are true relative to some perspective or other, and there can be no question of truth outside of perspectives.
I wonder who the 'postmodernists' mentioned here are; for example, Foucault, who is often grouped under that umbrella, went to pains to say that just because multiple readings are possible, it does not mean that they are all valid from a single point of scrutiny.
It's also rather strange that the talk of 'postmodernists' treats it as if it were an academic movement rather than the state of the world today (e.g. see Azuma's analysis of otaku in Japan as symptomatic of the postmodern condition; the increasing reliance on trope and categorized personalities in modern media compared to the grand narratives of modernist literature). Theorists of postmodernism tend to argue on four fronts: why we're living in a postmodern world, how we got here, what that means for meaning (political, religious, cultural), and how to deal with it.
The people who usually decry 'postmodernism' are in general perfectly happy to live in the world that word describes, with its media and piecemeal, disconnected consumption and purchase of images and simulations lacking any real narrative. Even more absurd is the tendency for some of the same people to resist narrative (e.g. "no politics in video games").
Whether truth is relative or not has no effect on politics, because truth is simply not a concern there. The point of political claims is political interests, not truth, so they can't be defeated by walking and talking about truth.
That’s interesting - do you have some examples? I’ve seen quite a few similar memes, including some which explicitly support the scientific method, by contrasting it with what they claim to be politically motivated cherry-picking, “scientism”, etc.
I've seen nobody criticize the method in the abstract. I agree with the grandparent poster that that's troubling, if you've seen it.
What I've seen from the right is a skepticism of the institutions. I'll admit I share this sentiment to an extent, though my perspective is as a mostly ignorant outsider, trying to figure out who and what to trust.
I have seen criticisms of the peer review system, though they came from Eric Weinstein, not a right winger. And if I had to guess, he is concerned about climate change. I don't understand the peer review system well, but it makes sense to me that, if badly designed, it could go horribly wrong.
Do you think it is that people truly believe the scientific method is wrong or do you think it is something else?
I have a theory (I'm a programmer, not a psychologist) that with the pandemic there are a lot of emotions (fear, anxiety, anger, unknowns, etc.) floating around people's heads that get all jumbled up, and they grasp for something that they can cling to which, however illogical to many, calms those fears to a manageable level. And then they find things online which support those ideas, and they go deeper with them because it calms them to have "answers".
I see headlines of similar anti-vaxx, etc. things from US/EU and wonder if this is a western phenomenon? Are people seeing similar sentiments rising up in say Japan? Or other non-western countries?
Yeah but is it? As someone who considers himself at least culturally left-leaning, I'm definitely seeing a worrying trend of moving away from the conviction that data, numbers, science, logic are valuable and desirable tools to interpret the world.
The fact that one of the catchphrases of the alt-right is "facts don't care about your feelings", as idiotic as that is, should really make you stop and think about what kind of worldview is being pushed.
> I'm definitely seeing a worrying trend of moving away from the conviction that data, numbers, science, logic are valuable and desirable tools to interpret the world.
My favorite example of some of the fallout resulting from this trend: the vilification and abandonment of nuclear energy by so-called environmentalists.
I suspect it's the reverse. The left latches onto some concept and fights for it (e.g., "don't kill cute animals!!"), and the extremists never change. The right builds a model of the existing world and fights to keep it. Scientists work to find new theories: a new concept that looks so good! So scientists drift toward the left, and when they find the theory doesn't work, they go back toward the middle. But sometimes they walk too far to come back...
I think we should punish public figures for getting things wrong and prescribing things wrong.
A good example was mask mandate and outside gatherings.
Everybody, from politicians and officials to scientists and celebrities, got it terribly wrong with "stay home" (which turned out to be unneeded, because the risk of outdoor infection was so small).
So until those people have been punished they must be barred from giving any more advice.
Btw - this is not about scientists doing pure science, but about when they cross the border into policy. Wrong advice should have bad consequences for the people giving it.
As populations, we often start with good ideas; it's the implementation (through multiple industries) that often perverts (or ruins) an idea's impact.
Our legal system is an example of what happens when you take good ideas and mutate them for months or years between many parties, then produce overly complex implementation guidelines that the actual enforcement parties find ways to cut corners on to reduce workload.
Building codes, police, healthcare, education, pollution, the list goes on and on...
A core part of the issue is statistical literacy and statistical expertise. A surprising number of scientists suck at stats. This leads to all sorts of shifty analyses and conclusions, often about statistical significance or causation. People who appeal to science as objective tend to not understand how much freedom there is in choosing analyses. The most overconfident seem to know just enough stats to be dangerous.
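The "freedom in choosing analyses" point is easy to demonstrate. Here is a toy Python simulation (all numbers made up, the "data" is pure noise) showing that if you test enough hypotheses, a few will look "significant" by chance alone:

```python
import random
import statistics

# Rough illustration of the "garden of forking paths": correlate pure
# noise against pure noise enough times and some correlations will
# clear a nominal 5% significance threshold anyway.
random.seed(0)
n = 100          # observations per variable
trials = 200     # independent "hypotheses" tested

def corr(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

outcome = [random.gauss(0, 1) for _ in range(n)]
# Approximate two-sided 5% cutoff for r under the null (large-n normal approx).
threshold = 1.96 / (n ** 0.5)

hits = sum(
    abs(corr([random.gauss(0, 1) for _ in range(n)], outcome)) > threshold
    for _ in range(trials)
)
print(f"{hits}/{trials} spurious 'significant' correlations")  # expect roughly 5% of trials
```

Pre-registering a single hypothesis, or correcting for multiple comparisons, is the standard defense against exactly this.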
Rationality was hip at the end of the 19th century, when the intelligentsia believed religious dogmatism was the root of all evil. Rationalism was the logical counter-proposal.
The early 20th century proved that humans can also commit atrocities and form dogmata without religion.
Later thoughts relativized rationality, which was a necessary step in my opinion.
We rarely see rationality today even if many try to claim it for themselves.
I would hardly call myself anti-science, but I (and I imagine many on this forum) get frustrated at bad science. Things like shampoo adverts, lottery number predictions.
So I can see how, for some people at least, statistics which measure the probability of you dying from COVID get lumped into the same category as the probability of England winning the World Cup. It seems like the whole thing is made of arbitrary guesswork (though I realise it isn't, at least in the case of the former).
Then there is the problem of the media mis-interpreting science. So we have the never-ending compilation of headlines from "Chocolate/Coffee/Bacon is good/bad/irrelevant for your health". Again, these are much more about getting views than sharing information, but there's a consequence of sowing mistrust in science.
And then we just have people not quite understanding complex statistics. "Nate Silver predicts 90% chance of Biden winning election" seemed to be interpreted by many as "Nate Silver predicts Biden will get 90% of the vote". There's been loads of this around COVID stats. It confuses the hell out of me - I've found myself listening to the same episode of "More or Less" twice, in an attempt to really understand it. So I search for a simple explanation and stumble on someone giving a simplistic - but wildly wrong - interpretation, but I'm too stupid to realise it's wrong.
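To see concretely why "90% chance of winning" is nothing like "90% of the vote", here is a toy simulation (all numbers invented for illustration): a candidate expected to win only about 54% of the vote, with a few points of polling uncertainty, still wins the overwhelming majority of simulated elections.

```python
import random

# Toy illustration (numbers invented): a forecast giving a candidate a
# ~90% chance of winning is consistent with that candidate expecting
# only ~54% of the vote, once polling error is accounted for.
random.seed(1)
sims = 10_000
# Model the final vote share as normally distributed around 54%,
# with a 3-point standard deviation of polling error.
wins = sum(random.gauss(54.0, 3.0) > 50.0 for _ in range(sims))
print(f"expected vote share: 54%, simulated win probability: {wins / sims:.0%}")
```

The win probability is a statement about how often the candidate clears 50%, not about the margin by which they do.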
Finally, many reporting on science use it to justify their political views. So we have "climate change stats predict sea level will rise by x%" (science) followed by "therefore it is scientifically proven that we must spend $x billion" (politics). Clearly, the first is a different type of fact than the latter, but if you disbelieve the second you may find yourself doubting the first as well, because they're asserted by the same people. When the assertions are more subtle, this is really difficult.
As a Christian (more of the Benjamin Franklin variety than Joel Osteen), I find the trend to set up science as some kind of ultimate authority on everything to be just as troubling as those who use The Bible for the same purpose. Ultimately, both of these rely on the same flawed human tendency to want simple answers to complex questions. As so many have said before me, science is about "what and how", religion is about "why", but the questions and answers are just as complex in both cases.
Predictive power, the capability of making accurate predictions, should be the measure of what we call "science". E.g. physics is science, but economics, psychology, sociology are not. Today we have modern equivalents of the astrologers and tea leaf readers of old.
The difference between you and a scientist is that the scientist very well may be wrong, but at least the scientist tries. If you don't like that uncertainty, Twitter is full of people who are very, very sure.
Is this coming from the same place as articles arguing that we should just give up trying to deal with COVID-19 and let people die, because yeah, it's a shame, but that's the cost of doing business?
From my perspective, a lot of powerful and also impressionable people are wrongly applying the label of "science" to the field of "risk-management".
As an example, science can objectively tell you what impact a virus or a vaccine is believed to have on a human body.
Science can not tell you what decision to make in light of the facts that the scientific-method produces. That decision-making process is the field of risk-management where everybody weighs the known facts and decides if the pros to some action outweighs the cons of that action.
Saying something like "Science tells us that you must wear a mask and take the vaccine" is a completely false statement. That is not science. That is just one individual risk-management assessment that may or may not be identical to another person's.
Science qua science is also value-free. The scientific-method cannot tell you what values in life are worth pursuing and taking risks for. Maybe living in the most free way possible (without vaccines or masks and avoiding decades of a medically paranoid health-state dystopia) is something that some people find worth risking their lives for. Science qua science cannot state that pursuing a subjective value is a good or bad idea.
I'd have expected an article like this to include discussion of Naomi Oreskes and Erik Conway's "Merchants of Doubt", which describes how 'respected scientists' have prostituted themselves to the fossil fuel sector (denying fossil-fueled global warming), to the tobacco sector (denying links between smoking and cancer), to the agrichemical industry (denying the link between pesticide use and bee/insect declines), and so on. It's a very long list and a very sobering read.
Another good one is "University Inc.: The Corporate Corruption of Higher Education" (Jennifer Washburn), discussing how Bayh-Dole and exclusive licensing, along with the infiltration of corporate managers into research universities, have fundamentally altered norms in academic research, to the detriment of scientific integrity.
This kind of thing has happened in other countries, most notably in the Soviet Union under Stalin when Lysenko came to power. Fauci actually has some similarities to Lysenko, i.e. being primarily political creatures seeking to retain control of their institutions by whatever means available.
Scientists don't just prostitute themselves to big businesses. Often they do it for small amounts of grant money provided by ideological entities. A scientist's price can vary wildly.
The book's basic premise is correct, but it's too narrowly focused on influence over science by groups the author already dislikes. Of course this is often handwaved away as not being relevant because "the good guys" have less money to pervert things than "the bad guys" do, which leads to "it's ok when we do it because our motives are pure", which, even if true, doesn't change the fact that it's still another perversion.
> "In a popular YouTube video from a few years ago, for instance, astrophysicist Neil deGrasse Tyson claims that America’s problems stem from the increasing inability of those in power to recognize scientific fact. Only if people begin to see that policy choices must be based on established scientific truths, according to Tyson, can we move forward with necessary political decisions."
Neil is a blow-hard. Due to his own lack of humility and skepticism, he is precisely the worst public figure to represent science. His prescription is a recipe for dark ages totalitarianism.
> "Scientizing public controversies often prevents the recognition that people without science degrees often have important contributions to make."
Yeah. Because politics is about many people coming together, sorting out their needs and desires, and taking action. Not only the high priests.
> "Decision makers expect certainty, whereas science is best at producing new questions."
Decision makers don't expect certainty (they are good at dealing with uncertainty). Rather, decision makers (all of us) produce certainty when we make a decision, by coming up with the current best story about how the world works, what we need and desire, and how we might attain it together. Science adds important information to our current best story of how the world works.
> "Scientizing policy privileges the dimensions of life that are easily quantifiable and renders less visible the ones that are not."
Are we tracking case numbers? What about the fact that 4th graders are now at a 2nd grade level of education? Divorce rate? Depression? Suicide? Social cohesion? Incomes? Crime? Nope, just case numbers. Carry on!
> "Political scientism starkly divides societies into friends and enemies, the enlightened and the ignorant."
The vxd and the unvxd. Ask yourself a simple question: how far away are we from recommending sending people away to training camps so they can learn the proper gospel of science? Or from simply letting them waste away in a ghetto or gulag? Alternatively, how far are we from inviting those we disagree with over for dinner? How far from a warm embrace of someone wearing a red hat are you? Which are you closer to?
> "In a culture dominated by political scientism, citizens and policymakers forget how to listen, debate, and explore possibilities for compromise or concession with one another."
I recommend we read Hannah Arendt for an understanding of what politics must be for us. This essay points to a fundamental misunderstanding, in my opinion: that politics is about making "the best decisions", when really it's about making "the best decisions we can make together".
From Stanford Encyclopedia of Philosophy:
This capacity to act in concert for a public-political purpose is what Arendt calls power. Power needs to be distinguished from strength, force, and violence (CR, 143–55). Unlike strength, it is not the property of an individual, but of a plurality of actors joining together for some common political purpose. Unlike force, it is not a natural phenomenon but a human creation, the outcome of collective engagement. And unlike violence, it is based not on coercion but on consent and rational persuasion.
Arendt maintains that the legitimacy of power is derived from the initial getting together of people, that is, from the original pact of association that establishes a political community, and is reaffirmed whenever individuals act in concert through the medium of speech and persuasion. For her “power needs no justification, being inherent in the very existence of political communities; what it does need is legitimacy ... Power springs up whenever people get together and act in concert, but it derives its legitimacy from the initial getting together rather than from any action that then may follow”
#EvidenceBasedPolicy is a worthwhile objective even if only because the alternative is to just blow money without measuring ROI at all [because government expenditures are the actual key to feeding the beast, the economic beast, the...].
What are some examples of policy failures where Systematic review and Meta-analysis could have averted loss, harms, waste, catastrophe, long-term costs? Is that cherry picking? The other times we can just throw a dart and that's better than, ahem, these idiots we afford trying to do science?
Wouldn't it be fair to require that constituent ScholarlyArticles (and other CreativeWorks) be kept on file with e.g. the Library of Congress?
Non-federal governments usually have very similar IT and science policy review needs. Should adapting one system for non-federal governments be more complex than specifying a different String or URL in the token_name field in a transaction?
When experts review ScholarlyArticles on our behalf, they should share their structured and unstructured annotations in such a way that their cryptographically signed reviews - and highlights that identify and extract structured facts, like summary statistics such as sample size and IRB-reviewed study controls - become part of a team-focused collaborative systematic meta-analysis. That meta-analysis should be kept on file and regularly reviewed with regard to e.g. retractions, typical cognitive biases, failures in experimental design and implementation, and general insufficiencies that should cause us to re-evaluate our beliefs given all available information meeting our established inclusion criteria.
We have a process for peer review of PDFs - and hopefully of datasets, with locality for reproducibility and unitarity - which purportedly helps us work through something like this sequence:
Data / Information / Knowledge / Experience / Wisdom
We often have gaps in our processes for supporting such progress in developing wisdom from knowledge, which should be predicated upon sound information, data, and then experience; and so bias creeps in.
Basic principles restricting the powers of the government should prevent the government - us, we - from specifically violating the protected rights of persons; but we have allowed "Science" to cloud our judgement in application of our most basic principles of justice - i.e. Life, Liberty, and the pursuit of Happiness; and Equality and Equitability - and should we chalk the unintended consequences up to ignorance or malice?
More science all around: more data literacy, i.e. awareness of how many bad statistical claims are made every day all around the world, is good, necessary, and essential to media literacy, which is how we would be forming our opinions if we didn't have the better tools for truth and belief that science provides.
"What does it mean to know?" etc.
Logic, Inference, Reasoning and Statistics probably predicated upon classical statistical mechanics are supposed to bring us closer to knowing: to bring our beliefs closer to the most widely observed truths.
Which Verifiable Claims do we trust? What studies do we admit into our personal and community meta-analyses according to our shared inclusion criteria?
"Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)" is one standard for meta-analyses, for example. http://www.prisma-statement.org/ . Could the bad guys or the dumb good guys lie with that control in place, too? Can knowing our rights - and upholding oaths to uphold values - protect us from meta-analytical group failure?
Perhaps STEM (Science, Technology, Engineering, and Math) majors and other interested parties can help develop solutions for #EvidenceBasedPolicy?
This one fell flat. Maybe it was the time of day? The question should be asked every year, at least, eh?
"Ask HN: Systems for supporting Evidence-Based Policy?"
https://news.ycombinator.com/item?id=22920613
>> What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?
>> Are they open source? Do they work with linked open data?
> I suppose I should clarify that citizens, consumers, voters, and journalists are not acceptable answers
The headline is an easy thing to explain [0], but the article has nothing to do with the headline.
[0] to wit:
You cannot resolve “ought” questions with “is” facts,
Political disputes are almost universally clashes of “ought” beliefs (even when rhetorically framed as being about “is” facts to convince people of different “ought” beliefs to sign on)
Science addresses “is” facts.
Therefore, science cannot resolve most political disputes.