I agree that the goals are worthwhile, but requiring every proposal to include this is neither efficient nor especially effective. They should take all the funds and time spent on this every year across every award and just fund programs specifically designed to attract inner-city kids to science, or to funnel talented, low-income high school students into mentoring, advanced classes, etc.
I would be happy to spend time mentoring URM students, etc. But it'd work a lot better if others managed such a program, thought about how to attract them, and so on. Specialization is good.
>They should take all the funds and time spent on this every year across every award and just fund programs specifically designed to attract inner-city kids to science, or to funnel talented, low-income high school students into mentoring, advanced classes, etc.
Or just pay for decent, functional K-12 schools in non-rich districts without housing bubbles?
>just fund programs specifically designed to attract inner-city kids to science, or to funnel talented, low-income high school students into mentoring, advanced classes, etc.
And how do you find out which programs successfully do that without studying them?
That seems like a reasonable middle ground to me, though I didn't have any problem with DEI. If inner-city kids are underrepresented by 50%, then fund according to %population * %underrepresented: at 25% of the population, that would be 12.5% of funding for programs to increase participation (I don't think it would be that high). Maybe divide that by N if you have N different groups you would like to see better represented (e.g. rural kids). The per-proposal requirement is just performance; funding would be more effective. I expect anti-DEI folks would like the funding effort even less than the performance effort. It's obvious they believe diversity itself is part of the problem (whether admitted or not).
> they believe diversity itself is part of the problem (whether admitted or not).
they admit it. President Trump stated unambiguously that the recent crash and tragic loss of life was due to DEI hires, even though all the pilots and air traffic controllers involved were able-bodied white (presumably heterosexual and cis-gendered) men.
> Asked by a reporter how he could blame diversity programmes for the crash when the investigation had only just begun, the president responded: "Because I have common sense."
I listen because he controls the most powerful army in history and can crush my country's economy if we don't allow his gang of oligarchs to mess with our democracy.
When the US sneezes, the whole world catches a cold.
Sure, you listen in the same way that you listen to the demands of an armed gunman in a store.
But you can't engage with trying to find any sense in it. His administration does not live in the same reality that anyone else does. Repeating the lies only gives them undeserved legitimacy.
I rather thought my original post was dripping with sarcasm, but I understand that it doesn't translate as well in text. I quote the orangeführer and his sidekick to bury them (in ridicule), not to praise them.
The way Trump works is that he's careless with details, entirely on purpose. Because it causes opponents to dispute the details he gets wrong, which in turn allows him to frame the debate. Compare this to someone who says something indisputably true, which no one can counter, and so then opponents ignore it and talk about something else instead.
It also allows him to exaggerate, because if you say "we got the best numbers ever achieved in history" but it's actually "just" the best numbers in a century, opponents who point out that was actually done by Teddy Roosevelt 120 years ago and not since are only underlining that it hasn't been done in a long time, while making themselves look like pedants.
But that doesn't mean that "Trump said it, therefore it's wrong" gets you out of the debate. For example:
> President Trump stated unambiguously that the recent crash and tragic loss of life was due to DEI hires, even though all the pilots and air traffic controllers involved were able-bodied white (presumably heterosexual and cis-gendered) men.
A plausible explanation for this is that because they were already all white men, new hires weren't allowed to be, so viable candidates were turned away, leading to under-staffing.
Is that what actually happened? Someone would have to investigate it, but until they do, you can't be sure that Trump is even wrong, or that he's necessarily wrong on the logic of it as might apply to other cases in general regardless of what turns out to be the case here in particular. And in the meantime he's got people talking about the possible dangers of doing the thing he doesn't like.
> because they were already all white men, new hires weren't allowed to be, so viable candidates were turned away, leading to under-staffing
Is there any evidence that this has ever happened in the history of the US? Not in this particular case as we wait for the details to emerge, but for any documented case?
> Is there any evidence that this has ever happened in the history of the US?
How would you even have a DEI program that didn't work this way?
You're currently understaffed and make efforts to hire someone to fill the position. The first qualified applicant you find doesn't meet the DEI goals. Your options are now a) hire them, stop being understaffed, but don't meet the DEI goals, or b) don't hire them and hold out for a candidate that can meet the DEI goals, but in the meantime you remain understaffed.
You don't always have the luxury of choosing between two concurrent applicants and this is a pretty unambiguous trade off whenever that doesn't happen.
In every place I've worked, qualifications are a hard, show-stopper goal; DEI is a soft goal.
When someone doesn't have the qualifications you don't hire them on a micro level. When you don't meet the soft goal you look at your approach on a macro level. Is your pipeline bad? Are you missing qualified candidates? Are you biased?
You can work to improve all those things without any compromise on quality. And sometimes the soft goal is not reachable, or it's not reachable industry-wide, due to the pigeonhole principle. But that doesn't feed into anecdotes and fairy tales about the bogeyman.
What's actively repulsive and illogical is what he's doing - one or a group of people made a mistake, nobody yet knows what the mistake is, what contributed to it, and how it should be prevented in the future, but he spins a fairy tale that rests on assuming that they are unqualified. The only verifiably unqualified person in this story at the moment is him. Don't give that nonsense the time of day.
And look, here I am, not taking my own advice and engaging with that crap.
> In every place I've worked, qualifications are a hard, show-stopper goal; DEI is a soft goal.
In this case the issue isn't qualifications, it's mean time to fill the position when you're understaffed.
Suppose you could find a qualified non-DEI candidate in one month and a qualified DEI candidate in six months. If you take the time to find the DEI candidate, the position is left unfilled for five months longer than it would have been. That's pretty bad if the result is a higher probability of a plane crash in the meantime, or any other important thing.
You don't seem to understand how these programs actually work in practice (One might even snark and say that you aren't sufficiently qualified.) DEI is measured as a statistical, after-the-fact, population-level macro mandate.
Any particular hiring decision is made on qualifications. It's illegal not to hire someone because of their protected characteristics, and if you see someone doing it, you should blow the whistle.
It's not illegal to look at your hiring efforts and decide to cast your hiring net in a different demographic area. But at the end of the day, you hire the candidate that meets the bar.
If you can't fill headcount, either your bar is too high, your firm sucks to work for, you aren't paying enough, or you need to work harder on hiring.
I have been involved in a lot of hiring over the years. I've never, ever been pressured, or even implied at that I should base any of my micro decisions (whether in rating or hiring) on someone's protected characteristics. I've likewise never seen them being used as justifications for my peers' decision-making. Oh, and everyone always bitches that hiring is too slow and it's too hard to find people. That's a universal constant, just like the universal constant of 'there's always more work to do than there is time to do it in'.
I have, however, seen demographic-level statistics used as justification for "Maybe we should have a career booth at a job fair at the University of St. Louis. Does anyone want to go do that?"
> DEI is measured as a statistical, after-the-fact, population-level macro mandate.
For this to mean anything it has to be not just a metric but a target, and as soon as it's a target, you're not just measuring something after-the-fact.
> It's not illegal to look at your hiring efforts and decide to cast your hiring net in a different demographic area.
"Not illegal" isn't the issue when the debate is about what the law should be.
Suppose that you want to hit your numbers and that you further know that you can find more qualified candidates in a specific area which is predominantly white. If you advertise in the predominantly white area, you find a candidate in a month. If you advertise in that area and other areas, you find a candidate in a month, but it's the same candidate. Neither of these allow you to meet your numbers. Whereas if you advertise only in predominantly black areas, you eventually find a qualified candidate, and you meet your numbers, but it takes six months and in the meantime you're understaffed.
> If you can't fill headcount, either your bar is too high, your firm sucks to work for, you aren't paying enough, or you need to work harder on hiring.
Or you're hiring for a job with a limited number of qualified applicants while purposely reducing the number of applicants from a particular demographic because they would be qualified but mess up your numbers.
Why do you a-priori bias yourself to assume that looking somewhere else will reduce the number of qualified candidates that will make it through your pipeline?
How are you so confident that it won't increase it?
If it would increase it then you wouldn't need any DEI requirements to get the employer to do it because that would be their pre-existing incentive.
Also, sometimes the data is actually available. Demographic data is generally public. If you need to hire 100 people and you start off with broad-based advertising, and after a week you have filled a quarter of the positions, disproportionately from one area, the normal incentive is not to stop advertising in that area. But if more candidates from that area are going to mess up your numbers, now that is your incentive, even if it reduces the rate at which you fill the positions.
By that same logic, nobody should ever change anything about how they do business. We were doing it, clearly it was the thing that the business chose to do, if it weren't the best thing we ever did, we wouldn't have been doing it, ergo, we should not stop doing it.
Employers make mistakes all the time. But if they're doing something against their own self-interest by mistake, you could just show them the data demonstrating their error and they would have the incentive to change it of their own volition without forcing them to under penalty of law.
The situation in question is with respect to the government, which under the previous administration was imposing these requirements on government departments and contractors by executive order.
But there have also been attempts to interpret parts of the Civil Rights Act to prohibit "disparate impact" which would effectively require DEI because neutral hiring practices could otherwise result in disparities in outcome as a result of external factors.
> A plausible explanation for this is that because they were already all white men, new hires weren't allowed to be, so viable candidates were turned away, leading to under-staffing.
No, that's not even remotely plausible. Trump said it to deflect from having fired the head of the FAA because Musk didn't like him, among other things.
He could be right, but so could a stopped clock. You don't look at one to check the time, and you don't listen to a word said by someone whose favorite dish is the Gish Gallop.
If anything they are saying is right, someone who hasn't completely blown their credibility will eventually think of it. Or, they won't, and that's fine too, you're still better off on the net not trying to sift for candy in a large pile of shit.
Likewise, sometimes wearing a seatbelt does more harm than good, but you can't actually predict before-the-fact when that's going to happen, and you shouldn't use that as a reason to not wear one. Attention is limited, and you can't waste it on serial frauds.
The only thing he says that's often honest is what he intends to do. His words can teach you about him, but they won't teach you anything about the world.
> He could be right, but so could a stopped clock.
A stopped clock is right for two seconds a day out of 86400, i.e. 0.002% of the time. The problem with Trump is it's more like 30-50% of the time.
And then there are three things you can do when he says something.
You can ignore it, but that's not a great option when he's the President of the United States.
You can dispute the narrow ways he gets the details wrong, but that's just pedantry and you'll spend all day doing it to no avail because he can say stuff like that faster than you can fact check the details.
Or you can engage with the general shape of the argument he's trying to make and see if this is one of the times that there's actually something there. Which doesn't happen all the time, or even necessarily most of the time, but it's still often enough that universally assuming the opposite of anything he says isn't a valid heuristic.
A stopped clock is mostly right - within less than an hour of the truth - about 17% of the time.
If you're requiring five nines-level precision, he is right precisely zero percent of the time.
> but it's still often enough that universally assuming the opposite of anything he says isn't a valid heuristic.
He doesn't have the magic power where the truth is always the opposite of what he says. It's just that what he says is white noise: its mutual information with reality is near zero.
The response to that isn't to assume the opposite is correct. It's to just assume it's hallucinatory bullshit that's not worth listening to, or repeating, except to inform yourself on what he is going to do. His speech teaches us a lot about him, and nothing about anyone else.
Stopped clocks have more problems than precision. Their Shannon entropy is literally zero, because when you already know what time they say it is, the result will never change.
Whereas if Trump is talking about DEI stuff, regardless of the details, you can deduce that he's soliciting support for a policy change there, which means it's time to have the debate about that policy and see if we should support it or not.
You can't distinguish the things he says he's going to do from the things he says are happening, because the reason he says the latter is in support of the former.
> Whereas if Trump is talking about DEI stuff, regardless of the details, you can deduce that ...
... it's consistent with his lifelong racism, as well as the racism that permeates Project 2025. On day one he rescinded LBJ's 1965 executive order barring federal contractors from discriminating.
Accusations of racism have been bandied about with such reckless abandon that they've lost all meaning without specifics and hard evidence of context.
Employers are prohibited from discriminating by statute which can't be amended by executive order. The executive order in question was regarding the use of affirmative action.
If a little Sieg Heil at the inauguration didn't convince you that this administration is pretty down with racism, I don't believe there's any evidence on Earth that will.
Specific, hard evidence, all in context: the man who gave it got an ovation, an office in the White House, and a free hand in the agencies that are supposed to be regulating him.
You're referring to the thing the Anti-Defamation League called "not a Nazi salute" and the prime minister of Israel said he "is being falsely smeared". It's exactly what I mean by reckless abandon. It's a desperation to find something regardless of whether it's actually there.
It seems like we've gotten to the point that actual racists are so uncommon that people have forgotten what they look like:
> "We recognize the fact of the inferiority stamped upon that race of men by the Creator" -Jefferson Davis (referring to black people)
> "Anyone who has traveled in the Far East knows that the mingling of Asiatic blood with European or American blood produces, in nine cases out of ten, the most unfortunate results." -FDR
> "We stand for the segregation of the races and the racial integrity of each race." -Strom Thurmond
These quotes are not ambiguous or taken out of context. If you had asked these people if they were being misconstrued, they would have reiterated their positions, because they were actually racists who thought that other races were inferior and openly favored slavery or internment or segregation.
And it's sad that we're losing the ability to distinguish these things because of all the crying wolf.
I'm referring to the thing that anyone with eyes can see is a Nazi salute.
There's a beautiful side by side video clip of Elon actually throwing his heart out to the audience, and you'd have to be blind to think it's the same thing.
Who should I believe? My lying eyes? Or what the prime minister of a country that's courting US support in an internationally unpopular war says to curry favor?
The man's grandparents were registered Nazis, the man's rubbing elbows with the AfD, he's throwing out the 'Roman salute' to thunderous applause at a political rally, and you're twisting yourself into a pretzel to explain how no, no, this is all a smear job against an innocent man.
How can you not see it? It's hard to tell if you're cynically taking the piss out of me, or if you can actually look at clip, and see something else.
There is a reason that doesn't make any sense: He did it in public, but then denies that's what it was.
If he was trying to be open about it then there would be no reason to deny it. If he wasn't trying to be open about it there would be no reason to purposely do it to begin with.
> There is a reason that doesn't make any sense: He did it in public, but then denies that's what it was.
Doing it and then pretending that he didn't carries two messages. For actual fascists, it's an indication that he will support them, and that they should support him.
For people apologizing for fascists, the denial provides the chance to spill gallons of ink speculating about his state of mind, and other plausible deniability for their insistence that he's not really courting fascists.
When someone does one thing, but says another, believe what they do, not what they say. Doubly so in politics.
There's no ambiguity in this symbol, or what it represents. Believe your eyes.
> For actual fascists, it's an indication that he will support them, and that they should support him.
What actual fascists? That is not a large population segment in the modern day. And support for what? He's not running for anything.
> the denial provides the chance to spill gallons of ink speculating about his state of mind
The theory is then that he made that hand gesture on purpose, intending to deny it.
Opponents would (as you do) disbelieve any denial and then try to use it against him, which is a significant cost. In exchange for this he would be trying to shore up support among the a) far smaller group of actual fascists who b) were previously on the fence and c) would not remain on the fence even though denying the gesture would be sending mixed signals.
There is no gain from doing that. It's why opponents are the ones talking about it. If doing that was actually to his benefit then opponents would want to be silent about it because publicizing it would be helping him to court the hypothetical large group of fascists who would look favorably on it.
> When someone does one thing, but says another, believe what they do, not what they say.
That's kind of the point. Hand gestures are in the category of saying something rather than doing something.
> There's no ambiguity in this symbol, or what it represents.
The obvious problem here is that the Nazis used "raising your hand" as a symbol.
The most plausible argument that he did it on purpose would be as a troll. Which isn't completely out of character for him, but it still has all of the same negative consequences above with the unambiguous implication of "not worth it".
What I have yet to see whatsoever is any evidence that he supports genocide, internment camps, incarceration without due process or the like. These are the things that make the bad guys the bad guys, not some kind of 5-D chess dog whistle rubbish.
Has he actually denied it? Or did he deflect with more Nazi "jokes"?
His neo-Nazi supporters have enthusiastically welcomed the success of getting a fellow Nazi in the White House, and he has continued to work with Nazi organisations across Europe.
> His neo-Nazi supporters have enthusiastically welcomed the success of getting a fellow Nazi in the White House, and he has continued to work with Nazi organisations across Europe.
Asking questions of neo-Nazis is like asking questions of asylum inmates: their brains are known to be broken, so anything they say is presumed nonsense, and they're also very easy to manipulate into saying whatever you want to put into a story.
These right wing talking points have been repeatedly refuted. ADL contradicted their own definition of the salute. Both they and Bibi have vested political interests and citing them is grossly dishonest cherry picking.
It seems more like ChatGPT was asked a rather bizarre question with far too little detail to make sense, and ChatGPT failed to notice or to ask for more information. Although it did get rather impressively confused about the pressure of the air.
I mean that ChatGPT had no questions about the container of the gas (does it have weight? is it tared out?) or about buoyancy. And it’s really rather sad that ChatGPT calculated the volume of air at atmospheric temperature and pressure and didn’t notice that it was nowhere near fitting into its supposed container. (At least 1.01lb of liquid nitrogen would fit.)
My favorite MLE example: Suppose you walk into a bank and ask them to give you a quarter. You flip the quarter twice and get two heads. Given this experiment, what do you estimate to be the probability p of getting a heads when you flip this coin? Using MLE, you would get p = 1. In other words, this coin will always give you a heads when you flip it! (According to MLE.)
Are you just demonstrating overfitting when estimating using too little data? Or is there something deeper going on in your example? What does the bank have to do with anything?
The bank is context that gives us a prior probability. However, MLE does not consider a prior, so it can give results that are not very helpful in the real world. All it does is answer: what parameter value (in this case, the probability of heads) makes the observed outcome most likely? It considers all parameter values equally likely. In reality, we know it is highly likely that a random coin from a bank is a fair coin. Thus, if we flip two heads, we are still almost certain it's a fair coin. If, on the other hand, we flipped 10 heads in a row, we might start to wonder whether the bank somehow gave us a trick coin. MAP is an alternative to MLE, arguably better in many situations: [https://www.cs.cmu.edu/~aarti/Class/10701_Spring23/Lecs/Lect....
The example only seems ridiculous because you've deliberately excluded relevant knowledge about the world from the model. Add a prior to the model and you'll have a much more reasonable function to maximise.
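A minimal numeric sketch of that difference, in C, assuming a Beta(a, b) prior on p. The Beta(51, 51) numbers below are illustrative, standing in for "coins from a bank are almost certainly fair"; the function names are mine, not from any library:

    #include <stdio.h>

    // MLE for a Bernoulli parameter: the maximizer of
    // p^heads * (1-p)^tails is simply heads / (heads + tails).
    double mle(int heads, int tails) {
        return (double)heads / (heads + tails);
    }

    // MAP with a Beta(a, b) prior: the mode of the Beta(a+heads, b+tails)
    // posterior, i.e. (heads + a - 1) / (heads + tails + a + b - 2).
    double map_estimate(int heads, int tails, double a, double b) {
        return (heads + a - 1.0) / (heads + tails + a + b - 2.0);
    }

    int main(void) {
        // Two flips, two heads, strong "bank coins are fair" prior.
        printf("MLE: %f\n", mle(2, 0));                      // 1.000000
        printf("MAP: %f\n", map_estimate(2, 0, 51.0, 51.0)); // ~0.509804
        return 0;
    }

With only two heads the MAP estimate barely moves off 0.5; after 10 heads in a row it would drift up to about 0.545, matching the intuition upthread that you'd only start to wonder after many heads.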
Only if you let the mistake go unmentioned. I do a version of this where I glibly include a mistake, like:
    // Examples of dereference operator.
    int i, *ip = ..., **ipp = ...;
    i = *ip;   // Assuming ip has been correctly initialized.
    i = **ipp; // Likewise.
    // The address-of operator is the opposite.
    ip = &i;
    ipp = &&i;
I actually talk through the last line, and almost no one ever questions it. I then ask students to look at that last line again, and ask them whether an address has an address, and if so, what that means and whether it could ever be useful.
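(For anyone checking it afterwards: `&&i` doesn't compile, because an address is just a value with no storage of its own, but a pointer variable holding an address does have an address. A sketch, not part of the lesson above, of the corrected line and one place a pointer-to-pointer is genuinely useful; `make_int` is an illustrative name:)

    #include <stdio.h>
    #include <stdlib.h>

    // Out-parameter idiom: the callee stores a freshly allocated pointer
    // through pp, which is why it needs the address *of a pointer*.
    void make_int(int **pp, int value) {
        *pp = malloc(sizeof **pp);
        if (*pp)
            **pp = value;
    }

    int main(void) {
        int i = 7;
        int *ip = &i;    // address of an int
        int **ipp = &ip; // address of a pointer: pointers are objects too
        printf("%d\n", **ipp); // prints 7

        int *q = NULL;
        make_int(&q, 42); // pass &q so make_int can change q itself
        if (q) {
            printf("%d\n", *q); // prints 42
            free(q);
        }
        return 0;
    }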
If this is the lesson where you're introducing pointers to students, you're probably doing them a disservice. Reminds me of my engineering professors who were bored with the material, so they dove straight into difficult problems.
You might be interested in this video about why Gen Z is starting to use typewriters again [0]. In a word, focus. They say they are often too distracted from writing when using a computer as it is easy to surf the web instead of writing your paper, so having a single purpose utility rather than a multipurpose one is actually a boon.
Sounds like we need a single-purpose Linux distro that only runs a word processor. Of course that's not nearly as interesting as using a physical typewriter, but it sure is easier than scanning all those typewritten pages using OCR.
There's a whole category of products that is just a keyboard with a tiny three- or four-line text LCD (google "electronic word processor", or the Tandy WP-2).
Probably not as popular today as they were back in the early '90s before everyone had a PC, but I think they're still manufactured.
There were word processors with storage. I can’t remember how they worked, but a dedicated typewriter doesn’t mean you can’t also get an electronic copy.
I built something similar using a spare ThinkPad x220 I had lying around and a minimal Debian installation. I would prefer something closer to the AlphaSmart Neo line of digital typewriters, though.
Sounds like they need to learn how to deal with this. Turning off notifications might help as well. Eventually the typewriter will stop working for them, since it's a mind issue and not a tool issue, imo.
I don’t think this is true. They might struggle with distractions elsewhere, but if they’ve created a ritual out of writing in this distraction-free environment, it will probably always work for them (and maybe better over time). Having the experience of doing things without distraction might also help them ignore distractions elsewhere.
By way of analogy, learning to swim in calm waters helps you learn to swim in rough waters by giving you the experience of what swimming is even like.
For a long time I blamed myself for things being difficult. But self-knowledge surely includes knowledge of how conditions affect your nervous system. It's totally plausible that a given nervous system works better with a typewriter than with any networked device. It's like how, when you have to take a break from thinking, you'll be more productive pacing or taking a walk on the grounds than flipping over to Y Combinator.
"Make it easy to be good" isn't just a parenting precept; it works for managing yourself as a mature adult as well.
I read an anecdote once that novelist Jonathan Franzen writes on a laptop which has had the WiFi card removed and Ethernet port glued shut. He's pretty successful so whatever works imo.
As someone who was diagnosed with ADHD at the age of two and has dealt with it my entire life, I gotta completely agree with you. The quest for quiet is impossible. You will never have a completely stimulus-free environment. The way our bodies work competes against this whole idea. If you're in a dark room, your eyes adjust. If you're in a quiet room, your ears basically have a compressor built in. Everything that was in the shadow or in the quiet will eventually make itself known. Thrive in noise, thrive in distraction, thrive in chaos.
Edit: but one thing that is incredibly important is partitioning your workspace. Perform work where work should be performed and keep that separate from where you automatically do leisure activities or seek out pleasurable distractions.
That feels a bit like saying you disagree with farm automation so you fired your oxen and pull the plow yourself now.
There's no need to throw the baby out with the bathwater. I'm empathetic to the people that feel like they can't focus in commercial operating systems, but their only option is to adapt or fall off. Making MacOS or Windows into a usable and non-distracting environment is basically the only way I have been able to make money in the tech industry. If I told my boss I was switching to a typewriter for efficiency purposes, I'd be gone before the end of the day.
It doesn't even need to be code; I simply can't turn in work physically. If I type out my project notes or Kaizen report in a typewriter, I'll be asked to make a digital copy next. This isn't just programming, everywhere you go is digital-first and would vastly prefer a digitized copy from the start as opposed to OCRing a photo of my typewritten document.
Again - for personal use, go crazy. Nostalgic stuff is fun! This is not a solution for 90% of the workforce though and I would argue that relying on a typewriter for isolation is harming your professional prospects. Apply to any job and compare the reactions you get bringing your typewriter to the first interview with the reactions you get from bringing your laptop.
It's not a strawman at all. The parent claimed "The typewriter is them dealing with it" and I am listing all of the different ways a typewriter can impair you personally.
If you don't care about the way people perceive you, how productive you are, how accessible your work is or how error-proof your product is, maybe a typewriter is for you. I cannot imagine a practical application (even casually) where you would benefit from a typewriter over a word processor and inkjet printer. I say this as someone with a typewriter not 20 feet away from where I'm standing now; they suck.
You are still missing the point of why they use a typewriter. With a word processor on a computer, I can easily start browsing TikTok instead of writing my paper. Not so with a typewriter. Of course, it has its own cons compared to a computer as you state, but to say there are no "practical applications" is wrong, as evidenced by the fact that people do in fact use typewriters as I've stated. If it were not practical at least in some small way, they wouldn't be using it.
> With a word processor on a computer, I can easily start browsing TikTok instead of writing my paper.
Is that a personal problem, or a computer one though? Many people (myself included) have zero issue ignoring Twitter and Instagram while we work. In fact, typing on a computer is much easier than using a typewriter for a number of reasons:
- Don't need to buy ink ribbons or paper to continue typing
- Don't need to stop and switch out stamps to change your typeset
- Can infinitely reproduce a single document as many times as you want
- No white-out or paper strips required when you make a mistake
I don't know if you've ever used a typewriter before, but it should simply be common knowledge that it's the slower and more distracting way to type. Every second you spend using a typewriter instead of getting comfortable with a computer is wasted effort. Every time you take your typewriter apart to make a simple change, that's time you could be spending writing uninterrupted on a digital medium.
> Is that a personal problem, or a computer one though? Many people (myself included) have zero issue ignoring Twitter and Instagram while we work.
Then it's not for you, continue using a computer. It's a personal problem solved by the use of a single purpose technology rather than a multipurpose one, as I've initially stated.
I have used a typewriter, and while it can be slower than a computer, some wasted time is better than wasting all one's time because one can't focus and gets distracted instead. It sounds like you still simply don't get it, and I'm not sure how I can explain it further, as I've restated my point several times now: those who use it can't focus when writing on computers.
I don't get it. I also have a typewriter and would rather use Vi to type a term paper than even entertain the thought of switching out LATEX typesets. It's a no-brainer, it's far, far easier to dumbify your computer than it is to modernize a typewriter.
Creative writing can be better accomplished with a typewriter. Imagine yourself in a cabin in a forest, with no electricity. That's extreme, but you get the idea.
Also, having a physical copy of your work >feels< safer.
And amazingly, people did that for a century or so before word processors came along...published books and magazines, too. And before the typewriter, there was pen on paper. People really were creative before computers!
Nah, if you don't set up on a train station platform and do all your work from there you simply have a mind issue and should learn how to deal with distractions
Career writers have been using "dumber" text editors and computer systems to better mentally isolate their work for decades. It's not even an attention thing.
People need all sorts of excuses to just calm their mind and say it’s some disorder.
But I’ve literally never met someone who genuinely tried meditation and it didn’t help them.
I used to run a meditation group at work and the dozen or so people who consistently showed up reported that it changed their life. And I’m no expert I just do breathwork and concentration on a single object like a red dot sticker.
They'd rather use medication or spend money buying gadgets and toys.
Oh well. Not my problem when the solution is literally built in.
Don't get me wrong, I do believe that behavioural approaches should be tried first. On the other hand, framing the failure of behavioural approaches as the result of not making a genuine attempt is harmful. It may dissuade people from finding more effective treatments for their particular case, or at the very least delay their seeking help.
Medication isn't something they just hand out to anyone who asks. The reason it exists is because there is a large body of scientific research that all points to it helping treat disorders such as ADHD, whether you believe it or not. Meditation may also provide benefits, although there is less scientific evidence today that it does.
>People need all sorts of excuses to just calm their mind and say it’s some disorder.
Get off your high horse. My brain is literally, physically, developed wrong. It's broken. I need treatment, medication, therapy, not a fucking meditation tape. Not that meditation is bad or wrong or worthless, because I used to like it, but it's not medicine.
Do you also insist people with bad vision just try looking harder? Maybe squint a little bit more? Who needs glasses, the fix is built right in!
I am in my 20s and I use a typewriter somewhat regularly to journal. I was raised on computers, getting the jumble from my brain onto paper is faster with a keyboard than a pen/pencil and paper. And a typewriter is nice and analog - no screen, no lights, no battery. I'm disconnected, focused, and performant.
but I'd really like to bring my own keyboard and have the e-ink display at a more ergonomic height. Combine that with Vim, and that'd be something I'd use
You might enjoy some of the full-fledged e-ink tablets (with folio keyboards, iPad style) on the market right now. Some even run Android, so you could definitely find a way to run Vim.
I was just looking at some today but the biggest downside right now is that they're pretty expensive for what you get.
It's kind of surprising that there's no "typewriter OS" based on Alpine Linux, but it would always have to be paired with hardware sales to get past the prototype stage as a business, and even then the viability is dubious.
I used to love doodling and drawing, but as soon as I start to write, my hand cramps up. I take short handwritten notes for work and I struggle to read them a month or so later when the context is gone.
I also really struggle to spell, and will consistently get common words wrong.
BUT on a keyboard I can type almost as fast as I can think - and I can also spell 90% better - I don't know how it happens but it is like the words 'flow' out of my fingers when I type - and I can easily spell words that if you asked me how to spell I wouldn't have a clue. Also if you asked me to find you a key on a keyboard I'd have to look - but when I'm typing my fingers just know where they are.
I'm a 44 yo successful man, but I still don't know my alphabet well (for example I couldn't start in the middle or recite it backwards) - but put me in front of a keyboard and I can type all day long (note - I am VERY thankful for spellcheck though!)
I always had similar problems in school growing up. A few things that I've found helpful:
- Try a larger pen. It helps you maintain your grip on the pen without as much effort.
- Try a pen with less viscous ink. If you're used to ballpoints, this can mean e.g. a rollerball. This lets you write without putting much pressure on the page, which at least for me significantly helps to avoid hand cramps. (I use fountain pens these days myself which write with even less pressure, but rollerballs are a more familiar starting point.)
Thanks, I think for me part of the cramp is a mental block - I spent a long time hating writing (and English lessons in particular) and being told I was bad at it/lazy.
but as soon as I could type my essays I loved English and writing.
No it hasn't. Just 1.5 years ago I tried all the latest OCR tools, including AWS, GCP and Azure services, and none of them could consistently and reliably read a receipt printed at a store.
I was OCRing documents with ABBYY or Tesseract in the 2000s, if not a little earlier. I have been OCRing text documents with my phone for the last 6 years or so, with Prizmo.
The iPad, with the Apple Pencil, is pretty much there. It’s actually amazingly good. I have terrible handwriting, and it doesn’t seem to have a problem with it.
If anyone ever tried using a Newton, there was a series of Doonesbury comics[0] about its awful handwriting recognition.
I got pretty good at writing with the Newton, but it was me adapting rather than the Newton understanding my natural handwriting (which is fairly neat given my parents are both teachers).
Yup, iPad and Apple Pencil do an amazing job, either with the built-in Notes app or several third party apps. Even better with a screen protector like the Paperlike that gives a little tooth to the screen to make it a bit more like writing on paper.
Oh, I personally don't currently use them. This was in the past, starting from playing around with my Dad's manual typewriter. Took typing course in 8th grade on a manual. Owned a Smith Corona electric in high school. Used IBM Selectrics for school newspaper, etc. I'm old. :-)
Yes, used in the past. I don't currently use them, though I think they are cool mechanical marvels, especially IBM Selectrics. Those were way too expensive to own personally, but were common in offices. My personal typewriter was a much cheaper Smith Corona electric.
A list of ones I've seen in this thread or know of already:
* Nonplussed (miffed)
* Ambivalent (conflicted)
* Factoid (incorrect statement)
* Bemused (confused)
* Peruse (read thoroughly)
* Travesty (distortion)
* Transpire (to be revealed)
* Literally (in actual fact)
There's also "beg the question" which is often used to mean "naturally give rise to the question" but I believe originally meant "assumes the answer which it is trying to prove".
The modern colloquial usage of "begs the question" bugs the heck out of me. I try not to get too pedantic about language usage, but that one just sticks in my craw. I think because it's actually quite useful to understand when one is begging the question, and it feels a disservice to water down the phrase.
To be fair, I can at least see roughly why the meaning changed. The word "beg" is being used in a strange (maybe archaic?) way, and it is also useful to encode the idea of giving rise to a follow-up question. The best outcome might just be for a different phrase to capture the original meaning, something like "assumes the conclusion".
IMO, not using any optimization flags with C is somewhat arbitrary, since the compiler writers could have just decided that by default we'll do things X, Y, and Z, and then you'd need to turn them off explicitly.
FWIW, without -O, with -O, and with -O4, I get 2500ms, 1500ms, and 550ms respectively. I didn't bother to look at the .S to see the code improvements. (Of course, I edited the code to output the results, otherwise, it just optimized out everything.)
One optimization for the C code is to put "f" suffixes on the floating-point constants. For example, convert this line:

    t[i] += 0.02 * (float)j;

to:

    t[i] += 0.02f * (float)j;
I believe this helps because 0.02 is a double, and doing double * float and then converting the result to float can produce a different answer from just doing float * float. The compiler has to do the slow version because that's what you asked for.
Adding the -ffast-math switch appears to make no difference. I'm never sure what -ffast-math does exactly.
> I believe this helps because 0.02 is a double and [...] can produce a different answer
In principle, not quite. The real/unavoidable(-by-the-compiler) problem is that 0.02 is not a dyadic rational (not representable exactly as some integer over a power of two). So its representation (rounded to 52 bits) as a double is a different real number than its representation (rounded to 23 bits) as a float. (This is the same problem as rounding pi or e to a double/float, but people tend to forget that it applies to all non-dyadic rationals, not just irrationals.)
If, instead of `0.02f`, you replaced `0.02` with `(double)0.02f` or `0.015625`, the optimization should in theory still apply (although missed-optimization compiler bugs are of course possible).
I think this is because the optimization isn't safe. I wrote a program to find a counterexample to your claim that "the optimization should in theory still apply". It found one. Here's the code:
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    float mul_as_float(float t) {
        t += 0.02f * (float)17;         // single-precision throughout
        return t;
    }

    float mul_as_double(float t) {
        t += (double)0.02f * (float)17; // double intermediate, rounded back
        return t;
    }

    int main(void) {
        while (1) {
            // Reinterpret random bits as a float. memcpy avoids the
            // undefined behavior of pointer-cast type punning.
            unsigned r = rand();
            float t;
            memcpy(&t, &r, sizeof t);
            if (isnan(t))
                continue;               // NaN != NaN would be a false positive
            float result1 = mul_as_float(t);
            float result2 = mul_as_double(t);
            if (result1 != result2) {
                unsigned b1, b2;
                memcpy(&b1, &result1, sizeof b1);
                memcpy(&b2, &result2, sizeof b2);
                printf("Counter example when t is %f (0x%x)\n", t, r);
                printf("result1 is %f (0x%x)\n", result1, b1);
                printf("result2 is %f (0x%x)\n", result2, b2);
                return 0;
            }
        }
    }
It outputs:
    Counter example when t is 0.000000 (0x3477d43f)
    result1 is 0.340000 (0x3eae1483)
    result2 is 0.340000 (0x3eae1482)
On my machine, the compiler constant-folds the multiplication, producing a single-precision add for `mul_as_float` and a convert-t-to-double, double-precision-add, convert-sum-to-single sequence for `mul_as_double`. I missed the `+=` in your original comment, but adding a float to a double does implicitly promote it like that, so you'd actually need:
    t += (float)((double)0.02f * (float)17);
to achieve the "and then converting the result [of the multiplication] to float" (rather than keeping it a double for the addition) from your original comment. (With the above line in mul_as_double, your test code no longer finds a counterexample, at least when I ran it.)
If you ask for higher-precision intermediates, even implicitly, floating-point compilers will typically give them to you, hoped-for efficiency of single-precision be damned.
Me too when I am away from C for a while.
The topic has been on HN [3]. Among other things, -ffast-math can:

* Enable the use of SIMD instructions
* Alter the behavior regarding NaN (you can't even check for NaN afterwards with isnan(f))
* Alter the associativity of expressions: a+(b+c) might become (a+b)+c, which seems inconspicuous at first, but there are exceptions (as an example, see [1] under -fassociative-math)
* Change subnormals to zero (this can affect you even if your program isn't compiled with this option, if a library you link into your program was)
A nice overview, from which I summarize here, is in [1], which contains a link to [2] with this nice text:
"If a sufficiently advanced compiler is indistinguishable from an adversary, then giving the compiler access to -ffast-math is gifting that enemy nukes. That doesn’t mean you can’t use it! You just have to test enough to gain confidence that no bombs go off with your compiler on your system"
Should also test -Os when doing this sort of thing. Sometimes the reduced size greatly improves cache behavior, and even when not it's often outright competitive with at least -O2 anyway (usually compiles faster too!)
For the discrete case, it seems that a better thing to do is consider the likelihood of getting that number of heads, rather than the likelihood of getting that exact sequence.
I am not sure how to handle the continuous case, however.
Of course you ignore irrelevant ordering of data points. That's not the issue.
The issue, for discrete or continuous (which are mathematically approximations of each other), is that the value at a point is less important than the integral over a range. That's why standard deviation is useful. The argmax is a convenient average over a weightable range of values. The larger your range, the greater the likelihood that the "truth" is in that range.
If you only need to be correct up to 1% tolerance, the likelihood of a range of values with $SAMPLING_PRECISION tolerance is not important. Only the argmax is, to give you the center of the range.
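(To make the "irrelevant ordering" point concrete for the coin case: switching from the exact-sequence likelihood to the head-count likelihood only multiplies by a factor that doesn't depend on p, so the argmax is untouched. With n flips and h heads:

    L_{\text{seq}}(p) = p^h (1-p)^{n-h}, \qquad
    L_{\text{count}}(p) = \binom{n}{h}\, p^h (1-p)^{n-h} = \binom{n}{h}\, L_{\text{seq}}(p)

Since \binom{n}{h} does not depend on p, both are maximized at \hat{p} = h/n.)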