There are many Useless Uses of cat in this documentation. You never need to do `cat file | foo`; you can just do `<file foo`. cat is for concatenating inputs; you never need it for a single input.
As someone who has worked with Unix/Linux and the command line for 30 years and still "abuses" cat the way this documentation does, I regularly hear this complaint.
Yes, "cmd <file" is more efficient for the computer but not for the reader in many cases.
I read from left to right, and the pipeline might be long or "cmd" might have plenty of arguments (or both).
Having "cat file | cmd" immediately gives me the context for what I am working with, corresponds well with "take this file, do this, then that, etc.", and makes it easier for me to grok what is happening (the first operation will have some kind of input from stdin).
Without that, the context starts with the (first) operation, as in the sentence "do this operation, on this file (, then this, etc.)". I might not be familiar with that operation or know the arguments it expects.
At least for me, the first variant comes more naturally and is quicker to follow (in most cases), so unless it is performance-sensitive that is what I end up with (and cat is insanely fast for most cases).
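For what it's worth, the two spellings are interchangeable for a single input; the only cost of cat is one extra process. A quick demo with a throwaway file (the file name and pattern are made up for illustration):

```shell
# Create a small sample file.
printf 'alpha\nbeta\nalpha\n' > file.txt

cat file.txt | grep -c alpha   # "useless" cat: reads left to right, one extra process
grep -c alpha < file.txt       # plain redirection: no extra process
<file.txt grep -c alpha        # same redirection, but the file name still comes first
```

All three print 2; the choice is about readability, not correctness.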
Why include writing the primes to a file instead of, say, standard output? That increases the optimization space drastically, and the IO will eclipse all the careful bitwise math.
Does having the primes in a file even allow faster is-prime lookup of a number?
No real reason. It's just an arbitrary task I made for myself. I might have to adjust the goal if writing to the file becomes the lion's share of the runtime, but I'll be pretty happy with myself if that's the project's biggest problem.
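As a rough sketch of how the IO concern plays out (this is a plain sieve, not the project's bitwise implementation, and the function names are mine), batching all the primes into a single write keeps the file/stdout cost from swamping the computation:

```python
import sys

def primes_below(n):
    """Sieve of Eratosthenes returning all primes < n."""
    sieve = bytearray([1]) * n
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Zero out all multiples of p starting at p*p.
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [i for i in range(n) if sieve[i]]

def write_primes(n, out=sys.stdout):
    # One big write instead of one write per prime.
    out.write("\n".join(map(str, primes_below(n))) + "\n")
```

Whether the output goes to a file or to stdout, the buffering strategy matters far more than the destination.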
Society absolutely needs to more correctly incentivize smart people to 'get together' to form child creating and raising units (families).
Maybe society should focus on supporting high quality environments for raising children well.
This probably includes a bunch of expensive things like...
* Rich interaction between smart adults and children, at low density
* Good breakfast and lunch guaranteed, at minimum
* Year-round childcare
* Great medical care for every child
If we like the idea of biological parents bonding strongly with their children, the whole 'work from home' and 'work life balance' things should also be strongly evaluated. I happen to think that delivering strongly on the above points would also pair well with at least some 'work from home' so that parents have time to work, time for being human, and time to be a good parent. Harder to measure experimental results probably include a healthier emotional and motivational status, lower stress for everyone involved, and maybe even higher output if not just higher quality output during hours worked.
There can be also a softer version of it, which is that cultural richness and focus on education are easily transmitted within families. A society that doesn't value culture and education is going to produce less educated families with even less educated children.
It’s also true that IQ is both real and highly heritable. The military uses what’s essentially an IQ test to screen out the bottom 15% or so of the population: https://www.psychologytoday.com/us/blog/after-service/201801.... The military has found that people with aptitude test scores below the cutoff can’t be trained to competently perform any job in the military.
Genetic heritability, which is a subset of heritability, does not generally mean what people think it means, though. To the extent IQ can be attributed genetically to parents, it does not imply a "compounding" effect where smart people continuously have smarter kids and dumb people have dumber kids. This can be understood by analogy to other personal traits like eyesight, height, weight, and hair color, which vary within a range across generations with occasional unpredictable outliers. This is why folks who do test for intelligence have to test each person individually and not just rely on bloodline tracking.
Yes, but since the heritability is high, the average IQ of the children will be close to the average IQ of the parents, despite the fact that it will tend to regress towards the mean.
That’s exactly how it works in the standard additive model of heritability, and we have lots of empirical evidence that heritability of intelligence matches that model very well.
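Concretely, the additive model here is the breeder's equation: the expected child value sits between the midparent value and the population mean, scaled by narrow-sense heritability h². A toy sketch (the h² = 0.6 value is an illustrative assumption, not a claim about the true figure):

```python
def expected_child_iq(parent_a, parent_b, h2=0.6, pop_mean=100.0):
    """Breeder's equation: E[child] = mean + h2 * (midparent - mean)."""
    midparent = (parent_a + parent_b) / 2
    return pop_mean + h2 * (midparent - pop_mean)

# Two 130-IQ parents: the expected child is well above average,
# but regressed partway back toward the population mean.
print(expected_child_iq(130, 130))  # 118.0 under h2 = 0.6
```

Both sides of the exchange above are consistent with this: the expected child value is close to the parents' when h² is high, yet still regresses toward the mean.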
IQ correlates most strongly with socioeconomic class, with members of the same ethnic group scoring higher over the decades as that ethnic group as a whole becomes wealthier.
Socioeconomic status limits genetic potential. Thus the effects of SES dominate for those in poverty, but heritability dominates for those with higher SES.
For the intuition think of height - a malnourished child will not reach their “genetic” height. A fully nourished child will be limited by their “genetics”. Why wouldn’t other biological characteristics be similar?
Exactly the opposite is true. Adoption studies have been used to isolate the effect of SES itself, and the contribution of that factor is low: https://www.sciencedirect.com/science/article/abs/pii/S01602... (“Proportion of variance in IQ attributable to environmentally mediated effects of parental IQs was estimated at .01… Heritability was estimated to be 0.42.”).
This is just SIBS data. It has all the standard "Minnesota" limitations: the study is tiny, the cohort isn't demographically representative, adoption isn't itself random, nothing deconfounds the prenatal environment, and the children in the cohort are also adopted at different ages.
It's one thing to call out an interesting paper; it's another to act as if the matter has been settled simply by pointing to SIBS.
Compared to what though? Also, how is success in the military even defined? Highest rank? Most years served? Least injuries? The highest body count? Lowest double-digit APR% on 2018 Mustang with a rebuilt title?
You can learn about these things by following the links given by the GP of my comment and reviewing the studies they reference. The other strongest criterion of success was the time of their two-mile run (at their specific age of enlistment).
> The purpose of this research was to assess how well success in early combat training was predicted by scores on a test of general intelligence
It seems this research pertains to early combat training and not broad, post-training success in one's military career. Not to mention the research is essentially predicting test performance from test performance. I imagine the same predictions can be made about one's two-mile run and one's three-mile run.
> Analysis indicated that intelligence test scores AND run time significantly predicted success, each adding to the prediction provided by the other.
Which is not surprising. It shows that intelligence is just one of the multiple contributing factors. Being exceptionally tall is essential in the NBA, but being exceptionally tall, alone, is insufficient to make it to the NBA.
They eliminate the bottom third of applicants by ASVAB score, which would mean an IQ of 93 if it translated exactly (the ASVAB-to-IQ correlation, like the SAT's, is only about 0.8). That is the minimum to get in to do any job at all for a minimum enlistment contract. So to talk about statistically significant career success predictors you would have to get into the low three digits, just looking at nothing but the math of IQ distributions. A cut-line of 80 would only drop the bottom 9% of the general population.
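The distribution math is easy to check under the conventional IQ ~ Normal(100, 15) model (a modeling convention; it ignores the imperfect ASVAB-to-IQ mapping noted above):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# Fraction of the general population falling below each cutoff.
for cutoff in (93, 83, 80):
    print(f"below {cutoff}: {iq.cdf(cutoff):.1%}")
# 93 excludes roughly the bottom third; 80 only about the bottom 9%.
```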
Do we even know what turns someone into an Einstein? The sheer reason you even mention Einstein is because he was beyond exceptional. There have been millions of people to walk the face of this planet with astronomical IQs, but there has only been a handful of Einsteins, von Neumanns, Eulers, Mozarts, etc.. So few that the uttering of the names of these individuals carries strong meaning.
It's also worth noting that none of these individuals ever took an IQ test. Their genius is recognized entirely through their work. Which again raises the question of what exactly IQ testing measures and what IQ adds to our understanding of exceptional ability.
It appears that over a century of research in psychometrics has demonstrated nothing we could not already infer. We do not need some boring puzzle test to tell us someone with Down syndrome will not be a Nobel Prize winner. Nor do we need some boring puzzle test to tell us that von Neumann or Mozart had godlike childhood abilities and could go on to make large contributions to humanity.
Sure, but what has that got to do with “IQ is also highly heritable?” which, in this context, suggests intelligence is something innate and biological, rather than recognizing an IQ test as a gauge skewed by culture and socioeconomic status.
Well, that was a different reference in GP's post. You can go read it. Heritability is definitely a thing, but far from the only thing, and it isn't simple Mendelian inheritance: there are many components to intelligence that are reflected differently in gene transfer, so while you can see a correlation between specific individuals and their immediate ancestors, there are lots of exceptions, and it's probably a mirage - if seen at all - in any large demographic. See my other comment in this thread on the problems with eugenics.
I'm trying to understand your comment. You write: "heritability is definitely a thing". I think I agree? What thing is it you're saying heritability is?
Attacking IQ tests is like vaccine denialism. People don’t like the fact that requiring individuals to cooperate can enhance health outcomes for the group as a whole. Similarly, people don’t like the idea that some individuals are just born smarter than others.
> People don’t like the fact that requiring individuals to cooperate can enhance health outcomes for the group as a whole.
I am not certain where you are deriving this claim from.
> Similarly, people don’t like the idea that some individuals are just born smarter than other individuals.
Nor this one.
I have had many discussions on the topic of IQ, and I have never once seen anybody ever argue that there is no variance in human intelligence. There is a large range of variance in every human attribute. That is not the focus of the debate. Rather, most of the debate seems to be surrounding the construct validity of IQ. Statistical validity != construct validity.
There's no debate on construct validity of IQ among the experts in the field. The consensus position is that IQ tests measure something real, that the tests enjoy extremely high measurement invariance (which implies construct validity), and that the results have extremely high predictive validity (relative to literally anything else in the entire field of psychology). The current debate is more along the lines, whether the contribution of genes to variance in IQ is closer to 30% or to 80%.
Wait, this comment starts out with an assertion about one scientific question (the construct validity of a quantitative psychological metric) and ends with a statement about the range of a totally different question, and it's one studied by different fields than the former question.
I’m really struggling to understand what your point is. The person I replied to was wrong as to where the current debate is among experts, so I pointed it out, and gave an example of where the debate currently is. Is that really so strange thing to say?
Sure. But in science, we regularly postulate the existence of some construct, and confirm that construct by conducting many empirical tests that return results consistent with the existence of that construct. General intelligence is like that. We can’t see it directly. But we have myriad results that are statistically consistent with its existence.
Results consistent with the existence of a construct are not sufficient evidence for the existence of that construct. We can talk about statistical correlations till the Sun goes down, and I will not dispute that this ethereal 'g-factor' can support minor to moderate predictions in some domains of people's lives at a population level.
However, I have one question. What evidence is there that this 'g-factor' is actually representative of general intelligence? You may not use the correlation values used to derive the g-factor to support your argument. My understanding is that correlations cannot be used to explain the general factor because the general factor should be what explains the correlations.
If you are interested, I implore you to read this blog from the statistician, Cosma Shalizi, of CMU. His explanation is far better than anything I could attempt to make.
Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement. What does this mean in scientific, empirical terms? What kinds of facts would we observe in a world where this is true? What empirical observations would we make in a world where it's false?
> Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement.
No need to apologize. Perhaps my g is too low to describe my thoughts properly.
> "is representative of general intelligence"?
This factor derived from the positive correlations, g, is called general intelligence. So g is nominally general intelligence, but is g actually what the name implies? One can take n positively correlated but independent things, and there will always be some factor that can be derived from them. However, that does not mean the underlying factor is causal.
> This is a very abstract statement.
We are discussing abstract concepts.
> What does this mean in scientific, empirical terms?
That causality would be scientifically and empirically verifiable.
> What kind of facts we would observe in the world where this is true? What empirical observations we'd make in the world where it's false?
Alas, that is precisely the point I was trying to paraphrase from Shalizi. Whether g be true or false -- the result wouldn't look any different. The methodology being used cannot determine what is true nor false, and that is the crux of this entire problem.
> One can take n number of positively correlated but independent things, and there will always be a some factor that can be derived from it.
I hope you understand that your vague question cannot be seen as equivalent to this rather more concrete statement. That’s why I asked for clarification, and your patronizing comments were really not called for.
In any case, Shalizi is very wrong, probably because he is entirely unfamiliar with the literature. He is wrong on multiple accounts.
First, yes, any number of positively correlated measurements will yield a common factor. However, when talking about g, this is not an artifact of how we constructed IQ tests. Shalizi says:
> What psychologists sometimes call the “positive manifold” condition is enough, in and of itself, to guarantee that there will appear to be a general factor. Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence.
But this is just not true. Tests are not made to correlate with each other. Any time anyone attempts to construct a test of general mental ability, we always find the same g factor, even if they explicitly try to build a battery that measures distinct, uncorrelated mental aptitudes. Observe how Shalizi fails to provide a single example of a test that does not exhibit the positive manifold with other tests.
Second, unlike Shalizi, we know that g is the predictive component of IQ tests. IQ predicts real-world outcomes very well, but what is really interesting is that the predictive power of the individual subtests of an IQ test is almost perfectly correlated with the g-loadings of those subtests. This would be very surprising if g were just a statistical artifact.
Shalizi says:

> So far as I can tell, however, nobody has presented a case for g apart from thoroughly invalid arguments from factor analysis; that is, the myth.
But this is just baffling if you have any familiarity with the literature.
> Whether g be true or false -- the result wouldn't look any different. The methodology being used cannot determine what is true nor false, and that is the crux of this entire problem.
That’s just not true. For example, if g were a statistical artifact, one of the hundreds of intelligence tests devised would have failed to exhibit the positive manifold with all the others. It would not be correlated with heritability. It would not be correlated with phenotypic features like reaction time. The world where g is a statistical artifact looks much different from our world.
> If you are interested, I implore you to read this blog from the statistician, Cosma Shalizi, of CMU. His explanation is far better than anything I could attempt to make.
Ah, this essay is very, very good. I’m not surprised, Shalizi is a genius, but I hadn’t read this particular one before. Thanks for the link.
> I will not dispute that this ethereal 'g-factor' can infer minor to moderate predictions in some domains of people's lives at a population level
The U.S. military won’t hire people below an 83 IQ to peel potatoes, because experience shows that such people can’t effectively be trained. So it’s more than just “minor” predictions.
One, you are conflating IQ and g, which are not the same thing. IQ is a proxy for the measurement of g, but it does not measure g directly.
Two, the military does not administer protected IQ exams, but rather, the ASVAB which correlates with IQ.
Three, our entire global society does not revolve around who can do what for the military.
Four, what can someone with an IQ of 84 do that one with an IQ of 83 cannot?
Five, the US military has plenty of uses for low scores. If one can't peel potatoes, then I'm sure the military would just send them out to stomp for land mines. However, Federal law is what disallows this, not the military: https://codes.findlaw.com/us/title-10-armed-forces/10-usc-se...
And why do you think Congress has passed this law? What prompted them to micromanage the military in this manner? I encourage you to research this topic, “McNamara’s folly” will serve as a good starting keyword. Spoiler: it has everything to do with unsuitability of low IQ enlisted.
FWIW, the ASVAB is an IQ test. Any intelligence researcher will tell you so, because it exhibits the usual positive manifold, you find the usual g factor in it, and it shows high correlation with other IQ tests. The military doesn’t usually call it that for political reasons, but will happily admit in private that the ASVAB and the WAIS measure the same thing: https://web.archive.org/web/20200425230037/https://www.rand....
I don’t mean to suggest that an IQ test doesn’t have any value, only that they don’t account for many subtleties across (sub)cultural boundaries and are too heavily considered in determining one’s intellect, and often worth, by society.
You’re using Motte-and-Bailey tactics to conflate IQ test results with vaccine denialism, on the basis that they are both “for the greater good”, which conveniently paints my point in a certain political light. How exactly does selectivity on the basis of IQ test results “enhance health outcomes for groups as a whole”? Maybe you could back up this argument with some historical context.
> “Similarly, people don’t like the idea that some individuals are just born smarter than other individuals.”
What data do you have to support this claim? And how much of this inherent intellect factors into IQ test results?
> You’re using Motte-and-Bailey tactics to conflate IQ test results with vaccines denialism
No, I’m pointing out that in both cases people attack the science because the implications of the science are in tension with their ideological priors. The fact that top-down coercion is an effective response to pandemics is inconvenient for libertarian-conservatives. Likewise, the fact that people differ in their intellectual capabilities from birth is inconvenient for liberal egalitarians.
> Likewise, the fact that people differ in their intellectual capabilities from birth
Yes, of course people differ in intellectual capabilities at birth. That is not the argument. The argument is how much that actually impacts IQ test score results.
Your point suggests that “science” supports the idea of IQ being predetermined at birth.
> Likewise, the fact that people differ in their intellectual capabilities from birth is inconvenient for liberal egalitarians.
It is? Egalitarianism is usually considered to be the position that all humans have equal moral worth, not the position that all humans have equivalent physical and mental capacities. Not even actually existing Communists believed the latter, as far as I’m aware.
But just to double check:
> Egalitarian doctrines are generally characterized by the idea that all humans are equal in fundamental worth or moral status. As such, all people should be accorded equal rights and treatment under the law.
Nor does the existence of special laws or practices concerning low (or high, for that matter) IQ people pose any great obstacle to egalitarianism, assuming the laws or practices in question do not infringe upon moral personhood.
Rawls is a typical egalitarian philosopher and he takes care to account for these natural variations in the human condition during both the classic _A Theory of Justice_ and the modern “restatement” in _Justice as Fairness_.
Attacking IQ is nothing whatsoever like vaccine denialism. The valid/meaningful uses of IQ are widely debated in several hard science fields. That's not true of vaccines.
> Attacking IQ is nothing whatsoever like vaccine denialism. The valid/meaningful uses of IQ are widely debated in several hard science fields
You’re shifting the goal posts from the first sentence to the second sentence. What I said was: “IQ is real and highly heritable.” Responding to that by asserting that IQ tests are “skewed” and culturally biased, as OP did, is up there with vaccine denialism.
If you want to make a more nuanced point about what you can use IQ to prove, sure, that’s up for debate. But that’s not what we were talking about.
The movie did have an unfortunate eugenic implication, which is doubly unfortunate because it wasn’t even necessary for the plot. Society can just get dumb due to people not valuing education.
Genetically we’re not that different from cavemen, so the floor (without any weird eugenic theories about dumb people breeding too much) is “tamed caveman.”
Education is still very much present in Idiocracy (Brawndo blah blah). It's the lack of value in logic and thought process that causes the problem. When people value winning an argument on a logical fallacy, there's a severe issue. Education is oft used as the fallacy itself.
Much like today, on all sides of every significant debate, where the loudest and most emotional rise on feelings over logic.
If a person doesn't immensely value learning that they're wrong, they exist as part of the problem.
I think you hit on a key note about learning when they're wrong, and I think that's one of the biggest issues with social media and modern debate - namely that being wrong in public is incredibly painful and can often destroy a reputation. But then people realized there are groups who agree with them even when they're wrong, so the most important thing is to cater to them and never agree that you're wrong in public, and some percentage of people will go along with your argument.
I think never believing fully in your own ideas and always being able to admit you're wrong and always questioning is almost a super power that I wish we valued more.
A room full of people in charge of the most powerful nation wouldn't fall prey to something like a false dichotomy during an important address to said nation. Would they...?
It’s not the stupid people’s fault. They’ve always bred. The problem is smart people used to as well, and now we’ve stopped. Because of lots of reasons that make sense for the individuals:
- childfree is probably more enjoyable
- kids are expensive and student debt is crippling, so let’s delay starting till we’re 37 and own a home
- scary time/place to raise kids if you think about it too much
- etc.
So that’s thrown things out of balance. It’s not eugenics to say that smart people shouldn’t hold their birth rate so close to zero.
"Stupid people breed too much" is a proposition that is either true or false, within some worldview.
(For example, how stupid is stupid? How much breeding is too much? Is there even such a thing as too much breeding? All of these are variables up for debate.)
But preventing the spread of an idea that you fear may be true, simply because you don't like the consequences, is intellectually dishonest.
Would you endorse suppressing the idea that the earth orbits the sun just because you lived in a milieu where the primacy of the church was more important than truth?
Argue against eugenics because it's unethical to prevent people from reproducing (and therefore no amount of "stupid people" reproducing is "too much"). Don't cloud your judgment by denying propositions that you fear may be true.
Depends on the timescale you care about. It is, objectively, a very big problem over larger timescales (assuming we aren't killed off and don't engineer our children's genes).
Really the issue is about cultivating a culture of caring and willingness to learn. That generally threatens the powerful so it is always an uphill battle to protect said values.
Casually tossing about an accusation like that is not at all in keeping with the guidelines for this website.
That movie can be understood in several different ways.
Also, I'd like to point out that the core problem with eugenics isn't the assertion that intelligence is hereditary, but that:
- Race is not a scientifically grounded concept
- Complex traits do not have Mendelian inheritance
- Measurement of intelligence is problematic
- Even measures that strongly correlate with success are confounded by environmental, cultural and economic factors
Thus, the conclusions drawn by eugenicists are based on their racism and prejudice, not on any scientific findings. It is a pseudo-scientific framework to justify (at the limit) ethnic cleansing.
The opening of the movie could also be read as a commentary or satire about a certain type of reality TV show or talk show that was popular at that time, but it was also a really cheap shot at a specific class of people and demonstrated a level of contempt that cannot really be defended.
But the rest of the movie was focused more on anti-intellectual and shallow culture and corporate greed - the heritability of intelligence never got another mention.
The patent application hasn't been published yet so I can't link it, but it's the integration of a bot management system with a queuing system (think, preventing bots from taking space in the line waiting to buy tickets from Ticketmaster when everyone's in the waiting room)
>doesn’t consider windows, it considers apps. So if you have multiple browser windows or word windows open, you can’t alt-tab between them. It’s totally confusing.
You use cmd-tilde to switch between windows.
>So I install an app just to get the normal alt-tab behavior of other OSs, to alt-tab between windows (mine is called alt-tab, and it’s a bit buggy and slow, I think they all are)
You don’t need an app.
>Next, Apple does not respect the multiple desktop boundary. If I click on the safari icon in the dock, it will switch to some seemingly random safari window in some other desktop. If I close any window, it will also run off to some other window of the same app in some other desktop (who came up with that behavior?) when I dismiss an outlook notification, it will run of to another desktop to look at outlook (actually I think this one is Microsoft’s fault, but Apple could probably do something about this one). The result is that while working, I have trouble staying on the desktop I’m working on, I constantly am getting sent off to some other random desktop, and have to find where I am and where I was.
There must be a better, more productive way to manage windows and desktops.
This is a configurable setting.
>(Also what’s up with the autocorrect, I had to retype every instance of “I think” in this message, because it insists it should be “o think”)
>>Next, Apple does not respect the multiple desktop boundary...
> This is a configurable setting.
If you mean the "When switching to an application, switch to a Space with open windows for the application" settings, this works only partially. When clicking the dock icon its behaviour depends on if there are windows in your current Space (virtual desktop) or not. And don't get me started on where macOS decides new windows should go.
The approach is still pretty different in macOS compared to most other WM behaviors, namely: cmd-tilde cycles windows within the currently open application, and cmd-tab cycles through applications. In most other environments, alt-tab will cycle windows across all open applications (and win-tab does something like cmd-tab on macOS, but somehow horribly).
I see this comment often and I usually pipe up to say that if you don’t have a US ANSI keyboard it can feel unintuitive. You can remap the hotkey to Option + Tab in those cases, easier to get used to.
It doesn’t go to the most recently used window, it CYCLES through the windows. To get to the most recently used one, you need to cycle through all the windows of the app. Who came up with that!?
Give me pointers, please. I get the same headaches every day. Clicking on an icon in the dock or closing some window produces random results every time, across many, many apps.
They finally managed to get feature parity with the Windows start menu :)
For around a decade typing "word"+return into the windows start menu search box usually opened Edge with a search result of "ord". Recently it opens Word most of the time.
Love it when it forgets the Mac apps exist, and launches Maps or Calendar in phone mirroring. I use mirroring a fair bit, but never for anything where I have the Mac app installed.
I hide my Dock completely and used to rely entirely on Spotlight for launching. After it failed to work so often, I found Raycast, which has not failed me once. I can't see why they don't base the indexing method/schedule on a user's Spotlight settings.
This. Spotlight has gotten awful under Tahoe. I was forced to upgrade at work, but in my humble opinion it's best not to update macOS unless I really need to. Things get worse, things get better, but overall the experience is diminished.
I haven’t had a single issue with this. I’m guessing many people had this issue immediately after upgrading while Spotlight was still re-indexing and are just running with it since it’s cool to hate Tahoe right now.
No, if this happened to people 2-3 times right after upgrading this wouldn't be something they bring up because the indexing doesn't take months. It's broken. Spotlight used to work almost perfectly since it was introduced and it's been lagging and somewhat defunct since Tahoe.
My search never recovered, but I just didn't care to fix it; too many things to fix, and my IDE has its own spotlight. I'm normally a vanilla-don't-touch-settings guy.
When I try to launch system settings through spotlight, it launches system info. They have the same prefix, but that's no excuse. Never happened since Tiger or so.
I'm specifically commenting on their UX decisions, and in that respect, literally everything. Tahoe, like every major upgrade, is iterative. Very few things bowl a person over. Some things are good, some things are "meh". But Liquid Glass is an abomination.
Also, unlike ActivityPub, it's actually useful for building features that normal people expect from social apps — for example, algorithmic feeds and search, and a single interlinked world (rather than fragmented "servers").
Eh, AP has its own sets of problems (underspecified protocol, split-brained on discoverability, new developments are met with hostility in the community)