
> It's a fiat currency based system that persists simply on the belief that it's too big to stop (i.e., fail). Just keep your head stuffed in the sand, keep pushing forward and pretend there's no cliff ahead.

Any social contract operates on that principle.

Government only exists because most people believe in its legitimacy. Law only works because most people follow most of it. Property rights only exist because most people respect them most of the time. Contracts only work because most signatories follow them most of the time.

It's weird how fiat money is the one thing that gets singled out here, when all the social agreements that actually make our society work are also artificial.


It will, but that's also an XY problem. We're seeing ridiculous energy costs and supply chain issues for crypto and AI because they're using the wrong architecture: GPU/SIMD.

I got my computer engineering degree back in the 90s because superscalar VLSI was popular and I wanted to design highly concurrent multicore CPUs with 256 cores or more. Had GPUs not totally dominated the market, Apple's multicore M1 approach with local memories would have happened in the early 2000s, instead of the smartphone revolution, which prioritized low cost and low energy use above all. We would have had 1000-core machines in 2010 and 100,000 to 1 million-core machines by 2020, for under $1000 at current transistor-count costs. They would have been programmed with languages like Erlang/Go, MATLAB/Octave, and Julia/Clojure in an auto-parallelized scatter-gather immutable functional programming approach, where a single thread of execution distributes all loops and conditional logic across the cores and joins the results under a synchronous blocking programming model. Basically the opposite of where the tech industry has gone with async (today's goto).
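
A rough sketch of that synchronous scatter-gather style, in present-day Python purely for illustration (the names here are made up, not anyone's actual framework): the single main thread scatters a loop body across worker processes and blocks until the joined result comes back.

    from multiprocessing import Pool

    def body(x):
        # the "loop body" the runtime would scatter across cores
        return x * x

    if __name__ == "__main__":
        data = range(1_000_000)
        with Pool() as pool:                   # one worker process per core by default
            partials = pool.map(body, data)    # scatter the loop; block until every core finishes
        print(sum(partials))                   # gather/join on the single main thread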

That put us all on the wrong path and left us where we are today with relatively ok LLMs and training data drawn from surveillance capitalism. Whereas we could have had a democratized AI model with multiple fabs producing big dumb multicore CPUs and people training them at home on distributed learning systems similar to SETI@home.

Now it's too late, and thankfully nobody cares what people like me think anyway. So the GPU status quo is cemented for the foreseeable future, and competitors won't be able to compete with established players like Nvidia. The only downside is having to live in the wrong reality.

Multiply this change of perception by all tech everywhere. I like to think of living in a bizarro reality like this one as the misanthropic principle.


Assuming this is the only loud event for the next 50 years.

Or, the events of these past three years are the start of a longer, bigger event. Then the ignition is lost in the flame.


The postmortem is sort of dissolved into the bloodstream of these threads:

Tell HN: HN Moved from M5 to AWS - https://news.ycombinator.com/item?id=32030400 - July 2022 (116 comments)

Ask HN: What'd you do while HN was down? - https://news.ycombinator.com/item?id=32026639 - July 2022 (218 comments)

HN is up again - https://news.ycombinator.com/item?id=32026571 - July 2022 (314 comments)

I'd particularly look here: https://news.ycombinator.com/item?id=32026606 and here: https://news.ycombinator.com/item?id=32031025.

If you scroll through my comments from today via https://news.ycombinator.com/threads?id=dang, there are additional details. (Sorry for recommending my own comments.)

If you (or anyone) skim through that stuff and have a question that isn't answered there, I'd be happy to take a crack at it.


Jonathan Blow's talk 'Preventing the Collapse of Civilization' goes into detail about how technology can regress, with the Antikythera Mechanism being one example. This kind of thing is far more common than we think; most would be surprised to learn that Ancient Greece had writing for about 600 years before forgetting it. There was no writing in Greece for over 400 years, until they adopted the Phoenician alphabet around 730 BC.

He compares this situation to the state of software development today. It's a sobering watch.

https://youtu.be/pW-SOdj4Kkk


There's a minor genre of imagining US journalism standards for world reporting applied to reporting on the US: https://slate.com/tag/if-it-happened-there

We just wrote a piece this morning in Orbital Index about the iteration speeds of SLS, New Glenn, and Starship. SpaceX is putting everyone else to shame.

https://orbitalindex.com/archive/2021-03-03-Issue-106/#heavy...

Reminds me of this quote from NASA Administrator Charles Bolden in 2014: "Let's be very honest again. We don't have a commercially available heavy lift vehicle. Falcon 9 Heavy may someday come about. It's on the drawing board right now. SLS is real."


As a start, we must read the rules of the online places we visit, and obey them. If we don't agree with the rules, don't like "codes of conduct"? Fine: we do not participate there at all.

It's their house and we abide by their rules.

If we break a rule and it's pointed out, then we apologise and goto 10: read and follow the rules. We do not throw tantrums, we do not cry "censorship! suppression!".

We act in good faith: if we post a thread, open an issue, or submit a PR, and it is closed, then we do not simply repeat our action. Whether we agree with the closure or not, repeating is an attempt at evasion and a smack in the face of those running the place. Either of these two behaviours then invites us to be banned, because we have acted in bad faith.

We do not immediately and vocally assume that an act we don't like is a personal attack against ourselves or our values. If our post is "hidden by the community", this does not mean "the leadership of the project is orchestrating an agenda against us". It means our peers have found our conduct distasteful, and that is a very loud alarm we must heed: we have behaved outside of the expected conduct, and our peers found it distasteful, unhelpful, insulting. If a website algorithm has prevented us from posting a link or an image because our account is new or it has triggered anti-spam measures, we do not post elsewhere about how we're being persecuted.

We invite like-minded people to join the discussion when they have innovative ideas, when they can add material to a discussion that has not yet been supplied, an angle that has not been addressed, or a concept that has been misunderstood. We never ping our friends to jump on our bandwagon, shouting the same things over and over again. Perhaps if a concern is dismissed as an outside view, then more voices can be constructive, but they must conduct themselves with civility and be particularly aware that they need to add to the discussion, not to add pressure.

If a counterpoint is given to something we passionately believe in, then to discuss is to use logic and data to refute it. In the ideal, we ask ourselves to fight for this counterpoint: perhaps it is entirely valid? What we must refrain from is reading a fair and polite counterpoint and immediately treating it as an attack, a dismissal. This prompts a counter-attack and we are no longer discussing - we are now detracting from the point. When we make our issue or improvement a negative, it reflects back upon us. Who wishes to discuss with a party that cannot cope with rational disagreement? In addition, we must resist the urge to simply exaggerate our cause: stating an incorrect point more loudly does not make it correct, it just antagonises those who disagree - the very people we are trying to get to see our reasons, our solutions, our problems.

Once we have broken the rules, assumed and publicised bad faith, breached expected conduct, ignored the ire of our peers, evaded bans, repeated actions which were turned down, called on our friends to flame and troll, replied to constructive criticism with louder voices, manipulated the conversation with hyperbole and outright refused to listen to the possibility we may be wrong...then we hold a beacon above our heads, advertising that we are incapable of joining a rational debate and seek not to improve anything but only be told we are right and righteous.

I say this not to you, but to answer your question: if anyone reads what has transpired in this matter, and then asks your question, they need to very deeply analyse their behaviour because it is unacceptable in any civilised society.


Not providing solutions to textbook exercises seems to be a common theme in mathematics and theoretical computer science. It makes it difficult for those outside of the traditional classroom to learn the material. Instead, textbook writers seem to have this adversarial approach against readers, thinking they'll "cheat themselves" if they look up solutions or attempt to verify their work. Experts make mistakes; beginners would presumably make even more. Without a feedback mechanism, beginners can't truly know whether their logic is impeccable or if they have a subtle error that they themselves cannot detect. They could easily fool themselves into thinking they have a correct understanding.

Due to that I will not recommend this current book to any colleagues.

If you want an example of a fellow hacker news member that did things right, check out http://joshua.smcvt.edu/linearalgebra/

He provides solutions and lecture videos... this is a truly democratized approach to learning and a model that other academics should follow.


> As the Stanley Milgram experiments show, most people will do horrible things to others when they can tell themselves that someone else is responsible.

Turns out this isn't accurate. See https://theconversation.com/milgram-was-wrong-we-dont-obey-a...

> But as my detective work in the Yale Archives has revealed, in the filmed version of the experiment 65% of participants disobeyed. Yet Milgram edited his film to show the opposite: that two-thirds will do as they're told.


These pop-economics posts give arm-waving treatments of cost disease, slowing productivity growth, and growing inequity. Very unsatisfying. I'd prefer they wouldn't nod towards those issues without even making a guess at the root causes.

--

Supply side is a "push" strategy. Top down, more or less. Monetarists, those who primarily see money as a commodity, prefer top down. They worry more about deficits and balances.

Demand side is a "pull" strategy. Bottom up, more or less. Fiscal-o-tarians (I forget the proper label), those who primarily see money as measure of value, prefer bottom up. They worry more about interest rates.

The period from The New Deal to the 70s was mostly demand side policies. It got out of whack. Then Reaganomics switched us to supply side policies. It too got out of whack.

The Correct Answer™ is some kind of hybrid. Balance the two forces. Ideally with automatic triggers (adjustments) to dynamically rebalance. Recognizing that policy making is too slow, automatic triggers will minimize recurring under- and oversteer.

Today, we need to shift emphasis from just deficits and inflation to also include interest rates. We must remain vigilant about unemployment. And we need better policies for productivity and inequity.

--

Technology influences our mental models for how the world works.

In the West, Greeks gave us mind & body duality. Clocks made us imagine the Great Clockmaker in the Sky. Automata got us thinking we're all just machines.

And now the digital age. We're just computing and communication systems. Input, processing, output. Massive game-like simulations. Optimization algorithms with feedback loops. Transaction costs. Attention economy.

I definitely have an algorithmic view of humanity.

So here's my response to this "progressive supply side" OC:

We rebuild the economy, creating wealth, by keeping the money moving, by increasing the velocity of money.

Hoarding money suppresses wealth creation, increases inequity, and reduces productivity.

So The Correct Answer™ (today):

"Pull" more money thru the economy by giving money directly to people. Then recover that money thru taxes.

Implement complementary policies to keep the money moving.

Tax idle wealth. Tax corporate profits, to restore prior levels of R&D and profit sharing (wages, benefits). Tax windfall profits.

My own notions are adjacent ("Yes, and...") to advocates for universal basic income (UBI) and modern monetary theory (MMT). Mostly, I think in algorithms and systems.

--

Beyond the scope of this thesis:

Rebuild economy thru "caring work". All the stuff where people are helping others, creating new stuff, building a better world. Teachers, doctors, eldercare, police, firefighters, parenting, journalism, farming, etc, etc.

Disincentives for managerialism. All that wealth destroying middle management. The unnecessary work in financialization, public relations, advertising, project management, etc.

h/t David Graeber and many others



That's identity politics that stems out of centralization.

Imagine 2 hot dog stands across the road from each other. Alice makes sour hot dogs, while Bob makes sweet ones. They both focus on how they make the hot dogs. If fewer people want to buy the sour ones, or customers start giving strange looks, Alice will think about tweaking the recipe. Likewise, Bob can now charge more, or open a spin-off at the next intersection. The fun part is that Alice and Bob can even be buddies. They can go play pool together in the evening, because business is separate from your private life.

Now imagine HotDogCo buying both stands. Both Alice and Bob now follow exactly the same tested and optimized recipe, patented by the corporation. Their hot dogs are indistinguishable. So, they now focus on what they are. If someone gives you a strange look, you interpret it as a personal insult. If Alice sells fewer hot dogs than Bob, she will think it's due to her gender or some other identity attribute. The energy that could be spent on improving the recipe is now spent on blaming Bob across the road for having an unfair advantage, and asking her followers to boycott him.

The corporations are happy. Out of each $0.99 hot dog, $0.50 goes to corporate profit, $0.20 to the ingredients, and Alice and Bob get paid minimum wage. They are also interchangeable and are now too busy fighting each other to launch an independent stand and keep that $0.50 to themselves. Mission accomplished: HotDogCo stock soars, while Alice and Bob are now locked into misery, hatred, and zero savings or retirement prospects.


I think part of the problem is that people see criticism as blame and as an accusation of bad actors. Really, criticism is about improving, and it is an essential aspect of any democracy. It's also fair to say that many who complain do blame, and that some take advantage and deserve blame.

But a big problem is that any criticism is considered a bad thing. I actually run into this problem a lot when I tell people it's more important to criticize the leaders of your own party, because you have a voice in that party. So you can actually iterate and improve your party. You don't have a voice in the opposing party, and many times complaining about the other party is used smugly to express how your party is superior.

We forgot that critique is about improving, not hating.


It's not about that, it's about the power structure. Imagine a hypothetical small town with several owner-operated grocery stores, a local power station, owner-operated service companies, etc. Each of the involved actors is knowledgeable about what they are doing. They are hard to replace, their experience is important, so they have more bargaining power and can charge more for their work. The wealth is more or less evenly distributed among the community. Most people can afford a house and a good living. They also have to maintain business relations with other people (as opposed to faceless corporations). They need each other, are nice to each other and can relate to each other's problems.

Now look what happens when centrally managed grocery chains, power companies, repair services, etc. move in. All management is centralized. All locals are now employed by one of the big companies following corporate protocol. They are now easily replaceable, so 80% of the town works for minimum wage after completing a one-week training course. Most of the wealth gets transferred to the corporations. Another 10% of the town works as middle management for the corporations, stuck playing dirty political games. The luckiest 10% keeps positions that cannot be taken by corporations: small niche businesses, trades, doctor practices. Since most of the people are replaceable, people don't really need other people. They depend on corporations instead, so maintaining interpersonal relations becomes unnecessary. People hate each other over small disagreements. They envy others' jobs. They consider life very unfair because, in the absence of a clearly visible connection between merit and outcome, the grass on the other side always looks greener.

The "well-regulated economy" usually implies raising the taxes for top earners and redistributing the wealth to lower earners. Except, the corporations usually find tricks to avoid it, so you end up taxing the few remaining non-replaceable people, and redistributing it down the road. Supply/demand comes into play, so asset prices rise accordingly, 80% of people still cannot afford a decent living from their salary. Self-improvement still doesn't pay, since the corporations want people to be replaceable. So, social tensions are still high, and on top of that, we start having a shortage of doctors, niche shops, etc., because with the higher taxes and hostile attitude from lower-earners, it's just not worth the hassle anymore. On top of that, we now have a class of social justice politicians, who appeal to groups of people by pitting them against other groups and promising better treatment at the expense of other groups. So not only the pie is shrinking, but we are now supposed to fight for it.

If you want to fix the problem, you need to incentivize personal self-improvement again. Tax the hell out of monopolies, platforms and aggregators. Make sure running your small business (and learning how to make your customers happy) becomes viable again. Give the economic leverage back to the people putting their time into it, and not to the abstract corporations.


>Why, at the government level, do we not look at best practices worldwide?

Because there is a ruling class and for the rulers it is not about best practices or happiness, it is about maximizing wealth and maintaining power.

Education and healthcare (which should be considered human rights, not industries) are very helpful in seeing some of the tools the ruling class uses to maintain power. Both education and healthcare are standardized enough that they are rated by country, and while the US spends more on both than any other country, it is not even close to being rated number 1 in either category.

Yet, to your point, when one points to other countries as evidence of how well other systems work (with less funding), that person will be demonized as anti-American. Even in the current pandemic, where the numbers speak for themselves, we are subjected to hearing the ruling class get on TV and tell us how we are #1 and doing better than every other country, and even worse, in many cases this ruling class refers to the pandemic in the past tense and proclaims how they defeated it. It is Machiavellianism incarnate.


Roughly in order of user friendliness and accessibility:

Puzzles can help introduce very powerful ideas without any baggage like mathematical notation. Smullyan's "Knights and Knave" style puzzles often touch very deep ideas in mathematical logic.[1] To Mock a Mockingbird[2] is probably his most famous book.

Godel, Escher, Bach has very clear, fun, and memorable descriptions of formal systems and their fascinating properties. After reading that it will be easier to view real world systems as formal systems and to understand the implications of that.[3]

Most of object-oriented programming and entity-attribute-value models can be found in the writings of Plato and Aristotle. For the purposes of abstract thinking, Plato's theory of forms[4] and Aristotle's Organon[5], especially its Prior and Posterior Analytics, which describe syllogistic reasoning, are probably the most important. For roughly 2000 years, this was logic. The Theaetetus[6] is also a very good introduction to epistemology and the deductive method of philosophy. In a practical sense, there is very little that programmers do in terms of modeling data or systems that does not derive more or less directly from these two thinkers.

It's only been in the last two centuries that we've improved on Greek logic. Boole and De Morgan gave us propositional calculus[7], Frege and Peirce quantification[8], which combine to create first-order predicate logic[9]. From there you can either go to second-order logic or to set theory in order to begin talking about collections of things. Naive Set Theory[10] is a good introductory book, although you can jump straight in to ZFC set theory[11] for an axiomatic approach.

Relational algebra, which will be familiar in a loose sense to anyone who has ever worked with a relational database, is a formal theory that can be studied in the abstract[12]. I find the terminology (like "theta join") to be useful for thinking about advanced SQL statements. It's also very interesting to contrast relational algebra with ZFC set theory - many of the axioms are similar, but there are also crucial differences.

Lately, in the last century or so, abstract algebra[13] has proven very useful in modelling all kinds of real-world phenomena: for example, Lie groups in physics, or finite fields in cryptography. Abstract algebra basically strips numbers down to their most basic axioms and generalizes them. In group theory we study structures that have a single operation (say addition); "rings" then allow a second operation (say multiplication), and "fields" allow this second operation to be inverted. It is incredibly fruitful to model your real-world system as an abstract algebra and then to add axioms that fit your system (do your operations commute? Are they associative? Can they be reversed?) because you can then leverage a huge number of appropriate theorems.
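
To make that concrete, here is a tiny sketch (in Python, with made-up helper names, just for illustration) of what checking those axioms can look like over a small finite carrier set:

    from itertools import product

    def commutative(elems, op):
        return all(op(a, b) == op(b, a) for a, b in product(elems, repeat=2))

    def associative(elems, op):
        return all(op(op(a, b), c) == op(a, op(b, c))
                   for a, b, c in product(elems, repeat=3))

    def invertible(elems, op, identity):
        # every element has some inverse with respect to the identity
        return all(any(op(a, b) == identity for b in elems) for a in elems)

    Z5 = range(5)
    add = lambda a, b: (a + b) % 5   # integers mod 5 under addition: a commutative group
    sub = lambda a, b: (a - b) % 5   # subtraction mod 5: neither commutative nor associative
    print(commutative(Z5, add), associative(Z5, add), invertible(Z5, add, 0))  # True True True
    print(commutative(Z5, sub), associative(Z5, sub))                          # False False

Once a structure passes the group axioms, the usual group-theoretic theorems apply to it for free, which is the payoff described above.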

The mother of all "abstract thinking" has to be category theory[14], which is so abstract I can hardly even describe it. Nevertheless, many people find it a useful framework, with commutative diagrams[15] showing up in all kinds of papers.

[1]: https://en.wikipedia.org/wiki/Raymond_Smullyan

[2]: https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird

[3]: https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

[4]: https://en.wikipedia.org/wiki/Theory_of_forms

[5]: https://en.wikipedia.org/wiki/Organon

[6]: https://plato.stanford.edu/entries/plato-theaetetus/

[7]: https://en.wikipedia.org/wiki/Propositional_calculus

[8]: https://en.wikipedia.org/wiki/Quantifier_(logic)

[9]: https://en.wikipedia.org/wiki/First-order_logic

[10]: https://en.wikipedia.org/wiki/Naive_Set_Theory_(book)

[11]: https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...

[12]: https://en.wikipedia.org/wiki/Relational_algebra

[13]: https://en.wikipedia.org/wiki/Abstract_algebra

[14]: https://en.wikipedia.org/wiki/Category_theory

[15]: https://en.wikipedia.org/wiki/Commutative_diagram


Infinite in All Directions is still one of my favorite science books.

"Science and religion are two human enterprises sharing many common features. They share these features also with other enterprises such as art, literature, and music. The most salient features of all these enterprises are discipline and diversity. Discipline to submerge the individual fantasy in a greater whole. Diversity to give scope to the infinite variety of human souls and temperaments. Without discipline there can be no greatness. Without diversity there can be no freedom. Greatness for the enterprise, freedom for the individual- these are the two themes, contrasting but not incompatible, that make up the history of science and the history of religion."


"Behave: The Biology of Humans at Our Best and Worst" https://www.amazon.com/Behave-Biology-Humans-Best-Worst/dp/0...

"The Mosquito: A Human History of Our Deadliest Predator" https://www.amazon.com/Mosquito-Human-History-Deadliest-Pred...

"The Man Who Solved the Market: How Jim Simons Launched the Quant Revolution" https://www.amazon.com/Man-Who-Solved-Market-Revolution/dp/0...

"Hacking Darwin: Genetic Engineering and the Future of Humanity" https://www.amazon.com/Hacking-Darwin-Genetic-Engineering-Hu...

"Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do" https://www.amazon.com/Biased-Uncovering-Hidden-Prejudice-Sh...

"Range: Why Generalists Triumph in a Specialized World" https://www.amazon.com/Range-Generalists-Triumph-Specialized...

"The Spy and the Traitor: The Greatest Espionage Story of the Cold War" https://www.amazon.com/Spy-Traitor-Greatest-Espionage-Story/...

"Trick Mirror: Reflections on Self-Delusion" https://www.amazon.com/Trick-Mirror-Self-Delusion-Jia-Tolent...

"Talking to Strangers: What We Should Know about the People We Don't Know" https://www.amazon.com/Talking-Strangers-Should-about-People...

"Prediction Machines: The Simple Economics of Artificial Intelligence" https://www.amazon.com/Prediction-Machines-Economics-Artific...


The loan packaging and insurance were a symptom of the underlying problem that drove the 2008 crisis: transatlantic transactions that worked to evade the regulations meant to control systemic risk.

This process started, if you had to pick a place, with a US bank issuing a mortgage. The cash part eventually wound its way to various short-term "risk-free" assets, one of which was short-term asset-backed loans to European financial institutions. The mortgage asset got packaged up and sold on to the broader capital markets, in which European financial institutions participated. So the US banking system basically created a market in which European financial institutions could buy mortgage-backed paper, financing the purchase at good short-term rates generated by the cash created by issuing the backing mortgages.

Of course, banking regulators know that just letting banks go hog-wild creating asset-liability pairs is going to end badly, so this is where the regulatory arbitrage comes in. Under US banking regulations, banks had limits on the assets on their books. There's a strong incentive to offload them into the capital markets - the banks realize an immediate profit and clear out valuable space on their balance sheet. European banks, on the other hand, had more complex leverage limits that took into account the perceived riskiness of the various assets they owned, with AAA-rated assets giving the highest leverage limits.

So that, in a nutshell, is what happened in the run-up to 2008. Banks generate more profits when they create more risk, so there are regulatory limits on risk. US banks evaded those risk limits by selling the risk on to the capital markets, and US regulators assumed that this process was systemically safe. EU banks evaded those risk limits by blindly levering up to buy whatever AAA-rated dollar-denominated assets the US capital markets handed to them, and EU regulators assumed that high leverage on AAA-rated assets was systemically safe. Neither regulatory regime had a good holistic view of the financial asset vortex brewing in the Atlantic.

Anyhow, my overall point is "so that they could be sold to suckers" doesn't really tell the entire story. The asset packaging was done in order to take as much risk as possible under US and EU banking regulations, taking advantage of transatlantic differences in regulatory regimes.


If you want to learn Morse, it is a lot easier if you don't try to learn it in alphabetical order. Instead you should learn it in order of the lengths of the signals, which corresponds roughly to the frequency order of the letters in English, i.e. learn the one-beep-long letters first (E, T), then the two-beep-long letters (A, N, I, M), then the three-beep-long letters, etc.

E . T -

I .. A .- N -. M --

S ... U ..- R .-. D -..

H .... B -... L .-.. F ..-. V ...-

W .-- K -.- G --. O ---

Z --.. C -.-. X -..- P .--.

J .--- Y -.-- Q --.-
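
A small Python sketch of the same idea, for anyone who wants to play with it: it groups the full table above by code length, shortest first (strict length order; the grouping above also folds in letter frequency).

    MORSE = {
        "E": ".",    "T": "-",
        "I": "..",   "A": ".-",   "N": "-.",   "M": "--",
        "S": "...",  "U": "..-",  "R": ".-.",  "D": "-..",
        "W": ".--",  "K": "-.-",  "G": "--.",  "O": "---",
        "H": "....", "B": "-...", "L": ".-..", "F": "..-.", "V": "...-",
        "Z": "--..", "C": "-.-.", "X": "-..-", "P": ".--.",
        "J": ".---", "Y": "-.--", "Q": "--.-",
    }

    # Print the letters grouped by the length of their code, shortest codes first.
    for length in range(1, 5):
        group = [f"{letter} {code}" for letter, code in MORSE.items() if len(code) == length]
        print("   ".join(group))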


If you're looking for a replacement you don't understand the problem.

The problem isn't that P=.05 is an arbitrary measure of significance. The problem is that only publishing significant results is a bias against the null hypothesis.

Let's say you're doing a study of flipping coins. The null hypothesis is that the coin is evenly weighted. If the null hypothesis is true, when you flip a coin once it will come up heads with P=.5. If you flip the coin twice, under the null hypothesis both flips come up heads with P=.25. The probability of all flips coming up heads is P=0.125 for 3 flips, P=0.0625 for 4 flips, and P=.03125 for 5 flips. So if we flip a coin 5 times and get heads all 5 times, we can conclude that the coin is weighted in some way, with P=0.03125.

Let's say all the major journals of coin flipping only publish results with the high significance of P<0.05. Alice flips a quarter 5 times and gets 3 heads and 2 tails, and nobody will publish her study because it has P=.3125 (see [1] for an intuitive explanation of how this was calculated). Bob flips a quarter 5 times and gets 2 heads and 3 tails--again, no journal will publish him. Catherine, David, Ellen, Frank, and Geri all perform the same experiment, most of them getting 3:2 result ratios, some getting 4:1 result ratios, but nobody getting all heads or all tails, just as one would expect given the null hypothesis. And the journal editors tirelessly send out rejection letters to all their studies.

Now somewhere down the line, Robert flips a coin 5 times and gets 5 heads. This result has P=.03125, which meets the requirement of P<.05! He sends it to the American Journal of Coin Flipping Studies (AJCFS), and they are very excited to publish his results! Nature and Science magazines do front page pieces with headlines "Quarters Found Heavy-headed" and "Washington Shows His Face" respectively. A casino hires Robert as a consultant for the design of their coin-flipping games. His quarter-flipping study is cited in the abstracts of two dime-flipping studies and a half-dollar flipping study. During the trial of a murderer who placed quarters tails-side up on his victims, Robert is called as an expert witness to say that the coins were placed there, not flipped there.

A week later Sally flips a quarter 5 times and gets 2 heads and 3 tails. She sends the results of her study to the AJCFS noting a failure to reproduce Robert's result, but her study is rejected because it has P=.3125.

Now, if you survey the AJCFS and all the other academic journals on coin flipping, you'd conclude that quarters are significantly weighted towards heads. But in fact, the null hypothesis is true: quarters are pretty evenly weighted. Robert's low-P result is exactly what you'd expect to happen eventually if you have enough people perform the 5-quarter-flip experiment--in fact, if a lot of people are studying coin flips, the P of getting a low-P result approaches P=1. But because the AJCFS has a P=.05 requirement, they've created a bias against the null hypothesis, which deceives the public into thinking that flipped quarters are more likely to come up heads than tails.

This is likely the reason why so many fields, most notably psychology[2], are having a replication crisis[3] and a similar effect can be used in P-hacking[4] to bolster results that are essentially fake.

Unlike coin-flipping, fields with replication crises like psychology and medicine have real effects on real people's lives. It's irresponsible and unethical for journals to publish with a bias against the null hypothesis, and adjusting the P-value requirements to another significance threshold, even a less arbitrary one, doesn't fix the issue.

The solution, I think, is for journals to commit to publish studies before the study has been performed, based on the methodology, previous studies on the subject, and qualifications of the researcher. This would mean that many, many studies would be published with null results, and that would be a good thing.

[1] There are 32 possible outcomes for flipping a coin 5 times. If we group them by how many heads and tails, we can calculate a probability for each outcome:

    HHHHH 1 result of 5 heads       ->      P  = 1/32 = .03125

    HHHHT
    HHHTH
    HHTHH 5 results of 4 heads, 1 tails  -> P  = 5/32 = .15625
    HTHHH
    THHHH

    HHHTT
    HHTHT
    HHTTH
    HTHHT
    HTHTH
    HTTHH 10 results of 3 heads, 2 tails -> P = 10/32 = .3125
    THHHT
    THHTH
    THTHH
    TTHHH

    HHTTT
    HTHTT
    HTTHT
    HTTTH
    THHTT 10 results of 2 heads, 3 tails -> P = 10/32 = .3125
    THTHT
    THTTH
    TTHHT
    TTHTH
    TTTHH

    HTTTT
    THTTT
    TTHTT 5 results of 1 heads, 4 tails  -> P  = 5/32 = .15625
    TTTHT
    TTTTH

    TTTTT 1 result of 5 tails       ->      P  = 1/32 = .03125
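
These numbers fall straight out of the binomial coefficients. A minimal Python check (standard library only), which also shows how quickly at least one "significant" all-heads run becomes likely once many independent labs repeat the experiment with a fair coin:

    from math import comb

    n = 5
    for heads in range(n + 1):
        p = comb(n, heads) / 2 ** n
        print(f"{heads} heads, {n - heads} tails: P = {comb(n, heads)}/32 = {p}")

    # Chance that at least one of N labs gets 5 heads in a row from a fair coin
    for labs in (1, 10, 50, 100):
        print(labs, "labs:", round(1 - (1 - 1 / 32) ** labs, 3))
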
[2] https://thepsychologist.bps.org.uk/what-crisis-reproducibili...

[3] https://en.wikipedia.org/wiki/Replication_crisis

[4] https://journals.plos.org/plosbiology/article?id=10.1371/jou...

EDIT: Also see roenxi's excellent post on how "significant" means different things in statistics and colloquial English: https://news.ycombinator.com/item?id=20895893


Hypothesis:

1. Advances in fundamental science are what ultimately dictate the growth rate of an economy. For example when you discover quantum mechanics you can make lasers, circuit boards and therefore computers which boosts everything enormously.

2. The more science you have done, the harder it is to do more. You can only discover a truth once. For example, discovering new elements is extremely hard now compared to when Hennig Brand discovered phosphorus in his urine.

Consequences:

1. When a society gets the scientific method right it goes through an S-shaped (logistic) curve, where science accelerates for a while and then plateaus out.

2. When you hit the plateau of the curve the maximum growth rate of the economy becomes very limited, it's just not possible to have new ideas which are worth developing very fast.

3. Human labour still provides a surplus, so what you get is more and more resources piling up with nowhere to invest them. Look at Apple's $245bn cash pile; if they had new ideas they would put that money to use.

This will cause bond yields to fall to almost nothing and loads of money to be pumped into any startup with a vague hope of accomplishing something.

If we are in this situation, it would also mean that the '08 crisis is not needed to explain the slowing growth in the world; things are slowing because we don't have many new ideas.

Common objections:

1. But what about X invention that happened in the last few years? Invention still happens on the plateau, just slower. NNs, CRISPR, exoplanets, and smartphones are cool, but they're not relativity, quantum mechanics, electrification, aeroplanes, etc.

2. Scientific progress is exponential! Fitting a curve to the early part of a logistic gives you an exponential, right up until it shifts on you and starts slowing (see the short numerical sketch after this list).

3. China and India are still growing strong. Essentially they are still deploying the previous discoveries. China's growth is slowing over time precisely as it has fewer and fewer discoveries left to deploy.
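
On objection 2, a quick numerical sketch (Python, with arbitrary made-up logistic parameters): early on, a logistic curve is almost indistinguishable from an exponential, and then it pulls away.

    from math import exp

    L, k, t0 = 100.0, 1.0, 10.0                   # arbitrary logistic parameters
    logistic  = lambda t: L / (1 + exp(-k * (t - t0)))
    early_exp = lambda t: L * exp(k * (t - t0))   # the exponential matching the early regime (t << t0)

    for t in (0, 2, 4, 6, 8, 10, 12, 14):
        print(t, round(logistic(t), 3), round(early_exp(t), 3))

Up to roughly t = 6 the two columns track each other closely; by t = 14 the logistic has flattened near 100 while the exponential has run off into the thousands.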

More to read:

DOE finds it can't get supercomputers like it used to.

https://www.nextplatform.com/2019/05/06/doe-on-collision-cou...

https://www.theatlantic.com/science/archive/2018/11/diminish...

https://slatestarcodex.com/2018/11/26/is-science-slowing-dow...


Devastating.

I'd like to point out a recent tweet @joeerl shared [0][1] that contained things he felt were good wisdom but lost over time:

--

Ideas that we forgot:

1. Flow based programming.

2. Pipes.

3. Linda Tuple Spaces.

4. Hypertext (=/= HTML).

Computer Science 101:

1. Observational equivalence.

2. Isolation.

3. Composition.

4. Causality.

5. Physics.

Two papers to read:

1. The Emperor's Old Clothes - ACM.

2. A Plea for Lean Software - Niklaus Wirth.

Two videos to watch:

1. The Computer Revolution Hasn't Happened Yet - Alan Kay.

2. Computers for Cynics - Ted Nelson.

Four old tools to learn: Emacs, Bash, Make, Shell.

Three books to read:

1. Algorithms + Data Structures = Programs.

2. The Mythical Man Month.

3. How to Win Friends and Influence People.

Correct a typo:

?? Learn git -> locate program that creates page -> locate typo -> correct -> send push [sic] request.

!! Select text -> type in correction -> people see change.

Two projects:

1. Link to content hash not a name (request content by sha256, immune to people in the middle).

2. Elastic Links (links should not break if you move an endpoint).

--

Easily one of the best thinkers of his generation [2]. RIP.

[0] https://pbs.twimg.com/media/Ds19oHnXoAAwlAp.jpg

[1] https://www.youtube.com/embed/-I_jE0l7sYQ

[2] https://learnxinyminutes.com/docs/erlang/


The Software Carpentry team has stressed the usefulness of shell scripting for years [1]. One nice feature is that the scripts are easily tested as they are developed. For example, biologists find this approach helpful for constructing large data processing pipelines [2]. Because the scripts are plain text, they play well with version control (e.g. git). The tools are free, well tested, and will work on most hardware.

[1] http://swcarpentry.github.io/shell-novice/

[2] https://computingskillsforbiologists.com/


Here that same Google design ethicist explains in detail how technology hijacks your mind [1].

TL;DR:

Hijack 1: If You Control the Menu, You Control the Choices. Ask yourself: What’s not on the menu?, Why am I being given these options and not others? Do I know the menu provider’s goals? Is this menu empowering for my original need, or are the choices actually a distraction?

Hijack 2: Make apps behave like Slot Machines - give a variable reward. If you want to maximize addictiveness, link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.

Hijack 3: Fear of Missing Something Important (FOMSI). If I convince you that I’m a channel for important information, messages, friendships, or potential sexual opportunities — it will be hard for you to turn me off, unsubscribe, or remove your account — because there is a 1% chance you could be missing something important.

Hijack 4: Social Approval. When you get tagged by my friend, you think s/he made a conscious choice to tag you, when actually s/he just responds to Facebook’s suggestion, not making an independent choice. Thus Facebook controls the multiplier for how often millions of people experience their social approval on the line.

Hijack 5: Social Reciprocity (Tit-for-tat). You follow me — it's rude not to follow you back. When you receive an invitation from someone to connect, you imagine that person making a conscious choice to invite you, when in reality, they likely unconsciously responded to LinkedIn's list of suggested contacts.

Hijack 6: Bottomless bowls, Infinite Feeds, and Autoplay

Hijack 7: Instant Interruption vs. “Respectful” Delivery. Messages that interrupt people immediately are more persuasive at getting people to respond than messages delivered asynchronously.

Hijack 8: Bundling Your Reasons with Their Reasons. When you want to look up a Facebook event happening tonight (your reason), the Facebook app doesn't allow you to access it without first landing on the news feed (their reasons), so Facebook converts every reason you have for using it into their reason, which is to maximize the time you spend consuming things. In an ideal world, apps would always give you a direct way to get what you want separately from what they want.

Hijack 9: Inconvenient Choices. Businesses naturally want to make the choices they want you to make easier, and the choices they don’t want you to make harder. NYTimes.com claims to give you “a free choice” to cancel your digital subscription. But instead of just doing it when you hit “Cancel Subscription,” they force you to call a phone number that’s only open at certain times.

Hijack 10: Forecasting Errors, “Foot in the Door” strategies. People don’t intuitively forecast the true time cost of a click when it’s presented to them. Sales people use “foot in the door” techniques by asking for a small innocuous request to begin with (“just one click”), and escalating from there (“why don’t you stay awhile?”). Virtually all engagement websites use this trick.

===

[1] http://www.tristanharris.com/2016/05/how-technology-hijacks-...


There are some good lecture videos with Dijkstra online that give a better introduction to his way of thinking than anything he wrote:

"Reasoning about programs": https://www.youtube.com/watch?v=GX3URhx6i2E

"The power of counting arguments": https://www.youtube.com/watch?v=0kXjl2e6qD0

"Structured programming": https://www.youtube.com/watch?v=72RA6Dc7rMQ

"NU lecture": https://www.youtube.com/watch?v=qNCAFcAbSTg


Java compiles to a bytecode which is not machine code. When the bytecode is executed on the target platform's runtime, it is then compiled down to machine code.

But there's more to it than that. The bytecode is actually interpreted at first by the JVM runtime. The code is also continuously dynamically profiled. There are two compilers C1 and C2.

Whatever functions are using the most CPU time get compiled using C1. C1 rapidly compiles to poorly optimized code, but this is a big speedup over the bytecode interpreter. The function is also scheduled to be compiled again in the near future using the C2 compiler. The C2 compiler spends a lot of time compiling, optimizing and aggressively inlining.

But there's more. C2 can optimize its compile for the exact target instruction set, plus extensions, for the actual hardware it is running on at the moment. An ahead of time C compiler cannot do that. It needs to generate x86-64 code that runs on a large variety of hardware processors.

But there's more. The C2 compiler can optimize based on the entire global program. Suppose a function call from one author's library to another author's library can be optimized in some way by writing a different version of that function. C2 can take advantage of this and do it where a C compiler cannot, because it doesn't know anything about the insides of the other library it is calling -- which might be rewritten tomorrow, or might not be written yet. Once the Java program is started, the C2 compiler can see all parts of the running program and optimize as needed.

But there's more. Suppose YOUR function X calls MY function Y. If your function X is using much CPU, it gets compiled to machine code by C1, and then in a short time gets recompiled again by C2. The C2 compiler might inline my Y function into your X function. Now suppose the class containing my Y function gets dynamically reloaded. Your X function now has a stale inlined version of my Y function. So the JVM runtime changes your X function back to being bytecode interpreted once again. If your X function is still using a lot of CPU, then it gets compiled again by C1, and then in a while, by C2.

All this happens in a garbage collected runtime platform.

It is why Java programs seem to start up fine, but take a few minutes to "warm up" before they start running fast. Many Java workloads are long-running servers, so startup is infrequent.

Now you know why Java can run fast for only six times the amount of memory as a C program.

