It goes further back than that. In 2014, Li Yao et al. (https://arxiv.org/abs/1409.0585) drew an equivalence between autoregressive generative models (next-token prediction, roughly) and generative stochastic networks (denoising autoencoders, the predecessor to diffusion models). They argued that the parallel sampling style correctly approximates sequential sampling.
In my own work circa 2016 I used this approach in Counterpoint by Convolution (https://arxiv.org/abs/1903.07227), where we in turn argued that, despite being an approximation, it leads to better results. Sadly, since it was dressed up as an application paper, we weren't able to draw enough attention to get those sweet diffusion citations.
> how much livestock is grown and slaughtered specifically for leather
Yes, and I would hope that the majority of leather comes from cattle that are grown and slaughtered for meat. In that case, the number of cattle put through the meat grinder is a function of both the demand for meat and the demand for leather, but it is unlikely to be sensitive to both at the same time. I suspect, given that meat has so much turnover whereas leather lasts a long time, that we are meat-bound rather than hide-bound.
I think you're not aware of all the available evidence. The lab leak theory does not implicate China so much as it implicates particular people in the US who intentionally moved the research to China to evade the US' ban on gain-of-function research. These same people then got to be the experts with the authority to craft the official narrative on the subject.
The most important pieces of evidence (imho) are:
- No evidence for zoonotic origin has been uncovered, *and it's not for lack of trying*.
- Peter Daszak applied for funding with DARPA to take a bat coronavirus and insert the furin cleavage site; his proposal was rejected.
- The cover-up started with the "proximal origins" paper: a *peer-reviewed* paper in a *respectable journal* establishing zoonotic origin with certainty in the public eye. The Fauci emails show that the authors were far from certain, and several were leaning the other way.
I simply do not get my "evidence" from known conspiracy theorists or sites known to be funded by them. If you really want to go to these bubbles, you might as well find sources claiming evidence for the exact opposite. But as of today there exists no credible, vetted evidence to support either side.
And yet "debunked" it shall be, by taking the weakest argument and showing it is wrong with such condescension that good citizens will be afraid to believe any of it.
The one thing that I most wish Python had is Common Lisp's restartable conditions. They're like exceptions, but an unhandled one doesn't tear down your whole process: if you forgot to define a variable, you get the option to define it and resume the computation; if you forgot to provide a value, you can provide one now and resume. As it is, Python just dies when something unforeseen happens, which feels like a missed opportunity given that Python is a dynamic language (for which it pays a price).
I work in ML, and launching a job on a cluster just to have it fail an hour later on a typo got old ten years ago. Being able to resume after silly mistakes would easily reduce debugging time by an order of magnitude, just because I need to run the job only once and not ten times.
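For what it's worth, the closest thing I know of in stock Python is dropping into a post-mortem debugger at the failure point. A minimal sketch (the body of main is just a placeholder for the real job; unlike a Lisp restart, this only lets you inspect state and re-run, not resume mid-computation):

    import pdb
    import traceback

    def main():
        # Placeholder for the actual training job; stands in for the typo
        # that only surfaces an hour into the run.
        raise NameError("soem_var is not defined")

    if __name__ == "__main__":
        try:
            main()
        except Exception:
            # Print the traceback, then open an interactive debugger at the
            # failing frame so state can be inspected (and patched by hand)
            # before deciding what to do next. Execution can't simply
            # continue from the failing expression, which is exactly what
            # restarts would allow.
            traceback.print_exc()
            pdb.post_mortem()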
This review of Wildberger's book at https://philarchive.org/archive/FRADPR says the point isn't that the PhD mathematician finds trig hard, but rather that it's hard for beginners:
> ... Every becoming-numerate generation invests enormous effort in the painful calculation of the lengths and angles of complicated figures. Surveying, navigation and computer graphics are intensive users of the results. Much of that effort is wasted, Wildberger argues. The concentration on angles, especially, is a result of the historical accident that serious study of the subject began with spherical trigonometry for astronomy and long-range navigation, which meant there was altogether too much attention given to circles. ...
> Having things done better is one major payoff, but equally important would be a removal of a substantial blockage to the education of young mathematicians, the waterless badlands of traditional trigonometry that youth eager to reach the delights of higher mathematics must spend painful years crossing
I apologize. I misread the context of the thread. I mistakenly thought you were commenting on Wildberger's reasoning for promoting rational trigonometry, rather than on Cook's decision to use that approach.
If "formal verification" involves using a software-based proof assistant or an automatic theorem prover, then perhaps that's easier to encode for those tools via rational trigonometry?
Cook isn't proving results about trig functions/identities; they're proving results about software. Software can never implement trig functions exactly; it must always approximate them, and that is what makes it hard to verify.
In contrast, arithmetic on rational numbers can be implemented exactly, at least up to the memory limits of the machine (e.g. using a pair of bignums for the numerator and denominator).
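To make that concrete, here's a minimal sketch using Python's fractions.Fraction (a bignum numerator/denominator pair under the hood). Quadrance and spread are Wildberger's rational replacements for distance and angle, and both stay exact rationals for rational inputs:

    from fractions import Fraction as F

    def quadrance(p, q):
        # Squared distance between two points; exact for rational coordinates.
        return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2

    def spread(u, v):
        # Rational-trig "spread" (sin^2 of the angle) between direction
        # vectors u and v; also exact for rational inputs.
        cross = u[0] * v[1] - u[1] * v[0]
        return F(cross ** 2, (u[0] ** 2 + u[1] ** 2) * (v[0] ** 2 + v[1] ** 2))

    print(quadrance((F(0), F(0)), (F(3), F(4))))  # 25, exactly
    print(spread((1, 0), (1, 1)))                 # 1/2, exactly (a 45-degree angle)

No rounding anywhere, which is what makes this sort of thing friendlier to a proof assistant than reasoning about floating-point sin/cos.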
I notice it mentions Ctrl-a and Ctrl-x, which "increment or decrement any number after the cursor on the same line". I hate these shortcuts because I've accidentally pressed them several times (e.g. while on some level thinking I'm in Spacemacs or in readline) and introduced a hard-to-find bug into my code without knowing it.
Pretty sure it goes further back than that still.