
It goes further back than that. In 2014, Li Yao et al. (https://arxiv.org/abs/1409.0585) drew an equivalence between autoregressive generative models (next-token prediction, roughly) and generative stochastic networks (denoising autoencoders, the predecessor to diffusion models). They argued that the parallel sampling scheme correctly approximates sequential sampling.

In my own work circa 2016 I used this approach in Counterpoint by Convolution (https://arxiv.org/abs/1903.07227), where we in turn argued that, despite being an approximation, it leads to better results. Sadly, the work was dressed up as an application paper, so we weren't able to draw enough attention to get those sweet diffusion citations.
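
For concreteness, the parallel resampling both papers discuss looks roughly like the sketch below; predict(x, mask) is a hypothetical stand-in for whatever model fills in the masked positions, not anything from either paper:

  import numpy as np

  def blocked_gibbs_sample(predict, x, num_steps, rng, block_frac=0.25):
      # Parallel (blocked) resampling as an approximation to sequential Gibbs:
      # repeatedly mask a random subset of positions, then let the model
      # resample all of them at once, conditioned on the rest.
      n = len(x)
      for _ in range(num_steps):
          mask = rng.random(n) < block_frac              # pick a random block
          probs = predict(x, mask)                       # per-position categoricals for masked slots
          for i in np.flatnonzero(mask):
              x[i] = rng.choice(len(probs[i]), p=probs[i])   # each sampled independently, i.e. "in parallel"
      return x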

Pretty sure it goes further back than that still.


> how much livestock is grown and slaughtered specifically for leather

Yes, and I would hope that the majority of leather comes from cattle that are raised and slaughtered for meat. In that case, the number of cattle put through the meat grinder is a function of both the demand for meat and the demand for leather, but it is unlikely to be sensitive to both at the same time. I suspect, given that meat has so much turnover whereas leather lasts a long time, that we are meat-bound rather than hide-bound.


I don't believe it is.


I think you're not aware of all the available evidence. The lab leak theory does not implicate China so much as it implicates particular people in the US who intentionally moved the research to China to evade the US' ban on gain-of-function research. These same people then got to be the experts with the authority to craft the official narrative on the subject.

The most important pieces of evidence (imho) are:

  - No evidence for zoonotic origin has been uncovered, *and it's not for lack of trying*.
  - Peter Daszak applied for funding with DARPA to take a bat coronavirus and insert the furin cleavage site; his proposal was rejected.
  - The cover-up started with the "proximal origins" paper: a *peer-reviewed* paper in a *respectable journal* establishing zoonotic origin with certainty in the public eye. The Fauci emails show that the authors were far from certain, and several were leaning the other way.

US Right to Know does good journalism on this and other subjects: https://usrtk.org/category/covid-19-origins


I simply do not get my "evidence" from known conspiracy theorists or sites known to be funded by them. If you really want to go to these bubbles, you might as well find sources claiming evidence for the exact opposite. But as of today there exists no credible, vetted evidence to support either side.


And yet "debunked" it shall be, by taking the weakest argument and showing it is wrong with such condescension that good citizens will be afraid to believe any of it.


I think the point is that 0.5 is 1/2, and the digit 5 only appears because of our base-10 representation of numbers.
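
A quick illustration (mine, not from the thread): one half written positionally in a few even bases is always a single digit, and that digit happens to be 5 only in base 10:

  # 1/2 in a few even bases: base 2 -> 0.1, base 4 -> 0.2, base 10 -> 0.5, base 16 -> 0.8
  for base in (2, 4, 10, 16):
      print(base, f"0.{base // 2}")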


Have you seen the author list of DeepSpeech2? https://arxiv.org/abs/1512.02595


The one thing that I most wish Python had is Common Lisp's restartable conditions. They're like exceptions but they don't break your whole process if you don't handle them in the code. If you forgot to define a variable, you'll have the option to define it and resume computation. If you forgot to provide a value, you'll be able to provide one now and resume computation. As it is, Python just dies if something unforeseen happens, which feels like a missed opportunity given that Python is a dynamic language (for which it pays a price).

I work in ML, and launching a job on a cluster just to have it fail an hour later on a typo got old ten years ago. Being able to resume after silly mistakes would easily reduce debugging time by an order of magnitude, just because I need to run the job only once and not ten times.
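
There's no real equivalent in Python; the closest I can sketch is a wrapper that catches the error, asks for a value interactively, and retries the whole call. The names below (with_restart, train) are made up for illustration, and crucially this re-runs the function from the top instead of resuming at the point of the error, which is exactly the part that's missing:

  import functools

  def with_restart(fn):
      # Crude stand-in for a Common Lisp "use-value" restart: on failure,
      # prompt for a keyword argument to supply and retry the whole call.
      @functools.wraps(fn)
      def wrapper(*args, **kwargs):
          overrides = {}
          while True:
              try:
                  return fn(*args, **{**kwargs, **overrides})
              except (NameError, KeyError, TypeError) as err:
                  print(f"{fn.__name__} failed: {err!r}")
                  if input("[r]etry with a value, or [a]bort? ").startswith("r"):
                      name = input("keyword argument name: ")
                      overrides[name] = eval(input(f"value for {name}: "))  # interactive use only
                  else:
                      raise
      return wrapper

  @with_restart
  def train(learning_rate=None):
      if learning_rate is None:
          raise TypeError("learning_rate was never provided")
      print(f"training with lr={learning_rate}")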


This is a feature that is commonly overlooked. Interactive restarts are a life saver for long-running processes.


They meant algebraic in the sense of https://en.wikipedia.org/wiki/Algebraic_operation.


Sure, but to be honest I'm a little bit boggled at the idea of a PhD mathematician finding trig manipulation hard enough to be worth commenting on.


This review of Wildberger's book at https://philarchive.org/archive/FRADPR says the point isn't that the PhD mathematician finds trig hard, but rather that beginners do:

> ... every becoming-numerate generation invests enormous effort in the painful calculation of the lengths and angles of complicated figures. Surveying, navigation and computer graphics are intensive users of the results. Much of that effort is wasted, Wildberger argues. The concentration on angles, especially, is a result of the historical accident that serious study of the subject began with spherical trigonometry for astronomy and long-range navigation, which meant there was altogether too much attention given to circles. ...

> Having things done better is one major payoff, but equally important would be a removal of a substantial blockage to the education of young mathematicians, the waterless badlands of traditional trigonometry that youth eager to reach the delights of higher mathematics must spend painful years crossing.

Whether that's correct is a different question. https://handwiki.org/wiki/Rational_trigonometry seems to give a good description of opposing views.


I don't have a particularly high opinion of John D. Cook but I still wouldn't call him a 'beginner'.


I apologize. I misread the context of the thread. I mistakenly thought you were commenting on Wildberger's reasoning for promoting rational trigonometry, rather than on Cook's decision to use that approach.

If "formal verification" involves using a software-based proof assistant or automatic theorem prover then perhaps that easier to encode for those tools via rational trigonometry?

Why not ask him directly?


Cook isn't proving results about trig functions/identities; they're proving results about software. Software can never implement trig functions exactly; it must always approximate, and that's what makes it hard to verify.

In contrast, arithmetic on rational numbers can be implemented exactly, at least up to the memory limits of the machine (e.g. using a pair of bignums for the numerator and denominator).
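
As a sketch of that contrast (my own toy example, not Cook's code), the rational-trigonometry "spread" between two lines with rational direction vectors can be computed exactly with Python's fractions, whereas the classical angle-based route has to go through float approximations:

  from fractions import Fraction
  import math

  def spread(v, w):
      # Rational trigonometry's analogue of sin^2 of the angle between v and w:
      # s = 1 - (v.w)^2 / (|v|^2 |w|^2), which stays rational for rational inputs.
      dot = Fraction(v[0] * w[0] + v[1] * w[1])
      return 1 - dot ** 2 / ((v[0] ** 2 + v[1] ** 2) * (w[0] ** 2 + w[1] ** 2))

  v, w = (3, 4), (5, 12)
  print(spread(v, w))                          # 256/4225, exact
  theta = math.atan2(4, 3) - math.atan2(12, 5)
  print(math.sin(theta) ** 2)                  # ~0.0605917..., a float approximation of 256/4225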


I notice it mentions Ctrl-a and Ctrl-x which "increment or decrement any number after the cursor on the same line". I hate these shortcuts because several times I've accidentally pressed them (e.g. on some level thinking I'm in Spacemacs or in readline) and introduced a hard-to-find bug into my code without knowing it.


Agreed. I love vi, but this is almost always an anti-feature.

To disable, add these lines to .vimrc:

  noremap <C-a> <Nop>
  noremap <C-x> <Nop>


Shows how crucial customizability is, because I don't think I could work without being able to rely on <c-a/x>, and more importantly g<c-a/x>.


Won’t you spot that through your source control?


They can be quite useful, but I just stopped using them due to muscle memory of Ctrl-a for screen / tmux.


More like 10x. Black is truly a terrible thing.

