There seems to be a mistake in the "Building the Transition Matrix" section of this article. Instead of showing the diagonal matrix D with the normalization coefficients, it shows the normalized Markov matrix M and incorrectly claims it is equal to the unnormalized matrix C.
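To make the intended relationship concrete, here's a minimal sketch, assuming the row-normalization convention (the article may normalize columns instead, and the matrix values here are made up):

    import numpy as np

    # Hypothetical count matrix C (not taken from the article):
    # C[i, j] = number of observed transitions from state i to state j.
    C = np.array([[3., 1.],
                  [2., 2.]])

    # D is the diagonal matrix of normalization coefficients (here: row sums of C).
    D = np.diag(C.sum(axis=1))

    # The normalized Markov matrix is M = D^-1 C; it is not equal to C.
    M = np.linalg.inv(D) @ C
    print(M)                               # each row sums to 1
    print(np.allclose(M.sum(axis=1), 1.0)) # True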
Come on now. You know he's not talking about small machine learning models or protein folding programs. When people talk about AI in this day and age they are talking about generative AI. All of the articles he links when bringing up common criticisms are about generative AI.
I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion just because the technology you imagine would be so wonderful.
At one point they took the roof off of a building because the cardinals were taking too long. At another, they restricted the cardinals' diet to bread and water. In one instance they stopped their pay until they decided.
You should look into the history of choosing a pope, it’s wild.
They had to. They are locked in there, isolated, until they elect someone. That is how Catholic papal elections work. Their job is to elect a pope and then move on to their normal duties and interactions.
That would mean deactivating all discounts. Honey actively scrapes for them, so as soon as a discount is available on the internet it will find it. Not an impossible solution, but not a popular one.
You could probably be clever and come up with a more complicated discount scheme that's not so easy for Honey to take advantage of, but that adds complexity for users as well.
You can do unique discount codes that are one-time use, or good for maybe up to 5 uses. That's common, especially if you want tracking, like when you send out mailers or emails.
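For illustration only, a minimal sketch of that kind of scheme (all names and limits here are made up, not from any particular store's system):

    import secrets

    MAX_USES = 5
    codes = {}  # code -> remaining uses

    def issue_code() -> str:
        # One unguessable code per mailer/email; a leaked code is worth little to a scraper.
        code = secrets.token_urlsafe(8)
        codes[code] = MAX_USES
        return code

    def redeem(code: str) -> bool:
        # Reject unknown or exhausted codes; otherwise consume one use.
        if codes.get(code, 0) <= 0:
            return False
        codes[code] -= 1
        return True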
The entire 3blue1brown series[0] on linear algebra is well worth watching; it has really intuitive graphical explanations of a bunch of concepts. Here's the one on determinants in particular[1].
TL;DW: the determinant is the factor by which the area/volume/hypervolume (depending on dimension) of a shape is scaled when you apply the matrix transformation to each of its points.
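As a quick sanity check of that picture (my own toy example, nothing from the videos):

    import numpy as np

    # A sends the unit square (corners (0,0), (1,0), (1,1), (0,1)) to a
    # parallelogram; |det A| is the factor by which the area gets scaled.
    A = np.array([[2., 1.],
                  [0., 3.]])
    corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]).T
    image = A @ corners                     # transformed corners, one per column
    x, y = image
    # Shoelace formula for the area of the image polygon:
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    print(abs(np.linalg.det(A)), area)      # both 6.0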
Something that seems to be frequently lacking in discussions of convergence in introductory texts on Taylor series is the possibility that the series DOES converge, but NOT to the approximated function. It's not sufficient to conclude that the derived Taylor series must converge to cos(x) because it always converges, since any of the infinitely many functions that match cosine's derivatives at x = 0 will have the same Taylor expansion. How do you know cos(x) is the one it will converge to?
I'm not quite sure whether you're asking for an explanation or whether you're simply making the observation that this is often omitted from the discussion. Either way I think it's an interesting point, so I'll elaborate a bit on it.
As you say, there's no guarantee that even a convergent Taylor series[0] converges to the correct value in any interval around the point of expansion. Though the series is of course trivially[1] convergent at the point of expansion itself, since only the constant term doesn't vanish.
The typical example is f(x) = exp(-1/x²) for x ≠ 0; f(0) = 0. The derivatives are mildly annoying to compute, but they must look like f⁽ⁿ⁾(x) = exp(-1/x²)pₙ(1/x) for some polynomials pₙ. Since the exponential decay dominates the polynomial growth of pₙ(1/x) as x → 0, it must be the case that f(0) = f'(0) = f"(0) = ··· = 0. In other words, the Taylor series is 0 everywhere, but clearly f(x) ≠ 0 for x ≠ 0. So the series converges everywhere, but it agrees with f only at x = 0. At all other points it predicts the wrong value for f.
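A quick numerical illustration of how badly the series misses (my own throwaway check, not a proof):

    import math

    # Every Taylor coefficient of f at 0 vanishes, so every Taylor polynomial is
    # identically 0; yet f(x) > 0 whenever x != 0, so the (convergent) series
    # only recovers f at the origin.
    def f(x):
        return 0.0 if x == 0 else math.exp(-1.0 / x**2)

    for x in (1.0, 0.5, 0.1):
        print(x, f(x))       # e.g. f(0.5) ~ 1.8e-2, nowhere near the series' value 0

    # Crude check that f'(0) = 0: the symmetric difference quotient underflows to 0.
    h = 1e-3
    print((f(h) - f(-h)) / (2 * h))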
The straightforward real-analytic approach to resolving this issue goes through the full formulation of Taylor's theorem with an explicit remainder term[2]:
f(x) = Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! + Rₙ(x),
where Rₙ is the remainder term. To clarify, this is a _truncated_ Taylor expansion containing terms k=0,...,n.
There are several explicit expressions for the remainder term, but one that's useful is
Rₙ(x) = f⁽ⁿ⁺¹⁾(ξ)(x-a)ⁿ⁺¹/(n+1)!,
where ξ is not (a priori) fully known, but it is guaranteed to exist in [min(a,x), max(a,x)], i.e. the closed interval between a and x.
Let's consider f(x) = cos(x) as an easy example. All derivatives look like ±sin(x) or ±cos(x). This lets us conclude that |f⁽ⁿ⁺¹⁾(ξ)| ≤ 1 for all ξ∈(-∞, ∞). So |Rₙ(x)| ≤ |x-a|ⁿ⁺¹/(n+1)! for all n. Since factorial growth dominates exponential growth, it follows that |Rₙ(x)| → 0 as n → ∞ regardless of which value of a we choose. In other words, we've proved that f(x) - Σⁿf⁽ᵏ⁾(a)(x-a)ᵏ/k! = Rₙ(x) → 0 as n → ∞ for all choices of a. So this is a proof that the value of the Taylor series around any point is in fact cos(x).
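If anyone wants to see the bound in action, here's a quick numerical check for a = 0 (my own throwaway code):

    import math

    def taylor_cos(x, n):
        # Partial sum of the Taylor series of cos around 0, terms k = 0..n.
        return sum((-1) ** (k // 2) * x ** k / math.factorial(k)
                   for k in range(n + 1) if k % 2 == 0)

    x = 2.0
    for n in (2, 4, 8, 12):
        err = abs(math.cos(x) - taylor_cos(x, n))
        bound = abs(x) ** (n + 1) / math.factorial(n + 1)
        print(n, err, bound, err <= bound)   # err shrinks and always stays under the bound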
Similar proofs for sin(x), exp(x), etc are not much more difficult, and it's not hard to turn this into more general arguments for "good" cases. Trying to use the same machinery on the known counterexample exp(-1/x²) is obviously hopeless as we already know the Taylor series converges to the wrong value here, but it can be illustrative to try (it is an exercise in frustration).
A nicer, more intuitive setting for analysis of power series is complex analysis, which provides an easier and more general theory for when a function equals its Taylor series. This nicer setting is probably the reason the topic is mostly glossed over in introductory calculus/real analysis courses. However, it doesn't necessarily give detailed insight into real-analytic oddities like exp(-1/x²) [3].
[0]: For reference, the Taylor series of a function f around a is: Σf⁽ᵏ⁾(a)(x-a)ᵏ/k!. (I use lack of upper index to indicate an infinite series as opposed to a sum with finitely many terms.)
[1]: At x = a, the Taylor series evaluates to Σf⁽ᵏ⁾(a)(a-a)ᵏ/k! = f(a) + f'(a)·0 + (f"(a)/2!)·0² + ··· = f(a). All the terms containing (x-a) vanish.
[3]: Something very funky goes on with this function as x → 0 in the complex plane, but this is "masked" in the real case. In the complex case, this function is said to have an essential singularity at x = 0.
I'm pretty sure this occurs when the timer goes off during your query and the alarm gets queued after the response. In my experience the alarm usually goes off immediately after, though; I've never experienced a 30s delay.
The belief that a normal number must eventually contain itself arises from extremely flawed thinking about probability. Like djkorchi mentioned above, if we knew pi = 3.14....pi..., that would mean pi = 3.14... + 10^(-n) pi for some n, meaning (1 - 10^(-n)) pi = 3.14... and pi = (3.14...) / (1 - 10^(-n)), aka a rational number (a terminating decimal divided by a rational).
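To make the algebra concrete with a toy case (the digits and the position of the copy are chosen purely for illustration): if pi's decimal expansion were "3.14" followed by a full copy of pi's own digit string "314159...", the tail would be worth pi/1000, and solving the resulting equation gives a rational value, which is the contradiction:

    from sympy import Eq, Rational, solve, symbols

    x = symbols('x')
    # Suppose x = 3.14 followed by a copy of x's own digit string "314159...":
    # the tail 0.00314159... equals x/1000, so x = 3.14 + x/1000.
    print(solve(Eq(x, Rational(314, 100) + x / 1000), x))   # [3140/999], a rational number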
> The belief that a normal number must eventually contain itself arises from extremely flawed thinking about probability.
Yes. There is an issue with the premise as it leads to a contradiction.
> Like djkorchi mentioned above, if we knew pi = 3.14....pi..., that would mean pi = 3.14... + 10^(-n) pi for some n, meaning (1 - 10^(-n)) pi = 3.14... and pi = (3.14...) / (1 - 10^(-n)), aka a rational number.
Yes. If pi = 3.14...pi (pi repeats at the end), then it is rational: the ending pi would itself contain an ending pi, and the digits would repeat forever (hence a rational number). I thought the guy was talking about pi containing pi somewhere within itself.
pi = 3.14...pi... (where the second ... represents an infinite sequence of digits). Then we would never reach the second set of ..., and the digits of pi would not be enumerable.
So if pi cannot be contained within (anywhere in the middle of pi) and pi cannot be contained at the end, then pi must not contain pi.
The very snippet you gave there is a counterexample to your argument. He defines miles in terms of feet (which are in turn defined in meters), allowing him to use commonly known conversion factors as a sanity check while still keeping all values in meters. If he had used his already-present definition of an inch as 0.0254 meters to define feet, he could have compounded this even further. The true answer is almost certainly that he simply did whatever came to mind first, and didn't think of defining feet in terms of inches because he hadn't defined inches yet.
I mean the legal definition of the inch is 25.4 mm. All other imperial length units are derived from the inch.
Furthermore, there is more than one definition of the inch. If he worked on astronomical data from the 19th-century UK, all these units would have to be changed. By tying the foot to the inch, he'd only have to redefine the inch to the old British inch.
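A tiny sketch of what that buys you (the constant names are mine, not from the code being discussed):

    # Derive the imperial length units from the inch, keeping every value in meters.
    INCH = 0.0254            # meters, modern legal definition
    FOOT = 12 * INCH
    YARD = 3 * FOOT
    MILE = 5280 * FOOT

    # Sanity check against a commonly known conversion factor:
    assert abs(MILE - 1609.344) < 1e-9

    # Working with 19th-century British data would only require changing INCH
    # to the old British inch; FOOT, YARD and MILE follow automatically.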
To quote the great Bob Nystrom's Crafting Interpreters, "Compiling is an implementation technique that involves translating a source language to some other — usually lower-level — form. When you generate bytecode or machine code, you are compiling. When you transpile to another high-level language, you are compiling too."
Nowadays, people generally understand a compiler to be a program that reads, parses, and translates programs from one language to another. The fundamental structure of a machine code compiler and a WebAssembly compiler is virtually identical -- would this project somehow be more of a "real" compiler if instead of generating text it generated binary that encoded the exact same information? Would it become a "real" compiler if someone built a machine that runs on WebAssembly instead of running it virtually?
The popular opinion is that splitting hairs about this is useless, and the definition of a compiler has thus relaxed to include "transpilers" as well as machine code targeting compilers (at least in my dev circles).