
An article that argues against GA: https://alexkritchevsky.com/2024/02/28/geometric-algebra.htm...

TLDR:

- GA tends to attract a lot of crackpots. In fact, most mathematicians avoid the name Geometric algebra and call it Clifford algebra to disassociate themselves from them.

- Most of the usefulness of GA comes from just exterior algebra, and the exterior (wedge) product is more important than the geometric product.

- It shows up in spinor fields in physics, but this does not concern most people who do not work in quantum physics.

My rudimentary view towards GA:

- It is useful in game physics since rotors can represent n-dimensional rotations with 2^{n-1} numbers instead of n^2 numbers, and 2^{n-1} < n^2 when n <= 6. You can use PGA if you want to add translation as well. It is also better for interpolation.

- Outside of this you should probably just learn exterior algebra instead.
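To make the interpolation point concrete, here is a minimal slerp sketch (illustrative only; numpy is assumed, quaternions are plain 4-vectors in (w, x, y, z) order, and `slerp` is a hypothetical helper name):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical interpolation between unit quaternions: constant
    angular velocity along the arc, which naive component-wise
    blending of rotation matrices does not give."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < 1e-6:                   # nearly identical: avoid /0
        return q0
    return (np.sin((1 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

qa = np.array([1.0, 0.0, 0.0, 0.0])                              # identity
qb = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])  # 90 deg about z
q_half = slerp(qa, qb, 0.5)            # halfway: 45 deg about z
```

Component-wise blending of two 3x3 matrices generally leaves the rotation group entirely; the 4-number rotor interpolated this way stays a rotation throughout.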


I did two streams where I went through this article and explained the many places where it is wrong. The second part of the article has more maths in it, so most of the content is there; you can watch it here: https://www.twitch.tv/videos/2282548167

(it's very long so I plan to edit the two streams into a digestible 10-15m video or something. His fault not mine, I'd say!)

Other commenters have probably already said this, but the biggest giveaway is how he says we should move away from quaternions, and then demonstrates little to no awareness of why quaternions are used in engineering (they're vital in gamedev, for example; your animations will look awful without them). Yes, quaternions are hard if you are completely married to the idea that everything in geometry is "vectors". But the games industry put on its big-boy pants and learned to use them - they wouldn't have done that if the things weren't useful for something, so it's a bit silly to write an article like this if you haven't figured out why that happened.


The quaternions are just the even subalgebra of the Clifford algebra Cl3,0(R), what’s the problem?


> but the biggest giveaway is how he says we should move away from quaternions

I'm sorry, I must have missed that part. Can you point me to where he said this?


The second paragraph of the conclusion: "Nor should we be trying to make everything look more like complex numbers and quaternions. Those are already weird and confusing; we should be moving away from them!"

It's also implicit in the thing he says throughout: "bivectors and trivectors are good, but there's no reason to add a scalar to a bivector or a trivector to a 1-vector, nor is there a reason to multiply such objects". A quaternion is a scalar and a bivector added together!


Hi I wrote that article and I would say that I am extremely aware of how quaternions are used in engineering.

My stance on quaternions is that they are an opaque representation of what they are trying to do, which makes them unnecessarily difficult and annoying, not to mention hard to learn to visualize. But GA isn't much less opaque either. The "actual" representation which I find to be most agreeable is the stuff I mentioned about viewing them as operators. Given a bivector B which describes a rotation, you can treat it as an operator on vectors via contraction: R(v) = B⋅v. Then exponentiating, e^(Rθ)(v) (either as one-sided rotations or two-sided rotors), gives the same rotation formalism as quaternions and GA, but without any of the weird unpedagogical stuff. The notion of an exponential map and exponentiating generators is, IMO, much more "natural" and intuitively straightforward than the alternatives. Perhaps I should update the article to make this clearer.
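The contraction-then-exponentiate step is easy to check numerically. A sketch assuming numpy; `exp_generator` is a hypothetical helper summing the power series for the matrix exponential (scipy's `expm` would do the same job):

```python
import numpy as np

# Contraction with the xy-plane bivector B acts on vectors as the
# skew-symmetric generator K, i.e. K v = B . v.
K = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

def exp_generator(K, theta, terms=30):
    """exp(theta K) via a truncated power series (hypothetical helper)."""
    R, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ (theta * K) / n   # next series term (theta K)^n / n!
        R = R + term
    return R

R = exp_generator(K, np.pi / 2)   # rotate 90 degrees in the xy-plane
v = np.array([1.0, 0.0, 0.0])
print(np.allclose(R @ v, [0.0, 1.0, 0.0]))  # True
```

The same exponential-of-a-generator recipe produces the two-sided rotor version if you exponentiate half the angle and sandwich, which is exactly the quaternion formula.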

What irritates me about GA---well, one of the things---is that it treats bivectors/trivectors as both geometric primitives (oriented areas) and operators (rotations, say), and totally conflates the two and never explains to the student how to detach the two notions from each other (and I doubt most of the writers on the subject even know). IMO it is best viewed as a version of representation theory: rotations are operators which happen to have representations on bivectors of the EA; not all operators will have that property, and then you will want other algebraic structures to do algebra with them, if that's a case you're considering.


A quaternion is some amount of identity averaged with some amount of line reflection. You can visualize the line reflection as a pair of planar reflections at 90 degrees to each other. You can visualize the identity as a pair of planar reflections that are the same. Averaging the two of those will give you a pair of planar reflections that are some angle apart.

You're correct that you can construct a quaternion with the exponential map - but the most common way to make a quaternion is with a pair of vectors. Every game engine will have that function. GA will tell you how that function works - the vectors are planar reflections, you compose those to get a rotation by twice the angle, and you add (average with) the identity rotation to get a rotation by the precise angle.
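That two-vectors construction can be sketched in a few lines (an illustration assuming numpy; `quat_from_vectors` and `quat_rotate` are hypothetical helper names, not any particular engine's API; quaternions are (w, x, y, z)):

```python
import numpy as np

def quat_from_vectors(a, b):
    """Rotation taking unit vector a to unit vector b.
    Composing the two reflections (the geometric product b a) rotates
    by twice the angle; adding 1 to the scalar part averages with the
    identity, halving it back to the exact angle."""
    w = 1.0 + np.dot(a, b)        # scalar part: 1 + cos(angle)
    xyz = np.cross(a, b)          # bivector part: sin(angle) * plane
    q = np.array([w, *xyz])
    return q / np.linalg.norm(q)

def quat_rotate(q, v):
    """Apply the two-sided sandwich q v q^-1."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
q = quat_from_vectors(a, b)
print(quat_rotate(q, a))  # ≈ [0, 1, 0]
```

The normalization at the end is the "averaging" step: before it, 1 + b a is the unnormalized mix of identity and double-angle rotation.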

> how to detach the two notions from each other

Can you say why would you want to do that? A plane always defines a planar reflection, a point always defines a point reflection, a line always defines a line reflection (assuming we're in euclidean space, which engineering is). To me this doesn't seem to be "happen to have" territory, this seems fundamental.


>unpedagogical

Does this word mean "not the way mathematicians teach things"? My experience has been that mathematicians teach things that are useful to mathematicians to students who are not going to be mathematicians and would be better served learning other things. I wasted countless hours of my life finding analytical solutions to toy calculus problems in a universe that will never yield to those methods.

>My stance on quaternions is that they are an opaque representation of what they are trying to do, which makes them unnecessarily-difficult and annoying. Not to mention hard to learn to visualize.

I've seen some remarkably confusing attempts to understand and visualize these things. This has always baffled me, because the equivalent objects in geometric algebra aren't that hard to understand. I really think this is a problem with your pedagogy. You've hidden the geometric meaning of the bivector components in these imaginary components i, j, k, and you have to take it on faith that i*j = k instead of seeing it as just the product of two bivector blades.

>The notion of an exponential map and exponentiating generators is, IMO, much more "natural" and intuitively straightforward than the alternatives.

That's because you learned it that way. Exponentiating an oriented area to generate a rotation is perfectly intuitive to me, and I'm not sure how somebody could be confused by the fact that a bivector can be an oriented area or the logarithm of a rotation because it's a simple matter of context.


Eh, I disagree.

By "unpedagogical" I mean: very hard to learn, and even when you learn it, often hard to explain. It's information that doesn't compress well, generalize well, or really convey direct understanding of what's going on. Maybe not the best word for this. Really, I just mean "bad". To me it's incomplete knowledge; it needs to be improved upon so that it makes more sense for people who need to know it. In doing so it will also become easier to learn.

I think quaternions are confusing, GA bivector notations are less confusing but still confusing, and the operator version which I endorse is the least confusing of the three. This is just an aesthetic judgment on my part. IMO if things were taught in the way I prefer, more people could learn them, faster, and come away with a more solid understanding afterwards for less work. (You and I agree that GA's version is better than quaternions for the same reason. I just think that there are better versions still.)

If you want to tell someone that exponentiating an area gives a rotation, you need to basically deal with the fact that that sentence sounds like nonsense. An area's an area - why would exponentiating it... do... anything? My preferred explanation of all this stuff avoids saying things that sound like nonsense.

(I prefer to think of a bivector not as an oriented area per se but as a type of tensor which happens to represent those things, but also represents other things, including the logarithms of rotations, due to the properties that those two things happen to share. That's a perspective that generalizes very well, compared to the GA version, because it separates the tensor representation from the operators; the two sides end up generalizing in different directions as you go to more complicated objects. When I get around to writing to my own exposition on this I'll go through that perspective very methodically.)


>If you want to tell someone that exponentiating an area gives a rotation, you need to basically deal with the fact that that sentence sounds like nonsense.

It's not just an area though, it's an area with orientation and magnitude. It's the complex exponential function extended to 3 dimensions.

>By "unpedagogical" I mean: very hard to learn, and even when you learn it, often hard to explain. It's information that doesn't compress well, generalize well, or really convey direct understanding of what's going on. Maybe not the best word for this. Really, I just mean "bad".

I can see where you're coming from on the learning part. There are as many different ways of learning GA as there are teachers, and this does hold it back.

>To me it's incomplete knowledge; it needs to be improved upon so that it makes more sense for people who need to know it. In doing so it will also become easier to learn.

I wish you luck, and I hope that I find it on here someday.


You have to read the first paragraph as well.

> I have given a lot of reasons why I think GA is problematic: the Geometric Product is a bad operation for most purposes. It really implements operator composition and is not a very fundamental or intuitive thing. Using a Clifford Algebra to implement geometry is an implementation detail, appropriate for some problems but not for general understandings of vector algebra and all of geometry. Giving it first-class status and then bizarrely acting like that is not weird is weird and alienating to people who can see through this trick.

If I understand him correctly, he means Clifford algebra is "appropriate for some problems" but we should "move away" from "giving it first-class status" as it is not more fundamental and often does not help students understand geometry better. I also readily admitted that it has some use cases in game physics in my comment.


You're making the word "some" do a lot of work in this comment. It's true that eg I would represent the inertia tensor with a matrix. But I would calculate it with GA... and I struggle to think of many other physics problems for which I wouldn't use GA (Poisson bracket in GA is particularly elegant). See my other comment on projections I suppose.

On the other hand, from what you've posted you're clearly experienced with computer graphics - it sounds like you have at some point interpolated with dual quaternions?

This hackernews thread may not be sustainable, so perhaps join the bivector discord? https://discord.gg/bBvkuTrM I would be very interested to find out the precise things you care about, on the off chance that there is some not-yet-known-to-you way that GA can help you.


The article reads more like a trolling attempt. Geometric/Clifford algebra is incredibly useful and by throwing away its product you lose a lot of the power of the algebra. It's like saying matrix multiplication is not useful and you really want to be multiplying and adding numbers in various ways. After all GA/CA elements can always be mapped to elements of a matrix algebra. To get rid of the idea of linear transformations and that they should compose just doesn't sound well thought through.

I don't know what sort of crackpots he's talking about; personally I haven't heard of them, only the accusations. If the author can't separate the math from the people who developed and/or popularized it, too bad. Does GA magically give intuitive explanations for all sorts of weird things? No. Can you formulate a lot of stuff much more efficiently and concisely, and does it help gain new perspective on some things? Yes, absolutely. It provides a wonderful framework for expressing geometric ideas.


> As for pure math—it seems like research mathematics readily talks about and uses Clifford Algebra, but is uninterested in or specifically avoids the terms and concepts that are specific to Hestenes’ “Geometric Algebra”. I can speculate as to why: even by the 90s/00s, GA had gotten a bad reputation because of its tendency to attract bad mathematicians and full-on crackpots.

https://alexkritchevsky.com/2024/02/28/geometric-algebra.htm...


No example of such a crackpot is given. The author just claims this without evidence. I've heard this sort of argument before but it's not very clear what it refers to.


> Can you formulate a lot of stuff much more efficiently and concisely, and does it help gain new perspective on some things? yes, absolutely. It provides a wonderful framework for expressing geometric ideas.

Can you elaborate on what stuff it helps to formulate much more efficiently and concisely?


I think one of the coolest examples is probably classical mechanics. See the SIBGRAPI 2021 videos on https://bivector.net/doc.html


All of this stuff can be done in normal linear algebra. Some (not all) of the operations can be done more efficiently with GA in low dimensions. It is neither more concise nor more intuitive to understand than normal linear algebra.


> All of this stuff can be done in normal linear algebra

On its own that is not a very strong argument. Anything you can do in linear algebra can be done with scalar add, multiply, and divide. The fact that addition can be done with logic gates does not mean that programming an accounting application with logic gates as primitives is a good idea.

> It is neither more concise nor more intuitive to understand than normal linear algebra.

The real contention is this one. I have met people who hold the opposite view on this.


There are ten thousand examples I want to give of why you're wrong. We have to start somewhere so here's a favourite, the "universal projection formula":

(A.B)/B

Projects any A onto any B, in any number of dimensions and with any signature (e.g. hyperbolic/Euclidean/elliptic). A and B can be lines, planes, points, and, with a conformal or anti-de Sitter metric, spheres or hyperboloids, etc. ("blades").

It works because A.B is, dimension-independently, the object "orthogonal to A and containing B, or vice versa". And division by B will intersect that orthogonal object with B.

Concise, intuitive, and powerful. What's the linear algebra formula you'd consider to be comparable?
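For readers without a GA library at hand, the grade-1 instance of (A.B)/B is already familiar: for vectors, B's inverse under the geometric product is B/(B.B), so the formula collapses to classical projection. A numpy sketch (illustrative only; `project` is a hypothetical helper name):

```python
import numpy as np

# Grade-1 instance of the universal projection (A.B)/B: for vectors,
# B's geometric-product inverse is B / (B.B), so the formula reduces
# to the familiar (a.b) b / (b.b).
def project(a, b):
    return np.dot(a, b) * b / np.dot(b, b)

a = np.array([2.0, 3.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
print(project(a, b))  # [2. 0. 0.]
```

The GA version of the same expression keeps working unchanged when A and B are planes, lines, or higher-grade blades; the numpy sketch above only covers the vector-onto-vector case.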


// tends to attract a lot of crackpots. //

Come on. You know what else attracted a lot of crackpots? The internet. If you are criticizing math, criticize the math, not the people.

// just learn exterior algebra instead //

YMMV, but I like to know where the mathematical concepts came from. GA gives a nice origin story, see below:

// Most of the usefulness of GA comes from just exterior algebra //

Dot products come from the geometric product. If e1 & e2 are two basis vectors such that e1*e1 = 1 and e1*e2 = -e2*e1, then if you multiply two vectors:

(a1*e1 + a2*e2)(b1*e1 + b2*e2) =

a1*b1*e1*e1 + a1*b2*e1*e2 + a2*b1*e2*e1 + a2*b2*e2*e2 =

a1*b1 + a2*b2 + (a1*b2 - a2*b1)*e1*e2 =

(a . b) + (a ^ b)

The first is the dot product. The second is the exterior product that everybody agrees is so useful. Now you know where both concepts came from: they are just from multiplying polynomials. The geometric product is *a product*; it's the product of two polynomials.

Yes, sometimes you just need the dot product, and sometimes you just need the exterior product. If you are coding, or giving the final form of some formula, you don't always have to put both of them in your code or paper. But neither the dot product nor the wedge product is invertible by itself. Having an invertible product on vectors is endlessly useful while you are *deriving* the formulas.
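The split above can be checked numerically with a toy 2D geometric algebra over components [scalar, e1, e2, e12] (a hand-rolled sketch, not a library; `gp` is a hypothetical helper):

```python
# Minimal 2D GA: a multivector is (scalar, e1, e2, e12), with
# e1*e1 = e2*e2 = 1 and e1*e2 = -e2*e1 = e12, hence e12*e12 = -1.
def gp(x, y):
    s1, a1, a2, b = x
    s2, c1, c2, d = y
    return (
        s1*s2 + a1*c1 + a2*c2 - b*d,   # scalar part: the dot product
        s1*c1 + a1*s2 - a2*d + b*c2,   # e1 component
        s1*c2 + a2*s2 + a1*d - b*c1,   # e2 component
        s1*d + b*s2 + a1*c2 - a2*c1,   # e12 part: the wedge product
    )

u = (0.0, 3.0, 1.0, 0.0)   # vector 3 e1 + 1 e2
v = (0.0, 2.0, 5.0, 0.0)   # vector 2 e1 + 5 e2
print(gp(u, v))  # (11.0, 0.0, 0.0, 13.0): dot = 11, wedge = 13
```

Multiplying two pure vectors lands in the scalar and e12 slots only, which is exactly the (a . b) + (a ^ b) decomposition derived above.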


> Yes, sometimes you just need the dot product, and sometimes you just need the exterior product. If you are coding, or giving the final form of some formula, you don't have to always put both of them in your code or paper.

In my experience 99% of the time you just want the dot product or the exterior product. Even when you want both it is rare that you want to combine them linearly except in some niche physics/mathematics.

> But neither the dot product nor the wedge product are investable by themselves. Having an investable product on vectors is endlessly useful while you are deriving the formulas.

Do you mean invertible? Why is invertibility so useful?


Yes, invertible: if you have a x = b, then you can find x = a⁻¹ b when the product is the geometric product (order matters, since it is noncommutative).

Why? Well, solving equations sounds somewhat useful, right?


First of all, it is only invertible for some non-zero elements, especially if `a` is a linear combination of multivectors or we work in PGA, which explicitly adds a basis vector of norm 0. Yes, sometimes it is useful, but that doesn't automatically make it more fundamental than the inner product and exterior product.


You make it sound as though multivectors being invertible is a special case, when the opposite is true. In 2D and 3D GA, every non-zero k-vector and versor has an inverse. In PGA, every non-zero, non-ideal plane, line, point, and versor has an inverse. The inverse is used all the damn time when composing and applying transformations and performing projections and rejections.

As to which is more fundamental, I don't think it matters. You could argue that the dot and exterior products are more fundamental because the geometric product is their sum (for vectors). You could also argue that the geometric product is more fundamental because it is simply the Cartesian product of two multivectors, and you derive the dot, exterior and commutator products by filtering that product by grade. Both definitions are true, and "fundamental" is both a matter of perspective and irrelevant to any practical concern.


> In 2D and 3D GA, every non-zero k-vector and versor has an inverse.

Of course, by definition every versor has an inverse. The invertibility of k-vectors gets hairier in higher dimensions, though. Even in 3D GA, some mixed-grade elements are not invertible.

> As to which is more fundamental, I don't think it matters.

It doesn't matter mathematically, but it matters pedagogically. GA enthusiasts seem to advocate teaching GA to anyone who has learnt linear algebra. I believe it is more appropriate to stick to teaching tensor algebra and its quotient, exterior algebra. Then it is up to you to learn Clifford algebra as a generalization of exterior algebra, especially if you are a game dev, a physicist, or a topological K-theorist.


I'm curious about your perspective on exterior algebra. So far I've mostly seen it as a special case of GA so my view is probably GA-tinted. I'm just curious how you even do any sort of transformations since the exterior product doesn't allow these sorts of things. It seems like it only gives you the "things" in your algebra with few ways to do anything with them. You seem to agree with the author of the article you linked to in that the most useful aspects of GA come from EA. I find this very hard to see, so maybe you can shed some light on it? E.g. how do you rotate a bivector in EA?


There is no equivalent notion of a rotor in EA. I would say its most important use is that you need it to define differential forms, with which we can define the exterior derivative and integration on arbitrary differentiable manifolds. It also enables an elegant definition of the determinant.


Sorry, I forgot to answer your question: to rotate a bivector of the form v ^ w, just do Rv ^ Rw, where R is the rotation matrix. This is a linear map, so you can extend the operation linearly to arbitrary bivectors.
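In matrix form, a 3D bivector v ^ w is the antisymmetric matrix v wᵀ - w vᵀ, and the rule Rv ^ Rw becomes conjugation R B Rᵀ. A quick numpy check (illustrative; `wedge` is a hypothetical helper):

```python
import numpy as np

def wedge(v, w):
    """3D bivector v ^ w as the antisymmetric matrix v w^T - w v^T."""
    return np.outer(v, w) - np.outer(w, v)

# Extending a rotation R to bivectors: rotate each factor.
# R(v ^ w) = (R v) ^ (R w), which in matrix form is R B R^T.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

v, w = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
B = wedge(v, w)
print(np.allclose(wedge(R @ v, R @ w), R @ B @ R.T))  # True
```

The identity holds for any linear map R because the outer products transform factor-by-factor; linearity then extends it from simple bivectors v ^ w to arbitrary sums.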


>Even in 3D GA, some mixed-grade elements are not invertible.

This paper [1] claims to have inverses for general multivectors up to a certain dimension, but I've never needed them and haven't dived into it. I'm curious what the applications would be for general multivectors, I've never come across them in practice.

1 - https://www.sciencedirect.com/science/article/abs/pii/S00963...


The paper's introduction says:

we first establish algebraic product formulas for the direct computation of the Clifford product inverses of multivectors in Clifford algebras Cl(p, q), n = p + q ≤ 5, excluding the case of divisors of zero.


Without reading either article - yet - I can tell you "crackpot" can mean a lot of things, and one of them that is relevant to this context is "an academic that's more interested in ideas than in being a good research bureaucrat".


Interesting article.

I have been telling people for more than a decade now to use the exterior product (and e.g. get rid of the clutter of strange minus signs and div, grad, and rot in electrodynamics).

And I was really happy to see that people finally start doing that.

But when I saw the Geometric Product, it didn't look like anything I want. If someone says that it looks like a thing that the cat brought in, I'll think about it and will probably agree.


The geometric product does transform composition. It has all the properties you want for multiplication in an algebra, or indeed a monoid or group.

So it is like matrix multiplication, but for transforms represented as multivectors. Multivectors are nicer than matrices because they are made out of the separate (exterior algebra) objects so you can geometrically interpret them. For example, a rotation-reflection (rotoreflection/improper rotation) will have a grade 1 part and a grade 3 part. One of them is the plane you reflect in, one is the point you rotate around.


You might be interested in https://github.com/HigherOrderCO/HVM


HVM looks very interesting. Thx for posting.


I always dislike the unnecessary use of the dual of the dual. We should define the tensor product as a quotient space of all linear combinations of v₁⊗v₂, as in https://en.wikipedia.org/wiki/Tensor_product#As_a_quotient_s... The advantage of not using the double dual is that it generalizes correctly to infinite-dimensional vector spaces and modules.


System F just isn't flexible enough to encode most of mathematics. You need at minimum dependent types, and preferably a more flexible identity type to avoid setoid hell. The closest attempts I know of are HoTT in Coq and Cubical Agda. These systems still have a lot of kinks to be sorted out, though, e.g. https://www.youtube.com/watch?v=TpUOweB2kHU


I think interaction nets [0] are actually a simpler model and can simulate Turing machines efficiently. I wish Computability/Complexity theory courses were taught with interaction nets instead of Turing machines, as the programs written would be so much nicer and compose easily. It also has real-life uses in compiling functional languages. [1]

[0]: https://en.wikipedia.org/wiki/Interaction_nets [1]: https://github.com/HigherOrderCO/HVM


How is this simpler?

It seems to me a more complex lambda calculus.


Symmetric interaction combinators (an instance of interaction nets, like the 2-state 3-symbol Turing machine is a particular type of Turing machine) only have 3 agents and 3 rewriting rules. This makes them about as simple as the lambda calculus with its 3 possible terms (variable, application, and abstraction) and α-equivalence, β-reduction, and η-reduction. The advantage of interaction nets is that each rewriting rule is local and "atomic", unlike the substitution step in β-reduction, which can take an arbitrarily long time as it is defined recursively.


This is the same company that creates the touted highly secure LiteLok Gold that LPL cuts in 16s. https://www.youtube.com/watch?v=D-On0DGcDlc

I'm skeptical of any bike lock's security that is not reviewed by LPL.


> Google: 90% of our engineers use the software you wrote (Homebrew), but you can’t invert a binary tree on a whiteboard so fuck off. — Max Howell (@mxcl)

If inverting a binary tree means swapping the left and right subtrees of every node, I wouldn't want to work with someone who can't do that either, and Google is definitely right to reject him.
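Under that reading, the whole exercise is a few lines (a sketch; `Node` and `invert` are hypothetical names, not any particular interview's expected API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def invert(root: Optional[Node]) -> Optional[Node]:
    """Swap the left and right subtrees of every node, recursively."""
    if root is not None:
        root.left, root.right = invert(root.right), invert(root.left)
    return root

t = Node(1, Node(2), Node(3))
invert(t)
print(t.left.val, t.right.val)  # 3 2
```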


The actual work for which you are hiring an engineer is building a software product/service, and the Homebrew developer has a track record of delivering great results.

Rejecting the guy because he cannot do a whiteboard brain teaser is like rejecting LeBron James because he did not make a shot at the arcade basketball game.

I'm not saying the guy would be perfect. Comparing him to LeBron James might not be a great example. Google might have other reasons to reject him.

What I'm trying to say is that the current coding interview is a really poor mechanism for gauging a software engineer, especially when it comes to hiring one with real-world engineering experience.


Maybe he would have been a poor fit at Google. Just because you wrote some popular open source software doesn't mean you will succeed in a corporate environment. The dynamics and skills required are quite different.

Now, does being unable to reverse a binary tree mean that you would fail at Google? I have no idea. But we don't really know if that was the reason he was rejected; it's just his own guess. There could have been other reasons.


> What I'm trying to say is the current coding interview is a really poor mechanism to gauge a software engineer.

People like to say this, but in my experience this is not true. It's just that people misunderstand the goal of technical interviews and they often are poor at evaluating their own skills.

First off, these giant tech companies have enormous economic incentives to improve their interview processes as much as possible. They also do a pretty rigorous assessment of the effectiveness of their interview process (Google, for example, has publicized some of their data). I'm not saying these tech companies' interview processes are perfect, but I also have a hard time believing they're so fundamentally flawed that these companies can't figure out how to fix them, given the giant economic returns they get for optimizing their hiring processes.

Moreover, as some other comments mentioned, many companies (and individuals, myself included) believe it is much worse to hire someone who ends up not cutting it than to miss out on a potentially good hire. I can list out all the reasons why, but Joel Spolsky has a pretty famous essay from a couple decades ago on the topic that explains it well [1].

Thus, it's not surprising hearing a lot people complain that they can do the job, but they aren't good at interviews. Because, from Google's/Microsoft's/etc. perspective, they're fine with a bit higher false negative rate if they can greatly reduce their false positive rate. And my experience matches that: I have never seen a candidate who did awesome in "whiteboard-style programming questions" who couldn't cut it programming-wise (they may have had other issues, but "coding productivity" wasn't one of them). Now, I certainly believe and have seen that there are some people who aren't good at these questions who can do a job well, but there are also a ton more people who can't do the job if they can't pass a technical screen, so hiring any of these folks means much more risk.

I also think that whiteboard-style coding questions help show a quality that is very important to businesses, even if those questions don't represent "real world" work. There are basically 2 types of people that do well at these questions: people who are just naturally smart and have a ton of experience to the point that they wouldn't even need to study to do well, and people who are of more "normal" intelligence/ability, but who can do well if they study a ton. Either of those two groups would likely do well in a programming role. So often I hear the complaint "I'm a busy person, I've got outside responsibilities, you can't expect me to spend all this time studying". And that may be true, but you'll be competing against people who are willing to study, so I don't think you can fault Google et al for favoring people who show a willingness to do more preparation.

1. https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid... "And in the middle, you have a large number of “maybes” who seem like they might just be able to contribute something. The trick is telling the difference between the superstars and the maybes, because the secret is that you don’t want to hire any of the maybes. Ever."


> First off, these giant tech companies have enormous economic incentives to improve their interview processes as much as possible.

But we have examples where the companies themselves have admitted that their past interview practices turned out not to work: https://business.time.com/2012/10/23/no-brainer-brainteaser-...

> They also do a pretty rigorous assessment of the effectiveness of their interview process

Companies do review their hiring processes, but actual experiments and data seem fairly rare. It's harder than you think. What experiment would you run? Hire a group of people entirely randomly, and compare their performance reviews after 2 years?


> But we have examples where the companies themselves have admitted that their past interview practices turned out not to work: https://business.time.com/2012/10/23/no-brainer-brainteaser-...

That's pretty much exactly my point. In the 90s, wide-scale hiring for software engineers was a relatively new thing - many companies were just figuring it out. And so they did some shit pretty early on that didn't make sense. But for all the times I hear folks pulling out the "Why are manhole covers round?" and "How many cars are there in Manhattan?" examples, I haven't heard these types of brainteaser questions being used for nearly 2 decades.

I'm not arguing that the FAANGs have some perfect, unassailable interview process that can never be improved, but I am arguing that so often I hear grumbling discontent from people who don't like the interview process, but rarely do I see much examination around why those particular hiring processes appear to work fairly well for the likes of Google, Apple, etc.


Those puzzle questions weren't just used to hire software engineers. Hiring fads sweep corporate America, despite no evidence of effectiveness, and despite economic incentives to hire well.

Yes, they did move away from it, but that doesn't mean we aren't now in the grip of equally bad fads.

> but rarely do I see much examination around why those particular hiring processes appear to work fairly well for the likes of Google, Apple, etc.

You assume they work well, but you don't have any data to support that. That sort of assumption is basically where these hiring fads come from.


Not knowing how to do something from the top of your head is not the same thing as not being able to do it.


… but that's an interview. If you cannot, within the time, demonstrate any ability, why should you be hired?

The question above, as clarified, is not complicated, nor does it rely on memorization or some "trick": anyone purporting to be a SWE should be able to write an essentially de novo solution to it.

(And in my own technical interviews, there are multiple questions, to specifically hedge against any one being "that one question a good candidate is going to miss because it's just not their day". It doesn't happen: it's either all or nothing.)


> If you cannot, within the time, demonstrate any ability, why should you be hired?

It is more likely that the interview process is broken and missing the right candidates, than it is that the interviewees are all mediocre. Most interviews are very non-inclusive the same way that the main track of school is becoming less and less inclusive. Different people need different methods to bring out the best in them.


> It is more likely that the interview process is broken and missing the right candidates, than it is that the interviewees are all mediocre.

Here's a thought experiment for you: if the interview process is so broken, why hasn't some tech company succeeded and become famous for an improved interview process, e.g. "Moneyball style"? My guess is because the process is not actually that broken, at least from the employer's perspective. I'm sure the interview process could be changed to be less regimented and more "inclusive", but that's also likely to reduce its predictive power (i.e. you're more likely to make bad hires, and from a company's perspective that's almost always worse than missing out on a great hire).


> if the interview process is so broken

Hiring is guessing. Firing is knowing. If the hiring process worked, we wouldn't have layoffs like we do.


The layoffs that I have seen reported have been reported as being random. I've been involved now in 3 layoffs directly in my career, and 100% of them, the laid off individuals were laid off without regards to skill. The reporting in the media on layoffs happening elsewhere largely matches my experience.

Sure, an argument exists around "you shouldn't've hired that many people", but that is different from an argument of "the hiring process can't discern good hires". The former is a management & long-term planning issue, the latter is how interviews are conducted.


I'm on the same page.

Sure, your day-to-day work may not involve manipulating binary trees. But presumably it does involve working with variables, objects, references, manipulating data of some kind... And if you're comfortable with the fundamentals of those, then this is something you should be able to figure out even if you've never heard of a "binary tree" before, once somebody has sketched it or shown you the definition of their TreeNode class, right?

It honestly baffles me how people consider this something which needs to be drilled or memorized.

There are absolutely algorithmic questions which would fall into that category. But if somebody considers this to be one of them - or something like "find the smallest number in an array" - then I have to question whether they have an understanding of the most fundamental concepts in programming...

Or if they get through each day solely using things they've memorized by rote, or looked up, and they don't really have any idea how any of the foundations they're building on actually operate.


So hire a baby? They'll eventually do the job in 20 years... works, right?


That is a fine and common opinion, but just one question: how often have you inverted a binary tree at your job? Because after nearly 20 years it hasn't come up once for me. I am sure for some roles it is a necessary skill but my issue is that most of these questions are more or less toy problems that come from academia and not business. They are a great test of your retention of a data structure class but not super relevant beyond that.

I would rather hire an engineer with a strong business or user sense - reading between the lines of requests and anticipating future issues or uses adds so much more value in a real sense.

To me, these are great entry level questions because it is a good baseline for new grads when you have little work experience to judge. Past that, it is like making a lawyer take a mini bar exam for every new job - a waste of effort if you want to hire for specific skills and experience.


Just because you haven't worked in a team that requires those skills doesn't mean they aren't valuable.

In my old team, I had to come up with coupon distribution logic based on count, percentage, and time; then generate reproducible random values, which required a deep algorithmic dive into library code and explicitly storing state in Redis; then an application of dynamic programming in building a custom platform, atomic token validation, a custom rate limiter, a state machine, a scheduler, a distributed circuit breaker, etc.

In my current team, I had to read the Raft paper and the ZAB paper, look into their implementations, and make a PoC with the Raft protocol; then autoscaling algorithms, scheduler algos, different data structures. Heck, even the OSS engine itself is a DAG, with heavy threading and concurrency work. Even now I come across new data structures and algorithms.

Clearly you don't know the entire industry; just because you haven't worked in such teams doesn't mean these things aren't important.

You are experienced in a bubble. The hiring bar for our team is higher than other teams and heck even for SDE3 - the requirement is higher. You would be very much surprised to know that even the senior members have research publications and deal with complex stuff.

Core teams like in AWS or GCP or Azure solve these sort of problems.

Who do you think will solve autoscaling (that's what I'm doing now), managed scaling, networking, security, or any of the infra problems on these cloud platforms?

As experience increases, we expect more knowledge and insight; that doesn't mean ignoring basic coding stuff like arrays, linked lists, trees, graphs, simple message queues, etc.

If companies are paying competitive TC and there are multiple candidates, why not hire a smart person? What's so special about doing regular, normal stuff? That's just a normal dev, right?


None, but I have written code that requires more data structure and algorithm knowledge than inverting a binary tree (at my job).


Sure, but job isn't to write Homebrew either, so it just seems like a bad example all around.

(I'm not sure Homebrew is all that well engineered, actually. Hard to tell, but I've had trouble with it and avoid it.)

I think what it comes down to is that nobody really knows what to interview for.


Just last week I had to implement a common graph algorithm from scratch to solve a business problem (basically topological sort, with a minor variation). It's certainly much harder than reversing a binary tree (which is one of the easiest possible interview questions you could imagine).

Generally it wouldn't really make sense to reverse a tree in practice (why not just build it the other way initially?) but it has a similar structure to other tree traversal things that could actually come up so it's reasonable to ask.
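For reference, here is a minimal Python sketch of plain topological sort (Kahn's algorithm) on a DAG — just the textbook base case, not the commenter's actual variation, which we don't know:

```python
from collections import deque

def topo_sort(n, edges):
    """Kahn's algorithm: return a topological order of nodes 0..n-1,
    or None if the graph contains a cycle."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:          # edge u -> v
        adj[u].append(v)
        indeg[v] += 1
    # Start from all nodes with no incoming edges.
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # If some nodes were never emitted, a cycle blocked them.
    return order if len(order) == n else None
```

The whole thing is a queue and an in-degree array, which is roughly the level of "know your fundamentals" being debated in this thread.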


For every one of you, do you think there are more engineers doing things just like you or do you think there are more engineers who never have to do a binary sort and can still lead a perfectly satisfying, fulfilling career?


So if you are just doing normal stuff, then why should anyone pay competitive TC? That's just a normal dev, right? What's so special or talented about you?

Companies that pay huge TC want to hire smart people, not just an average Joe. Sure, you can have a satisfying career, and that's your POV.

What about a company's POV? Did you ever think about it?


Wouldn't that be a "mirror" operation, while inversion would be (I dunno) swapping the direction of the edges?

I went out of my way to avoid homebrew (still do) when I worked at google because it would reliably fail to complete some key operations in a dag, hence the interest in ensuring developers know how to do CS things.


Yeah it's not clear what he was asked. Swapping the direction of every edge of a binary tree would result in a DAG that is likely no longer a binary tree though.


Are you confusing a binary tree with something else?

Here's an example:

https://leetcode.com/problems/invert-binary-tree/

It's an 'easy' question. The solution is <10 lines.
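For the curious, the standard recursive solution really is only a few lines. A Python sketch of the mirror operation the LeetCode problem asks for:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    """Mirror the tree by swapping left/right children recursively."""
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```

No trick, no memorization: if you can traverse a tree, you can write this.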


Yeah it's really easy. I wrote a correct solution in about 30 seconds on my first try. And I haven't actually practiced these problems for 10+ years.


"Mirror" is what this question is generally understood to be asking. I don't think swapping the direction of the edges produces a tree in general although it would produce a DAG. I think "invert" could mean "mirror" or turn upside-down depending on the context.


Personally, if I were asked this, I would just say "convert the graph to a matrix, invert the matrix, and then convert the resulting inverted matrix back to a graph", and let them try to figure out if that would work for a bit before joking "oh come on, preorder traversal with a temp var, do you have a more interesting question?"


It would be more leetcode-style to be given an ordered binary tree and asked to reverse it in O(N). It's a lot fairer to the interviewee to be given the explicit task without knowing 'the trick', unless one considers knowing recursive functions to be a trick.


There you go! This comment should be highlighted.


It doesn't matter, the candidate should ask for clarification if the question is ambiguous.


I was referring to the second paragraph or statement. (-_-)


Most of us haven't touched a tree structure since college, because there are other, real problems out there. Trying to remember it, or re-derive it from scratch, is slow and error-prone, which is bad for interviews.


You are just ignorant and naive.

Tree structures? JSON, XML, protobuf, classes, functional programming, databases with foreign keys, database internals, etc.?

Oh, I forgot: you also serialize and deserialize data. Did you forget how that works? Tree traversal again.

Do you know how an organizational hierarchy is structured? It's a tree.

Do you know the various map types and their usages? We use them daily; it's very important to know their internals: hashing vs. trees vs. linked hash vs. etc.

Google Maps? n-d trees? Comparing data: Merkle trees? Etc.

Every dev out there has some work in common with mine. But you won't be able to solve the problems that I face on a daily basis without thinking hard and without this DSA + concurrency knowledge.

Now, is it reasonable to ask these questions? Heck yes.


JSON is also a tree structure. Granted one rarely inverts it, but “no trees in the real world” is not true.


Then most of us aren't suitable for jobs where this stuff is important. Google does have 'real' problems involving trees.


Also, I don't believe that 90% of Google engineers use Homebrew. I'm not sure I'd believe 9%. Google is a Linux shop with its own internal package repo. Even if you're using a Macbook to work remotely, you're using it as a fancy terminal wrapper to connect to a Debian-based system to do your real work.


Yeah, it's such a basic structure that we use it indirectly all the time: in classes and subclasses, the IntelliJ dependency list on the left side, using Google Maps, etc.

How complex is Homebrew? Can no one else replicate it? Why should a company hire for something you did that's simple?

What skills does he possess that no one else has?

Learn your basic DSA stuff, for god's sake, people.


Judging by Google's track record, hiring some people who have demonstrated an ability to launch and maintain a piece of software would bring a skillset they desperately need. Google engineers are incapable of keeping much going for the long term.


It is managers who decide which projects die or live, not engineers.


That's a problem in every company, no? How do you know they don't need smart people? Do you have any public stats or reasonable justification?


I have a similar experience at Oxford. However, the introductory course doesn't even cover the IO monad, and all of our code is run in the REPL. No wonder the students regard Haskell as impractical when they never even wrote a full program in it.


Still, if they'd found that Haskell was much easier or more productive to use than other programming languages for simple computational tasks, you'd think a few of them might have googled 'how to do IO in Haskell' and gone from there.


That seems to be getting things backwards. How could they find it easier or more productive if they haven't even done IO yet?


I would say the productivity gains of Haskell are actually most transparently obvious in the case of the sort of toy algorithmic code you're likely to be writing in a compsci class.


OK. And I would say the productivity gains of Haskell are most transparently obvious in the case of the sort of highly complex, stateful, I/O-interacting code you're likely to be writing for a successful tech company.


This doesn't match my personal experience (having worked on a Haskell web app professionally for a couple of years). GHC does have some cool features like STM, but typical stateful code tends not to be radically more concise or easier to write in Haskell than in other high-level languages (compared to, say, code for traversing a binary tree, or implementing a parser combinator library). Others may have different experiences.


nuclear fusion reaction with net energy gain?


net energy gain if you ignore the overwhelming net energy loss, sure.


It was nowhere near net gain.


In some sense, classical probability is just the study of the category of Markov kernels [0]. We can gain much of the insight without a lot of measure-theoretic/functional-analytic machinery by restricting to its full subcategory of finite states, which gets us the category of stochastic matrices.

In this view, quantum probability (of finite states) is just the study of the category of classical-quantum maps described in Picturing Quantum Processes by Bob Coecke and Aleks Kissinger [1]. I highly recommend the book, which requires hardly any prerequisites other than linear algebra and mathematical maturity. Once we adopt this framework, no philosophical interpretation is required. E.g. "we can't reproduce a quantum state from classical data" just means that a morphism from a (non-trivial) purely quantum state to a purely classical state does not have a left inverse.

[0]: https://en.wikipedia.org/wiki/Markov_kernel [1]: https://www.cambridge.org/core/books/picturing-quantum-proce...
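To make the finite-state picture concrete, here is a small NumPy illustration (my own example, not from the book): a finite-state Markov kernel is a row-stochastic matrix, and composition of kernels (Chapman-Kolmogorov) is just matrix multiplication, so the morphisms of the category compose as matrices do.

```python
import numpy as np

# Two finite-state Markov kernels as row-stochastic matrices:
# K[i, j] = P(next state = j | current state = i).
K1 = np.array([[0.9, 0.1],
               [0.4, 0.6]])
K2 = np.array([[0.5, 0.5],
               [0.2, 0.8]])

# Kernel composition is matrix multiplication, and the composite
# is again row-stochastic, i.e. composition stays in the category.
K = K1 @ K2

assert np.allclose(K.sum(axis=1), 1.0)
```

Pushing a distribution (a stochastic row vector) through `K` is the same matrix product, which is exactly the familiar Markov-chain step.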


Idk anything about measure theory, but I do know some things about Markov chains, so I was excited about [0]. Unfortunately, measure theory is invoked directly in the definition, so I'll have to go down that wiki rabbit hole.


I worked through large parts of PQP ([1]) with only the prerequisites that you state, as part of a Uni course taught by Aleks Kissinger. I got quite comfortable with the formalism and could solve most problems without too much issue, but at the end I still knew nothing at all about quantum physics. YMMV

