Such ideas generally interfere with usability. I loathe any such automation. Some more examples: the menu changes when the mouse hovers over an item; the window size or position changes when you drag it near the screen border; Word automatically indents bullet items; and so on. The people designing such systems think they are designing something smart, but the usability is horrible, for me at least.
The Fourier basis is unique in that the complex exponential basis functions are the eigenfunctions of linear time-invariant (LTI) systems. No other transform has this property. Many real-world systems (circuits, communication channels, antennas, etc.) are LTI. This property ensures, for example, that signals transmitted on different frequencies do not interfere. That is why the Fourier transform is so useful and is used instead of other transforms. There is also the connection with quantum physics, where the position and momentum wave functions form a Fourier pair, which other transforms don't have.
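A quick numerical check of that eigenfunction property, as a minimal sketch in Go (the impulse response and frequency are arbitrary values chosen for illustration): feed a complex exponential through an LTI system (here, convolution with a short FIR impulse response) and the output is the same exponential scaled by a single complex number, the frequency response at that frequency.

```go
package main

import (
	"fmt"
	"math"
	"math/cmplx"
)

func main() {
	// An arbitrary impulse response; the specific values don't matter.
	h := []complex128{0.5, 0.3, 0.2}
	omega := 2 * math.Pi * 0.1 // normalized frequency of the input exponential

	// Frequency response at omega: H(w) = sum_k h[k] * e^{-j*w*k}
	var H complex128
	for k, hk := range h {
		H += hk * cmplx.Exp(complex(0, -omega*float64(k)))
	}

	// Input x[n] = e^{j*w*n}; output y[n] = sum_k h[k] * x[n-k] (convolution sum).
	x := func(n int) complex128 { return cmplx.Exp(complex(0, omega*float64(n))) }
	for n := 10; n < 13; n++ {
		var y complex128
		for k, hk := range h {
			y += hk * x(n-k)
		}
		// y[n] equals H * x[n]: the exponential is an eigenfunction, H its eigenvalue.
		fmt.Println(y, H*x(n))
	}
}
```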
I'm surprised you're one of the only commenters to bring this up. I have an electrical engineering background -- for analysis, lots of systems are assumed to be either linear or very weakly nonlinear, and a lot of our signals are roughly periodic. Fourier transforms are a no-brainer.
Convolution turns into multiplication, and differentiating a complex exponential with respect to time turns into multiplication by j*omega. I don't know about you, but I'd rather do multiplications than convolutions and time derivatives.
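A sketch of the first point, using a naive DFT in Go (nothing optimized; the signals are made up and zero-padded so that circular and linear convolution coincide): the transform of the convolution matches the bin-by-bin product of the transforms.

```go
package main

import (
	"fmt"
	"math"
	"math/cmplx"
)

// Naive O(N^2) DFT, for illustration only.
func dft(x []complex128) []complex128 {
	N := len(x)
	X := make([]complex128, N)
	for k := 0; k < N; k++ {
		for n := 0; n < N; n++ {
			X[k] += x[n] * cmplx.Exp(complex(0, -2*math.Pi*float64(k*n)/float64(N)))
		}
	}
	return X
}

func main() {
	// Two short signals, zero-padded past the length of their linear convolution.
	x := []complex128{1, 2, 3, 0, 0, 0}
	h := []complex128{1, -1, 0.5, 0, 0, 0}

	// Time domain: y = x convolved with h.
	y := make([]complex128, len(x))
	for n := range y {
		for k := 0; k <= n; k++ {
			y[n] += h[k] * x[n-k]
		}
	}

	// Frequency domain: Y[k] should equal X[k] * H[k], bin by bin.
	X, H, Y := dft(x), dft(h), dft(y)
	for k := range Y {
		fmt.Println(Y[k], X[k]*H[k]) // the two columns agree (up to rounding)
	}
}
```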
As a corollary, once you accept "we use the Fourier representation because it's convenient for a specific set of common scenarios", the use of any other mathematical transform shouldn't be too surprising (for other problems).
Technically it's a specialized case of the Laplace basis, right? I was always surprised that lots of courses jump directly from the (bilateral) Fourier transform to the unilateral Laplace transform without properly analyzing the most general case, the bilateral Laplace transform: https://en.wikipedia.org/wiki/Two-sided_Laplace_transform
That's true, Laplace corresponds to a basis of complex exponentials that can grow or decay in time instead of purely imaginary exponentials. We restrict the domain from Ae^((a+jb)t) to just Ae^(jbt) for Fourier.
From a circuit analysis standpoint (your problem may be different), exponentials that decay over time ("a" negative) correspond to loss in a circuit, whereas exponentials that grow over time ("a" positive) correspond to something blowing up (this is really a nonphysical result, but it generally means a circuit will oscillate on its own, without a source driving that response). I mostly do electromagnetics/passive RF types of problems, where you generally want everything to be low-loss. In that case Fourier is perfect, especially since I typically care most about steady-state behavior.
The set of Unix utilities has been tested for a long time. I just wish the kernel and key utilities stayed fixed and unchanged, unless absolutely necessary. Don't fix it if it ain't broken. The software empire seems out of control.
Excellent illustration. After reading it I thought it looked like the PHD style, and I checked the author, who IS Jorge Cham. About 22 years ago I was reading his Piled Higher and Deeper (PHD) series and bought several of his books. It is a great feeling to see that he is still doing comics. Thanks Jorge!
I don’t believe that’s the reason. I think it’s project management priorities.
It’s possible to write efficient code to this day; triple-A videogames are examples of very complicated software implemented that way. However, it’s relatively hard (and therefore expensive) on quite a few levels. Performance targets should be accounted for in functional specs, all developers should be aware of those targets, the software must be specifically designed for performance, people (or better yet, robots) should continuously profile it, and project management should prioritize performance-related issues and bugs.
And another thing: hardware performance progress between 1975 and 2005 was exponential, driven by rapid advances in photolithography. In those decades we observed, quite a few times, that for many software products it was often a better strategy to ship sooner and rely on hardware progress to “fix” the performance than to spend time and money on software optimizations. Hardware performance progress slowed down substantially around 2010. Still, we have a generation of people in all positions across the industry who gained their experience during that exponential explosion of compute power.
Vi was designed for very slow connections, maybe 300 symbols per second. As such, it needed to minimize the keystrokes needed to accomplish a task and maximize user-friendliness within those constraints. It turns out that goal is still very much relevant today, although there is a learning process to internalize the code book.
Last time I checked, I disliked the size of the search index folders and files. So I am sticking with mairix, which integrates well with mutt and is small and fast.
If I understand correctly, the fix requires new code to add a line in go.mod to get the new behavior. That is about the same as adding x := x in the loop, and more hidden. Not good.
The alternative could be: whenever the address of the iteration variable is used inside the loop, the variable is per-iteration; otherwise it is per-loop. That way old code is not broken and new code gets the new semantics.
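For readers not following the proposal closely, here is a minimal sketch of the capture bug being discussed and the x := x style workaround (variable names are purely illustrative):

```go
package main

import "fmt"

func main() {
	var printers []func()
	for _, v := range []int{1, 2, 3} {
		// Old (pre-change) semantics: v is one variable reused across iterations,
		// so every closure below ends up printing its final value, 3.
		// The classic workaround is to shadow it per iteration:
		//     v := v
		printers = append(printers, func() { fmt.Println(v) })
	}
	for _, p := range printers {
		p() // old semantics: 3 3 3; per-iteration semantics: 1 2 3
	}
}
```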
> This is about the same as adding x:=x in the loop
It's one line per module, though; in a large enough module you would otherwise have tens of x := x. I assume this also opens the option of rolling out more such changes through the same mechanism in the future.
The “static analysis” section says that it is impossible to catch all cases where the address is used, which is true. However, if the analysis only checks whether the address is TAKEN, then it is trivial. I would like to propose that as an alternative: whenever the address is taken, the variable is per-iteration; otherwise it is per-loop.
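A sketch of the distinction that proposal draws, illustrative only (this is not how the compiler decides anything today):

```go
package main

import "fmt"

func main() {
	xs := []int{1, 2, 3}

	// Address taken: under the proposal v would be per-iteration,
	// so each stored pointer would refer to a distinct variable (1, 2, 3).
	var ptrs []*int
	for _, v := range xs {
		ptrs = append(ptrs, &v)
	}
	for _, p := range ptrs {
		fmt.Println(*p)
	}

	// Address never taken: per-loop and per-iteration behave identically,
	// so existing code like this is unaffected either way.
	sum := 0
	for _, v := range xs {
		sum += v
	}
	fmt.Println(sum)
}
```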
And assuming new projects get the new go.mod setting by default, it also makes the language default-safe, as opposed to requiring the cognitive overhead of evaluating individual loops, or requiring this nonsense as an explicit prologue to every loop.