The problem is that epsilon-delta arguments have very little practical use outside of theoretical proofs in pure mathematics. Even for cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms from multidimensional statistics and perhaps differential equations. Aside from Jensen's inequality and the mean value theorem, I have never seen a truly useful epsilon-delta proof in any of the ML papers with significant impact. The technique is perhaps mentioned once in passing when gradient descent is taught to grad students.
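(For reference, the two results I mean are just the standard textbook statements, nothing paper-specific:)

```latex
% Jensen's inequality: for a convex function \varphi and an integrable random variable X,
\varphi\big(\mathbb{E}[X]\big) \;\le\; \mathbb{E}\big[\varphi(X)\big].

% Mean value theorem: if f is continuous on [a,b] and differentiable on (a,b),
% then there exists c \in (a,b) such that
f'(c) \;=\; \frac{f(b) - f(a)}{b - a}.
```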
> Even for cutting-edge CS/statistics fields like high-level machine learning, most of the calculus used consists of existing formalisms from multidimensional statistics and perhaps differential equations.
If you mean experimental work, then sure, that's like laboratory chemistry: you run code and write up what you observe. If you're trying to prove theorems, you have to understand the epsilon-delta stuff even if your proofs don't use it explicitly. It can be somewhat abstracted away by the statistics and differential equations theorems that you mention, but it is still there. Anyway, the difficulty melts away once you have seen enough math to handle the statistics and differential equations, have some grasp of high-dimensional geometry, and so on. It's all part of "how to think mathematically" rather than some particular weird device that one studies and forgets.
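To make "it is still there" concrete with the standard definitions (nothing deeper than calculus 1 and intro analysis): any convergence guarantee, e.g. for gradient descent iterates, unfolds into an epsilon-style statement.

```latex
% ``x_n \to x^\ast'' is, by definition, an epsilon--N statement:
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N: \quad \lVert x_n - x^\ast \rVert < \varepsilon.

% Likewise, continuity of f at a is the calculus-1 epsilon--delta definition:
\forall \varepsilon > 0 \;\; \exists \delta > 0: \quad \lvert x - a \rvert < \delta \;\Longrightarrow\; \lvert f(x) - f(a) \rvert < \varepsilon.
```

So even when a paper only invokes a packaged convergence or continuity theorem, the statement being invoked is an epsilon-delta statement in disguise.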
I agree, and including delta-epsilon proofs in calculus 1 seemed like a way for the curriculum authors to feel good that they were "teaching proof techniques" to these students, when in reality they were doing no such thing. I later did an MS in math and loved the proofs, including delta-epsilon proofs…after taking a one-semester intro-to-proofs class that focused just on practicing logic and basic proof techniques.
If you want to do "exact" computation with real numbers (meaning, being able to rigorously bound your results), you just can't avoid epsilon-delta reasoning. That's quite practical, even though in most applied settings we just rely on floating-point approximations and deal with numerical round-off errors in a rather ad hoc way.
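Here is a minimal sketch of what "rigorously bound your results" looks like, using only exact rational arithmetic from the standard library and bisection (a toy example, not any particular interval-arithmetic library). The function's contract is literally an epsilon statement: for every eps > 0 it produces an enclosure of width less than eps.

```python
from fractions import Fraction

def sqrt_enclosure(x: Fraction, eps: Fraction) -> tuple[Fraction, Fraction]:
    """Return exact rationals (lo, hi) with lo <= sqrt(x) <= hi and hi - lo < eps.

    Bisection on the exact predicate mid*mid <= x: every step uses exact
    rational arithmetic, so there is no rounding error and the bounds are
    rigorous. The guarantee is epsilon-shaped: for every eps > 0 we can
    produce an enclosure of width less than eps.
    """
    if x < 0:
        raise ValueError("negative input")
    lo, hi = Fraction(0), max(Fraction(1), x)  # sqrt(x) lies in [0, max(1, x)]
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return lo, hi

if __name__ == "__main__":
    lo, hi = sqrt_enclosure(Fraction(2), Fraction(1, 10**12))
    print(float(lo), float(hi))  # rigorous bounds on sqrt(2), width < 1e-12
```

In real applied work this role is played by interval or arbitrary-precision arithmetic; the point of the toy is only that the correctness contract itself is the epsilon-delta kind of statement, whereas plain floating point gives you an answer with no such guarantee attached.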