Sounds like you picked some obscure tasks to test it with, ones that would obviously have low representation in the data set? That's not to say it can't be helpful with less-represented frameworks/tools - you'll just need to equip it with better context (MCPs/docs/instruction files).
A key skill in using an LLM agentic tool is being discerning about which tasks to delegate to it and which to take on yourself. Try to develop that skill and maybe you'll have better luck.
The opposite may be true: the more effective the model, the lazier the prompting, since it can seemingly handle not being micromanaged the way earlier versions had to be.
Of a single sinusoidal component, sure, this is true. But phase differences between sonic features are absolutely detectable.
The effect is most noticeable on raw synthesized tones: sawtooth, square wave, etc. These tones have their sonic energy concentrated at discontinuities in the waveform, and the ear hears this as a "buzzing" sound.
Run these tones through Paulstretch (even with 0 stretch) and the sonic energy is distributed throughout the wave cycle. The tones retain their spectral character, but noticeably lose the buzzing quality.
I've uploaded a demo here: https://chris.pacejo.net/temp/phase.wav It is a 55 Hz sawtooth tone, alternating every 2.5 s between the raw tone, and the tone fed through Paulstretch with no stretching.
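The effect is easy to reproduce numerically. Here's a minimal numpy sketch (not Paulstretch itself, which works on overlapping windows; this is just the core idea applied to the whole signal): synthesize a band-limited sawtooth, then resynthesize it with the same magnitude spectrum but random phases. The spectrum is untouched, but the energy that was concentrated at the discontinuity gets spread across the whole cycle.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # 1 second of samples
f0 = 55.0
# Band-limited sawtooth from its Fourier series (all harmonics below Nyquist)
n = np.arange(1, int(fs / 2 / f0))
saw = (2 / np.pi) * np.sum(np.sin(2 * np.pi * f0 * n[:, None] * t) / n[:, None],
                           axis=0)

# Paulstretch-style step: keep every bin's magnitude, randomize its phase
spec = np.fft.rfft(saw)
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, spec.shape)
smeared = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(saw))

def peakiness(x):
    # How concentrated the signal's energy is in time (large for sharp edges)
    return np.max(np.abs(x)) / np.mean(np.abs(x))

# Same magnitude spectrum, but the discontinuity (the "buzz") is smeared out:
# the derivative of the raw sawtooth has a huge spike once per cycle, while
# the derivative of the phase-randomized version is noise-like and flat.
print(peakiness(np.diff(saw)), peakiness(np.diff(smeared)))
```

Play `saw` and `smeared` back to back and you get the same alternation as in the wav above: identical spectral character, but the buzz is gone from the second one.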
There was even a paper written on this. Laitinen, Disch & Pulkki, "Sensitivity of Human Hearing to Changes in Phase Spectrum". [1]
Paulstretch muddies percussive transients (like hi-hat strikes) as well.
Anyway, this is why things like gammatone filters exist for analyzing audio: they reveal phase correlations the same way the ear does. Windowed Fourier transforms (used by e.g. Paulstretch and Audacity for various purposes) obscure these relationships.
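To make that concrete, here's a toy sketch (my own 4th-order gammatone impulse response using the Glasberg & Moore ERB approximation, not any particular toolkit's implementation): two tones with identical magnitude spectra but different component phases come out of a single gammatone channel as visibly different waveforms - exactly the information a magnitude-only windowed Fourier analysis throws away.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # 1 second of samples
f0 = 55.0
harmonics = np.arange(30, 41)                # components around ~1.9 kHz

rng = np.random.default_rng(1)
aligned = sum(np.cos(2 * np.pi * f0 * n * t) for n in harmonics)
shifted = sum(np.cos(2 * np.pi * f0 * n * t + rng.uniform(0, 2 * np.pi))
              for n in harmonics)

def gammatone_ir(fc, fs, order=4, duration=0.05):
    # 4th-order gammatone impulse response; bandwidth from the
    # Glasberg & Moore ERB approximation (ERB = 24.7 + fc/9.265 Hz)
    tt = np.arange(int(duration * fs)) / fs
    b = 1.019 * (24.7 + fc / 9.265)
    g = tt ** (order - 1) * np.exp(-2 * np.pi * b * tt) * np.cos(2 * np.pi * fc * tt)
    return g / np.sqrt(np.sum(g ** 2))       # unit-energy normalization

ir = gammatone_ir(f0 * 35, fs)               # channel centered on the band
y1 = np.convolve(aligned, ir, mode="same")
y2 = np.convolve(shifted, ir, mode="same")

# Identical magnitude spectra going in...
same_spectrum = np.allclose(np.abs(np.fft.rfft(aligned)),
                            np.abs(np.fft.rfft(shifted)), atol=1e-6)
# ...but different waveforms coming out of the auditory-style channel
crest = lambda x: np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
print(same_spectrum, crest(y1), crest(y2))
```

A magnitude spectrogram can't tell these two tones apart at all; the gammatone channel output (which is roughly what the cochlea hands to the auditory nerve) typically pulses at f0 for the phase-aligned tone and looks noise-like for the randomized one.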
Seems to have been heavily downvoted as well; it's flown off the front page. Times have changed for HN. Also a double standard compared to the likes of DeepSeek R1 earlier this week :shrug: