Hacker News

Are you a musician? Have you ever used a DAW like Cubase or Pro Tools? If not, have you ever tried the FOSS (GPLv3) Audacity audio editor [1]? "Waves" and "waveforms" are colloquial terminology, so the terms are familiar to anyone in the industry as well as your average hobbyist.

Additionally, PCM [2] is at the heart of many of these tools, and is what is converted between digital and analog for real-world use cases.
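To make the PCM point concrete, here's a minimal sketch of what "pulse-code modulation" actually is: sample an analog signal at a fixed rate and quantize each sample to a signed integer. The rate, frequency, and bit depth below are arbitrary illustrative choices, not anything from the thread.

```python
import math

# Minimal PCM sketch: sample a 440 Hz sine at 8 kHz and quantize each
# sample to a 16-bit signed integer, the same scheme WAV files use.
SAMPLE_RATE = 8000   # samples per second (illustrative choice)
FREQ = 440.0         # A4
DURATION = 0.01      # seconds

def pcm_encode(freq, rate, duration, bits=16):
    """Return a list of quantized PCM samples for a sine tone."""
    full_scale = 2 ** (bits - 1) - 1  # 32767 for 16-bit audio
    n = int(rate * duration)
    return [round(full_scale * math.sin(2 * math.pi * freq * t / rate))
            for t in range(n)]

samples = pcm_encode(FREQ, SAMPLE_RATE, DURATION)
print(len(samples))  # 80 samples for 10 ms at 8 kHz
```

Digital-to-analog conversion is the reverse: each integer is mapped back to a voltage level at the sample clock, then low-pass filtered.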

This is literally how the ear works [3], so before arguing that this is the "worst possible representation of signal state," try listening to the sounds around you and think about how it is that you can perceive them.

[1] https://manual.audacityteam.org/man/audacity_waveform.html [2] https://en.wikipedia.org/wiki/Pulse-code_modulation [3] https://www.nidcd.nih.gov/health/how-do-we-hear




According to your link, the ear mostly works in the frequency domain:

  Once the vibrations cause the fluid inside the cochlea to ripple, a traveling wave forms along the basilar membrane. Hair cells—sensory cells sitting on top of the basilar membrane—ride the wave. Hair cells near the wide end of the snail-shaped cochlea detect higher-pitched sounds, such as an infant crying. Those closer to the center detect lower-pitched sounds, such as a large dog barking.
It's really far from PCM.
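The distinction is easy to demonstrate. A PCM stream is a list of time-domain amplitudes; what the cochlea reports is closer to a frequency decomposition, where a pure tone shows up as energy at one place. A naive DFT (sketch only; the signal parameters are made up for illustration) makes the same separation:

```python
import cmath
import math

# A 5 Hz tone sampled at 64 Hz for one second: in the time domain it is
# just 64 oscillating samples (PCM-like); in the frequency domain it is
# a single peak at bin 5, which is roughly the kind of place-coded
# output the basilar membrane produces mechanically.
N = 64
RATE = 64  # chosen so each DFT bin is exactly 1 Hz wide
signal = [math.sin(2 * math.pi * 5 * t / RATE) for t in range(N)]

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (first half: 0..Nyquist)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

mags = dft_magnitudes(signal)
peak_bin = max(range(len(mags)), key=mags.__getitem__)
print(peak_bin)  # 5 -- the tone's frequency, not its sample values
```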


Yeah, except no; the ear works by having cilia that each resonate at different frequencies, differentiated into a log-periodic type response. It is mostly a "frequency domain" mechanism, though in the real world the time component is obviously necessary to manifest frequency. If we want to debate what best to call it, the closest term I might reach for from the quite often mislabeled vernacular of the music/production/audio world would be "grains" / granular synthesis.
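The "log-periodic" part just means the resonators are spaced by constant frequency *ratios* rather than constant offsets, which matches perceived pitch (each octave is a doubling). A sketch, with an arbitrary band count and the conventional 20 Hz to 20 kHz hearing range:

```python
import math

# Log-spaced resonator center frequencies, loosely mimicking how
# position along the cochlea maps to pitch. The band count (10) is an
# arbitrary illustrative choice; real models use far more bands.
def log_spaced_bands(f_lo=20.0, f_hi=20000.0, n_bands=10):
    ratio = (f_hi / f_lo) ** (1.0 / (n_bands - 1))
    return [f_lo * ratio ** i for i in range(n_bands)]

bands = log_spaced_bands()
# Successive bands differ by a constant ratio (~2.15x here), not a
# constant number of Hz: 20, 43, 93, 200, 431, ... , 20000.
print([round(f) for f in bands])
```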

WRT the waveform tool in DAWs, you should be aware that it doesn't normally work the way you may assume it does. If you start dragging points around in there, you typically are not doing raw edits to the time-domain samples; your edits are applied through a filter that tries to minimize ringing and noise. That is to say, the DAW will typically not just let you move a sample to any value you wish. In this case the tool is bending to its use as an audio editor rather than defaulting to behavior that would introduce clicks and pops every time it was used.
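To see why a DAW smooths such edits rather than applying them verbatim: forcing a single sample to a new value creates a step discontinuity, which is audible as a click. One simple mitigation (a sketch of the general idea, not any particular DAW's algorithm; the window length is an assumed choice) is to spread the change over neighboring samples with a tapered window:

```python
import math

# Sketch: instead of setting one sample to a new value (a click), fade
# the change in and out with a raised-cosine (Hann) taper so the edit
# has no step discontinuity.
def smoothed_edit(samples, index, new_value, half_width=16):
    """Return a copy of `samples` with a windowed edit at `index`."""
    out = list(samples)
    delta = new_value - samples[index]
    for i in range(-half_width, half_width + 1):
        j = index + i
        if 0 <= j < len(out):
            # Weight is 1.0 at the edited sample, tapering to 0 at the edges.
            w = 0.5 * (1 + math.cos(math.pi * i / half_width))
            out[j] += delta * w
    return out

flat = [0.0] * 100
edited = smoothed_edit(flat, 50, 1.0)
print(edited[50], edited[34], edited[66])  # full delta at center, 0 at edges
```

A raw edit would be `out[50] = 1.0` with everything else untouched; the windowed version reaches the same target value but ramps into and out of it, trading a tiny amount of local accuracy for the absence of a broadband click.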

I stand by my argument that the author's terminology appears ignorant in an area where it ought to be very deliberately specific. I question the applicability and relevance of the work beginning at that point, even though the approach may have yielded a useful result.



