Normal computer recording only goes up to a 48kHz sample rate (a 24kHz Nyquist frequency, with analog anti-aliasing filters cutting off somewhat below that), but high-quality equipment might support 96kHz or even 192kHz.
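A quick numpy sketch of why the anti-aliasing filter matters: if a tone above the Nyquist frequency reaches the converter unfiltered, it folds back into the audible band (the 30 kHz tone and 48 kHz rate here are just illustrative numbers):

```python
import numpy as np

fs = 48_000      # sample rate (Hz)
f_tone = 30_000  # tone above the 24 kHz Nyquist frequency
t = np.arange(fs) / fs  # one second of sample times
x = np.sin(2 * np.pi * f_tone * t)

# Find the strongest frequency bin in the sampled signal.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # the 30 kHz tone aliases down to fs - 30 kHz = 18 kHz
```

So without analog filtering in front of the ADC, ultrasound doesn't just disappear, it shows up disguised as audible content.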
Even a headphone plugged in wrong (into the mic jack) is a good enough sensor to start.
When it gets bad, you start looking at designing your own ribbon elements and considering active pickups. When it gets worse, you may find yourself deeply concerned about ADC performance and clock skew.
And many modern sound card ICs sample at an effective rate of 192kHz and use internal DSP to downsample that to the requested sample rate.
Much easier to design a good low-pass filter on the cheap when your Nyquist frequency is at 96kHz instead of 22kHz.
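The same trade-off is easy to put numbers on for digital filters. A back-of-the-envelope sketch using the fred harris tap-count rule of thumb (the 20 kHz passband edge and 60 dB attenuation are assumptions I picked for illustration):

```python
def taps_estimate(fs, pass_hz=20_000, atten_db=60):
    """fred harris rule of thumb: taps ~ atten_dB / (22 * normalized transition width)."""
    stop_hz = fs / 2                    # filtering must be done by Nyquist
    delta_f = (stop_hz - pass_hz) / fs  # transition width as a fraction of fs
    return atten_db / (22 * delta_f)

print(round(taps_estimate(44_100)))   # narrow 20->22.05 kHz transition: many taps
print(round(taps_estimate(192_000)))  # wide 20->96 kHz transition: far fewer
```

The transition band is roughly 37x wider at 192kHz, and filter complexity scales inversely with that width; analog anti-aliasing filters relax in the same way.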
So the frontend of the sound card might actually be remarkably wide.
I assumed that most of the normal consumer-level stuff is (perhaps very deliberately) insensitive to sounds that human ears are insensitive to? The common codecs, for sure, tend to have rather harsh cutoffs around 20kHz (the common sampling rates being 44.1 or 48kHz, with a Nyquist frequency of half that, 22.05-24kHz, so you have to start filtering well before it?)
Plus, I'd rather not walk around with a laptop. But perhaps smartphone apps are capable of consuming the mic input unfiltered? (I've never tried the recording APIs for that purpose.)
Studio Six Digital makes the very useful app AudioTools. They test all iOS devices to create generic profiles for the built-in mics. They’ve found them to be very consistent, which is kind of remarkable for a consumer cell phone device.
Any microphone hooked up to a laptop with a microphone input. Or a phone running some audio spectrogram app.
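If you'd rather roll your own than use an app, a spectrogram is only a few lines of numpy. A minimal sketch (Hann window, half-overlapped frames; the 1024-point FFT and 5 kHz test tone are arbitrary choices):

```python
import numpy as np

def spectrogram(x, fs, n_fft=1024, hop=512):
    """Minimal magnitude spectrogram: Hann-windowed, half-overlapped FFT frames."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, freq)
    freqs = np.fft.rfftfreq(n_fft, d=1 / fs)
    return freqs, mags

# Sanity check with a 5 kHz test tone, one second at 48 kHz.
fs = 48_000
t = np.arange(fs) / fs
freqs, mags = spectrogram(np.sin(2 * np.pi * 5_000 * t), fs)
peak_hz = freqs[np.argmax(mags.mean(axis=0))]
print(peak_hz)  # within one FFT bin of 5000 Hz
```

Feed it real samples from a sound card capture instead of the synthetic tone and you have the core of every spectrogram app.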
Getting started is very easy, but then you'll be constantly tempted by better sensors, more sensors, more channels, more analysis...