So, since last year, I’ve been mulling over a unique and extremely fast(!) autocorrelation scheme for monophonic pitch detection. Last weekend, I finally got myself to write the proof of concept. It’s not like any autocorrelation scheme I’ve seen before, and I am still wondering why no one has thought of doing it this way. As far as I can tell, this is my invention, but please tell me if there’s something I am missing and I’m not actually the first to do it this way. I dubbed the technique Bitstream Autocorrelation.
Unlike standard autocorrelation, my scheme works on single-bit binary data streams instead of floating point (or fixed point) real numbers, and compared to standard autocorrelation it is wicked fast. Since I’ve been working with multiple channels of audio on small microcontrollers, I’ve consistently shied away from autocorrelation schemes for pitch detection (see my original article: Fast and Efficient Pitch Detection). Popular time-domain autocorrelation (ACF) based pitch detectors, including variants such as AMDF (Average Magnitude Difference Function), ASDF (Average Squared Difference Function), YIN, and MPM, are quite expensive in terms of CPU cycles (ACF is essentially an O(N²) operation for N samples).