
146 points hugohadfield | 4 comments

This little project came about because I kept running into the same problem: cleanly differentiating sensor data before doing analysis. There are a ton of ways to solve this, but I've always been a fan of using Kalman filters for the job, as it's easy to get the double whammy of resampling/upsampling to a fixed, consistent rate and smoothing/outlier rejection. I recently wrote a little NumPy-only Bayesian filtering/smoothing library (https://github.com/hugohadfield/bayesfilter/), so this felt like a fun and very useful first thing to try it out on! If people find kalmangrad useful I would be more than happy to add a few more features, and I would be very grateful if people sent in any bugs they spot. Thanks!
pm | No.41864206
Congratulations! Pardon my ignorance, as my understanding of mathematics at this level is beyond rusty, but what are the applications of this kind of functionality?
1. hugohadfield | No.41864688
No problem! Let's dream up a little use case:

Imagine you have a speed sensor, e.g. on your car, and you would like to calculate the jerk (2nd derivative of speed) of your motion (useful in a range of driving-comfort metrics etc.). The speed sensor on your car is probably not all that accurate: it will give slightly noisy output, and it may not deliver readings at exactly 10 times per second, so you will have some jitter in the rate at which you receive data. If you naively attempt to calculate jerk by doing central differences on the signal twice (using np.gradient twice), you will amplify the noise in the signal and end up with something that looks totally wrong, which you will then have to post-process and maybe resample to get at the rate you want. If instead of np.gradient you use kalmangrad.grad, you will get a nice smooth jerk signal (and a fixed-up speed signal too). There are many ways to do this kind of thing, but I personally like this one as it's fast, can be run online, and if you want you can get uncertainties in your derivatives too :)
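
Roughly, the comparison looks like this (an untested sketch: the simulated sensor is made up, and it assumes kalmangrad exposes a grad(y, t, n) function returning per-sample smoother states whose mean vector stacks the 0th through nth derivatives, as the project README suggests):

    import numpy as np
    from kalmangrad import grad  # assumed import path for the library above

    # Simulated speed sensor: roughly 10 Hz with timing jitter plus noise
    t = np.sort(np.random.uniform(0.0, 10.0, 100))
    speed = np.sin(t) + np.random.normal(0.0, 0.05, t.shape)

    # Naive route: central differences twice amplifies the noise badly
    jerk_naive = np.gradient(np.gradient(speed, t), t)

    # Kalman route: jointly estimate speed and its first two derivatives;
    # index 2 of each state's mean is the jerk estimate (assumed layout)
    states, filter_times = grad(speed, t, n=2)
    jerk_smooth = np.array([s.mean()[2] for s in states])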

2. pm | No.41864859
I'd been researching Kalman filters to smooth out some sampling values (working on mobile: anything from accelerometer values to voice activation detection), but hadn't got around to revising the mathematics, so I appreciate the explanation. Out of curiosity, what other ways might this be achieved? I haven't seen much else beyond Kalman filters.
3. nihzm | No.41867618
Kalman filters are usually the way to go because in some settings they are provably optimal: for a linear system with Gaussian noise, the Kalman filter minimizes the mean-squared estimation error. About alternatives, not sure if people actually do this, but Savitzky-Golay filters could be used for the same purpose: they fit a low-order polynomial over a sliding window, and the fitted polynomial can be differentiated analytically, as sketched below.
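
A minimal sketch with SciPy (the signal and window parameters here are made up for illustration; note that Savitzky-Golay assumes equally spaced samples):

    import numpy as np
    from scipy.signal import savgol_filter

    dt = 0.1                      # Savitzky-Golay needs uniform spacing
    t = np.arange(0.0, 10.0, dt)
    speed = np.sin(t) + np.random.normal(0.0, 0.05, t.shape)

    # Fit a cubic over each 21-sample window and evaluate its second
    # derivative analytically at the centre: a smoothed jerk estimate.
    jerk = savgol_filter(speed, window_length=21, polyorder=3,
                         deriv=2, delta=dt)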
4. hugohadfield | No.41868390
You could almost certainly construct a convolution kernel that computes smoothed derivatives of your function, by convolving with the derivative of a Gaussian smoothing kernel (that kind of technique is mostly used for images if I remember correctly). In fact I reckon this might work nicely: https://docs.scipy.org/doc/scipy/reference/generated/scipy.n... although you would need to enforce equally spaced inputs with no missing data. Alternatively you might set up an optimisation problem in which you optimise the values of your Nth derivative on some set of points, then integrate them and minimise their distance to your input data; that would probably also work well, but it would be annoying to do regularisation on your lowest derivative and the whole thing might be quite slow. You could also do B-splines or other local low-order polynomial methods... the list goes on and on!
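
For the Gaussian-derivative idea, a minimal sketch (assuming the truncated link above refers to scipy.ndimage's 1-D Gaussian filter; sigma and the signal are made up):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dt = 0.1                      # equally spaced, gap-free samples required
    t = np.arange(0.0, 10.0, dt)
    speed = np.sin(t) + np.random.normal(0.0, 0.05, t.shape)

    # order=2 convolves with the second derivative of a Gaussian, doing the
    # smoothing and the differentiation in a single pass. sigma is measured
    # in samples, so divide by dt**2 to convert to a per-second^2 derivative.
    jerk = gaussian_filter1d(speed, sigma=5.0, order=2) / dt**2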