146 points by hugohadfield | 20 comments

This little project came about because I kept running into the same problem: cleanly differentiating sensor data before doing analysis. There are a ton of ways to solve this problem; I've always personally been a fan of using Kalman filters for the job, as it's easy to get the double whammy of resampling/upsampling to a fixed consistent rate and also smoothing/outlier rejection. I wrote a little numpy-only Bayesian filtering/smoothing library recently (https://github.com/hugohadfield/bayesfilter/) so this felt like a fun and very useful first thing to try it out on! If people find kalmangrad useful I would be more than happy to add a few more features etc., and I would be very grateful if people sent in any bugs they spot. Thanks!
1. pm ◴[] No.41864206[source]
Congratulations! Pardon my ignorance, as my understanding of mathematics at this level is beyond rusty, but what are the applications of this kind of functionality?
replies(5): >>41864688 #>>41864699 #>>41864774 #>>41865843 #>>41872941 #
2. hugohadfield ◴[] No.41864688[source]
No problem! Let's dream up a little use case:

Imagine you have a speed sensor, e.g. on your car, and you would like to calculate the jerk (2nd derivative of speed) of your motion (useful in a range of driving comfort metrics etc.). The speed sensor on your car is probably not all that accurate; it will give a slightly randomly wrong output, and it may not give that output at exactly 10 times per second, so you will have some jitter in the rate at which you receive data. If you naively attempt to calculate jerk by doing central differences on the signal twice (using np.gradient twice) you will amplify the noise in the signal and end up with something that looks totally wrong, which you will then have to post-process and maybe resample to get it at the rate that you want. If instead of np.gradient you use kalmangrad.grad you will get a nice smooth jerk signal (and a fixed-up speed signal too). There are many ways to do this kind of thing, but I personally like this one as it's fast, can be run online, and if you want you can get uncertainties in your derivatives too :)
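Here's a rough sketch of the naive approach in plain numpy (the kalmangrad call at the end is shown schematically only, since the exact call signature may differ):

    import numpy as np

    # Simulated speed sensor: roughly 10 Hz samples with timing jitter and noise.
    rng = np.random.default_rng(0)
    t = np.cumsum(0.1 + 0.01 * rng.standard_normal(500))    # jittery timestamps
    speed = np.sin(t) + 0.05 * rng.standard_normal(t.size)  # noisy measurements

    # Naive approach: central differences twice. Each np.gradient call amplifies
    # the measurement noise, so the "jerk" below is dominated by noise rather
    # than the true -sin(t) signal.
    accel_naive = np.gradient(speed, t)
    jerk_naive = np.gradient(accel_naive, t)
    print(np.std(jerk_naive))   # far larger than the true jerk's ~0.7

    # Kalman-smoothed alternative (schematic only, signature may differ):
    # from kalmangrad import grad
    # estimates, est_times = grad(speed, t, n=2)  # value, 1st and 2nd derivative estimates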

replies(1): >>41864859 #
3. uoaei ◴[] No.41864699[source]
Basically, approximating calculus operations on noisy, discrete-in-time data streams.
replies(1): >>41864883 #
4. thatcherc ◴[] No.41864774[source]
I actually have one for this! Last week I had something really specific - a GeoTIFF image where each pixel represents the speed in the "x" direction of the ice sheet surface in Antarctica and I wanted to get the derivative of that velocity field so I could look at the strain rate of the ice.

A common way to do that is to use a Savitzky-Golay filter [0], which does a similar thing - it can smooth out data and also provide smooth derivatives of the input data. It looks like this post's technique can also do that, so maybe it'd be useful for my ice strain-rate field project.

[0] - https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
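For anyone else looking at this, scipy's implementation makes the smoothed-derivative part pretty direct (window length, polynomial order, and grid spacing below are just placeholder values):

    import numpy as np
    from scipy.signal import savgol_filter

    # Regularly gridded, noisy 1-D profile (a stand-in for one row of the image).
    dx = 100.0                                  # grid spacing, e.g. metres per pixel
    x = np.arange(0.0, 50_000.0, dx)
    vx = np.sin(2 * np.pi * x / 10_000.0) + 0.1 * np.random.default_rng(1).standard_normal(x.size)

    # Savitzky-Golay: fit a low-order polynomial in each sliding window; deriv=1
    # with delta=dx returns the smoothed first derivative d(vx)/dx directly.
    vx_smooth = savgol_filter(vx, window_length=21, polyorder=3)
    dvx_dx = savgol_filter(vx, window_length=21, polyorder=3, deriv=1, delta=dx)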

replies(2): >>41864872 #>>41865298 #
5. pm ◴[] No.41864859[source]
I'd been researching Kalman filters to smooth out some sampling values (working on mobile: anything from accelerometer values to voice activation detection), but hadn't got around to revising the mathematics, so I appreciate the explanation. Out of curiosity, what other ways might this be achieved? I haven't seen much else beyond Kalman filters.
replies(2): >>41867618 #>>41868390 #
6. pm ◴[] No.41864872[source]
Thanks for that, it looks like my research today is cut out for me.
7. pm ◴[] No.41864883[source]
This is what I was thinking, but stated much clearer than I'd have managed.
8. defrost ◴[] No.41865298[source]
I've been a heavy user of Savitzky-Golay filters (linear time series, rectangular grid images, cubic space domains | first, second and third derivatives | balanced and unbalanced (returning central region smoothed values and values at edges)) since the 1980s.

The usual implementation is as a convolution filter based on the premise that the underlying data is regularly sampled.
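i.e. for the regularly sampled case the filter really is just a fixed stencil applied by convolution; a quick check with scipy (parameters illustrative):

    import numpy as np
    from scipy.signal import savgol_coeffs, savgol_filter

    y = np.random.default_rng(2).standard_normal(200)

    # Precompute the stencil once: a quadratic fit over an 11-point window,
    # returned in convolution order.
    coeffs = savgol_coeffs(11, 2)

    # Applying the filter to the interior of the signal is a plain convolution;
    # away from the edges it matches savgol_filter to floating-point precision.
    smoothed_conv = np.convolve(y, coeffs, mode="valid")
    smoothed_ref = savgol_filter(y, window_length=11, polyorder=2)
    assert np.allclose(smoothed_conv, smoothed_ref[5:-5])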

The occasional pain-in-the-arse reality is missing data and|or present-but-glitched|spiked data .. both of which require a "sensible infill" to continue with a convolution.

This is a nice implementation and a potentially useful bit of kit - the elephant in the room (from my PoV) is "how come the application domain is irregularly sampled data?"

Generally (engineering, geophysics, etc) great lengths are taken to clock data samples like a metronome (in time and|or space (as required most)).

I'm assuming that your gridded GeoTIFF data field is regularly sampled in both the X and Y axes?

replies(2): >>41868896 #>>41869538 #
9. caseyy ◴[] No.41865843[source]
This is very important in controllers that use feedback loops. The output of the controlled system is measured, a function is applied to it, and the result is fed back into the controller, so the output becomes self-balancing.

Applications include self-driving cars, rocketry, homeostatic medical devices like insulin pumps, cruise control, HVAC controllers, life support systems, satellites, and other scenarios.

This is mainly due to a type of controller called the PID controller, which involves a feedback loop and is self-balancing. The purpose of a PID controller is to induce a target value of a measurement in a system by adjusting the system's inputs, at least some of which are outputs of the said controller. In particular, the derivative term of a PID controller involves a first-order derivative of the measured signal. The smoother its values are over time, the better such a controller performs. A situation where the derivative values are not smooth, or the second derivative is not continuous, is called a "derivative kick".
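To make the role of the D term concrete, here is a toy PID update in Python (names and gains are made up for illustration, not production code):

    import random

    def pid_step(setpoint, measurement, state, kp, ki, kd, dt):
        """One update of a textbook PID controller (toy illustration)."""
        error = setpoint - measurement
        state["integral"] += error * dt                   # I term: accumulated error
        derivative = (error - state["prev_error"]) / dt   # D term: finite difference
        state["prev_error"] = error
        # If `measurement` is noisy, the finite difference above amplifies the
        # noise, which is exactly why smooth derivative estimates matter here.
        return kp * error + ki * state["integral"] + kd * derivative

    # Drive a noisy first-order system toward a setpoint of 1.0.
    state = {"integral": 0.0, "prev_error": 0.0}
    x, dt = 0.0, 0.01
    for _ in range(1000):
        noisy_measurement = x + random.gauss(0.0, 0.01)
        u = pid_step(1.0, noisy_measurement, state, kp=2.0, ki=0.5, kd=0.05, dt=dt)
        x += dt * (u - x)                                 # simple plant: dx/dt = u - x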

The people building these controllers have long sought algorithms that produce at least a good approximation of a measurement from a noisy sensor. A good approximation of the derivatives is the next level up, a bit harder, and overall good approximations of the derivative are a relatively recent development.

There is a lot of business here. For example, Abbott Laboratories and Dexcom are building continuous blood glucose monitors that use a small transdermal sensor to sense someone's blood glucose. This is tremendously important for the management of people's diabetes. And yet algorithms like the one GP presents are some of the biggest hurdles. The sensors are small and ridiculously susceptible to noise. Yet it is safety-critical that the data they produce is reliable and up to date (you can't use historical smoothing), because devices like insulin pumps can consume it in real time. I won't go into this in further detail, but incorrect data can and has killed patients. So a good algorithm for cleaning up this noisy sensor data is both a serious matter and challenging.

The same can be said about self-driving cars - real-time data from noisy sensors must be fed into various systems, some using PID controllers. These systems are often safety-critical and can kill people in a garbage-in, garbage-out scenario.

There are about a million applications for this algorithm. It is likely an improvement on at least some previous implementations in the aforementioned fields. Of course, these algorithms also often don't handle certain edge cases well. It's an ongoing area of research.

In short — take any important and technically advanced sensor-controller system. There’s a good chance it benefits from advancements like what GP posted.

P.S. it’s more solved with uniformly sampled data (i.e. every N seconds) than non-uniformly sampled data (i.e. as available). So once again, what GP posted is really useful.

I think they could get a job at pretty big medical and automotive industry companies with this, it is “the sauce”. If they weren’t already working for a research group of a self-driving car company, that is ;)

replies(1): >>41868993 #
10. nihzm ◴[] No.41867618{3}[source]
Kalman filters are usually the way to go because, for some cases (linear dynamics with Gaussian noise), it is mathematically proven that they are optimal, in the sense that they minimize the mean squared estimation error. As for alternatives, I'm not sure if people actually do this, but I think Savitzky-Golay filters could be used for the same purpose.
11. hugohadfield ◴[] No.41868390{3}[source]
You could almost certainly construct a convolutional kernel that computes smoothed derivatives of your function by convolving with the derivative of a Gaussian smoothing kernel (that kind of technique is mostly used for images, if I remember correctly). In fact I reckon this might work nicely https://docs.scipy.org/doc/scipy/reference/generated/scipy.n... although you would need to enforce equally spaced inputs with no missing data. Alternatively you might set up an optimisation problem in which you are optimising the values of your N'th derivative on some set of points, then integrating and minimising their distance to your input data; that would probably also work well, but it would be annoying to do regularisation on your lowest derivative and the whole thing might be quite slow. You could also use B-splines or other local low-order polynomial methods... the list goes on and on!
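Something along these lines, assuming the function I have in mind is scipy.ndimage.gaussian_filter1d and assuming equally spaced samples:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Equally spaced, noisy samples (this technique assumes a regular grid).
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    y = np.sin(t) + 0.05 * np.random.default_rng(3).standard_normal(t.size)

    # order=1 convolves with the first derivative of a Gaussian, giving a smoothed
    # estimate of dy/dt; divide by dt because sigma and the kernel are in samples.
    dy_dt = gaussian_filter1d(y, sigma=10, order=1) / dt

    # order=2 gives a smoothed second derivative, and so on.
    d2y_dt2 = gaussian_filter1d(y, sigma=10, order=2) / dt**2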
12. hugohadfield ◴[] No.41868896{3}[source]
Yeah, regularly sampled is almost always the goal, and great when it's available! The main times I deal with non-uniformly sampled data are with jitter and missing data, etc.
13. 082349872349872 ◴[] No.41868993[source]
Thinking about people as PID controllers: left to our own devices we're normally very good at the D term, but lousy at the I term, with the P term somewhere in the middle.

Give people clay/parchment/paper, however, and it becomes much easier to reliably evaluate an I term.

Example: https://xkcd.com/1205/ ; maybe each single time you do the task it seems like sanding out the glitches would be more trouble than it's worth, but a little record keeping allows one to see when a slight itch becomes frequent enough to be worth addressing. (conversely, it may be tempting to automate everything, but a little record keeping allows one to see if it'd obviously be rabbit holing)

replies(1): >>41871192 #
14. thatcherc ◴[] No.41869538{3}[source]
Yup, my data is nicely gridded so I can use the convolution approach pretty easily. Agreed though - missing data at the edges or in the interior is annoying. For a while I was thinking I should recompute the SG coefficients every time I hit a missing data point so that they just "jump over" the missing values, giving me a derivative at the missing point based on the values that come before and after it, but for now I'm just throwing away any convolutions that hit a missing value.
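Something like this hypothetical helper is what I had in mind - refit the local polynomial with the gap excluded instead of throwing the whole window away (names and window sizes made up):

    import numpy as np

    def deriv_skipping_gap(y, t, i_eval, i_missing, half_width=5, polyorder=3):
        """Least-squares polynomial derivative at t[i_eval], excluding one missing sample."""
        idx = np.arange(i_eval - half_width, i_eval + half_width + 1)
        idx = idx[idx != i_missing]                    # drop the missing sample
        coeffs = np.polyfit(t[idx] - t[i_eval], y[idx], polyorder)
        return np.polyder(np.poly1d(coeffs))(0.0)      # derivative at the window centre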
replies(1): >>41875323 #
15. caseyy ◴[] No.41871192{3}[source]
You might like the second episode of All Watched Over by Machines of Loving Grace. It talks about how techno-utopians tried to model society and nature as feedback loop controllers.

One might say a part of the reason they have failed is because nature and people don't much care for the I term. These systems have feedback loops for sudden events, but increase the temperature slowly enough and the proverbial frog boils.

There are very many undercurrents in our world we do not see. So much that even when we think we understand and can predict their effects, we almost never take into account the entire system.

replies(1): >>41874835 #
16. auxym ◴[] No.41872941[source]
My first thought as a mechanical engineer is whether this could be useful for PID controllers. Getting a usable derivative value for the "D" term is often a challenge because relatively small noise can create large variations in the derivative, and many filtering methods (e.g. a simple first-order lowpass) introduce a delay/phase shift, which reduces controller performance.
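A tiny numpy illustration of that trade-off (toy signal, arbitrary filter time constant):

    import numpy as np

    dt = 0.001
    t = np.arange(0.0, 2.0, dt)
    y = t + 0.001 * np.random.default_rng(4).standard_normal(t.size)  # ramp + small noise

    # Raw finite-difference derivative: unbiased but very noisy, since 0.001-level
    # noise becomes ~1.0-level jumps after dividing by dt.
    d_raw = np.diff(y) / dt

    # First-order low-pass (exponential) filter on the derivative: much smoother,
    # but it lags the true derivative by roughly the filter time constant tau.
    tau = 0.05
    alpha = dt / (tau + dt)
    d_filt = np.empty_like(d_raw)
    acc = d_raw[0]
    for i, d in enumerate(d_raw):
        acc += alpha * (d - acc)
        d_filt[i] = acc

    print(np.std(d_raw), np.std(d_filt))  # noise drops sharply; the price is the lag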
17. 082349872349872 ◴[] No.41874835{4}[source]
Thanks! Parts of that ep reminded me greatly of The Tyranny of Structurelessness (1971-1973): https://www.jofreeman.com/joreen/tyranny.htm

(the notion that we tend to overly ascribe stability and reproducibility to a system reminds me of Vilfredo Pareto having convinced himself that 80-20 was the invariable power law)

replies(1): >>41875560 #
18. defrost ◴[] No.41875323{4}[source]
> For a while I was thinking I should recompute the SG coefficients every time

We had, in our geophysics application, a "pre-computed" coefficient cache. The primary filters (central symmetric smoothing at various lengths) were common choices and almost always there to grab. Missing values were either cheaply "faked" for Quick'NDirty displays or infilled by prediction filters - S-G filters computed to use the existing points within the range to replace the missing value - via either a lookup from the indexed filter cache or a fresh filter generation to use and stash in the cache.

It's a complication (in the mechanical watch sense) to add, but with code to generate coefficients already existing, it's really just a matter of weighing the generation time against the hassle of indexing and storing filters as they are created, and the frequency of reuse of "uncommon" patterns.

19. caseyy ◴[] No.41875560{5}[source]
That was a very interesting read, thanks. Structurelessness seems to be very fertile ground for structure and organization.
replies(1): >>41877471 #
20. 082349872349872 ◴[] No.41877471{6}[source]
Rotating people through the root ("central" if you insist on camouflaging the hierarchy*) positions seems an excellent idea. The biggest problem with rotations in general is the existence of domains requiring large amounts of specific knowledge, but I don't think anyone is arguing that administrative positions are among them.

— We're taking turns to act as a sort of executive officer for the week

— Yes...

— But all the decisions of that officer have to be ratified at a special bi-weekly meeting

— Yes I see!

— By a simple majority, in the case of purely internal affairs

Be quiet!

— But by a two-thirds majority, in the case of more major

Be quiet! I order you to be quiet!

(I wonder how the Pythons themselves handled group decisions?)

* but see expander networks for a possible alternative network structure