
Uncertain<T>

(nshipster.com)
444 points | samtheprogram | 1 comment
btown · No.45059050
Once one understands that a variable (in a programming context) can hold a specification for a variable (in a mathematical context), one opens up incredible doors that are at the foundation of modern AI.

When you see y = m * x + b, your recollections of math class may note that you can easily solve for "m", or fit a regression for "m" and "b" given various data points. But from a programming perspective, if these are all literal values, all you have is a "render" function. How do you reverse an arbitrary render function?

There are various approaches, depending on how Bayesian you want to be, but they boil down to this: if your language supports redefining operators based on the types of variables, and each variable carries a full specification of the subgraph of computations that produced it, you can build systems that simultaneously do "forward passes" by rendering the relationships, and "backward passes" in which the system automatically calculates a gradient/derivative, allowing a training loop to "nudge" the likeliest values of variables in the right direction. By sampling these outputs in a mathematically sound way, you get the weights that form a model.
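The backward-pass machinery can be sketched in a few lines of plain Python. This is a hypothetical, minimal reverse-mode autodiff (the `Value` class and its fields are illustrative, not any particular library's API): each overloaded operator records its parents and local derivatives, so y = m * x + b can be rendered forward and then differentiated backward.

```python
class Value:
    """Wraps a number plus the subgraph of operations that produced it."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents  # pairs of (parent_node, local_derivative)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data,
                     [(self, other.data), (other, self.data)])

    def backward(self, grad=1.0):
        """Propagate a gradient down the recorded computation subgraph."""
        self.grad += grad
        for parent, local in self._parents:
            parent.backward(grad * local)

# Forward pass: "render" y = m*x + b.
m, b = Value(2.0), Value(1.0)
x = Value(3.0)
y = m * x + b      # y.data == 7.0
# Backward pass: dy/dm == x.data, dy/db == 1, computed automatically.
y.backward()
```

A training loop would then nudge `m.data` and `b.data` against their gradients; real systems like PyTorch do exactly this, but over tensors and with a topologically sorted, compiled graph rather than naive recursion.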

Every layer in a deep neural network is specified in this way. Because of the composability of these operations, systems like PyTorch can compile incredibly optimal instructions for any combination of layers you can think of, just by specifying the forward-pass relationships.

So Uncertain<T> is just the tip of the iceberg. I'd recommend that everyone experiment with the idea that a numeric variable might be defined by metadata about its potential values at any given time, and that you can manipulate that metadata as easily as adding `a + b` in your favorite programming language.
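As a concrete illustration, here is a hedged sketch of an Uncertain<T>-style type in plain Python (the class and method names are made up for this example; the real Uncertain<T> work uses smarter sampling and hypothesis tests): a number is carried as samples from its distribution, and operator overloading propagates the uncertainty through ordinary arithmetic.

```python
import random
import statistics

class Uncertain:
    """A numeric value represented by samples from its distribution."""
    def __init__(self, samples):
        self.samples = list(samples)

    @classmethod
    def normal(cls, mean, stdev, n=10_000):
        return cls(random.gauss(mean, stdev) for _ in range(n))

    def _lift(self, other, op):
        # Promote plain numbers, then combine sample-wise.
        if not isinstance(other, Uncertain):
            other = Uncertain([other] * len(self.samples))
        return Uncertain(op(a, b) for a, b in zip(self.samples, other.samples))

    def __add__(self, other): return self._lift(other, lambda a, b: a + b)
    def __mul__(self, other): return self._lift(other, lambda a, b: a * b)

    def mean(self):
        return statistics.fmean(self.samples)

    def prob_greater(self, threshold):
        """Evidence-based query instead of a naive boolean comparison."""
        return sum(s > threshold for s in self.samples) / len(self.samples)

speed = Uncertain.normal(8.0, 2.0)   # e.g. a noisy GPS speed reading, m/s
dist = speed * 60.0                  # distance covered in a minute
print(dist.mean())                   # close to 480
print(speed.prob_greater(4.0))       # high probability, since 4 is 2 sigma below
```

The key move is the same as in autodiff: `a + b` no longer combines two literals, it combines two specifications of possible values, and the metadata flows through unchanged arithmetic syntax.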

replies(4): >>45059815 #>>45062069 #>>45062097 #>>45062893 #
jonahx · No.45059815
Very interesting.

Are there PLs that support this kind of thing at the language level as you are describing?

replies(2): >>45059994 #>>45060213 #
astrange · No.45060213
If you're willing to be discrete about it, logic languages like Prolog and Mercury use "unification" instead of "evaluation", which means they can run programs backwards.
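A toy sketch of what unification buys you, in Python (the names here are illustrative, not how Prolog is actually implemented): matching is symmetric, so unknowns can sit on either side of a relation, and the "program" runs in whichever direction has the holes.

```python
class Var:
    """A logic variable, compared by identity."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def walk(term, subst):
    """Follow variable bindings until reaching a non-variable or unbound var."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return a substitution making a and b equal, or None on failure."""
    a, b = walk(a, subst), walk(b, subst)
    if a is b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

# "Evaluate backwards": unknowns appear on both sides of the same relation.
X, Y = Var("X"), Var("Y")
s = unify(("plus", X, 3), ("plus", 1, Y), {})
print(s)   # binds X to 1 and Y to 3
```

In a real logic language the arithmetic relation itself would be reversible too (given y and x, solve for m), which is exactly the "reverse the render function" trick, done discretely rather than by gradients.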