1401 points by alankay | 7 comments

This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
adamgravitis ◴[] No.11940456[source]
Hi Alan,

I've heard you frequently compare the OOP paradigm to microbiology and molecules. It seems like even Smalltalk-like object interactions are very different from, say, protein-protein interactions.

How do you think this current paradigm of message-sending could be improved upon to enable more powerful, perhaps protein-like composition?

replies(1): >>11942093 #
alankay1 ◴[] No.11942093[source]
Not proteins, but cell to cell (this is still an interesting mechanism to contemplate and develop ...)
replies(1): >>11944854 #
1. astrobe_ ◴[] No.11944854[source]
Do you believe in self-healing/self-repairing software?
replies(1): >>11945112 #
2. alankay1 ◴[] No.11945112[source]
This is a worthy goal, and I think quite possible. Note that Biology requires a lot of organization in order to do this, so it is likely not to be straightforward from where we are. But we had to make the Internet -- etc. -- self-healing in many respects (we had to go to dynamic stabilities rather than trying to make perfect machines ...)
replies(1): >>11945785 #
3. michaelscott ◴[] No.11945785[source]
Do you think that genetic programming and machine learning are effective avenues to pursue regarding this? Or is that introducing unnecessary complexity in many/most cases?
replies(1): >>11946069 #
4. alankay1 ◴[] No.11946069{3}[source]
I think "real AI" could help (because it could also explain as well as configure). Systems that can't explain themselves (and most can't) are a very bad idea.
replies(1): >>11946708 #
5. michaelscott ◴[] No.11946708{4}[source]
Because when things go sideways a human can't fix anything without copious amounts of reading/testing/poking around? I'm taking your meaning of "explain" literally here, which might be shortsighted.

Either way, the idea of machines or systems as "living" and able to communicate intent and process, even if only within their own "umwelt", is really interesting. Even a taste of that would make modern systems easier to debug and understand, if not more robust (which would be a better starting point for many systems anyway I suppose).

replies(1): >>11947277 #
6. astrobe_ ◴[] No.11947277{5}[source]
I believe he is referring to something like expert-system explanations, which were the holy grail 20 years ago (I don't know if it has been achieved), as opposed to neural networks, which are more like black boxes (at least to me).
replies(1): >>11952056 #
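The "expert-system explanation" idea mentioned above can be sketched as a rule engine that records which rule justified each conclusion, so the system can answer "why?" afterward. This is a minimal illustration, not any particular expert-system shell; the facts and rule names are invented for the example.

```python
# Minimal forward-chaining rule engine with an explanation facility,
# in the spirit of classic expert systems. Facts/rules are invented.

rules = [
    # (conclusion, set of premises that imply it)
    ("system_degraded", {"high_error_rate", "slow_responses"}),
    ("restart_service", {"system_degraded", "service_restartable"}),
]

def infer(initial_facts):
    """Forward-chain over the rules, recording the premises that
    justified each newly derived fact."""
    facts = set(initial_facts)
    why = {}  # derived fact -> premises that justified it
    changed = True
    while changed:
        changed = False
        for conclusion, premises in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                why[conclusion] = premises
                changed = True
    return facts, why

def explain(fact, why):
    """Answer 'why was this concluded?' by unwinding the
    justification chain back to the given inputs."""
    if fact not in why:
        return f"{fact}: given as input"
    lines = [f"{fact}: because " + " and ".join(sorted(why[fact]))]
    for premise in sorted(why[fact]):
        lines.append("  " + explain(premise, why))
    return "\n".join(lines)

if __name__ == "__main__":
    facts, why = infer({"high_error_rate", "slow_responses",
                        "service_restartable"})
    print(explain("restart_service", why))
```

The `why` table is the whole trick: unlike a trained neural network, every conclusion carries an inspectable chain of justifications back to the inputs, which is roughly what the old explanation facilities offered.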
7. michaelscott ◴[] No.11952056{6}[source]
Ah I see, that's quite interesting. So the idea of a system that could explain its own decision-making and inferences?

Neural networks definitely are black boxes, at least at an individual level. Sure, the concept remains the same generally, but the internals differ and are hidden from case to case.