>Let's do a dialog about this.
Okay! I'm up for a dialog with Alan Kay if Alan Kay is up for a dialog with me :)
>gives rise to many kinds of hedging.
100% agreed, and I have a great example of hedging. Human languages are usually quite ambiguous, but there's an exception: legal documents. If a document gets to court, the other party can seize on any ambiguity and turn it against you (for example, by arguing that an "it" refers to something other than the closest possible referent). As a result, legal documents are written to be as unambiguous as possible, which makes them very, very explicit. Implicit context and shared human culture are what's difficult - they get lost with time. If you've ever read Shakespeare's dialogue, it's hard to understand without heavy glossing just 400 years later. But we actually have a copy of Shakespeare's will. Here it is: http://www.cummingsstudyguides.net/xWill.html
Without a single gloss or footnote, and without modernizing the spelling (as we do for the plays), you can understand nearly 100% of it 400 years later. Even with the original spelling, you could likely turn it into "code" with complete unambiguity - it already reads very much like code.
But the actual "conversation" that led to that, in the room where Shakespeare was talking with the lawyer, would likely be impenetrable to us if we had a transcript - requiring careful reading and maybe footnotes, just like the dialogue of Shakespeare's plays. Imagine calling up your lawyer and leaving a voicemail describing what you want in some document. That might lead to a brief dialog and then a draft for your approval. That dialog would likely be hard to follow even for an outsider today.
So the question is: is there room for an 'agent' (service) that builds up a shared context with someone interacting with it, and then produces something for outside consumption? The user could say "it" or "unless", but the service would resolve "it" to an explicit referent (making sure it got the right one) and turn "unless" into "; if ( not ) { }" for outsiders. I say this because that's only one interpretation of "unless" - another is that an exception might happen that you want to handle, and then not continue...
>Math is completely expressible in ordinary language, but instead, the attempt to make it less ambiguous leads to conventions that have to be learned.
This is very true and extremely interesting. When people rearrange an equation in symbolic form (crossing out common factors, etc.), they do so using complex symbol processing that isn't linguistic in nature. I think it's a different tangent from the one I'm asking about - after all, would it be common for anyone to write "the limit of the function f of x, as x tends to 0" instead of the common lim notation?
So this line of thinking about symbolism is extremely powerful - after all, isn't that why we have whiteboards without neat lines on them for you to write sentences into? Diagrams, symbols, and pictures are all very powerful aspects of thinking. But this part is tangential to what I was thinking of.
It would be interesting, though, if the interactive process could produce a diagram for you to rearrange if you wanted. I don't know if you know electrical engineering (probably!), but imagine being able to ask an interactive service "I'd like a simple circuit that lights an LED from a battery", and getting one -- along with some questions: did you really not need any fuses in it? what size of battery were you talking about? it's a DC circuit, right? And so forth. Whether you could then rearrange the result is a separate question.
Of course, if you are allowed to say something like "I'd like a circuit around an ARM processor where all the components cost under $30 in quantities under 1000, including PCB setup costs" - that's like praying to a deity!
Is there room for some level of interaction between "double x" and praying?
Wolfram Alpha certainly suggests there is. Although it doesn't ask you anything back / isn't interactive, I've certainly been shocked at some of the things it was able to interpret.
For example, I could ask it how far sound travels in 10 ms, so that I could judge how large a perceived physical offset introducing 10 ms of latency would cause. Well, I just tried it again so I could link it for you: it didn't get "how far does sound travel in 10 ms", and it didn't get "distance sound travels in 10 ms", but on my third try, "speed of sound in air * 10 ms", it got me the answer -
Its interpretation was:[1]
>speed of sound in dry air at 20 °C and 1 atmosphere pressure×10 ms (milliseconds)
and it gave me 3.432 meters.
What is interesting is that there was nothing interactive in this process - just me guessing until it got what I meant. It didn't ask me anything back. To me, the three phrases I just quoted are equivalent. Wolfram Alpha misinterpreted the first two quite badly, and got the third one easily.
So the question is - could such a process be applied to programming? Could the user write "lowest common factor of a and b" and have the compiler completely miss; try "least common factor of a and b" and have it miss again; then try "least common multiple of a and b" and finally have the compiler get it? Because that's not how programming works today. At all. (Well, in actual practice it kind of is, thanks to Google - but it's not what goes on in the IDE.)
So it would be interesting to know if some progress could be made along these lines.
>have you thought about (say) objects needing to negotiate meaning with each other?
No, that's a tough one. For simplicity, I imagined the current output being boilerplate code (like existing C/C++ code), so that the headache you just mentioned doesn't need to be thought about :-D
[1] http://www.wolframalpha.com/input/?i=speed+of+sound+in+air+*...