
279 points by nnx | 8 comments
PeterStuer No.43543777
Here's where the article goes wrong:

1. "Natural language is a data transfer mechanism"

2. "Data transfer mechanisms have two critical factors: speed and lossiness"

3. "Natural language has neither"

While a conversational interface does transfer information, its main qualities are what I always refer to as "blissful ignorance" and "intelligent interpretation".

Blissful ignorance allows the requester to state an objective without being required to know, or even be right about, how to achieve it. It is the opposite of operational command. Do as I mean, not as I say.

"Intelligent Interpretation" allows the receiver the freedom to infer an intention in the communication rather than a command. It also allows for contextual interactions such as goal oriented partial clarification and elaboration.

The more capable of intelligent interpretation the request execution system is, the more appropriate a conversational interface will be.

Think of it as managing a team. If they are junior, inexperienced, and not very bright, you will probably tend towards handholding, microtasking, and micromanagement to get things done. If you have a team of senior, experienced, and bright engineers, you can point out a desire with a few words, trust them to ask for information when there is relevant ambiguity, and expect a good outcome without having to detail-manage every minute of their days.

replies(2): >>43543792 #>>43543949 #
throwaway290 No.43543949
> If you have a team of senior, experienced, and bright engineers, you can point out a desire with a few words, trust them to ask for information when there is relevant ambiguity, and expect a good outcome

It's such a fallacy. The first thing an experienced and bright engineer will tell you is to leave the premises with your "few words about a desire" and not return without actual specs and requirements formalized in some way. If you do not understand what you want yourself, it means hours/days/weeks/months/literally years of back-and-forth and broken solutions and wasted time, because natural language is slow and lossy af (the article hits the nail on the head on this one).

Re "ask for information", my favorite example is when you say one thing if I ask you today and then you reply something else (maybe the opposite, it happened) if I ask you a week later because you forgot or just changed your mind. I bet a conversational interface will deal with this just fine /s

replies(3): >>43544573 #>>43544963 #>>43546171 #
PeterStuer No.43544963
I do understand that in bad cases it can be very frustrating as an engineer to chase vague statements only to be told later 'nah, that was not what I meant'. This is especially true when the gap between the parties is very large in both directions, or when there is incompetence and/or even adversarial stances between them. Language and communication only work if both parties are willing to understand.

Unfortunately, if either is the case, "actual specs and requirements formalized", while it sounds logical and might help, in my experience did very little to save any substantial project (and I've seen a lot). The common problem is that the business/client/manager is forced to sign off on formal documents far outside their domain of competence, or the engineers are straitjacketed into commitments that do not make sense, or have no idea of what is considered tacit knowledge in the domain and so can't contextualize the unstated. Those formalized documents then mostly become weaponized in a mutually destructive CYA.

What I've also seen more than once is years of formalized specs and requirements work while nothing ever gets produced, and the project is aborted before even the first line of code hits test.

I've given this example before: when the Covid lockdowns hit, there were digitization projects years in planning and budgeted for years of implementation that were hastily specced, coded, and rolled out into production by a 3-person emergency team over a long weekend. Necessity apparently has a way of cutting through the BS like nothing else can.

You need both sides capable, willing and able to understand. If not, good luck mitigating, but you're probably doomed either way.

replies(2): >>43545243 #>>43546208 #
1. throwaway290 No.43545243
> What I've also seen more than once is years of formalized specs and requirements work while nothing ever gets produced, and the project is aborted before even the first line of code hits test.

It just shows that no one really understood what they wanted. It is crazy to expect somebody to understand something better than you do, and it is hilarious to want a conversational UI to understand something better than you.

replies(4): >>43545309 #>>43545370 #>>43545529 #>>43546229 #
2. johnnyanmac No.43545309
The US having this culture of blame and deflect doesn't help either. When you're more concerned with making sure you can't be held liable if X fails, you spend more time covering your tracks than developing the project. And that's how the bureaucracy creeps in.

An approach of shared responsibility in all respects (successes and failures) would accelerate past the inevitable shortcomings that occur and let all parties focus on recovering and delivering.

3. discreteevent No.43545370
> it is hilarious to want a conversational UI to understand something better than you.

This is true. But what if you swap "conversational UI" for something actually intelligent, like a developer? Then we see this kind of thing all the time: a user has tacit, unconscious knowledge of some domain. The developer keeps asking them questions in order to get a formal understanding of the domain. At the end, the developer has a formal understanding and the user keeps their tacit understanding. In theory we could do the same with an AI, if the AI was actually intelligent.

replies(1): >>43545942 #
4. PeterStuer No.43545529
"It just shows that no one really understood what they wanted."

Then what was the literal room full of formal process and spec documents, meeting reports, and formal agreements (nearly 100,000 pages) by the analysts on either side for? And how did those not 'solve' the understanding problem?

When I go to the garage to have my car serviced, I expect them to understand it way better than I do. When I go to a nice restaurant, I expect the cooks to prepare dishes that taste better than anything I'd get by writing out a step-by-step recipe for them to follow. If I hire a senior consultant, even in my own domain, I expect them to not just know my niche, but to bring tacit knowledge from having worked on these types of solutions across my industry.

Expecting somebody to understand something better than me is exactly the reason why I hire senior people in the first place.

replies(1): >>43545802 #
5. throwaway290 No.43545802
> Then what was the literal room full of formal process and spec documents, meeting reports, and formal agreements (nearly 100,000 pages) by the analysts on either side for? And how did those not 'solve' the understanding problem?

Sure.

There are many possible factors (e.g. somebody had a shitty idea and a committee of people sabotaged it because they didn't want it to succeed, or it was good but committee interests/politics were against it, or it was generally a dysfunctional org), but that's irrelevant, so let's pretend people are good and it's the ideal case.

There was likely somebody who had a good idea originally. However, somebody failed to communicate it. Somebody brought vague vibes to the table with N people, and they ended up with N different ideas and could not agree on a specific one.

It just reiterates the original problem that I described, doesn't it?

6. throwaway290 No.43545942
You described an interaction not between a product owner and a software engineer but between a user and a product owner. A product person can also be a developer, it happens, but do not confuse the two roles, or people will think you're saying that a conversational UI can be a product owner.

The original example I replied to was where somebody had an idea and went with it to some engineering team or conversational interface.

"If the AI was actually intelligent" does a lot of work. To take a few words and make a detailed spec from it and ask the right questions, even humans can't do it for you.

First, because most probably you don't really understand it yourself, because you haven't thought about it enough.

Second, somebody who can do it would need to really, deeply understand and want the same things as you. But if a chatbot has abilities like "understand" and "want" (which is a special case of "feel"; another famous special case of "feel" is "suffer"), that is dangerous territory, because if it understands and feels but has no ability to refuse you or fulfill its own wishes, then your "conversational interface" becomes a euphemism: you are using a slave.

7. brookst No.43546229
How about a conversational UI to help you iterate and explore what you want rather than having to know it clearly and in detail before anyone writes any code?
replies(1): >>43555613 #
8. throwaway290 No.43555613
Regarding iteration, as the article says, natural language is just slow and lossy. If you are OK iterating more slowly and constantly explaining and correcting things, then why not? I find it tedious.