1016 points QuinnyPig | 9 comments
stillpointlab No.44563838
I love all of this experimentation in how to effectively use AIs to co-create output with human steering. This pattern, of the human focusing on the high level and the AI focusing on the low level, feels like a big win.

In some sense, we are starting with a very high-level idea and gradually refining it to lower and lower levels of detail. It is structured hierarchical thinking. Right now we are at three levels: requirement -> spec -> code. Exposing each of these layers as structured text documents (mostly Markdown right now, it seems) is powerful, since each level can be independently reviewed. You can review the spec before the code is written, then review the code before it gets checked in.
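
To make the layering concrete, here is a minimal sketch of how the three layers might sit side by side (the file names and the password-reset feature are hypothetical, purely for illustration):

    # requirement.md: "Users can reset their password via an emailed link."
    # spec.md: "reset_token(email) returns a URL-safe token that expires
    #           after 15 minutes; at most one active token per email."
    # code: the lowest layer, reviewable line by line against spec.md.
    import secrets
    import time

    TOKEN_TTL_SECONDS = 15 * 60
    _active_tokens = {}  # email -> (token, expiry timestamp)

    def reset_token(email: str) -> str:
        """Issue a URL-safe reset token, per spec.md."""
        token = secrets.token_urlsafe(32)
        _active_tokens[email] = (token, time.time() + TOKEN_TTL_SECONDS)
        return token

    def is_valid(email: str, token: str) -> bool:
        """Check a token against the single active entry for this email."""
        entry = _active_tokens.get(email)
        return entry is not None and entry[0] == token and time.time() < entry[1]

Each layer can be diffed and signed off on its own: a reviewer can approve spec.md before a line of the function exists.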

My intuition is that this pattern will be highly effective for coding. And if we prove that out at scale, we should start asking: how does this pattern translate to other activities? How will this affect law, medicine, insurance, etc. Software is the tip of the iceberg and if this works then there are many possible avenues to expand this approach, and many potential startups to serve a growing market.

The key will be managing all of the documents, the levels of abstraction and the review processes. This is a totally tractable problem.

replies(1): >>44566448 #
1. zmmmmm No.44566448
> Exposing each of these layers as structured text documents

If we take it far enough, we could end up with a well structured syntax with a defined vocabulary for specifying what the computer should do that is rigorously followed in the implemented code. You could think of it as some kind of a ... language for .... programming the computer. Mind blowing.

replies(1): >>44566677 #
2. stillpointlab No.44566677
I get that you are being sarcastic, but let's actually consider your idea more broadly.

- Machine code

- Assembly code

- LLVM IR

- C code (high level)

- VM IR (byte code)

- VHLL (e.g. Python, JavaScript, etc.)

So, we already have hierarchical stacks of structured text. The fact that we are extending this to higher tiers is in some sense inevitable. Instead of snark, we could genuinely explore this phenomenon.
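
As a small concrete illustration (just a sketch, using Python's standard dis module), one of these layer boundaries is directly visible today: the same logic exists simultaneously as high-level source and as the VM bytecode it compiles to.

    # Two adjacent layers of the stack above: VHLL source and VM IR.
    import dis

    def add(a, b):
        return a + b

    # Prints the bytecode: LOAD_FAST a, LOAD_FAST b,
    # BINARY_ADD (BINARY_OP on Python 3.11+), RETURN_VALUE.
    dis.dis(add)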

LLMs are allowing us to extend this pattern to domains other than specifying instructions to processors.

replies(1): >>44568920 #
3. a5c11 No.44568920
And so we basically reinvent the wheel. You have to use very specific prompts to make the computer do what you want, so why not just, you know... program it? It's not that hard.

Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.

replies(2): >>44569635 #>>44569705 #
4. stillpointlab No.44569635{3}
> Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.

I disagree that natural language is trying to be a programming language. I disagree that being less precise is a flaw.

Consider:

- https://www.ietf.org/rfc/rfc793.txt

- https://datatracker.ietf.org/doc/html/rfc2616

I think we can agree these are both documents written in natural language. They underpin the very technology we are using to have this discussion. It doesn't matter to either of us what platform we are on, or what programming language was used to implement them. That is not a flaw.
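
As a sketch of how directly that prose maps to working code (a minimal example, assuming example.com is reachable on port 80; not a full client), the message framing RFC 2616 defines in natural language and ABNF can be typed out byte for byte:

    # A minimal HTTP/1.1 exchange, following the request format
    # that RFC 2616 specifies: request line, headers, blank line.
    import socket

    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"

Any platform and any language can implement that same document, which is exactly the point: the spec is precise where it matters and natural where it doesn't.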

Biological evolution shows us how far you can get with "good enough". Perfection and precision are highly overrated.

Let's imagine a wild future, one where you copy-and-paste the HTML spec (a natural language doc) into a coding agent and it writes a complete implementation of an HTML user agent. Can you say with 100% certainty that this will not happen within your own lifetime?

In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.

replies(2): >>44569718 #>>44580298 #
5. sirsinsalot No.44569705{3}
I agree with this. There's so much snake oil at the moment. Coding isn't the hard part of software development, and we already have unambiguous languages for describing computation. Human language is a bad choice for the job, as we already find when writing specs for other humans. Adding more humanness to the loop isn't a good thing, IMHO.

At best an LLM is a new UI model for data. The push to get them writing code is bizarre.

replies(1): >>44580109 #
6. sirsinsalot No.44569718{4}
In this world, when the LLM-written implementation has a bug that impacts a human negatively (the app could be calculating a person's credit score, for example):

Who is accountable?

replies(1): >>44571273 #
7. stillpointlab No.44571273{5}
I couldn't even tell you who is liable right now for bugs that impact humans negatively. Can you? If I were an IC at an airplane manufacturer and a bug I wrote caused an airplane crash, who would be legally responsible? Is it me? The QA team? The management team? Some 3rd-party auditor? Some insurance underwriter? I have a strong suspicion it is very complicated as it is, without even considering LLMs.

What I can tell you is that the last time I checked: laws are written in natural language, they are argued for/against and interpreted in natural language. I'm pretty confident that there is applicable precedent and the court system is well equipped to deal with autonomous systems already.

8. a5c11 No.44580109{4}
> Coding isn't the hard part of software development

That's actually a relief: after hours and days of attending meetings and writing documentation, I can eventually sit in front of my IDE and let my technical brain enjoy being pragmatic.

9. a5c11 No.44580298{4}
> I disagree that being less precise is a flaw.

> Biological evolution shows us how far you can get with "good enough".

Those are valid points when speaking about human-to-human interaction, where we can allow ourselves much more freedom in forming thoughts. Still, written text will never be as expressive as face-to-face communication, where you can hear, see, and feel emotions. So even for humans, raw text is not enough; it's merely good enough for biological brains.

When speaking about human-computer interaction, you are just being ignorant. Programming is engineering, and engineering is not biology. Biology allows itself freedom and randomness; in engineering we avoid that kind of thing unless it's part of the job. Would you use a "good enough" banking system? Would you use a car with "good enough" safety features? Would you happily cross a bridge that is just "good enough"?

We invented computers to replace humans in critical jobs requiring fast and precise action. Natural language is not sufficient to give precise instructions concisely, because it was never meant for that. Sure, you can hammer nails with pliers, but... we have hammers.

> In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.

Words of a man who didn't choose his career path properly.