
1013 points QuinnyPig | 2 comments
stillpointlab ◴[] No.44563838[source]
I love all of this experimentation in how to effectively use AIs to co-create output with human steering. This pattern, of the human focusing on the high level and the AI focusing on the low level, feels like a big win.

In some sense, we are starting with a very high-level idea and gradually refining it to lower and lower levels of detail. It is structured hierarchical thinking. Right now we are at three levels: requirement -> spec -> code. Exposing each of these layers as structured text documents (mostly Markdown, at the moment) is powerful, since each level can be independently reviewed. You can review the spec before the code is written, then review the code before it gets checked in.
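To make that concrete, a hypothetical project following this pattern might look something like this (the file names and contents are invented for illustration):

    project/
      requirements.md   # "Users can reset their password via email."
      spec.md           # POST /password-reset, single-use token, 15 min expiry
      src/reset.py      # the generated implementation, reviewed last

Each document can be diffed and reviewed on its own before the level below it is generated.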

My intuition is that this pattern will be highly effective for coding. And if we prove that out at scale, we should start asking: how does this pattern translate to other activities? How will this affect law, medicine, insurance, etc. Software is the tip of the iceberg and if this works then there are many possible avenues to expand this approach, and many potential startups to serve a growing market.

The key will be managing all of the documents, the levels of abstraction and the review processes. This is a totally tractable problem.

replies(1): >>44566448 #
zmmmmm ◴[] No.44566448[source]
> Exposing each of these layers as structured text documents

If we take it far enough, we could end up with a well structured syntax with a defined vocabulary for specifying what the computer should do that is rigorously followed in the implemented code. You could think of it as some kind of a ... language for .... programming the computer. Mind blowing.

replies(1): >>44566677 #
stillpointlab ◴[] No.44566677[source]
I get that you are being sarcastic, but let's actually consider your idea more broadly.

- Machine code

- Assembly code

- LLVM IR

- C code (high level)

- VM IR (byte code)

- VHLL (e.g. Python/JavaScript/etc.)

So, we already have hierarchical stacks of structured text. The fact that we are extending this to higher tiers is in some sense inevitable. Instead of snark, we could genuinely explore this phenomenon.
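As a small demonstration of two adjacent tiers, CPython will show you the byte code (VM IR) that a VHLL function compiles down to. A minimal sketch; the exact opcode names vary by Python version:

    import dis

    def add(a, b):
        return a + b

    # Disassemble the high-level source into the VM IR tier beneath it.
    dis.dis(add)
    # Prints roughly:
    #   LOAD_FAST     0 (a)
    #   LOAD_FAST     1 (b)
    #   BINARY_ADD          (BINARY_OP 0 (+) on Python 3.11+)
    #   RETURN_VALUE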

LLMs are allowing us to extend this pattern to domains other than specifying instructions to processors.

replies(1): >>44568920 #
a5c11 ◴[] No.44568920[source]
And so we basically reinvent the wheel. You have to use very specific prompts to make the computer do what you want, so why not just, you know... program it? It's not that hard.

Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.

replies(2): >>44569635 #>>44569705 #
stillpointlab ◴[] No.44569635[source]
> Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.

I disagree that natural language is trying to be a programming language. I disagree that being less precise is a flaw.

Consider:

- https://www.ietf.org/rfc/rfc793.txt (TCP)

- https://datatracker.ietf.org/doc/html/rfc2616 (HTTP/1.1)

I think we can agree these are both documents written in natural language. They underpin the very technology we are using to have this discussion. It doesn't matter to either of us what platform we are on, or what programming language was used to implement them. That is not a flaw.
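That interoperability is easy to demonstrate. A few lines of Python's standard library can talk to any server that follows the HTTP spec, with no knowledge of how the server was implemented (a minimal sketch; the host is just an example):

    import http.client

    # Any client written against the HTTP spec can talk to any conforming
    # server, whatever language or platform the server was built with.
    conn = http.client.HTTPSConnection("datatracker.ietf.org")
    conn.request("GET", "/doc/html/rfc2616")
    resp = conn.getresponse()
    print(resp.status, resp.reason)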

Biological evolution shows us how far you can get with "good enough". Perfection and precision are highly overrated.

Let's imagine a wild future, one where you copy-and-paste the HTML spec (a natural-language doc) into a coding agent and it writes a complete implementation of an HTML user agent. Can you say with 100% certainty that this will not happen within your own lifetime?

In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.

replies(2): >>44569718 #>>44580298 #
1. sirsinsalot ◴[] No.44569718[source]
In this world, where the LLM-generated implementation has a bug that impacts a human negatively (the app could calculate a person's credit score, for example):

Who is accountable?

replies(1): >>44571273 #
2. stillpointlab ◴[] No.44571273[source]
I couldn't even tell you who is liable right now for bugs that impact humans negatively. Can you? If I were an IC at an airplane manufacturer and a bug I wrote caused an airplane crash, who would be legally responsible? Me? The QA team? The management team? Some third-party auditor? Some insurance underwriter? I have a strong suspicion it is already very complicated, even before considering LLMs.

What I can tell you is that, last time I checked, laws are written in natural language, and they are argued for, against, and interpreted in natural language. I'm pretty confident that there is applicable precedent and that the court system is already well equipped to deal with autonomous systems.