
Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points by Stwerner | 40 comments
1. Flatcircle ◴[] No.44616899[source]
My theory on AI is that it's the next iteration of Google search: a better, more conversational base layer over all the information that exists on the internet.

Of course some people will lose jobs, just as happened to several industries when search became ubiquitous (newspapers, phone books, encyclopedias, travel agents).

But IMHO this isn't the existential crisis people think it is.

It's just a tool. Smart, clever people can do lots of cool stuff with tools.

But you still have to use it.

Search has just become Chat.

You used to have to search; now you chat and it does the searching, and more!

replies(9): >>44616955 #>>44616960 #>>44616976 #>>44617019 #>>44617060 #>>44617065 #>>44617099 #>>44620763 #>>44623695 #
2. jopsen ◴[] No.44616955[source]
It's clearly useful for many things other than search.
3. staplers ◴[] No.44616960[source]
A lot of modern entry-level jobs were filled by people who knew how to use Google and follow instructions.

I imagine the next generation will have a similar relationship with AI. What seems like "common sense" to the younger, more tech-savvy crowd will be difficult for older generations whose default behavior isn't to open ChatGPT or Gemini and find the solution quickly.

4. ivanjermakov ◴[] No.44616976[source]
> Search has just become Chat

I think a chat-like interface is not the most efficient way to work with an LLM. There has to be a smarter way.

replies(5): >>44617027 #>>44617074 #>>44617423 #>>44620491 #>>44622026 #
5. Quitschquat ◴[] No.44617019[source]
Google doesn't have to change search. It already returns AI-generated crap before anything useful.
replies(5): >>44617042 #>>44617089 #>>44617146 #>>44617504 #>>44617593 #
6. clickety_clack ◴[] No.44617027[source]
There's an efficient way to serve results, and there's an efficient way for a human to consume them. I find LLMs far more efficient than a Google search in terms of the cognitive work needed to explore and understand something. The next thing will have to beat that level of personal mental effort, and I can't yet imagine what that next step would look like.
replies(1): >>44617128 #
7. arrowsmith ◴[] No.44617042[source]
To be fair, Google also returns a lot of useless crap that wasn't generated by AI.
replies(1): >>44618551 #
8. jayd16 ◴[] No.44617060[source]
Unlike peak Google, this reduces the signal-to-noise ratio and obfuscates the source data it's pulling from.
replies(1): >>44617139 #
9. maqnius ◴[] No.44617065[source]
I agree that people are using it for things they would've googled, but I doubt it's a good replacement.

To me it mostly comes with a feeling of uncertainty, as if someone were telling you something they heard at a party. I need to google it and find a trustworthy source for verification; otherwise it's just a hint.

So I use it when I want a quick hint, not when I really want information worth remembering. It's certainly not a replacement for me. It actually makes things worse for me because of all the AI slop atm.

10. majormajor ◴[] No.44617074[source]
I think Photoshop is a good guide here.

Famously complicated interface with a million buttons and menus.

Now there are more buttons for the AI tools.

Because at the end of the day, using a "brush" tool to paint over the area containing the thing you want removed or changed in an image is MUCH simpler than trying to describe it through chat. Compare a prompt like "please remove the fifth person from the left standing on the brick path under the bus stop" with just explicitly selecting the region in the GUI. The former could have a lot of value for casual amateur use; it's not going to replace the precise, high-functionality tool for professional use.

In software: would you rather chat with an LLM to see the contents of a proposed code change, or use a visual diff tool? "Let the agent run and then treat its output as a PR from a junior dev" has been said so many times recently, and that is not a suggestion to just chat with it instead of using the GUI. I'd imagine the input side gets extended the same way: less free-form chat, more a submission of a Figma mockup plus a link to a ticket with specs.

replies(1): >>44621475 #
11. patcon ◴[] No.44617089[source]
I have systemic concerns with how Google is changing roles from "knowledge bridging" to "knowledge translating", but in terms of the information itself, I find it very useful.

You find it gives you poor information?

replies(1): >>44617210 #
12. aDyslecticCrow ◴[] No.44617099[source]
As search gives the answer rather than the path to it, the job of finding things out properly and writing it down for others is lost. If we let that be lost, then we will all be lost.

If we cannot find a way to redirect income from AI back to the creators of the information they rehash (such as good and honest journalism), a critical load-bearing pillar of democratic society will collapse.

The news industry has been in grave danger for years, and we've seen the consequences that brings (distrust, division, misinformation, foreign manipulation). AI may drive the final nail into its coffin.

It's not about some jobs being replaced; that is not even remotely the issue. The path we are on currently is a dark one, and dismissing it as "just some jobs being lost" is a naive dismissal of the danger we're in.

replies(1): >>44617375 #
13. aDyslecticCrow ◴[] No.44617128{3}[source]
I find a well-written human article or guide to be far more efficient when it exists. But if AI rehashes them... then the market for those may disappear, and in the process the AI won't be very good either, having no sources left to summarise.
replies(1): >>44626209 #
14. hmmokidk ◴[] No.44617139[source]
Creation of source data has been disincentivized
15. brabel ◴[] No.44617146[source]
I was a bit wary of trusting the AI summaries Google has been including in search results… but after a few checks it seems like it’s not crap at all, it’s pretty good!
replies(1): >>44617198 #
16. SoMomentary ◴[] No.44617198{3}[source]
I think their point is that all of the content out there is turning into AI slop, so it won't matter if search changes, because the results themselves have already been changed.
17. aDyslecticCrow ◴[] No.44617210{3}[source]
Always check the sources. I've personally seen it:

- Use a source to claim the opposite of what the source says.

- Point to irrelevant sources.

- Use a very untrustworthy source.

- Give out sources that have nothing to do with what it says.

- Make up additional things, like any other LLM without source or internet-search capability, despite reading sources.

I've specifically found that Gemini (the one Google puts at the top of searches) is hallucination-prone, and I've had far better results with other agents with search capability.

So... presenting a false or made-up answer to a person searching the web on a topic they don't understand... I'd really like to see a massive lawsuit cooked up about this when someone inevitably burns their house down or loses their life.

replies(2): >>44619181 #>>44622911 #
18. JSteph22 ◴[] No.44617375[source]
I am looking forward to the "news industry" breathing its last breath. They're the ones primarily responsible for the distrust and division.
replies(3): >>44618502 #>>44623074 #>>44623715 #
19. Fade_Dance ◴[] No.44617423[source]
There is certainly much innovation to come in this area.

I'm thinking about personal knowledge systems and their innovative ideas regarding visual representations of data (mind maps, webs of interconnected notes, things like that). That could be useful for AI search. What these models are doing, in a sense, is building a concept web, which would naturally fit quite well into a visualization.

The chatbot paradigm is quite centered around short, easily digestible narratives, and while humans are certainly narrative-generating and narrative-absorbing creatures to a large degree, things like having a visually mapped-out counterargument can also be surprisingly useful. It's just not something that humans naturally do without effort outside of, say, a philosophy degree.

There is still the specter of the megacorp feed-algo monster lurking, though, in that there is a tendency to reduce consumer-facing tools to black-box algorithms optimized to boost engagement. Many of the more innovative approaches may involve giving users more control: dynamic sliders for results, that sort of thing.

20. mrandish ◴[] No.44617504[source]
Append -ai to your query to omit AI results.
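E.g., just applying that tip to the battery question mentioned elsewhere in this thread:

    12v 100ah lifepo4 watt hours -ai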
replies(1): >>44622903 #
21. accrual ◴[] No.44617593[source]
I like the way DuckDuckGo does it - it offers a button to generate a response if you want to, but it doesn't shove it down your throat.

It's handy when I just need the quick syntax of a command I rarely need, etc.

22. aDyslecticCrow ◴[] No.44618502{3}[source]
No, I fully disagree.

The economic viability of proper journalism was already destroyed by the ad-supported, click-and-attention-based internet (and in particular by the way people consume news through algorithmic social media).

I believe most independent news sites have been economically forced into sensationalism and extremism to survive. It's not something they wilfully created.

Personally, I find that the news organisations that are still somewhat reputable have a source of income beyond page visits and ads, be it a senior demographic that still subscribes to the paper, a loyal reader base that pays for the paywall, or a government sponsoring their existence as a public service.

Now what if you cut out the last piece of income journalists rely on to stay afloat? We simply fire the humans and tell an AI to summarise the other articles instead, phrasing it however people want to hear it.

And that's a frightening world.

23. jenscow ◴[] No.44618551{3}[source]
wasn't generated by their AI, more like
24. siliconwrath ◴[] No.44619181{4}[source]
I’ve had to report AI summaries to Google several times for telling me restaurant items don’t contain ingredients I'm allergic to, when the cited “source” allergen menu says otherwise. They’re gonna kill someone.
25. mmcconnell1618 ◴[] No.44620491[source]
English and other natural languages come with lots of ambiguity and assumptions. A significant benefit of programming languages is that they have explicit rules for how they will be converted into a running program. An LLM can take many paths from the same starting prompt and deliver vastly different output.
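A toy illustration of that ambiguity (hypothetical data, one possible reading; Python only because it's compact):

    # "Sort the users by age" is ambiguous: ascending or descending?
    # How are ties broken? Do missing ages go first or last?
    users = [("bob", 34), ("ann", None), ("eve", 34), ("cal", 28)]

    # The code is forced to pick exactly one reading:
    # ascending age, missing ages last, ties broken by name.
    ordered = sorted(
        users,
        key=lambda u: (u[1] is None, u[1] if u[1] is not None else 0, u[0]),
    )
    print(ordered)  # [('cal', 28), ('bob', 34), ('eve', 34), ('ann', None)]

Two runs of the same prompt can resolve those choices differently; the code resolves them the same way every time.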
replies(2): >>44622110 #>>44676256 #
26. 827a ◴[] No.44620763[source]
Yeah; there's still a massive chasm between "I spent hours precisely defining my requirements for this greenfield application with no users and the AI one-shot it" and "million-line, twenty-team enterprise SaaS hellscape with ninety-seven stakeholders per line of code".

The fact that AI can actually handle the former case is, to be clear, awesome; but not surprising. Low-code tools have been doing it for years. Retool, even back in 2018, was way more productive than any LLMs I've seen today, at the things Retool could do. But its relative skill at these things, to me, does not conclusively determine that it is on the path toward being able to autonomously handle the latter.

The English language is simply a less formal programming language. Its informality means it requires less skill to master, but also means it may require more volume to achieve the desired outcome. At some level of granularity, it is necessarily the case that programming in English begins to look like programming in JavaScript, just with capital letters, exclamation points, and threats to fire the AI instead of asserts and conditionals. Are we really saving time, and thus generating higher levels of productivity? Or is its true benefit that it enables forays into languages and domains you might be unfamiliar with, unlocking software development for a wider range of people who couldn't muster it before? It's probably a bit of both.
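To make that concrete, a toy sketch (hypothetical constraint, not from the article):

    # "Programming in English": the constraint restated with emphasis,
    # in the hope that the model honors it.
    prompt = (
        "The discount must NEVER exceed 50%. Never! "
        "Double-check this or you're fired!"
    )

    # Programming in a formal language: the same constraint as an assert,
    # enforced mechanically on every run.
    def apply_discount(price: float, discount: float) -> float:
        assert 0.0 <= discount <= 0.5, "discount must be between 0% and 50%"
        return price * (1.0 - discount)

    print(apply_discount(100.0, 0.25))  # 75.0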

Dario Amodei says we'll have the first billion-dollar solo company by 2026 [1]. I lean toward this not happening. I would put money on even $100M not happening, barring some level of hyperinflation that changes our established understanding of what a dollar even is. But here's what I will say: hitting levels of revenue like this, with a human count so low that the input of the AI has to overwhelm the input from the humans, is the only way to prove to me that these things might actually be more than freakin' awesome tools. Blog posts from people making greenfield apps named after a fursona DJ aren't moving the needle for me on this issue.

[1] https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-pr...

replies(2): >>44622917 #>>44644144 #
27. skydhash ◴[] No.44621475{3}[source]
> Famously complicated interface with a million buttons and menus.

Photoshop is quite nice for an expert tool. Blender is the complicated one where you have to get a full-sized keyboard and know a handful of shortcuts to have a normal pace.

> The former could have a lot of value for casual amateur use; it's not going to replace the precise, high-functionality tool for professional use.

I was just discussing that in another thread. Most expert work is routine, and experts build workflows, checklists, and processes to get it done with minimum cognitive load. For that you need reliability. Their focus is on the high-leverage decision points. Take any digital artist's Photoshop setup: they will have a specific layout, a few document templates, their tweaked brushes. And most importantly, they know the shortcuts, because clicking on the tiny icons takes too much time.

The trick is not being able to compute; it's knowing the formula and just giving the parameters to a computer that will do the menial work. It's not about generating a formula that may or may not be what we want.

28. mbesto ◴[] No.44622026[source]
Search wasn't just "search". It was "put a prompt in a form and then spend minutes or hours going through various websites until I get my answer". LLMs change that. I don't have to go through 20 different people's blog posts on "which 12V 100Ah LiFePO4 battery tests for the highest watt-hours"; the LLM just gives me the answer that is most relevant across those 20 blog posts. It distills what would have taken me an hour down to seconds, or a couple of minutes.
replies(1): >>44622898 #
29. ip26 ◴[] No.44622110{3}[source]
I do agree… perhaps the thing to do is write fragments of the program, like the start and end, asking it to complete the middle. If you have precisely described how the output will be printed, for example, then you have essentially formally specified how the data should be organized…
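A minimal sketch of that workflow (hypothetical names; Python just for illustration): the human writes the start (parsing) and the end (the exact output format), and asks the model to complete only the marked middle.

    import csv
    import sys

    def read_rows(path):
        # Start fragment, written by the human: parse a CSV with a
        # name,hours header into (name, hours) tuples.
        with open(path, newline="") as f:
            return [(r["name"], float(r["hours"])) for r in csv.DictReader(f)]

    def summarize(rows):
        # The "middle" left for the model: aggregate hours per name and
        # return a list of (name, total_hours) pairs.
        raise NotImplementedError("ask the model to fill this in")

    def print_report(totals):
        # End fragment, written by the human. Pinning down the output format
        # implicitly specifies how `totals` must be organized.
        for name, hours in sorted(totals, key=lambda t: -t[1]):
            print(f"{name:<20} {hours:8.1f}")

    if __name__ == "__main__":
        print_report(summarize(read_rows(sys.argv[1])))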
30. LtWorf ◴[] No.44622898{3}[source]
> LLMs change that

Yup. Now you get a quick reply and then have to do the same job as before to validate it. Except all websites are deploying crawler countermeasures, so it takes even longer now.

31. LtWorf ◴[] No.44622903{3}[source]
I appended changing my search engine
32. LtWorf ◴[] No.44622911{4}[source]
> - Using a source to claim the opposite of what the source says.

That's because a lot of people do that all the time when arguing online. Cite something without bothering to read it.

33. LtWorf ◴[] No.44622917[source]
> Dario Amodei says we'll have the first billion dollar solo-company by 2026 [1]. I lean toward this not happening.

Why not? It's not like companies have to actually do anything beyond marketing to get insane valuations… remember Theranos?

replies(1): >>44635946 #
34. twixfel ◴[] No.44623074{3}[source]
The news industry of the future will be Joe Rogan and friends. Arguably it already is. Hard to see how that’s an improvement on what came before.
35. fullstackchris ◴[] No.44623695[source]
This comment makes no sense here. Did you read the article? The author built an entire SaaS app in a few days with an agent. That isn't "just search".
36. rightbyte ◴[] No.44623715{3}[source]
My take is that journalists were fighting enshittification for as long as they could, one looming bankruptcy leading to consolidation or a closed shop at a time.
37. clickety_clack ◴[] No.44626209{4}[source]
I don’t disagree with that at all, but that’s not what I’m talking about. The market for serving information goes where the people want to consume it. The old portals of the 90s gave way to search because it was easier for people to find what they wanted. LLMs give people an even easier way to find information. The downstream effects don’t factor into most people’s decision to use an LLM over source material.
38. 827a ◴[] No.44635946{3}[source]
Theranos peaked at ~800 employees, FYI.

Reason 1: Because any company with one individual who leverages AI to achieve a billion-dollar valuation would trivially and obviously be more valuable, and able to achieve more, if it had two people leveraging AI. And at a billion-dollar valuation, why not pay that extra salary? Why not add a third? There's a huge difference in potential output between building with a founder team of 2 or 3 people versus 1, with not that much difference in cost.

Reason 2: The most capital-efficient valuations in history are B2C companies (Instagram and WhatsApp are maybe the two best examples) during the VC boom era of the 2010s. B2C is naturally very capital-efficient: you can do inbound marketing, no sales teams, just build. But B2C success stories are rarer and rarer; the name of the game in the 2020s has generally been B2B. Cursor might be the fastest company to reach $500M in revenue, and it got there on B2B; Notion isn't building its AI tools to sell to its consumer customers; etc. B2B is a lot harder: it requires outbound marketing and sales, as well as customer-led product development that usually requires interacting with people. AI will absorb some of those roles, but to suggest it can whittle down to one person per billion dollars in valuation within a year feels too accelerated to me. The world is complex, old, and crusty.

Just look at the YC batches for 2025 [1]: Of the 375 companies in the three batches, 20 are Consumer tech (~5%).

Reason 3: Weaker, but something I think about: AI is weirdly non-differentiating/egalitarian. You see some people try to differentiate with crazy prompts or context engineering, but then next month Cursor ships an update and suddenly those aren't differentiating anymore. If you're an investor in a single-person unicorn moonshot, you have to ask: why can't some other company just come in and do the same thing you're doing, if that much can be automated through off-the-shelf systems? My feeling is that this concern would lead to lower revenue multiples on the valuation, which just makes it that much harder to hit unicorn status.

[1] https://www.ycombinator.com/companies?batch=Summer%202025&ba...

39. kinj28 ◴[] No.44644144[source]
AI is still in an experimental phase for many teams, especially when it comes to handling complex, long-term projects. For PMs and EMs, the cost-benefit analysis of AI credits vs. manual tasks is a big concern before fully committing to AI adoption. Some teams have seen great success, particularly in areas where speed and flexibility are key, but others are still waiting for clearer ROI before diving in. It’ll be interesting to see how the balance of risk and reward evolves as AI tools mature.
40. Anamon ◴[] No.44676256{3}[source]
A nice article on this appeared in the January issue of Communications of the ACM [1], with a reference to a 1979 piece by Dijkstra predicting that this would never be effective.

Being able to write code in a programming language is a feature, not a flaw. If we had always had to program in natural language, the precision and unambiguity of programming languages would be an eagerly welcomed revolution.

[1] https://cacm.acm.org/opinion/on-program-synthesis-and-large-...