
423 points serjester | 6 comments
1. extr ◴[] No.43538417[source]
The problem I find in many cases is that people are restrained by their imagination of what's possible, so they target existing workflows for AI. But existing workflows exist for a reason: someone already wanted to do that, and there have been countless man-hours put into the optimization of the UX/UI. And by definition they were possible before AI, so using AI for them is a bit of a solution in search of a problem.

Flights are a good example, but I often cite Uber as a good one too. Nobody wants to tell their assistant to book them an Uber - the UX/UI is so streamlined that it's almost always easier to just do it yourself (or if you are too important for that, you probably have a private driver already). Basically anything you can do with an iPhone and the top 20 apps is in this category. You are literally competing against hundreds of engineers/product designers who had no other goal than to build the best possible experience for accomplishing X. Even if LLMs would have been helpful a priori, they aren't once every edge case has already been enumerated and planned for.

replies(2): >>43538507 #>>43538886 #
2. lolinder ◴[] No.43538507[source]
> You are literally competing against hundreds of engineers/product designers who had no other goal than to build the best possible experience for accomplishing X.

I think part of what's been happening here is that the hubris of the AI startups is really showing through.

People working on these startups are by definition much more likely than average to have bought the AI hype. And what's the AI hype? That AI will replace humans at somewhere between "a lot" and "all" tasks.

Given that we're filtering for people who believe that, it's unsurprising that they consciously or unconsciously devalue all the human effort that went into the designs of the apps they're looking to replace and think that an LLM could do better.

replies(1): >>43538967 #
3. arionhardison ◴[] No.43538886[source]
> The problem I find in many cases is that people are restrained by their imagination of what's possible, so they target existing workflows for AI.

I concur and would like to add that they are also restrained by the limitations of existing "systems" and our implicit and explicit expectations of those systems. I am currently attempting to mitigate the harm done by this restriction by starting with a first-principles analysis of the problem being solved before starting the work. For example, let's take a well-established and well-documented system like the SSA.

When attempting to develop, refactor, or extend such a system, what is the proper thought process? As I see it, there are two paths:

Path 1:

  a) Break down the existing workflows

  b) Identify key performance indicators (KPIs) that align with your business goals

  c) Collect and analyze data related to those KPIs using BPM tools

  d) Find the most expensive, worst-performing workflows

  e) Automate them E2E w/ interface contracts on either side
This approach locks you into the existing restrictions of the system, workflows, implementation, etc.
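Step (d) of Path 1 can be sketched as a simple scoring pass over workflow metrics. This is only an illustrative sketch; the workflow names, cost figures, and the cost-weighted-by-failure-rate scoring heuristic are all invented for the example, not taken from any real BPM tool.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    annual_cost: float   # dollars spent running the workflow per year
    success_rate: float  # fraction of runs completing without rework

def automation_candidates(workflows, top_n=3):
    """Rank workflows by cost weighted by failure rate, worst first."""
    scored = sorted(
        workflows,
        key=lambda w: w.annual_cost * (1.0 - w.success_rate),
        reverse=True,
    )
    return scored[:top_n]

# Hypothetical KPI data collected in steps (b) and (c).
workflows = [
    Workflow("claims-intake", 2_000_000, 0.80),
    Workflow("address-change", 300_000, 0.99),
    Workflow("benefit-recalc", 1_500_000, 0.70),
]
for w in automation_candidates(workflows, top_n=2):
    print(w.name)  # benefit-recalc, then claims-intake
```

The point of the sketch is that Path 1 is purely data-driven: the candidates for E2E automation fall out of the existing system's own metrics, which is exactly why it inherits that system's restrictions.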

Path 2:

  a) Analyze the system to understand its goal in terms of first principles, e.g.: What is the mission of the SSA? To move money based on conditional logic.

  b) Ask what systems / data structures are closest to this function, and whether the legacy system reflects this at its core, e.g.: the SSA should just be a ledger, IMO

  c) If yes, go to "Path 1"; if no, go to "d"

  d) Identify the core function of the system, the critical path (core workflow) and all required parties

  e) Make MVP which only does the bare min
By following Path 2 and starting with an AI analysis of the actual problem, rather than the problem as it exists as a solution within an existing system, I believe the previous restrictions can be avoided.
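The "SSA is just a ledger" framing from steps (a) and (b) can be sketched as a bare-minimum MVP (step (e)): a double-entry ledger plus conditional payment logic. Every name, account, and eligibility rule here is hypothetical, made up for the example; a real benefits system would be vastly more involved.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    entries: list = field(default_factory=list)

    def post(self, debit: str, credit: str, amount: int):
        """Record a double-entry transfer of `amount` cents."""
        self.balances[debit] = self.balances.get(debit, 0) - amount
        self.balances[credit] = self.balances.get(credit, 0) + amount
        self.entries.append((debit, credit, amount))

def pay_benefit(ledger: Ledger, beneficiary: str, eligible: bool, amount: int):
    """The 'conditional logic' part: move money only when eligibility holds."""
    if eligible:
        ledger.post("trust-fund", beneficiary, amount)

ledger = Ledger()
pay_benefit(ledger, "alice", eligible=True, amount=150_000)
pay_benefit(ledger, "bob", eligible=False, amount=150_000)
print(ledger.balances)  # {'trust-fund': -150000, 'alice': 150000}
```

The design choice the sketch illustrates: the ledger is the core, and eligibility rules are a thin layer over it, which is the opposite of starting from the legacy system's workflows.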

Note: Obviously this is a gross oversimplification of the project management process, and there are usually external factors that weigh in and decide which path is possible for a given initiative. My goal here was just to highlight a specific deviation from my normal process that has yielded benefits so far in my own personal experience.

4. arionhardison ◴[] No.43538967[source]
> I think part of what's been happening here is that the hubris of the AI startups is really showing through.

I think it is somewhat reductive to assign this "hubris" to "AI startups". I would posit that this hubris is more akin to the superiority we feel as human beings.

I have heard people say several times that they "treat AI like a Jr. employee". I think that within the context of a project, AI should be treated based on its level of contribution. If the AI is the expert, I am not going to approach it as if I am an SME who knows exactly what to ask; I am going to focus on the thing I know best, and ask questions around that to discover and learn the best approach. Obviously there is nuance here that is outside the scope of this discussion, but these two fundamentally different approaches have yielded materially different outcomes in my experience.

replies(1): >>43541238 #
5. hexasquid ◴[] No.43541238{3}[source]
Treat AI like a junior employee?

Absolutely not. When giving tasks to an AI, we supply them with context, examples of what to do, examples of what not to do, and we clarify their role and job. We stick with them as they work and direct them accordingly when something goes wrong.

I've no idea what would happen if we treated a junior developer like that.

replies(1): >>43543093 #
6. aledalgrande ◴[] No.43543093{4}[source]
They would become a senior developer? lol ;)