I appreciate the distinction between agents and workflows - it seems to be commonly overlooked, and in my opinion it helps ground people in the tradeoff between reliability and capability. Today (and for the near future) there isn't going to be "one agent to rule them all", so these LLM workflows don't need to be incredibly capable. They just need to do what they're intended to do _reliably_ and nothing more.
I've started taking a very data engineering-centric approach to the problem, where you treat an LLM call as just another API call in a pipeline, like any other tool. It's crazy (or maybe not so crazy) what LLM workflows are capable of doing when built this way, all with increased reliability. So much so that I've tried to package those thoughts / opinions up into an AI SDK for Apache Airflow [1] (one of the more popular orchestration tools that data engineers use). This feels like the right approach, and within our customer base / community it also maps closely to the organizations that have been most successful. The number of times I've seen companies stand up an AI team without really understanding _what problem they want to solve_...
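To make the pattern concrete, here's a minimal sketch of what I mean by "the LLM is just another task in the pipeline", using plain Airflow TaskFlow and the stock OpenAI client rather than the SDK's actual API. The DAG, task names, and the ticket-triage use case are illustrative assumptions, not anything from [1] - the point is that the LLM step inherits the same retries, scheduling, and alerting as every other step:

    # Illustrative sketch only: a hypothetical ticket-triage DAG where the LLM
    # call is an ordinary Airflow task. Assumes `apache-airflow` and `openai`
    # are installed and OPENAI_API_KEY is set.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def support_ticket_triage():
        @task
        def extract_tickets() -> list[dict]:
            # In a real pipeline this would pull from a warehouse or queue.
            return [{"id": 1, "body": "App crashes on login"}]

        @task(retries=3)  # the LLM call gets the same retry semantics as any task
        def classify(tickets: list[dict]) -> list[dict]:
            from openai import OpenAI

            client = OpenAI()
            for t in tickets:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[
                        {"role": "system", "content": "Classify the ticket as bug, billing, or other. Reply with one word."},
                        {"role": "user", "content": t["body"]},
                    ],
                )
                t["category"] = resp.choices[0].message.content.strip().lower()
            return tickets

        @task
        def load(tickets: list[dict]) -> None:
            # In a real pipeline this would write back to the warehouse.
            print(tickets)

        load(classify(extract_tickets()))


    support_ticket_triage()

Nothing about the LLM step is special here, and that's the point: it's bounded, observable, and retried like everything else, which is where the reliability comes from.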