In no particular order:
LLMs are noticeably worse at some programming languages than others, presumably reflecting how much training data exists for each.
LLMs only have so much context available, so larger projects are harder to get good results in.
Fast feedback tools (e.g. a quick compiler, a type checker, a test suite) are very useful to agents. Without them, hallucinations get caught and corrected more slowly.
Some people have schedules with long uninterrupted periods, so they watch an agent work for twenty minutes on a task and think "well, I could've done that in 10-30 minutes, so where's the gain?" What they're missing is that they could be running many agents in parallel (I don't blame people for not realizing this; no one I talk to is doing it at work).
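A minimal sketch of what "many agents in parallel" can look like in practice: one git worktree per task, so each agent edits in isolation. The `my-agent` CLI here is a hypothetical stand-in for whatever coding agent you use; the worktree setup itself is plain git.

```shell
# Assumption: "my-agent" is a placeholder for your actual agent CLI.
set -e
work=$(mktemp -d)
cd "$work"
git init -q repo
cd repo
git -c user.name=tmp -c user.email=tmp@example.com \
    commit -q --allow-empty -m "init"

# One isolated checkout per task; agents can't step on each other's edits.
for task in fix-logging add-tests refactor-auth; do
  git worktree add "../$task" -b "$task"
  # my-agent --cwd "../$task" "work on $task" &   # hypothetical launch
done
git worktree list
```

Each worktree shares the same object store, so this is cheap; when an agent finishes, you review its branch and merge or discard it.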
People also don't realize they could have the agent working while they're asleep/eating lunch/in a meeting. This is why, in my experience, managers find agents more transformative than ICs do. We're in more meetings, with fewer uninterrupted periods.
People expect the agent to one-shot the implementation every time, and don't appreciate it when the agent gets them 80% of the way there. They also don't appreciate that it's basically free to try again when the agent goes completely off the rails.
A lot of people don't understand that agents are a step beyond a bare LLM in a chat window, so their attempts from last year have colored their expectations.
Some people are unwilling to work with the agent to make it produce better output, or simply don't know how. Your agent got logging wrong? Tell it to read an example of good logging and to write itself a rule that will get it right next time.
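Concretely, that correction can be as small as a rule file the agent reads on every run. The path, format, and identifiers below are illustrative; tools differ in where they look (Claude Code reads CLAUDE.md, Cursor reads .cursor/rules, and so on):

```markdown
<!-- .agent/rules/logging.md (hypothetical path and project names) -->
# Logging conventions
- Use the structured logger from `internal/log`, never print statements.
- Log at ERROR only for actionable failures; use WARN for retried ones.
- Include the request ID in every log line
  (see `handlers/checkout.go` for a good example).
```

The point is that the agent wrote this for you after one correction, and every future run starts from it instead of repeating the mistake.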