I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.
The signs are not there. While we may not be on an exponential curve (which would be difficult to see from the inside), we are definitely on a steep upward one, which may get steeper or may fizzle out if LLMs can only reach human-level 'intelligence' but not surpass it. The original article was a fun read though, and 360,000 words shorter than my very similar fiction novel :-)
The threshold would be “produce anything that isn’t identical or a minor transfiguration of input training data.”
In my experience, the AI assistant in my code editor can't do a damn thing that isn't widely documented, and sometimes botches tasks that are thoroughly documented (such as hallucinating parameter names that don't exist). I see this whenever I reach the edge of common use cases, where going beyond the documentation requires following an implication.
For example, AI can't seem to help me in any way with Terraform dynamic credentials, because the documentation is very sparse and the feature barely appears in any blog posts or examples online. In my setup the variable is populated dynamically at run time, and real examples of that aren't shown anywhere. I get a lot of irrelevant nonsense suggestions on how to fix it.
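For context, the shape I eventually pieced together looks roughly like the sketch below. This assumes Terraform Cloud's AWS dynamic provider credentials; the tfc_aws_dynamic_credentials variable name and object type are from my reading of the sparse docs, so treat the whole thing as illustrative rather than authoritative:

    # Workspace environment variables (set in the TFC UI, not in code; assumed names):
    #   TFC_AWS_PROVIDER_AUTH = true
    #   TFC_AWS_RUN_ROLE_ARN  = <role ARN to assume>

    # Terraform Cloud populates this variable at run time with paths to
    # generated AWS config files; your code only declares its shape.
    variable "tfc_aws_dynamic_credentials" {
      description = "Injected by Terraform Cloud for dynamic provider credentials"
      type = object({
        default = object({
          shared_config_file = string
        })
        aliases = map(object({
          shared_config_file = string
        }))
      })
    }

    provider "aws" {
      # Point the provider at the config file TFC wrote for this run
      shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]
    }

The non-obvious implication, which the assistant never surfaced, is that you never assign this variable yourself; the platform injects it, and your only job is declaring the matching type.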
AI is an amazing search engine: it can string together combinations of logic that already exist in documentation and examples, changing some names here and there, but what looks like true understanding is really just token prediction.
IMO the massive amount of training data is making the man behind the curtain look way better than he is.