LLMs are more tech demo than product right now, and it could take many years for their full impact to become apparent.
> I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code
https://www.businessinsider.com/anthropic-ceo-ai-90-percent-...
This seems either wildly optimistic, or it comes with a giant asterisk: the AI writes the code by predicting tokens, and a human then has to double-check and refine it.
Again. We had parts of one of 3 datasets split across ~40 files, and we had to manipulate and save them before doing anything else. A colleague asked ChatGPT to write the code for it, and the result was single threaded and not feasible. I pulled up htop, and upon seeing it was using only one core, I suggested she ask ChatGPT to make the conversion process different files in different threads, and we basically went from absolutely slow to quite fast. But that presupposes that the person using the code knows what's going on, why, what is not going on, and when it's possible to do something different. Using it without asking yourself more about the context is a terrible use imho, but it's absolutely the direction I see us headed in, and I'm not a fan of it.
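To make the fix concrete, here's a minimal sketch of the kind of change described above: instead of converting the ~40 files one after another, hand each file to a worker pool. `convert_file` is a hypothetical stand-in for whatever per-file manipulation the dataset actually needed; the filenames and suffix are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def convert_file(path: Path) -> Path:
    # Placeholder for the real work: read one file, manipulate it, save it.
    out = path.with_suffix(".converted")
    out.write_bytes(path.read_bytes())
    return out

def convert_all(paths, max_workers=None):
    # One task per file. With the per-file work spread across workers,
    # htop shows several busy cores instead of a single pegged one
    # (assuming the work releases the GIL, e.g. file I/O or native code;
    # for pure-Python CPU-bound work, swap in ProcessPoolExecutor).
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(convert_file, paths))
```

The key point isn't the pool itself but knowing to ask for it: the single-threaded version ChatGPT produced first was correct, just unusably slow, and only someone watching htop would know why.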