While the top coding models have become much more trustworthy lately, Grok isn't there yet. It doesn't matter if it's fast and/or free; if you can't trust a tool with your code, you can't use it.
- Boston Dynamics' Atlas does not move as gracefully as a human
- LLM writing and code are oh-so-easy to spot
- the output of diffusion models is indistinguishable from a photo... until you look at it for longer than 5 seconds and decide to zoom in because "something's wrong"
- motion in AI-generated videos is very uncanny
Maybe it's because we get used to it and therefore recognize it more easily, but AI output does seem to be getting more recognizable over time rather than less, doesn't it?
I think I'd recognize a ChatGPT email far more easily in 2025 than if you'd shown me the same email written by gpt-3.5.
Not to mention that accidents happen: not everyone has the good habit of putting every change in every project under version control, and depending on the source-control software and the environment you work in, it may not even be possible to preserve a pending change (not every project uses git).
I've heard real stories of software bugs deleting uncommitted changes, or wiping an entire hobby project from disk before it had ever been pushed to a remote. The people involved were good software engineers; they just weren't super careful, and they trusted other people's code too much.
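The cheapest insurance against that kind of loss is an automatic snapshot that lives outside the project directory. A minimal sketch in Python, assuming a plain directory tree with no VCS at all; the paths and ignore list here are hypothetical, not from anyone's actual setup:

    # Periodically copy the working tree to a timestamped backup folder.
    import shutil
    import time
    from pathlib import Path

    def snapshot(src: str, dest_root: str) -> Path:
        """Copy src into a new timestamped folder under dest_root."""
        stamp = time.strftime("%Y%m%d-%H%M%S")
        dest = Path(dest_root).expanduser() / stamp
        shutil.copytree(
            Path(src).expanduser(),
            dest,
            # Skip bulky or already-versioned directories (hypothetical list).
            ignore=shutil.ignore_patterns("node_modules", ".git"),
        )
        return dest

    if __name__ == "__main__":
        print(snapshot("~/projects/hobby", "~/backups"))  # hypothetical paths

Run something like this from cron or a scheduled task and the worst case becomes "lose an hour of work", not "lose the project".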
There's a huge difference between languages, with TypeScript web development always working the best.
There are no proper data-retention laws covering car manufacturers and self-driving development companies that I know of. [0]
[0] https://arstechnica.com/cars/2025/08/how-a-hacker-helped-win...