
446 points walterbell | 1 comments | | HN request time: 0.2s | source
AIorNot ◴[] No.43576024[source]
This is another silly rant against AI tools, one that doesn't offer useful or insightful suggestions on how to adapt, or an informed study of the areas of concern. Instead it capitalizes on the natural worries we have on HN, our generic fear that critical thinking will be lost when AI takes over our jobs. It's much like the concerns raised about the web in the pre-internet age, or about SEO in the digital marketing age.

OSINT only exists because of internet capabilities and Google search, i.e. someone had to learn how to use those new tools just a few years ago and apply critical thinking to them.

AI tools and models are rapidly evolving, with deeper capabilities appearing in the models all the time. The tools are hardly set in stone, and workflows will evolve with them. It's still up to humans to provide oversight and evolve alongside the tools; the skill of overseeing AI is something that will develop too.

replies(3): >>43576117 #>>43576130 #>>43576484 #
cmiles74 ◴[] No.43576130[source]
So weak! No matter how good a model gets, it will always present information with confidence regardless of whether or not it's correct. Anyone who has spent five minutes with these tools knows this.
replies(1): >>43577244 #
mattgreenrocks ◴[] No.43577244[source]
I’ve read enough pseudo-intellectual Internet comments that I tend to subconsciously apply a slight negative bias to posts that appear to try too hard to project an air of authority via confidence. It isn’t always the best heuristic, as it leaves out the small set of competent and well-marketed people. But it certainly deflates my expectations around LLM output.