skwb No.44451927
It's hard to describe, but it has felt like LLMs have completely sucked the energy out of computer vision. Like... I know CVPR still happens and there's great research that comes out of it, but almost every single job posting in ML is about using LLMs to do this and that, to the detriment of computer vision.
1. porphyra No.44452268
I feel like 3D reconstruction/bundle adjustment is one of those areas where LLMs and the new wave of AI haven't managed to get a significant foothold. Recently VGGT won best paper, which is good for them, but for the most part, methods like NeRF and Gaussian Splatting still rely on good old COLMAP for bundle adjustment using SIFT features.
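
To make the "good old COLMAP" pipeline concrete, here is a minimal Python sketch, assuming the pycolmap bindings and hypothetical paths: SIFT extraction, exhaustive matching, then incremental mapping (which runs bundle adjustment) to produce the poses and sparse points that NeRF/Gaussian Splatting pipelines typically start from.

    import pathlib

    import pycolmap

    # Hypothetical paths, purely for illustration.
    database_path = "scene/database.db"
    image_dir = "scene/images"
    output_dir = "scene/sparse"
    pathlib.Path(output_dir).mkdir(parents=True, exist_ok=True)

    pycolmap.extract_features(database_path, image_dir)  # SIFT keypoints + descriptors into the database
    pycolmap.match_exhaustive(database_path)             # pairwise feature matching
    # Incremental SfM: triangulation + bundle adjustment; returns {index: Reconstruction}
    reconstructions = pycolmap.incremental_mapping(database_path, image_dir, output_dir)
    reconstructions[0].write(output_dir)                 # camera poses + sparse point cloud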

Also, LLMs really suck at some basic tasks like counting the sides of a polygon.

2. KaiserPro No.44453012
> LLMs really suck at some basic tasks like counting the sides of a polygon.

Oh indeed, but that's not using tokens correctly. If you want to do that, then tokenise the polygon's sides directly...
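
To illustrate the point (a toy sketch, not any real model's tokenizer): if the polygon reaches the model as explicit per-vertex tokens rather than as pixels, counting sides stops being a perception problem at all.

    from typing import List, Tuple

    def polygon_to_tokens(vertices: List[Tuple[float, float]]) -> List[str]:
        """Serialize a polygon as one token per vertex, e.g. '<v 0.95 0.31>'."""
        return [f"<v {x:.2f} {y:.2f}>" for x, y in vertices]

    # A regular pentagon; with explicit vertex tokens the side count is just the sequence length.
    pentagon = [(0.00, 1.00), (0.95, 0.31), (0.59, -0.81), (-0.59, -0.81), (-0.95, 0.31)]
    print(len(polygon_to_tokens(pentagon)))  # -> 5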