To me it's totally obvious that we'll see a plethora of very valuable startups using RL techniques to solve real-world problems in practical areas of engineering .. and I just get blank stares when I talk about this :]
I've stopped saying AI when I mean ML or RL .. because people equate LLMs with AI.
We need better ML / RL algos for CV tasks:
- detecting lines from pixels
- detecting geometry in pointclouds
- constructing 3D from stereo images, photogrammetry, 360 panoramas
These might be used by LLMs but are likely built using RL or 'classical' ML techniques, tapping into the vast parallel matmul compute we now have in GPUs, multicore CPUs, and NPUs.
Also, LLMs really suck at some basic tasks like counting the sides of a polygon.
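For the first task on that list, detecting lines from pixels, the classical workhorse is the Hough transform: every edge pixel votes for all the (rho, theta) line parameters it could lie on, and peaks in the accumulator are lines. A minimal numpy sketch (the toy edge map and bin counts are my own illustration, not production code):

```python
import numpy as np

def hough_lines(points, h, w, n_theta=180, top_k=1):
    # Accumulate votes over (rho, theta) line parameters:
    # every edge pixel (y, x) lies on rho = x*cos(theta) + y*sin(theta).
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)  # rho index offset by diag
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    # The strongest accumulator cells are the detected lines
    flat = np.argsort(acc, axis=None)[::-1][:top_k]
    peaks = [np.unravel_index(i, acc.shape) for i in flat]
    return [(r - diag, thetas[t]) for r, t in peaks]

# Toy edge map: a vertical line x = 5 in a 200x20 image
pts = [(y, 5) for y in range(200)]
lines = hough_lines(pts, h=200, w=20)
# lines[0] recovers (rho=5, theta=0.0), i.e. the line x = 5
```

This is exactly the kind of embarrassingly parallel vote-counting that maps well onto the GPU / NPU compute mentioned above.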
==> For me it is more something like:
Source = crude video or photo pixels ===> find many simple rectangular surfaces glued to one another.
This is, for me, how you get easily to detecting the rather complex geometry of any room.
I'm hopeful that VLMs will "fan out" into a lot of positive outcomes for computer vision.
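Finding those planar surfaces in real scan data is classically done with RANSAC: repeatedly fit a plane to three random points and keep the plane that explains the most points. A minimal numpy sketch (the toy cloud, iteration count, and tolerance are assumptions of mine):

```python
import numpy as np

def ransac_plane(pts, n_iters=200, tol=0.02, seed=None):
    # RANSAC: fit a plane to 3 random points, keep the plane
    # with the most inliers (points within distance tol of it).
    rng = np.random.default_rng(seed)
    best_n, best_d = None, None
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        s = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ s[0]
        mask = np.abs(pts @ n - d) < tol
        if mask.sum() > best_mask.sum():
            best_n, best_d, best_mask = n, d, mask
    return best_n, best_d, best_mask

# Toy cloud: 300 points on the floor plane z = 0 plus 60 random outliers
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 5, (2, 300)).T, np.zeros(300)])
cloud = np.vstack([floor, rng.uniform(0, 5, (60, 3))])
n, d, mask = ransac_plane(cloud, seed=1)
# n comes out (close to) +/- (0, 0, 1) and the 300 floor points are inliers
```

Run repeatedly on the not-yet-explained remainder, this peels a scene into planes; gluing adjacent planes along their shared edges is then the room-geometry step described above.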
Not in the defense sector, or aviation, or UAVs, or automotive, etc. Any proper real-time vision task where you have to computationally interact with visual data is unsuited to LLMs.
Nobody controls a drone, missile, or vehicle by taking a screenshot, sending it to ChatGPT, and having it do math mid-flight. Anything that requires spatial intelligence, as the title of the thread says, is unsuited to a language model.
Similarly, I use another algo to detect pipe runs, which tend to appear as half cylinders in the pointcloud: the scanner usually sees one side, while the other side is hidden, hard to access, or up against a wall.
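For what it's worth, fitting a circle to a half-visible cross-section is a small linear problem: the Kasa algebraic fit recovers center and radius even from a partial arc. A sketch of that idea (a generic method, not necessarily the parent's exact algo, and in 2D; a pipe run adds an axis direction on top):

```python
import numpy as np

def fit_circle(xy):
    # Kasa fit: (x-a)^2 + (y-b)^2 = r^2 rearranges into the linear system
    # 2a*x + 2b*y + c = x^2 + y^2  with  c = r^2 - a^2 - b^2.
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# Pipe cross-section: circle centered at (3, 1) with radius 0.5,
# but only a 90-degree arc visible (the side facing the scanner)
t = np.linspace(0, np.pi / 2, 50)
arc = np.column_stack([3 + 0.5 * np.cos(t), 1 + 0.5 * np.sin(t)])
cx, cy, r = fit_circle(arc)
# Recovers cx = 3, cy = 1, r = 0.5 despite seeing only a quarter of the circle
```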
So, I guess my point is the devil is in the details .. and machine learning can improve even further on whatever good heuristics we come up with.
Also, when you go thru a whole pointcloud, you have a lot of data to sift thru, so you want something fairly efficient, even if you're using multiple GPUs to do the heavy matmul lifting.
You can think of RL as an optimization: greatly speeding up something like Monte Carlo tree search by learning to guess good solutions earlier.
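That idea is roughly what AlphaZero-style PUCT selection does: a learned policy prior steers the tree search toward promising branches before many simulations are spent. A toy sketch of just the selection rule (the constant c and the toy numbers are my own):

```python
import math

def puct(q, prior, parent_n, child_n, c=1.5):
    # AlphaZero-style selection score: value estimate q plus an
    # exploration bonus scaled by the learned policy prior.
    return q + c * prior * math.sqrt(parent_n) / (1 + child_n)

# Three unvisited moves. With a uniform prior the search has no idea
# where to start; a learned policy that puts 0.7 mass on move 2 makes
# the search expand that branch first, saving effort on the others.
priors = [0.1, 0.2, 0.7]
scores = [puct(q=0.0, prior=p, parent_n=1, child_n=0) for p in priors]
best = max(range(len(scores)), key=scores.__getitem__)
# best == 2: the search tries the policy's favorite move first
```

As the search accumulates visits, the q term takes over from the prior, so a wrong guess by the network still gets corrected, just more slowly.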