To quote their purpose:
>The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
While you may argue it is not intelligent, it is certainly AI, if AI is anything in the last 70 years utilizing a machine that could be considered an incremental step towards simulating intelligence and learning.
This is "it's just an engineering problem, we just have to follow the roadmap", except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
No, this is "it's a science problem". All this:
> except the roadmap is illegible and the incremental steps noodle around and lead somewhere else.
is what makes it science rather than engineering.
From the outside, though, it is tough to decide whether somebody is doing proper science or just producing nonsense. Then again, following a hunch or an intuition may also look like nonsense from the outside.
Several colleagues of mine have had to switch out of scientific machine learning as a discipline because the funding just isn't there anymore. All the money is in generic LLM research and generating pictures slightly better.
Second, I'm not sure what you are saying exactly: do you think "experiments in cold fusion in a test tube" are a step forward for science? Do you think a serious scientist would believe that?
As I said, playing science, and doing proper science, are two entirely different things, but hard to distinguish from the outside.
Leaving money out of it, my point is that they weren't doing fusion, they were doing fusion research. Their device was for fusion, but it was not a working fusion device. Similarly, the software of AI researchers is not working AI software, and they are not doing AI, apart from the semantic shift by which we now call it AI anyway and coined the term AGI to replace the former meaning.
It's not correct to say that an experiment, with the intent of finding out how to do a thing, is equal to the goal. It's a step.
Calling it "incremental" is misleading, since all steps are incremental, and assuming you're doggedly determined and exit blind alleys and circles, you will eventually arrive, if the destination exists. But "incremental" suggests you know the distance, or can at least bound it, and know in some sense which way to go. Like the whole thing is planned.
So saying that AI "is anything in the last 70 years utilizing a machine that could be considered an incremental step towards [AI]" is misleading in both those ways. The process is not the goal, and the goal is not being approached at a known rate.