Of course, Nintendo clearly cared about the CPU a lot for marketing purposes (it's in the console's name), but from a purely technological perspective, it is wasteful. Most of the actual compute is done on the RSP anyway. So a much smaller CPU would have been a corner worth cutting, one that could have freed up enough resources to grow the texture cache to a useful size like 128x128 or so.
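For a sense of scale: a 128x128 texture at 16 bits per texel is 128 × 128 × 2 = 32,768 bytes, i.e. 32 kB, eight times the 4 kB of TMEM the RDP actually shipped with.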
It should be noted, though, that the N64 was designed with multitexturing capabilities, which would have helped with the mushy colors had games actually taken advantage of them (but they didn't, and here again the Nintendo SDK is to blame).
Discs are much cheaper to manufacture than cartridges, so if you sell games for roughly the same price as before (or even a bit cheaper), you have a surplus you can use to subsidise the cost of the console a bit.
Effectively, you'd be cutting a corner by accepting worse load times, I guess?
Keep in mind that the above ignores questions of piracy. I don't know what the actual impact of a CD-based solution would have been, but I can say for sure that the decision-makers at Nintendo thought it would have made a difference when they made their choice.
Only really in the marketing material. It's a bit like calling a 386 with an arithmetic co-processor an "80-bit machine" because of the x87's 80-bit extended-precision registers, when it was still clearly a 32-bit machine by every metric that matters.
However, I agree in general that the N64 CPU sits idle a lot of the time. It's overspecced compared to the rest of the system.
> Nintendo had a hard enough time with preventing piracy and unlicensed games with the NES and SNES [...]
Yes, so I'm not sure that the cartridge drawbacks bought them that much in terms of piracy protection?
I agree that the PS1 had more piracy, but I'm not sure that actually diminished its success?
How? The texture RAM (TMEM) lives in the RDP, not in the CPU.
At least in my corner of the world (Spain), piracy improved its success. Everybody wanted the PSX because of how cheap it was; I think it outsold the N64 10:1 here.
But hardware was transitioning rapidly: what we "knew" one year was gone the next, and Nintendo was lucky to have made enough right choices, and backed enough good games, to survive the transition. They just got some bets wrong and calculated some tradeoffs poorly.
For example, almost everything Kaze is showing off was technically doable on the original hardware, but devs were crunching to meet deadlines, and nobody stopped to ask whether "let's put a texture on this cube" deserved another ten hours of engineering time to optimize. Cartridges needed to be manufactured by Christmas. A lot of games made optimization tradeoffs that were simply wrong and never tested them to find out, much like the HellDivers 2 game-size issue.
Sega meanwhile flubbed the transition in like four different ways and died. Now they have the blue hedgehog hooked up to a milking machine. Their various transition consoles are hilariously bad: "Our CPU and rasterizer can't actually do real polygon rendering and can't fake it fast enough for 3D graphics anyway. Oh well, what about two of them?"
Of course, even 32 bits are massively more than they actually need for the paltry amount of memory the machine has (4 MB of RDRAM, or 8 MB with the Expansion Pak), even if you map the ROM and various devices into the same virtual address space.
Anyway, the real problem is that TMEM was not a hardware-managed cache but a scratchpad RAM fully under the programmer's control, which meant that the whole texture had to fit within a meagre 4 kB of RAM! It is the same mistake that Sony and IBM later made with the Cell's SPE local stores.
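To make that 4 kB limit concrete, here's a rough, purely illustrative C helper (not any real SDK call) that computes the largest square texture that fits in TMEM for a given texel size; the 2 kB figure for color-indexed formats reflects that half of TMEM is reserved for the palette (TLUT).

    #include <math.h>
    #include <stdio.h>

    /* Illustrative only, not an SDK function: largest square texture that
     * fits in the RDP's 4 kB TMEM for a given texel size in bits.
     * Color-indexed (CI) formats only get 2 kB of texel data, because the
     * other half of TMEM holds the palette (TLUT). */
    static int max_square_texture(int bits_per_texel, int color_indexed)
    {
        int tmem_bytes = color_indexed ? 2048 : 4096;
        long texels = (long)tmem_bytes * 8 / bits_per_texel;
        return (int)floor(sqrt((double)texels));
    }

    int main(void)
    {
        printf("RGBA 16-bit: %d\n", max_square_texture(16, 0)); /* ~45x45, 32x64 in practice */
        printf("CI 8-bit:    %d\n", max_square_texture(8, 1));  /* ~45x45 */
        printf("CI 4-bit:    %d\n", max_square_texture(4, 1));  /* 64x64 */
        return 0;
    }

However you slice it, something around 32x32 to 64x64 is the ceiling per tile, which is a big part of why so many N64 surfaces look smeared once bilinear filtering stretches those few texels across the screen.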