
205 points | onename | 2 comments
gdiamos ◴[] No.45898849[source]
Transmeta made a technology bet that dynamic compilation could beat out-of-order (OOO) superscalar CPUs on SPEC.

It was wrong, but it was controversial among experts at the time.

I’m glad they tried it even though it turned out to be wrong. Many of the lessons learned are documented in systems conferences and incorporated into modern designs, e.g., GPUs.

To me, Transmeta is a great example of a venture investment. If it had beaten Intel on SPEC by a meaningful margin, it would have dominated the market. Sometimes the only way to get to the bottom of a complex system is to build it.

The same could be said of scaling laws and LLMs: they were theory before Dario, Ilya, OpenAI, et al. trained at scale.

replies(9): >>45898875 #>>45899126 #>>45899335 #>>45901599 #>>45902119 #>>45903852 #>>45906222 #>>45906660 #>>45908075 #
fajitaforce5 ◴[] No.45903852[source]
I was an Intel CPU architect when Transmeta started making claims. We were baffled by those claims. We were pushing the limits of our pipelines to get incremental gains, and they were claiming to beat a dedicated arch with on-the-fly translation! None of their claims made sense to ANYONE with a shred of CPU architecture experience. I think your summary has rose-colored lenses, or reflects the layman’s perspective.
replies(4): >>45904343 #>>45904657 #>>45905133 #>>45905527 #
1. empw ◴[] No.45905133[source]
Wasn't Intel trying to do something similar with Itanium, i.e., using software to translate code into VLIW instructions to exploit many parallel execution units? Only they wanted the C++ compiler to do it ahead of time rather than a dynamic recompiler. At least some people at Intel thought that was a good idea.

I wonder if the x86 teams at Intel were similarly baffled by that.

replies(1): >>45907929 #
2. BirAdam ◴[] No.45907929[source]
Itanium wasn’t really focused on running x86 code. Intel wanted native Itanium software; x86 execution was a bonus.