The cost structure was just bonkers. I replaced a big file server environment that was like $2M of Sun gear with like $600k of HP ProLiant.
The real thing that killed the division was Oracle announcing that they would no longer support IA-64. It just so happened that something like 90% of the clients using Itanium were using it for Oracle DBs.
But by that point HP was already trying to get people to transition to more traditional x86 servers that they were selling.
I was a HUGE DEC Alpha fanboy at the time (even helped port FreeBSD to DEC Alpha), so I hated Itanium with a passion. I'm sure people like me who were 64-bit MIPS and PA-RISC fanboys and fangirls also existed, and also lobbied against adoption of the Itanic where they could.
I remember when amd64 appeared, and it just made so much sense.
Itanium sounded the death knell for all of them.
The only Unix to survive with any market share is macOS (arguably because of its lateness to the party), and it has only relatively recently gone back to a more bespoke architecture.
You had AutoCAD, you had 3D Studio Max, you had After Effects, you had Adobe Premiere. And it was solid stuff - maybe not best-in-class, but good enough, and the price was right.
The common attitude in the 80s and 90s was that legacy ISAs like 68k and x86 had no future. They had zero chance to keep up with the innovation of modern RISC designs. But not only did x86 keep up, it was actually outperforming many RISC ISAs.
The true factor is out-of-order execution. Some contemporary RISC designs were out-of-order too (especially Alpha, and PowerPC to a lesser extent), but both AMD and Intel were forced to go all-in on the concept in a desperate attempt to keep the legacy x86 ISA going.
Turns out large out-of-order designs were the correct path (mostly because OoO has the side effect of being able to reorder memory accesses and execute them in parallel), and AMD/Intel had a bit of a head start, a pre-existing customer base, and plenty of revenue for R&D.
IMO, Itanium failed not because it was a bad design, but because it was on the wrong path. Itanium was an attempt to achieve roughly the same end goal as OoO, but with a completely in-order design relying on static scheduling. It had massive amounts of complexity that let it reorder memory reads. In an alternative universe where OoO (aka dynamic scheduling) failed, Itanium might actually have been a good design.
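To make the memory-reordering point concrete, here's a toy C sketch of my own (nothing from any actual toolchain, just the idea): two loads with no dependence between them. An OoO core can have both cache misses in flight at once; a strict in-order core pays the latencies back to back, and a static-scheduling compiler can only hoist the second load if it can prove it won't alias or fault.

    #include <stddef.h>

    /* Toy illustration: the two loads are independent.  An out-of-order
     * core can issue b[j] while a[i] is still missing in cache,
     * overlapping the two miss latencies.  A strict in-order core stalls
     * on the first load before it even issues the second.  A static
     * (VLIW/EPIC-style) scheduler can only hoist the second load if it
     * can prove the loads don't alias or fault. */
    long sum_two(const long *a, const long *b, size_t i, size_t j) {
        long x = a[i];   /* may miss in cache                     */
        long y = b[j];   /* independent of x, could start earlier */
        return x + y;    /* only the add depends on both loads    */
    }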
Anyway, by the early 2000s, there just wasn't much advantage to a RISC workstation (or RISC servers). x86 could keep up, was continuing to get faster and often cheaper. And there were massive advantages to having the same ISA across your servers, workstations and desktops.
He (Bob Colwell) was a key player in the Pentium Pro's out-of-order implementation.
https://www.sigmicro.org/media/oralhistories/colwell.pdf
"We should also say that the 360/91 from IBM in the 1960s was also out of order, it was the first one and it was not academic, that was a real machine. Incidentally that is one of the reasons that we picked certain terms that we used for the insides of the P6, like the reservation station that came straight out of the 360/91."
Here is his Itanium commentary:
"Anyway this chip architect guy is standing up in front of this group promising the moon and stars. And I finally put my hand up and said I just could not see how you're proposing to get to those kind of performance levels. And he said well we've got a simulation, and I thought Ah, ok. That shut me up for a little bit, but then something occurred to me and I interrupted him again. I said, wait I am sorry to derail this meeting. But how would you use a simulator if you don't have a compiler? He said, well that's true we don't have a compiler yet, so I hand assembled my simulations. I asked "How did you do thousands of line of code that way?" He said “No, I did 30 lines of code”. Flabbergasted, I said, "You're predicting the entire future of this architecture on 30 lines of hand generated code?" [chuckle], I said it just like that, I did not mean to be insulting but I was just thunderstruck. Andy Grove piped up and said "we are not here right now to reconsider the future of this effort, so let’s move on"."
That sounds like DEC Alpha to me, yet Alpha didn't take over the world. "Proprietary architecture" is a bad word, not something you want to base your future on. Without the Intel/AMD competition, x86 wouldn't have dominated for all these years.
Itanic wasn't exactly HP-PA v.3, but it was a kissing cousin. Most of the HP shops I worked with believed the rhetoric that it was going to be a straightforward, if not completely painless, upgrade from the PA-8x00 gear they were currently using.
Not so much.
The MIPS 10k line on the other hand...sigh...what might have been.
> I remember when amd64 appeared, and it just made so much sense.
And you were right.
Now, to be clear, a lot of these folks and their ideas moved the state-of-the-art in compilers massively ahead, and are a big reason compilers are so good now. Really, really smart people worked this problem.
One of the selling points for HP users was that old PA-RISC code would run via dynamic translation, while x86 code would just work on the hardware directly.
Another fun fact I remember from working at HP was that later PA-RISC chips were fabbed at Intel, because the HP-Intel agreement had Intel fabbing a certain number of chips, and since Merced was running behind... Intel-fabbed PA-RISC chips!
https://community.hpe.com/t5/operating-system-hp-ux/parisc-p...
Actually no, it was Metaflow [0] who was doing out-of-order. To quote Colwell:
"I think he lacked faith that the three of us could pull this off. So he contacted a group called Metaflow. Not to be confused with Multiflow, no connection."
"Metaflow was a San Diego group startup. They were trying to design an out of order microarchitecture for chips. Fred thought what the heck, we can just license theirs and remove lot of risk from our project. But we looked at them, we talked to their guys, we used their simulator for a while, but eventually we became convinced that there were some fundamental design decisions that Metaflow had made that we thought would ultimately limit what we could do with Intel silicon."
Multiflow [1], where Colwell worked, has nothing to do with OoO; its design is actually way closer to Itanium. So close, in fact, that the Itanium project is arguably a direct descendant of Multiflow (HP licensed the technology, and hired Multiflow's founder, Josh Fisher). Colwell claims that Itanium's compiler is nothing more than the Multiflow compiler with large chunks rewritten for better performance.
I'm pressing X: the doubt button.
I would argue that speculative execution/branch prediction and wider pipelines, both of which OoO largely benefited from, mattered more than OoO alone as the deciding factor. In fact I believe the improvements in semiconductor manufacturing process nodes contributed more to the IPC gains than OoO itself.
The late '90s to early-aughts race for the highest-frequency, highest-performance CPUs exposed not a need for a CPU-only, highly specialised foundry, but a need for sustained access to the very front of process technology – continuous, multibillion-dollar investment and a steep learning curve. Pure-play foundries such as TSMC could justify that spend by aggregating huge, diverse demand across CPUs, GPUs and SoCs, whilst only a handful of integrated device manufacturers could fund it internally at scale.
The major RISC houses – DEC, MIPS, Sun, HP and IBM – had excellent designs, yet as they pushed performance they repeatedly ran into process-cadence and capital-intensity limits. Some owned fabs but struggled to keep them competitive; others outsourced and were constrained by partners’ roadmaps. One can trace the pattern in the moves of the era: DEC selling its fab, Sun relying on partners such as TI and later TSMC, HP shifting PA-RISC to external processes, and IBM standing out as an exception for a time before ultimately stepping away from leading-edge manufacturing as well.
A compounding factor was corporate portfolio focus. Conglomerates such as Motorola, TI and NEC ran diversified businesses and prioritised the segments where their fab economics worked best – often defence, embedded processors and DSPs – rather than pouring ever greater sums into low-volume, general-purpose RISC CPUs. IBM continued to innovate and POWER endured, but industry consolidation steadily reduced the number of independent RISC CPU houses.
In the end, x86 benefited from an integrated device manufacturer (i.e. Intel) with massive volume and a durable process lead, which set the cadence for the rest of the field. The outcome was less about the superiority of a CPU-only foundry and more about scale – continuous access to the leading node, paid for by either gigantic internal volume or a foundry model that spread the cost across many advanced products.
It's a little annoying that OoO is overloaded in this way. I have seen some people suggesting we should be calling these designs "Massively-Out-of-Order" or "Great-Big-Out-of-Order" in order to be more specific, but that terminology isn't in common use.
And yes, there are some designs out there which are technically out-of-order, but don't count as MOoO/GBOoO. The early PowerPC cores come to mind.
It's not that executing instructions out-of-order benefits from complex branch prediction and wide execution units; OoO is what made it viable to start using wide execution units and complex branch prediction in the first place.
A simple in-order core simply can't extract that much parallelism; the benefits drop off quickly after two-wide superscalar. And accurate branch prediction is of limited usefulness when the pipeline is that short.
There are really only two ways to extract more parallelism. You either do complex out-of-order scheduling (aka dynamic scheduling), or you take the VLIW approach and try to solve it with static scheduling, like the Itanium. They really are just two sides of the same "I want a wide core" coin.
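To picture those two sides of the coin, here's a toy C sketch of my own (nothing Itanium-specific, purely illustrative):

    #include <stddef.h>

    /* Dynamic scheduling view: write the plain loop and let an
     * out-of-order core keep several independent iterations in flight
     * on its own. */
    void saxpy(float *y, const float *x, float a, size_t n) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    /* Static scheduling view: the compiler (or programmer) exposes the
     * parallelism up front, e.g. by unrolling so a wide in-order machine
     * has four independent multiply-adds per trip to issue without any
     * hardware reordering. */
    void saxpy_unrolled(float *y, const float *x, float a, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            y[i]     = a * x[i]     + y[i];
            y[i + 1] = a * x[i + 1] + y[i + 1];
            y[i + 2] = a * x[i + 2] + y[i + 2];
            y[i + 3] = a * x[i + 3] + y[i + 3];
        }
        for (; i < n; i++)      /* leftover iterations */
            y[i] = a * x[i] + y[i];
    }

Same work either way; the difference is whether the interleaving is decided at compile time or discovered by the hardware at run time.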
And we all know how badly the Itanium failed.
Ah, you mean the philosophy of having the CPU execute out of order.
> A simple in-order core simply can't extract that much parallelism
While yes, it is also worth noting that such a core has no data hazards when there's no pipeline at all, and thus no need for implicit pipeline bubbles or delay slots.
> And accurate branch prediction is of limited usefulness when the pipeline is that short.
You can also use a software virtual machine to make an out-of-order CPU effectively run code in-order, and you can see how slow that goes. That's why JIT VMs such as HotSpot and GraalVM for the JVM, RyuJIT for CoreCLR, and TurboFan for V8 are so much faster: once they compile to native instructions, the branch predictor can finally kick in.
> like the Itanium
> And we all know how badly the Itanium failed.
Itanium is not exactly VLIW. It is an EPIC [1] fail though.
[1]: https://en.wikipedia.org/wiki/Explicitly_parallel_instructio...
It's also interesting to note that back then the consensus was that you needed your own in-house fab, with tight integration between the fab and CPU design teams, to build the highest-performance CPUs. Merchant fabs were seen as second-best options for those who didn't need the highest performance or couldn't afford their own in-house fab. Only later did the meteoric rise of TSMC to the top spot on the semiconductor food chain upend that notion.
Linux didn't "win" nearly as much as x86 did by becoming "good enough" - Linux just happened to be around to capitalize on that victory.
The writing on the wall was the decreasing prices and increasing capability of consumer-grade hardware. Then the real game-changer followed: horizontal scalability.
Meanwhile the decision to keep Itanium in the expensive but lower-volume market meant that there simply wasn't much market growth, especially once the non-technical part of killing other RISCs failed. Ultimately Itanium was left as the recommended way in some markets to run Oracle databases (due to the partnership between Oracle and HP) and not much else, while shops that used other RISC platforms either migrated to AMD64 or moved to other RISC platforms (even forcing HP to resurrect Alpha for one last generation).
To the point that once that partnership ended with Oracle's purchase of Sun, there was a lawsuit between Oracle and HP. And a lot of angry customers, as HP-UX was still being pushed right up to the moment the acquisition was announced.
I guess Oracle/Sun SPARC is also still hanging on. I haven't seen a Sun shop since the early 2000s...
Intel made a bet on parallel processing and compilers figuring out how to organize instructions instead of doing this in silicon. It proved to be very hard to do, so the supposedly next-gen processors turned out to be more expensive and slower than the last-gen or new AMD ones.
Almost all early startups I worked with were Sun / Solaris shops. All the early ISPs I worked with had Sun boxes for their customer shell accounts and web hosts. They put the "dot in dot-com", after all...
The problem as far as I can tell as a layman is that the compiler simply doesn't have enough information to do this job at compile time. The timing of the CPU is not deterministic in the real world because caches can miss unpredictably, even depending on what other processes are running at the same time on the computer. Branches can also go differently depending on the data being processed. Branch predictors and prefetchers can optimize this at runtime using the actual statistics of what's happening in that particular execution of the program. Better compilers can do profile-directed optimization, but it's still going to be optimized for the particular situation the CPU was in during the profile run(s).
If you think of a program like an interpreter running a tight loop in an interpreted program, a good branch predictor and prefetcher are probably going to be able to predict fairly well, but a statically scheduled CPU is in trouble, because when the interpreter itself is compiled, the compiler has no idea what program the interpreter is going to be running.
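Here's a minimal sketch of that interpreter case (my own toy example, with a made-up three-opcode bytecode): the switch compiles down to a data-dependent indirect branch whose target is decided by the interpreted program, which the C compiler can't see when it compiles the interpreter, but which a runtime branch predictor can learn.

    #include <stddef.h>

    /* Toy bytecode interpreter.  Which case fires on each trip depends
     * entirely on the bytecode being run, not on anything visible when
     * the interpreter itself is compiled.  A runtime branch predictor
     * can learn the pattern of the program currently executing; a
     * statically scheduled CPU has nothing equivalent to lean on. */
    enum op { OP_PUSH, OP_ADD, OP_HALT };

    long run(const unsigned char *code, const long *consts) {
        long stack[64];
        size_t sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {               /* data-dependent indirect branch */
            case OP_PUSH: stack[sp++] = consts[code[pc++]]; break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            default:      return stack[sp - 1]; /* OP_HALT */
            }
        }
    }

Feeding it the bytecode {OP_PUSH, 0, OP_PUSH, 1, OP_ADD, OP_HALT} with constants {2, 3} returns 5; the branch pattern only exists once that data shows up at run time.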
I still run into a number of Solaris/SPARC shops, but even the most die hard of them are actively looking for the off-ramp. The writing is on that wall.
That's the usual chicken & egg problem... If they sold more units, the prices would have come down. But people weren't buying many, because the prices were high.
Itanium, like Alpha, or any other alternative architecture, would also have trouble and get stuck in that circle. x86-64, being a very inexpensive add-on to x86, managed to avoid that.