
331 points by giuliomagnifico | 72 comments
1. ndiddy ◴[] No.45377533[source]
Fun fact: Bob Colwell (chief architect of the Pentium Pro through Pentium 4) recently revealed that the Pentium 4 had its own 64-bit extension to x86 that would have beaten AMD64 to market by several years, but management forced him to disable it because they were worried that it would cannibalize IA64 sales.

> Intel’s Pentium 4 had our own internal version of x86–64. But you could not use it: we were forced to “fuse it off”, meaning that even though the functionality was in there, it could not be exercised by a user. This was a marketing decision by Intel — they believed, probably rightly, that bringing out a new 64-bit feature in the x86 would be perceived as betting against their own native-64-bit Itanium, and might well severely damage Itanium’s chances. I was told, not once, but twice, that if I “didn’t stop yammering about the need to go 64-bits in x86 I’d be fired on the spot” and was directly ordered to take out that 64-bit stuff.

https://www.quora.com/How-was-AMD-able-to-beat-Intel-in-deli...

replies(11): >>45377674 #>>45377914 #>>45378427 #>>45378583 #>>45380663 #>>45382171 #>>45384182 #>>45385968 #>>45388594 #>>45389629 #>>45391228 #
2. wmf ◴[] No.45377674[source]
It wasn't recent; Yamhill has been known since 2002. A detailed article about this topic just came out: https://computerparkitecture.substack.com/p/the-long-mode-ch...
3. kstrauser ◴[] No.45377914[source]
"If you don't cannibalize yourself, someone else will."

Intel has a strong history of completely mis-reading the market.

replies(4): >>45378417 #>>45380495 #>>45386139 #>>45394743 #
4. zh3 ◴[] No.45378417[source]
Andy Grove, "Only the Paranoid Survive":

Quote: Business success contains the seeds of its own destruction. Success breeds complacency. Complacency breeds failure. Only the paranoid survive.

- Andy Grove, former CEO of Intel

From Wikipedia: https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...

Takeaway: Be paranoid about MBAs running your business.

replies(1): >>45378841 #
5. jcranmer ◴[] No.45378427[source]
The story I heard (which I can't corroborate) was that it was Microsoft that nixed Intel's alternative 64-bit x86 ISA, telling it to implement AMD's version instead.
replies(2): >>45379105 #>>45381552 #
6. h4ck_th3_pl4n3t ◴[] No.45378583[source]
I wanted to mention that the Pentium 4 (Prescott) that was marketed as the Centrino in laptops had 64bit capabilities, but it was described as 32bit extended mode. I remember buying a laptop in 2005(?) which I first ran with XP 32bit, then downloading the wrong Ubuntu 64bit Dapper Drake image and finding the 64bit kernel running... and being super confused about it.

Also, for a long while, Intel rebranded the Pentium 4 as Intel Atom, which then usually got an iGPU on top and a bit higher clock rates. No idea if this is still the case (post Haswell changes) but I was astonished to buy a CPU 10 years later and find the same kind of oldskool cores in it, just with some modifications, and actually with worse L3 cache than the Centrino variants.

core2duo and core2quad were peak coreboot hacking for me, because at the time the intel ucode blob was still fairly simple and didn't contain all the quirks and errata fixes that more modern cpu generations have.

replies(6): >>45379425 #>>45379498 #>>45379528 #>>45379547 #>>45380006 #>>45385421 #
7. zer00eyz ◴[] No.45378841{3}[source]
> Takeaway: Be paranoid about MBAs running your business.

Except Andy is talking about himself and Noyce, the engineers, getting it wrong: (watch a few minutes of this to get the gist of where they were vs Japan) https://www.youtube.com/watch?v=At3256ASxlA&t=465s

Intel has a long history of sucking, and other people stepping in to force them to get better. Their success has been accident and intervention over and over.

And this isn't just an Intel thing, this is kind of an American problem (and maybe a business/capitalism problem). See this take on steel: https://www.construction-physics.com/p/no-inventions-no-inno... that sounds an awful lot like what is happening to Intel now.

replies(3): >>45380431 #>>45381083 #>>45387582 #
8. smashed ◴[] No.45379105[source]
Microsoft did port some versions of Windows to Itanium, so they did not reject it at first.

With poor market demand and AMD's success with amd64, Microsoft did not support Itanium in Vista and later desktop versions, which signaled the end of Intel's Itanium.

replies(3): >>45379384 #>>45379495 #>>45380934 #
9. Analemma_ ◴[] No.45379384{3}[source]
Microsoft also ships/shipped a commercial compiler with tons of users, and so they were probably in a position to realize early that the hypothetical "sufficiently smart compiler" which Itanium needed to reach its potential wasn't actually possible.
replies(1): >>45382728 #
10. cogman10 ◴[] No.45379425[source]
Are you referring to PAE? [1]

[1] https://en.wikipedia.org/wiki/Physical_Address_Extension

replies(2): >>45380277 #>>45380383 #
11. ◴[] No.45379495{3}[source]
12. mjg59 ◴[] No.45379498[source]
Pentium 4 was never marketed as Centrino - that came in with the Pentium M, which was very definitely not 64-bit capable (and didn't even officially have PAE support to begin with). Atom was its own microarchitecture aimed at low power use cases, which Pentium 4 was definitely not.
13. SilverElfin ◴[] No.45379528[source]
Speaking of marketing, that era of Intel was very weird for consumers. In the 1990s, they had iconic ads and words like Pentium or MMX became powerful branding for Intel. In the 2000s I think it got very confused. Centrino? Ultrabook? Atom? Then for some time there was Core. But it became hard to know what to care about and what was bizarre corporate speak. That was a failure of marketing. But maybe it was also an indication of a cultural problem at Intel.
replies(2): >>45388074 #>>45394782 #
14. marmarama ◴[] No.45379547[source]
Centrino was Intel's brand for their wireless networking and for laptops that used their wireless chipsets; the CPUs in those laptops were all P6-derived (Pentium M, Core Duo).

Possibly you meant Celeron?

Also the Pentium 4 uarch (Netburst) is nothing like any of the Atoms (big for the time out-of-order core vs. a small in-order core).

15. kccqzy ◴[] No.45380006[source]
In 2005 you could already buy Intel processors with AMD64. It just wasn't called AMD64 or Intel64; it was called EM64T. During that era running 64-bit Windows was rare but running 64-bit Linux was pretty commonplace, at least amongst my circle of friends. Some Linux distributions even had an installer that told the user they were about to install 32-bit Linux on a computer capable of running 64-bit Linux (perhaps YaST?).
replies(1): >>45381669 #
16. esseph ◴[] No.45380277{3}[source]
No, EM64T
17. seabrookmx ◴[] No.45380383{3}[source]
PAE is a 32-bit feature that was around long before AMD64. OP means EM64T: https://www.intel.com/content/www/us/en/support/articles/000...
18. wslh ◴[] No.45380431{4}[source]
Andy Grove explained this very clearly in his book. By the way, the parallel works if you replace Japan with China in the video. In the late 1970s and 1980s, Japan initially reverse engineered memory chips, and soon it became impossible to compete with them. The Japanese government also heavily subsidized its semiconductor industry during that period.

My point isn't to take a side, but simply to highlight how history often repeats itself, sometimes almost literally rather than merely rhyming.

19. nextos ◴[] No.45380495[source]
I don't think it's just mis-reading. It's also internal politics. How many at Nokia knew that the Maemo/MeeGo series was the future, rather than Symbian? I think quite a few. But Symbian execs fought to make sure Maemo didn't get a mobile radio. In most places, internal feuds and little kingdoms prevail over optimal decisions for the entire organization. I imagine lots of people at Intel were deeply invested in IA-64. Same thing repeats mostly everywhere. For example, from what I've heard from insiders, ChromeOS vs Android battles at Google were epic.
replies(1): >>45388063 #
20. kimixa ◴[] No.45380663[source]
That's no guarantee it would succeed though - AMD64 also cleaned up a number of warts on the x86 architecture, like more registers.

While I suspect the Intel equivalent would do similar things, simply from being a big enough break it's an obvious thing to do, there's no guarantee it wouldn't be worse than AMD64. But I guess it could also be "better" from a retrospective perspective.

And also remember at the time the Pentium 4 was very much struggling to get the advertised performance. One could argue that one of the major reasons that the AMD64 ISA took off is that the devices that first supported it were (generally) superior even in 32-bit mode.

EDIT: And I'm surprised it got as far as silicon. AMD64 was "announced" and the spec released before the Pentium 4 was even released, over 3 years before the first AMD implementations could be purchased. I guess Intel thought they didn't "need" to be public about it? And the AMD64 extensions cost a rather non-trivial amount of silicon and engineering effort to implement - did the plan for Itanium change late enough in the P4 design that it couldn't be removed? Or perhaps this all implies it was a much less far-reaching (and so less costly) design?

replies(5): >>45381174 #>>45381211 #>>45384598 #>>45385380 #>>45386422 #
21. wmf ◴[] No.45380934{3}[source]
Microsoft supported IA-64 (Itanium) and AMD64 but they refused to also support Yamhill. They didn't want to support three different ISAs.
replies(1): >>45386283 #
22. II2II ◴[] No.45381083{4}[source]
> Intel has a long history of sucking, and other people stepping in to force them to get better. Their success has been accident and intervention over and over.

If one can take popular histories of Intel at face value, they have had enough accidental successes, avoided enough failures, and outright failed so many times that they really ought to know better.

The Itanium wasn't their first attempt to create an incompatible architecture, and it sounds like it was incredibly successful compared to the iAPX 432. Intel never intended to get into microprocessors, wanting to focus on memory instead. Yet they picked up a couple of contracts (which produced the 4004 and 8008) to survive until they reached their actual goal. Not only did it help the company at the time, but it proved essential to the survival of the company when the Japanese semiconductor industry nearly obliterated American memory manufacturers. On the flip side, the 8080 was source compatible with the 8008. Source compatibility would help sell it to users of the 8008. It sounds like the story behind the 8086 is similar, albeit with a twist: not only did it lead to Intel's success when it was adopted by IBM for the PC, but it was intended as a stopgap measure while the iAPX 432 was produced.

This, of course, is a much abbreviated list. It is also impossible to suggest where Intel would be if they made different decisions, since they produced an abundance of other products. We simply don't hear much about them because they were dwarfed by the 80x86 or simply didn't have the public profile of the 80x86 (for example: they produced some popular microcontrollers).

replies(2): >>45381196 #>>45385406 #
23. chasil ◴[] No.45381174[source]
The times that I have used "gcc -S" on my code, I have never seen the additional registers used.

I understand that r8-r15 require a REX prefix, which is hostile to code density.

I've never done it with -O2. Maybe that would surprise me.

replies(3): >>45381498 #>>45381833 #>>45387856 #
24. asveikau ◴[] No.45381196{5}[source]
Windows NT also originally targeted a non-x86 CPU from Intel, the i860.
25. ghaff ◴[] No.45381211[source]
As someone who followed IA64/Itanium pretty closely, it's still not clear to me the degree to which Intel (or at least groups within Intel) thought IA64 was a genuinely better approach and the degree to which Intel (or at least groups within Intel) simply wanted to get out from existing cross-licensing deals with AMD and others. There were certainly also existing constraints imposed by partnerships, notably with Microsoft.
replies(2): >>45381402 #>>45382598 #
26. ajross ◴[] No.45381402{3}[source]
Both are likely true. It's easy to wave it away in hindsight, but there was genuine energy and excitement about the architecture in its early days. And while the first chips were late and on behind-the-cutting-edge processes, they were actually very performant (FPU numbers were even world-beating -- parallel VLIW dispatch really helped here).

Lots of people loved Itanium and wanted to see it succeed. But surely the business folks had their own ideas too.

replies(3): >>45381455 #>>45383151 #>>45383639 #
27. kimixa ◴[] No.45381455{4}[source]
Yes - VLIW seems to lend itself to computation-heavy code; it's used to this day in many DSP architectures (and arguably influences many GPU architectures).
28. astrange ◴[] No.45381498{3}[source]
You should be able to see it. REX prefixes cost a lot less than register spills do.

If you mean literally `gcc -S`, -O0 is worse than not optimized and basically keeps everything in memory to make it easier to debug. -Os is the one with readable sensible asm.

replies(1): >>45381519 #
29. chasil ◴[] No.45381519{4}[source]
Thanks, I'll give it a try.
30. antod ◴[] No.45381552[source]
Yeah, I remember hearing that at the time too. When MS chose to support AMD64, they made it clear it was the only 64bit x86 ISA they were going to support, even though it was an open secret Intel was sitting on one but not wanting to announce it.
31. fy20 ◴[] No.45381669{3}[source]
AMD was a no-brainer in the mid 2000s if you were running Linux. It was typically cheaper than Intel, had lower power consumption (= less heat, less fan noise), had 64bit so you could run more memory, and dual core support was more widespread. Linux was easily able to take advantage of all of these, whereas for Windows it was trickier.
32. o11c ◴[] No.45381833{3}[source]
Obviously it depends on how many live variables there are at any point. A lot of nasty loops have relatively few non-memory operands involved, especially without inlining (though even without inlining, the ability to control ABI-mandated spills better will help).

But it's guaranteed to use `r8` and `r9` for a function that takes 5 or 6 integer arguments (including unpacked 128-bit structs as 2 arguments), or 3 or 4 arguments (not sure about unpacking) for Microsoft. And `r10` is used if you make a system call on Linux.
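
For illustration, a minimal sketch (assuming the System V AMD64 ABI; the function name is made up) that makes those registers show up in `gcc -Os -S` output:

    /* Under the System V AMD64 ABI the first six integer arguments arrive in
       rdi, rsi, rdx, rcx, r8, r9, so a six-argument function is forced to
       touch r8/r9 (with their REX-prefixed encodings). */
    long sum6(long a, long b, long c, long d, long e, long f) {
        return a + b + c + d + e + f;   /* e arrives in r8, f in r9 */
    }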

33. userbinator ◴[] No.45382171[source]
"Recently revealed" is more like a confirmation of what I had read many years before; and furthermore, that Intel's 64-bit x86 would've been more backwards-compatible and better-fitting than AMD64, which looks extremely inelegant in contrast, with several stupid missteps like https://www.pagetable.com/?p=1216 (the comment near the bottom is very interesting.)

If you look at the 286's 16-bit protected mode and then the 386's 32-bit extensions, they fit neatly into the "gaps" in the former; there are some similar gaps in the latter, which look like they had a future extension in mind. Perhaps that consideration was already there in the 80s when the 386 was being designed, but as usual, management got in the way.

replies(2): >>45382493 #>>45382651 #
34. Dylan16807 ◴[] No.45382493[source]
> (the comment near the bottom is very interesting.)

Segmentation very useful for virtualization? I don't follow that claim.

replies(1): >>45382694 #
35. tw04 ◴[] No.45382598{3}[source]
Given that Itanium originated at HP, it seems unlikely it was about AMD; more likely it was about the fact that, at the time, Intel was struggling with 64-bit. People are talking about the P4 but the Itanium architecture dates back to the late 80s…

https://en.m.wikipedia.org/wiki/Itanium

replies(1): >>45390482 #
36. CheeseFromLidl ◴[] No.45382651[source]

> would've been more backwards-compatible and better-fitting

Eagerly awaiting the first submission of someone decapping, forcing the fuse, capping and running it.
37. userbinator ◴[] No.45382694{3}[source]
https://www.pagetable.com/?p=25
replies(1): >>45383635 #
38. SunlitCat ◴[] No.45382728{4}[source]
I wonder if AI would have been a huge help to that.
replies(1): >>45383207 #
39. ccgreg ◴[] No.45383151{4}[source]
> they were actually very performant

Insanely expensive for that performance. I was the architect of HPC clusters in that era, and Itanic never made it to the top for price per performance.

Also, having lived through the software stack issues with the first beta chips of Itanic and AMD64 (and MIPS64, but who's counting), AMD64 was way way more stable than the others.

40. consp ◴[] No.45383207{5}[source]
Some "simple" optimization algorithm would be enough; modern "AI" just adds obfuscation. Though it would be slow as hell and thus unusable.
41. Dylan16807 ◴[] No.45383635{4}[source]
"The virtual machine monitor’s trap handler must reside in the guest’s address space, because an exception cannot switch address spaces."

I would call this the real problem, and segmentation a bad workaround.

42. pjmlp ◴[] No.45383639{4}[source]
I am one of those people, and I think that it only failed because AMD had the chance to turn the tables on Intel, to use the article's title.

Without AMD64, I firmly believe eventually Itanium would have been the new world no matter what.

We see this all the time: technology that could be great but fails due to not being pushed hard enough, and other similar technology that does indeed succeed because the creators are willing to push it at a loss for several years until it finally becomes the new way.

replies(3): >>45387412 #>>45389086 #>>45403145 #
43. Lu2025 ◴[] No.45384182[source]
> it would cannibalize IA64 sales

The concern wasn't that it would cannibalize sales; it's that it would cannibalize the IA64 managers' jobs and status. "You ship the org chart"

44. tuyiown ◴[] No.45384598[source]
> first supported it were (generally) superior even in 32-bit mode.

They were also affordable dual cores, which wasn't the norm at all at the time.

45. p_l ◴[] No.45385380[source]
Pentium 4 was widely speculated, at the time AMD64 shipped, to be capable of running 64-bit code, but at half the speed.

Essentially, while decoding a 64bit variant of the x86 ISA might have been fused off, there was a very visible part that was common anyway: the available ALUs on the NetBurst platform, which IIRC were 2x 32bit ALUs for integer ops. So you either issue a micro-op to both to "chain" them together, or run every 64bit calculation in multiple steps.
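
A rough sketch in C of the "multiple steps" case (illustrative only; not Intel's actual micro-op sequence):

    #include <stdint.h>

    /* A 64-bit add done as two 32-bit ALU operations: add the low halves
       first, then the high halves plus the carry out of the low half. */
    static uint64_t add64_via_32(uint64_t a, uint64_t b) {
        uint32_t lo    = (uint32_t)a + (uint32_t)b;
        uint32_t carry = lo < (uint32_t)a;   /* did the low add wrap? */
        uint32_t hi    = (uint32_t)(a >> 32) + (uint32_t)(b >> 32) + carry;
        return ((uint64_t)hi << 32) | lo;
    }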

replies(1): >>45388690 #
46. p_l ◴[] No.45385406{5}[source]
i960 was essentially the iAPX 432 done right in its full form. But the major client (the BiiN partnership with Siemens) ultimately didn't pan out, various world events quite possibly also impacted things, and finally Intel cannibalized the i960 team to make the Pentium Pro.
47. p_l ◴[] No.45385421[source]
Very early Intel "EM64T" chips (aka amd64-compatible) had a too-short physical address size of 36 bits instead of 40, which is why 64-bit Windows didn't run on them, but some Linux versions did.

Rest is well explained by sibling posts :)

48. indymike ◴[] No.45385968[source]
> Fun fact: Bob Colwell (chief architect of the Pentium Pro through Pentium 4) recently revealed that the Pentium 4 had its own 64-bit extension to x86 that would have beaten AMD64 to market by several years, but management forced him to disable it because they were worried that it would cannibalize IA64 sales.

File this one under "we made the right decision based on everything we knew at the time." It's really sad because the absolute right choice would have been to extend x86 and let it duke it out with Itanium. Intel would win either way and the competition would have been even more on the back foot. So easy to see that decades later...

49. cowmix ◴[] No.45386139[source]
When I ran the Python Meetup here in Phoenix, an engineer from Intel's compilers group would show up all the time. I remember he would constantly be frustrated that Intel management would purposely down-play and cripple advances of the Atom processor line because they thought it would be "too good" and cannibalize their desktop lines. This was over 15 years ago -- I was hearing this in real-time. He flat out said that Intel considered the mobile market a joke.
50. dooglius ◴[] No.45386283{4}[source]
What is/was Yamhill?
replies(1): >>45387568 #
51. kouteiheika ◴[] No.45386422[source]
> That's no guarantee it would succeed though - AMD64 also cleaned up a number of warts on the x86 architecture, like more registers.

As someone who works with AMD64 assembly very often - they didn't really clean it up all that much. Instruction encoding is still horrible, you still have a bunch of useless instructions even in 64-bit mode which waste valuable encoding space, you still have a bunch of instructions which hardcode registers for no good reason (e.g. the shift instructions have a hardcoded rcx). The list goes on. They pretty much did almost the minimal amount of work to make it 64-bit, but didn't actually go very far when it comes to making it a clean 64-bit ISA.

I'd love to see what Intel came up with, but I'd be surprised if they did a worse job.
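
A tiny sketch of the rcx quirk (hypothetical function; assumes a target without BMI2, since BMI2's shlx avoids the fixed register):

    /* Classic variable shifts (shl/shr/sar r/m, cl) take their count in cl,
       so the compiler has to route the count through rcx first; with
       `gcc -O2 -S` you typically see the count moved into %ecx followed by
       a shift by %cl. */
    unsigned long shift_by(unsigned long value, unsigned count) {
        return value << count;
    }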

52. ghaff ◴[] No.45387412{5}[source]
I'm inclined to agree and I've written as much. In a world where 64-bit x86 wasn't really an option, Intel and "the industry" would probably have eventually figured out a way to make Itanium work well enough and cost-effectively enough, and incremented over time. Some of the then-current RISC chips would probably have remained more broadly viable in that timeline but, in the absence of a viable alternative, 64-bit was going to happen and therefore probably Itanium.

Maybe ARM gets a real kick in the pants but high-performance server processors were probably too far in the future to play a meaningful role.

53. cwizou ◴[] No.45387568{5}[source]
It was the name of Intel's x86 64bit flavor : https://www.edn.com/intel-working-on-yamhill-technology-says...
54. tjwebbnorfolk ◴[] No.45387582{4}[source]
> Their success has been accident and intervention over and over.

Of course, the whole foundational thesis of market competition is that everything sucks unless forced by competitors to make your product better. That's why it's VERY important to have effective competition.

It's not a capitalism problem, or really a "problem" at all. It's a recognition of a fact in nature that all animals are as lazy as they can get away with, and humans (and businesses made by humans) are no different.

55. wat10000 ◴[] No.45387856{3}[source]
I don't have gcc handy, but this bit of code pretty easily gets clang to use several of them:

    #include <stdio.h>

    int f(int **x) {
        int *a = x[0]; int *b = x[1]; int *c = x[2]; int *d = x[3];
        puts("hello");   /* the call keeps a-d live, typically pushing them into callee-saved r12-r15 */
        return *a + *b + *c + *d;
    }
56. immibis ◴[] No.45388063{3}[source]
In other words, all complex systems get cancer.

Cancer is when elements of a system work to enrich themselves instead of the system.

57. immibis ◴[] No.45388074{3}[source]
Core is confusing. Of course it's a Core 2. It has 2 cores in it. Core 2 Quad? Obviously has 2 cores... oh wait, 4. i3/i5/i7 was reasonable except for lacking the generation number so people thought a 6th gen i3 was slower than a 1st gen i7 because 3 is less than 7. Nvidia seems to have model numbers figured out. Higher number is better, first half is generation and second half is relative position within it. At least if they didn't keep unfairly shifting the second half.
58. mathgradthrow ◴[] No.45388594[source]
This seems like an object lesson in making sure that the right hand does not know what the left is doing. Yes, if you have two departments working on two mutually exclusive architectures, one of them will necessarily fail. In exchange, however, you can guarantee that it will be the worse one. This is undervalued as a principle since the wasted labor is more easily measured, and therefore decision making is biased towards it.
replies(1): >>45389158 #
59. eigenform ◴[] No.45388690{3}[source]
Yeah, they wrote a paper about the ALUs too, see:

https://ctho.org/toread/forclass/18-722/logicfamilies/Delega...

> There are two distinct 32-bit FCLK execution data paths staggered by one clock to implement 64-bit operations.

If it weren't fused off, they probably would've supported 64-bit ops with an additional cycle of latency?

replies(1): >>45389997 #
60. Agingcoder ◴[] No.45389086{5}[source]
There was a fundamental difficulty with ‘given a sufficiently smart compiler’, if I remember well, revolving around automatic parallelization. You might argue that given enough time and money it might have been solved, but it’s a really hard problem.

( I might have forgotten)

replies(1): >>45389534 #
61. short_sells_poo ◴[] No.45389158[source]
I agree with you, but perhaps this is very hard (impossible?) to pull off. Invariably, politics will result in various outcomes being favored in management, and the moment that groups realize the game is rigged, the whole fair market devolves into the usual political in-fighting.
62. ajross ◴[] No.45389534{6}[source]
The compilers did arrive, but obviously too late. Modern pipeline optimization and register scheduling in gcc & LLVM is wildly more sophisticated than anything people were imagining in 2001.
replies(1): >>45392989 #
63. ChuckMcM ◴[] No.45389629[source]
Yup. I went to the Microprocessor Forum where they introduced 'Sledgehammer' (the AMD 64 architecture) and came back to NetApp, where I was working, and started working out how we'd build our next Filer using it. (That was a journey given the AMD distrust inside of NetApp!) I had a pretty frank discussion with the Intel SVP of product who was pretty bought into the Intel "high end is IA, Mid/PC is IA32, embedded is the 8051 stuff" view. They were having a hard time getting Itanium wins.
64. p_l ◴[] No.45389997{4}[source]
At least one cycle, yes, but generally it would make it possible to deliver. AFAIK it also became a crucial part of how Intel could deliver "EM64T" chips fast enough - only to forget to upgrade the memory subsystem, which is why the first generation can't run Windows (they retained 36bit physical addressing from PAE when AMD64 mandates a minimum of 40, and Windows managed to trigger an issue on that).
65. mwpmaybe ◴[] No.45390482{4}[source]
For context, it was intended to be the successor to PA-RISC and compete with DEC Alpha.
66. alfiedotwtf ◴[] No.45391228[source]
> cannibalize IA64 sales

Damn!

67. kimixa ◴[] No.45392989{7}[source]
But modern CPUs have even more capabilities for re-ordering/OOO execution and other "live" scheduling work. They will always have more information available than ahead-of-time static scheduling from the compiler, as so much is data dependent. If it weren't worth it they would be slashing those capabilities instead.

Statically scheduled/in-order stuff is still relegated to pretty much microcontrollers or specific numeric workloads. For general computation, it still seems like a poor fit.

replies(1): >>45403960 #
68. sys_64738 ◴[] No.45394743[source]
They don't misread the market so much as intentionally do that due to INTC being a market driven org. They want to suck up all the profits in each generation for each SKU. They stopped being an engineering org in the 80s. I hope they crash and burn.
69. sys_64738 ◴[] No.45394782{3}[source]
This is what happens when marketing gets involved. The worst of the worst being INTC marketing dept.
70. thesz ◴[] No.45403145{5}[source]

> Without AMD64, I firmly believe eventually Itanium would have been the new world no matter what.

VLIW is not binary forward- or cross-implementation-compatible. If MODEL1 has 2 instructions per block and its successor MODEL2 has 4, the code for MODEL1 will run on MODEL2, but it will underperform due to underutilization. If execution latencies differ between two versions of the same VLIW ISA implementation, the code for one may not be executed optimally on the other. Even different memory controllers and cache hierarchies can change optimal VLIW code.

This precludes any VLIW from having multiple differently constrained implementations. You cannot segment VLIW implementations the way you can with x86, ARM, MIPS, PowerPC, etc., where the same code will be executed as optimally as possible on the concrete implementation of the ISA.

So - no, Itanium (or any other VLIW for that matter) would not be the new world.

replies(1): >>45403947 #
71. ajross ◴[] No.45403947{6}[source]
> VLIW is not binary forward- or cross-implementation-compatible.

It was on IA-64; the bundle format was deliberately chosen to allow for easy extension.

But broadly it's true: you can't have a "pure" VLIW architecture independent of the issue and pipeline architecture of the CPU. Any device with differing runtime architecture is going to have to do some cooking of the instructions to match it to its own backend. But that decode engine is much easier to write when it's starting from a wide format that presents lots of instructions and makes explicit promises about their interdependencies.

72. ajross ◴[] No.45403960{8}[source]
That's true. But if anything that cuts in the opposite direction of the argument: modern CPUs are doing all that optimization in hardware, at runtime. In software it's a no-brainer in comparison.