When it matures, you’re right back to the same heat constraint considerations, just with faster chips.
> Rather than allowing heat to build up, what if we could spread it out right from the start, inside the chip?... To do that, we’d have to introduce a highly thermally conductive material inside the IC, mere nanometers from the transistors, without messing up any of their very precise and sensitive properties. Enter an unexpected material—diamond.
> ... my research group at Stanford University has managed what seemed impossible. We can now grow a form of diamond suitable for spreading heat, directly atop semiconductor devices at low enough temperatures that even the most delicate interconnects inside advanced chips will survive... Our diamonds are a polycrystalline coating no more than a couple of micrometers thick.
> The potential benefits could be huge. In some of our earliest gallium-nitride radio-frequency transistors, the addition of diamond dropped the device temperature by more than 50 °C.
https://www.powerelectronicsnews.com/diamond-semiconductors-...
Edit: Because they are polycrystalline, and produced with a very new technology.
"Our diamonds are a polycrystalline coating no more than a couple of micrometers thick."
BTW, the thermal conductivity of C-12 diamond at cryogenic temperatures is even higher, reaching something like 41,000 W/m·K at 104 K.
Isotopically purified silicon has also been considered due to its higher thermal conductivity, but the effect there at room temperature is not nearly as dramatic.
Weirdly, I read UV damage in C-12 diamond is reduced by a factor of 10 vs. natural diamond, I understand because this damage process is mediated by phonons. No relevance to the chip use case (unless UV damage in photolithography could be important?), but I found it interesting.
> The high p-n junction built-in voltage (4.9V, compared to 2.8V in SiC) and short carrier lifetimes limit the advantages of bipolar diamond devices to only ultra-high voltages (> 6kV) and low switching frequencies.
Nobody is thinking about using diamond for the silicon CMOS logic in a computer, though it may someday replace the silicon carbide we use for motor control.
On an unrelated note, I like the writing style of this article a lot. This is how science journalism should be. It reminds me of how Scientific American used to be before it was ruined. Is IEEE Spectrum always like this? I might have to subscribe to the print version. I want articles like this floating around my house for my kids to discover.
"Before my lab turned to developing diamond as a heat-spreading material, we were working on it as a semiconductor. In its single-crystal form—like the kind on your finger—it has a wide bandgap and ability to withstand enormous electric fields. Single-crystalline diamond also offers some of the highest thermal conductivity recorded in any material, reaching 2,200 to 2,400 watts per meter per kelvin—roughly six times as conductive as copper. Polycrystalline diamond—an easier to make material—can approach these values when grown thick. Even in this form, it outperforms copper.
"As attractive as diamond transistors might be, I was keenly aware—based on my experience researching gallium nitride devices—of the long road ahead..."
It sounds like the most important part of the article (and another cool quote) is this:
> Until recently we knew how to grow it only at circuit-slagging temperatures in excess of 1,000 °C.
So basically, the big breakthrough was low-temp growth of a diamond lattice. Very cool they can do it at such a low temperature. It must be a crazy low temp, probably under 100 °C?
If a chip were to be stacked as tall as it was wide, are we talking 10x, 100x, 100,000x?
I guess for N stacks you're still paying N chips' worth of wafer, and taking roughly Nx the defect exposure.
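You can see how brutally that compounds with a toy yield model (the 90% per-die yield is just an illustrative assumption):

```python
# Toy stacked-die yield: a stack of N dies is good only if every die is
# good. The 90% per-die yield is an illustrative assumption.

y = 0.90  # assumed per-die yield

for n in (1, 2, 4, 8, 16):
    stack_yield = y ** n
    cost = n / stack_yield  # wafer cost per *good* stack, in die-equivalents
    print(f"N={n:2d}: yield {stack_yield:6.1%}, cost/good stack ~ {cost:5.1f} dies")
```

Known-good-die testing before stacking pulls this back toward the linear Nx cost, which I assume is why everyone does it.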
That said, the temperature gains mentioned are utterly insane, even if they come with some high-frequency issues.
"Oxygen-assisted monodisperse transition-metal-atom-induced graphite phase transformation to diamond: a first-principles calculation study"
I think it's paywalled, unfortunately. https://pubs.rsc.org/en/content/articlelanding/2024/ta/d4ta0...
The packaging usually has the stacked dies offset in a staircase pattern so that the contacts at the edge are exposed for every die. The alternative is through-silicon vias (TSVs), which theoretically would allow stacking until you have a mass of chips that is roughly a cube, but achieving that without having a defective connection somewhere in the stack is approximately impossible.
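To put rough numbers on "approximately impossible", here's a sketch with made-up but not crazy via counts and defect rates, assuming independent via failures:

```python
# Probability that every TSV in a stack works, assuming independent
# failures. The via count and defect rate are illustrative assumptions.

defect_rate = 1e-5           # assumed probability that a single via is bad
vias_per_interface = 10_000  # assumed vias per die-to-die interface

for layers in (2, 8, 32):
    total = (layers - 1) * vias_per_interface
    p_all_good = (1 - defect_rate) ** total
    print(f"{layers:2d} layers: {total:7,d} vias, P(all good) = {p_all_good:.3f}")
```

As I understand it, real designs fight this with redundant/spare TSVs and post-bond repair, but the compounding is why the stack-a-cube scenario stays theoretical.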
Not sure I understand this. Is this a requirement for real-world use? What happens if the outside of the coating isn't atomically flat? What makes this hard to do?
This is probably not an issue for thermal TSVs, because of the heat spreader layer between each silicon layer, but it would become an issue for power TSVs, as each layer would (presumably) require an independent supply of power.
Not to say that it can't be done, only that the process window is not very large and the propensity for deleterious carbon soot is very high. Likely this will generate some very fun, highly integrated problem statements before we see this available for sale.
Getting heat out of the chip is such a painful and important struggle. I hope this works on a real process line. Too many benefits on the table to ignore.
Edit: Grammar, clarity
Caveat: For older processes, built at a larger scale (>1 micron), these kinds of details may not matter, in which case you are right to question this point. But if you want to implement this on cutting-edge manufacturing processes, these details absolutely do matter.
To put this in perspective: in cutting-edge process nodes, I've seen senior engineers argue bitterly over ~1 nm in a certain critical dimension. That's roughly 5 atoms across, depending on how much you trust the accuracy of the metrology.
So, if ANY layer isn't "flat" (or otherwise to spec within tolerance), the next layer in the semiconductor patterning stack will tend to translate that bumpiness upward, or cause a deformity in an adjacent structure. This is (almost) always bad: these defects cause voids, bad electrical/thermal contacts and characteristics, misshapen or displaced structures, and so on.
Crystallization in thin films (especially conformal/gap-filling films) is a tough job which many poor PhD students have slaved over. Polycrystalline material is arguably harder to control in some key ways vs. monocrystalline, since you don't have direct control over the specific crystal grain orientation and growth direction. That is, some grain orientations will grow quickly and others slowly. You can imagine the challenge, then, of getting the layer to terminate growth without ending up too jagged at the ~nm scale. After that you also get into the fun world of crystal defects and grain size, and deciding whether you need to do some more post-processing (do I risk planarizing?)
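Here's a deliberately crude caricature of that grain race: columns growing at fixed random rates, a stand-in for orientation-dependent growth. Real films have grain competition and pinch-off, so this overstates the roughness, but the trend (roughness growing with thickness) is the point:

```python
import random

# Toy columnar-growth caricature: each column is a grain with a fixed
# random growth rate (standing in for orientation dependence). Entirely
# illustrative; real grain competition is far messier.

random.seed(0)
n_cols = 1000
rates = [random.uniform(0.5, 1.5) for _ in range(n_cols)]  # nm/step, assumed spread

steps = 2000  # grow to ~2 um mean thickness, per the quoted film
heights = [r * steps for r in rates]

mean_h = sum(heights) / n_cols
rms = (sum((h - mean_h) ** 2 for h in heights) / n_cols) ** 0.5
print(f"mean ~ {mean_h:.0f} nm, RMS roughness ~ {rms:.0f} nm")
```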
Hopefully I have captured some of the pieces involved in an understandable way.
Edit: clarity
This, on top of all the through-silicon vias and backside power delivery, would make even the crustiest of engineers weep...
"Assuming this becomes easier and cheaper to do as the technique matures"
In other words, what I'm suggesting is a potential future use if the cost comes down.
I suppose that's enough for cookware?
More seriously: I did see that, and your idea is interesting! My intent was to communicate the minimum threshold we would need to hit to make that future a reality.
My understanding is that point/local resistive heating problems out in the wild tend to drive different failure modes vs. the global heating techniques used on the manufacturing line, mostly because the CPU is in active operation, which changes the defect physics. Put another way, any particular structure in the CPU likely would not need to reach 400 °C to fail: even the small voltages used in these chips, coupled with elevated temperature, can drive a lot of difficult-to-catch, slow-to-manifest failure modes.

Copper electromigration is the classic example of this type of problem, where copper ions slowly migrate under voltage + temperature, causing and propagating voids until finally an open circuit is made. Surprise! Your chip no longer works after seeming perfectly fine.

Manufacturers try to catch such problems with simulated aging through aggressive temperature and voltage experiments. Intel must have discovered a big gap in their visibility, and then realized their CPU specs were incompatible with the stated product lifetime without a major re-spec of already-sold product. Ouch.
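The standard way to quantify that voltage+temperature acceleration is Black's equation for electromigration lifetime, MTTF = A * J^(-n) * exp(Ea/kT). A sketch with textbook-ish parameters I picked for illustration (the n and Ea values are assumptions, not anyone's qualified data):

```python
import math

# Black's equation for electromigration lifetime:
#   MTTF = A * J**(-n) * exp(Ea / (k*T))
# n and Ea below are typical textbook-ish values, assumed for illustration.

K_EV = 8.617e-5  # Boltzmann constant, eV/K
N = 2.0          # assumed current-density exponent
EA = 0.9         # assumed activation energy, eV

def lifetime_ratio(t1_c: float, t2_c: float, j_ratio: float = 1.0) -> float:
    """MTTF at condition 2 relative to condition 1 (temps in Celsius)."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return j_ratio ** -N * math.exp(EA / K_EV * (1 / t2 - 1 / t1))

# What a 50 C junction-temperature drop (the article's GaN figure) buys:
print(f"105 C -> 55 C : lifetime x{lifetime_ratio(105, 55):.0f}")
# A burn-in-style stress: hotter, plus 20% more current density:
print(f"105 C -> 125 C at 1.2x J: lifetime x{lifetime_ratio(105, 125, 1.2):.2f}")
```

Under those assumed parameters, a 50 °C drop is worth well over an order of magnitude in electromigration lifetime, which is the flip side of why running hot quietly eats chips.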
The chip manufacturer also has some capability to make repairs and adjustments ahead of end of line, which should encompass managing some of the issues you refer to. Some big customers might have their own repair capabilities.
Edit: Clarity, trying to better address the question
The Tiny Star Explosions Powering Moore’s Law https://spectrum.ieee.org/euv-light-source
https://en.wikipedia.org/wiki/Material_properties_of_diamond
Also, particles of diamond liberated from the workpiece which failed to fully dissolve chemically into the slurry would then contribute to the abrasive in the slurry. If the slurry abrasive was not also diamond, that could lead to some serious scratching/gouging of the work surface.
Perhaps not insurmountable, but wow, that sounds like a stiff challenge, especially when accounting for cost.
I wonder if diamond would be machinable with a dry (plasma) etch instead? I am purely speculating here; this is far out of my wheelhouse. SiO2 is also very chemically inert (though considerably softer than diamond), yet manufacturers regularly dry etch it.
Next, a diamond layer every few layers in 3D chips?