335 points ingve | 2 comments
owlbite ◴[] No.45083253[source]
So how many gates are we talking to factor some "cryptographically useful" number? Is there some pathway that makes quantum computers useful this century?
replies(9): >>45083492 #>>45083705 #>>45084166 #>>45084245 #>>45084350 #>>45084520 #>>45085615 #>>45085735 #>>45088593 #
lisper ◴[] No.45084350[source]
> So how many gates are we talking to factor some "cryptographically useful" number?

That is a hard question to answer for two reasons. First, there is no bright line that delineates "cryptographically useful". And second, the exact design of a QC that could do such a calculation is not yet known. It's kind of like trying to estimate how many traditional gates would be needed to build a "semantically useful" neural network back in 1985.

But the answer is almost certainly in the millions.

[UPDATE] There is a third reason this is hard to predict: for quantum error correction, there is a tradeoff between the error rate in the raw qubit and the number of gates needed to build a reliable error-corrected virtual qubit. The lower the error rate in the raw qubit, the fewer gates are needed. And there is no way to know at this point what kind of raw error rates can be achieved.
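For a rough sense of scale, here is a back-of-the-envelope sketch in Python. It uses the roughly 0.3·n³ Toffoli-gate figure often quoted for factoring an n-bit RSA modulus with Shor's algorithm (windowed arithmetic), plus the textbook surface-code scaling where the logical error rate falls off like (p/p_th)^((d+1)/2) and each logical qubit costs on the order of 2d² physical qubits. The threshold, prefactor, and failure budget below are illustrative assumptions, not predictions:

    # Back-of-the-envelope: error-correction overhead for factoring an n-bit
    # RSA modulus on a surface-code machine. All constants are illustrative.

    def toffoli_count(n_bits: int) -> float:
        """Rough Toffoli count for Shor with windowed arithmetic (~0.3 * n^3)."""
        return 0.3 * n_bits ** 3

    def logical_error_rate(p_phys: float, d: int, p_th: float = 1e-2) -> float:
        """Textbook surface-code scaling: p_L ~ 0.1 * (p/p_th)^((d+1)/2)."""
        return 0.1 * (p_phys / p_th) ** ((d + 1) / 2)

    def distance_needed(p_phys: float, target: float) -> int:
        """Smallest (odd) code distance whose logical error rate meets the target."""
        d = 3
        while logical_error_rate(p_phys, d) > target:
            d += 2
        return d

    if __name__ == "__main__":
        n = 2048                      # RSA-2048
        gates = toffoli_count(n)      # ~2.6e9 Toffoli gates
        budget = 0.1 / gates          # aim for <10% chance of any logical failure
        for p_phys in (1e-3, 1e-4):   # raw error rate: roughly today vs. optimistic
            d = distance_needed(p_phys, budget)
            print(f"p_phys={p_phys:.0e}: distance {d}, "
                  f"~{2 * d * d} physical qubits per logical qubit")

With those assumptions, dropping the raw error rate from 1e-3 to 1e-4 cuts the per-logical-qubit overhead by several times, which is exactly the tradeoff described above.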

> Is there some pathway that makes quantum computers useful this century?

This century has 75 years left in it, and that is an eternity in tech-time. 75 years ago the state of the art in classical computers was (I'll be generous here) the UNIVAC I [1]. Figuring out how much less powerful it was than a modern computer makes for an interesting exercise, especially if you do it in terms of ops/watt. I haven't done the math, but it's many, many, many orders of magnitude. If the same progress can be achieved in quantum computing, then pre-quantum encryption is definitely toast by 2100. And it pretty much took only one breakthrough, the transistor, to achieve the improvement in classical computing that we enjoy today. We still don't have the equivalent of that for QC, but who knows when or if it will happen. Everything seems impossible until someone figures it out for the first time.
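To make the ops/watt comparison concrete: the linked article puts the UNIVAC I at about 1,905 operations per second while drawing roughly 125 kW. The modern side of the comparison below is a round, illustrative assumption (an accelerator doing on the order of 1e14 operations per second at a few hundred watts), not a benchmark:

    import math

    # UNIVAC I (figures from the linked article): ~1,905 ops/second at ~125 kW.
    univac_ops_per_joule = 1905 / 125_000

    # A present-day accelerator, as a round illustrative assumption:
    # ~1e14 floating-point ops/second at ~400 W.
    modern_ops_per_joule = 1e14 / 400

    ratio = modern_ops_per_joule / univac_ops_per_joule
    print(f"UNIVAC I: {univac_ops_per_joule:.3g} ops/joule")
    print(f"Modern:   {modern_ops_per_joule:.3g} ops/joule")
    print(f"Ratio:    ~10^{math.log10(ratio):.0f}")  # roughly 13 orders of magnitude

So on energy efficiency alone the gap works out to roughly thirteen orders of magnitude, before counting memory, reliability, or cost.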

---

[1] https://en.wikipedia.org/wiki/UNIVAC_I#Technical_description

replies(4): >>45084500 #>>45084571 #>>45086104 #>>45087176 #
TheOtherHobbes ◴[] No.45087176[source]
It's not an eternity because QC is a low-headroom tech which is already pushing its limits.

What made computing-at-scale possible wasn't the transistor itself; it was the precursor technologies that made transistor manufacturing possible: precise control of semiconductor doping and precision optical lithography.

Without those the transistor would have remained a lab curiosity.

QC has no hint of any equivalent breakthrough tech waiting to kick-start a revolution. There are plenty of maybe-perhaps technologies like diamond defects and photonics, but packing density and connectivity are always going to be huge problems, in addition to noise and error-rate issues.

Basically you need high densities to do anything truly useful, but error rates have to go down as packing densities go up - which is stretching optimism a little.

Silicon is a very forgiving technology in comparison. As long as your logic levels have a decent headroom over the noise floor, and you allow for switching transients (...the hard part), your circuit will be deterministic and you can keep packing more and more circuitry into smaller and smaller spaces. (Subject to lithography precision.)
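To make "headroom over the noise floor" concrete, here is the standard noise-margin calculation with representative 3.3 V logic thresholds (assumed for illustration; real datasheet values vary by family):

    # Digital logic is forgiving because of noise margins: a gate's output levels
    # sit well clear of the next gate's input thresholds. Representative 3.3 V
    # values, assumed for illustration only:
    V_OL = 0.4   # worst-case output low (V)
    V_OH = 2.4   # worst-case output high (V)
    V_IL = 0.8   # highest voltage still read as a "0" (V)
    V_IH = 2.0   # lowest voltage still read as a "1" (V)

    noise_margin_low  = V_IL - V_OL   # 0.4 V of noise tolerated on a "0"
    noise_margin_high = V_OH - V_IH   # 0.4 V of noise tolerated on a "1"

    print(f"NM_L = {noise_margin_low:.1f} V, NM_H = {noise_margin_high:.1f} V")

Noise smaller than those margins is simply absorbed, and the next gate restores a clean 0 or 1; a qubit gets no equivalent restoring step without active error correction.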

Of course it's not that simple, but it is basically just extremely complex and sophisticated plumbing of electron flows.

Current takes on QC are the opposite. There's a lot more noise than signal, and adding more complexity makes the problem worse in non-linear ways.

replies(1): >>45087332 #
lisper ◴[] No.45087332[source]
I'm sympathetic to this argument, but nearly every technological breakthrough in history has been accompanied by plausible-sounding arguments as to why it should have been impossible. I myself left my career as an AI researcher about 20 years ago because I was convinced the field was moribund and there would be no major breakthroughs in my lifetime. That was about as well-informed a prediction as you could hope to find at the time, and it was obviously very wrong. It is in the nature of breakthroughs that they are rare and unpredictable. Nothing you say is wrong. I would bet against QC in 5 years (and even then I would not stake my life savings) but not 75.
replies(2): >>45087765 #>>45088015 #
lqstuart ◴[] No.45087765[source]
In fairness, the biggest breakthrough in AI has been calling more and more things “AI.” Before LLMs it was content-based collaborative filtering.
replies(1): >>45087967 #
lisper ◴[] No.45087967[source]
No, LLMs are a real breakthrough even if they are not by themselves reliable enough to produce a commercially viable application. Before LLMs, no one knew how to even convincingly fake a natural language interaction. I see LLMs as analogous to Rodney Brooks's subsumption architecture. Subsumption by itself was not enough, but it broke the logjam on the then-dominant planner-centric approach, which was doomed to fail. In that respect, subsumption was the precursor to Waymo, and that took less than 40 years. I was once a skeptic, but I now see a pretty clear path to AGI. It won't happen right away, but I'd be a little surprised if we didn't see it within 10 years.
replies(4): >>45087981 #>>45088068 #>>45088333 #>>45089509 #
zppln ◴[] No.45089509[source]
> clear path to AGI

What are the steps?

replies(1): >>45090928 #
lisper ◴[] No.45090928[source]
It's not really about "steps", it's about getting the architecture right. LLMs by themselves are missing two crucial ingredients: embodiment and feedback. The reason they hallucinate is that they have no idea what the words they are saying mean. They are like children mimicking other people. They need to be able to associate the words with some kind of external reality. This could be either the real world, or a virtual world, but they need something that establishes an objective reality. And then they need to be able to interact with that world, poke at it and see what it does and how it behaves, and get feedback regarding whether their actions were appropriate or not.

If I were doing this work, I'd look at a rich virtual environment like Minecraft or SimCity or something like that. But it could also be Coq or a code development environment.
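Taken loosely, the loop would look something like the sketch below: a model proposes an action, a world with objective rules executes it, and the observed result plus a success signal feed back in. Everything here (World, propose_action, the toy Minecraft-flavoured action) is a hypothetical placeholder rather than any existing API:

    from dataclasses import dataclass, field

    @dataclass
    class World:
        """Stands in for any environment with objective rules: a game engine,
        a proof assistant, a code sandbox."""
        state: dict = field(default_factory=lambda: {"blocks_placed": 0})

        def step(self, action: str) -> tuple[str, float]:
            """Apply an action; return an observation and a success signal."""
            if action == "place_block":
                self.state["blocks_placed"] += 1
                return f"blocks={self.state['blocks_placed']}", 1.0
            return "nothing happened", 0.0

    def propose_action(history: list) -> str:
        """Placeholder for the language model: choose the next action from context."""
        return "place_block"

    def run_episode(steps: int = 5) -> list:
        world, history = World(), []
        for _ in range(steps):
            action = propose_action(history)
            observation, feedback = world.step(action)  # ground truth, not more text
            history.append((action, observation, feedback))
        return history

    if __name__ == "__main__":
        for step in run_episode():
            print(step)

The point is the feedback element of each tuple: the model's output gets checked against something other than more text.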

replies(1): >>45092025 #
bryanrasmussen ◴[] No.45092025[source]
If they were able to associate with some sort of external reality, would that prevent hallucination, or just being wrong? Humans hallucinate and humans are wrong; perhaps being able to have intelligence without these qualities is the impossibility.
replies(1): >>45093946 #
lisper ◴[] No.45093946[source]
It's certainly possible that computers will suffer from all the same foibles that humans do, but we have a lot of evolutionary baggage that computers don't, so I don't see any fundamental reason why AGIs could not transcend those limitations. The only way to know is to do the experiment.