The Google Willow Thing

763 points
16 days ago
by Bootvis

Comments


nsxwolf

Man, reading this makes me feel so small. Being a "software engineer" consuming APIs and updating database rows seems laughably childish compared to whatever the hell it is I just read. I can't even imagine why I should bother trying to understand it. It's completely inaccessible. Only an elite few get to touch these machines.

16 days ago

brailsafe

> I can't even imagine why I should bother trying to understand it.

Well, maybe you should just try for the hell of it and see how far you get? Becoming fit seems impossible to a morbidly obese 45-year-old, and it is if that person's expectations are unreasonable, but if they just adjust them to be more reasonable and break the work down into manageable routines, then they can get somewhere eventually.

Find some papers, fill many gaps, dedicate a few years in your spare time; in 6 months you'll be 6 months closer than you were.

Whether there's a reason or not, idk, it's something to do, be curious. Don't forget that by dedicating their life to something, they're naturally not dedicating their life to other things, things that you might be able to do, like climbing mountains, making pizza, or coming up with witty banter in social situations.

16 days ago

shermantanktop

> in 6 months you'll be 6 months closer than you were

Not to be morbid, but…in 6 months you’ll also have 6 months less of your life left to do those things.

In the past few years, I have felt—whether due to my aging, the chaos of current events, the mindless march of technology, who knows—that our time here on earth is a gift that we squander at our peril.

I think some would find that bleak and grim, but I consider it an existentialist battle cry. Make a list of those things that someday you’d like to say you’ve done, or know, or have built, or have given to others…and do one of them.

16 days ago

thowawatp302

> Not to be morbid, but…in 6 months you’ll also have 6 months less of your life left to do those things.

The time passes anyway.

16 days ago

troad

> The time passes anyway.

All the more reason to spend it on things that matter to you. The opportunity cost of six months spent deep in abstract CS papers is six months not spent teaching your daughter to play guitar, visiting that place you always dreamt of seeing, finally finishing that book on your nightstand, etc.

16 days ago

brailsafe

Absolutely, it's fair to say there's a cost to dedicating huge amounts of time to anything, including work and school. I'd argue that people waste more time than they appreciate on inane things like scrolling Instagram/TikTok or commuting. Everyone needs to be somewhat aware of the microeconomy of their life, and there's some sweet spot where you're making decent money, not spending too much time working, not spending too much time getting to work, spending as much time as reasonable with your kid(s) when they're young if you have them and with your partner if you have one, engaging your own interests, and ideally taking care of yourself physically. It's all quite a lot.

When it comes to that category of your own interests, I don't really think one can afford not to spend time on them, lest you hollow yourself out. Whether any one thing is worth the time over another, like grinding papers vs travel, they're not always mutually exclusive; although trying to do both in parallel might be silly, I personally like to shift my attention periodically. I'll go and spend a few months learning, and then go adventure. I don't do that much, but I'm happy to meet up with friends and do that too, even if it means taking time away from video games or learning, and that's important too.

14 days ago

jpc0

These things are not mutually exclusive...

16 days ago

troad

Oh, but they are. We get fewer of those six month periods than we like to think we will.

16 days ago

Falimonda

Not if you teach your daughter quantum computing instead of guitar

16 days ago

aeonik

You can definitely teach your daughter both.

15 days ago

dayjah

At the same time, in different multiverses..

15 days ago

SirHound

Must be nice living a life free from the constraints of time.

16 days ago

jpc0

You are awake for at least 16 hours of the day; you're telling me you can't find 4 hours a week to read a paper? That's 4/112 hours, or around 3.5% of your week...

I guarantee that's more time than most people will spend teaching their kids any musical instrument.

Just spend a week mapping out what you do and how long it takes you every week, and I'm pretty sure you can find double-digit hours spent somewhere, maybe even right here on HN.

16 days ago

asoneth

> you telling me you cant find 4 hours a week to read a paper?

For me, four hours a week is sufficient to stay up-to-date on an active research area but making forward progress requires at least twice that.

> You are awake for at least 16 hours of the day, you telling me you cant find 4 hours a week to read a paper? So 4/112 hours or around 3.5% of your week...

Using awake hours as the denominator is misleading because most people have other non-discretionary time commitments besides sleep. For me I'd estimate ~60h/wk sleep, ~50h/wk work/commute, ~30h/wk non-discretionary upkeep of children/relationships/home/body. Assuming 8+h/wk to make progress out of the remaining ~28h/wk of discretionary time means I can handle about three non-discretionary priorities. (Pre-kids I could handle about five.)

Therefore, when someone with a job says "I don't have time" to pick up a hobby, skill, language, outside research area, instrument, volunteer position, etc I don't interpret their statement as meaning it is physically impossible for them to rearrange their schedule to accommodate it. I (and I suspect most people) interpret the statement as them admitting that it's not one of their ~3-5 non-discretionary priorities.

15 days ago

jpc0

I don't disagree with you. But I have also opened screen time on some of those people with "no time" and it has 15+ hours on TikTok this week...

There are legitimately busy people and then there are people who wish they could achieve X if only they had time but don't put any effort into making time for that.

HINT: if research is directly related to your job, allocate time to it during working hours. A company gets 40-45 hours of your time; it doesn't also get to claim the benefits of your out-of-work time. I'm reasonably sure your boss would happily let you allocate an hour every now and then to improving yourself as an employee, and if they don't, well... the internet has its usual answer to that, even though I don't always agree.

15 days ago

asoneth

> some of those people with "no time" and it has 15+ hours on TikTok this week...

Sometimes this is the result of black-hat products hacking their dopamine cycle, in which case screentime or a friend can help. However, I've found that in some cases staying on top of the zeitgeist like this is actually in someone's 3-5 priorities. In that case saying they have "no time" for X is another way of saying that using TikTok is a higher priority than X for them. (Baffling to me, but a valid choice.) I similarly know people who spend a non-trivial amount of time on other "useless" activities like watching TV shows, playing video games, reading novels, learning esoteric languages, growing plants with no utility, commenting on online forums, etc. Who am I to judge if they find it valuable?

So as technically imprecise as "I don't have time" is, I understand why people use the expression. When someone suggests that I should volunteer for a cause, participate in an activity, go to an event, learn a skill, watch a TV show, read a particular book, learn a language, etc and I tell them that it isn't a high enough priority to displace any of my existing priorities, they sometimes get defensive and/or attempt to litigate my current priorities.

> I'm reasonably sure your boss would happily let you allocate an hour every now and then to improving yourself as an employee

Absolutely, this is a major perk that knowledge workers should take advantage of. I'm spending quite a bit more than "an hour every now and then" to learn about LLMs and accessibility because they are in the intersection of my interests and my job responsibilities. However quantum computing (or game design, solar vehicles, gardening, etc) are not in that intersection and would count against one of my discretionary priorities.

15 days ago

jpc0

We seem to agree.

I will always try to convince people against mindless media like TikTok, well, unless it's in their life goals to be an influencer, but that may also be an issue...

Other causes though, sure, I don't mind if you don't have time to volunteer etc.

15 days ago

_zoltan_

If I find 4 hours where I could read that paper (outside of work, where I do read papers for my day job), I'll do something else, thank you.

At 70 nobody will proudly say "oh yes, I've spent years reading up on this topic!".

12 days ago

backlogman

My bucket list says otherwise.

16 days ago

samus

Then you by definition don't care enough about quantum computing. The same could be said about learning programming or any other deep skill.

15 days ago

_zoltan_

If it has nothing to do with your life, ambitions and goals - why should you care about it?

Just like I don't know how to build a solar panel or how to do organic chemistry.

12 days ago

samus

That's fine - but then it's no surprise if deep skills stay out of reach. Skills that take more than a few hours of watching a YouTube video or reading a book or two to acquire. Skills that one arguably should care about if one wants a career in that field.

6 days ago

learningstud

Your revelations echo those of Seneca in "On the Shortness of Life". You take your life seriously, which is an act that I immensely respect.

16 days ago

lacedeconstruct

Why should it matter anyway if it's short or long, when it will abruptly end as if it never existed?

15 days ago

simonh

Why shouldn’t it? I think we get to choose what matters.

15 days ago

brailsafe

I absolutely agree, which is why I try to constrain my social media use, and have long since stopped arbitrarily storing articles to read later in order to fake a personal sense of productivity. I still haven't bothered to learn anything to do with Crypto or AI, only because I don't feel like I have anything compelling enough to drive me to get anything satisfying out of it compared to, like... going outside or something.

However, that's also exactly why I didn't say anything like "You should learn X", because it's just a curiosity, and there are many curiosities. For example, last year I failed an interview at Apple because they got the impression my hardware-level knowledge of computers wasn't there, and it wasn't, and that convinced me to finally try and work my way through NAND2Tetris, which I'm now about 3/4 of the way through and feel was incredibly rewarding, even though the net benefit is likely nebulous. I was out of work then for about a year and a half, and it helped me pass the time well too, in a much more spirit-lifting way than grinding through yet another REST API project or frontend framework.

Eventually a project may come along that I'll feel is compelling enough to dedicate some serious time to AI/Crypto, and I'll consider it then, but if I were to just try and learn it for no reason at all—including innate curiosity—I don't think it'd stick.

15 days ago

teaearlgraycold

Don’t stress your inevitable death. But also don’t live like you’re immortal.

16 days ago

anon291

Learning for the sake of learning is a good thing

16 days ago

weatherlite

Why? There has to be some pleasure or goal derived from it. I don't think I'd particularly enjoy learning to speak Swahili just because I'm learning.

16 days ago

n4r9

For me, "aha" moments often make me feel like I've come closer to understanding myself and the universe. I suppose it's a kind of pleasure, but it's not directly a practical goal. It can definitely happen with learning a new language, because the language may encode and communicate information in ways I've never thought about before.

16 days ago

weatherlite

Got it. I guess we're very different then, I don't feel like I'm getting any closer to "understanding myself and the universe" nor is it a goal for me. Perhaps it was when I was younger.

16 days ago

n4r9

Yeah, we are probably different. I suppose my primary goal is to love and be kind. But trying to figure out what the hell is going on is a strong second place.

16 days ago

weatherlite

> But trying to figure out what the hell is going on is a strong second place

Not an easy goal, good luck!

15 days ago

brailsafe

I think there's some nuance between your view and theirs, but I do find these perspectives often divide people. If you're learning an arbitrary language for literally no reason at all—you don't know anyone who speaks it or have any intention of ever speaking it to someone—then that activity might struggle to compete with other things you might have a more coherent reason to do. The people I know who are more driven by tangible results, money, and outcomes struggle to value things that have no previously established obvious purpose.

I'd use the example of hiking literally all day without the promise of a good viewpoint; I'd invite the person out, with only a plausible estimate of the time required, and they'd want to just find something that takes less time so they can schedule something afterward. Along the way, they'll be rushing to meet that time, because this is just exercise or whatever to them, and in some way they aren't at peace with the idea that we're both just here in the forest, maybe chatting, maybe not; there's no tangible justification for the mission.

Another type of person would replace tangible outcomes with the feeling that they always need to be learning, regardless of what it is, because it's intrinsically virtuous, and they also sometimes fail to be at peace with doing something for no reason, or nothing at all.

I've wavered between these over the years, and now I'll learn something because it's a clear weak spot, or I can imagine how it might be interesting, and if I don't have anything else that's more compelling (including doing nothing) I might give it a go. What's different now than a few years ago is how much I respect serious time involvement. Anything I decide is worth trying to learn is something I need to feel capable of dedicating serious energy to, at least in the first year; if I can't or don't want to, then maybe I won't, and I shouldn't fool myself into thinking I should or will, because I have other things going on. If I'm going hiking, that's my day, that's it, that's the whole activity, if anyone wants to join me then that's great, they need to accept the same mentality or they can stay home. If we happen to get back before the bars close, then that's great too. I might find Swahili interesting too, and if it seemed worth trying, I'd dive in on the basis that I'd just get a sense for how a different language works, and that there might be something surprising along the way, which to me is inherently valuable.

14 days ago

brailsafe

How boring would life be if you knew at the outset how much pleasure you'd get out of any given thing? Goals are nice, but not everything should have a goal attached, and not all pleasure should be attached to goals.

10 days ago

danparsonson

> ...that our time here on earth is a gift that we squander at our peril.

Yes but - it's up to each individual to decide on their own definition of 'squandering'. Ultimately everything we do is in service of our own search for meaning in life, and learning for its own sake can absolutely fulfil that role.

16 days ago

srfwx

Reminds me of this song: https://youtu.be/9X_o_BAUJ-c

15 days ago

guilamu

This is very rare, so much so that I can't remember the last time it happened, but I was inspired by your words.

Thank you for writing them.

16 days ago

rkp8000

MSR has a very clear and accessible tutorial on quantum computing for anyone interested in getting up to speed with the fundamentals: https://www.youtube.com/watch?v=F_Riqjdh2oM .

16 days ago

sebastiennight

> things that you might be able to do, like [...] coming up with witty banter in social situations

Well... guess it's time I start learning quantum computing then

16 days ago

samplatt

As an autistic that brute-forced the "witty social banter" skill early on and has recently turned 40... I kinda wish I'd learned quantum computing instead tbh.

16 days ago

marcellus23

any tips?

16 days ago

samplatt

Read a lot of popular books for conversational ideas, follow the news for topical application of them, and attempt cognitive & emotional empathy for per-person personalization of the application.

Simple in theory, juggling plates and knives in practice...

16 days ago

namaria

Mirroring goes a long way.

16 days ago

hackernoops

Reddit refugee I take it?

16 days ago

samplatt

Technically yes, but I've been here for a few years. Why?

16 days ago

nsxwolf

I have a treadmill though. Even though I don't use it. I can't get a quantum computer.

16 days ago

Bjartr

Here's a free quantum treadmill

https://www.quantumplayground.net/#/home

You don't need a "real" quantum computer to mess around with quantum computing and learn how it works any more than you need a supercomputer to play around with algorithms and learn how they work.
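
A minimal sketch of that idea in plain numpy, no quantum hardware or SDK required: a qubit is just a unit vector in C^2, and a gate is a 2x2 unitary matrix.

    import numpy as np

    # a qubit is a unit vector in C^2; gates are 2x2 unitary matrices
    ket0 = np.array([1, 0], dtype=complex)        # the |0> state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0             # put the qubit into equal superposition
    probs = np.abs(state) ** 2   # Born rule: measurement probabilities
    print(probs)                 # [0.5 0.5], i.e. a fair quantum coin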

16 days ago

owlmirror

Most experts in that field do not have access to a quantum computer. For the longest time it was a very theoretical field. Having access to a physical machine will not help you with 99% of the knowledge you can acquire in that field right now.

16 days ago

desdenova

Everyone forgets people have been doing quantum computing research for decades.

Shor's algorithm is from 1994.

16 days ago

trhway

I think you can. A very simple one: an optical one, with one qubit. Several thousand dollars of equipment, typical of a university photon quantum entanglement lab.

https://en.wikipedia.org/wiki/KLM_protocol

16 days ago

Keyframe

Not a computer, but you can definitely get a quantum devkit (an "emulator") and dabble with it.

16 days ago

lapumawan

How can you get it?

16 days ago

porridgeraisin

Also, Azure lets you run quantum code.

You can in general start with these search keywords: qiskit caterpillar yosys.

16 days ago

bodine30

Thank you for adding such a simple, positive, growth-mindset message to the internet. This instantly brightened my day and compelled me to write what might be my first ever comment on this site. idk if I can understand quantum computers, but I can climb mountains and make excellent sourdough pizza and that's not something I could say a year ago. Mostly, I'm just happier than I was 5 minutes ago and you did that!

15 days ago

absoluteunit1

Clicked on this article to learn a bit more about this newly released chip but instead got extremely motivated to pursue intellectual interests intensely

16 days ago

smellybigbelly

What books will get me started into understanding recent scientific publications? I’m interested in the theoretical side but even more in the engineering/hardware side.

15 days ago

Cthulhu_

Same, but kind of; I'm so far removed from higher up engineering stuff like quantum stuff, nuclear fusion stuff, LHC stuff, astronomy stuff, AI stuff that I just scan it, grab a coffee, raise an eyebrow and go "Interesting", then go about my day and wonder what the fuck I'm supposed to be doing at work again. Oh right, implement a component, same thing I've been doing for the past decade or so.

Thing is, I don't know how to get out without, on the one side, giving up my comfort zone (well paid, doable work), and on the other side gaining responsibility / being looked at as an expert in some field (that's where impostor syndrome and responsibility aversion come in). I really need a holiday lol.

16 days ago

hellojebus

This was me yesterday after reading the official Willow release.

Spent yesterday afternoon and this morning learning what I could. I'm now superficially familiar with quantum coherence, superposition, and phase relationships.

In other words, you got this. Now I gotta learn linear algebra. brb.

16 days ago

billti

I gave a plug for this yesterday, but if you want to try the Quantum Katas from Azure Quantum, it runs in the browser and covers this stuff. See lesson 3 for linear algebra. <https://quantum.microsoft.com/en-us/tools/quantum-katas>

One thing I did forget to mention is that you can play with this stuff in a "code first" way familiar to software developers in our VS Code playground at <https://vscode.dev/quantum/playground/>. The playground is pre-populated with a bunch of common quantum algorithms.

You can also install the extension in VS Code directly (<https://marketplace.visualstudio.com/items?itemName=quantum....>); you don't need to run it in the browser, but even in the browser it has a fully working language service, debugger, evaluator, quantum simulator, package management, etc. It's all written in Rust and compiled to either Wasm for the browser and VS Code extension, or native code for the Python package. (I'm thinking about doing a video on how we build it, as I expect it will be interesting to this type of crowd. Let me know if so.)

Disclaimer: I work in Azure Quantum on the product mentioned. AMA.

16 days ago

MicrosoftShill

Hey there - professionally I'm a sr cloud engineer focused on Azure. I have an interest in quantum as a hobbyist and maybe a career focus in coming years/decades. Katas seems like a good place to learn things, but if you were to give a 5 year outlook, what specifically should I be looking into? Is there something as an administrator/architect of these quantum products that I should focus expertise on? Or something in the Azure/O365/etc ecosystem I should be looking at that leverages these technologies? If I was to become a consultant, what would be my focus as an Azure cloud engineer in relation to quantum technologies?

I'm unsure I am even asking the right questions. I'd appreciate any direction you can give me!

15 days ago

billti

Sorry for the delay in replying...

What do you hope to be doing in 5 years? Architecting quantum solutions? Reselling or consulting on cloud solutions? Building quantum applications?

I think the quantum space will have quite a bit of progress in 5 years, but I think most experts in the space (of which I'm NOT one) think we're still over 5 years out before there's broad adoption of running quantum programs with significant business value (i.e., it'll still largely be researchers and bleeding-edge adopters).

Opinions are my own, etc.

10 days ago

CSMastermind

> Now I gotta learn linear algebra.

Linear algebra does seem to be a hard wall I've seen many smart software engineers hit. I'd honestly love for someone to study the phenomenon and figure out why this is.

16 days ago

bloomingkales

Curriculum pacing? Linear Algebra may be showing up at the wrong time during the 4 years.

16 days ago

fromMars

Linear algebra is a relatively straightforward subject. I won't say easy, because there are bits that aren't, and I struggled with it when I was first exposed to it in college. But when I revisited it in graduate school, I didn't have a problem.

So, I really agree that it's about timing and curriculum. For one, it appears somewhat abstract until you really understand geometrically what's happening.

So, I surmise that most non-mathematicians don't have quite the mathematical maturity to absorb the standard pedagogy when they first approach the subject.

16 days ago

hedvig23

Why isn't the "standard pedagogy" always evolving and improving? Isn't that what pedagogy is?

15 days ago

[deleted]
16 days ago

anon291

It seems wild to me since linear algebra is at the heart of basically all modern mathematics (alright maybe not all but so many things exhibit linear behavior, that it's still useful). Don't most CS majors require some math today? Linalg is like first year college, right?

16 days ago

Traubenfuchs

> Linalg is like first year college, right?

Yes, but most people, like me, never gain even a remotely intuitive understanding of it and quickly forget it after passing a few dreadful exams. Just like with most other math.

15 days ago

RockRobotRock

I enjoy engineering. I hate math.

16 days ago

jackphilson

I would recommend mathacademy.

16 days ago

simpaticoder

We are all small in every domain in which we are not an expert, which is approximately all of them. The "computer" domain has expanded wildly over the last 50 years to include so many specialties that even within it one cannot possibly acquire expertise in everything. And of course "computers" do not include (although they do impact!) the vast majority of domains.

If you want to go deeper into quantum computing, I can highly recommend Scott Aaronson's own book, "Quantum Computing since Democritus"[0]. Although I have a background in physics and math, I found his style lively and engaging, with truly unique and compact recapitulations of things I already knew (two come to mind: his description of Cantor's diagonalization argument, and the assertion that QM is the natural consequence of "negative probabilities" being real. The latter is a marvelous insight that I've personally gotten a lot of use out of).

It's also useful to understand the boundary of what quantum computing is. At the end of the day what we'll see are "QaaS" APIs that give us the ability to, for example, factor products of large primes. You won't need to know Shor's algorithm or the implementation details; you'll just get your answer exponentially faster than the classical method. I would not expect desktop quantum computers, languages designed for them, or general user software designed to run on them. (Of course, eventually someone will make Doom run on one, but that's decades in the future.)

https://www.alibris.com/booksearch?mtype=B&keyword=quantum+c...

16 days ago

owenfi

Also good starting places (but still, only understood a tiny bit of what was there).

https://podcast.clearerthinking.org/episode/208/scott-aarons...

https://quantum.country

16 days ago

benreesman

I’d urge you to not feel small.

First of all, the formalism/practice gap is real: taking API calls and updating a database correctly has a mountain of formalism around it. And it is not easy to get right! Communicating sequential processes, distributed systems theory, and a bunch of other topics carry a huge formalism. It is also the case that many (most?) working software engineers have internalized much of that formalism: they “play by ear” rather than read sheet music, but what matters is whether it sounds good.

Second, whether it’s quantum computing or frontier machine learning or any other formalism-heavy topic? It’s eminently possible to learn this stuff. There’s a certain lingering credentialism around “you need a PhD” or whatever, I call BS: this stuff is learnable.

Keep hacking, keep pushing yourself on topics you’re passionate about, but don’t consign yourself to some inferior caste. You’re just as likely as the next person to be the next self-taught superstar.

16 days ago

rhubarbtree

The theory of quantum computation is accessible with a good understanding of linear algebra. Anyone working in machine learning or computer graphics should feel encouraged by that, and take a look at the theory.

Quantum error correction is one of those “wow” moments, like Euler’s identity; it is worth making the effort to get there.
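
If you want a small taste before committing, here's a toy sketch of the 3-qubit bit-flip repetition code in plain numpy (the amplitudes are chosen arbitrarily for the demo); the parity checks identify the flipped qubit without destroying the superposition:

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])   # bit-flip error / correction
    Z = np.diag([1.0, -1.0])

    def kron3(a, b, c):
        return np.kron(np.kron(a, b), c)

    # encode a|0> + b|1> as a|000> + b|111>
    a, b = 0.6, 0.8
    basis = np.eye(8)
    logical = a * basis[0] + b * basis[7]   # |000> is index 0, |111> is index 7

    # an unknown bit-flip error hits the middle qubit
    corrupted = kron3(I, X, I) @ logical

    # syndrome: parity checks Z1Z2 and Z2Z3; both branches of the
    # superposition agree, so the error is revealed without collapse
    s1 = corrupted @ kron3(Z, Z, I) @ corrupted
    s2 = corrupted @ kron3(I, Z, Z) @ corrupted
    print(s1, s2)   # -1.0 -1.0  =>  both checks fail  =>  middle qubit flipped

    # apply the correction and recover the encoded state exactly
    recovered = kron3(I, X, I) @ corrupted
    print(np.allclose(recovered, logical))   # True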

16 days ago

arrowsmith

I feel like I have a fairly good understanding of linear algebra, or at least I did when I studied it at uni but that was a while ago.

Any suggestions for where I can learn more about the theory of quantum computation?

16 days ago

rhubarbtree

The classic is "Quantum Computation and Quantum Information" by Nielsen & Chuang.

13 days ago

acscott

e^(iπ) + 1 = 0: e, i, π, 1, 0. Beauty.

16 days ago

alanfalcon

e^(iτ) = 1; true beauty.
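
Both identities check out numerically in a couple of lines of Python, floating-point noise aside:

    import cmath

    print(cmath.exp(1j * cmath.pi) + 1)   # ~0, plus float rounding noise
    print(cmath.exp(1j * cmath.tau))      # ~1, the tau version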

16 days ago

vishnugupta

A few years ago I made peace with the fact that my space of ignorance is humongous and will only get exponentially bigger, even in the domain of my own work, which is software engineering. It liberated me from the pressure or burden of having to learn or know everything and enabled me to focus on things that I truly like to pursue.

I’ve made a list of 4-5 things I want to be extremely good at in 10 years compared to where I’m today. Now I just spend time on those. I occasionally wander into something new just for the sake of diversion.

16 days ago

ojbyrne

As with most novel hardware since the dawn of computing, there are ways to emulate it.

https://www.ibm.com/quantum/qiskit
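
For example, a Bell state on the simulator takes a handful of lines (assuming qiskit is installed; API as of recent versions):

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    qc = QuantumCircuit(2)
    qc.h(0)      # superposition on qubit 0
    qc.cx(0, 1)  # entangle qubit 1 with qubit 0

    print(Statevector.from_instruction(qc).probabilities_dict())
    # {'00': 0.5, '11': 0.5}: the two measurements always agree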

16 days ago

bigiain

Damian Conway did it in Perl back in the late 90's:

https://metacpan.org/pod/Quantum::Superpositions

(Sadly, the Perl module has to do classical calculations underneath to get the quantum computing code/functions to execute, but it let you experiment with silly little QC toys - "Look, I factorised 15 in parallel!!!")

16 days ago

genewitch

They buried the lede: Google doesn't have 2 logical qubits to rub together. 105 physical "qubits" make a "single" logical qubit after coalescing or whatever. I'm really annoyed because I've kinda followed this since the mid-90s and this is the first time I am hearing that "it'll probably take millions of physical qubits to crack 256 bit".

To me the whole endeavor smells like a bait and switch or something. I remember about 10 years ago Canada or someone had at least a few hundred qubits, if not close to 1000 of them, but these were physical qubits and don't represent anything, really. Google's 105 finally make a "fast enough" single qubit, or at best half of a pair.

16 days ago

namaria

Silicon Valley figured out decades ago that the trick to keeping R&D dollars flowing is to balance on the verge of getting amazing results (AI/AGI; defeating cryptography) while never proving these efforts fruitless. The end result need only be plausible and fantastic at the same time.

16 days ago

random3

As hard as it may seem to you to tackle that, it's harder to convince others like you that tackling it can be child's play. Not just QM/QC (which, btw, is being successfully taught to high schoolers) but any "advanced" topic. I hope we'll be able to look back and laugh at how backwards education was "back in the day", and how dumb people were to think that some were an "elite few", while the reality is that the "elite few" were just the "lucky few" who were not deprived of learning to think, whether by having the right people around them or the right context to find it by themselves.

16 days ago

voidhorse

Totally. Education across the globe is currently severely suboptimized. At this point there are troves of research on what actually works when it comes to learning, yet the educational systems that implement these techniques are still incredibly scarce.

16 days ago

deprecative

Most education systems, particularly within the US, have very little interest in educating for critical thought and reasoning. They exist as both day care and labor exploitation pipelines.

16 days ago

random3

This (and many other things) seems more like a natural effect than something intentional. Are you suggesting something different?

14 days ago

UltraSane

When Corridor Digital was analyzing the really impressive time-spaghetti effect in Loki season 2, they said "there are two kinds of CGI artists: the kind that use the buttons and the kind that program the buttons."

16 days ago

kmarc

You captured very well my sentiment. Also same feelings for AI.

I'm wondering if it's time for me to switch professions and give up compsci / software altogether.

16 days ago

29athrowaway

The complexity in AI, for common use-cases, is well encapsulated right now.

Many pre-trained models and libraries that hide most of the complexity.

16 days ago

kmarc

Indeed. That's my understanding, too.

However, my sentiment is rather this: I wouldn't pass an assembly programming interview, but I know enough about it so that I know what I don't know. Same with embedded programming, fpgas, machine learning stuff, big data, networking, etc etc.

As for LLMs and quantum computing, I don't even know the basics, have no idea about the broader science behind it. Worst is that I don't feel like it interests me, I don't feel excited about it.

I guess if tomorrow I had to work with them, I could learn some "libraries that hide the complexity", but it leaves me with an empty feeling about these new technologies. Hence the existential question if I'm "too old for this" career path at all.

15 days ago

mike_hearn

Nah. You don't have to feel excited about every single new bit of tech that comes along.

About 15 years ago I became interested in really advanced cryptography, because it was presented at a Bitcoin conference I went to. If you think AI is hard, that's kindergarten stuff compared to the maths behind zero knowledge proofs. And because nobody cared at that time outside of a handful of academics, there were no tutorials, blog posts or anything else to help. Just a giant mound of academic papers, often undated so it was hard to even figure out if what you were reading had been superseded already. But it seemed important, so I dived in and started reading papers.

At first, maybe only 5% of the words made sense. So I grabbed onto those 5%. I read a paper, put it down for a while, re-read it later and found I understood more. I talked to the researchers, emailed them, asked questions. I read the older papers that initiated the field, and that helped. It was a lot of work.

You know what? In the end, it was a waste of time. The knowledge ended up being useful primarily for explaining why I wasn't using those algorithms in my designs. A lot of the claims sounded useful but ended up not being so for complicated reasons, and anyway, I was mostly interested in what you could do with the tech rather than the tech itself. Turns out there's always a small number of people who are willing to dive in and make the magic happen in a nicely abstracted way for everyone else, for any kind of tech. QC is no different. There's, as far as I can tell, very little reason to learn it. If QC does ever "happen" it'll presumably 95% of the time be in the form of a cloud service where you upload problems that fit a quantum algorithm worked out by someone else, pay, and download the answer. Just like LLMs are - another topic where I was reading papers back in 2017 and that knowledge turned out to not be especially useful in regular life.

Learn the details of stuff if it naturally interests you. Ignore it if it doesn't. Being a specialist in an obscure domain can occasionally be like striking the jackpot, but it's rare and not something to feel bad about if you just don't want to.

15 days ago

numpad0

It's also way harder to make money doing "not laughably childish" stuff. The more accessible and human-connected it gets, the more likely people are to recognize and pay you. People criticize yes-men getting rewarded, but you have to be nearly clinically insane to recognize the value of a no-machine like a partial prototype quantum supercomputer.

16 days ago

weatherlite

> I can't even imagine why I should bother trying to understand it

Why should you? I agree with your sentiment; super advanced quantum physics is probably out of reach for 99% of the population (I'm estimating here, but I think it's reasonable to assume the physics PhDs who can actually understand this stuff at a deep level sit around the top 1% of IQ). You can probably make the effort to understand something about what's going on there, but it will be very superficial. Doing advanced quantum physics takes a huge amount of effort and an incredible capacity for learning complex things. And even the advanced physics guys don't and can't understand a bunch of very elementary things about reality, so it's not as if the feeling of not understanding stuff ever goes away.

16 days ago

n4r9

To be fair you don't need to know advanced quantum physics to understand a lot of quantum computing, including Aaronson's discussion of this paper. It's pretty accessible with undergraduate-level linear algebra and computer science concepts.

16 days ago

Sakurai_Quantum

We recently launched our self-paced, mathematically rigorous course on quantum computing! This comprehensive introduction is designed to help you grasp foundational concepts like superposition and entanglement with mathematical clarity and depth.

Here’s a sneak peek from Lecture 6: https://youtu.be/6rf-hjyNl4U

You can sign up via: https://quantumformalism.academy/mathematical-foundations-fo...

16 days ago

TacticalCoder

> Being a "software engineer" consuming APIs and updating database rows ...

> Only an elite few get to touch these machines.

But lately many can run quite a lot of AI models at home. Doesn't require too crazy of a setup.

Why not build something fun in software at home that doesn't involve a DB? Maybe using some free AI model?

I experimented lately: automatically "screenshot" a browser, have an AI find the URL, and ask whether the URL and site look like a phishing attempt or not. Fun stuff (and it works).

I tried installing one of these "photo gallery" apps in a Docker container (where you can put all your family/travel pics and let anyone on your LAN [or on the net] browse them). I saw some of these have "similarity search" features. I also saw that SAM / SAM2 (Meta's Segment Anything Model) was plenty quick: some people are using these to analyze video frames in real time. So I was thinking about sending all my family pictures through SAM2 (or a similar model: I saw some modified SAM2 to make it even faster) and then augmenting the "similarity search" by using the results of SAM2. For example, finding all the pictures with a "pool", etc.

And why limit myself to pictures? I could do family vids too: "Find all the vids where that item can be seen".

Possibilities at the moment seems endless: times are exciting if you ask me.

15 days ago

compumetrika

I haven't tried this yet myself, but have you tried plugging it into GPT or Claude or Perplexity and asking Qs? I've made progress on things this way, much faster than I would have the usual way. Apply the usual precautions about hallucinations etc. (and maybe do the "ask multiple AIs the same thing" thing). We don't yet have a perfect tutor in these machines, but on balance I've gained from talking to them about deep topics.

16 days ago

ericmcer

Some people say higher education is a privilege, but a few times during college while grinding out difficult classes it felt more like a huge burden.

Being part of a highly educated elite group like that has some huge benefits, but they have also shackled themselves to an insanely specialized and highly difficult niche. I can't imagine the stress they are under and the potential for despair when dedicating years to something with so much uncertainty.

16 days ago

gerdesj

Start with the handy precis that Mr A leaves at the top in yellow. You can drop that into conversation right now with some confidence! Mr A has quite some clout hereabouts let alone elsewhere, so that seems reasonable.

You can't know everything. Knowing what you don't know, when to rely on someone else to know what you don't know, and how to confidently quote or use what they know that you don't, is a skill too.

"Critical thinking" - and I suspect you do know how to do that. Whilst this blog post might be somewhat impenetrable, it still might be useful to you, even as just general knowledge. Use your skills to determine - for you and you alone - whether it is true, false or somewhere in between.

Besides, a mental work out is good for you!

16 days ago

matthewdgreen

For what it’s worth, Scott spent the last few years working for OpenAI because he wanted to do something much more applied than quantum information theory. As an applied scientist I’m aghast: he’s one of the few people who actually gets to “touch the firmament of the Universe”; why would you ever give that up for mere applied science (even science as interesting as whatever goes on inside OpenAI)? :) But as a human being I understand it. The grass is always greener somewhere else, and doing tangible things is sometimes more fun.

TL;DR: whatever you’re doing, there’s probably someone who wishes they were doing it instead.

15 days ago

sourcepluck

People feel like this about domains they don't know all the time. Don't allow your ignorance of an area to trick you into thinking you can't learn that area. It's just new to you!

Source: teaching beginners piano for years. Of course sheet music looks like gobbledygook, until you've spent a bit of time learning the basic rules!

16 days ago

lr4444lr

To be fair, probably everyone who "touches these things" had extensive graduate work on the topic, or put in a lot of long years toiling away at far less exciting and remunerative grunt work. They weren't just super bright CS undergrads who got entry level jobs you couldn't.

16 days ago

illwrks

I know absolutely nothing about quantum, but I did watch this Bloomberg video this week, which was very helpful for understanding it in layman’s terms.

https://youtu.be/1_gJp2uAjO0?si=--XzbVlAT3w9hw2L

16 days ago

riteshpatidar

You can also look for Veritasium's videos on quantum computing, if you are a fan of his videos.

16 days ago

krisgenre

Thank you. Seeing this as the top comment actually makes me feel better as I now know that I am not alone.

16 days ago

sleepybrett

> Only an elite few get to touch these machines.

For now. People were saying the same thing about mainframes in the 60s.

16 days ago

fossuser

I think the jargon makes it seem scarier/less approachable than it is. There's tons of jargon in every new branch of anything interesting that you first notice, but it's not as impenetrable as it appears (at least for a high level understanding imo).

16 days ago

[deleted]
16 days ago

bn-l

I had exactly the same thought. I read it after struggling for hours, making notes, etc, with a very hard (to me) part of my code and it made it seem pretty trivial.

16 days ago

nicebyte

if it makes you feel any better, it is almost impossible to do anything practically useful at all with quantum computation at the moment. this is a field for researchers, people who are okay with not seeing the fruits of their work materialize for decades, if ever. by not understanding it, you're not missing anything more than what you're missing by not understanding e.g. string theory.

16 days ago

lbltavares

Not even Einstein got it, you're fine

15 days ago

bamboozled

The machines you program today were once only accessible by "elites" too though?

16 days ago

gist

Anything that someone else does that you don't understand or can't do always sounds super important, impressive, and enviable. Usually. But you have to realize that he spends his life doing and thinking about this. And I will stipulate he's gifted in this area. I've never heard of him but managed to download his CV.

I will also note that sometimes I read HN links and think one thing, then read the comments, and people know enough to take issue with what is being said and even call it out.

16 days ago

29athrowaway

AWS Braket is on the AWS Free Tier.

16 days ago

renegade-otter

And yet it pays the bills and runs the world, doesn't it.

React 19 is out and it's time to hit the docs, eh, to solve the same exact problem we've had for the last 30 years in a slightly better (or NOT) way.

16 days ago

bradleyjg

The problem it solved would take a septillion years to do on a conventional computer but no one other than a quantum researcher cares about that problem.

How about solving a problem someone who's not a quantum researcher would care about? Give me traveling salesman with n=10. Or factor a 10-digit number. Something.

Until then quantum computers are in the same category as commercial fusion. Long on “breakthroughs”, zero on results.

Look at cancer researchers for a nice contrast. The annual number of “breakthrough that could cure cancer!1!” announcements has dropped to near zero, while steady, real progress is being made all the time.

16 days ago

bloomingkales

> but no one other than a quantum researcher cares about that problem.

The better question may be why the rest of us don’t care about the same things. If we sat here in 2014 and pondered why neural net folks cared about particular problems, I’m not sure where we’d be. Faith and Vision are truly spiritual things, even in tech.

16 days ago

llm_trw

By 2014 NN folks, of which I was one for a decade by that point, were solving real world problems at above human capabilities.

We'd been managing around human capabilities since around 2004 in certain tasks.

There were actual industrial applications in 1994.

You'd have to go back to the 1950s to find the type of research in neural networks that was of no practical application to anyone but NN researchers.

16 days ago

gedpeck

So go back to the 1950s and the point stands. What does it matter when the actual start date is in terms of the point being right or wrong?

16 days ago

sampo

> So go back to the 1950s and the point stands.

Yes. "Quantum computers will revolutionize computing by year 2100" is a claim I can take seriously. "Quantum computers will revolutionize computing real soon now" is a claim I am not taking seriously.

And yes, the 1950s AI researchers also boasted that it would take only months, what ended up taking many decades: https://en.wikipedia.org/wiki/Dartmouth_workshop

16 days ago

llm_trw

So quantum computers will be relevant sometime around 2075.

How much money should we be putting into something with a time horizon longer than the working careers of everyone here?

16 days ago

tucnak

Nobody wants your money

16 days ago

llm_trw

I'd like a rebate on my taxes then.

16 days ago

jayd16

Well I'll be dead by that timeline so it matters a little.

16 days ago

akira2501

> I’m not sure where we’d be.

They didn't solve any new problems. They took old ideas and just cranked the dial all the way up to 11. Then an entire market formed around building the hardware necessary to crank it even further. This all represents a revolution in computing power not in what neural net folks were doing at all.

> Faith and Vision are truly spiritual things, even in tech.

Yea, but money isn't.

16 days ago

clarionbell

Because it's all just a scaled-up perceptron, isn't it?

The reality is that we had to solve the vanishing/exploding gradient problem. That wasn't accomplished by getting faster compute, but by devising better architectures and using activation functions with the properties we needed.

That's how ANNs got out of the "somewhat useful" into "really useful" categories. Hardware helped move it even further. But without the work of computer scientists and mathematicians we would never get to the point when the hardware matters.

16 days ago

llm_trw

Even in the bad old days you got your golden tickets every so often.

I was working on an alternative approach using much smoother activation functions at much higher precision: 64 bits in production using GPUs, and 128 in testing using CPUs. It worked well enough that I don't think the field would have slowed down much. The only issue is that we'd be working on networks that are 10 times smaller.

15 days ago

kortilla

Because when a new technology is demonstrated with a really esoteric use case, it comes off as being useless for anything more practical.

16 days ago

wslh

You are engaging in some well-known logical fallacies, particularly hindsight bias. Linking a past event to an outcome we now know often ignores the many other past events that had no clear future or resulted in nothing significant. It is also important to highlight that this comment is not against quantum computing.

15 days ago

socksy

Well, yes, obviously. Since we can't predict the future, we can't know which wild bets are gonna pay off; the idea is to expose yourself to as many as possible, rather than cut yourself off and say "aha, but that's the hindsight bias fallacy!". And this one in particular has a proven use case — Shor's algorithm, given a large enough number of logical qubits, can be used to break all existing RSA-based encryption.

Also, I'm really struggling to identify with the naysaying around this thread. Every year we improve the error correction (bringing down the number of physical qubits needed to represent a logical qubit) and increase the number of qubits. So what exactly is the worry here? That the secrets the government has dragnetted won't be relevant anymore by the time we crack it, due to the prevalence of Kyber?
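
For intuition on why Shor matters: the number theory at its core is tiny, and only the period-finding step needs a quantum computer. A sketch with the period brute-forced classically for N = 15:

    from math import gcd

    # if r is the (even) period of a^x mod N, then gcd(a^(r/2) +- 1, N)
    # exposes a factor; the quantum part is finding r fast for huge N
    N, a = 15, 7
    r = next(x for x in range(1, N) if pow(a, x, N) == 1)          # r = 4
    print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # 3 5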

15 days ago

mplewis

Because I have shit to do and this doesn’t relate to any of that shit?

16 days ago

bradleyjg

It’s a question of overpromising and underdelivering. I’m not anti science. The scientists should definitely go forth and science. It’s the marketing they are bad at. The boy who cried wolf is not a training manual.

16 days ago

samus

Scientists have to get good at marketing because they will not get funding otherwise. They have to pitch their research to an interested panel though, whose members have background knowledge. This means that they can only bend the truth and exaggerate the potential impact of their research so far.

Popular science magazines are quite different. They publicize for views, and nobody cares how accurate they are.

Presentations of a public company are somewhere in between. The target audience is not nearly as qualified as a scientific grant panel, but they can sell shares or even sue the company if it underdelivers on its promises.

15 days ago

kccqzy

Google says the next step is to find a problem with real life application. From https://blog.google/technology/research/google-willow-quantu...

> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application.

16 days ago

simpaticoder

I always assumed the obvious application was for Google to make a QaaS (Quantum as a Service) offering for factoring products of large primes. One obvious big customer would be intelligence agencies, especially the NSA.

16 days ago

ilya_m

The only problem with this business model is that once you've factored one number, you kill your market: people will stop using pre-quantum crypto. (The obvious retort is that the NSA would have harvested a ton of RSA/EC-encrypted traffic by then and would keep cracking ciphers going back decades. Unfortunately, old secrets are a rapidly depreciating asset class.)

16 days ago

red_admiral

If you can get a quantum computer to take discrete logarithms on secp256k1 or mess with SHA256, then you can either get $$$ very quickly or you nuke almost the entire crypto market. For the former, you'd have to keep your discovery secret but just be unusually good at minting new coins.

Getting crypto coins to move over to post-quantum seems to me to be a much harder problem than e.g. rushing out a new version of TLS or SSH.

The key to Satoshi's original coins is a rapidly _appreciating_ secret at the moment, but paradoxically also one that might immediately crater if someone actually discovers a generic way to break the crypto involved.

I'm not an expert on this angle of things but: as far as I know, Shor's quantum algorithm breaks both RSA (factoring) and DSA (finite-field discrete logarithms). But I'm not sure if it works the same way against elliptic curves - or at least you'd probably need a bigger computer to attack the same level of security.

It's not clear to me whether a quantum computer could effectively attack SHA256, either: Shor definitely does not help, and Grover cuts the search space from 256 to 128 bits, but that's still not practical to iterate over.
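
A back-of-the-envelope check, assuming a wildly generous (and entirely made-up) rate of 10^12 coherent Grover iterations per second:

    # Grover turns ~2^256 classical guesses into ~2^128 quantum oracle calls
    calls = 2 ** 128
    rate = 10 ** 12                # iterations/second, assumed for scale
    year = 3600 * 24 * 365
    print(calls / rate / year)     # ~1.1e19 years; still hopeless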

16 days ago

kevvok

> But I'm not sure if it works the same way against elliptic curves - or at least you'd probably need a bigger computer to attack the same level of security.

Elliptic curve cryptography is also based on the difficulty of computing discrete logarithms, which makes it vulnerable to Shor’s algorithm. Unfortunately, while the increased difficulty of brute forcing ECC with a classical computer allowed it to use smaller key sizes to achieve security equivalent to older algorithms like RSA, the smaller key sizes make ECC attackable with fewer qubits.

11 days ago

r0m4n0

Partially true. If you copy a ton of encrypted data today, you can maybe decrypt it tomorrow

16 days ago

fooker

That's why it won't be a public market.

16 days ago

thehappypm

Can you imagine? Send us a number, we send you the factors. And a bill. With lots of zeroes

16 days ago

sampo

> I always assumed the obvious application was for Google to make a QaaS (Quantum as a Service) offering for factoring products of large primes.

We are decades and decades away from the technology to be able to do that. The problem for Google is trying to find a practically useful use case a bit sooner than that.

16 days ago

cwillu

This will never be relevant for solving travelling salesmen problems.

More relevantly, that line of experimentation is specifically to refute the notion that something unexpected will happen with the physics to break the computational scaling.

Nobody with any credibility is claiming that the current experiments are useful for anything practical.

16 days ago

Ntrails

> The problem it solved would take a septillion years to do on a conventional computer

And more importantly, the solution is not actually verified in any way. It might be wrong.

I don't understand (as a complete layman) why the milestone isn't something classically hard but easily verified (in line with you). I feel like it's weird because people have spent a lot of time telling me how trivially quantum computing will break encryption that defeats normal computers.

15 days ago

stracer

> why the milestone isn't something classically hard but easily verified

They're far from being able to do any such thing, so posing such a milestone would make the field look stagnant, which would be bad marketing and hurt the money stream. The "milestones" are chosen to be plausibly reachable in a short time relevant to patrons/investors, because people working on this need to constantly demonstrate progress.

15 days ago

dividedcomet

I feel like the problems airlines face when not running a hub-and-spoke model would be a good market for quantum computing? Could be totally wrong though. Lots and lots of variables and permutations and options to sift through.

15 days ago

cooper_ganglia

>Until then quantum computers are in the same category as commercial fusion.

That category being: "Technologies We Will Have Unlocked Within 10 Years"

15 days ago

bradleyjg

Are you a poster on Slashdot in 1999? Because you are saying the same thing they did.

15 days ago

winwang

Note there's a caveat: problems in CS can be reduced to other problems in CS. If we solved SAT, well, no one cares about SAT, but traveling salesman obviously reduces to that.

(disclaimer: I don't think that's what is going on here, I'd have to dig into it more)

16 days ago

bradleyjg

They didn’t solve any kind of CS problem. As far as I can tell the problem they solved is “what is this complicated quantum system going to do” by building the complicated quantum system and seeing what it did.

Then they claim it would take a gazillion years to simulate on a conventional computer. Which I’m sure is true.

16 days ago

thaumasiotes

> If we solved SAT, well, no one cares about SAT

Really? SAT is the question "I have a set of constraints. Is it possible to obey all of them at once?"

If it were impossible to use that question to answer any other questions, I'm pretty sure there would be a lot of interest anyway.

It's kind of like how a lot of people care about the determinant of a matrix, which is exactly the same question set against a much more restrictive set of possible constraints.
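
To make the question concrete, here is a toy brute-force satisfiability check in Python (the formula is made up for the example):

    from itertools import product

    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3),
    # each clause a list of (variable index, required value) pairs
    clauses = [[(0, True), (1, False)],
               [(1, True), (2, True)],
               [(0, False), (2, False)]]

    def satisfiable(clauses, n):
        # try all 2^n assignments; fine for toys, hopeless at scale
        for bits in product([False, True], repeat=n):
            if all(any(bits[i] == v for i, v in c) for c in clauses):
                return bits
        return None

    print(satisfiable(clauses, 3))   # (False, False, True)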

16 days ago

fennecfoxy

I don't think they're building them for people like you, yet. Why does everyone think of anything that's announced or shown as if it's a commercial product on a shelf and marketed at them.

15 days ago

bradleyjg

Because the ceo of a public company tweeted about it. If it was a professor at mit people wouldn’t jump to that conclusion.

15 days ago

munchler

The argument in favor of the Everettian multiverse (“where else could the computation have happened, if it wasn’t being farmed out to parallel universes?”) seems illogical to me. Aren't these parallel universes running the same computation at the same time, and thus also "farming out" part of their computations to us? If so, it's a zero-sum game, so how could there be an overall performance gain for all the universes?

16 days ago

Filligree

That's not really how MWI works. There isn't some fixed number of universes; actually, there aren't really distinct universes (or timelines) at all. Attempting to count them is about like trying to measure the length of a coastline; they blend together once you zoom far enough in.

Running the quantum computer causes 'new timelines' to be created. Though so would ordinary atoms just sitting there; the tricky thing about quantum computers is making it so the split is temporary.

So the quantum computer gets split into multiple versions of itself, does some computation in each, and merges the results. This isn't map-reduce; there's a strictly limited set of ways in which you can do the merger, all of which are weird from a classical perspective.
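
For the curious, the "merge" step is just amplitude interference. A tiny sketch (illustrative only, using nothing but a Hadamard matrix):

    import numpy as np

    # One Hadamard splits |0> into two equal-amplitude branches; a second
    # Hadamard merges them, and destructive interference cancels the |1>
    # branch, so the result is |0> with certainty.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])

    split = H @ zero       # amplitudes [0.707, 0.707]: two "branches"
    merged = H @ split     # amplitudes [1.0, 0.0]: branches recombined
    print(split, merged)   # the |1> contributions cancelled exactly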

You can argue for MWI based on this, because the computations that got merged still had to happen somewhere. It's incompatible with Copenhagen, more so the bigger and longer-lasting the computation gets. It's not, strictly speaking, incompatible with pilot wave theory; but pilot wave theory is MWI plus an additional declaration that "Here, you see this timeline here? That's the real one, all the others are fake. Yes, all the computation needed to instantiate them still happens, but they lack the attribute of being real."

Though that makes PWT incompatible with computationalism, and hence with concepts such as mind-uploading. Which is a bullet you can choose to bite, of course...

16 days ago

adastra22

I don't think this is accurate. There is no classical computation happening here, and there is no reason you have to have "room" for the classical computation to happen. That seems to be assuming that the universe is a classical substrate and a classical algorithm has to be instantiated somewhere.

Quantum computing isn't classical computing. It isn't a Turing machine. It is a fundamentally different kind of information processing device, making use of non-classical physical phenomena.

I'm not a defender of Copenhagen, but the wave collapse interpretation has no difficulty explaining quantum computation. A quantum computer creates extremely large, complexly entangled wave functions that upon collapse result in states that can be interpreted as solutions to problems that were encoded in the sequence of operations that set up the entangled wave function. The Everett interpretation is easier to think about in my opinion, and I prefer thinking in terms of MWI when I try to make sense of these results. But it is not necessary.

Computer science is the study of universal Turing machines and their application. But Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic, and that can (we believe) be used to simulate anything we can dream up. But there are intrinsic performance limitations of Turing machines, studied by algorithmic theory, which are artifacts of the Turing machine itself, not physical limitations of the universe we live in. Searching an unordered list with a serial processor takes O(n) time, for example. Grover showed that there are non-Turing-machine quantum processes that could be used to perform the same computation in O(sqrt(n)) time. That doesn't mean we need to go looking for "where did that computation actually happen". That doesn't even make sense.
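
As a rough illustration of the Grover speedup (a toy statevector simulation, mine, not anything from the article):

    import numpy as np

    # Find a marked item among N = 2**n with ~pi/4*sqrt(N) oracle queries,
    # versus the ~N queries a classical unordered search needs.
    def grover(n_qubits, marked):
        N = 2 ** n_qubits
        state = np.full(N, 1 / np.sqrt(N))     # uniform superposition over all items
        queries = int(np.pi / 4 * np.sqrt(N))  # near-optimal iteration count
        for _ in range(queries):
            state[marked] *= -1                # oracle: phase-flip the marked item
            state = 2 * state.mean() - state   # diffusion: inversion about the mean
        return int(np.argmax(state ** 2)), queries

    print(grover(10, marked=737))  # (737, 25): ~25 queries where classical needs ~1024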

16 days ago

lisper

> Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic

That's not quite right. TM's in general are not universal. The "universality" of TMs has to do with the existence of a universal TM which is capable of emulating any other TM by putting a specification of the TM to be emulated on the universal TM's tape. The reason this matters is that once you've built a universal TM, you never have to build any more hardware. Any TM can then be emulated in software. That result is due to Turing.

The relationship between TMs and mathematical logic is of a fundamentally different character. It turns out that any system of formal logic can be emulated by a TM (and hence by a UTM), but that is a different result, mainly due to Godel, not Turing. There is also the empirical observation (famously noted by Eugene Wigner [1]) that all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically, and hence can be emulated by a TM, and hence can be emulated by a UTM. But it is entirely possible that a new class of physical phenomena could be discovered tomorrow that cannot be modeled mathematically.

But here's the thing: no one has been able to come up with any idea of what such a phenomenon could possibly look like, and there is a reason to believe that this is not just a failure of imagination but actually a fundamental truth about the universe, because our brains are physical systems which themselves can be modeled by mathematics, and that is (empirical) evidence that our universe is in some sense "closed" under Turing-equivalence (with quantum randomness being the lone notable exception). That is the kind of universality embodied in the idea that TMs can emulate "anything we can dream up". It's called the Church-Turing thesis [2] and, unlike the universality of universal TMs, it cannot be proven, because a counterexample might be discovered at any time.

[1] https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

[2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis

16 days ago

adastra22

I appreciate the detail, but for the record I don’t think we disagree. There’s a lot that I had to oversimplify in my already long comment.

16 days ago

Y_Y

> all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically

Of course you can model quantum measurements! And Wigner certainly knew so.

16 days ago

lisper

> Of course you can model quantum measurements!

No, you can't. You can statistically model the results of multiple quantum measurements in the aggregate but you cannot model the physical process of a single measurement because there is a fundamental disconnect between the physics of quantum measurements and TMs, namely, TMs are deterministic and quantum measurements are not. There is a reason that the Measurement Problem is a thing.

16 days ago

Y_Y

A statistical model is still a model. Lots of physics works that way. Newtonian mechanics might be deterministic, but because you often don't have perfect information on the initial state, that's not a useful model.

For example in statistical mechanics you work with ensembles of microstates. That doesn't mean thermodynamics is fake and only F=ma is real. Models are tools for understanding the behaviour of systems, not a glimpse into god's simulation source code where the hidden variables are.

16 days ago

lisper

For QM it can be shown that there are no hidden variables, at least no local ones [1]. Quantum randomness really cannot be modeled by a TM. You need a random oracle.

[1] https://en.wikipedia.org/wiki/Bell%27s_theorem
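
The arithmetic behind the Bell bound is compact enough to sketch (toy numbers, mine, using the standard CHSH angles; the singlet correlation -cos(a - b) is textbook QM):

    import numpy as np

    # For a singlet pair the quantum correlation at analyzer angles (a, b)
    # is -cos(a - b). The CHSH combination then reaches 2*sqrt(2), while
    # any local hidden variable model is provably capped at 2.
    E = lambda a, b: -np.cos(a - b)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
    print(S, 2 * np.sqrt(2))   # both ~2.828, beating the classical bound of 2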

16 days ago

sampo

> For QM it can be shown that there are no hidden variables, at least no local ones

If we assume that the experimenter has free will in choosing the measurement settings, so that the hidden variables are not correlated with the measurement settings, then it can be shown.

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...

But if we are less strict on the requirement of the free will assumption, then even local hidden variables are back on the menu.

16 days ago

lisper

That's true, but "less strict" is understating the case pretty seriously. It's not enough for experimental physicists to lack free will. To rescue local hidden variables, nothing in the universe can have free will, not even God. That's a bridge too far for most people. (It's a bridge too far for me, and I'm an atheist! :-)

Note also that superdeterminism is unfalsifiable. Since we are finite beings living in a finite universe, we can only ever have access to a finite amount of data and so we can never experimentally rule out the possibility that all experimental results are being computed by some Cosmic Turing Machine churning out digits of pi (assuming pi is normal). But we also can't rule out the possibility that the moon landings were faked or that the 2020 election was stolen by Joe Biden. You gotta draw a line somewhere.

BTW, you might enjoy this: https://blog.rongarret.info/2018/01/a-multilogue-on-free-wil...

15 days ago

sampo

> Note also that superdeterminism is unfalsifiable.

I think the many worlds interpretation of quantum mechanics is also unfalsifiable. The annoying thing about quantum mechanics is that any one of the interpretations of quantum mechanics has deep philosophical problems. But you can't choose a better one because all of them have deep problems.

15 days ago

lisper

> many worlds interpretation of quantum mechanics is also unfalsifiable

Yes, that's true.

> all of them have deep problems

Some are deeper than others.

15 days ago

Y_Y

That is indeed what I was referring to. To clarify, plenty of classical physical models work only with distributions too. You don't need a random oracle because your model doesn't predict a single microstate. It wouldn't be possible or useful to do so. You can model the flow of heat without an oracle to tell you which atoms are vibrating.

16 days ago

lisper

Yes, all this is true, but I think you're still missing the point I'm trying to make. Classical mechanics succumbs to statistics without any compromises in terms of being able to make reliable predictions using a TM. But quantum mechanics is fundamentally different in that it produces macroscopic phenomena -- the results of quantum measurements -- that a TM cannot reproduce. At the most fundamental level, you can always make a copy of the state of a TM, and so you can always predict what a given TM is going to do by making such a copy and running that instead of the original TM. You can't make a copy of a quantum state, and so it is fundamentally impossible to predict the outcome of a quantum measurement. So a TM cannot generate a random outcome, but a quantum measurement can.
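
A toy illustration of the copyability point (my example; no-cloning forbids the analogous move for an unknown quantum state):

    import copy

    # A deterministic machine's full state can be duplicated and the
    # duplicate run ahead, so the original's future is perfectly predictable.
    class Machine:
        def __init__(self):
            self.x = 7
        def step(self):
            self.x = (self.x * 1103515245 + 12345) % 2**31  # deterministic update

    m = Machine()
    oracle = copy.deepcopy(m)          # snapshot the complete state
    for _ in range(1000):
        oracle.step()                  # run the copy into the "future"
    prediction = oracle.x
    for _ in range(1000):
        m.step()
    assert prediction == m.x           # the prediction is always exact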

15 days ago

11101010001100

Sure, the 'problem' is that while the schrodinger equation is deterministic, we can only 'measure' the amplitudes of the solution. Is the wavefunction epistemic or ontological?

16 days ago

lisper

No, this has nothing to do with ontology vs epistemology. That's a philosophical problem. The problem here is that a measurement is a sample from a random distribution. A TM cannot emulate that. It can compute the distribution, but it cannot take a random sample from it. For that you need a random oracle (https://en.wikipedia.org/wiki/Random_oracle).

16 days ago

hackandthink

This is not really about quantum computing. A classical probabilistic Turing machine samples from a random distribution:

"probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed"

I remember that probabilistic Turing machines are not more powerful than deterministic Turing machines, though Wikipedia is more optimistic:

"suggests that randomness may add power."

https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
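
A sketch of that definition in code (toy example, mine): a deterministic procedure plus an auxiliary tape of uniform random bits, fully replayable once the tape is fixed:

    import random

    # The "machine" below is deterministic given its tape; the randomness
    # lives entirely on the tape, matching the quoted definition.
    def approximate_pi(tape, samples=10_000):
        bits = iter(tape)
        def unit_coord(precision=16):  # read 16 tape bits as a number in [0, 1)
            return sum(next(bits) << i for i in range(precision)) / 2 ** precision
        hits = sum(unit_coord() ** 2 + unit_coord() ** 2 <= 1 for _ in range(samples))
        return 4 * hits / samples      # Monte Carlo estimate of pi

    tape = [random.getrandbits(1) for _ in range(10_000 * 32)]
    print(approximate_pi(tape))        # same tape in, same estimate out, every time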

16 days ago

lisper

Power is not the point. The point is just that probabilistic TM's (i.e. TMs with a random oracle) are different. For example, the usual proof of the uncomputability of the halting problem does not apply to PTMs. The proof can be generalized to PTMs, but the point is that this generalization is necessary. You can't simply reduce a PTM to a DTM.

16 days ago

Y_Y

The problem is about physics, not Turing machines. You don't need to make a random choice as part of your physical model, the model only makes predictions about the distribution. You can't represent the continuous dynamical manifolds of classical or quantum mechanics on a TM either, but that's ok, because we have discrete models that work well.

16 days ago

hackandthink

I am asking myself:

Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological, but (1) disagrees.)

Epistemic uncertainty would mean here:

We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.

(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...

16 days ago

lisper

The whole point of Turing Machines is to eliminate all of these different kinds of uncertainty. There is in point of actual physical fact no such thing as a Turing Machine. Digital computers are really analog under the hood, but they are constructed in such a way that their behavior corresponds to a deterministic model with extremely high fidelity. It turns out that this deterministic behavior can in turn be tweaked to correspond to the behavior of a wide range of real physical systems. Indeed, there is only one known exception: individual quantum measurements, which are non-deterministic at a very deep fundamental level. And that in turn also turns out to be useful in its own way, which is why quantum computing is a thing.

15 days ago

11101010001100

Right, the point is that we don't need a solution to the 'measurement problem' to have a quantum computer.

15 days ago

lisper

Well, yeah, obviously. But my point is that you do need a solution to the measurement problem in order to model measurements in any way other than simply punting and introducing randomness as a postulate.

15 days ago

11101010001100

And is that solution required to be deterministic? If so, that is another postulate.

15 days ago

lisper

You have to either postulate randomness or describe how it arises from determinism. I don't see any other logical possibility.

BTW, see this:

https://arxiv.org/abs/quant-ph/9906015

for a valiant effort to extract randomness from determinism, and this:

https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...

for my critique.

15 days ago

lisper

> You don't need to make a random choice as part of your physical model

You do if you want to model individual quantum measurements.

15 days ago

11101010001100

Interaction in quantum physics is something that remains abstract at a certain level. So long as conservation principles are satisfied (including probability summing to one), interactions are permitted (i.e., what is permitted is required).

15 days ago

lisper

Yes. So? What does that have to do with modeling measurements, i.e. the macroscopic process of humans doing experiments and observing the results?

15 days ago

11101010001100

Would you agree that measurement is considered an interaction?

15 days ago

lisper

Sure. So?

15 days ago

11101010001100

Right, I did hijack the thread a bit, but for me, the distribution is more than enough. The rest is just interpretation.

16 days ago

lisper

Well, no. The measurements are the things that actually happen, the events that comprise reality. The distribution may be part of the map, but it is definitely not the territory.

16 days ago

11101010001100

Isn't this just circling back to the original ontic vs epistemic though -> map vs territory?

15 days ago

lisper

No, because the original map-vs-territory discussion had to do with the wave function:

> Is the wavefunction epistemic or ontological?

https://news.ycombinator.com/item?id=42383854

Now we're talking about measurements which are indisputably a part of the territory.

15 days ago

11101010001100

Technically measurement devices are described by wavefunctions too.

15 days ago

lisper

Well, yeah, maybe. There's a reason that the Measurement Problem is called what it is.

15 days ago

11101010001100

I'm replying here since we appear to have reached the end.

Presumably measurement involves interaction with 3 or more degrees of freedom (i.e., an entangled pair of qubits and a measurement device). For most types of interactions (excluding exactly integrable systems for the moment), classical or quantum, we cannot analytically write down the solution. We can approximately solve these systems with computers. All that to say, any solution to any model of an 'individual' measurement will be approximate. (Of course, one of the key uses of quantum computing is improving upon these approximate solutions.)

So what type of interaction should you pick to describe your measurement? Well, there is a long list, and we can use a quantum computer to check! Part of the point I am trying to make is that when you open the box of a measurement device, you enter the world of many-body physics, where obtaining solutions to the many-body equations of motion IS the problem.

15 days ago

lisper

> We can approximately solve these systems with computers.

Yes, but with quantum measurements you cannot even approximate. Your predictions for e.g. a two-state system with equal amplitudes for the two states will be exactly right exactly half of the time, and exactly wrong the other half.

15 days ago

11101010001100

I guess I don't have an issue with being wrong if we treat 'measurement' like a black box.

15 days ago

gregw2

"God does not play dice with the universe" said Einstein.

But he hasn't met my Dungeon Master...

16 days ago

gedpeck

> … our brains are physical systems which themselves can be modeled by mathematics…

How do you know this? What is the model? Can an AI come up with the Incompleteness Theorems? It can be proven in ZFC that PA is consistent. Can an AI or Turing Machine or whatever do the same?

EDIT: I’m equating “our brains” with consciousness.

16 days ago

lisper

> How do you know this?

I don't. But it seems like a plausible hypothesis, and I see no compelling evidence to the contrary.

> Can an AI or Turing Machine or whatever do the same?

Can you?

16 days ago

gedpeck

Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

A Turing Machine can’t prove the consistency of its axiomatic system from within that system, though it could from within a larger system. If I can do what the machine can’t, then this is evidence against your belief. At least as I see it.

I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.

15 days ago

lisper

> Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

OK, but note that you've moved the goal posts here. Your original question was:

> Can an AI come up with the Incompleteness Theorems?

There is a difference between coming up with those theorems, and being able to reproduce them after having been shown how. There can be no doubt that an AI can do the latter, it's not even speculative any more. ChatGPT can surely recite the proof of the consistency of PA within ZFC.

> I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.

Yeah, well, there's a reason this is the minority view. How do you know that the incompleteness results don't apply to you? Sure you can see that PA can be proven consistent in ZFC, but that is not the same thing as being able to see the consistency of the formal system that governs the behavior of your brain. You don't even know what that formal system is. It's not even clear that it's possible for you to know that. It's possible that your brain contains all kinds of ad-hoc axioms wired in by millions of years of evolution, and it's possible that these are stored in such a way that they cannot be easily compressed. Evolution tends to drive towards efficient use of resources. So even if you had the technology to produce a completely accurate model of your brain, your brain might not have the capacity to comprehend it.

History is full of people making predictions about how humans will ultimately prove to be superior to computers. Not a single one of those predictions has stood the test of time. Chess. Go. Jeopardy. Writing term papers. Generating proofs. Computers do all these things now, and they've come to do them in the span of a single human lifetime. I see absolutely no reason to believe that this trend will not continue.

15 days ago

gedpeck

Thanks for the response and perspective. I'll contemplate it more later.

While I personally may not have come up with the Incompleteness results, humans did. The discussion is about human intelligence in general (particularly applied to bright people), not about my own intelligence and its limitations.

The second-order Peano Axioms are categorical while the first-order Peano Axioms are not. The first-order axioms are used precisely because it was the dream of Hilbert and others to reduce mathematics to a computable system. The dream cannot be realized. We humans can prove things like Goodstein's theorem, a statement that is true in second-order PA. How will a computer prove such a thing? There is no effective, computable means for determining whether a given statement is an axiom in PA.

I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes. Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

Thanks for the discussion. I'll contemplate what you've written and any response you care to make. I won't respond further since I'm delving into areas I know little about.

https://en.wikipedia.org/wiki/Chinese_room

15 days ago

lisper

> humans did

No. Not humans. One human.

> The discussion is about human intelligence in general (particularly applied to bright people) not about my own intelligence and its limitations.

OK, but if you're going to talk about human intelligence in general then you have to look at what humans do in general, and not what an extreme outlier like Kurt Godel did as a singular event in human history.

> particularly applied to bright people

And how are you going to measure brightness?

> How will a computer prove such a thing?

I have no idea. (I was going to glibly say, "The same way that humans do", but one of the lessons of AI is that computers generally do not do things the same way that humans do. But that in no way stops them from doing the things that humans do.) But just because I don't know how they will do it in no way casts doubt on the near-certainty that they will do it, possibly even within my lifetime given current trends.

> I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes.

Yes, that's true. So?

> Can a computer beat Magnuson

I presume you meant Magnus Carlsen? Yes, of course. That experiment was done last year:

https://www.youtube.com/watch?v=dgH4389oTQY

> with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

What difference does that make? But the answer is still clearly yes because the computer could simply emulate Carlsen's brain. A 10x speed advantage would surely be enough to win.

15 days ago

gedpeck

I don’t believe you are engaging in a good faith discussion. Your previous comment is worthy of further contemplation, but not this one. A computer cannot emulate a person’s brain. At least not now, and there isn’t sufficient evidence to believe that it is even theoretically or practically possible in the future.

Your response here implicitly admits there’s a difference between human thinking and computer “thinking”. A chess program that just searches a vast number of possibilities and chooses the best one is not thinking like a human. It’s not even close.

> How will a computer prove such a thing? I have no idea

If you knew about these things you’d know that it isn’t possible to have an algorithm that halts in a finite number of steps that determines whether or not a given statement is an axiom in 2nd order PA. A computer is incapable of reasoning about such things.

15 days ago

lisper

> A computer can not emulate a person’s brain.

Earlier you wrote:

> Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

If a computer can't emulate a person's brain then how are you going to assess whether or not the number of computations it's doing is "within one order of magnitude of what a human can do"?

> A computer is incapable of reasoning about such things.

You want to bet on that? Before you answer you'd better re-read your claim very carefully. When you realize your mistake and correct it, then my answer will be that humans aren't guaranteed to be able to determine these things in a finite number of steps either. There's a reason that there are unsolved problems in mathematics.

15 days ago

adastra22

Unless you're a Cartesian dualist, the mind maps to the brain which is governed by physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.

16 days ago

gedpeck

> ...physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.

Consciousness so far appears to be something that can’t be modeled mathematically. Can a Turing Machine conclude that, while it can’t prove the consistency of the axiomatic system it works under, embedding that system in a larger system could make it possible to prove consistency?

Aren’t you assuming superdeterminism? What if consciousness is not “computable” in any meaningful way? For example, suppose the theoretically most efficient mathematical model of consciousness requires more variables than the number of particles in the observable universe.

16 days ago

Terr_

> Consciousness so far appears to be something that can’t be modeled mathematically.

That doesn't tell us anything though: Almost every single physical phenomenon we can start to model mathematically looked impossible at some point in history. Just because something's hard is not a reason to expect it's magic.

If consciousness is somehow un-model-able, that will most likely be for a different reason, where the premise itself is somehow flawed, like how we can't bottle phlogiston or calculate the rate of air turning into maggots.

16 days ago

gedpeck

> ...something's hard is not a reason to expect it's magic.

Something that is impossible to model accurately with mathematics is not magic.

15 days ago

adastra22

I think you are very confused about consciousness, which is nothing more than what an algorithm feels like from the inside. There is nothing mysterious about it.

But we are now way off topic.

16 days ago

gedpeck

You are claiming to be able to mathematically model consciousness, or that it is theoretically possible. So what’s the model, or the proof that it can theoretically be modeled? Your statements regarding it were lacking.

15 days ago

adastra22

Dennett’s Consciousness Explained would be a good resource if you want more worked out theory and explanation.

15 days ago

adastra22

More than adequately covered in Dennett. There's a whole chapter devoted to the Chinese room IIRC.

15 days ago

gedpeck

It’s not a settled issue in the sense that the experts overwhelmingly agree on one side vs. the other. My link was posted to show a contrarian view. Lots of research papers arguing on either side have been written about the Chinese Room. Your argument is similar to a Christian telling an atheist it’s a settled issue since the book “The Case for a Creator” covers the issue.

15 days ago

fluoridation

Hmm... Turing machines are "universal" in the sense that they can be used to model the behavior of any finite process that can compute a computable function. So if a process exists that can compute something, it must be modelable as a Turing machine, and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine. So I don't see how it doesn't make sense to ask the question "if this process has a better asymptotic behavior than is theoretically possible, where did the computation take place?" I would be more inclined to believe that someone is not counting properly than that it is some kind of hypercomputer. Or, even more simply, that the process is just physically impossible.

16 days ago

JumpCrisscross

> if a process exists that can compute something, it must be modelable as a Turing machine

Yes.

> and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine

Analyzable, sure. MWI is fine as an analytic tool in the same way e.g. virtual particles and holes are. Nothing here requires that any of these analytic tools are physically corresponded to.

16 days ago

fluoridation

I didn't say I favored the MWI. My point is simply that the question of where computation took place shouldn't be dismissed. Like I said, I would be more inclined to think someone is not counting properly. As a simple example, a parallel program may run in half the time, but that doesn't mean it executed half as many operations.

16 days ago

JumpCrisscross

> the question of where computation took place shouldn't be dismissed

Sure. But it also shouldn’t be glorified as adding anything to an old debate—every QCD calculation has this dynamic of lots of classical “computations” happening seemingly simultaneously.

16 days ago

adastra22

There isn’t hidden computation going on though.

16 days ago

fluoridation

Computation isn't hidden in a parallel program, either. But if you only count the running time, and not the total instructions retired or the computer's energy usage, you're not getting the full picture.

16 days ago

adastra22

And there isn’t parallel computation going on either. I think there’s a fundamental misunderstanding of how quantum compute works here. There is no merge-reduce as another commenter is claiming. Even in MWI, the decoherence is one-way.

16 days ago

galaxyLogic

A Turing Machine operates serially one instruction at a time. But you could obviously multiply its output by multiplying the machines.

The question is then how could we efficiently multiply Turing machines? One way could be by using rays of light to do the computations. Rays of light operate in parallel. Light-based computers don't need to be based on quantum mechanics, just plain old laser-optics. They multiply their performance by using multiple rays of light, if I understand it correctly.

SEE: https://thequantuminsider.com/2024/03/19/lightsolver-laser-c...

So using multiple rays of light is a way to multiply Turing Machines economically, in practice, I would think.

I assume quantum computers similarly just multiply Turing Machine-like computations, performing many computations in parallel, similarly to light-based computers. That does not mean that such computations must happen "somewhere else" than where the quantum processes occur. And that does not require multiple universes, just good old quantum mechanics, from Copenhagen.

16 days ago

fluoridation

Parallel computers don't "multiply their performance" in computation theoretic terms. If you took half as long by computing on two Turing machines simultaneously you didn't decrease the time complexity of the algorithm. You still churned through about the same number of total operations as if you had done it serially.

The distinction between the total operations and the total time is important, because your energy requirements scale with the time complexity of what you try to compute, not with the total time it takes.

An optical computer, for example, has a limit on how densely it can pack parallel computation into the same space, because at some point your lenses overheat and melt from the sheer intensity of the light passing through them. It's possible QCs are subject to similar limitations, and despite the math saying they're capable of computing certain functions polynomially, that doing so requires pushing exponential amounts of energy into the same space.
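
A toy cost model of the work-versus-wall-clock distinction (illustrative numbers only, mine):

    # Summing n values with p workers still performs ~n - 1 additions in
    # total; only the critical-path (wall-clock) length shrinks.
    def cost(n, p):
        chunk = -(-n // p)                 # ceil(n / p) items per worker
        total_ops = n - 1                  # additions performed across all workers
        wall_time = (chunk - 1) + (p - 1)  # local sums in parallel, then a serial combine
        return total_ops, wall_time

    for p in (1, 4, 64):
        print(p, cost(2**20, p))           # total_ops is constant; wall_time drops with p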

16 days ago

galaxyLogic

Good points. I just wanted to draw attention to the idea that Turing Machine is just one way of doing "computing". Whatever is proven about Turing Machines is proven about Turing Machines, not about all mechanical devices, like quantum computers, and "light-computers", in general.

I'm wondering: is there a universal definition of "computing"? Saying that "Computing is what Turing Machines do" somehow seems circular. :-)

16 days ago

fluoridation

Well, Turing machines are supposed to be that definition. The Church-Turing thesis still hasn't been shown to be false. Proving something of Turing machines that doesn't hold for one of those classes of devices would mean refuting it. Basically we'd need to find some problem that, memory not being an issue, one class of computer can solve in finite time while another can't solve in any finite amount of time.

16 days ago

adastra22

“Any process can be modeled by a Turing machine” is not the same as “Every process must be isomorphic to a Turing machine, and therefore share the same limitations.”

If I have a magic box that will instantly calculate the Nth digit of the busy beaver number of any size, that can be modeled by a Turing machine. AFAIK there is no constant-time algorithm though, which is what our magic box does. So a Turing machine can’t match the performance of our magic box. But nowhere is it written that a Turing machine puts an upper bound on performance!

That’s what quantum computers are. They solve certain problems faster than a classical computer can. Classical computers can solve the same problems, just more slowly.

16 days ago

fluoridation

They are in fact the same, since Turing machines are mathematical models. If you had such a magic box, capable of computing things without regard for their time complexity, we might need to revise our assumptions about whether Turing machines are in fact capable of universally modeling all forms of computation. But like I implied in the sibling comment, I'm expressing skepticism that quantum computation is actually physically realizable, to the extent that functions can be computed in fewer operations (i.e. more efficiently) than is predicted by classical computing.

16 days ago

adastra22

Well you are free to believe whatever you want. But at this point disbelief in quantum computers is akin to flat earth denial. There are thousands of labs across the world who have performed entanglement experiments, and dozens of labs that have built working quantum computers specifically.

16 days ago

hnfong

That's a bit extreme.

While QM and QC theory is well-established, there have been very few experiments confirming that quantum computing actually works as theorized. There are quantum computers that are "working", but some of them (esp. the older ones) are just the kind of "quantum computers show that 15 = 3 * 5 (with high probability)". From what I read on Scott Aaronson's blog, very few of those experiments show "quantum supremacy" (i.e. classical computing physically cannot compute the results in reasonable time). This is why the Google Willow thing is considered a breakthrough.

So basically empirical "proof" that quantum computing actually works as predicted in theory is rather recent stuff.

15 days ago

adastra22

Quantum computers are a direct consequence of quantum mechanics. Just like ropes and pulleys are a direct consequence of newtonian mechanics. If you think that quantum computers won't work, then either (1) all the theorists studying them for the past half century have somehow screwed up their basic maths, or (2) our understanding of quantum mechanics is wrong in ways that experiments have already ruled out.

15 days ago

fluoridation

They're not a "direct consequence". If they were, quantum computers would arise spontaneously in the environment. What you mean to say is that quantum mechanics permits quantum computers to exist. That doesn't mean something else can't forbid them from existing, or forbid them from ever reaching supremacy in practical applications.

>If you think that quantum computers won't work, then either (1) all the theorists studying them for the past half century have somehow screwed up their basic maths, or (2) our understanding of quantum mechanics is wrong in ways that experiments have already ruled out.

You know this thing called science? Its goal is not to know things, it's to learn things. If we already knew everything there is to know about quantum mechanics people would have built a perfectly functioning quantum computer at the first try (or they would have known from the start it was impossible). Physicists are trying to build quantum computers partly because in doing so they learn new things about quantum mechanics, and because they want to learn if it's possible to build them. Quantum computers are themselves also an experiment about quantum mechanics.

14 days ago

hnfong

I don't know what it is that makes people think of science in an over-confident, binary (yes or no) manner.

If you read the original article, you'd see that the idea of experiments about quantum supremacy is an important issue. The whole reason they want to conduct such experiments is to prove that quantum computing actually works empirically.

The question is not whether "I think" quantum computers won't work. I don't "think" anything; I'm not an expert in the field, and as such what I personally think is irrelevant. Scientifically, there aren't enough empirical experiments to conclusively prove whether they work. Whatever I "think" or you "think" is pure speculation. The chances of QC working and scaling to a non-trivial number of qubits might be pretty good, but they haven't built a machine that can break RSA yet, for example.

And yes of course the theorists could be working on the "wrong" theory all the time. It happened with Newtonian physics. You can't build accurate GPS systems with Newtonian physics without taking into account relativity. Similarly, we already know QM does not take gravity into account. Is it possible that quantum-computers-as-we-know-it are not possible under a gravitational field? Unlikely(?), but it's possible. You can't just take your personal speculative belief as truth and call everyone else flat earthers.

14 days ago

fluoridation

Honestly, I'm skeptical that this even demonstrates quantum supremacy. Like I said in a different comment in this thread, all they did was let the device run for some time on whatever nonsense was in its memory and let it reach some meaningless state. Since it would take a classical computer a long time to simulate the qubits at the physical level to reach the same result, that shows that the quantum computer is much better at executing itself. But that's stupid. By that same logic, electronic computers are faster than themselves, because it would take a long time to simulate a computer at the physical level on another, identical computer.

If the QC isn't doing any meaningful computation, if it's not actually executing an algorithm, then it's not possible to compare their respective efficiencies. Or rather it is possible, but the answer you get is meaningless. Let's make it fair. How long would it take a ~100 qubit quantum computer to simulate at the physical level a smartphone's SoC running for 1 second?

15 days ago

fluoridation

I didn't say I don't believe in quantum computers. What I said is I don't believe in quantum computing as a practical technology. Right now "quantum computers" are just research devices to study QC itself. Not a single one has ever computed anything of use to anyone that couldn't have been computed classically.

16 days ago

slowmovintarget

According to this ( https://arxiv.org/pdf/2302.10778 ), there are no branches, just decoherence (minus the "collapse").

16 days ago

adastra22

A paper proposing a radically different formulation of quantum mechanics, with only a half dozen non-author citations, so heavy with jargon that it's hard to figure out what the author is claiming.

Sorry, my crank meter is dialed to 11 right now and I don't think this is worth spending further time on.

16 days ago

slowmovintarget

Jacob Barandes is a professor of physics at Harvard (not a crank).

He recently did a great podcast with Kurt Jaimungal [1]. In the interview he explains that every year he searches for a good way to introduce his students to quantum mechanics, and every year he's ended up being unsatisfied with the approach and started again.

One year he decided to attempt describing systems with traditional mechanics, using probabilities (stochastic mechanics). He worked at it and worked at it, assuming that he would eventually have to take some leap to cross the chasm into the world of gauge theory, Lie Groups, and Hilbert spaces. What he found is, to his surprise, the math just seemed to fit together at some point. That is, he found a mapping, just as the path integral is a mapping into the math of the wave function, and it just kind of worked. He had been under the assumption that it shouldn't.

It turns out that, in doing his stochastic mechanics, he had used mathematical descriptions that were indivisible. That is, once a given process began, it could not be sliced into smaller and smaller time slices. It had to complete and yield a result. This is what he called a non-Markovian stochastic process. Apparently all previous attempts at this used Markov processes, which are divisible, like Hilbert vector calculations or the path integral.

It turns out that things like "collapse" of the wave function, and all the quantum weirdness arose from how the math worked with the wave function and Hilbert space, not from anything intrinsic to the mechanics of the universe (at least that's what his equivalent math was telling him). So in his stochastic non-Markov model, there is no collapse, just decoherence. There is always a result, and the intermediate states (where all the quantum oddities live) aren't real.

He mentions being really disappointed at seeing all the magic of quantum mechanics just kind of vanish. From what he could tell, it was just a trick of the wrong kind of math.

[1] https://www.youtube.com/live/7oWip00iXbo

15 days ago

adastra22

Mathematicians, even from prestigious universities, have been wrong before. It's the danger of a discipline that doesn't yet have machine-checkable proofs.

15 days ago

slowmovintarget

I'm not saying he's correct because he's from Harvard, but he's not some crank outside the system to be lightly dismissed. He's firmly in the mainstream of physics and foundations of physics and seems to have gotten a new result. Time will tell.

15 days ago

3form

Why does the computation have to "happen somewhere"? I understand the desire to push the boundary of the classical reasoning, but I don't think it's something to be taken as an axiom, and I haven't heard about any proof either.

16 days ago

feoren

There's no reason to think the computation is "happening" anywhere but our universe, or even "happening" at all in any real sense.

Light follows a geodesic. Atomic reactions tend to minimize energy. Rivers follow the most efficient path to the sea. Where does any of that computation "happen"? Why is it any stranger that an electron "knows how" to follow a gradient in an electric field than that a quantum computer "knows how" to perform a random circuit sampling?

15 days ago

fidotron

The easy comparison for devs is quantum computing is like git branching.

The big question being the PR approval process.

16 days ago

whimsicalism

no that's not a good comparison for MWI, it's more continuous

16 days ago

fidotron

That sounds like you have a narrow understanding of git - there is no way it is not continuous.

16 days ago

whimsicalism

git branches are not continuous

16 days ago

fidotron

You mean as opposed to discrete? (And not in the usual sense of continuous)

There is nothing stopping you putting probability distributions in git and branching them.

16 days ago

dartos

Git operates on commits (branches are really nicknames for specific commits with some extra features), which are discrete steps.

That’s what’s stopping you from doing anything continuous in git.

16 days ago

whimsicalism

i think we are just not connecting in what we are saying, nbd

16 days ago

amelius

Maybe our consciousness also branches, and all branches are approved.

16 days ago

FergusArgyll

Ah, so it's all pointers after all...

16 days ago

munchler

Thank you. This makes sense.

16 days ago

monero-xmr

I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

No wait, it’s actually God who exists in the wave collapse, and His divine intervention does the computation.

16 days ago

astrange

> I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

That's close to retrocausality which is a serious theory/interpretation of quantum mechanics.

16 days ago

monero-xmr

I'd like to see some serious attempts to prove any of these theories. If not, we can just wildly imagine all sorts of possibilities.

15 days ago

klabb3

> “where else could the computation have happened, if it wasn’t being farmed out to parallel universes?”

Right. I’m definitely a layman here, but as a CS generalist this rings out as narrow-minded to me, biased towards the way we physically designed and then mathematically formalized “computation”.

Not that I have anything against multiverses but.. if I had to choose between “Turing-style computation happened, but needed parallel universes to do so” and “what happened is something counter-intuitive and insufficiently understood about the universe we inhabit” I would bet on the latter.

16 days ago

adastra22

I'm an actual physicist here. Not in quantum computers, but that was my advisor's field and I've kept paying attention in the years since. Your intuition here is correct, I believe. Classical computer science deals with information processing limits of devices constructed using the classical physics laws of Newton and Maxwell. Quantum physics introduces new elements that permit new kinds of information processing machines, which have no obligation to be constrained by classical law.

16 days ago

Ar-Curunir

I don't know of any computer scientists, including the quantum-information-theorists, who think that QTMs can compute something that classical TMs cannot. That is, QM cannot solve undecidable problems.

16 days ago

ayewo

> Quantum physics introduces new elements that permit new kinds of information processing machines, which have no obligation to be constrained by classical law.

The question then becomes: how do you build such non-constrained machines? Also, how do you confirm that such machines—or even small scale prototypes—are not constrained by the classical laws of physics?

16 days ago

rlupi

> Question then becomes, how do you build such non-constrained machines? Also, how do you confirm that such machines—or even small scale prototypes—are not constrained by classic laws of physics?

You mean, like... transistors? ;-)

https://en.wikipedia.org/wiki/History_of_the_transistor (search for quantum)

16 days ago

adastra22

By doing the math of many physics calculations to design a setup that will make these quantum phenomena stable enough to be useful, then building it and seeing if prediction matches reality.

16 days ago

hnfong

And that's what the linked article is doing?

15 days ago

AyyEye

Rule #1 is that technologists must pretend that they understand what they are doing. Woo-woo is fine as long as you can get your peers to play along with you.

16 days ago

qnleigh

But to emphasize one quote from the blog: "the new experiment doesn’t add anything new to this old debate" about multiverse interpretations vs. something else. An equally viable and perhaps conceptually simpler interpretation of the experiment is that the qubits are temporarily put in a 'superposition' of many different bitstrings, some operations are performed, and then measurement collapses this superposition back into a single definite bitstring. No multiverse required.
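
A minimal sketch of that single-universe reading (toy code, mine; one amplitude per bitstring, measurement as Born-rule sampling):

    import numpy as np

    # Keep one amplitude per bitstring, mix them unitarily, then
    # "measurement" samples a single definite bitstring with
    # probability |amplitude|**2. One universe throughout.
    rng = np.random.default_rng(0)
    n = 3
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard

    U = H
    for _ in range(n - 1):
        U = np.kron(U, H)                          # Hadamard on each of the n qubits

    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # start in |000>
    state = U @ state                              # superposition over all 8 bitstrings

    probs = np.abs(state) ** 2
    outcome = rng.choice(2 ** n, p=probs)          # collapse: one definite bitstring
    print(format(outcome, f"0{n}b"))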

16 days ago

superq

> No multiverse required.

True, but no exciting plot twists either. :(

16 days ago

galaxyLogic

Multiple Universes is a "Deus ex Machina" applied by Hollywood a lot these days.

16 days ago

AnotherGoodName

You just need to devise a system where the vast majority of universes give the right answer for this to work.

Start by at least having a universe for each possibility so that every code path is computed. Then add a mechanism to create many more universes when a correct result is hit.

So each wrong result has 1 universe and the correct result alone has 2^300 universes. Run this. You now end up with the correct result 99.99999% of the time.

I’m not arguing for or against the above take, fwiw; it’s just easy to see how this could happen in many worlds. Effectively, error correction becomes a mechanism to create more universes for correct answers than incorrect answers, and it all works. In fact it’s very reasonable to think of quantum error correction this way (it really is a mechanism to favour correct answers being observed, and in many worlds that means it creates more correct-answer universes).
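
The arithmetic of that counting picture, as a toy sketch (the specific weights are just the parent's example numbers):

    from fractions import Fraction

    # One branch per wrong answer, 2**300 branches for the correct one,
    # then sample a branch uniformly. Python ints keep the weights exact.
    WRONG_ANSWERS = 2**20 - 1                      # one branch each
    CORRECT_BRANCHES = 2**300
    p_wrong = Fraction(WRONG_ANSWERS, CORRECT_BRANCHES + WRONG_ANSWERS)
    print(float(p_wrong))   # ~5e-85: you essentially always see the right answer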

16 days ago

skirmish

> add a mechanism to create many more universes when a correct result is hit

A simple count doesn't have much to do with the outcome observed; in my understanding, each branch in a split is not equally likely. If you finely subdivide a low-probability branch into 2^300 sub-branches, so what? Instead, you need to merge multiple high-probability branches back into a single outcome that is observed "99.99999% of the time".

16 days ago

AnotherGoodName

In many worlds the probability observed can absolutely be thought of as the ratio of worlds with that outcome vs another. Any specific numbers here are just as an example.

16 days ago

knome

This feels like taking quantum bogo sort seriously.

16 days ago

lmm

> Aren't these parallel universes running the same computation at the same time, and thus also "farming out" part of their computations to us? If so, it's a zero-sum game, so how could there be an overall performance gain for all the universes?

The result is the same in all universes. There's a kind of scatter-gather/map-reduce feel - each universe computes part of the problem, then you add up all their results to get the final one (in all universes).

The argument works for me, although I already believed the conclusion so I'm biased.

16 days ago

Strilanc

The main issue with this line of argument is that you don't see people who like other interpretations claiming quantum computers won't work. Saying quantum computers imply many worlds is the classic mistake of wanting to show A=>B and arguing B=>A instead of ~B=>~A. If quantum computers were inconsistent with collapse interpretations, you'd expect people who think collapse interpretations are correct to be confidently predicting quantum computers will never work. But they don't; they just think the state inside the computer isn't collapsing.

I'm often tempted to argue quantum computers at least clearly favor ontic interpretations (ones where quantum states are real). Because good luck computing things by subjectively having a computer instead of actually having a computer. But, as far as I know, you don't see quantum-bayesianism-ists having any qualms with quantum computers. I think because if you're already biting the bullet of interpreting diagonally polarized light as subjective, and Bell tests as subjective, then interpreting a computation as subjective isn't fundamentally different. It's just more in your face about it.

16 days ago

adastra22

Quantum computers do defeat hidden-variable models, which would have to hide the computation somewhere. But yeah, Bell and EPR ruled those out years ago anyway.

16 days ago

Strilanc

I don't see people who believe hidden variable models, like pilot wave, claiming quantum computers won't work. So I don't think quantum computers disprove hidden variable models.

I do agree quantum computers disprove local hidden variable models. Because they can run Bell tests, and local hidden variable models can't violate the Bell inequalities.

16 days ago

adastra22

Last time I checked pilot wave theory was not developed enough to permit modeling of quantum computers, or at least no one had done the work yet to apply the theory to that use case. It is very undeveloped.

16 days ago

sampo

> local hidden variable models can't violate the Bell inequalities

Superdeterminism is an interpretation of quantum mechanics where they can:

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...

16 days ago

Strilanc

I don't think anyone has managed to write down a superdeterministic model that can violate Bell inequalities and contain computers. The computer bit is key here because it's what lets me create ludicrously difficult-to-solve setups, like picking measurement bases using sha1 hashes of starlight.

15 days ago

adastra22

I’m not sure the sibling comment is correct, so I’ll take another stab at it.

First, I also think the “this favors the Everett interpretation” argument is incoherent, but for different reasons. I won’t be defending that view. But I can try to answer your question about the nature of computation and “where” that computation is taking place.

A quantum computer is essentially a random number generator, but with a non-uniform distribution that can be programmatically controlled.

Let’s start with the random part: a qubit can be either a 1 or a 0. When first initialized, reading it will give you either a zero or a one with 50% probability. (I am oversimplifying to the point of error! This is technically incorrect, but please excuse me for the purpose of a simplified explanation that makes sense to computer scientists rather than quantum physicists.) In the Copenhagen interpretation the wave function randomly collapses in a way that can be measured as a 0 or as a 1 with 50% probability for either outcome. In the Everett interpretation there are two universes (really, two disjoint sets of universes): one where you see a 0, and the other where you see a 1, and your likelihood of ending up in either is 50%.

For example, with three qubits you can read a random value between 0 to 7, where each qubit provides a 0 or 1 of a classical 3-bit binary value. If you're making a D&D simulator, you can take three of these registers (9 qubits total) and read them to get three values between 0 to 7, then increment each and add together to get the simulated value of three 8-sided dice, for the attack value of some game weapon.

But so far we're assuming even and independent odds for a 0 or 1 in each qubit. A quantum gate operation lets you modify that probability distribution in interesting ways. Specifically you can take multiple qubits and entangle their state such that the eventual outcome (still random) is constrained to a non-uniform probability distribution. (The details don't matter, but it has to do with the fact that you can add wave amplitude, which is complex-valued and therefore can constructively or destructively interfere.)

So instead of reading three separate 1d8 registers and adding them together, we can use quantum gate operations to perform the addition: entangling the state of a new 4 qubit register (large enough to hold the sum of two 3 qubit registers) such that it represents the sum of two separate, randomly chosen with independent probability values between 0..7. The combined sum will be a random value between 0 and 14, but with the value 7 being most likely, 6 or 8 next most likely, etc. (You can see the probabilities here: https://anydice.com) We then add in the third 3-bit register, getting a 5-qubit result. For good measure, we then add a constant 3 value using quantum gate operations so the result will be what we expect for adding 3 dice rolls, which start counting at 1 rather than 0. Now we have a great random number generator for D&D games. It's a 5-bit register that when you read it will give you a value between 3 and 24, with exactly the probabilities you would expect of three rolls of a fair 8-sided die.

You can interpret what this means in Copenhagen vs Everett terms, but there isn't a way in which it makes a material difference. I will say that the Everett interpretation is easier to think about: you set up the entangled operations such that there simply isn't a universe in which the register reads 31, or some other value not representative of a 3d8 dice roll, and if you count the number of possible universes under the Born rule, holding the rest of the multiverse constant, the counts match the 3d8 probability distribution. So you don't know which universe you will end up in when you finally read the register and, under the many-worlds interpretation, split off into one of many possibilities. But you know it will be a good 3d8 dice roll for your D&D game.

Now imagine you have two 1,024-qubit registers (each with random probability for each bit), and you run a full set of shift-adders to multiply them together into a 2,048-qubit register. If you read the result, it'll be a random non-prime number. Why? Because it's necessarily the product of two 1,024-bit numbers. What's interesting is that you can kinda do the reverse, as Peter Shor figured out how to do in the '90s: start with a 2,048-qubit register, run the shift-adder in reverse (really, the method of continued fractions, but I'm going for an intuition pump here), then read the two 1,024-qubit registers. What you will get is two values that, when multiplied together, will result in the input: the factors. If you put "60" in the input, you will maybe get the outputs (15, 4), or (5, 12), or (2, 30), or (1, 60). It's random which will pop out, but what you get WILL be a factorization of 60.
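
To make that concrete: the only quantum part of Shor's is finding the period r of a^x mod N; everything afterward is classical bookkeeping. A toy sketch (with the period found by brute force here, standing in for the quantum computer):

    from math import gcd

    def factor_from_period(N, a):
        # Order finding: smallest r with a^r = 1 (mod N). This is the step
        # Shor's algorithm does with a QFT; brute force works for tiny N.
        r = 1
        while pow(a, r, N) != 1:
            r += 1
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None  # bad luck: retry with a different a coprime to N
        x = pow(a, r // 2, N)
        return gcd(x - 1, N), gcd(x + 1, N)

    print(factor_from_period(15, 7))  # (3, 5)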

Now put your bank's SSL public key in the register, and read the factors out. The factors give you the private key.

I don't think any talk of multiverses is needed to understand this. It's simply mucking around with the probability distribution of a random number generator in clever enough ways that the random number generated, whatever it is, will be A solution (perhaps one of many) to your problem. You do this by destructively interfering the register with wave functions that represent non-solutions, and keep doing this until only solutions are left as possible values. Then you read your random number register and know the result is drawn from the solution set.

16 days ago

Robotbeat

Finally, someone mentioning Shor and his algorithm.

Shor’s algorithm is where the rubber meets the road for me. Synthetic benchmarks like what Google made just aren’t that useful imo.

There’s even a case to be made that all the work needed to make quantum computing reliable negates the supposed advantage and makes it no more scalable than conventional computing. I don’t subscribe to that, but a successful Shor's algorithm attack on RSA (with enough bits) would certainly prove it wrong.

16 days ago

layer8

> and your likelihood of ending up in either is 50%.

I don’t quite agree with this characterization. You end up in both universes with 100% certainty, but there will then be two of you, one in each universe. Where the likelihood comes in again is when one of the two yous wonders which branch they are located in and tries to predict (retrodict?) based on information collected prior to the branching; it will be a 50/50 likelihood. Of course, if they are able to look at the outcome of the experiment, it will become clear which branch they are located in.

This concept of probability in the context of MW is known as “self-locating uncertainty”: https://www.journals.uchicago.edu/doi/10.1093/bjps/axw004

16 days ago

adastra22

This is philosophy and probably a waste of time to pursue further. Both statements can be correct depending on your point of view.

16 days ago


vasco

As a layman I really hope this is correct because it makes a lot of sense. The multiverse thing is supposed to help thinking about it but I just get stuck on some problems of intuition if I try to do it like that. Great comment!

16 days ago

adastra22

Thanks!

Most of the debate over interpretations boils down to the mind projection fallacy. For each of us, one interpretation is so clearly easier to think about that everyone else must be bonkers for thinking otherwise. In reality one makes sense to me, the other makes sense to you. That's fine.

16 days ago

whimsicalism

The larger the devices we are able to build where quantum effects still dominate, the more it hurts any notion of a 'physical collapse' suggested by some Copenhagen branches.

Other, weaker forms of Copenhagenism scarcely even qualify as a real metaphysical interpretation of what is going on. I do think building big things in superposition does seem suggestive that there is no 'collapse upon interaction with observer' effect in metaphysical reality.

16 days ago

adastra22

> the more it hurts any notion of a 'physical collapse' suggested by some copenhagen branches

Why? The mathematics of quantum mechanics describes perfectly well what is going on in these systems, to exactly the same level of accuracy. There is no predictive difference between Copenhagen and Everett interpretations.

This is a philosophical argument, not a physical one.

16 days ago

whimsicalism

I'm not saying it is a purely empirical argument, hence my extensive mention of metaphysics. There are many, many, many theories that can match existing observation, such as superdeterminism or superdeterminism where there is a human-form God right outside of our causal observable universe that is using some magical force to determine where all particles are.

The principle of parsimony, imo, favors an Everettian explanation - and constructing larger contraptions with QM effects puts increasing parsimony pressure on the question of why observers like humans aren't simply getting entangled with the system they are observing when we see 'collapse.' The mathematics of QM does not describe collapse, how it arises from wavefunction evolution, or why observers are relevant.

16 days ago

laserbeam

I think the "parallel universe" analogies are misleading at best. I actually think most "narrative" explanations of quantum physics are as well. For a long time I used to imagine quantum physics with the same language we talk about superheroes and time travel and consciousness and whatever.

But then I actually saw some of the math (very basic Heisenberg uncertainty equations) and good comparisons to classical wave mechanics and wave interference. Everything made more sense.

As a layman with some very rudimentary understanding of the math, I would ignore ALL talk about "farming out" and "parallel universes" and similar concepts. I think those ideas are charged with too many interesting but fantastical elements which you probably don't find in actual quantum mechanics.

16 days ago

tbrownaw

> I think the "parallel universe" analogies are misleading at best. I actually think most "narrative" explanations of quantum physics are as well.

This is inherent to the problem of attempting to describe something that fundamentally does not work like the macroscopic world, using vocabulary rooted in that macroscopic world.

16 days ago

atq2119

As a layperson with a math background who has dabbled in quantum computing, I agree.

It would help to call the "Everettian" interpretation by the original title given by Everett: the theory of the universal wave function.

Ignore the techno-babble about the multiverse. The way I understand Everett is simply as taking the math literally and seriously to its logical conclusion.

16 days ago

smolder

I think the minimum amount of understanding needed to grasp MWI is understanding the double slit experiment. The interference pattern is real, thus the waves are real, but we only experience one measurable outcome where a photon lands in a specific location. The probabilistic part is not which outcome happens but which outcome your reality follows.

16 days ago

laserbeam

My point is that the analogy is influenced by fiction. It is too easy to attach concepts from fiction to the double slit experiment. Phrases like "which outcome your reality follows" are human experiences overlaid onto the experiment, not something that arises from the experiment itself. Nor something that arises from the math.

15 days ago

smolder

I will humbly disagree. I don't think fiction plays into it much, more the other way around. I think the 'multiple universes' ideas in fiction are likely to have been inspired by those experimental results. Just knowing how the double slit experiment works made me consider that every wavefunction might result in an infinite number of outcomes that never interact again, despite always having a dim view of fiction with parallel worlds. The fictional multiverses still aren't plausible, since usually there is some interaction between the different worlds, and they don't well represent the continuous nature of things, but it seems a little less dumb conceptually having learned about that experiment, quantum erasure, etc.

15 days ago


S_A_P

I am also curious if Mark Everett, Hugh’s son, has an account here.

16 days ago

Mistletoe

In many worlds, he certainly does.

16 days ago

munchler

EELS!

16 days ago

blueprint

It’s not necessary for there to be parallel universes when all possible paths are already taken within this universe.

16 days ago

kokanee

Parallel universes enter the conversation because we (non physicists) can't identify any scenario explainable with classical physics that allowed those paths to be taken in the allotted time and space.

16 days ago

blueprint

Have a look at e.g. the path integral. It's a formulation for calculating the outcomes of quantum mechanics' probabilistic machinery. I like to use the word "possible" instead of "probable" when explaining it, sometimes. When a car is hurtling down a road, it cannot really diverge from its "classical" path too much because it is already highly localized and entangled. There's a single car we're talking about. But a single isolated particle is not even localized enough to be called a particle; it's more a wave of possibility of being measured in a certain state, such as a location. (I'm still a student, though, so I do not yet fully understand how particle location and space-patch entanglement connect.)

But, anyway, as I grasp it so far... nature already makes it possible to look like one can "take multiple paths" when one is coherent, i.e. isolated from entanglement with the environment during that path, i.e. where the "which path" is not already decided via e.g. measurement. For example, the Lamb shift is a consequence of the fact that the universe really doesn't know which paths of particle creation and annihilation, if any, the force carriers between the electron and the proton took. So it is able to take all of them. Why should our expectation matter more than how nature says it can work? It seems like a mental contortion is needed - but nature really does operate on possibility, as a consequence of the more fundamental fact that no one and nothing in the entire universe, including the particle itself, "knows" what state it's in or is going to be in during a certain span of the system's evolution.

The moment there is such a record keeper, the influence of "possibility" diminishes.
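
For what it's worth, the "all paths" bookkeeping can be played with numerically: assign each path a phase exp(i k L) and add them. A toy two-slit geometry (made-up numbers, just to show the fringes appear):

    import numpy as np

    k = 2 * np.pi / 0.5              # wavenumber for wavelength 0.5 (arbitrary units)
    slit_y = np.array([-1.0, 1.0])   # two slits, detection screen at distance L
    L = 100.0

    for y in np.linspace(-30, 30, 13):
        lengths = np.hypot(L, y - slit_y)     # path length through each slit
        amp = np.exp(1j * k * lengths).sum()  # amplitudes add, not probabilities
        print(f"y={y:6.1f}  intensity={abs(amp) ** 2:.3f}")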

16 days ago

fizx

The Everettian multiverse is a bit like seeing some bubbles and foam in the surf and concluding that there are "many oceans." Yes, there are many surfaces of water, but calling each bubble an ocean is a bit disingenuous.

Like the bubble analogy, new Everettian universes are local, small, and the total amount of probability density (water in this analogy) is conserved.

16 days ago


bambax

> Having said that, the biggest caveat to the “10^25 years” result is one to which I fear Google drew insufficient attention. Namely, for the exact same reason why (as far as anyone knows) this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

I don't understand that part, can someone explain? There should be plenty of problems that take a long time to solve, but are trivial to verify? Like for example factoring extremely large numbers that are the product of a few very large primes? Maybe not on the order of 10^25 years, but still?

16 days ago

refulgentis

This computation is... handwaves, most handwaving I've done in a while... initializing a random state, and it's well understood that checking whether this state was randomly initialized isn't computable in a reasonable timeframe on classical computers. ends extreme handwaving

This nerd-sniped a bunch of us, because it sounds like "oh, we proved P != NP". The keys to understanding are A) hanging onto the "this" in "this computation" (this is easier when you're familiar with the computation and its contextual history in the field), and B) remembering prime factoring as a plausible application of QC.

Then faced with the contradiction of B, it's neatly resolved by "yes, but the quantum computer isn't big enough to do prime factorization yet"

As noted somewhat sideways in the blog, if someone has a computation that is A) not classically computable in a reasonable timeframe, B) computable on a minuscule quantum computer, and C) verifiable on a classical computer in a reasonable timeframe, a lot of researchers will be excited.

16 days ago

jl6

Does this mean that the problem of not being able to verify QC results will go away in the scenario where we have a large enough QC to solve an NP problem?

16 days ago

refulgentis

Correct, you nailed it.

FWIW:

There were some really cool comments on the last article re: Willow.

One of them was a reference to an apparently well-known "roadmap" of quantum scaling that got written up a few years back.

Apparently the Willow result was the 2026 baseline projection.

So their message was "well... not too big a deal, we achieved 2026 at ~2025." They also said that the same roadmap would have had this achieved in 15-20 years.

16 days ago

joak

The hardware is progressing; we have an issue though: we don't have algorithms to run on quantum computers. Besides Shor's algorithm, useful for breaking RSA, we have nothing.

Just vague ideas like: it could be useful for quantum simulations or optimisation or maybe ...

If tomorrow we had a fully working quantum computer, what would we run on it? We are in a vacuum.

The only hope is a breakthrough in quantum algorithms. Nothing in sight, not much progress on this side.

Oh yes, and Zapata Computing, the best-funded company in quantum algorithms, just went under this year.

16 days ago

oldgradstudent

> Oh yes, Zapata Computing, the best funded company in quantum algorithms just went under this year.

It's kinda hard to make money by developing algorithms for imaginary magic computers.

16 days ago

winwang

For one, we could just start simulating quantum chemistry, though at that point it's more like "actually running quantum chemistry" rather than simulating.

16 days ago

sgt101

Will AI take that use case though?

16 days ago

winwang

AI isn't magic. It's still subject to the limits of classical computing.

15 days ago

phoronixrly

Sorry but you need to cite some sources... The fact that random adtech devs on HN have no algorithms useful to them to run on a QC does not mean much to me.

16 days ago

griomnib

You’ve provided me with a beautiful framing to understand some of the most obtuse and self-serving arguments I see on here: “random adtech devs”.

16 days ago

TZubiri

laugh as we may, they get the money.

16 days ago

reikonomusha

No-nonsense roll-up of what algorithms a quantum computer can run in principle, along with their big-O speed ups (which don't necessarily reflect practical speed ups): https://quantumalgorithmzoo.org/

16 days ago

gauge_field

Here is another paper about its applicability on a range of problems: https://cacm.acm.org/research/disentangling-hype-from-practi...

16 days ago

77pt77

This is not true.

There are plenty of quantum algorithms.

https://en.wikipedia.org/wiki/Quantum_algorithm

16 days ago

sgt101

Qualification: Quantum Algorithms that provide superpolynomial speedup, have an application, and don't need quantum memory or rely on quantum data.

16 days ago

ChrisArchitect

Related:

Willow, Our Quantum Chip

https://news.ycombinator.com/item?id=42367649

16 days ago

imranq

Summary: It's a real result. The cool part is that more qubits seem to live longer rather than shorter; the bad part is that the results are not explicitly verifiable, only checkable through extrapolation.

16 days ago

NooneAtAll3

you're mixing up 2 different results

a) error correction needs a small enough base level of errors to amplify the signal - we finally got to that point, and a larger correction setup now deals with more errors

b) "standard" benchmark problem now 100% computes something uncomputable with classic chips (in practice) - the problem is that it's so quantum, neither it is verifiable with classic chips anymore

15 days ago

r33b33

Let's talk about things that actually matter - where to invest in post-quantum world?

I'll keep this short.

- Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

- Hypothesis: Accelerating advancements in tech and AI could lead to quantum supremacy arriving sooner than the 2030s, contrary to expert predictions.

- Legacy banking systems, being centralized, could transition faster to post-quantum-safe encryption by freezing transfers, re-checking processes, and migrating to new protocols in a controlled manner.

- Decentralized cryptocurrencies face bigger challenges: hard forks are difficult to coordinate across a decentralized network.

- Transitioning to quantum-safe algorithms could lead to longer transaction signatures and significantly higher fees, eroding trust in the system.

- If quantum computers compromise current cryptography, tangible assets (e.g., real estate, stock indices) may retain more value compared to digital assets like crypto.

Thoughts?

16 days ago

almostgotcaught

Literally none of this is correct.

> - Google’s Willow quantum chip significantly outpaces current supercomputers, solving tasks in minutes that would otherwise take billions of years.

billions of years you say? Just what kinds of "computing tasks" we talkin about here?

16 days ago

smallerize

The benchmark is called "random circuit sampling". https://blog.google/technology/research/google-willow-quantu... I call it a benchmark because a random circuit is probably not useful, but quantum computers are useful for simulating quantum systems including materials science, particle physics etc. This result is separate from the error-corrected qubit result and did not use error correction. There is a third result in the paper, which is a repetition-coded qubit that was stable for almost an hour. https://www.nature.com/articles/s41586-024-08449-y

15 days ago

r33b33

Okay, explain why and how it is not correct then?

I genuinely want to learn and figure out what is the truth and what is the best route of action when it comes to securing a portfolio.

16 days ago

simonh

Willow only implements a single error-corrected logical qubit, using over 100 physical qubits. So far the test they have done is only applicable to an extremely narrow problem domain, approaching the limit of how practically useless a problem domain can be. I may be exaggerating slightly.

As Google's own team reports, "Google will only consider itself to have created a 'true' fault-tolerant qubit once it can do fault-tolerant two-qubit gates with an error of ~10^-6". That's two logical qubits.

The general consensus is that to have a practically useful quantum computer we'd need one with about a million physical Qubits. That's not just 1,000 of these 105 qubit chips because those wouldn't be entangled, but one chip with about a million entangled physical qubits and therefore 1,000 logical qubits.

15 days ago

NooneAtAll3

> Let's talk about things that actually matter

I'm all ears

> where to invest

and like that, you lost me

15 days ago

machina_ex_deus

Before invoking parallel universes, how about comparing the system to nature's mind-boggling number of particles in the macroscopic world? A single gram contains about 10^23 = 2^76 particles. Google's random circuit sampling experiment used only 67 qubits, which is still well below 76. I wonder why, since the chip has 105 qubits and the error correction experiment used 101 of them.

Did Google's experiment encounter problems when trying to run RCS on the full 105 qubits device?

Before saying that the computation invoked parallel universes, first I'd like to see that the computation couldn't be explained by the state being encoded classically by the state of the particles in the system.

16 days ago

zh3

Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible - yet the universe "computes" the correct result in real time. It feels like there is a huge gap between what actually happens and what can be done with a computer (even a quantum one).

16 days ago

vasco

The universe also computes Pi perfectly every time and nobody is surprised or calling side universes for help explaining it.

16 days ago

galaxyLogic

Universe does not calculate the digits of Pi. We do.

16 days ago

amelius

I think they mean that Pi is part of many formulas in physics.

16 days ago

galaxyLogic

It's a good question why that is so. But I wouldn't draw from that the conclusion that the Universe somehow "calculates Pi" and then puts it into all the forces it "has" so that it turns out in our formulas. That is a rather fantastical way of thinking, though I do see its poetic appeal. A bit like "God doesn't play dice, or does he?"

What is calculation anyway, we may ask. Isn't it just term-rewriting?

16 days ago

sgt101

I think this shows how bad the definitions for computing are. There's a big rethink needed, but unfortunately it needs a galaxy brain to do it!

15 days ago

j-krieger

> It's a good questiopn why that is so

Pi is just a description used for calculating perfect/near-perfect spheres. A sphere is nature's building block, since every point on its surface is the same distance from the centre.

16 days ago

throw310822

> yet the universe "computes" the correct result in real time

Does it? In what sense the result is "correct"? It's not because it's perfectly regular, or unique, or predictable, or reproducible. So what's "correct" about it?

Completely out of my depth here, but maybe there is a difference between the evolution of a physical system and useful computation: maybe much less useful computation can be extracted from a physical system than the total amount of computation that would be theoretically needed to simulate it exactly. Maybe you can construct physical systems that perform vast, but measurable, amounts of computation, yet extract only a fixed maximum amount of useful information from them?

And then you have this strange phenomenon: you build controlled systems that perform an enormous amount of deterministic, measurable computation, but you can't make them do any useful work...

16 days ago

zh3

It does seem to, and can anyone credibly say they aren't out of their depth in these waters? (the sandpile thing is not original, it dates back many years). Taking the idea that the "universe is a simulation" [0], what sort of computer (or other device) could it be running on? (and how could we tell we're living in a VM?)

From the same school of thought, to simulate the path of a single particle seems it should require a device comprised of more than a single particle. Therefore, if the universe is a simulation, the simulator must have more than the number of particles in the universe.

[0] https://en.wikipedia.org/wiki/Simulation_hypothesis

16 days ago

knome

If the universe is just the universe, it needs only the number of particles in the universe.

16 days ago

TZubiri

"In what sense is ground truth correct?"

In the tautological sense.

16 days ago

onlyrealcuzzo

> Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible

Is it really?

There's only ~500,000 grains of sand in an egg timer.

I don't know anything here, but this seems like something that shouldn't be impossible.

So I'm curious. Why is this impossible?

What am I missing?

16 days ago

zh3

Maybe it's not that hard to simulate, but let's start by looking at just two of the sand grains that happen to hit each other. They collide; how they rebound is all angles, internal structure, Young's modulus; they have electrostatic interactions, and even the Van der Waals force comes into play. Sand grains aren't regular - determining the precise point at which two irregular objects collide is quite a challenge (and this isn't even a game; approximations to save compute time won't do what the real world does "naturally").

So while we can - for something as simple and regular as an eggtimer - come up with some workable approximations, the approximation would surely fall short when it comes to the detail (an analytical solution for the path of every single grain).

16 days ago

onlyrealcuzzo

I guess I wasn't thinking of a PERFECT simulation.

Now it's obvious to me that you would have to simulate exactly what the universe is doing down to the smallest level to get a perfect simulation.

Thanks.

Is it really impossible to get a very close approximation without simulating down to the atomic level, though?

15 days ago

zh3

A close approximation should arguably include collapses/slides, which happen spontaneously because the pile organises itself to a critical angle; then an incredibly small event can trigger a large slide of salt/sand/rocks (or whatever else the pile is made of). Even working out something like "what are the biggest and smallest slides that could occur given a pile of some particular substance?" is a challenge.

Every approximation will by definition deviate from what really happens - I suppose that's why we talk of "working approximations", i.e. they work well enough for a given purpose. So it probably comes down to what the approximation is being used for.

There is the idea that we are all living in a simulation; if so maybe if we look closely enough at the detail all the way from the universe to atoms then we'll start to see some fuzziness (well, of course there's quantum physics....).

12 days ago

galaxyLogic

When the output looks the same as the original we would say that the simulation was successful. That is how computer games do it. We're not asking for the exact position of each grain, just the general outline of the pile.

16 days ago

namaria

An image of something is likely the simplest model of that thing, and it has A LOT less information than a 3D model of arbitrary resolution would have.

16 days ago

galaxyLogic

A simulation is never an "image". It may simulate each grain; it just doesn't need to simulate each one precisely, because the law of large numbers kicks in.

This is the basis of, for example, Monte Carlo simulation, which simulates the real world with the random numbers it generates.
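
The classic toy example: estimating pi by sampling random points. No individual sample is accurate, but the aggregate converges:

    import random

    n = 1_000_000
    # Fraction of random points in the unit square that land inside the
    # quarter circle approaches pi/4 as n grows.
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    print(4 * hits / n)  # ~3.1416; error shrinks like 1/sqrt(n)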

15 days ago

namaria

Every video game engine is a simulation and many of them are a very simplified model of images of things happening instead of simulating the actual physics. Even "physics" in these engines is often just rendering an image.

15 days ago

GeneralMayhem

The real issue is that the sand isn't orderly sorted. At a micro level, it's billions and trillions of individual interactions between atoms that create the emergent behavior of solid grains of sand packing reasonably tightly but not phasing through each other.

16 days ago

NooneAtAll3

> I wonder why, the chip had 105 qubits and the error correction experiment used 101 qubits.

I wonder why, a byte has 8 bits and the Hamming error correction code uses 7 bits.

oh right - that's because *the scheme* requires 3-7-15-... bits [0] and 7 is the largest that fits

Same with surface error correction - it's just the largest number in a list. No need for conspiracies. And no connection to manufacturing capabilities, which determine how many qubits fit on a single chip.

[0] https://en.wikipedia.org/wiki/Hamming_code
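
To make the 3-7-15 point concrete, here's a minimal Hamming(7,4) encoder and single-error corrector (code lengths are 2^r - 1, so 3, 7, 15, ...; this is the r = 3 member):

    # Hamming(7,4): 4 data bits, 3 parity bits, corrects any single-bit error.
    # Bit positions 1..7; parity bits sit at positions 1, 2, 4 (powers of two).

    def encode(data):  # data: list of 4 bits
        code = [0] * 8  # index 0 unused, positions 1..7
        for pos, bit in zip([3, 5, 6, 7], data):
            code[pos] = bit
        for p in (1, 2, 4):
            code[p] = sum(code[i] for i in range(1, 8) if i & p) % 2
        return code[1:]

    def correct(code):  # code: list of 7 bits, possibly with one flipped
        c = [0] + list(code)
        syndrome = sum(p for p in (1, 2, 4)
                       if sum(c[i] for i in range(1, 8) if i & p) % 2)
        if syndrome:
            c[syndrome] ^= 1  # syndrome is the 1-based position of the error
        return c[1:]

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                                  # flip one bit in transit
    print(correct(word) == encode([1, 0, 1, 1]))  # True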

15 days ago

aeternum

>it would also take ~10^25 years for a classical computer to directly verify the quantum computer’s results!!

This claim makes little sense. There are many problems that are much easier to verify than to solve. Why isn't that approach ever used to validate these quantum computing claims?

16 days ago

adastra22

That's what the author is saying. Researchers in this field should, for credibility reasons, be solving test problems that can be quickly verified. As to why this isn't done:

(1) They're picking problem domains that are maximally close to the substrate of the computing device, so they can hit maximum problem sizes (like 10^25). For many (all?) fast-verifiable problems they can't currently handle impressively large problem sizes. In the same way that GPUs are only really good at "embarrassingly parallel" algorithms like computer graphics and linear algebra, these quantum chips are only really good at certain classes of algorithms that don't require too much coherence.

(2) A lot of potential use cases are NOT easy to validate, but are still very useful and interesting. Weather and climate prediction, for example. Quantum chemistry simulation is another. Nuclear simulations for the Department of Energy. Cryptography is kinda exceptional in that it provides easily verifiable results.

16 days ago

qnleigh

I would add one more to this, which I would argue is the main reason:

(0) For a quantum algorithm/simulation to be classically verifiable, it needs additional structure; something that leads to a structured, verifiable output despite the intermediate steps being intractable to simulate classically. That additional structure necessarily adds complexity beyond what can be run on current devices.

To pick an arbitrary example I'm familiar with, this paper (https://arxiv.org/abs/2104.00687) relies on the quantum computer implementing a certain cryptographic hash function. This alone makes the computation way more complex than what can be run on current hardware.

16 days ago

vasco

Isn't weather prediction extremely easy to validate? What am I missing other than looking out the window tomorrow?

16 days ago

badmintonbaseba

Maybe a working quantum algorithm for weather prediction would outperform currently used classical simulations, but I wouldn't expect it to be bang on every time. Inputs are imperfect. So at best you could benchmark it, and gain some confidence over time. It could very well be good enough for weather prediction though.

Also I doubt that a quantum algorithm is possible that provably solves the Navier-Stokes equations with known boundary and initial conditions. At least you need some discretization, and maybe you can get a quantum algorithm that provably converges to the real solution (which alone would be a breakthrough, I believe). Then you need some experimental lab setup with well controlled boundary and initial conditions that you can measure against.

In any case the validation would be at a very different standard compared to verifying prime factorization. At most you can gain confidence in the correctness of the simulation, but never absolute certainty.

16 days ago

adastra22

At scale, yes. But this would still be solving toy problems with fewer variables, fewer dimensions.

And they’re not actually solving weather problems right now, I think. That was just an example. What they are actually solving are toy mathematical challenges.

16 days ago

TZubiri

To the extent that they are useful, they will be easier to validate.

For example: "Calculate whether we'll have El Niño this year"

The validation will not need to be run on a machine, but on the real world. Either we have el niño or we don't.

16 days ago

svachalek

I think he was making that exact point in this blog.

16 days ago

pitpatagain

Because we don't currently know of a problem like this that both has a quantum algorithm we can run on this type of device with expected exponential speedup and has a fast classical verification algorithm. That's exactly the author's point; he has been advocating for quite a while the importance of finding such an example, which would be better to use.

16 days ago

qnleigh

Depends on what you mean by "this type of device." Strictly speaking there are many efficiently verifiable quantum algorithms (including Shor's algorithm, the one that breaks RSA). But if you mean "this particular device," then yes, none are simple enough to run on a processor of this scale.

16 days ago

gyrovagueGeist

They likely mean on any of the current era of NISQ-like devices (https://en.wikipedia.org/wiki/Noisy_intermediate-scale_quant...) like this one or quantum annealers.

16 days ago

hammock

Do we have any proof or evidence that such a problem even exists?

16 days ago

JumpCrisscross

> Why isn't that approach ever used to validate these quantum computing claims?

Hossenfelder’s linked tweet addresses this head on [1]. We need four orders of magnitude more qubits before a QC can simulate anything real.

In the meantime, we’re stuck with toy problems (absent the sort of intermediate test algorithms Aaronson mentions, though the existence of such algorithms would undermine the feat’s PR value, as it would afford cheap takedowns about the QC lacking supremacy).

[1] https://x.com/skdh/status/1866352680899104960

16 days ago

TZubiri

In terms of physical simulations, surely you can limit the size of the system to fit your exact complexity needs?

For example, isolate two molecules in a vacuum and predict their future state. Now make it 5, now 100, etc...

16 days ago

andreareina

That's pretty much the kind of problem they're using here. The problem is that to verify the simulation, you need to run it on a classical device, and that requires time that is exponential in the number of particles being simulated. They've verified that the classical and quantum simulations agree for n = 1, 2, 3, ..., and now here's a quantum simulation for an n that would take 10^25 years to do classically.
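
To see how fast "exponential" bites: a full classical statevector simulation stores 2^n complex amplitudes at 16 bytes each. A quick back-of-envelope in Python:

    # Memory needed for a brute-force statevector simulation of n qubits,
    # assuming 16 bytes per complex128 amplitude.
    for n in (30, 40, 53, 67):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB")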

16 days ago

TZubiri

If what you are simulating is a physical system, then to verify it you only need to replicate the physical system, not rerun the simulation on another device.

I suggested simulating the experiment of n molecules in a vacuum, another experiment might be a chaotic system like a double pendulum. Although there would need to be a high level of precision in setting up the physical parameters of the experiment.

15 days ago

EvgeniyZh

That's not true, there are enough interesting problems for NISQ. Quantum chemistry for example.

16 days ago

throw310822

Could it be that it's not by chance that these kinds of problems are chosen? Somehow we can get from a quantum system an amount of computation that goes well beyond what a classical system can perform, but we can't seem to extract any useful information from it. Hmm.

16 days ago

aidenn0

Right, Factoring and discrete logs both come to mind; is Google's quantum computer not able to achieve measurable speedups on those versus classical computation?

16 days ago

pitpatagain

That's exactly correct, this chip cannot do either of those things yet. Which is why they use this toy-ish random circuit sampling problem.

16 days ago

Filligree

Not a large enough speedup to factor something that couldn't be done on a classical computer.

Well, really it can't run them at all; but a more general computer of this size that could, still wouldn't be large enough.

16 days ago

fastball

Google's chip is not general enough to perform any quantum algorithm.

16 days ago

tmvphil

It is perfectly general, but the error rate is too high to operate all qubits simultaneously for more than a few tens of gates without error. This is why error correction is needed but then you need orders of magnitude more physical qubits to deal with the overhead.

16 days ago

fastball

Then why can they perform an RCS evaluation but not some other algo? Is it that RCS requires the fewest qubits, by a huge margin?

16 days ago

drkevorkian

No, not quite, it's about the error-per-gate. RCS has very loose requirements on the error per gate, since all they need is enough gates to build up some arbitrary entangled state (a hundred or so gates on this system). Other algorithms have very tight requirements on the error-per-gate, since they must perform a very long series of operations without error.
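
The back-of-envelope arithmetic: with per-gate error p, a G-gate circuit finishes error-free with probability roughly (1 - p)^G. Illustrative numbers only (not Willow's actual specs):

    # Rough success probability of an uncorrected circuit: (1 - p)^G.
    for p, gates in [(1e-3, 100), (1e-3, 1_000_000), (1e-6, 1_000_000)]:
        print(f"p={p:g}  gates={gates:>9,}  success ~ {(1 - p) ** gates:.3g}")
    # The middle case underflows to 0: long algorithms are hopeless
    # without error correction; short RCS-style circuits are fine.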

15 days ago

hiddencost

For reference, Aaronson is widely regarded as one of the foremost members of the field.

Try "I don't understand this claim"?

16 days ago

evanb

The question is not whether a problem is easier to verify than to solve, but whether there is a problem that is provably faster (in the complexity sense) on a quantum computer than on a classical computer and that is easy to verify on a classical computer.

16 days ago

megiddo

He argues this point exactly.

16 days ago

urbandw311er

> No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits

I laughed at this. If I understood more than literally 2 words of that, then yes - no doubt I would ask about that.

16 days ago

andyferris

It’s the implementation of the quantum computer - what are the qubits and gates and things made of?

It is like we are still figuring out whether it’s better to use vacuum tubes or semiconductors or what. Google used superconducting circuits for Willow, which can be fabricated with computer-chip-style lithography etc, but needs to be kept extremely cold and the connectivity is obviously etched permanently into the circuit. Other technologies have different pros and cons.

16 days ago

urbandw311er

Thank you, much appreciated

16 days ago

devit

Where's the performance on common useful tasks?

What's the largest number it can factor using Shor's algorithm? What's the largest hash it can compute a pre-image for using Grover's algorithm?

16 days ago

sgt101

I think the record for Shor's remains at 15.

16 days ago

dataflow

Dumb question: can someone explain the following?

Imagine a ball falling on the ground.

Simulating the O(10^23) atoms in each one with a classical computer would take (say) 10^23 times the amount of work of simulating a single atom. Depending on the level of detail, that could easily take, you know, many, many years...

We don't call the ball a supercomputer or a quantum computer just because it's so much more efficient than a classical computer here.

I presume that's because it can't do arbitrary computation this quickly, right?

So in what way are these quantum computers different? Can they do arbitrary computations?

16 days ago

qnleigh

Great question. The device is fully programmable. Arbitrary one-qubit operations and arbitrary two-qubit operations between adjacent qubits can be performed. Theoretically these are 'universal for computation', meaning that a large enough device could compute anything computable. You can't program Quantum Tetris or whatever on a bouncy ball :).

But nevertheless, many of these 'beyond-classical' demonstrations feel a bit arbitrary in the way you describe, and there's good reason for this. Logical operations are still quite noisy, and the more you apply, the more output quality degrades. To get the most 'beyond-classical,' you run the thing that maps most readily to the physical layout and limitations of the hardware.

As things improve, we'll see more and more demonstrations of actually useful computations. Google and others have already performed lots of quantum simulations. In the long run, you will use quantum error correction, which is the other big announcement this week.

16 days ago

fluoridation

So isn't this the same as turning a classical computer on, letting it run on whatever garbage is in the RAM at the time, and when some nonsense shows up on the screen, breathlessly exclaiming that it would take several millennia to get the same result with an abacus, despite the fact that something was "computed" only by strict adherence to the definition of the word? It's not like it takes a quantum computer to produce a meaningless stream of data.

16 days ago

qnleigh

That's a great analogy, and I basically agree with it. But there would be some ancient, abacus-wielding mathematicians who would be impressed by this fast-but-useless computer. One might take it as a sign that once/if the computer can be controlled properly, it might be quite useful.

This might have been part of the history of classical computers as well, except that it turns out to be pretty easy to do classical operations with very high fidelity.

16 days ago

fluoridation

Yeah... But since the device is not doing anything meaningful, there's no way to tell if it actually is computing anything, rather than being a very expensive and very complicated random number generator. Because you don't need a quantum computer to generate a stream of meaningless numbers, a machine being capable of generating a stream of meaningless numbers doesn't demonstrate whether it's computing quantumly.

16 days ago

fluoridation

Furthermore, how do you distinguish successful runs from malfunctions?

15 days ago

qnleigh

That's a good question. They run the system at a small scale and validate there. The assumption is that no new error mechanism magically switches on when the simulation gets large enough, but if it did there would be no way to know.

Hopefully large-scale, verifiable demonstrations become viable in the near future. But currently they're just too hard to implement.

5 days ago

TZubiri

Re: Noise

There are some probabilistic programs we run that not only don't need determinism, but are actively harmed by it.

For example, deep learning training would probably work fine with 1% destructive noise, as long as there was a massive increase in compute.

16 days ago

dataflow

Thank you!

16 days ago

Strilanc

The key difference is that the problem being solved is a math problem. It can be written down on paper.

A ball falling on the ground can be converted into a math problem. To get the conversion exactly right you will need to write down the exact state of the ball. But you will invariably incur small inaccuracies while doing this. For example, maybe the mass you write down is off by 1 part in a trillion. The math problem is the ground truth, so any conversion inaccuracies are now errors in the ball. In practice these inaccuracies will prevent even the original ball from solving the written down problem much better than you could with a computer.

In the case of random circuit sampling, the written down problem is a tensor network [1] (that happens to also be a shallow quantum circuit). Fundamentally, a tensor network just specifies a bunch of matrix multiplications to do. It's not even that big of a problem: only a few kilobytes of information (whereas the exact state of a ball would be gargantuan). All you have to do is perform the specified multiplications, interpret the result as a probability distribution, and sample from it. The obstacle is that these multiplications create intermediate values that are really really large. The quantum computer bypasses this obstacle by executing the tensor network as a circuit.

[1]: https://en.wikipedia.org/wiki/Tensor_network
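
A toy version of "just do the specified multiplications": brute-force statevector simulation of a small random circuit, which is exactly the thing that stops being feasible as qubits are added. (Plain numpy; the gate choices here are arbitrary, a sketch rather than the experiment's actual circuit.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10                                # easy at 10 qubits, hopeless at 67
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    T = np.diag([1, np.exp(1j * np.pi / 4)])

    def apply_1q(state, gate, q):
        # Contract the 2x2 gate with qubit q's axis of the 2x2x...x2 tensor.
        psi = np.tensordot(gate, state.reshape([2] * n), axes=([1], [q]))
        return np.moveaxis(psi, 0, q).reshape(-1)

    def apply_cz(state, q1, q2):
        # Controlled-Z: negate amplitudes where both qubits are 1.
        psi = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[q1], idx[q2] = 1, 1
        psi[tuple(idx)] *= -1
        return psi.reshape(-1)

    for _ in range(8):                    # a few layers of random gates
        for q in range(n):
            state = apply_1q(state, H if rng.random() < 0.5 else T, q)
        for q in range(0, n - 1, 2):
            state = apply_cz(state, q, q + 1)

    probs = np.abs(state) ** 2
    probs /= probs.sum()                  # guard against float drift
    print(f"{rng.choice(2 ** n, p=probs):0{n}b}")  # one sample from the circuit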

16 days ago

drkevorkian

There are two points. You got the first one, which is controllability. The components are controllable and programmable. But second it's important to appreciate the difference between simulating 10^23 classical billiard balls with a computer (very hard, C * 10^23 work for some C) and simulating 10^23 quantum mechanical atoms (C * d^(10^23) work for some C and some d). Those numbers are very different.

15 days ago

andreareina

The architecture/design of these quantum computers can do arbitrary computations. The specific machines we are currently able to build don't have enough qubits that remain coherent long enough to solve the problems we care about. We've built the aeolipile, now we need to scale it up to a steam turbine capable of moving ships and trains.

16 days ago


JKCalhoun

What does quantum computing need to move forward? Will just throwing a lot of money at the thing allow it to scale? Or are there fundamental problems blocking it that require new physics or new material sciences?

16 days ago

EvgeniyZh

It's hard to say. For example, Google's paper talks about some rare (once an hour) bursts of strong errors. Are they fundamental, or do they have an easy fix? We don't know.

One obvious problem is cooling. We can't cool a million qubits in a single fridge, so we will need to split them between fridges and communicate between them. Also, the wiring is already really complicated and hard to scale (one reason IBM uses a heavy-hex layout is to have more space).

Another problem is connectivity. For transmon qubits connectivity is fixed. Applying a gate to two far-apart qubits requires a lot of swaps, which is expensive. This is less of a problem for ions or cold atoms because they can couple any two qubits, but they likely wouldn't be able to for large numbers of qubits.

Another thing is classical control, because the classical data needs to be processed at high speed. Some companies are developing specialized hardware for that.

None of these is necessarily fundamental, but these problems need to be solved, in addition to the usual scaling (it is hard to manufacture these devices, and it becomes harder with each qubit).

16 days ago

ryandamm

I was told by a Microsoft researcher that it will unlock currently unsolvable problems in chemistry, weather modeling, materials science, fusion and plasma physics, drug development… the list went on but it was really long. Most advances he cited would result from improved simulations, iirc.

I don’t recall enough of the conversation (circa 2019) to remember anything about the properties of the problems it helps solve, so I can’t help generalize. Sorry.

16 days ago

gauge_field

Here is a perspective from another MS researcher: https://www.youtube.com/watch?v=WY3htdKUGsA&t=1564s&ab_chann...

Essentially, they argue that unless a strong algorithmic breakthrough happens (e.g. cubic speedups instead of quadratic), the only practical problems for which quantum computers will be useful are those with an exponential speedup: simulation of quantum systems (and breaking RSA encryption, if you count that). Even those are challenged by approximate simulation with classical deep learning. There will be some quantum systems for which quantum supremacy is useful and deep learning isn't. The question is what classes of systems.

16 days ago

bawolff

I think we are well at the point where it is just time and money. It's not a physics problem, it's an engineering problem.

I'm sure there are lots of complicated problems ahead, but I don't think we are waiting for someone to discover "new" physics.

16 days ago

andrewla

What does homeopathy need to move forward? What does perpetual motion machinery need to move forward?

This is not totally fair; it is possible given certain independence assumptions. But they are likely not physically realizable.

It is almost certain that even within our understanding of quantum systems it is simply not possible to have a quantum computer (much less create one). The assumptions necessary to produce a completely uncoupled system are likely invalid; we have yet to produce a compelling experiment to indicate otherwise. [1]

[1] https://arxiv.org/pdf/1908.02499

16 days ago

wasabi991011

From the paper:

> Goal 3: Create distance-5 surface codes on NISQ circuits that require a little over 100 qubits.

> The argument presented in the next section asserts that attempts to reach Goals 1-3 will fail.

Goal 3 is exactly what Google has recently achieved with Willow. Actually, they did even better, reaching a distance-7 surface code (with a little over 100 qubits). Q.E.D.

To be clear, I do think your article is interesting and was valid at the time, but it goes to show how fast the field is advancing.

16 days ago

nuancebydefault

>> But for anyone who wonders why I’ve been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments: well, this is why! We’re now deeply into the unverifiable regime that I warned about.

Can anybody explain to me why it is hard to find a problem that can be solved only by a basic quantum computer within a short timespan, yet can be easily verified by a normal computer? I thought there were plenty of algorithms out there for which one direction is fast and the reverse takes ages.

16 days ago

de6u99er

IMHO, we're still a long way from anything truly useful. The problem Google used to demonstrate quantum supremacy feels more like a glorified random number generator. Even if quantum computers can generate results faster, processing and storing the data still takes a lot of time. It’s hard not to draw a parallel with claims about "instantaneous quantum communication," where entangled particles appear to defy the speed of light — it seems impressive at first, but the practical value remains unclear.

16 days ago

blharr

Isn't instant communication impossible, even with entanglement?

16 days ago

JeffAllen

It's not communication, it's coordination of 2 (or more) entangled particles.

(Spooky action at a distance.)

2 days ago

LikeBeans

In simple terms, if I understand quantum computing - and please correct me if I'm wrong - the big benefit is parallel computing at a massive scale, whereas classical computing is serial in nature. If so, likely both methods are useful. But a very useful use case for quantum computing would be AI training to create models. It currently consumes a lot of GPUs, and QC would have a nice chance to impact such a use case. Did I get it right?

16 days ago

panda-giddiness

> the big benefit is parallel computing at a massive scale

The problem with this line of reasoning is that, even though a quantum system might have many possible states, we only observe a single one of those states at the time of measurement. If you could somehow prepare a quantum system such that it encoded the N equally-likely solutions to your classical problem, you would still need to rerun that experiment (on average) N times to get the correct answer.

Broadly speaking, quantum computing exploits the fact that states are entangled (and therefore correlated). By tweaking the circuit, you can make it so that incorrect solutions interfere destructively while the correct solution interferes constructively, making it more likely that you will measure the correct answer. (Of course this is all probabilistic, hence the need for quantum error correction.) But developing quantum algorithms is easier said than done, and there's no reason to think a priori that all classical problems can be recast in this manner.
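
That amplify-by-interference loop is easy to mimic at toy scale. A minimal Grover-style search over N = 16 items, simulating the amplitudes with plain numpy (the marked index is arbitrary):

    import numpy as np

    N, marked = 16, 11
    amps = np.full(N, 1 / np.sqrt(N))  # uniform superposition

    for _ in range(3):                 # ~ pi/4 * sqrt(N) iterations
        amps[marked] *= -1             # oracle: flip the solution's sign
        amps = 2 * amps.mean() - amps  # diffusion: invert about the mean

    probs = amps ** 2
    print(probs.argmax(), round(probs[marked], 3))  # 11 0.961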

16 days ago

sgt101

I think that the big challenge is to recast any classical computations as quantum computations with a superpolynomial speedup.

I think that all classical problems can be cast as quantum computations because quantum computation is just computation - I believe that one can implement a Turing machine using quantum gates, so arbitrary computation is possible with quantum gates.

The superpolynomial speedups are the thing... I wonder if these will be limited to a class of computations that have no physical realization - just pure maths.

16 days ago

batmansmk

Ahaha! Read the subtitle of the blog, literally at the top: "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel."

16 days ago

victor22

Will I be attacked for thinking this is at least fishy? Or are they just being ultra secretive?

They never talk about what this computer is actually doing.

15 days ago

cubefox

Even Scott Aaronson doesn't really know what they will be good for. Something something quantum simulation something something material science, perhaps. Yet nothing concrete, no example of a problem that they would actually solve. Apart from cracking RSA, which is the opposite of useful.

It seems the whole quantum computer thing only made people like him excited because it's a different kind of computer, not because there is any strong evidence that it will be practically useful.

It reminds me of nuclear fusion: Sounds cool, but it is highly doubtful whether it could ever compete economically with nuclear fission.

15 days ago

simonh

They did a test of pretty much the only thing it can do.

This is an extremely limited system, different in capability but in the grand scheme of things conceptually similar to implementing the quantum equivalent of a single binary digit.

15 days ago

r33b33

Can someone just give it to me straight: should I sell my crypto positions and move to stock indices and real estate? Yes or no?

No nuance, just yes or no.

16 days ago

LittleTimothy

No.

This is progress towards quantum computing, but nowhere near a real, practical quantum computer that could break Bitcoin's algorithms. If progress continues it could be an issue in the future; check back next time Google publishes a paper.

16 days ago

kimchidude

Asking the real questions here.

I got that ‘Google has been talking about Willow for ages, this isn’t new’ blah blah blah. The problem is the public only started talking about it yesterday.

16 days ago

Mistletoe

No, won’t all your stocks and financial information be gone also?

Also Willow can’t even factor 15=5x3 you are good for a very long time.

16 days ago

sgt101

no

not for the next 5 years for sure.

16 days ago

thrance

Still no real potential applications beyond factoring large integers (of dubious use) and doing some obscure quantum physics simulations.

QC companies are selling quantum AI and quantum finance and other hand-wavy stuff. Yet I don't see any algorithm that has a proven advantage in these domains over classical ones running on clusters of GPUs.

16 days ago

01HNNWZ0MV43FF

> No doubt people will ask me what this means for superconducting qubits versus trapped-ion or neutral-atom or photonic qubits,

I only wonder what it means for cryptography.

The letters "crypt" don't appear in the text.

16 days ago

NooneAtAll3

quantum computing is entering "middle-late 1930s" - it's still quite some time away from Turing&Enigma moment

but it already passed "1920s" with "only radios" - analog, non-digital devices

stability tech is almost here (see Quanta Magazine); next is the scaling up

16 days ago

adastra22

As noted in the article, it could be a very sharp transition from "largest factorization performed so far is 3x7=21" to "we can factor real-world RSA."

If you want to make a classical computing analogy, it's like we're struggling to make transistors with more than a single 9 of reliability, and obviously you can't do complex computation with a circuit where every step gives garbage 1-in-10 times.

Except it's not obvious. 90% reliability could probably be made to work with silicon transistors and bog-standard error-correcting logic at the hardware level. Quantum error is a little bit more troublesome to work with, but there are also no known theoretical reasons error correction wouldn't work at existing error rates. We just need better algorithms, which might very well exist.

Or, the next generation of chips could offer a few more 9s of reliability, and even with existing error correction just a few more sigma in reliability would put us over the tipping point of being able to make reliable large-scale systems.
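
For the classical half of that analogy, here's what bog-standard majority voting buys you when each copy fails 10% of the time (a simple repetition code; quantum surface codes are far more involved, so this is only an illustration):

    from math import comb

    def logical_error(p, n):
        # Majority vote over n copies fails only if more than half flip.
        return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 7, 15):
        print(f"{n:>2} copies: {logical_error(0.1, n):.1e}")
    # Logical error falls exponentially with n, even from a 10% base rate.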

16 days ago

fluoridation

There were mechanical computers before the 20th century that had more complexity (in terms of total information) and were more useful than quantum computers are.

16 days ago

NooneAtAll3

mechanical computers were, at best, calculators

universal computation was not a thing yet

16 days ago

fluoridation

Not true. Programmable looms and player pianos existed. They weren't Turing machines, but they were certainly more sophisticated than mere calculators. And of course there's the Analytical Engine, even if it was never built. These technologies were far more influential (both culturally and within the context of CS itself) and practical than QCs are. It's possible that if electricity had taken a bit longer to develop, we would've seen honest-to-goodness mechanical Turing machines during the early 20th century. It's not like we just didn't know how to design and build the machines; they were just very complex and expensive.

So if you want to make analogies between quantum and classical computers, QCs aren't even at the level of early 19th century technology. They're still trying to figure out what's even necessary to build devices of a practical size.

16 days ago

Filligree

Nothing, yet.

16 days ago

daft_pink

I just want to know when this thing is going to put me out of work forever with its insane and crazy math skills.

16 days ago

bawolff

Unless your job involves factoring numbers into primes (or a few other limited math problems) probably never.

16 days ago

talldayo

I don't think it has crazy math skills at all. That's what classical computing is really good at - give it a problem of arbitrary length and a computer should be able to string together instructions to yield you a sum.

Quantum computing, especially right now, is simply focused on getting reliable input/output idempotency. It will be a really long time before it has insane and crazy math skills, and when it does, traditional CPU architectures will probably outperform it.

TL;DR: if the Texas Instruments calculator didn't put you out of a job, neither will quantum computers.

16 days ago

eastbound

Aren’t matrix calculations a perfect field for quantum computing? i.e. AI would progress extremely fast, wouldn’t it?

16 days ago

tmvphil

No, not really. There is no speedup for general matrix multiplication, and generally it won't be practical for any problem with large amounts of input and output data. The closest thing is the HHL algorithm for solving big linear systems, which requires a bunch of caveats on the matrix, and even then it needs to be a subroutine of another quantum algorithm, since it outputs a quantum state rather than the full solution vector.

16 days ago

galaxyLogic

There are two great lanes of progress happening in engineering these days: 1. Quantum Computing, and 2. AI.

If they can find some synergies between the two, that could be THE major development. Make better AI by using Quantum Computers, and make better Quantum Computers by applying AI.

16 days ago

chikere232

QC – nowhere near practical applications

AI – being practically applied in many areas where it doesn't seem to be working well

I really do hope these aren't our best lanes of progress

16 days ago

galaxyLogic

Here's an example of where AI does something of economic importance:

https://www.seattletimes.com/business/technology/google-intr...

Accurate weather forecasts save money. Doing them on supercomputers costs a lot of money, but with Google's AI solution that gets cheaper.

As for QC, I'm not saying it does many useful things yet, but companies are starting to use it and there is progress going on.

What in your view are better lanes of progress than QC and AI?

15 days ago

chikere232

I sloppily used "AI" as a collective term for the currently hyped LLMs and image generators, where especially LLMs are being applied enthusiastically regardless of whether they work.

It's very true that there are many areas of ML where considerable progress has been made and is delivering value

Are companies using QC? It's not like it can do much currently, as I understand it

We're making a lot of progress all over the engineering side of computing. A lot of it is incremental, but a faster or more energy-efficient processor is very practically useful in a way that QC isn't yet. We might well get optical interconnects on CPU dies before we get anything tangible from QC.

14 days ago

Jasondells

While these advancements in quantum error correction are undeniably impressive, I can't help but feel a sense of déjà vu here... it reminds me of the decades of "breakthroughs" in fusion energy: always promising to change the world, but perpetually just out of reach.

The Google benchmark with random circuit sampling is fascinating from a theoretical perspective, but it’s hard to see how this translates into solving problems that matter outside the quantum research community.

The lack of practical applications is a glaring issue. Sure, it's cool that Willow can outperform Frontier on this obscure task, but where’s the real-world impact?

Cryptography, optimization, drug discovery—these are the kinds of problems quantum computing needs to beat if it’s going to justify the investment. Until that happens, it feels like we’re stuck in a cycle of overpromising and underdelivering, with flashy press releases but no tangible results.

And let’s talk about scalability. Even if Willow hits the error-correction frontier, the number of physical qubits needed to build a truly practical quantum computer seems astronomical. Millions of qubits just to factor a number?

It’s hard to see how this scales in a way that makes economic or scientific sense. Right now, it feels like quantum computing is a field for researchers who are okay with not seeing practical outcomes in their lifetimes.

Maybe I’m too cynical, but this smells like another example of tech marketing getting ahead of the science. Maybe we can admit that we’re still decades away from quantum computing having any real-world relevance?

15 days ago

flkenosad

Could bitcoin miners use this to "guess" the next block?

16 days ago

coppsilgold

Mining is a SHA-256 guess-and-check lottery, which will be safe from a quantum computer.

Bitcoin private keys will become vulnerable with knowledge of the public keys.

There is a way to mitigate this: because a Bitcoin address is a hash of a public key, a Bitcoin protocol change can occur whereby the public key becomes a secret and the signature is a zero-knowledge proof (ZKP) of the prescribed transformation of secret -> address. These signatures are going to be big, however, and so the fees will be very high.
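
For the curious, the guess-and-check lottery mentioned above is literally this loop (a toy sketch; the header, target, and endianness details are simplified from real consensus code). Grover's algorithm could at best quadratically speed up this search, which difficulty retargeting can arguably absorb:

    import hashlib
    import struct

    def mine(header: bytes, target: int) -> int:
        """Toy Bitcoin mining: find a nonce such that
        double-SHA256(header || nonce), read as an integer,
        falls below the target. Pure guess-and-check."""
        nonce = 0
        while True:
            h = hashlib.sha256(hashlib.sha256(
                header + struct.pack("<I", nonce)).digest()).digest()
            if int.from_bytes(h, "little") < target:
                return nonce
            nonce += 1

    # Absurdly easy toy target (~1 success per 256 guesses);
    # real targets make this loop run ~10^22 times per block.
    print(mine(b"toy block header", 1 << 248))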

16 days ago

LZ_Khan

Yes, but I'm pretty sure the moment this is achieved bitcoin's price will collapse.

And the investment required to produce such a machine is undoubtedly in the XX billions.

16 days ago

TZubiri

"Yes, but I'm pretty sure the moment this is achieved bitcoin's price will collapse."

Not necessarily. There are two interpretations of the question: 1. QC would mine very efficiently. 2. QC would break SHA and be able to reverse the hash function in O(1).

In scenario 1, the difficulty would increase; the global mining rate always stays the same. The voting power would be distributed among the holders of the new compute, which has happened before with ASICs. Usually there's some graduality to it, and the capital is distributed so that there is never a 51% monopoly. How big the jump is matters a lot: if the new computer is stronger than all of the existing miners combined, then they theoretically get 100% (though exploiting that would be malice). In that case there would probably be a fork or, as you put it, BTC would collapse. However, if you have that power, holding BTC is probably not that important anyway. The actual compute is worth more.
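
(The mechanism that keeps the global rate constant is the difficulty retarget; a simplified sketch of Bitcoin's actual rule:)

    def retarget(old_difficulty: float, actual_seconds: float) -> float:
        """Bitcoin's difficulty adjustment, simplified: every 2016
        blocks, rescale difficulty so blocks again average ~10 minutes,
        with the per-period change clamped to a factor of 4 either way."""
        expected_seconds = 2016 * 600  # two weeks at 10 min/block
        ratio = max(0.25, min(4.0, expected_seconds / actual_seconds))
        return old_difficulty * ratio

    # New hardware mined the last 2016 blocks 10x too fast:
    # difficulty rises by the clamped maximum of 4x this period.
    print(retarget(1.0, 2016 * 60))  # -> 4.0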

In scenario 2, yes, BTC would crash, but then again the actual compute power would be more impactful. BTC would crash, but so would encryption, and planes, and the world.

16 days ago

Solvitieg

> BTC would crash but so would encryption, and planes and the world.

Sad to see Bitcoin advocates use this dismissive argument.

Centralized systems will update their software as the threat increases. Meanwhile, there are no serious proposals for a quantum-resistant Bitcoin. Some estimate the update will require a hard fork and take a year to roll out.

16 days ago

TZubiri

Not sure I would say I'm a Bitcoin advocate, nor whether the argument is dismissive, or what it's supposed to be dismissive of.

15 days ago

mapmeld

Not really. The primary issue with Bitcoin and quantum computers is that older Bitcoin transactions contained a full public key (not a fingerprint / wallet address), so a quantum computer could use that to forge a signature on wallets, including Satoshi's (this would be obvious and detectable if someone started moving very dormant coins). For the larger network there are some other points where it becomes an issue, but before then you'd need to fix HTTPS web traffic and PGP, too.
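
The distinction doing the work here: pay-to-pubkey outputs (common early on) put the public key itself on chain, while pay-to-pubkey-hash outputs only commit to a hash of it until spend time. A simplified sketch (real Bitcoin hashes with RIPEMD160(SHA256(pubkey)); plain SHA-256 is used here to keep the sketch portable, and the key below is made up):

    import hashlib

    # Hypothetical 33-byte compressed public key, for illustration only.
    pubkey = bytes.fromhex("02" + "ab" * 32)

    # P2PK (early Bitcoin): the output script contains the pubkey itself,
    # so a future quantum attacker can read it straight off the chain.
    p2pk_reveals = pubkey

    # P2PKH: the output only commits to a hash; the pubkey stays hidden
    # until the coins are spent (the spend reveals it in the signature).
    p2pkh_reveals = hashlib.sha256(pubkey).digest()

    # Shor's algorithm needs the pubkey, not its hash, so P2PKH coins
    # are only exposed in the window between broadcast and confirmation.
    print("P2PK reveals: ", p2pk_reveals.hex())
    print("P2PKH reveals:", p2pkh_reveals.hex())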

https://www.deloitte.com/nl/en/services/risk-advisory/perspe...

16 days ago

astrange

IIRC Bitcoin is quantum-safe because SHA-256 is quantum-safe.

16 days ago

chickenbigmac

[dead]

16 days ago

mupuff1234

I just want to know if the stock movement is justified or not.

16 days ago

Vecr

Stock movements aren't generally justified.

16 days ago

praptak

I want the ability to answer such questions with an accuracy slightly over 50%.

16 days ago

amoss

return false

16 days ago

dboreham

You never know the reason for a stock move. It could be that people see Oracle's poor results today as a reason GCP will make more money.

16 days ago

crazygringo

If you're a journalist and simply phone a few institutional investors who make up the bulk of this kind of trading and with whom you have a trusting relationship, they'll tell you.

If they all mostly agree, that's your answer.

And it's almost always what you assumed anyway from the news, because this stuff isn't rocket science.

So for all practical purposes, yes actually you usually do know, whenever a stock movement is large enough that it's clearly outside the normal noise of day trading.

I mean, can you prove the reason with 100% mathematical certainty? No. But can you be 99% sure? Of course.

16 days ago

[deleted]
16 days ago

vasco

You don't need a quantum computer to know everything from now until the end of time has already been priced in.

16 days ago

crazygringo

Oh wow it was up 6.2% at one point this morning from yesterday, now 4.6%.

16 days ago

melvinmelih

Depends on which universe you're talking about.

16 days ago

[deleted]
16 days ago

Sergii001

It's getting more and more interesting.

16 days ago

partloyaldemon

"except now with the PR blitz from Sundar Pichai on down"

I definitely read this on first pass and thought "damn, the CEO is hitting his own quantum supercomputer with bug reports". That's cold. It just came out.

16 days ago

krick

Surprisingly sparse on actual information, considering all the humble-bragging (actually, not even so humble) about how he's been teaching peasants to catch fish for 20 years, only to be forced yet again to hand the fish out with his own bare hands! Well, he didn't hand out much.

The post reiterates some facts from the original statement, which are pretty vague for most, I believe. The only useful clarification is that simulation results are indeed unverifiable, lol (as some might have suspected, but still nice to have a definitive confirmation from somebody who is supposedly an expert on this).

Then it addresses the cringeworthy "Everettian multiverse" statement discussion. Granted, it indeed was one of the most discussed things on the previous thread, but I had honestly assumed it was so obviously moot that one could simply ignore it. Everyone knows that at least one of the top-3 threads in HN comments must be either a lame joke or some sort of bikeshedding argument.

And that's pretty much it. "This blogger said the usual generic words; that reporter asked for an interview, but I declined; also, kudos to the Google team for amazing work; it's unclear if it's any good, but it surely isn't bad!" Uh, ok, thanks for the clarification, man.

I get that this post was written in a hurry, but given all that "fish-handing" stuff, and all these commenters in this very thread complaining that they don't know the difference between "trapped-ion or neutral-atom" qubits (as if this distinction were the very essence of the post, one the author paid much more attention to than his responses to NYT journalists), it just doesn't deliver.

...So, what did I expect? Well, I didn't expect anything, but let's state the obvious. Google's benchmark was to produce some very specific (and unverifiable) random distribution (which, BTW, he kinda says, but waaay less clearly than it could have been said). Obviously, nobody cares about that. Everyone cares about when they will be able to run Shor's algorithm on Google's chip, factor products of large primes, and deprecate RSA into oblivion. Obviously. Some may wonder why it's not possible to do it on that Willow thing; others may suspect that it has something to do with the fact that you need a ton of "physical" qubits to emulate logical qubits because of error correction. Also, it is widely advertised that the very thing that is special about Willow is vastly better (and somehow "more promising to scale") error correction. So, what people really want from a generous and skillful fisherman is obviously some critical, appropriately-speculative, ELI5-style analysis of the updated state of the art: what Willow has, what it needs to become practical, and some realistic projections on when we can get there. Where is all of that? Where is the fucking fish?!

16 days ago

blast

Damn he's funny.

> For 20 years I’ve been trying to teach the world how to fish in Hilbert space, but (sigh) I suppose I’ll just hand out some more fish.

16 days ago

Ham121

Google Willow is an exciting step forward in technology! The potential for it to transform how we interact with and integrate AI into our daily lives is incredible. I am curious to learn more about its real-world applications and how it will improve accessibility and efficiency across various domains. Kudos to the team for pushing boundaries and innovating; looking forward to seeing what's next!

15 days ago

man4

[dead]

16 days ago

Cataleya

[flagged]

16 days ago

TZubiri

[flagged]

16 days ago

refulgentis

> Anyone can write about quantum computers as if they are remotely qualified

He's the quantum guy. [^2]

You're applying classical computing intuitions (where verification is indeed usually much faster than computation) to quantum computing, where this relationship doesn't necessarily hold. The fact that verification can be as hard as computation in quantum computing is actually a major challenge that Aaronson has written about extensively.
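
Concretely, the fidelity estimator these experiments report (linear cross-entropy benchmarking) needs the ideal output probabilities of the circuit, which only a classical simulation can supply. A numpy sketch, with a random toy distribution standing in for a real circuit's output:

    import numpy as np

    def linear_xeb(ideal_probs: np.ndarray, samples: np.ndarray) -> float:
        """Linear XEB fidelity estimate: F ~ 2^n * E[p_ideal(sample)] - 1.
        Near 1 for a faithful device, near 0 for uniform noise."""
        n = int(np.log2(len(ideal_probs)))
        return (2 ** n) * ideal_probs[samples].mean() - 1

    # Toy "circuit output": a random state's Porter-Thomas-like spectrum.
    n = 10
    amps = np.random.randn(2 ** n) + 1j * np.random.randn(2 ** n)
    ideal_probs = np.abs(amps) ** 2
    ideal_probs /= ideal_probs.sum()

    samples = np.random.choice(2 ** n, size=5000, p=ideal_probs)
    print(f"XEB estimate: {linear_xeb(ideal_probs, samples):.2f}")  # ~1.0

    # The catch: at supremacy sizes, computing `ideal_probs` classically
    # is the ~10^25-year simulation itself, so verification is as hard
    # as the computation being verified.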

n.b. I've been here 15 years but sometimes have to take a step back and adjust my approach, because I can use this as a quick break to let out frustration with something else. When I do, I find the HN guidelines helpful, almost a joy. [^1] They're written conversationally and are more meditations than rules.

[^1] https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky. [...] Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

"Other threads in these comments talk about some rick and morty multiverse type of thing, just stay on the sidelines guys"

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

"Anybody can write about quantum computers as if they are remotely qualified"

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize.

From Aaronson's actual post: "...for the exact same reason why this quantum computation would take ~10^25 years for a classical computer to simulate, it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results!!"

He goes on: "...this is why I've been obsessing for years about the need to design efficiently verifiable near-term quantum supremacy experiments."

[^2]

• Received the [2020 ACM Prize in Computing](https://awards.acm.org/about/2020-acm-prize) for groundbreaking contributions to quantum computing

• [ACM Fellow (2019)](https://www.acm.org/media-center/2019/december/fellows-2019) for contributions to quantum computing and computational complexity

• Named [Simons Investigator (2017)](https://www.simonsfoundation.org/mathematics-physical-scienc...)

• Won the [Alan T. Waterman Award (2012)](https://www.nsf.gov/news/news_summ.jsp?cntn_id=123406), NSF's most prestigious young researcher award

• Received [Presidential Early Career Award](https://www.nsf.gov/awards/PECASE/recip_details.jsp?pecase_i...) for Scientists and Engineers (2009)

• Awarded [Sloan Research Fellowship](https://news.mit.edu/2009/sloan-fellows-0217) (2009)

• Won multiple Best Student Paper Awards:

  - Danny Lewin Best Paper at [STOC](https://www.sigact.org/prizes/student.html) for quantum computing proofs in local search (2004)

  - Best Paper at [Complexity Conference](https://computationalcomplexity.org/conferences.php) for quantum advice limitations (2004)

  - Best Paper at Complexity Conference for quantum certificate complexity (2003)

16 days ago

TZubiri

Ah so the guy knows a thing or two.

I can see that there could be an argument that the edge cases where P=NP is effectively true could be precisely this new type of computing.

Re: MWI, the author seems to have dismissed the relevance of such discussion anyway; it just seems to be one of those pop topics that comes up whenever possible and is accessible to laymen (like discussions about gender popping up whenever a random sex-gene discovery appears).

16 days ago

megamix

This is how miracles work. They are just physical laws fast-forwarded; I like how this explains the ancient miracles. So maybe previous generations had access to these things and it was lost along the way.

16 days ago

chikere232

You need more science

16 days ago

megamix

Oh there's enough science here. But to understand if someone needs more science, we should reach the conclusion scientifically. Etc etc

15 days ago

rudi_mk

> I like how this explains the ancient miracles

...which ancient miracles are you possibly alluding to here?

15 days ago