  • I am factually correct, I am not here to “debate,” I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the subsystems that took part in the interaction individually; you have to assign it to the system as a whole. When you do a partial trace on each subsystem individually to get its reduced density matrix, if the two systems are perfectly entangled, then you end up with a density matrix without coherence terms and thus without interference effects.

    This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is, if you think what I have described here is not entanglement but a superposition of states then you don’t know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.

    Let’s say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to an amplitude of R for being 0 and an amplitude of -R for being 1 (each squaring to an equal probability of 1/2). In this state, if Bob were to apply a second Hadamard gate, it would undo the first one, and the qubit would again have a probability of 1 of being the value 1, due to interference effects.
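
    To make this concrete, here is a minimal numpy sketch of the double-Hadamard example (the variable names and the [amplitude of 0, amplitude of 1] vector convention are mine, purely for illustration):

    ```python
    import numpy as np

    R = 1 / np.sqrt(2)
    H = np.array([[R, R],
                  [R, -R]])        # Hadamard gate

    qubit = np.array([0, 1])       # amplitude 1 for the value 1

    after_one = H @ qubit          # amplitudes [R, -R]
    after_two = H @ after_one      # interference restores [0, 1]

    print(after_one)               # [ 0.707..., -0.707...]
    print(after_two)               # [0., 1.] -> value 1 with probability 1
    ```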

    However, if an eavesdropper, let’s call them Eve, measures the qubit in transit, then because the amplitudes R and -R are equal distances from the origin, it has an equal chance of being 0 or 1. Let’s say it’s 1. From their point of view, they would then update their probability distribution to a probability of 1 of being the value 1 and send the qubit off to Bob. When Bob applies the second Hadamard gate, the qubit then has an amplitude of R for being 0 and an amplitude of -R for being 1, and thus what should’ve been deterministic is now random noise for Bob.
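
    Continuing the same toy sketch, Eve’s measurement can be modeled as sampling an outcome with probability |amplitude|² and replacing the state with a definite basis state; this is an idealized model of the story above, not any real protocol implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng()
    R = 1 / np.sqrt(2)
    H = np.array([[R, R], [R, -R]])

    state = H @ np.array([0, 1])        # qubit in transit: amplitudes [R, -R]

    # Eve measures: an outcome is sampled with probability |amplitude|^2,
    # and her updated description is a definite basis state.
    outcome = rng.choice(2, p=np.abs(state) ** 2)
    state = np.eye(2)[outcome]

    # Bob's second Hadamard no longer undoes the first:
    final = H @ state
    print(np.abs(final) ** 2)           # [0.5 0.5] -> random noise for Bob
    ```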

    Yet, this description only works from Eve’s point of view. From Alice and Bob’s point of view, neither of them measured the particle in transit, so when Bob receives it, it is still probabilistic, with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

    Because when Eve interacts with the qubit, from Alice and Bob’s perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle’s state, there has to be some correlation between something in Eve’s brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

    Eve and the qubit taken together would have an amplitude of R for the qubit being 0 and Eve knowing the qubit is 0, and an amplitude of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet, Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this probability distribution is no longer applicable to the qubit alone.

    He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
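
    Here is a minimal numpy sketch of that partial trace, writing the joint state of (qubit, Eve) as amplitude R for “qubit 0, Eve saw 0” and -R for “qubit 1, Eve saw 1” (the index conventions are mine, for illustration):

    ```python
    import numpy as np

    R = 1 / np.sqrt(2)

    # Joint state of (qubit, Eve): R|0, saw 0> - R|1, saw 1>
    psi = np.zeros(4)
    psi[0b00] = R       # qubit 0, Eve saw 0
    psi[0b11] = -R      # qubit 1, Eve saw 1

    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices: q, e, q', e'
    rho_qubit = np.einsum('aebe->ab', rho)               # partial trace over Eve

    print(rho_qubit)
    # [[0.5 0. ]
    #  [0.  0.5]] -> zero coherences: a classical 50/50 distribution
    ```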

    Bob simply cannot explain why his qubit loses its interference effects when Eve measures it without taking entanglement into account, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve’s perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously across two different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

    Eve explains the loss of coherence by her reducing the state vector upon seeing a definite outcome for the qubit; Bob explains the loss of coherence by Eve becoming entangled with the qubit, which leads to decoherence, as doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whose coherence terms are zero.



  • Personally, I think there is a much bigger issue with the quantum internet that is often not discussed and it’s not just noise.

    Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a superadvanced quantum computer to break the encryption, and it has almost no overhead. The other is truly unbreakable even in an infinite amount of time, but it has a huge amount of overhead, to the point that it will cut your bandwidth in half.

    Which would you pick?

    In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn’t worth it.

    All quantum “internet” algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we made the channel perfectly noise-free and entirely solved the noise problem, there would still be no practical reason at all to adopt the quantum internet.


  • The problem with one-time pads is that they’re also the most inefficient cipher. If we switched to them for internet communication (ceteris paribus), it would basically cut internet bandwidth in half overnight. What’s more, the one-time pad is a symmetric cipher, and symmetric ciphers cannot be practically broken by quantum computers; ciphers like AES-256 are still considered quantum-computer-proof. This means that you would be cutting internet bandwidth in half for purely theoretical benefits that people wouldn’t notice in practice. The only people I could imagine finding this interesting are overly paranoid governments, as there are no practical benefits.
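
    The overhead is easy to see in code: a one-time pad needs a key exactly as long as the message, so every byte sent requires another byte of key material distributed beforehand. A minimal sketch (the function names are mine):

    ```python
    import secrets

    def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
        # The key must be as long as the message itself; distributing
        # it (e.g. over a QKD link) is the bandwidth overhead in question.
        key = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return ciphertext, key

    def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    ct, key = otp_encrypt(b"hello")
    assert otp_decrypt(ct, key) == b"hello"
    ```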

    It also really isn’t a selling point for quantum key distribution that it can reliably detect an eavesdropper. Modern cryptography does not care about detecting eavesdroppers. When two people exchange keys with a Diffie-Hellman key exchange, eavesdroppers are allowed to eavesdrop all they wish, but they cannot make sense of the data in transit. The problem with quantum key distribution is that it is worse than this: it cannot prevent an eavesdropper from seeing the transmitted key, it just discards that key if they do. This seems to me like it would make it a bit harder to scale, although not impossible, because anyone can deny service just by observing the packets of data in transit.

    Although, the bigger issue that nobody seems to talk about is that quantum key distribution, just like the Diffie-Hellman algorithm, is susceptible to a man-in-the-middle attack. Yes, it prevents an eavesdropper between two nodes, but if the eavesdropper sets themselves up as a third node, pretending to be the other party when queried from either end, they could trivially defeat quantum key distribution. Then again, Diffie-Hellman is also susceptible to this, so that is not surprising.

    What is surprising is that with Diffie-Hellman (or more commonly its elliptic curve brethren), we solve this using digital signatures, which are part of public key infrastructure. With quantum mechanics, however, the only equivalent to digital signatures relies on the No-cloning Theorem. The No-cloning Theorem says that if I give you a qubit and you don’t know how it was prepared, nothing you can do to it can tell you its full quantum state, since identifying (or copying) the state requires knowledge of how it was prepared. You can use the fact that only a single person can know its quantum state as a form of digital signature.

    The thing is, however, the No-cloning Theorem only protects a single copy of a state. If I prepared a million qubits all the same way and handed them to you, you could derive their quantum state by doing different measurements on each qubit. Even though you could use this for digital signatures, those digital signatures would have to be disposable. If you made too many copies of them, they could be reverse-engineered. This presents a problem for using them as part of public key infrastructure, as public key infrastructure requires those keys to be, well, public, meaning anyone can take a copy, and so infinite copyability is a requirement.
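
    A rough illustration of why many identical copies leak the state: measuring some copies in the computational basis and others in the Hadamard-rotated basis estimates the expectation values that pin the state down. This is a toy numpy model of state tomography, not a real protocol:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    R = 1 / np.sqrt(2)
    psi = np.array([R, -R])              # the "unknown" prepared state

    def mean_outcome(state, rotation, n):
        # Measure n identical copies after the given basis rotation,
        # mapping outcome 0 -> +1 and outcome 1 -> -1.
        probs = np.abs(rotation @ state) ** 2
        return rng.choice([1, -1], size=n, p=probs).mean()

    Z = np.eye(2)                        # computational basis
    X = np.array([[R, R], [R, -R]])      # Hadamard rotates the X basis to Z

    # With a million copies, the expectation values reveal the state:
    print(mean_outcome(psi, Z, 1_000_000))   # ~0  (<Z> of this state)
    print(mean_outcome(psi, X, 1_000_000))   # ~-1 (<X> of this state)
    ```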

    This makes quantum key distribution only reliable if you combine it with quantum digital signatures, but when you do that, it is no longer possible to scale it to some sort of “quantum internet.” It, again, might be something an overly paranoid government could use internally as part of their own small-scale intranet, but it would just be too impractical, without any noticeable benefits, for anyone outside of that. As, again, all this is for purely theoretical benefits, not anything you’d notice in the real world, as things like AES-256 are already considered uncrackable in practice.


  • Entanglement plays a key role.

    Any time you talk about “measurement,” this is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers gives rise to interference effects. When the eavesdropper observes a definite outcome, you no longer need to treat the qubit as probabilistic; you can reduce the state vector by updating your probabilities to simply 100% for the outcome you saw. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.

    It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can exhibit interference with later interactions. For example, if you applied a second Hadamard gate, the qubit would return to its original state due to interference. If you instead had a qubit that was prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), then applying a second Hadamard gate would not return it to its original state but would just give you a random output.

    Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. The obvious example is the double-slit experiment: you get real, distinct outcomes, a change in the pattern on the screen, depending on whether the photons can interfere with themselves or not. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not to the other half, and which ones it was applied to is not revealed until after the communication is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don’t know which ones to apply it to until all the qubits are transmitted and the choices are revealed.

    That means, at random, half of the qubits they receive they just read as-is, and for the other half they need to rely on interference effects to move the qubits back into their original state. Anyone who intercepts this by measuring it causes the qubits to decohere, and thus when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than the gates actually cancelling out. The recipient receiving random noise where they should be getting definite values is how you detect whether there is an eavesdropper.
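
    Here is a toy simulation of the scheme just described, under the simplifying assumption that the eavesdropper measures every qubit in the computational basis (real protocols such as BB84 differ in detail, and a real eavesdropper has more options):

    ```python
    import numpy as np

    rng = np.random.default_rng()
    R = 1 / np.sqrt(2)
    H = np.array([[R, R], [R, -R]])

    def send(bit, hadamard):
        # Prepare the basis state for the bit, optionally applying a Hadamard.
        state = np.eye(2)[bit]
        return H @ state if hadamard else state

    def measure(state):
        # Sample an outcome and collapse to the corresponding basis state.
        outcome = rng.choice(2, p=np.abs(state) ** 2)
        return outcome, np.eye(2)[outcome]

    n = 1000
    bits = rng.integers(0, 2, n)
    use_h = rng.integers(0, 2, n).astype(bool)

    errors = 0
    for bit, h in zip(bits, use_h):
        state = send(bit, h)
        _, state = measure(state)   # the eavesdropper measures in transit
        if h:
            state = H @ state       # recipient undoes the Hadamard once revealed
        received, _ = measure(state)
        errors += int(received != bit)

    print(errors / n)  # ~0.25 with the eavesdropper; ~0.0 without
    ```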

    What does this have to do with entanglement? If we just talk about “measuring a state” then quantum mechanics would be a rather paradoxical and inconsistent theory. If the eavesdropper measured the state and updated the probability distribution to 100% and thus destroyed its interference effects, the non-eavesdroppers did not measure the state, so it should still be probabilistic, and at face value, this seems to imply it should still exhibit interference effects from the non-eavesdroppers’ perspective.

    A popular way to get around this is to claim that the act of measurement is something “special” which always destroys the quantum probabilities and forces it into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and from the non-eavesdroppers’ perspective, they only describe it still as probabilistic due to their ignorance of the outcome. At that point, it would have a definite value, but they just don’t know what it is.

    However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no “special” interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a “special” interaction, you get different statistical predictions than quantum mechanics actually makes.

    This is because in quantum mechanics, every interaction grows the scale of entanglement, and so the interference effects never go away, they just spread out. If you introduce a “special” interaction such as a measurement whereby things are forced into a definite value for all observers, then you are inherently suggesting there is a limitation to this scale of entanglement. There is some cut-off point whereby interference effects can no longer be scaled past it, and because we can detect whether a system exhibits interference effects or not (that’s what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily differ from quantum mechanics in its numerical predictions.

    The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, from the perspective of the non-eavesdroppers, the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign the state vector to the eavesdropper and the qubit separately, only to the two of them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper; they only receive the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit’s state is still probabilistic, but it is a probability distribution with terms only between 0% and 100%, that is to say, with no negatives or imaginary components, and thus it cannot exhibit interference effects.

    Quantum key distribution does indeed rely on entanglement as you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics and not implicitly abandoning quantum mechanics for an objective collapse theory) without taking into account entanglement. As I started with, the reduction of the wave function, which is a first-person description of an interaction (when there are 2 systems interacting and one is an observer describing the second), leads to decoherence. The third-person description of an interaction (when there are 3 systems and one is on the “outside” describing the other two systems interacting) is entanglement, and this also leads to decoherence.

    You even say that “measurement changes the state”, but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.


  • This is accurate, yes. The cat in the box is conscious presumably, in my opinion of cats at least, but still can be “not an observer” from the POV of the scientist observing the experiment from outside the box.

    “Consciousness” is not relevant here at all. You can write down the wave function of a system relative to a rock if you wanted, in a way comparable to writing down the velocity of a train from the “point of view” of a rock. It is coordinate. It has nothing to do with “consciousness.” The cat would perceive a definite state of the system from its reference frame, but the person outside the box would not until they interact with it.

    QM is about quite a lot more than coordinate systems

    Obviously QM is not just coordinate systems. The coordinate nature of quantum mechanics, the relative nature of it, is merely a property of the theory and not the whole theory. But the rest of the theory does not have any relevance to “consciousness.”

    and in my opinion will make it look weird in retrospect once physics expands to a more coherent whole

    The theory is fully coherent and internally consistent. It amazes me how many people choose to deny QM and always want to rush to change it. Your philosophy should be guided by the physical sciences, not the other way around. People see QM going against their basic intuitions and their first thought is it must be incomplete and needs to have additional complexity added to it to make it fit their intuitions, rather than just questioning that maybe their basic intuitions are wrong.

    Your other comment linked to a Wikipedia page which, if you had clicked through your own source, would’ve told you that the scientific consensus on that topic is that what you’re presenting is a misinterpretation.

    A simple search on YouTube could’ve also brought up several videos explaining this to you.

    Edit: Placing my response here as an edit since I don’t care to continue this conversation so I don’t want to notify.

    Yes, that was what I said. Er, well… QM, as I understand it, doesn’t have to do anything with shifting coordinate systems per se (and in fact is still incompatible with relativity). They’re just sort of similar in that they both have to define some point of view and make everything else in the model relative to it. I’m still not sure why you brought coordinate systems into it.

    A point of view is just a colloquial term to refer to a coordinate system. They are not coordinate in the exact same way but they are both coordinate.

    My point was that communication of state to the observer in the system, or not, causes a difference in the outcome. And that from the general intuitions that drive almost all of the rest of physics, that’s weird and sort of should be impossible.

    No, it does not, and you’ve never demonstrated that.

    Sure. How is it when combined with macro-scale intuition about the way natural laws work, or with general relativity?

    We have never observed quantum effects on the scale where gravitational effects would also be observable, so such a theory, if we proposed one, would not be based on empirical evidence.

    This is very, very very much not what I am doing. What did I say that gave you the impression I was adding anything to it?

    You literally said in your own words we need to take additional things into account we currently are not. You’re now just doing a 180 and pretending you did not say what literally anyone can scroll up and see that you said.

    I am not talking about anything about retrocausality here, except maybe accidentally.

    Then you don’t understand the experiment since the only reason it is considered interesting is because if you interpret it in certain ways it seems to imply retrocausality. Literally no one has ever treated it as anything more than that. You are just making up your own wild implications from the experiment.

    I was emphasizing the second paragraph; “wave behavior can be restored by erasing or otherwise making permanently unavailable the ‘which path’ information.”

    The behavior of the system physically changes when it undergoes a physical interaction. How surprising!



  • Kastrup is entirely unconvincing because he pretends the only two schools of philosophy in the whole universe are his specific idealism and metaphysical realism, falsely calling the latter “materialism.” He thus never feels the need to address anything beyond a critique of a single layman’s understanding of materialism, one more popular in western countries than eastern countries, ignoring the actual wealth of philosophical literature.

    Anyone who actually reads books on philosophy would inevitably find Kastrup incredibly unconvincing, as he, by focusing primarily on a single school, never justifies many of his premises. He begins from the very beginning talking about “conscious experience” and whatnot when, if you’re not a metaphysical realist, that is what you are supposed to be arguing for in the first place. Unless you’re already a dualist or metaphysical realist, if you belong to pretty much any other philosophical school, like contextual realism, dialectical materialism, empiriomonism, etc., you probably already view reality as inherently observable, and thus perception is just reality from a particular point of view. It then becomes invalid to add qualifiers to it like “conscious experience” or “subjective experience,” as reality itself cannot have qualifiers.

    I mean, the whole notion of “subjective experience” goes back to Nagel, who was a metaphysical realist through and through and wrote a whole paper defending that notion, “What Is It Like to Be a Bat?”, and this is what Kastrup assumes his audience already agrees with from the get-go. He never addresses any of the criticisms of metaphysical realism but pretends they don’t exist and that he is its unique, sole critic, and he constantly calls metaphysical realism “materialism” as if they’re the same philosophy at all. He then builds all of his arguments off of this premise.


  • You should look into contextual realism. You might find it interesting. It is a philosophical school from the philosopher Jocelyn Benoist that basically argues that the best way to solve most of the major philosophical problems and paradoxes (i.e. mind-body problem) is to presume the natural world is context variant all the way down, i.e. there simply is no reality independent of specifying some sort of context under which it is described (kind of like a reference frame).

    The physicist Francois-Igor Pris points out that if you apply this thinking to quantum mechanics, then the confusion around interpreting it entirely disappears, because the wave function clearly just becomes a way of accounting for the context under which an observer is observing a system, and that value definiteness is just a context variant property, i.e. two people occupying two different contexts will not always describe the system as having the same definite values, but may describe some as indefinite which the other person describes as definite.

    “Observation” is just an interaction, and by interacting with a system you are by definition changing your context, and thus you have to change your accounting for your context (i.e. the wave function) in order to make future predictions. Updating the wave function then just becomes like taring a scale, that is to say, it is like re-centering or “zeroing” your coordinate system, and isn’t “collapsing” anything physical. There is no observer-dependence in the sense that observers are somehow fundamental to nature, only that systems depend upon context and so naturally as an observer describing a system you have to take this into account.


  • Quantum mechanics is incompatible with general relativity; it is, however, perfectly compatible with special relativity. I mean, that is literally what quantum field theory is: the unification of special relativity and quantum mechanics into a single framework. You can indeed integrate all aspects of relativity into quantum mechanics just fine except for gravity. It’s more that quantum mechanics is incompatible with gravity and less that it is incompatible with relativity, as all the other aspects we associate with relativity are still part of quantum field theory, like the passage of time being relative, relativity of simultaneity, length contraction, etc.


  • There shouldn’t be a distinction between quantum and non-quantum objects. That’s the mystery. Why can’t large objects exhibit quantum properties?

    What makes quantum mechanics distinct from classical mechanics is the fact that not only are there interference effects, but statistically correlated systems (i.e. “entangled”) can seem to interfere with one another in a way that cannot be explained classically, at least not without superluminal communication, or introducing something else strange like the existence of negative probabilities.

    If it wasn’t for these kinds of interference effects, then we could just chalk up quantum randomness to classical randomness, i.e. it would just be the same as any old form of statistical mechanics. The randomness itself isn’t really that much of a defining feature of quantum mechanics.

    The reason I say all this is because we actually do know why there is a distinction between quantum and non-quantum objects and why large objects do not exhibit quantum properties. It is a mixture of two factors. First, larger systems like big molecules have smaller wavelengths, so interference with other molecules becomes harder and harder to detect. Second, there is decoherence. Even small particles, if they interact with a ton of other particles and you average over these interactions, you will find that the interference terms (the “coherences” in the density matrix) converge to zero, i.e. when you inject noise into a system its average behavior converges to a classical probability distribution.
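
    The second factor is easy to demonstrate numerically: average a qubit’s density matrix over random phase kicks from the environment and the off-diagonal coherences wash out. A minimal numpy sketch (the uniform phase-noise model is an illustrative assumption):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    R = 1 / np.sqrt(2)

    # Average density matrices of a superposition hit by random phase noise.
    rhos = []
    for _ in range(100_000):
        phi = rng.uniform(0, 2 * np.pi)
        noisy = np.array([R, R * np.exp(1j * phi)])
        rhos.append(np.outer(noisy, noisy.conj()))
    avg_rho = np.mean(rhos, axis=0)

    print(np.round(avg_rho, 2))
    # [[0.5+0.j 0. +0.j]
    #  [0. +0.j 0.5+0.j]] -> coherences average to ~0: classical statistics
    ```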

    Hence, we already know why there is a seeming “transition” from quantum to classical. This doesn’t get rid of the fact that it is still statistical in nature: it doesn’t give you a reason why, for a particle that has a 50% chance of being over there and a 50% chance of being over here, you measure it and find it here rather than there. Decoherence doesn’t tell you why you actually get the results you do from a measurement; it’s still fundamentally random (which bothers people for some reason?).

    But it is well-understood how quantum probabilities converge to classical probabilities. There have even been studies that have reversed the process of decoherence.



  • The traditional notion of cause and effect is not something all philosophers even agree upon. Many materialist philosophers have largely rejected the notion of simple cause-and-effect chains going back to a “first cause” since the 1800s, and that idea is still pretty popular in some eastern countries.

    For example, in China they teach “dialectical materialist” philosophy as part of the required “common core” in universities for any degree, and that philosophical school sees cause and effect as in a sense dependent upon point of view: describing an effect as having a particular cause is just a way of looking at things, and the same relationship under a different point of view may in fact reverse what is considered the cause and the effect, viewing the effect as the cause and vice-versa. Other points of view may even ascribe entirely different things as the cause.

    It has a very holistic view of the material world so there really is no single cause to any effect, so what you choose to identify as the cause is more of a label placed by an individual based on causes that are relevant to them and not necessarily because those are truly the only causes. In a more holistic view of nature, Laplacian-style determinism doesn’t even make sense because it implies nature is reducible down to separable causes which can all be isolated from the rest and their properties can then be fully accounted for, allowing one to predict the future with certainty.

    However, in a more holistic view of nature, it makes no sense to speak of the universe being reducible to separable causes as, again, what we label as causes are human constructs and the universe is not actually separable. In fact, the physicist Dmitry Blokhintsev wrote a paper in response to one by Albert Einstein, criticizing Einstein’s distaste for quantum mechanics as rooted in his adherence to the notion of separability, a notion which stems from Newtonian and Kantian philosophy and which dialectical materialists, as Blokhintsev self-identified, had rejected on philosophical grounds.

    He wrote this paper many years prior to the publication of Bell’s theorem, which showed that giving up on separability (and by extension absolute determinism) really is a necessity in quantum mechanics. Blokhintsev would later go on to write a whole book, The Philosophy of Quantum Mechanics, in which he argues that separability in nature is an illusion and that, under a more holistic picture, absolute determinism makes no sense, again purely on materialist grounds.

    The point I’m making is ultimately just that a lot of the properties people ascribe to “materialists” or “naturalists,” and then try to show quantum mechanics contradicts, belong to large umbrella philosophies with many different sects, and there have been materialist philosophers criticizing absolute determinism as even being a meaningful concept since at least the 1800s.


  • There 100% are…

    If you choose to believe so, like I said I don’t really care. Is a quantum computer conscious? I think it’s a bit irrelevant whether or not they exist. I will concede they do for the sake of discussion.

    Penrose thinks they’re responsible for consciousness.

    Yeah, and as I said, Penrose was wrong, not because the measurement problem isn’t the cause of consciousness, but because there is no measurement problem nor a “hard problem.” Penrose falls for the same logical fallacies I pointed out and comes to believe there are two problems where none actually exist. Then, because both problems originate from the same logical fallacies, he notices they are similar and thinks “solving” one is necessary for “solving” the other, when neither problem actually existed in the first place.

    Because we also don’t know what makes anesthesia stop consciousness. And anesthesia stops consciousness and stops the quantum process.

    You’d need to define what you mean more specifically about “consciousness” and “quantum process.” We don’t remember things that occur when we’re under anesthesia, so are we saying memory is consciousness?

    Now, the math isn’t clean. I forget which way it leans, but I think it’s that consciousness kicks out a little before the quantum action is fully inhibited? It’s been a minute, and this shit isn’t simple.

    Sure, it’s not simple, because the notion of “consciousness” as used in philosophy is a very vague and slippery word with hundreds of different meanings depending on the context, and this makes it seem “mysterious” as its meaning is slippery and can change from context to context, making it difficult to pin down what is even being talked about.

    Yet, if you pin it down, if you are actually specific about what you mean, then you don’t run into any confusion. The “hard problem of consciousness” is not even a “problem” as a “problem” implies you want to solve it, and most philosophers who advocate for it like David Chalmers, well, advocate for it. They spend their whole career arguing in favor of its existence and then using it as a basis for their own dualistic philosophy. It is thus a hard axiom of consciousness and not a hard problem. I simply disagree with the axioms.

    Penrose is an odd case because he accepts the axioms and then carries that same thinking into QM where the same contradiction re-emerges but actually thinks it is somehow solvable. What is a “measurement” if not an “observation,” and what is an “observation” if not an “experience”? The same “measurement problem” is just a reflection of the very same “hard problem” about the supposed “phenomenality” of experience and the explanatory gap between what we actually experience and what supposedly exists beyond it.

    It’s the quantum wave function collapse that’s important.

    Why should I believe there is a physical collapse? This requires you to, again, posit that there physically exists something that lies beyond all possibilities of us ever observing it (paralleling Kant’s “noumenon”) which suddenly transforms itself into something we can actually observe the moment we try to look at it (paralleling Kant’s “phenomenon”). This clearly introduces an explanatory gap as to how this process occurs, which is the basis of the measurement problem in the first place.

    There is no reason to posit a physical “collapse” or even that there exists at all a realm of waves floating about in Hilbert space. These are unnecessary metaphysical assumptions that are purely philosophical and contribute nothing but confusion to an understanding of the mathematics of the theory. Again, just like Chalmers’ so-called “hard problem,” Penrose is inventing a problem to solve which we have no reason to believe is even a problem in the first place: nothing about quantum theory demands that you believe particles really turn into invisible waves in Hilbert space when you aren’t looking at them and suddenly turn back into visible particles in spacetime when you do look at them.

    That’s entirely metaphysical and arbitrary to believe in.

    There’s no spinning out where multiple things happen, there is only one thing. After wave collapse is when you look in the box and see if the cat’s dead. In a sense it’s the literal “observer effect” happening in our head. And that is probably what consciousness is.

    There is only an “observer effect” if you believe the cat literally did turn into a wave and you perturbed that wave by looking at it and caused it to “collapse” like a house of cards. What did the cat see in its perspective? How did it feel for the cat to turn into a wave? The whole point of Schrodinger’s cat thought experiment was that Schrodinger was trying to argue against believing particles really turn into waves because then you’d have to believe unreasonable things like cats turning into waves.

    All of this is entirely metaphysical; there are no observations that can confirm this interpretation. You can only justify the claim that cats literally turn into waves when you don’t look at them, and that there is a physical collapse of that wave when you do look at them, on purely philosophical grounds. It is not demanded by the theory at all. You choose to believe it purely on philosophical grounds, which then leads you to think there is some “problem” with the theory that needs to be “solved,” but it is purely metaphysical.

    There is no actual contradiction between theory and evidence/observation, only contradiction between people’s metaphysical assumptions that they refuse to question for some reason and what they a priori think the theory should be, rather than just rethinking their assumptions.

    That’s how science works. Most won’t know who Penrose is till he’s dead.

    I’d hardly consider what Penrose is doing to be “science” at all. All these physical “theories of consciousness” that purport not to just be explaining intelligence or self-awareness or things like that, but more specifically claim to be solving Chalmers’ hard axiom of consciousness (that humans possess some immaterial invisible substance that is somehow attached to the brain but is not the brain itself), are all pseudoscience, because they are beginning with an unreasonable axiom which we have no scientific reason at all to take seriously and then trying to use science to “solve” it.

    It is no different than claiming to use science to try to answer the question of why humans have souls. Any “scientific” approach you use to try to answer that question is inherently pseudoscience because the axiomatic premise itself is flawed: it would be trying to solve a problem it never established is even a problem to be solved in the first place.


  • Roger Penrose is pretty much the only dude looking into consciousness from the perspective of a physicist

    I would recommend reading the philosophers Jocelyn Benoist and Francois-Igor Pris, who argue very convincingly that both the “hard problem of consciousness” and the “measurement problem” stem from the same logical fallacy of conflating subjectivity (sometimes called phenomenality) with contextuality, and that both disappear when you make this distinction. Neither is actually a problem for physics to solve; both are caused by fallacious reasoning in some of our a priori assumptions about the properties of reality.

    Benoist’s book Toward a Contextual Realism and Pris’ book Contextual Realism and Quantum Mechanics both cover this really well. They are based in late Wittgensteinian philosophy, so maybe reading Saul Kripke’s Wittgenstein on Rules and Private Language is a good primer.

    That’s the only way free will could exist…What would give humans free will would be the inherent randomness if the whole “quantum bubble collapse” was a fundamental part of consciousness.

    Even if they discover quantum phenomena in the brain, all that would show is that our brain is like a quantum computer. But nobody would argue quantum computers have free will, would they? People often like to conflate the determinism/free will debate with the debate over Laplacian determinism specifically, which should not be conflated, as randomness clearly has nothing to do with the question of free will.

    If the state forced everyone into a job for life the moment they turned 18, but they chose that job using a quantum random number generator, would it be “free”? Obviously not. But we can also look at it in the reverse sense. If there was a God that knew every decision you were going to make, would that negate free will? Not necessarily. Just because something knows your decision ahead of time doesn’t necessarily mean you did not make that decision yourself.

    The determinism/free will debate is ultimately about whether or not human decisions are reducible to the laws of physics or not. Even if there is quantum phenomena in the brain that plays a real role in decision making, our decisions would still be reducible to the laws of physics and thus determined by them. Quantum mechanics is still deterministic in the nomological sense of the word, meaning, determinism according to the laws of physics. It is just not deterministic in the absolute Laplacian sense of the word that says you can predict the future with certainty if you knew all properties of all systems in the present.

    If the conditions are exactly the same down to an atomic level… You’ll get the same results every time

    I think a distinction should be made between Laplacian determinism and fatalism (not sure if there’s a better word for the latter category). The difference here is that both claim there is only one future, but only the former claims the future is perfectly predictable from the states of things at present. So fatalism is less strict: even in quantum mechanics that is random, there is a single outcome that is “fated to be,” but you could never predict it ahead of time.

    Unless you subscribe to the Many Worlds Interpretation, I think you kind of have to accept a fatalistic position in regards to quantum mechanics, due mainly not to quantum mechanics itself but to special relativity. In special relativity, different observers see time passing at different rates. You can thus build a time machine that takes you into the future just by traveling really fast, near the speed of light, then turning around and coming back home.

    The only way it is even possible for there to be different reference frames that see time pass differently is if the future already, in some sense, pre-exists. This is sometimes known as the “block universe,” which suggests that the future, present, and past are all equally “real” in some sense. For the future to be real, there has to be an outcome of each of the quantum random events already “decided,” so to speak. Quantum mechanics is nomologically deterministic in the sense that it does describe nature as reducible to the laws of physics, but not deterministic in the Laplacian sense that you could predict the future with certainty, even in principle, by knowing all properties of all systems in the present. It is more comparable to fatalism: there is a single outcome fated to be (again, unless you subscribe to MWI), but it’s impossible to know ahead of time.