minkwe wrote: Jochen wrote: The reasoning is that, if you are committed to the existence of predetermined values for all observables (i.e. realism), then, if they are not disturbed by the act of measurement, you can find a joint probability distribution P(A,B,C,D); and as shown above, such a joint distribution, no matter how the outcomes are generated, is a sufficient condition for the Bell inequalities to hold.
Hello Jochen and welcome to SPF. I have a few questions about what you posted above.
Hello minkwe, and thanks for the welcome!
1) I agree that the existence of the joint probability distribution P(A,B,C,D) is the only assumption required to obtain the inequalities. Do you agree that neither locality nor hidden variables are required?
One must be a bit careful here. Yes, Bell inequalities are a simple mathematical consequence of the existence of a joint probability distribution. However, it is not always reasonable to assume such a joint distribution exists, even in classical physics. The crux of Bell's theorem is really to identify sufficient conditions under which such a joint PD exists, and then to show that nevertheless, in QM you can't propose such a joint distribution.
One necessary condition is the existence of predefined values, such that a joint PD is simply a convex combination of possible value ascriptions. The other condition is non-disturbance---here I'll leap forward to your fourth point.
Consider a simple finite state machine, whose states are labeled by measurement outcomes---that is, if the machine is in state (ABCD)=(+++-), and you press the button labeled 'A', it will output +1; you press button D, it will output -1, and so on. You could be (classically) uncertain about the precise state of the machine, and hence associate it with a probability distribution P = sum_i p_i*s_i, where the index i counts all possible states from (----) to (++++), s_i is the machine state, and sum_i p_i = 1. This means that the machine is in state s_i with probability p_i. So, for instance, such a distribution could be 0.5*(++++) + 0.5*(----), which means that for every observable, you'd have a fifty/fifty chance of getting +1 on measurement.
As such, this automaton could not violate a Bell inequality, as the derivation I have given above shows. However, we can introduce some state transition rules. Such rules take the form: 'If the state is (xxxx), and button Y is pressed, output v(Y) and transition to state (zzzz)'. This models a disturbance of the state of the system by measurement, but such a thing is perfectly ordinary: measuring the voltage in a circuit alters the circuit and hence, what's being measured. But it's clear now that the above argument doesn't apply any longer: if I first measure quantity A and then B, then B will in general be drawn from a different probability distribution than it would have been had I measured it alone. Thus, the system can produce arbitrary violations of Bell inequalities.
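If it helps to make this concrete, here's a small Python sketch of such a machine (the state encoding and the particular transition rule are just illustrative, nothing more):

Code:
import random

class Automaton:
    """Four-button machine whose internal state fixes the answer to each button."""
    def __init__(self, state, transitions=None):
        self.state = state                    # e.g. (+1, +1, +1, -1) for (ABCD) = (+++-)
        self.transitions = transitions or {}  # optional disturbance rules

    def press(self, button):                  # button is 0..3 for A..D
        outcome = self.state[button]
        key = (self.state, button)
        if key in self.transitions:           # a transition rule: measurement disturbs the state
            self.state = self.transitions[key]
        return outcome

# Undisturbed machine drawn from 0.5*(++++) + 0.5*(----):
m = Automaton(random.choice([(+1,)*4, (-1,)*4]))
print([m.press(b) for b in range(4)])    # all four answers come from one fixed state

# Disturbed machine: pressing A kicks (++++) into (+,-,+,-), so a later reading
# of B is no longer drawn from the distribution the machine started out with.
rules = {((+1, +1, +1, +1), 0): (+1, -1, +1, -1)}
m2 = Automaton((+1, +1, +1, +1), rules)
print(m2.press(0), m2.press(1))          # +1, then -1

In the first case all answers come from a single pre-existing state, so the joint distribution exists by construction; in the second, the answer to B depends on whether A was pressed first, and no single P(A,B,C,D) need describe the sequential statistics.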
That's why one needs some form of nondisturbance assumption in order to make such tests meaningful. This leads to the three possibilities I gave above, with the locality assumption due to Bell perhaps being the most 'reasonable' one, as again we don't believe that there exist faster than light influences.
So, in order to talk about systems constrained by Bell inequalities, one must make the assumptions of definite values and nondisturbance (in some form); otherwise, violation of a BI is no great surprise.
2) In an EPRB experiment, a series of pairs of spin-half particles is measured at angles (a,b) to produce the joint probability distribution P(A,B) for these particles. Since the act of measurement destroys those particles, a different set of particle pairs is measured at angles (c,d) to produce the joint probability distribution P(C,D) for those particles. How is it possible to measure the joint probability distribution P(A,B,C,D) for a series of particles, if the outcomes (A,B,C,D) are never jointly measured for any of the particle pairs?
The existence of this joint PD follows from the assumption of value definiteness. Now, you're right in saying that quantum mechanics doesn't allow us to make all measurements simultaneously (although it's not necessary to perform destructive measurements to violate Bell inequalities---it's been suggested to perform Bell tests with weak measurements, where you can in fact measure all observables on the same member of the ensemble, though I'm unaware of any experimental implementation). But we're interested in a property of a hypothetical completion of quantum mechanics, not one of QM itself, so appealing to the properties of QM at this stage puts the cart before the horse.
So the basic reasoning is this: imagine we have some putative completion of quantum mechanics that attributes definite values to the observables A, B, C and D before each measurement. Assume also that we can make those measurements simultaneously. Then, we can derive, in one single measurement, the quantity AB + AD + CB - CD = A(B + D) + C(B - D). Clearly, whatever the value assignment, this quantity can never exceed 2: either B = D, in which case the second term vanishes and the first is ±2, or B = -D, in which case it's the other way around.
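If you want, you can also let a computer grind through the cases; this throwaway check just enumerates all 16 value assignments:

Code:
from itertools import product

values = set()
for A, B, C, D in product([+1, -1], repeat=4):
    values.add(A*B + A*D + C*B - C*D)   # = A(B + D) + C(B - D)
print(sorted(values))                    # [-2, 2]: the quantity never exceeds 2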
Now, we want to test this in the actual world. Unfortunately, the actual world does not let us test it directly: measuring A on one particle precludes measuring C on the same particle, since the first measurement will generally change the state. So, what do we do? Well, we note that if AB + AD + CB - CD <= 2 would be true for every single measurement if we could perform it (counterfactual definiteness), then certainly it is true on average, i.e. <AB> + <AD> + <CB> - <CD> <= 2. And this we can now test, by preparing the same state a large number of times (here, the same state means that we observe the same statistics; if this is true, then it is also the case that each copy's observables are distributed according to P(A,B,C,D)---this follows directly from the assumption that our HV-theory completes QM, which in particular means that it must reproduce all of QM's predictions). Then, we can simply do the measurements---and figure out that we violate our statistical constraint, and that hence, our original assumptions can't be true.
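Just to illustrate the logic of the test, here's a toy Monte Carlo: the source distributes outcomes according to some made-up joint distribution P(A,B,C,D), each run measures only one pair, and the sum of the estimated correlations still respects the bound:

Code:
import random

# An arbitrary hidden-variable distribution over definite value assignments (A,B,C,D).
states  = [(+1, +1, +1, +1), (-1, +1, -1, +1), (+1, -1, -1, -1)]
weights = [0.5, 0.3, 0.2]

sums   = {'AB': 0, 'AD': 0, 'CB': 0, 'CD': 0}
counts = {'AB': 0, 'AD': 0, 'CB': 0, 'CD': 0}
for _ in range(100000):
    A, B, C, D = random.choices(states, weights)[0]
    pair = random.choice(['AB', 'AD', 'CB', 'CD'])   # only one pair is measured per run
    x, y = {'AB': (A, B), 'AD': (A, D), 'CB': (C, B), 'CD': (C, D)}[pair]
    sums[pair] += x * y
    counts[pair] += 1

E = {k: sums[k] / counts[k] for k in sums}
print(E['AB'] + E['AD'] + E['CB'] - E['CD'])   # stays at or below 2, up to sampling noise

Whatever distribution you put in for the states and weights, the printed value will not (beyond sampling noise) exceed 2; that's all the inequality says.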
3) Finally, QM makes a prediction for the expectation value <AB> of the joint measurement at angles (a,b). Why should any model which tries to reproduce this prediction care about the joint probability distribution P(A,B,C,D), when all that is needed to calculate <AB> is P(A,B), and no experiment will ever be able to measure P(A,B,C,D)?
Because of counterfactual definiteness: if any observation that one could make must be reproduced by the HV-theory, and the HV-theory does not know in advance which measurements will be performed on a given pair of particles, then every particle pair must have 'pre-prepared answers' to every question we could ask it. So let's say that the first particle pair of the ensemble comes out in a definite state (++++), the next one in a convex combination 1/3*(++--) + 2/3*(-+-+), the third one as (----); then, the probability distribution given by the source is simply such that we find the first, second, or third preparation each with probability 1/3, and hence yields a combined joint PD 1/3*(++++) + 1/9*(++--) + 2/9*(-+-+) + 1/3*(----). In this way, we can always construct a joint PD for the particles the source provides us with, and as you know, no such joint PD can violate a BI.
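In code, that mixing is just bookkeeping (the three preparations and weights are the ones from the example above):

Code:
from collections import defaultdict
from fractions import Fraction

preparations = [
    {'++++': Fraction(1)},
    {'++--': Fraction(1, 3), '-+-+': Fraction(2, 3)},
    {'----': Fraction(1)},
]

joint = defaultdict(Fraction)
for prep in preparations:                  # the source emits each preparation with probability 1/3
    for state, p in prep.items():
        joint[state] += Fraction(1, 3) * p

print(dict(joint))   # weights 1/3, 1/9, 2/9, 1/3, summing to 1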
If you're still uncomfortable with the averaging, we can also just go on to an example that doesn't need it: the argument given by Greenberger, Horne, and Zeilinger, GHZ for short. There, you can find a contradiction between local realism and the predictions of quantum mechanics by investigating a single copy of the state, looking only at definite outcomes, and without any inequalities. Let's quickly look at that.
First, define the x-, y- and z-bases for a single-qubit Hilbert space by means of the eigenvectors of the spin operators sigma_x, sigma_y and sigma_z to the eigenvalues +1 and -1, as follows. The eigenvectors of sigma_z are labeled |+z> and |-z>. Whenever a system is in state |+z>, then an experimenter performing a measurement of spin in z-direction will receive +1 with certainty, and so on. The eigenvectors of the other two spin measurement directions can then be given in terms of the z-basis as |±x> = 1/sqrt(2)*(|+z> ± |-z>) and |±y> = 1/sqrt(2)*(|+z> ± i|-z>), where i is the complex unit.
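For concreteness, here's a quick numerical check of these basis vectors (plain numpy; the variable names are of course just mine):

Code:
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

zp, zm = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)  # |+z>, |-z>
xp, xm = (zp + zm) / np.sqrt(2), (zp - zm) / np.sqrt(2)                    # |+x>, |-x>
yp, ym = (zp + 1j*zm) / np.sqrt(2), (zp - 1j*zm) / np.sqrt(2)              # |+y>, |-y>

for op, vec, val in [(sz, zp, +1), (sz, zm, -1), (sx, xp, +1), (sx, xm, -1),
                     (sy, yp, +1), (sy, ym, -1)]:
    assert np.allclose(op @ vec, val * vec)   # eigenvector with the stated eigenvalue
print('all six eigenvalue equations check out')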
Now let's consider the three-qubit state |psi> = 1/sqrt(2)*(|+z,+z,+z> + |-z,-z,-z>). A hidden variable theory would have to provide outputs for any possible measurement in advance. But this is impossible. Consider the case where the first party makes an x-spin measurement, and the other two measure y-spin. We can determine the distribution of their outputs directly from re-expressing the state in the appropriate bases: |psi> = 1/2*[|+x>(|+y,-y> + |-y,+y>) + |-x>(|+y,+y> + |-y,-y>)], with the first qubit written in the x-basis and the other two in the y-basis. So if the first party gets a +1 outcome, then the y-outcomes of the other two parties will be anticorrelated; if it obtains a -1, the others will be correlated. Hence, from the outcomes of any two parties, the outcome of the third is determined. But now let's consider the same state entirely in the x-basis: |psi> = 1/2*(|+x,+x,+x> + |+x,-x,-x> + |-x,+x,-x> + |-x,-x,+x>).
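If you'd rather not push the algebra through by hand, a few more lines of numpy confirm both decompositions (again just a sketch, with the same basis vectors as above so it runs on its own):

Code:
import numpy as np

zp, zm = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
xp, xm = (zp + zm) / np.sqrt(2), (zp - zm) / np.sqrt(2)
yp, ym = (zp + 1j*zm) / np.sqrt(2), (zp - 1j*zm) / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

ghz = (kron3(zp, zp, zp) + kron3(zm, zm, zm)) / np.sqrt(2)

def amplitudes(bases, labels):
    # print the nonzero amplitudes of the GHZ state in the given product basis
    for a, la in zip(bases[0], labels[0]):
        for b, lb in zip(bases[1], labels[1]):
            for c, lc in zip(bases[2], labels[2]):
                amp = np.vdot(kron3(a, b, c), ghz)
                if abs(amp) > 1e-12:
                    print(la, lb, lc, np.round(amp.real, 3))

x = ([xp, xm], ['+x', '-x'])
y = ([yp, ym], ['+y', '-y'])
amplitudes([x[0], y[0], y[0]], [x[1], y[1], y[1]])   # x,y,y basis: the four terms given above
amplitudes([x[0], x[0], x[0]], [x[1], x[1], x[1]])   # x-basis: |+++>, |+-->, |-+->, |--+>, each 1/2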
We've previously established that if one party measures x-spin and obtains the result +1, then the y-spins of the other two parties must be anticorrelated (the first decomposition above shows this explicitly for the case of the first party measuring x, and the other cases are obtained by permuting the parties); if it obtains -1, they must be correlated. Hence, according to the first term of the x-basis decomposition, all three pairs of particles must have opposite y-spin, while each of the other three terms implies that exactly one of the three pairs has opposite y-spin. Thus, in either case, we must have an odd number of pairs with opposite y-spin.
But clearly, this is impossible: either all y-spins are the same, in which case we have no pair with opposite spin, or one differs from the other two, in which case there are two pairs with opposite y-spin. Hence, there exists no pre-assignment of values that could account for all measurements which we might choose to perform and agree with the quantum mechanical predictions, and no averaging is needed. Only if we allow that a measurement on one spin somehow influences the outcomes on the others do we get the possibility of agreement; but this is just tantamount to giving up locality. (I'll defer answering your second post, because I don't really think that it brings any new issues into play; if you feel that I should reply to any point there, feel free to point it out.)