Joy Christian wrote:Jochen wrote:See, I'm used to thinking about this in operational terms. You have two boxes, each of which takes either of two inputs, and produces one of two outputs. This is the level at which Bell inequalities are derived---whatever mechanism gives rise to the outcomes is irrelevant. If (and only if) those boxes can generate a violation of some Bell inequality without knowing about each other's inputs, you'll have shown that there is a local realistic model that can reproduce quantum mechanical predictions.
I find this argument very strange, if not simply wrong. Neither quantum mechanics nor local realism has anything to do with such boxes. Quantum mechanics or its local-realistic alternative has to do with correlations observed in Nature.
Well, it's a standard way of thinking about this in the quantum info community. Its virtue is precisely that it is wholly independent of the underlying implementation. The reasoning is that, if you are committed to the existence of predetermined values for all observables (i.e. realism), then, if they are not disturbed by the act of measurement, you can find a joint probability distribution for all of them; and as shown above, such a joint distribution, no matter how the outcomes are generated, is a sufficient condition for the Bell inequalities to hold.
If you now say you have a local realistic strategy that nevertheless violates Bell inequalities, then there is some recipe to calculate, from just the hidden variables and the local input of one party, the outcome of every experiment that party could perform; that is, there is a function of the hidden variables and one party's input alone that reproduces that party's outcomes. So all you have to do is build a box that implements this strategy, either by simulation (which is possible if such a recipe for calculation exists), or by some actual physical system. If you can do this such that neither box has knowledge of the input to the other, then (and only then) will you have shown that there is a local realistic model for quantum correlations.
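Just to be concrete about what such a box simulation looks like, here is a little Python sketch. The local rules are invented purely for illustration (they are not anybody's actual model): each box computes its output from the shared hidden variable and its own input alone, and the CHSH combination of the resulting correlations can then be checked directly.

```python
import random

# Toy local-realistic boxes (illustrative rules only): each output depends
# solely on the shared hidden variable lam and that box's own input.
def box_A(lam, x):
    return 1 if (lam + x) % 2 == 0 else -1

def box_B(lam, y):
    return 1 if lam % 2 == 0 else -1  # this particular rule ignores y

def chsh(trials=100_000):
    # Estimate the correlators E(x, y) and combine them into the CHSH value.
    E = {}
    for x in (0, 1):
        for y in (0, 1):
            total = 0
            for _ in range(trials):
                lam = random.randrange(4)  # shared hidden variable
                total += box_A(lam, x) * box_B(lam, y)
            E[x, y] = total / trials
    return E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]

print(abs(chsh()))  # 2.0: saturates, but never exceeds, the classical bound
```

Whatever pair of local functions you substitute here, |CHSH| stays at or below 2; a pair of boxes that exceeded that without sharing their inputs is exactly what Bell's theorem rules out.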
Joy Christian wrote:It takes only half a page of elementary calculation to show that these correlations can be explained within a manifestly local-realistic model:
http://arxiv.org/abs/1103.1879.
And you know very well that not everybody accepts these calculations as correct. But if you could show such a two-box implementation of your recipe, there would not be any doubt: because the impossibility of such boxes is just what Bell's theorem asserts.
FrediFizzx wrote:Jochen wrote:The argument hinges on a couple of elementary misconceptions; however, Schmelzer seems to have dispensed with most of them quite handily, so I don't really know what I could add to the discussion there.
If you really think so then please post how quantum mechanics violates Bell's inequality. It can't. It is impossible for anything to violate it.
Well, Bell inequalities are statements about a certain kind of theory, in which the observed probabilities always form a convex polytope. Their violation merely says that quantum mechanics is not such a theory.
Let me illustrate that. Take two propositions, P_A and P_B, which are about {0,1}-valued observables A and B, and assert 'A is 1' and 'B is 1' respectively. This could, for instance, simply be independent coin throws. Then form the conjunction P_A ∧ P_B, which asserts 'A is 1 and B is 1'. We can construct the truth table for our propositions (I couldn't get the array-environment to work):
Code:
A | B | A^B
-----------
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
Now, each row contains a possible assignment of values, and all rows together contain all possible assignments. We can now construct probability distributions over these assignments: say, P = p v_1 + (1-p) v_4, where v_i denotes the i-th row, means that with probability p, all observables are 0, while with probability (1-p), all of them are 1. Plainly, the most general probability distribution on our system is P = Σ_i p_i v_i, where p_i ≥ 0 and Σ_i p_i = 1. This equation defines a convex polytope (actually a simplex) in three dimensions, the so-called correlation polytope. All admissible correlations of the system lie in this polytope, and thus, one will only ever observe probability distributions that can be written in the above form. Note here that these are statistical predictions: the only way to have access to them is to carry out a sufficiently large number of measurements on identically prepared systems.
Or at least so one once thought, until quantum mechanics came around. Because note that in the above derivation, we have made the implicit assumption that all of A, B and A∧B have definite values; only this made it possible to derive the correlation polytope. If we drop this assumption, and instead calculate the probabilities using the quantum mechanical prescription P(A = 1) = tr(ρ Π_A), where Π_A is the projector onto the 1-eigenspace of A, then we get a different, and actually larger, convex body, of which the classical correlation polytope is merely a subset.
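As a quick numerical illustration of that prescription (the state and observable here are my own choices for the example, nothing more):

```python
import numpy as np

# Born-rule sketch: P(A = 1) = tr(rho Pi), where Pi projects onto the
# 1-eigenspace of the observable A.  Take A to have eigenstates |0>, |1>
# and prepare the system in |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)   # density matrix |+><+|
Pi = np.diag([1.0, 0.0])     # projector onto the "A is 1" eigenspace
prob = float(np.trace(rho @ Pi))
print(round(prob, 12))  # 0.5
```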
What does this have to do with Bell inequalities? Well, there are two ways of characterizing a polytope: via its vertices, as I have done, and via its faces. These faces are given by inequalities: anything larger than a certain value, say, lies outside the polytope. These inequalities are exactly Bell's inequalities. Hence, their derivation hinges on the assumption of definite values; and thus, in theories where that assumption is violated, as it is in quantum mechanics, there is no reason for Bell inequalities to hold, and indeed, they are violated.
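Both sides of this can be checked in a few lines of Python, taking the CHSH inequality as the concrete face; the measurement angles below are the usual textbook choices.

```python
import itertools
import numpy as np

# Vertex side: maximize the CHSH expression over all deterministic
# assignments a0, a1, b0, b1 in {-1, +1}.  The maximum is the classical
# bound appearing in the face (Bell) inequality.
classical_bound = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in itertools.product((-1, 1), repeat=4)
)
print(classical_bound)  # 2

# Quantum side: the singlet-state correlation E(a, b) = -cos(a - b),
# evaluated at the standard angles, exceeds that bound.
def E(a, b):
    return -np.cos(a - b)

a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # about 2.828, i.e. 2*sqrt(2)
```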
Now you're probably wondering why I haven't said anything about locality. That's because locality really only comes in once one tries to give the inequalities any operational or empirical content; the inequalities themselves are just equivalent to the mathematical statement that there exists a joint probability distribution for all variables, which one gets from the assumption that there are definite values and that measurements are non-disturbing. In reality, there are many ways for the assumption that there is a joint PD to fail: if my measurement of A influenced the likelihood of B coming up 1, for instance. So, you must introduce an auxiliary assumption in order to make testable predictions, which is also the reason that hidden-variable theories per se can never be excluded. There are three basic ways to make this assumption:
- Macroscopic realism: essentially, you assume that a system is always in one of the states available to it (corresponding to having always one fixed set of responses to all possible measurements), and that measurements carried out at different points in time do not influence one another. This gives you the Leggett-Garg inequalities.
- Noncontextuality: observables that mutually commute do not influence one another; hence, the value you obtain for an observable A is independent of whether you measure it together with a commuting observable B or C, which constitute the so-called measurement context. This gives you the Kochen-Specker theorem.
- Locality: measurements performed in spacelike separation don't influence one another, thus Bob's outcomes are independent of Alice's choices. You can view this as a special case of noncontextuality, since operators at spacelike separation always commute. This gives you Bell's theorem.
So, it's in the following sense that quantum mechanics violates Bell inequalities, while classical mechanics doesn't: Bell inequalities are really statements about theories in which all variables have a joint probability distribution; for such theories, they hold necessarily. However, this is not something you can require in general, as any influence of one variable on another spoils it. Hence, you must make an additional assumption in order to produce testable predictions. The weakest such assumption is given by locality, since we have good reason (from special relativity) to believe that information can't be transmitted faster than light. So under the joint assumptions of realism and locality, you can justify the assumption of the existence of a joint PD.
But this need not hold: a given theory could well violate either assumption, in which case it isn't constrained by Bell inequalities. So, the experimental violation of Bell inequalities tells us that quantum mechanics must violate at least one of the assumptions. We can either give up locality, if we want to be able to claim that there is a definite value for every measurement to discover; then, we would have a theory essentially like classical mechanics in that the space of admissible probability distributions is a convex polytope, except that there isn't a single such polytope valid across all experiments. Or, we could give up the assumption of definite values; then, we get a theory for which the set of admissible probability distributions isn't a polytope, but a more general convex body (which can be fully characterized using convex optimization methods). Which one to choose is a matter of individual preference.