minkwe wrote: Heinera wrote: Here you confuse the results of an experiment and the output of a model, which are two different things. In an experiment, the E(a,b) from one set of particles is independent from the E(a,b') from a completely different set of particles, as you say. The point is that it is impossible to construct a LHV model so that the same is true for the model.
No I'm not. I asked three questions: two about the theoretical Bell and CHSH constructs, and one about experiments. You objected only to my answer about experiments. That's what I'm clarifying to you: the same is true for an LHV model of the experiment.
The experiment uses a completely separate set of particles for each setting pair, so why would you insist on using a single set of particles when modelling it? Even the QM predictions are for separate sets of particles.
Why would you insist on using a single set of particles when modelling it? Answer: ask Einstein, Podolsky, Rosen. Or read Bell. Or both, and more.
A local hidden variables model is a *model* for one pair of particles, and it allows you to define, *within the model*, all the different measurement outcomes of all the different possible measurements, at the same time. After all, in the model, there is a function A(a, lambda)! (I'm not saying you would want to do this. Of course not. I'm just saying that you could do this.)
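To make that concrete, here is a minimal sketch (my own toy example, not anything from the literature) of an LHV model in which the functions A(a, lambda) and B(b, lambda) define all four possible outcomes for one pair simultaneously. The specific rules inside A and B are arbitrary illustrative choices; any choice whatsoever obeys the CHSH bound |S| <= 2:

```python
import random

# Toy local hidden-variable model (illustrative only). A single hidden
# variable lambda fixes, at the same time, the outcome of every possible
# measurement on one pair -- exactly what a function A(a, lambda) allows.

def A(setting, lam):
    # Alice's outcome, +/-1, determined entirely by setting and lambda.
    return 1 if (setting + lam) % 2 == 0 else -1  # arbitrary rule

def B(setting, lam):
    # Bob's outcome, +/-1, likewise fully determined.
    return 1 if (setting * lam) % 2 == 0 else -1  # arbitrary rule

random.seed(0)
n = 100_000
S = 0.0
for _ in range(n):
    lam = random.randrange(1000)  # hidden variable shared by the pair
    # Within the model, all four outcomes exist simultaneously:
    a1, a2 = A(0, lam), A(1, lam)
    b1, b2 = B(0, lam), B(1, lam)
    S += a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
S /= n
assert abs(S) <= 2.0  # every LHV model satisfies the CHSH bound
print(S)
```

The bound is automatic: per pair, a1*(b1 + b2) + a2*(b1 - b2) is always exactly +2 or -2, so the average can never exceed 2 in absolute value, whatever rules you pick for A and B.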
Since the birth of quantum mechanics, people have asked themselves whether such a model could underlie quantum mechanics. In other words, though QM appears weird, maybe there is a "normal" hidden layer, behind the scenes. All that quantum weirdness would not then really be weird, it would only be apparently weird.
This is not a stupid question, since until quantum mechanics such models were always possible, and indeed everyone believed that deep down the universe is just a bunch of billiard balls bumping into one another in deterministic fashion. Everyone believed that every instance of randomness in physics is just an instance of unknown initial conditions, uncontrollable initial conditions, or chaos (sensitive dependence on initial conditions, causing exponential divergence between trajectories even when they start very close together). According to this point of view, every physical random generator is a pseudo-random generator.
Bell's genius was to take the EPR argument (that QM is incomplete, because QM's own predictions show that such a hidden layer exists), turn it on its head, and turn it into an experiment which might actually decide whether or not a hidden layer can explain the predictions of QM.
Note that the EPR argument talks about an entangled state of two particles and considers the two pairs of maximally incompatible observables, position and momentum, one pair on each particle. One can rewrite EPR for a spin-half system and again consider the maximally incompatible observables sigma_x, sigma_y, sigma_z on both particles, or take just two spin observables for each particle. Bell showed that something really interesting turned up when you played a bit more with this idea. Clauser, Horne, Shimony and Holt played a bit more still and ended up with the CHSH set-up, where the most exciting observables to work with are sigma_x, sigma_y on one of the particles, and (sigma_x + sigma_y)/sqrt 2 and (sigma_x - sigma_y)/sqrt 2 on the other particle. Tsirelson later confirmed that this choice gives the largest possible deviation from what could happen classically.
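For what it's worth, here is a small sketch (my own illustration, using numpy) that computes the CHSH value of the singlet state with exactly those observables. The magnitude comes out at 2*sqrt(2), Tsirelson's bound, beating the classical bound of 2:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# Alice measures sigma_x or sigma_y; Bob measures the rotated combinations.
A1, A2 = sx, sy
B1 = (sx + sy) / np.sqrt(2)
B2 = (sx - sy) / np.sqrt(2)

def E(A, B):
    # Quantum correlation <psi| A (x) B |psi>
    return np.real(psi.conj() @ np.kron(A, B) @ psi)

S = E(A1, B1) + E(A1, B2) + E(A2, B1) - E(A2, B2)
print(S)  # magnitude 2*sqrt(2) ~ 2.828, exceeding the classical bound 2
```

Each individual correlation here is +/-1/sqrt(2), and the signs conspire so that all four terms add up rather than partially cancel, which is exactly why this choice of observables is the most exciting one.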
Later, "A tight Tsirelson inequality for infinitely many outcomes" by S. Zohren, P. Reska, R. D. Gill and W. Westra, http://arxiv.org/abs/1003.0616, Europhysics Letters 90 (2010) 10002, took the canonical generalization of CHSH to higher spins, took the spin to its infinite limit, and got an EPR-like experiment back again. The observables are of course position and momentum on one particle, and (position + momentum)/sqrt 2 and (position - momentum)/sqrt 2 on the other particle.