Joy Christian wrote: As for my proposed experiment, you and Gill should really read the two relevant papers before going about spewing nonsense about my work all over the internet.

As far as I know, Michel hasn't read these papers yet. He did have serious objections to calculating all the correlations on the same set of particles.

Back on topic, Watson's paper ...

There is a lot of misunderstanding about the equation

- E(a, b) = int A(a, lambda) B(b, lambda) rho(lambda) d lambda.

Part of the problem is that the integral is written in the notation appropriate when lambda is an element of a Euclidean space and rho(lambda) is a probability density function (typical physics notation of 50 years ago). Today, a mathematician might write

- E(a, b) = int A(a, lambda) B(b, lambda) dP(lambda)

where P is a probability measure on a measurable space.

But anyway, this key equation is not a *definition* but a theorem, though a rather basic and easy theorem, belonging to the theory of local hidden variables and to classical probability theory (whether Bayesian or frequentist).

E(a, b) stands for the mean value of the products of the outcomes of spin measurements in the directions a and b, over infinitely many similarly prepared pairs of particles. ("State" = "preparation".) I take a frequentist interpretation of probability here. If you prefer a Bayesian interpretation, that is fine: the mean of a variable is just the fair price of a lottery ticket whose prize is the outcome of the variable, and probability is your uncertainty as to its value, prior to the experiment in which the variable is observed once.

The integral on the right hand side is the result of (1) assuming a local hidden variables model and (2) the law of large numbers (if you are a frequentist; if you are a Bayesian you have another theorem to take care of this basic building block of the theory: the relation between expectations and probabilities).

The local hidden variables theory says that preparing a new pair of particles and measuring in directions a and b amounts to: picking lambda at random according to the probability measure P, then getting to see the values A(a, lambda) and B(b, lambda).
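This picture can be sketched as a short simulation. The particular functions A, B and the uniform distribution of lambda below are hypothetical choices made purely for illustration; any local hidden variables model has this shape. Averaging the products A*B over many independently drawn lambdas is exactly the law-of-large-numbers approximation to the integral E(a, b).

```python
import math
import random

# Toy local model: lambda is a uniform angle on [0, 2*pi); each outcome
# is +1 or -1 and depends ONLY on the local setting and on lambda.
# (These specific functions are illustrative assumptions, not from the text.)

def A(a, lam):
    # Alice's outcome for setting a.
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):
    # Bob's outcome for setting b.
    return -1 if math.cos(lam - b) >= 0 else 1

def estimate_E(a, b, n=100_000, seed=0):
    # Empirical mean of A*B over n independent draws of lambda ~ P.
    # By the law of large numbers this approximates
    # E(a, b) = int A(a, lambda) B(b, lambda) dP(lambda).
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)
        total += A(a, lam) * B(b, lam)
    return total / n

# For this particular toy model one can compute the integral exactly:
# E(a, b) = 2*|a - b|/pi - 1 for |a - b| <= pi, so this prints roughly -0.5.
print(estimate_E(0.0, math.pi / 4))
```

Nothing here depends on the two measurement stations communicating: A never sees b, and B never sees a.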

Given functions A and B and a probability measure P, the mathematician is free to study a new composite function such as

A(a, lambda)B(b, lambda) - A(a, lambda)B(b', lambda) + A(a', lambda)B(b, lambda) + A(a', lambda)B(b', lambda)

and integrate over lambda with respect to the probability measure P. Because this new function is everywhere -2 or +2, its average lies between those bounds.

There is no suggestion anywhere of measuring different things at the same time. There are some functions, there is a trivial logical bound, there is some calculus and writing an integral of a sum as a sum of integrals.
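The argument of the last two paragraphs can be checked numerically. A minimal sketch follows; the functions A, B and the four settings are arbitrary illustrative choices, but the pointwise "-2 or +2" property holds for any functions taking values in {-1, +1}, which is the whole point.

```python
import math
import random

# The same style of toy model as above (illustrative assumptions only).
def A(a, lam):
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):
    return -1 if math.cos(lam - b) >= 0 else 1

def s(a, ap, b, bp, lam):
    # The composite function evaluated at a single lambda:
    # A(a)B(b) - A(a)B(b') + A(a')B(b) + A(a')B(b')
    # = A(a)[B(b) - B(b')] + A(a')[B(b) + B(b')].
    # One bracket is always 0 and the other is +/-2, so s is always -2 or +2.
    return (A(a, lam) * B(b, lam) - A(a, lam) * B(bp, lam)
            + A(ap, lam) * B(b, lam) + A(ap, lam) * B(bp, lam))

rng = random.Random(1)
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
values = [s(a, ap, b, bp, rng.uniform(0, 2 * math.pi)) for _ in range(100_000)]

# Pointwise: every value of the composite function is -2 or +2 ...
assert all(v in (-2, 2) for v in values)

# ... hence its average (writing the integral of the sum as the sum of
# four integrals gives the familiar combination of four correlations)
# is bounded by 2 in absolute value.
S = sum(values) / len(values)
assert abs(S) <= 2
```

Note that nothing in the check requires evaluating two different settings "at the same time" on a particle; s is just a function of lambda, and we average it.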

No voodoo, just plain model + calculus. If you believe the model represents reality, then deductions from the model about observable things should fit reality too. If they don't fit, the model is inappropriate.

It's all rather simple as long as one carefully distinguishes physical reality from mathematical models of parts of it.