gill1109 wrote:Gordon Watson wrote:So, returning to our topic: As I read him, Bell relied on his 1964:(15) to launch "Bell's impossibility theorem" (BIT) -- ie, his claim that Bell 1964:(2) cannot equal Bell 1964:(3). Do you agree with the crucial role of his 1964:(15) here? The whole history of the Bellian canon starts with his (15), right?
Dear Gordon,
This is indeed how it all started. Since then there have been different proofs, including proofs which make fewer or different assumptions and which reach stronger conclusions. But it is fine by me to go back to the original. Even though the notation is very old-fashioned, the proof can be made much shorter and sharper, and Bell skates over a number of issues which he later expanded on at length, precisely in reaction to critics like you.
Thanks Richard; but quite seriously: Bell dug his hole deeper -- not better; and struggled (unsuccessfully) to avoid doublespeak re AAD (action-at-a-distance), statistical dependence, etc., as time went on. Please listen to his 1990 talk -- it's online.
gill1109 wrote:
Bell's argument is indeed that (2) implies (15) but (3) does not satisfy (15). Hence the model (2) cannot reproduce the quantum correlations predicted in (3).
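For readers without the paper to hand, the three equations in question (as I recall them from Bell 1964) are: (2) P(a, b) = int d lambda rho(lambda) A(a, lambda) B(b, lambda); (3) the quantum singlet prediction P(a, b) = -a.b; and (15) the inequality 1 + P(b, c) >= |P(a, b) - P(a, c)|. A quick numerical sketch (my own choice of coplanar settings at 0, 45 and 90 degrees, purely illustrative) shows that (3) indeed fails (15):

```python
import math

def P_qm(theta):
    """Quantum singlet prediction, Bell 1964 eq. (3): P(a, b) = -a.b = -cos(angle)."""
    return -math.cos(theta)

# Coplanar unit vectors a, b, c at 0, 45 and 90 degrees (illustrative choice).
a, b, c = 0.0, math.pi / 4, math.pi / 2

lhs = 1 + P_qm(c - b)                  # 1 + P(b, c)
rhs = abs(P_qm(b - a) - P_qm(c - a))   # |P(a, b) - P(a, c)|

print(f"1 + P(b,c)        = {lhs:.4f}")
print(f"|P(a,b) - P(a,c)| = {rhs:.4f}")
print("inequality (15) violated:", rhs > lhs)
```

With these settings the left side is about 0.293 and the right side about 0.707, so the quantum prediction (3) cannot satisfy (15).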
This is where I asked you, as a statistician, to comment on the implicit assumption that you see in "Bell's integral" -- the one in his (2). Please respond.
gill1109 wrote:
I suppose you have no problems with the derivation of (15) from (2) and that you have no problems with the QM expression (3).
Richard; are you drunk on The Netherlands footballing success?
Please, re Bell's silly (15); so easily refuted experimentally too: What is MY topic here, in this thread, again? Please!
Re his (3), no problem at all since I happily derive it almost daily for folks like you. Based on commonsense local realism (CLR), of course.
gill1109 wrote: So it seems you have a problem with
(2) P(a, b) = int d lambda rho(lambda) A(a, lambda) B(b, lambda)
It needs careful handling, so please reply re the implicit assumption that you see therein.
gill1109 wrote:
I would like to explain what this formula is supposed to mean by reference to a computer simulation experiment. Suppose we set up a network of three ordinary computers called Source, Station A, and Station B. Station A takes input from Source and from a lab assistant Alice. Station B takes input from Source and from a lab assistant Bob. The input from the source is whatever you like, call it lambda, but it is created using a state-of-the-art pseudo-random number generator. We think of each new lambda as being chosen, anew, completely at random, from some fixed set of possible values according to a fixed probability distribution rho(lambda) d lambda.
At the two stations, Alice and Bob (the lab assistants) pick some values of settings a and b just however they like. They don't know anything about the pseudo random number generator. They don't get to see the values of lambda which are sent from source to stations.
Their computers simply evaluate and output some binary output A(a, lambda) and B(b, lambda).
This is repeated lots and lots and lots of times.
If we restrict attention to runs in which Alice and Bob chose a particular pair of settings a and b, and average the product of their outcomes, it would in the long run become very close to P(a, b) = int d lambda rho(lambda) A(a, lambda) B(b, lambda)
Do you believe me so far? This is Probability for Engineers 101 or perhaps Statistics for Engineers 101.
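The long-run average Richard describes can be checked directly. Below is a minimal sketch of the network: the functions A and B are my own illustrative +/-1-valued choices (not Bell's), and lambda is drawn uniformly on [0, 2*pi), so rho(lambda) = 1/(2*pi). The empirical average of the products converges to the integral P(a, b):

```python
import math
import random

random.seed(42)

def A(a, lam):
    """Station A's deterministic +/-1 output (illustrative choice, not Bell's)."""
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):
    """Station B's deterministic +/-1 output (illustrative choice, not Bell's)."""
    return -1 if math.cos(lam - b) >= 0 else 1

a, b = 0.0, math.pi / 3
N = 200_000

# Many runs: Source sends the SAME lambda to both stations on each run.
total = 0
for _ in range(N):
    lam = random.uniform(0, 2 * math.pi)
    total += A(a, lam) * B(b, lam)
empirical = total / N

# P(a, b) = int d lambda rho(lambda) A(a, lambda) B(b, lambda), via a Riemann sum.
M = 100_000
integral = sum(A(a, 2 * math.pi * k / M) * B(b, 2 * math.pi * k / M)
               for k in range(M)) / M

print(f"empirical average: {empirical:.3f}")
print(f"P(a, b) integral : {integral:.3f}")
```

For this particular A and B the integral works out to -1/3 at a 60-degree separation, and the empirical average lands within sampling error of it.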
Richard; thanks, but with respect: When we settle the World-Cup by computer, I'll switch my studies from real-life to gaming. Until then, I trust that you are happy to deal with the real world and real experiments? Knowing, as you do, that two correlated footballs have nothing like the correlations associated with two entangled particles?
gill1109 wrote:
PS [added an hour or so later]: if you would like the computers Station A and Station B to introduce further randomness into the generation of the measurement outcomes, let's suppose that this also uses more pseudo random number generators, just like Source. You can just as well imagine those pseudo random generators as being housed at the source, and you add to the existing "message" from the source to the measurement stations, the next batch of pseudo random numbers which Station A and Station B would use. This way we convert a "stochastic hidden variables model" to a "deterministic hidden variables model". There is no need to invent a new theory for the "stochastic" case.
I don't know what you mean by "common sense local realism" but I think the phrase should mean something like the computer network paradigm which I have just described.
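Richard's PS can also be made concrete: a stochastic station that tosses its own internal coin is equivalent to a deterministic station that receives the coin toss from the source as part of an enlarged lambda. A minimal sketch, where the noisy response rule and the 10% flip probability are my own illustrative assumptions:

```python
import random

def A_stochastic(a, lam, rng):
    """Stochastic station: flips its deterministic output with probability 0.1."""
    base = 1 if (lam + a) % 2 < 1 else -1
    return -base if rng.random() < 0.1 else base

def A_deterministic(a, hidden):
    """Deterministic station: the coin u_A now travels with lambda from the source."""
    lam, u_A = hidden
    base = 1 if (lam + a) % 2 < 1 else -1
    return -base if u_A < 0.1 else base

# Same seed => same pseudo-random stream, so the two models agree run by run.
rng1, rng2 = random.Random(7), random.Random(7)
a = 0.25
for _ in range(1000):
    lam = rng1.random() * 2       # source's lambda
    u_A = rng1.random()           # extra randomness, now generated at the source
    out_det = A_deterministic(a, (lam, u_A))
    # The same run, with a stochastic station drawing from an identical stream:
    lam2 = rng2.random() * 2
    out_sto = A_stochastic(a, lam2, rng2)
    assert out_det == out_sto
print("stochastic and deterministic models agree on every run")
```

This is the sense in which nothing new is needed for the "stochastic" case: bundling the station's randomness into the source's message recovers a deterministic model with the same statistics.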
"Commonsense local realism (CLR) is the fusion of local-causality (no causal influence propagates superluminally) and physical-realism (some physical properties change interactively)." Do you sense any problems here? Any that might prevent you too adopting CLR [ pronounced
clear ] as a working philosophy?
gill1109 wrote:
I'm glad that you don't want to allow any loopholes (locality loophole, conspiracy loophole, detection loophole). That allows for a much "cleaner" discussion in which we can focus on the principles. After we have understood the principles, we can start thinking about how well they are reflected in state of the art experiments.
Why the need for state-of-the-art experiments? Under CLR, there are NO loopholes to be closed. We predict the correct results (just like QM) but without collapse or any need for FTL or AAD or nonlocality. Surely Aspect, and Bell himself (when introspective), convinced you of that?
So, Richard, seriously, how about we quit the word games and let our maths do the talking. I moved first, in my essay. Your move next, in reply, via some maths?
With best regards; Gordon