Zen wrote: Michel,
I don't have Python installed here, but from the results you posted, your Python translation of Richard's first two R scripts (which I translated to Perl) doesn't give the same results as the R and Perl code. You get the same CHSH = 2.0 for both the dependent and independent cases?
Best,
Zen.
Zen, why oh why is Michel looking at the maximum? He should draw histograms of the results of the two cases. The point is to *see* the results. All of them. A histogram of a few hundred would do fine.
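Here is a minimal sketch of what I mean, assuming a toy deterministic local hidden variables model and the standard CHSH setting angles (the model and all names here are my own illustration, not Richard's scripts). Repeating the experiment a few hundred times and histogramming |S| shows the whole spread of outcomes around the bound, not just the maximum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard CHSH setting angles (radians) -- an assumption for illustration.
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

def corr(x, y, n):
    """Sample correlation E(x, y) from n pairs under a toy deterministic
    LHV model: lambda uniform on [0, 2*pi),
    A = sign(cos(lambda - x)), B = -sign(cos(lambda - y))."""
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.sign(np.cos(lam - x))
    B = -np.sign(np.cos(lam - y))
    return np.mean(A * B)

def chsh(n):
    # S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    return corr(a, b, n) - corr(a, b2, n) + corr(a2, b, n) + corr(a2, b2, n)

# A few hundred repetitions, each with 200 pairs per setting pair.
S = np.array([abs(chsh(200)) for _ in range(500)])

# Crude text histogram: just *look* at the spread around the bound |S| = 2.
counts, edges = np.histogram(S, bins=10)
for c, lo, hi in zip(counts, edges, edges[1:]):
    print(f"{lo:4.2f}-{hi:4.2f} {'#' * c}")
print("fraction of runs with |S| > 2:", np.mean(S > 2))
```

Even though the underlying model satisfies |S| <= 2 in the population, roughly half the finite-sample runs land above 2, purely by chance.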
I think he doesn't realise that when we do experiments, "violate a bound" really means "statistically violate a bound", that is, a statistically significant violation of the bound ... and even those words need to be unpacked, because people who have never heard of standard errors or p-values won't have any idea what they are supposed to mean.
I think everyone has already taken Michel's point that in experiments, one can easily observe values of CHSH larger than 2. In fact this has been well known for years and years and years. In a CHSH experiment one shows that the correlations are close to the negative cosine, and not close to the triangle wave. One looks at just a few points on the curve ... more precisely, one looks at four points on the surface. Is the calculated (observed, measured) E(a, b) close to the theoretical rho(a, b) of quantum theory, or to the theoretical rho(a, b) of the standard local hidden variables model?
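The two theoretical curves can be compared directly at the four standard setting pairs (a sketch of my own, with the usual conventions rho(a, b) = -cos(a - b) for quantum theory and the matching triangle wave for the standard LHV model):

```python
import numpy as np

def rho_qm(a, b):
    """Quantum prediction for the singlet state, angles in radians."""
    return -np.cos(a - b)

def rho_lhv(a, b):
    """Standard LHV prediction: triangle wave, linear in |a - b|,
    agreeing with -cos at differences 0, pi/2 and pi."""
    d = np.abs(a - b) % (2 * np.pi)
    d = min(d, 2 * np.pi - d)          # fold into [0, pi]
    return -1 + 2 * d / np.pi

# The four points on the surface, at the standard CHSH angles.
settings = [(0, np.pi/4), (0, 3*np.pi/4), (np.pi/2, np.pi/4), (np.pi/2, 3*np.pi/4)]
signs = [1, -1, 1, 1]                   # S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

S_qm  = sum(s * rho_qm(a, b)  for s, (a, b) in zip(signs, settings))
S_lhv = sum(s * rho_lhv(a, b) for s, (a, b) in zip(signs, settings))
print(S_qm, S_lhv)   # roughly -2*sqrt(2) versus -2
```

The experimental question is then which of these two values the four measured correlations point to, given their statistical uncertainty.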
When we talk about experiments, we need a vocabulary with words like "statistically significant", "standard error", "p-value", "goodness-of-fit", "type 1 error", "type 2 error", "discriminatory power"... We need to distinguish population from sample. Theory from experiment.
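To make "statistically significant" concrete, here is a sketch with entirely made-up sample numbers (not anyone's real data): since each product A*B is +/-1, each sample correlation has variance (1 - E^2)/n, and the four terms of S are estimated from independent samples, so the standard errors add in quadrature:

```python
import math

# Hypothetical experimental numbers, purely for illustration:
# four estimated correlations E(a, b), each from n pairs.
n = 10_000
E = [0.72, -0.70, 0.71, 0.69]

# Sample CHSH value: S = E1 - E2 + E3 + E4.
S = E[0] - E[1] + E[2] + E[3]

# Each A*B is +/-1, so Var(A*B) = 1 - E^2; independent terms add.
se = math.sqrt(sum((1 - e * e) / n for e in E))

# How many standard errors the estimate lies above the local bound 2.
z = (S - 2) / se
print(f"S = {S:.3f}, se = {se:.4f}, z = {z:.1f}")
```

Distinguishing the population quantity (the theoretical S of the model) from the sample quantity (this estimate with its standard error) is exactly the distinction the vocabulary above is for.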

