A silly computer experiment ... or, the heart of the matter?

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

A silly computer experiment ... or, the heart of the matter?

Postby gill1109 » Mon Apr 07, 2014 1:29 am

Experiment 1: copy and paste this code into your R console window 20 times, and report your results back here.

Code:
N <- 10000

lambda <- runif(N, 0, 360)
alpha <- 0
beta <- 45
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E11 <- mean(A * B)

lambda <- runif(N, 0, 360)
alpha <- 0
beta <- 135
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E12 <- mean(A * B)

lambda <- runif(N, 0, 360)
alpha <- 90
beta <- 45
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E21 <- mean(A * B)

lambda <- runif(N, 0, 360)
alpha <- 90
beta <- 135
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E22 <- mean(A * B)

E12 - E11 - E21 - E22


Experiment 2: copy and paste this code into your R console window 20 times, and report your results back here.

Code:
N <- 10000

lambda <- runif(N, 0, 360)

alpha <- 0
beta <- 45
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E11 <- mean(A * B)

# lambda <- runif(N, 0, 360)
alpha <- 0
beta <- 135
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E12 <- mean(A * B)

# lambda <- runif(N, 0, 360)
alpha <- 90
beta <- 45
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E21 <- mean(A * B)

# lambda <- runif(N, 0, 360)
alpha <- 90
beta <- 135
A <- sign(cos((alpha - lambda)*pi/180))
B <- - sign(cos((beta - lambda)*pi/180))
E22 <- mean(A * B)

E12 - E11 - E21 - E22


Discuss the difference between experiment 1 and experiment 2.

Hint 1: the "#" symbol indicates a comment. So lines which start with # have been commented out.

Hint 2: the command lambda <- runif(N, 0, 360) creates a vector of length N containing *real* numbers chosen uniformly at random between 0 and 360. Actually: pseudo random numbers. Every time the command is repeated, a new set of pseudo random numbers is created.

You may also like to investigate use of different parameters, e.g., different values of N; or you might like to repeat both experiments 1000 times and report histograms of the results.

You might like to learn about the function "set.seed()" which allows one to repeat a simulation experiment with the *same* random numbers as you used last time. This is very useful in debugging programs, etc. And in making your research reproducible: someone else who runs the same code on a different computer even under a different operating system will get identical results if they start with the same pseudo random seed.
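
For example, a minimal sketch of how set.seed() behaves (the seed value 12345 is just an arbitrary choice):

Code:
set.seed(12345)
x <- runif(5, 0, 360)    # five pseudo random angles
set.seed(12345)
y <- runif(5, 0, 360)    # resetting the seed reproduces the same five angles
identical(x, y)          # TRUE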

Re: A silly computer experiment or the heart of the matter?

Postby gill1109 » Mon Apr 07, 2014 5:20 am

PS-1 Here is how to repeat the experiment not 20 times by copying and pasting 20 times, but 1000 times, saving the results and plotting a histogram of them:

Code:
Nrep <- 1000
results <- numeric(Nrep)
for (i in 1:Nrep) {
    ##
    ##  Here you fill in the code from experiment 1 (or 2 if you prefer),
    ##  indented if you want it to look cool, but that's not necessary.
    ##  Just change the last line! It is changed into ...

    results[i] <- E12 - E11 - E21 - E22
}
hist(results)
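
For the record, here is one way the loop might be filled in with the code of experiment 1 (just a sketch: I've folded the four copy-pasted blocks into an inner loop over the four setting pairs, which is not how the code above was written but gives the same result):

Code:
Nrep <- 1000
N <- 10000
results <- numeric(Nrep)
## the four pairs of settings (alpha, beta), in the order E11, E12, E21, E22
settings <- list(c(0, 45), c(0, 135), c(90, 45), c(90, 135))
for (i in 1:Nrep) {
    E <- numeric(4)
    for (j in 1:4) {
        lambda <- runif(N, 0, 360)   # experiment 1: fresh lambda for every pair
        alpha <- settings[[j]][1]
        beta  <- settings[[j]][2]
        A <- sign(cos((alpha - lambda)*pi/180))
        B <- - sign(cos((beta - lambda)*pi/180))
        E[j] <- mean(A * B)
    }
    results[i] <- E[2] - E[1] - E[3] - E[4]   # E12 - E11 - E21 - E22
}
hist(results)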


PS-2 Anyone like to translate this to Mathematica, Python or whatever? Please post the translations here so we can check that they are "true".

PS-3 You can get R from http://www.r-project.org Free. And a huge user-base, friendly forums where you can get advice, hundreds of quick start guides, getting started guides, tutorials ... If you want a book, I very much recommend Norman Matloff's "The art of R programming". http://www.nostarch.com/artofr.htm. If you want a GUI (and IDE) I recommend RStudio. Free. However, the standard R distributions for Windows and for Mac come with pretty adequate GUIs of their own.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Tue Apr 08, 2014 10:09 am

Still waiting for help with Python, Mathematica translations.

This is important because part of this code is actually the key part of the proposed code for determining the outcome of my bet with Joy. See

https://rpubs.com/gill1109/Bet

The experiment of the century is at stake! The progress of science.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Wed Apr 09, 2014 9:30 pm

Nice! That is a good warm-up. Call it Phase 1.

Phase 2: Can you also do
http://rpubs.com/gill1109/Bet
in Perl? It's the proposed actual analysis of the data from the bet. It reads directions from two files. At the end it does a little simulation to exhibit the triangle wave. That simulation is not part of the bet.

Since this was written by R. Gill, in a language which lots of people don't know, Joy Christian may well be very suspicious. We need other people to look at it, and independent translations into other languages by people he trusts, so that yet more people can check and double-check that the algorithm is OK and that all implementations are equivalent.

When we do the real experiment, we will end up with two computer files looking just like the two files which are analysed by this code. Two test files are on the internet:
http://www.math.leidenuniv.nl/~gill/AliceDirections.txt
http://www.math.leidenuniv.nl/~gill/BobDirections.txt

Code:
## I use mathematician's notation: theta is the azimuthal angle, phi is the polar
## (aka zenith) angle; both measured in radians.
## Reference: https://en.wikipedia.org/wiki/Spherical_coordinates
## Since my measurement directions all lie in equatorial plane I just extract "theta"


AliceDirections <-
    read.table("http://www.math.leidenuniv.nl/~gill/AliceDirections.txt")
   
names(AliceDirections) <- c("theta", "phi")

head(AliceDirections) ## N pairs theta, phi (N rows, 2 columns)

NAlice <- nrow(AliceDirections)

NAlice

AliceTheta <- AliceDirections$theta # Alice's azimuthal angles

head(AliceTheta)

BobDirections <-
    read.table("http://www.math.leidenuniv.nl/~gill/BobDirections.txt")
   
names(BobDirections) <- c("theta", "phi")

head(BobDirections)

NBob <- nrow(BobDirections)

NBob

if (NAlice != NBob) print("Error: particle numbers don't match") else
    print("Go ahead!")

BobTheta <- BobDirections$theta  # Bob's azimuthal angles

head(BobTheta)

## First pair of measurement directions

Alpha <- 0 * pi / 180
Beta <- 45 * pi / 180
A <- sign(cos(AliceTheta - Alpha))
B <- - sign(cos(BobTheta - Beta))
E11 <- mean(A * B)

## Second pair of measurement directions

Alpha <- 0 * pi / 180
Beta <- 135 * pi / 180
A <- sign(cos(AliceTheta - Alpha))
B <- - sign(cos(BobTheta - Beta))
E12 <- mean(A * B)

## Third pair of measurement directions

Alpha <- 90 * pi / 180
Beta <- 45 * pi / 180
A <- sign(cos(AliceTheta - Alpha))
B <- - sign(cos(BobTheta - Beta))
E21 <- mean(A * B)

## Fourth pair of measurement directions

Alpha <- 90 * pi / 180
Beta <- 135 * pi / 180
A <- sign(cos(AliceTheta - Alpha))
B <- - sign(cos(BobTheta - Beta))
E22 <- mean(A * B)

CHSH <- E12 - E11 - E21 - E22

CHSH

if (CHSH > 2.4) print("Congratulations, Joy") else
    print("Congratulations, Richard")

## Another experiment

AliceTheta <- runif(1000, 0, 360) * pi / 180
BobTheta <- - AliceTheta

Delta <- seq(from = 0, to = 360, by = 10) * pi / 180
Correlation <- numeric(length(Delta))
A <- sign(cos(AliceTheta))
i <- 0
for (delta in Delta) {
    i <- i+1
    B <- - sign(cos(BobTheta - delta))
    Correlation[i] <- mean(A * B)
}
plot(Correlation)

Re: A silly computer experiment ... or, the heart of the mat

Postby FrediFizzx » Wed Apr 09, 2014 11:47 pm

Mathematica version notebook file from John Reed. If you download and install the Wolfram CDF Player, you can open the file directly to view it and/or print it.

http://www.wolfram.com/cdf-player/

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Thu Apr 10, 2014 2:37 am

Splendid, Fred and John Reed!

That's a big download (CDF player).

In the meantime I made a little script to draw the correlation surfaces E(a, b) and just four points on them: (a) under QM and/or Joy's model, (b) under the traditional "best" LHV model.

The point being: a CHSH-style experiment tests four points on the surface. Let's forget the word "bound" and the word "inequality". They lead to endless misunderstanding.

But please do let's realize that an experiment is always subject to experimental error, statistical error. We don't determine those four points exactly, but only approximately. If N = 100 (per point) the error will be about 0.1. Not good. If N = 10 000 (per point) the error will be about 0.01. Plenty good enough.
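
A quick way to see that 1/sqrt(N) behaviour in R (a sketch only: the helper function se.of.E is mine, and it simply re-estimates a single correlation many times at two sample sizes and reports the spread):

Code:
se.of.E <- function(N, reps = 1000) {
    estimates <- replicate(reps, {
        lambda <- runif(N, 0, 360)
        A <- sign(cos((0 - lambda)*pi/180))
        B <- - sign(cos((45 - lambda)*pi/180))
        mean(A * B)
    })
    sd(estimates)   # standard error of one estimated correlation
}
se.of.E(100)      # roughly 0.1
se.of.E(10000)    # roughly 0.01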

http://rpubs.com/gill1109/Wireframe

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Thu Apr 10, 2014 8:45 am

Zen wrote:Ok. Here is the other translation to Perl. But with one condition: you have to give a direct answer to a simple question.

Code:
#!/usr/bin/perl

use strict;
use LWP::Simple;
use constant PI => 3.14159265358979;

my @alice = map { $_->[0] }
              map { [ split /\s/, $_ ] }
                split /\n/, get "http://www.math.leidenuniv.nl/~gill/AliceDirections.txt";

my @bob = map { $_->[0] }
              map { [ split /\s/, $_ ] }
                  split /\n/, get "http://www.math.leidenuniv.nl/~gill/BobDirections.txt";
                 
die "Different sample sizes!\n" unless $#alice = $#bob;

my $CHSH = E(\@alice, \@bob, 0, 135) - E(\@alice, \@bob, 0, 45)
         - E(\@alice, \@bob, 90, 45) - E(\@alice, \@bob, 90, 135);

print "CHSH = $CHSH\n";

exit 1;

sub E {
    my ($alice, $bob, $a, $b) = @_;
    $a = $a * PI / 180;
    $b = $b * PI / 180;
    my $corr = 0;
    for (my $i = 0; $i < @$alice; $i++) {
        $corr += sign(cos($alice->[$i] - $a)) * - sign(cos($bob->[$i] - $b));
    }
    return $corr / @$alice;   
}

sub sign { $_[0] >= 0 ? 1 : - 1; }


That gives the following error on my MacBook Pro:

Code:
Can't locate LWP/Simple.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-thread-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-thread-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-thread-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at test3.pl line 4.
BEGIN failed--compilation aborted at test3.pl line 4.

So I have to get some perl package called LWP ... ? That's always a headache.

... OK googled, switched to linux, installed libwww-perl library, it works.

What's the question which has to be answered?

And how to install libwww-perl on a stock OS X 10.9? (I do already have the developer tools installed, including the command line tools, so I can at least call perl on the command line ...).
Ah this worked: http://triopter.com/archive/how-to-install-perl-modules-on-mac-os-x-in-4-easy-steps/

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Thu Apr 10, 2014 2:09 pm

Code:
import numpy, itertools

EAB = []
lmda = numpy.random.uniform(0, 360., 10000) # lambda is python keyword
for alpha, beta in itertools.product((0,90), (45,135)):
    A = numpy.sign(numpy.cos(numpy.radians(alpha-lmda)))
    B = -numpy.sign(numpy.cos(numpy.radians(alpha-lmda)))
    EAB.append( (A*B).mean())
print "E12 - E11 - E21 - E22 = %0.5f" % (EAB[1] - EAB[0] - EAB[2] - EAB[3])

EAB = []
for alpha, beta in itertools.product((0,90), (45,135)):
    lmda = numpy.random.uniform(0, 360., 10000) # lambda is python keyword
    A = numpy.sign(numpy.cos(numpy.radians(alpha-lmda)))
    B = -numpy.sign(numpy.cos(numpy.radians(alpha-lmda)))
    EAB.append( (A*B).mean())
print "E12 - E11 - E21 - E22 = %0.5f" % (EAB[1] - EAB[0] - EAB[2] - EAB[3])

Results:
Code:
E12 - E11 - E21 - E22 = 2.00000

E12 - E11 - E21 - E22 = 2.00000

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Thu Apr 10, 2014 7:34 pm

Michel, you can email me when you've done the experiment. Tell me what you saw.

What it is all about.

Look at these two pictures.

The surfaces are theoretical correlation functions rho(a, b).

The points are theoretical correlations - four of them, according to two different theories.

A typical CHSH style experiment measures four points, generating four observed correlations E(a, b), not shown in the images.

Are they close to the blue points or to the red points?

[Two images: the blue and red correlation surfaces rho(a, b), each with its four points marked.]

The blue points are on the blue surface, the red points are on the red surface. The two surfaces are bloody close to one another...

The images are made by the R script http://rpubs.com/gill1109/Wireframe
I'm investigating even better ways to visualise this. Help and advice are welcome. Maybe it's easier to do with Python ...

See: no inequality, no bound, no violation. Just: which picture is the true picture, focussing for this question on four sensitive points. Four, because on the one hand we want to look at as few points as possible where the difference between the surfaces is large, so the number should be as small as possible; yet on the other hand we need to vary both Alice's and Bob's settings independently of one another, so the number has to be at least 2 x 2 = 4. The four points which are traditionally chosen, the set {0, 90} x {45, 135}, give maximal statistical discrimination for a given finite number of pairs of particles, i.e. the smallest chance of coming to the wrong conclusion either way round (whatever is actually the true state of affairs).
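
To see in R how those four points separate the two surfaces, one can simply evaluate both theoretical correlation functions there (a sketch; the names rho.QM and rho.LHV are mine: the negative cosine of QM / Joy's model and the triangle wave of the traditional "best" LHV model):

Code:
rho.QM  <- function(a, b) - cos((a - b)*pi/180)      # negative cosine
rho.LHV <- function(a, b) {                          # triangle wave
    d <- abs(a - b) %% 360
    d <- ifelse(d > 180, 360 - d, d)                 # reduce to [0, 180]
    2*d/180 - 1                                      # -1 at 0 degrees, +1 at 180
}
settings <- expand.grid(alpha = c(0, 90), beta = c(45, 135))  # the four points
cbind(settings,
      QM  = with(settings, rho.QM(alpha, beta)),
      LHV = with(settings, rho.LHV(alpha, beta)))
## QM gives +/- 0.707 at each of the four points, LHV gives +/- 0.5;
## hence CHSH = 2 sqrt 2 versus CHSH = 2.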

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Thu Apr 10, 2014 7:46 pm

gill1109 wrote:Michel, you can email me when you've done the experiment. Tell me what you saw.

Didn't you see the code I posted above? It's as irrelevant and pointless as I suspected. Now here is an experiment for you. You can translate it to R if you like.
Code:
import numpy, itertools

N = 1000000

i = (1-numpy.cos(numpy.pi/8))/4
j = (1-numpy.cos(3*numpy.pi/8))/4

# Single Set of Particles
print
print "Mutually dependent correlations:"
A,B = numpy.random.choice([-1, 1], p=(1-i, i), size=(2,N))
C,D = numpy.random.choice([-1, 1], p=(1-i, i), size=(2,N))

S = A*B - A*D + C*B + C*D
EM = [(A*B).mean() , -(A*D).mean() , (C*B).mean() , (C*D).mean()]
print "Upper Bound:  AB  -  AD  +  CB  +  CD <= % 0.3f" % (S.max())
print "Exp Average: <AB> - <AD> + <CB> + <CD> = % 0.3f" % (sum(EM))

# 4 Separate Sets of particles
print
print "Mutually independent correlations:"
A1, B1 = numpy.random.choice([-1, 1], p=(1-i, i), size=(2,N))
A2, D2 = numpy.random.choice([-1, 1], p=(1-j, j), size=(2,N))
C3, B3 = numpy.random.choice([-1, 1], p=(1-i, i), size=(2,N))
C4, D4 = numpy.random.choice([-1, 1], p=(1-i, i), size=(2,N))
S = A1*B1 - A2*D2 + C3*B3 + C4*D4
EM = [(A1*B1).mean() , -(A2*D2).mean() , (C3*B3).mean() , (C4*D4).mean()]
print "Upper Bound: A1B1 - A2D2 + C3B3 + C4D4 <= % 0.3f" % (S.max())
print "Exp Average: <AB> - <AD> + <CB> + <CD> = % 0.3f" % (sum(EM))

print
print "Mutually dependent correlations using only A1, B1, C4, D4: (same data)"
S = A1*B1 - A1*D4 + C4*B1 + C4*D4
EM = [(A1*B1).mean() , -(A1*D4).mean() , (C4*B1).mean() , (C4*D4).mean()]
print "Upper Bound: A1B1 - A1D4 + C4B1 + C4D4 <= % 0.3f" % (S.max())
print "Exp Average: <AB> - <AD> + <CB> + <CD> = % 0.3f" % (sum(EM))

print
print "Mutually dependent correlations using only A2, D2, C3, B3: (same data)"
S = A2*B3 - A2*D2 + C3*B3 + C3*D2
EM = [(A2*B3).mean() , -(A2*D2).mean() , (C3*B3).mean() , (C3*D2).mean()]
print "Upper Bound: A2B3 - A2D3 + C3B3 + C3D2 <= % 0.3f" % (S.max())
print "Exp Average: <AB> - <AD> + <CB> + <CD> = % 0.3f" % (sum(EM))

print
print "Mutually dependent correlations using only A1, B3, C4, D2: (same data)"
S = A1*B3 - A1*D2 + C4*B3 + C4*D2
EM = [(A1*B3).mean() , -(A1*D2).mean() , (C3*B3).mean() , (C3*D2).mean()]
print "Upper Bound: A1B3 - A1D2 + C3B3 + C3D2 <= % 0.3f" % (S.max())
print "Exp Average: <AB> - <AD> + <CB> + <CD> = % 0.3f" % (sum(EM))


Results:
Code:
Mutually dependent correlations:
Upper Bound:  AB  -  AD  +  CB  +  CD <=  2.000
Exp Average: <AB> - <AD> + <CB> + <CD> =  1.850

Mutually independent correlations:
Upper Bound: A1B1 - A2D2 + C3B3 + C4D4 <=  4.000
Exp Average: <AB> - <AD> + <CB> + <CD> =  2.297

Mutually dependent correlations using only A1, B1, C4, D4: (same data)
Upper Bound: A1B1 - A1D4 + C4B1 + C4D4 <=  2.000
Exp Average: <AB> - <AD> + <CB> + <CD> =  1.849

Mutually dependent correlations using only A2, D2, C3, B3: (same data)
Upper Bound: A2B3 - A2D2 + C3B3 + C3D2 <=  2.000
Exp Average: <AB> - <AD> + <CB> + <CD> =  1.777

Mutually dependent correlations using only A1, B3, C4, D2: (same data)
Upper Bound: A1B3 - A1D2 + C3B3 + C3D2 <=  2.000
Exp Average: <AB> - <AD> + <CB> + <CD> =  1.849


You don't have to guess what it means, let me tell you: it shows that for the same data, mutually independent terms can violate the CHSH while mutually dependent terms from the exact same data source do not. Now I encourage you to study my coin toss example and try to understand it.

You can't eliminate "bound" from your vocabulary without at the same time eliminating "violate". I'm happy that you are finally agreeing to my suggestion of focusing on reproducing the QM expectation values from different sets of particles and forgetting about the CHSH.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Thu Apr 10, 2014 9:29 pm

Zen wrote:Michel,

I don't have Python installed here, but from the results you posted your Python translation of Richard's first two R scripts (which I translated to Perl) doesn't give the same results as the R and Perl codes? You get the same CHSH = 2.0 for both the dependent and independent cases?

Best,

Zen.


Zen, why oh why is Michel looking at the maximum? He should draw histograms of the results of the two cases. The point is to *see* the results. All of them. A histogram of a few hundred would do fine.

I think he doesn't realise that when we do experiments, if we say "violate a bound" we really mean "statistically violate a bound" or "statistically significant violation of a bound" ... and even those words need to be expanded, because people who have never heard of standard errors or p-values won't have any idea what it is supposed to mean.

I think everyone has already taken Michel's point that in experiments, one can easily observe values of CHSH larger than 2. In fact this is well known and has been well known for years and years and years. In a CHSH experiment one shows that one sees correlations close to the negative cosine, and not close to the triangle wave. One looks at just a few points on the curve ... more precisely, one looks at four points on the surface. Is the calculated - observed - measured E(a, b) close to the theoretical rho(a, b) of quantum theory, or to the theoretical rho(a, b) of the standard local hidden variables model?

When we talk about experiments, we need a vocabulary with words like "statistically significant", "standard error", "p-value", "goodness-of-fit", "type 1 error", "type 2 error", "discriminatory power"... We need to distinguish population from sample. Theory from experiment.
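
For concreteness, here is a rough R sketch of what a "statistically significant violation" would look like numerically (the observed CHSH value 2.7 and the count N are hypothetical, and a plain normal approximation is used):

Code:
## Each of the four correlations is a mean of N numbers in {-1, +1}, so its
## standard error is at most 1/sqrt(N); the CHSH sum then has standard error
## at most 2/sqrt(N).
N <- 10000                     # pairs per pair of settings (hypothetical)
CHSH.observed <- 2.7           # hypothetical observed value
se <- 2 / sqrt(N)
z  <- (CHSH.observed - 2) / se # number of standard errors above 2
z
1 - pnorm(z)                   # approximate one-sided p-value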

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Thu Apr 10, 2014 10:24 pm

Zen wrote:Michel,

I don't have Python installed here, but from the results you posted your Python translation of Richard's first two R scripts (which I translated to Perl) doesn't give the same results as the R and Perl codes? You get the same CHSH = 2.0 for both the dependent and independent cases?

Best,

Zen.

That's because I made a typo. Notice that beta is not being used. Given that no results were presented it is easy to make mistakes. I will fix it soon but it doesn't change my opinion about it one bit.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Thu Apr 10, 2014 10:35 pm

Zen wrote:Hi Michel,

I've found out why the Python code was giving the wrong results. You've made a typo. It should be:

Code:
B = -numpy.sign(numpy.cos(numpy.radians(beta - lmda)))


You had retyped alpha instead of beta when computing B.

Best,

Zen.

P.S. Did you run it, Rick?

Not yet, Zen. (Michel writes beautiful code, no doubt about that). I want to see a histogram of the results. I'm not interested in the maximum over many many repetitions. Can you draw histograms in Python?

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Thu Apr 10, 2014 11:00 pm

Zen wrote: Rick: I don't think you can quickly explain (to a non-statistician) the concept of a "sampling distribution of some statistic". Remember the experience with your students... Plotting histograms in Python is done with the library matplotlib. Just import it.

P.S. Yeah, Michel's code reads like poetry.

Thanks for the compliment.

Zen,
Do you believe it is possible to produce a 4xN spreadsheet of any size, from any source, introducing as much error as one likes, which produces a CHSH value above 2 ("statistically") by even 0.000000000001?

Richard,
Thanks for the compliment too. I'll post some histograms tomorrow for both your experiment and mine. Note: I did not use maximum values in your experiment. I presented them in mine alongside the averages to clearly show which inequality applies in what scenario, all determined from the data.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 12:45 am

minkwe wrote:
Zen wrote: Rick: I don't think you can quickly explain (to a non-statistician) the concept of a "sampling distribution of some statistic". Remember the experience with your students... Plotting histograms in Python is done with the library matplotlib. Just import it.

P.S. Yeah, Michel's code reads like poetry.

Thanks for the compliment.

Zen,
Do you believe it is possible to produce a 4xN spreadsheet of any size, from any source, introducing as much error as one likes, which produces a CHSH value above 2 ("statistically") by even 0.000000000001?

Richard,
Thanks for the compliment too. I'll post some histograms tomorrow for both your experiment and mine. Note: I did not use maximum values in your experiment. I presented them in mine alongside the averages to clearly show which inequality applies in what scenario, all determined from the data.


Looking forward to seeing your histograms, Michel. We are making progress.

Starting with the 4xN spreadsheet of values of A, A', B, B' it is of course impossible to get above 2, when all four correlations are based on all N rows.

If however the rows are randomly split into four sub-groups of roughly equal size, and each correlation is based on a different subgroup, then almost anything can happen. However, most splittings will generate CHSH values which are not much larger than 2. The word "most" can be quantified in probability terms, where the probability depends also on how you quantify "much larger". It also depends on N, of course. See Theorem 1 of my paper for a very precise statement.
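
A minimal R sketch of that contrast (my own illustration, not the code from the paper; the +/- 1 entries of the spreadsheet are just generated at random):

Code:
N <- 10000
sheet <- as.data.frame(matrix(sample(c(-1, 1), 4*N, replace = TRUE), ncol = 4))
names(sheet) <- c("A", "Ap", "B", "Bp")

## Row by row, A*Bp - A*B - Ap*B - Ap*Bp is always -2 or +2, so its mean over
## all N rows (the "all rows" CHSH) can never exceed 2 in absolute value.
rowvals <- with(sheet, A*Bp - A*B - Ap*B - Ap*Bp)
table(rowvals)
mean(rowvals)

## Base each correlation on its own random quarter of the rows: no algebraic
## bound of 2 applies any more (though for random entries like these the value
## will usually still be far below 2).
grp <- sample(rep(1:4, length.out = N))
with(sheet, mean(A[grp == 1]*Bp[grp == 1]) - mean(A[grp == 2]*B[grp == 2]) -
            mean(Ap[grp == 3]*B[grp == 3]) - mean(Ap[grp == 4]*Bp[grp == 4]))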

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sat Apr 12, 2014 8:44 am

Richard, I won't produce any histograms until you answer the question:
Do you believe it is possible to produce a 4xN spreadsheet of any size, from any source, introducing as much error as one likes, which produces a CHSH value above 2 ("statistically") by even 0.000000000001?
If you cannot answer this simple question I've asked you many times, you can produce your own histograms and present them. For some reason Zen did not want to answer it either. You would think two mathematical statisticians familiar with the CHSH would know the answer immediately.

If you answer that question, then we would have made progress.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 8:47 am

minkwe wrote: Richard, I won't produce any histograms until you answer the question:
Do you believe it is possible to produce a 4xN spreadsheet of any size, from any source, introducing as much error as one likes, which produces a CHSH value above 2 ("statistically") by even 0.000000000001?
If you cannot answer this simple question I've asked you many times, you can produce your own histograms and present them. For some reason Zen did not want to answer it either. You would think two mathematical statisticians familiar with the CHSH would know the answer immediately.
If you answer that question, then we would have made progress.

I've answered that question time and time again. My answer has always stayed the same. It's even written out in my famous paper as "Fact 2", in the simple purely mathematical Section 2, on page 4 of the current version (version 4).

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sat Apr 12, 2014 8:49 am

I guess you don't want to answer it then. Too bad.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 8:51 am

minkwe wrote:I guess you don't want to answer it then. Too bad.

Well we just discovered that you never got past page three of my paper.

My answer to your question is the same as ever: "no".

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 8:54 am

gill1109 wrote:Starting with the 4xN spreadsheet of values of A, A', B, B' it is of course impossible to get above 2, when all four correlations are based on all N rows.


Why did you ask the question? Can't you read?

Now please ask your next question. This is getting exciting, we are getting to the heart of the matter...
