Guest wrote:You guys keep saying in this thread that some expectations are "statistically independent". That's mathematical non-sense. It's well known that two events A and B are statistically independent if and only if P(A and B) = P(A) P(B). Also, two random variables X and Y are statistically independent if and only if P(X in A, Y in B) = P(X in A) P(Y in B) for all A and B. But the expectation E[Z] of some random variable Z is a real number. It does not make sense to say that two expectations E[Z] and E[W] are statistically independent. If you don't believe what I'm saying, please post a link to a single probability book which gives a definition of "statistically independent expectations". C'mon, guys.
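[Editorial illustration] The product-rule definition quoted above can be checked by exact enumeration. A minimal Python sketch; the dice events are an illustrative example, not from the thread: A = "first die is even", B = "second die shows more than 4".

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))

A = {(x, y) for (x, y) in outcomes if x % 2 == 0}   # first die even
B = {(x, y) for (x, y) in outcomes if y > 4}        # second die > 4

P_A = p * len(A)        # 1/2
P_B = p * len(B)        # 1/3
P_AB = p * len(A & B)   # 1/6

# Statistical independence of events: P(A and B) == P(A) * P(B)
assert P_AB == P_A * P_B
```

Exact fractions are used so the product rule holds with equality, not just numerically.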
Heinera wrote:So, you guys, I will be on an expedition for the next couple of days where I probably will not have much internet access. In the meantime, I suggest you look at my simple model http://rpubs.com/heinera/16727. This model reproduces QM exactly, and no loopholes are exploited. Unfortunately it is non-local. But, given its simplicity, I trust it will be an easy task to convert it into a local model. See you on Wednesday, and I expect that by then one of you will have made a local version available. Sayonara!
FrediFizzx wrote:I don't see any advantage in trying to make this simulation into a local model. But I did modify it a bit in R Studio to better show how the dependence of the inequality's terms works.
Code:
set.seed(9875)
## For reproducibility.
M <- 10^5 ## Sample size.
## Use the same, single sample of 'M' realizations of hidden variables for
## all four correlations
## The hidden variable is just a random number t between -1 and 1. We send t
## to Alice and -t to Bob.
t <- runif(M, -1, 1)
## At both Alice's and Bob's stations the same model is used to determine
## outcomes. We put this into a function obs(hv), that takes the hidden
## variable as the argument and returns -1 or 1.
obs <- function(hv) {
  s <- sign(hv)
  hv <- abs(hv)
  ## Since the following variable L depends on both settings a and b, the
  ## model is blatantly non-local:
  L <- (1 + sum(a * b))/4
  if (hv < L) {
    o <- s
  } else if (hv < 2 * L) {
    o <- -s
  } else if (hv < L + 0.5) {
    o <- -1
  } else {
    o <- 1
  }
  return(o)
}
## Now we compute the four correlations in the CHSH inequality:
alpha <- 0
beta <- 45
a <- c(cos(alpha * pi/180), sin(alpha * pi/180))
b <- c(cos(beta * pi/180), sin(beta * pi/180))
## We generate the list of observations by applying the obs function to each
## element in the list of hidden variables. First for Alice:
ca1 <- sapply(t, obs)
## Then for Bob:
cb1 <- sapply(-t, obs)
E11 <- mean(ca1 * cb1)
E11
[1] -0.70528
alpha <- 0
beta <- 135
a <- c(cos(alpha * pi/180), sin(alpha * pi/180))
b <- c(cos(beta * pi/180), sin(beta * pi/180))
ca2 <- ca1 ## This is the same as the A in <AB> being the same as the A in <AB'>
cb2 <- sapply(-t, obs)
E12 <- mean(ca2 * cb2)
E12
[1] -0.00296
alpha <- 90
beta <- 45
a <- c(cos(alpha * pi/180), sin(alpha * pi/180))
b <- c(cos(beta * pi/180), sin(beta * pi/180))
ca3 <- sapply(t, obs)
cb3 <- cb1 ## This is the same as the B in <AB> being the same as the B in <A'B>
E21 <- mean(ca3 * cb3)
E21
[1] -0.70528
alpha <- 90
beta <- 135
a <- c(cos(alpha * pi/180), sin(alpha * pi/180))
b <- c(cos(beta * pi/180), sin(beta * pi/180))
ca4 <- ca3 ## The A' from <A'B>
cb4 <- cb2 ## The B' from <AB'>
E22 <- mean(ca4 * cb4)
E22
[1] -0.00296
## CHSH expression
-E11 + E12 - E21 - E22
[1] 1.41056
From this, it is obvious that dependent expectation terms are not going to work to violate CHSH.
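[Editorial illustration] For readers without R, the listing above can be transcribed to Python/NumPy. This is a sketch, not the original: NumPy's RNG is not R-compatible, so the printed numbers will not reproduce exactly, but with M = 10^5 the dependent (recombined) CHSH combination lands near √2 ≈ 1.414, while recomputing every term with a fresh pass of the model, as in Heinera's original recipe, lands near 2√2 ≈ 2.828.

```python
import numpy as np

def obs(hv, L):
    # Port of the R obs(): threshold |hv| against L = (1 + a.b)/4.
    # L depends on BOTH settings, which is the non-local ingredient.
    s = np.sign(hv)
    u = np.abs(hv)
    return np.where(u < L, s,
           np.where(u < 2 * L, -s,
           np.where(u < L + 0.5, -1.0, 1.0)))

def L_of(alpha, beta):
    # L computed from the two setting vectors, as in the R code.
    a = np.array([np.cos(np.radians(alpha)), np.sin(np.radians(alpha))])
    b = np.array([np.cos(np.radians(beta)), np.sin(np.radians(beta))])
    return (1.0 + a @ b) / 4.0

rng = np.random.default_rng(9875)   # seed kept for flavour; draws differ from R
M = 10**5
t = rng.uniform(-1.0, 1.0, M)       # hidden variable: t to Alice, -t to Bob

# Dependent version (as in the modified listing): outcomes reused across terms.
ca1, cb1 = obs(t, L_of(0, 45)), obs(-t, L_of(0, 45))
cb2 = obs(-t, L_of(0, 135))
ca3 = obs(t, L_of(90, 45))
E11, E12 = np.mean(ca1 * cb1), np.mean(ca1 * cb2)
E21, E22 = np.mean(ca3 * cb1), np.mean(ca3 * cb2)
S_dep = -E11 + E12 - E21 - E22      # stays below 2 (about 1.414 here)

# Independent version (Heinera's original): every term from a fresh pass.
pairs = [(0, 45), (0, 135), (90, 45), (90, 135)]
E = [np.mean(obs(t, L_of(al, be)) * obs(-t, L_of(al, be))) for al, be in pairs]
S_indep = -E[0] + E[1] - E[2] - E[3]   # about 2*sqrt(2), exceeding 2
```

The side-by-side makes the thread's point concrete: the same hidden variables and the same obs() give 1.41 when the terms share outcomes and 2.83 when each term gets its own pass.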
minkwe wrote:Thanks Fred, for doing Heine's homework, the calculation confirms that the sets of outcome pairs are statistically independent. Despite Guest's confusion, the "statistically independent" description is apt.
FrediFizzx wrote:Heinera wrote:So, you guys, I will be on an expedition for the next couple of days where I probably will not have much internet access. In the meantime, I suggest you look at my simple model http://rpubs.com/heinera/16727. This model reproduces QM exactly, and no loopholes are exploited. Unfortunately it is non-local. But, given its simplicity, I trust it will be an easy task to convert it into a local model. See you on Wednesday, and I expect that by then one of you will have made a local version available. Sayonara!
I don't see any advantage in trying to make this simulation into a local model.
Heinera wrote:By the way, your modifications did turn it into a local model; unfortunately, it also changed the correlations so they no longer agree with QM.
FrediFizzx wrote:Heinera wrote:By the way, your modifications did turn it into a local model; unfortunately, it also changed the correlations so they no longer agree with QM.
It is still using your non-local hidden variable, and it is just a "proof" by negation that your simulation doesn't violate CHSH when the expectation terms are dependent. IOW, your expectation terms are independent of each other, so you have shifted to an inequality with a bound of 4 instead of 2. CHSH has a bound of 2. As I said... it is pretty mind-boggling.
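[Editorial illustration] The bound-of-4-versus-2 point can be made mechanical. When the four averages come from one shared dataset (two ±1 columns per side), the CHSH combination equals ±2 row by row, so its average can never exceed 2. Four independently generated datasets are not tied together row by row, so each term can separately be pushed to ±1. A Python sketch of both cases (the constructions are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**4

# One shared dataset: +/-1 columns A, A' for Alice and B, B' for Bob.
A, Ap, B, Bp = (rng.choice([-1.0, 1.0], N) for _ in range(4))

# Row-wise CHSH combination used in the thread: -AB + AB' - A'B - A'B'.
rows = -A * B + A * Bp - Ap * B - Ap * Bp
assert np.all(np.abs(rows) == 2.0)   # every row is exactly +/-2...
S_dep = rows.mean()
assert abs(S_dep) <= 2.0             # ...so the average is bounded by 2

# Independent terms: four unrelated datasets, each tuned to push its
# term to +/-1, so the combination reaches the independent bound of 4.
A1 = np.ones(N); B1 = -A1   # <A1 B1>  = -1
A2 = np.ones(N); B2 = A2    # <A2 B2'> = +1
A3 = np.ones(N); B3 = -A3   # <A3'B3>  = -1
A4 = np.ones(N); B4 = -A4   # <A4'B4'> = -1
S_indep = -np.mean(A1 * B1) + np.mean(A2 * B2) \
          - np.mean(A3 * B3) - np.mean(A4 * B4)
assert S_indep == 4.0
```

The first assertion is the whole argument: the bound of 2 is an identity about shared columns, not a statistical fact.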
Heinera wrote:The hidden variable is local. The nonlocality enters because outcomes in either wing depend directly on the settings from both wings of the simulation. Your modifications broke that dependence, so the model is now local.
FrediFizzx wrote:Heinera wrote:The hidden variable is local. The nonlocality enters because outcomes in either wing depend directly on the settings from both wings of the simulation. Your modifications broke that dependence, so the model is now local.
Take another better look. The a in E11 is the same as the a in E12, etc.
Heinera wrote:FrediFizzx wrote: Take another better look. The a in E11 is the same as the a in E12, etc.
Yes, that's what I'm saying. The outcomes for Alice are no longer influenced by Bob's setting and vice versa, so you turned the model local.
Heinera wrote:The whole difference between a local and a non-local model is that for a local model the a in E11 must be the same as the a in E12, while in a non-local model this need not be the case, since Bob's different setting in E12 can give rise to a different outcome for a.
FrediFizzx wrote:Heinera wrote:The whole difference between a local and a non-local model is that for a local model the a in E11 must be the same as the a in E12, while in a non-local model this need not be the case, since Bob's different setting in E12 can give rise to a different outcome for a.
I think you found your problem. Or maybe you meant something different? To properly adhere to Bell-CHSH with a bound of 2, all models must have the same a in E11 and E12. Even the quantum experiments adhere to that.
Heinera wrote:There is no problem. My model does not respect the bound of 2, because of its non-locality. All local models will have a bound of 2.
Heinera wrote:As shown in post #1, you are calculating the independent terms, which are definitely not the terms in Bell's inequality.
minkwe wrote:He does not deny that Bell's inequality is the following: -<AC> + <AD> - <BC> - <BD> <= 2, which makes use of the four terms <AC>, <AD>, <BC>, <BD>, all defined for the same set of outcomes.
minkwe wrote:He presents a simulation which is equivalent to the terms each calculated from 8 separate columns of outcomes, A1, C1, A2, D2, B3, C3, B4, D4, recombined into 4 independent paired spreadsheets, yielding the 4 terms <A1C1>, <A2D2>, <B3C3>, <B4D4>. He then calculates the expression -<A1C1> + <A2D2> - <B3C3> - <B4D4> and finds that the value is greater than 2. But this is to be expected, because the terms in this expression are statistically independent.
As you can see in every proof of the CHSH or Bell's inequality (https://en.wikipedia.org/wiki/Bell%27s_ ... inequality, for example), there are only 4 columns of data, recombined into pairs such that the cyclic dependency of the paired terms is present. What often happens is that the subscripts are left out, which allows one to later confusingly or intentionally mislead by assuming there are only 4 columns of data in -<A1C1> + <A2D2> - <B3C3> - <B4D4>, since without subscripts all the As, Bs, Cs and Ds look alike and you can simply say <A1C1> = <AC>, <A2D2> = <AD>, etc.
Heinera wrote:With my model, it is impossible to compute a 4xN matrix as defined in Richard Gill's paper.
If you were to come up with a local loophole-free model, such a 4xN matrix would be trivial to construct: for each i, I would simply compute your model four times, changing only the settings and keeping everything else the same.
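[Editorial illustration] The "compute the model four times" recipe can be sketched in Python. The local rule obs_local below is a hypothetical stand-in (no local loophole-free model is actually on the table); the point is only the shape of the construction: each column of the 4xN matrix reuses the same hidden variable, only the setting changes, and any CHSH average taken from such a matrix is automatically bounded by 2.

```python
import numpy as np

def obs_local(lam, setting_deg):
    # Hypothetical LOCAL rule, a stand-in: the outcome depends only on this
    # wing's setting and the hidden variable lam (an angle in [0, 2*pi)).
    return np.where(np.cos(np.radians(setting_deg) - lam) >= 0, 1.0, -1.0)

rng = np.random.default_rng(1)
N = 10**4
lam = rng.uniform(0.0, 2.0 * np.pi, N)   # one shared list of hidden variables

# The 4xN matrix: for each lam_i, evaluate each wing at both of its settings,
# keeping everything else the same.
A  = obs_local(lam, 0)
Ap = obs_local(lam, 90)
B  = obs_local(lam + np.pi, 45)    # Bob's wing receives the "opposite" variable
Bp = obs_local(lam + np.pi, 135)
mat = np.vstack([A, Ap, B, Bp])    # shape (4, N)

# Any CHSH combination drawn from such a matrix is pointwise +/-2, so the
# sample average can never exceed 2:
S = -np.mean(A * B) + np.mean(A * Bp) - np.mean(Ap * B) - np.mean(Ap * Bp)
assert mat.shape == (4, N) and abs(S) <= 2.0
```

With Heinera's obs() this matrix cannot be built, because each outcome would also need the other wing's setting as an input.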