FrediFizzx wrote:Joy Christian wrote:OK, now we are well and truly done. I have revised my simulation again: http://rpubs.com/chenopodium/joychristian.
The initial state of the system (e_o, theta_o) is derived in http://libertesphilosophica.info/blog/w ... mplete.pdf, and the choice of the initial function f(theta_o) is
f(theta_o) = (1/2) sin^2(theta_o).
What is different in this version is the range of theta_o. This time any remaining wrinkles in the correlation are indeed well within the accuracy of the simulation.
I ran it in Mathematica with those parameters, 5 million trials. It's still off. The straighter part of the curve is not coming in now.
Zen wrote:So Joy is adjusting by hand the distribution of Theta to "fit" the quantum mechanical result. I've made clear in my comments above that, in my opinion, these simulations do not correspond to a local realist model, but I've noticed something funny anyway. Instead of tweaking the distribution of Theta, as Joy is doing, and transforming by f, we can tweak the distribution of s directly as a Beta(1/2, 1/2), achieving similar results. To do that, in this code
http://rpubs.com/chenopodium/joychristian
just replace (comment or delete) the lines
theta <- runif(M, 0.129153, pi/2.38548) ## Christian and Minkwe's theta_0
s <- (sin(theta)^2)/2
by
s <- rbeta(M, 1/2, 1/2)
Why is this curious and funny? Because it is well known that in the Bayesian analysis of the binomial model, Beta(1/2, 1/2) is the "noninformative" (Jeffreys) prior distribution for the parameter of the model, and it kind of works here.
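Zen's substitution has a neat analytic basis worth noting: if theta were uniform on the full interval (0, pi/2), then sin^2(theta) would follow exactly the arcsine law Beta(1/2, 1/2); the restricted range and the factor 1/2 in Joy's code are what make the two recipes differ. A rough Python translation of the two sampling recipes (the variable names and the side-by-side comparison are illustrative, not part of either author's code):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100_000

# Joy's construction, translated from the R lines above:
# theta uniform on the quoted range, then s = sin(theta)^2 / 2.
theta = rng.uniform(0.129153, np.pi / 2.38548, size=M)
s_joy = np.sin(theta) ** 2 / 2

# Zen's substitution: s drawn directly from Beta(1/2, 1/2)
# (the arcsine distribution, Jeffreys prior for the binomial model).
s_beta = rng.beta(0.5, 0.5, size=M)

# Note: with theta uniform on the FULL interval (0, pi/2), sin(theta)^2
# would be exactly Beta(1/2, 1/2); the restricted range and the factor
# 1/2 make the two distributions differ in support and shape.
print(round(s_joy.mean(), 3), round(s_beta.mean(), 3))
```

The two samples are not identical in distribution (s_joy lives inside [0, 1/2], s_beta on [0, 1]), which is presumably why the substitution only "kind of works."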
Zen wrote:Thanks, Richard. By the way, you know that Bernstein's theorem tells us that we can represent any continuous distribution with support [0,1] by a sufficiently large mixture of betas. Therefore, the "magic" distribution can be approximated by such a mixture. It's a kind of 21st-century epicycle system.
Have you coded the simulation using the results from Pearle?
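Zen's Bernstein remark can be made concrete: for any continuous CDF F on [0,1], the mixture with weights w_k = F((k+1)/(n+1)) − F(k/(n+1)) over Beta(k+1, n−k+1) components converges to F as n grows. A minimal Python sketch; the target F(x) = x^2 is an arbitrary stand-in chosen for illustration, not the "magic" distribution from the thread:

```python
import numpy as np

rng = np.random.default_rng(1)

# Any continuous CDF on [0, 1]; F(x) = x^2 is just a concrete stand-in.
def F(x):
    return np.asarray(x) ** 2

n = 50                                   # mixture has n + 1 beta components
k = np.arange(n + 1)
# Bernstein weights: w_k = F((k+1)/(n+1)) - F(k/(n+1)); they telescope to 1.
w = F((k + 1) / (n + 1)) - F(k / (n + 1))

M = 200_000
comp = rng.choice(n + 1, size=M, p=w / w.sum())  # pick a component
samples = rng.beta(comp + 1, n + 1 - comp)       # draw Beta(k+1, n-k+1)

# The empirical CDF of the mixture should track F closely.
grid = np.linspace(0.05, 0.95, 10)
emp = np.array([(samples <= g).mean() for g in grid])
print(np.max(np.abs(emp - F(grid))))
```

With n = 50 the maximum CDF discrepancy is already well below 0.02, which is the "epicycle" point: enough beta components can mimic any such distribution.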
Zen wrote:Joy and Richard,
Would you please replace
s <- rbeta(M, 1/2, 1/2)
with
s <- rbeta(M, 1.29, 4.56)
to check if there is a better fit?
Thank you very much.
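The two beta proposals in this thread have quite different shapes, so the requested check is not a minor tweak. A quick side-by-side of their means (this comparison is illustrative only, not part of the simulation code):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 200_000

# The two beta proposals from this thread, side by side.
s_jeffreys = rng.beta(0.5, 0.5, size=M)   # Jeffreys prior, Beta(1/2, 1/2)
s_fitted = rng.beta(1.29, 4.56, size=M)   # Zen's fitted alternative

# Analytic mean of Beta(a, b) is a / (a + b):
# Beta(1/2, 1/2) -> 0.5; Beta(1.29, 4.56) -> 1.29 / 5.85, about 0.22.
print(round(s_jeffreys.mean(), 2), round(s_fitted.mean(), 2))
```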
minkwe wrote:Interesting discussions... Maybe we should be looking for a scale factor for |C|.
Joy Christian wrote:minkwe wrote:Interesting discussions... Maybe we should be looking for a scale factor for |C|.
My worry in looking for a scale factor for |C| is that it might mess up the perfect anti-correlation requirement at equal settings. It is not good enough to have 98% anti-correlation at equal settings. That violates what is predicted by QM (I know---operationally that might not be a problem, but logically it is a problem).
minkwe wrote:Joy Christian wrote:minkwe wrote:Interesting discussions... Maybe we should be looking for a scale factor for |C|.
My worry in looking for a scale factor for |C| is that it might mess up the perfect anti-correlation requirement at equal settings. It is not good enough to have 98% anti-correlation at equal settings. That violates what is predicted by QM (I know---operationally that might not be a problem, but logically it is a problem).
I don't think it would, because the anti-correlation is based only on the sign of C, not its magnitude.
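minkwe's point is easy to verify numerically: multiplying by a positive constant never flips a sign, so any outcome rule that depends only on sign(C) is untouched by a positive scale factor. A minimal sketch (C here is a generic stand-in array, not the model's actual quantity):

```python
import numpy as np

rng = np.random.default_rng(2)

# A stand-in correlation-like quantity C for each trial (illustrative only).
C = rng.normal(size=10_000)

k = 3.7  # any positive scale factor
# Scaling C by a positive constant never changes its sign, so outcomes
# decided by sign(C) alone -- including perfect anti-correlation at
# equal settings -- are unchanged.
assert np.array_equal(np.sign(k * C), np.sign(C))
print("sign pattern preserved under positive scaling")
```

Joy's worry would only apply if the scale factor could be negative or zero for some trials.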
Joy Christian wrote:Have you looked at the latest version of my simulation yet? http://rpubs.com/chenopodium/joychristian
It does not seem to work as well in Mathematica as in R. I wonder whether it would work in Python.
Heinera wrote:But even with the second (A, B) approach, the 2D and 3D generalizations are sufficiently different that the same input formula will give different correlations, as previous posters have discovered. This is why Fred and minkwe are now getting worse results; they run the 2D model (in Mathematica and Python, respectively), while Christian fine-tuned this with Richard's 3D generalization.
Joy Christian wrote:Heinera wrote:But even with the second (A, B) approach, the 2D and 3D generalizations are sufficiently different that the same input formula will give different correlations, as previous posters have discovered. This is why Fred and minkwe are now getting worse results; they run the 2D model (in Mathematica and Python, respectively), while Christian fine-tuned this with Richard's 3D generalization.
This is not quite correct. As you yourself observed earlier, it is the range of theta_o that is responsible for the difference in the 2D and 3D versions of the Fodje simulation. The input formula, f(theta_o), is the same in both versions, at least in the latest attempts of the simulation.
Heinera wrote:Joy Christian wrote:Heinera wrote:But even with the second (A, B) approach, the 2D and 3D generalizations are sufficiently different that the same input formula will give different correlations, as previous posters have discovered. This is why Fred and minkwe are now getting worse results; they run the 2D model (in Mathematica and Python, respectively), while Christian fine-tuned this with Richard's 3D generalization.
This is not quite correct. As you yourself observed earlier, it is the range of theta_o that is responsible for the difference in the 2D and 3D versions of the Fodje simulation. The input formula, f(theta_o), is the same in both versions, at least in the latest attempts of the simulation.
I assume that both Fred and Fodje updated the range of theta_o to match your new values, ran the simulation, and got different results than you. Or did they do something else?