FrediFizzx wrote:Here is the link to the revised paper,
http://dx.doi.org/10.13140/RG.2.2.28311.91047/1
Enjoy!
.
FrediFizzx wrote:Which improvements?
gill1109 wrote:FrediFizzx wrote:Which improvements?
The description of the matching process and the meaning of k_A and k_B is a whole lot better.
FrediFizzx wrote:Slightly revised version of the paper,
EPRsims/Event_by_Event_Numerical_Simulation_of_the_Strong_Singlet_Correlations_rev1.pdf
Also at,
DOI: http://dx.doi.org/10.13140/RG.2.2.28311.91047/2
Enjoy!
gill1109 wrote:FrediFizzx wrote:Slightly revised version of the paper,
EPRsims/Event_by_Event_Numerical_Simulation_of_the_Strong_Singlet_Correlations_rev1.pdf
Also at,
DOI: http://dx.doi.org/10.13140/RG.2.2.28311.91047/2
Enjoy!
I enjoyed this. You write, explaining displayed formula (9): "For this purpose we will use the following discrete version of the expectation function (6) assuming uniform probability distribution p(λ) = 1".
This is not a discrete version of the expectation function. It is (for each n) the empirical distribution that puts probability mass 1/n on each of n independent, identically distributed realised values lambda_1, lambda_2, ..., lambda_n drawn from the probability distribution rho(lambda). You are using the strong law of large numbers: empirical averages converge to theoretical expectation values. By that theorem, expression (9) is, with probability one, equal to the usual expression: the integral of A(a, lambda) B(b, lambda) rho(lambda) d lambda, i.e. the displayed formula (6).
At least, (6) and (9) are identical if the lambda_k are indeed independent and identically distributed realisations of values of lambda taken from a fixed probability distribution rho (which does not depend on a and b).
I don't see the point of using an explicit limit of empirical averages instead of its known value, unless you plan to let the hidden variable lambda depend on the settings a and b.
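Gill's convergence claim is easy to check numerically. Here is a minimal sketch (my own illustration, not code from the paper under discussion): for a simple local sign model with lambda uniform on [0, 2*pi), the equal-weight empirical average converges to the analytic value of the integral, which for this particular model is the well-known saw-tooth 1 - 2*theta/pi.

```python
import math
import random

def A(a, lam):
    # illustrative detector function: sign of cos(lam - a)
    return 1 if math.cos(lam - a) >= 0 else -1

def empirical_correlation(a, b, n, seed=0):
    # empirical average with equal weights 1/n, as in the law of large numbers
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)  # rho(lambda) = 1/(2*pi)
        total += A(a, lam) * A(b, lam)
    return total / n

a, b = 0.0, math.pi / 3          # relative angle theta = pi/3
theta = abs(a - b)
exact = 1 - 2 * theta / math.pi  # analytic integral for this sign model
approx = empirical_correlation(a, b, 200_000)
print(approx, exact)             # the two agree to a couple of decimals
```

With n = 200,000 samples the standard error of the mean is about 0.002, so sample average and integral agree to two decimal places, exactly as the strong law predicts.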
Joy Christian wrote:Whatever. You are toast in any case.
.
FrediFizzx wrote:Joy Christian wrote:Whatever. You are toast in any case.
Is Gill posting nonsense again? I warned him about doing that.
gill1109 wrote:FrediFizzx wrote:Joy Christian wrote:Whatever. You are toast in any case.
Is Gill posting nonsense again? I warned him about doing that.
I was suggesting an opportunity for improvement of your joint paper: just a small question of terminology. An empirical average uses equal weights of size 1/n. It does not use a probability mass function with p(.) = 1. Probabilities have to be nonnegative and sum to one. Probability densities have to integrate to one.
According to the law of large numbers, https://en.wikipedia.org/wiki/Law_of_large_numbers, sample averages converge to theoretical mean values, under certain conditions.
Joy Christian wrote:gill1109 wrote:FrediFizzx wrote:Joy Christian wrote:Whatever. You are toast in any case.
Is Gill posting nonsense again? I warned him about doing that.
I was suggesting an opportunity for improvement of your joint paper: just a small question of terminology. An empirical average uses equal weights of size 1/n. It does not use a probability mass function with p(.) = 1. Probabilities have to be nonnegative and sum to one. Probability densities have to integrate to one.
According to the law of large numbers, https://en.wikipedia.org/wiki/Law_of_large_numbers, sample averages converge to theoretical mean values, under certain conditions.
We have set p(\lambda) = 1 in our eq. (6) following Bell in his book (chapter 1, section 2, last equation), who calls it "uniform averaging over \lambda."
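For what it's worth, the two conventions are reconcilable: p(lambda) = 1 is a genuine probability density only when lambda ranges over a set of measure one (Bell's unit interval), whereas on [0, 2*pi) the normalized constant density has to be 1/(2*pi). A quick quadrature check (my own sketch, not from either paper):

```python
import math

def integrate(f, lo, hi, n=100_000):
    # simple midpoint-rule quadrature
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# Bell's convention: lambda on [0, 1] with p(lambda) = 1 integrates to 1
print(integrate(lambda lam: 1.0, 0.0, 1.0))                           # ≈ 1.0
# On [0, 2*pi) the constant density must be 1/(2*pi) to integrate to 1
print(integrate(lambda lam: 1.0 / (2 * math.pi), 0.0, 2 * math.pi))   # ≈ 1.0
```

So "uniform averaging over lambda" with p(lambda) = 1 is fine on the unit interval, while on any other range the constant must be rescaled so the density integrates to one.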
.
FrediFizzx wrote:Another slightly revised version. Just doing some fine-tuning.
EPRsims/Event_by_Event_Numerical_Simulation_of_the_Strong_Singlet_Correlations_rev1.pdf
Enjoy!
gill1109 wrote:I would also suggest that you add to the paper internet links to two Mathematica notebooks of the two simulations (Appendix A, Appendix B). Copying and pasting from a pdf file introduces all kinds of annoying errors and costs a lot of time.
To begin with, please give us two links here on the forum so we can reproduce the appendices with ease. It’s really great you have now separated a classical cosine curve experiment simulation from a classical CHSH experiment simulation.
A lot of people nowadays use SageMath for symbolic computing and more. Open source, free. Stephen Wolfram has used dubious legal copyright tricks to prevent the development of an open-source implementation of the Mathematica language.
https://www.sagemath.org/
FrediFizzx wrote:gill1109 wrote:I would also suggest that you add to the paper internet links to two Mathematica notebooks of the two simulations (Appendix A, Appendix B). Copying and pasting from a pdf file introduces all kinds of annoying errors and costs a lot of time.
To begin with, please give us two links here on the forum so we can reproduce the appendices with ease. It’s really great you have now separated a classical cosine curve experiment simulation from a classical CHSH experiment simulation.
A lot of people nowadays use SageMath for symbolic computing and more. Open source, free. Stephen Wolfram has used dubious legal copyright tricks to prevent the development of an open-source implementation of the Mathematica language.
https://www.sagemath.org/
Ya mean like here,
viewtopic.php?f=6&t=484#p13694
I'll put up the CHSH version as soon as I clean it up to match the paper. Tried Sage when I was playing around with Niles Johnson's stuff. It is a bit geeky compared to Mathematica. Plus, with Mathematica you get two when you buy one: you can also evaluate notebook files on the Wolfram Cloud, though they have to run within 1 minute. Still good for short tests if local Mathematica is tied up with long evaluations.
PS. CHSH version is up at the above link.
.