Further data-analysis of Delft, Munich, NIST, Vienna expts


Re: Further data-analysis of Delft, Munich, NIST, Vienna expts

Post by Joy Christian » Mon Oct 28, 2019 9:25 pm

***
Here is a propaganda video by the TU Delft team: https://www.youtube.com/watch?v=z1twSZF ... e=youtu.be.

It is big business. The Delft team might even end up making more money than the spoon-bender Uri Geller. :)

***

Further data-analysis of Delft, Munich, NIST, Vienna expts

Post by gill1109 » Sun Oct 27, 2019 7:12 am

I've further refined my "classical" statistical analysis of the famous four loophole-free Bell experiments. That is, I'm applying the textbook, asymptotically optimal classical-statistics approach under the i.i.d. assumption (four multinomial distributions) *and* the very physical assumption of no-signalling (at the manifest level).

Before, I computed GLS (generalised least squares; you could also call it weighted least squares) estimators based on the asymptotic normality of the multinomial counts. Back to Gauss.
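To make the GLS step concrete, here is a minimal sketch in Python. The counts and the single no-signalling constraint are made up purely for illustration; they are not taken from any of the four experiments, and the real analysis has four setting pairs and a whole stack of constraints. The idea is just that the vector of relative frequencies is asymptotically normal with an estimable covariance, and the no-signalling constraints are linear, so the GLS (weighted least squares) estimate is a projection onto the constraint set.

[code]
# Minimal GLS sketch (toy numbers, NOT data from any of the four experiments).
# Only two setting pairs and one no-signalling constraint, to keep it short.
import numpy as np

# Toy counts for outcome pairs (++, +-, -+, --) at setting pairs (a=1,b=1) and (a=1,b=2).
counts = {
    (1, 1): np.array([430, 70, 80, 420]),
    (1, 2): np.array([400, 95, 90, 415]),
}

# Stack relative frequencies and their (block-diagonal) estimated covariance.
p_hat, blocks = [], []
for n in counts.values():
    N = n.sum()
    p = n / N
    p_hat.append(p)
    blocks.append((np.diag(p) - np.outer(p, p)) / N)   # multinomial covariance of p-hat
p_hat = np.concatenate(p_hat)
Sigma = np.zeros((8, 8))
Sigma[:4, :4], Sigma[4:, 4:] = blocks

# One no-signalling constraint: Alice's marginal P(+ | a=1) must be the same for b=1 and b=2,
# i.e. (p1 + p2) - (p5 + p6) = 0.  In the real analysis A has one row per such constraint.
A = np.array([[1, 1, 0, 0, -1, -1, 0, 0]], dtype=float)

# GLS = projection of p_hat onto {A p = 0} in the Sigma^{-1} metric:
# p_gls = p_hat - Sigma A' (A Sigma A')^{-1} A p_hat   (pseudo-inverse for numerical safety).
correction = Sigma @ A.T @ np.linalg.pinv(A @ Sigma @ A.T) @ (A @ p_hat)
p_gls = p_hat - correction

print("constraint violation before:", (A @ p_hat).item())
print("constraint violation after: ", (A @ p_gls).item())
[/code]

The projected frequencies then feed into whatever functional you care about (e.g. the CHSH combination of correlations), with standard errors read off from the same covariance matrix.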

Now I compute the actual MLEs (maximum likelihood estimators), both under the unconstrained model and under the constrained (local realism) model. I also compute Wilks' generalised log-likelihood-ratio test statistic, using its asymptotic chi-square distribution, instead of the Wald test. This should be much more accurate!
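For readers who want to see the mechanics of the Wilks test, here is a deliberately tiny Python sketch with a made-up four-cell multinomial and a made-up one-parameter null model (not the local-realism model fitted in the reports): maximise the log-likelihood with and without the constraint, take twice the difference, and refer it to a chi-square distribution whose degrees of freedom equal the number of parameters you constrained away.

[code]
# Minimal Wilks log-likelihood-ratio sketch on a toy 4-cell multinomial
# (made-up counts and a made-up one-parameter null model).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

n = np.array([520, 480, 260, 240])      # toy counts, 4 cells
N = n.sum()

# Unconstrained (saturated) MLE: the relative frequencies themselves.
p_free = n / N
loglik_free = np.sum(n * np.log(p_free))

# Constrained toy model: p(theta) = (theta/2, theta/2, (1-theta)/2, (1-theta)/2),
# i.e. cells 1,2 and cells 3,4 are forced to be equal.  One free parameter.
def negloglik(theta):
    p = np.array([theta / 2, theta / 2, (1 - theta) / 2, (1 - theta) / 2])
    return -np.sum(n * np.log(p))

fit = minimize_scalar(negloglik, bounds=(1e-9, 1 - 1e-9), method="bounded")
loglik_constrained = -fit.fun

# Wilks: 2 * log LR is asymptotically chi-square with df = (free) - (constrained) = 3 - 1.
deviance = 2 * (loglik_free - loglik_constrained)
df = 3 - 1
print("deviance =", deviance, " p-value =", chi2.sf(deviance, df))
[/code]

In the real analyses the constrained optimisation is over the local-realism model for the four experiments, but the test statistic and its chi-square reference distribution work exactly as above.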

There are some surprising differences! Physicists should learn some textbook small-data statistical theory, especially if their sample sizes are big!

https://rpubs.com/gill1109/AdvancedDelft
https://rpubs.com/gill1109/AdvancedMunich
https://rpubs.com/gill1109/AdvancedNIST
https://rpubs.com/gill1109/AdvancedVienna



My ambitions: 1) convert to Python or Julia; 2) get computer algebra to write the code for us. After all, my claim is that this is routine textbook stuff.

Actually, the MLE optimisation gives lots of warnings in the case of Vienna and NIST. This is because the GLS estimator is already so close to optimal that the log-likelihood is essentially flat at the "local" (1/sqrt N) scale. There's nothing left to optimise; we are just chasing rounding errors.

The interesting things happen with the truly small-data experiments, Delft and Munich. Munich no longer looks so good: the log-likelihood is too far from quadratic, partly because the nominal effect is so large that we are too close to the boundary for the asymptotic approximations to be reliable. Delft is much more stable, and now gives a decent "two-sided significant at the 5% level" result, in part because the nominal effect is not so large.
