Some observations/questions about the Delft Experiment


Postby minkwe » Tue Oct 01, 2019 8:55 pm

Let a, b represent the settings and x, y represent the outcomes.
* In the above-mentioned experiment, would we expect the sequences {a, b} to be uncorrelated with each other?
* What about {x, y}?
* And {a, y}, {b, x}, {a, x}, {b, y}?

The authors state the following:

Hensen et al. wrote:Both for the Bell run in Hensen et al. [17] and for the Bell run presented above, we are testing a single well-defined null hypothesis formulated before the experiment, namely that a local-realist model for space-like separated sites could produce data with a violation at least as large as we observe. The settings independence is guaranteed by the space-like separation of relevant events (at stations A, B and C). Since no-signalling is part of this local-realist model, there is no extra assumption that needs to be checked in the data. We have carefully calibrated and checked all timings to ensure that the locality loophole is indeed closed.

Nonetheless, one can still check (post-experiment) for many other types of potential correlations in the recorded dataset if one wishes to. However, since now many hypotheses are tested in parallel, P-values should take into account the fact that one is doing multiple comparisons (the look-elsewhere effect, LEE). Failure to do so can lead to too many false positives, an effect well known in particle physics. In contrast, there is no LEE for a single pre-defined null hypothesis as in our Bell test.

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Thu Oct 03, 2019 10:15 pm

Nobody interested? In any case, let me share a little experiment I did and what I found. The question I wanted to answer was:

Is there any correlation between the pairs of random variables in {a, b, x, y}, where {a, b} are the settings on the two arms of the Delft experiment and {x, y} are the outcomes? For the experiment, we essentially have a list of N setting pairs {a, b} and a list of N outcome pairs {x, y}, i.e., a total of 4 lists of N elements. For this test, I used the raw data from the first Delft experiment, including their post-processing code, which resulted in N=4746 raw data items corresponding to "bell_trial_filter".

To measure correlation, I used a mutual information test. Since the lists contain discrete elements, this was easy to do. I calculated the mutual information between pairs of the lists above: (a,b), (x,y), (a,y), (b,x), (a,x), (b,y). As can be imagined, the mutual information would be very small for lists that were purportedly randomly obtained. This was indeed observed. But then the next question which arose was: how do I determine what the expected mutual information should be? For this, I calculated the probability distribution of the elements of each list in the pair being compared, then randomly generated 10,000 pairs of similar lists with the same probability distributions, calculated the mutual information for each of the 10,000 pairs, and then calculated the percentile of the observed mutual information relative to the 10,000. As you can imagine, this is a pretty good indication of how far from expected the observed mutual information in the experimental data was.

Thus, a percentile of 100% indicates that all of the mutual information values calculated from the 10,000 randomly generated pairs are lower than the observed mutual information, an indication that the observed lists are clearly more correlated than expected. Similarly, a percentile of 0% indicates that all the randomly generated mutual information values are higher than the observed one.
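For concreteness, here is a minimal sketch of the procedure just described (illustrative code only, not the actual analysis script; the function names, natural-log units and seeds are my own choices):

```python
import numpy as np

def mutual_information(u, v):
    """Plug-in mutual information (in nats) between two discrete sequences."""
    u, v = np.asarray(u), np.asarray(v)
    _, ui = np.unique(u, return_inverse=True)
    _, vi = np.unique(v, return_inverse=True)
    joint = np.zeros((ui.max() + 1, vi.max() + 1))
    np.add.at(joint, (ui, vi), 1)
    joint /= joint.sum()
    pu = joint.sum(axis=1, keepdims=True)
    pv = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pu @ pv)[nz])))

def surrogate_percentile(u, v, n_surrogates=10_000, seed=0):
    """Compare the observed MI with MI values of surrogate pairs drawn
    independently from the empirical marginal distributions of u and v."""
    rng = np.random.default_rng(seed)
    observed = mutual_information(u, v)
    n = len(u)
    mi = np.empty(n_surrogates)
    for k in range(n_surrogates):
        su = rng.choice(u, size=n, replace=True)   # i.i.d. draws from p(u)
        sv = rng.choice(v, size=n, replace=True)   # independent draws from p(v)
        mi[k] = mutual_information(su, sv)
    return dict(I=observed, pct=100 * np.mean(mi < observed),
                p95=np.percentile(mi, 95), mean=mi.mean(), sigma=mi.std())

# Toy example with made-up binary lists (the real analysis would use the
# N=4746 Delft settings and outcomes, with 10,000 surrogates as above).
rng = np.random.default_rng(1)
a, x = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
print(surrogate_percentile(a, x, n_surrogates=1000))
```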

Doing this calculation I obtain the following results:
I(a,b)=0.000027 pct= 53.760 95%=0.000191 <I>=0.000050 σ=0.000068
I(x,y)=0.000504 pct= 100.000 95%=0.000200 <I>=0.000054 σ=0.000071
I(a,y)=0.000207 pct= 95.240 95%=0.000184 <I>=0.000051 σ=0.000071
I(b,x)=0.000012 pct= 36.740 95%=0.000193 <I>=0.000048 σ=0.000065
I(a,x)=0.000761 pct= 100.000 95%=0.000200 <I>=0.000048 σ=0.000065
I(b,y)=0.000023 pct= 47.780 95%=0.000200 <I>=0.000051 σ=0.000070

(where pct = percentile of score and 95% is the value corresponding to 95th percentile)
Key takeaways: The settings appear to be uncorrelated, but why are the outcomes x, y so strongly correlated? Why is the (a,x) correlation so much higher than the (b,y) correlation? Why is the (a,y) correlation higher than the (b,y) correlation? Note that a is the setting for the x outcome and b is the setting for the y outcome.

The next level of my analysis was to figure out whether there is a Markov process in play. This can easily be accomplished using the same mutual information test I described above, except that I'm now comparing each list with a shifted version of itself, the shift being equal to the Markov order being tested (a sketch of this shifted-list calculation is given at the end of this post). I tested just 1st order and 2nd order. Note that for this test I did not need to use first-order or second-order transition probabilities to generate the random lists, because my null hypothesis was the absence of a Markov process. Below are the results:

Markov 1
I(x,x)=0.000000 pct= 2.520 95%=0.000188 <I>=0.000050 σ=0.000070
I(y,y)=0.000018 pct= 40.840 95%=0.000185 <I>=0.000051 σ=0.000070
I(a,a)=0.000012 pct= 40.820 95%=0.000199 <I>=0.000051 σ=0.000075
I(b,b)=0.000004 pct= 21.840 95%=0.000195 <I>=0.000051 σ=0.000075
Markov 2
I(x,x)=0.000130 pct= 91.760 95%=0.000195 <I>=0.000049 σ=0.000076
I(y,y)=0.000028 pct= 54.080 95%=0.000213 <I>=0.000048 σ=0.000071
I(a,a)=0.000006 pct= 30.360 95%=0.000226 <I>=0.000050 σ=0.000081
I(b,b)=0.000014 pct= 44.040 95%=0.000225 <I>=0.000050 σ=0.000081

Key takeaway: The first-order mutual information in x is artificially low, while the second-order is higher than expected. In short, there is something strange about this experiment. Hopefully you guys have a better explanation of what is going on.
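As referenced above, here is a self-contained sketch of the shifted-list (lag) version of the same test; the lag plays the role of the Markov order. This is illustrative code, not the author's script.

```python
import numpy as np
from collections import Counter

def mutual_information(u, v):
    """Plug-in mutual information (nats) between two equal-length discrete lists."""
    n = len(u)
    pj, pu, pv = Counter(zip(u, v)), Counter(u), Counter(v)
    return sum((c / n) * np.log((c / n) / (pu[a] * pv[b] / n**2))
               for (a, b), c in pj.items())

def lagged_mi(seq, lag):
    """MI between a sequence and a copy of itself shifted by `lag` positions."""
    seq = list(seq)
    return mutual_information(seq[:-lag], seq[lag:])

# An i.i.d. binary sequence should show only noise-level lagged MI; a genuinely
# Markovian sequence would show an excess at the corresponding lag.
rng = np.random.default_rng(2)
x = rng.integers(0, 2, 5000)
print("lag 1:", lagged_mi(x, 1), " lag 2:", lagged_mi(x, 2))
```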

Re: Some observations/questions about the Delft Experiment

Postby local » Fri Oct 04, 2019 12:54 pm

Very interested over here! This should be published. It is additional evidence of harmful postselection in the experimental analysis. Postselection in the experiment was first reported by you in a different venue. Bravo! Then Bednorz, Adenier and Khrennikov, Graft, Santos, etc. published demonstrations of anomalies in the published data, all consistent with and suggestive of harmful postselection. The full data has still not been released, and so the experiment should be disqualified and excluded from consideration.

Re: Some observations/questions about the Delft Experiment

Postby local » Sat Oct 05, 2019 7:34 am

Forgot to mention Hnilo, who also published a critique of the experiments.

https://arxiv.org/abs/1607.04177

minkwe, do you see your MI value for (say) {a,y} as an alternative way to detect signalling? Also, any possibility to see the code of your analyses, or at least a description of your MI calculation? Thank you.

Finally, have you looked at the Shalm et al experiment, for which the full raw data is available?

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Sat Oct 05, 2019 1:55 pm

local wrote:minkwe, do you see your MI value for (say) {a,y} as an alternative way to detect signalling?

Yes, although I do not like the term "signalling" because it implies a physical transmission of information during the experiment. I prefer to think about it just as dependency. But definitely, mutual information is the best way to quantify dependency between any two variables without making any assumptions about the nature of the dependency. All the other types of correlation statistics either require a linear relationship or explicit consideration of the type of dependency.
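A toy example of that last point (my own illustration, not from the data): for x uniform on {-1, 0, 1} and y = x², the Pearson correlation is essentially zero even though y is completely determined by x, while the mutual information equals H(Y) and is clearly positive.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.integers(-1, 2, 100_000)   # uniform on {-1, 0, 1}
y = x ** 2                          # deterministic but nonlinear function of x

# Pearson correlation misses the dependence because it is not linear:
print("Pearson r:", np.corrcoef(x, y)[0, 1])

# Mutual information does not: knowing x fixes y, so I(X;Y) = H(Y) ≈ 0.64 nats.
p = np.array([np.mean(y == 0), np.mean(y == 1)])
print("I(X;Y) = H(Y):", float(-np.sum(p * np.log(p))))
```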

Also, any possibility to see the code of your analyses, or at least a description of your MI calculation? Thank you. Finally, have you looked at the Shalm et al experiment, for which the full raw data is available?

I'm working on a paper, and everything will be included. I'll post it here soon.

Re: Some observations/questions about the Delft Experiment

Postby local » Sat Oct 05, 2019 2:27 pm

Awesome, thank you. Instead of posting here, are you endorsed for arXiv quant-ph? If not, I could help you with that. Send a PM.

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Sun Oct 13, 2019 5:49 pm

I just completed a similar test of the Christensen et al. experiments, and here are some preliminary results:

Christensen 2013:
I(a,b)=2.6479e-03 pct=100.0000 95%=1.5600e-05 <I>=4.2629e-06 σ=6.3867e-06
I(x,y)=2.5352e-03 pct=100.0000 95%=2.2329e-05 <I>=6.3941e-06 σ=8.9249e-06
I(a,y)=2.5184e-02 pct=100.0000 95%=1.2938e-05 <I>=4.1834e-06 σ=5.0432e-06
I(b,x)=2.1632e-02 pct=100.0000 95%=1.8376e-05 <I>=5.0051e-06 σ=6.4942e-06

Wow, I didn't expect my null hypothesis to be rejected so strongly. There is certainly both setting dependence and outcome dependence in this post-selected data.

The Shalm 2015 experiment is a bit tricky. I have some preliminary results as well but the data is quite messy and huge, and I have to verify that I'm doing the right thing.

Despite trying, I have not been able to get access to the raw data from Giustina et al 2013, 2015, or Rosenfeld 2017. I'm quite surprised by the secrecy in this community compared to my home community, where everything is made public (http://www.rcsb.org/) as a requirement for publication.

Re: Some observations/questions about the Delft Experiment

Postby Joy Christian » Sun Oct 13, 2019 9:37 pm

minkwe wrote:
Wow, I didn't expect my null hypothesis to be rejected so strongly. There is certainly both setting dependence and outcome dependence in this post-selected data.

Thank you, Michel, for your analysis of the "loophole-free" and/or Bell-test experiments. Your analysis deserves to be more widely appreciated. As "local" suggests, it is worthwhile to publish it at least on the arXiv.

As for your sentence I have quoted above, Wow indeed. But it seems to me that outcome dependence hidden in the experimental data is perfectly acceptable, because that is how quantum mechanics is supposed to violate the Bell inequalities, and the experiments are claiming to verify quantum mechanical predictions. On the other hand, setting (or parameter) dependence is in flagrant violation of special relativistic causality. As such, that is totally unacceptable.

***

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Mon Oct 14, 2019 5:50 am

Joy Christian wrote:
minkwe wrote:
Wow, I didn't expect my null hypothesis to be rejected so strongly. There is certainly both setting dependence and outcome dependence in this post-selected data.

Thank you, Michel, for your analysis of the "loophole-free" and/or Bell-test experiments. Your analysis deserves to be more widely appreciated. As "local" suggests, it is worthwhile to publish it at least on the arXiv.

As for your sentence I have quoted above, Wow indeed. But it seems to me that outcome dependence hidden in the experimental data is perfectly acceptable, because that is how quantum mechanics is supposed to violate the Bell inequalities, and the experiments are claiming to verify quantum mechanical predictions. On the other hand, setting (or parameter) dependence is in flagrant violation of special relativistic causality. As such, that is totally unacceptable.

***

Thanks Joy,
You are right. Some outcome dependence is to be expected for QM and even for hidden variable theories. My next task is to figure out how much. Also notice how the setting dependence is an order of magnitude higher?

/Michel.

Re: Some observations/questions about the Delft Experiment

Postby local » Mon Oct 14, 2019 1:50 pm

minkwe wrote: My next task is to figure out how much.

One way is to write a simple simulation of the quantum case, generate the outcome streams, and then do the MI calculation. An analytic solution would be great but we need a stronger mathematician than myself for that.

Perhaps it is not the mere demonstration of dependence that is most important; your demonstration of asymmetry may be the most significant finding, as it is consistent with post-selection bias.

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Mon Oct 14, 2019 2:51 pm

local wrote:One way is to write a simple simulation of the quantum case, generate the outcome streams, and then do the MI calculation. An analytic solution would be great but we need a stronger mathematician than myself for that.

It is easier than it looks. The mutual information for discrete random variables is:

I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}

It should be possible to obtain QM probability values for each of the probability terms. The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events.
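For instance, assuming an ideal singlet state with P(x, y | a, b) = (1 - x·y·cos(a - b))/4 (the actual Delft state and angles would differ), the mutual information between the outcomes at fixed settings can be computed directly from the QM probability terms. The sketch below is illustrative only.

```python
import numpy as np

def singlet_joint(alpha, beta):
    """Ideal QM joint distribution P(x, y | alpha, beta) for a spin singlet,
    with outcomes x, y in {+1, -1}: P = (1 - x*y*cos(alpha - beta)) / 4."""
    return {(x, y): (1 - x * y * np.cos(alpha - beta)) / 4
            for x in (+1, -1) for y in (+1, -1)}

def mi_from_joint(p):
    """Mutual information (nats) computed directly from a joint distribution."""
    px = {x: sum(v for (xx, _), v in p.items() if xx == x) for x in (+1, -1)}
    py = {y: sum(v for (_, yy), v in p.items() if yy == y) for y in (+1, -1)}
    return sum(v * np.log(v / (px[x] * py[y])) for (x, y), v in p.items() if v > 0)

# Illustrative CHSH-like angle pairs (not the experiment's actual settings):
for a, b in [(0, np.pi/4), (0, 3*np.pi/4), (np.pi/2, np.pi/4), (np.pi/2, 3*np.pi/4)]:
    print(f"a={a:.3f} b={b:.3f}  I(x;y|a,b) = {mi_from_joint(singlet_joint(a, b)):.4f} nats")
```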

Perhaps it is not the mere demonstration of dependence that is most important; your demonstration of asymmetry may be the most significant finding, as it is consistent with post-selection bias.

Good point, especially if the asymmetry is not present in data that has not been post-selected.

Re: Some observations/questions about the Delft Experiment

Postby local » Mon Oct 14, 2019 3:40 pm

minkwe wrote: It is easier than it looks. The mutual information for discrete random variables is:

It should be possible to obtain QM probability values for each of the probability terms.

OK, looks promising!

The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events.

Sure, but the two methods should give equivalent results if enough events are generated.

Good point, especially if the asymmetry is not present in data that has not been post-selected.

That's why they hide the raw data. :roll:

Re: Some observations/questions about the Delft Experiment

Postby gill1109 » Tue Oct 15, 2019 8:21 pm

minkwe wrote:Nobody interested? In any case, let me share a little experiment I did and what I found. The question I wanted to answer was:

Is there any correlation between the pairs of random variables in {a, b, x, y}, where {a, b} are the settings on the two arms of the Delft experiment and {x, y} are the outcomes? For the experiment, we essentially have a list of N setting pairs {a, b} and a list of N outcome pairs {x, y}, i.e., a total of 4 lists of N elements. For this test, I used the raw data from the first Delft experiment, including their post-processing code, which resulted in N=4746 raw data items corresponding to "bell_trial_filter".

To measure correlation, I used a mutual information test. Since the lists contain discrete elements, this was easy to do. I calculated the mutual information between pairs of the lists above: (a,b), (x,y), (a,y), (b,x), (a,x), (b,y). As can be imagined, the mutual information would be very small for lists that were purportedly randomly obtained. This was indeed observed. But then the next question which arose was: how do I determine what the expected mutual information should be? For this, I calculated the probability distribution of the elements of each list in the pair being compared, then randomly generated 10,000 pairs of similar lists with the same probability distributions, calculated the mutual information for each of the 10,000 pairs, and then calculated the percentile of the observed mutual information relative to the 10,000. As you can imagine, this is a pretty good indication of how far from expected the observed mutual information in the experimental data was.

Thus, a percentile of 100% indicates that all of the mutual information values calculated from the 10,000 randomly generated pairs are lower than the observed mutual information, an indication that the observed lists are clearly more correlated than expected. Similarly, a percentile of 0% indicates that all the randomly generated mutual information values are higher than the observed one.

...

Key takeaway: The first-order mutual information in x is artificially low, while the second-order is higher than expected. In short, there is something strange about this experiment. Hopefully you guys have a better explanation of what is going on.


Splendid work. I do have an innocent explanation for Michel's correlations, which look disturbing at first glance.

His statistical analysis assumes iid data. In particular, identical distributions over time. Now, suppose that as time goes by, physical properties of all the systems involved in the experiment tend to drift and occasionally even suddenly jump. That can generate all kinds of spurious correlations. And with huge data-sets, they’ll be statistically highly significant.
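To make the drift mechanism concrete, here is a small simulation of my own (not gill1109's analysis): the two outcome streams are conditionally independent at every trial, but their one-probabilities drift together over the run, and the pooled-data mutual information comes out well above the level expected for homogeneous independent streams.

```python
import numpy as np
from collections import Counter

def mutual_information(u, v):
    """Plug-in mutual information (nats) between two discrete sequences."""
    n = len(u)
    pj, pu, pv = Counter(zip(u, v)), Counter(u), Counter(v)
    return sum((c / n) * np.log((c / n) / (pu[a] * pv[b] / n**2))
               for (a, b), c in pj.items())

rng = np.random.default_rng(4)
n = 100_000
t = np.linspace(0, 1, n)
bias = 0.3 + 0.4 * t                      # common slow drift affecting both stations

x = (rng.random(n) < bias).astype(int)    # independent of y at every trial...
y = (rng.random(n) < bias).astype(int)    # ...but both track the same drift

print("pooled I(x;y):  ", mutual_information(x, y))                   # clearly nonzero
print("x vs shuffled y:", mutual_information(x, rng.permutation(y)))  # ~ noise level
```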

That is why experimenters nowadays don't use the conventional statistical analysis based on multinomial counts or Poisson distributions or naïve resampling (bootstrap, cross-validation, ...). Instead they have further developed and refined the martingale analysis which I pioneered in 2001. One gets weaker p-values, but one also gets insurance against spurious correlations, against opportunistic early stopping, and more besides.

Anyway, Michel should publish, at least on arXiv. I can endorse him on arXiv, if he needs that. I can also help getting hold of data from researchers who like only to give their data to persons whom they consider to be reliable researchers. Fortunately, this kind of researcher is dying out. In the Netherlands there are national rules about the availability of data coming from research which, let's face it, is usually paid for by the tax-payer in one way or another. Those rules should obviously morally apply to anyone who participates in science.

Some further comments: Michel's variables a, b, x, y are all binary. There is nothing wrong with encoding them +/- 1 and looking at ordinary correlations. Of course we should realise that the settings a, b are just *labels* and there is not necessarily any particular relation between Alice's setting 1 and Bob's setting 1, Alice's 2 and Bob's 2. On the other hand, in the Bell-CHSH or Eberhard set-up we are comparing one of the *pairs* of settings to the other three. So one of Alice's settings and one of Bob's settings does have a special status.

In the Delft experiment the settings were generated by quantum photonics, a serious mistake by the experimenters! This is foolish. It means that (a) the physics of the setting generation and of the source and detection are highly related; and (b) the randomness of the settings is not guaranteed; it is only verified by extensive tests, but who knows what might go wrong the one time you are actually doing your experiment? With the martingale tests (alternatives to CHSH and to J) you rely heavily on the randomness of the settings. Drifts and jumps and memory effects in the other parts of the experiment are harmless. You can, however, "assume" that the deviation from randomness of the settings is smaller than some amount, at the cost of slightly decreasing the p-value.

I think one should use state-of-the-art pseudo-random number generators, and preferably a cascade of several different ones. Moreover, choose the random seeds for those generators by tossing some coins, etc. You can never avoid the conspiracy loophole, but you can make its invocation pretty ridiculous: implausible, bad physics.
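A minimal sketch of what such a cascade might look like (my own illustration, using numpy's PCG64 and Philox bit generators; the seeds are placeholders that would in practice come from coin tosses or another physical source):

```python
import numpy as np

# Two structurally different pseudo-random generators, seeded independently.
g1 = np.random.Generator(np.random.PCG64(20191016))   # placeholder seed
g2 = np.random.Generator(np.random.Philox(42))        # placeholder seed

def setting_bits(n):
    """XOR the output bits of the two unrelated generators; the result is at
    least as unpredictable as the better of the two."""
    return g1.integers(0, 2, n) ^ g2.integers(0, 2, n)

print(setting_bits(20))
```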

The Delft experiment is much too small. There are too many things which can go wrong, and if you test for all of them, you are bound to find some which are at least as strong as the actual physical signal which you want to determine.

An interesting new analysis of several of the loophole-free experiments is PHYSICAL REVIEW A 99, 022112 (2019), Very strong evidence in favour of quantum mechanics and against local hidden variables from a Bayesian analysis, Yanwu Gu, Weijun Li, Michael Evans, and Berthold-Georg Englert. https://arxiv.org/abs/1808.06863

However, they assume that the iid assumptions (independence and identical distributions) are OK, so that one can reduce the data to the 16 totals N(x, y | a, b).

Re: Some observations/questions about the Delft Experiment

Postby FrediFizzx » Tue Oct 15, 2019 8:29 pm

minkwe wrote:… The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events. …

Ahh, you need the "New QM". Nothing wrong with some extra assumptions.

EPRsims/QM_Has_a_Hidden_Variable__Draft__9_28_long.pdf

Complete states does a fine job of doing an event by event QM simulation and was in fact derived from your "epr-simple".

EPRsims/Joy_local_CS_no0s3Ds0.pdf

Sure, it seems counter-intuitive to have whether states exist or not depend on a OR b but that is not what is actually going on. It really depends on the state itself. The A and B stations can still select any random setting.
.

Re: Some observations/questions about the Delft Experiment

Postby gill1109 » Tue Oct 15, 2019 9:26 pm

FrediFizzx wrote:
minkwe wrote:… The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events. …

Ahh, you need the "New QM". Nothing wrong with some extra assumptions.

EPRsims/QM_Has_a_Hidden_Variable__Draft__9_28_long.pdf

Complete states does a fine job of doing an event by event QM simulation and was in fact derived from your "epr-simple".

EPRsims/Joy_local_CS_no0s3Ds0.pdf

Sure, it seems counter-intuitive to have whether states exist or not depend on a OR b but that is not what is actually going on. It really depends on the state itself. The A and B stations can still select any random setting.
.

I don't understand. QM tells us the probabilities of events. You can simulate the events by simulating a draw from the appropriate joint probability distribution of outcomes (x, y) given settings (a, b). Generating pairs of settings just how you like. As many as you like.

Event by event QM simulation is trivial. People have been doing it for decades. The problem is whether it can also be done on a computer network.

The answer is no, if the network is a network of classical computers communicating according to the standard protocol of a loophole-free Bell experiment, already laid down by Bell in 1981, and nowadays adopted by all self-respecting experimenters.

The answer is yes if you use quantum computers and quantum internet, even though the same protocol is in place.

This is called Bell's theorem.
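For concreteness, a bare-bones version of the single-machine, event-by-event draw described above, assuming an ideal singlet and freely chosen angle lists (illustrative only; this says nothing about the distributed, networked version of the problem):

```python
import numpy as np

rng = np.random.default_rng(5)
OUTCOMES = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]

def draw_event(alpha, beta):
    """Draw one (x, y) pair from the ideal singlet distribution
    P(x, y | alpha, beta) = (1 - x*y*cos(alpha - beta)) / 4."""
    probs = [(1 - x * y * np.cos(alpha - beta)) / 4 for x, y in OUTCOMES]
    return OUTCOMES[rng.choice(4, p=probs)]

# Illustrative angles; settings are chosen freely and outcomes drawn from QM.
angles_a = [0, np.pi / 2]
angles_b = [np.pi / 4, 3 * np.pi / 4]
trials = [(int(a), int(b), *draw_event(angles_a[a], angles_b[b]))
          for a, b in rng.integers(0, 2, size=(10, 2))]
print(trials)
```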

Re: Some observations/questions about the Delft Experiment

Postby FrediFizzx » Tue Oct 15, 2019 11:40 pm

gill1109 wrote:
FrediFizzx wrote:
minkwe wrote:… The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events. …

Ahh, you need the "New QM". Nothing wrong with some extra assumptions.

EPRsims/QM_Has_a_Hidden_Variable__Draft__9_28_long.pdf

Complete states does a fine job of doing an event by event QM simulation and was in fact derived from your "epr-simple".

EPRsims/Joy_local_CS_no0s3Ds0.pdf

Sure, it seems counter-intuitive to have whether states exist or not depend on a OR b but that is not what is actually going on. It really depends on the state itself. The A and B stations can still select any random setting.
.

I don't understand. QM tells us the probabilities of events. You can simulate the events by simulating a draw from the appropriate joint probability distribution of outcomes (x, y) given settings (a, b). Generating pairs of settings just how you like. As many as you like.

Event by event QM simulation is trivial. People have been doing it for decades. The problem is whether it can also be done on a computer network.

The answer is no, if the network is a network of classical computers communicating according to the standard protocol of a loophole-free Bell experiment, already laid down by Bell in 1981, and nowadays adopted by all self-respecting experimenters.

The answer is yes if you use quantum computers and quantum internet, even though the same protocol is in place.

This is called Bell's theorem.

You obviously don't understand the Mathematica code. The Mathematica simulation can be run on a network no problem. But by Nature's rules, not yours.
.

Re: Some observations/questions about the Delft Experiment

Postby gill1109 » Wed Oct 16, 2019 12:50 am

FrediFizzx wrote:
gill1109 wrote:
FrediFizzx wrote:
minkwe wrote:… The problem with a simulation is that there is no such thing as an event by event QM simulation. Since QM says nothing about individual events, any such simulation will involve some extra assumptions to generate the events. …

Ahh, you need the "New QM". Nothing wrong with some extra assumptions.

EPRsims/QM_Has_a_Hidden_Variable__Draft__9_28_long.pdf

Complete states does a fine job of doing an event by event QM simulation and was in fact derived from your "epr-simple".

EPRsims/Joy_local_CS_no0s3Ds0.pdf

Sure, it seems counter-intuitive to have whether states exist or not depend on a OR b but that is not what is actually going on. It really depends on the state itself. The A and B stations can still select any random setting.
.

I don't understand. QM tells us the probabilities of events. You can simulate the events by simulating a draw from the appropriate joint probability distribution of outcomes (x, y) given settings (a, b). Generating pairs of settings just how you like. As many as you like.

Event by event QM simulation is trivial. People have been doing it for decades. The problem is whether it can also be done on a computer network.

The answer is no, if the network is a network of classical computers communicating according to the standard protocol of a loophole-free Bell experiment, already laid down by Bell in 1981, and nowadays adopted by all self-respecting experimenters.

The answer is yes if you use quantum computers and quantum internet, even though the same protocol is in place.

This is called Bell's theorem.

You obviously don't understand the Mathematica code. The Mathematica simulation can be run on a network no problem. But by Nature's rules, not yours.
.

Nature has no rules as to how I am to do an experiment. The rules of nature determine the outcome, but not the experimental design or protocol. Experimenters choose their own experimental protocol.

But I understand that you are saying that your Mathematica code won't run on a network according to the rules used, for rather good reasons, by all present-day experimenters. That is good to know.

It also means that Christian's model is of no consequence for the security of the quantum internet technology presently being deployed, whose "proven" security depends on the fact that it is not possible to fake quantum correlations by classical means under the standard constraints of loophole-free experiments (aka Bell's theorem).

Re: Some observations/questions about the Delft Experiment

Postby gill1109 » Wed Oct 16, 2019 3:49 am

I have re-analysed the data of the four famous recent Bell experiments, optimizing CHSH and J by taking account of the correlations between the observed deviations from no-signalling and the sampling noise in CHSH or J.

I assume multinomial data (iid assumption) throughout. Certainly, the Delft deviation from no-signalling is quite big.

https://rpubs.com/gill1109/OptimizedVienna
https://rpubs.com/gill1109/OptimizedNIST
https://rpubs.com/gill1109/OptimizedMunich
https://rpubs.com/gill1109/OptimizedDelft

Theory:
https://pub.math.leidenuniv.nl/~gillrd/Peking/Peking_4.pdf
In short: assume four multinomial samples, estimate covariance matrix of estimated relative frequencies, use sample deviations from no-signalling to optimally reduce the noise in the estimate of Bell's S or Eberhard's J. AKA: generalized least squares.
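A rough sketch of that recipe under the stated multinomial/iid assumptions (with made-up counts, not any experiment's data; the notebooks linked above should be consulted for the actual analysis): stack the four estimated outcome distributions, form their block-diagonal multinomial covariance, express S and the four no-signalling deviations as linear functions of that stack, and subtract the generalized-least-squares projection of S onto the deviations.

```python
import numpy as np

# Made-up counts n[a][b] = [n(++), n(+-), n(-+), n(--)] for the four setting pairs.
counts = np.array([[[110, 20, 25, 95], [105, 30, 20, 100]],
                   [[100, 25, 30, 105], [30, 100, 95, 25]]], dtype=float)

p_hat = counts / counts.sum(axis=2, keepdims=True)        # relative frequencies
N = counts.sum(axis=2)

# Stack the four 4-vectors of relative frequencies into one 16-vector,
# with block-diagonal multinomial covariance (diag(p) - p p^T) / N per block.
p_stack = p_hat.reshape(4, 4)
N_stack = N.reshape(4)
Sigma = np.zeros((16, 16))
for k in range(4):
    p = p_stack[k]
    Sigma[4*k:4*k+4, 4*k:4*k+4] = (np.diag(p) - np.outer(p, p)) / N_stack[k]

# Linear functionals of the stack, block order (a,b) = (1,1), (1,2), (2,1), (2,2):
corr = np.array([1, -1, -1, 1])        # E(a,b) = p(++) - p(+-) - p(-+) + p(--)
alice = np.array([1, 1, 0, 0])         # P(x=+1 | a, b)
bob = np.array([1, 0, 1, 0])           # P(y=+1 | a, b)

def block(vec, k):
    out = np.zeros(16); out[4*k:4*k+4] = vec; return out

L = np.vstack([
    block(corr, 0) + block(corr, 1) + block(corr, 2) - block(corr, 3),  # CHSH S
    block(alice, 0) - block(alice, 1),   # Alice's marginal must not depend on b (a=1)
    block(alice, 2) - block(alice, 3),   # same for a=2
    block(bob, 0) - block(bob, 2),       # Bob's marginal must not depend on a (b=1)
    block(bob, 1) - block(bob, 3),       # same for b=2
])

est = L @ p_stack.reshape(16)           # [S_hat, d1, d2, d3, d4]
C = L @ Sigma @ L.T                     # joint covariance of the five estimates

S_hat, d = est[0], est[1:]
S_adj = S_hat - C[0, 1:] @ np.linalg.solve(C[1:, 1:], d)            # GLS-adjusted S
var_adj = C[0, 0] - C[0, 1:] @ np.linalg.solve(C[1:, 1:], C[1:, 0])

print(f"S_hat = {S_hat:.4f} ± {np.sqrt(C[0, 0]):.4f}")
print(f"S_adj = {S_adj:.4f} ± {np.sqrt(var_adj):.4f}")
```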

Re: Some observations/questions about the Delft Experiment

Postby local » Wed Oct 16, 2019 9:27 am

gill1109 wrote: I can also help getting hold of data from researchers who like only to give their data to persons whom they consider to be reliable researchers.

By "reliable researchers", do you mean only those that believe in quantum nonlocality? Of course by those criteria, despite my numerous peer-reviewed publications, I would likely be judged unreliable. Nevertheless, I take up your offer and request you to get for us the full raw data of the Hensen et al experiment (not postselected or massaged in any way, i.e., we need the original detection event lists, one per station, not already postselected and combined). Is that the one you call "Delft"? Please report the results of your endeavor here. Thank you.
Last edited by FrediFizzx on Wed Oct 16, 2019 9:48 am, edited 1 time in total.
Reason: derogatory term removed

Re: Some observations/questions about the Delft Experiment

Postby minkwe » Wed Oct 16, 2019 2:13 pm

gill1109 wrote:His statistical analysis assumes iid data. In particular, identical distributions over time.

This is incorrect. Time is irrelevant, since the mutual information does not depend on the time-ordering of the values: applying the same permutation to both lists leaves the mutual information unchanged.

Now, suppose that as time goes by, physical properties of all the systems involved in the experiment tend to drift and occasionally even suddenly jump. That can generate all kinds of spurious correlations. And with huge data-sets, they’ll be statistically highly significant.

As already explained above, the ordering of events is irrelevant to the MI calculation. Again, keep in mind that we are talking about mutual information, not "correlation". Mutual information quantifies the dependence of one variable on another. There is a data-processing theorem in information theory which says that no amount of post-processing or stochastic processing can increase the mutual information between two variables by acting on only one of them, without information about the other. If the mutual information is increased by a sudden jump, it must therefore be the case that the sudden jump happened on both sides, or used information from the other arm of the experiment. Either way, we end up at the same point: if there is dependence, then there isn't the independence we are testing for.
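An empirical illustration of the data-processing point (a toy example of my own, not the experimental data): passing one stream through a local noisy channel that knows nothing about the other stream cannot increase their mutual information beyond estimation noise.

```python
import numpy as np
from collections import Counter

def mutual_information(u, v):
    """Plug-in mutual information (nats) between two discrete sequences."""
    n = len(u)
    pj, pu, pv = Counter(zip(u, v)), Counter(u), Counter(v)
    return sum((c / n) * np.log((c / n) / (pu[a] * pv[b] / n**2))
               for (a, b), c in pj.items())

rng = np.random.default_rng(6)
n = 50_000
x = rng.integers(0, 2, n)
y = np.where(rng.random(n) < 0.8, x, 1 - x)     # y agrees with x 80% of the time

# Local post-processing of x alone (a noisy channel with no access to y).
# By the data-processing inequality, I(f(x); y) <= I(x; y).
x_proc = np.where(rng.random(n) < 0.9, x, rng.integers(0, 2, n))

print("I(x; y)    =", mutual_information(x, y))
print("I(f(x); y) =", mutual_information(x_proc, y))   # never larger, up to noise
```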

The null hypothesis contains just one assumption. If this assumption fails, for whatever reason, then what does this imply for the experiment in question?
Some further comments: Michel's variables a, b, x, y are all binary. There is nothing wrong with encoding them +/- 1 and looking at ordinary correlations. Of course we should realise that the settings a, b are just *labels* and there is not necessarily any particular relation between Alice's setting 1 and Bob's setting 1, Alice's 2 and Bob's 2. On the other hand, in the Bell-CHSH or Eberhard set-up we are comparing one of the *pairs* of settings to the other three. So one of Alice's settings and one of Bob's settings does have a special status.

Again, please keep in mind we are talking about mutual information, not "correlation". The actual values of the variables are irrelevant. Take a look at the expression for the mutual information of discrete variables.
