New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 8:49 am

In response to Richard Gill's claim that
it is impossible to write a local realist computer simulation of a *clocked* experiment with no "non-detections", and which reliably reproduces the singlet correlations. (By reliably, I mean in the situation that the settings are not in your control but are delivered to you from outside; the number of runs is large; and that this computer program does this not just once in a blue moon, by luck, but most times it is run on different people's computers.)


I have now posted my new local realistic simulation at

https://github.com/minkwe/epr-clocked

To be fair to Richard, he has since told me that we probably have different definitions of what a "clocked" experiment is. I will allow him to explain the differences. This simulation is also equivalent to a networked version, and follows all the *reasonable* requirements discussed in the recent thread about recommendations. Since no recommendations have been written for analysis programs, I perform the analysis exactly as is done in Weihs' experiment. I plan to include a short snippet of Weihs' data if permitted, so that others may verify this. But anyone is free to write their own analysis code.

Whatever the analysis recommendations, the crucial requirement is that they are applied equally to both experimental data and data from simulations.
minkwe
 
Posts: 1151
Joined: Sat Feb 08, 2014 10:22 am

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 9:02 am

This is nice code and a fun experiment. As far as I can see Minkwe is now playing with the "coincidence loophole" which Jan-Ake Larsson and I wrote about some years ago: Europhysics Letters, vol 67, pp. 707-713 (2004), Bell's inequality and the coincidence-time loophole, Jan-Ake Larsson, Richard Gill. http://arxiv.org/abs/quant-ph/0312035

However I must say I couldn't get the program to give me any interesting correlations at all yet and now it is time to quit playing with R and Python and cook dinner!

The particles get measured at random times and we have to decide which particles on Alice's and Bob's side belong together. In Minkwe's simulation, the fraction of unpaired particles (after we have fixed the criterion for being a pair) is way below the threshold. In other words, this is the detection loophole, in a situation where it is even more painful.
gill1109
Mathematical Statistician
 
Posts: 2049
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 9:05 am

For me, a clocked experiment meant a pulsed or synchronised or essentially discrete time experiment. Every unit of time, two particles fly off in different directions and get measured. Next time step, this is repeated.

Michel is now simulating a continuous time, unpulsed (I would say: unclocked) experiment. You don't know when the emissions were and the detections can be at any time whatever. In the data analysis, you decide what constitutes a pair. Typically: if two events occur within epsilon time of one another we call them a pair. In case of ties, take out the events with the shortest times between them first.
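In code, the pairing rule described above reads roughly as follows. This is an illustrative sketch only, not the actual analysis code from the repository; the function name and calling convention are made up for the example:

```python
def pair_events(t_alice, t_bob, eps):
    """Greedy coincidence pairing: events within eps of one another are
    candidate pairs; the smallest time gaps are taken out first."""
    # All candidate pairs within the coincidence window, closest first.
    candidates = sorted(
        ((abs(ta - tb), i, j)
         for i, ta in enumerate(t_alice)
         for j, tb in enumerate(t_bob)
         if abs(ta - tb) <= eps),
        key=lambda c: c[0],
    )
    used_a, used_b, pairs = set(), set(), []
    for _, i, j in candidates:
        # Each detection event may belong to at most one pair.
        if i not in used_a and j not in used_b:
            used_a.add(i)
            used_b.add(j)
            pairs.append((i, j))
    return pairs
```

With a large eps almost everything gets paired; with a small eps many events go unpaired, which is the trade-off discussed in this post.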

All particles do get detected in his experiment, but only some fraction of them end up being called pairs. The events which end up un-paired play a similar role to pairs of particles in which only one or neither gets detected, in the pulsed set-up.

If you take the coincidence window as very large, all events get paired but the correlations get attenuated. If you take it very small, you can get wonderful correlations but lose many events. It is not too hard to simulate the singlet correlations in a local realistic way, using these tricks. It's actually a *worse* loophole than the detection loophole for pulsed experiments. In other words: you have to get the detection rate even higher, before you are proving anything.

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 10:15 am

gill1109 wrote:This is nice code and a fun experiment. As far as I can see Minkwe is now playing with the "coincidence loophole" which Jan-Ake Larsson and I wrote about some years ago: Europhysics Letters, vol 67, pp. 707-713 (2004), Bell's inequality and the coincidence-time loophole, Jan-Ake Larsson, Richard Gill. http://arxiv.org/abs/quant-ph/0312035

However I must say I couldn't get the program to give me any interesting correlations at all yet and now it is time to quit playing with R and Python and cook dinner!

The particles get measured at random times and we have to decide which particles on Alice's and Bob's side belong together. In Minkwe's simulation, the fraction of unpaired particles (after we have fixed the criterion for being a pair) is way below the threshold. In other words, this is the detection loophole, in a situation where it is even more painful.


The answers to these questions make clear that my simulation does not use any loophole.
1) Are all emitted particles detected? Yes. Therefore there can be no detection loophole in the simulation.
2) As concerns the coincidence loophole, it does not apply to my simulation. I do not eliminate any particles *in my simulation* based on time. So you can not say that my simulation "exploits" the coincidence time loophole.

As concerns the data analysis, which is separate from the simulation, it mirrors the standard data analysis procedure applied to all EPR-type experiments ever performed, and those that could be performed in the foreseeable future. QM does not dictate any data analysis procedure, so guidelines for those have to be written. I've used the [almost] standard one. If you write your own analysis procedure, wildly different from the one used in all EPR-type experiments, there is no doubt that you will get correlations which match neither experiment nor the ones I obtained, even if you use my simulation to generate the data. The true test of course is that your own analysis procedure, applied to real experimental data such as that of Weihs, must reproduce the QM correlations too.

So if you want to claim that there is a "coincidence time" loophole, then it is not in the simulation but in the "standard" data analysis procedure used in all EPR-experiments so far.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 10:39 am

Please read Jan-Åke's and my paper. We prove that with local realism you can achieve CHSH = 2 sqrt 2, but only if you allow a certain percentage of the events to go "un-paired" in the data-analysis (the correlations which you combine in CHSH are not based on all the events, but only on the events which are paired by the data-analysis procedure). We give a precise threshold. You can look up what it is. You are getting pretty close to 2 sqrt 2 but this is at the cost of throwing away a certain percentage of the events. And it's above our threshold (because our theorem is a true theorem and your simulation is a true local realistic simulation).

The Giustina et al experiment got pretty close to 2 sqrt 2 and the percentage of the events which were not "paired" in the data-analysis was *below* our threshold. It was a major breakthrough.

When I say "exploits the coincidence loophole" I do not mean that you are cheating in any way. I am saying that you are doing exactly what Jan-Åke and I showed could be done, and we called this phenomenon (which can occur in all *non*- pulsed / clocked / synchronised experiments, all experiments without "event-ready detectors") the coincidence loophole.

If you want to say that the loophole is in the data-analysis, that is fine by me. I already told you that the loophole can be removed by changing the data-analysis. Fix absolute time windows 0 = t0 < t1 < t2 ... and say that two events are paired if they occur in the same window. If there are more than two, take only the first. Now your experimental analysis is like a pulsed / clocked / synchronised experiment, like an experiment with event-ready detectors, but there are missing events. Some time intervals only have one event, or even no event at all.
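One reading of the fixed-window analysis just described can be sketched in Python. Again, this is an illustration under stated assumptions (time-ordered event lists, windows of equal width w), not code from either party; the function name is invented:

```python
def pair_fixed_windows(t_alice, t_bob, w):
    """Fixed-lattice pairing: absolute windows [k*w, (k+1)*w).
    Two events are paired iff they fall in the same window; if a side
    has more than one event in a window, only the first is kept.
    Assumes each detection-time list is in time order."""
    first_a, first_b = {}, {}
    for i, t in enumerate(t_alice):
        first_a.setdefault(int(t // w), i)   # keep only the first event
    for j, t in enumerate(t_bob):
        first_b.setdefault(int(t // w), j)
    # A window with events on both sides yields a pair; other windows
    # contribute unpaired (i.e. "missing") events.
    return [(first_a[k], first_b[k]) for k in sorted(first_a) if k in first_b]
```

Windows containing an event on only one side correspond to the "missing events" of the pulsed set-up.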

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 10:55 am

gill1109 wrote:All particles do get detected in his experiment, but only some fraction of them end up being called pairs. The events which end up un-paired play a similar role to pairs of particles in which only one or neither gets detected, in the pulsed set-up. If you take the coincidence window as very large, all events get paired but the correlations get attenuated. If you take it very small, you can get wonderful correlations but lose many events. It is not too hard to simulate the singlet correlations in a local realistic way, using these tricks.

My simulation is not a trick. It is an attempt to show exactly what might be happening in the real EPR-experiments. It is an attempt to explain violation of the CHSH using local causality. There is nothing in my simulation which goes against any physical theory of what might be happening in the real experiments. There is nothing in my analysis which goes against the "standard" analysis procedure that has been used so far or which will be used in the foreseeable future. Yet the correlations are reproduced in a local and realistic manner. This is not a trick. On the other hand, using an analysis procedure which is different from the one traditionally used in EPR-experiments in order to discredit an obviously locally-realistic simulation, without at the same time applying the same analysis to those EPR-experiments, would be a trick.
There is no threshold in my simulation that is lower than anything that has been measured so far. In your testing of my simulation, could you please state what you found the threshold to be?

It's actually a *worse* loophole than the detection loophole for pulsed experiments. In other words: you have to get the detection rate even higher, before you are proving anything.

As I have tried several times to explain previously, the pulsed experiment as you describe it can not be done. Essentially you want an experiment in which

1) Alice and Bob know exactly in what narrow time window the particle pairs were emitted
2) The source produces no unpaired particles and we are absolutely sure of that.
3) Only a single pair is produced within that narrow time window
4) All pairs produced reach the stations simultaneously
5) No time delays are introduced by their interactions at the stations
6) All the particles are detected
7) The output files from Alice and Bob are sequential so that no matching is necessary.

This experiment will never be done. Period.

As I have explained previously, many of these requirements are not reasonable. Take (5) for example: it may not be obvious to many, but it is a well-known experimental fact that wave-plates, electro-optical modulators, and Stern-Gerlach magnets introduce time delays which are polarization and spin orientation dependent. So requirement (5) is not reasonable and will never be accomplished in any experiment. This is precisely why I believe that any new "loophole" which is introduced, such as "coincidence time" or "memory", is actually an unjustified restriction on what nature is allowed to do, and an admission that Bell's model was very incomplete.

Even if such an experiment were done, I claim that my simulation will match the results from it, if the data analysis procedure applied to both is the same. Note that the requirements above require the time differences between the events to be very small, not very large, contrary to what Richard is suggesting above. In my analysis procedure, if you make the time differences zero in the matching, you still get the correlations. The requirement to have simultaneous measurements contradicts the operation of making the time window very large.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 11:16 am

I did not say it is a trick.

I did not demand points 1) to 7).

I did not say that a definitive experiment ever will be done. In fact I earlier published a paper in which I pointed out that it was logically possible that quantum mechanics itself could prevent it ever being done ("Bell's fifth position"). Bell himself had earlier agreed in private communications with Emilio Santos that this is a logical possibility, alongside the four other alternative positions which he had listed in "Bertlmann's socks".

There is some theoretical work supporting Bell's fifth position (by Volovich and others), and until recently a lot of experimental support for the thesis, but in view of recent experimental progress, I personally think that Bell's fifth position is hardly tenable any more.

The leaders of some top experimental groups have recently gone on record saying that they expect the definitive and successful experiment to have been done within the next five years. The successful experiment will not satisfy points 1) to 7). A successful experiment is one in which CHSH is approx equal to 2 sqrt 2 and the percentage of unpaired events is low enough. It has to be below thresholds determined in previous work by Larsson and myself ("coincidence loophole"), and earlier by him alone ("detection loophole"). The relevant threshold depends on the data-analysis: whether it is based on a fixed lattice of coincidence windows ("detection loophole"), or if every coincidence window may be shifted to optimally fit around a pair of detection times ("coincidence loophole").

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 11:48 am

gill1109 wrote:Please read Jan-Åke's and my paper. We prove that with local realism you can achieve CHSH = 2 sqrt 2,

In your paper with Jan, you derive your inequalities assuming a single set of particle pairs. As I have explained in the other thread here:

viewtopic.php?f=6&t=21&start=40

It is impossible for a single set of particles to have an upper bound above 2, no matter how you sample or post-select the pairs. That is what the abstract simulation in that thread was all about. The reason is very obvious and I will post it here again:

Consider a single pair of particles. Assume that the pair of particles have outcomes at 4 angles a, b, a', b' and that those outcomes are definite even if we do not measure them. And those outcomes can only be one of (+1, -1). Then it follows that
ab + ab′ + a′b′ − a′b ≤ 2

We can verify by factorization:
a(b + b′) + a′(b′ − b) ≤ 2
As concerns the values (b′, b), there are 4 possibilities. We may have (+1, -1), (-1, +1), (-1, -1) or (+1, +1). For the first two cases, the first of the terms (b + b′), (b′ − b) will be 0 and the second will be 2 or -2. For the other two cases, the first of the terms (b + b′), (b′ − b) will be 2 or -2 and the second will be 0. Which means that the value of the expression will be either ±2a′ or ±2a. However, a and a′ can only have values (+1, -1), which proves that the expression ab + ab′ + a′b′ − a′b ≤ 2 is a valid inequality for any four values (a, b, a′, b′) from a single particle pair. This inequality can be extended from the individual cases to averages over multiple particles in a single set, because the extrema of each term will not be affected by averaging over multiple values with the same extrema, on the condition that all averages of paired-product terms are calculated from the exact same set of particles.
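The case analysis above can also be double-checked by brute force, enumerating all 16 sign assignments (a quick illustrative check, not part of anyone's simulation):

```python
from itertools import product

# Enumerate all 16 assignments of (a, a', b, b') in {+1, -1} and
# evaluate ab + ab' + a'b' - a'b for each one.
values = [a*b + a*b2 + a2*b2 - a2*b
          for a, a2, b, b2 in product((+1, -1), repeat=4)]
print(max(values), min(values))  # the extremes are +2 and -2, never beyond
```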


There is no way in mathematics or logic to have <ab> + <ab′> + <a′b′> − <a′b> greater than 2 for a single set of particles, as you start out with, without making a major error. If you are dealing with 4 different disjoint sets, the upper bound is 4. That you are able to obtain 2 sqrt 2 tells me clearly that you have somehow introduced 4 different sets inadvertently into the calculations, for which the upper bound is 4, and that bound can not be violated by 4 different sets of particles. So I suspect that your paper is invalidated by the fact that you have a null set, due to contradictory assumptions.

I do not claim that my simulation violates the upper bound of 4. Nothing can. I have also repeatedly claimed that nothing can violate the original CHSH with an upper bound of 2. If anyone can point to anything (providing the *full* calculation demonstrating it, QM included) that violates the CHSH with an upper bound of 2, I will be able to show you the mathematical error being made, and it would most likely involve using individual correlations from more than one disjoint set in an inequality which assumed a single set during derivation. This is an open challenge.

but only if you allow a certain percentage of the events to go "un-paired" in the data-analysis (the correlations which you combine in CHSH are not based on all the events, but only on the events which are paired by the data-analysis procedure). We give a precise threshold. You can look up what it is. You are getting pretty close to 2 sqrt 2 but this is at the cost of throwing away a certain percentage of the events. And it's above our threshold (because our theorem is a true theorem and your simulation is a true local realistic simulation).

Your threshold is 88%. I didn't want to give you the percentage myself. It would be more believable if you stated it.

The Giustina et al experiment got pretty close to 2 sqrt 2 and the percentage of the events which were not "paired" in the data-analysis was *below* our threshold.

So the Giustina experiment has a value below your threshold and my simulation has a value above it? I would appreciate it if you stated the numbers so that it is transparent what we are talking about. Which is better, to be below the threshold or to be above it? How close did they come to the QM expectation value, and how close did my simulation come to the QM expectation value? These are all important questions which should be answered for both experimental data and simulations at the same time. We shouldn't apply different standards to each. The matching algorithm used in the Giustina experiment is not public. As soon as it is made public, I will use it too.

I already told you that the loophole can be removed by changing the data-analysis. Fix absolute time windows 0 = t0 < t1 < t2 ... and say that two events are paired if they occur in the same window. If there are more than two, take only the first.

Not forgetting to apply any such change to data from the EPR experiments as well and comparing the two, as I keep insisting.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 12:55 pm

In a CHSH style experiment, you need to get both (a): CHSH about equal 2 sqrt 2, and (b): the detection rate above 88%. Equivalently, the rate of unpaired events below 12%. (Sorry for the confusion about what should be high and what should be low).
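The 88%/12% figures quoted here can be reproduced numerically, assuming the Larsson–Gill coincidence bound takes the form CHSH ≤ 6/γ − 4, with γ the fraction of events that end up paired. That form is my reading of the cited EPL paper, so treat it as an assumption; the arithmetic itself is a sketch:

```python
from math import sqrt

# Solve 6/gamma - 4 = 2*sqrt(2) for gamma: the pairing fraction at which
# a local realistic model can just reach the quantum value of CHSH.
gamma = 6 / (4 + 2 * sqrt(2))
print(round(100 * gamma, 1))        # pairing threshold in percent (~88)
print(round(100 * (1 - gamma), 1))  # allowed unpaired rate in percent (~12)
```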

A lower value of CHSH would also be OK as long as the detection rate is higher still, to compensate.

We can discuss whether or not our proofs make any sense, after you have come up with a counter-example to what we claim to have proved. Namely that in a simulation like yours you can't achieve both CHSH = 2 sqrt 2 and detection rate above 88%. (According to our definition of detection rate).

So far I haven't seen a counter-example. It's a beautiful suite of programs, and it confirms my theory.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 1:09 pm

minkwe wrote:It is impossible for a single set of particles to have an upper bound above 2, no matter how you sample or post-select the pairs.

Exactly. And therefore, when you calculate each of the four correlations on a different random subset, then *on average* the upper bound won't exceed 2 either.

We have hereby proved Bell's inequality in its usual form (an inequality about expectation values): local realism (and no conspiracy) implies CHSH <= 2 (on average).

It is easy to exhibit a single set of particles, and a particular partition of that set into four disjoint subsets of approximately equal size, such that we actually get approx 2 sqrt 2 when we compute each correlation on a particular subset. So in principle, experimental data could violate CHSH. QM predicts that in certain QM experiments, it does.

We have hereby proved Bell's theorem.

It's just that, if local realism holds (according to which the observed data can be thought of as having arisen from a partition of a complete data set into four parts as described above), then if one would select the four subsets at random, the subsets which give such extreme values of CHSH are very rare. That's Bell's inequality in my strengthened (probabilistic) form.
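The point about random subsets can be illustrated with a toy calculation. This is a sketch with an arbitrary hidden-variable dataset, not code from either party: every particle pair gets definite outcomes at all four settings, and CHSH is then computed with each correlation taken from a different random quarter of the data.

```python
import random

random.seed(1)

# Toy complete dataset: each particle pair carries definite outcomes
# (a, a', b, b') at all four settings, as local realism assumes.
N = 4000
data = [tuple(random.choice((+1, -1)) for _ in range(4)) for _ in range(N)]

def chsh_random_partition(data):
    """CHSH with each correlation computed on a different random quarter."""
    rows = data[:]
    random.shuffle(rows)
    q = len(rows) // 4
    g = [rows[k*q:(k+1)*q] for k in range(4)]

    def corr(grp, i, j):
        return sum(r[i] * r[j] for r in grp) / len(grp)

    # <ab> + <ab'> + <a'b'> - <a'b>, one random subset per term
    return (corr(g[0], 0, 2) + corr(g[1], 0, 3)
            + corr(g[2], 1, 3) - corr(g[3], 1, 2))

samples = [chsh_random_partition(data) for _ in range(200)]
# Random partitions fluctuate around the full-set value; partitions that
# would give values near 2*sqrt(2), though they exist, essentially never
# show up at random.
print(max(samples))
```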

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 1:17 pm

gill1109 wrote:In a CHSH style experiment, you need to get both (a): CHSH about equal 2 sqrt 2, and (b): the detection rate above 88%. Equivalently, the rate of unpaired events below 12%. (Sorry for the confusion about what should be high and what should be low).


As I keep saying, without getting any response, there is no way in mathematics or logic for a CHSH expression derived for a single set of particles to exceed 2. Not even in quantum theory. Any theory which suggests an upper bound above 2 for a single set of particles, like your paper, must be mistaken. In the case of your paper, you have a null set which renders the theory meaningless. To recover some validity in your theory, you will have to admit that you are not dealing with a single set of particles but 4 different sets. As I have explained already without any response, for 4 disjoint sets of particles (as in real experiments and my simulation), the upper bound is 4, and there is no way in mathematics or logic to have an upper bound less than 4 for 4 disjoint sets. The expression is very clear; just by looking at it, you know I am correct:

For 4 *disjoint* sets of particles we have:

a1b1 + a2b2′ + a3′b3′ − a4′b4

Each paired-product term has an upper bound of 1 and a lower bound of -1, which clearly means the upper bound is 4. The averages for each term

<a1b1> + <a2b2′> + <a3′b3′> − <a4′b4>

will have the exact same upper and lower bound, which means the upper bound is 4. Could anyone please show me how it is possible mathematically to have an *upper bound* less than 4 if the 4 sets remain disjoint? It should be easy to do this if it were possible.
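The claim that the eight outcomes vary independently across four disjoint sets can be checked exhaustively (an illustrative check, not part of the simulation):

```python
from itertools import product

# With four disjoint sets, all eight outcomes vary independently.
# Enumerate all 256 sign assignments of (a1, b1, a2, b2', a3', b3', a4', b4)
# and evaluate a1*b1 + a2*b2' + a3'*b3' - a4'*b4.
values = [a1*b1 + a2*b2 + a3*b3 - a4*b4
          for a1, b1, a2, b2, a3, b3, a4, b4 in product((+1, -1), repeat=8)]
print(max(values), min(values))  # the extremes are +4 and -4
```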

Isn't it interesting that the roles have reversed and it is now the other side who must demonstrate genuine violation of the CHSH? You do not need experiments for that. All you need is maths or logic, showing how: either "A single set of particles can violate an inequality derived for a single set of particles" ( S <= 2), or how "the upper bound for 4 disjoint sets of particles can be less than 4". This is my challenge.

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 1:41 pm

gill1109 wrote:Exactly. And therefore, when you calculate each of the four correlations on a different random subset, then *on average* the upper bound won't exceed 2 either.

First of all, that is not the definition of an *upper bound*. An *upper bound* in an inequality is a mathematical statement that no value above it is possible within the assumptions used to derive it; it is not a statement about what the average value will be. When we determine the upper bound, we look at the extremes, not at what the most likely value will be. Secondly, when we say *upper bound*, we do not mean that average values can not be less than that; rather, we mean that when that value is exceeded, we will have to revisit the assumptions used to derive the bound, or a mathematical error has been committed.

We have hereby proved Bell's inequality in its usual form (an inequality about expectation values): local realism (and no conspiracy) implies CHSH <= 2 (on average).

And you did that proof by assuming that you have a single set. This is easy to see by looking at Bell's original derivation, as well as ALL derivations, including yours, which include a factorization step; a step that can not be done if you do not have a single set. That inequality is valid for a single set.

It is easy to exhibit a single set of particles, and a particular partition of that set into four disjoint subsets

Once you divide the set up into disjoint sets, we no longer have a single set, and the inequality you derived for the single set no longer applies to the disjoint sets without conspiracy, as I have clearly explained.

So in principle, experimental data could violate CHSH.

Yes, because of the mathematical error of using terms from 4 disjoint sets to substitute for terms which should have been calculated in a single set as assumed in the derivation.

QM predicts that in certain QM experiments, it does.

Yes, because of the mathematical error of using QM predictions, which are for completely distinct experimental arrangements, and therefore necessary for disjoint sets of particles to substitute in an expression which was derived starting with the assumption that we have a single set.

We have hereby proved Bell's theorem.

And hereby, I've proven that Bell's theorem is false.

It's just that, if local realism holds (according to which the observed data can be thought of as having arisen from a partition of a complete data set into four parts as described above), then if one would select the four subsets at random, the subsets which give such extreme values of CHSH are very rare. That's Bell's inequality in my strengthened (probabilistic) form.

Bell's theorem does not say it is rare to violate the inequality. It says it is impossible. It does not matter whether you start from a single set and partition the data into 4, you are still calculating the correlations from 4 disjoint sets and for this purpose the upper bound is 4. Probability can not save Bell's theorem here, it still fails.

The only way that the expression <a1b1> + <a2b2′> + <a3′b3′> − <a4′b4> can have an upper bound lower than 4 is if the values in one set impose constraints on the values in another set, and the only way that can happen is if the sets are not disjoint to begin with. So what you are claiming is equivalent to the claim that probability can cause 4 disjoint sets to not be disjoint -- a contradiction.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 1:53 pm

Dear Michel, You get no response because you are claiming that I say things in my paper which I don't say. You take no notice of the words like "on average", "mean value", "with large probability". You take no notice of the careful mathematical proof of every precisely stated claim. So your criticism is empty. So I have no response.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 2:40 pm

Dear Michel

Maybe I can explain the points which I think you are missing by reference to your simulation experiment. I just used it to run a standard CHSH experiment of one second duration, spin-1 particles, Alice's settings 0 and 45 degrees; Bob's 22.5 and 67.5.

You got CHSH nicely spot-on at 2.82. Your coincidence efficiency was 81.4%, which is well below the theoretical threshold of 87.9% from Larsson and Gill. The experiment nicely confirms our theoretical predictions: in a local realistic simulation experiment of this type, if it achieves CHSH = 2.82... then its coincidence efficiency must have been below 87.9%. You did achieve the desired value of CHSH, and your coincidence efficiency was well below our bound.

Our bound is actually sharp, so you could in principle do better.

We proved that however you tweak your program, you'll never simultaneously get the coincidence efficiency systematically above 87.9% while maintaining CHSH systematically at around 2.83. By "systematically" I mean: apart from statistical fluctuations - you might just be lucky once in a blue moon, but in the long run you won't succeed. (After all, just once in a while you might even observe the value "4" for CHSH and "100%" for the coincidence efficiency. But the longer the duration of the experiment, the smaller the chance of such extreme outcomes).

The simulation experiment beautifully illustrates the ideas behind my recent paper, and fits beautifully to the theory which Jan-Ake and I developed for the coincidence loophole.

Richard

Code: Select all
richard@Nehus-Mint-16 ~/Desktop/epr-clocked-master $ python source.py 1 1
Generating spin-1.0 particle pairs
Time left:        0s
20594 particles in 'SrcLeft.npy.gz'
20594 particles in 'SrcRigh.npy.gz'

richard@Nehus-Mint-16 ~/Desktop/epr-clocked-master $ python station.py SrcLeft.npy.gz Alice 0,45
Detecting particles for Alice's arm
Particles detected:    20594
Done!

richard@Nehus-Mint-16 ~/Desktop/epr-clocked-master $ python station.py SrcRight.npy.gz Bob 22.5,67.5
Detecting particles for Bob's arm
Particles detected:    20594
Done!

richard@Nehus-Mint-16 ~/Desktop/epr-clocked-master $ python analyse.py 1
No. of detected particles, non-zero outcomes only
   Alice:           20594
     Bob:           20594

Calculation of expectation values
  Settings       N_ab     Trials   <AB>_sim    <AB>_qm StdErr_sim
   0, 22.5       3492       4295      0.708      0.707      0.012
   0, 67.5       3413       4181     -0.734     -0.707      0.013
  45, 22.5       3541       4420      0.700      0.707      0.012
  45, 67.5       3514       4286      0.678      0.707      0.011

   Same Angle <AB> = +nan
   Oppo Angle <AB> = +nan
   CHSH: <= 2.0, Sim: 2.820, QM: 2.828
   Coincidence Efficiency:  81.4 %

Statistics of residuals between exact QM curve and Simulation
      Skew:               0
     Range: -0.02685 : -0.01165
    Length:               2
  Variance:       0.0001156
  Kurtosis:              -2
      Mean:        -0.01925


It's a beautiful simulation program! I continue to experiment with getting its data into R and analysing it in various other ways.

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 3:50 pm

To continue, your previous simulation model, Michel, was what I would call a clocked (pulsed, synchronised, discrete-time) model, or a model with event-ready detectors. At each discrete time step you create two particles. They go to two detectors. At each detector there is an outcome +1, -1, or "no detection". Let's do a classical CHSH experiment with this simulation model. Obviously, you might by chance see just any result between -4 and +4 for CHSH and simultaneously see any detection rate all the way up to 100%, but typically you will get something close to CHSH = 2.82. The pair detection rate was about 67%.

Theorem (Larsson): there is no way you can get this rate systematically above about 71% while maintaining CHSH systematically at about 2.828....

By "systematically" I mean that with large sample size, simultaneous violation of both these bounds by some small positive amount is extremely unlikely. For example you'll hardly ever have detection rate above 72% *and* CHSH above 2.83.

With tiny sample sizes almost anything could happen, obviously!

If you don't believe the theorem, then just try and give us a counter-example. Or read my own theorems and their proofs more carefully.
gill1109
Mathematical Statistician
 
Posts: 2049
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 4:31 pm

gill1109 wrote:Dear Michel, You get no response because you are claiming that I say things in my paper which I don't say. You take no notice of the words like "on average", "mean value", "with large probability". You take no notice of the careful mathematical proof of every precisely stated claim. So your criticism is empty. So I have no response.


If you want, we can go through your paper line-by-line. Here are some quotes from your paper:

Larsson & Gill wrote:"The proof consists of simple algebraic manipulations inside each of the two expressions on
the right hand side, followed by application of the triangle inequality on each expression."


This is no different from the algebraic manipulation in your more recent paper, which I've mentioned above: the factorization
ab + ab′ + a′b′ − a′b ≤ 2 --> a(b + b′) + a′(b′ − b) ≤ 2
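The factorization can be checked by brute force (my own quick check, not from either paper): for sign variables a, a′, b, b′ ∈ {−1, +1}, one of (b + b′) and (b′ − b) is always 0 and the other is ±2, so the CHSH combination is always exactly ±2 when all four terms come from one common set of values.

```python
# Verify the quoted factorization over all 16 sign combinations:
# ab + ab' + a'b' - a'b == a(b + b') + a'(b' - b), and |value| == 2.
from itertools import product

for a, a1, b, b1 in product([-1, 1], repeat=4):
    lhs = a*b + a*b1 + a1*b1 - a1*b
    rhs = a*(b + b1) + a1*(b1 - b)
    assert lhs == rhs          # the factorization is an identity
    assert abs(lhs) == 2       # hence the bound of 2 on a single set
print("factorization verified for all 16 sign combinations")
```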

You agree that this factorization cannot be done if you have 4 disjoint sets:
Larsson & Gill wrote:The original CHSH inequality is no longer valid, and the reason can be seen in the start of the proof where one wants to add...

The integrals on the right-hand side cannot easily be added when ΛAC′ =/= ΛAD′ , since we are taking expectations over different ensembles ΛAC′ and ΛAD′ , with respect to different probability measures.

The problem here is that the ensemble on which the correlations are evaluated changes with the settings, while the original Bell inequality requires that they stay the same. In effect, the Bell inequality only holds on the common part of the four different ensembles ΛAC′ , ΛAD′ , ΛBC′ , and ΛBD′ ...
i.e., for correlations of the form

E(AC′|ΛAC′ ∩ ΛAD′ ∩ ΛBC′ ∩ ΛBD′ ). (8)

Unfortunately our experimental data comes in the form

E(AC′|ΛAC′)

so we need an estimate of the relation of the common part to its constituents ... This is a purely theoretical construct, not available in experimental data, but we will relate it to experimental data below.
minkwe
 
Posts: 1151
Joined: Sat Feb 08, 2014 10:22 am

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 4:33 pm

Further down, you say:

Larsson & Gill wrote:Correlations are obtained on subsets of Λ, namely on

ΛAC′ , ΛAD′ , ΛBC′ , or ΛBD′ . (iv)

Then

E(AC′|ΛAC′ ) + E(AD′|ΛAD′ ) + E(BC′|ΛBC′ ) − E(BD′|ΛBD′ ) ≤ 4 − 2δ. (11)

Proof. The proof consists of two steps; the first part is similar to the proof of Theorem 1,
using the intersection ΛI = ΛAC′ ∩ ΛAD′ ∩ ΛBC′ ∩ ΛBD′ , (12)

on which coincidences occur for all relevant settings. This ensemble may be empty, but only
when δ = 0 and then the inequality is trivial, so δ > 0 can be assumed in the rest of the proof.


E(AC′|ΛI) + E(AD′|ΛI) +E(BC′|ΛI) − E(BD′|ΛI) ≤ 2. (13)


This is the crucial part: notice that we again have the original CHSH, because ΛI is a single set. We all agree that if this set is empty, we have an inequality with an upper bound of 4, just as I've explained many times, and then the original CHSH cannot be applied to experiments.

Permit me to emphasize the point: after admitting that the inequality cannot be derived for 4 disjoint sets, and is only valid on the parts of the 4 sets that are not disjoint, you now define a new set ΛI as the intersection of the 4 sets. If this set is empty, all bets are off, and the rest of your paper fails, as I have claimed.

Now it is easy to see that the set is empty: none of the particle pairs in any of the 4 sets measured in any EPR experiment ever performed, or performable in the future, is a member of any of the other sets. The sets used for calculating each of the terms are disjoint from all the others; therefore ΛI is a null set. Do you deny this?

As you can see, I'm arguing using precise mathematical statements based on what you wrote in your own paper. You can verify in my simulation that if you split the single source file into 4 disjoint sets of particles, as is done in the simulation, there will be no two particles that have the same set of hidden variables.
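The arithmetic consequence of the disjoint-sets claim can be illustrated schematically (this is my own toy construction of the claim, not a physical simulation): when each correlation is estimated on its own separate ensemble, nothing ties the four averages together, and the CHSH combination can reach 4.

```python
# Schematic illustration: four disjoint ensembles, each with its own
# average. Nothing constrains the combination, so it can reach 4.
import numpy as np

def ensemble(target, n=1000):
    """A disjoint set of n outcome products, all equal to `target` (+1/-1)."""
    return np.full(n, target)

E_ab   = ensemble(+1).mean()
E_ab1  = ensemble(+1).mean()
E_a1b1 = ensemble(+1).mean()
E_a1b  = ensemble(-1).mean()

S = E_ab + E_ab1 + E_a1b1 - E_a1b
print(S)  # 4.0 -- the bound of 2 does not follow without a common ensemble
```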
minkwe
 
Posts: 1151
Joined: Sat Feb 08, 2014 10:22 am

Re: New clocked EPR Simulation with 100% detection.

Postby gill1109 » Sat Mar 01, 2014 11:19 pm

Dear Michel,

You are *not* using precise mathematical statements based on what we wrote in our own paper, because you seem to be taking no notice at all of the mathematical definitions of the various sets and quantities.

I will try to explain just one more time, by translation of our mathematics into the terms of your program.

Run the first Python program "source.py" and create SrcLeft and SrcRight. They both contain 1 million particles, belonging in pairs.

We can feed SrcLeft into "station.py" for the fixed choice of angle a for Alice, and also for the fixed choice of angle b for Alice. The same for SrcRight into "station.py" for the fixed choice of angle a' [correction: should be b - thanks Fred] for Bob, and also for the fixed choice of angle b' for Bob.

Now I have four more files, two for Alice and two for Bob, each of length 1 million, all pertaining to the same original 1 million particle pairs, and still listed in the same order.

Merge these four files to generate a set of 1 million elements, such that the i'th element contains all the information which was generated by the four runs of station.py pertaining to the i'th particle pair.

For the i'th particle pair, the file contains the outcomes and detection times, on both sides of the experiment, for both of Alice's two possible settings, and for both of Bob's two possible settings.

This is the set Lambda. It's a set of size 1 million.

Lambda(a, b) is a subset of this set: it's just those particle pairs whose detection times under settings a and b would be so close that they would be counted as a pair by "analyse.py".

We can do the same with each pair of left and right settings - altogether four pairs.

The four sets are not disjoint.
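The merging argument above can be sketched in Python. This is a self-contained illustration with fake in-memory data standing in for the real station.py output files; the record format, time distribution, and coincidence window are all assumptions of mine, not the actual epr-clocked format.

```python
# Sketch of the merge: one row per particle pair, with a detection time
# for each of the four fixed-angle runs (Alice at a, a'; Bob at b, b'),
# all kept in the original particle order. Fake data, assumed format.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # stand-in for the 1 million pairs

# Fake detection times for the four fixed-setting runs.
times = {k: rng.exponential(1e-4, N) for k in ("a", "a'", "b", "b'")}

def coincident(s1, s2, window=2e-4):
    """Lambda(s1, s2): pairs whose detection times under these two
    settings are close enough to count as a coincidence."""
    return np.abs(times[s1] - times[s2]) < window

L_ab   = coincident("a", "b")
L_ab1  = coincident("a", "b'")
L_a1b  = coincident("a'", "b")
L_a1b1 = coincident("a'", "b'")

# All four masks are defined over the SAME N particle pairs, so their
# intersection is generally non-empty: the four sets are not disjoint.
common = L_ab & L_ab1 & L_a1b & L_a1b1
print(common.sum() > 0)  # True
```

The design point is that because every run of station.py preserves the original particle order, the four subsets are masks over one common index set, which is what makes their intersection meaningful.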
Last edited by gill1109 on Sun Mar 02, 2014 1:07 am, edited 1 time in total.
gill1109
Mathematical Statistician
 
Posts: 2049
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: New clocked EPR Simulation with 100% detection.

Postby FrediFizzx » Sat Mar 01, 2014 11:43 pm

Say what? I hope you meant a and a' for Alice and b and b' for Bob.
FrediFizzx
Independent Physics Researcher
 
Posts: 2075
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: New clocked EPR Simulation with 100% detection.

Postby minkwe » Sat Mar 01, 2014 11:59 pm

gill1109 wrote:Dear Michel,

You are *not* using precise mathematical statements based on what we wrote in our own paper, ....


I do not see any direct response here to the mathematical arguments I made against your paper. The question is: do you or do you not have a null set in EPR experiments? How can an intersection of 4 disjoint sets not be null? How can the rest of your paper be valid if you have a null set, as is clearly stated in your own words? Please explain to us why the set is not null in EPR experiments.

Your arguments about my simulation do not answer the question.
Secondly, if you read my README file you will notice the warning that the particles are not necessarily in pairs; they are paired only 99.9% of the time.
Thirdly, you are using the same source file each time with a different setting, which is equivalent to measuring the same particle more than once. I have already explained to you that this is a meaningless exercise: it can never be carried out in any doable experiment. Of course, if you do it like this, the 4 sets will not be disjoint, and my simulation will not violate the upper bound of 2. I already asked you to do this thought experiment using the other simulation, but you were not interested. It shows that using a single set, as was intended, will never violate the inequality, even though using the 4 disjoint sets, as is measured in any real experiment, violates it.

In any case, if this is what your paper is all about, then it confirms my claim that you are talking about impossible experiments. My argument that your paper is fatally flawed stands. Contrary to what you claimed earlier, I understand your paper very well; the error is clear for anyone to see. I would like to hear Jan's opinion on this too.
minkwe
 
Posts: 1151
Joined: Sat Feb 08, 2014 10:22 am
