47 posts
• Page **3** of **3** • 1, 2, **3**

The deterministic upper bound is 4. However, when the N setting pairs which are actually chosen to be measured are independently sampled with equal probability from the four possibilities a b; a b'; a' b; a' b', then on average CHSH does not exceed 2.

- gill1109
- Mathematical Statistician
**Posts:**2812**Joined:**Tue Feb 04, 2014 10:39 pm**Location:**Leiden

gill1109 wrote:The deterministic upper bound is 4. However, when the N setting pairs which are actually chosen to be measured are independently sampled with equal probability from the four possibilities a b; a b'; a' b; a' b', then on average CHSH does not exceed 2.

For 4 separate sets of particles, each of the averages <a1b1>, <a2b2'>, <a3'b3>, <a4'b4'> is independent of the others. Each of those terms has an upper bound of +1 and a lower bound of -1. Of course, in any specific set the averages can take any value between those two extremes, but we are only interested in the extremes for deriving the inequality. Obviously, then, <a1b1> + <a2b2'> + <a3'b3> - <a4'b4'> must have an upper bound of no less than 4, IF we have 4 different independent (aka disjoint) sets of particles. It does not matter how randomly or non-randomly you sample the 4 sets; so long as the 4 terms above are independent, the upper bound is clearly 4, not less, not more.

The only way the upper bound of the expression <a1b1> + <a2b2'> + <a3'b3> - <a4'b4'>, measured from 4 sets of particles, can be less than 4 is if we introduce dependencies between the sets of particles. To see this, notice that when the first three terms are each at their maximum of +1, to have an upper bound less than 4 the last term must not be at its minimum of -1. If the last term is at -1, then the only way to have an upper bound less than 4 is by forcing at least one of the other 3 terms away from its maximum of +1. Either way, we have introduced a dependency between separate sets of particles which should be independent, so that values in one set of particles now impose conditions on a separate set of particles. It is also clear that it is impossible for this expression to exceed 4 no matter how you sample the individual values; that is what is meant by "upper bound". This simple proof demonstrates that there must be a serious error in any proof claiming an upper bound of less than 4 for independent sets of particles (like those measured in EPR experiments). The two are mutually contradictory.

I emphasize "upper bound" many times to make it very clear that the averages can take any value between the extremes but cannot exceed the extremes in either direction. This is why I have argued that the results from experiments and QM are being compared with the wrong inequality. The correct inequality should be <a1b1> + <a2b2'> + <a3'b3> - <a4'b4'> <= 4 (which is not the CHSH). I believe my proof above, combined with the thought experiment using my simulation that I presented earlier, provides more support for the argument that Bell's theorem is wrong.
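The algebraic point above can be checked directly. The sketch below (hypothetical data, not a physical model) builds four disjoint sets of ±1 outcome pairs: because the four averages are computed from separate sets, each can be pushed to its own extreme, and the expression reaches 4 — while no sampling of ±1 values can ever push it past 4.

```python
import random

def corr(pairs):
    """Average product over a set of (a, b) outcome pairs, each outcome +1 or -1."""
    return sum(a * b for a, b in pairs) / len(pairs)

N = 1000
# Four disjoint sets of particle pairs: each term can sit at its own extreme
# independently of the others, since no outcome is shared between sets.
set1 = [(+1, +1)] * N          # <a1 b1>   = +1
set2 = [(+1, +1)] * N          # <a2 b2'>  = +1
set3 = [(+1, +1)] * N          # <a3' b3>  = +1
set4 = [(+1, -1)] * N          # <a4' b4'> = -1

S = corr(set1) + corr(set2) + corr(set3) - corr(set4)
print(S)  # 4.0 -- the bound for disjoint, independent sets

# With arbitrary random +/-1 outcomes the value is smaller in practice,
# but can never exceed 4, since each term lies in [-1, +1].
def rand_set():
    return [(random.choice([-1, 1]), random.choice([-1, 1])) for _ in range(N)]

S_rand = corr(rand_set()) + corr(rand_set()) + corr(rand_set()) - corr(rand_set())
assert -4 <= S_rand <= 4
```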

- minkwe
**Posts:**1441**Joined:**Sat Feb 08, 2014 10:22 am

gill1109 wrote:However, when the N setting pairs which are actually chosen to be measured are independently sampled with equal probability from the four possibilities a b; a b'; a' b; a' b', then ....

Although not really relevant to the argument in my previous post, I thought I should address one other point: a specific outcome of +1 or -1 is a result of the particle hidden variables (say λ), the instrument hidden variables (say γ), and the freely chosen setting α = (a b; a b'; a' b; a' b'). Therefore we can say each outcome is a function of three variables

f(α,λ,γ)

Of those, only α is known and controllable by the experimenters. Therefore only α can be truly randomly sampled (see http://en.wikipedia.org/wiki/Bertrand_p ... robability for why it is not possible to randomly sample a "hidden variable"). Therefore, although we may randomly sample the angles α, it does not follow that we have randomly sampled the outcomes of the function f(α,λ,γ).
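The distinction can be made concrete with a toy sketch (the outcome function `f` here is hypothetical, chosen only to depend on all three variables): the experimenter draws α uniformly, but λ and γ are drawn from a distribution the experimenter neither sees nor controls, so the resulting sample of outcomes is not a uniform sample of f's domain.

```python
import random

def f(alpha, lam, gamma):
    """Hypothetical outcome function of the setting and two hidden variables."""
    return 1 if (alpha + lam + gamma) % 2 == 0 else -1

outcomes = []
for trial in range(10):
    alpha = random.randrange(4)      # freely chosen: uniform over the 4 setting pairs
    lam = random.randrange(1000)     # particle hidden variable: drawn by Nature, unseen
    gamma = random.randrange(1000)   # instrument hidden variable: drawn by Nature, unseen
    outcomes.append(f(alpha, lam, gamma))

# alpha was sampled uniformly, but the distribution of f(alpha, lam, gamma)
# also depends on Nature's (unknown) distribution of lam and gamma.
print(outcomes)
```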

- minkwe
**Posts:**1441**Joined:**Sat Feb 08, 2014 10:22 am

I agree. Alice and Bob can randomly sample settings, or fix them to desired pre-set values, e.g. alpha = 0 degrees, beta = 30 degrees. The hidden variables in the source and in the detectors can't be controlled by Alice and Bob. Nature samples them from Nature's probability distribution.

- gill1109
- Mathematical Statistician
**Posts:**2812**Joined:**Tue Feb 04, 2014 10:39 pm**Location:**Leiden

The recommendations in the initial message still look good without any need of revision. However, different purposes impose different requirements that in some cases may conflict with my recommendations. In particular, the use of files restricts the number of particle pairs. In addition, when the intent is to simulate a particular experiment, the most important consideration is to reproduce that experiment as accurately as possible; the same applies, to a lesser extent, when simulating experiments of a particular type.

My intent was to make obvious that the simulated model is local and realistic (in some naive sense) and to prevent secret use of (at least known) loopholes, while avoiding other restrictions. So far my attempt looks reasonably good.

- Mikko
**Posts:**163**Joined:**Mon Feb 17, 2014 2:53 am

Mikko wrote:The recommendations in the initial message still look good without any need of revision. However, different purposes impose different requirements that in some cases may conflict with my recommendations. In particular, the use of files restricts the number of particle pairs. In addition, when the intent is to simulate a particular experiment, the most important consideration is to reproduce that experiment as accurately as possible; the same applies, to a lesser extent, when simulating experiments of a particular type.

My intent was to make obvious that the simulated model is local and realistic (in some naive sense) and to prevent secret use of (at least known) loopholes, while avoiding other restrictions. So far my attempt looks reasonably good.

Indeed. A very nice codification of sensible recommendations.

- gill1109
- Mathematical Statistician
**Posts:**2812**Joined:**Tue Feb 04, 2014 10:39 pm**Location:**Leiden

There is another way to ensure compliance of computer programs with locality, freedom, etc. Let's focus on what I call clocked experiments (synchronized; pulsed; discrete time; event-ready detectors; ...).

There is just one program. It simulates the source, two particles, and two detectors. We initialize it and save the initial state (the seed) of the pseudo RNG so that it can be reinitialized, and given the same inputs, would produce identical outputs.

We give it a setting chosen by Alice and a default ("empty") setting from Bob, meaning Alice measures her particle and Bob does nothing. Now we reset and rerun with the same pseudo-RNG seed, but with the roles reversed. Now we have obtained one outcome in each wing of the experiment, for just one pair of particles, one setting of Alice, and one setting of Bob.

Locality is enforced, not by the rules of separation of three programs, but by the procedure whereby we "interrogate" just one program!

We can now continue and get a second pair of outcomes for a second pair of settings, in the same way.

A continuous time experiment can be treated in the same way by dividing time into, say, microseconds. Alice and Bob supply the settings for one microsecond. Two runs of one program generate Alice's outcomes and Bob's outcomes, separately.
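A minimal sketch of this protocol, assuming a toy detector model (the function `experiment` and its threshold rule are illustrative inventions, not anyone's actual simulation): one program simulates source and both detectors, a saved seed makes it reinitialisable, and each trial is obtained by two runs with the roles reversed, so neither wing's outcome can depend on the other wing's setting.

```python
import random

def experiment(seed, alice_setting, bob_setting):
    """One clocked trial: source plus both detectors in a single program.
    A setting of None means 'do nothing' in that wing. Reinitialising with
    the same seed reproduces the same hidden-variable draws exactly."""
    rng = random.Random(seed)            # saved pseudo-RNG state
    lam = rng.uniform(0.0, 360.0)        # source hidden variable, shared by both wings

    def detect(setting):
        if setting is None:              # 'empty' setting: this wing does nothing
            return None
        noise = rng.uniform(-1.0, 1.0)   # instrument hidden variable
        # Toy threshold rule producing a +1/-1 outcome:
        return 1 if ((setting - lam) % 360.0 < 180.0) != (noise < -0.9) else -1

    return detect(alice_setting), detect(bob_setting)

# Run 1: Alice measures at 0 degrees, Bob does nothing.
a, _ = experiment(seed=42, alice_setting=0.0, bob_setting=None)
# Run 2: reset with the same seed, roles reversed: Bob measures at 30 degrees.
_, b = experiment(seed=42, alice_setting=None, bob_setting=30.0)
# One pair of outcomes for one particle pair, obtained without either
# run ever seeing both settings at once.
print(a, b)
```

Repeating the two-run procedure with fresh seeds yields the second and subsequent trials; locality is enforced by the interrogation procedure, not by separating the code into three programs.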

- gill1109
- Mathematical Statistician
**Posts:**2812**Joined:**Tue Feb 04, 2014 10:39 pm**Location:**Leiden

