A new simulation of the EPR-Bohm correlations

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: A new simulation of the EPR-Bohm correlations

Postby Heinera » Sun Jul 12, 2015 12:46 pm

minkwe wrote:
Heinera wrote:
minkwe wrote: please tell us what the correct QM predictions should be for <A1B1>, <A2D2>, <C3B3> and <C4D4>, since those are the terms in the CHSH inequality.

That is an expression that only makes sense for a LHV model, not for QM. So QM does not predict anything for that expression.

Get your head out of the sand already. If QM does not predict anything for those terms, how then can any sane person claim that the QM prediction for those terms violates the CHSH? How can somebody like yourself, who believes it is not possible to provide QM values for those terms, also believe Bell's theorem, which is based on substituting QM predictions for those terms into the expression?

It is not reasonable to believe mutually contradictory statements at the same time. So again, the CHSH is

|<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2

Show us exactly how it is possible for QM to violate this inequality.

There is no problem here, since QM can only predict the value of |<AB> + <AD> + <CB> - <CD>|. And it predicts a maximum value of 2√2.
Heinera
 
Posts: 378
Joined: Thu Feb 06, 2014 12:50 am

Re: A new simulation of the EPR-Bohm correlations

Postby FrediFizzx » Sun Jul 12, 2015 1:10 pm

Heinera wrote:
minkwe wrote:
Heinera wrote:That is an expression that only makes sense for a LHV model, not for QM. So QM does not predict anything for that expression.

Get your head out of the sand already. If QM does not predict anything for those terms, how then can any sane person claim that the QM prediction for those terms violates the CHSH? How can somebody like yourself, who believes it is not possible to provide QM values for those terms, also believe Bell's theorem, which is based on substituting QM predictions for those terms into the expression?

It is not reasonable to believe mutually contradictory statements at the same time. So again, the CHSH is

|<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2

Show us exactly how it is possible for QM to violate this inequality.

There is no problem here, since QM can only predict the value of |<AB> + <AD> + <CB> - <CD>|. And it predicts a maximum value of 2√2.

Pretty typical of Bell diehards. The rules that apply to LHV models do not apply to QM models. The bottom line is that nothing can violate the LHV Bell inequalities. Not even QM with all things the same.
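As a quick numerical check of that bottom line (a sketch only; the pairing convention <AB> + <AD> + <CB> - <CD> and ±1 outcomes are assumed, and the function name is mine):

Code: Select all
```python
import itertools

def chsh(rows):
    """CHSH quantity for a fully filled table of (A, B, C, D) rows, entries +/-1."""
    n = len(rows)
    E = lambda i, j: sum(r[i] * r[j] for r in rows) / n
    # Alice's settings are A (col 0) and C (col 2); Bob's are B (col 1) and D (col 3).
    return abs(E(0, 1) + E(0, 3) + E(2, 1) - E(2, 3))

# Every single filled row gives exactly |A(B+D) + C(B-D)| = 2, and a longer table
# only averages signed per-row values lying in [-2, 2], so no filled table exceeds 2.
worst = max(chsh([row]) for row in itertools.product([1, -1], repeat=4))
print(worst)  # 2.0
```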
FrediFizzx
Independent Physics Researcher
 
Posts: 1139
Joined: Tue Mar 19, 2013 6:12 pm
Location: California, USA

Re: A new simulation of the EPR-Bohm correlations

Postby Heinera » Sun Jul 12, 2015 1:19 pm

FrediFizzx wrote:Pretty typical of Bell diehards. The rules that apply to LHV models do not apply to QM models.

Of course they do not apply. That's the whole point of Bell's theorem. LHV models necessarily self-impose rules that are too restrictive to reproduce the QM correlations.

Re: A new simulation of the EPR-Bohm correlations

Postby FrediFizzx » Sun Jul 12, 2015 1:24 pm

Heinera wrote:
FrediFizzx wrote:Pretty typical of Bell diehards. The rules that apply to LHV models do not apply to QM models.

Of course they do not apply. That's the whole point of Bell's theorem. LHV models necessarily self-impose rules that are too restrictive to reproduce the QM correlations.

LOL! So you admit that the Bell inequalities are "rigged". And it is not the point of Bell's theorem. Bell's theorem has no point if the inequalities are rigged.

Re: A new simulation of the EPR-Bohm correlations

Postby Heinera » Sun Jul 12, 2015 1:34 pm

FrediFizzx wrote:
Heinera wrote:
FrediFizzx wrote:Pretty typical of Bell diehards. The rules that apply to LHV models do not apply to QM models.

Of course they do not apply. That's the whole point of Bell's theorem. LHV models necessarily self-impose rules that are too restrictive to reproduce the QM correlations.

LOL! So you admit that the Bell inequalities are "rigged". And it is not the point of Bell's theorem. Bell's theorem has no point if the inequalities are rigged.

What on earth do you mean by "rigged"?

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Sun Jul 12, 2015 3:37 pm

Heinera wrote:There is no problem here, since QM can only predict the value of |<AB> + <AD> + <CB> - <CD>|. And it predicts a maximum value of 2√2.

I asked you to demonstrate violation of the CHSH, which is |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2. What you've shown is not the CHSH. According to point (3) of my argument, which you agree with already (basically), the upper bound of <A1B1> + <A2D2> + <C3B3> - <C4D4> is 4, and QM obviously does not violate that. It doesn't matter whether you have LHV or not. :lol:

Bell believers are using the expectation values predicted for one set of terms, <AB>, <AD>, <CB> and <CD>, to claim that an inequality which contains different terms, <A1B1>, <A2D2>, <C3B3> and <C4D4>, was violated. Nothing can violate either inequality. :shock:

The claimed violation is simply rigging. "Bell's theorem" is either fraud or evidence of grave mathematical incompetence. It is probably time for you to review De Raedt's, Rosinger's and Adenier's papers:

A Refutation of Bell's Theorem
http://arxiv.org/pdf/quant-ph/0006014v3
Adenier wrote:Bell's Theorem was developed on the basis of considerations involving a linear combination of spin correlation functions, each of which has a distinct pair of arguments. The simultaneous presence of these different pairs of arguments in the same equation can be understood in two radically different ways: either as `strongly objective,' that is, all correlation functions pertain to the same set of particle pairs, or as `weakly objective,' that is, each correlation function pertains to a different set of particle pairs.
It is demonstrated that once this meaning is determined, no discrepancy appears between local realistic theories and quantum mechanics: the discrepancy in Bell's Theorem is due only to a meaningless comparison between a local realistic inequality written within the strongly objective interpretation (thus relevant to a single set of particle pairs) and a quantum mechanical prediction derived from a weakly objective interpretation (thus relevant to several different sets of particle pairs).


de Raedt wrote:http://rugth30.phys.rug.nl/pdf/bqt13.pdf
These inequalities derive from the rules of arithmetic and the non negativity of some functions only. A violation of these inequalities is at odds with the commonly accepted rules of arithmetic or, in the case of quantum theory, with the commonly accepted postulates of quantum theory ... A violation of the EBBI cannot be attributed to influences at a distance.


Rosinger wrote:https://hal.archives-ouvertes.fr/hal-00824124/document
It was shown in [1], cited in the sequel as DRHM, that upon a correct use of the respective statistical data, the celebrated Bell inequalities cannot be violated by quantum systems. This paper presents in more detail the surprisingly elementary, even if rather subtle related basic argument in DRHM
...
The inequalities (17) are purely mathematical. In particular, their proof depends in absolutely no way on anything else, except the mathematical properties of the set Z of positive and negative integers, set seen as a linearly ordered ring, [9]. As for the inequalities (16), they are a direct mathematical consequence of the inequalities (17), and thus again, their proof depends in absolutely no way on anything else, except the mathematical properties of the set R of real numbers, set seen as a linearly ordered field, [9]. It is, therefore, bordering on the amusing tinted with the ridiculous, when any sort of so called “physical” meaning or arguments are enforced upon these inequalities - be it regarding their proof, or their connections with issues such as realism and locality in physics - and are so enforced due to a mixture of lack of understanding of rather elementary and quite obviously simple mathematics
minkwe
 
Posts: 968
Joined: Sat Feb 08, 2014 9:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby FrediFizzx » Sun Jul 12, 2015 10:36 pm

How "dense" does one have to be to not understand that nothing can violate Bell's inequalities? It is mathematically impossible. Unless you cheat and use a different inequality and say it is the same as the one you are trying to violate. :lol: The old strawman type of trick. The Bell believers love to do that all the time.

However, it is possible to reproduce the EPRB QM prediction with a local realistic model. So Bell's "theorem" gets a double whammy against it. And... I don't want to hear about any loopholes since the inequalities are irrelevant anyways.

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Mon Jul 13, 2015 1:06 am

minkwe wrote:
Jochen wrote:
Code: Select all
  Alice    |    Bob   
  A  |  C  |  B  |  D 
(A_i)|     |(B_i)|     
     |(C_j)|     |(D_j)
     |(C_k)|(B_k)|     
(A_l)|     |     |(D_l),



So do you believe QM cannot generate this spreadsheet?

Huh? Of course QM can produce this spreadsheet, that's where the troubles start! So let's go through this again. You make your measurements on the singlet state using appropriate observables, and put your outcomes in the above spreadsheet (so let's assume you perform each pair of measurements exactly N times). Then, you ask yourself: could these experimental values be explained in terms of a hidden variable model? Of course, you notice that on its own, this question doesn't permit an answer. So you add in the additional assumption of non-disturbance, i.e. that the value of an observable is independent of what other observables are measured in the same round, on the same pair of particles.

Then, the question becomes: are there values that I can write into the empty parts of the table, such that the same correlations are produced? This is exactly the same as asking whether there is a LHV model that can produce the observed data. And of course, the answer is negative: no matter what values you add in, the effect will always be to drive down the value of the CHSH quantity to its bound of two, since this bound holds for all completely filled tables, that is, for all LHV models.

Basically, in the first N rows, the value of <AB> is equal to the QM prediction E(a,b). In the second, third, and fourth N rows, the same is true of the other pairs of observables. But adding the values predicted by the hidden variable model into the blank entries will inexorably change this: it is not possible for all the correlators to equal the QM predictions across the whole table. So, no completely filled table can exceed the bound of two for the CHSH quantity; hence, there is no way to complete QM with (local) hidden variables and still obtain a violation. Again, if you disagree, show a completely filled table that violates CHSH.
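A small brute-force illustration of the "driving down" (the data are hypothetical: each one-row section's measured pair is chosen extremal, and the labels follow the A/C-Alice, B/D-Bob convention of the spreadsheet above):

Code: Select all
```python
import itertools

# Stacked table with blanks: each 1-row section measures one pair only, and the
# measured entries (hypothetical data) make each section's own correlator
# extremal: AB = +1, AD = +1, CB = +1, CD = -1.
sections = [
    {'A': 1, 'B': 1},
    {'A': 1, 'D': 1},
    {'C': 1, 'B': 1},
    {'C': 1, 'D': -1},
]
blanks = [(i, k) for i, s in enumerate(sections) for k in 'ABCD' if k not in s]

best = 0.0
for fill in itertools.product([1, -1], repeat=len(blanks)):
    rows = [dict(s) for s in sections]
    for (i, k), v in zip(blanks, fill):
        rows[i][k] = v
    E = lambda x, y: sum(r[x] * r[y] for r in rows) / len(rows)
    best = max(best, abs(E('A', 'B') + E('A', 'D') + E('C', 'B') - E('C', 'D')))

# Section by section the four correlators suggest 1 + 1 + 1 - (-1) = 4, yet no
# way of filling the 8 blank cells pushes the full-table CHSH quantity past 2.
print(best)  # 2.0
```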

minkwe wrote:The point remains that your spreadsheet above is not a 4xN spreadsheet. It is 4 separate 2xN spreadsheets simply stacked on top of each other. For the CHSH to apply to it, it must be possible to perform row permutations within each of the sections such that all the A_i outcomes almost match the A_l outcomes, the C_j outcomes almost match the C_k outcomes, the B_i outcomes almost match the B_k outcomes, and the D_j outcomes almost match the D_l outcomes.

And this reordering can always be done if the CHSH inequality holds. In each of the four sections, you will have outcomes with A=a, B=b, C=c, and D=d, occurring with probability P(A=a, B=b, C=c, D=d). So reorder the first section such that you have first all of the A=+1 entries, then all of the A=-1 entries. There will be N·P(A=+1) +1 entries, and N·P(A=-1) -1 entries, where P(A) is obtained from P(A,B,C,D) by summing over all other observables.

Then, reorder the section of A=+1 entries independently such that you put all those where B=+1 first, and then those with B=-1. Do the same thing with the A=-1 section. In the first block, where A=+1 and B=+1, there will be N·P(A=+1, B=+1) entries; in the next block, there will be N·P(A=+1, B=-1) entries; and so on.

Then, consider the A=+1, B=+1 block, and reorder it so that all of the entries with C=+1 come first, and then all of the entries with C=-1. Do the same in the A=+1, B=-1, the A=-1, B=+1, and the A=-1, B=-1 blocks. The A=+1, B=+1, C=+1 block will then have N·P(A=+1, B=+1, C=+1) entries, the A=+1, B=+1, C=-1 block will have N·P(A=+1, B=+1, C=-1) entries, and so on.

Finally, then, reorder the A=+1, B=+1, C=+1 block such that the entries with D=+1 come first, and then the entries with D=-1. Do the same for the A=+1, B=+1, C=-1 block, and so on.

Now, you will have 16 blocks in the first list, each of which has a definite number of entries. The first block, A=+1, B=+1, C=+1, D=+1, will have N·P(A=+1, B=+1, C=+1, D=+1) entries, and the final block will have N·P(A=-1, B=-1, C=-1, D=-1) entries. Now simply do the same thing with the remaining three sublists: reorder the list according to the value of A, then reorder them according to the value of B in each sub-list with A having a constant value, and so on. What you will get out, in each case, is a list in which the first block has N·P(A=+1, B=+1, C=+1, D=+1) entries, the last block has N·P(A=-1, B=-1, C=-1, D=-1) entries, and generally each block of values A=a, B=b, C=c, and D=d has N·P(A=a, B=b, C=c, D=d) entries. That is, you will have four identical lists; and thus, in particular, the column A_i is the same as the column A_l, the column B_i is the same as the column B_k, the column C_j is the same as the column C_k, and the column D_j is the same as the column D_l, as you wished.

In fact, this is nothing but a needlessly complicated proof that |<AB> + <AD> + <CB> - <CD>| ≤ 2, since such a reordering is always possible if there exists a joint PD, and the existence of a joint PD implies the validity of the CHSH inequality.
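The reordering recipe can be sketched in a few lines (a toy completed table with random ±1 rows, seed fixed; the point is only that sorting is a row permutation):

Code: Select all
```python
import random

random.seed(0)
# A completed 4xN table: each row is a joint (A, B, C, D) assignment in {+1, -1}.
table = [tuple(random.choice([1, -1]) for _ in range(4)) for _ in range(50)]

# Make four copies (one per CHSH term) and reorder each one independently by
# sorting on (A, B, C, D).  A sort is just a row permutation, so every
# correlator inside a copy is unchanged -- and all four copies come out identical.
copies = [sorted(table) for _ in range(4)]
assert copies[0] == copies[1] == copies[2] == copies[3]

E = lambda rows, i, j: sum(r[i] * r[j] for r in rows) / len(rows)
assert E(copies[0], 0, 1) == E(table, 0, 1)   # <AB> unchanged by the reordering
```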
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 1:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Mon Jul 13, 2015 2:32 am

Or, to put this more simply, just take the first list, containing the A_i and B_i outcomes, and order them such that you have first the outcomes where A=+1, B=+1, C=+1 and D=+1, then the outcomes where A=+1, B=+1, C=+1 and D=-1, then the outcomes where A=+1, B=+1, C=-1 and D=+1, and so on; then do the same thing with the remaining three lists, and you have four identical lists, and hence, in particular, all the A columns will agree, as will the B, C, and D columns, and you're done.

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jul 13, 2015 6:31 am

Jochen wrote:Huh? Of course QM can produce this spreadsheet, that's where the troubles start! So let's go through this again. You make your measurements on the singlet state using appropriate observables, and put your outcomes in the above spreadsheet (so let's assume you perform each pair of measurements exactly N times). Then, you ask yourself: could these experimental values be explained in terms of a hidden variable model? Of course, you notice that on its own, this question doesn't permit an answer. So you add in the additional assumption of non-disturbance, i.e. that the value of an observable is independent of what other observables are measured in the same round, on the same pair of particles.

Seriously!? Do you have some kind of memory loss syndrome or something? Please flip back 10 pages and start reading. YOU ARE REPEATING DEBUNKED ARGUMENTS!!!


Jochen wrote:Then, the question becomes: are there values that I can write into the empty parts of the table, such that the same correlations are produced? This is exactly the same as asking whether there is a LHV model that can produce the observed data. And of course, the answer is negative: no matter what values you add in, the effect will always be to drive down the value of the CHSH quantity to its bound of two, since this bound holds for all completely filled tables, that is, for all LHV models.

Again, you are repeating debunked arguments! If you start from a 4xN spreadsheet the CHSH follows immediately. This is a mathematical tautology affirmed and explicated in points (1) and (2) of my argument. You have completely missed the point. The only relevant question remaining is point (3). Heine agrees with point (3), Richard Gill agrees with point (3),

gill1109 wrote:He knows a thing or two about statistical degrees of freedom and he knows that the CHSH bound does not apply in this case. The only certain bound one can give is 4. I must say, that he's absolutely right there.


but you continue to disagree with it. The only remaining question is under what circumstances we can apply the inequality derived in point (1) to the independent sets from point (3). Again, I have to repeat this because it appears you need repetition to follow simple arguments. A 4xN spreadsheet of outcomes (A, C, B, D) implies the CHSH (i.e. |<AB> + <AD> + <CB> - <CD>| ≤ 2). STOP trying to convince us of this fact, which has never been contested. In fact, this is the only requirement necessary to obtain the CHSH: the presence of a 4xN spreadsheet of outcomes! It is a mathematical tautology; nothing can violate it, not even QM.

4 independent sets of pairs of outcomes do not produce a 4xN spreadsheet. They produce 4 separate independent 2xN spreadsheets of outcomes, which implies |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 4. It does not matter if you stack them on top of each other; you do not get a 4xN spreadsheet. The point you still do not get is that the inequality derived from a FULL 4xN spreadsheet does not apply to your stacked 4xN spreadsheet with empty spaces. Correlations are calculated using outcomes, not spaces! The only remaining question is: what conditions need to apply in order for the inequality derived for a 4xN spreadsheet of outcomes to apply to 4 independent 2xN spreadsheets of outcomes?

Please read the above again carefully and make sure you understand it:
- 4xN spreadsheet of outcomes ⇒ |<AB> + <AD> + <CB> - <CD>| ≤ 2 (points 1, 2)
- 4 independent 2xN spreadsheets of outcomes ⇒ |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 4 (point 3)
- Under what conditions can we conclude that 4 independent 2xN spreadsheets of outcomes ⇒ |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2?
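The bound-4 point admits a one-screen numerical sketch (the data below are a hypothetical extreme, not from any experiment; run labels follow the pairing convention assumed in this thread):

Code: Select all
```python
# Four independent 2xN runs, one per CHSH term; nothing ties the runs together,
# so each correlator can sit at its own extreme and the combination reaches 4.
run_AB = [(1, 1)] * 10      # every pair (A1, B1) agrees    -> <A1B1> = +1
run_AD = [(1, 1)] * 10      # every pair (A2, D2) agrees    -> <A2D2> = +1
run_CB = [(1, 1)] * 10      # every pair (C3, B3) agrees    -> <C3B3> = +1
run_CD = [(1, -1)] * 10     # every pair (C4, D4) disagrees -> <C4D4> = -1

E = lambda run: sum(a * b for a, b in run) / len(run)
S = E(run_AB) + E(run_AD) + E(run_CB) - E(run_CD)
print(S)  # 4.0
```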

In other words, if those conditions are not met, it would be wrong to conclude that |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2. When I asked you for the QM predictions for the terms <A1B1>, <A2D2>, <C3B3> and <C4D4>, you gave me the QM predictions for <AB>, <AD>, <CB> and <CD>, claiming that you are allowed to do this because the terms are equivalent. This is point (4), which you agree with.

So we did a little exercise to figure out under what conditions the <A1B1>, <A2D2>, <C3B3>, <C4D4> terms will be equivalent to the <AB>, <AD>, <CB>, <CD> terms. Those are the conditions which would allow you to apply the inequality to the independent terms, i.e. |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2, and those are the same conditions which would allow you to use the same QM predictions for <AB> and <A1B1> like you did. Those conditions have been laid out in points (5), (6) and (7), and they have nothing whatsoever to do with LHV theories or 4xN spreadsheets; only 2xN spreadsheets are involved in those points. They have to do with what we mean when we say two expectation values are equal. If you say <A1B1> = <AB> and <A2D2> = <AD> and <C3B3> = <CB> and <C4D4> = <CD>, like you did, then the conditions in (5), (6) and (7) apply immediately. It doesn't matter what kind of theory/experiment is producing the 2xN spreadsheet. If you say all those terms are equal, then it follows that the rearrangements in points (5), (6) and (7) must be possible. Again, since it is difficult to get the point across without repetition: those conditions must apply even to results of experiments or results from QM, if those expectations are the same as you claim they are.

And if those rearrangements can be made, then the inequality follows, even for QM! But point (8) shows that those conditions are impossible to meet. In case you still haven't gotten it, here is a summary: the conditions which allow you to give the same QM predictions for <AB> and <A1B1> are also the conditions which allow the inequality |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2 to apply, and those conditions can not be met. Therefore it is wrong to conclude that the QM predictions for <A1B1>, <A2D2>, <C3B3>, <C4D4> are the same as the QM predictions for <AB>, <AD>, <CB>, <CD>, like you did. Similarly, it is wrong to conclude that the inequality ≤ 2 applies to <A1B1> + <A2D2> + <C3B3> - <C4D4>.

Jochen wrote:In fact, this is nothing but a needlessly complicated proof that |<AB> + <AD> + <CB> - <CD>| ≤ 2, since such a reordering is always possible if there exists a joint PD, and the existence of a joint PD implies the validity of the CHSH inequality.

Duh! If a joint PD of outcomes (a 4xN spreadsheet) exists, the CHSH follows immediately. This is point (1). You can make 4 new copies of pairs and rearrange them to your heart's content. The 4 2xN spreadsheets obtained this way will not be independent of each other; it will always be possible to undo the rearrangements to retrieve your original 4xN spreadsheet, and the CHSH, |<AB> + <AD> + <CB> - <CD>| ≤ 2, will continue to apply to the 4 rearranged 2xN spreadsheets. This is point (2). The only remaining question is under what conditions we can reduce 4 separate independent 2xN spreadsheets of outcomes into a single 4xN spreadsheet. Point (8) shows that this cannot be done. Therefore you are wrong to conclude that for LHV theories it can always be done. Your confusion arises because you think these 4 2xN spreadsheets, which were generated by combining paired columns from our original 4xN spreadsheet, are the same as 4 independent 2xN spreadsheets. You are woefully wrong here. The 2xN spreadsheets in point (2) are not independent. Those in point (3) are independent.

It bears repeating one more time. The only question remaining is under what conditions are the 4 2xN spreadsheets from point (2), which are not independent from each other, statistically equivalent to the 4 independent 2xN spreadsheets from point (3), as you claim they are in point (4). Point (8) shows that those conditions are impossible. Therefore point (4), which is your claim, is false and the QM predictions are not the same for the two scenarios, and the inequalities are not the same for the two scenarios.

This has been systematically discussed in great detail and carefully laid out. Like I suspected, it does not appear you fully understood the argument and how all 8 points fit together.

Hopefully now, you appreciate the point I was making in the dice example:
minkwe wrote:If I toss a die such that it settles with the 5 exposed sides facing up (U), north (N), south (S), east (E) and west (W), I ask two experimenters (Alice and Bob) to pick a direction from the exposed sides to observe, one after the other, and we denote the outcomes (A) and (B) respectively. Let us suppose that the act of reading the die destroys the chosen face and makes it unavailable for the second experimenter to observe. Once a direction is picked, say N, the remaining experimenter can only pick one of the remaining 4 directions U, S, E, W. Now, based on the description of the experiment, we derive a certain inequality f(<A>, <B>) ≤ c. For the purpose of this illustration, it does not matter what f is. Now it is obvious that the two expectation values <A> and <B> are not independent, in a number of ways. First, the fact that the paired outcomes (A,B) are both faces of the same die carries with it a specific kind of dependence due to symmetry. But in addition, there is a dependence because whatever Alice picked is unavailable to Bob, and vice versa. Therefore the relationship f(<A>, <B>) ≤ c will include those dependencies. If instead of doing those measurements on the same die, they decide to toss 2 separate independent dice, it would be foolish to claim that the relationship continues to hold simply because the two dice are of the same type! Obviously, even if the dice are identical, the fact that you have 2 separate independent dice affords more degrees of freedom for the experimenters, and therefore the expectation values calculated from a single die will generally not agree with those from 2 independent dice.

NOTE TO HEINE: The point of the example is simply this: you cannot recover all the dependencies present in the results of tossing a single die by using only outcomes obtained from tossing two independent dice, even if both dice are local realistic. Therefore an inequality which makes use of those dependencies should not be expected to hold for the two independent dice, even if both are local realistic.
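A toy simulation of the contrast (the setup is my own simplification: Alice reads the up face and Bob a side face of one die, versus each reading their own independent die; we compare only how often the two readings agree):

Code: Select all
```python
import random
random.seed(1)

def one_die():
    # Alice and Bob read two *different* exposed faces of the same die.
    faces = random.sample(range(1, 7), 6)  # a random orientation: all six values placed
    up, north = faces[0], faces[1]         # two distinct faces -> two distinct values
    return up, north

def two_dice():
    # Each experimenter reads the up face of their own independent die.
    return random.randint(1, 6), random.randint(1, 6)

N = 20000
p_same_one = sum(a == b for a, b in (one_die() for _ in range(N))) / N
p_same_two = sum(a == b for a, b in (two_dice() for _ in range(N))) / N
# Same die: the readings can never agree (p = 0); independent dice: p is
# roughly 1/6 -- the shared-die dependence is simply not reproducible.
print(p_same_one, p_same_two)
```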
minkwe
 
Posts: 968
Joined: Sat Feb 08, 2014 9:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jul 13, 2015 7:24 am

Another example:
It is well known that for a single coin toss, the probability of obtaining H and T obeys the relationship P(H) + P(T) = 1. But the usual discussion about coin tosses, in which we toss the coin on a table or floor and read the pre-existing side, is too contrived. Think instead of the following: we have two devices into which we toss the coin. One is an H-reading device and the other is a T-reading device. Say, for example, that each device has an LED which illuminates if the appropriate side of the coin is up, but does not illuminate otherwise. Let us use just the H-reading device, toss our coin N times, and every time the light illuminates we increment our H count; every time the device does not illuminate, we increment our T count. At the end we calculate the relative frequencies P(H) and P(T), and we will find that the relationship P(H) + P(T) = 1 holds.

We can repeat this procedure using just the T-reading device and still verify the same relationship holding exactly: P(H) + P(T) = 1. But what if we toss identical coins N times into the H-reading device, and from that calculate just P(H), and N times into the T-reading device, and from that calculate just P(T)? It would be quite naive to think that the relationship P(H) + P(T) = 1 continues to hold. In fact, such a scenario can violate the relationship drastically, up to P(H) + P(T) = 2.
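A sketch of this (fair coin assumed for the single-run case; the extreme two-run case is constructed by hand just to exhibit the algebraic limit):

Code: Select all
```python
import random
random.seed(2)

def toss():
    return random.choice(['H', 'T'])

# One run, one device: the H-device's light either fires (H) or not (T),
# so both frequencies come from the SAME tosses and must sum to 1.
run = [toss() for _ in range(10000)]
p_h = sum(x == 'H' for x in run) / len(run)
p_t = sum(x == 'T' for x in run) / len(run)

# Two separate runs: P(H) estimated from one batch, P(T) from another.
# Nothing ties the two estimates to the same tosses, so the sum is only
# constrained to [0, 2]; the hand-built extreme below (every coin in the
# H-batch lands H, every coin in the T-batch lands T) actually reaches 2.
h_batch = ['H'] * 10000
t_batch = ['T'] * 10000
p_h2 = sum(x == 'H' for x in h_batch) / len(h_batch)
p_t2 = sum(x == 'T' for x in t_batch) / len(t_batch)
print(p_h + p_t, p_h2 + p_t2)  # ~1.0 and 2.0
```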

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Mon Jul 13, 2015 9:40 am

minkwe wrote:Seriously!? Do you have some kind of memory loss syndrome or something? Please flip back 10 pages and start reading. YOU ARE REPEATING DEBUNKED ARGUMENTS!!!

Making your bald pronouncements ALL CAPS does not make them any more true. One usually uses arguments for that purpose.

minkwe wrote:but you continue to disagree with it. The only remaining question is under what circumstances can we apply the inequality derived in point (1), to the independent sets from point (3). Again, I have to repeat this because it appears you need repetition to follow simple arguments. A 4xN spreadsheet of outcomes , implies the CHSH (ie ) STOP trying to convince us of this fact which has never been contested. In fact, this is the only requirement necessary to obtain the CHSH, the presence of a 4xN spreadsheet of outcomes! It is a mathematical tautology, nothing can violate it, not even QM.

But QM never produces the 4xN spreadsheet, at least not with all values filled; instead, it produces something that can be ordered into the table I gave above. And, as it happens, there exists no way to 'complete' this table: every single way of adding values will inexorably lead to a decrease of the correlations below the CHSH bound. This, and this alone, is the essence of Bell's theorem.
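For concreteness, the QM side of this: with the singlet correlation E(x, y) = -cos(x - y), suitable analyzer angles push the CHSH combination to 2√2 (the angles below are one standard optimal set, shifted to fit the sign convention assumed in this thread):

Code: Select all
```python
import math

# Singlet-state correlation for analyzer angles x, y: E(x, y) = -cos(x - y).
E = lambda x, y: -math.cos(x - y)

# Alice's settings A, C and Bob's settings B, D (radians), chosen so that the
# combination <AB> + <AD> + <CB> - <CD> reaches the quantum maximum.
A, C = 0.0, math.pi / 2
B, D = 5 * math.pi / 4, 3 * math.pi / 4

S = E(A, B) + E(A, D) + E(C, B) - E(C, D)
print(S)  # ~2.8284, i.e. 2*sqrt(2)
```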

minkwe wrote:4 independent sets of pairs of outcomes do not produce a 4xN spreadsheet. They produce 4 separate independent 2xN spreadsheets of outcomes, which implies |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 4. It does not matter if you stack them on top of each other; you do not get a 4xN spreadsheet.

No, I get a 4x4N spreadsheet, for which the CHSH inequality holds if it can be filled up with additional values. In particular, the CHSH inequality will also hold for all the four sub-lists you describe, if they can be completed. But, and this is the crucial point: for QM, the 4x4N spreadsheet can't be completed; hence, there exists no LHV theory completing it.

minkwe wrote:The point you still do not get is that the inequality derived from a FULL 4xN spreadsheet does not apply to your stacked 4xN spreadsheet with empty spaces.

I get that perfectly well: that's after all how QM violates the CHSH inequality, while no LHV model can. For the LHV model, it is always possible to find some completion of the table; for QM, it isn't (in certain cases).

minkwe wrote:The only remaining question is: what conditions need to apply in order for the inequality derived for a 4xN spreadsheet of outcomes to apply to 4 independent 2xN spreadsheets of outcomes?

The condition is the same as always: there must exist a LHV model. Then, each of the 4 independent spreadsheets can be completed, i.e. have values assigned for the unmeasured observables, and the CHSH inequality holds.

minkwe wrote:In other words, if those conditions are not met, it would be wrong to conclude that |<A1B1> + <A2D2> + <C3B3> - <C4D4>| ≤ 2. When I asked you for the QM predictions for the terms <A1B1>, <A2D2>, <C3B3> and <C4D4>, you gave me the QM predictions for <AB>, <AD>, <CB> and <CD>, claiming that you are allowed to do this because the terms are equivalent. This is point (4), which you agree with.

Again, this is trivial if you perform measurements on the same state (well, sufficiently many copies of particles in the same state, to be accurate).

minkwe wrote:And if those rearrangements can be made, then the inequality follows, even for QM! But point 8 shows that those conditions are impossible to meet.

I've shown you how these conditions can be met: if and only if there is a LHV model, you have a joint PD, and hence, can complete the spreadsheets, and rearrange them as desired. This fails for QM, because there, you don't have a LHV model. What's so difficult about that?

minkwe wrote:Therefore it is wrong to conclude that the QM predictions for <A1B1>, <A2D2>, <C3B3>, <C4D4> are the same as the QM predictions for <AB>, <AD>, <CB>, <CD>, like you did. Similarly, it is wrong to conclude that the inequality applies to <A1B1> + <A2D2> + <C3B3> - <C4D4>.

These are different things: the QM predictions are the same because you carry out experiments on the same state. The LHV bound obtains if you have a LHV model in each case (since again, then you can do your rearrangement).

minkwe wrote:Duh! If a joint PD of outcomes (4xN spreadsheet) exists, the CHSH follows immediately.

Exactly! And what I've shown is that you can do your rearrangements exactly if such a joint PD exists, and thus, if you have a LHV model, you can rearrange; if you don't have one, you obviously can't, and hence, the CHSH bound does not apply---but this is just a different way of saying that correlations which violate the CHSH inequality have no explanation in terms of LHVs.

minkwe wrote:The only remaining question is under what conditions can we reduce 4 separate independent 2xN spreadsheets of outcomes into a single 4xN spreadsheet. Point 8 shows that this cannot be done. Therefore you are wrong to conclude that for LHV theories, it can always be done.

But I've given an explicit recipe to do it in this case! This is a bit frustrating, to be honest.

minkwe wrote:Your confusion arises because you think these 4 2xN spreadsheets which were generated by combining paired columns from our original 4xN spreadsheet, are the same as 4 independent 2xN spreadsheets. You are woefully wrong here. The 2xN spreadsheets in point (2) are not independent. Those in point (3) are independent.

OK, define your use of 'independent'. I mean, of course you can make measurements on four separate systems such that for the first system, you get out the QM correlation prediction for A and B, for the second, the prediction for B and C, and so on, or even exceed them, yielding a value of 4; but of course, we're here talking about measurements on the same system (or again, sufficiently many copies thereof). And again, for each of the four systems, were you to measure the other observables, you'd find the CHSH inequality satisfied if there exists an LHV model.

So OK, maybe I can make sense of what you're saying in this way: you obtain your four different lists by measurements on what may not be the same system in all cases; then, of course, there is no bound for the CHSH quantity except the algebraic one. But of course, this only works if whenever say A and B are measured, the (LHV-) system that yields the appropriate correlation to violate the CHSH inequality is present, and if B and C are measured, then the system that yields the appropriate correlation for those is present, and so on, since no single LHV-system exists that can yield the appropriate correlation for all terms, which you now seem to agree on---such a system always yields a full table. But this of course can't be the case in actual Bell tests, since there, it is randomly decided which measurement will be made after the system has already been prepared, so there is no way to ensure that always the system yielding the appropriate correlations is produced. The best the source could do is select between those systems randomly, which however would never lead to a violation of the CHSH inequality.

It bears repeating one more time. The only question remaining is under what conditions are the 4 2xN spreadsheets from point (2), which are not independent from each other, statistically equivalent to the 4 independent 2xN spreadsheets from point (3), as you claim they are in point (4). Point (8) shows that those conditions are impossible. Therefore point (4), which is your claim, is false and the QM predictions are not the same for the two scenarios, and the inequalities are not the same for the two scenarios.

I suppose it then bears repeating one more time, too, that the QM predictions are the same if, in all four 2xN tables, you have performed measurements on copies of singlet states, and the rearranging is possible exactly if there is a hidden variable model applying to all four cases.

minkwe wrote:If I toss a die such that it settles with the 5 exposed sides facing up(U), north(N), south(S), east(E) and west(W). I ask two experimenters (Alice and Bob) to pick a direction from the exposed sides to observe and the outcomes which we will denote as (A) and (B) respectively, one after the other. Let us suppose that the act of reading the die destroys the chosen face and makes it unavailable for the second experimenter to observe the same face. Once a direction is picked, say N, the remaining experimenter can only pick one of the remaining 4 directions U, S, E, W.

Again, that's disanalogous to the situation in Bell test experiments: nothing one experimenter does can in any way impinge on what the other one does, because in this case, we have a failure of locality.

minkwe wrote:We can repeat this procedure using just the T-reading device and still verify the same relationship holding exactly P(H) + P(T) = 1. But what if we toss our identical types of coins N times into the H-reading device, and from that calculate just P(H), and N times into the T reading device and from that calculate just P(T)? But it would be quite naive to think that the relationship P(H) + P(T) = 1 continues to hold. In fact, such a scenario can violate the relationship drastically, up to P(H) + P(T) = 2.

Only if you assume that your 'identical' coins don't in fact have the same probability distribution. In which case, yes, you could violate the inequality, by say having two types of coins, one for which P(H) is approximately one (and hence, P(T) is approximately 0), and the other for which P(T) is approximately one, and P(H) approximately zero. But you'd have to make sure only to throw the first kind of coin into the H-reading device, and the second kind of coin into the T-reading device; in a Bell test, however, the nature of the device will only be decided once the coin is already in the air, and it's not hard to see that in this case, always P(T) + P(H) = 1 (at least up to statistical error): simply because on average, each type of coin lands as often in a H-measuring device as in a T-measuring device, these measurements simply tell us, for that type of coin, the actual probabilities with which the coin lands heads or tails. Thus, for each type of coin, we have that P(H) + P(T) = 1. Hence, if a fraction x of the coins have one probability distribution with P(H) = pxH and P(T) = pxT, and a fraction y of the coins have a probability distribution with P(H) = pyH and P(T) = pyT, then x*(pxH + pxT) + y*(pyH + pyT) = (x+y)*1 = 1, since x + y is the total fraction of coins.
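The scenario described here, where the reading device is decided only after the coin is "in the air", can be checked with a short simulation; a sketch, in which the two coin types and their probabilities are illustrative assumptions:

```python
import random

random.seed(1)
N = 100_000
h_results, t_results = [], []
for _ in range(N):
    # two coin types (an illustrative assumption): one lands heads ~99%
    # of the time, the other tails ~99% of the time, mixed 50/50
    p_heads = 0.99 if random.random() < 0.5 else 0.01
    outcome = 'H' if random.random() < p_heads else 'T'
    # the reading device is chosen only after the coin is in the air
    if random.random() < 0.5:
        h_results.append(outcome == 'H')
    else:
        t_results.append(outcome == 'T')

P_H = sum(h_results) / len(h_results)
P_T = sum(t_results) / len(t_results)
print(round(P_H + P_T, 2))  # ≈ 1.0, up to statistical error
```

Each coin type lands in either device equally often, so the two devices just sample the same underlying distributions and P(H) + P(T) comes out at 1, as argued above.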
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 1:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jul 13, 2015 10:03 am

Jochen wrote:
minkwe wrote:Seriously!? Do you have some kind of memory loss syndrome or something. Please flip back 10 pages and start reading. YOU ARE REPEATING DEBUNKED ARGUMENTS!!!

Making your bald pronouncements ALL CAPS does not make them any more true. One usually uses arguments for that purpose.

When you ignore the arguments and keep repeating debunked stuff, maybe shouting at you will wake you up.
Jochen wrote:
minkwe wrote:but you continue to disagree with it. The only remaining question is under what circumstances we can apply the inequality derived in point (1) to the independent sets from point (3). Again, I have to repeat this because it appears you need repetition to follow simple arguments. A 4xN spreadsheet of outcomes (A, A', B, B') implies the CHSH (i.e. |E(A,B) + E(A,B') + E(A',B) - E(A',B')| <= 2). STOP trying to convince us of this fact, which has never been contested. In fact, this is the only requirement necessary to obtain the CHSH: the presence of a 4xN spreadsheet of outcomes! It is a mathematical tautology; nothing can violate it, not even QM.

But QM never produces the 4xN spreadsheet, at least not with all values filled; instead, it produces something that can be ordered into the table I gave above. And, as it happens, there exists no way to 'complete' this table: every single way of adding values will inexorably lead to a decrease of the correlations below the CHSH bound. This, and this alone, is the essence of Bell's theorem.

You must have difficulty reading. You are the one claiming that the QM predictions are the same for the two scenarios. The whole point of the argument is that if the two are the same, then QM can produce the 4xN table. Have you not been reading carefully? Your argument implies that QM can produce the very table you insist it cannot produce. You say from the left side of your mouth that QM cannot produce the 4xN table, and from the right side of your mouth that the QM prediction for the terms calculated from the single 4xN spreadsheet is exactly the same as the QM prediction for the terms calculated from the 4 independent 2xN spreadsheets. I have proven conclusively that the two cannot possibly be the same. You disagree, and yet you continue to believe and spout the two contradictory statements.
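The tautology both sides agree on, that any fully filled 4xN table of ±1 outcomes obeys the CHSH bound, is easy to check numerically; a minimal sketch with randomly filled columns:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
# a fully filled 4xN spreadsheet: every row carries all four outcomes
A, Ap, B, Bp = rng.choice([-1, 1], size=(4, N))

S = np.mean(A * B) + np.mean(A * Bp) + np.mean(Ap * B) - np.mean(Ap * Bp)
# row identity: A*(B + Bp) + Ap*(B - Bp) is always +/-2, so |S| <= 2
assert abs(S) <= 2 + 1e-9
```

The bound is purely algebraic: each row contributes exactly +2 or -2, so the average can never leave [-2, 2], whatever the columns contain.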

Jochen wrote:
minkwe wrote:4 independent sets of pairs of outcomes do not produce a 4xN spreadsheet. They produce 4 separate independent 2xN spreadsheets of outcomes, which implies only the algebraic bound |E(A,B) + E(A,B') + E(A',B) - E(A',B')| <= 4. It does not matter if you stack them on top of each other; you do not get a 4xN spreadsheet.

No, I get a 4x4N spreadsheet, for which the CHSH inequality holds.

Bah! Can't you see that by filling spaces in order to generate your 4x4N spreadsheet, you have now changed the correlations? The outcomes from the 4 separate independent 2xN spreadsheets you have stacked on each other are not the same ones you are now using to calculate the new expectation values from your new 4x4N spreadsheet! So again, unless you are manufacturing outcomes, you do not have a 4xN spreadsheet. I think it is hopeless trying to get you to see this. You respond without reading. No doubt you are always retracting statements.

Jochen wrote:
minkwe wrote:The point you still do not get is that the inequality derived from a FULL 4xN spreadsheet does not apply to your stacked 4xN spreadsheet with empty spaces.

I get that perfectly well: that's after all how QM violates the CHSH inequality, while no LHV model can. For the LHV model, it is always possible to find some completion of the table; for QM, it isn't (in certain cases).

Read carefully: I just showed you that it cannot be done, period. It is not possible to have 4 independent sets of pairs which are not independent. This is the lesson you will never learn.

Jochen wrote:The condition is the same as always: there must exist a LHV model. Then, each of the 4 independent spreadsheets can be completed, i.e. have values assigned for the unmeasured observables, and the CHSH inequality holds.

Wrong. The condition is what is laid out in points (5-7) and it has nothing to do with LHV. You will note the complete absence of any mention of hidden variables in points (1) to (8).
minkwe
 
Posts: 968
Joined: Sat Feb 08, 2014 9:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jul 13, 2015 11:51 am

Jochen wrote:Only if you assume that your 'identical' coins don't in fact have the same probability distribution. In which case, yes, you could violate the inequality, by say having two types of coins, one for which P(H) is approximately one (and hence, P(T) is approximately 0), and the other for which P(T) is approximately one, and P(H) approximately zero. But you'd have to make sure only to throw the first kind of coin into the H-reading device, and the second kind of coin into the T-reading device; in a Bell test, however, the nature of the device will only be decided once the coin is already in the air, and it's not hard to see that in this case, always P(T) + P(H) = 1 (at least up to statistical error): simply because on average, each type of coin lands as often in a H-measuring device as in a T-measuring device, these measurements simply tell us, for that type of coin, the actual probabilities with which the coin lands heads or tails.


He does not realize that, for the experiment being described, P(H) and P(T) are not simply properties of the coin; they are properties of the whole setup. He does not see that if the H-measuring device is always a little more biased towards heads, and the T-measuring device is always a little more biased towards tails, then even if the coins tossed into both machines are identical, you can get P(H) + P(T) above 1, even up to 2, even though if you use just a single machine to determine P(H) + P(T), you will never get a value above 1. Like I explained earlier, it is the outcomes that matter, not the coin. In this case, because of how we did the experiment, we do not have a joint probability distribution P(H,T), even though the coins are local realistic and all the mechanisms of the machines are local and realistic. Just like in the case of the EPRB experiment.
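The biased-device scenario described here can also be simulated; a sketch, where the fair coins and the 0.8 device bias are illustrative assumptions:

```python
import random

random.seed(2)

def measure(coin_is_heads, preferred):
    # hypothetical biased device: with probability 0.8 it reports its
    # preferred face regardless of the coin (illustrative assumption)
    if random.random() < 0.8:
        return preferred
    return 'H' if coin_is_heads else 'T'

N = 100_000
# identical fair coins are fed to each machine in separate runs
h_hits = sum(measure(random.random() < 0.5, 'H') == 'H' for _ in range(N))
t_hits = sum(measure(random.random() < 0.5, 'T') == 'T' for _ in range(N))
P_H, P_T = h_hits / N, t_hits / N
print(round(P_H + P_T, 2))  # ≈ 1.8, well above 1
```

With separate biased machines and no random assignment of coins to devices, the two measured frequencies no longer refer to one joint distribution, so their sum is not constrained to 1.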
minkwe
 
Posts: 968
Joined: Sat Feb 08, 2014 9:22 am
