A new simulation of the EPR-Bohm correlations

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: A new simulation of the EPR-Bohm correlations

Postby FrediFizzx » Sun Jun 28, 2015 9:14 am

minkwe wrote:4) One more. Why do you care whether anything is "disturbed by the act of measurement", when we know already for a fact that the particles cease to exist after measurement?

Actually, the spin-1/2 particles don't cease to exist after measurement; rather, the singlet state of the particle pair is destroyed by the act of measurement. In the case of a photon-type experiment, the "particles" themselves are destroyed.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: A new simulation of the EPR-Bohm correlations

Postby Ben6993 » Sun Jun 28, 2015 11:57 am

FrediFizzx wrote on Sun Jun 28, 2015 4:14 pm:

minkwe wrote:
4) One more. Why do you care whether anything is "disturbed by the act of measurement", when we know already for a fact that the particles cease to exist after measurement?


Actually, the spin-1/2 particles don't cease to exist after measurement; rather, the singlet state of the particle pair is destroyed by the act of measurement. In the case of a photon-type experiment, the "particles" themselves are destroyed.


Well, who knows for sure, but I don't like the idea of matter being replaced by pure energy, whatever that is.
In my preon model all the preons are conserved in a particle interaction. A spin-+ electron, on interacting, loses the subset of preons which make it spin +, and those preons are replaced by preons which are spin -. Similarly with the photon: all its preons continue to exist, in some particle or field, after the interaction.
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: A new simulation of the EPR-Bohm correlations

Postby FrediFizzx » Sun Jun 28, 2015 11:28 pm

Jochen wrote:
FrediFizzx wrote:
Jochen wrote:The argument hinges on a couple of elementary misconceptions; however, Schmelzer seems to have dispensed with most of them quite handily, so I don't really know what I could add to the discussion there.

If you really think so then please post how quantum mechanics violates Bell's inequality. It can't. It is impossible for anything to violate it.

Well, Bell inequalities are statements about a certain kind of theory, in which the observed probabilities always form a convex polytope. Their violation merely says that quantum mechanics is not such a theory.

Let me illustrate that. Take two propositions, a and b, which are about {0,1}-valued observables A and B, and assert 'A is 1' and 'B is 1' respectively. This could, for instance, simply be independent coin throws. Then form the conjunction A^B, which asserts 'A is 1 and B is 1'. We can construct the truth table for our propositions (I couldn't get the array-environment to work):

Code: Select all
 
A | B | A^B
-----------
0 | 0 |  0
1 | 0 |  0
0 | 1 |  0
1 | 1 |  1


Now, each row contains a possible assignment of values, and all rows together contain all possible assignments. We can now construct probability distributions over these assignments: say, P = p*(0,0) + (1-p)*(1,0), where (0,0) is the row A=0, B=0; this means that with probability p both observables are 0, while with probability (1-p), A is 1 and B is 0. Plainly, the most general probability distribution on our system is P = p1*(0,0) + p2*(1,0) + p3*(0,1) + p4*(1,1), where each pi >= 0 and p1 + p2 + p3 + p4 = 1. This equation defines a convex polytope (actually a simplex) in three dimensions, the so-called correlation polytope. All admissible correlations of the system lie in this polytope, and thus, one will only ever observe probability distributions such that they can be written in the above form. Note here that these are statistical predictions: the only way to have access to them is to carry out a sufficiently large number of measurements on identically prepared systems.
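
To make the polytope picture concrete, here is a small Python sketch of my own (purely illustrative, not taken from any of the papers under discussion): it samples random mixtures of the four truth-table rows and checks a facet-type (Boole) inequality, which no classical mixture can violate.

Code: Select all

import itertools, random

vertices = list(itertools.product([0, 1], repeat=2))   # the four rows (A, B)

for _ in range(10000):
    weights = [random.random() for _ in vertices]
    total = sum(weights)
    p = [w / total for w in weights]                    # a point in the simplex

    P_A  = sum(pi for pi, (a, b) in zip(p, vertices) if a == 1)
    P_B  = sum(pi for pi, (a, b) in zip(p, vertices) if b == 1)
    P_AB = sum(pi for pi, (a, b) in zip(p, vertices) if a == 1 and b == 1)

    # facet-type (Boole) inequality: P(A) + P(B) - P(A^B) <= 1 for any mixture
    assert P_A + P_B - P_AB <= 1 + 1e-12

print("Boole inequality held for every sampled mixture")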

Or at least so one once thought, until quantum mechanics came around. Because note that in the above derivation, we have made the implicit assumption that all of A and B have definite values; only this made it possible to derive the correlation polytope. If we drop this assumption, and instead calculate the probabilities using the quantum mechanical prescription p(A=1) = Tr(rho*Pi_A), where Pi_A is the projector onto the 1-eigenspace of A, then we get a different---and actually larger---convex body, of which the classical correlation polytope is merely a subset.

What does this have to do with Bell inequalities? Well, polytopes have two ways of characterizing them: via their vertices, as I have done, and via their faces. These faces are given by inequalities: anything that's larger than a certain value is outside the polytope, e.g. the CHSH combination <AB> + <AD> + <CB> - <CD> exceeding 2. These inequalities are exactly Bell's inequalities. Hence, their derivation hinges on the assumption of definite values; and thus, in theories where that assumption is violated, as it is in quantum mechanics, there is no reason for Bell inequalities to hold---and indeed, they are violated.
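
Here is a companion numpy sketch, again my own toy calculation (the measurement settings are ones I chose to maximize the violation): it applies the prescription Tr(rho*Pi) to the singlet state and finds a CHSH value of 2*sqrt(2), outside the classical polytope.

Code: Select all

import numpy as np

def spin(angle):
    # spin observable in the x-z plane (eigenvalues +1 and -1)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    return np.cos(angle) * sz + np.sin(angle) * sx

singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(singlet, singlet.conj())

def E(a, b):
    # quantum correlator Tr(rho * (A x B)); for the singlet this is -cos(a - b)
    return np.real(np.trace(rho @ np.kron(spin(a), spin(b))))

a, c = 0.0, np.pi / 2             # Alice's two settings
b, d = np.pi / 4, -np.pi / 4      # Bob's two settings

chsh = E(a, b) + E(a, d) + E(c, b) - E(c, d)
print(chsh)   # about -2.828, i.e. |CHSH| = 2*sqrt(2) > 2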

Now you're probably wondering why I haven't said anything about locality. That's because locality really only comes in once one tries to give the inequalities---which are just equivalent to the mathematical statement that there exists a joint probability distribution for all variables, which one gets from the assumption that there are definite values and that measurements are non-disturbing---any operational or empirical content. In reality, there are many ways for the assumption that there is a joint PD to fail: if my measurement of A influenced the likelihood of B coming up 1, for instance. So, you must introduce an auxiliary assumption in order to make testable predictions, which is also the reason that hidden-variable theories per se can never be excluded. There are three basic ways to make this assumption:

  • Macroscopic realism: essentially, you assume that a system is always in one of the states available to it (corresponding to having always one fixed set of responses to all possible measurements), and that measurements carried out at different points in time do not influence one another. This gives you the Leggett-Garg inequalities.
  • Noncontextuality: observables that mutually commute do not influence one another; hence, the value you obtain for A is independent of whether you measure it together with B or with C, which constitute the so-called measurement context. This gives you the Kochen-Specker theorem.
  • Locality: measurements performed in spacelike separation don't influence one another, thus Bob's outcomes are independent of Alice's choices. You can view this as a special case of noncontextuality, since operators at spacelike separation always commute. This gives you Bell's theorem.

So, it's in the following sense that quantum mechanics violates Bell inequalities, while classical mechanics doesn't: Bell inequalities are really statements about theories in which all variables have a joint probability distribution; for such theories, they hold necessarily. However, this is not something you can require in general, as any influence of one variable on another spoils it. Hence, you must make an additional assumption in order to produce testable predictions. The weakest such assumption is given by locality, since we have good reason (from special relativity) to believe that information can't be transmitted faster than light. So under the joint assumptions of realism and locality, you can justify the assumption of the existence of a joint PD.

But this need not hold: a given theory could well violate either assumption, in which case it isn't constrained by Bell inequalities. So, the experimental violation of BIs then tells us that quantum mechanics must not obey one of the assumptions. We can either give up locality, if we want to be able to claim that there is a definite value for every measurement to discover; then, we would have a theory essentially like quantum mechanics in that the space of admissible probability distributions is a convex polytope, there just isn't a single such polytope for each experiment. Or, we could give up the assumption of definite values; then, we get a theory for which the set of admissible probability distributions isn't a polytope, but a more general convex body (this can be fully characterized using convex optimization methods). Which one to choose is up to individual preferences.

You have not presented a convincing argument at all as to why QM can violate an inequality that is impossible for anything to violate. Do the math; it is impossible. The QM experimenters have been cheating by using independent terms instead of the dependent terms that exist in the inequality.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Mon Jun 29, 2015 4:08 am

minkwe wrote:
Jochen wrote:The reasoning is that, if you are committed to the existence of predetermined values for all observables (i.e. realism), then, if they are not disturbed by the act of measurement, you can find a joint probability distribution P(A,B,C,D); and as shown above, such a joint distribution, no matter how the outcomes are generated, is a sufficient condition for Bell inequalities holding.


Hello Jochen and welcome to SPF. I have a few questions about what you posted above.

Hello minkwe, and thanks for the welcome!

1) I agree that the existence of the joint probability distribution P(A,B,C,D) is the only assumption required to obtain the inequalities. Do you agree that neither locality nor hidden variables are required?

One must be a bit careful here. Yes, Bell inequalities are a simple mathematical consequence of the existence of a joint probability distribution. However, it is not always reasonable to assume such a joint distribution exists, even in classical physics. The crux of Bell's theorem is really to identify sufficient conditions under which such a joint PD exists, and then to show that nevertheless, in QM you can't propose such a joint distribution.

One necessary condition is the existence of predefined values, such that a joint PD is simply a convex combination of possible value ascriptions. The other condition is non-disturbance---here I'll leap forward to your fourth point.

Consider a simple finite state machine, whose states are labeled by measurement outcomes---that is, if the machine is in state (ABCD)=(+++-), and you press the button labeled 'A', it will output +1; you press button D, it will output -1, and so on. You could be (classically) uncertain about the precise state of the machine, and hence, associate it with a probability distribution P = p1*s1 + p2*s2 + ..., where the index i counts all possible states s_i from (----) to (++++), and the p_i sum to 1. This means that the machine is in state s_i with probability p_i. So, for instance, such a distribution could be 0.5*(++++) + 0.5*(----), which means that for every observable, you'd have a fifty/fifty chance of getting +1 on measurement.

As such, this automaton could not violate a Bell inequality, as the derivation I have given above shows. However, we can introduce some state transition rules. Such rules take the form: 'If the state is (xxxx), and button Y is pressed, output v(Y) and transition to state (zzzz)'. This models a disturbance of the state of the system by measurement, but such a thing is perfectly ordinary: measuring the voltage in a circuit alters the circuit and hence, what's being measured. But it's clear now that the above argument doesn't apply any longer: if I first measure quantity A and then B, then B will in general be drawn from a different probability distribution than it would have been had it been measured on the undisturbed state. Thus, the system can produce arbitrary violations of Bell inequalities.
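
A minimal Python sketch of such a machine (my own toy example; the particular transition rule is arbitrary) shows how the statistics of B can depend on whether A was measured first:

Code: Select all

import random

class Machine:
    """Toy automaton: the hidden state holds a pre-set answer for every button;
    pressing a button returns the stored answer and then applies a transition
    rule (the 'disturbance')."""
    def __init__(self):
        self.state = {"A": random.choice([+1, -1]), "B": +1,
                      "C": random.choice([+1, -1]), "D": random.choice([+1, -1])}

    def press(self, button):
        outcome = self.state[button]
        if button == "A":        # arbitrary transition rule: measuring A flips B
            self.state["B"] *= -1
        return outcome

# B read directly is always +1, but B read after A is always -1: the sequence
# of outcomes is no longer drawn from one fixed joint PD over (A, B, C, D).
print([Machine().press("B") for _ in range(5)])
m = Machine()
m.press("A")
print(m.press("B"))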

That's why one needs some form of nondisturbance assumption in order to make such tests meaningful. This leads to the three possibilities I gave above, with the locality assumption due to Bell perhaps being the most 'reasonable' one, as again we don't believe that there exist faster than light influences.

So, in order to talk about systems constrained by Bell inequalities, one must make the assumptions of definite values and nondisturbance (in some form); otherwise, violation of a BI is no great surprise.

2) In an EPRB experiment, a series of 2 spin-half particles are measured at angles (a,b) to produce the joint probability distribution P(A,B) for these particles. Since the act of measurement destroys those particles, a different set of particle pairs is measured at angles (c,d) to produce the joint probability distribution P(C,D) for those particles. How is it possible to measure the joint probability distribution P(A,B,C,D) for a series of particles, if the outcomes (A,B,C,D) are never jointly measured for any of the particle pairs?

The existence of this joint PD follows from the assumption of value definiteness. Now, you're right in saying that quantum mechanics doesn't allow us to make all measurements simultaneously (although it's not necessary to perform destructive measurements to violate Bell inequalities---it's been suggested to perform Bell tests with weak measurements, where you can in fact measure all observables on the same member of the ensemble, though I'm unaware of any experimental implementation). But we're interested in a property of a hypothetical completion of quantum mechanics, not one of QM itself, so appealing to the properties of QM at this stage puts the cart before the horse.

So the basic reasoning is this: imagine we have some putative completion of quantum mechanics that attributes definite values to the observables A, B, C and D before each measurement. Assume also that we can make those measurements simultaneously. Then, we can derive, in one single measurement, the quantity AB + AD + CB - CD. Clearly, whatever the value assignment, this quantity is at most 2 in magnitude.

Now, we want to test this in the actual world. Unfortunately, the actual world does not let us test it directly: measuring A on one particle precludes measuring C on the same particle, since the first measurement will generally change the state. So, what do we do? Well, we note that if |AB + AD + CB - CD| <= 2 would be true for every single measurement if we could perform it (counterfactual definiteness), then certainly it is true on average, i.e. |<AB> + <AD> + <CB> - <CD>| <= 2. And this we can now test, by preparing the same state a large number of times (here, the same state means that we observe the same statistics; if this is true, then it is also the case that each copy's observables are distributed according to P(A,B,C,D)---this follows directly from the assumption that our HV-theory completes QM, which in particular means that it must reproduce all of QM's predictions). Then, we can simply do the measurements---and figure out that we violate our statistical constraint, and that hence, our original assumptions can't be true.
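
For illustration, here is a little Python sketch of my own (the joint PD chosen is an arbitrary example): quartets are drawn from one fixed P(A,B,C,D), each correlator is estimated on a different disjoint subset of runs, and the CHSH combination still respects the bound of 2 up to statistical noise.

Code: Select all

import random

def sample_quartet():
    # an arbitrary fixed joint PD: a mixture of two definite value assignments
    return (+1, +1, +1, -1) if random.random() < 0.7 else (-1, +1, -1, +1)

runs = [sample_quartet() for _ in range(400000)]
random.shuffle(runs)
chunk = len(runs) // 4
subsets = [runs[i * chunk:(i + 1) * chunk] for i in range(4)]  # disjoint sub-ensembles

def corr(data, i, j):
    return sum(q[i] * q[j] for q in data) / len(data)

# <AB> + <AD> + <CB> - <CD>, each estimated on a different subset of runs
chsh = corr(subsets[0], 0, 1) + corr(subsets[1], 0, 3) \
     + corr(subsets[2], 2, 1) - corr(subsets[3], 2, 3)
print(abs(chsh))   # stays below 2 (here about 0.8), up to statistical noise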

3) Finally, QM makes a prediction for the expectation value <AB> of the joint measurement at angles (a,b). Why should any model which tries to reproduce this prediction care about the joint probability distribution P(A,B,C,D), when all that is needed to calculate <AB> is P(A,B), and no experiment will ever be able to measure P(A,B,C,D)?

Because of counterfactual definiteness: if any observation that one could make must be reproduced by the HV-theory, and the HV-theory does not know in advance which measurements will be performed on a given pair of particles, then every particle must have 'pre-prepared answers' to every question we could ask it. So let's say that the first particle of the ensemble comes out in a definite state (++++), the next one in a convex combination 1/3*(++--)+2/3(-+-+), the third one as (----); then, the probability distribution given by the source is simply such that we find with 1/3 probability the first particle, the second, or the third, and hence, yields a combined joint PD 1/3*(++++) + 1/9*(++--)+2/9*(-+-+) + 1/3*(----). In this way, we can always construct a joint PD for the particles the source provides us with, and as you know, no such joint PD can violate a BI.

If you're still uncomfortable with the averaging over, we can also just go on to an example that doesn't need it, the argument given by Greenberger, Horne, and Zeilinger, GHZ for short. There, you can find a contradiction between local realism and the predictions of quantum mechanics just investigating one copy of the state, just looking at definite outcomes, and without inequalities. Let's quickly look at that.

First, define the x-, y- and z-bases for a single-qubit Hilbert space by means of the eigenvectors of sigma_x, sigma_y and sigma_z to the eigenvalues +1 and -1, as follows. The eigenvectors of sigma_z are labeled |0> and |1>. Whenever a system is in state |0>, then an experimenter performing a measurement of spin in z-direction will receive +1 with certainty, and so on. The eigenvectors of the other two spin measurement directions can then be given in terms of the z-basis as |+/-x> = (|0> +/- |1>)/sqrt(2) and |+/-y> = (|0> +/- i|1>)/sqrt(2), where i is the complex unit and the upper (lower) sign belongs to the eigenvalue +1 (-1).

Now let's consider the three-qubit state |GHZ> = (|000> + |111>)/sqrt(2). Now, a hidden variable theory would have to provide outputs for any possible measurement in advance. But this is impossible. Consider the case where the first party makes an x-spin measurement, and the other two measure y-spin. We can determine the distribution of their outputs directly from re-expressing the state in the appropriate bases:

|GHZ> = 1/2 * ( |+x,+y,-y> + |+x,-y,+y> + |-x,+y,+y> + |-x,-y,-y> )
So if the first party gets a +1 outcome, then the y-spins of the other two parties will be anticorrelated; if they obtain a -1, the others will be correlated. Hence, from the outcomes of any two parties, the outcome of the third is determined. But now let's consider the same state in the x-basis:

|GHZ> = 1/2 * ( |+x,+x,+x> + |+x,-x,-x> + |-x,+x,-x> + |-x,-x,+x> )
We've previously established that if one party measures x-spin, and obtains the result +1, then the y-spin of the other parties must be anticorrelated (the first decomposition above shows this explicitly for the case of the first party measuring, and the other cases are obtained by permuting the parties). Hence, according to the first term of the x-basis decomposition, all three pairs of particles must have opposite y-spin, while the other three terms show that one of the three particle pairs must have opposing y-spin. Thus, in either case, we must have an odd number of pairs with opposite y-spin.

But clearly, this is impossible: either all y-spins are the same, in which case we have no pair with opposite spin, or one differs from the other two, in which case there are two pairs with opposite y-spin. Hence, there exists no pre-assignment of values that could account for all measurements which we might choose to perform and agree with the quantum mechanical predictions, and there's no averaging over needed. Only if we allow that measurement on one spin somehow influences the outcome on the other ones do we get the possibility of agreement; but this is just tantamount to giving up locality. (I'll defer answering your second post, because I don't really think that it brings any new issues into play; if you feel that I should reply to any point there, feel free to point it out.)
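
If you want to check the arithmetic, here is a short numpy sketch of my own: it computes the four relevant expectation values for the GHZ state and then brute-forces all 64 pre-assignments of x- and y-values, finding none that reproduces them.

Code: Select all

import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)          # (|000> + |111>)/sqrt(2)

def expval(op1, op2, op3):
    M = np.kron(np.kron(op1, op2), op3)
    return np.real(ghz.conj() @ M @ ghz)

print(expval(X, Y, Y), expval(Y, X, Y), expval(Y, Y, X), expval(X, X, X))
# -> -1, -1, -1, +1

# no pre-assignment of +/-1 values to x1, y1, ..., x3, y3 reproduces all four
good = []
for x1, y1, x2, y2, x3, y3 in product([+1, -1], repeat=6):
    if (x1*y2*y3 == -1 and y1*x2*y3 == -1 and y1*y2*x3 == -1 and x1*x2*x3 == +1):
        good.append((x1, y1, x2, y2, x3, y3))
print("pre-assignments reproducing all four products:", len(good))   # 0
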
Last edited by Jochen on Mon Jun 29, 2015 4:30 am, edited 1 time in total.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Mon Jun 29, 2015 4:28 am

Joy Christian wrote:
Jochen wrote:If you now say you have a local realistic strategy that nevertheless violates Bell inequalities, then there is some recipe to calculate, from just the hidden variables and the local input of one party, the outcome of every experiment they could perform; that is, there is a function only of the hidden variables and the input of one party such that that party's outcomes are reproduced by that function. So all you have to do is build a box that implements this strategy---either by simulation (which is possible if there is such a recipe for calculation), or by some actual physical system. If you can do this such that neither box has knowledge of the input to the other, then (and only then) will you have shown that there is a local realistic model for quantum correlations.

I believe I have already shown this, both by simulation,

http://rpubs.com/jjc/84238,

which may be reformulated in terms of two separate computers as you have described (although I have not shown this),

This I think is the crucial point. It should be a trivial reformulation of your program such that you can, e.g., choose whether you want to act as Alice or Bob, then have the program simulate a couple of hundred thousand measurement runs, output the data to a table that indicates measurement direction and measurement outcome, and then just compute the correlations from an Alice-run and a Bob-run, implemented separately.
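
To make concrete what I mean by a two-box implementation, here is a toy Python harness of my own; the functions alice_box and bob_box are just placeholders for whatever local recipe one claims to have, each seeing only its own setting and the shared hidden variable.

Code: Select all

import math, random

def alice_box(setting, lam):      # sees only its own setting and the shared lam
    return +1 if math.cos(setting - lam) >= 0 else -1

def bob_box(setting, lam):
    return +1 if math.cos(setting - lam) < 0 else -1

def correlation(a, b, n=200000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)   # hidden variable from the source
        total += alice_box(a, lam) * bob_box(b, lam)
    return total / n

print(correlation(0.0, math.pi / 4))
# this particular local strategy gives the 'saw-tooth' value of about -0.5 here,
# not the quantum -cos(pi/4) ~ -0.71; any genuinely local pair of boxes must
# stay within the Bell bounds once all four settings are combined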

In my view it is meaningless to consider concepts like "locality" and "contextuality" without first considering the geometrical and topological properties of the physical space in which events such as Alice's observations and Bob's observations are occurring: http://arxiv.org/abs/1405.2355.

The thing is, locality etc. really work on a higher level of abstraction, an operational one, that only deals with measurement outcomes, explicitly paying no heed to how they are generated. Basically, whether a theory is local or noncontextual (and realistic) is a property of the state space of the theory: a local realistic theory will only yield probability distributions within a certain convex polytope, which is just the set of probability distributions such that a joint PD for all variables exists. But theories without these assumptions may be used to create probability distributions outside of this set, and consequently, yield more general convex bodies. There is no talk about topology or geometry (other than that of the state space) because these are properties of certain solutions of a theory, i.e. a given solution of GR may have some particular topology; but these are properties of that concrete solution, not of the theory itself, as locality/realism etc. are.

This is why I favour arguments that really only take into account what kind of measurement outcomes I could observe under a given theory. Take for instance the GHZ-argument given above: no assumption about topology is needed to derive the violation of local realism by quantum theory, merely some careful reasoning about what an arbitrary set of hidden parameters is able to provide for, and then showing that this doesn't suffice to explain what is observed quantum mechanically. Since no assumption about topology is made, there can in particular not be a wrong assumption about topology.

In short, you and I seem to be living in two different worlds: You in R^3 and I in S^3. What you see as quantum correlation in R^3, I see as classical correlation in S^3.

No, I'm not concerned with such high-level properties of the theory as geometry or topology of the spacetime it lives in, etc. All I care about is boxes that give a certain output, given a certain input. This reasoning completely abstracts away all the details of topology, and is able to show that if the output of the boxes given a certain input is predetermined, i.e. if 'measurement' merely reveals a pre-set value, and if the details of what I do to one box don't affect what happens to the other, i.e. if there is no disturbance (as is guaranteed, e.g., by the assumption of locality, or noncontextuality), then Bell inequalities necessarily hold, because then I can derive a joint probability distribution, and its existence and the Bell inequalities holding are completely equivalent conditions.

So, that's my reasoning for thinking that a two-box instantiation of your strategy can't work; but I'm human, I'm fallible: producing such an instantiation would force me to accept that I'm wrong. This would be an unambiguous demonstration, and I think no reasonable critic could withstand its force without tying themselves in knots.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jun 29, 2015 8:15 am

Jochen wrote:
minkwe wrote:Do you agree that neither locality nor hidden variables are required?

Yes, Bell inequalities are a simple mathematical consequence of the existence of a joint probability distribution. However, it is not always reasonable to assume such a joint distribution exists, even in classical physics.

I think my question was more pointed than that. Is locality required or not? I ask such a direct question for very specific reasons. It is possible to find a situation which is completely local, for which a joint probability P(ABCD) does not exist. Similarly it is possible to find a situation which is non-local for which a joint probability P(ABCD) exists. This of course is immensely important later when you try to draw conclusions about what a violation of inequalities means. I already agreed with you about the crucial importance of the existence of the joint probability distribution P(ABCD) for Bell's inequalities. My question here was specifically about whether locality is required. If I understand your answer, you seem to be saying that locality is not required but is a sufficient condition. But then you go on to describe a system in which there is "disturbance". However, I think you would admit that the inequalities would not apply to a system exactly like the one you described, in which all the "disturbances" were local or travelling at or below the speed of light. So just to be clear, do the inequalities have anything to do with locality?


Jochen wrote:One necessary condition is the existence of predefined values, such that a joint PD is simply a convex combination of possible value ascriptions.

Again I have to press you a little here on your use of the phrase "necessary condition". Imagine a stochastic process which produces a series of 4 integers (A,B,C,D), each equal to +1 or -1. The values are not predefined, yet Bell's inequalities can be derived and such a system must obey the inequalities. Therefore, it is not true to say that predefined values are a "necessary condition".


Jochen wrote:The other condition is the non-disturbance---here I'll leap forward to your fourth point.

Consider a simple finite state machine, whose states are labeled by measurement outcomes---that is, if the machine is in state (ABCD)=(+++-), and you press the button labeled 'A', it will output +1; you press button D, it will output -1, and so on.
...
But it's clear now that the above argument doesn't apply any longer: if I measure first quantity A, then B, B will in general be drawn from a different probability distribution than A. Thus, the system can produce arbitrary violations of Bell inequalities.
...
That's why one needs some form of nondisturbance assumption in order to make such tests meaningful.

Okay, you've presented an example of an experiment that does not produce a joint probability distribution. But think about this for a moment. Let us take your state machine exactly as it is, with all the "disturbance" mechanisms exactly the same, and instead of pressing one button after the other, let us press all 4 buttons (A,B,C,D) at exactly the same time. Or if you like, instead of the same time, we always press A followed by B, followed by C and followed by D, and then write down those 4 outcomes as (A,B,C,D), and repeat. Do you agree that in such a way we will produce a joint probability distribution P(A,B,C,D) and Bell inequalities should apply? In which case, the presence or absence of "disturbance" is irrelevant to whether you have a joint PD or not?

Jochen wrote:This leads to the three possibilities I gave above, with the locality assumption due to Bell perhaps being the most 'reasonable' one, as again we don't believe that there exist faster than light influences.

It is clear that a joint PD P(ABCD) is a necessary and sufficient condition for Bell's inequality. The other assumptions such as "locality" or "non-disturbance" are superfluous, and are often used as limited contrived examples of ways in which not to obtain a joint PD. They are definitely not "additional" assumptions "required" to obtain the inequalities.

Jochen wrote:So, in order to talk about systems constrained by Bell inequalities, one must make the assumptions of definite values and nondisturbance (in some form); otherwise, violation of a BI is no great surprise.

I think I've just argued that this is not accurate. You could make those assumptions, but it is certainly false that you "must" make those assumptions. The only assumption you must make, is that a joint PD P(ABCD) exists.

Jochen wrote:
minkwe wrote:2) In an EPRB experiment, a series of 2 spin-half particles are measured at angles (a,b) to produce the joint probability distribution P(A,B) for these particles.
...
How is it possible to measure the joint probability distribution P(A,B,C,D) for a series of particles, if the outcomes (A,B,C,D) are never jointly measured for any of the particle pairs?

The existence of this joint PD follows from the assumption of value definiteness. Now, you're right in saying that quantum mechanics doesn't allow us to make all measurements simultaneously.

I just explained to you why value definiteness is not as relevant as you think but maybe you can define for me what you think it means. Are you referring to "counterfactual definiteness"?
Secondly, why do you say "quantum mechanics doesn't allow us to make ..." as if you would otherwise be able to make the measurements anyway? Let us replace the two spin-half particles with two soluble tablets and replace the act of "measurement" with dissolving the tablets into one of 4 liquid solutions (a, b, c, d), drinking all of the result, and after 1 hr, writing down +1 if we get diarrhea and -1 if not, where the outcome is assigned the symbol A, B, C or D according to which liquid the tablet was dissolved in. Obviously, for two such tablets, and only two people Alice and Bob, it is impossible to measure the two tablets more than once each; therefore the joint PD P(A,B,C,D) of outcomes does not exist in this experiment, without any disturbance, and without non-locality. Would you say in this case that QM doesn't allow Alice and Bob to measure their tablets more than once? My question is more general and all-encompassing than simply what QM allows or does not allow. Is it possible or not to measure two spin-half particles more than once?

Jochen wrote:(although it's not necessary to perform destructive measurements to violate Bell inequalities---it's been suggested to perform Bell tests with weak measurements

Weak measurements are destructive just the same. They are called weak not because of what they do to individual pairs, but because the measuring device is weakly coupled to the ensemble of pairs. Weak measurements do not allow you to measure a given pair of particles more than once.

Jochen wrote:where you can in fact measure all observables on the same member of the ensemble, though I'm unaware of any experimental implementation).

This is completely false. Pre- and post-selection is extremely important in "weak measurements" precisely because each observable is measured on a completely disjoint subset of the original ensemble. (see http://physicsworld.com/cws/article/new ... -after-all)

Jochen wrote:But we're interested in a property of a hypothetical completion of quantum mechanics, not one of QM itself, so appealing to the properties of QM at this stage puts the cart before the horse.

I think you introduced QM in a question which was general. Is it possible for Alice and Bob to measure their two tablets more than once? The impossibility of doing such a measurement has nothing to do with QM, does it?

Jochen wrote:imagine we have some putative completion of quantum mechanics that attributes the observables A, B, C and D with definite values before each measurement. Assume also that we can make those measurements simultaneously.

But you just admitted that QM does not allow you to measure them simultaneously. So why do you think a "completion" of QM would change that, by assuming that the measurements have to be done simultaneously? The EPR challenge was not about making impossible measurements possible. It would be silly to introduce a contradiction into QM in order to complete it. Besides, why would you want to add an assumption that you know is physically impossible and nonsensical even in simple situations like tablets and solutions?

Jochen wrote:Then, we can derive, in one single measurement, the quantity AB + AD + CB - CD. Clearly, whatever the value assignment, this quantity is at most 2 in magnitude.

Of course the triviality follows if we have a joint PD P(ABCD). No other assumption is required once you have the joint PD P(ABCD).

Jochen wrote:Now, we want to test this in the actual world. Unfortunately, the actual world does not let us test it directly: measuring A on one particle precludes measuring C on the same particle, since the first measurement will generally change the state. So, what do we do? Well, we note that if |AB + AD + CB - CD| <= 2 would be true for every single measurement if we could perform it (counterfactual definiteness), then certainly it is true on average, i.e. |<AB> + <AD> + <CB> - <CD>| <= 2.


You start from the relationship between 4 jointly existing values (A,B,C,D) to derive a relationship |AB + AD + CB - CD| <= 2, and by averaging and statistics you conclude that this relationship should apply to averages of jointly existing pairs <AB>, <AD>, <CB>, <CD>. But think for a moment.
Take a series of particle pairs, and let us assume that each pair carries with it the set of values (A,B,C,D), but we can only measure two of them. Thus, for each specific pair of particles, only one pair from the set {(A,B), (A,D), (C,B), (C,D)} can be measured, even though the joint set exists. Therefore, your relationship applies to what exists as carried by the particles. But that relationship does not apply to what is measured, because you must measure the different pairs on different particle pairs to get <A1B1>, <A2D2>, <C3B3>, <C4D4>. Obviously,
you can not factorize A1*B1 + A2*D2 + C3*B3 - C4*D4, which means |A1*B1 + A2*D2 + C3*B3 - C4*D4| <= 4 is the proper inequality for this experiment. I can therefore use your logic to say that this inequality must also apply to averages of separate measurements, and therefore |<A1B1> + <A2D2> + <C3B3> - <C4D4>| <= 4. But you seem to think that after averaging, in this second case, the upper bound should be reduced to 2 instead of 4. If you believe this, you must be able to start not from a joint distribution of quartets P(A,B,C,D) as is always done, but from 4 distributions of pairs P(A,B), P(A,D), P(C,B), P(C,D), and show that you can still derive the inequalities. Can you do that? Start from 4 separate distributions of pairs, and make all the assumptions you need to make to derive the inequalities from it.
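
Here is a quick brute-force check (my own sketch) of the two algebraic bounds involved: with a single quartet of +/-1 values the combination is bounded by 2, while with four independent pairs, one per sub-experiment, the bound is 4.

Code: Select all

from itertools import product

# one quartet of +/-1 values: the combination is algebraically bounded by 2
single = max(abs(A*B + A*D + C*B - C*D)
             for A, B, C, D in product([+1, -1], repeat=4))

# four independent pairs, one per sub-experiment: the algebraic bound is 4
separate = max(abs(A1*B1 + A2*D2 + C3*B3 - C4*D4)
               for A1, B1, A2, D2, C3, B3, C4, D4 in product([+1, -1], repeat=8))

print(single, separate)   # 2 4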

More later ...
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jun 29, 2015 10:57 am

Jochen wrote:And this we can now test, by preparing the same state a large number of times (here, the same state means that we observe the same statistics; if this is true, then it is also the case that each copy's observables are distributed according to P(A,B,C,D)---this follows directly from the assumption that our HV-theory completes QM, which in particular means that it must reproduce all of QM's predictions). Then, we can simply do the measurements---and figure out that we violate our statistical constraint, and that hence, our original assumptions can't be true.

This is another dubious claim. Let us examine it a little more carefully so that you can appreciate the argument you are making and see if it holds up.
- First you derive an inequality using the necessary and sufficient assumption that a single PD P(A,B,C,D) exists; on the way, you make other additional non-essential assumptions, call them X and Y, without which you would still have obtained the inequalities.
- Your final inequality contains terms <AB>, <AD>, <CB>, <CD>, all drawn from the single distribution P(A,B,C,D), as was assumed to start with.
- Since you can't measure all the terms simultaneously, you make the additional assumption, call it Z, that the counterfactual averages <AB>, <AD>, <CB>, <CD> should all have the same values as the averages <A1B1>, <A2D2>, <C3B3>, <C4D4>, each measured on a separate set of particle pairs. And you think this assumption is justified because the values you actually measure are reproducible?

Just because you can measure <A1B1> repeatedly and reproducibly does not mean

<A1B1> = <AB>, <A2D2> = <AD>, <C3B3> = <CB>, <C4D4> = <CD>.

- Finally, you conclude that violation of the inequality implies one of assumptions X or Y is false. But you forget that assumptions X and Y are not necessary assumptions. Furthermore, you forget assumption Z completely.

In fact, the only thing a violation tells you is that there is no joint PD for the experiment, which we already knew from the fact that it is impossible to jointly measure the 4 outcomes for any series of particles (or tablets, if you like). Now let us show that in fact it is assumption Z which is false.

The equivalence asserted in assumption Z means that for every single individual pair of particles in series 1 which produced the outcome pair (A1,B1), there is an equivalent pair of particles in each of the other series. This means a function exists which maps a specific particle pair in set 1 to a specific particle pair in set 2. Let us call that function f12. Similarly, the other equalities in assumption Z imply there must exist other independent functions f13, f14, f23, f24, f34, etc.

Imagine a spreadsheet with two columns labelled A1 and B1, containing all the outcomes for the first series of measurements on its rows. Now let us try to apply the function f12 so that we can place the outcomes from the second series of measurements on the next two columns, and so on; we must be able to apply all those functions to all the measurement outcomes, and at the end all the columns labelled with similar upper-case letters must be identical in numbers of +1's and -1's, and also in the pattern of switching back and forth. Only then can you assume that the equalities of assumption Z are all true and the upper bound of 2 should apply.


But this is most improbable, if not impossible. There is no reason why these independent maps should agree with each other. You will have separate independent mapping functions contradicting each other. The number of degrees of freedom present does not allow you to make that conclusion; therefore it is assumption Z that is false.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby Joy Christian » Mon Jun 29, 2015 11:01 am

Jochen wrote:
Joy Christian wrote:
Jochen wrote:If you now say you have a local realistic strategy that nevertheless violates Bell inequalities, then there is some recipe to calculate, from just the hidden variables and the local input of one party, the outcome of every experiment they could perform; that is, there is a function only of the hidden variables and the input of one party such that that party's outcomes are reproduced by that function. So all you have to do is build a box that implements this strategy---either by simulation (which is possible if there is such a recipe for calculation), or by some actual physical system. If you can do this such that neither box has knowledge of the input to the other, then (and only then) will you have shown that there is a local realistic model for quantum correlations.

I believe I have already shown this, both by simulation,

http://rpubs.com/jjc/84238,

which may be reformulated in terms of two separate computers as you have described (although I have not shown this),

This I think is the crucial point. It should be a trivial reformulation of your program such that you can, e.g., choose whether you want to act as Alice or Bob, then have the program simulate a couple of hundred thousand measurement runs, output the data to a table that indicates measurement direction and measurement outcome, and then just compute the correlations from an Alice-run and a Bob-run, implemented separately.


That is exactly what the above simulation does, apart from using a single computer. As I mentioned above, Nature uses only one "computer" (i.e., 3-sphere), not two.

Jochen wrote:
In my view it is meaningless to consider concepts like "locality" and "contextuality" without first considering the geometrical and topological properties of the physical space in which events such as Alice's observations and Bob's observations are occurring: http://arxiv.org/abs/1405.2355.

The thing is, locality etc. really work on a higher level of abstraction, an operational one, that only deals with measurement outcomes, explicitly paying no heed to how they are generated. Basically, whether a theory is local or noncontextual (and realistic) is a property of the state space of the theory: a local realistic theory will only yield probability distributions within a certain convex polytope, which is just the set of probability distributions such that a joint PD for all variables exists. But theories without these assumptions may be used to create probability distributions outside of this set, and consequently, yield more general convex bodies. There is no talk about topology or geometry (other than that of the state space) because these are properties of certain solutions of a theory, i.e. a given solution of GR may have some particular topology; but these are properties of that concrete solution, not of the theory itself, as locality/realism etc. are.


What you describe is a standard folklore. But it is deeply misguided. You speak of boxes, outputs, and measurement outcomes. But in the actual experiments these boxes, outputs, and measurement outcomes are all occurring within spacetime, which has highly nontrivial physical properties, such as the spinorial properties. It is therefore pointless to resort to "higher level of abstraction" without paying any attention to these properties. In particular one cannot hope to have any understanding of "locality" and "contextuality" without first understanding the nontrivial properties of spacetime. Unless of course one is only interested in abstract operationalism, devoid of any relevance to physics. And by "physics" here I mean the physics of the actual experiments being performed within spacetime.

Jochen wrote:This is why I favour arguments that really only take into account what kind of measurement outcomes I could observe under a given theory. Take for instance the GHZ-argument given above: no assumption about topology is needed to derive the violation of local realism by quantum theory, merely some careful reasoning about what an arbitrary set of hidden parameters is able to provide for, and then showing that this doesn't suffice to explain what is observed quantum mechanically. Since no assumption about topology is made, there can in particular not be a wrong assumption about topology.


You are wrong here. The GHZ argument, and indeed all variants of Bell's theorem, implicitly assume one incorrect topology or another. They just don't tell you that they are assuming a certain topology in their argument. See, for example, the following paper where I bring out their hidden assumptions quite clearly:

http://arxiv.org/abs/0904.4259.

And see also a more general argument put forward here: viewtopic.php?f=6&t=115&start=30#p3977.


Jochen wrote:
In short, you and I seem to be living in two different worlds: You in R^3 and I in S^3. What you see as quantum correlation in R^3, I see as classical correlation in S^3.

No, I'm not concerned with such high-level properties of the theory as geometry or topology of the spacetime it lives in, etc. All I care about is boxes that give a certain output, given a certain input. This reasoning completely abstracts away all the details of topology, and is able to show that if the output of the boxes given a certain input is predetermined, i.e. if 'measurement' merely reveals a pre-set value, and if the details of what I do to one box don't affect what happens to the other, i.e. if there is no disturbance (as is guaranteed, e.g., by the assumption of locality, or noncontextuality), then Bell inequalities necessarily hold, because then I can derive a joint probability distribution, and its existence and the Bell inequalities holding are completely equivalent conditions.


As I noted above, this "abstraction" ideology is misguided. It leads to an empty argument like Bell's, which -- even if true -- could not have any relevance for physics.

Jochen wrote:So, that's my reasoning for thinking that a two-box instantiation of your strategy can't work; but I'm human, I'm fallible: producing such an instantiation would force me to accept that I'm wrong. This would be an unambiguous demonstration, and I think no reasonable critic could withstand its force without tying themselves in knots.


Your last comment is fair and reasonable. As I mentioned, it may be possible to reformulate my simulation in a manner you suggest. But even if that turns out not to be possible, I would not be bothered, for the reasons I explained above.

There is, however, this macroscopic experiment I have proposed, which will not fail to convince everyone, provided it is competently performed and verifies my prediction: http://libertesphilosophica.info/blog/e ... taphysics/.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jun 29, 2015 2:02 pm

Just to expand on the last point in my previous post. The fact that assumption Z is false can be appreciated by seeing that the expression

<A1B1> + <A2D2> + <C3B3> - <C4D4>

can be factored if we apply our mapping functions such that the series of equivalent outcome sequences have the same numbers of +1's and -1's and the same pattern of occurrences of those numbers. That is, for the above expression we can use our functions to generate new sequences of outcomes from measured data such that A2' = A1 and B3' = B1, where the prime represents the fact that the sequence of outcomes has been rearranged using the mapping function. Thus we have

<A1B1> + <A1D2'> + <C3'B1> - <C4D4>

With our new re-ordered sets of outcomes we invoke the equivalence and do the factorization to get

<A1(B1 + D2')> + <C3'B1> - <C4D4>

But then we immediately face a wall. For this expression to obey the inequality with upper bound 2, we also need to be able to rearrange so that C3' = C4 and D2' = D4, which is for all practical purposes impossible. Note that both C3' and D2' have already been rearranged independently of each other, and since any rearrangement will shuffle both outcomes in the set of pairs, any new rearrangement to make C3' agree with C4 will undo the previous rearrangements. The same for D2' and D4. Different independent and conflicting rearrangements are required to make the inequality work for 4 separate sets of paired outcomes.

Therefore it is simply not true that the inequality

|<AB> + <AD> + <CB> - <CD>| <= 2,

derived assuming a single set of quartets of outcomes (A,B,C,D), should apply to 4 different independent sets of pairs of outcomes (A1,B1), (A2,D2), (C3,B3), (C4,D4).

As De Raedt et al explained, Bell inequalities simply do not apply to experiments in which the outcomes were not sampled as quartets. The two have different degrees of freedom and obviously different upper bounds.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Mon Jun 29, 2015 2:32 pm

Jochen wrote:Because of counterfactual definiteness: if any observation that one could make must be reproduced by the HV-theory, and the HV-theory does not know in advance which measurements will be performed on a given pair of particles, then every particle must have 'pre-prepared answers' to every question we could ask it. So let's say that the first particle of the ensemble comes out in a definite state (++++), the next one in a convex combination 1/3*(++--)+2/3(-+-+), the third one as (----); then, the probability distribution given by the source is simply such that we find with 1/3 probability the first particle, the second, or the third, and hence, yields a combined joint PD 1/3*(++++) + 1/9*(++--)+2/9*(-+-+) + 1/3*(----). In this way, we can always construct a joint PD for the particles the source provides us with, and as you know, no such joint PD can violate a BI.


That's the problem. You keep talking about the PD of the source. The only PD that matters is the one measured. This is the one predicted by QM and measured in experiments. Just because a particle pair carries a joint set of values, does not mean what you measure on separate independent pairs can be used to reconstruct the joint PD. The very nature of the experiment prevents you from measuring the joint PD, and I've explained above why the assumption that pairs sampled from different actual distributions are equivalent to the counterfactual pairs from the joint PD is just wrong.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Tue Jun 30, 2015 2:13 am

Hi minkwe, I hope you'll forgive me if I don't respond to your posts quote-by-quote; you've been quite busy. If you feel I missed anything of importance, feel free to point it out to me.

So let's take this in logical order. First, however, a slight digression: while I do have some misgivings about weak measurements myself, I think it's uncontroversial that if there are predetermined values, then the weak value will approximate them to any arbitrary degree. Additionally, weak measurements are called this because they weakly couple to individual members of an ensemble, thus minimally disturbing the state, and consequently extracting only a minimum amount of information; it is to rectify this that postselection (the preselection is really just the preparation of a definite state, which we do in any Bell experiment) is necessary. As you can see in Fig. 1 of this paper, it is indeed the case that all measurements are carried out on a single pair.

However, this is largely beside the point, and I haven't studied the paper in enough detail to vouch for its correctness. But let's now move on to your main arguments.

First, as I thought I'd made clear, Bell inequalities follow directly from the assumption of a joint probability distribution; however, in order to make that assumption plausible, one has to assume value definiteness and non-disturbance. Hence, in order to derive Bell inequalities in the original sense, yes, locality is essential. Otherwise, the violation of a BI simply wouldn't tell us anything beyond the fact that the measurements may have disturbed one another, which is commonplace even in classical settings. The locality assumption is there in order to curtail such disturbance.

But let me be a bit more explicit. The assumption of value definiteness is what allows us to conclude that there is a joint PD. This does not mean that there is always the same pre-defined value, just that some value necessarily exists, which is 'discovered' by measurement; that is, that if we knew the complete HV state, then we could predict the outcome of every measurement. Take again a finite state machine which we will consider to be the source of the systems we want to analyze. It prepares the system in a state in which every measurement produces a definite value. That doesn't mean that it always prepares the same state, however: it might, in one run, produce a state (++++), in the next one a state (++--), and so on. Since we have no direct means to access this underlying state, we must characterize it as being drawn from an ensemble given by the joint PD P(A,B,C,D). If in half the cases, it produces (++++), and in the other half of the cases, it produces (++--), then for instance we would have a joint PD 0.5*(++++) + 0.5*(++--), and the probability of finding A=+1 is 100%, while the probability of finding D=+1 is 50%.

Now, complications of this are possible that however don't change the basic point. For one, the state of the HV might evolve in time. But still, at any given point, it is in some state (abcd), and thus, randomly 'drawing' from the time evolution, we will have some joint PD. If for half of some given time interval, it exists in the state (++++), and for the other half, in the state (++--), then measuring it at some random point in time will again yield the above PD. So, in any case, what such a source provides us with is something that can be described using a joint PD, and hence, which must obey Bell inequalities, by virtue of value definiteness.

There is, however, one possibility that changes the above conclusion, and that's given by disturbance. If measuring one observable causes the HV state to change, then in general we can't any longer find a joint PD. This is because, as I have explained above, any subsequent measurement will no longer be drawn from the same PD. To guard against this possibility, we need a nondisturbance assumption. Now, you're right to say that if I could do a simultaneous measurement of all the quantities, then there would be no chance for a disturbance. However, nature will not in general grant you that possibility---not even in classical physics. Think of a baseball freely floating in the dark: you can measure its position by touching it, but this will change its momentum---nothing strange about that.

Hence, we do the next best thing: if influences can only propagate with a certain maximum speed, then doing the measurement in some time limit where the influence of the other measurement could not have an effect is basically the same thing as doing them simultaneously; there is no way for the system to 'know' that a measurement on its other part has already been carried out. This is why we must assume locality in order to expect that BIs hold. (The sequential measurement with fixed sequence as you've described it won't help; measuring A, B, C, D in sequence is no guarantee that each sequence will be drawn from the same PD, since the apparatus could just keep track of the number of measurements and prepare a different HV distribution accordingly.)
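
As a toy illustration of that parenthetical point (my own example; the counting rule is deliberately artificial), here is an apparatus that keeps track of how many buttons have been pressed:

Code: Select all

import random

class CountingMachine:
    """Toy apparatus that counts button presses and changes its answers."""
    def __init__(self):
        self.presses = 0
        self.hidden = random.choice([+1, -1])

    def press(self, button):
        self.presses += 1
        # first answer reveals the hidden value, later answers are deliberately biased
        return self.hidden if self.presses == 1 else +1

# B measured fresh is +/-1 with equal probability...
fresh = [CountingMachine().press("B") for _ in range(10000)]
# ...but B measured second in the fixed sequence A, B, C, D is always +1
second = []
for _ in range(10000):
    m = CountingMachine()
    m.press("A")
    second.append(m.press("B"))
print(sum(fresh) / 10000, sum(second) / 10000)   # about 0.0 vs exactly 1.0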

It's the same with your example of the tablets. No, you couldn't obtain the PD with a single measurement; you need to repeat that measurement, as you always must, if you're interested in probabilistic data. But if you do this, and there is always a fixed fact of the matter of which tablet induces diarrhea, and drinking one glass does not influence whether one of the others produces diarrhea, then there exists a joint PD, and the data you record this way will always fulfil the Bell inequalities.

Finally, regarding your point of whether the correlators are the same across different trials: they must be; this is a direct consequence of the fact that the values of the observables are drawn from the same probability distribution P(A,B,C,D). For in terms of this PD, the correlator is <AB> = sum over (A,B,C,D) of A*B*P(A,B,C,D); hence if P(A,B,C,D) is the same, then so is the correlator. Of course, this holds strictly speaking only for the case of infinitely many measurements in an experiment, but a finite number of measurements will be enough to match the actual correlator to within some small statistical variation using the experimental data.
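
For concreteness, a few lines of Python implementing that formula (the joint PD used is an arbitrary example of my own):

Code: Select all

# an arbitrary joint PD over value assignments (A, B, C, D)
P = {(+1, +1, +1, -1): 0.7,
     (-1, +1, -1, +1): 0.3}

def correlator(P, i, j):
    # <X_i X_j> = sum over assignments of x_i * x_j * P(assignment)
    return sum(v[i] * v[j] * prob for v, prob in P.items())

print(correlator(P, 0, 1))   # <AB> = 0.7*(+1) + 0.3*(-1) = 0.4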

In any case, I hope that the following is now clear: if you have a source of systems, such that those systems are always in some state in which all measurement outcomes are definite, and if making one measurement has no influence on making other measurements, then you will never violate a Bell inequality. Hence, violation of BIs indicates the violation of one of the above assumptions.

And of course, there's also the matter that in order to demonstrate the failure of local realism, you don't need to appeal to probabilities and repeated measurements at all, but can merely follow the GHZ argument.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Joy Christian » Tue Jun 30, 2015 3:25 am

Jochen wrote:And of course, there's also the matter that in order to demonstrate the failure of local realism, you don't need to appeal to probabilities and repeated measurements at all, but can merely follow the GHZ argument.

Hi Jochen,

I find it rather frustrating that you keep repeating arguments that have already been refuted by either minkwe or by me. For example, I doubt that you have actually read through the simulation I have linked above, since you continue to believe that Bell's theorem and its variants like GHZ demonstrate "the failure of local realism" despite all the evidence we have presented to the contrary. In particular, as I have already mentioned, the GHZ argument has been constructively refuted by me in this paper. Just have a look at the details to verify for yourself. In fact, not only the EPR-B correlations and the GHZ correlations, but ALL quantum correlations can be reproduced in a manifestly local-realistic manner, as proved by the theorem on page 12 of this paper. Now you say there are some criticisms of my work, but these criticisms are bogus; and more importantly they have been thoroughly debunked by me and others. It is therefore quite frustrating that you keep repeating already refuted formal, non-constructive arguments despite the existence of explicit and constructive counterexamples to the so-called theorems by Bell and his followers.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Tue Jun 30, 2015 3:48 am

Joy Christian wrote:
Jochen wrote:This I think is the crucial point. It should be a trivial reformulation of your program such that you can, e.g., choose whether you want to act as Alice or Bob, then have the program simulate a couple of hundred thousand measurement runs, output the data to a table that indicates measurement direction and measurement outcome, and then just compute the correlations from an Alice-run and a Bob-run, implemented separately.


That is exactly what the above simulation does, apart from using a single computer. As I mentioned above, Nature uses only one "computer" (i.e., 3-sphere), not two.

So then, you either have a recipe that can compute all local observables using only local data, i.e. a function that takes as input all the data locally available that you consider to be relevant, and the measurement setting; this, you can then implement on separated computers. Or, you have some form of nonlocality---that one part of the program needs some data from the other; but then, you've not demonstrated a conflict with Bell's theorem. It really is if and only if: if local data is enough to compute the experiment outcomes, you can produce a two-boxes implementation, so if you can't do that, then local data is insufficient, and you have some form of nonlocality after all.
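To make the "recipe using only local data" notion concrete, here is a minimal sketch of one such recipe (a toy model chosen only for illustration; it is not the simulation under discussion): each wing computes its +/-1 outcome from its own setting and the shared hidden variable alone, and the resulting CHSH value then never exceeds 2.

Code: Select all
import numpy as np

# A hypothetical local-realistic toy model, for illustration only (it is NOT
# the simulation under discussion): the shared hidden variable is a random
# unit vector lam, and each wing computes its +/-1 outcome from its own
# setting and lam alone.
rng = np.random.default_rng(0)
n = 200_000
lam = rng.normal(size=(n, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)

def outcome(setting, sign=+1):
    o = sign * np.sign(lam @ setting)
    o[o == 0] = 1          # break ties so the outcome is always +/-1
    return o

def E(a, b):
    return np.mean(outcome(a) * outcome(b, sign=-1))   # <A(a,lam) B(b,lam)>

a1, a2 = np.array([1., 0., 0.]), np.array([0., 1., 0.])
b1, b2 = np.array([1., 1., 0.]) / np.sqrt(2), np.array([1., -1., 0.]) / np.sqrt(2)
chsh = abs(E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2))
print(chsh)   # about 2 for these settings, and never above 2 for any local recipe

Whatever goes on inside outcome() is irrelevant to the bound; the only thing that matters is that it is a function of the local setting and the shared hidden data.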

Jochen wrote:The thing is, locality etc. really work on a higher level of abstraction, an operational one, that only deals with measurement outcomes, explicitly paying no heed to how they are generated. Basically, whether a theory is local or noncontextual (and realistic) is a property of the state space of the theory: a local realistic theory will only yield probability distributions within a certain convex polytope, which is just the set of probability distributions such that a joint PD for all variables exists. But theories without these assumptions may be used to create probability distributions outside of this set, and consequently yield more general convex bodies. There is no talk about topology or geometry (other than that of the state space), because these are properties of certain solutions of a theory, i.e. a given solution of GR may have some particular spatial topology; but these are properties of that concrete solution, not of the theory itself, as locality/realism etc. are.


What you describe is standard folklore. But it is deeply misguided. You speak of boxes, outputs, and measurement outcomes. But in the actual experiments these boxes, outputs, and measurement outcomes are all occurring within spacetime, which has highly nontrivial physical properties, such as its spinorial properties.

All I have in an actual experiment is a list of numbers, the outcomes of my measurements. Now, these may be calculated using the spinorial properties of spacetime, or whatever else you fancy; but in the end, you get out numbers, and it's on the level of those numbers that the question of local realism is to be decided. So, either you have a recipe to generate these numbers using only local data, in which case, it'd be simple enough to implement it; or you don't.

Jochen wrote:This is why I favour arguments that really only take into account what kind of measurement outcomes I could observe under a given theory. Take for instance the GHZ-argument given above: no assumption about topology is needed to derive the violation of local realism by quantum theory, merely some careful reasoning about what an arbitrary set of hidden parameters is able to provide for, and then showing that this doesn't suffice to explain what is observed quantum mechanically. Since no assumption about topology is made, there can in particular not be a wrong assumption about topology.


You are wrong here. The GHZ argument, and indeed all variants of Bell's theorem, implicitly assume one incorrect topology or another. They just don't tell you that they are assuming a certain topology in their argument. See, for example, the following paper where I bring out their hidden assumptions quite clearly:

http://arxiv.org/abs/0904.4259.

In my view, you start off on the wrong track there. Take the version of the GHZ argument I have given above: by merely reasoning about what any local hidden variable value assignment would produce, it is possible to derive a contradiction with quantum predictions. Likewise, my derivation of the CHSH inequality shows that it follows merely from assuming the existence of a joint probability distribution for all variables, which is motivated from value definiteness and locality; again, it pays no heed to how the values are generated, but merely assumes that there are, via some mechanism, definite values that are revealed upon measurement. You're basically arguing that there is a way to generate these values with local data that reproduces the CHSH violation, but the derivation I gave holds for all possible ways of generating such values; it merely assumes that some such values exist.
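For what it's worth, the algebraic core of that claim can be stated in two lines (a sketch in standard notation, not a quote from any of the papers under discussion): for any definite values A, A', B, B' in {-1, +1}, one has A(B + B') + A'(B - B') = ±2, since one of the brackets vanishes and the other equals ±2. Averaging this identity over any joint probability distribution of the values immediately gives |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2, with no reference whatsoever to how the values were generated.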

Jochen wrote:So, that's my reasoning for thinking that a two-box instantiation of your strategy can't work; but I'm human, I'm fallible: producing such an instantiation would force me to accept that I'm wrong. This would be an unambiguous demonstration, and I think no reasonable critic could withstand its force without tying themselves in knots.


Your last comment is fair and reasonable. As I mentioned, it may be possible to reformulate my simulation in a manner you suggest. But even if that turns out not to be possible, I would not be bothered, for the reasons I explained above.

I think that if this should not be possible, then indeed you should be bothered; if computer A has available to it all the data that Alice has, plus all hidden variable data, then this should be enough to derive Alice's measurement outcomes, and likewise for computer B. That's in the end just what it means to have a local realistic strategy. Otherwise, there must be some dependence on data not locally available.

There is, however, this macroscopic experiment I have proposed, which will not fail to convince everyone, provided it is competently performed and verifies my prediction: http://libertesphilosophica.info/blog/e ... taphysics/.

I'd certainly be interested to see how this plays out, too.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Tue Jun 30, 2015 3:56 am

Joy Christian wrote:Now you say there are some criticisms of my work, but these criticisms are bogus; and more importantly they have been thoroughly debunked by me and others. It is therefore quite frustrating that you keep repeating already refuted formal, non-constructive arguments despite the existence of explicit and constructive counterexamples to the so-called theorems by Bell and his followers.

I'm sorry to frustrate you, but, to be honest, I find many of the criticisms quite appropriate, which I can't say about all the rebuttals. But my understanding of either or both may be flawed. Hence, I try to approach this as open-mindedly as possible, which is the reason why I proposed the two-boxes simulation. If you can show an implementation where I can play as either Alice or Bob, give the computer a set of measurement directions (or generate them randomly), and get out data that, if subsequently analyzed in conjunction, shows Bell inequality violation, then I think you've convincingly made your case.

I think the grounds that many of your critics come from are similar to mine: there is no topological assumption necessary to derive Bell inequalities; they're simple statements of consistency conditions that must hold for there to be a joint probability distribution. They work on the level of measurement devices that can be treated as black boxes, that either flash a light or don't. What goes on in the boxes is explicitly bracketed---anything at all might go on, as long as in the end, there's a light that either flashes or doesn't. Hence, most people, including me, just don't see what tinkering around with the mechanism inside the box could conceivably do to alter that conclusion.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Joy Christian » Tue Jun 30, 2015 4:24 am

Well, we will have to agree to disagree. All criticisms of my work to date are straw-man arguments at best, and deceitful misrepresentations of my work at worst. Bell and his followers do make assumptions of wrong topology, which makes their argument irrelevant to both physics and the original EPR argument. The arguments of Bell and his followers do not even get off the ground, because with their assumption of incorrect co-domain, the EPR criterion of completeness is not satisfied. Your comments above show that neither you nor my critics you mention have understood my argument. I am not "tinkering around with the mechanism inside the box." The only difference between my constructions and those of Bell and his followers is that I am using the correct geometry and topology of the physical space in which the boxes and measurement devices are operating. As soon as one uses an incorrect topology of the physical space in the co-domain of the functions A(a, L) and B(b, L), the EPR criterion of completeness fails, and thus Bell's theorem and its variants become non-starters. Moreover, I have proposed an experiment to test this fact.

PS: Co-domain of the function is not the same as the set of image points of the function. The latter are +/-1 in my arguments, just as in the arguments of Bell. See, for example, equations (7) and (8), as well as equation (B10), of this paper: http://arxiv.org/abs/1501.03393.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Tue Jun 30, 2015 4:52 am

Joy Christian wrote:Well, we will have to agree to disagree.

Well, that's up to you: provide a two-boxes instantiation of your HV theory, and there's no reasonable grounds on which I could disagree anymore.

Your comments above show that neither you nor my critics you mention have understood my argument.

Then let me try to improve my understanding. In your one-page paper, you define your hidden variable model in the first two equations: basically, Alice always notes down λ, while Bob notes down -λ. So in performing the experiment, they will get a measurement table like this one:
Code: Select all
lambda | Alice | Bob
   +1  |   +1  |   -1
   -1  |   -1  |   +1
   +1  |   +1  |   -1
   +1  |   +1  |   -1
   -1  |   -1  |   +1
   +1  |   +1  |   -1
.
.
.

I mean, that's what equations (1) and (2) say, no? When Alice performs her measurement, the outcome she gets---independently of her measurement setting---is λ, while the outcome of Bob's measurement in the same round is -λ. These two equations suffice to characterize the full empirical content of your model, since they give us the outcome of every measurement that each party performs.

Then, Alice and Bob meet, and compare notes; and from the measurement outcomes they have observed, given by (1) and (2), they compute their correlation. Of course, for any given correlator they will get -1 for any two directions a and b, as there is always perfect anticorrelation. Hence, the value of the CHSH-expression is -2, i.e. 2 in absolute value. So, it's clear that using this HV-model, there could never be a violation of the CHSH-inequality.
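To make that reading explicit, here is a minimal check (assuming, as in the table above, that Alice records λ and Bob records -λ regardless of settings; this is a sketch of that reading only, not of the S^3 representation the author intends):

Code: Select all
import random

# Under the reading discussed above: A(a, lam) = lam, B(b, lam) = -lam,
# independently of the settings a and b.
n = 100_000
lams = [random.choice([-1, +1]) for _ in range(n)]

def E(a, b):
    # the settings a and b are ignored by construction under this reading
    return sum(lam * (-lam) for lam in lams) / n

print(E("a", "b"))   # -1.0 for every pair of settings
chsh = abs(E("a", "b") + E("a", "b'") + E("a'", "b") - E("a'", "b'"))
print(chsh)          # 2.0, i.e. no CHSH violation under this reading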

Of course, then you go on to claim that there is, using the correlator you define. But that then disagrees with the way you set up your HV-model, that is, with the actual experimental outcomes it produces. Thus, it's simply not actually the right correlator, as it disagrees with the direct computation of the correlation from the measurement outcomes.
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Joy Christian » Tue Jun 30, 2015 5:51 am

Jochen wrote:
Joy Christian wrote:Well, we will have to agree to disagree.

Well, that's up to you: provide a two-boxes instantiation of your HV theory, and there's no reasonable grounds on which I could disagree anymore.

OK, let us try to unpack our differences to see where exactly we disagree. Now, as far as I see, the simulation in question does provide the two-boxes you are asking for. If you look at the definition of Alice and Bob's measurement functions, they are exactly what Bell wrote down in his 1964 paper: A(a, L) = +/-1 and B(b, L) = +/-1.
The correlations are then calculated in two different ways, one of them being the average of the product of A and B, < AB >. So I am not quite sure what you mean by a two-boxes instantiation. By the way, I am not an expert in codes and programming, so I might be missing something here. In other words, I am asking a genuine question here.

Jochen wrote:Then let me try to improve my understanding. In your one-page paper, you define your hidden variable model in the first two equations: basically, Alice always notes down λ, while Bob notes down -λ. So in performing the experiment, they will get a measurement table like this one:

Code: Select all
lambda | Alice | Bob
   +1  |   +1  |   -1
   -1  |   -1  |   +1
   +1  |   +1  |   -1
   +1  |   +1  |   -1
   -1  |   -1  |   +1
   +1  |   +1  |   -1
.
.
.

I mean, that's what equations (1) and (2) say, no?

I am afraid not. This is what some of my critics would have us believe. The measurement results in my model are occurring in S^3, not R^3. And the topology of S^3 is captured in the actual definition of the functions A(a, L) and B(b, L). What I show in the one-page paper is a theoretical calculation of what we would see if all the observed +1 and -1 results of Alice and Bob are averaged over in the usual fashion. Please read the last appendix of this paper to see this calculation. It has been deliberately misrepresented by people like Scott Aaronson and Richard Gill to make me look like an idiot. Their criticisms are quite disingenuous and irresponsible.

To see how wrong and naïve the claim of "always -1 correlation" is, you may wish to consult equation (A.9.15) on page 244 of this paper. You may be able to appreciate from this equation that the topology of 3-sphere is highly non-trivial, and therefore the correlations are not always -1. In fact they are equal to -a.b.

To reassure you about this, let me also note that the calculation in my one-page paper is now published in this paper (see also viewtopic.php?f=6&t=115#p3763).
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: A new simulation of the EPR-Bohm correlations

Postby Jochen » Tue Jun 30, 2015 6:18 am

Joy Christian wrote:
Jochen wrote:Well, that's up to you: provide a two-boxes instantiation of your HV theory, and there's no reasonable grounds on which I could disagree anymore.

OK, let us try to unpack our differences to see where exactly we disagree. Now, as far as I see, the simulation in question does provide the two-boxes you are asking for. If you look at the definition of Alice and Bob's measurement functions, they are exactly what Bell wrote down in his 1964 paper: A(a, L) = +/-1 and B(b, L) = +/-1.
The correlations are then calculated in two different ways, one of them being the average of the product of A and B, < AB >. So I am not quite sure what you mean by a two-boxes instantiation. By the way, I am not an expert in codes and programming, so I might be missing something here. In other words, I am asking a genuine question here.

What I mean by a two-boxes instantiation is simply a program such that I could run one copy on computer A, which gets as input the measurement directions of Alice, and another copy on computer B, which gets the directions of Bob, and then take the output data and compute the correlations from that. In this way, you'd ensure that there is no 'hidden nonlocality', i.e. the measurement outcomes of both Alice and Bob would be computed from data available only locally in an unquestionable way.
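As a sketch of what such an instantiation could look like in practice (the file layout and the function names run_box, alice_outcome, bob_outcome are hypothetical placeholders, not an existing program):

Code: Select all
import csv

def run_box(settings, hidden_stream, local_outcome, path):
    """Run one wing on its own computer: only this wing's settings and the
    pre-shared hidden-variable stream are available; each +/-1 outcome is
    written to a table keyed by trial number."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["trial", "setting", "outcome"])
        for i, (s, lam) in enumerate(zip(settings, hidden_stream)):
            w.writerow([i, s, local_outcome(s, lam)])

# Computer A would run run_box(alice_settings, hv, alice_outcome, "alice.csv"),
# computer B would run run_box(bob_settings, hv, bob_outcome, "bob.csv"); the
# two tables are later merged by trial number and the correlations computed
# from the recorded outcomes alone. Neither wing ever sees the other's setting.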

I am afraid not. This is what some of my critics would have us believe. The measurement results in my model are occurring in S^3, not R^3.

OK, let me follow up on that, because I'm afraid you've lost me already. The way I'd read your equations (1) and (2) is that if Alice measures (say, the spin) along some axis a and the hidden variable λ is equal to +1, she'd get a result of +1, i.e. spin up along axis a, for instance. Likewise, Bob, measuring in the same run along some axis b, would get an outcome of -1, i.e. spin down along his axis. Is that not correct? If not, then how is an individual measurement result calculated in your approach?
Jochen
 
Posts: 79
Joined: Sat Jun 27, 2015 2:24 am

Re: A new simulation of the EPR-Bohm correlations

Postby Joy Christian » Tue Jun 30, 2015 7:08 am

Jochen wrote:What I mean by a two-boxes instantiation is simply a program such that I could run one copy on computer A, which gets as input the measurement directions of Alice, and another copy on computer B, which gets the directions of Bob, and then take the output data and compute the correlations from that. In this way, you'd ensure that there is no 'hidden nonlocality', i.e. the measurement outcomes of both Alice and Bob would be computed from data available only locally in an unquestionable way.

OK, Fred and minkwe know much more about computers and programming than I do, so let us hope that they have something to say here. To me this seems perfectly possible. I see no need to write another simulation for this purpose. The one I have should allow us to do this.


Jochen wrote:
I am afraid not. This is what some of my critics would have us believe. The measurement results in my model are occurring in S^3, not R^3.

OK, let me follow up on that, because I'm afraid you've lost me already. The way I'd read your equations (1) and (2) is that if Alice measures (say, the spin) along some axis a and the hidden variable λ is equal to +1, she'd get a result of +1, i.e. spin up along axis a, for instance. Likewise, Bob, measuring in the same run along some axis b, would get an outcome of -1, i.e. spin down along his axis. Is that not correct? If not, then how is an individual measurement result calculated in your approach?

The individual measurement results are not calculated in the Clifford-algebraic representation of the 3-sphere---i.e., the one used in my one-page paper. It is more convenient to calculate the individual measurement results in a non-Clifford-algebraic representation of the 3-sphere, as in this paper, or in the above simulation. The Clifford-algebraic approach simply proves, theoretically, that if we average the observed +1 or -1 results in the standard way then the correlations will necessarily come out to be -a.b, because of the non-trivial topology of the 3-sphere. Both representations of the 3-sphere are valuable, because they provide different insights into the problem. The Clifford-algebraic representation is very elegant, but not practical, whereas the non-Clifford-algebraic representation is ugly, but practical.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: A new simulation of the EPR-Bohm correlations

Postby minkwe » Tue Jun 30, 2015 8:06 am

Jochen wrote:Hi minkwe, I hope you'll forgive me if I don't respond to your posts quote-by-quote;

I prefer a quote-by-quote response, so that I don't waste time refuting arguments only to have the same arguments repeated verbatim.

Jochen wrote:Additionally, weak measurements are called this because they weakly couple to individual members of an ensemble, thus minimally disturbing the state, and consequently extracting only a minimum amount of information; it is to rectify this that postselection (the preselection is really just the preparation of a definite state, which we do in any Bell experiment) is necessary. As you can see in Fig. 1 of this paper, it is indeed the case that all measurements are carried out on a single pair.

I'm afraid you are mistaken about weak measurements. But you are not the only one; it is a widespread malpractice in QM to be sloppy when talking about a "single pair" or "single qubit". In fact, I will wager that all the paradoxes such as GHZ, Bell's theorem and their variants can be effectively debunked by isolating and putting a wedge into the point where results of experiments on many different pairs or qubits are used in combination with arguments about a "single pair" or "single qubit". The quoted paper must be a joke. Please provide an experimental test where there was weak measurement and I will illustrate the error. The bottom line is that many in the community have not learned to appreciate the importance of "degrees of freedom". But I digress also.

Jochen wrote:First, as I thought I'd made clear, Bell inequalities follow directly from the assumption of a joint probability distribution; however, in order to make that assumption plausible, one has to assume value definiteness and non-disturbance. Hence, in order to derive Bell inequalities in the original sense, yes, locality is essential.

I thought I made it clear that neither locality nor non-disturbance is essential. The only essential component is existence of a Joint PD. And existence of a joint PD does not imply or require locality or non-disturbance. I gave you clear examples, based on the ones you had given to demonstrate that even with disturbance, it is still possible to obtain a joint PD, and therefore the Bell inequalities follow. Please review my post if you missed it, and I'll appreciate if you address the specific argument I made.

In fact, let me give you another very clear example. The Bell inequalities will apply to 3 spin-half entangled particles measured simultaneously by Alice, Bob and Cindy. This should give you a huge hint that non-locality or disturbance or even any physics is completely irrelevant.

Jochen wrote:But let me be a bit more explicit. The assumption of value definiteness is what allows us to conclude that there is a joint PD. This does not mean that there is always the same pre-defined value, just that some value necessarily exists, which is 'discovered' by measurement; that is, that if we knew the complete HV state, then we could predict the outcome of every measurement.

This argument has also been addressed. In a stochastic process there are no predefined values which are "discovered". Yet the Bell inequalities would apply to any such process. A very simple counterexample is a random number generator which generates 4 integers (A, B, C, D). There are no pre-defined values, yet the Bell inequalities apply to the outcomes because there is a joint PD P(A,B,C,D). Therefore I think I have demonstrated that it is false to think "pre-determination" or "value-definiteness" is what allows us to conclude that there is a joint PD of measurement outcomes. The only thing that allows us to say conclusively that there is a joint PD of measurement outcomes is joint measurement of the outcomes, period. Such joint measurement is absent from EPRB experiments.

Jochen wrote:Take again a finite state machine ...

There is, however, one possibility that changes the above conclusion, and that's given by disturbance. If measuring one observable causes the HV state to change, then in general we can't any longer find a joint PD.

But again, you've ignored my arguments demonstrating that the above is false. Take the exact same state machine and measure repeatedly in sequence a joint series of 4 outcomes (A,B,C,D) by pressing the 4 buttons in sequence one after the other for every iteration. It follows that you can place the results on a 4xN spreadsheet and the Bell inequalities should apply just the same. Therefore disturbance or non-disturbance is irrelevant. It doesn't matter whether you allow the influence to propagate or not. It doesn't matter whether the influences propagate faster than light or not. So long as you record (A,B,C,D) for each iteration, you have a joint PD and the Bell inequalities apply. If you disagree I would like to see your 4xN spreadsheet of numbers (A,B,C,D) which violates the inequalities.
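For the record, here is a quick sketch of why any such 4xN spreadsheet stays within the bound (the random generator stands in for "any process whatsoever"; the column pairing follows the one used in this thread):

Code: Select all
import random

# Any table of +/-1 quadruples (A, B, C, D) satisfies
# |<AB> + <AD> + <CB> - <CD>| <= 2, because row by row
# A*B + A*D + C*B - C*D = A*(B+D) + C*(B-D) = +/-2.
N = 100_000
rows = [[random.choice([-1, +1]) for _ in range(4)] for _ in range(N)]

def avg(f):
    return sum(f(r) for r in rows) / N

chsh = (avg(lambda r: r[0] * r[1]) + avg(lambda r: r[0] * r[3])
        + avg(lambda r: r[2] * r[1]) - avg(lambda r: r[2] * r[3]))
print(abs(chsh) <= 2.0)  # True, whatever process generated the rows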

Jochen wrote:It's the same with your example of the tablets. No, you couldn't obtain the PD with a single measurement; you need to repeat that measurement

The point is that the measurement cannot be repeated. It is impossible to reproduce the complete space-time properties of the tablet and liquid from the original measurement. Therefore your "need" is impossible to fulfil, and therefore there is no joint PD.

Jochen wrote:But if you do this, and there is always a fixed fact of the matter of which tablet induces diarrhea

First, it is clear that "repetition" of the measurement is impossible. Secondly, the assumption that there must be a fixed fact of the matter about which tablet induces diarrhea is unnecessarily restrictive, especially given that you weren't told, nor are aware of, all the possible mechanisms by which the tablets induce diarrhea. For example, the two tablets in a pair could have time-synchronized dynamic properties such that every other hour their effects are reversed when combined with certain liquids but not with others. Then Alice and Bob, without any knowledge of all such dynamic details about the tablets, would never be able to repeat the exact conditions of measurement, even if they could recover the dissolved tablets and re-test them. Therefore the probability that they could reconstruct a joint PD by measuring pairs of tablets is zero, except in the most contrived thought-experiments. You don't need the drinking of one glass to directly influence another in this case. But it is true that, in this case, the fact that you tested a tablet in one glass at a given time precludes testing the same tablet in another glass at the same time. Therefore the drinking of one glass indirectly influences the outcome you get for the other glasses. This is what degrees of freedom can do. Alice and Bob do not have the freedom to test the same tablet more than once. They do not have the freedom to test all tablets at the same time. By picking the times at which they can test tablets in liquids (a,b), they have indirectly influenced the times at which they would be able to test liquids (c,d), etc. This is possible simply because, by design, the experiment precludes joint measurement of (A,B,C,D). The Bell inequalities would apply even with all the dynamics if, instead of 2 tablets, they had 4 identical tablets, with Cindy and Dave also testing simultaneously, such that the (A,B,C,D) outcomes are jointly measured.

Therefore, I hope you now appreciate that what determines the existence of the joint PD of outcomes is not the physical mechanisms/restrictions of non-disturbance or locality, but the ability/inability to jointly measure the outcomes. We know therefore that in EPRB experiments it is practically impossible to jointly measure (A,B,C,D). Therefore the Bell inequalities do not apply.

Jochen wrote:Finally, regarding your point of whether the correlators are the same across different trials

Please, could you remind me where I made such a point or asked such a question. You must be misunderstanding something very important if you think I asked such a question. A quote would be appreciated.

Jochen wrote:In any case, I hope that the following is now clear: if you have a source of systems, such that those systems are always in some state in which all measurement outcomes are definite, and if making one measurement has no influence on making other measurements, then you will never violate a Bell inequality.

I think I've shown convincingly, with the tablets and solutions, that such an argument is "handwavy" and not well thought out. It is true that if you have a series of measurements jointly producing a series of outcomes (A,B,C,D), those measurements will never violate a Bell inequality, irrespective of any physical or mystical process underlying the mechanisms producing those outcomes, such as remote disturbance or non-locality. I hope it is also clear now that it is not always possible to reconstruct a joint PD of outcomes (A,B,C,D) from separately sampled paired distributions (A,B), (A,D), (C,B), (C,D), even if such a joint PD exists. Thirdly, I hope it is now clear that non-commutativity of measurements is much more than "disturbance": the inability to perform two time-dependent measurements simultaneously introduces non-commutativity, without in any way making one measurement "disturb" the other. Therefore I hope it is now clear why the Bell inequalities should not apply to EPRB experiments. In other words, "violation" of the Bell inequalities does not tell us anything about "non-locality" or "disturbance". It tells us there is no joint PD of outcomes in the experiment. Which shouldn't be surprising, given that (1) we only sampled paired distributions, (2) the paired distributions were measured at different times, and (3) the particles involved are space-time dynamic.
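One way to see the point about separately sampled pairs concretely (the numbers below are my own toy illustration, not data from this thread): suppose the four separately sampled pairs came back with <AB> = <AD> = <CB> = +1 and <CD> = -1, so that the CHSH combination of the measured pair statistics is 4. A brute-force check over all definite quadruples shows that no joint PD can reproduce such pair statistics:

Code: Select all
import itertools

# The most the CHSH combination <AB> + <AD> + <CB> - <CD> can reach under ANY
# joint assignment of +/-1 values (and hence under any mixture of them, i.e.
# any joint PD) is 2, so separately measured pair statistics summing to 4
# admit no joint PD at all.
best = max(a * b + a * d + c * b - c * d
           for a, b, c, d in itertools.product([-1, +1], repeat=4))
print(best)  # 2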

Therefore Bell's theorem is a flawed argument: in order to apply inequalities derived from a joint PD of quartets to experiments which only produce paired outcomes, you must assume that the separately measured pairs exactly represent the joint quartets; and when that assumption fails, you do not question it.

Jochen wrote:And of course, there's also the matter that in order to demonstrate the failure of local realism, you don't need to appeal to probabilities and repeated measurements at all, but can merely follow the GHZ argument.

One battle at a time. Once Bell's theorem is dealt with, then we can push ahead to the next battle, GHZ :D. But our tactic will be the same, since the fault-lines are the same. We will look for theoretical considerations which discuss a "single qubit" or "single particle" within the framework of a probabilistic theory, and compare the predictions with outcomes of measurements which they claim were done on a "single particle" or "single qubit". It turns out, when you look closely, that combination is a paradox factory.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am
