What is wrong with this argument?

Topic review

Re: What is wrong with this argument?

Post by Joy Christian » Mon Feb 23, 2015 5:12 am

:lol:
Richard Gill, who is very fond of describing himself as a "Quantum c****pot", did not take Fred's advice and is now paying a heavy price for it.

His seriously muddled paper has been comprehensively discredited on PubPeer: https://pubpeer.com/publications/D985B4 ... 22#fb25059 .

:lol:

Re: What is wrong with this argument?

Post by gill1109 » Mon Mar 24, 2014 11:16 pm

Zen wrote: One question: have you changed your mind about this?

viewtopic.php?f=6&t=18#p151

Do you agree that in all the simulations discussed in this forum (nicely written, by the way; long live R!) we are either making the distribution of the hidden variable depend on the detector settings, or playing with the detector efficiencies (the standard detection loopholes)?

Another question: do you believe that the description of the macroscopic parts of Aspect's apparatus cannot be made using good old Euclidean space?

Best,

Zen.

- No change of mind

- The simulations can be interpreted in two ways. Suppose we do N runs and we observe some "no shows" (non-detections), so we only get, say, n < N pairs in which both particles are detected. There are two options.
Option 1: We can imagine that we are doing an N-run experiment with ternary outcomes (-1, 0, +1). Bell-CHSH is an inequality for binary outcomes. We need the CGLMP inequality for a ternary-outcome experiment, or we can use CHSH after merging two of the three outcomes into one. Both kinds of inequality are what we call "generalized Bell inequalities", and these two kinds are the only generalized Bell inequalities for a 2-party, 2-setting, 3-outcome experiment. See the section of my paper on generalized Bell inequalities, http://arxiv.org/abs/1207.5103 (the final revision just came out on arXiv today). None of the generalized Bell inequalities for a 2x2x3 experiment are violated, because the experiment satisfies "local realism" with no conspiracy loophole.
Option 2: there really are only n runs. The probability distribution of the local hidden variables in the model is the conditional distribution given that both particles are accepted by the detectors. In order to pick a value of the hidden variable in the model, we need to know the settings: in effect, we are using rejection sampling, where we keep on picking a value, discard it if it is rejected given the settings, and try again (see the sketch below). We now have n runs of a local realistic 2x2x2 experiment in which the hidden variables are chosen knowing in advance what the settings will be. You could say there is no conspiracy; the model is certainly realist, but it is non-local. First there is communication from the settings to the source, then the hidden variable is created, and then the particles fly to the detectors, knowing in advance what the settings will be.
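
To make Option 2 concrete, here is a minimal R sketch of the rejection-sampling reading. The acceptance rule "detected" below is invented purely for illustration; it is not taken from any of the simulations discussed in this thread.

[code]
# Option 2 as rejection sampling: keep drawing lambda until both
# detectors would accept it, given the settings a and b. The accepted
# lambda is then distributed conditionally on detection, i.e. the
# distribution of the hidden variable depends on (a, b).
detected <- function(setting, lam) abs(cos(setting - lam)) > 0.5  # toy acceptance rule

draw_lambda <- function(a, b) {
  repeat {
    lam <- runif(1, 0, 2 * pi)                  # candidate hidden variable
    if (detected(a, lam) && detected(b, lam)) { # the source must know a and b
      return(lam)
    }
  }
}

lam <- draw_lambda(a = 0, b = pi / 4)  # lam's distribution now depends on the settings
[/code]

The point is only the information flow: to produce an accepted lambda, the source has to consult both settings, which is exactly the non-locality described above.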

- Good old Aspect's experiment operated at very low detector efficiency, and moreover with settings chosen according to a periodic deterministic scheme (with the ratio of the periods in the two arms of the experiment not close to a ratio of small integers). So it did not prove anything: it is easy to give a local realist simulation which generates similar statistics. Weihs' experiment is a whole lot better, but its detector efficiency is still far too low, and again it is easy to give a local realist simulation which generates similar statistics. We probably have to wait another five years for an experiment which *cannot* be simulated in a local realist way.

Re: What is wrong with this argument?

Post by gill1109 » Sun Mar 23, 2014 12:30 am

Zen wrote:
gill1109 wrote: We really can make the spreadsheet in our computer lab!

Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is potentially ruled out by the theorem.

I do think about all kinds of theories, Zen. And I am glad that you agree that my theorem applies to computer simulation experiments of certain kinds of local hidden variables models.

You seem to be concerned about theories where subsequent measurements on the same system change the hidden variables in the system. I have been concerned in the past about the so-called memory loophole: the problem that, in an EPR-B experiment, memory in the detectors could be built up from past particle detections and used in future ones. In particular, information about past settings used by Bob, and the outcomes which were then observed by Bob, can easily be available several particles later in the measurement apparatus used by Alice, and in the source too, for that matter. Everyone uses statistics based on independent runs to analyse the data from these experiments, but that need not be justified. And (before I did) nobody had investigated hidden variable models which exploit, for the nth run, all the information which is potentially available from runs 1 up to n-1, though the memory loophole was already being used successfully to give local realist explanations, e.g. for the two-slit experiment.

Please take a look at my quant-ph/0110137 and quant-ph/0301059.
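
To make the memory loophole concrete, here is a minimal R sketch of the information flow described above. The particular update rule is invented for illustration and is not claimed to reproduce any quantum statistics.

[code]
# Memory-loophole skeleton: the outcome of run n may depend on the
# whole history of settings and outcomes from runs 1 up to n-1.
set.seed(7)
N <- 100
past_outcomes <- integer(0)
A <- integer(N)
for (n in seq_len(N)) {
  a_n  <- sample(1:2, 1)                             # this run's setting
  bias <- if (n == 1) 0 else mean(past_outcomes)     # statistic of runs 1..n-1
  p    <- 0.5 + (if (a_n == 1) bias else -bias) / 4  # toy memory-dependent rule
  A[n] <- if (runif(1) < p) 1L else -1L
  past_outcomes <- c(past_outcomes, A[n])
}
[/code]

The essential feature is just that run n is a function of data from runs 1 up to n-1, which is precisely what an analysis based on independent runs assumes away.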

Re: What is wrong with this argument?

Post by Heinera » Fri Mar 21, 2014 7:56 am

Zen wrote:
gill1109 wrote: We really can make the spreadsheet in our computer lab!

Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is potentially ruled out by the theorem.


I don't think I understand your objection here. Nowhere in Richard's paper is Alice required to do two measurements on the particle in sequence, so that the first measurement might influence the result of the second measurement with a different detector setting. Alice only does one measurement on each particle, and whatever happens to the hidden variable after that is irrelevant (the same goes for Bob). However, the paper does assume that it is meaningful to ask what the outcome would have been if Alice had picked a different detector setting in the first place.

Re: What is wrong with this argument?

Post by gill1109 » Thu Mar 20, 2014 11:53 pm

Zen wrote: No. Alice receives lambda, determines A(a, lambda), and after that, in her lab, for her electron (photon), the h.v. now has a new value lambda_A. In the other lab, Bob receives the same lambda, determines B(b, lambda), and after that, in his lab, for his electron (photon), the h.v. now has a new value lambda_B. Since Alice and Bob can't determine A(a, lambda), A(a', lambda), B(b, lambda), and B(b', lambda) simultaneously for the same lambda, there is no "spreadsheet" in this scenario.

Dear Zen

You should now read section 9 of my paper. Alice's measurement device receives lambda. Alice tosses a coin and chooses either to see A(a, lambda) or A(a', lambda). Only one of these is computed by the measurement device and output to Alice. But mathematically, both do exist.

The spreadsheet which I like to talk about does exist as a mathematical object, in the same mathematical universe where our hidden variables model lives. If we have a local hidden variables theory for the experiment, then within that same theory we can construct the spreadsheet.

This is perhaps easier still to understand if you imagine a computer simulation of the model. Clone Alice's measurement computer. The source computer outputs lambda (it's contained in an email file attachment, or it is a file on a USB stick) and we make a copy and send it to both of Alice's measurement computers. Set the setting to a on one of the two measurement computers, and set it to a' on the other. They both generate an outcome. Alice tosses a coin and chooses which computer's output to read, given which setting she has chosen. This expanded simulation experiment generates exactly the same results as the original experiment, in which there was only one measurement computer.

We really can make the spreadsheet in our computer lab! I explain in section 9 of my paper how to make it, e.g. for Minkwe's "epr-simple" program.
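
Here is a minimal R sketch of the cloning construction. The measurement function A_out below is a made-up deterministic model standing in for whatever code the real simulation uses (it is not taken from epr-simple); only the wiring matters.

[code]
# Toy deterministic measurement function A(a, lambda); any local
# hidden variables model would do here.
A_out <- function(a, lam) ifelse(cos(a - lam) >= 0, 1, -1)

set.seed(1)
N <- 1000
lambda <- runif(N, 0, 2 * pi)        # source output, copied to both clones

a       <- 0                         # Alice's two possible settings
a_prime <- pi / 4

# Clone Alice's measurement computer: feed the same lambda to both.
col_A1 <- A_out(a,       lambda)     # what Alice would see with setting a
col_A2 <- A_out(a_prime, lambda)     # what she would see with setting a'

# Alice's coin toss decides which clone's output she actually reads.
S <- sample(1:2, N, replace = TRUE)
A <- ifelse(S == 1, col_A1, col_A2)  # same distribution as the one-computer run
[/code]

Doing the same on Bob's side, with settings b and b', yields run by run the four columns A1, A2, B1, B2 of the spreadsheet.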

Re: What is wrong with this argument?

Post by Heinera » Thu Mar 20, 2014 2:31 pm

Zen wrote: I think it would be nice to add a comment to your paper saying that your results do not impose any restrictions on theories in which the measurement act changes the values of the hidden variables which determine the complete state of the system.

The hidden variable is sent to both Alice and Bob. When you say that the measurement act by e.g. Alice changes the value of the hidden variable, do you mean that this change also instantly applies to Bob's version of the hidden variable?

Re: What is wrong with this argument?

Post by gill1109 » Tue Mar 18, 2014 1:13 am

FrediFizzx wrote: I would advise not publishing until you fix your mistakes. We have tried to explain to you why you are wrong, but nothing seems to work, so we should just drop it as it is just going around in circles.

So are you telling me my theorem is wrong? If so, where is the error in the proof?

Re: What is wrong with this argument?

Post by FrediFizzx » Mon Mar 17, 2014 9:41 pm

I would advise not publishing until you fix your mistakes. We have tried to explain to you why you are wrong, but nothing seems to work, so we should just drop it as it is just going around in circles.

Re: What is wrong with this argument?

Post by gill1109 » Mon Mar 17, 2014 3:01 am

Fred, getting back on topic: I am also interested in whether or not *you* agree with my claim that, in the situation described, the probability that rho11 + rho12 + rho21 - rho22 will be larger than 2.5 is smaller than 0.005 (5 per mille, or half of one percent).

I am submitting the final version of my paper later this week. Anyone who believes the main theorem is wrong can still try to explain to me what is wrong with it, and so prevent the literature from being polluted yet again by another pro-Bell nonsense propaganda piece.

Re: What is wrong with this argument?

Post by gill1109 » Sat Mar 15, 2014 2:16 am

FrediFizzx wrote: Off topic; let's get back on topic here.

Yes please.

The central question of the thread which I started here is: is the proof of the quoted theorem correct?

If we come to the conclusion that it seems to be a true theorem, then I will be delighted to open a new topic in which I would like to discuss if and how it can be applied to computer simulation models, QRC and so on.

If the answer is that it seems to be an incorrect theorem, then I will retract the paper in which I planned to publish it. (I have to submit the final version in one week).

The assumptions of the theorem are given, and I hope they are now completely clear. The relevance of the theorem can/will be another topic. (Superfluous if the theorem is false).
Richard Gill wrote: Consider a spreadsheet with N = 4 000 rows and just 4 columns.
Place a +/-1, however you like, in every single one of the 16 000 positions.

Give the columns names: A1, A2, B1, B2.

Independently of one another, and independently for each row, toss two fair coins.

Define two new columns S and T containing the outcomes of the coin tosses, encoded as follows: heads = 1, tails = 2.

Define two new columns A and B defined (rowwise) as follows: A = A1 if S = 1, otherwise A = A2; B = B1 if T = 1, otherwise B = B2.

Our spreadsheet now has eight columns, named: A1, A2, B1, B2, S, T, A, B.

Define four "correlations" as follows:

rho11 is the average of the product of A and B, over just those rows with S = 1 and T = 1.
rho12 is the average of the product of A and B, over just those rows with S = 1 and T = 2.
rho21 is the average of the product of A and B, over just those rows with S = 2 and T = 1.
rho22 is the average of the product of A and B, over just those rows with S = 2 and T = 2.

I claim that the probability that rho11 + rho12 + rho21 - rho22 is larger than 2.5 is smaller than 0.005 (5 per mille, or half of one percent).

You can find a proof at http://arxiv.org/abs/1207.5103 (appendix: Proof of Theorem 1 from Section 2), together with remarks by me in the first posting of this thread.
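
Here is a minimal R sketch of the whole spreadsheet experiment. The filling of the 16 000 cells below is one arbitrary random choice; the claim is that the bound holds for any filling whatsoever, chosen however you like.

[code]
# The spreadsheet experiment from the quoted theorem.
set.seed(42)
N <- 4000

# Fill the four columns with +/-1 however you like; at random here.
A1 <- sample(c(-1, 1), N, replace = TRUE)
A2 <- sample(c(-1, 1), N, replace = TRUE)
B1 <- sample(c(-1, 1), N, replace = TRUE)
B2 <- sample(c(-1, 1), N, replace = TRUE)

one_run <- function() {
  S <- sample(1:2, N, replace = TRUE)  # Alice's fair coin, per row
  T <- sample(1:2, N, replace = TRUE)  # Bob's fair coin, per row
  A <- ifelse(S == 1, A1, A2)
  B <- ifelse(T == 1, B1, B2)
  rho <- function(s, t) mean((A * B)[S == s & T == t])
  rho(1, 1) + rho(1, 2) + rho(2, 1) - rho(2, 2)
}

chsh <- replicate(10000, one_run())
mean(chsh > 2.5)  # empirical frequency; the theorem bounds the probability by 0.005
[/code]

For a random filling like this one the statistic concentrates around 0, so the empirical frequency comes out as zero; the content of the theorem is that even an adversarially chosen filling cannot push the probability above 0.005.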

Re: What is wrong with this argument?

Post by FrediFizzx » Fri Mar 14, 2014 3:57 pm

Off topic; let's get back on topic here.