What is wrong with this argument?

Topic review

Re: What is wrong with this argument?

Post by Joy Christian » Mon Feb 23, 2015 5:12 am

:lol:
Richard Gill, who is very fond of describing himself as a "Quantum c****pot", did not take Fred's advice and is now paying a heavy price for it.

His seriously muddled paper has been comprehensively discredited on PubPeer: https://pubpeer.com/publications/D985B4 ... 22#fb25059 .

:lol:

Re: What is wrong with this argument?

Post by gill1109 » Mon Mar 24, 2014 11:16 pm

Zen wrote: One question: have you changed your mind about this?

viewtopic.php?f=6&t=18#p151

Do you agree that in all the simulations discussed in this forum (nicely written, by the way; long live R!) we are either making the distribution of the hidden variable depend on the detector settings, or we are playing with the detector efficiencies (standard detection loopholes)?

Another question: do you believe that the description of the macroscopic parts of Aspect's apparatus cannot be made using good old Euclidean space?

Best,

Zen.

- No change of mind

- The simulations can be interpreted in two ways. Suppose we do N runs and we observe some "no shows", non-detections, so we only get say n < N pairs of both detected particles. There are two options.
Option 1: We can imagine that we are doing an N-run experiment with ternary outcomes (-1, 0, 1). Bell-CHSH is an inequality for binary outcomes. We need the CGLMP inequality for a ternary-outcome experiment, or we can use CHSH after merging two of the three outcomes into one. Both kinds of inequality are what we call "generalized Bell inequalities", and these two are the only generalized Bell inequalities for a 2 party, 2 setting, 3 outcome experiment. See the section of my paper on generalized Bell inequalities, http://arxiv.org/abs/1207.5103 (the final revision just came out on arXiv today). None of the generalized Bell inequalities for a 2x2x3 experiment are violated, because the experiment satisfies "local realism" with no conspiracy loophole.
Option 2: there really are only n runs. The probability distribution of the local hidden variables in the model is the conditional distribution given that both particles are accepted by the detectors. In order to pick a value of the hidden variable in the model, we need to know the settings (in effect, we are using rejection sampling: we just keep picking a value, discard it if it is rejected by the settings, and try again). We now have n runs of a local realistic 2x2x2 experiment in which the hidden variables are chosen knowing in advance what the settings will be. You could call it no conspiracy; realist it certainly is, but non-local: first there is communication from the settings to the source, then the hidden variable is created, then the particles fly to the detectors, knowing in advance what the settings will be.
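The rejection-sampling reading of Option 2 can be sketched in a few lines of Python. The acceptance rule and the outcome functions below are purely hypothetical illustrations invented for this sketch, not any simulation model discussed in the thread; the only point is structural: because detection depends on both settings, the conditional distribution of the hidden variable given detection also depends on them.

```python
import math
import random

def accepted(lam, a, b):
    # Hypothetical acceptance rule (an illustration, not a model from the
    # thread): the pair is detected only when the hidden variable is
    # "compatible" with both settings.  Any settings-dependent rule like this
    # makes the post-selected distribution of lam depend on (a, b).
    return abs(math.cos(lam - a)) > 0.5 and abs(math.cos(lam - b)) > 0.5

def draw_hidden_variable(a, b):
    """Rejection sampling: keep drawing until the settings accept the draw."""
    while True:
        lam = random.uniform(0.0, 2.0 * math.pi)
        if accepted(lam, a, b):
            return lam

def run(a, b):
    """One detected pair: local, deterministic +/-1 outcome functions."""
    lam = draw_hidden_variable(a, b)
    A = 1 if math.cos(lam - a) > 0 else -1
    B = 1 if math.cos(lam - b) > 0 else -1
    return A, B

random.seed(1)
outcomes = [run(0.0, math.pi / 4) for _ in range(1000)]
# Fraction of raw draws the detectors would accept: strictly less than 1,
# i.e. there really are "no shows", and the detected runs are a
# settings-dependent subsample.
raw = [random.uniform(0.0, 2.0 * math.pi) for _ in range(1000)]
frac_accepted = sum(accepted(l, 0.0, math.pi / 4) for l in raw) / 1000
```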

- Good old Aspect's experiment operated at very very low detector efficiency and moreover with settings chosen according to a periodic deterministic scheme (with the ratio of the periods in the two arms of the experiment not close to small integers). So it did not prove anything. It is easy to give a local realist simulation which generates similar statistics. Weihs' experiment is a whole lot better but still far too low detector efficiency. It is easy to give a local realist simulation which generates similar statistics. We probably have to wait another five years for an experiment which *cannot* be simulated in a local realist way.

Re: What is wrong with this argument?

Post by gill1109 » Sun Mar 23, 2014 12:30 am

Zen wrote:
gill1109 wrote:
Zen wrote:We really can make the spreadsheet in our computer lab!


Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is potentially ruled out by the theorem.

I do think about all kinds of theories, Zen. And I am glad that you agree that my theory applies to computer simulation experiments of certain kinds of local hidden variables models.

You seem to be concerned about theories where subsequent measurements on the same system change the hidden variables in the system. I have been concerned in the past about the so-called memory loophole, which is the problem that in an EPR-B experiment, memory in the detectors could be built up from past particle detections and used in future ones. In particular, information about past settings used by Bob and the outcomes which were then observed by Bob can easily be available several particles later in the measurement apparatus used by Alice, and in the source too, for that matter. Everyone uses statistics based on independent runs to analyse the data from these experiments, but that need not be justified. And (before I did) nobody had investigated hidden variable models which exploit, for the n'th run, all the information which is potentially available from runs 1 up to n-1. Though the memory loophole was already being used successfully to give local realist explanations, e.g. for the two slit experiment.

Please take a look at my quant-ph/0110137 and quant-ph/030159

Re: What is wrong with this argument?

Post by Heinera » Fri Mar 21, 2014 7:56 am

Zen wrote:
gill1109 wrote:
Zen wrote:We really can make the spreadsheet in our computer lab!


Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is potentially ruled out by the theorem.


I don't think I understand your objection here. Nowhere in Richard's paper is Alice required to do two measurements on the particle in sequence, so that the first measurement might influence the result of the second measurement with a different detector setting. Alice only does one measurement on each particle, and whatever happens to the hidden variable after that is irrelevant (same goes for Bob). However, the paper does assume that it is meaningful to ask what the outcome would have been if Alice had picked a different detector setting in the first place.

Re: What is wrong with this argument?

Post by gill1109 » Thu Mar 20, 2014 11:53 pm

Zen wrote:No. Alice receives lambda, determines A(a, lambda), and after that, in her lab, for her electron (photon), the h.v. now has a new value. In the other lab, Bob receives the same lambda, determines B(b, lambda), and after that, in his lab, for his electron (photon), the h.v. now has a new value. Since Alice and Bob can't determine A(a, lambda), A(a', lambda), B(b, lambda), and B(b', lambda) simultaneously for the same lambda, there is no "spreadsheet" in this scenario.

Dear Zen

You should now read section 9 of my paper. Alice's measurement device receives lambda. Alice tosses a coin and chooses either to see A(a, lambda) or A(a', lambda). Only one of these is computed by the measurement device and output to Alice. But mathematically, both do exist.

The spreadsheet which I like to talk about does exist as a mathematical object, in the same mathematical universe where our hidden variables model lives. If we have a local hidden variables theory for the experiment, then within that same theory we can construct the spreadsheet.

This is perhaps easier still to understand if you imagine a computer simulation of the model. Clone Alice's measurement computer. The source computer outputs lambda (it's contained in an email file attachment, or it is a file on a USB stick) and we make a copy and send it to both of Alice's measurement computers. Set the setting to a on one of the two measurement computers, and set it to a' on the other. They both generate an outcome. Alice tosses a coin and chooses which computer output to read, given which setting she has chosen. This expanded simulation experiment generates exactly the same results as the original experiment, in which there was only one measurement computer.

We really can make the spreadsheet in our computer lab! I explain in section 9 of my paper how to make it, e.g. for Minkwe's "epr-simple" program.
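The cloning argument is easy to make concrete. A minimal Python sketch, in which `measure` is an arbitrary deterministic stand-in for Alice's measurement computer (my invention, not any particular model from the thread): both clones run on the same lambda, and whichever output Alice's coin selects is identical to what the single-computer experiment would have produced.

```python
import random

def measure(setting, lam):
    # Arbitrary deterministic stand-in for Alice's measurement computer.
    # Any function of (setting, lam) alone will do for the argument.
    return 1 if (lam + setting) % 2.0 < 1.0 else -1

random.seed(2)
agreements = 0
for _ in range(1000):
    lam = random.uniform(0.0, 2.0)       # the source's lambda, copied to both clones
    clone_a = measure(0.0, lam)          # clone one runs with setting a
    clone_a2 = measure(1.0, lam)         # clone two runs with setting a'
    setting = 0.0 if random.random() < 0.5 else 1.0  # Alice's coin toss
    chosen = clone_a if setting == 0.0 else clone_a2
    # The original single-computer experiment would output measure(setting, lam);
    # the cloned setup reproduces it exactly, while both counterfactual
    # outcomes existed side by side.
    agreements += (chosen == measure(setting, lam))
```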

Re: What is wrong with this argument?

Post by Heinera » Thu Mar 20, 2014 2:31 pm

Zen wrote:I think it would be nice to add a comment to your paper saying that your results do not impose any restrictions on theories in which the measurement act changes the values of the hidden variables which determine the complete state of the system.

The hidden variable is sent to both Alice and Bob. When you say that the measurement act by e.g. Alice changes the value of the hidden variable, do you mean that this change also instantly applies to Bob's version of the hidden variable?

Re: What is wrong with this argument?

Post by gill1109 » Tue Mar 18, 2014 1:13 am

FrediFizzx wrote:I would advise you not to publish until you fix your mistakes. We have tried to explain to you why you are wrong, but nothing seems to work, so we should just drop it as it is just going around in circles.

So are you telling me my theorem is wrong? If so, where is the error in the proof?

Re: What is wrong with this argument?

Post by FrediFizzx » Mon Mar 17, 2014 9:41 pm

I would advise you not to publish until you fix your mistakes. We have tried to explain to you why you are wrong, but nothing seems to work, so we should just drop it as it is just going around in circles.

Re: What is wrong with this argument?

Post by gill1109 » Mon Mar 17, 2014 3:01 am

Fred, getting back on topic, I am also interested in the question whether or not *you* agree with my claim: in the situation described, the probability that rho11 + rho12 + rho21 - rho22 will be larger than 2.5 is smaller than 0.005 (5 per mille, or half of one percent).

I am submitting the final version of my paper later this week. Anyone who believes the main theorem is wrong can still try to explain to me what is wrong about it, and prevent the literature from being polluted yet again by another pro-Bell nonsense propaganda piece.

Re: What is wrong with this argument?

Post by gill1109 » Sat Mar 15, 2014 2:16 am

FrediFizzx wrote:Off topic; let's get back on topic here.

Yes please.

The central question of the thread which I started here, is: is the proof of the quoted theorem, correct?

If we come to the conclusion that it seems to be a true theorem, then I will be delighted to open a new topic in which I would like to discuss if and how it can be applied to computer simulation models, QRC and so on.

If the answer is that it seems to be an incorrect theorem, then I will retract the paper in which I planned to publish it. (I have to submit the final version in one week).

The assumptions of the theorem are given, and I hope they are now completely clear. The relevance of the theorem can/will be another topic. (Superfluous if the theorem is false).
Richard Gill wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.
Place a +/-1, however you like, in every single one of the 16 000 positions.

Give the columns names: A1, A2, B1, B2.

Independently of one another, and independently for each row, toss two fair coins.

Define two new columns S and T containing the outcomes of the coin tosses, encoded as follows: heads = 1, tails = 2.

Define two new columns A and B defined (rowwise) as follows: A = A1 if S = 1, otherwise A = A2; B = B1 if T = 1, otherwise B = B2.

Our spreadsheet now has eight columns, named: A1, A2, B1, B2, S, T, A, B.

Define four "correlations" as follows:

rho11 is the average of the product of A and B, over just those rows with S = 1 and T = 1.
rho12 is the average of the product of A and B, over just those rows with S = 1 and T = 2.
rho21 is the average of the product of A and B, over just those rows with S = 2 and T = 1.
rho22 is the average of the product of A and B, over just those rows with S = 2 and T = 2.

I claim that the probability that rho11 + rho12 + rho21 - rho22 is larger than 2.5 is smaller than 0.005 (5 per mille, or half of one percent).

You can find a proof at http://arxiv.org/abs/1207.5103 (appendix: Proof of Theorem 1 from Section 2), together with remarks by me in the first posting of this thread.
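The claimed bound is easy to probe numerically. The Python sketch below (an illustration of the theorem's setup written for this thread, not Gill's own fred.R script) uses the filling most favourable to a violation: all entries +1, so that every row contributes the local-realist maximum A1*B1 + A1*B2 + A2*B1 - A2*B2 = 2 and each rho is pinned at 1, giving the quantity the value exactly 2. Repeating the coin tosses many times, 2.5 is never exceeded, consistent with (and far inside) the 0.005 bound.

```python
import random

N = 4000
# Fill the N x 4 spreadsheet "however you like"; all +1 is as favourable
# to a CHSH violation as a local-realist filling can be.
sheet = [(1, 1, 1, 1)] * N

def chsh(sheet):
    """One run of the protocol: fresh fair coins S, T for every row,
    then the four conditional correlations."""
    sums = {(s, t): [0.0, 0] for s in (1, 2) for t in (1, 2)}
    for a1, a2, b1, b2 in sheet:
        s = 1 if random.random() < 0.5 else 2   # Alice's fair coin
        t = 1 if random.random() < 0.5 else 2   # Bob's fair coin
        a = a1 if s == 1 else a2                # A = A1 if S = 1, else A2
        b = b1 if t == 1 else b2                # B = B1 if T = 1, else B2
        sums[(s, t)][0] += a * b
        sums[(s, t)][1] += 1
    rho = {k: tot / n for k, (tot, n) in sums.items()}  # each cell gets ~N/4 rows
    return rho[(1, 1)] + rho[(1, 2)] + rho[(2, 1)] - rho[(2, 2)]

random.seed(0)
values = [chsh(sheet) for _ in range(200)]
exceed = sum(v > 2.5 for v in values)   # how often the bound 2.5 is crossed
```

With this filling every product A*B equals 1, so each run returns exactly 2.0 and `exceed` is 0; other fillings of the spreadsheet can be substituted to explore the fluctuations the theorem bounds.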

Re: What is wrong with this argument?

Post by FrediFizzx » Fri Mar 14, 2014 3:57 pm

Off topic; let's get back on topic here.

Re: What is wrong with this argument?

Post by gill1109 » Fri Mar 14, 2014 2:13 am

minkwe wrote:So in case you again accuse me of attempting to disrupt your discussions, this will be my last post in this thread. I'm sure you can discuss your theories with others without making occasional snide remarks about me, or forwarding every post you make to my e-mail.

Interesting. A scientist who appears to be "allergic" to a mathematical fact. By refusing to talk about it, he does not make it untrue. He simply closes off his mind to some useful information. De Raedt, Hess and Michielsen did not have this attitude; nor did Guillaume Adenier. Even Joy Christian has had useful discussions with Richard Gill, leading on the one hand to very nice improvements to Michel Fodje's simulation model, and on the other hand to an attempt to get a key experiment actually performed.

Re: What is wrong with this argument?

Post by minkwe » Thu Mar 13, 2014 9:23 am

gill1109 wrote:Interestingly, we have now seen a second discussion "locked".

Richard,
We both agreed that discussion was over, and the moderator acted based on our documented mutual agreement.
I wanted to mention to Michel that I am sure I will be able to explain to him my answers to his questions (1) to (12) .

You already stated in the relevant thread that my points (1) to (12) were nonsense (viewtopic.php?f=6&t=23&start=100#p827); as we agreed, there is nothing more for the two of us to discuss about them. Feel free to discuss with others, but I won't participate in that discussion, not because I'm afraid of anything, but because I believe you don't get it, don't want to get it, and never will. So in case you again accuse me of attempting to disrupt your discussions, this will be my last post in this thread. I'm sure you can discuss your theories with others without making occasional snide remarks about me, or forwarding every post you make to my e-mail.

Re: What is wrong with this argument?

Post by gill1109 » Thu Mar 13, 2014 6:17 am

Zen wrote:Not really related to this thread, but in one of your papers you say that your current position is kind of "keep locality, give up on realism". Can you talk a little about how this position "explains" / "deals with" the existence of perfect anti-correlations when the two detectors in Aspect's lab are properly aligned?


Nice question! This position doesn't *explain* perfect anti-correlations. It just refuses to make the step, which Einstein did take, of deducing from perfect anti-correlation that Alice's outcome for any measurement she might have made "exists" in reality, in (or at) the particle, at that given time and place.

In our present context, "realism" is actually a kind of idealistic point of view. Unperformed measurements also have outcomes and those outcomes are moreover "located" in space-time in exactly the same place where the actual outcome of the actual measurement comes to be, after it is done.

If you have a local hidden variables theory, then you can do that. You can think of all the A(a, lambda) (hidden variable lambda fixed, Alice's possible settings a varying) as all living in the same place, all "existing" simultaneously. In fact they are all simply encoded by the value of lambda.

It can be thought to be a cheap way out. Just playing with words.

I think the more subtle position to take is that QM is different from classical physics. It allows things which classically would be impossible (like violating CHSH). It forbids things which classical physics in principle can allow. It is non-deterministic. The future is *not* determined by the past.

Being different, it also clashes with our "embodied cognition". Our brains evolved to let us feed, breed and multiply by always assuming there is a cause for everything that happens.

Re: What is wrong with this argument?

Post by gill1109 » Thu Mar 13, 2014 3:12 am

Interestingly, we have now seen a second discussion "locked".

I wanted to mention to Michel that I am sure I will be able to explain to him my answers to his questions (1) to (12) once he has worked through the proof of my theorem. So far it seems he refuses to do this, because he doesn't like the assumption of the theorem. The theorem is about an Nx4 spreadsheet. He might think, at the moment, that it is irrelevant to our main quest. Fine. If he's right, he has absolutely nothing to fear! I just want to hear whether he understands the theorem, and whether he thinks it's true or false. If he thinks it is false, I'd like to know why. Obviously I want to immediately retract any false mathematical claim I might have made in the past.

The same question I put to Fred: if you're right that my theorem is *irrelevant*, then its truth can't hurt you.

And to Joy: if you're right that my theorem is *irrelevant*, then its truth can't hurt you.

Pythagoras's theorem doesn't hurt science. No one is scared of it. No one ignores it.

Re: What is wrong with this argument?

Post by gill1109 » Thu Mar 13, 2014 12:18 am

Zen wrote:Thanks, Heinera! You're right. I thought we were given the whole spreadsheet initially, and not just the first four columns. My mistake. I will fix it and post my analysis of Gill's proof asap.

In the mathematical sense, there is "given" an Nx4 array of numbers +/-1. Alice and Bob then toss coins. From each row of the array, Alice gets to see the entry from column A1 (S = heads) or from column A2 (S = tails). Similarly for Bob. Alice and Bob then get together to calculate four correlations each based on a different (disjoint) subset of rows.

We start with an Nx4 array with the numbers A1, A2, B1, B2. From now on it is fixed, given. Independently of this we do independent fair coin tosses, S, T; think of them as filling another Nx2 table. The two tables are combined (and reduced) to a new table with columns S, T, A_obs, B_obs. The four correlations are calculated from the third table: correlations between A_obs and B_obs for each combination of values of S and T.

I hope this is slowly getting crystal clear! Improvements to my notation and presentation are surely possible. My aim is to tell a story which science journalists, and high school students, and your grandmum and granddad, can *all* understand.

The link to Bell is that A1, A2, B1, B2 stand for (local functions of) the local hidden variables. S and T stand for Alice and Bob's freely chosen settings. A_obs and B_obs are the actually observed outcomes of Alice and Bob's measurements. Because of local realism, or because of local hidden variables, the experiment is "as if" Alice and Bob just randomly pick a predetermined outcome from one of two "preexisting" values. Note the "as if". I'm not saying it really is that way. I'm saying that the final results - what we finally get to see - are mathematically indistinguishable from the final results described here.

We can later discuss why this "as if" is certainly valid for computer simulations like Michel Fodje's "epr-simple". And once one has got the idea, one can extend further to e.g. "epr-clocked". But first I want to see if there is agreement on this little bit of elementary mathematics about randomly picking rows from a spreadsheet.

Think of it as "creating facts on the ground". But in fact, they are not being created by force - they already exist. It seems that Michel and Joy don't want to see them, but they are there, all right.

Re: What is wrong with this argument?

Post by Heinera » Wed Mar 12, 2014 2:12 pm

Zen,

You forgot about the role played by S and T here. Being random coin tosses, they generate the required randomness so that the first expression you mention is not necessarily equal to either zero or one.

And hopefully we won't get bogged down in a discussion about notation here. I find Richard's notation perfectly understandable.

Re: What is wrong with this argument?

Post by gill1109 » Wed Mar 12, 2014 3:00 am

Here is a computer simulation illustration of the theorem.

http://rpubs.com/gill1109/Fred3

The R script reads a spreadsheet from the internet. You can also download it yourself: http://www.math.leidenuniv.nl/~gill/fred3.csv. The spreadsheet has 100 rows and 7 columns. The first column contains just a sequence number ("run", which runs from 1 to 100). The next four columns are named A1, A2, B1, B2. This part of the spreadsheet (100 x 4) is the part which my theorem is about. The last two columns contain a particular realization of the settings S and T (called Sa and Tb here, since "T" is also a name for "TRUE" in R).

The code computes the correlations and the CHSH quantity for the example realisation as well as for 100 new sets of random coin tosses. You'll see that the supplied coin tosses Sa and Tb were in fact rather lucky ... or the result of some kind of reverse engineering - not the result of fair coin tosses at all.

You can experiment by filling the Nx4 part in different ways, and you can wonder how I managed to create fred3.csv. It was created by another R script, including some generation of random numbers but with the initial seed known, so that it can be independently reproduced on other computers. You can find that script here: http://www.math.leidenuniv.nl/~gill/fred.R.

(I later changed the order of the columns and the names of two of the columns. The script "fred.R" creates a spreadsheet called "fred.csv". It has the two special settings, which are there called SA and SB, in front of the A1, A2, B1, B2 columns, instead of behind them. The very first column, the one with the run number, isn't given a name).

Re: What is wrong with this argument?

Post by gill1109 » Wed Mar 12, 2014 12:51 am

Joy Christian wrote:
minkwe wrote:
gill1109 wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.

That is what is wrong with the argument. You still do not get it, and I doubt that you ever will.


Hear, hear.

(You have to be British to really understand what this expression means.)


The aim is to discuss the argument here (the theorem), not its premisses.

Michel and Joy's responses are off topic, and they do not even constitute a scientific discussion of another topic.

They are transparent attempts to disrupt a discussion.

Re: What is wrong with this argument?

Post by Joy Christian » Wed Mar 12, 2014 12:18 am

minkwe wrote:
gill1109 wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.

That is what is wrong with the argument. You still do not get it, and I doubt that you ever will.


Hear, hear.

(You have to be British to really understand what this expression means.)
