A silly computer experiment ... or, the heart of the matter?

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 6:01 pm

minkwe wrote:The expectation values E(U), E(V), E(X), E(Y), each have a maximum value of 1.

What is the upper bound for
E(U) + E(V) + E(X) + E(Y)


4.

minkwe wrote:The expectation values E(U), E(V), E(X), E(Y), each have a maximum value of 1.

What is the upper bound for
E(X) + E(-X) + E(Y) + E(-Y)


0, as long as E(X) and E(Y) are finite. Otherwise, undefined.

minkwe wrote:Can anything violate them?


No.

Can we now move on?

If we have proved that a <= b, then a <= b.

If someone says a <= b in one context but a > b in another context, then a and b are apparently different things in those two contexts. It would be wise to try to find out what the difference is between the two a's, and what the difference is between the two b's.

Instead of barking up the same tree ad nauseam, take a look at the other trees in the park.
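A minimal Python sketch of the two answers above (illustrative only; the uniform distributions on [-1, 1] are an arbitrary choice, not from the thread):

```python
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(10_000)]
ys = [random.uniform(-1, 1) for _ in range(10_000)]

# Sample estimates of E(X) and E(Y).
ex = sum(xs) / len(xs)
ey = sum(ys) / len(ys)

# By linearity, E(-X) = -E(X), so E(X) + E(-X) + E(Y) + E(-Y) is
# identically 0 whenever the expectations are finite, whereas
# E(U) + E(V) + E(X) + E(Y) is only constrained to be at most 4.
total = ex + (-ex) + ey + (-ey)
print(total)  # 0.0
```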
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sat Apr 12, 2014 6:09 pm

gill1109 wrote:
minkwe wrote:The expectation values E(U), E(V), E(X), E(Y), each have a maximum value of 1.

What is the upper bound for
E(U) + E(V) + E(X) + E(Y)


4.

minkwe wrote:The expectation values E(U), E(V), E(X), E(Y), each have a maximum value of 1.

What is the upper bound for
E(X) + E(-X) + E(Y) + E(-Y)


Infinity.

minkwe wrote:Can anything violate them?


No.

Can we now move on?

See, that wasn't so difficult, was it? One more question: what if we also specify the minimum value of E(U), E(V), E(X), E(Y) to be -1? What will the bounds be?

Do you still agree with Heinera that "When we are talking about an upper bound for expectations, the bound will usually be violated half of the time"?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sat Apr 12, 2014 6:23 pm

minkwe wrote:what if we also specify the minimum value of E(U), E(V), E(X), E(Y) to be -1? What will the bounds be?

What bounds?
The upper bound doesn't change.
But now we can also deduce lower bounds: -4 and 0 respectively.

minkwe wrote:Do you still agree with Heinera that "When we are talking about an upper bound for expectations, the bound will usually be violated half of the time"?

It depends what we mean by saying "the bound will usually be violated half of the time".

According to your interpretation, no.

According to my interpretation, yes.

We all agree that if a = 3 and b = 4, then the bound a + b <= 10 is satisfied.
But if a = 8 and b = 3, then the bound a + b <= 10 is violated.

If on the other hand I have a bound a + b <= 10 and then I say c = 8 and d = 3, then no-one is upset.

If someone shows you a proof of a bound, and then gives you an example where the bound is violated, then either

(a) he has just proved to you that his proof is wrong, or
(b) he is talking about two different situations. The situation in which the bound can be proved, doesn't apply to the second situation.

When we violate a bound, we simply prove that the assumptions which led to the bound are not applicable.

Logic:

A => B
not B
---------
not A

B
not B
------------------
A

For any A and B,

(B and not B) => A

Yes, logic is fun. People don't know it as well nowadays as they used to.

Joy is very good at it. He especially likes to use

B
not B
------------------
A

You should study his papers. One can learn a lot of really clever proof techniques from them.

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sat Apr 12, 2014 8:20 pm

gill1109 wrote:What bounds?
The upper bound doesn't change.
But now we can also deduce lower bounds 4, 0.

I'm trying to get you to tell me based on your own definition of bounds what you believe the bounds of the two expressions

E(U) + E(V) + E(X) + E(Y)
E(X) + E(-X) + E(Y) + E(-Y)
are if each of the expectation values has maximum and minimum values of +1 and -1 respectively.

You've already answered that you think the first expression should have an upper bound of 4, but for the second one you answered based on the assumption that there was no minimum for the expectation values. So I clarified that the minimum value is -1 and asked you again what you believe the upper bound for the second expression would be based on your own definition of bounds. Based on my definition, it should be 0, and neither of them can ever be violated, even by experimental error.

Then I asked whether, still based on your definition of bounds, any of those bounds you just identified could ever be violated by anything, even by statistical error. You already answered that the bound for the first one could not. Could you please clarify whether you believe the bounds for the second one, now that the minimum is specified, can be violated by anything whatsoever?

gill1109 wrote:If someone shows you a proof of a bound, and then gives you an example where the bound is violated, then either

(a) he has just proved to you that his proof is wrong, or
(b) he is talking about two different situations. The situation in which the bound can be proved, doesn't apply to the second situation.

When we violate a bound, we simply prove that the assumptions which led to the bound are not applicable.

Exactly. So when your R code above produces a value of 2.00001, it proves that the assumptions which led to the bound of 2 are not applicable to it. In other words, the CHSH does not apply to the second part of your R code experiment. As we have established, an upper bound cannot be violated, not even by experimental error. A 0.000000001 violation is just as much proof as a 10000000 violation that the assumptions which led to the inequality are not applicable to the situation which produced the violation. Capisce?

Therefore everything you said about how the upper bound of the CHSH should apply to a system which violates it sometimes (statistically) is baloney.
And Heinera's statement that "When we are talking about an upper bound for expectations, the bound will usually be violated half of the time" is baloney too.

I believe I have successfully demonstrated in various threads that the apparent violation of the CHSH by QM and experiments arises simply because the CHSH was derived for one situation and is being applied to a different one. The difference has nothing to do with realism or locality whatsoever; it is the difference between counterfactual outcomes on a single set and actual outcomes from 4 disjoint sets. Your arguments that the CHSH bound should still apply for some statistical reason, even though your own examples purporting to prove the claim actually violate the bound sometimes, do not survive.
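The single-set versus disjoint-sets distinction can be sketched in a few lines of Python (an illustration of the argument, not code from either poster; the ±1 outcomes are drawn at random):

```python
import random

random.seed(42)
N = 10_000

# Single set: each run yields one quadruple (a, a2, b, b2) of +/-1 outcomes.
# The combination a*b + a*b2 + a2*b - a2*b2 factors as a*(b + b2) + a2*(b - b2),
# which is always +2 or -2, so its average over any sample lies in [-2, 2].
quads = [tuple(random.choice((-1, 1)) for _ in range(4)) for _ in range(N)]
s_single = sum(a*b + a*b2 + a2*b - a2*b2 for a, a2, b, b2 in quads) / N

# Four disjoint sets: each correlation is a sample average over its own runs,
# so nothing ties the four terms together run by run; the combination is only
# constrained by the algebraic range [-4, 4] of the four averages.
def corr(n):
    return sum(random.choice((-1, 1)) * random.choice((-1, 1)) for _ in range(n)) / n

s_disjoint = corr(N) + corr(N) + corr(N) - corr(N)
print(s_single, s_disjoint)
```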

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sun Apr 13, 2014 12:01 am

I do not apply the CHSH bound to a CHSH experiment.

Experiment produces results which might fit LHV predictions, or might fit QM predictions, or neither.

Neither LHV nor QM predict the four numbers coming out of the experiment.

Both theories predict a probability distribution of the four numbers coming out of the experiment. They predict that if N is large, ave(AB) will be close to rho(a, b).

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sun Apr 13, 2014 7:33 am

gill1109 wrote:I do not apply the CHSH bound to a CHSH experiment

You do in your papers.


Neither LHV nor QM predict the four numbers coming out of the experiment.

Both theories predict a probability distribution of the four numbers coming out of the experiment. They predict that if N is large, ave(AB) will be close to rho(a, b).


Obviously false. A probability distribution is not a single number. QM predicts a single number for E(a,b). So does a LHV theory. The experimental averages should be close to these numbers for large N. Yet bounds are never violated by experimental error.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sun Apr 13, 2014 11:43 am

Michel, you are completely mistaken. You clearly don't understand a thing about probability and statistics, and you obviously don't have a clue about the relation between experiment and theory.

QM tells us what rho(a, b) is. You don't seem to realise that rho(a, b) stands for the expectation value of A x B (A times B). When we do an experiment and observe say 100 pairs (A, B), the average of their product, ave(A x B), is not going to equal the mean value predicted by theory.

Do you know the difference between the expressions "expectation value" and "sample average"?

Ever heard of the word "standard error"?

Ever seen "error bars" in graphics in experimental papers?

Do you know the difference between the words "population" and "sample"?

Ever heard of the word "p-value"?

Theoretical bounds on population expectation values can easily be violated by sample averages.

Suppose for instance I take a sample of size 100 from the normal distribution with mean mu and standard deviation sigma = 0.5. Suppose some theory states that mu <= 0.

Denote the sample average by Xbar.

Then it is possible that Xbar > 0. In fact, if actually mu = 0, then Prob(Xbar > 0) = 0.50.

If N = 100 and we know sigma = 0.5 then it is possible to see Xbar = 1.5. In fact, though theory puts a bound on mu, the same theory puts no bound at all on Xbar. The only bound we can put on Xbar is +infty.

If Xbar took the value 1.5 and if we knew sigma = 0.5, and if our experiment had N = 100, we would say that we had experimentally observed a violation of the bound mu <= 0, since at mu = 0, the value Xbar = 1.5 is 1.5/(0.5 / sqrt 100) = 30 standard errors above zero. The chance to observe such a large value of Xbar when actually mu <=0 is so tiny, any reasonable person would reject the theory that states mu <= 0.
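The mu = 0 case described above is easy to simulate (a sketch; the seed and number of repetitions are arbitrary choices of mine):

```python
import random
import statistics

random.seed(1)
N, sigma, trials = 100, 0.5, 2000

# Theory bounds the population mean: mu <= 0. Take mu = 0 exactly and
# repeatedly draw samples of size N; the sample average Xbar lands above
# the bound in roughly half of all repetitions.
exceed = 0
for _ in range(trials):
    xbar = statistics.fmean(random.gauss(0.0, sigma) for _ in range(N))
    if xbar > 0:
        exceed += 1

frac = exceed / trials
print(frac)  # close to 0.5
```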

I'm still waiting (a) for your histograms of the results of my experiment, and your explanation of why they look the way they do; and (b) for your translation into Python of the R code for determining who has won the bet between Joy and me.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Sun Apr 13, 2014 11:59 am

Michel, I thought we were making some progress a day or two ago, so I actually read some of your posts just now. I responded to two of the four before I got bored with reading the same nonsense again and again. However you seem to be retrogressing, not progressing, so I am switching your posts off again. (You know it is easy to make posts of particular persons invisible). You badly need to learn the difference between (a) the expectation value of a probability distribution and (b) the average of a random sample from a probability distribution. As long as you are not capable of distinguishing between these two concepts there is no way to have an intelligent discussion with you.

I'm still waiting (a) for your histograms of the results of my experiment and your explanation of why they look the way they do; and (b) for your translation into Python of the R code for determining who has won the bet between Joy and me. You'll have to send me a personal email to let me know when you have succeeded, otherwise I probably won't notice.

Pity, I hoped to learn some Python skills from you.

And Joy needs to be reassured that my R code correctly reproduces the instructions in his experimental paper.

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Sun Apr 13, 2014 12:44 pm

gill1109 wrote:You badly need to learn the difference between (a) the expectation value of a probability distribution and (b) the average of a random sample from a probability distribution.

Very funny. You are trying to dodge out of your obviously false claim that "Neither LHV nor QM predict the four numbers coming out of the experiment. Both theories predict a probability distribution of the four numbers coming out of the experiment." Throwing accusations does not change the obvious fact that that statement is false. Nor have you been able to show how any upper bound can ever be violated by even 0.0000001. So why do you keep repeating the falsehood that due to experimental error, the bound can be exceeded?

Both QM and LHV make predictions for expectation values. Whether they do that by predicting the probability distribution or not is a red herring. There is no doubt that they predict expectation values. You know that, so why the rhetorical tricks when the rest of your argument is failing? If you want, I can show you where you state in your own papers that QM predicts expectation values. Now if you want to nitpick some more and claim expectation values are not numbers, be my guest.

I already pointed out why your R code is pointless. Close your ears and eyes all you want; you cannot extinguish the sun by refusing to see it. For somebody who, according to you, doesn't understand statistics and probability theory, I seem to be giving you a very hard time: you are struggling to form legitimate responses to my arguments. I wonder what that means. Next time you claim that "realism is untenable", you will think twice. You will remember our exchanges, and even though you make frivolous, face-saving, discourteous statements in public about my arguments, I believe you know deep down that I am right.

Anyone in doubt can simply look at these expressions from your Larsson and Gill paper:
δ > 0
δ ≥ 4 − 3/γ
|E(AC′|ΛAC′) + E(AD′|ΛAD′)| + |E(BC′|ΛBC′) − E(BD′|ΛBD′)|≤ 6/γ − 4
where γ is the coincidence efficiency of the experiment.

A masterpiece of contradiction. Just set the coincidence efficiency to 0.5 and see what you get, and ask yourself why a valid theory with a linear combination of 4 terms, each bounded by [-1, +1], would predict an upper bound of 8 when we have established the maximum must be no bigger than 4. Well, I guess the response will be more character assassination and no response to the substance.
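The arithmetic behind this complaint can be tabulated in a few lines (a sketch of the quoted right-hand side only; γ is the coincidence efficiency as stated above):

```python
# Right-hand side of the quoted Larsson-Gill inequality: 6/gamma - 4.
def lg_bound(gamma):
    return 6.0 / gamma - 4.0

for gamma in (1.0, 0.75, 0.5):
    print(gamma, lg_bound(gamma))
# 1.0  -> 2.0 (the usual CHSH bound)
# 0.75 -> 4.0 (equal to the algebraic maximum of the four terms)
# 0.5  -> 8.0 (above 4)
```

Below γ = 0.75 the right-hand side exceeds the algebraic maximum of 4, so there the inequality is trivially satisfied and constrains nothing.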

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Mon Apr 14, 2014 1:12 am

Expectation values are numbers.

Sample averages are numbers.

Theory does not say they have to be equal.

Example: the support of the standard normal distribution has upper bound +infty.

The only upper bound one can give to a sample of any size from the standard normal distribution is +infty.

The (theoretical) mean value of the standard normal distribution is zero.

The average of a random sample of size N = 1 000 000 is less than 0 + 0.005 (that's five standard deviations of the sample average above the mean value) with probability 1 - ( 2.8665 * 10^-7 )

Five standard deviations is commonly accepted in the experimental physics community as experimental proof.
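The tail probability quoted above can be checked against the standard normal distribution (a sketch using Python's math.erfc):

```python
import math

N = 1_000_000
se = 1.0 / math.sqrt(N)   # standard error of the sample average: 0.001
threshold = 5 * se        # 0.005 is five standard errors above the mean

# One-sided tail beyond five standard deviations:
# P(Z > 5) = 0.5 * erfc(5 / sqrt(2)) ~= 2.8665e-7
tail = 0.5 * math.erfc(5 / math.sqrt(2))
print(tail, 1 - tail)
```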

My old teacher Stephen Hawking accepted on this basis that the Higgs boson does exist, paid up on his bet to Kip Thorne, and recommends the Nobel for Higgs (Higgs the person, not Higgs the boson).

What is proof for Stephen Hawking is proof for me. This forum is a forum about physics. As Einstein remarked "As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality". We use mathematics to do physics. In this forum we talk about physics.

A QM expectation value is something coming out of some mathematics. A LHV bound on expectation values is something coming out of some mathematics. To make either refer to reality, we have to add in uncertainty (= probability, statistics).

Re: A silly computer experiment ... or, the heart of the mat

Postby minkwe » Mon Apr 14, 2014 5:22 am

minkwe wrote:Anyone in doubt can simply look at these expressions from your Larsson and Gill paper:
δ > 0
δ ≥ 4 − 3/γ
|E(AC′|ΛAC′) + E(AD′|ΛAD′)| + |E(BC′|ΛBC′) − E(BD′|ΛBD′)|≤ 6/γ − 4
where γ is the coincidence efficiency of the experiment.

A masterpiece of contradiction. Just set the coincidence efficiency to 0.5 and see what you get, and ask yourself why a valid theory with a linear combination of 4 terms, each bounded by [-1, +1], would predict an upper bound of 8 when we have established the maximum must be no bigger than 4. Well, I guess the response will be more character assassination and no response to the substance.


gill1109 wrote: so I am switching your posts off again. (You know it is easy to make posts of particular persons invisible).

[Personal comment removed by Admin]
Last edited by Admin on Mon Apr 14, 2014 5:00 pm, edited 1 time in total.
Reason: Removed derogatory personal remark.

Re: A silly computer experiment ... or, the heart of the mat

Postby gill1109 » Mon Apr 14, 2014 9:29 am

Imagine the following experiment.

Alice and Bob are physics students. They are in different classrooms and in each classroom there is a black box, called "measuring device A" and "measuring device B" respectively. These two boxes are connected to another black box in another classroom, called "source", through some kind of cables, tunnels, or whatever, so that all three black boxes have means to share any information they like. Alice and Bob's boxes each have two buttons, and two lights. The buttons can be pressed, the lights may or may not flash. The communication channels can be switched on and off.

Initially the communication channels are open.

The following is now repeated 10 000 times:

Step 1. The connections are severed.

Step 2. Alice presses the button marked "0" or the button marked "90"; Bob presses the button marked "45" or the button marked "135". After Alice and Bob have each pressed a button, a red or a green light flashes on their box. They record their input and their output.

Step 3. The connections between the three magic black boxes are restored.

Assume that in each of the 10 000 runs or trials, Alice and Bob each choose their button completely at random. Imagine that we get to see the following statistics, each of course based on a disjoint subset of about 2 500 runs:

Prob(lights flash same colour | 0, 45)
= Prob(lights flash same colour | 90, 45)
= Prob(lights flash same colour | 90, 135)
= 0.15

Prob(lights flash same colour | 0, 135)
= 0.85
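For ±1 outcomes each agreement probability translates into a correlation via E = 2·P(same) − 1, and the four statistics above combine into a CHSH value of 2.8 (my quick check, using the standard CHSH sign convention):

```python
# Correlation from agreement probability for two +/-1 outcomes:
# E = P(same colour) - P(different colour) = 2*P(same) - 1.
def E(p_same):
    return 2 * p_same - 1

e_0_45   = E(0.15)   # -0.7
e_90_45  = E(0.15)   # -0.7
e_90_135 = E(0.15)   # -0.7
e_0_135  = E(0.85)   #  0.7

# CHSH combination: three terms added, the odd pair out subtracted.
S = abs(e_0_45 + e_90_45 + e_90_135 - e_0_135)
print(round(S, 3))  # 2.8, above the local bound of 2
```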

Challenge to Michel (and anyone else who wants to become famous): write three computer programs which simulate the three magic boxes, to be run on three separate computers sending one another messages by internet (to simulate the communication channels).

Fine by me if you just write one computer program but it must be clear that it could be broken into separate pieces as required.

The first person who succeeds will almost certainly win the Nobel prize and revolutionize quantum physics. And prove conclusively that quantum entanglement is a myth. You are welcome to program Joy Christian's S^3-based theory to achieve this aim. Why are we waiting? Which Bell-denier is going to be first? They can't even do this yet in the quantum optics lab, though they say they might get there in five years. Beat them to it, beat them at their own game!
