minkwe's challenge

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 7:40 am

Heinera wrote:Since Bell's theorem (which you seem to disagree with) is about loophole-free models, show me a LHV model that reproduces the quantum correlations without data rejection.

Yes, this is Bell's theorem:

No physical theory of local hidden variables, in which every particle is detected (cf. the detection loophole), and time delays between particles are forbidden (cf. the coincidence-time loophole), and momentum transfers are forbidden (cf. the memory loophole), and counter-factual outcomes are measured on the same set of particles as the actual ones, can ever reproduce all of the predictions of quantum mechanics.


Then your statement above is equivalent to:
Since Bell's theorem is about non-physical models, show me a physical model which violates it.
Simply nonsense.

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 8:11 am

Heinera wrote:"In particular then, the program can be run with N = 1 and all four possible pairs of measurement settings, and the same initial random seed, and it will thereby generate successively four pairs (A,B), (A',B), (A,B'), (A',B'). If the programmer neither cheated nor made any errors, in other words, if the program is a correct implementation of a genuine LHV model, then both values of A are the same, and so are both values of A', both values of B, and both values of B'. We now have the first row of the Nx4 spreadsheet of Section 2 of this paper." (My emphasis.)

In other words, if the model is not a LHV model (e.g., a non-local model), the values will not in general be the same, and there will be no table; no upper bound of 2.

You still don't have a clue:

1 - If you measure (A,B), (A',B), (A,B'), (A',B') each on a different particle pair, the A in (A,B) can be different from the A in (A,B') without any mistake or cheating.
2 - If you measure the same particle pair at (A,B), and then exactly the same pair again at (A,B'), the A in (A,B) can be different from the A in (A,B') without any mistake or cheating.
3 - The only way to measure (A,B), (A',B), (A,B'), (A',B') on the same particle pair, and make sure the A in (A,B) and the A in (A,B') are the same (and each outcome is the same in each pair), is to measure the same particle pair simultaneously at (A, A', B, B'), an impossibility. Therefore a genuine experiment testing S <= 2 is impossible.
4 - If the probability of obtaining H for a coin is 0.75, the probability of the counter-factual H outcome for the same coin cannot be 0.75 too. It must be 0.25.
5 - No 4xN spreadsheet can violate S <= 2. It doesn't matter where you get the data you put in the spreadsheet: LHV/QM/non-local model/non-real model/statistical error, etc.
6 - The correct inequality for 4 different 2xN spreadsheets is S <= 4; it doesn't matter where you get the data you put in the spreadsheets: LHV/QM/non-local model/non-real model/statistical error, etc. 4 *different* 2xN spreadsheets can easily violate S <= 2, because that inequality does not apply to such data. It is a mathematical error to even compare them (a numerical sketch of both bounds follows below).
7 - It is utter nonsense to compare an inequality derived from a 4xN spreadsheet with data in the form of 4 different 2xN spreadsheets, even if your 4 *different* 2xN spreadsheets are randomly sampled from a single 4xN spreadsheet. What determines the upper bound is the degrees of freedom in the data, not the degrees of freedom in the original spreadsheet you sampled from.
8 - These inequalities have nothing to do with physics; they are mathematical tautologies about real numbers and degrees of freedom. Please read the Rosinger paper carefully. Their violation points to a mathematical error in their application. Nothing can violate them.
9 - No EPRB experiment will ever be done which produces a 4xN spreadsheet, as it must if it purports to *test* the S <= 2 relationship. As long as they keep producing 4 *different* 2xN spreadsheets, the appropriate inequality is S <= 4, and it will never be violated.

These are the points that continue to elude you and Richard and many other Bell worshipers.
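To make points 5 and 6 above concrete, here is a small numerical sketch in Python; the ±1 columns are arbitrary illustrative data, not the output of any model discussed in this thread. A single 4xN spreadsheet of outcomes can never give |S| > 2, while four separately generated 2xN spreadsheets can reach 4:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000

    # A single 4xN spreadsheet: every row holds all four outcomes A, A', B, B' in {-1,+1}.
    A, Ap, B, Bp = rng.choice([-1, 1], size=(4, N))
    S_single = np.mean(A*B) - np.mean(A*Bp) + np.mean(Ap*B) + np.mean(Ap*Bp)
    # Row identity: A*(B - Bp) + Ap*(B + Bp) is always +2 or -2, so |S_single| <= 2.

    # Four *different* 2xN spreadsheets: each correlation gets its own columns,
    # so the four terms can be pushed to their extremes independently.
    A1 = rng.choice([-1, 1], N); B1 = A1      # sheet 1: E(A,B)   = +1
    A2 = rng.choice([-1, 1], N); B2p = -A2    # sheet 2: E(A,B')  = -1
    A3 = rng.choice([-1, 1], N); B3 = A3      # sheet 3: E(A',B)  = +1
    A4 = rng.choice([-1, 1], N); B4p = A4     # sheet 4: E(A',B') = +1
    S_four = np.mean(A1*B1) - np.mean(A2*B2p) + np.mean(A3*B3) + np.mean(A4*B4p)

    print(abs(S_single), S_four)   # |S_single| <= 2 always; S_four = 4 here

Here S is the usual CHSH combination E(A,B) - E(A,B') + E(A',B) + E(A',B').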

Re: minkwe's challenge

Postby Heinera » Thu May 08, 2014 8:32 am

minkwe wrote:
Heinera wrote:"In particular then, the program can be run with N = 1 and all four possible pairs of measurement settings, and the same initial random seed, and it will thereby generate successively four pairs (A,B), (A',B), (A,B'), (A',B'). If the programmer neither cheated nor made any errors, in other words, if the program is a correct implementation of a genuine LHV model, then both values of A are the same, and so are both values of A', both values of B, and both values of B'. We now have the first row of the Nx4 spreadsheet of Section 2 of this paper." (My emphasis.)

In other words, if the model is not a LHV model (e.g., a non-local model), the values will not in general be the same, and there will be no table; no upper bound of 2.

You still don't have a clue:

1 - If you measure (A,B), (A',B), (A,B'), (A',B') each on a different particle pair, the A in (A,B) can be different from the A in (A,B') without any mistake or cheating.


You went wrong already here. This thread is not about experiments and actual particles. This thread is about computer models.

You seem to agree that a LHV computer model can never exceed the CHSH bound of 2 when all correlations are computed on the same set of hidden variables.

I gave a trivial example of a non-local HV model that easily violates that bound, even with all correlations computed on the same set. And I explained why it can do so: with this non-local model it is impossible to construct the table in Richard's paper, so his theorem doesn't apply.

Yet you seem to conclude that there is no difference between LHV computer models and non-local computer models. That is a kind of logic that beats me.
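For concreteness, here is a minimal sketch in Python of one standard non-local recipe of this kind; the model, angles and seed are illustrative assumptions, not Heinera's actual program, which is not reproduced in this thread. Bob's outcome is allowed to depend on Alice's distant setting and outcome, which is enough to reach the quantum value |S| = 2√2 even when every correlation is computed with the same seed, i.e. the same hidden variables:

    import numpy as np

    def corr_nonlocal(a, b, seed, n=10**6):
        rng = np.random.default_rng(seed)
        lam = rng.uniform(0, 2*np.pi, n)           # shared hidden variables, fixed by the seed
        A = np.where(np.cos(lam) >= 0, 1, -1)      # Alice's outcome: a local function of lam
        # Non-local step: Bob's outcome is built from Alice's setting a and outcome A.
        p_anti = (1 + np.cos(a - b)) / 2
        B = np.where(rng.uniform(0, 1, n) < p_anti, -A, A)
        return np.mean(A * B)                      # tends to -cos(a - b), the QM prediction

    a, ap, b, bp = 0.0, np.pi/2, np.pi/4, 3*np.pi/4
    seed = 0                                       # same seed for all four correlations
    S = (corr_nonlocal(a, b, seed) - corr_nonlocal(a, bp, seed)
         + corr_nonlocal(ap, b, seed) + corr_nonlocal(ap, bp, seed))
    print(abs(S))                                  # about 2*sqrt(2) > 2

Because B depends on both settings, there is no single well-defined B column, so the 4xN table of Richard's paper cannot be constructed for this kind of model; that is the point being made above.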

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 8:41 am

Heinera wrote:You went wrong already here. This thread is not about experiments and actual particles. This thread is about computer models.

I forgot you were only interested in non-physical particles and computer models of non-physical particles and impossible experiments.

Re: minkwe's challenge

Postby gill1109 » Thu May 08, 2014 8:49 am

Michel is only interested in computer simulations of already actually performed experiments. He's been good at them, so far. He might get a nasty shock in five years or so, however.

Thought experiments are idle speculation, mathematics is irrelevant tautologies. Nobody needs to know anything about statistics.

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 8:53 am

You seem to agree that a LHV computer model can never exceed the CHSH bound of 2 when all correlations are computed on the same set of hidden variables.

Nope, I don't *seem*. I've been saying since the beginning that NOTHING WHATSOEVER can violate the CHSH bound of 2 when it is computed from a single 4xN spreadsheet of OUTCOMES. The physics of what generates the spreadsheet is completely irrelevant to the inequality.

I gave a trivial example of a non-local HV model that easily violates that bound, even with all correlations computed on the same set.

Nope you didn't. You had 4 different 2xN spreadsheets of OUTCOMES. And even your claim that you had a single set of hidden variables is false. You forgot to add your non-local hidden variable to the column of variables. You used the setting at the other end as a non-local variable. You can't eat your cake and have it. If you introduce a non-local hidden variable, then it is a variable, and since it is different from correlation to correlation, you can't then argue that you have the same set of hidden variables. So your claim is obviously false.

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 9:07 am

gill1109 wrote:Michel is only interested in computer simulations of already actually performed experiments.

Nope he is not. He is interested in correctly applying mathematics to physics. He is interested in logic and consistency of reasoning, which avoids silly unnecessary mysticism/nonsense such as non-locality, non-reality, multiple universes, backward causation, negative probabilities etc.


He's been good at them, so far. He might get a nasty shock in five years or so, however.

He will not get any shock. Nobody has so far been able to simulate the data from the so-called non-local/non-real experiment that they claim will be done in 5 years.

Thought experiments are idle speculation, mathematics is irrelevant tautologies. Nobody needs to know anything about statistics.

Thought experiments are only relevant to physics if they represent proper reasoning about physics. Wild speculation about non-physical particles and impossible experiments is irrelevant to physics. Each physical situation requires the appropriate statistical/mathematical treatment. Just because you are a carpenter with a hammer doesn't mean everything is a nail. You have to apply statistics appropriately in every situation. Unfortunately, some well-meaning statisticians assume erroneously that expertise in statistics makes them experts in the application of statistics to physics, not realizing that you need a deep understanding of the physical issues involved in order to apply statistics appropriately. The same thing happens in law and politics. Richard has been very successful at applying statistics to law. But law is not physics.

Re: minkwe's challenge

Postby Heinera » Thu May 08, 2014 9:39 am

minkwe wrote:And even your claim that you had a single set of hidden variables is false. You forgot to add your non-local hidden variable to the column of variables. You used the setting at the other end as a non-local variable. You can't eat your cake and have it. If you introduce a non-local hidden variable, then it is a variable, and since it is different from correlation to correlation, you can't then argue that you have the same set of hidden variables. So your claim is obviously false.

The detector settings are not hidden variables. They are inputs to the model, exogenously given and completely outside the model's control.

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 9:52 am

Heinera wrote:
minkwe wrote:And even your claim that you had a single set of hidden variables is false. You forgot to add your non-local hidden variable to the column of variables. You used the setting at the other end as a non-local variable. You can't eat your cake and have it. If you introduce a non-local hidden variable, then it is a variable, and since it is different from correlation to correlation, you can't then argue that you have the same set of hidden variables. So your claim is obviously false.

The detector settings are not hidden variables. They are inputs to the model, exogenously given and completely outside the model's control.

So you don't even understand how your own model works. Isn't Alice's setting a hidden variable for Bob's outcome, and Bob's setting a hidden variable for Alice's outcome? Isn't that how you achieve your non-locality? If Bob gave both settings at his detector and Alice gave both settings at her detector, then how is your model a non-local one? If not, please tell me exactly all the "variables" which generate each outcome at each station.

Re: minkwe's challenge

Postby Heinera » Thu May 08, 2014 11:35 am

minkwe wrote: Isn't Alice's setting a hidden variable for Bob's outcome, and Bob's setting a hidden variable for Alice's outcome? Isn't that how you achieve your non-locality? If Bob gave both settings at his detector and Alice gave both settings at her detector, then how is your model a non-local one? If not, please tell me exactly all the "variables" which generate each outcome at each station.

You confuse "non-local" with "hidden". In the model I don't achieve non-locality; I assume it.

What is a hidden variable? A variable can only be hidden if it is hidden from someone. Why should the settings be hidden? Just have Bob and Alice secretly agree in advance on a list of detector settings for each run. Obviously the detector settings will no longer be hidden from them in any way. For each run, they both know the detector setting at the other place. Nothing hidden there; no hidden variable for them.

If, on the other hand, you imply that "non-local" means that something weirdly non-classical must be going on, you are correct.

Unfortunately (or happily, however you like it), any attempt at introducing the detector settings as a non-local hidden variable in my program would still not make it possible to construct Richard's table in a unique and consistent way. So his theorem would still not apply. If you disagree, try it out.

By the way, what is your definition of a hidden variable?

Re: minkwe's challenge

Postby minkwe » Thu May 08, 2014 4:13 pm

Heinera wrote:You confuse "non-local" with "hidden".

Nope I don't. You are playing word games again. None of your variables in your model are hidden. I can see them all even though you call them hidden.

In the model I don't achieve non-locality; I assume it.

More word games. Your assumption telepathically appeared in the results then :roll:
What is a hidden variable? A variable can only be hidden if it is hidden from someone.

What exactly is hidden in your model that allows you to call them "hidden variables"?

Just have Bob and Alice secretly agree in advance on a list of detector settings for each run. Obviously the detector settings will no longer be hidden from them in any way.

So everything is perfectly local, yet you call them non-local. I don't know why I bother with you. You get stuck in a hole, yet you keep digging. Call them anything you like. It matters not one iota. The indisputable point is that you have 4 different 2xN spreadsheets of outcomes for which the upper bound is 4, irrespective of any physics.

Unfortunately (or happily, however you like it), any attempt at introducing the detector settings as a non-local hidden variable in my program would still not make it possible to construct Richard's table in a unique and consistent way. So his theorem would still not apply. If you disagree, try it out.

Amusing indeed. Your outcomes are 4 different 2xN spreadsheets, for which the correct upper bound is 4. Your "non-local" model failed the challenge. If quantum mysticism tickles you and you'd rather bury your head in the sand and keep comparing 4x2xN data with 4xN inequalities, go ahead.

Re: minkwe's challenge

Postby Heinera » Thu May 08, 2014 11:47 pm

minkwe wrote:Amusing indeed. Your outcomes are 4 different 2xN spreadsheets, for which the correct upper bound is 4. Your "non-local" model failed the challenge. If quantum mysticism tickles you and you'd rather bury your head in the sand and keep comparing 4x2xN data with 4xN inequalities, go ahead.

Exactly, the correct upper bound is 4. So why did it fail the challenge? The challenge was to show that some programs could violate the upper bound of 2?

And we don't have to agree on what is or is not a "hidden variable." Any computer simulation of Bell-type experiments must have some pseudo-random variable(s). So I can rephrase:

Do you believe that for a LHV computer model, the CHSH has an upper bound of 2 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed)?

Do you believe that for non-local computer models, the CHSH has an upper bound of 4 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed)?
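As a concrete reading of the first question, here is a sketch using an assumed toy local model (not any specific program from this thread) of what the same seed buys you: the seed fixes the hidden variables, all four outcome columns can then be computed from them, and the resulting single 4xN table obeys |S| <= 2 by the same per-row identity as before:

    import numpy as np

    def local_outcome(setting, lam):
        # Toy LHV rule: the outcome depends only on the local setting and the hidden variable.
        return np.where(np.cos(setting - lam) >= 0, 1, -1)

    n, seed = 10**6, 7
    a, ap, b, bp = 0.0, np.pi/2, np.pi/4, 3*np.pi/4

    lam = np.random.default_rng(seed).uniform(0, 2*np.pi, n)   # same seed -> same hidden variables
    A, Ap = local_outcome(a, lam), local_outcome(ap, lam)
    B, Bp = -local_outcome(b, lam), -local_outcome(bp, lam)    # minus sign mimics anti-correlation

    rows = A*(B - Bp) + Ap*(B + Bp)   # per-row CHSH quantity; always +2 or -2
    S = np.mean(A*B) - np.mean(A*Bp) + np.mean(Ap*B) + np.mean(Ap*Bp)
    print(np.unique(rows), S)         # every row is -2 here, so S = -2: at the bound, never beyond it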

Re: minkwe's challenge

Postby minkwe » Fri May 09, 2014 4:40 am

Heinera wrote:Exactly, the correct upper bound is 4. So why did it fail the challenge? The challenge was to show that some programs could violate the upper bound of 2?

You did not even bother to read the challenge, before taking it on. And it appears from your other questions you don't even bother to read anything else that I write.
minkwe wrote:1 - If you measure (A,B), (A',B), (A,B'), (A',B') each on a different particle pair, the A in (A,B) can be different from the A in (A,B') without any mistake or cheating.
2 - If you measure the same particle pair at (A,B), and then exactly the same pair again at (A,B'), the A in (A,B) can be different from the A in (A,B') without any mistake or cheating.
3 - The only way to measure (A,B), (A',B), (A,B'), (A',B') on the same particle pair, and make sure the A in (A,B) and the A in (A,B') are the same (and each outcome is the same in each pair), is to measure the same particle pair simultaneously at (A, A', B, B'), an impossibility. Therefore a genuine experiment testing S <= 2 is impossible.
4 - If the probability of obtaining H for a coin is 0.75, the probability of the counter-factual H outcome for the same coin cannot be 0.75 too. It must be 0.25.
5 - No 4xN spreadsheet can violate S <= 2. It doesn't matter where you get the data you put in the spreadsheet: LHV/QM/non-local model/non-real model/statistical error, etc.
6 - The correct inequality for 4 different 2xN spreadsheets is S <= 4; it doesn't matter where you get the data you put in the spreadsheets: LHV/QM/non-local model/non-real model/statistical error, etc. 4 *different* 2xN spreadsheets can easily violate S <= 2, because that inequality does not apply to such data. It is a mathematical error to even compare them.
7 - It is utter nonsense to compare an inequality derived from a 4xN spreadsheet with data in the form of 4 different 2xN spreadsheets, even if your 4 *different* 2xN spreadsheets are randomly sampled from a single 4xN spreadsheet. What determines the upper bound is the degrees of freedom in the data, not the degrees of freedom in the original spreadsheet you sampled from.
8 - These inequalities have nothing to do with physics; they are mathematical tautologies about real numbers and degrees of freedom. Please read the Rosinger paper carefully. Their violation points to a mathematical error in their application. Nothing can violate them.
9 - No EPRB experiment will ever be done which produces a 4xN spreadsheet, as it must if it purports to *test* the S <= 2 relationship. As long as they keep producing 4 *different* 2xN spreadsheets, the appropriate inequality is S <= 4, and it will never be violated.

These are the points that continue to elude you and Richard and many other Bell worshipers.

Re: minkwe's challenge

Postby Heinera » Fri May 09, 2014 9:06 am

minkwe wrote:
Heinera wrote:Exactly, the correct upper bound is 4. So why did it fail the challenge? The challenge was to show that some programs could violate the upper bound of 2?

You did not even bother to read the challenge, before taking it on. And it appears from your other questions you don't even bother to read anything else that I write.

I read the challenge, but I just couldn't make heads or tails of it. So I thought I should just supply something concrete, like a computer model. (We are discussing computer models in this thread. I know you are skilled with them, so they are a perfect arena for mutual exchange).

But let's forget about the challenge. I interpret your answer to mean that you agree with the two questions? (They are after all only about computer models; no actual physics)
Two yes/no questions; please let me know if you instead answer "no" to one of the two questions.

Re: minkwe's challenge

Postby minkwe » Fri May 09, 2014 7:35 pm

Heinera wrote:But let's forget about the challenge. I interpret your answer to mean that you agree with the two questions? (They are after all only about computer models; no actual physics)
Two yes/no questions; please let me know if you instead answer "no" to one of the two questions.

I don't agree with the pretext of your two questions. Please read the list of points I just reminded you of, taking note of the difference between "outcomes" and "variables", and if you still disagree with any one of them, or you still can't make heads or tails of them, point it out specifically. Otherwise I'll assume you agree with all of them, and this discussion will be over as far as I'm concerned.

The answers to your questions are already included in those points, and they are not simply "yes/no", because your questions are premised on misconceptions, which I've addressed in those points.

Re: minkwe's challenge

Postby Heinera » Sat May 10, 2014 4:17 am

minkwe wrote:
Heinera wrote:But let's forget about the challenge. I interpret your answer to mean that you agree with the two questions? (They are after all only about computer models; no actual physics)
Two yes/no questions; please let me know if you instead answer "no" to one of the two questions.

I don't agree with the pretext of your two questions. Please read the list of points I just reminded you of, taking note of the difference between "outcomes" and "variables", and if you still disagree with any one of them, or you still can't make heads or tails of them, point it out specifically. Otherwise I'll assume you agree with all of them, and this discussion will be over as far as I'm concerned.


There is no pretext. The questions are purely technical questions about computer programs.

I read your list of points. Unfortunately, it includes nothing about statistics, which makes it less useful when discussing computer programs that include random variables.

For a LHV computer model, the CHSH has an upper bound of 2 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed). Which means that at least one of the correlations must be completely wrong. If the correlations change considerably when they are computed with four different seeds, even for large N, it means the correlations must be extremely seed dependent. That is crazy. What if someone wanted to simulate the model with yet another set of seeds?

So if the absolute upper bound is 2 with same seed, you can't expect to see much of a difference with different seeds (given large N).
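A rough numerical version of this statistical point, again with an assumed toy LHV model rather than any particular program from the thread: compute the CHSH sum once with the same seed for all four correlations and once with four different seeds; for large N the two values differ only by sampling noise of order 1/sqrt(N):

    import numpy as np

    def corr(a, b, seed, n=10**6):
        rng = np.random.default_rng(seed)
        lam = rng.uniform(0, 2*np.pi, n)             # hidden variables generated from the seed
        A = np.where(np.cos(a - lam) >= 0, 1, -1)    # depends only on a and lam
        B = -np.where(np.cos(b - lam) >= 0, 1, -1)   # depends only on b and lam
        return np.mean(A * B)

    a, ap, b, bp = 0.0, np.pi/2, np.pi/4, 3*np.pi/4
    S_same = corr(a, b, 0) - corr(a, bp, 0) + corr(ap, b, 0) + corr(ap, bp, 0)
    S_diff = corr(a, b, 1) - corr(a, bp, 2) + corr(ap, b, 3) + corr(ap, bp, 4)
    print(S_same, S_diff)   # both close to -2; the gap is only sampling noise of order 1/sqrt(n)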

Re: minkwe's challenge

Postby minkwe » Sat May 10, 2014 6:25 am

Heinera wrote:There is no pretext. The questions are purely technical questions about computer programs.

I read your list of points. Unfortunately, it includes nothing about statistics, which makes it less useful when discussing computer programs that include random variables.


You keep saying that, probably because your friend Richard also mentions statistics, but you fail to see or understand the relevant issues. That is why you are still unable to say which of the points is wrong or unclear. Yet you keep repeating the same misconceptions I've already preempted and addressed.

For a LHV computer model, the CHSH has an upper bound of 2 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed).

Correlations are computed from outcomes. The upper bound is determined by the degrees of freedom in the outcomes. It doesn't matter where you get your list of outcomes: LHV theory or non-local theory. The number of degrees of freedom in the outcomes determines the upper bound. This is explained in points 1, 2, 5 and 6 of my list. Statistics doesn't change that. Maybe Richard has deceived you into thinking that it does, when even he does not believe that. It's just a rhetorical device for him. If you disagree, explain how statistics changes those points. That was the whole point of the challenge.

Which means that at least one of the correlations must be completely wrong. If the correlations change considerably when they are computed with four different seeds, even for large N, it means the correlations must be extremely seed dependent. That is crazy. What if someone wanted to simulate the model with yet another set of seeds?

This doesn't make sense. Something is only wrong if it is significantly different from the correct value. It is nonsense to say an apple is wrong because it is not an orange. So no, the value is not wrong; you are simply calculating a completely different thing from what you are comparing it with. This is explained in points 4 and 7 on my list. Statistics does not change anything. If you disagree, explain how it does.

So if the absolute upper bound is 2 with same seed, you can't expect to see much of a difference with different seeds (given large N).

Completely wrong. Read the list of points again. Your single-seed results include both actual and counterfactual outcomes. Your different-seed results are all actual. The counterfactual probabilities are opposite to the actual ones; therefore you should obtain different results. This is covered in point 4 of my list. If you believe statistics changes these facts, explain how.

Re: minkwe's challenge

Postby Heinera » Sat May 10, 2014 6:59 am

minkwe wrote:
Heinera wrote: For a LHV computer model, the CHSH has an upper bound of 2 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed).

Correlations are computed from outcomes.

And in computer models outcomes are ultimately computed from the pseudo-random variables.

The upper bound is determined by the degrees of freedom in the outcomes. It doesn't matter where you get your list of outcomes: LHV theory or non-local theory. The number of degrees of freedom in the outcomes determines the upper bound. This is explained in points 1, 2, 5 and 6 of my list. Statistics doesn't change that. Maybe Richard has deceived you into thinking that it does, when even he does not believe that. It's just a rhetorical device for him. If you disagree, explain how statistics changes those points. That was the whole point of the challenge.


With the same seed (same set of hidden variables) in the computer program the absolute upper bound is 2. You have earlier agreed to that. That is why you were warning Joy about computing all correlations on the same set.

Which means that at least one of the correlations must be completely wrong. If the correlations change considerably when they are computed with four different seeds, even for large N, it means the correlations must be extremely seed dependent. That is crazy. What if someone wanted to simulate the model with yet another set of seeds?


This doesn't make sense. Something is only wrong if it is significantly different from the correct value. It is nonsense to say an apple is wrong because it is not an orange. So no, the value is not wrong; you are simply calculating a completely different thing from what you are comparing it with. This is explained in points 4 and 7 on my list. Statistics does not change anything. If you disagree, explain how it does.


This is a computer program. It produces correlations, not fruit. One of the correlations must be significantly different from the QM value.

The statistics enter in my next sentence:

So if the absolute upper bound is 2 with same seed, you can't expect to see much of a difference with different seeds (given large N).

Completely wrong. Read the list of points again. Your single-seed results include both actual and counterfactual outcomes. Your different-seed results are all actual. The counterfactual probabilities are opposite to the actual ones; therefore you should obtain different results. This is covered in point 4 of my list. If you believe statistics changes these facts, explain how.


This is a computer program. It takes a pair of detector settings, and generates two lists of outcomes, and then it computes the correlation. And then I ask the question: Could this be massively seed dependent? There are no factual or counterfactual outcomes, only outcomes. The distinction between factual and counterfactual is something that belongs in the twilight zone between experiment and philosophy.

Re: minkwe's challenge

Postby Heinera » Sat May 10, 2014 7:13 am

minkwe wrote:
Heinera wrote: For a LHV computer model, the CHSH has an upper bound of 2 if you compute all four correlations with the same set of values for the pseudo-random variable(s) (i.e., the same seed).

Correlations are computed from outcomes.

And in computer models outcomes are ultimately computed from the pseudo-random variables.

The upper bound is determined by the degrees of freedom in the outcomes. It doesn't matter where you get your list of outcomes: LHV theory or non-local theory. The number of degrees of freedom in the outcomes determines the upper bound. This is explained in points 1, 2, 5 and 6 of my list. Statistics doesn't change that. Maybe Richard has deceived you into thinking that it does, when even he does not believe that. It's just a rhetorical device for him. If you disagree, explain how statistics changes those points. That was the whole point of the challenge.


If you only appeal to degrees of freedom, the only thing you can say about 1000 coin flips is that the number of heads can be between 0 and 1000. With statistics, you can say a lot, lot more than that.
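A quick numerical illustration of that point (a sketch assuming a fair coin): degrees of freedom alone only say the number of heads lies between 0 and 1000, while the statistics of 1000 flips confines it to roughly 470-530 in 95% of runs:

    import numpy as np

    rng = np.random.default_rng(0)
    n_flips, n_repeats = 1000, 100_000
    heads = rng.binomial(n_flips, 0.5, size=n_repeats)   # many independent runs of 1000 fair flips
    lo, hi = np.percentile(heads, [2.5, 97.5])
    print(lo, hi)   # roughly 469 and 531: a far tighter statement than "between 0 and 1000"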

With the same seed (same set of hidden variables) in the computer program the absolute upper bound is 2. You have earlier agreed to that. That is why you were warning Joy about computing all correlations on the same set.

Which means that at least one of the correlations must be completely wrong. If the correlations change considerably when they are computed with four different seeds, even for large N, it means the correlations must be extremely seed dependent. That is crazy. What if someone wanted to simulate the model with yet another set of seeds?


This doesn't make sense. Something is only wrong if it is significantly different from the correct value. It is nonsense to say an apple is wrong because it is not an orange. So no, the value is not wrong; you are simply calculating a completely different thing from what you are comparing it with. This is explained in points 4 and 7 on my list. Statistics does not change anything. If you disagree, explain how it does.


This is a computer program. It produces correlations, not fruit. One of the correlations must be significantly different from the QM value.

The statistics enter in my next sentence:

So if the absolute upper bound is 2 with same seed, you can't expect to see much of a difference with different seeds (given large N).

Completely wrong. Read the list of points again. Your single-seed results include both actual and counterfactual outcomes. Your different-seed results are all actual. The counterfactual probabilities are opposite to the actual ones; therefore you should obtain different results. This is covered in point 4 of my list. If you believe statistics changes these facts, explain how.


This is a computer program. It takes a pair of detector settings, and generates two lists of outcomes, and then it computes the correlation. And then I ask the question: Could this in any reasonable way be massively seed dependent? There are no factual or counterfactual outcomes in this program, only outcomes. The distinction between factual and counterfactual is something that belongs in the twilight zone between experiment and philosophy.

Re: minkwe's challenge

Postby minkwe » Sat May 10, 2014 9:06 am

Heinera wrote:And in computer models outcomes are ultimately computed from the pseudo-random variables.

Computer models of what? The computer model excuse doesn't work. "Ultimately" doesn't work either; you still haven't read or understood any of the points. Does the computer model produce data? What is the number of degrees of freedom in the data? That determines the upper bound. This issue is addressed in points 6 and 7, but you still do not understand that.

If you only appeal to degrees of freedom, the only thing you can say about 1000 coin flips is that the number of heads can be between 0 and 1000. With statistics, you can say a lot, lot more than that.

Wrong. What determines the UPPER BOUND is the number of degrees of freedom. That doesn't mean you cannot say anything else about the coin flips. The UPPER BOUND is not the only thing that is interesting about coin flips. Statistics does not change the UPPER BOUND; if you disagree, explain how. That was the point of the challenge.

With the same seed (same set of hidden variables) in the computer program the absolute upper bound is 2.

Wrong. Read the points again, as I keep telling you. What determines the upper bound is the number of degrees of freedom in the outcomes. The correlations are calculated from the outcomes; it is the degrees of freedom in the outcomes that matter for the upper bound. The outcomes have possible values (-1, +1); it is the values of the outcomes that determine the upper bound. The inequalities are valid whether you have hidden variables or not. See Rosinger's paper (point 8).

This is a computer program. It produces correlations, not fruit. One of the correlations must be significantly different from the QM value

Very funny. The QM values are all predictions of actual results, not counterfactual results. It is silly to expect the counterfactual results to be the same as the actual ones. If the actual probability of obtaining 'H' in a coin toss is 0.75, it is silly to expect the counterfactual probability of obtaining 'H' to be the same. The silly mistake you keep making is to assume that if you toss the coin on a glass table, the relative frequency of seeing H from above the table will be the same as the relative frequency of seeing H from underneath the table. (point 4)
