The CHSH inequality as a weakly objective result

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Wed Oct 20, 2021 8:50 am

Justo wrote:
gill1109 wrote:
Justo wrote:Measurement events in each laboratory must correspond to the same source event. If you can't do that, then you cannot test the Bell inequality.

You can do that. Experimenters do do that.

Maybe I shouldn't have used the word "test". What I meant is that if you do not "theoretically" assume your results come from the same event, you cannot "theoretically" derive the inequality. Whether you can do that, or how to do it in an experiment, is another issue; it is a problem of experimental implementation.

You theoretically derive the inequality by making certain metaphysical assumptions about the nature of reality. The inequality is interesting when there are certain space-time relations between binary inputs and binary outputs at *three* measurement locations. See Figure 7 of "Bertlmann's socks" and the explanatory text https://hal.archives-ouvertes.fr/jpa-00220688/document. The words "event" and "source" mean something in quantum mechanics. But they don't mean anything in local realism.

For instance, the Delft and Munich experiments have sources in both Alice and Bob's labs, and a measurement is made in both labs and at an intermediate location, say at Caspar's location. Caspar's detector tells us whether or not to correlate Alice and Bob's outcomes given their settings.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby minkwe » Wed Oct 20, 2021 10:14 am

What I've proven above is not very complicated. Anyone can verify that Justo's key assumption is impossible using his dice example on page 17 of https://arxiv.org/pdf/2012.10238v5.pdf.

Let us assume that there are only 6 equivalence classes of λ, corresponding to the 6 sides of a die. For simplicity, but without losing generality, let us assume that λ is uniformly distributed, so each value of lambda will occur the exact same number of times for each of the pairs of experiments. Consider the simplest experiment where the sample distribution of lambdas is exactly the same as in the population. This corresponds to 4 random experiments with N = 6, where each lambda occurs exactly once. In this case it is not important for N to be very large, because we already have fair samples. Here are the 4 spreadsheets (call them λ_w, λ_x, λ_y, λ_z, matching the code below). I will show just the lambdas for simplicity.

(The four spreadsheets λ_w, λ_x, λ_y, λ_z each list the six lambda values exactly once, in a different random order.)

The permutation required to make λ_x match λ_w is [0 5 3 2 1 4], where these numbers represent the 0-based order of values in λ_x required to make it match λ_w. To make λ_y match λ_w, we need to apply the permutation [4 5 1 0 3 2] to λ_y. At this point we are good, because so far λ_w is staying unchanged. But we need two more permutations: we need to make λ_z match λ_x, and λ_z match λ_y.

But we already applied permutations to λ_x and λ_y, so either we keep those fixed and only rearrange λ_z, or we will have to undo the previous rearrangements. If we keep λ_x fixed, we will need to rearrange λ_z to match λ_x using the permutation [3 5 2 4 1 0]. And keeping λ_y fixed, we will have to rearrange λ_z to match λ_y using the permutation [1 4 3 5 2 0].

Obviously this won't work, because we need two incompatible permutations for λ_z. Therefore the assumption that those equivalences apply is false.
Here is python code to do this calculation. Each time you run it, it should generate a different set of random spreadsheets with exactly the same lambda values.


Code: Select all
import numpy

# Four spreadsheets of lambdas, one per experiment; each contains the values 1..6 exactly once
w = numpy.arange(6) + 1
x = numpy.arange(6) + 1
y = numpy.arange(6) + 1
z = numpy.arange(6) + 1

# Each experiment realises its lambdas in its own random order
numpy.random.shuffle(w)
numpy.random.shuffle(x)
numpy.random.shuffle(y)
numpy.random.shuffle(z)

# Permutations that rearrange one spreadsheet to match another,
# e.g. xw is the index order such that x[xw] == w
xw = numpy.where(w[:, None] == x[None, :])[1]
yw = numpy.where(w[:, None] == y[None, :])[1]
zx = numpy.where(x[:, None] == z[None, :])[1]
zy = numpy.where(y[:, None] == z[None, :])[1]

print('Experimental Lambdas')
print('w', w)
print('x', x)
print('y', y)
print('z', z)

print('Permutations')
print('x->w', xw)
print('y->w', yw)
print('z->x', zx)
print('z->y', zy)

# The two rearrangements of z (to match x and to match y) agree only if x == y,
# which almost never happens for independently shuffled spreadsheets
print(all(z[zx] == z[zy]))


We've demonstrated this using the simplest of examples. Imagine extending this to a larger set of lambdas and larger N >> 6. It gets worse not better.
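
As a rough illustration of how this scales, here is a sketch (keeping the assumption above that every lambda value occurs exactly once per spreadsheet): since rearranging z to match x reproduces x, and rearranging z to match y reproduces y, the two rearrangements agree only when the independently shuffled spreadsheets x and y happen to coincide, which has probability 1/N!.

Code: Select all
import numpy
from math import factorial

def compatible_fraction(n, trials=100000):
    # Fraction of runs in which the two required rearrangements of z agree.
    # Since z[zx] == x and z[zy] == y, this happens exactly when x == y.
    hits = 0
    for _ in range(trials):
        x = numpy.random.permutation(n)
        y = numpy.random.permutation(n)
        hits += numpy.array_equal(x, y)
    return hits / trials

for n in (2, 3, 4, 6):
    print(n, compatible_fraction(n), 1.0 / factorial(n))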
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Wed Oct 20, 2021 10:00 pm

Justo wrote:That is correct, I am implicitly assuming that it is possible to accommodate the data such that corresponding lambdas in the four spreadsheets match, etc.; in other words, I assume that the four different experiments (spreadsheets) share the same domain of hidden variables.
I recognize that as a possible loophole in the derivation and include the subsection "4.2 A Possible Loophole" to justify it.
The argument is fairly simple. It is based on an observation made by Eugene Wigner in 1970. Wigner observed that only a small finite number of hidden variables can exist. For a CHSH experiment only 16 different hidden variables exist. That means that when you perform more than 16 trials, the hidden variable must necessarily start repeating its values, if they actually exist. Therefore, after a sufficiently large number of trials, you will end up with the same values appearing in all four different experiments. Besides, assuming statistical independence, the relative frequencies of the possible values will be the same.

I think, more precisely, that Wigner showed that for a CHSH experiment one can (without loss of generality) suppose that the hidden variable is discrete, with 16 possible different values. The idea is to replace lambda with the quartet (A(a, lambda), A(a’, lambda), B(b, lambda), B(b’, lambda)) =: (x1, x2, y1, y2) = z, a member of the set {-1, +1}^4. One also replaces A and B by the functions which extract the value of the appropriate coordinate of z = (x1, x2, y1, y2); the settings now take values in the set {1, 2}. Thus “new A(new setting a, z)” = z1 if the new setting is “1”, z2 if the new setting is “2”; “new B(new setting b, z)” = z3 if the new setting is “1”, z4 if the new setting is “2”.

To derive the CHSH inequality for the observable four correlations (each obtained from a different experiment) one needs to assume that the probability distribution of the original lambda, hence also of the new hidden variable “z”, does not depend on the setting pair. If the setting pair is chosen according to some probability distribution over {1, 2}^2, this means we need to assume that the setting pair is statistically independent of the hidden variable.
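
For concreteness, here is a minimal numerical check of that reduction (a sketch I add for illustration): the CHSH combination evaluated at any single z in {-1, +1}^4 equals ±2, so any average over a distribution of z that does not depend on the setting pair must lie between -2 and +2.

Code: Select all
from itertools import product

values = set()
for x1, x2, y1, y2 in product((-1, 1), repeat=4):   # the 16 possible values of z
    values.add(x1*y1 + x1*y2 + x2*y1 - x2*y2)       # CHSH combination at this z

print(sorted(values))   # prints [-2, 2]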
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby FrediFizzx » Wed Oct 20, 2021 11:28 pm

The Bell inequalities are meaningless no matter which way you try to understand them. It is because of ONE simple fact that you Bell fanatics can't seem to wrap your mind around. NOTHING can exceed the bound on the inequalities!!!!!!!!!!!! :lol: :lol: :lol:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: The CHSH inequality as a weakly objective result

Postby Justo » Thu Oct 21, 2021 3:48 am

minkwe wrote:What I've proven above is not very complicated. Anyone can verify that Justo's key assumption is impossible using his dice example on page 17 of https://arxiv.org/pdf/2012.10238v5.pdf.

Let us assume that there are only 6 equivalence classes of λ, corresponding to the 6 sides of a die. For simplicity, but without losing generality, let us assume that λ is uniformly distributed, so each value of lambda will occur the exact same number of times for each of the pairs of experiments. Consider the simplest experiment where the sample distribution of lambdas is exactly the same as in the population. This corresponds to 4 random experiments with N = 6, where each lambda occurs exactly once. In this case it is not important for N to be very large, because we already have fair samples. Here are the 4 spreadsheets (call them λ_w, λ_x, λ_y, λ_z). I will show just the lambdas for simplicity.

(The four spreadsheets λ_w, λ_x, λ_y, λ_z each list the six lambda values exactly once, in a different random order.)

The permutation required to make λ_x match λ_w is [0 5 3 2 1 4], where these numbers represent the 0-based order of values in λ_x required to make it match λ_w. To make λ_y match λ_w, we need to apply the permutation [4 5 1 0 3 2] to λ_y. At this point we are good, because so far λ_w is staying unchanged. But we need two more permutations: we need to make λ_z match λ_x, and λ_z match λ_y.

But we already applied permutations to λ_x and λ_y, so either we keep those fixed and only rearrange λ_z, or we will have to undo the previous rearrangements. If we keep λ_x fixed, we will need to rearrange λ_z to match λ_x using the permutation [3 5 2 4 1 0]. And keeping λ_y fixed, we will have to rearrange λ_z to match λ_y using the permutation [1 4 3 5 2 0].

Obviously this won't work, because we need two incompatible permutations for λ_z. Therefore the assumption that those equivalences apply is false.

@minkwe you did not understand what I said. There are no permutations; there are only 16 different hidden variables. That means each lambda can only be equal to one of the 16 possibilities, and the same is true for all the others.
So, if you perform 17 trials, at least one must be repeated, and so on. When you perform a statistically significant number of trials, all hidden variables will appear with the same relative frequency if statistical independence is true; this is why p(λ) is a common factor, and you are left with an expression of the CHSH form (see the sketch at the end of this post).
So, what do people who understand this simple fact do to evade nonlocality? Quite simple: they reject statistical independence. That kills the Bell inequality.
That is what physicists like Hossenfelder and 't Hooft do. Quite simple: no metaphysics, no realism, etc., simple rational physical facts.
That is the quite simple fact Bell tried to explain, but for some mysterious reason not even professional physicists seem to understand it.
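
Here is the sketch referred to above: a rough numerical illustration of the frequency claim (my own illustration; the uniform distribution over the 16 values is an assumption made only for simplicity).

Code: Select all
import numpy

numpy.random.seed(0)
n_values = 16        # the 16 possible hidden-variable values
n_trials = 100000    # trials per experiment (one experiment per setting pair)

# Four independent experiments, drawing the hidden variable independently of the settings
freqs = [numpy.bincount(numpy.random.randint(0, n_values, n_trials), minlength=n_values) / n_trials
         for _ in range(4)]

# With many trials, the relative frequencies in the four experiments nearly coincide
print(numpy.max(numpy.abs(numpy.array(freqs) - 1.0 / n_values)))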
Justo
 
Posts: 83
Joined: Fri Aug 20, 2021 8:20 am

Re: The CHSH inequality as a weakly objective result

Postby minkwe » Thu Oct 21, 2021 6:55 pm

Justo wrote:@minkwe you did not understand what I said. There are no permutations; there are only 16 different hidden variables. That means each lambda can only be equal to one of the 16 possibilities, and the same is true for all the others.
So, if you perform 17 trials at least one must be repeated, and so on.

I understood you. Rather, you have not understood me. The example I gave above has 6 equivalence classes of lambdas, not 16. It can easily be expanded to 16. λ_w is not a single value; it is an ordered set of the 6 lambdas realized in a randomized experiment. λ_x, λ_y and λ_z are not individual lambdas; they are the three other ordered sets of lambdas realized in the three other randomized experiments which, together with λ_w, constitute a weakly objective Bell test experiment. The order of the individual lambdas in each of those experiments is obviously different.

When you perform a statistically significant number of trials, all hidden variables will appear with the same relative frequency if statistical independence is true; this is why p(λ) is a common factor, and you are left with an expression of the CHSH form.

Yes. The purpose of performing a statistically significant number of trials is to ensure fair sampling, such that the relative frequencies of the different lambdas are the same in each of the weakly objective sets. In my example above, I have eliminated this need by ensuring that λ_w, λ_x, λ_y and λ_z all have the same relative frequencies of all the individual lambdas. The only difference is that, since each one is a different random experiment, the order of the lambdas is different.

The hidden assumption in your derivation comes from the fact that it implicitly assumes that each term is evaluated on the same ordered set of lambdas. To see this, let us take the full equivalence set of lambdas and place them on a single spreadsheet Λ. Now if we apply the functions A(a, λ), A(a', λ), B(b, λ), B(b', λ) to each of the rows of Λ for the four setting pairs, we will obtain four spreadsheets of corresponding outcome pairs. You will immediately notice that the columns shared between those spreadsheets will be identical, and I don't mean just that they contain the same distribution of numbers. They will be identical in the ordering of numbers too! In fact, you can place all those spreadsheets side by side and add a column of the corresponding lambdas at the end, and any mathematical operations you carry out with your functions, you can also carry out with the spreadsheet columns. In fact, we can factor out the A(a, λ) column from the first two spreadsheets and the A(a', λ) column from the last two. The derivation of the inequality done this way boils down to merging the duplicate columns to end up with a 5xN spreadsheet with columns λ, A(a, λ), A(a', λ), B(b, λ), B(b', λ). This is the meaning of your equation (29), and it is obvious how the upper bound of 2 follows from this.
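
A minimal numerical sketch of that merged 5xN spreadsheet (my own construction, with random ±1 columns standing in for A(a, λ), A(a', λ), B(b, λ), B(b', λ)):

Code: Select all
import numpy

numpy.random.seed(0)
N = 10000

# One merged spreadsheet: each row is a single lambda with all four outcomes in {-1, +1}
A1, A2, B1, B2 = numpy.random.choice([-1, 1], size=(4, N))

# The four correlations computed from the SAME rows; row-wise the CHSH combination is +/-2,
# so its average can never exceed 2 in absolute value
S = numpy.mean(A1*B1) + numpy.mean(A1*B2) + numpy.mean(A2*B1) - numpy.mean(A2*B2)
print(S, abs(S) <= 2)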

You claim that your expression is weakly objective. But if you apply the functions A and B to each of the weakly objective spreadsheets of lambdas λ_w, λ_x, λ_y, λ_z for the corresponding settings, you will obtain four spreadsheets of corresponding outcome pairs.
Note that each of λ_x, λ_y and λ_z will contain exactly the same elements as λ_w, but the order will be different. Also, the shared outcome columns will not match: they will have the same distribution of values, but the order will be different. As explained above, for your expression to be weakly objective, any mathematical operations you do with those functions must also be doable with the above outcome pairs. But we can't do that directly, because each of them originates from a different sequence of lambdas, even if the distribution of lambdas is identical. Before we can do the required mathematical operations of factoring, we must rearrange the spreadsheets so that the lambda columns match and the shared outcome columns also match, etc. This is why you need the permutations. The proof I provided in my last two posts shows that it is not possible to do the permutations required to bring λ_w, λ_x, λ_y and λ_z into alignment. Therefore your claim that the expression is weakly objective is false. At most 3 out of the 4 terms can be weakly objective.

This has nothing to do with statistical dependence. We already have exactly the same distribution of lambdas in λ_w as in λ_x, λ_y and λ_z.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Thu Oct 21, 2021 9:53 pm

Justo wrote:@minkwe you did not understand what I said. There are no permutations; there are only 16 different hidden variables. That means each lambda can only be equal to one of the 16 possibilities, and the same is true for all the others.
So, if you perform 17 trials, at least one must be repeated, and so on. When you perform a statistically significant number of trials, all hidden variables will appear with the same relative frequency if statistical independence is true; this is why p(λ) is a common factor, and you are left with an expression of the CHSH form.
So, what do people who understand this simple fact do to evade nonlocality? Quite simple: they reject statistical independence. That kills the Bell inequality.
That is what physicists like Hossenfelder and 't Hooft do. Quite simple: no metaphysics, no realism, etc., simple rational physical facts.
That is the quite simple fact Bell tried to explain, but for some mysterious reason not even professional physicists seem to understand it.

Justo, just a matter of terminology: the 16 “hidden variables” you mention are the 16 possible different *values* of just one “hidden variable” lambda. Without loss of generality, they can be taken to be the 16 points of the set {-1, +1}^4. They are just 4-tuples of numbers +/-1: lambda = (x1, x2, y1, y2).

They are called “hidden” because in any given trial, we do not observe all four components of lambda. We get to see only two of them, by our own choice to select a and b (=1 or 2). We see measurement outcomes x_a and y_b.

What physicists like Hossenfelder and ’t Hooft do is to try to come up with a model for how a, b and lambda are all generated together. The physics of the selection of settings has to be intimately entangled (not in the quantum sense) with the physics of source and detectors. I didn’t notice any serious physicists taking much notice of these models. It is trivially easy to construct such a model. The problem is to make it make any physical sense.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby FrediFizzx » Thu Oct 21, 2021 10:22 pm

gill1109 wrote:
Justo wrote:@minkwe you did not understand what I said. There are no permutations; there are only 16 different hidden variables. That means each lambda can only be equal to one of the 16 possibilities, and the same is true for all the others.
So, if you perform 17 trials, at least one must be repeated, and so on. When you perform a statistically significant number of trials, all hidden variables will appear with the same relative frequency if statistical independence is true; this is why p(λ) is a common factor, and you are left with an expression of the CHSH form.
So, what do people who understand this simple fact do to evade nonlocality? Quite simple: they reject statistical independence. That kills the Bell inequality.
That is what physicists like Hossenfelder and 't Hooft do. Quite simple: no metaphysics, no realism, etc., simple rational physical facts.
That is the quite simple fact Bell tried to explain, but for some mysterious reason not even professional physicists seem to understand it.

Justo, just a matter of terminology: the 16 “hidden variables” you mention are the 16 possible different *values* of just one “hidden variable” lambda. Without loss of generality, they can be taken to be the 16 points of the set {-1, +1}^4. They are just 4-tuples of numbers +/-1: lambda = (x1, x2, y1, y2).

They are called “hidden” because in any given trial, we do not observe all four components of lambda. We get to see only two of them, by our own choice to select a and b (=1 or 2). We see measurement outcomes x_a and y_b.

What physicists like Hossenfelder and ’t Hooft do is to try to come up with a model for how a, b and lambda are all generated together. The physics of the selection of settings has to be intimately entangled (not in the quantum sense) with the physics of source and detectors. I didn’t notice any serious physicists taking much notice of these models. It is trivially easy to construct such a model. The problem is to make it make any physical sense.

The Bell inequalities are meaningless no matter which way you try to understand them. It is because of ONE simple fact that you Bell fanatics can't seem to wrap your mind around. NOTHING can exceed the bound on the inequalities!!!!!!!!!!!! :lol: :lol: :lol:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: The CHSH inequality as a weakly objective result

Postby Justo » Fri Oct 22, 2021 5:16 am

gill1109 wrote:Justo, just a matter of terminology: the 16 “hidden variables” you mention are the 16 possible different *values* of just one “hidden variable” lambda. Without loss of generality, they can be taken to be the 16 points of the set {-1, +1}^4. They are just 4-tuples of numbers +/-1: lambda = (x1, x2, y1, y2).

They are called “hidden” because in any given trial, we do not observe all four components of lambda. We get to see only two of them, by our own choice to select a and b (=1 or 2). We see measurement outcomes x_a and y_b.

What physicists like Hossenfelder and ’t Hooft do is to try to come up with a model for how a, b and lambda are all generated together. The physics of the selection of settings has to be intimately entangled (not in the quantum sense) with the physics of source and detectors. I didn’t notice any serious physicists taking much notice of these models. It is trivially easy to construct such a model. The problem is to make it make any physical sense.


The important point is what Wigner noticed: there are only a finite number of possible hidden variables. In your spreadsheet model the hidden variable is the row number, because each row determines the possible results of the individual measurements. At first sight one could believe that there are as many different values of the hidden variable as there are rows. Wigner noticed there can only be 16 different "rows" if we think of them as variables determining the result.
In other words, your spreadsheet model is correct and makes sense, as does your "statistical" derivation, as opposed to your meaningless "counterfactual" derivation.

@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.
Justo
 
Posts: 83
Joined: Fri Aug 20, 2021 8:20 am

Re: The CHSH inequality as a weakly objective result

Postby Heinera » Fri Oct 22, 2021 5:58 am

Justo wrote:@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.

I don't understand his point either. The computed correlations are independent of the order of the hidden variables, as they must be, since they are computed as averages.
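
For instance (a trivial sketch with made-up ±1 outcomes): shuffling the rows of a paired spreadsheet leaves the average of the products, and hence the correlation, unchanged.

Code: Select all
import numpy

numpy.random.seed(0)
a = numpy.random.choice([-1, 1], 1000)
b = numpy.random.choice([-1, 1], 1000)
perm = numpy.random.permutation(1000)

# Reordering the rows (keeping the pairing intact) does not change the average product
print(numpy.mean(a * b) == numpy.mean(a[perm] * b[perm]))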
Heinera
 
Posts: 917
Joined: Thu Feb 06, 2014 1:50 am

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Fri Oct 22, 2021 7:12 am

Heinera wrote:
Justo wrote:@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.

I don't understand his point either. The computed correlations are independent of the order of the hidden variables, as they must be, since they are computed as averages.

I too do not understand why Michel is talking about the order of the terms. He has some physical picture in his mind of what a line in a mathematical proof means. But after you have written down some mathematical assumptions and you look at a mathematical expression involving various functions, you are just doing some calculus and algebra: rearranging a sum of products, writing an integral of a sum as a sum of integrals.
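
For concreteness, the rearrangement in question is the standard pointwise factoring (a sketch in the A, B, p(lambda) notation used earlier in this thread): for every lambda, with all four outcomes equal to +/-1,

A(a, lambda)B(b, lambda) + A(a, lambda)B(b’, lambda) + A(a’, lambda)B(b, lambda) − A(a’, lambda)B(b’, lambda)
= A(a, lambda)[B(b, lambda) + B(b’, lambda)] + A(a’, lambda)[B(b, lambda) − B(b’, lambda)] = ±2,

and integrating against a settings-independent p(lambda) then bounds the sum of the four correlations by 2 in absolute value.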
Last edited by gill1109 on Fri Oct 22, 2021 7:32 am, edited 1 time in total.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby minkwe » Fri Oct 22, 2021 7:29 am

Justo wrote:@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.

I've explained it very clearly, I believe. It is all centred on your claim that your expression is weakly objective. I'm explaining to you that the claim does not make sense. If you would drop that claim, you could do all the basic arithmetic you like. Once you talk about "weakly objective" you are making a claim about experimental outcomes, and that requires your mathematical expressions to relate to the experimental data in a particular way that I've shown is impossible.

And don't think I'm only "now" talking about order. You can search this forum and see that my very first few posts and discussions with Gill were about this. I've been explaining this to them for six years and all this time they keep trying to convince me that Bell's inequalities are correct and true. It's about degrees of freedom and order.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Fri Oct 22, 2021 7:46 am

minkwe wrote:
Justo wrote:@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.

I've explained it very clearly, I believe. It is all centred on your claim that your expression is weakly objective. I'm explaining to you that the claim does not make sense. If you would drop that claim, you could do all the basic arithmetic you like. Once you talk about "weakly objective" you are making a claim about experimental outcomes, and that requires your mathematical expressions to relate to the experimental data in a particular way that I've shown is impossible.

And don't think I'm only "now" talking about order. You can search this forum and see that my very first few posts and discussions with Gill were about this. I've been explaining this to them for six years and all this time they keep trying to convince me that Bell's inequalities are correct and true. It's about degrees of freedom and order.


Justo explains lucidly in his paper that the whole discussion about strongly objective and weakly objective is a red herring!

The conflation of those different sides results in much confusion about the Bell theorem interpretation. When analyzing the LHV prediction, there are no questions about Hilbert spaces’ non-commuting operators or observables’ eigenvalues. The LHV prediction exclusively concerns whether the functions A(a, λ), B(b, λ), and hidden variables with probability distribution p(λ) can explain what has been experimentally found in four different series of actual experiments. There are no issues related to unperformed or incompatible experiments, joint probabilities, elements of physical reality, or metaphysical assumptions of any sort. The problem under investigation and the proposed Bell’s deterministic model are so simple and straightforward that people seem suspicious that such a stunning simplicity could have profound foundational consequences [32]


He goes on to point out that there is an important issue, namely the statistical independence between the settings and the (presumed) hidden variables, which apparently many people don't appreciate.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby minkwe » Fri Oct 22, 2021 8:27 am

gill1109 wrote:Justo explains lucidly in his paper that the whole discussion about strongly objective and weakly objective is a red herring!

Of course this is what you will say. You have never understood the difference between "Strongly Objective" and "Weakly Objective". viewtopic.php?f=6&t=49&start=100#p2512
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Fri Oct 22, 2021 11:31 am

minkwe wrote:
gill1109 wrote:Justo explains lucidly in his paper that the whole discussion about strongly objective and weakly objective is a red herring!

Of course this is what you will say. You have never understood the difference between "Strongly Objective" and "Weakly Objective". viewtopic.php?f=6&t=49&start=100#p2512


I think that I understand the difference very well. I think that neither concept is useful.

Justo emphasises that we not only need locality and realism but also statistical independence. I have also always done the same. You don't explain what is wrong with Justo's mathematics just as you have never explained what is wrong with mine. Just as I see no need for the concepts "Strongly Objective" and "Weakly Objective", you don't seem to appreciate the crucial role of statistical independence between, on the one hand, a pair of settings determined by experimenters using physical processes outside of the source, transmission lines, and detectors, and on the other hand, the physical processes going on inside source, transmission lines, and detectors.

In physics, one starts with physical assumptions. One tries to express them in mathematical terms. One then works purely within a mathematical model, looking for consequences of the assumptions, which have physically interpretable consequences. That is what Bell did. And later, experimenters showed that the consequences were violated. Conclusion: the initial physical assumptions must have been invalid.

As Justo mentioned, the line of attack chosen by Hossenfelder, Palmer, 't Hooft and others is to attack the statistical independence. Superdeterminism. Others advocate retro-causality (Jarek Duda), that also generates dependence.

Pearle's model (and later, Caroline Thompson's chaotic ball model) showed that, by selection of pairs of measurement outcomes depending on what was measured, one can also generate statistical dependence, in the subsample which is left, between hidden variables and measurement settings. Pearle's model has an unpleasant defect: the production rate of complete observations depends on the difference between the two settings. That is an observable violation of locality which any experimenter would notice. Pearle knew that already and knew that his model was actually not that interesting, because of that defect. There is a different detection-loophole model by Gisin and Gisin, invented much later, which does not have that defect. The coincidence loophole (Fine, Pascazio) allows even stronger selection effects, since it is not just whether or not both particles arrive, but the exact times they both arrive, which determine whether or not a sample point (two settings, two binary outcomes) gets into the final CHSH analysis.

Of course, another way out is to argue that quantum mechanics itself does not predict the singlet correlations because distant particles will not remain entangled with one another. Or that measurement will destroy entanglement because the non-local collapse of the wave function is obviously false. It seems that de Broglie raised that objection in the early days after Bell's paper came out. See https://cds.cern.ch/record/980330/files/CM-P00061609.pdf by Bell. Reply to Critics. 1975.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby Austin Fearnley » Fri Oct 22, 2021 2:05 pm

Hi all

Just checking in. I did a rapid lateral flow covid test this afternoon and was positive. I am 72 and have had two jabs already. My 3rd jab, the booster, was due in November. I need to take a confirmatory PCR covid test in the next few days. In England.

I wasn't going to comment on this physics thread, but while I am still here (!), I will do so.
My method uses retrocausality of the antiparticles. I think superdeterminism is wide of the truth.

As I accept the maths of Bell's theorem, therefore(?) the Bell complexities are irrelevant to me. And, in accepting Bell, I only use the cases where there are two fixed detector angles for all particle pairs. That is why Richard's definition of a hidden variable was not much use to me.

In my simulations there are only two detector angles for the whole experiment. This is like tossing a coin or dice. Every toss has a random outcome, but you will not find one million heads in one million tosses of a fair coin. The randomness exists in an individual toss but it is not there in a statistical summation of toss outcomes. (Especially when my detector angles are fixed! So there is only one toss per detector angle.)

Not sure if this helps at all...
Austin Fearnley
 

Re: The CHSH inequality as a weakly objective result

Postby minkwe » Fri Oct 22, 2021 7:26 pm

gill1109 wrote:
minkwe wrote:
gill1109 wrote:Justo explains lucidly in his paper that the whole discussion about strongly objective and weakly objective is a red herring!

Of course this is what you will say. You have never understood the difference between "Strongly Objective" and "Weakly Objective". viewtopic.php?f=6&t=49&start=100#p2512


I think that I understand the difference very well. I think that neither concept is useful.

Justo emphasises that we not only need locality and realism but also statistical independence. I have also always done the same. You don't explain what is wrong with Justo's mathematics just as you have never explained what is wrong with mine. Just as I see no need for the concepts "Strongly Objective" and "Weakly Objective", you don't seem to appreciate the crucial role of statistical independence between, on the one hand, a pair of settings determined by experimenters using physical processes outside of the source, transmission lines, and detectors, and on the other hand, the physical processes going on inside source, transmission lines, and detectors.

In physics, one starts with physical assumptions. One tries to express them in mathematical terms. One then works purely within a mathematical model, looking for consequences of the assumptions, which have physically interpretable consequences. That is what Bell did. And later, experimenters showed that the consequences were violated. Conclusion: the initial physical assumptions must have been invalid.

As Justo mentioned, the line of attack chosen by Hossenfelder, Palmer, 't Hooft and others is to attack the statistical independence. Superdeterminism. Others advocate retro-causality (Jarek Duda), that also generates dependence.

Pearle's model (and later, Caroline Thompson's chaotic ball model) showed that, by selection of pairs of measurement outcomes depending on what was measured, one can also generate statistical dependence, in the subsample which is left, between hidden variables and measurement settings. Pearle's model has an unpleasant defect: the production rate of complete observations depends on the difference between the two settings. That is an observable violation of locality which any experimenter would notice. Pearle knew that already and knew that his model was actually not that interesting, because of that defect. There is a different detection-loophole model by Gisin and Gisin, invented much later, which does not have that defect. The coincidence loophole (Fine, Pascazio) allows even stronger selection effects, since it is not just whether or not both particles arrive, but the exact times they both arrive, which determine whether or not a sample point (two settings, two binary outcomes) gets into the final CHSH analysis.

Of course, another way out is to argue that quantum mechanics itself does not predict the singlet correlations because distant particles will not remain entangled with one another. Or that measurement will destroy entanglement because the non-local collapse of the wave function is obviously false. It seems that de Broglie raised that objection in the early days after Bell's paper came out. See https://cds.cern.ch/record/980330/files/CM-P00061609.pdf by Bell. Reply to Critics. 1975.

This is the umpteenth time that you post this irrelevant history "lesson". You don't think the distinction between weakly objective and strongly objective is important because you don't understand what it means. I remember once I explained to you that to measure the correlation between the heights of married couples you have to jointly measure the heights per couple. But then you claimed that you can get the same correlation by measuring the heights of the men only in half the couples and then the heights of the women in the remaining half separately and comparing the averages :shock:. If you understood the difference between weakly objective and strongly objective, you would not have made such a daft claim.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: The CHSH inequality as a weakly objective result

Postby FrediFizzx » Fri Oct 22, 2021 7:44 pm

minkwe wrote: ... This is the umpteenth time that you post this irrelevant history "lesson". ...

History is all that Gill has left now that we shot down Gill's theory to a million little pieces that will never come together again. :D
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: The CHSH inequality as a weakly objective result

Postby gill1109 » Fri Oct 22, 2021 9:37 pm

minkwe wrote:This is the umpteenth time that you post this irrelevant history "lesson". You don't think the distinction between weakly objective and strongly objective is important because you don't understand what it means. I remember once I explained to you that to measure the correlation between the heights of married couples you have to jointly measure the heights per couple. But then you claimed that you can get the same correlation by measuring the heights of the men only in half the couples and then the heights of the women in the remaining half separately and comparing the averages :shock:. If you understood the difference between weakly objective and strongly objective, you would not have made such a daft claim.


Dear Michel

You think I don’t know what the difference means but you’re wrong. I think it is irrelevant. And Guillaume Adenier later changed his mind!

I’m glad you recall that story. But you misreport it. It was not about the correlation, but about the average difference. You can sample husbands. You can sample wives. The difference between the averages estimates the mean difference between heights in a husband and wife couple.

Provided, of course, that the populations of husbands and wives are in one-to-one correspondence and the samples are random samples.
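
To illustrate (a sketch of my own with made-up numbers; the point is just that E[X − Y] = E[X] − E[Y], whereas a correlation genuinely needs the pairing):

Code: Select all
import numpy

numpy.random.seed(0)
n = 100000

# Made-up paired heights (cm) for n couples, with some dependence between partners
husbands = numpy.random.normal(178, 7, n)
wives = 165 + 0.3 * (husbands - 178) + numpy.random.normal(0, 6, n)

# Mean difference within couples, versus the difference of means from two disjoint, unpaired samples
print(numpy.mean(husbands - wives))
print(numpy.mean(husbands[:n//2]) - numpy.mean(wives[n//2:]))

# The correlation, by contrast, cannot be recovered without the pairing
print(numpy.corrcoef(husbands, wives)[0, 1])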

I think that you think the story is daft because you don’t understand statistics and probability very well.

Tell me what you think is wrong with my proof, or with Justo’s proof.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The CHSH inequality as a weakly objective result

Postby Justo » Sat Oct 23, 2021 4:22 am

minkwe wrote:
Justo wrote:@minkwe is now talking about the order of the hidden variables. I can't imagine how the order could have any relevant meaning here, since arithmetic is commutative. Obviously, @minkwe and I are talking about different things.

I've explained it very clearly, I believe. It is all centred on your claim that your expression is weakly objective. I'm explaining to you that the claim does not make sense. If you would drop that claim, you could do all the basic arithmetic you like. Once you talk about "weakly objective" you are making a claim about experimental outcomes, and that requires your mathematical expressions to relate to the experimental data in a particular way that I've shown is impossible.

How the mathematical expressions are related to the experimental outcomes is what the paper is all about. You just say my expression is not weakly objective, but you don't say why or where my reasoning fails. The reason why it is weakly objective is quite simple and elementary: there can only be a "finite" number of hidden variables (HV), therefore they have no option but to repeat their values in different experiments.

Let me explain that in even simpler terms: if you toss two different coins many times in two different series of experiments, their results will have to repeat, because there are only two different results. The same happens with the hidden variables: they have no option but to repeat their values in different experiments (different settings). Give a rational counterargument to reject that, or prove it wrong.
First, the HV of the same pair of particles is the same. You are supposed to be capable of identifying corresponding events; otherwise, the inequality cannot be derived. This should not be a problem because we are discussing a theoretical derivation. When you take the product AB you "theoretically" assume the outcomes originated in the same event.
Secondly, the order cannot be a problem. If your experiment gives 2 and then 1, the sum is 2 + 1, and you can write it in the "incorrect" order 1 + 2 without changing the result. If you want to explain my mistake, you should give the equation number (they are all numbered) and say exactly where one step fails.
You say that C_j is incorrect, but as shown in the paper it is the result of clear mathematical steps. Where in those steps do the values depart from the experimental results? Or where are they mathematically incorrect?
You give your own interpretation of C_j, but that interpretation does not correspond to what I show in the paper, or at least that is not evident. Other people have also expressed their puzzlement with what you are saying.

minkwe wrote:And don't think I'm only "now" talking about order. You can search this forum and see that my very first few posts and discussions with Gill were about this. I've been explaining this to them for six years and all this time they keep trying to convince me that Bell's inequalities are correct and true. It's about degrees of freedom and order.

This shows that you do not understand my derivation. I agree with you and Adenier (not with Gill) that "strongly objective" is meaningless. So, whatever it is that you explained to Gill, I would probably agree with you, but that obviously cannot explain my mistake, because my argument is different.
Justo
 
Posts: 83
Joined: Fri Aug 20, 2021 8:20 am
