Justo wrote: @minkwe you did not understand what I said. There are no permutations, there are only 16 different hidden variables λ. That means the λ realized in any given trial can only be equal to one of the 16 possibilities, and the same is true for the λ realized in every other trial. So, if you perform 17 trials, at least one must be repeated, and so on.
I understood you. Rather, you have not understood me. The example I gave above has 6 equivalence classes of lambdas, not 16; it can easily be expanded to 16. Λ₁ is not a single value: it is an ordered set of the 6 lambdas realized in a randomized experiment. Λ₂, Λ₃ and Λ₄ are not individual lambdas; they are the three other ordered sets of lambdas realized in the three other randomized experiments which, together with Λ₁, constitute a weakly objective Bell test experiment. The order of the individual lambdas in each of those experiments is obviously different.
Justo wrote: When you perform a statistically significant number of trials, all hidden variables will appear with the same relative frequency if statistical independence is true. This is why ρ(λ) is a common factor, and you are left with an expression like A(a,λ)B(b,λ) + A(a,λ)B(b′,λ) + A(a′,λ)B(b,λ) − A(a′,λ)B(b′,λ).
Yes. The purpose of performing a statistically significant number of trials is to ensure fair-sampling, such that the relative frequencies of the different lambdas would be the same in each of the weakly objective sets. In my example above, I have eliminated this need by ensuring that Λ₁, Λ₂, Λ₃ and Λ₄ all have the same relative frequencies of all the individual lambdas. The only difference is that, since each one is a different random experiment, the order of the lambdas is different.
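For concreteness, here is a minimal sketch of that setup, assuming (purely for illustration) that the lambdas take 6 values labelled 0–5 and that each value occurs equally often in every set; the labels and counts are mine, not part of the argument above:

```python
import random
from collections import Counter

# Illustrative only: 6 equivalence classes of lambda, each appearing
# the same number of times in every set.
lambdas = list(range(6)) * 10          # 60 individual lambdas, 10 of each class

# Four independent randomized experiments: the same multiset of lambdas,
# realized in a different order each time.
Lambda = [random.sample(lambdas, len(lambdas)) for _ in range(4)]

# Identical relative frequencies in all four sets ...
assert Counter(Lambda[0]) == Counter(Lambda[1]) == Counter(Lambda[2]) == Counter(Lambda[3])

# ... but (almost surely) a different ordering in each.
print([L[:10] for L in Lambda])
```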
The hidden assumption in your derivation comes from the fact that it implicitly assumes that Λ is an ordered set of lambdas. To see this, let us take the full equivalence set of lambdas and place them on a spreadsheet Λ = (λ₁, λ₂, ..., λ_N). Now if we apply the functions A(·,λ) and B(·,λ) to each of the rows of Λ for the settings (a,b), (a,b′), (a′,b) and (a′,b′), we will obtain the spreadsheets of corresponding outcome pairs S(a,b), S(a,b′), S(a′,b) and S(a′,b′). You will immediately notice that the shared columns in those spreadsheets, for example the A(a) column of S(a,b) and of S(a,b′), will be identical, and I don't mean just that they contain the same distribution of numbers: they will be identical in the ordering of the numbers too! In fact, you can place all those spreadsheets side by side, add a column of the corresponding lambdas at the end, and any mathematical operation you carry out with your functions you can also carry out with the spreadsheets. In particular, we can factor out the A(a) column from S(a,b) and S(a,b′), and the A(a′) column from S(a′,b) and S(a′,b′). The derivation of the inequality done this way boils down to merging the duplicate columns to end up with a 5×N spreadsheet with columns λ, A(a), A(a′), B(b), B(b′). This is the meaning of your equation (29), and it is obvious how the upper bound of 2 follows from this.
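A short sketch of why the bound follows from that merged 5×N spreadsheet, assuming only that A and B are deterministic ±1-valued functions of λ; the particular functions and setting labels below are arbitrary placeholders of my own:

```python
import random

def A(setting, lam):
    # Placeholder deterministic +/-1 outcome function of the hidden variable.
    return 1 if (lam * 3 + setting) % 2 == 0 else -1

def B(setting, lam):
    # Another placeholder +/-1 outcome function.
    return 1 if (lam * 5 + setting) % 3 == 0 else -1

a, a1, b, b1 = 0, 1, 2, 3                              # arbitrary setting labels
lambdas = [random.randrange(6) for _ in range(1000)]   # one column of lambdas

# 5xN spreadsheet: lambda, A(a), A(a'), B(b), B(b') -- one row per lambda.
rows = [(lam, A(a, lam), A(a1, lam), B(b, lam), B(b1, lam)) for lam in lambdas]

# Row by row, A(a)B(b) + A(a)B(b') + A(a')B(b) - A(a')B(b') factors as
# A(a)(B(b)+B(b')) + A(a')(B(b)-B(b')), which is always +2 or -2.
for _, Aa, Aa1, Bb, Bb1 in rows:
    assert abs(Aa * Bb + Aa * Bb1 + Aa1 * Bb - Aa1 * Bb1) == 2

# Hence the column-wise averages obey |E(a,b) + E(a,b') + E(a',b) - E(a',b')| <= 2.
N = len(rows)
S = sum(Aa * Bb + Aa * Bb1 + Aa1 * Bb - Aa1 * Bb1 for _, Aa, Aa1, Bb, Bb1 in rows) / N
print(abs(S) <= 2)   # True
```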
You claim that the four-term expression E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′) is weakly objective. But if you apply the functions A(·,λ) and B(·,λ) to each of the weakly objective spreadsheets of lambdas Λ₁, Λ₂, Λ₃ and Λ₄ for the settings (a,b), (a,b′), (a′,b) and (a′,b′), you will obtain the spreadsheets of corresponding outcome pairs S₁(a,b), S₂(a,b′), S₃(a′,b) and S₄(a′,b′).
Note that each of S₁(a,b), S₂(a,b′), S₃(a′,b) and S₄(a′,b′) will contain exactly the same elements as the corresponding S(a,b), S(a,b′), S(a′,b) and S(a′,b′), but the order will be different. Also, the shared A and B columns will not match: they will have the same distribution of values, but the order will be different. As explained above, for the four-term expression to be weakly objective, any mathematical operation you do with those functions must also be doable with the above outcome pairs. But we can't do that directly, because each of them originates from a different sequence of lambdas, even if the distribution of lambdas is identical. Before we can do the required mathematical operation of factoring, we must rearrange the spreadsheets so that the A(a) columns match, the B(b) columns match, and so on. This is why you need the permutations. The proof I provided in my last two posts shows that it is not possible to do the permutations required to convert S₁(a,b), S₂(a,b′), S₃(a′,b) and S₄(a′,b′) into S(a,b), S(a,b′), S(a′,b) and S(a′,b′). Therefore your claim that the expression is weakly objective is false. At most 3 out of the 4 terms can be weakly objective.
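A sketch of the mismatch just described, reusing the same kind of placeholder ±1 outcome function as above; the function and setting labels are illustrative assumptions of mine, not anything from the actual model under discussion:

```python
import random
from collections import Counter

def A(setting, lam):
    # Placeholder deterministic +/-1 outcome function, as in the sketch above.
    return 1 if (lam * 3 + setting) % 2 == 0 else -1

a = 0                               # arbitrary setting label
lambdas = list(range(6)) * 10       # same multiset of lambdas throughout

# Strongly objective case: one spreadsheet of lambdas, so the A(a) column
# produced for the (a,b) run and for the (a,b') run is literally the same column.
Lam = random.sample(lambdas, len(lambdas))
col_ab  = [A(a, lam) for lam in Lam]
col_ab1 = [A(a, lam) for lam in Lam]
assert col_ab == col_ab1                      # identical, element by element

# Weakly objective case: the (a,b) and (a,b') runs use independently
# randomized lambda sequences with the same relative frequencies.
Lam1 = random.sample(lambdas, len(lambdas))
Lam2 = random.sample(lambdas, len(lambdas))
w_ab  = [A(a, lam) for lam in Lam1]
w_ab1 = [A(a, lam) for lam in Lam2]
assert Counter(w_ab) == Counter(w_ab1)        # same distribution of +/-1 values
print(w_ab == w_ab1)                          # almost surely False: order differs

# Row-wise factoring of A(a) out of the two runs therefore requires first
# permuting the rows of one spreadsheet so that the lambda columns line up.
```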
This has nothing to do with statistical dependence. We already have exactly the same distribution of lambdas in Λ₁, Λ₂, Λ₃ and Λ₄ as in Λ.