51 posts
• Page **2** of **3** • 1, **2**, 3

Hi Fred

I haven't read the paper so I won't comment on any aspect of that.

I can say what I think is meant by marginal and joint in Bell experiments, as the OP doesn't have the time or patience to explain and I haven't seen you or Gordon state clearly enough what is meant by the terms. In very basic statistics, where data are somehow paired, the data can be set out in an n x n array of different categories. In a Bell test situation, the categories could be A = +1, 0 or -1, and ditto for B. That gives n squared different pairs of categories: A=1&B=1, A=0&B=1, etc. When an experiment is run there can be N_ij data points for any pair of categories, eg 100 outcomes of A=1&B=1. In this case there are 9 categories of joint data in the body of the table, but only three 'categories' in the row totals and another three in the column totals. The row and column totals are the marginal data. In general, if you need the n squared numbers for a joint calculation, for example a correlation coefficient, you cannot meet that requirement by working with the 2n marginal numbers used for marginal calculations.
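For concreteness, the table described above can be sketched in a few lines of code; the counts here are made up purely for illustration:

```python
# Hypothetical 3 x 3 contingency table for the categories +1, 0, -1.
# Rows are Alice's outcomes, columns are Bob's; entries are the counts N_ij.
joint = [[100, 20,  5],   # A=+1 against B=+1, B=0, B=-1
         [ 15, 80, 25],   # A=0
         [ 10, 30, 90]]   # A=-1

row_marginals = [sum(row) for row in joint]        # Alice's three marginal totals
col_marginals = [sum(col) for col in zip(*joint)]  # Bob's three marginal totals

print(row_marginals)  # [125, 120, 130]
print(col_marginals)  # [125, 130, 120]
```

The joint counts determine the marginals, but not the other way round, which is exactly the asymmetry described above.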

In a Bell test situation using outcomes A = +1 and -1 (ditto for B), n = 2. In this case n squared and 2n are both equal to four, which is perhaps unfortunate, since it makes the numbers of marginal and joint data categories equal.

Consider an experiment which lets Alice and Bob collect their n=2 data completely independently, ie a situation where they are told, only once, "start collecting data now" .... and "finish now". The data they collect are marginal data, where Alice may have, say, 101 outcomes at +1 and 99 at -1, and something similar for Bob.

When an experiment is wanted for an estimation of a correlation, it needs to use joint data, so the experiment is arranged so that Alice and Bob collect data under a united strategy. For example, for each recording of a datum pair the overseer says simultaneously to Alice and Bob "start recording now" and "stop now". In this case, if Alice records 1 and Bob records -1, that outcome can be assigned to the joint category A=1&B=-1 and the counter for that category incremented by 1. So in a tightly controlled test, timed particle by particle [cf the double-slit experiment done one electron at a time], where the pairs of data are robustly assignable as pairs, the data can be recorded as joint data in the body of the table, and the data for a correlation are available.

If there is a workable rule which partitions a joint density into marginal densities, for n=2, in the special case of Bell's tests, then so be it. That presumably works back from a predicted correlation coefficient based on the joint density. I personally don't see how there can be a general rule for partitioning joint densities into marginal densities for any n without assuming the joint distribution is already known. Say there are six pairs of data for the n=2 categories of +1 and -1. The 2x2 table has a grand total of 6. Say the marginal counts are all 3 per category. Then for the joint data there could be 3s on the diagonal and zeros off the diagonal; or 2s on the diagonal and 1s off; or 1s on the diagonal and 2s off; or 0s on the diagonal and 3s off. These four instances show a steady decrease in correlation. (With these fixed margins, the diagonal count determines the whole table, so these four instances are in fact the only ones possible.)
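A short script confirms the four correlations; it is only a sketch, assuming the +/-1 coding above and the standard Pearson formula applied to the cell counts:

```python
# The four 2x2 tables with all margins fixed at 3 and grand total 6.
# Outcomes coded +1/-1; arguments are the counts (N++, N+-, N-+, N--).
def corr_from_counts(n_pp, n_pm, n_mp, n_mm):
    total = n_pp + n_pm + n_mp + n_mm
    ea  = (n_pp + n_pm - n_mp - n_mm) / total   # E[A]
    eb  = (n_pp - n_pm + n_mp - n_mm) / total   # E[B]
    eab = (n_pp - n_pm - n_mp + n_mm) / total   # E[AB]
    va, vb = 1 - ea**2, 1 - eb**2               # variances for +/-1 outcomes
    return (eab - ea * eb) / (va * vb) ** 0.5

for d in (3, 2, 1, 0):                          # count in each diagonal cell
    print(d, corr_from_counts(d, 3 - d, 3 - d, d))
```

The correlations come out as 1, 1/3, -1/3 and -1: identical margins, four different correlations.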


- Ben6993
**Posts:**287**Joined:**Sun Feb 09, 2014 12:53 pm

Dear Ben: Thanks for your comments. For my part, I placed the terms marginal, joint and conditional beside the related equations so that there could be no doubt as to my meanings. You are correct in saying that arrays can be arranged with the marginal probabilities in the "margin". But I suspect that marginal -- as in marginal probability -- refers to the fact that all other events are marginalised under the ordinary meaning of that word: ie, all other events are treated as insignificant or peripheral when the marginal probability of a particular event is considered.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

FrediFizzx wrote: … … There is not all that much significance to the term "marginal". It is just a probability term as opposed to "joint" probability.

Fred, I take marginal to be an interesting word in PT: see my reply to Ben (above). Further, Don's use of the same term in "marginal (separated) measurements" is redundant. His trademark phrase just means "separated measurements" -- as in "Alice and Bob being separated when they take measurements" -- of which, more soon.

PS: As a brief response to your earlier comment on the subject (to avoid any misunderstandings): Don's phrase has no relevance to any study of Bell's theorem.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

Don wrote:… … Regarding the quantum prediction, -a.b is the quantum joint prediction. It cannot be recovered in an experiment with separated measurements, and reduced density matrices must be used instead of the joint distribution. Refer to http://arxiv.org/abs/1309.1153 and the Conclusion of http://arxiv.org/abs/1409.5158 for a full account of this important distinction. ... ...

Question: Figure 1 in http://arxiv.org/pdf/1309.1153.pdf has an "area = (π/2) cos^2 θ". Could someone please explain the basis of this area? I note that it equals zero when θ = π/2. Thanks.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

Hi Gordon

I would suspect that a statistician would only use 'marginal' to mean the sort of data found in the margins of a table. I did not cover 'conditional' in my previous post, but in basic statistics that is the joint data in the body of the table, restricted to either one row or one column. So joint, conditional and marginal have specific and different meanings. The data in the table can get there either as laboratory observations or as theoretical calculations based on as-esoteric-as-you-like formulae. Marginal data in isolation are useless when correlations are required. Hence my example, which had a table with fixed marginal data for which I gave four different ways of specifying the joint data, where each way gave a different correlation. Marginal data do of course provide the two variances which are needed to scale the correlation coefficient.

Anyway, marginal distributions cannot give you the correlation of the joint data. Further, I have just now had a browse of Don's paper, and the word 'marginal' is not even in the paper!

.........

AFAIK QM can predict violation of Bell's inequality. I can give a link, if anyone is interested, to a Susskind online lecture where a (sort of) limit of 0.15 is exceeded by a value of 0.25. However, that depends on counterfactual data, which some would find unacceptable.

My own view is that locality is not broken. My preon model is a local, non-mathematical hidden-variable model, though it says nothing about Bell's inequalities. In my model, particles have a specific and continuing (at least between particle interactions) chiral handedness, which in my view reflects a hidden variable of, say, the electron. So in my model a pair of electrons will have one chiral left-handed and one chiral right-handed, and they each stay that way until measured. Chiral RH and LH electrons have different physical structures; they are really different particles. So this is chirality, not helicity.

In QM, single handedness implies a massless and maybe unphysical particle: QM requires superpositions of LH and RH states to enable physicality. So how can an electron oscillate between LH and RH states to make it massive and physical, but without screwing up the +1 or -1 observations and rendering its measurement in a Bell test completely random? The Higgs is implicated in giving mass to particles, and the Higgs is scalar with no spin value, but with weak isospin pure states of 0.5 or -0.5. As a field effect, the LH electron can oscillate in weak isospin in a Higgs field, and that IMO gives the LH chiral electron mass without interfering with its LH chiral spin value (-0.5), which is its hidden variable at its next measurement (at a particle interaction). In this way, the LH electron gets mass and keeps its chirality constant. It can only change chirality at a particle interaction, to a RH structure.


- Ben6993
**Posts:**287**Joined:**Sun Feb 09, 2014 12:53 pm

Ben6993 wrote: Further I have just now had a browse of Don's paper and the word 'marginal' is not even in the paper!

You're either an idiot or a liar, Ben. Variations of "marginal" appear 14 times in the paper we are talking about:

http://arxiv.org/ftp/arxiv/papers/1309/1309.1153.pdf

@Fred Diether

Please ban me immediately. I wasn't aware of how stupid you people are here.

- Don
**Posts:**8**Joined:**Tue Dec 15, 2015 12:45 pm

Well, that explains it as I was browsing this paper:


My new model (http://arxiv.org/abs/1507.06231) is a killer for the whole Bell Test program because ....

I haven't agreed with the banning of anyone so far on this site as it is counter-productive. But they are not my decisions. You are so rude that I won't bother browsing any more of your words, banned or not.

- Ben6993
**Posts:**287**Joined:**Sun Feb 09, 2014 12:53 pm

So you're too stupid to even follow the thread to know what paper we are talking about. Too narcissistic to consider an apology for demeaning me with nonsense. Stupid enough to show that to the world. And stupid enough to think "browsing" scientific works qualifies you to properly criticize them.

C'mon, Fred, get me out of here!


- Don
**Posts:**8**Joined:**Tue Dec 15, 2015 12:45 pm

Don wrote:So you're too stupid to even follow the thread to know what paper we are talking about. Too narcissistic to consider an apology for demeaning me with nonsense. Stupid enough to show that to the world. And stupid enough to think "browsing" scientific works qualifies you to properly criticize them.

C'mon, Fred, get me out of here!

Cool it Don, you don't have any enemies here. Taking criticism gracefully is a virtue. You are not right about everything. The day you think you are is the day you stop learning.

- minkwe
**Posts:**1019**Joined:**Sat Feb 08, 2014 10:22 am

Hi Don,

Perhaps it is just a matter of ignorance and not stupidity. If that is the case then please educate us. If you don't wish to participate, then just stop reading the forum and responding. Thanks.


- FrediFizzx
- Independent Physics Researcher
**Posts:**1559**Joined:**Tue Mar 19, 2013 7:12 pm**Location:**N. California, USA

Don wrote:So you're too stupid to even follow the thread to know what paper we are talking about. Too narcissistic to consider an apology for demeaning me with nonsense. Stupid enough to show that to the world. And stupid enough to think "browsing" scientific works qualifies you to properly criticize them.

C'mon, Fred, get me out of here!

Don, despite your extremes of rudeness and sensitivity to criticism, I hope you'll stay engaged here. Those of us who support "Einstein's locality legacy" need all the help we can get, and I see you as being on the right side of history in that regard. Moreover, too few of us analyse the raw experimental data, so it's good that you do just that. In that regard you follow in the footsteps of the late Caroline Thompson -- though she too had similar sensitivities, and believed that Bellian inequalities would never be breached by any experiment.

So I suggest you apologise to Ben and answer my question about that "area designation" -- for I too might look to be stupid on this point from above:

Question: Figure 1 in http://arxiv.org/pdf/1309.1153.pdf has an "area = (π/2) cos^2 θ". Could someone please explain the basis of this area? I note that it equals zero when θ = π/2. Thanks.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

I don't get what all the fuss is about in relation to the word "marginal".

We are interested in measuring the joint conditional probability P(AB|ab). We can do it two ways: we measure AB jointly, or we measure A and B separately. For example, tossing two coins (a,b) repeatedly and measuring jointly means that for each toss you evaluate whether both the A and B events are true. You only need one column on your spreadsheet, titled "AB", and the rows will be "true", "false", etc. At the end you count the number of "true" rows and divide by the total number of tosses to get your estimate of P(AB|ab) directly. Measuring separately means you toss coin "a" alone, repeatedly, to create a spreadsheet with one column labeled "A". Then, in a separate experiment, you toss coin "b" to create a spreadsheet with one column labeled "B", and then you try to calculate P(AB|ab) from those two separate experiments.

This is problematic because the separate experiment can only give you P(A|a) and P(B|b), but according to the chain rule of probability theory:

P(AB|ab) = P(A|ab)P(B|Aab) = P(B|ab)P(A|Bab)

and those terms do not appear. You might make the assumption that P(A|Bab) = P(A|ab) = P(A|a), and P(B|Aab) = P(B|ab) = P(B|b). With those additional assumptions, you would then be able to calculate P(AB|ab) = P(A|a)P(B|b).

Although in the usual usage, the marginal probability is P(A) while P(A|ab), P(A|a) are considered conditional, it is correct to say P(A|a) is marginal with respect to the B outcome and settings, compared to P(A|ab) and P(A|Bab).

As far as QM is concerned P(A|a) = P(B|b) = 1/2

But P(AB|ab) = 0.5 cos^2(θ) ≠ P(A|a)P(B|b)

Therefore the QM joint prediction can never be measured separately.
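The distinction can be illustrated with simulated coins rather than QM; the 90% agreement rate and the random seed here are arbitrary choices for the sketch:

```python
# Two correlated true/false outcomes: B agrees with A 90% of the time.
# Jointly measured data estimate P(AB|ab) directly; the product of the two
# separately measured marginals P(A|a)P(B|b) gives a different number.
import random
random.seed(0)

N = 100_000
pairs = []
for _ in range(N):
    a = random.random() < 0.5
    b = a if random.random() < 0.9 else not a
    pairs.append((a, b))

p_ab = sum(a and b for a, b in pairs) / N   # joint estimate of P(AB|ab)
p_a  = sum(a for a, _ in pairs) / N         # marginal P(A|a)
p_b  = sum(b for _, b in pairs) / N         # marginal P(B|b)

print(p_ab)        # close to 0.45
print(p_a * p_b)   # close to 0.25 -- the independence assumption fails here
```

Only when A and B really are independent do the two numbers coincide.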


- minkwe
**Posts:**1019**Joined:**Sat Feb 08, 2014 10:22 am

minkwe wrote:Therefore the QM joint prediction can never be measured separately.

Correct. However, in a typical EPR type experiment they attempt to always measure in pairs so not separate. Of course we can see a grey area creeps in when trying to always measure pairs. Is this what Don is trying to exploit?

- FrediFizzx
- Independent Physics Researcher
**Posts:**1559**Joined:**Tue Mar 19, 2013 7:12 pm**Location:**N. California, USA

As concerns experiments

When Don says " The measurements taken in an EPRB experiment, however, are separated, so the measurement at A proceeds in ignorance of the analyzer setting at B and the outcomes at B, and vice versa."

I would say what the experimenter knows is irrelevant, it all depends in the end on what the data analyst "DOES". Note that in the coin toss example above, each toss could be performed "separately" by two different people Alice and Bob. They could each record down the time of each toss together with the outcomes on their spreadsheets. Their experiments appear to be separate, until during the data analysis, the analyst uses the recorded times to match the rows up and uses just the matched rows in order to calculate P(AB|ab). The P(AB|ab) calculated as such is joint not separated, and this is the case in all EPRB experiments. It might not even be times, it might be other criteria or even post-processing based on so-called "event-ready" signals.

Let x be the common information such as "event-ready" or coincidence time etc.

What is being calculated at the end is P(AB|abx).

The experimenters do the experiment separately. The data analysts analyse the data jointly, making what the experimenters did irrelevant. The data analysis is inconsistent with the original assumption that P(A|Bab) = P(A|ab) = P(A|a) and P(B|Aab) = P(B|ab) = P(B|b).

The QM joint prediction can only be recovered in separated measurements, if there is a "glue" (x) to bind the results during data analysis and the use of this glue violates the P(A|Bab) = P(A|ab) = P(A|a), and P(B|Aab) = P(B|ab) = P(B|b) assumption.
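A toy sketch of that matching step follows; the timestamps and coincidence window are invented purely for illustration:

```python
# Alice and Bob record (time, outcome) rows separately; the analyst later
# matches rows whose timestamps fall within a coincidence window -- the "glue".
alice = [(0.0, +1), (1.1, -1), (2.0, +1), (3.2, -1)]   # made-up data
bob   = [(0.1, -1), (1.0, +1), (2.1, +1), (3.1, +1)]

window = 0.2   # hypothetical coincidence window
matched = [(a, b) for ta, a in alice for tb, b in bob if abs(ta - tb) <= window]

print(matched)   # the correlation is then computed on these matched pairs only
```

However the raw rows were recorded, any quantity computed from `matched` is a joint quantity.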


- minkwe
**Posts:**1019**Joined:**Sat Feb 08, 2014 10:22 am

minkwe wrote:I don't get what all the fuss is about in relation to the word "marginal".

We are interested in measuring the joint conditional probability P(AB|ab). We can do it two ways: we measure AB jointly, or we measure A and B separately. For example, tossing two coins (a,b) repeatedly and measuring jointly means that for each toss you evaluate whether both the A and B events are true. You only need one column on your spreadsheet, titled "AB", and the rows will be "true", "false", etc. At the end you count the number of "true" rows and divide by the total number of tosses to get your estimate of P(AB|ab) directly. Measuring separately means you toss coin "a" alone, repeatedly, to create a spreadsheet with one column labeled "A". Then, in a separate experiment, you toss coin "b" to create a spreadsheet with one column labeled "B", and then you try to calculate P(AB|ab) from those two separate experiments.

This is problematic because the separate experiment can only give you P(A|a) and P(B|b), but according to the chain rule of probability theory:

P(AB|ab) = P(A|ab)P(B|Aab) = P(B|ab)P(A|Bab)

and those terms do not appear. You might make the assumption that P(A|Bab) = P(A|ab) = P(A|a), and P(B|Aab) = P(B|ab) = P(B|b). With those additional assumptions, you would then be able to calculate P(AB|ab) = P(A|a)P(B|b).

Although in the usual usage, the marginal probability is P(A) while P(A|ab), P(A|a) are considered conditional, it is correct to say P(A|a) is marginal with respect to the B outcome and settings, compared to P(A|ab) and P(A|Bab).

As far as QM is concerned P(A|a) = P(B|b) = 1/2

But P(AB|ab) = 0.5 cos^2(θ) ≠ P(A|a)P(B|b)

Therefore the QM joint prediction can never be measured separately.

1. As I see the situation, part of the fuss re marginal arises from Don's use of that term in his trademark phrase "marginal (separated) measurements". In my view marginal is here redundant. So Don's trademark phrase just means "separated measurements".

2. So NOW we need to recognise that "separated measurements" can indeed breach any Bellian inequality: FOR they certainly do!

3. Those "separated measurements", when properly consolidated via proper matching, yield the following pairs: A+B+, A+B-, A-B+, A-B-. In the following beautiful numbers: N(A+B+), N(A+B-), N(A-B+), N(A-B-)! THEN, for any Bellian inequality, it's all down-hill from there.

.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

Gordon Watson wrote:3. Those "separated measurements", when properly consolidated via proper matching, yield the following pairs: A+B+, A+B-, A-B+, A-B-. In the following beautiful numbers: N(A+B+), N(A+B-), N(A-B+), N(A-B-)! THEN, for any Bellian inequality, it's all down-hill from there.

.

The highlighted phrase -- "properly consolidated via proper matching" -- is the glue that joins them. Any calculation which uses terms like N(A+B+), N(A+B-), N(A-B+), N(A-B-) cannot be a separated measurement, for those are joint terms. It does not matter how the measurements were actually made. They've been glued already prior to that point. Separated terms should be

N(A+), N(B+), N(A-), N(B-)

I always give the example of the correlation between the heights of husbands and their wives, C(HW). A joint measurement means you record both the husband's and wife's heights on the same row and use that to jointly calculate C(HW). Separated means you measure all the husbands only in experiment 1, then measure all the wives only in experiment 2. The only way to recover the joint correlation would be to stitch them together using common information. For example, if each couple had an index and you recorded the index alongside the husband's or wife's height, you could later use the indices to do proper matching. If you do this, the experiment becomes a joint experiment, not a separated one. This is the case in all Bell test experiments.

- minkwe
**Posts:**1019**Joined:**Sat Feb 08, 2014 10:22 am

minkwe wrote:Gordon Watson wrote:3. Those "separated measurements", when properly consolidated via proper matching, yield the following pairs: A+B+, A+B-, A-B+, A-B-. In the following beautiful numbers: N(A+B+), N(A+B-), N(A-B+), N(A-B-)! THEN, for any Bellian inequality, it's all down-hill from there.

.

The bold text is the glue that joins them. Any calculation which uses terms like N(A+B+), N(A+B-), N(A-B+), N(A-B-) cannot be a separated measurement for those are joint terms. It does not matter how the measurements were actually made. They've been glued already prior to that point. Separated terms should be

N(A+) N(B+), N(A-), N(B-)

Mate! ??? Please slow down, you are confusing an already confused situation! I agree re the glue, but please note this next (from you):

Of course this is true: "Any calculation which uses terms like N(A+B+), N(A+B-), N(A-B+), N(A-B-) cannot be a separated measurement, for those are joint terms."

WHY: Because it's a calculation, NOT a measurement! A totally valid and inarguable calculation, familiar since high school, and no loophole; not for Don, nor anyone else!

.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

A note here: for most EPR experiments the pairing is always done in the analysis, after the measurements are done. I guess one could be a little bit pedantic about that, saying that the time stamps took care of the pairing during the measurements. Anyway, the experiments do attempt to measure in pairs, I believe.

- FrediFizzx
- Independent Physics Researcher
**Posts:**1559**Joined:**Tue Mar 19, 2013 7:12 pm**Location:**N. California, USA

minkwe wrote: ...

I always give the example of the correlation between the heights of husbands and their wives, C(HW). A joint measurement means you record both the husband's and wife's heights on the same row and use that to jointly calculate C(HW). Separated means you measure all the husbands only in experiment 1, then measure all the wives only in experiment 2. The only way to recover the joint correlation would be to stitch them together using common information. For example, if each couple had an index and you recorded the index alongside the husband's or wife's height, you could later use the indices to do proper matching. If you do this, the experiment becomes a joint experiment, not a separated one. This is the case in all Bell test experiments.

minkwe, (responding to the above comment which came in while my last reply was in progress). I agree that the key phrase -- the one that I also emphasised in my initial response on the matter -- is: PROPER MATCHING!

That's why I encourage others, like Don Graft and (formerly) Caroline Thompson, to monitor the matchings. However, here's where I differ from DG and CT: as the matching improves, the Bellian inequalities will continue to be breached; even more so than now!

PS: As to this line of yours: "If you do this, the experiment becomes a joint experiment not a separated one. This is the case in all Bell test experiments." I did not find this to be in line with your normal clarity (perhaps it's me and the silly season).

For, given the indexing, all the husbands can be measured in Paris, all the wives where I live! To borrow Don's phrase: they can remain "separated measurements". Indeed, in Bell tests, they are working ever harder to increase the separation. (Though, to be honest, I have no clue as to what mechanism could possibly breach ANY spacelike separation under proper matching.)

HTH.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am

FrediFizzx wrote:A note here; for most EPR experiments the pairing is always done in the analysis after the measurements are done. I guess one could be a little bit pendantic about that saying that the time stamps took care of the pairing during measurements. Anyways, the experiments do attempt to measure in pairs I believe.

Your belief is right! So we can express it confidently as a fact. Thus some of the technical difficulties relate to the old A_i - B_i (i = 1, 2, …, n) pair-matching problem.

- Gordon Watson
**Posts:**240**Joined:**Wed Apr 30, 2014 4:39 am
