The insanity of non-realism

A special section about sanity in modern physics

Re: The insanity of non-realism

Postby minkwe » Sat Mar 22, 2014 6:21 am

gill1109 wrote:Regarding the possible insanity of non-realism, anyone who thinks that rejecting "counterfactual definiteness" is insane, is quite free to do so. They have three or four other logical alternatives:

(1) non-locality,
(2) super-determinism aka conspiracy,
(3) a successful loophole-free experiment will never be done,

(4) there is no Nx4 spreadsheet to start with. (The intersection of disjoint sets is a null set). So the theorem is irrelevant to the experiments.

De Raedt, Adenier, Rosinger, Accardi, Hess, Khrennikov, and Kracklauer have been saying this for years. I'm done trying to make you understand it. Zen is also telling you the same thing in your thread about your paper. You should consider moving your arguments to that thread, where they are more relevant. I'm done.
minkwe
 
Posts: 1211
Joined: Sat Feb 08, 2014 10:22 am

Re: The insanity of non-realism

Postby gill1109 » Sun Mar 23, 2014 12:19 am

minkwe wrote:There is no Nx4 spreadsheet to start with. (The intersection of disjoint sets is a null set). So the theorem is irrelevant to the experiments.

Michel should now read section 9 of my paper, or read Joy Christian's experimental paper (see section 4 of arXiv:1211.0784).

There is no Nx4 spreadsheet to start with, but it is very easy to construct one.

There is also no intersection of disjoint sets in my work. Four correlations are computed, each from a disjoint subset. The subsets are formed by taking random subsamples from a large set. The average in each subset is, with large probability, close to the corresponding average in the large set. This is statistical sampling theory. It works because the four subsets are chosen according to a known, precise sampling scheme (each row has, independently of all the others, probability 1/4 of belonging to each of the four subsets).
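The sampling scheme described above is easy to demonstrate. Below is a minimal Python sketch (a generic illustration, not code from either paper): rows of an Nx4 table of ±1 outcomes are assigned independently to four disjoint subsets with probability 1/4 each, and each subset's column average lands close to the full-table average.

```python
import random

random.seed(0)

# An N x 4 "spreadsheet": each row holds four +/-1 outcomes.
N = 40_000
rows = [[random.choice([-1, 1]) for _ in range(4)] for _ in range(N)]

# Each row independently lands in one of four disjoint subsets,
# with probability 1/4 each (the sampling scheme described above).
subsets = [[], [], [], []]
for row in rows:
    subsets[random.randrange(4)].append(row)

# The average of column 0 within each subset is close to the average
# of column 0 over all N rows: ordinary statistical sampling at work.
full_avg = sum(r[0] for r in rows) / N
for s in subsets:
    sub_avg = sum(r[0] for r in s) / len(s)
    assert abs(sub_avg - full_avg) < 0.05
```

The same closeness holds for each of the four columns; nothing about disjointness prevents each subset from being a fair sample.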

The theorem can easily be applied to computer simulation models of real-world EPR-B experiments (provided the user of the model is allowed to freely choose settings), and it tells us interesting things about what can be done with them.
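The arithmetic behind such theorems can be checked exhaustively in Python (a generic check, not tied to either paper's notation): for ±1 entries, the CHSH combination of each row of an Nx4 spreadsheet is exactly ±2, so its average over any number of rows necessarily lies in [-2, 2].

```python
from itertools import product

# Exhaustively check all 16 possible rows (a, b, c, d) of +/-1 values:
# the CHSH combination a*c + a*d + b*c - b*d always equals +2 or -2,
# because it factors as a*(c + d) + b*(c - d) and exactly one of the
# two brackets is nonzero. Hence the average over the N rows of any
# Nx4 spreadsheet is bounded between -2 and +2.
chsh_values = {a*c + a*d + b*c - b*d for a, b, c, d in product([-1, 1], repeat=4)}
assert chsh_values == {-2, 2}
```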
gill1109
Mathematical Statistician
 
Posts: 2287
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: The insanity of non-realism

Postby Mikko » Sun Mar 23, 2014 1:19 am

minkwe wrote:(4) there is no Nx4 spreadsheet to start with. (The intersection of disjoint sets is a null set). So the theorem is irrelevant to the experiments.

Naive realism says that there is, or at least that the same information exists in some physical form.
Mikko
 
Posts: 155
Joined: Mon Feb 17, 2014 2:53 am

Re: The insanity of non-realism

Postby minkwe » Sun Mar 23, 2014 6:30 am

Mikko wrote:
minkwe wrote:(4) there is no Nx4 spreadsheet to start with. (The intersection of disjoint sets is a null set). So the theorem is irrelevant to the experiments.

Naive realism says that there is, or at least that the same information exists in some physical form.

It's called naive for a reason. No physicist is interested in such a realism to start with. Even classical physics violates it.
minkwe
 

Re: The insanity of non-realism

Postby minkwe » Sun Mar 23, 2014 6:38 am

gill1109 wrote:There is also no intersection of disjoint sets in my work. Four correlations are computed each from a disjoint subsets. The subsets had been formed by taking random subsamples from a large set. The average in the subset is with large probability close to the corresponding average in the large set. This is called statistical sampling theory. It works because the four sub-sets are chosen according to a known, precise, sampling scheme (each row has, independently of all the others, chance 1/4 to belong to each of the four subsets).

I've already told you many times that this is fantasy, and pointed you to Bertrand's paradox. But you do not listen. You do not seem to understand what a disjoint set is; otherwise you would not suggest that two disjoint sets could both be fair random samples of a bigger set.
gill1109 wrote:Michel should now read section 9 of my paper,

Do you have a problem with reading comprehension? How many times do I have to tell you I'm not interested. Find somebody else to read your paper and leave me alone.
minkwe
 

Re: The insanity of non-realism

Postby Mikko » Sun Mar 23, 2014 8:00 am

minkwe wrote:
Mikko wrote:
minkwe wrote:(4) there is no Nx4 spreadsheet to start with. (The intersection of disjoint sets is a null set). So the theorem is irrelevant to the experiments.

Naive realism says that there is, or at least that the same information exists in some physical form.

It's called naive for a reason. No physicist is interested in such a realism to start with. Even classical physics violates it.

Of course. But you didn't specify which sort of non-realism is insane.
Mikko
 

Re: The insanity of non-realism

Postby gill1109 » Sun Mar 23, 2014 9:42 am

minkwe wrote:You do not seem to understand what a disjoint set is; otherwise you would not suggest that two disjoint sets could both be fair random samples of a bigger set.

Write down any 100 numbers between 0 and 9. Choose a random sample of 50 of them. Look at the average of the 50 you selected. Look at the average of the remaining 50. Look at the average of all 100. With large probability, all three averages will be close to one another. Run this code at the R command line a few times.

Code: Select all
x <- sample(0:9, 100, replace = TRUE)        ## create a list of 100 numbers between 0 and 9
temp <- c(rep(TRUE, 50), rep(FALSE, 50))     ## a vector containing 50 TRUEs and 50 FALSEs
choice <- sample(temp, 100, replace = FALSE) ## a random permutation of "temp"
y <- x[choice]                               ## a random sample of size 50 from the numbers in "x"
z <- x[!choice]                              ## the remaining 50 numbers in "x"
mean(x); mean(y); mean(z)                    ## all three averages should be close

Investigate what happens when you increase the sample size beyond 100. You can fill the initial list of numbers in x in any way you like, but do keep them within the same bounded range.
Last edited by gill1109 on Sun Mar 23, 2014 9:51 am, edited 2 times in total.
gill1109

Re: The insanity of non-realism

Postby gill1109 » Sun Mar 23, 2014 9:47 am

Mikko wrote:
minkwe wrote:
Mikko wrote:(Answering Minkwe, who said "there is no Nx4 spreadsheet to start with"). Naive realism says that there is, or at least that the same information exists in some physical form.

It's called naive for a reason. No physicist is interested in such a realism to start with. Even classical physics violates it.

Of course. But you didn't specify which sort of non-realism is insane.

A computer simulation satisfies naive realism. Are computer simulations insane?
gill1109

Re: The insanity of non-realism

Postby minkwe » Sun Mar 23, 2014 10:55 pm

gill1109 wrote:Write down any 100 numbers between 0 and 9. Choose a random sample of 50 of them. Look at the average of the 50 you selected. Look at the average of the remaining 50. Look at the average of all 100. With large probability, all three averages will be close to one another. Run this code at the R command line a few times.

So you still haven't read Bertrand's paradox. What you write above is naive. You know what HIDDEN VARIABLES mean, right?
minkwe
 

Re: The insanity of non-realism

Postby minkwe » Sun Mar 23, 2014 10:58 pm

gill1109 wrote:A computer simulation satisfies naive realism. Are computer simulations insane?

None of my simulations satisfy naive realism, although I recognize that you "fixed" them so that they would. That is insane.

There is no "nonsense" or "insanity" limit when it comes to simulations. Pretty much any garbage can be simulated. All you need is a vivid imagination and a lack of physical insight about the real world. I've met a few such people, who do not believe atoms exist independently of measurement but believe that probabilities have objective reality.
minkwe
 

Re: The insanity of non-realism

Postby gill1109 » Mon Mar 24, 2014 1:21 am

minkwe wrote:
gill1109 wrote:A computer simulation satisfies naive realism. Are computer simulations insane?

None of my simulations satisfy naive realism, although I recognize that you "fixed" them so that they would. That is insane.

There is no "nonsense" or "insanity" limit when it comes to simulations. Pretty much any garbage can be simulated. All you need is a vivid imagination and a lack of physical insight about the real world. I've met a few such people, who do not believe atoms exist independently of measurement but believe that probabilities have objective reality.

You have a very naive view of what naive realism means.

I did not fix your simulations; I entertained a thought experiment about them in my mind (i.e., I used my imagination). Through this device I was able to make non-trivial mathematical predictions about the operating characteristics of your simulations. Your simulation results fit my predictions.
gill1109

Re: The insanity of non-realism

Postby gill1109 » Mon Mar 24, 2014 1:27 am

minkwe wrote:
gill1109 wrote:Write down any 100 numbers between 0 and 9. Choose a random sample of 50 of them. Look at the average of the 50 you selected. Look at the average of the remaining 50. Look at the average of all 100. With large probability, all three averages will be close to one another. Run this code at the R command line a few times.

So you still haven't read Bertrand's paradox. What you write above is naive. You know what HIDDEN VARIABLES mean right?

I know three Bertrand's paradoxes very well indeed, and what you are saying appears to me to be nonsense whichever of the three you are referring to, because as far as I can see none of them has any relevance to our situation at all.

Please explain in what way you think Bertrand's paradox applies to our situation here. I suppose you mean the most famous one:

https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)

I suspect you are thinking of Borel's paradox,

https://en.wikipedia.org/wiki/Borel_paradox

This is a "paradox" about conditioning on zero-probability events. As Kolmogorov said, "The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible". However, there is no conditioning in my proof of CHSH at all, let alone conditioning on zero-probability events. It seems clear to me that you haven't appreciated the statement of my Theorem 1, nor studied its proof (in the appendix), because if you had, you would have seen that your critique on this point is quite irrelevant.

Did you run that little bit of R code yet? It disproves another one of your many claims about statistics and probability.

Finally, regarding HIDDEN VARIABLES: you are saying that because hidden variables are called "hidden", we are not allowed to do anything with them when we simulate them on a computer, except what the writer of the code imagines is happening with them in the physical model he or she is trying to simulate. You are saying that we are not even allowed to think about doing things with them beyond what the writer of the program intended. Do you find that attitude scientific? If I buy a piece of software, am I forbidden to think about how it works, what it can do, and whether it can do things which the programmer did not have in mind? To me, these notions are incredible. Various words from the title of this thread come to my mind.

I wonder what other people think.
gill1109

Re: The insanity of non-realism

Postby minkwe » Mon Mar 24, 2014 8:06 am

gill1109 wrote:I know three Bertrand's paradoxes very well indeed, and what you are saying appears to me to be nonsense, whichever of the three paradoxes you are referring to, because as far as I can see none of them have any relevance to our situation at all.
Please explain in what way you think Bertrand's paradox applies to our situation here. I suppose you mean the most famous one:

It is very simple really, if you actually think about it. You are able to randomly select numbers between 0 and 9 because you know everything about the numbers. They are not hidden, and they are fixed (naive realism). If the variables are hidden and dynamically changing in a way you do not know, as they probably are in the EPR situation, your claim that you are randomly sampling them is fantasy. So contrary to your statement above, it is your little example of averaging numbers that is completely irrelevant. Bertrand's paradox tells you that you can generate a biased sample by random sampling if you do not know the exact nature of the variable you are sampling. A proper random sampling method must be tailored to the variable being sampled.
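Bertrand's paradox itself is easy to simulate. The standard version (a stock illustration, independent of either side's argument here) asks for the probability that a "random" chord of a circle is longer than the side of the inscribed equilateral triangle; two equally plausible sampling procedures give different answers.

```python
import math
import random

random.seed(0)
N = 200_000
side = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle

def chord_from_endpoints():
    # Method 1: pick two random endpoints on the circle.
    t1, t2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((t1 - t2) / 2))

def chord_from_midpoint():
    # Method 2: pick a random point inside the disk as the chord's midpoint.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

# Two "random sampling" procedures, two different probabilities:
# roughly 1/3 for method 1, roughly 1/4 for method 2.
p1 = sum(chord_from_endpoints() > side for _ in range(N)) / N
p2 = sum(chord_from_midpoint() > side for _ in range(N)) / N
```

The point of the paradox is that "sample at random" is underspecified until the sampling procedure is pinned down.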

I suspect you are thinking of Borel's paradox,
https://en.wikipedia.org/wiki/Borel_paradox

That "null set" discussion is closed, remember? Though Borel's paradox and Bertrand's paradox are not unrelated.

Finally, regarding HIDDEN VARIABLES, you are saying that because hidden variables are called "hidden", we are not allowed to do anything with them when we simulate them on a computer

I wonder where you would get such an insane idea from anything I said. Anybody can do anything they like with hidden variables on any computer they like. I just told you there was no limit. Just because somebody can do it and is free to do it does not mean it is sane.

You are saying that we are not even allowed to think about doing things with them, beyond what the writer of the program intended. Do you find that attitude scientific?

Again, I never said anything that insane. Anyone can think and dream up anything they like. There is no limit. Just because somebody can do it and is free to do it does not mean it is sane. And yes, it is perfectly scientific to call an insane idea insane.

If I buy a piece of software, am I forbidden to think about how it works, am I forbidden to think about what it can do, and am I forbidden to think about whether it can do things which the programmer did not have in mind? To me, these notions are incredible. Various words from the title of this thread come to my mind.

Except that you have completely dreamed up the notion of "forbidden" and are now having a hissy fit about imagined prohibitions. That is insane.
minkwe
 

Re: The insanity of non-realism

Postby gill1109 » Mon Mar 24, 2014 8:42 am

minkwe wrote:It is very simple really, if you actually think about it. You are able to randomly select numbers between 0 and 9 because you know everything about the numbers. They are not hidden, and they are fixed (naive realism). If the variables are hidden and dynamically changing in a way you do not know, as they probably are in the EPR situation, your claim that you are randomly sampling them is fantasy. So contrary to your statement above, it is your little example of averaging numbers that is completely irrelevant. Bertrand's paradox tells you that you can generate a biased sample by random sampling if you do not know the exact nature of the variable you are sampling. A proper random sampling method must be tailored to the variable being sampled.

And what about the situation of "epr-simple"? Those hidden variables in your program are not hidden to me. They are dynamically changing through repeated calls to Python's pseudo-random generator, which is a totally deterministic piece of computer code. It's even open source, so I know exactly what generator you are using. Fix the initial seed, and the whole simulation generates *identical* results on any computer using today's ACM and IEEE standards on floating-point arithmetic, as long as the input (the two setting sequences) is the same.
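The reproducibility claim is easy to illustrate with Python's own generator, the same one the simulations rely on (the function name and the use of angles here are invented for illustration, not taken from epr-simple):

```python
import math
import random

def hidden_stream(seed, n):
    # A stand-in for a simulation's stream of "hidden" angles: once the
    # seed is fixed, the pseudo-random sequence is fully determined, so
    # any run on any conforming machine reproduces it exactly.
    rng = random.Random(seed)
    return [rng.uniform(0, 2 * math.pi) for _ in range(n)]

run1 = hidden_stream(42, 1000)
run2 = hidden_stream(42, 1000)
assert run1 == run2                     # same seed: identical "hidden" values
assert hidden_stream(7, 1000) != run1   # different seed: a different stream
```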

The experimenter gets to choose the settings outside of your program. He or she can choose settings which are repeatedly changing, at random. As in real world experiments.

There are good reasons why the best experiments use repeated re-randomisation of settings!

This way, the experimenter (who is now experimenting with your computer programs, treating them as a new physical system, which he studies in his virtual lab) can effectively randomly sample from your sequence of hidden variables. And the mathematician with imagination can do these experiments in his head, and use logical deduction to prove interesting things about the limits of such simulation models!

Have you run my R code, by the way? The results you'll see when running it a number of times, especially when you increase N beyond 100, give empirical evidence that your claims about randomly sampling disjoint subsets are wrong. A scientist is always open to consideration of new evidence.
gill1109

Re: The insanity of non-realism

Postby Ben6993 » Mon Mar 24, 2014 9:45 am

Richard wrote:I wonder what other people think.


1. I have not met Bertrand's Paradox before but the following link gives an interesting explanation:
http://www.youtube.com/watch?v=uI2FnUmBeeo

The 'paradox' does seem relevant to the simulations. The link shows that how one generates random numbers in a process affects the results. So in our context, a process which generates random numbers on a line, say, will give different answers from a process which generates random numbers on an R3 sphere. Joy seems to be content with generating random numbers on R3, so I suppose he judges that to be equivalent, in the simulations, to generating random numbers on S3. Something like: R3 is a tangential subspace to S3 at a point ... (?) but that may be incorrect as I am very weak on Clifford algebra. But the data generated on R3 are not all particles, so maybe one problem is the difficulty of generating all the data on S3 in one initial step, instead of generating in R3 and then later discarding some data?

2. Simpson's Paradox was mentioned in earlier posts as also being relevant. I have met problems with Simpson's Paradox and agree that it might be relevant, though I am retired and no longer have access to the real-life data affected by it. Say you have an able group of students marked on a test and need to give them a pass mark, and there is a much less able group taking an easier test but also needing an equivalent pass grade. So you are allocating a 1 or 0 to all students, and a "1" means the same thing to all students whichever test they take. The problem arises because the distributions of marks for the two tests are very different: the cut-off for the brighter group is at the bottom tail of their normal distribution, while for the weaker group the cut-off is at the top end of their distribution of marks. The standard of the pass grade this year should be related, by some agreed convention, to the grades of the previous year's students. I may be exaggerating slightly here, but one year, when experimenting with the cut-off marks in the two tests, the grades for the two groups were brought closer to the target grades. Then, on combining the groups to compare the total cohorts across years, the total groups diverged. Very strange: in getting the sub-groups to match more closely you force the total populations to diverge more. Another problem was that the balance of students entering the easy and hard tests had changed that year.
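The exam scenario above is an instance of Simpson's paradox, which is easy to reproduce with made-up numbers (the figures below are invented for illustration, not the real exam data):

```python
# Hypothetical (passed, entered) counts per test paper, per year.
year1 = {"easy": (90, 100), "hard": (30, 100)}
year2 = {"easy": (46, 50),  "hard": (48, 150)}

def rate(passed, entered):
    return passed / entered

# Each paper's pass rate improves from year 1 to year 2 ...
assert rate(*year2["easy"]) > rate(*year1["easy"])   # 0.92 > 0.90
assert rate(*year2["hard"]) > rate(*year1["hard"])   # 0.32 > 0.30

# ... yet the combined pass rate falls, because the balance of entries
# shifted toward the harder paper: Simpson's paradox.
combined1 = (90 + 30) / (100 + 100)   # 0.60
combined2 = (46 + 48) / (50 + 150)    # 0.47
assert combined2 < combined1
```

The reversal is driven entirely by the change in group sizes, which matches the "changed balance of entries" problem described above.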

Could that paradox affect the simulations? Well, the simulations have 1 or -1 outcomes, equivalent to pass or fail, and there is an underlying continuous variable comparable to marks, i.e. the orientation angles of the particles. However, for any single detector angle, there will be as many 1s as -1s recorded there. There does not seem to be any difference between the distributions of particle orientation angles at the two detectors, so I am struggling to see how this paradox works here.

3. Hidden variables.
I was under the impression that if one could write down all the hidden variables then there would be no randomness left for the particles.
The particles have to give the results that they do give. There may be randomness in the simulation, but that is not because of any randomness inherent in the particles once the hidden variables are known.

4. Generated data v generated particles
I presume there is no way of mapping null particles (generated data which are not particles) onto S3 space? Since these data are supposed not to be genuine particles, then a mapping must be impossible. I was also wondering whether the set of R3 points which give rise to null particles in one simulation is the same set of data which form null particles in a second simulation? I suppose it is not that straightforward.
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: The insanity of non-realism

Postby gill1109 » Mon Mar 24, 2014 11:23 pm

Ben6993 wrote:4. Generated data v generated particles
I presume there is no way of mapping null particles (generated data which are not particles) onto S3 space? Since these data are supposed not to be genuine particles, then a mapping must be impossible. I was also wondering whether the set of R3 points which give rise to null particles in one simulation is the same set of data which form null particles in a second simulation? I suppose it is not that straightforward.

They are very different sets of particles. Because of Bell's theorem.

I just wrote the following on another thread (on "what is wrong with this argument"):
http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=30&start=30#p1092

Richard wrote:Suppose we do N runs and we observe some "no shows", non-detections, so we only get say n < N pairs of both detected particles. There are two options.
Option 1: We can imagine that we are doing an N run experiment with ternary outcome (-1, 0, 1). Bell-CHSH is an inequality for binary outcomes. We need the CGLMP inequality for a ternary outcome experiment, or we can use CHSH after merging two of the three outcomes to one. Either of these kinds of inequalities are what we call "generalized Bell inequalities" and these two kinds are the only generalized Bell inequalities for a 2 party, 2 setting, 3 outcome experiment. See the section of my paper on generalized Bell inequalities http://arxiv.org/abs/1207.5103 (the final revision just came out on arXiv today). None of the generalized Bell inequalities for a 2x2x3 experiment are violated, because the experiment satisfies "local realism" with no conspiracy loophole.
Option 2: there really are only n runs. The probability distribution of the local hidden variables in the model is the conditional distribution given that both particles are accepted by the detectors. In order to pick a value of the hidden variable in the model, we need to know the settings (in effect, we are using rejection sampling: we just keep on picking a value, discard it if it is rejected given the settings, and try again). We now have n runs of a local realistic 2x2x2 experiment in which the hidden variables are chosen knowing in advance what the settings will be. You could call it not conspiracy; realist it certainly is, but non-local. First there is communication from the settings to the source. Then the hidden variable is created. Then the particles fly to the detectors, knowing in advance what the settings will be.
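Option 2's rejection sampling can be sketched generically in Python. The acceptance rule below is a made-up placeholder, not the model from any specific paper; the point is only that any rule which consults the settings makes the surviving hidden variables setting-dependent.

```python
import math
import random

random.seed(1)

def accept(lam, alpha, beta):
    # Hypothetical detection rule: the pair is detected only when the
    # hidden angle lies close enough to both settings. Any rule that
    # consults the settings has the same conceptual consequence.
    return min(abs(math.cos(lam - alpha)), abs(math.cos(lam - beta))) > 0.3

def draw_hidden(alpha, beta):
    # Rejection sampling: keep drawing until the settings accept.
    # The surviving lambdas are distributed *conditionally* on the
    # settings -- exactly the setting-dependence described above.
    while True:
        lam = random.uniform(0, 2 * math.pi)
        if accept(lam, alpha, beta):
            return lam

# The retained sample of n hidden variables was chosen "knowing" the settings.
sample = [draw_hidden(0.0, math.pi / 4) for _ in range(1000)]
```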
gill1109

Re: The insanity of non-realism

Postby minkwe » Wed Mar 26, 2014 7:23 am

gill1109 wrote:And what about the situation of "epr-simple"? Those hidden variables in your program are not hidden to me.

So? You still do not get it. Hidden variables are hidden in the real experiment being modeled by the simulation.

They are dynamically changing through repeated calls to Python's pseudo-random generator, which is a totally deterministic piece of computer code. It's even open source, so I know exactly what generator you are using. Fix the initial seed, and the whole simulation generates *identical* results on any computer using today's ACM and IEEE standards on floating-point arithmetic, as long as the input (the two setting sequences) is the same.

The experimenter gets to choose the settings outside of your program. He or she can choose settings which are repeatedly changing, at random. As in real world experiments.

This is insanity, as I've told you umpteen times. You cannot control the randomness in a real experiment, so "fixing" the randomness in my simulation just because you can is insanity. Garbage in, garbage out. You are free to do whatever you like, but don't be deceived that what you get is meaningful in any way whatsoever. Each individual outcome is a result of at least three variables, only two of which are truly random: the hidden particle variable (λ), the hidden instrument/detector variable (ζ), and the known detector setting (α). In my simulation, α is picked randomly but is not really a random variable, since it is fixed for each relevant correlation being calculated; it can even be argued that it is not really a variable. In any case, when you "fix" the initial random-number seed, you are doing the insane operation of controlling not just α (which you should be able to do, as is done in real experiments), but also λ and ζ, which, even though you can control them in a simulation, are uncontrollable hidden variables in any real performable experiment. I'm tired of explaining this to you in thread after thread after thread.
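The three-variable description of a single outcome can be written schematically in Python (a generic sign-model invented for illustration; the actual functions in epr-simple differ):

```python
import math
import random

def outcome(alpha, lam, zeta):
    # One measurement outcome as a function of the known setting (alpha)
    # and two hidden variables: the particle's lambda and the instrument's
    # zeta. The +/-1 result depends on all three.
    return 1 if math.cos(2 * (alpha - lam)) > zeta else -1

rng = random.Random(123)
alpha = 0.0  # the setting: chosen and fixed by the experimenter
results = [outcome(alpha, rng.uniform(0, math.pi), rng.uniform(-1, 1))
           for _ in range(10)]
assert all(r in (-1, 1) for r in results)
```

In this sketch the experimenter controls alpha, while lambda and zeta are drawn from the generator; seeding the generator fixes the latter two as well, which is the point of contention above.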

There are good reasons why the best experiments use repeated re-randomisation of settings!

Again, experimenters randomize *settings*, not hidden variables, and they can randomize them to their heart's content. We are talking about randomization of hidden variables. Bertrand's paradox tells you that any claim that you are randomly sampling the hidden variables in any experiment is a fantasy. To suggest that by simply randomizing the *settings* you are able to obtain *identical* results is very naive: it is equivalent to the assumption that λ and ζ do not exist or are fixed. This is naive realism. There is no experimental, physical or logical basis for such an assumption. To obtain *identical* results in any of my simulations, you have to fix the random-number generator so that you get an identical fixed set of λ and ζ every time. This is insanity, as it then becomes completely irrelevant to real performable experiments.

This way, the experimenter (who is now experimenting with your computer programs, treating them as a new physical system, which he studies in his virtual lab) can effectively randomly sample from your sequence of hidden variables. And the mathematician with imagination can do these experiments in his head, and use logical deduction to prove interesting things about the limits of such simulation models!

No. They will be proving uninteresting things about the limits of naive, non-physical and irrelevant derivatives of the simulations, and their proofs will not be relevant at all to any performable experiment or to the real world. If you deliberately make a virtual system violate physical principles, it is disingenuous to call it a "physical system".

Have you run my R code, by the way? The results you'll see when running it a number of times, especially when you increase N beyond 100, give empirical evidence that your claims about randomly sampling disjoint subsets are wrong. A scientist is always open to consideration of new evidence.

That you keep asking me to run your silly R code, given everything I've told you about its irrelevance, is the epitome of insanity. Maybe somebody else can have a go at pointing out your errors, but I'm done.
minkwe
 

Re: The insanity of non-realism

Postby minkwe » Wed Mar 26, 2014 7:34 am

minkwe wrote:We set out assuming that a single particle pair has simultaneous values for 4 observables: A, B, C, D. This assumption is supposed to be the realism assumption. We then derive a relationship between those observables in the form of inequalities.
But unfortunately we are only able to measure two of those observables, since we only have 2 particles in a pair. But maybe, if we measure A, B on our initial particles and C, D on two different particles, then we can use those outcomes. It turns out that the outcomes we obtain in such a manner violate the inequalities we obtained by making our realism assumption. So we conclude with straight faces that A, B, C, D do not simultaneously exist and therefore the realism assumption is false.


So, getting back on topic, the question remains: when we say "realism is untenable", which variables exactly are we claiming do not exist? A, B, C, D?
minkwe
 

Re: The insanity of non-realism

Postby gill1109 » Thu Mar 27, 2014 8:04 am

minkwe wrote:
gill1109 wrote:And what about the situation of "epr-simple"? Those hidden variables in your program are not hidden to me.

So? You still do not get it. Hidden variables are hidden in the real experiment being modeled by the simulation.


I think *you* don't get it.

Firstly, when I say "in the situation of epr-simple", I am talking about your computer programs running on anyone's computer. I am not talking about what physical phenomena in the real world you are modelling. I am trying to get across to you that mathematics can be used to deduce the limitations of those programs.

Secondly, if we are not talking about computer programs but about some physicist's model of physical reality, it is totally irrelevant what name the physicist wants to give to the variables in his or her mathematical-physical model of real experiments. He or she may call them "hidden variables" or "blue variables" or "micro-variables". What's in a name?

The label comes from history, from tradition: these variables were hidden so far; they cause the correlations we observe, but so far we have never observed them directly. The word "hidden" is a value judgement! It's an aid to picturing the model, no more than that.
gill1109

Re: The insanity of non-realism

Postby gill1109 » Thu Mar 27, 2014 8:09 am

minkwe wrote:
minkwe wrote:We set out assuming that a single particle pair has simultaneous values for 4 observables: A, B, C, D. This assumption is supposed to be the realism assumption. We then derive a relationship between those observables in the form of inequalities.
But unfortunately we are only able to measure two of those observables, since we only have 2 particles in a pair. But maybe, if we measure A, B on our initial particles and C, D on two different particles, then we can use those outcomes. It turns out that the outcomes we obtain in such a manner violate the inequalities we obtained by making our realism assumption. So we conclude with straight faces that A, B, C, D do not simultaneously exist and therefore the realism assumption is false.


So, getting back on topic, the question remains: when we say "realism is untenable", which variables exactly are we claiming do not exist? A, B, C, D?

The ones which were not measured, obviously.
gill1109
