minkwe's challenge

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: minkwe's challenge

Postby minkwe » Sat May 10, 2014 9:07 am

Heinera wrote:The statistics enter in my next sentence:
So if the absolute upper bound is 2 with same seed, you can't expect to see much of a difference with different seeds (given large N).

Maybe you think that by simply mentioning "large N" and "random number seed" you are talking statistics. But the truth of all the points I gave you above does not change whether you have N=1 or N=1 billion, nor does it change because you restrict yourself to talking about "computer models" with random seeds. You still haven't pointed to a single point in the list that you disagree with, nor a single fact about statistics that changes any one of them. You just keep digging.

Heinera wrote:This is a computer program. It takes a pair of detector settings, and generates two lists of outcomes, and then it computes the correlation. And then I ask the question: Could this in any reasonable way be massively seed dependent?

Again I ask you: a computer program modelling what? Does it produce data? What are the maximum and minimum values of each record in the data, and what is the number of degrees of freedom of the data? That is all you need to know to determine the appropriate upper bound for S.

Heinera wrote:There are no factual or counterfactual outcomes in this program, only outcomes.

Your program produces actual outcomes, whether you like it or not, whether you admit it or not. Call it whatever you like, it still doesn't change any of the points on my list. What is the number of degrees of freedom in the "outcomes"? That will determine the upper bound. It doesn't matter where you get the "outcomes", from your dreams, or a "computer model", or "non-locality", or "backward causation", or "multiple universes", or "voodoo", or QM, or LHV. Nothing whatsoever will ever violate the upper bound.

Heinera wrote:The distinction between factual and counterfactual is something that belongs in the twilight zone between experiment and philosophy.

That is why Bell believers like you will never understand: they prefer to believe in completely baseless and utterly ridiculous concepts such as "non-realism"/"non-locality"/"multiple-universes"/"backward-causation" rather than sound logic and proper application of mathematics to experiments. And their only justification for believing those things is the mathematical error of comparing an inequality derived from a 4xN spreadsheet with correlations calculated using data from 4 different 2xN spreadsheets, deluding themselves that the purely mathematical inequality has anything to do with physics.
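The 4xN versus 2xN distinction can be checked numerically. Here is a minimal Python sketch (the data are random placeholders, not the output of any of the simulations discussed in this thread): when one 4xN dataset supplies all four correlations, the rows tie the terms together and |S| can never exceed 2; four independent 2xN datasets have no such tie, and S can reach the algebraic maximum of 4.

```python
import random

random.seed(0)
N = 100_000

def corr(pairs):
    """Correlation as the average product of +/-1 outcome pairs."""
    return sum(x * y for x, y in pairs) / len(pairs)

# One 4xN "spreadsheet": every row carries all four outcomes a1, a2, b1, b2,
# so the four correlations are computed from the same rows.
rows = [[random.choice([-1, 1]) for _ in range(4)] for _ in range(N)]
s_4xN = (corr([(r[0], r[2]) for r in rows])
         + corr([(r[0], r[3]) for r in rows])
         + corr([(r[1], r[2]) for r in rows])
         - corr([(r[1], r[3]) for r in rows]))

# Four independent 2xN datasets: nothing ties the four correlations
# together, so each term can be pushed to +/-1 separately.
perfect = [(1, 1)] * N     # correlation +1
anti = [(1, -1)] * N       # correlation -1
s_2xN = corr(perfect) + corr(perfect) + corr(perfect) - corr(anti)

print(abs(s_4xN) <= 2.0)   # True, for any 4xN data whatsoever
print(s_2xN)               # 4.0
```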
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: minkwe's challenge

Postby minkwe » Sat May 10, 2014 9:14 am

http://vixra.org/pdf/1305.0129v1.pdf
Rosinger wrote:The inequalities (17) are purely mathematical. In particular, their proof depends in absolutely no way on anything else, except the mathematical properties of the set Z of positive and negative integers, set seen as a linearly ordered ring, [9].

As for the inequalities (16), they are a direct mathematical consequence of the inequalities (17), and thus again, their proof depends in absolutely no way on anything else, except the mathematical properties of the set R of real numbers, set seen as a linearly ordered field, [9].
It is, therefore, bordering on the amusing tinted with the ridiculous, when any sort of so called "physical" meaning or arguments are enforced upon these inequalities - be it regarding their proof, or their connections with issues such as realism and locality in physics - and are so enforced due to a mixture of lack of understanding of rather elementary and quite obviously simple mathematics ...
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: minkwe's challenge

Postby Heinera » Sat May 10, 2014 10:09 am

I think I've explained all my reasoning about the computer model, and I also think that you somehow got it. So now I ask you to produce a LHV computer model that implements Joy's theory (or just any LHV model). Outcomes only in {-1, 1} (no post selection), since Joy says that outcome 0 doesn't exist, and its presence in your code was only because the code "is not advanced enough". And as Joy says, the model should reproduce "all quantum correlations".

Let's call it "the counter challenge". Whenever you have a submission ready, open a new thread.
Heinera
 
Posts: 917
Joined: Thu Feb 06, 2014 1:50 am

Re: minkwe's challenge

Postby minkwe » Sat May 10, 2014 2:56 pm

Heinera wrote:I think I've explained all my reasoning about the computer model, and I also think that you somehow got it.

Keep dreaming. You still haven't pointed to a single point in the list that you disagree with, nor a single fact about statistics that changes any one of them. That means you agree with all of them, yet you keep arguing.

Heinera wrote:So now I ask you to produce a LHV computer model that implements Joy's theory (or just any LHV model). Outcomes only in {-1, 1} (no post selection)

What's the point? I've produced two different models which reproduce the QM correlations, yet you do not like them for some unreasonable, unphysical reason. You want to believe in mysticism; nothing will convince you otherwise, there will always be an excuse.

Heinera wrote:And as Joy says, the model should reproduce "all quantum correlations".

Both of my simulations reproduce all the QM correlations, as does Joy's model.

Heinera wrote:Let's call it "the counter challenge". Whenever you have a submission ready, open a new thread.

That's a non-challenge I have zero interest in, having already reproduced all the QM correlations twice. Maybe you should read those months-old threads again and ask yourself why you demand full detection and no post-selection when QM does not, and when no experimental result demands full detection or no post-selection either. Maybe you'll then be able to finally answer the question of how Alice can know that she was supposed to detect a particle but didn't, without using non-local information from Bob.

Maybe you can start another thread when you are finally able to produce a non-local/non-real simulation which violates the appropriate bound (no mathematical errors, no comparing apples with oranges this time).
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: minkwe's challenge

Postby Ben6993 » Sat Jun 14, 2014 8:48 am

Hi Heinera,

I have a question, please. I have just run your non-local hidden variables program: http://rpubs.com/heinera/16727 and agree with the output.
Can you say if Alice and Bob's outcomes are available to inspect after running the program?
(I am not yet up to speed with the R language.)
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: minkwe's challenge

Postby Heinera » Sun Jun 15, 2014 4:23 am

Ben6993 wrote:Hi Heinera,

I have a question, please. I have just run your non-local hidden variables program: http://rpubs.com/heinera/16727 and agree with the output.
Can you say if Alice and Bob's outcomes are available to inspect after running the program?
(I am not yet up to speed with the R language.)

Hi, Ben

The vectors of observations are stored in the variables ca and cb, which are overwritten for each new combination of detector settings. You could just rename them to something unique for each detector setting pair (like da, db for the 0/135 combination, etc). Then they will be available for inspection after you run the program.
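The bookkeeping Heinera describes can be pictured in a language-neutral Python sketch (this is only an illustration of the idea; run_pair, its random body, and the setting angles are placeholders, not the actual R code):

```python
import random

random.seed(1)
N = 10_000

def run_pair(a, b):
    """Placeholder for one run of the simulation at settings (a, b):
    returns Alice's and Bob's outcome vectors (random +/-1 here)."""
    ca = [random.choice([-1, 1]) for _ in range(N)]
    cb = [random.choice([-1, 1]) for _ in range(N)]
    return ca, cb

# Store each setting pair's outcome vectors under a unique key instead of
# overwriting a single ca/cb, so everything survives for later inspection.
outcomes = {}
for a, b in [(0, 45), (0, 135), (90, 45), (90, 135)]:
    outcomes[(a, b)] = run_pair(a, b)

ca_0_135, cb_0_135 = outcomes[(0, 135)]
print(len(ca_0_135))  # 10000
```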
Heinera
 
Posts: 917
Joined: Thu Feb 06, 2014 1:50 am

Re: minkwe's challenge

Postby Ben6993 » Sun Jun 15, 2014 7:49 am

Hi Heinera

Thanks, I was a little unsure of obs, ca, cb. And I hadn't noticed the "sapply" function before, but have now looked it up. I have a few ideas and was thinking of trying them in Excel VB, but I really ought to move on to R for anything requiring generation of random data. In a CHSH setup using non-CFD and non-local outcomes, one could get a perfect CHSH score of 4 by choosing (A, B) pairs of (-1, +1) and (+1, -1), since the four pairs of (a, b) angles are being treated as independent. But four perfect correlations are unreasonable among the pairs of CHSH angles, so one can aim for reasonable correlations of 0.5*sqrt 2, as you have done.

...

A CFD setup of CHSH requires links between outcomes for different pairs of angles. That loses some of the independence of the setup compared with non-CFD, and it reduces the limit to 2 (CFD) rather than 4 (non-CFD). In the non-CFD and non-local setup using realistic data, even using QM, the average value 0.5*sqrt 2 is not exceeded [though I feel sure Richard will correct me here], i.e. a CHSH value of 2*sqrt 2 is not exceeded. The QM results are not surprising, however, as QM uses a non-CFD and non-local setup.

I feel sure that nature will beat the CHSH limit of 2 because QM beats that limit. But that is not surprising if nature uses non-CFD and non-local. But what if nature uses CFD and local? Nature cannot beat CHSH=2 in a flatland setup, but it should be able to beat "2" in a non-flatland setup.
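The two limits mentioned above can be verified by brute force in a short Python sketch (independent of any particular physical model): enumerate every counterfactually definite assignment of the four outcomes and evaluate the CHSH combination; then note what happens when the four products are free to take any signs.

```python
from itertools import product

# Exhaustive check of the CFD bound: when every trial carries definite
# values for both of Alice's settings (a1, a2) and both of Bob's (b1, b2),
# the CHSH combination a1*b1 + a1*b2 + a2*b1 - a2*b2 never leaves [-2, 2].
values = [a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
          for a1, a2, b1, b2 in product([-1, 1], repeat=4)]
print(max(values), min(values))  # 2 -2

# Without CFD the four products are independent, so the first three terms
# can each be +1 and the subtracted one -1: the algebraic maximum is 4.
independent_max = 1 + 1 + 1 - (-1)
print(independent_max)  # 4
```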

To my mind, the zero outcomes in simulations are puzzling. As is the need to reverse the sign of two of the four correlations in a 'non-flatland' tweak of the normal flatland calculation. The four correlations surely need to be combined before dropping out of geometric algebra and into flatland arithmetic. Despite the two puzzles, I am firmly with Joy that the geometry of space is responsible for both the apparent non-CFD in QM and the apparent non-locality in QM.

To my mind, laboratory space is the spatial 3D of spacetime (x,y,z,t) but particles live in more dimensions than that. For me, flatland exists in the laboratory but particles do not live only in flatland. Joy seems to say that there is no difference between laboratory space and a particle's space, and that the one, common space is more than the spatial 3D of spacetime. And that implies that all flatland calculations are suspect. Going back to Susskind's 1967 paper, as there is a 4pi periodicity to electron rotations in the laboratory space of rotating magnets, why is that ignored in the simulations? I think that Joy does not ignore it, because I can think of 0 to 2pi as being in one trivector while 2pi to 4pi is covered by the trivector of the opposite basis? No doubt not an exact analogy.

An experimental detail ... does Bob ever know if he is measuring an electron or a positron?
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: minkwe's challenge

Postby Joy Christian » Sun Jun 15, 2014 8:35 am

Ben6993 wrote:To my mind, laboratory space is the spatial 3D of spacetime (x,y,z,t) but particles live in more dimensions than that. For me, flatland exists in the laboratory but particles do not live only in flatland. Joy seems to say that there is no difference between laboratory space and a particle's space, and that the one, common space is more than the spatial 3D of spacetime. And that implies that all flatland calculations are suspect. Going back to Susskind's 1967 paper, as there is a 4pi periodicity to electron rotations in the laboratory space of rotating magnets, why is that ignored in the simulations? I think that Joy does not ignore it, because I can think of 0 to 2pi as being in one trivector while 2pi to 4pi is covered by the trivector of the opposite basis? No doubt not an exact analogy.

Hi Ben,

Much of what you say is correct, at least conceptually, and I certainly do not ignore the 4pi periodicity in my model. On the contrary, my model works because of the 4pi periodicity---although this may not be intuitively obvious. A parallelized 3-sphere is diffeomorphic to the group SU(2), which, unlike the group SO(3), is the group relevant for the 4pi periodicity rather than the 2pi periodicity.

But your intuition about "particles living in more dimensions" is not entirely correct. The 3-sphere is a 3D space, just like the regular laboratory space R^3. But the topologies of these two spaces, S^3 versus R^3, are dramatically different. Now it is true that S^3 can be better understood (at least intuitively) if we embed it in R^4, which is a 4D space. But that does not change the fact that S^3 itself is a 3D space.

To understand this, think of the earth's surface, which is a 2D space. But it is embedded in the ambient space, which is of course R^3, a 3D space. Now the tangent space at every point on the earth's surface is R^2, a plane, or a 2D space. The relationship between S^3 and R^3 is almost the same, and it is the non-almost part that plays tricks on our minds, like the 4pi periodicity and the non-commutativity of the graded bases that are required to understand the parallelization of the 3-sphere.
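The 4pi periodicity of SU(2) can be illustrated with ordinary quaternions (a sketch of the group property only, not of the 3-sphere model itself): a rotor for a 2pi rotation about an axis comes back to minus the identity, and only a 4pi rotation restores it.

```python
import math

def rotor(theta):
    """SU(2) rotor for a rotation by theta about the z-axis,
    written as the unit quaternion (w, x, y, z)."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

w2pi = rotor(2 * math.pi)  # one full 2*pi rotation
w4pi = rotor(4 * math.pi)  # two full rotations, 4*pi

# A 2*pi rotation gives minus the identity quaternion; only after
# 4*pi does the rotor come back to +1, the hallmark of SU(2).
print(round(w2pi[0]))  # -1
print(round(w4pi[0]))  # 1
```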

I hope this is helpful.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: minkwe's challenge

Postby Ben6993 » Sun Jun 15, 2014 2:38 pm

Hi Joy

Yes, that is helpful. You have said something like it before to me, but it did not stay long, if at all, in my intuition.

I will try to phrase it so it might stay. One of the problems of measuring a time or distance interval between events A and B is that one cannot be at both A and B simultaneously. Not if one is a point particle, that is. So I can see that a hypothetical tangential 3D space for point A is not a good one to use for point B. In essence, points A and B need their own hypothetical tangential 3D spaces. And those spaces are not very reliable for use the further one goes away from the points (which is presumably why they are hypothetical). Especially if the global space has a twist, eg near a black hole or near a compactified dimension of string theory.

It is fairly easy to have an intuition for a large scale curvature so that the tangential 3D spaces at A and B are almost identical when A and B are close but less so at vast distances. I can also see the need for different tangential spaces on the very small scale (near compactified dimensions) but it is hard to see why there would be just one twist around a lab room (say).

In my own model of particles the whole electron exists in all available dimensions. If the maximum dimensionality corresponds to a parallelized 7-sphere then the electron lives in the 7-sphere, though the spin does not need all those dimensions for a description. Which is why I think of the electron as being in a higher dimensional environment.

I tend to think of lab space being 3D, although Susskind's 4pi rotation periodicity maybe argues against that. So, as well as you needing to recast QM, you will also need a recasting of spacetime as 3D is not enough for the 7-sphere?

In my own visualisation, I see the normal spacetime/laboratory space as emergent, being like a 3D projection screen, whereas all the action takes place in the higher dimensions and a Rasch-like analysis generates distances between events in spacetime.

Are the extra dimensions of the 7-sphere genuinely different/extra dimensions? This is going to sound odd. I can imagine space being warped around a black hole, but so long as particle A does not enter the BH there is no extra dimension intruding, yet tangential 3D spaces near point A will not extend very far from A, near a BH, before they become useless. In other words, is the 4th spatial dimension of geometric algebra a genuine extra dimension? [I think the answer will be yes, because the twist is at every possible point in the 3D, caused by compactified dimensions at every point in the 3D. Actually, I am not sure what the 'real' 3D is, as I have only been talking about hypothetical tangential 3D spaces! Sorry, I said it would sound odd. And I have been rambling on far too much.]
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: minkwe's challenge

Postby gill1109 » Mon Jun 16, 2014 12:27 pm

Heinera wrote:I think I've explained all my reasoning about the computer model, and I also think that you somehow got it. So now I ask you to produce a LHV computer model that implements Joy's theory (or just any LHV model). Outcomes only in {-1, 1} (no post selection), since Joy says that outcome 0 doesn't exist, and its presence in your code was only because the code "is not advanced enough". And as Joy says, the model should reproduce "all quantum correlations".

Let's call it "the counter challenge". Whenever you have a submission ready, open a new thread.

Splendid. This is what we're waiting for. So far, Michel has shown that he can do what Pearle did in 1970 and what Larsson and Gill showed could be done in 2004. What is the point of writing computer simulations which we already know are easy to do? Let's see a simulation of the experiment which Bell proposed in "Bertlmann's socks": no coincidence loophole, no detection loophole. I heard last week at Växjö that the experimentalists expect to have done it one year from now (though it will probably take a few more months to get published).

I suspect that that experiment will get a Nobel prize for whoever does it.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: minkwe's challenge

Postby gill1109 » Mon Jun 16, 2014 10:48 pm

Ben6993 wrote:To my mind, laboratory space is the spatial 3D of spacetime (x,y,z,t) but particles live in more dimensions than that. For me, flatland exists in the laboratory but particles do not live only in flatland. Joy seems to say that there is no difference between laboratory space and a particle's space, and that the one, common space is more than the spatial 3D of spacetime. And that implies that all flatland calculations are suspect. Going back to Susskind's 1967 paper, as there is a 4pi periodicity to electron rotations in the laboratory space of rotating magnets, why is that ignored in the simulations? I think that Joy does not ignore it, because I can think of 0 to 2pi as being in one trivector while 2pi to 4pi is covered by the trivector of the opposite basis? No doubt not an exact analogy.

Ben, please read "Bertlmann's socks" and explain to me why you think that experimentalists should estimate probabilities in lab experiments by anything other than relative frequencies.

QM tells us how to compute probabilities. In the laboratory we see relative frequencies. They match, wonderfully. You can use whatever fancy maths you like to simulate experiments but your simulations are supposed to simulate what the experimenter sees in the lab.
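The point about probabilities and relative frequencies can be made concrete in a small Python sketch (a hedged illustration only; the sampling scheme below is just the textbook singlet-state distribution, not any of the simulations under dispute here): draw outcome pairs from the QM joint probabilities and check that the relative-frequency estimate of the correlation approaches -cos(theta).

```python
import math
import random

random.seed(2)
theta = math.radians(45)   # angle between detector settings a and b
N = 200_000

# Singlet-state QM probability that the two +/-1 outcomes are equal:
# P(equal) = (1 - cos(theta)) / 2, so E(product) = -cos(theta).
p_equal = (1 - math.cos(theta)) / 2

# Sample outcome pairs and estimate the correlation as the
# relative-frequency average of the products.
total = 0
for _ in range(N):
    if random.random() < p_equal:
        total += 1          # outcomes equal: product +1
    else:
        total -= 1          # outcomes differ: product -1
estimate = total / N

# The estimate matches the QM prediction up to statistical error ~1/sqrt(N).
print(abs(estimate - (-math.cos(theta))) < 0.01)  # True
```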

I recall that Luigi Accardi invented a simulation of a LHV model for the singlet correlations which worked by multiplying the outcomes +/-1 on one side of the experiment by root 2 (to get the CHSH bound 2 to go up to 2 sqrt 2).

Bryan Sanctuary on the other hand wants experimentalists to divide the observed correlations by 2 in order to bring 2 sqrt 2 down to sqrt 2. He has a LHV which generates half the singlet correlations, you see.

If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation.

None of these proposals ever caught on.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: minkwe's challenge

Postby Joy Christian » Tue Jun 17, 2014 12:54 am

gill1109 wrote:If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation.

This is the stupidest comment by Gill I have read so far, and I have read many stupid comments by him about my work: http://libertesphilosophica.info/blog/.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: minkwe's challenge

Postby gill1109 » Tue Jun 17, 2014 1:40 am

Joy Christian wrote:
gill1109 wrote:If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation.

This is the stupidest comment by Gill I have read so far, and I have read many stupid comments by him about my work: http://libertesphilosophica.info/blog/.

This response should be a good reason for everyone to find out for themselves. But I notice I got the year wrong. This is the paper I meant:

http://arxiv.org/abs/1103.1879
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: minkwe's challenge

Postby Joy Christian » Tue Jun 17, 2014 1:47 am

gill1109 wrote:
Joy Christian wrote:
gill1109 wrote:If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation.

This is the stupidest comment by Gill I have read so far, and I have read many stupid comments by him about my work: http://libertesphilosophica.info/blog/.

This response should be a good reason for everyone to find out for themselves. But I notice I got the year wrong. This is the paper I meant:

http://arxiv.org/abs/1103.1879

Mr. Everyone already knows how stupid Gill's comments usually are about my work. But perhaps Gill himself does not know, so here: http://arxiv.org/abs/1203.2529.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: minkwe's challenge

Postby gill1109 » Tue Jun 17, 2014 2:13 am

Joy Christian wrote:
gill1109 wrote:
Joy Christian wrote:gill1109 said "If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation".
This is the stupidest comment by Gill I have read so far, and I have read many stupid comments by him about my work: http://libertesphilosophica.info/blog/.

This response should be a good reason for everyone to find out for themselves. But I notice I got the year wrong. This is the paper I meant:

http://arxiv.org/abs/1103.1879

Mr. Everyone already knows how stupid Gill's comments usually are about my work. But perhaps Gill himself does not know, so here: http://arxiv.org/abs/1203.2529.

Again, Everyone can judge for themselves.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: minkwe's challenge

Postby Ben6993 » Tue Jun 17, 2014 3:31 am

Hi Richard

Richard wrote:
Ben, please read "Bertlmann's socks" and explain to me why you think that experimentalists should estimate probabilities in lab experiments by anything other than relative frequencies.


I have read the 17th June 1980 version of "Bertlmann's socks". Has that version been superseded? If so, I will email you to request access to an up-to-date version. (I do not know how to use PM on this site.)

QM tells us how to compute probabilities. In the laboratory we see relative frequencies. They match, wonderfully. You can use whatever fancy maths you like to simulate experiments but your simulations are supposed to simulate what the experimenter sees in the lab.


I often used frequency distributions. They are great. Inconsistency of measurement can, however, attenuate correlations based on the raw data. But we can probably discount unreliability of measurement here as causing a main effect.

(Just a joke ... ) where is the unreliability in the following table?

Code: Select all
Category of cat           frequency
alive                          7
dead                           6
dead and alive                 4


Which after using a quantum eraser becomes:

Code: Select all
Category of cat           frequency
alive                       5
dead                        6 
dead and alive              6


I did not read Hans De Raedt's (conference?) paper thoroughly, but I noticed he introduced an i-prob. Do you think that the i-prob will be useful with a lab frequency distribution?

If you read Christian's one-page paper from 2008 (which contains in a nutshell material from the first chapter of his book) you'll see that he proposes that experimentalists compute correlations by dividing the observed product of outcomes by bivectorial theoretical standard deviations, resulting in a possibly quaternionic correlation.


Yes, I remember the disputes about the sd divisors in the correlation. If the correlation exists only in higher dimensions then that is fine by me as that higher-dimensional correlation is enough to make the outcomes deterministic.

Joy sees lab space as the same as the higher-dimensional space. I see it as different, though I admit I know relatively nothing! Fred often wrote about the Möbius strip analogy, where an observed left hand could be a genuine left hand on one side of the strip or a genuine right hand on the other side of the strip, the two being confounded in the laboratory. I take that confounding as suggesting that the laboratory space is different from the particle space. At the least, a confounding of the two orientations would make lab observations indicate less than the higher-dimensional truth.

That raises, for me at least, the question of where the hidden variables are sited. The underlying information lies in the higher dimensions. I think that the electron or positron is not in the laboratory space during time of flight, but they are in the higher dimensions. So the hidden variables are not observables (hence the term 'hidden'?). When Alice makes observation A, she does not know if she is measuring a LH positron, RH positron, LH electron or a RH electron. And she does not know if the spatial trivector twist will be + or - nor does she know the particle angle. Which all give rise to uncertainty in whether she will record a 1 or a -1. But there is enough information in the higher dimensions to remove any uncertainty in her measurement (except for her experimental error).

Likewise, the simulation must be done in higher-dimensional mathematical space as the hidden variables do not exist in the lab space. And, Ok, so far there is not agreement on the simulation outcome.

None of these proposals ever caught on.


Not yet.
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: minkwe's challenge

Postby gill1109 » Tue Jun 17, 2014 5:41 am

Ben6993 wrote:Likewise, the simulation must be done in higher-dimensional mathematical space as the hidden variables do not exist in the lab space. And, Ok, so far there is not agreement on the simulation outcome.

Simulation must be done in the space of the model, sure. The QM model predicts laboratory relative frequencies. The lab experiment generates relative frequencies. Experimentalists find out if the two agree. If a theoretician wants a theory tested, then the theory had better predict possible empirically observable data. Remember Popper, falsifiability?
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: minkwe's challenge

Postby minkwe » Sun Dec 04, 2016 8:30 am

gill1109 wrote:Simulation must be done in the space of the model, sure. The QM model predicts laboratory relative frequencies. The lab experiment generates relative frequencies. Experimentalists find out if the two agree. If a theoretician wants a theory tested, then the theory had better predict possible empirically observable data. Remember Popper, falsifiability?

Did I miss this admission from 2 years ago? The contents of this thread seem relevant now that the usual suspects are arguing otherwise at RW.

What are the empirically observable data predicted by Bell's Local Realistic Theory? B+B', B-B'?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: minkwe's challenge

Postby thray » Sun Dec 04, 2016 10:28 am

minkwe wrote:
gill1109 wrote:Simulation must be done in the space of the model, sure. The QM model predicts laboratory relative frequencies. The lab experiment generates relative frequencies. Experimentalists find out if the two agree. If a theoretician wants a theory tested, then the theory had better predict possible empirically observable data. Remember Popper, falsifiability?

Did I miss this admission from 2 years ago? The contents of this thread seem relevant now that the usual suspects are arguing otherwise at RW.

What are the empirically observable data predicted by Bell's Local Realistic Theory? B+B', B-B'?


Thanks, Michel. "Space of the model" is constrained by hardware, not by the program -- Gill knows what a true measure space is, and won't assign one to quantum theory, because that would clearly show that Bell-Aspect is not coherent without a nonlocal model. Proving what one assumed in the first place.

Note to Jay: It is not true that H + T = 1. The variables are mutually dependent -- one has to assign a probability a priori. And in that case, it appears as a number in the closed interval [0,1]. There is no unity in a probabilistic theory -- that was Einstein's objection. Is probability a foundational assumption, or is measure space a foundational assumption?
thray
 
Posts: 143
Joined: Sun Feb 16, 2014 6:30 am
