"Bell's theorem refuted" now published by EPL

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: "Bell's theorem refuted" now published by EPL

Postby Joy Christian » Sat Jun 05, 2021 4:10 am

Austin Fearnley wrote:
Your comment assumes that everyone agrees with you that you have already killed off Bell's Theorem.

I do not assume that everyone agrees with me. On the contrary, very few people agree with me. That is not surprising. New ideas often take decades to sink in.

Austin Fearnley wrote:
Years ago, on the old version of s.p.f., in a discussion of your one-page paper (I think), Jos wrote something along the lines of: if S3 is locally flat, then why should the curvature of the universe affect measurements in a lab? I hope I remember this correctly, as I could probably not find that thread again now. I have still never seen that comment resolved.

The curvature of the Universe is not directly relevant to what happens in a lab, just as the curvature of the Earth is not directly relevant on a golf course. The S^3 geometry involved in my model has to do with the intrinsic curvature, not extrinsic curvature. Recall that S^3 is a spatial part of one of the solutions of Einstein's field equations of GR, which concern the intrinsic geometry of spacetime. I believe that the singlet correlations are not a signature of quantum entanglement, but evidence that the intrinsic geometry of the 3D space is S^3, not R^3.
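For reference, the closed Friedmann solution is the relevant example: its constant-time slices are three-spheres (standard textbook form, quoted here only as background):

```latex
% Closed (k = +1) FLRW metric; each t = const. slice is a three-sphere S^3
ds^2 = -c^2\,dt^2 + a^2(t)\left[ d\chi^2 + \sin^2\!\chi \left( d\theta^2 + \sin^2\!\theta \, d\varphi^2 \right) \right]
```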

Austin Fearnley wrote:
As I understand it, you would accept Bell's Theorem to be true if the universe were to be completely R3.

Yes, that is correct. The traditional interpretation of Bell's theorem would hold if the intrinsic geometry of the Universe were R^3. But the latest evidence suggests that it is S^3, not R^3.

Austin Fearnley wrote:
Going back to Jos's comment, if your + and - refer to the universe's curvature, this assumes that the universe is curving both ways at the same time. (While being locally flat in each case.)

No, + and - in my model do not refer to the curvature of the Universe. They refer to the orientation (or handedness) of S^3, which is a completely different concept from its curvature.
.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: "Bell's theorem refuted" now published by EPL

Postby Austin Fearnley » Sat Jun 05, 2021 5:30 am

Hi Joy

I was referring to intrinsic curvature, if that is what is indicated by the sign of the trivector in S3. Chappell et al. associate the sign of the torsion with the direction of time's arrow. So the doubts are the same: how can two particles (say - & - in your model) be released in the lab into a universe with the wrong arrow of time, while the alternative pairs + & + go into the universe's correct arrow of time? My model has + & - trivectors associated with the particles' 'personal' spacetimes, launched into the universe with the normal time's arrow. My doubts about your model are undiminished, but I doubt that further clarification here will help. Thanks anyway.
Austin Fearnley
 

Re: "Bell's theorem refuted" now published by EPL

Postby Joy Christian » Sat Jun 05, 2021 8:22 am

Austin Fearnley wrote:Hi Joy

I was referring to intrinsic curvature, if that is what is indicated by the sign of the trivector in S3. Chappell et al. associate the sign of the torsion with the direction of time's arrow. So the doubts are the same: how can two particles (say - & - in your model) be released in the lab into a universe with the wrong arrow of time, while the alternative pairs + & + go into the universe's correct arrow of time? My model has + & - trivectors associated with the particles' 'personal' spacetimes, launched into the universe with the normal time's arrow. My doubts about your model are undiminished, but I doubt that further clarification here will help. Thanks anyway.

The sign of the trivector in S^3 does not indicate its curvature but its handedness. S^3 has a constant positive curvature whether it is left-handed, sign(I) = -, or right-handed, sign(I) = +.

Time is not relevant in the EPR-B experiments. The measurement results are observed at spacelike distances within S^3 at equal times. S^3 is a three-dimensional spacelike hypersurface within the four-dimensional space-time, and hence it is a surface of simultaneity. The trivector represents a volume form of S^3. Thus its sign has nothing to do with the arrow of time. It appears that you are talking about a different model altogether.
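In the standard geometric-algebra notation (stated here only as background): the unit trivector is the volume form, its square is fixed, and only its sign changes under a reversal of handedness:

```latex
% Unit pseudoscalar (volume form) of 3D space; its sign encodes orientation, not curvature
I = e_1 \wedge e_2 \wedge e_3 , \qquad I^2 = -1 , \qquad
\{e_1, e_2, e_3\} \;\to\; \{e_1, e_2, -e_3\} \;\Longrightarrow\; I \to -I .
```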
.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Sun Jun 06, 2021 12:13 pm

Austin Fearnley wrote:Well, my particle model allows a classical determination of Malus's Law. A cheat, really, as I reverse-engineer my model from Malus's Law by differentiating cos^2. But I cannot get the Bell correlation from Malus without retrocausality. Malus means measuring one electron twice, in succession, so there is nothing spooky in the spins after successive measurements. Bell involves two measurements on different (entangled) particles, and the entangled state must be engineered or maintained somehow: spookily with QM, and non-spookily with retrocausality. I imagine retrocausality will mean 'no quantum computers'.


Malus doesn't allow measuring one electron twice; that is impossible. The difference between Malus and Bell is just the difference between selecting on the fly and selecting after the fact. In Malus, you pass a stream of particles through one station, which selects a subset and transforms them in some way. Then you pass this new set through a second station, which again selects a subset based on their current properties. You measure the intensity of the particles you have at the end, and it obeys a certain relationship relative to the angle between the two stations. In other words, the relationship tells you how the end result from the second station is related to the transformation/selection that happens at the first.
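A minimal sketch of that two-station, Malus-type setup, assuming ideal polarizers and the standard cos² transmission rule; the function and variable names (polarizer, alpha, beta) are illustrative, not taken from any particular simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def polarizer(angles, setting):
    """Pass a stream of polarization angles through an ideal polarizer.

    Each particle is transmitted with probability cos^2(angle - setting);
    transmitted particles leave with their polarization reset to the setting.
    (Idealised rule, assumed for illustration.)
    """
    keep = rng.random(angles.size) < np.cos(angles - setting) ** 2
    return np.full(keep.sum(), setting)

n = 200_000
alpha, beta = 0.0, np.pi / 4                 # settings of the two stations

stream = rng.uniform(0.0, np.pi, n)          # unpolarized input beam
after_first = polarizer(stream, alpha)       # first station: selects and transforms
after_second = polarizer(after_first, beta)  # second station: selects again

# Intensity surviving the second station, relative to the first ~ cos^2(alpha - beta)
print(after_second.size / after_first.size, np.cos(alpha - beta) ** 2)
```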

For Bell, you take two correlated streams and pass one through station A, and its sister through station B. At each station, the incoming stream is selected/transformed based on the setting. Then after the fact, you use the results of the measurements at A to match/select the results at B or vice versa in order to decide which particle at A corresponded with which particle at B and vice versa. After doing this, you end up with a relationship that is based on the angle between A and B. Bell is just a clever Malus experiment with two correlated streams, and two remote stations instead of two local ones. In the local Malus experiment, the information is carried from one station to the next by the transformed/selected particles themselves during the experiment. In the remote version (Bell), the information transfer is accomplished by the matching of individual particle results after the fact. Without the matching step, there are no results. Show me a Bell experiment that did not involve matching.
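And a sketch of the after-the-fact matching step described above. The detection-time model, the window width, and the random outcomes are placeholder assumptions; the only point is that the pairing of A-events with B-events happens in the analysis, not at the source:

```python
import numpy as np

rng = np.random.default_rng(1)

def station(emission_times, jitter=1.0):
    """One detection record per emitted particle: (time, outcome).

    The exponential timing jitter and the random +/-1 outcomes are placeholders;
    only the matching step below matters for the illustration.
    """
    times = emission_times + rng.exponential(jitter, emission_times.size)
    outcomes = rng.choice([-1, 1], emission_times.size)
    return times, outcomes

emissions = np.sort(rng.uniform(0.0, 10_000.0, 5_000))   # shared source events
tA, oA = station(emissions)
tB, oB = station(emissions)

orderB = np.argsort(tB)                   # B-events sorted by detection time
tB, oB = tB[orderB], oB[orderB]

# After-the-fact matching: pair each A-event with its nearest B-event in time,
# and keep the pair only if the two detection times fall within one window.
window = 1.0
idx = np.clip(np.searchsorted(tB, tA), 1, tB.size - 1)
nearest = np.where(np.abs(tB[idx] - tA) < np.abs(tB[idx - 1] - tA), idx, idx - 1)
kept = np.abs(tB[nearest] - tA) < window

E = np.mean(oA[kept] * oB[nearest][kept])   # correlation over matched pairs only
print(int(kept.sum()), "matched pairs out of", tA.size, "A-events; E =", round(E, 3))
```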

The delayed-choice quantum eraser experiment is very similar in this sense.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby Austin Fearnley » Mon Jun 07, 2021 1:16 am

Minkwe replied:
Malus doesn't allow measuring one electron twice; that is impossible. The difference between Malus and Bell is just the difference between selecting on the fly and selecting after the fact. In Malus, you pass a stream of particles through one station, which selects a subset and transforms them in some way. Then you pass this new set through a second station, which again selects a subset based on their current properties. You measure the intensity of the particles you have at the end, and it obeys a certain relationship relative to the angle between the two stations. In other words, the relationship tells you how the end result from the second station is related to the transformation/selection that happens at the first.

For Bell, you take two correlated streams and pass one through station A, and its sister through station B. At each station, the incoming stream is selected/transformed based on the setting. Then after the fact, you use the results of the measurements at A to match/select the results at B or vice versa in order to decide which particle at A corresponded with which particle at B and vice versa. After doing this, you end up with a relationship that is based on the angle between A and B. Bell is just a clever Malus experiment with two correlated streams, and two remote stations instead of two local ones. In the local Malus experiment, the information is carried from one station to the next by the transformed/selected particles themselves during the experiment. In the remote version (Bell), the information transfer is accomplished by the matching of individual particle results after the fact. Without the matching step, there are no results. Show me a Bell experiment that did not involve matching.

That was my sloppy language. I know that you cannot measure the same electron twice and I did not mean it literally. I meant it in the sense that I still think of myself as the same person that I was when I played rugby in my youth. I cannot do that now as all the interactions I have had since my youth have changed me too much. Also, most of one's body content is replaced after seven years.

In my mind's eye, the electron and photon are described by my preon model. A left-handed electron going into an interaction has spin -0.5 and weak isospin -0.5. After the interaction, that electron has gone and has been replaced by a right-handed electron with spin +0.5 and zero 'weak isospin'. So, yes, I do not really have the same electron being measured twice, and I agree with your description of Malus. I think of it in terms of polaroid sunglasses. I could do the Malus experiment with two polaroids put in front of one initial beam. The first polaroid (say Alice's sunglasses) cuts the beam intensity to 50%. The second polaroid (Bob's) cuts the 50% down even further, depending on the difference between the two polarising angles alpha and beta. So Malus has two successive measurements on one initial beam.
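In formula form, the end point of that two-polaroid picture is the standard Malus result for an unpolarised input beam (quoted only as the target relationship):

```latex
% Unpolarised beam of intensity I_0 through ideal polarisers at angles alpha, then beta
I \;=\; \frac{I_0}{2}\,\cos^2(\alpha - \beta)
```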
[An aside: another problem is seeing the Feynman diagram as simply a pathway for all the particles, as weak isospin is not conserved in an interaction. IMO a lowest-generation Higgs is involved in the interaction where an electron emits a photon. Behind the scenes, the Higgs supplies or removes the weak isospin. The Higgs must be there in the more complete QFT of the interaction, but I have not checked.
In my preon model, a LH electron contains preon A while the RH electron contains preon B. The photon contains B but not A. The Higgs contains both A and B. So the photon cannot add or take away the preon A in the interaction, but the Higgs can. Also, the generation-1 Higgs is identical to the photon but with one B replaced by an A. As the lowest-generation Higgs cannot decay into smaller particles, it will not turn up at CERN in the sort of searches they are conducting.]

Next, on to Bell. Yes, Bell is based on simultaneous measurements on two different but correlated beams. I am only interested in a 'quarter' Bell experiment where Alice has alpha = 0 deg in two dimensions and Bob has beta = 45 deg. In my quest for the theoretical minimum, I do not bother with cases where Alice makes measurements in more than one spin direction, with randomised detector settings, or with trying to prove Bell's Theorem. IMO, my 'quarter' Bell simulation can only reach a correlation of 0.707 if exact projections are used. Integer projections can never work, because the very definition of cos theta requires exact projections. And I have recently realised that 0.707 is not even obtainable under QM in a generalised context. Unfortunately, I cannot follow the selection principles, especially the modern ones involving entangled states, so I can see that they have scope to be used (not consciously) to hide the truth. I did once convert one of De Raedt's Fortran programs into VB for the case of time-window selections, but it did not really help my understanding of the physics. And although I do not disagree with you on this, I am trying to get 0.707 by fair means.

My 2D simulations always had random or systematic full coverage of the incoming particle polarisations, or of the vectors of hidden variables, and all of them were measured and entered into the correlation calculations. One cannot fairly convert that situation into a correlation of 0.707. Well, I can do it using retrocausality, which I think is what is happening in reality, but other than that it cannot be done fairly.
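A minimal sketch of this kind of 2D, full-coverage, everything-counted simulation (a toy version with exact sign projections, not the actual simulation code; the uniform distribution of the hidden angle and the outcome rule are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 500_000
lam = rng.uniform(0.0, 2 * np.pi, n)       # hidden 2D polarisation angle, full coverage
alpha, beta = 0.0, np.pi / 4               # Alice at 0 deg, Bob at 45 deg

# Exact sign projections of the shared hidden vector onto the detector directions;
# the minus sign on Bob's side makes the pair anti-correlated.  Every event is kept.
A = np.sign(np.cos(lam - alpha))
B = -np.sign(np.cos(lam - beta))

E_local = np.mean(A * B)                   # nothing discarded, nothing matched
E_qm = -np.cos(alpha - beta)               # singlet prediction at the same settings

print(round(float(E_local), 3), round(float(E_qm), 3))   # ~ -0.5 versus ~ -0.707
```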

Esail changes an expected particle polarisation before measurement by Bob from beta to alpha. This effectively enforces a Malus angle of alpha-beta on the experiment and converts a Bell experiment into a Malus experiment. But I like this image, as it is a way to think about what QM may be doing in its spooky action at a distance. I came across this paper at a time when I was wrestling with how QM calculates the 0.707 correlation using generalised incoming particle polarisations.

I realised next that Susskind's solution for QM was not generalised. His proof used Alice measuring along |up> while the incoming entangled singlet was |up, down> - |down, up>. Now that is blatantly a Malus experiment context. Alice measures along up and the incoming particle is a mixture of up and down! And Bob is also measuring a particle that is a mix of up and down and is measuring it at 45 degrees offset. Pure Malus.
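For reference, the standard textbook singlet expectation that the 0.707 comes from (general form, not Susskind's derivation reproduced verbatim):

```latex
% Singlet state and its spin correlation for detector directions a and b
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr),
\qquad
E(\mathbf{a},\mathbf{b}) = \langle\psi|\,(\boldsymbol{\sigma}\!\cdot\!\mathbf{a})\otimes(\boldsymbol{\sigma}\!\cdot\!\mathbf{b})\,|\psi\rangle
= -\,\mathbf{a}\cdot\mathbf{b} = -\cos 45^\circ \approx -0.707 \ \text{at the settings above.}
```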

I won't bother to write down the QM formula for the general case, but Susskind does give the details necessary to write it down. Resolving that general case into 0.707 does not look possible to me, in the same way that I cannot make a simple but generalised simulation give more than 0.5 correlation (without using retrocausality). It is not possible, as it is not convertible into a 'Malus' end point.

There were some discussions online in the past in which actions at the detectors were seen as semi-mystical. I have a classical model for electron and photon spins which can find S-G outcomes. Nothing mystical about it. (I do not mean the measurement problem here, simply finding S-G outcomes using a classical model.)
Austin Fearnley
 

Re: "Bell's theorem refuted" now published by EPL

Postby Austin Fearnley » Mon Jun 07, 2021 2:26 am

Sorry, I made an error above.
I previously wrote in error:
Esail changes an expected particle polarisation before measurement by Bob from beta to alpha

Bob's particle polarisation is changed, just before measurement by Bob, from whatever the original incoming generalised entangled state was to alpha or minus alpha. This change converts the generalised case into a Malus case.
Austin Fearnley
 

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Mon Jun 07, 2021 8:03 pm

minkwe wrote:Show me a Bell experiment that did not involve matching.

The four experiments of 2015. Predefined time intervals correspond to one another, independently of what happens in each time interval at each location. The Delft and Munich experiments are three-party Bell-type experiments. One looks at the joint probabilities of Alice and Bob’s outcomes, given Charlie’s, and given their settings. NIST and Vienna are more traditional two-party experiments, but again, the statistics gathered are statistics of outputs from predefined corresponding time slots.

Show me a faithful Monte Carlo simulation of such an experiment!
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Fri Jun 11, 2021 4:18 pm

gill1109 wrote:
minkwe wrote:Show me a Bell experiment that did not involve matching.

The four experiments of 2015. Predefined time intervals correspond to one another, independently of what happens in each time interval at each location. The Delft and Munich experiments are three-party Bell-type experiments.

Oh, so what was the purpose of the third station, if not to match the other two? :roll:

Pre-defined time intervals are a form of matching which is almost identical to coincidence matching. This has been discussed extensively on PubPeer.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Fri Jun 11, 2021 4:32 pm

gill1109 wrote:Show me a faithful Monte Carlo simulation of such an experiment!


https://arxiv.org/abs/1608.02404
Abstract: It is shown that the data of the Hensen et al. Bell test experiment exhibits anomalous postselection that can fully account for the apparent violation of the CHSH inequality. A simulation of a local realist model implementing similar postselection is presented. The model produces an apparent violation of CHSH indistinguishable from that of the experiment. The experimental data also appears to violate no-signaling, and it is shown how postselection can produce an artifactual violation of no-signaling. The Hensen et al. experiment does not succeed in rejecting classical locality and therefore does not confirm quantum nonlocality.

https://arxiv.org/abs/1409.5158
Abstract: The Clauser-Horne (CH) inequality can validly test aspects of locality when properly applied. This paper analyzes a recent CH-based EPRB experiment, the Christensen et al. experiment. Full details of the data analysis applied to the experiment are given. An alternative analysis is also presented that considers the role of accidental coincidences and confirms and justifies the main analysis. It is shown that the experiment confirms locality and disconfirms the quantum joint prediction. To make sense of this surprising finding, the conclusion presents a new rational interpretation of the EPR paradox. The paper also contributes to promulgation of robust and correct data analysis by describing the important degrees of freedom that affect the analysis, and that must be addressed in the analysis of any EPRB experiment.

https://arxiv.org/abs/1507.06231
Abstract: Thanks to its immunity to the detection loophole, the Clauser-Horne/Eberhard inequality plays an important role in tests of locality and in certification of quantum information protocols based on entanglement. I describe a local model that violates the inequality using a plausible mechanism relying upon a parameter of the apparatus, the source emission rate. The effect is generated through the analysis of time-tagged data using a standard windowed coincidence counting method. Significantly, the detection times here are not functions of the measurement settings, i.e., the fair coincidences assumption is satisfied. This finding has implications for the design and interpretation of experiments and for quantum information protocols, as it shows that the coincidence window mechanism cannot be eliminated by a demonstration of independence of the detection times and settings. The paper describes a reliable coincidence counting method and shows that it delivers an accurate count of true coincidences. Recent experimental tests of local realism based on the Clauser-Horne/Eberhard inequality are considered and it is shown that in one case (Christensen et al.) the emission rate is appropriately limited to ensure valid counting, and the data supports locality; in a second case (Giustina et al.) the experiment neglects to appropriately limit the emission rate, and the claimed violation can be accounted for locally.

https://arxiv.org/abs/2005.03401
Abstract: We use discrete-event simulation to construct a subquantum model that can reproduce the quantum-theoretical prediction for the statistics of data produced by the Einstein-Podolsky-Rosen-Bohm experiment and an extension thereof. This model satisfies Einstein's criterion of locality and generates data in an event-by-event and cause-and-effect manner. We show that quantum theory can describe the statistics of the simulation data for a certain range of model parameters only.

https://arxiv.org/abs/2101.05370
Abstract: A 2015 experiment by Hanson and Delft colleagues provided new confirmation that the quantum world violates the Bell inequalities, being the first Bell test to close two known experimental loopholes simultaneously. The experiment was also taken to provide new evidence of quantum nonlocality. Here we argue for caution about the latter claim. The Delft experiment relies on entanglement swapping, and our main claim is that this geometry introduces new loopholes in the argument from violation of the Bell inequalities to nonlocality. In the absence of retrocausality, the sensitivity of such experiments to these new loopholes depends on the temporal relation between the entanglement swapping measurement C and the two measurements A and B between which we seek to infer nonlocality. The loopholes loom large if the C is in the future of A and B, but not if C is in the past. The Delft experiment itself is the intermediate case, in which the separation is spacelike. We argue that this leaves it vulnerable to the new loopholes, unable to establish conclusively that it avoids them. We also discuss the implications of permitting retrocausality for the issue of causal influence across entanglement swapping measurements.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Sun Jun 13, 2021 5:36 am

minkwe wrote:
gill1109 wrote:
minkwe wrote:Show me a Bell experiment that did not involve matching.

The four experiments of 2015. Predefined time intervals correspond to one another, independently of what happens in each time interval at each location. The Delft and Munich experiments are three-party Bell-type experiments.

Oh, so what was the purpose of the third station, if not to match the other two? :roll:
Pre-defined time intervals are a form of matching which is almost identical to coincidence matching. This has been discussed extensively on PubPeer.

Pre-defined time intervals are not *exactly* identical to coincidence matching, and not almost identical either. They are only superficially similar. In actual fact, they are fundamentally different.
The Delft and Munich experiments have predefined time-slots, and constitute three-party experiments, in which one studies the joint distribution of Alice and Bob's outcomes given Charlie's, conditional on Alice and Bob's settings. Charlie has four outcomes and only one setting. Alice and Bob have two outcomes and two settings. 8-)
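A bare-bones sketch of that bookkeeping (the outcome generation below is a pure placeholder; only the indexing by predefined slot, the two settings, and Charlie's four outcomes reflects the description above):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

n_slots = 20_000           # predefined time slots, fixed before the run
counts = Counter()

for _ in range(n_slots):
    a_set = int(rng.integers(2))        # Alice: two settings
    b_set = int(rng.integers(2))        # Bob: two settings
    c_out = int(rng.integers(4))        # Charlie: one fixed setting, four outcomes
    a_out = int(rng.choice([-1, 1]))    # placeholder outcomes, not a physical model
    b_out = int(rng.choice([-1, 1]))
    counts[(c_out, a_set, b_set, a_out, b_out)] += 1

def conditional(c_out, a_set, b_set):
    """Joint distribution of Alice's and Bob's outcomes, conditional on Charlie's
    outcome and on the two settings -- the statistic described above."""
    sub = {k[3:]: v for k, v in counts.items() if k[:3] == (c_out, a_set, b_set)}
    total = sum(sub.values())
    return {k: v / total for k, v in sub.items()} if total else {}

print(conditional(c_out=0, a_set=0, b_set=1))
```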
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Sun Jun 13, 2021 5:44 am

minkwe wrote:
gill1109 wrote:Show me a faithful Monte Carlo simulation of such an experiment!


https://arxiv.org/abs/1608.02404
https://arxiv.org/abs/1409.5158
https://arxiv.org/abs/1507.06231
https://arxiv.org/abs/2005.03401
https://arxiv.org/abs/2101.05370

You did not show me any faithful Monte-Carlo simulation of experiments of either the Vienna/NIST type or the Delft/Munich type. Your references are interesting but rather old. The most recent one (Price and Wharton) has not appeared in any journal yet. I will give it a look, but I don't expect anything new. The issues mentioned in the abstract have already been discussed exhaustively in the published literature.

Donald Graft points out known defects of the Delft and Munich experiments: too-small sample sizes and possibly defective random number generators. This does not show that that *type of experiment* is flawed. These in-principle avoidable experimental defects are known and are being addressed by experiments being conducted now. The experiments of the Vienna/NIST type could obviously be improved (their deviations from the Bell inequality are physically rather small, even though statistically as significant as one could wish). These experiments are also being re-done as we write.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Sun Jun 13, 2021 3:44 pm

gill1109 wrote:You did not show me any faithful Monte-Carlo simulation of experiments of either the Vienna/NIST type or the Delft/Munich type.

:shock: No matter what I show, you will dismiss it as not being "faithful". Once you place yourself as the gatekeeper of accepted evidence, there is no point in trying to convince you out of your entrenchment.

Such gatekeepers have been known to edit inconvenient information out of public records in order to generate a self-fulfilling prophecy. It usually goes as follows:

1. Make a dubious claim
2. Work tirelessly to prevent the publication of contradictory evidence/arguments by zealously writing letters to editors, demanding retractions, and editing inconvenient truths out of encyclopedias
3. Claim that no "published" evidence contradicting their position exists
4. ...
5. Profit
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Sun Jun 13, 2021 8:56 pm

Austin Fearnley wrote:Thanks. I looked at a 2019 paper on 'heralding' by Zhao et al but found it far too strange for me to learn anything. The practical side of Bell is a mystery to me.

It is made obscure rather intentionally in my opinion. For example:

Hensen et al. 2016 wrote:Second, we set larger (i.e. less conservative) heralding windows at the event-ready detector in order to increase the data rate compared to the first experiment. We start the heralding window about 700 picoseconds earlier, motivated by the data from the first test. We predefine a window start of 5426.0 ns after the sync pulse for channel 0 and 5425.1 ns for channel 1. We set a window length of 50 ns.


They talk of heralding windows, but if you dig deeper you realize they have just cleverly transformed the coincidence windows of yesteryear into "heralding windows". None of the sheeple touting their results asked them how come the data rate increases if they increase the "heralding" window? The same thing applies to the Vienna people: they now call it "Pockels cells". Nobody asked them how come the results change when the size of the "Pockels cell" changes? All they needed was a bunch of misguided people to erroneously claim that what they are doing is legitimate. However, they are all just doing cleverly disguised coincidence matching. They think that because they don't use the word "coincidence" it means they have closed the coincidence loophole.
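A toy illustration of the window-length dependence being pointed at here. The window start times and the 50 ns length are the values quoted from Hensen et al. above; the click-time distribution is entirely made up, and the only point is that a longer window accepts more heralds:

```python
import numpy as np

rng = np.random.default_rng(5)

# Window parameters quoted from Hensen et al. above (nanoseconds after the sync pulse)
window_start = {0: 5426.0, 1: 5425.1}     # channel -> start of heralding window
full_length = 50.0

def heralds_accepted(length, n=100_000):
    """Count herald clicks falling inside the window on each channel.

    The Gaussian click-time distribution is invented for the illustration;
    the only point is that a wider window accepts more events.
    """
    accepted = 0
    for start in window_start.values():
        clicks = rng.normal(start + 25.0, 40.0, n)   # fake arrival times
        accepted += int(np.count_nonzero((clicks >= start) & (clicks < start + length)))
    return accepted

for length in (10.0, 30.0, full_length):
    print(f"window length {length:>4} ns -> {heralds_accepted(length)} accepted heralds")
```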
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby Austin Fearnley » Mon Jun 14, 2021 1:14 am

minkwe wrote:
Nobody asked them how come the results change when the size of the "Pockels cell" changes? All they needed was a bunch of misguided people to erroneously claim that what they are doing is legitimate. However, they are all just doing cleverly disguised coincidence matching. They think that because they don't use the word "coincidence" it means they have closed the coincidence loophole.

Presumably you are associating the increased 'data rate' with increased mis-matching due to larger windows. I am OK with coincidence windows in general. What turns me off is having extra researchers, e.g. Charlie or Victor, which greatly complicates the situation and the flight paths. I notice that Price and Wharton's recent arXiv paper mentions that the added complexity can undermine (a retrocausal explanation of?) the Bell experiment in some instances.

IMO the retrocausal explanation of a very simple Bell experiment simulation is trivially true. I do not wish to spend time proving the truth of the retrocausality explanation in every possible complicated experimental context using many researchers adding error opportunities. I like error to be explained in an experiment but not by inserting more types of error from complicated sources.

In the same way that I do not need to cover every single complicated new experiment (some of which are being re-done as we write, according to Richard), because I have a working retrocausal simulation for the simplest experiment, I get turned off by retrocausal papers which start with abbots throwing stones through convent windows. [Though no doubt I ought to read many more of them :( ]

I did grind through a recent online interview on quantum cause and effect (Leifer interviewing Spekkens), touching on abbots' misdemeanours, which was less tortuous than reading a philosophical text:
https://www.youtube.com/watch?v=xHGgxG0uccE

For my theoretical minimum, the cause and effect issue of retrocausality is at an S-G measurement by Alice. Is the beam of positrons polarised along alpha before Alice measures or after? Take 'before' to be as measured by Alice's clock. I cannot think how to prove the answer to that.

Anyway, I was led to retrocausality by my Preon model, not by philosophers. I wrote in 2017: https://vixra.org/pdf/1709.0021v1.pdf
"And the requirement for QCD-red to have a time arrow
representing the direction of flow of the 'red' brings me to imagine the QCD-red
dimension as a compactified version of our space plus time with a time’s arrow. "

My preon model has micro-dimensions with many combinations/aggregates of alignments of + and - time directions within a single fundamental particle. I only realised in 2020 that retrocausality of antiparticles could explain the Bell QM correlation.
Austin Fearnley
 

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Mon Jun 14, 2021 6:44 am

minkwe wrote:
gill1109 wrote:You did not show me any faithful Monte-Carlo simulation of experiments of either the Vienna/NIST type or the Delft/Munich type.

:shock: No matter what I show, you will dismiss it as not being "faithful". Once you place yourself as the gatekeeper of accepted evidence, there is no point in trying to convince you out of your entrenchment.

Such gatekeepers have been known to edit inconvenient information out of public records in order to generate a self-fulfilling prophecy. It usually goes as follows:

1. Make a dubious claim
2. Work tirelessly to prevent the publication of contradictory evidence/arguments by zealously writing letters to editors, demanding retractions, and editing inconvenient truths out of encyclopedias
3. Claim that no "published" evidence contradicting their position exists
4. ...
5. Profit

I think that your problem is, Michel, that a faithful (pre-defined time-slots; random setting choices; binary outcomes; strictly enforced locality) Monte-Carlo simulation of that type of experiment which moreover reliably violates appropriate Bell inequalities is impossible. I suspect you have realised that, and that is the reason you resort to insinuations as to my personal integrity.
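For concreteness, here is a bare skeleton of what those four constraints look like in code. The two station functions are placeholders (any local functions can be substituted); with this particular placeholder model the CHSH statistic comes out at the local bound of 2 in absolute value (up to sampling noise), not at the quantum value of 2√2 ≈ 2.83:

```python
import numpy as np

rng = np.random.default_rng(6)

# Placeholder local model: each station sees only its own setting and the shared
# hidden variable lam.  Any local functions can be substituted here.
def alice(setting, lam):
    return 1 if np.cos(lam - setting) >= 0 else -1

def bob(setting, lam):
    return -1 if np.cos(lam - setting) >= 0 else 1

settings_a = (0.0, np.pi / 2)            # Alice's two possible settings
settings_b = (np.pi / 4, 3 * np.pi / 4)  # Bob's two possible settings

n_slots = 100_000                        # pre-defined time slots
records = []
for _ in range(n_slots):
    i = int(rng.integers(2))             # random setting choices, fresh each slot
    j = int(rng.integers(2))
    lam = rng.uniform(0.0, 2 * np.pi)    # shared hidden variable from the source
    # strictly enforced locality: alice() never sees j, bob() never sees i
    records.append((i, j, alice(settings_a[i], lam), bob(settings_b[j], lam)))
records = np.array(records)

def E(i, j):
    sel = (records[:, 0] == i) & (records[:, 1] == j)
    return np.mean(records[sel, 2] * records[sel, 3])

# Standard CHSH combination; any local model is bounded by |S| <= 2, while the
# quantum singlet prediction at these settings is |S| = 2*sqrt(2) ~ 2.83.
S = E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)
print("CHSH S =", round(float(S), 3))
```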

Please show me something. I will look at it and tell you if I have any issues with it.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Mon Jun 14, 2021 1:17 pm

gill1109 wrote:I think that your problem is, Michel, that a faithful (pre-defined time-slots; random setting choices; binary outcomes; strictly enforced locality) Monte-Carlo simulation of that type of experiment which moreover reliably violates appropriate Bell inequalities is impossible. I suspect you have realised that, and that is the reason you resort to insinuations as to my personal integrity.

I've provided examples above but you claim they are not "faithful". Now if you are the judge of what is "faithful", then what's the point. Is it not a fact that you are just one of many Bell proponents who actively try to prevent the publication of opposing viewpoints?

I'm a bit amused that you still believe that what you call a faithful simulation, i.e. "pre-defined time-slots; random setting choices; binary outcomes; strictly enforced locality", is impossible. Writing such a simulation is so trivial it's a waste of my time. What I'm more certain about is that if I were to do it, you would re-define "faithful" to be something else, as we've seen in the past, and then we'd be back at the beginning with many hours wasted. In fact, I'm certain that you are ready to amend your definition of "faithful" in 5, 4, 3, 2 ... posts.

So no, I can't realize something I believe is false. The simulation of such experiments is not impossible. What is impossible, is the idea that Bell proponents can be convinced out of their position.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby FrediFizzx » Mon Jun 14, 2021 1:37 pm

minkwe wrote: ... What is impossible, is the idea that Bell proponents can be convinced out of their position.

You can say that again. However, you are much too polite. I would say "Bell fanatics" instead of "Bell proponents". And yes..., they will keep shifting the goalposts no matter what you do.
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Tue Jun 15, 2021 5:31 am

minkwe wrote:
gill1109 wrote:I think that your problem is, Michel, that a faithful (pre-defined time-slots; random setting choices; binary outcomes; strictly enforced locality) Monte-Carlo simulation of that type of experiment which moreover reliably violates appropriate Bell inequalities is impossible. I suspect you have realised that, and that is the reason you resort to insinuations as to my personal integrity.

I've provided examples above but you claim they are not "faithful". Now if you are the judge of what is "faithful", then what's the point. Is it not a fact that you are just one of many Bell proponents who actively try to prevent the publication of opposing viewpoints?

I'm a bit amused that you still believe that what you call a faithful simulation, i.e. "pre-defined time-slots; random setting choices; binary outcomes; strictly enforced locality", is impossible. Writing such a simulation is so trivial it's a waste of my time. What I'm more certain about is that if I were to do it, you would re-define "faithful" to be something else, as we've seen in the past, and then we'd be back at the beginning with many hours wasted. In fact, I'm certain that you are ready to amend your definition of "faithful" in 5, 4, 3, 2 ... posts.

So no, I can't realize something I believe is false. The simulation of such experiments is not impossible. What is impossible, is the idea that Bell proponents can be convinced out of their position.

I agree that it’s trivial to write such a simulation. What I say is that It’s impossible that the simulation will reliably, reproducibly, violate Bell inequalities. Let’s behave like scientists. We disagree on a certain issue. I’m extremely curious to know how you think you can do it. Please give us some example code. I don’t plan to move any goal-posts. I suspect you still haven’t realised where the goal-posts stand. I have described where my goal-posts stand in several publications.

I’m not ready to believe that firmly established mathematical theorems are false, without seeing very strong evidence.

Notice, my challenge to you is not about simulating past experiments, it’s about simulating the experiment envisioned by Bell in “Bertlmann’s Socks”. This is the experiment that four experimental groups around the world tried to do, with varying degrees of success, in 2015. Delft and Munich had much too small sample sizes. Vienna and NIST got tiny deviations (in the physical sense) from the Bell bound. There are suspicions that their random number generators were biased. You are just “anti” anyone who is “pro” Bell. Since your two simulation models of many years ago I haven’t noticed any new contributions from you.

BTW if you think Wikipedia pages can be improved, go ahead and do it.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: "Bell's theorem refuted" now published by EPL

Postby minkwe » Tue Jun 15, 2021 6:30 pm

gill1109 wrote:I agree that it’s trivial to write such a simulation. What I say is that It’s impossible that the simulation will reliably, reproducibly, violate Bell inequalities. Let’s behave like scientists. We disagree on a certain issue. I’m extremely curious to know how you think you can do it. Please give us some example code. I don’t plan to move any goal-posts. I suspect you still haven’t realised where the goal-posts stand. I have described where my goal-posts stand in several publications.

The ground can be shifted too.


I’m not ready to believe that firmly established mathematical theorems are false, without seeing very strong evidence.

This statement by itself is a goalpost-shifting move. Either you are being disingenuous or just severely mistaken. Bell's theorem is not a firmly established mathematical theorem. Half of the theorem (Bell's inequalities) is firmly established. The other half, and probably the most significant part of it (violation by QM), is anything but a theorem, as we've been explaining for many years. You must have realized this by now, so why the subterfuge? I believe Fred has even reminded you quite a few times that NOTHING can violate Bell's inequalities.

Notice, my challenge to you is not about simulating past experiments, it’s about simulating the experiment envisioned by Bell in “Bertlmann’s Socks”.

:lol: First you said the recent experiments (NIST, Vienna) can't be faithfully simulated. Now you've amended your goalpost to "the experiment envisioned by Bell in ...". Do you see why nobody takes this seriously?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: "Bell's theorem refuted" now published by EPL

Postby gill1109 » Thu Jun 17, 2021 8:38 am

minkwe wrote:
gill1109 wrote:I agree that it’s trivial to write such a simulation. What I say is that It’s impossible that the simulation will reliably, reproducibly, violate Bell inequalities. Let’s behave like scientists. We disagree on a certain issue. I’m extremely curious to know how you think you can do it. Please give us some example code. I don’t plan to move any goal-posts. I suspect you still haven’t realised where the goal-posts stand. I have described where my goal-posts stand in several publications.

The ground can be shifted too.


I’m not ready to believe that firmly established mathematical theorems are false, without seeing very strong evidence.

This statement by itself is a goalpost-shifting move. Either you are being disingenuous or just severely mistaken. Bell's theorem is not a firmly established mathematical theorem. Half of the theorem (Bell's inequalities) is firmly established. The other half, and probably the most significant part of it (violation by QM), is anything but a theorem, as we've been explaining for many years. You must have realized this by now, so why the subterfuge? I believe Fred has even reminded you quite a few times that NOTHING can violate Bell's inequalities.

Notice, my challenge to you is not about simulating past experiments, it’s about simulating the experiment envisioned by Bell in “Bertlmann’s Socks”.

:lol: First you said the recent experiments (NIST, Vienna) can't be faithfully simulated. Now you've amended your goalpost to "the experiment envisioned by Bell in ...". Do you see why nobody takes this seriously?


Michel, I have told you what the blemishes in the 2015 experiments were. I challenge you to simulate better ones. I think it’s impossible. You seem to be saying that it would be a piece of cake. You are bluffing. You are avoiding discussion of the exact location of the goalposts, but I have published several papers setting out their location as precisely as I could.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
