Coming Soon!

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: Coming Soon!

Postby Joy Christian » Sun Sep 26, 2021 1:09 am

FrediFizzx wrote:@gill1109 Nothing can be thrown away until the analysis section does it. I didn't write the analysis section. John did a very long time ago.

Well, the analysis section was originally written by Michel Fodje in his original simulation of 2013, and it is the same one used by Aspect et al. in their experiments in the 1980s.
.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 2:30 am

Joy Christian wrote:
FrediFizzx wrote:@gill1109 Nothing can be thrown away until the analysis section does it. I didn't write the analysis section. John did a very long time ago.

Well, the analysis section was originally written by Michel Fodje in his original simulation of 2013, and it is the same one used by Aspect et al. in their experiments in the 1980s.

Yes, and it is an analysis which is no longer used since 2015, when experiments were done in which there were no "no-show" events. Every outcome is +1 or -1. "zero" outcomes (no particle detected?) no longer exist.

If one wants to take account of "no shows" you have several options.

One is to make a further *untestable* assumption that particles arriving or not is a process completely independent of the settings and of hidden variables in the physics. This is called the "no enhancement" hypothesis. It might be a physically reasonable assumption. But if you are investigating fundamental principles of physics, then I think you should not use it. Using it, introduces a loophole! The detection loophole, in fact.

Another option is to use Larsson's adjusted CHSH inequality, raising the bound "2" to some amount depending on the detection rates. It may or may not be useful. If detection rate is too low, the adjusted bound is bigger than 2 sqrt 2.

Yet another option is to used generalized Bell inequalities specially developed for situations where outcomes are not binary.

Fred's detection rate is too low. It is already well known that local realism can imitate QM correlations at such a detection rate.

NB I define the detection rate how experimental physicists do it today, not how Fred defines it. According to Fred, a zero outcome is also a detection. By modern standards, a zero outcome is just a label used to say that no particle was detected. In present-day experiments, the experimental unit is a (pre-defined) time window. Thanks to present-day technology, one can make sure that *every* time window produces either +1 or -1 *and* at the same time, one can violate CHSH. Before 2015 nobody could do this. Various new technologies and innovations made the 2015 experiments possible. To be sure, they weren't perfect, and right now experimentalists are doing better ones still. Expect new, more perfect, loophole-free experiments within a year or two.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Sun Sep 26, 2021 2:47 am

@gill1109 There is no freakin' detection loophole in this model. Stop making up nonsense. Nothing can be taken out until the analysis process sorts it out. Only valid A and B matches get through the analysis.
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 3:08 am

FrediFizzx wrote:@gill1109 There is no freakin' detection loophole in this model. Stop making up nonsense. Nothing can be taken out until the analysis process sorts it out. Only valid A and B matches get through the analysis.

I know. I'm not making up nonsense. Your analysis sorts it out, I agree. Your analysis is not the analysis done in present-day experiments. It might have worked for Aspect in the 80's and for Weihs in 1998 but it is no longer acceptable and no longer necessary.

I'd like to see you do a simulation of a 2015+ experiment.

Nobody ever did yet! You would become very famous if you managed to simulate the 2015 experiments in a local-realistic way.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 4:00 am

This code shows how the rate of a successful outcome +/-1 on both sides of the experiment depends on Bob's setting, while Alice's remains fixed.

If you observed this in an experiment, you would see that something was badly wrong.

Code: Select all
set.seed(1234)
M <- 100000
theta <- runif(M, 0, 360)
beta1 <- 0.32
beta2 <- 0.938
phi <- 3
xi <- 0
s1 <- theta
s2 <- theta + 180
cosD <- function(x) cos(pi * x / 180)
sinD <- function(x) sin(pi * x / 180)
lambda1 <- beta1 * cos(theta/phi)^2
lambda2 <- beta2 * cos(theta/phi)^2


rate <- function(a, b){
Aa1 <- ifelse(abs(cosD(a - s1)) > lambda1, sign(cosD(a - s1)), 0)
Aa2 <- ifelse(abs(cosD(a - s1)) < lambda2, sign(sinD(a - s1 + xi)), 0)
Bb1 <- ifelse(abs(cosD(b - s2)) > lambda1, sign(cosD(b - s2)), 0)
Bb2 <- ifelse(abs(cosD(b - s2)) < lambda2, sign(sinD(b - s2 + xi)), 0)
(sum(Aa1*Bb1 != 0) + sum(Aa2*Bb2 != 0))/(2 * M)
}
a <- 0
for (b in seq(0, 180, 10)) print(rate(a, b))


Results:
Code: Select all
> a <- 0
> for (b in seq(0, 180, 10)) print(rate(a, b))
[1] 0.61838
[1] 0.571115
[1] 0.537205
[1] 0.50808
[1] 0.48663
[1] 0.4737
[1] 0.466005
[1] 0.45846
[1] 0.452975
[1] 0.45328
[1] 0.451135
[1] 0.45632
[1] 0.462215
[1] 0.469665
[1] 0.48309
[1] 0.50451
[1] 0.53272
[1] 0.568165
[1] 0.61838

This shows that the chance that *both* detectors register a detection depends on *both* settings. The "no-enhancement" hypothesis is empirically disproved. The data therefore may not be correctly analysed assuming that detection of two particles is independent of the settings. It would be a physically not expected situation. Any experimentalist who saw it (and it is easy to see) would know their experiment is useless.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 5:46 am

Now I have an interactive Python version:
Code: Select all
import numpy as np
M = 100000
np.random.seed(1234)
def sinD(theta): return(np.sin(np.pi * theta/180))
def cosD(theta): return(np.cos(np.pi * theta/180))
phi = 3
xi = 0
beta1 = 0.32
beta2 = 0.938
theta = np.random.uniform(M, 0, 360)
s1 = theta
s2 = theta + 180
lambda1 = beta1 * (np.cos(theta/phi))**2
lambda2 = beta2 * (np.cos(theta/phi))**2
a = 0
Aa1 = np.where(abs(cosD(a - s1)) > lambda1, np.sign(cosD(a - s1)), 0)
Aa2 = np.where(abs(cosD(a - s1)) < lambda2, np.sign(sinD(a - s1 + xi)), 0)
b = 45
Bb1 = np.where(abs(cosD(b - s2)) > lambda1, np.sign(cosD(b - s2)), 0)
Bb2 = np.where(abs(cosD(b - s2)) < lambda2, np.sign(sinD(b - s2 + xi)), 0)
sum(Aa1*Bb1 + Aa2*Bb2)/(sum(Aa1*Bb1 != 0) + (sum(Aa2*Bb2 != 0)))


I just run iPython from the command line (copy-pasting the above code after the prompt):
Code: Select all
(base) richard@Kuon-2 ~ % ipython
Python 3.8.8 (default, Apr 13 2021, 12:59:45)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.22.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import numpy as np
   ...: M = 100000
   ...: np.random.seed(1234)
   ...: def sinD(theta): return(np.sin(np.pi * theta/180))
   ...: def cosD(theta): return(np.cos(np.pi * theta/180))
   ...: phi = 3
   ...: xi = 0
   ...: beta1 = 0.32
   ...: beta2 = 0.938
   ...: theta = np.random.uniform(M, 0, 360)
   ...: s1 = theta
   ...: s2 = theta + 180
   ...: lambda1 = beta1 * (np.cos(theta/phi))**2
   ...: lambda2 = beta2 * (np.cos(theta/phi))**2
   ...: a = 0
   ...: Aa1 = np.where(abs(cosD(a - s1)) > lambda1, np.sign(cosD(a - s1)), 0)
   ...: Aa2 = np.where(abs(cosD(a - s1)) < lambda2, np.sign(sinD(a - s1 + xi)),
   ...: 0)
   ...: b = 45
   ...: Bb1 = np.where(abs(cosD(b - s2)) > lambda1, np.sign(cosD(b - s2)), 0)
   ...: Bb2 = np.where(abs(cosD(b - s2)) < lambda2, np.sign(sinD(b - s2 + xi)),
   ...: 0)
   ...: sum(Aa1*Bb1 + Aa2*Bb2)/(sum(Aa1*Bb1 != 0) + (sum(Aa2*Bb2 != 0)))
Out[1]: -0.6976744186046512
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Sun Sep 26, 2021 6:02 am

@gill1109 Yeah, you're posting a bunch of nonsense to distract from the fact that your goose is cooked. You are finished! Now, we have not one but TWO ways to kill Gill's theory. :mrgreen: :mrgreen: :mrgreen: :lol: :lol: :lol:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 7:11 am

FrediFizzx wrote:@gill1109 Yeah, you're posting a bunch of nonsense to distract from the fact that your goose is cooked. You are finished! Now, we have not one but TWO ways to kill Gill's theory.

Well, that's what you think.

I put an iPython notebook up on internet using PyCharm

https://datalore.jetbrains.com/notebook/KgRiuPjGym5qCVYpppxS6E/oPj0fJdpRkekL7a3L9PChO

Python is not that difficult to learn and with the help of the "Numpy" library of numerical and matrix oriented functions, it was easy to convert my R script to Python. It runs faster than ever. Joy and Fred: feel free to use my code. I believe it is a fair conversion of Fred's latest Mathematica code to these other languages.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Sun Sep 26, 2021 11:47 am

Why would I want to run my program in some geeky code when I have one of the best programs in Mathematica. I can do 20 million trials in 9 minutes.
Just more nonsense distraction from your doomed fate. :mrgreen: :mrgreen: :mrgreen:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby jreed » Sun Sep 26, 2021 3:38 pm

Fred, since you didn't think you would learn anything from my suggestion about using Total, I ran your program with it. I used m=50,000 so I ended up with 100,000 trials. Here's what Total found:

In[35]:= Total[Abs[A]]
Out[35]= 62334 + 37666 Abs["no result"]

Total sums all the absolute value of all elements of A. What this shows is that about 62% of the trials are used, and 37% are not detected. Here's your detection loophole for all to see.
jreed
 
Posts: 176
Joined: Mon Feb 17, 2014 5:10 pm

Re: Coming Soon!

Postby FrediFizzx » Sun Sep 26, 2021 4:26 pm

jreed wrote:Fred, since you didn't think you would learn anything from my suggestion about using Total, I ran your program with it. I used m=50,000 so I ended up with 100,000 trials. Here's what Total found:

In[35]:= Total[Abs[A]]
Out[35]= 62334 + 37666 Abs["no result"]

Total sums all the absolute value of all elements of A. What this shows is that about 62% of the trials are used, and 37% are not detected. Here's your detection loophole for all to see.

Nope! Not sure how you learned math but 62334/50000 = 124.668 percent. There is NO detection loophole in the model! You Bell fanatics are finished! :mrgreen: :mrgreen: :mrgreen:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 8:50 pm

jreed wrote:Fred, since you didn't think you would learn anything from my suggestion about using Total, I ran your program with it. I used m=50,000 so I ended up with 100,000 trials. Here's what Total found:

In[35]:= Total[Abs[A]]
Out[35]= 62334 + 37666 Abs["no result"]

Total sums all the absolute value of all elements of A. What this shows is that about 62% of the trials are used, and 37% are not detected. Here's your detection loophole for all to see.

John, let’s try to be consistent in terminology. I think we should use the word “trial” in a consistent the way. I like to think that each trial has a unique sequence number. Each trial has one setting “a” and one setting “b”. Fred’s innovation is to let Alice and Bob each collect *two* outcomes per trial. We can call them “channel 1” and “channel 2”. The outcome in each case (four cases per trial!) can be -1, 0, or +1. [I know of no experiment where this is done. Fred is simulating an experiment which was never ever done]

Fred creates, for each trial, two complete cases: a quadruple of Alice and Bob’s inputs and outputs; one set for channel 1 and one set for channel 2.

[I would suggest that he actually for each trial now tosses a fair coin, keeping only channel 1 or channel 2 data. That way, the number of trials and the number of cases finally used to compute correlations is the same, M. The correlations are much the same if M is large anyway.]

Anyway, Fred computes *conditional* correlations *given* that both particles are detected. Just like everyone was forced to do in the 70’s, 80’s and 90’s of the last century. Nowadays we no longer have to do that: we can arrange that there are no “0”s.

If I denote Fred’s outcomes by x and y, both in {-1, 0, +1}, he computes the sum of the products divided by the number of products not equal to zero. That equals the average of the product for pairs with no “0” outcome.

Fred and Joy like to say that the particle pairs where a zero turns up never actually existed. I would imagine that they did exist but one or the other was not detected. Those are two interpretations. They don’t change the maths. Those pairs were created in the computer. Fred chooses to discard them. Mathematically, it’s just the good old detection loophole. It’s pretty easy to fake the quantum correlations by a local realistic mechanism with such a low detection rate, as has been known for 50 years.

By the way, if you compute the total number of output pairs with x times y not equal to zero, for each setting pair a,b, you’ll see that this number depends strongly on (a, b). In fact it depends on a-b. That’s very unphysical! Every experimentalist who got data like this would completely re-build their experiment. How can the production of complete particle pairs depend on the difference between the settings?
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 10:30 pm

https://rpubs.com/gill1109/fred
Here's Fred's code converted to R. Notice the two pictures. A lovely close-to negative cosine for the correlations. And an interesting plot showing how experimental efficiency varies with the difference between the settings. Between 60% and 40%. I took M = 100 thousand and in the final data set there are therefore 2 * M sets of two settings and two outcomes. About half of them are what Joy and Fred call "virtual", particle pairs which never existed. But they did exist briefly in Fred's program.

I'm finding out how to use Python and especially Numpy. I can convert Mathematica code lines, line by line, to R and to Python, now Fred got his code sorted out. Still have to learn how to draw plots in Python.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby gill1109 » Sun Sep 26, 2021 11:08 pm

FrediFizzx wrote:Why would I want to run my program in some geeky code when I have one of the best programs in Mathematica. I can do 20 million trials in 9 minutes.
Just more nonsense distraction from your doomed fate. :mrgreen: :mrgreen: :mrgreen:

Took me exactly the same time in R also with M = 20 million! 9 minutes. Same algorithm. I see some ways to speed it up. Python is faster still.

These tools are not "geeky". They are free, they are open source, they are used intensively by real scientists all over the world. Why pay all those dollars to Wolfram for Mathematica?

Why don't you take a look at the graph of the pair production rate as function of "a" minus "b"?

Image
For a bigger picture see:
https://gill1109.com/2021/09/27/adventures-with-r-and-python/
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Mon Sep 27, 2021 12:26 am

gill1109 wrote:
FrediFizzx wrote:Why would I want to run my program in some geeky code when I have one of the best programs in Mathematica. I can do 20 million trials in 9 minutes.
Just more nonsense distraction from your doomed fate. :mrgreen: :mrgreen: :mrgreen:

Took me exactly the same time in R also with M = 20 million! 9 minutes. Same algorithm. I see some ways to speed it up. Python is faster still.

These tools are not "geeky". They are free, they are open source, they are used intensively by real scientists all over the world. Why pay all those dollars to Wolfram for Mathematica?

Why don't you take a look at the graph of the pair production rate as function of "a" minus "b"?

Why don't you take a look at these graphs. Mathematica is so much more sophisticated than R there is just no comparison. The home version didn't cost that much.

20 million trials one degree resolution as usual. Total Events = 19968074

Image

Deviation from -cosine curve.

Image

Mean deviation = -1.58038*10^-7 very small!

Time to admit your theory is finished. Kaput! :mrgreen: :mrgreen: :mrgreen:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Mon Sep 27, 2021 3:42 am

FrediFizzx wrote:Why don't you take a look at these graphs. Mathematica is so much more sophisticated than R there is just no comparison. The home version didn't cost that much.
20 million trials one degree resolution as usual. Total Events = 19968074
...
Deviation from -cosine curve.
...
Mean deviation = -1.58038*10^-7 very small!
Time to admit your theory is finished. Kaput! :mrgreen: :mrgreen: :mrgreen:

Your graphs do not contradict my theory. Your graphs conform to my theory. The tricks you use have been well known and well studied for 50 years.
Please see if you can reproduce my "smile" graph in Mathematica, as well.
Image
Here are the correlations
Image
Here is the R code; I will add Python soon (and improve the R)
https://gill1109.com/2021/09/27/adventures-with-r-and-python/
The "smile" is the fraction of trials (counting channel 1 and channel 2 as separate trials) with neither outcome zero, as a function of .
2 times 20 million trials at 5 degree resolution. (Factor 2: two channels. M = 20 million). It was a very easy job to translate Mathematica to R.
I'll code this again a bit more efficiently and I'll do 1 degree resolution, and tell you how long it takes. The graphs need better annotation too (title, subtitle, names of axes).

I'm still learning Python.
Last edited by gill1109 on Mon Sep 27, 2021 3:48 am, edited 1 time in total.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Mon Sep 27, 2021 3:46 am

gill1109 wrote: ... Fred and Joy like to say that the particle pairs where a zero turns up never actually existed. I would imagine that they did exist but one or the other was not detected. Those are two interpretations. They don’t change the maths. Those pairs were created in the computer. Fred chooses to discard them. Mathematically, it’s just the good old detection loophole. It’s pretty easy to fake the quantum correlations by a local realistic mechanism with such a low detection rate, as has been known for 50 years.

Nope! As usual you have it all wrong. That happens in your strawman model not ours. There is absolutely no detection loophole in our model. Of course there is in your strawman.

We have a beam splitter that goes to two polarizers and then to two detectors that both can detect +/-1. For each side. All data including the { } "no result" is passed to the analysis section. The analysis process does the A and B matching and if it can't get a good match ignores the data. We end up with the amount of valid events that got thru analysis very near to the original amount of trials.

You Bell fanatics are finished and you really should just admit defeat now so you don't look any more foolish than you already are. :mrgreen: :mrgreen: :mrgreen: :lol:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Mon Sep 27, 2021 3:56 am

FrediFizzx wrote:
gill1109 wrote: ... Fred and Joy like to say that the particle pairs where a zero turns up never actually existed. I would imagine that they did exist but one or the other was not detected. Those are two interpretations. They don’t change the maths. Those pairs were created in the computer. Fred chooses to discard them. Mathematically, it’s just the good old detection loophole. It’s pretty easy to fake the quantum correlations by a local realistic mechanism with such a low detection rate, as has been known for 50 years.

Nope! As usual you have it all wrong. That happens in your strawman model not ours. There is absolutely no detection loophole in our model. Of course there is in your strawman.

We have a beam splitter that goes to two polarizers and then to two detectors that both can detect +/-1. For each side. All data including the { } "no result" is passed to the analysis section. The analysis process does the A and B matching and if it can't get a good match ignores the data. We end up with the amount of valid events that got thru analysis very near to the original amount of trials.

You Bell fanatics are finished and you really should just admit defeat now so you don't look any more foolish than you already are.

Well, I have a different opinion as to who is looking foolish.

Yes, you get near the original number of valid events. You first of all double the original number with two detectors each getting -1, nothing, or +1, but you have arranged that half are invalid. Very cunning.

Trouble is, in real experiments of the old type, there were indeed two detectors on each side, but their outcomes were: "click", or "no click". You did not have two detectors on each side *each* with possible outcomes "+ click", nothing. or "- click".

The experiment you have simulated, Fred, has never ever been done! It's incredibly ingenious and quite amusing. See if you can get it published somewhere. Seems that lots of journals will publish almost anything these days. If I'm asked to referee it, I'll give my honest opinion.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Coming Soon!

Postby FrediFizzx » Mon Sep 27, 2021 5:07 am

gill1109 wrote: .. Yes, you get near the original number of valid events. You first of all double the original number with two detectors each getting -1, nothing, or +1, but you have arranged that half are invalid. Very cunning. (nonsense deleted)

How did I arrange that half are invalid? I did no such thing. You just need to admit you are toast. Burnt toast! :mrgreen: :mrgreen: :mrgreen:
.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Coming Soon!

Postby gill1109 » Mon Sep 27, 2021 7:26 am

FrediFizzx wrote:
gill1109 wrote: .. Yes, you get near the original number of valid events. You first of all double the original number with two detectors each getting -1, nothing, or +1, but you have arranged that half are invalid. Very cunning. (nonsense deleted)

How did I arrange that half are invalid? I did no such thing. You just need to admit you are toast. Burnt toast! :mrgreen: :mrgreen: :mrgreen:

You wrote your code. Are you telling me you have no idea what it does?

I suggest you read Pearle (1970) or my paper about the Pearle model. You might recognise my "smile" graph showing how the rate of production of particle pairs in your model depends on the difference between the two measurement angles.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

PreviousNext

Return to Sci.Physics.Foundations

Who is online

Users browsing this forum: No registered users and 73 guests

CodeCogs - An Open Source Scientific Library