Heinera wrote:
FrediFizzx wrote: The outcomes are functions of the angle and lambda *only*. And also carry an index "n" to keep track of what is correlated to what. The outcomes are NEVER a function of "n".
You can just copy your code snippet that generates your hidden variables (the spin-vector) to both computers. Then you have the setup described by Gull. Use n to keep things in sync, e.g. as a seed to an RNG.
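As a minimal sketch of this setup: two independent programs (one per computer) can stay perfectly in sync by using the trial index n as the RNG seed, so both regenerate the identical hidden variable. The function names `hidden_variable` and `outcome`, and the particular sign-of-dot-product outcome rule, are illustrative assumptions, not anyone's actual simulation code.

```python
import numpy as np

def hidden_variable(n):
    # Hidden variable (a random spin vector) for trial n.
    # Seeding with n means both computers produce the same lambda
    # without any communication between them.
    rng = np.random.default_rng(n)
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def outcome(angle, lam):
    # Detector outcome +/-1: a function of the local setting and lambda
    # *only* -- an illustrative sign-of-projection rule.
    a = np.array([np.cos(angle), np.sin(angle), 0.0])
    return 1 if np.dot(a, lam) >= 0 else -1

# Two "computers" kept in sync purely through the trial index n.
for n in range(5):
    lam_alice = hidden_variable(n)   # computed on computer A
    lam_bob   = hidden_variable(n)   # computed independently on computer B
    assert np.allclose(lam_alice, lam_bob)
```

The point is only that n acts as a synchronisation label: neither outcome function takes n as a physical argument, yet both sides agree on which lambda belongs to which trial.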
You could say that "n" stands for time. It seems physically reasonable to allow for time variation in the dynamics. The two computers represent the two overlapping parts of the whole experiment; the overlap is the source. The idea is that the state of *all* the hidden variables in the entire physical system of detectors plus source *at time zero* is represented, in duplicate, by the state of both computers at the time the programs are started. If nature is really deterministic (God does not throw dice), then the state at later times is a function of the state at earlier times.
Gull does need to assume that the states of the two computers at time "n" do not depend on the settings introduced at earlier times. Gill's (2003) theorem was designed precisely to take that possibility into account.
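A rough sketch of the sequential randomized design that Gill's theorem relies on: at each trial the settings are drawn fresh at random, so a local-realist outcome function, even one allowed to drift with n, cannot exploit memory of past settings. The outcome rule `lhv_outcome` and the specific angles below are illustrative assumptions; the sample size of 30,000 is the one quoted in the bet protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def lhv_outcome(setting, lam):
    # A local-realist outcome: a function of the *current* setting and the
    # hidden variable lambda only -- never of earlier settings, which is
    # the assumption the randomized design enforces.
    return 1 if np.cos(setting - lam) >= 0 else -1

N = 30_000                         # sample size from Gill's bet protocol
angles_a = [0.0, np.pi / 2]        # Alice's two possible settings
angles_b = [np.pi / 4, 3 * np.pi / 4]  # Bob's two possible settings
sums = np.zeros((2, 2))
counts = np.zeros((2, 2))

for n in range(N):
    i = rng.integers(2)            # settings chosen at random, per trial,
    j = rng.integers(2)            # after the hidden variables are fixed
    lam = rng.uniform(0, 2 * np.pi)  # shared hidden variable for trial n
    A = lhv_outcome(angles_a[i], lam)
    B = lhv_outcome(angles_b[j], lam)
    sums[i, j] += A * B
    counts[i, j] += 1

E = sums / counts                  # empirical correlations per setting pair
S = E[0, 0] + E[0, 1] + E[1, 0] - E[1, 1]   # CHSH combination
print(abs(S))  # remains at or below 2, up to statistical fluctuation
```

Any such local model keeps |S| within the Bell bound of 2 (up to root-n fluctuations, which is where the supermartingale Bernstein inequality enters); a simulation claiming otherwise at this sample size would have to break the sequential randomization.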
https://arxiv.org/abs/quant-ph/0110137
Accardi contra Bell (cum mundi): The Impossible Coupling
Richard D. Gill
An experimentally observed violation of Bell's inequality is supposed to show the failure of local realism to deal with quantum reality. However, finite statistics and the time-sequential nature of real experiments still allow a loophole for local realism, known as the memory loophole. We show that the randomized design of the Aspect experiment closes this loophole. Our main tool is van de Geer's (2000) supermartingale version of the classical Bernstein (1924) inequality guaranteeing, at the root n scale, a not-heavier-than-Gaussian tail of the distribution of a sum of bounded supermartingale differences. The results are used to specify a protocol for a public bet between the author and L. Accardi, who in recent papers (Accardi and Regoli, 2000a,b, 2001; Accardi, Imafuku and Regoli, 2002) has claimed to have produced a suite of computer programmes, to be run on a network of computers, which will simulate a violation of Bell's inequalities. At a sample size of thirty thousand, both error probabilities are guaranteed smaller than one in a million, provided we adhere to the sequential randomized design. The results also show that Hess and Philipp's (2001a,b) recent claims are mistaken that Bell's theorem fails because of time phenomena supposedly neglected by Bell.
Journal reference: pp. 133-154 in: Mathematical Statistics and Applications: Festschrift for Constance van Eeden. Eds: M. Moore, S. Froda and C. Léger. IMS Lecture Notes -- Monograph Series, Volume 42 (2003). Institute of Mathematical Statistics, Beachwood, Ohio.