I agree 100% with this statement by Michel: "A lot of apparent problems can be resolved simply by use of consistent definitions, and use of precise language. Progress on actual problems will come only through clear and consistent definition of concepts, precise use of language, and sound logical reasoning."

But I don't think that the problem of "what is randomness" will ever be resolved that way. I think that careful use of precise language and sound logical reasoning has shown that there cannot be a simple resolution. Michel is very optimistic. I do admire his optimism and ambition. Can he do what nobody else has been able to do in several thousand years? Of course, some people, e.g. Bruno de Finetti, Richard von Mises, Ed Jaynes, and others, were ambitious enough to believe they had done it. They have dedicated followers, to this day. A lot of physicists think everything was solved by R.T. Cox, but outside of physics, hardly anyone has heard of him:

https://en.wikipedia.org/wiki/Cox%27s_theorem. The academic debate continues, and new physical insights give new twists to old arguments. There is renewed debate about Dempster-Shafer theory, which supposes that the usual rules of probability theory are wrong: probability is subjective, and one *must* distinguish between degrees of belief and degrees of plausibility. Some people argue that this is the "right" theory of uncertainty for legal reasoning:

https://research.vu.nl/en/publications/a-new-look-at-conditional-probability-with-belief-functions, "A new look at conditional probability with belief functions" by Ronald Meester and Timber Kerkvliet.
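The distinction Dempster-Shafer theory draws can be made concrete in a few lines. Here is a minimal sketch (the mass values and the "guilty/innocent" frame are invented purely for illustration): a mass function assigns weight to *sets* of hypotheses, and belief and plausibility then bracket where an ordinary probability would have to be a single number.

```python
def belief(mass, event):
    """Bel(A): total mass of all focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """Pl(A): total mass of all focal sets that intersect A."""
    return sum(m for focal, m in mass.items() if focal & event)

# Hypothetical mass function on the frame {guilty, innocent}:
# 0.5 committed to 'guilty', 0.2 to 'innocent', and 0.3 left
# uncommitted (assigned to the whole frame, representing ignorance).
mass = {
    frozenset({'guilty'}): 0.5,
    frozenset({'innocent'}): 0.2,
    frozenset({'guilty', 'innocent'}): 0.3,
}

A = frozenset({'guilty'})
print(belief(mass, A))                 # → 0.5
print(round(plausibility(mass, A), 3)) # → 0.8 (belief plus uncommitted mass)
```

The gap between Bel(A) = 0.5 and Pl(A) = 0.8 is exactly what the usual probability calculus forbids: there, the two always coincide.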

I explain the mathematics of probabilistic reasoning to my students by appeal to familiar examples (dice and coins, both fair and unfair; insurance; roulette; "observation roulette"). I also explain to them some of the different paradigms which are out there: frequentist, subjective (de Finetti), Laplace, von Mises' collectives, Kolmogorov complexity.

I gave expert evidence on behalf of a company exploiting a game with some features of roulette but which, according to the company, is a game of skill. The Netherlands state has a monopoly on games of chance, and makes a lot of money from the half a dozen or so legal Dutch casinos. But the legal definition of a game of chance is pretty meaningless to a scientist. I showed, using statistical methodology, that most players were actually losing less money at the game than if they had been playing with no skill at all. In other words, most players were employing skill to increase their chances of winning. Hence it was, legally, in my opinion, a game of skill, not chance. The judges didn't agree. I think the law should be rewritten so as to make clear what it is that our lawmakers, working on our behalf, object to. They have some deep-seated instinctive objection to people making money by allowing other people to place bets, yet they admit the need for insurance companies and a state lottery. (That was how Kolmogorov escaped the wrath of Stalin. Probability was held to be contrary to Marxism-Leninism, and Kolmogorov was summoned to explain what he was doing. He pointed out that the operation of the state lottery depended on skilful knowledge of mathematical probability theory. He lived to tell the tale.)

As a mathematician, I don't have to define randomness.

I distinguish mathematical models of reality from reality itself. The same abstract mathematical framework can have very different interpretations when applied to different fields. These days my view is that independence is a more fundamental notion than randomness. There is fascinating new work, reported in the recent book by Jonas Peters et al. on machine learning and causality, on an information-theoretic approach, inspired by algorithmic complexity theory, to the question of how to motivate statistical independence assumptions in physics, with applications to Bell's inequality.

http://web.math.ku.dk/~peters/elements.html Jonas Peters, Dominik Janzing, Bernhard Schölkopf: Elements of Causal Inference: Foundations and Learning Algorithms.
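The elementary statistical notion of independence (not the algorithmic-information version that the book develops) can be sketched in a few lines: for independent variables, the joint frequencies should match the product of the marginal frequencies, up to sampling error.

```python
import random
from collections import Counter

random.seed(0)
N = 100_000

# Two independent fair dice: the defining property of independence is
# P(X=i, Y=j) = P(X=i) * P(Y=j) for every pair (i, j).
rolls = [(random.randint(1, 6), random.randint(1, 6)) for _ in range(N)]

joint = Counter(rolls)
px = Counter(x for x, _ in rolls)
py = Counter(y for _, y in rolls)

# Largest deviation of the empirical joint frequency from the
# product of the empirical marginals, over all 36 cells.
max_gap = max(
    abs(joint[(i, j)] / N - (px[i] / N) * (py[j] / N))
    for i in range(1, 7) for j in range(1, 7)
)
print(max_gap)  # small for genuinely independent samples
```

The interesting question raised in the book is the converse one: what, if anything, *licenses* the independence assumption in the first place, before any data are seen.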

Also fascinating is how Boole's attempt to see probability as an extension of logic ran aground on the problem of how to define prior distributions representing ignorance in situations with many variables and arbitrary dependence between them.

When I was young, I was ready to give concise, clear definitions of randomness. The more I have learnt about it, however, the more I realise how little I know. Nowadays I think it is reasonable to believe that some randomness (quantum randomness) is not "merely" epistemic: not merely an expression of uncertainty, reducible to lack of knowledge of initial conditions or to hyper-sensitive dependence on initial conditions in what is essentially a deterministic system.

I think that that point of view is no solution at all: it is just an infinite regress. I think that Bell's theorem suggests we consider quantum randomness to be some kind of bottom line, some fundamental feature of nature. Otherwise we have to believe in some kind of exquisite coordination between setting choices at one detector and outcomes at another, distant, detector: both would be the result of purely deterministic chains of events set in motion at the time of the big bang, so that Alice's detector "knows" in advance which setting Bob is using.
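The arithmetic behind that dilemma is short enough to check directly. A sketch of the CHSH bound: enumerating every local deterministic strategy shows that none can push the CHSH combination above 2, while quantum mechanics reaches 2√2; hence the forced choice between fundamental randomness and the exquisite coordination just described.

```python
import math
from itertools import product

# A local deterministic strategy fixes Alice's outcome a(x) and Bob's
# outcome b(y) in {-1, +1} for each of the two settings x, y in {0, 1}.
# CHSH combines the four correlations:
#   S = E(a0 b0) + E(a0 b1) + E(a1 b0) - E(a1 b1)
def chsh(a0, a1, b0, b1):
    return a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1

# All 16 deterministic strategies; mixtures of them (local hidden
# variables) can never exceed the deterministic maximum, by convexity.
best = max(chsh(*s) for s in product([-1, 1], repeat=4))
print(best)              # → 2: the local-realist (CHSH) bound
print(2 * math.sqrt(2))  # ~2.828: the quantum (Tsirelson) bound
```

Algebraically the same thing: S = a0(b0 + b1) + a1(b0 - b1), and one of the two brackets is always zero, so |S| = 2 for every deterministic strategy.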

But Tim Palmer from Oxford (https://www2.physics.ox.ac.uk/contacts/people/palmer) thinks, it seems, that this is what happens.

I have not decided yet whether or not his reasoning holds water. I'm suspicious.

https://www.mdpi.com/1099-4300/20/5/356.

Experimental Non-Violation of the Bell Inequality

T. N. Palmer

Department of Physics, University of Oxford

Entropy 2018, 20(5), 356;

https://doi.org/10.3390/e20050356
Received: 7 April 2018 / Revised: 24 April 2018 / Accepted: 2 May 2018 / Published: 10 May 2018

(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)

Abstract

A finite non-classical framework for qubit physics is described that challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the primacy of a fractal-like ‘invariant set’ geometry I_U in cosmological state space, on which the universe evolves deterministically and causally, and from which space-time and the laws of physics in space-time are emergent. Consistent with the assumed primacy of I_U, a non-Euclidean (and hence non-classical) metric g_p is defined in cosmological state space. Here, p is a large but finite integer (whose inverse may reflect the weakness of gravity). Points that do not lie on I_U are necessarily g_p-distant from points that do. g_p is related to the p-adic metric of number theory. Using number-theoretic properties of spherical triangles, the Clauser-Horne-Shimony-Holt (CHSH) inequality, whose violation would rule out local realism, is shown to be undefined in this framework. Moreover, the CHSH-like inequalities violated experimentally are shown to be g_p-distant from the CHSH inequality. This result fails in the singular limit p = ∞, at which g_p is Euclidean and the corresponding model classical. Although Invariant Set Theory is deterministic and locally causal, it is not conspiratorial and does not compromise experimenter free will. The relationship between Invariant Set Theory, Bohmian Theory, The Cellular Automaton Interpretation of Quantum Theory and p-adic Quantum Theory is discussed.