Detection efficiencies in Gisin-Gisin x Pearle.

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Sun Mar 09, 2014 10:55 pm

I already read Adenier, and I already read Hess and de Raedt. I understand them perfectly well.

Please read sections 2 and 9 of my paper, carefully.

You wrote "We can not substitute actual outcomes from a different set of particles for counterfactual outcomes of a single set of particles."

Nobody is doing that. There is (1) a substitution of theoretical mean values by empirically observed averages. And there is (2) a "fair sampling" assumption, aka no-conspiracy, or freedom. So (1), statistical error has to be allowed for, and (2) we need to assume that the particle pairs on the basis of which one particular sample correlation was observed, are a random sample from all particle pairs. Or at least, a representative sample from all the particle pairs. Taking a random sample is a good way to guarantee that.
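
To illustrate (1) and (2) concretely, here is a minimal sketch in Python (my illustration only, not from the paper, with made-up numbers): a uniformly random subsample of a finite ensemble of +/-1 outcomes has an average close to the whole-ensemble mean, with deviations of the size Hoeffding's inequality predicts.

Code: Select all
import numpy as np

rng = np.random.default_rng(0)

# A finite ensemble of N particle pairs; each entry is a +/-1 product outcome a*b.
N = 100_000
ensemble = rng.choice([-1, 1], size=N, p=[0.3, 0.7])

# "Fair sampling" in miniature: measure only a uniformly random subsample.
n = 1_000
sample = rng.choice(ensemble, size=n, replace=False)

print("ensemble mean:", ensemble.mean())
print("sample mean:  ", sample.mean())
# Hoeffding: for +/-1 outcomes the deviation exceeds t with probability
# at most 2*exp(-n*t**2/2), so fluctuations of order 1/sqrt(n) are expected.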

The fact of a local hidden variables theory ensures that we have counterfactual definiteness.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby minkwe » Mon Mar 10, 2014 9:46 am

gill1109 wrote:Nobody is doing that. There is (1) a substitution of theoretical mean values by empirically observed averages.

The theoretical mean values are means over a single ensemble; only one of them can actually be measured, so the rest are counterfactual. The empirically observed averages are averages over 4 disjoint ensembles, all of which are measured and therefore actual. So your claim that nobody is doing that is false.

gill1109 wrote:And there is (2) a "fair sampling" assumption, aka no-conspiracy, or freedom.

No amount of fair sampling, no-conspiracy, or freedom can cause 4 disjoint ensembles to not be disjoint. Rather, you need conspiracy and restriction of freedom in order for the upper bound of the expression

<a1b1> + <a2b2′> + <a3′b3′> − <a4′b4>

to be less than 4 for 4 disjoint ensembles. For 4 disjoint ensembles of particles, each correlation is a free variable in the expression, with an upper bound of +1 and a lower bound of -1. If you combine 4 free variables in such an expression, and there is no conspiracy and there is freedom, the upper bound of that expression can NEVER be less than 4. So your claim that fair sampling or no-conspiracy helps you here is false.

gill1109 wrote:So (1), statistical error has to be allowed for

Upper bounds are immune to statistical error. An upper bound can NEVER be violated by statistical error. That the upper bound of <a2b2′> is +1 and the lower bound is -1 is a mathematical fact, impossible to violate even by statistical error: the outcomes a2 and b2′ can only take the values +1 or -1. The same applies to the full expression: it is IMPOSSIBLE to obtain a value above 4 in any experiment, no matter how many particles you measure or how biased your random sampling is. So excuses about statistical error are just that.
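
A quick numerical sketch of this point (illustrative only; the numbers and the Python code are my own, not from any paper): with four disjoint ensembles each correlation is a free average in [-1, 1], no amount of data can push the expression above 4, and the value 4 itself is attainable.

Code: Select all
import numpy as np

rng = np.random.default_rng(1)

def corr(a, b):
    """Sample correlation <ab> of two arrays of +/-1 outcomes."""
    return float(np.mean(np.asarray(a) * np.asarray(b)))

n = 1000
# Four disjoint ensembles with unrelated outcomes: each term is free in [-1, 1].
(a1, b1), (a2, b2), (a3, b3), (a4, b4) = [
    (rng.choice([-1, 1], n), rng.choice([-1, 1], n)) for _ in range(4)]
S = corr(a1, b1) + corr(a2, b2) + corr(a3, b3) - corr(a4, b4)
print("uncorrelated ensembles:", S)   # typically near 0
assert abs(S) <= 4                    # the algebraic bound of 4 can never be exceeded

# The bound 4 is reachable when nothing ties the four ensembles together:
S_max = corr(a1, a1) + corr(a2, a2) + corr(a3, a3) - corr(a4, -a4)
print("perfectly (anti)correlated ensembles:", S_max)   # exactly 4.0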

gill1109 wrote:(2) we need to assume that the particle pairs on the basis of which one particular sample correlation was observed, are a random sample from all particle pairs. Or at least, a representative sample from all the particle pairs. Taking a random sample is a good way to guarantee that.

That is a very naive assumption. It is well known that it is impossible to "guarantee" random sampling of a hidden variable; read up on the Bertrand paradox. Besides, the CHSH and Bell inequalities do not rely on random samples but on the same ensemble. The inequalities rely on the limited degrees of freedom between counterfactual outcomes. The same person cannot be tall and not-tall at the same time, while one of two different people can be tall and the other not-tall at the same time. Claiming that you selected the two people at random and that they are fair samples does not mean you have disproven the fact that the same person cannot be tall and not-tall at the same time. Read up on degrees of freedom. Your response completely misses the point.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Mon Mar 10, 2014 9:48 am

Please read Sections 2 and 9 of my paper, carefully. Is Theorem 1 a true theorem, yes or no? (That's in Section 2). After that, we can discuss the application of Theorem 1 to computer simulation models like yours, in Section 9.

You'll need to refresh your memory concerning the Hoeffding inequalities which are used in the proof of Theorem 1 (in the appendix). You can find them on wikipedia.

Since I intend to stay polite on this forum, I won't say what I think about everything else you have just written.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby minkwe » Mon Mar 10, 2014 10:57 am

gill1109 wrote:Is Theorem 1 a true theorem, yes or no? (That's in Section 2).

Theorem 1 does not magically convert 4 disjoint ensembles of particles to not be disjoint. Nor does any other section or theorem in any of your papers.
Section 2 of your paper does not magically create the same number of degrees of freedom in 4 disjoint sets of particles as in the single set assumed in the derivation of the CHSH and Bell inequalities. Nor does any other section or theorem in any of your papers or any paper for that matter.
gill1109 wrote:You'll need to refresh your memory concerning the Hoeffding inequalities which are used in the proof of Theorem 1 (in the appendix). You can find them on wikipedia.

Hoeffding inequalities do not magically cause 4 disjoint ensembles of particles to have a non-null intersection, nor does any other inequality or statistical trick.

The irrelevance of Bell inequalities in Physics (http://vixra.org/pdf/1305.0129v1.pdf)
Rosinger wrote:It was shown in [1], cited in the sequel as DRHM, that upon a correct use of the respective statistical data, the celebrated Bell inequalities cannot be violated by quantum systems. This paper presents in more detail the surprisingly elementary, even if rather subtle related basic argument in DRHM, and does so together with a few comments which, hopefully, may further facilitate its wider understanding.
...
Once again, and quite regrettably as far as many in the physics community are concerned, it cannot be overemphasized that the inequalities in section 5, as much as those in the present section, are purely mathematical, and as such, they have absolutely no need for any kind of so called "physical" considerations in their proofs.

Therefore, let us repeat once more that it is one of the major merits of DRHM to have pointed out so clearly the essential and so far hardly known fact that the inequalities in section 5, as well as those in this section, simply cannot be violated either by classical, or by quantum physics. And they cannot be violated, precisely due to the fact that they only depend on mathematics, and of course, logic.

The error is so clear, anyone who wants to see it, sees it.

gill1109 wrote:Since I intend to stay polite on this forum, I won't say what I think about everything else you have just written.

You know, it is possible to clearly and politely state why you believe my mathematical arguments are wrong.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Mon Mar 10, 2014 11:17 pm

Your "arguments" are based on severe misconceptions. Their premises are false. You talk in general terms, you hold numerous false preconceptions, and copy sweeping false statements by erroneous authors or take other author's true statements completely out of context.

Theorem 1 does *not* convert four disjoint ensembles to be not disjoint. It seems to me that you have not digested the statement of the theorem, let alone studied the proof. Please tell me if you think the theorem is true or false. If you think it is false, please exhibit the false reasoning in the proof.

Theorem 1 and its proof answer all your objections (all 12, over on another thread) or show them to be baseless. I am not going to discuss my work with you when you evidently haven't read it (or can't). Section 2 does some elementary probability. It is self-contained and uncontroversial. You have not identified anything wrong with it. You completely misinterpret what you see. You have powerful preconceptions which make you blind. That is not a good scientific attitude.

Rosinger cites DRHM (de Raedt, Hess, Michielsen). DRHM confirm that Larsson and Gill's analysis is correct. Indeed, "so far hardly known"! But known to a few ... known to Gill, known to Larsson.

De Raedt has known my work for a long time. We have discussed both of our works with one another. We fully agree on all substantial points. Similarly, Guillaume Adenier has known my work for at least 15 years. We have discussed his and my ideas together. We are in full agreement.

Read Section 2 of my paper carefully and tell me specifically where you think there is an error. This is mathematics, not philosophy, not physics. Definition, lemma, proof, theorem, proof. All your comments so far tell me that you do not understand the basic concepts.

If there is a mathematical error, I will withdraw the paper. Show me the error. Point to the specific line where the error is made.

First we will deal with Section 2. It concerns random selection of four disjoint sets of rows from an Nx4 table of +/-1's. You'll need to study the appendix, too.
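
A rough numerical sketch of that setup (my illustration here, not the theorem or its proof): fill an Nx4 table with +/-1's, compute the "counterfactual" CHSH quantity from all N rows, then recompute it from four disjoint, randomly selected subsets of rows, using in each subset only the two columns that would actually be measured.

Code: Select all
import numpy as np

rng = np.random.default_rng(2)

# Nx4 table of +/-1's: columns are the counterfactual outcomes A1, A2, B1, B2 per row.
N = 100_000
A1, A2, B1, B2 = rng.choice([-1, 1], size=(4, N))

# Whole-table CHSH: per row, A1*B1 + A1*B2 + A2*B1 - A2*B2 equals +/-2, so |S_full| <= 2.
S_full = np.mean(A1*B1) + np.mean(A1*B2) + np.mean(A2*B1) - np.mean(A2*B2)

# Assign each row at random to one of the four setting pairs (four disjoint subsets).
k = rng.integers(0, 4, size=N)
S_exp = (np.mean((A1*B1)[k == 0]) + np.mean((A1*B2)[k == 1])
         + np.mean((A2*B1)[k == 2]) - np.mean((A2*B2)[k == 3]))

print("whole-table S:     ", round(S_full, 3))
print("disjoint-subset S: ", round(S_exp, 3))  # close to S_full, up to statistical noise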

These parts of the paper can be read independently of the rest.

Only after that will we discuss computer simulations of Bell violations (Section 9). That section can be read independently of the rest (it only depends on Section 2).

Only after that will we discuss physics.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby minkwe » Tue Mar 11, 2014 1:09 pm

Richard, your accusations and personal attacks don't deserve a response. Though it is news to me, and probably to Adenier and De Raedt, that you are in full agreement with them.
Last edited by minkwe on Tue Mar 11, 2014 2:14 pm, edited 1 time in total.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby minkwe » Tue Mar 11, 2014 2:13 pm

gill1109 wrote:De Raedt has known my work for a long time. We have discussed both of our works with one another. We fully agree on all substantial points. Similarly, Guillaume Adenier has known my work for at least 15 years. We have discussed his and my ideas together. We are in full agreement.


Then you are in full agreement with De Raedt when he says:
http://arxiv.org/pdf/1108.3583.pdf
John Bell’s inequalities have already been considered by Boole in 1862. Boole established a one-to-one correspondence between experimental outcomes and mathematical abstractions of his probability theory. His abstractions are two-valued functions that permit the logical operations AND, OR and NOT and are the elements of an algebra. Violation of the inequalities indicated to Boole an inconsistency of definition of the abstractions and/or the necessity to revise the algebra. It is demonstrated in this paper, that a violation of Bell’s inequality by Einstein-Podolsky-Rosen type of experiments can be explained by Boole’s ideas. Violations of Bell’s inequality also call for a revision of the mathematical abstractions and corresponding algebra. It will be shown that this particular view of Bell’s inequalities points toward an incompleteness of quantum mechanics, rather than to any superluminal propagation or influences at a distance.


And you are in full agreement with De Raedt when he says:
http://arxiv.org/pdf/1112.2629.pdf
Data produced by laboratory Einstein-Podolsky-Rosen-Bohm (EPRB) experiments is tested against the hypothesis that the statistics of this data is given by quantum theory of this thought experiment. Statistical evidence is presented that the experimental data, while violating Bell inequalities, does not support this hypothesis.


Or when they say:
http://arxiv.org/pdf/0907.0767.pdf
The examples (counterexamples) with the patient-investigations and the relation of these examples to EPR experiments prove, at least in the opinion of these authors, that neither realism nor Einstein locality need be abandoned because of a violation of Bell’s inequalities.


And you are in full agreement with Adenier when he says:
http://arxiv.org/pdf/0705.1477.pdf
...
As shown elsewhere [18], the validity of the fair sampling can actually be questioned on the basis of experimental data.
...
In the meantime, analysis shows that it indeed is still possible to ascribe properties to objects independently of observation, and contrary to David Mermin’s statement [2], I would thus argue that Einstein’s attacks against the metaphysical underpinning of quantum theory are still valid today, and do not contradict nature itself.


As you will no doubt say, all those are taken out of context; but I can bet that you won't say what the correct context is, or why you believe the words themselves should be understood to mean the opposite of what they clearly say. Instead you will just point me to Sections 2 and 9 of your paper and claim that I am speaking nonsense.

I say 2 + 3 is not 6, and you respond by accusing me of speaking nonsense because in your paper you have proven that 1 + 1 = 2; then you turn around and challenge me to show you where the argument that 1 + 1 = 2 is false, completely ignoring my argument. I'm done. Feel free to continue to believe that what I've been saying is nonsense; you are in good company. But it is not healthy for either of us to continue discussing this topic. I will write a paper commenting on the errors in your papers, and then you will have an opportunity to respond in a formal manner. I'm done trying to reason with you here.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Tue Mar 11, 2014 11:58 pm

I am in full agreement with almost all of de Raedt's statements here, and with Adenier's first statement (not his second; though if you re-define realism as he wants to, one could agree. Bohr would agree with the re-definition, but Einstein would not).

So far, the experimental evidence does not force us to relinquish locality or realism.

One should indeed look very carefully at the underlying logic and algebra. In fact, one should also bring probability and statistics into the game. They have been badly neglected by all those physicists, who are not trained in probabilistic and statistical thinking. In fact, their training is antagonistic to such thinking.

Also look carefully at the metaphysics. But avoid empty word games. This is one reason why computer simulation models are so important. They focus the mind, wonderfully. You make a major contribution here.

One can be very critical of even the best experiments done to date.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby Mikko » Wed Mar 12, 2014 7:58 am

gill1109 wrote:The fact of a local hidden variables theory ensures that we have counterfactual definiteness.

Seems right if determinism and factual definiteness are assumed, but can it be proven? Or does it depend on some other assumption that we naively make without noticing? Anyway, if factual definiteness is not assumed, counterfactual definiteness seems unlikely. With non-determinism it is less clear, as any particular subsystem (such as a measurement) can be modelled as deterministic with random input from the environment, or as a probabilistic or otherwise indeterministic subsystem.
Mikko
 
Posts: 163
Joined: Mon Feb 17, 2014 2:53 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Thu Mar 13, 2014 12:55 am

Mikko wrote:
gill1109 wrote:The fact of a local hidden variables theory ensures that we have counterfactual definiteness.

Seems right if determinism and factual definiteness are assumed, but can it be proven? Or does it depend on some other assumption that we naively make without noticing? Anyway, if factual definiteness is not assumed, counterfactual definiteness seems unlikely. With non-determinism it is less clear, as any particular subsystem (such as a measurement) can be modelled as deterministic with random input from the environment, or as a probabilistic or otherwise indeterministic subsystem.

A probabilistic local hidden variables model adds some extra randomization. If you represent this mathematically in the language of probability theory, then you are letting measurement outcomes also depend on some random variables X. But a random variable X is just a mathematical function X(omega) of an underlying randomly chosen point omega from some set Omega. Your stochastic local hidden variables model is actually a deterministic local hidden variables model with a "bigger" space of hidden variables, which might include components that we think of physically as belonging to the measurement stations or the environment, as well as to the source.

I don't know what you mean by "an otherwise indeterministic subsystem". But anyway, once we try to operationalize any model by writing computer programs which simulate it, we just have deterministic functions to play with. No problem. Tossing dice and coins is a completely deterministic process. All randomness in nature is, as far as we know, pseudo-randomness or quantum randomness. The big question is whether quantum randomness is "just" pseudo-randomness too.
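
A minimal sketch of that reduction (the particular model and names below are just made up for illustration): a "stochastic" outcome function which uses fresh local noise is the same thing as a deterministic outcome function whose hidden variable is the pair (lambda, noise).

Code: Select all
import numpy as np

rng = np.random.default_rng(3)

def stochastic_outcome(setting, lam):
    """'Stochastic' local model: outcome depends on the setting, lambda, and fresh local noise."""
    noise = rng.uniform(0.0, 1.0)
    return 1 if np.cos(setting - lam) ** 2 > noise else -1

def deterministic_outcome(setting, enlarged_lam):
    """The same model, with the noise folded into an enlarged hidden variable."""
    lam, noise = enlarged_lam
    return 1 if np.cos(setting - lam) ** 2 > noise else -1

# Sampling the enlarged hidden variable at the source reproduces the stochastic model exactly.
lam, noise = rng.uniform(0.0, 2*np.pi), rng.uniform(0.0, 1.0)
print(deterministic_outcome(0.3, (lam, noise)))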
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby Mikko » Thu Mar 13, 2014 2:49 am

gill1109 wrote:I don't know what you mean by "an otherwise indeterministic subsystem".

There are theories that allow alternative possibilities without assigning any probabilities. I don't know any that would be interesting for physics but they can be useful, e.g., for safety analyses.

Any non-deterministic system can be converted to a deterministic one. But what do probabilities mean in a deterministic world? The probabilities of quantum mechanics quite obviously do mean something.
Mikko
 
Posts: 163
Joined: Mon Feb 17, 2014 2:53 am

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Thu Mar 13, 2014 3:01 am

Mikko wrote:
gill1109 wrote:I don't know what you mean by "an otherwise indeterministic subsystem".

There are theories that allow alternative possibilities without assigning any probabilities. I don't know any that would be interesting for physics but they can be useful, e.g., for safety analyses.

Any non-deterministic system can be converted to a deterministic one. But what do probabilities mean in a deterministic world? The probabilities of quantum mechanics quite obviously do mean something.

Experimentalists know what quantum probabilities mean, and what classical probabilities mean. You cannot predict in which channel the next photon will be detected, you can't predict the next coin-toss outcome, and (if I don't tell you a whole lot of stuff) you can't predict the next outcome of the pseudo random generator in Python (or R, or C++, or Mathematica, or Java). About the coin: we could predict the outcome pretty well from a video of its initial spinning, as long as the way the coin is "stopped" is determined without the intervention of yet more unpredictable physical processes (like my hand trying to catch it).
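
For the pseudo random generator, at least, this is easy to demonstrate (a trivial sketch of my own): once you know the seed and the algorithm, the "unpredictable" stream is reproduced exactly.

Code: Select all
import random

random.seed(42)
first = [random.randint(0, 1) for _ in range(10)]

random.seed(42)   # same seed, same deterministic algorithm ...
second = [random.randint(0, 1) for _ in range(10)]

print(first == second)   # True: the "random" stream is a deterministic function of the seed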

All these systems exhibit long-run stabilities which are well known and are the basis of the financial success of insurance companies, lotteries, ... and they're all the same.

The big question is, is there a fundamental difference between the QM probabilities and the classical ones? Einstein was sure that the answer would be no. John Bell dashed that hope (at least, that is the present consensus view).
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby FrediFizzx » Thu Mar 13, 2014 4:31 pm

gill1109 wrote: The big question is, is there a fundamental difference between the QM probabilities and the classical ones? Einstein was sure that the answer would be no. John Bell dashed that hope (at least, that is the present consensus view).


QM probabilities are non-linear; most classical probabilities are linear. But that is wrong now since Joy has shown us that the predictions of QM can be accomplished in a local realistic classical way. So Einstein was right after all. But... there is still the situation of Lucien Hardy's fifth axiom, "Continuous reversibility" as the difference between quantum and classical.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Fri Mar 14, 2014 2:03 am

It's amusing how every time you are personally losing an argument Fred, you write a last posting in large capital letters and lock down the thread! This has now happened for the third time.

I know you are the proud sponsor and owner of this forum, and I'm very grateful to you for setting it up, but your policy towards unwelcome truths does not do you credit.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby FrediFizzx » Fri Mar 14, 2014 12:17 pm

Ditto and your comment is off topic. Please stay on topic.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Sun Mar 16, 2014 11:26 pm

FrediFizzx wrote:
gill1109 wrote: The big question is, is there a fundamental difference between the QM probabilities and the classical ones? Einstein was sure that the answer would be no. John Bell dashed that hope (at least, that is the present consensus view).


QM probabilities are non-linear; most classical probabilities are linear. But that is wrong now since Joy has shown us that the predictions of QM can be accomplished in a local realistic classical way. So Einstein was right after all. But... there is still the situation of Lucien Hardy's fifth axiom, "Continuous reversibility" as the difference between quantum and classical.

Joy claims to have shown us that the predictions of QM can be accomplished in a local realistic classical way. So far, these claims have not gained a great deal of support.

Every year, there are several such claims. They tend to be forgotten too, on an annual basis.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby FrediFizzx » Sun Mar 16, 2014 11:49 pm

Well... since we are staying off-topic: what is your excuse for not forgetting Joy's claims, then? Is it because deep down you know there is something to it and you don't want to be left out of the party if and when it happens? :-) But I think you really could make a better attempt at a full understanding of the math. How come it all looks just fine to me and I understand it perfectly well? Personally, I like best a combination of the Geometric Algebra version with version 2.0. Having that e_0 original vector helps the understanding of the GA model. So how does that relate to the Pearle version? See... I'm trying to drive the discussion somewhat back on topic.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Mon Mar 17, 2014 2:46 am

FrediFizzx wrote:But I think you really could make a better attempt at a full understanding of the math. How come it all looks just fine to me and I understand it perfectly well? Personally, I like best a combination of the Geometric Algebra version with version 2.0. Having that e_0 original vector helps the understanding of the GA model. So how does that relate to the Pearle version? See... I'm trying to drive the discussion somewhat back on topic.

How come you understand it all, while I know for sure that it is a failure? I think the difference is what I would call "mathematical discipline". I'm a mathematician; Fred and Joy are physicists. For a physicist, mathematics is just the language which he or she uses to describe nature. He or she has recourse to physical insight and physical intuition, and of course the final arbiter is experiment; it's nature. So if the abstract mathematics which seems to be implied by the physicist's discourse does not actually track the discourse, then the mathematics is at fault. The physicist knows what they want to say and simply finds that, so far, mathematicians' attempts to provide the tools to say it have been inadequate.

Why am I still interested in Joy's model? I am not interested in it at all as a model of quantum entanglement.

I am interested in geometric algebra, and I'm interested in possible connections between the algebra of S^3 and of S^7 to quantum physics.

I am interested in computer simulations of local hidden variables theories, and I have contributed to our understanding of the limits of such simulations.

I'm interested in science outreach. How come no science journalist has shown any understanding at all of Bell's theorem? How come every year a new c****pot theory is launched, sometimes gets some attention in the media, and is then silently forgotten again? Why do people believe weird things? In particular, how come especially smart people are able to believe particularly weird things? The psychology and the sociology.

How could a well-known and respected, senior quantum optics researcher (member of the Royal Dutch Academy of Sciences), do an experiment and get it published in one of the top journals, in which he reports violation of Tsirelson's bound, hence definitive disproof of quantum mechanics, without anyone noticing that something odd is going on?

How could another well-known and respected, younger, senior quantum optics researcher do a GHZ experiment and tell reporters (who believed every word) that the data of just a finite number of runs of his experiment disproved local realism, when just a finite number of runs of his own experiment proved that he had not actually got the GHZ state at all? He obtained some "impossible" outcomes!

There is something badly wrong with most physicists' understanding of logic and mathematics. It doesn't stop the top experimentalists from doing brilliant experiments, but I worry about physics education and the public's understanding of what physicists are doing, if the physicists themselves clearly have no idea at all.

Why are really smart people like Michel Fodje and Joy Christian, who claim to be scientists, so blinded by their own prior beliefs that they refuse even to start to think about some logical consequences of selecting rows at random from an Nx4 spreadsheet?
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Mon Mar 17, 2014 7:36 am

PS this is for me the real mystery. Michel Fodje is convinced I am a bad guy. He is in the business of writing computer programs which simulate singlet correlations by local hidden variables, taking advantage of well-known "loopholes" (the detection loophole, the coincidence loophole). I have mathematically investigated what can be attained by these means. Fodje's hero de Raedt has even built on the Larsson-Gill analysis in order to create nice event-based simulations exploiting the coincidence loophole. Why does Fodje refuse even to make the acquaintance of the mathematical theory which is behind this? Which he could use to his own advantage?!

Similarly, Christian is promoting various computer simulations of his own model. He is even making use of Pearle's detection loophole model. Why does he not even wish to become acquainted with the mathematics which could tell him how far that model can be improved, what can be done with it, and what can't?

The same mathematics tells us what can and cannot be done in the QRC.

Mathematics is useful. It certainly is relevant and useful regarding all these computer simulation models. Yet those who should be most interested in learning about the relevant mathematics just jeer at it. It's a strange attitude to encounter in science. It is similar to the phenomenon of "tunnel vision" in forensic research and criminal prosecution. At some point those investigating a crime become so convinced that their theory of what happened is true that they become unable even to see relevant evidence, evidence which points in the opposite direction. This has been a recipe for numerous miscarriages of justice. Similar "tunnel vision" has accompanied scientific stagnation in the past: everyone could have seen that there was something wrong with the current theory, but preferred to look the other way.

Of course, the mathematics I'm talking about is rather mundane. Not as sexy as S^7, octonions and bivectors. But the mundane stuff has to be taken account of too.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Detection efficiencies in Gisin-Gisin x Pearle.

Postby gill1109 » Mon Mar 17, 2014 10:54 am

PPS: see also http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=21&t=34 (insanity in modern physics)
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
