The Law of the Excluded Middle says truth values can only be 0 or 1.
The Law of the Exclusive Middle says there is no 0 and there is no 1: every truth value must fall strictly within the open interval (0, 1).
Consider that 0.999... = 1, and consider 0.999... as a magnitude of truth, or an expected magnitude.
For anything which is true, we can (a) be certain of its truth with a certainty of 1, or (b) be uncertain of that truth with magnitude 0.999... . Quantitatively the two are indistinguishable, because 0.999... = 1. The only difference is qualitative: in one case we are certain, in the other we are uncertain.
Certainty and uncertainty are equivalent in this example, and it is easy to generalize this and construct many more examples.
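The identity 0.999... = 1 that this argument leans on can be checked with exact rational arithmetic rather than floating point. A minimal sketch (the function name `nines` is just for illustration):

```python
from fractions import Fraction

# Exact partial sums of 0.9 + 0.09 + 0.009 + ...
# The gap between the n-nines partial sum and 1 is exactly 10**-n,
# which shrinks to zero as n grows -- the standard sense in which 0.999... = 1.

def nines(n):
    """0.999...9 with n nines, as an exact fraction."""
    return sum(Fraction(9, 10 ** k) for k in range(1, n + 1))

for n in (1, 5, 20):
    print(n, 1 - nines(n) == Fraction(1, 10 ** n))  # True for every n
```

No finite partial sum equals 1, but the limit of the sequence does, exactly.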

# William Kuch

## Comments & conversations

It looks like Gödel used modal logic to prove some things.
But yes, it seems like a tool for studying logic through semantics. I'm quite shocked to find equivalence used here, instead of in so many other places where it ought to be used, for example in math and physics.
If we say that the excluded middle is equivalent to the exclusive middle, it is easy to derive that random and non-random are equivalent. This equivalence would be present in our mathematical models, and would also be assumed to apply to physical reality. I have many good examples to support this, and the more I search for counterexamples, the more compelling reasons I find to support it.
To say that "random and nonrandom are equivalent" is a statement which begs a lot of justification. I have that justification in the form of highly credible examples and some very solid mathematics. It is all really very simple, nothing complex whatsoever; a child could understand my argument.
And it almost sounds absurd to say that random and nonrandom are equivalent. But the numbers don't lie. Think of how absurd it once seemed to say that different frames of reference are equivalent, yet we know that Relativity is indeed correct.
If you start with equivalence at the Law of the Excluded Middle, the Law of Identity, and the Law of Noncontradiction, then all of these results follow, and I believe that Relativity itself can be explained in terms of these first principles. So, if true, this would be a very big deal. I simply cannot do it alone; I need to share the idea wherever possible and elicit as much high-quality criticism as possible. If it is wrong, then so be it, but if it is right, then mankind gets a great big birthday gift.

Thank you for that suggestion! I actually did find something which sounds a lot like what I have been claiming.
Quoting Wikipedia's article on modal logic:
"Likewise, a prefixed "diamond" (◇p) denotes "possibly p". Regardless of notation, each of these operators is definable in terms of the other:
□p ("necessarily p") is equivalent to ¬◇¬p ("not possible that not-p")
◇p ("possibly p") is equivalent to ¬□¬p ("not necessarily not-p")"
This is strikingly similar to what I have been proposing. The only difference I can see (at this point) is that modal logic is typically used in epistemology, and I don't know whether it has been applied to physics yet. It is possible, but I need to search the literature for that.
The other main difference is that I am creating a very robust analogy to probability theory. There may be a bridge from modal logic to probability theory, but I'm not aware of it; more research is needed on my end. Additionally, in my view the Law of Excluded Middle is just a model of existence. While modal logic seems more concerned with semantics or epistemology, my approach explicitly looks at tangible things. The Law of Excluded Middle is one model, the Exclusive Middle is another. Fuzzy logic combines them into a cohesive theory, while I would let them stand apart and simply say that they are equivalent.
Thanks for that excellent comment. I'll have to research whether this has ever been applied in physics or math; I think it should be. It seems very near to what I've been arguing, there is an application of equivalence, but did they ever connect the dots to Relativity? I rather doubt it. I think it's worth a look.
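The quoted duality between □ and ◇ can be brute-force checked on a toy Kripke model. A small sketch, not part of the original article; the three-world frame and valuation loop below are invented purely for illustration:

```python
from itertools import product

# A toy Kripke model: worlds, an accessibility relation, and a valuation.
# box(p) is true at w iff p holds at every world accessible from w;
# diamond(p) is true at w iff p holds at some accessible world.
worlds = [0, 1, 2]
access = {0: [1, 2], 1: [2], 2: []}

def box(val, w):      # "necessarily p" at world w
    return all(val[v] for v in access[w])

def diamond(val, w):  # "possibly p" at world w
    return any(val[v] for v in access[w])

# Check box(p) <-> not diamond(not p) at every world,
# for every possible valuation of p on this frame.
for bits in product([False, True], repeat=len(worlds)):
    val = dict(zip(worlds, bits))
    not_val = {w: not t for w, t in val.items()}
    for w in worlds:
        assert box(val, w) == (not diamond(not_val, w))
print("duality holds on this frame")
```

This only verifies the duality on one small frame, of course; the general equivalence is a theorem of normal modal logics.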

In my view neither case is degenerative. You may have the emergence of order from a disordered system, or the emergence of disorder from one which is deterministic, but neither case is really degenerative. Together they form a duality, and the duality is held together by an assumption of equivalence. In fact we can come very close to proving the equivalence of these two structures, but I am more comfortable simply saying that the duality forms a consistent system.
By embracing a quantitative tool which has duality built into it, we can look at wave-particle duality with this new tool and understand it in a whole new way, perhaps for the first time.
I want to apply this work to Bell's Inequality and the various works of Alain Aspect and others. That is my goal: either to do it myself, or to provide a tool for others to follow. I want to create a new tradition within science which embraces the duality of random and nonrandom, acknowledges that they are equivalent, and proceeds from there.

That is precisely correct. Every problem that is solvable using mathematics or logic should have two solutions: one deterministic, the other essentially uncertain or stochastic.
For example, take the number 5. It can be regarded as a known value, known with an implied (but unstated) inherent certainty. We could, however, regard this quantity as an expected value, in which case it has precisely the same magnitude but its "qualitative" properties have changed: it is now inherently uncertain.
The magnitudes 5 (known) and 5 (expected) are identical, but their qualities are different. For an empiricist, this means there must be two equivalent models of the entire universe: one deterministic, the other essentially uncertain or stochastic. Both models would produce exactly the same numbers. One universe is deterministic, the other is stochastic, and they must be equivalent. The connections to Relativity and QM are obvious.
This should also be true for the entirety of mathematics. I have come close to a proof of the general case for "all of mathematics", but I do not have that proof completed. I have come close, and I don't want this line of research to be forgotten. That is why I engage in online debate.
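The "known 5 versus expected 5" point above can be sketched numerically: treat 5 once as a constant, and once as the expectation of a degenerate random variable that always yields 5. A toy illustration only, assuming nothing beyond the standard library:

```python
import random

# One quantity, two readings: 5 as a known constant, and 5 as the
# expected value of a (degenerate) random variable that always yields 5.
known = 5

def draw():
    """A 'stochastic' 5: sampled, yet certain to come out 5."""
    return random.choice([5])

samples = [draw() for _ in range(1000)]
expected = sum(samples) / len(samples)

# The magnitudes coincide exactly; only the qualitative framing differs.
print(known == expected)  # True
```

The distribution here is degenerate on purpose: its expectation equals 5 exactly, not just in the limit.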

Deterministic and stochastic models are different. They will always be different. But the difference is qualitative.
The reason we can make these things equivalent is that quantitatively they are the same. In other words, whether you have a stochastic model or a deterministic model, you get the same numbers. You always get the same answers. You get two answers to every possible problem: one is certain, and the other is uncertain, an expected value. If we say that these are equivalent, then we have a kind of duality in everything that we are doing.
You can probably see that this would give a fresh new approach to resolving paradoxes regarding the wave-particle duality of light. It also provides a framework for an easy-to-understand justification for the occurrence of which-way information in QM. There are many benefits to this approach; this is just the tip of the iceberg. My purpose here is to popularize this view, to share it with as many people as possible, because I am confident that it is correct.
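One concrete instance of a problem with both a deterministic and a stochastic solution: the area of a quarter unit circle, computed in closed form and estimated by Monte Carlo sampling. A hedged sketch; the sample count and seed are arbitrary choices:

```python
import math
import random

# Deterministic answer: the closed-form value pi/4.
deterministic = math.pi / 4

# Stochastic answer: throw random points into the unit square and
# count the fraction landing inside the quarter circle x^2 + y^2 <= 1.
random.seed(0)
n = 200_000
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1 for _ in range(n))
stochastic = hits / n

# The two agree to within sampling error, which shrinks as n grows.
print(abs(deterministic - stochastic))
```

The stochastic answer agrees with the deterministic one only in the limit of infinitely many samples; at any finite n it is an expected value with sampling error around it.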

Incorrect. When the triangle is generated by the random algorithm, you let the process run through infinitely many iterations. The two fractals, generated by their respective algorithms, are identical, with the possible exception of the few points needed to start the random process, and even those can be eliminated by selecting the starting point properly.
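The random algorithm being discussed here is presumably the "chaos game" for the Sierpinski triangle. A minimal sketch; the vertex coordinates and iteration count are illustrative choices:

```python
import random

# The chaos game: repeatedly jump halfway toward a randomly chosen vertex.
# Every point visited lies on the Sierpinski triangle, the same set produced
# by the deterministic remove-the-middle-triangle construction.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

random.seed(1)
x, y = vertices[0]          # starting AT a vertex keeps even the first
points = []                 # iterates on the attractor, as noted above
for _ in range(10_000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))

# Sanity check: all points stay inside the triangle's bounding box.
assert all(0 <= px <= 1 and 0 <= py <= 1 for px, py in points)
print(len(points), "points on the Sierpinski attractor")
```

With an arbitrary starting point instead, only a short initial burn-in would lie off the attractor, which is the caveat mentioned in the comment above.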

That is in fact incorrect. Please check those facts with an external source. As I stated above, 0.999... = 1 identically. It is not an approximation; they are exactly equal. Further, this is universally regarded as fact throughout the worldwide math community.
I cannot debate this topic here; it is off topic. This thread is about equivalence and applying it to the Law of the Excluded Middle. Please read the Wikipedia article I cited or check with a mathematician. I can't use this space to debate something which is considered an elementary fact of mathematics (it is part of why calculus works), so I won't debate it here. Again, it is universally accepted that 0.999... = 1 identically; it is not an approximation.
Thanks for the feedback, but I'm trying to stay on topic.

The fact that 0.999... = 1 is accepted universally throughout the math and science community. I won't debate that here; please refer to this Wikipedia article or check with anyone in the math community: http://en.wikipedia.org/wiki/0.999...
Another example might be helpful. Bob has an apple, and Mary has an apple. Bob is 100% certain that his apple exists, and he assigns a truth value T = 1 to this magnitude of existence. Mary is not certain that her apple exists at all; she is permanently uncertain whether it exists or not. The evidence that it does is very, very convincing, yet she remains uncertain, so she says that her apple exists with truth value T = 0.999... to reflect the inherent existential uncertainty of the apple.
Bob is certain, Mary is uncertain. Quantitatively, however, these are identical views to hold, because 0.999... = 1. Qualitatively they are different, but quantitatively they are in fact quite the same.
We have shown that these two approaches to the existence of the apple are quantitatively equivalent.
This has very little to do with epistemology, or the study of words and language. I am doing philosophy of math and physics, definitely not epistemology.

If you are a mathematician, logician, or physicist, I can change your mind with the following example.
Consider two universes, call them U1 and U2. In U1 things either exist or they do not. In U2 everything is partially existent, and we say that things "might exist" instead of saying that they "do exist". Let U1 and U2 be considered equivalent, just as frames of reference are in Relativity.
Now we go out and apply our theory. We see a tree. One observer claims to be certain that it exists and gives its existence a truth value of 1. A second observer is never sure of anything, ever, but the existence of the tree is pretty convincing, so he assigns its existence a truth value of 0.999... .
Since 0.999... = 1 precisely, it is clear that, in terms of quantifying these truth values, the two situations are indeed equivalent. The same would hold for nonexistent objects.
What I am saying is that whether something is certain or uncertain, this "certainty / uncertainty" is a quality which we ascribe to things. Clearly these two qualities are quite opposite each other. However, they also have a quantitative aspect, and those quantities can be equivalent.
If I exist with truth value 1, then I exist with absolute certainty; there is no question about it.
But if I say that I "might" exist, I am making a statement based on uncertainty. Consider this expressed as a potential, with truth value 0.999... in this case.
Clearly 0.999... = 1, and the only difference is whether we are certain or uncertain. Quantitatively, the two models are equivalent.
Random and nonrandom are equivalent.