Fuzzy logic is silly

There are many variations of fuzzy logic and fuzzy set theory. Each variation has different rules for combining the "reasonableness" or "plausibility" of two different subsets or events. For example, the fuzzy measure F(A) of a set A and the measure F(B) of a set B may require that the measure F(A+B) of the combined set A+B have the maximum value

F(A+B) = Max[ F(A), F(B) ].
Notice that the resulting value does not depend on whether A and B share any elements. Other choices might be multiplication and geometric averaging. In all cases, it is assumed that F(A+B) can be calculated from only F(A) and F(B). The math is simple by virtue of a risky assumption.
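
To make the risky assumption concrete, here is a minimal sketch in Python. The function names are mine, for illustration only; they are not drawn from any particular fuzzy-logic library.

    import math

    # Each rule maps the pair (F(A), F(B)) straight to F(A+B).
    # Whether A and B share elements is not an input, so no rule can use it.

    def combine_max(fa, fb):
        return max(fa, fb)         # the maximum rule quoted above

    def combine_product(fa, fb):
        return fa * fb             # multiplication

    def combine_geometric(fa, fb):
        return math.sqrt(fa * fb)  # geometric averaging

    # Two events with identical individual plausibilities receive an
    # identical combined plausibility, whether they overlap completely
    # or not at all.
    print(combine_max(0.6, 0.5))   # 0.6 in either case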

Probability theory, by contrast, requires that you acknowledge that sets may share elements. The probability of the intersection AB, the set of elements shared by A and B, uses a dependent probability:

P(AB) = P(A) P(B|A).
P(B|A) is the probability of B assuming the pre-existence of A. P(A) and P(B) are insufficient to calculate P(AB) unless you assume P(B|A) = P(B), which assumes A and B are independent. (B = A implies that P(B|A) = 1.)
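
A small numerical check (a toy example of mine, not from any reference) shows why the marginals are insufficient: two joint distributions can agree on both P(A) and P(B) and still disagree about P(AB).

    # Outcomes are pairs (a, b) recording whether A and B occurred.

    # Case 1: A and B independent, so P(B|A) = P(B).
    joint_independent = {
        (True, True): 0.25, (True, False): 0.25,
        (False, True): 0.25, (False, False): 0.25,
    }

    # Case 2: B = A, so P(B|A) = 1.
    joint_identical = {
        (True, True): 0.5, (True, False): 0.0,
        (False, True): 0.0, (False, False): 0.5,
    }

    def marginals_and_joint(joint):
        pa = sum(p for (a, _), p in joint.items() if a)
        pb = sum(p for (_, b), p in joint.items() if b)
        return pa, pb, joint[(True, True)]

    print(marginals_and_joint(joint_independent))  # (0.5, 0.5, 0.25)
    print(marginals_and_joint(joint_identical))    # (0.5, 0.5, 0.5)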

What makes the rules of probability special? Probability satisfies simple axioms, like transitivity, that seem essential for a meaningful, well-behaved measure. For example, the Kolmogorov system of probability considers only sets, and uses four simple axioms. From these axioms, you can derive all familiar arithmetical operations, such as Bayes' formulation of dependent probabilities. You begin with a complete, closed collection of all possible sets. The axioms are

1. P(A) >= 0 for every set A in the collection.
2. P(S) = 1, where S is the certain set containing all elements.
3. P(~A) = 1 - P(A), where ~A contains all elements not in A.
4. P(A+B) = P(A) + P(B) if A and B share no elements.
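
A few lines of Python make the axioms and the derivation tangible. The sample space below is my own toy example, chosen with dyadic probabilities so the arithmetic is exact in floating point.

    import math

    space = {"w1": 0.125, "w2": 0.25, "w3": 0.375, "w4": 0.25}

    def P(event):
        # The measure of a subset of the sample space.
        return sum(space[w] for w in event)

    A = {"w1", "w2"}
    B = {"w2", "w3"}

    assert P(A) >= 0                         # axiom 1
    assert P(set(space)) == 1.0              # axiom 2
    assert P(set(space) - A) == 1.0 - P(A)   # axiom 3
    C, D = {"w1"}, {"w3"}                    # disjoint sets
    assert P(C | D) == P(C) + P(D)           # axiom 4: addition, not maxima

    # Bayes follows from the definition P(B|A) = P(AB) / P(A):
    p_b_given_a = P(A & B) / P(A)
    assert math.isclose(p_b_given_a * P(A) / P(B), P(A & B) / P(B))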

Fuzzy rules always violate the last axiom because they cannot ask whether sets share elements. Fuzzy logic must then use maxima rather than addition to avoid problems with normalization. The fourth axiom is logically equivalent to many others. For example, if B contains A (if A implies B), then P(B) >= P(A). Do you really want your system of logic to violate such a rule? All expected rules of transitivity depend on this fourth axiom.
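
To see the fourth axiom doing real work, split a set into disjoint pieces. Another self-contained sketch, again with an invented sample space:

    space = {"w1": 0.25, "w2": 0.25, "w3": 0.5}

    def P(event):
        return sum(space[w] for w in event)

    A = {"w1"}
    B = {"w1", "w2", "w3"}   # B contains A

    # B splits into the disjoint pieces A and B - A, so the fourth axiom
    # gives P(B) = P(A) + P(B - A) >= P(A): monotonicity is derived.
    assert P(B) == P(A) + P(B - A)
    assert P(B) >= P(A)

    # The max rule applied to the same disjoint pieces under-counts:
    assert max(P(A), P(B - A)) == 0.75   # not 1.0
    # With maxima, the whole is never larger than its largest part, so
    # the additive accounting that keeps a measure normalized is lost.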

Why would fuzzy logic violate the rules of probability? Because the arithmetic is simpler? Because it appears to be "deterministic"? The usual answer is that "it works." It works in the sense that a non-linear system of equations can be "trained" to produce a known set of answers from a known set of questions. When the equations fail on new questions, one can argue that the equations were not sufficiently well trained. If such arguments appeal to you, then I recommend more easily manipulated non-linear equations, such as neural networks.

Bill Harlan, 2000

