Traditional logic is binary: either a proposition is true or it is false, either black or white. However, we live in a world of gray or of even more colors. The advantage of binary logic is that it is relatively simple. The disadvantage is that it sometimes fails to match reality. (Students have trouble with traditional logic, but nonetheless it is simpler than more modern logics.)
Currently, computers use traditional logic internally. However, some computer programs, especially financial programs, try to model a more complex environment than one in which some facts are known to be true and others are known to be false. All financial programs, for example, handle interest rates, which are a way of combining a preference for the present with uncertainty about the future.
I am confident that over the next generation, we will see more computer programs come to rely on logics that are gray. Such a logic has two parts: a sensing part, which assigns a value to whatever is perceived, and a method of combining multiple values.
In the 1980s, David McAllister invented `certainty factors', along with the arithmetical procedures to go with them. A certainty factor expresses how accurate, truthful, or reliable you judge a perception to be; it is your judgement of how good your evidence is, how `suggestive' it is. David McAllister's metric enables you to combine various judgements (or, as McAllister intended, it enables a computer to combine various lines of evidence).
Like interest rates, certainty factors are expressions of unknowing. Unlike interest rates, they focus on your (or a machine's) judgement of an element at a specific time; they do not express a preference for the present over the future.
People specify certainty or uncertainty with a judgement that the evidence is `suggestive' or `strongly suggestive' or maybe `weakly suggestive'.
In the arithmetic, these values are assigned numbers, such as 0.6 for `suggestive' and 0.8 for `strongly suggestive'. Calculation proceeds from there. For example, two different lines of `suggestive' evidence combine to be `strongly suggestive', but no matter how many lines of evidence you have, you never know for sure.
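A minimal sketch of this arithmetic, assuming the common combination rule in which two positive certainty factors a and b combine as a + b(1 - a); the verbal labels and the thresholds 0.6 and 0.8 follow the values given in the text, while the exact cut-offs are my own assumption:

```python
def combine(a, b):
    """Combine two positive certainty factors: a + b * (1 - a).

    The result is always below 1.0 for inputs below 1.0, so no
    amount of evidence ever yields certainty.
    """
    return a + b * (1 - a)

def label(cf):
    """Map a numeric certainty factor onto the verbal scale."""
    if cf >= 0.8:
        return "strongly suggestive"
    if cf >= 0.6:
        return "suggestive"
    return "weakly suggestive"

suggestive = 0.6
cf = combine(suggestive, suggestive)   # two lines of `suggestive' evidence
print(cf, label(cf))                   # 0.84 -- now `strongly suggestive'
```

Note that repeated applications of `combine` push the value toward 1.0 without ever reaching it, which matches the observation that you never know for sure.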
A computer will make measurements, each of which will be precise; but the program will attribute an accuracy to each measurement (perhaps noise affects the measurement, perhaps something else), and that accuracy will lead the computer to label the element as `suggestive' or to give it some other value.
Traditional logic makes use of the simplest type of perception, the categorical: an element is either in or out of a category. Such a type of perception is called a `scale', meaning a way of making judgements. The word comes from the notion of measuring.
In court, a jury may have to judge whether one person's testimony is more credible than another's. This is a more complex judgment scale, an `ordinal' scale.
In 1944, Louis Guttman noted that all forms of measurement belong to one of four types of scale: categorical, ordinal, interval, and ratio. Guttman was thinking of measurement, but his scales apply to human social structures and, more generally, to basic mathematics. Guttman was talking about the different ways people can make judgements about what they see, with or without the help of a ruler or other gauge.
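The four types of scale can be sketched with ordinary values; the particular examples (fruit, army ranks, temperatures, weights) are my own illustrative stand-ins:

```python
# Categorical: membership only -- an element is in or out, with no ordering.
category = {"apple", "orange"}
print("apple" in category)                                 # True

# Ordinal: order matters, but distances between ranks do not.
ranks = ["lieutenant", "major", "colonel"]
print(ranks.index("colonel") > ranks.index("lieutenant"))  # True

# Interval: differences are meaningful, but there is no true zero,
# so ratios are not (20 C is not "twice as warm" as 10 C).
print((20 - 10) == (30 - 20))                              # True

# Ratio: a true zero point makes ratios meaningful.
print(2.0 / 4.0)                                           # 0.5 -- 2 kg is half of 4 kg
```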
(All this talk of mathematics, human judgement, and logic fits together in a wonderful way. That is one reason I am taken by these notions; not only do they appear right, but they look beautiful.)
Certainty factors are a curious hybrid: an ordinal scale when humans use them to specify an uncertainty that is more or less suggestive; an interval scale when computers assign numbers like 0.6 and 0.4; and a ratio scale for the actual arithmetic.
As I said, traditional logic presumes a statement is either true or false. The metaphor for this kind of logic is that of a cup. Either your proposition is contained, like tea in a cup, and is true, or it is outside, spilled, and is false. The categories are inside, true, or outside, false. There is no third option.
An ordinal scale is like a hierarchy of army ranks; a colonel has a higher rank than a lieutenant. Human use of McAllister's certainty factors imposes an ordering as well: some propositions are more highly suggestive than others.
The arithmetic of certainty factors presumes the scale is interval. (The actual algebra of the computer calculations presumes more.) Because they form an interval scale, two uncertainties can be compared to one another. The comparison enables you to say whether one piece of evidence is as suggestive as another, or more, or less.
Using an interval scale, you can say that three apples are more than two. However, such a way of thinking does not help you decide whether to eat an orange instead. For that, you need a ratio scale, which lets you compare apples to oranges.
In business, such a comparison is often done by price: some form of money is used as a `numeraire'. This has been a customary and commonplace way of handling a ratio scale activity since before the invention of money. (Moreover, businesses use interest rates to compare money flows over time.) Computerized spreadsheets enable accountants to consider alternatives readily: What if the dollar weakens faster than I anticipate? What if interest rates rise more than I expect?
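The spreadsheet-style what-if can be sketched with standard present-value discounting; the cash flows and rates below are made-up numbers for illustration, not figures from the text:

```python
def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows back to today."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

flows = [100, 100, 100]            # $100 a year for three years
for rate in (0.03, 0.05, 0.08):    # what if rates rise more than expected?
    print(f"{rate:.0%}: {present_value(flows, rate):.2f}")
```

Each pass through the loop replays the same money flows under a different assumed rate, which is exactly the kind of alternative an accountant explores in a spreadsheet: the higher the rate, the less the future flows are worth today.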
But price is only one criterion which people employ to make judgments. Sometimes they use taste. Sometimes beauty. Sometimes goodness.
McAllister's certainty factors are one mechanism that could be used to expand computer programs to handle more complex logics. I do not know whether `certainty factors' themselves will be the mechanism, or something else.
However, as I said earlier, I am confident that we will see more mechanized logics based on ordinal, interval, and ratio scales than before.