Self-Deception
"A foolish consistency is the hobgoblin of little minds, adored by little statesmen, and philosophers and divines" - Emmerson
The concept of self-deception has long been an attractive topic of study for philosophers interested in common language analysis. The attraction arises from the paradox inherent in the state of being self-deceived and the process of becoming self-deceived. Normally, one models what it means to be self-deceived after what it means to be deceived by someone else. It is often held that for you to deceive me, you must get me to believe wholeheartedly in a proposition that you know, or at least sincerely believe, to be false. Following this model, then, self-deception occurs when I get myself to believe a proposition that I know to be false. The paradox is that for self-deception to occur, an individual must hold, at the same time, two contradictory propositions (p and not-p) and believe them both to be true. There have been various responses to this paradox, ranging from outright rejection of the concept through to a rejection of the principle of contradiction. In this paper, I wish to consider three responses to the paradox of self-deception: (1) partitioning strategies, (2) redefinition strategies, and (3) paraconsistent logic. I will claim that none of these responses provides a sufficient explanation or solution to the problem. I will argue that, although there certainly seem to be valid cases of self-deception as we conceive it through common language, the analytic attempts have thus far been relatively unsuccessful.
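To make the paradox explicit, it can be stated in a rough doxastic shorthand (the belief operator \(B_S\) is my own notation, introduced only for exposition and not drawn from any of the authors discussed below):

\[
\text{Interpersonal deception:}\quad B_A(\neg p) \;\wedge\; A \text{ intentionally brings it about that } B_B(p)
\]
\[
\text{Self-deception:}\quad B_S(\neg p) \;\wedge\; S \text{ intentionally brings it about that } B_S(p)
\]

On the interpersonal model the deceiver and the deceived are distinct subjects, so no one ends up believing both p and not-p. Substituting the same subject into both roles yields \(B_S(p) \wedge B_S(\neg p)\), and it is precisely this combination of beliefs that the strategies below try to render coherent.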
Partitioning Strategies
Many philosophers have made sense of the self-deception paradox by partitioning the mind into separate logical subjects. In doing so, an individual can hold a proposition and its opposite without being in a state of contradiction. If, for example, I hold the propositions p and not-p, and I believe them both to be true, I am in a state of contradiction because I do not actually believe anything. However, if the mind consists of two or more parts, I can hold the proposition p (along with various reasons for p) within one part, and the proposition not-p (along with its reasons) in the other part. So long as these two parts are distinct, there will never be a contradiction.
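The appeal of partitioning can be put schematically (again in my own shorthand, intended as a sketch of the general strategy rather than of any one author's formulation). If the mind contains parts \(M_1\) and \(M_2\), each functioning as its own logical subject, the self-deceiver's condition is

\[
B_{M_1}(p) \;\wedge\; B_{M_2}(\neg p),
\]

and since the contradictory contents are never attributed to one and the same subject, nothing of the form \(B_{M_i}(p \wedge \neg p)\) is ever asserted. The contradiction is dissolved by relocating the beliefs rather than by revising either of them.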
One such practitioner is Raphael Demos. He accepts the standard view that self-deception involves a contradiction:
"Self deception exists, I will say, when a person lies to himself, that is to say, persuades himself to believe what he knows is not so. In short, self-deception entails that B believes both P and not-P at the same time (1960, p.588).
But he divides the mind into distinct parts, separated by "levels of awareness":
"There are two levels of awareness possible; one is a simple awareness, the other awareness together with attending, nor noticing. It follows that I may be aware of something without, at the same time noticing it or focusing my attention on it" (p.593)
Therefore, for Demos, the self-deceiver is capable of simultaneously believing p and not-p "because he is distracted from the former" (p. 594).
The problem with this view is stated succinctly by Herbert Fingarette (1969), who argues that Demos's own position is paradoxical. He points out that deception must be an intentional act. Therefore, if I am to deceive you, I must intend to pull the wool over your eyes. If I am merely wrong about a proposition I convince you to believe, but I sincerely believe that it is true, I have not deceived you; I have only demonstrated my ignorance. Similarly, if I am to deceive myself, I must do so intentionally. Therefore, if I hold the two propositions p (true) and not-p (false), and I notice not-p but fail to notice p (because not-p is distracting me), I am not self-deceived, merely ignorant of p. Furthermore, Fingarette argues, an individual cannot intentionally fail to notice p, because she has already noticed it: "it appears that it is just because he already appreciates the incompatibility of his beliefs that the self-deceiver 'deliberately ignores' the belief he abhors" (p. 16). In this case, the individual would be party to two contradictory beliefs, and the partitioning strategy advanced by Demos has not helped us with the paradox.
Redefinition Strategies
Most common approaches to self-deception insist that the deception seen in self-deceptive situations is the same as the deception seen in interpersonal models. However, there are scholars who believe that the interpersonal models of deception are insufficient to describe self-deception. As Mele (1987) states:
"One approach to resolving the paradox of self-deception is to abandon some pertinent features of interpersonal models of he phenomenon. Perhaps it is typically true that when A deceives B, A knows or believes the truth and intentionally gets B to believe a falsehood. But must the self-deceiver know or believe the truth and believe the negation of the true propositions?" (1987, p.8)
Mele answers in the negative, arguing that self-deception occurs because individuals prefer certain beliefs over others. These preferences lead individuals to manipulate their handling of the data bearing on the respective claims. Thus,
". . . because, e.g., he (the subject takes a certain datum d to count against p, which proposition he wants to be the case, he may intentionally or unintentionally shift his attention away from d whenever he has thoughts of d; but to do this he need not believe that p is false"(1983, p.372).
Therefore, on this view, there is no requirement that the person believe not-p (regarded as true). If a mother wants her son to be good, but the relevant data would lead her to believe otherwise, she might manipulate the implications of the data, or the content of the data itself, to support the preferred belief, but she does not also have to hold the belief that her son is not good. The paradox of having two contradictory beliefs is thus avoided.
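Mele's redefinition can be compressed into a rough schema (the formulation is my own summary of the passages quoted above, not a formula Mele himself offers). Where \(D_S(p)\) stands for the subject's wanting p to be the case and d for the relevant data, self-deception that p requires only something like

\[
D_S(p) \;\wedge\; \big(d \text{ tells against } p\big) \;\wedge\; \big(D_S(p) \text{ leads } S \text{ to mishandle } d\big) \;\wedge\; B_S(p),
\]

with no conjunct of the form \(B_S(\neg p)\). Because the belief in not-p never has to be attributed to the subject, the contradictory pair of beliefs never arises on this account.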
This is a nice distinction, but we are now left to wonder about the relationship between an individual perceiving a data set and the same individual drawing conclusions based upon that data set. For Mele's distinction to hold, it seems that an individual would have to be able to perceive a data set, compare it with the desired proposition, recognize that the two are inconsistent, and then modify the data set to match the desired belief, all without ever holding the contrary position. If a mother can recognize the data set about her son (i.e., that he steals from people and is continually violent), and she can recognize that these facts are inconsistent with her desire to have a good son, then, given Mele's logic, two cases are possible. If she manipulates this data "intentionally", she is aware of the not-good proposition and is in fact holding two contradictory beliefs. If, on the other hand, the data manipulation is done unintentionally (presumably without awareness), it is being accomplished at a lower level of "awareness". If we claim that there are levels of awareness, we have another variant of a partitioned-mind theory, and the problems we have already discussed crop up again. It seems that Mele's distinction thus does not hold.
Paraconsistent Logic Strategies
Up until now we have looked at theories whose goal has been to save the concept of self-deception from contradiction. This is because it is taken for granted by most that the formal logical principle of contradiction is as basic as you can get in philosophy; that is, all arguments should be free from contradiction. However, in recent logical theory, the foundational truth of the principle of contradiction is being questioned by proponents of paraconsistent logic. As should be expected, this approach has been applied to the problem of self-deception. One of the primary examples of this approach can be found in da Costa and French (1990), whose explicit intention is to:
"Liberate discussion of self-deception from the shackles of a purely classical logic, thereby permitting a separation of the more philosophical issues from those which might properly be described as "logical" (p.179).
Put simply, their approach is to put forward "a paraconsistent system which seems capable of accommodating contradictory beliefs" so that the paradox of self-deception no longer exists.
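What makes a logic "paraconsistent" in the relevant sense is, roughly, its rejection of the classical principle of explosion (ex contradictione quodlibet); the schema below is the standard characterization of that family of logics and is not meant to reproduce the particular system set out in da Costa and French's appendix:

\[
\text{Classically:}\quad p,\ \neg p \;\vdash\; q \quad \text{for arbitrary } q; \qquad \text{Paraconsistently:}\quad p,\ \neg p \;\nvdash\; q.
\]

In such a system a set of beliefs can contain both p and not-p without thereby committing the believer to every proposition whatsoever, which is what allows contradictory beliefs to be "accommodated".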
This is an interesting approach, and it is coupled with a full alternative logical system in the appendix of the paper. The details of that system are, for the purposes of this paper, less important than the rationale for adopting the approach. Da Costa and French begin by accepting the distinction between an unintentional inconsistency and an intentional inconsistency. They claim, quite reasonably, that a person may easily enter upon an unintentional inconsistency:
"We are not logically omniscient in the sense of being able to immediately deduce all the consequences of a given proposition that can be deduced by anyone that demands such omniscience is clearly asking too much"(p.185).
Unintentional inconsistencies are not of major interest, however. In fact, their presence in any belief system is a trivial claim; since they are unintentional, they are not present to the mind at all. What is interesting, for da Costa and French, is what happens when an individual becomes aware of an inconsistency in her belief system. In this case, the inconsistency becomes intentional, and:
"It might be argued that the only rational thing to do is to suitably rearrange one's set of beliefs with a view to eliminate the contradiction of the system"(p.186)
This is where the problem lies, for da Costa and French. They claim that because our standard belief systems are full of an incalculable number of inconsistent deductive chains, it may be impossible to eliminate an intentional inconsistency completely anyway. This is their reasoning for suggesting that the principle of contradiction be "weaken[ed] or abandoned altogether" (p. 177).
They then turn to attack the principle of consistency on what they call pragmatic grounds. They ask us to consider an individual who has p and not-p in his mind. Both propositions, they claim, reflect the process of reasoning over some sort of empirical data. There is usually a great deal of empirical data, and this data can usually be mobilized for different arguments. As such, an individual holding the propositions p and not-p might well have good reasons for both positions because the data supports them equally well. Thus,
"The removal of inconsistency in this manner may therefore be a practically impossible, or near impossible, undertaking. One might have good reason to hold both of a pair of contradictory beliefs. Thus, for example, the same empirical evidence might equally support the same two conflicting theories, or different, but equally acceptable, pieces of evidence might support two contradictory propositions within a given theory . . . "(p.186).
In this view, a mother who sees her son might view one set of facts (e.g., that he is kind towards seniors) and hold the proposition that her son is good. On the other hand, she might view another set of facts (e.g., that he steals and is violent) and hold the proposition that her son is not good. Da Costa and French believe that this consists in holding two contradictory propositions; and, more importantly, they hold that this inconsistency is inherent for human beings trying to understand a complex world. As this is par for the course in human existence, we should not be rigid about the principle of contradiction, and perhaps a logic that abandons this principle is in line with how we process information anyway (p. 190). With this new logic that accepts contradictions, self-deception would be a valid concept, despite the fact that it depends on contradiction.
This argument sounds acceptable at first glance. However, on closer consideration, I think there is something wrong with it. The main thrust of the argument is that human beings naturally hold inconsistent thoughts and there is no ridding ourselves of inconsistency. As such, we are setting our goals too high when we demand consistent thought by appealing to the principle of contradiction. This argument depends, it seems, on two things. First, it depends on a set of facts that is so infinitely complex that we can never determine them all (or know when we have); and second, it depends on individuals "collecting" distinct propositions based upon differing data.
Although the first dependency may not be self-evident, it is plausible enough to leave it be here. However, it is not at all clear that individuals must draw distinct propositions based on differing data. That every mother will draw two inconsistent propositions from the facts that "my son steals" and "my son is kind to seniors" requires much more argument than is present. Rather than drawing the inconsistent, and distinct, propositions "my son is good" and "my son is not good", it seems equally likely that the conclusion might be the consistent claim that "my son is imperfect". Or, assuming that an individual might value private property more than a principle of kindness, these two facts might still issue in the single proposition "my son is not good". I suggest here that da Costa and French are relying upon a theory of data collection and synthesis that they have certainly not argued for. It is not clear that humans are naturally inconsistent at all, except in the sense that da Costa and French have defined.
Conclusions: Is Self-Deception a Viable Concept?
When I first began this paper, I believed self-deception to be a viable notion, resting not on a contradiction between two propositions, but on a distinction between the real state of affairs and an individual's ideal state of affairs. I am now uncertain that this claim can be made. After all, would this not still be a contradiction in beliefs, for instance, a contradiction between the belief in what is real and what one wishes to be real? Perhaps there is some room to move with this claim, because a wish might not hold the same epistemic status that a belief would (i.e., the status of a proposition). Unfortunately, I am unable to pursue this here. I am certain that the issue of self-deception is more complex than I initially thought. I do not find the solutions to the problem presented here particularly satisfying. However, I hope I have shown with some clarity why this is so. Perhaps there are further views that I have not yet considered, and I should like to pursue this issue further at some point in the future.
Bibliography
- Arruda, A. I. (1980). "A Survey of Paraconsistent Logic", in A. I. Arruda, R. Chuaqui, and N. C. A. da Costa, eds., Mathematical Logic in Latin America (North-Holland), pp. 1-41.
- da Costa, N. C. A. & French, S. (1990). "Belief, Contradiction and the Logic of Self-Deception", American Philosophical Quarterly, vol. 27, pp. 179-197.
- Demos, R. (1960). "Lying to Oneself", Journal of Philosophy, vol. 57, pp. 588-595.
- Fingarette, H. (1969). Self-Deception. London: Routledge and Kegan Paul.
- Mele, A. (1983). "Self-Deception", Philosophical Quarterly, vol. 33, pp. 365-377.
- Mele, A. (1987). "Recent Work on Self-Deception", American Philosophical Quarterly, vol. 24, January 1987.
- Priest, G., Routley, R., & Norman, J., eds. (1989). Paraconsistent Logic: Essays on the Inconsistent. Munich: Philosophia Verlag.
Notes
- For a discussion of the state/process paradoxes of self-deception, see Mele (1987).
- There are many other partitioning arguments that differ from Demos's, but the length of this paper precludes their mention.
- There are a number of examples of paraconsistent logics; see Arruda (1980) and Priest, Routley, and Norman (1989) for examples. They all seem to follow what might be called a "naturalist" approach.
- And, to be quite honest, I would be deceiving myself if I thought I understood the system that is presented.
- This is probably a tendentious claim. If an intentional inconsistency is removed from the system, it may still conflict with other claims. But if these claims are unknown, they would be unintentional inconsistencies. If they are unintentional inconsistencies, we would not be aware of them anyway, so how would our intentional system be affected?


