Werner Gitt’s book In The Beginning Was Information [Free PDF] is superb and provided the initial inspiration for all I have done with Information Theory. I owe Werner Gitt a debt of gratitude for producing one of the most clear and profound science books I’ve ever read.
The thesis of Gitt’s book is that Claude Shannon’s information theory directly implies design in biology because the existence of language is always preceded by intelligence.
Gitt connects this to John 1:1:
“In the beginning was the WORD and the WORD was with God and the WORD was God. Through Him all things were made….” Language and information are the basis of all creative acts because Jesus Christ is the basis of all language; Jesus Christ IS the language of God.
I’ve said in other places that I’ve found many criticisms of Gitt and all of them are wrong. I was specifically asked about these pages:
I was challenged by a blog reader that TalkOrigins had exposed the errors of Gitt’s arguments. It’s time to set the record straight.
So let’s take these in turn. This material is based on information retrieved from TalkOrigins February 19, 2011:
A striking contradiction is readily apparent in Gitt’s thinking: he holds that his view of information is an extension of Shannon, even while he rejects the underpinnings of Shannon’s work. Contrast Gitt’s words
(4) No information can exist in purely statistical processes.
Theorem 3: Since Shannon’s definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.
with Shannon’s statement in his key 1948 paper, “A Mathematical Theory of Communication”
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
It becomes very difficult to see how he has provided an extension to Shannon, who purposely modeled information sources as producing random sequences of symbols (see the article Classical Information Theory for further information). It would be more proper to state that Gitt offers at best a restriction of Shannon, and at worst, an outright contradiction.
TalkOrigins is misinterpreting both Gitt and Shannon on this point. Shannon’s mathematical analysis applies only to the statistical aspects of language, because the meaning of a statement cannot be reduced to a number. But Shannon is very clear that the semantic aspect of communication objectively exists.
When Shannon says, “These semantic aspects of communication are irrelevant to the engineering problem,” what he means is that a communication system doesn’t care about the meaning of its message; it only cares about its contents. In the introduction to the book “A Mathematical Theory of Communication,” published by the University of Illinois Press, Shannon’s co-author Warren Weaver discusses the importance of the meaning of a message.
Relative to the broad subject of communication, there seem to be problems at three levels. Thus it seems reasonable to ask, serially:
Level A: How accurately can the symbols of communication be transmitted? (The technical problem)
Level B: How precisely do the transmitted symbols convey the desired meaning? (The semantic problem)
Level C: How effectively does the received meaning affect conduct in the desired way? (The effectiveness problem)
The semantic problems are concerned with the identity, or satisfactorily close approximation, in the interpretation of meaning by the receiver, as compared with the intended meaning of the sender. This is a very deep and involved situation, even when one deals only with the relatively simpler problems of communicating through speech.
Weaver says, “it is clear that communication either affects conduct or is without any discernible and probable effect at all.”
Then he goes on to say:
So stated, one would be inclined to think that Level A is a relatively superficial one, involving only the engineering details of good design of a communication system; while B and C seem to contain most if not all of the philosophical content of the general problem of communication.
The mathematical theory of the engineering aspects of communication, as developed chiefly by Claude Shannon at the Bell Telephone Laboratories, admittedly applies in the first instance only to problem A, namely, the technical problems of accuracy of transference of various types of signals from sender to receiver. But the theory has, I think, a deep significance which proves that the preceding paragraph is seriously inaccurate….
Thus the theory of level A is, at least to a significant degree, also a theory of levels B and C.
In other words, TalkOrigins claims that information theory is concerned only with the transmission of randomly generated sequences of symbols and cares only about the statistical aspects of the transaction. This is 100% wrong, because Weaver says on page 7:
The information source selects a desired message out of a set of possible messages (this is a particularly important remark, which requires considerable explanation later). The selected message may consist of written or spoken words, or of pictures, music, etc.
What TalkOrigins is doing is denying the importance of meaning in a signal, then conflating this with the fact that Shannon’s formulas cannot discern meaning but only the accuracy of the signal. This is an egregious misrepresentation of Shannon. It’s inexcusable.
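The distinction is easy to demonstrate: Shannon’s measure is computed from symbol frequencies alone, which is precisely why his formulas cannot register meaning. A minimal sketch (the example strings are my own):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per symbol, computed from symbol frequencies alone."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sentence = "in the beginning was the word"
scrambled = "".join(sorted(sentence))  # same letters, meaning destroyed

# Identical letter frequencies give identical entropy, even though one
# string is meaningful and the other is gibberish. The formula measures
# the statistical aspect of the signal, not its meaning.
print(abs(shannon_entropy(sentence) - shannon_entropy(scrambled)) < 1e-9)  # True
```

The formula is blind to which string carries meaning; that is a limitation of what the math can measure, not a denial that meaning exists.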
Let’s go on – it gets better:
In SC2 Gitt notes that Chaitin showed randomness cannot be proven (see Chaitin’s article “Randomness and Mathematical Proof”), and that the cause of a string of symbols must therefore be known to determine whether information is present; yet in SC1 he relies on discerning the “ulterior intention at the semantic, pragmatic and apobetic levels.” In other words, Gitt allows himself to make guesses about the intelligence and purpose behind a source of a series of symbols, even though he doesn’t know whether the source of the symbols is random. Gitt is trying to have it both ways here. He wants to assert that the genome fits his strictly non-random definition of information, even after acknowledging that randomness cannot be proven.
You can’t prove randomness. But you can prove non-randomness! Shannon and Gitt both show that you can identify non-random statistical regularities in a signal (an “ergodic” source is one whose statistical properties are consistent throughout the message).
You cannot study language without making inferences about the intelligence and purpose behind the source of a series of symbols. That is the very definition of linguistics and cryptography: in both fields you often don’t know the source of a message, and you are trying to infer it accurately.
The fact that randomness cannot be proven does not contradict Gitt at all; it contradicts Neo-Darwinism! You can’t prove randomness, but you can prove non-randomness. It is possible to show that all languages, including the genetic code, are non-random.
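The asymmetry between proving randomness and demonstrating non-randomness can be illustrated with a simple sketch. Below, compressibility serves as a rough proxy for a statistical test (the sample strings are invented for illustration): structured text compresses well because it has exploitable regularities, while near-uniform random text barely compresses at all.

```python
import random
import string
import zlib

# English-like text has exploitable statistical regularities;
# text drawn uniformly at random does not.
english = ("the quick brown fox jumps over the lazy dog " * 50).encode()
rng = random.Random(0)
junk = "".join(rng.choice(string.ascii_lowercase + " ")
               for _ in range(len(english))).encode()

def ratio(data):
    """Compressed size over original size: lower means more structure."""
    return len(zlib.compress(data)) / len(data)

# The structured text compresses far better than the random one:
# positive evidence of non-randomness, which is exactly what a
# statistical test CAN establish (the converse cannot be proven).
print(ratio(english) < ratio(junk))  # True
```

A low compression ratio is positive evidence of structure; a high ratio proves nothing, which is Chaitin’s point in miniature.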
Neo-Darwinism says that random copying errors drive evolution and that random processes created the genetic code in the first place. Both statements are by definition impossible to prove. Neo-Darwinism is by definition scientifically unprovable, and every mathematician who studies randomness knows this. (This is why I advocate a theory of evolution by systematic rearrangement of genes. That is scientific. Random mutation theory is not.)
Next from TalkOrigins:
Gitt describes his principles as “empirical”, yet the data is not provided to back this up. Similarly, he proposes fourteen “theorems”, yet fails to demonstrate them. Shannon, in contrast, offers the math to back up his theorems. It is difficult to see how Gitt’s “empirical principles” and “theorems” are anything but arbitrary assertions.
Neither do we see a working measure for meaning (a yet-unsolved problem Shannon wisely avoided). Since Gitt can’t define what meaning is sufficiently to measure it, his ideas don’t amount to much more than arm-waving.
TalkOrigins pretends that Gitt claimed he could prove his theorems. Gitt explicitly stated that they are to be taken as true until an exception is found. All logical propositions ultimately rest on unprovable statements (after Gödel). Gitt has offered these as theorems, and the burden of proof is on TalkOrigins to show that any one of them is incorrect. They have not done so, and every one of Gitt’s theorems matches common experience.
More from TalkOrigins:
By asserting that data must have an intelligent source to be considered information, and by assuming genomic sequences are information fitting that definition, Gitt defines into existence an intelligent source for the genome without going to the trouble of checking whether one was actually there. This is circular reasoning.
Gitt does not assert that data must have an intelligent source to be considered information. He observes that every non-repeating sequence of symbols possessing statistics, syntax and semantics has an intelligent source, in every case where the origin is known. That is not circular reasoning; it is standard inductive inference.
Thus every objection TalkOrigins makes to Gitt in this article is shown to be wrong.
Let’s go on to the second article:
1. The genetic code is not a true code; it is more of a cypher.
This is patently false. It contradicts the genetics literature defining codes, the genetic code, and the reasons we call DNA the genetic code. I have a stack of biology books and none of them call DNA a cipher; most do not even contain the word.
Hubert Yockey shows why DNA is a code in his book “Information Theory, Evolution and the Origin of Life” (Cambridge University Press, 2005) http://www.amazon.com/gp/product/0521802938?ie=UTF8&tag=httpwwwperryc-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0521802938
You can read a summary of Yockey’s definitions at www.evo2.org/faq
To say that DNA is not a code is an inexcusable misrepresentation of one of the most basic facts in all of science. TalkOrigins should be ashamed of this. I’m amazed this article even exists.
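The structure Yockey describes is easy to write down: each codon is a symbol that maps by convention to an amino acid, which is exactly the mapping structure of a code. A minimal sketch using a small fragment of the standard codon table (the helper function is my own illustration):

```python
# A fragment of the standard genetic code: codons (symbols) map to
# amino acids (referents). This symbol-to-referent mapping is what
# qualifies DNA as a code in Yockey's sense, not a mere cipher.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",
    "AUG": "Met",  # also serves as the start signal
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Decode an mRNA string codon by codon, halting at a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "Stop":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGAUAA"))  # ['Met', 'Phe', 'Gly']
```

Note that a cipher merely substitutes symbol for symbol within one alphabet; here, triplets in one alphabet (nucleotides) designate members of an entirely different alphabet (amino acids).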
An essential property of language is that any word can refer to any object. That is not true in genetics. The genetic code which maps codons to proteins could be changed, but doing so would change the meaning of all sequences that code for proteins, and it could not create arbitrary new meanings for all DNA sequences. Genetics is not true language.
Webster’s Dictionary defines language as (2): a systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, gestures, or marks having understood meanings
DNA fits most definitions of language, though not necessarily all. The stipulation that any word can refer to any object is a minor one, and it may well be wrong, because in theory there is an infinite number of possible configurations of biological machines.
The word frequencies of all natural languages follow a power law (Zipf’s Law). DNA does not follow this pattern (Tsonis et al. 1997).
Zipf’s law has nothing to do with the definition of language, so this objection is irrelevant. Furthermore, depending on the analysis used, DNA does in fact follow Zipf’s law. See for example http://www.ecology.kyoto-u.ac.jp/ecology/activities/images/Nowak_Sympo_abs.pdf
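For readers curious what such a rank-frequency check involves, here is a minimal sketch of the mechanics (a meaningful test requires a large corpus; the toy sentence below is only to show how the tabulation works):

```python
from collections import Counter

def rank_frequencies(text):
    """Return word frequencies sorted from most to least common."""
    counts = Counter(text.lower().split())
    return sorted(counts.values(), reverse=True)

# Zipf's law predicts frequency roughly proportional to 1/rank,
# i.e. rank * frequency stays roughly constant across ranks.
text = "the cat sat on the mat and the dog sat on the log"
freqs = rank_frequencies(text)
print([(rank, f, rank * f) for rank, f in enumerate(freqs, start=1)])
```

Whether a genome “follows Zipf” then depends entirely on what one counts as a “word” (codons, k-mers, genes), which is why different analyses reach different conclusions.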
Let’s go to the third article:
[Creationist] Claim: In every case where a machine’s origin can be determined, it is the result of intelligent agency. (A machine is a device for transmitting or modifying force or energy.) Out of billions of observations, there are no exceptions. It should be considered a law of nature that machines, including those in living organisms, have an intelligent cause.
1. The claim is an argument by analogy: Life is like man-made objects in containing machines, therefore it is like man-made objects in having an intelligent cause. It suffers the weaknesses of all arguments by analogy. In particular, it ignores dissimilarities between life and design, and the similarity has questionable relevance to intelligence.
My answer: The use of the word code in biology is not analogy (Yockey, 2005). Therefore the argument that life is designed is not an argument from analogy.
Many machines occur in nature without the involvement of intelligence or, indeed, of any kind of life. The following list is far from exhaustive.
* Inclined planes, perhaps the simplest type of machine, are ubiquitous on earth. Functions include causing waves to break and making it easier for animals to climb heights.
* Ice wedges, another form of wedge, contribute significantly to erosion.
* Molecular bonds function as springs as they transmit and distribute forces through materials.
* Thunder clouds generate electrical forces.
* The earth as a whole is a dynamo, converting mechanical motion of convection into a magnetic field.
* Geysers produce eruptions which are predictable and fairly regular. If Paley’s watch can be considered a machine, surely Yellowstone’s Old Faithful is a machine, too, but I have never heard any suggestion that it is designed.
The problem here is an unacceptably vague definition of machine. An inclined plane is a machine by one definition, but that’s obviously not what the creationist was talking about. Thanks to TalkOrigins for a straw-man argument.
In my work I have focused on machines that use or produce code – i.e. variations on the Turing Machine. None of the above machines are Turing machines in the remotest sense and none of them produce codes.
Other machines are created by life but not by intelligence. Genetic algorithms design or help to design many kinds of machines, from antennae to jet engines (Marczyk 2004). One may attempt to argue that items designed by a genetic algorithm inherit the intelligent agency of the algorithm’s designer, but this misses the point that no human mental activity directs the immediate operation of the algorithm. In some cases, for example in some electronic circuits, the algorithmically-designed results show no resemblance to their human-designed versions, and indeed, cannot be explained via human design methods (Koza et al. 2003).
All genetic algorithms originate from conscious beings; there are no known exceptions. At all times, GAs obey the rules of a man-made code. Intelligence is always necessary to have a GA.
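A minimal sketch makes the point concrete: everything a genetic algorithm does is bounded by rules a programmer wrote. The encoding, fitness function, selection rule and mutation rate below are all arbitrary choices of mine:

```python
import random

# A toy genetic algorithm. Note how much is specified by the
# programmer: the encoding, the fitness function, the selection
# rule, the mutation rate. The algorithm explores; a mind set the rules.
rng = random.Random(42)
TARGET = [1] * 20                           # programmer-chosen goal
fitness = lambda g: sum(a == b for a, b in zip(g, TARGET))

population = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]               # programmer-chosen selection
    population = [
        [bit if rng.random() > 0.05 else 1 - bit   # 5% mutation rate
         for bit in rng.choice(parents)]
        for _ in range(30)
    ]

best = max(population, key=fitness)
print(fitness(best))  # climbs toward the maximum of 20
```

No step in the loop requires ongoing human attention, yet every rule the loop follows was authored in advance.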
Thus we see that at every single point, TalkOrigins has failed to identify flaws in Gitt’s work. I would say the same of every other atheist critique I have read. Gitt’s book remains an outstanding exposé of the flaws of materialistic biology.
Perry Marshall

Download The First 3 Chapters of Evolution 2.0 For Free, Here – https://evo-2.org/3-free-chapters/
Where Did Life And The Genetic Code Come From? Can The Answer Build Superior AI? The #1 Mystery In Science Now Has A $10 Million Prize. Learn More About It, Here – https://www.herox.com/evolution2.0