But I am talking about Gitt information, you've just chosen to respond about other things. For example, I pointed out that one of the key differences between Shannon and Gitt information is that Gitt includes meaning as part of his definition.
I would just like to add a few points to the discussion of the way "mainstream science" works and support the comments by Percy. Gitt's definitions are a very good example, because the issue really does not need to be about evolution or creationism but about how a scientific term is used in science and how papers produce impact. Gitt's books (like most general books for the lay audience) do not play by academic rules for inclusion into mainstream science. And as far as I can tell, Gitt has never attempted to publish a paper about his notion of information in an academic journal (please correct me if I am wrong) so we do not even know whether he has made an effort to attract information scientists to his point of view.
I have commented on his book elsewhere (http://www.evolution...st=). The main conclusions in the final chapter of his book concern the amount of information contained in the Bible, and I have not seen anyone successfully summarize his logic regarding evolution. I think this is because it is not logical, but that is simply my opinion based on my background in information theory. I am happy to let someone make another attempt at summarizing it if they feel there is some important logic there that I have missed.
Mainstream science and publications in mainstream science do have a number of rules. Graduate students are typically trained to understand these rules. Many of these rules have very good reasons and many are simply tradition. Many rules are a combination of both.
For example, you are required to cite all relevant predecessors and accurately describe the history of the science that led to your paper. You will be rejected quickly if you demonstrate that you do not understand that history correctly, even if you have a worthwhile experiment or hypothesis. You must demonstrate that you understand the issues and history of any field where you wish to publish. You cannot reinvent ideas or take credit for ideas that have already been published. And even a paper that does get published will likely be ignored, good ideas and all, if it is poorly written.
Another rule is that you must understand the accepted definitions in your field. Whether you are using a term like "meter" or "axiom" or "entropy" or "information" you must understand precisely what the terms mean in your field. You are welcome to include new terms and new information. However, you cannot simply redefine terms at your convenience. Science depends on consistent use of terms and again, any paper that is not consistent will be rejected.
For example, the scientific term "information" is not the same as the everyday word. It is extremely well defined, and it serves as the basis of modern computing, image compression, communication across the internet, and much of biology, chemistry, and physics. It is the standard. It is mainstream, and it is extremely powerful and useful. In any scientific paper where you use the term properly, scientists will know exactly what you mean without your needing to explain it. You may not like the term, but that is tough. You will have as much luck changing it as changing the definition of "meter".
However, even Shannon was well aware that his definition comes up short when we want to describe the "meaning" of a signal. Hundreds of academic papers have addressed this issue, and you will see terms like "semantic information" used to qualify the definition. Even on this website, I believe Fred has made an effort to restrict and qualify the use of the term (quite correctly) because of confusion among readers with scientific training, who naturally assume that "information" refers to the standard scientific term (Shannon information).
Ok. Now let me try to briefly explain why including "meaning" is so difficult when talking about information.
Imagine I have four possible signals and assume they have equal probability. Call them
A, B, C, and D.
Now imagine that you do not know what signal I am about to send but you are waiting patiently.
If I give you the answer C, I have provided you information. From the Shannon point of view, I have given you two bits of information. As Wikipedia puts it (http://en.wikipedia....iki/Information):
The view of information as a message came into prominence with the publication in 1948 of an influential paper by Claude Shannon, "A Mathematical Theory of Communication." This paper provides the foundations of information theory and endows the word information not only with a technical meaning but also a measure. If the sending device is equally likely to send any one of a set of N messages, then the preferred measure of "the information produced when one message is chosen from the set" is the base two logarithm of N (This measure is called self-information).
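To make that arithmetic concrete, here is a minimal sketch of the self-information calculation for the four-signal example (plain Python; nothing assumed beyond the standard library):

```python
import math

# Four equally likely signals, as in the example above.
signals = ["A", "B", "C", "D"]
N = len(signals)

# Shannon self-information of one message chosen from N equiprobable
# messages is log2(N) bits.
bits = math.log2(N)
print(bits)  # prints 2.0
```

With N = 4 equiprobable signals, receiving any one of them (C included) conveys log2(4) = 2 bits, regardless of which signal it is.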
Now imagine, that each of the letters has the following meanings.
A. represents (means) the Oxford English Dictionary
B. represents the book of Genesis
C. represents my cat
D. represents nothing.
Ok. Now, how much information is represented by "C"? There is no easy way to quantify this from the point of view of meaning. It depends a lot on the receiver (if you knew my cat was Siamese does C provide more meaning?). It also depends on the honesty of the signal and the interpretation. How much information is sent if you thought C represented your cat but I thought it represented my cat?
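The table of meanings above can be written down in code, but notice that it never enters the Shannon calculation at all. A minimal sketch (plain Python; the meaning strings are just the illustrative ones from this example):

```python
import math

# Two different "meaning" assignments for the same four signals.
# The strings are illustrative only; they play no role in the math.
meanings_1 = {"A": "the Oxford English Dictionary",
              "B": "the book of Genesis",
              "C": "my cat",
              "D": "nothing"}
meanings_2 = {"A": "your cat", "B": "my cat",
              "C": "a Siamese cat", "D": "no cat at all"}

def entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Shannon's measure sees only the probabilities, never the meanings:
uniform = [0.25, 0.25, 0.25, 0.25]
print(entropy(uniform))  # prints 2.0 for either meaning assignment
```

Both meaning assignments yield exactly the same two bits, which is precisely the shortfall being described: the formula quantifies surprise about which signal arrived, not what the signal refers to.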
These difficulties do not imply that the general mathematical theory of information cannot include "meaning". However, semantic information quickly gets into a quagmire, and there is no simple approach to quantifying and measuring the "meaning" of a signal. Many people have tried and failed.
If someone here really thinks Gitt is making a substantial contribution to information theory then they should encourage Gitt to begin by taking out the religion (and the evolution) and submit a proper mathematical paper to a journal of information science. If they then think they can apply that new theory to evolutionary theory, then they should follow that paper with a mathematically rigorous paper (with proper citations) applying that mathematical theory to evolutionary theory.
Creationists may not like these rules, but these are the rules of science. No one in science gets a free ticket. Most graduate students in mainstream science never end up with papers that have serious impact. It is hard work and usually takes a deep understanding of the field to be capable of communicating a new idea - and it is even harder coming up with a new idea that has merit.
Well, hope that helps. Although I am sure few Creationists would agree. In most areas outside of mainstream science (astrology, aliens, bigfoot, ESP, etc.), I have noticed that it is often easier for proponents to believe there is a conspiracy against them than to accept the possibility that it is their science that has failed.