

Information


  • This topic is locked
16 replies to this topic

#1 Guest_92g_*

  • Guests

Posted 09 April 2005 - 05:58 PM

Gert Korthof has a web site that gives about as fair an evaluation of both sides of this debate as I've found from an evolutionist. I find it odd that he sees information the way he does and still believes in evolution, but I think he's on the right track.... :rolleyes:

The amount of meaningful information in a string of symbols depends on the number of matches it has with strings in an established dictionary. This is valid for human language but also for DNA language. Geneticists determine the meaning of a piece of DNA by searching in an established 'gene dictionary' (established from many other species). The rest of the DNA string can be assumed random until a match is found with a dictionary word. This method has recently been demonstrated by a team that used a bigger human genome 'gene-dictionary' (11 instead of 2 databases) and found 65,000-75,000 matches (instead of 30,000).
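A minimal Python sketch of this dictionary-matching idea (the word list and function name are hypothetical, not from Korthof's article):

    # Count how many tokens of a string match an established dictionary;
    # unmatched tokens are treated as random until a match is found.
    DICTIONARY = {"the", "cat", "sat", "on", "mat"}

    def dictionary_matches(text, dictionary=DICTIONARY):
        """Return the number of whitespace-separated tokens found in the dictionary."""
        tokens = text.lower().split()
        return sum(1 for token in tokens if token in dictionary)

    print(dictionary_matches("the cat sat on the mat"))  # 6 matches
    print(dictionary_matches("xqj zvw ptk"))             # 0 matches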


http://home.wxs.nl/~...f/kortho44a.htm

Terry

#2 Guest_Paul C. Anagnostopoulos_*

  • Guests

Posted 10 April 2005 - 07:19 AM

Seems an odd definition to me, too. The fact that we haven't discovered some of the information in DNA does not mean that it isn't there. If that were the case, then all the information compiled by alien civilizations would also not be information.

Of course, he is using the term "meaningful information," so maybe that's a special type of information.

~~ Paul

#3 Guest_92g_*

  • Guests

Posted 10 April 2005 - 07:45 AM

The point is that even some evolutionists understand that the genetic code stores information that has meaning. That's in opposition to those who say that there is no meaning in the genetic code, and that life is just a chemical process.

Terry

#4 chance


    Veteran Member

  • Banned
  • 2029 posts
  • Age: 51
  • no affiliation
  • Atheist
  • Australia

Posted 10 April 2005 - 01:40 PM

Gert Korthof has a web site that gives about as fair an evaluation of both sides of this debate as I've found from an evolutionist. I find it odd that he sees information the way he does and still believes in evolution, but I think he's on the right track.... :rolleyes:
http://home.wxs.nl/~...f/kortho44a.htm

Terry



To my way of thinking, a valid code or cipher has ‘hidden’ meaning, thus:

If C = p, A = X, T = h, then ‘cat’ would encode as ‘pXh’. If the substitution relationships were changed on a daily basis, then ‘cat’ could come out as ‘dog’. This way the original language carries information to a receiver, yet the language is still English (garbled English).
For DNA to qualify as a code it would have to do the same, e.g. if DNA string GAT = protein X, will ATG ever produce protein X? Because if it were a code, it should be able to.
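A small Python sketch of the substitution idea, assuming the hypothetical C = p, A = X, T = h mapping above:

    # Substitution cipher: replace each letter via a fixed table,
    # passing through any character that has no mapping.
    SUBSTITUTION = {"c": "p", "a": "X", "t": "h"}

    def encode(plaintext, table=SUBSTITUTION):
        return "".join(table.get(ch, ch) for ch in plaintext.lower())

    print(encode("cat"))  # pXh
    # Swap the table daily and the same plaintext encodes differently,
    # but the underlying language is still English.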

From the same link I found this quote

The kind of 'information' produced by a mindless computer program or a natural, physical mindless process is cheap, worthless, meaningless information. Just let it run forever and it produces an endless stream of 'information'! That is not what humans call information. There is the real paradox.
Another way of stating the paradox is: a long random string of letters has a higher Information Content than a book of the same length. This is so because a random string is hardly compressible at all, while the content of a book can easily be compressed (as everyone knows who uses PC compression programs such as pkzip, winzip, etc). In fact a random string has the maximum amount of information. This definition of information was not invented by Paul Davies, but by Shannon, Chaitin and Kolmogorov. The next table shows increasing information content according to the mathematical definition:

highly repetitious sequences < book < random string


In everyday life the opposite is true: a random string of letters obviously has a lower, maybe the minimum, Information Content:

random string < highly repetitious sequences < book


So the ranking of books and random strings on the basis of the compressibility criterion yields a result clearly contradicting common sense knowledge. The point is that the word 'information content' is misleading outside the mathematical context. I propose to call that kind of Information Content just what it is: compressibility and nothing more.

Emphasis mine.
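A rough way to see the quoted ranking for yourself, using Python's zlib as a stand-in for pkzip/winzip (the sample strings are made up, and a repeated sentence is only a crude stand-in for a book):

    import random
    import string
    import zlib

    def compressed_size(text):
        # Length in bytes of the zlib-compressed text.
        return len(zlib.compress(text.encode("utf-8")))

    n = 10_000
    repetitious = "ab" * (n // 2)
    book_like = ("the quick brown fox jumps over the lazy dog " * 250)[:n]
    random_str = "".join(random.choice(string.ascii_lowercase) for _ in range(n))

    # Expected ordering of compressed sizes (the mathematical criterion):
    # repetitious < book-like < random
    for name, text in [("repetitious", repetitious),
                       ("book-like", book_like),
                       ("random", random_str)]:
        print(name, compressed_size(text))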


92g: we had a bit of a cross-post, so I edited this post to include some text from your link; hence the small difference in the newly created “does DNA contain information” thread.

Edited by chance, 10 April 2005 - 02:43 PM.


#5 Guest_Calipithecus_*

  • Guests

Posted 10 April 2005 - 03:07 PM

The amount of meaningful information in a string of symbols depends on the number of matches it has with strings in an established dictionary. This is valid for human language but also for DNA language.

I don't even agree that this is valid for human language, let alone 'DNA language'.
This random text generator produces grammatically correct sentences like this one:

"If postcapitalist nationalism holds, we have to choose between structuralist objectivism and Sartreist absurdity. Thus, the subject is interpolated into a nationalism that includes culture as a reality."

Such randomly assembled sentences use words which may be found in a standard dictionary (as well as some which may be found only in a more specialized one), but they are nonetheless utterly devoid of meaningful content. That they are so difficult to distinguish from the style they parody is a fact that I DO consider meaningful, but that meaning is not itself a property of language.

#6 Fred Williams


    Administrator / Forum Owner

  • Admin Team
  • 2467 posts
  • Gender:Male
  • Location:Broomfield, Colorado
  • Interests:I enjoy going to Broncos games, my son's HS basketball & baseball games, and my daughter's piano & dance recitals. I enjoy playing basketball (when able). I occasionally play keyboards for my church's praise team. I am a Senior Staff Firmware Engineer at Micron, and am co-host of Real Science Radio.
  • Age: 52
  • Christian
  • Young Earth Creationist
  • Broomfield, Colorado

Posted 10 April 2005 - 05:35 PM

That a random string has maximum information is easily the most common error made in information theory. None of the people Korthof cited, as far as I know, have any experience with information theory. Essentially, Shannon’s paper was misunderstood, and too much was read into it. Dr Tom Schneider (an evolutionist and Marxist) is trained in information science and has a good article that refutes all these downright ludicrous claims about randomness:

http://www.lecb.ncif...ncertainty.html

(unfortunately his website didn’t load; I hope it is just a temporary thing)

Dr Schneider and I disagree on many things (he tends to limit information to the Shannon level, though not completely), but he is spot on in many of his papers on Shannon information. I highly recommend his primers (he even has a page on me, if you can find it. :rolleyes: )

Fred

#7 Guest_92g_*

  • Guests

Posted 10 April 2005 - 06:02 PM

So the ranking of books and random strings on the basis of the compressibility criterion yields a result clearly contradicting common sense knowledge. The point is that the word 'information content' is misleading outside the mathematical context. I propose to call that kind of Information Content just what it is: compressibility and nothing more.


I think you missed the point. He's not saying that it's not possible to define Information Content outside of mathematics. To the contrary, he's looking for a definition.

My proposal is Dictionary based Information Content: information content based on a dictionary specified in advance



I should also say that I'm not promoting anything beyond his observation that DNA has a semantic aspect. That's the critical point.

Terry

#8 chance


    Veteran Member

  • Banned
  • 2029 posts
  • Age: 51
  • no affiliation
  • Atheist
  • Australia

Posted 10 April 2005 - 07:21 PM

That a random string has maximum information is easily the most common error made in information theory. None of the people Korthof cited, as far as I know, have any experience with information theory. Essentially, Shannon’s paper was misunderstood, and too much was read into it. <snip>


Agreed.

I think this is a big part of the confusion: a mathematician’s use of the word ‘information’ in the context of code has a different meaning from that generally used in common speech. So counterintuitive is the concept of random text containing more information than a book (as put forward by Shannon) that picking it apart out of context is a meaningless exercise.

From what I have been able to understand, ‘information’ re YEC implies ‘meaning’, while ‘information’ according to Gert Korthof implies compressibility.

#9 chance


    Veteran Member

  • Banned
  • 2029 posts
  • Age: 51
  • no affiliation
  • Atheist
  • Australia

Posted 10 April 2005 - 07:38 PM

I think you missed the point. He's not saying that it's not possible to define Information Content outside of mathematics. To the contrary, he's looking for a definition.
<snip>


I read this as a clarification to show they were not the same thing.

In your second posted quote, I must admit that I have difficulty understanding that paragraph. In full:

Of course there is a relation with the presence of 'dictionary words' and compressibility. But there is no linear relation between compressibility and meaning. My proposal is Dictionary based Information Content: information content based on a dictionary specified in advance. This is not the maximum compressibility that communication engineers and webgraphics designers are after, however it is definitely not subjective or arbitrary, because it easily can be implemented and executed by a computer program and it will capture 'meaning' in a quantitative way. Additionally it shows that meaning is relative to a dictionary. A French text tested with a Dutch dictionary will result in very low values. The software itself can test which dictionary yields the best result.

But I think he is saying that meaning is only apparent if the correct dictionary is used when decoding, which makes sense: a code needs to be pre-arranged between transmitter and receiver (or else the code must be breakable so that it can be understood).
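A short Python sketch of that last point, i.e. letting the software test which dictionary yields the best result (both mini word lists are hypothetical):

    # Score a text against several dictionaries and pick the best match.
    DICTIONARIES = {
        "english": {"the", "cat", "is", "on", "mat"},
        "dutch": {"de", "kat", "is", "op", "mat"},
    }

    def best_dictionary(text, dictionaries=DICTIONARIES):
        tokens = text.lower().split()
        scores = {name: sum(token in words for token in tokens)
                  for name, words in dictionaries.items()}
        return max(scores, key=scores.get), scores

    print(best_dictionary("de kat is op de mat"))
    # ('dutch', {'english': 2, 'dutch': 6})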

#10 Guest_Paul C. Anagnostopoulos_*

  • Guests

Posted 11 April 2005 - 06:50 AM

That a random string has maximum information is easily the most common error made in information theory. None of the people Korthof cited, as far as I know, have any experience with information theory. Essentially, Shannon’s paper was misunderstood, and too much was read into it. Dr Tom Schneider (an evolutionist and Marxist) is trained in information science and has a good article that refutes all these downright ludicrous claims about randomness:

A random string does have maximum Kolmogorov information. The mistake is to conflate Kolmogorov and Shannon information.

~~ Paul

#11 Fred Williams


    Administrator / Forum Owner

  • Admin Team
  • 2467 posts
  • Gender:Male
  • Location:Broomfield, Colorado
  • Interests:I enjoy going to Broncos games, my son's HS basketball & baseball games, and my daughter's piano & dance recitals. I enjoy playing basketball (when able). I occasionally play keyboards for my church's praise team. I am a Senior Staff Firmware Engineer at Micron, and am co-host of Real Science Radio.
  • Age: 52
  • Christian
  • Young Earth Creationist
  • Broomfield, Colorado

Posted 11 April 2005 - 11:35 AM

Agreed.

I think this is a big part of the confusion: a mathematician’s use of the word ‘information’ in the context of code has a different meaning from that generally used in common speech. So counterintuitive is the concept of random text containing more information than a book (as put forward by Shannon) that picking it apart out of context is a meaningless exercise.

From what I have been able to understand, ‘information’ re YEC implies ‘meaning’, while ‘information’ according to Gert Korthof implies compressibility.



The problem is that Shannon was not offering uncertainty H as a measurement of the received information, but instead the reduction of uncertainty as a measure of received information. Dr Schneider’s web site is back up. Here are some good pages to check out:

http://www.lecb.ncif...ncertainty.html
http://www.lecb.ncif...s/pitfalls.html

Again, Dr Schneider and I have great disagreements over what information is (mostly because of its implications for the origins debate), but we are almost in lock-step on what Shannon information is! It is a greatly misunderstood topic, and it was refreshing when I stumbled across his web page about 7 years ago.
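For readers following along, a short sketch of the standard uncertainty formula H = -sum(p * log2 p); this is just textbook Shannon, not code from Dr Schneider's site:

    import math

    def entropy(probs):
        # Shannon uncertainty H in bits per symbol.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit per flip
    print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits
    print(entropy([1.0]))       # two-headed coin: 0.0 bits; the outcome
                                # is certain, so a flip conveys nothing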

Fred

#12 Fred Williams


    Administrator / Forum Owner

  • Admin Team
  • 2467 posts
  • Gender:Male
  • Location:Broomfield, Colorado
  • Interests:I enjoy going to Broncos games, my son's HS basketball & baseball games, and my daughter's piano & dance recitals. I enjoy playing basketball (when able). I occasionally play keyboards for my church's praise team. I am a Senior Staff Firmware Engineer at Micron, and am co-host of Real Science Radio.
  • Age: 52
  • Christian
  • Young Earth Creationist
  • Broomfield, Colorado

Posted 11 April 2005 - 11:58 AM

A random string does have maximum Kolmogorov information. The mistake is to conflate Kolmogorov and Shannon information.

~~ Paul



This is essentially correct, though I believe it somewhat misstates the intent of Kolmogorov information.

For those who are interested, Kolmogorov Complexity is determined by the shortest algorithm required to produce the message (algorithmic information). For example, if two word processors have the exact same features, regardless of how large the program is, they have the same amount of information. Here is another example:

“Paul believes in evolution, something I would term a fairytale, that has no tangible evidence to support it, and therefore it is a fairytale; hence he is a fairytale lover!”

“Paul believes in the fairytale of evolution that has no tangible evidence to support it.”

Both have the same amount of Kolmogorov information, though the 2nd has more Shannon information (decreased uncertainty at the receiver). This is the intent behind Kolmogorov complexity as a measurement of information. So regarding a random string: to produce one doesn’t take much information. A random string has essentially very little information. But if the random string becomes someone’s password, then to reproduce it would require a copy of the password, so at that point the Kolmogorov information is at a maximum for that string.
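A toy Python illustration of the “shortest algorithm” idea (Kolmogorov complexity itself is uncomputable, and the strings below are made up):

    # A highly patterned string has a short generating program, while a
    # random string such as a password has no known description shorter
    # than the string itself.
    patterned = "ab" * 500_000   # a million characters of output...
    program = '"ab" * 500_000'   # ...from a 14-character description

    password = "x7Qp2LmZ"        # hypothetical password: its shortest known
                                 # "program" is the literal string itself

    print(len(patterned), len(program))  # 1000000 14
    print(len(password))                 # 8: already its own shortest description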

Fred

#13 chance


    Veteran Member

  • Banned
  • 2029 posts
  • Age: 51
  • no affiliation
  • Atheist
  • Australia

Posted 11 April 2005 - 01:35 PM

<snip> Here is another example:

“Paul believes in evolution, something I would term a fairytale, that has no tangible evidence to support it, and therefore it is a fairytale; hence he is a fairytale lover!”

“Paul believes in the fairytale of evolution that has no tangible evidence to support it.”

Both have the same amount of Kolmogorov information, though the 2nd has more Shannon information (decreased uncertainty at the receiver). This is the intent behind Kolmogorov complexity as a measurement of information. So regarding a random string: to produce one doesn’t take much information. A random string has essentially very little information. But if the random string becomes someone’s password, then to reproduce it would require a copy of the password, so at that point the Kolmogorov information is at a maximum for that string.

Fred



Nicely explained; I think that clears up quite a few of the difficulties I have been having with the mathematical aspects of information.

I shall take a look at the links later.

#14 Guest_Paul C. Anagnostopoulos_*

  • Guests

Posted 11 April 2005 - 03:46 PM

“Paul believes in evolution, something I would term a fairytale, that has no tangible evidence to support it, and therefore it is a fairytale; hence he is a fairytale lover!”

“Paul believes in the fairytale of evolution that has no tangible evidence to support it.”

Both have the same amount of Kolmogorov information, though the 2nd has more Shannon information (decreased uncertainty at the receiver). This is the intent behind Kolmogorov complexity as a measurement of information. So regarding a random string: to produce one doesn’t take much information. A random string has essentially very little information. But if the random string becomes someone’s password, then to reproduce it would require a copy of the password, so at that point the Kolmogorov information is at a maximum for that string.

How do you figure they both have the same amount of K-information?

The question is, how much information does it take to produce a given random string? Since it is incompressible, it takes an amount of information equal to the length of the string.

The other problem with random strings is that you cannot algorithmically produce a true random string.

~~ Paul

#15 Guest_Paul C. Anagnostopoulos_*

  • Guests

Posted 11 April 2005 - 03:47 PM

The problem is that Shannon was not offering uncertainty H as a measurement of the received information, but instead the reduction of uncertainty as a measure of received information.

Bingo. There is no information in 50 coin flips if both sides of the coin are heads.
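Worked through with the standard formula: a two-headed coin has p(heads) = 1, so H = -(1 × log2 1) = 0 bits per flip, and 50 flips convey 50 × 0 = 0 bits.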

~~ Paul

#16 Method


    Banned

  • Banned
  • 174 posts
  • Age: 29
  • no affiliation
  • Agnostic
  • State of Bliss

Posted 12 April 2005 - 01:39 PM

And to add to Paul's question: how can we tell the difference between a randomly generated genetic sequence and a genetic sequence from a living being? Can we do this by looking at the genetic sequence alone, or do you have to look at protein function?

For instance, the phrase "Hand me a glass" has meaning outside of the action it causes in others. It seems to me that a genetic sequence has no meaning outside of the protein function it creates, or more generally the action it causes in the environment. This seems to indicate that DNA does not have a level of abstraction.

#17 Fred Williams


    Administrator / Forum Owner

  • Admin Team
  • 2467 posts
  • Gender:Male
  • Location:Broomfield, Colorado
  • Interests:I enjoy going to Broncos games, my son's HS basketball & baseball games, and my daughter's piano & dance recitals. I enjoy playing basketball (when able). I occasionally play keyboards for my church's praise team. I am a Senior Staff Firmware Engineer at Micron, and am co-host of Real Science Radio.
  • Age: 52
  • Christian
  • Young Earth Creationist
  • Broomfield, Colorado

Posted 14 April 2005 - 04:51 PM

How do you figure they both have the same amount of K-information?



They define the same object. I don’t have much time to look around, but check some of Chaitin’s papers as they are readily available. I did find this after a quick search:

“If the simplicity function is just the length of the instructions, we are then trying to find a minimal description, i.e., an optimally efficient encoding of the data sequence.” – Minsky [http://www.umcs.maine.edu/~chaitin/ibm.pdf#search='chaitin%20information%20given%20string']

Fred



