I recently came across this website by Perry Marshall, which presents a really interesting proof of the existence of God.
The argument is basically that DNA constitutes information (a code), yet all information that we know of is the product of a mind. Randomness cannot create information. Therefore, God exists.
Lovely argument. Now let's pick some holes.
1) My first observation is that this argument is almost exactly the classical entropy argument: DNA is a low-entropy state, yet randomness always increases entropy; therefore DNA cannot be the product of random processes, and must instead be the work of God (or Maxwell's Demon).
However this argument is invalid because localised decreases in entropy are perfectly possible, and expected, even though the entropy of the system as a whole increases.
Considering that the site claims to make use of information theory, it presumably is aware of information entropy: http://en.wikipedia.org/wiki/Information_entropy
It follows that DNA has low information entropy, since the DNA sequence is not random. This mirrors the physical-entropy case exactly, so I think the low (information) entropy argument for God falls for the same reasons. Indeed, information entropy and thermodynamic entropy are formally connected (Landauer's principle, for example, assigns a thermodynamic cost to erasing a bit), so the argument is essentially just thermodynamics restated.
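To make the idea concrete, here is a minimal sketch (my own illustration, nothing from Marshall's site) of Shannon entropy estimated from symbol frequencies. Note that this zeroth-order estimate only sees letter frequencies, not higher-order structure, but it shows the basic point: ordered sequences score low, while uniformly random ones approach the maximum.

```python
import math
import random
from collections import Counter

def shannon_entropy(seq):
    """Zeroth-order Shannon entropy (bits per symbol) from symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A strictly alternating sequence uses two symbols equally: 1 bit/symbol.
print(shannon_entropy("AT" * 8))        # 1.0

# A uniformly random 4-letter sequence approaches the 2-bit maximum.
random.seed(0)
rand_seq = "".join(random.choice("ACGT") for _ in range(10000))
print(shannon_entropy(rand_seq))        # close to 2.0
```

A constant sequence like "AAAA..." scores exactly zero: it carries no information at all by this measure.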
2) The "information" in the whole system is ignored. You cannot look at isolated parts of the system and claim entropy reversal; you need to look at the (information) entropy of the entire system. That means looking at ALL of the random mutations, not just the successful ones. Whilst the successful DNA may have lower (information) entropy, the unsuccessful mutations have higher (information) entropy. The overall entropy increase from the useless information outweighs the entropy decrease from the useful information.
This is entirely in keeping with the second law of thermodynamics.
3) Genetic algorithms "create information", yet they are not a "mind". Therefore it is untrue to state that information is always the product of a mind.
http://en.wikipedia.org/wiki/Genetic_algorithm
http://www.talkorigins.org/indexcc/CF/CF011.html
Marshall is in denial that genetic algorithms actually work.
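To see the point in miniature, here is a sketch of a Dawkins-style "weasel" genetic algorithm (my own toy example, not Marshall's demo and not any specific published implementation): random mutation plus selection assembles a target phrase from pure noise. The target-based fitness function is admittedly a simplification of real selection, but the mechanism, variation plus non-random retention, is the same.

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    """Number of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, randomising each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

# Start from pure noise; evolve by mutation plus selection alone.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(best) < len(TARGET):
    offspring = [mutate(best) for _ in range(100)]
    best = max(offspring + [best], key=fitness)
    generation += 1
print(generation, best)
```

No mind supplies the intermediate sequences; mutation proposes them blindly, and selection merely keeps the best of each batch, yet the target is reached in a modest number of generations.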
4) Infinite digit sequences, such as the digits of pi, can be regarded as "truly random" sources. If pi is normal, as is widely conjectured (its known digits certainly behave statistically like a random sequence), then its digits contain the complete works of Shakespeare with probability 1. For a given sequence of N bits, you would expect to scan approximately 2^N bits of the sequence before finding it.
Therefore it is untrue that random sources cannot create information.
From a random source, if you wait long enough, you will find the works of Shakespeare, your entire genome, and indeed the genomes of every organism on the planet, again with probability 1. Of course, you would be waiting a long, but finite, time.
The problem is that this time is vastly longer than the age of the universe. It is the equivalent of a complete cell popping out of thin air, which is too unlikely to contemplate, even given the size and age of the universe. Far more likely is some kind of bootstrap process, whereby life started as very simple molecules, not complete cells.
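The 2^N waiting time is easy to check empirically for small N. The sketch below (my own illustration, under the assumptions of a fair random bit stream and an arbitrarily chosen 8-bit target) measures the average number of bits that pass before the pattern first appears; it should come out near 2^8 = 256.

```python
import random

random.seed(2)

def wait_for(pattern):
    """Number of random bits drawn before `pattern` first appears."""
    window = ""
    count = 0
    while window != pattern:
        window = (window + str(random.randint(0, 1)))[-len(pattern):]
        count += 1
    return count

N = 8
pattern = "10110011"                      # an arbitrary N-bit target
trials = [wait_for(pattern) for _ in range(200)]
mean_wait = sum(trials) / len(trials)
print(mean_wait, 2 ** N)                  # mean waiting time vs 2^N = 256
```

(Strictly, the exact expectation depends slightly on the pattern's self-overlaps, but it is within a few bits of 2^N for most patterns. Scaling N up to the length of a genome is what pushes the waiting time past the age of the universe.)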
The problem isn't creating the information; it is deciding whether the information is any good. This is where natural selection comes in. The counter-argument is then "only a mind can select good information". But the whole point of natural selection is that nature selects the best information. This is why I spell God n,a,t,u,r,e.
5) Computers regularly create information. For example they can calculate the numbers in a spreadsheet, forces in a bridge, digits of pi, weather forecasts etc. This information didn't come from a mind. There was no foreknowledge of the result of a calculation (otherwise, what would be the point in running the calculation?)
Therefore it is untrue that all useful information is the product of a mind.
(Note that even as the computer runs, the entropy of the system as a whole increases, even though there is a local decrease in information entropy produced by the computer).
6) Useful information isn't necessarily created or consumed by minds. Genetic information isn't consumed by a mind: it is transcribed into messenger RNA and translated by ribosomes, and it was working perfectly well before we observed it. Similarly, computers produce and consume vast quantities of useful information, the bulk of which is internal and is neither generated by, nor seen by, human beings.
This again disproves the idea that only minds produce useful information.
7) There is an attempt to distinguish "codes" from "information". A code (by Marshall's definition) is created by an intelligence. This is an abuse of the word code, and it makes the argument circular: if you define a code as something created by intelligence, then by tautology intelligence creates codes. That proves nothing. Whether codes are traditionally man-made is irrelevant.
Maybe the definition of a code is "useful information", as opposed to "useless information". The problem is then "who distinguishes useful from useless information?" The question is incorrectly phrased: it should be "what distinguishes useful information from useless information". (Natural selection springs to mind).
Information is only useful if there is some means to decode it. In biology, DNA is decoded by transcription into messenger RNA and translation by ribosomes - no mind is involved.
It is fully correct that the narrow definition of code (as the product of a mind) does not apply to DNA. It is only the broader definition (as useful information) which makes sense.
8) Evolution occurs in test tubes. We aren't talking about speciation here, we are talking about useful mutations creating stronger organisms. For example, you can make microbes more heat-resistant. According to Marshall, this would be impossible.
9) Why would God need a mind? I spell God "nature" and I also choose not to personify him (sorry, it). A brain is an evolved survival organ for an animal; what has that got to do with the creation of physics?
(Irrelevant, but half of this argument is about God after all.)
In fact, if we say "God created everything", then that works for your definition of God, and my definition of God. But this (like the definition of a code) is a tautology and doesn't gain us an iota of insight.
10) The brain is governed by physics. Some say it is governed entirely by physics (i.e. there is no soul playing puppet-master - in fact if there was then what would be the point of a brain?) Therefore physical processes are ultimately responsible for the output of our brains. Therefore physical processes create information.
11) This whole argument smells like "if it looks designed, then it is designed", or the "how else, but God" arguments. Information is being used as a synonym for design, not as a mathematical concept.
12) Remember that intelligent design has never been observed - it is an unproven hypothesis. We ought to be able to catch it in a lab, and see entropy reversals before our eyes so unlikely that the more likely explanation is God. I really would be delighted if this were demonstrated. It would be a valid experiment.
13) Marshall has a demo purporting to show that mutation only destroys information. The flaw in this demonstration is that there is no natural selection: the mutant is never allowed to compete with the original (or with other mutants) to see which is better. That's like putting a frog in a blender. In nature, information is copied, not destroyed, with a very low mutation rate.
If, in nature, there were no natural selection, then the progressive mutation of our genes would indeed destroy us. However, natural selection is a negative-entropy force which counteracts the positive entropy of random mutation.
If you increase the mutation rate or take away natural selection, then the power of mutation outweighs the power of natural selection, and the genome degrades. This is precisely what happens if you irradiate fruit flies.
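This balance is easy to simulate. The sketch below is my own toy model (not the fruit-fly experiment, and the 20-base "wild type" genome is invented for illustration): a population of genomes mutates for 200 generations, with and without selection for similarity to the wild type. With selection the sequence persists; without it, even the best genome drifts toward the random baseline.

```python
import random

random.seed(3)
ALPHABET = "ACGT"
WILD_TYPE = "ACGTTGCAACGTTGCAACGT"        # a hypothetical 20-base genome

def similarity(genome):
    """Number of bases matching the wild type."""
    return sum(a == b for a, b in zip(genome, WILD_TYPE))

def mutate(genome, rate=0.02):
    """Randomise each base with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else b
                   for b in genome)

def run(generations, selection):
    """Mutate a population of 50 copies; optionally keep the best half each step."""
    pop = [WILD_TYPE] * 50
    for _ in range(generations):
        pop = [mutate(g) for g in pop]
        if selection:
            pop.sort(key=similarity, reverse=True)
            pop = pop[:25] * 2            # best half survives and is duplicated
    return similarity(max(pop, key=similarity))

with_sel = run(200, selection=True)
without_sel = run(200, selection=False)
print(with_sel, without_sel)              # selection preserves far more of the genome
```

Raising the mutation rate in this model has the same effect as switching selection off: once mutation outpaces selection, fidelity collapses, which is the fruit-fly irradiation story in miniature.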
14) I've put together my own demo showing that useful information can be created using mutation combined with selection. A tower of blocks is stacked, and the objective is to make the tower lean as far as possible without toppling.
In this demonstration, there are 10 "genes": 10 numbers representing the position of each wooden block. The genes are mutated, producing some towers that lean further than others, and some that fall over. The best tower is selected, and the process repeats.
How far out can the tower lean? One block width? Two? Three halves? Infinitely far? What is the best design of the tower? I didn't know before the program was run, and the individual genes didn't know either. Only by running the genetic algorithm was I able to learn this, thereby gaining information.
So where did this information come from? It came from the fitness function, which supplies up to 1 bit of information to the genome per generation. In the demo, you can turn off the fitness function, and guess what, the information is lost due to mutation.
At the end of the simulation, the design is stable and the effects of mutation and natural selection are in balance.
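Since the demo's code isn't reproduced here, the sketch below is a rough reconstruction under my own assumptions (genes as each block's offset relative to the one below, Gaussian mutation of all genes, elitist selection), not necessarily the demo's exact scheme. Incidentally, for this classical single-stack model the answer to "how far can it lean" is known mathematics: with 10 unit blocks, the harmonic stack achieves an overhang of (1/2)(1/1 + 1/2 + ... + 1/9) ≈ 1.41 block widths.

```python
import random

random.seed(4)
N = 10                  # unit-width blocks in the tower
POP, GENS = 40, 1000

def centres(offsets):
    """Absolute x-coordinate of each block's centre; bottom block at 0."""
    xs = [0.0]
    for off in offsets:
        xs.append(xs[-1] + off)
    return xs

def stable(offsets):
    """A stack is stable iff, at every interface, the centre of mass of all
    blocks above lies within half a block width of the supporting block."""
    xs = centres(offsets)
    for i in range(len(xs) - 1):
        above = xs[i + 1:]
        com = sum(above) / len(above)
        if abs(com - xs[i]) > 0.5:
            return False
    return True

def lean(offsets):
    """Fitness: how far the top block's centre leans past the bottom block's."""
    return sum(offsets) if stable(offsets) else float("-inf")

# Elitist hill climb: mutate the best tower; the parent always survives.
parent = [0.0] * (N - 1)   # genes: each block's offset relative to the one below
for _ in range(GENS):
    offspring = [[off + random.gauss(0, 0.02) for off in parent]
                 for _ in range(POP)]
    parent = max(offspring + [parent], key=lean)

result = lean(parent)
print(round(result, 3))
# The classical optimum for a single stack of 10 unit blocks is
# (1/2)(1/1 + 1/2 + ... + 1/9) ≈ 1.41 block widths.
```

Because the parent is always retained, fitness never decreases: that is the mutation-selection balance the demo ends in, with the design hovering near a stable optimum.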
15) Natural selection creates information. Mutation destroys information. The two effects are in equilibrium. If you only think about mutation, you are missing half of the argument.
For example, suppose animal A and animal B have a fight. Animal A has longer horns than animal B. Animal A kills animal B and impregnates the entire herd. We've just gained some information here: long horns are good. This information gets replicated throughout the herd.
The only reason animals A and B could fight in the first place is that they have energy in their metabolism, which, guess what, comes from low-entropy sunlight. Low entropy from the sun is translated into low (information) entropy in the genome via natural selection.
So I think I've cracked it. The source of the information is natural selection, and not God after all.
Conclusion
The fundamental mistake that Marshall made was to ignore the information-creating properties of natural selection, which counteracts the information-destroying effects of mutation. In fact it took me a while to spot it.
I object to the claim that all useful information is the product of a mind. I think I have clearly demonstrated that information is created from loads of non-intelligent sources, such as random number sequences, genetic algorithms, and computer algorithms in general.
I also object to the claim that only a mind can select useful information. Again, a perfectly good natural process - natural selection - explains this. Given a choice between a simple process (natural selection) and divine intervention, I favour the scientific explanation. But that's just me.
Thanks again Perry for a great idea, though I am still sceptical.
Comments
I discussed the simulation with a friend, who pointed out that finding the optimum design depends on the way you've done the mutations. When a block is moved, the ones above automatically move with it. If only the one block moved, then there'd be no selection pressure on the blocks below the top and it'd go nowhere.
I also noticed that you have two mutations on each time step, which is essential as well. The lower blocks initially go too far to the right, and have to move left again. But, with a single mutation, selection would prevent them from moving left. You need a pair of simultaneous mutations, moving a lower block left and a higher block right.
So my friend and I decided that the example shows the power of natural selection, but also the fragility of it. Selection can solve some problems, but not others.
In evolution, the environment that does the selecting consists of living organisms as well as inanimate objects like rivers. Information certainly gets transferred all over the place as these things interact. But I don't see how information can be created through such interactions.