Science and Scientists
#13
Your question is already taken care of in the article under discussion. Here is the relevant excerpt:

'Gregory Chaitin is a pioneer of algorithmic information theory (AIT). To understand the essence of AIT, consider a very simple example. Take the set of all positive integers, and ask the question: How many bits of information are needed to specify all these integers? The answer is an absurdly large number. But the fact is that this set of data has very little information content. It has a structure which we can exploit to write an algorithm which can generate all the integers, and the number of bits of information needed to write the algorithm is indeed not large. So the algorithmic information content in this problem is small.

One can generalize and say that, in terms of computer algorithms, the best theory is that which requires the smallest computer program for calculating (and hence explaining) the observations. The more compact the theory, the smaller is the length of this computer program. Chaitin’s work has shown that the Ockham razor is not just a matter of philosophy; it has deep algorithmic-information underpinnings. If there are competing descriptions or theories of reality, the more compact one has a higher probability of being correct. Ockham’s razor cuts away all the flab. Let us see why.

In AIT, an important concept is that of algorithmic probability (AP). It is the probability that a random program of a given length fed into a computer will give a desired output, say the first million digits of π. Following Bennett and Chaitin’s pioneering work done in the 1970s, let us assume that the random program has been produced by a monkey. The AP in this case is the same as the probability that the monkey would type out the same bit string, i.e. the same computer program as, say, a Java program suitable for generating the first million digits of π. The probability that the monkey would press the first key on the keyboard correctly is 0.5. The probability that the first two keys would be pressed correctly is (0.5)² or 0.25. And so on. Thus the probability gets smaller and smaller very rapidly as the number of correctly sequenced bits increases. The longer the program, the less likely it is that the monkey will crank it out correctly. This means that the AP is the highest for the shortest programs or the most compact theories. The best theory has the smallest number of axioms.

In the present context, suppose we are having a bit string representing a set of data, and we want to understand the mechanism responsible for the creation of that set of data. In other words, we want to discover the computer program (or the best theory), among many we could generate randomly, which is responsible for that set of data. The validation of Ockham’s philosophy comes from the fact that the shortest such program is the most plausible guess because it has the highest AP.'
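As a toy illustration of the monkey argument in the excerpt, here is a short Python sketch (mine, not from Chaitin's book): the chance that a randomly typed bit string of length n reproduces a given n-bit program is (0.5)^n, so algorithmic probability is dominated by the shortest adequate programs.

```python
# Toy sketch: estimate how often random typing reproduces a given bit string.
import random

def monkey_hits(program: str, trials: int = 100_000) -> int:
    """Count how often a random bit string of the same length equals `program`."""
    n = len(program)
    return sum(
        "".join(random.choice("01") for _ in range(n)) == program
        for _ in range(trials)
    )

short_program = "1011"        # 4 bits: exact probability (0.5)**4 = 1/16
long_program = "1011" * 5     # 20 bits: exact probability (0.5)**20, about one in a million

print("short program hits:", monkey_hits(short_program), "of 100000 (expect about 6250)")
print("long program hits: ", monkey_hits(long_program), "of 100000 (expect about 0)")
```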

For more details please see the book 'Thinking About Gödel and Turing' by Chaitin (2007). I can send you the pdf version of the book. Just send me an email.
#14
(03-Apr-2011, 07:09 PM)Vinod Wadhawan Wrote: ... In AIT, an important concept is that of algorithmic probability (AP). It is the probability that a random program of a given length fed into a computer will give a desired output, say the first million digits of π. ...

Thank you Prof. Wadhawan.

On thinking more about this, I realized that it is important for students of science to distinguish between algorithmic complexity and representational complexity. The AP-based argument from AIT above does establish Occam's Razor as a validation criterion, as far as algorithmic (Kolmogorov) complexity is concerned. In that discussion, it is assumed (for clarity and without loss of generality) that all the algorithms being considered employ the same representation (here, a binary string). It is a prerequisite that ought to go without saying: any algorithms we compare must first be stated in the same representation scheme for the comparison to be meaningful.
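Here is a toy Python sketch of that prerequisite (the two encoders are my own illustrative inventions, not standard schemes): the same pair of strings can be ranked in opposite orders of 'simplicity' depending on which representation scheme the descriptions are written in.

```python
# Two encoders, two strings: which string counts as "simpler" flips with the encoder.

def run_length_encode(s: str) -> str:
    """Encode runs of identical characters as <char><run length>."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{s[i]}{j - i}")
        i = j
    return "".join(out)

def period_encode(s: str) -> str:
    """Encode s as <repeating unit>*<count>, using its shortest repeating unit."""
    for p in range(1, len(s) + 1):
        if len(s) % p == 0 and s[:p] * (len(s) // p) == s:
            return f"{s[:p]}*{len(s) // p}"
    return s

blocky = "a" * 500 + "b" * 500   # two long runs, no short repeating unit
striped = "ab" * 500             # runs of length 1, but a 2-character repeating unit

for name, s in (("blocky", blocky), ("striped", striped)):
    print(f"{name}: run-length code = {len(run_length_encode(s))} chars, "
          f"period code = {len(period_encode(s))} chars")

# Under run-length encoding 'blocky' looks far simpler; under the period encoder
# 'striped' does. Description lengths are only comparable within one fixed scheme
# (formally, different universal schemes agree only up to an additive constant).
```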

Neuroscience is one area where stating two algorithms in a common representational scheme is, to say the least, non-trivial. A working implementation of cognitive processes is available in our neural 'wetware', i.e. the biological implementation, but any proposed theoretical model of those same cognitive processes is stated in a representation different from the (as yet poorly understood) representation employed by evolution. So if a theoretical model of a cognitive process performs poorly when tested, it is non-trivial in practice to decide whether the poor performance stems from limitations in the algorithm itself, in its representation, or in its implementation. Beginning students of neuroscience are therefore advised to place their investigations in the framework of David Marr's three levels, so that the assumptions at each of these levels are stated explicitly. Occam's Razor may be a reliable guide at one level (the algorithmic) but not as much at another (say, the representational level). At the representational level, Occam's Razor may still be a guide to truth under the implicit assumption that 'parsimony implies evolutionary fitness', but 'evolutionary conservation', i.e. the necessity for evolution to work off existing though sub-optimal components, may mean that the most economical representation is not the one employed by the biological organism.

To summarize, Occam's Razor means different things at different levels of abstraction (algorithmic, representational, implementational) and caution must be exercised in interpreting it, considering the different kinds of assumptions that come into play at each level.


A side note:
When we come down from the representational level to the implementational level, Occam's Razor almost reduces to a methodological recommendation, yielding a 'handier' rather than a 'truer' theory. One might think of the epitrochoidal and elliptical orbits of the geocentric and heliocentric (helio-focal?) models respectively as different implementations of the same algorithm, namely "Choose a co-ordinate system and fit a curve to the observed orbit". Thus defined, there is not much to choose between these two models from an algorithmic standpoint, but applying Occam's Razor at the 'implementational level' is justified on the rationale that a model with less computational overhead, hence less proneness to calculational errors, and which is more comprehensible and usable, is to be preferred. If 'comprehensibility by humans' were taken as a measure of 'empirical truth', that would amount to biocentrism; unless, of course, we clarify that Occam's Razor is here being used at the implementational level as a methodological recommendation, while at the algorithmic level it would be impartial between two competing versions that happen to be implementations of the same algorithm. Historians of science can speculate whether Sir Isaac Newton could have demonstrated the application of his laws of motion and gravitation to planetary motion so clearly and convincingly had he worked in the epitrochoidal framework, which was both more cumbrous and arcane than the elliptical framework provided by Kepler, though, as we have seen, both frameworks fit the empirical data.
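For the curious, here is a rough numerical sketch of this side note (my own construction in Python/NumPy, with Fourier terms standing in for epicycles): a Kepler ellipse describes the orbit with two parameters, while an epicycle-style implementation must keep adding wheels to approach the same accuracy when predicting position as a function of time.

```python
import numpy as np

a, e = 1.0, 0.6                                       # semi-major axis, eccentricity
M = np.linspace(0, 2 * np.pi, 1024, endpoint=False)   # mean anomaly (uniform in time)

# Solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E (Newton's method).
E = M + e * np.sin(M)
for _ in range(20):
    E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))

# Heliocentric position at each time, with the Sun at the origin (a focus of the ellipse).
z = a * (np.cos(E) - e) + 1j * a * np.sqrt(1 - e**2) * np.sin(E)

coeffs = np.fft.fft(z) / z.size                       # one Fourier term = one epicycle

def max_error(n_epicycles: int) -> float:
    """Rebuild the orbit from only the n largest epicycles; return the worst-case error."""
    keep = np.argsort(np.abs(coeffs))[::-1][:n_epicycles]
    kept = np.zeros_like(coeffs)
    kept[keep] = coeffs[keep]
    return float(np.max(np.abs(np.fft.ifft(kept * z.size) - z)))

for n in (1, 2, 3, 5, 10, 20):
    print(f"{n:2d} epicycles -> worst position error {max_error(n):.4f} (in units of a)")

# The ellipse needs two numbers (a, e); the epicycle implementation keeps adding wheels
# to chase the same accuracy. Same curve-fitting idea, very different overhead.
```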

#15
(03-Apr-2011, 11:27 AM)arvindiyer Wrote:
(03-Apr-2011, 08:54 AM)P11 Wrote: It seems to me that all the competing theories would be correct, but the more compact one is more useful.

Here's an exquisite illustration from the BBC documentary 'The Story of Science' of the fact that more than one competing theory may be correct, just as the cumbersome epitrochoids could still yield predictions of planetary motion; but the more useful and handier theory is the compact one with Kepler's ellipses.

Arvind, that is exactly what I am trying to say.

(03-Apr-2011, 12:04 PM)Vinod Wadhawan Wrote: If there are two competing theories, the one based on axioms with a smaller algorithmic information content is more likely to be correct for thermodynamic reasons.
Why should one be more correct? Let's consider an example. Say a computer prints the integers sequentially, starting from zero. To explain this phenomenon we can devise several theories.

One theory could be that the computer has a huge table of the integers in its memory and is simply printing the numbers from that table.

Another theory is that at each step the computer adds 1 to the last integer printed and prints the result.

Given just the output on the computer screen, one cannot know what is actually happening inside the computer. Therefore, both theories are correct. But the second theory is more useful because it requires far less memory and only a simple rule to predict the next number on the screen.
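A minimal Python sketch of these two theories (illustrative only): both reproduce the observed output exactly, yet one carries the entire data set inside itself while the other is a constant-size rule.

```python
import sys

N = 1_000_000                          # how many numbers we have seen on the screen

# Theory 1: a huge table of the integers is stored and printed out.
lookup_table = list(range(N))

def next_number_theory1(step: int) -> int:
    return lookup_table[step]

# Theory 2: keep only the last number printed and add 1.
def next_number_theory2(last_printed: int) -> int:
    return last_printed + 1

# Both theories agree on every observation so far...
assert all(next_number_theory1(i) == i for i in range(N))
assert all(next_number_theory2(i - 1) == i for i in range(1, N))

# ...but their descriptions differ wildly in size.
print("theory 1 table size:", sys.getsizeof(lookup_table), "bytes (grows with N)")
print("theory 2 is one line of code, independent of N")
```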
#16
(03-Apr-2011, 10:45 PM)arvindiyer Wrote: Occam's Razor almost reduces to a methodological recommendation, yielding a 'handier' rather than a 'truer' theory.

You are right: Occam's Razor is a methodological recommendation that yields a 'handier' rather than a 'truer' theory.

(03-Apr-2011, 01:02 PM)arvindiyer Wrote: An understanding of this may be crucial to decide whether Occam's Razor must be treated by a student of science as (i) merely a recommendation for methodological effectiveness to facilitate discovery, or (ii) an epistemological criterion in its own right which can serve to validate a discovery.

I think the problem in understanding Occam's Razor and science arises because science is often, wrongly in my view, seen as a means to discovery.

I think it is more appropriate to say that a theory is invented rather than discovered. For example, Newton did not discover the theory of gravitation in the same sense that Columbus discovered America. It is more appropriate to say that Newton invented the theory of gravitation as an explanation of certain phenomena.
#17
1. Science is done by people, not by computers.

2. People formulate theories. They may USE computers for testing their theories.

3. Hypotheses are proposed and formulated by people. Good hypotheses may lead to theories. Really good and successful theories have a tendency to become so compact that we start calling them laws of Nature.

4. It is generally true that a really fundamental law has the smallest algorithmic information content. Comprehension is compression (see the sketch after this list).

5. Some scientists are trying to formulate the most compact 'theory of everything' (string theory). Others feel that a good 'theory of everything' is impossible to formulate. The debate and the efforts go on.
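Here is the sketch referred to in point 4, a rough Python illustration (mine, and only a crude proxy): data generated by a simple law compresses enormously, because a compressor exploits the same regularity that a compact theory would, whereas patternless data hardly shrinks at all.

```python
import os
import zlib

lawful = bytes(i % 256 for i in range(100_000))   # produced by a one-line rule
patternless = os.urandom(100_000)                 # no rule for the compressor to exploit

print("lawful     :", len(zlib.compress(lawful, 9)), "bytes after compression")
print("patternless:", len(zlib.compress(patternless, 9)), "bytes after compression")
# Compressed size is a crude, computable stand-in for algorithmic information content:
# the 'law' behind the first sequence lets us say much more with much less.
```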
#18
The thermodynamic laws come into the information picture when you take the entropy of the bits of information into account. The second law of thermodynamics states that the entropy of the universe is always increasing. In that case, of two competing theories, the one with more entropy is more likely.

A word of caution here: 'more likely' does not make something the evident truth. Truth is established only by experiment. To put this in perspective, you can claim that the moon has a core made of cheese. It could very well be so; it is as viable a hypothesis as the claim that the moon has a rocky inner core. The Maximum Entropy Method would point to the rocky-moon hypothesis being more likely than the cheese one, but in doing so it does not rule the cheese option out; the only true way of ruling it out is through experimental evidence. That is scientific methodology. Many would instead simply invoke Occam's razor to rule out the cheese hypothesis, and in doing so they would be flouting the scientific methodology of verifying a hypothesis with experimental or observational evidence.

Occam's razor is not science. The Maximum Entropy Method is an estimation methodology that provides an estimate of how likely an outcome is, not of what the outcome is. I hope this clears up the issue that crept up earlier.
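To make that concrete, here is a small worked example in Python (my addition, using NumPy/SciPy, with Jaynes' classic loaded-die setup rather than the moon): the Maximum Entropy Method returns the least committal probability assignment consistent with what is known, i.e. 'how likely', never 'what is'.

```python
# Faces 1..6 with a reported mean of 4.5 instead of the fair 3.5. The max-entropy
# solution has the form p_k proportional to exp(lam * k); we solve for the multiplier
# lam that reproduces the reported mean.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)
reported_mean = 4.5

def mean_for(lam: float) -> float:
    w = np.exp(lam * faces)
    return float((faces * w).sum() / w.sum())

lam = brentq(lambda x: mean_for(x) - reported_mean, -5.0, 5.0)
p = np.exp(lam * faces)
p /= p.sum()

print("max-entropy probabilities for faces 1..6:", np.round(p, 3))
print("implied mean:", round(float((faces * p).sum()), 3))
# These are likelihood estimates under the stated constraint; only actually rolling
# the die (experiment) can tell us what the outcome is.
```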

On a side note, if you are interested in more such topics I would be glad to write about anything I can claim a fair understanding of: things to do with astronomy, physics and cosmology. I do this on a blog which has been inactive for a few months, since my core audience (students of a cosmology course I started) finished up and moved away, taking with it the incentive to write. If there is a need to discuss questions such as Occam's principle or the anthropic principle, a blog would be a good starting point for discussions like the one in this thread. I am open to hearing all your views. In the meantime, feel free to check out cosmicconundrums.wordpress.com for my blog. Thank you.
#19
1. Objective and reproducible experimental evidence is supreme. Everything else is less important.

2. As I have emphasized repeatedly in my articles at Nirmukta.com, it is only for an ISOLATED system that entropy cannot decrease. For an OPEN system, entropy CAN decrease locally; otherwise there would be no spontaneous emergence of order and pattern-formation etc.

3. Statements regarding the Occam razor are always in terms of PROBABILITIES only.
#20
I would argue that science per se is not objective but intersubjective. Science also cannot make any absolute claims about objective reality because it itself does rely on a few axioms, most notably the existence of multiple observers of phenomena who can communicate their observations with each other.

It is however the tool with the most discriminatory power between competing hypotheses, but this still does not allow it to make absolute claims, since scientific theories work on the principle of empirical adequacy.
#21
The debate about the origin of the universe is not likely to end anytime soon. On top of that, Gödel tells us that certain statements are true for no reason, at least within the concerned system of nontrivial formal logic. So where do we go from here? One option is to adopt the approach made very clear by Stephen Hawking in a recent book. I shall illustrate its application for justifying materialism and objectivism.

Does an object exist when we are not viewing it? Suppose there are two opposite models or theories for answering this question (and indeed there are!). Which model of ‘reality’ is better? Naturally the one which is simpler and more successful in terms of its predicted consequences. If a model makes my head spin and entangles me in a web of crazy conclusions, I would rather stay away from it. This is where materialism wins hands down. The materialistic MODEL is that the object exists even when we humans are not observing it. This model is far more successful in explaining what some people insist on calling ‘reality’ than the opposite model. And we can do no better than build models.

In fact, we adopt this approach in science all the time. There is no point in going into the question of what 'reality' is (somebody please try defining reality formally and self-consistently). We can ONLY build models and theories, and we accept those which are most successful in explaining what we humans observe collectively. I said 'most successful'. Quantum mechanics is an example of what that means. In spite of being so crazily counter-intuitive, it is the most successful and the most repeatedly tested theory ever propounded. I challenge the creationists and the idealists and their ilk to come up with an alternative and more successful model of 'reality' than quantum mechanics.

Personally, I have an aversion to certain questions that professional philosophers raise quite unnecessarily. If anybody can do better than what the above approach achieves, I would like to know how.
#22
(12-Jun-2011, 12:54 AM)ARChakravarthy Wrote: I would argue that science per se is not objective but intersubjective. Science also cannot make any absolute claims about objective reality because it itself does rely on a few axioms, most notably the existence of multiple observers of phenomena who can communicate their observations with each other.

It is however the tool with the most discriminatory power between competing hypotheses, but this still does not allow it to make absolute claims, since scientific theories work on the principle of empirical adequacy.

It is not the fact that science doesn't make absolute claims, or that it relies on certain premises, that should decide the question of objectivity versus subjectivity.

Something is intersubjective when the claims being discussed are subjective ones and not objective ones, and those subjective claims are agreed upon by a community of subjects. Science is concerned with objective claims. Of course subjective biases influence our objective claims, but the scientific claims are objective and the subjective bias is to be eliminated as best we can.

Value claims are intersubjective. So there are certainly intersubjective premises in science, such as coherence, mathematical consistency and certain aspects of inferential logic. But even those premises are necessarily tested constantly.

If we declare scientific claims as intersubjective (not just some of the premises they are based on) we are declaring that essentially there is nothing objective and that reality is a construct. We're relegating objectivity to an abstract metaphysical realm.
"Fossil rabbits in the Precambrian"
~ J.B.S.Haldane, on being asked to falsify evolution.
[+] 1 user Likes Ajita Kamal's post
Reply
#23
Science is an activity, and therefore the objectivity/subjectivity divide doesn't apply (all activities are both objectively true and subjectively reasoned).

The scientific method, on the other hand, can be scrutinized along those lines. If one does so, it is clear that this method is the best way of asking objective questions (even when it comes to subjective phenomena). Of course, there are subjective questions about both objective and subjective phenomena that are necessarily beyond the realm of science (science cannot be used to meaningfully ask a subjective question, and can only ask objective questions about subjective phenomena).

"Fossil rabbits in the Precambrian"
~ J.B.S.Haldane, on being asked to falsify evolution.
Reply
#24
Full Disclosure: I haven't read the previous posts in this thread.

My two cents follows.

Science is a human activity. What makes science successful is this: if a scientist is found to be wrong, his career is finished. Scientists are not particularly honest, but there is a process in place to weed out the dishonest ones. A premium is also placed on describing results in testable ways. This does not mean that every single published paper is automatically verified by someone else, but the potential for doing so is there, since the details are all published. Once in a while (as in the cold fusion fiasco) people do attempt a replication just to check a claim. At other times they replicate only to reach a point from which they can do other work. If they cannot replicate, that raises alarms. A published paper commands only a certain amount of respect; when other people use its results and in the process end up replicating them, it gains more respect.

I have an article that I like on the subject of the scientific method. It is written by my favorite scientist. I don't know how to attach it. Let me figure out a way.
