Is falsifiability taken seriously by today's science?
#1
I have always thought of Popper's falsifiability as a criterion demarcating actual science from pseudo-science and supernatural explanations. But a few recent articles make me wonder whether falsifiability is taken seriously in actual science. Of course, this may be how science has been working all along, but then the point would be about how to make it better.

The articles:
  • The Truth Wears Off, with a rather sensationalist conclusion. This one popularized the term "The Decline Effect" - the way scientific findings seem to decline with time in their ability to predict outcomes.
  • Another article on the same subject but with more tempered reporting - Lies, Damned Lies, and Medical Science
  • It’s Science, but Not Necessarily Right - on some badly conducted studies which still haven't been retracted, because of the difficulty of replication or because journals do not publish replication results.

The cause seems to be a preference for studies that propose new theories over studies that falsify existing ones.
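
To make that concrete, here is a toy simulation (my own construction, not taken from the articles) of one mechanism that could produce the decline effect: if journals publish mainly the studies whose estimated effects clear a significance bar, the published estimates come out inflated, and unbiased replications later drift back down toward the true effect.

Code:
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2   # a small real effect, in standard-deviation units
n = 50              # subjects per study
n_studies = 20000

# Each study estimates the effect from a noisy sample of n subjects,
# so its estimate is roughly normal with standard error 1/sqrt(n).
estimates = rng.normal(true_effect, 1 / np.sqrt(n), n_studies)

# A study gets published only if it looks impressive: here, a one-sided
# z-test significant at the 5% level (z > 1.645).
published = estimates[estimates * np.sqrt(n) > 1.645]

# Unbiased replications of the published studies regress toward the truth.
replications = rng.normal(true_effect, 1 / np.sqrt(n), published.size)

print(f"true effect:               {true_effect:.3f}")
print(f"mean published estimate:   {published.mean():.3f}")  # inflated
print(f"mean replication estimate: {replications.mean():.3f}")

The 'decline' in this toy comes purely from selective publication and regression to the mean; no fraud is needed anywhere.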

The result of all this is that it is taking more time for scientific theories to become established or be proven wrong. A good example is Andrew Wakefield's fraudulent study linking autism with vaccines. It took other teams four years to confirm that the study's conclusions were not valid, and it took over ten years to retract the paper. By then a self-sustaining anti-vaccine movement had formed, which is driving the comeback of diseases like measles.

Of course, this varies across fields. Some fields (like psychology) have this problem more than others (like astronomy).

I'm not an academic, so I would love to hear other opinions on whether this way of putting the issue is overblown or whether it is a genuine concern.
#2
(03-Jul-2011, 03:26 PM)Lije Wrote: The cause seems to be a preference for studies that propose new theories over studies that falsify existing ones.

Thank you for initiating a discussion on this important contemporary trend in the philosophy of science.

This trend seems to have possessed the zeitgeist of the technological and management worlds before becoming conspicuous in the science world proper. One of the prime movers of this 'shift' is Edward de Bono of Six Thinking Hats fame, who considers objective evaluation for the purpose of falsification to be only one part of our collective cognitive toolkit, which also comprises a 'hypothesis-generating plant' of sorts (the green hat, in his coinage) and due acknowledgment of the emotions influencing decisions (the red hat). While many decision-makers find de Bono's six-hat picture compelling, he seems to deliberately overstate the case in many of his books: he blames 'critical thinking' of the Socratic kind for unduly limiting the imagination of decision-makers and blinkering them to possibilities other than the ones they are evaluating and falsifying, and he presents his trademark 'lateral thinking', which he believes will unlock the imagination and help us more fully exploit our cognitive resources, almost as a panacea.

Critical thinking, to de Bono, is centered on falsification, i.e. "NO", while lateral thinking is centered on possibilities, i.e. "YES". He suggests liberating the narrative from "YES" and "NO" by means of a word he calls "PO". Quoting from here:

Quote:NO is the basic tool of the Logic System.
YES is the basic tool of the Belief System.
PO is the basic tool of the Creative System.

Yes and No, to de Bono, are 'tools of judgment', while PO helps exploit the benefits of (temporarily) 'suspending judgment'. Here are some examples of the word PO in action (in fact, I was toying with the idea of starting a thread for PO games somewhere here).

Whenever the word holistic is bandied about with abandon, we need to keep our baloney detection kits close at hand. Edward de Bono's suggestion seems to be, "By all means use your baloney detection kits in good time, but not before first giving yourself some space for madness and allowing yourself to be surprised, so that your kits find more than just baloney!"

Returning to the world of science, both 'lateral thinking', which helps in hypothesis generation and the construction of alternative narratives, and 'critical thinking', which is crucial for evaluating those alternative narratives, are indispensable, and prizing one unduly over the other can be counterproductive. But I can think of one reason why many modern-day scientists are warming to the former. Thanks to the explosion in data acquisition, storage and analysis capabilities in recent decades there seems to be a greater paucity of hypotheses and inferences than of data itself. Many laboratories, from geophysics to neuroscience to the nascent field of computational social science, have tons of data waiting to be 'mined' and made sense of. Research students who are fecund at coming up with plausible hypotheses seem less common than those who will 'turn the crank of standard hypothesis testing' when handed a hypothesis. This could be one reason why 'the cause seems to be a preference for studies that propose new theories over studies that falsify existing ones.'
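
(As an aside, the 'crank' in question is often nothing more elaborate than the following; a minimal sketch of a generic two-sample test, not tied to any particular laboratory's pipeline.)

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-ins for measurements handed to a student along with the
# hypothesis "group A scores higher than group B".
group_a = rng.normal(0.5, 1.0, 100)
group_b = rng.normal(0.0, 1.0, 100)

# Turning the crank: a two-sample t-test of the null of equal means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject null at 5% level" if p_value < 0.05 else "fail to reject null")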

#3
(03-Jul-2011, 10:01 PM)arvindiyer Wrote: Critical thinking, to de Bono, is centered on falsification, i.e. "NO", while lateral thinking is centered on possibilities, i.e. "YES". He suggests liberating the narrative from "YES" and "NO" by means of a word he calls "PO". Quoting from here:

Quote:NO is the basic tool of the Logic System.
YES is the basic tool of the Belief System.
PO is the basic tool of the Creative System.

The question then would be: can lateral thinking be used for disbelief, and can the YES be expanded to include that? The person who comes up with a hypothesis might be looking for ways to confirm that their hypothesis explains the data, but may not be looking hard enough for cases where it may fail.


#4
(06-Jul-2011, 06:15 PM)Lije Wrote: The question then would be: can lateral thinking be used for disbelief, and can the YES be expanded to include that? The person who comes up with a hypothesis might be looking for ways to confirm that their hypothesis explains the data, but may not be looking hard enough for cases where it may fail.

The function of lateral thinking is essentially to generate options while suspending all evaluation, and it goes without saying that critical thinking must eventually be employed to evaluate the options so generated. Edward de Bono's prescriptions seem to be made under the mistaken assumption that critical thinking is such a mainstream and dominant form of thinking that it will be invoked anyway, and that it is lateral thinking that needs all the emphasis and popularization it can get. It is true, though, that there is a prevalent misconception that lateral thinking supplants rather than supplements critical thinking. A self-serving hypothesis cannot strictly be considered a product of assumption-free 'lateral thinking', because there is a premature evaluation of self-interest therein!

Under the falsification paradigm (and the method of 'hypothesis testing' which operationalizes it), critical thinking predominates, and for good reason. However, philosophers of science such as Paul Feyerabend (once a pupil of Karl Popper, who coined 'falsificationism') have argued that falsificationism may neither be an adequate philosophical underpinning of the scientific method nor provide an adequate narrative of the history of science. To be fair, Feyerabend does not seem to dismiss falsification outright, but argues for the methodology of falsification to be preceded in practice by a dose of 'theoretical anarchism'. The risks of 'theoretical anarchism' being used to shoehorn post-modernism into the discourse are all too real. Duly acknowledging that risk, we must nevertheless also acknowledge that for disciplines in a pre-methodological phase there is no choice but theoretical anarchism! As Dr. V. S. Ramachandran often says, the cognitive sciences are not yet in a theory-driven Maxwell stage but in a data-driven Faraday stage.

Stepping out of the hard sciences and out of the falsification paradigm proper, lateral thinking does seem the tool of choice in such academic exercises as counterfactual history or, for that matter, gedankenexperiments in general.
#5
This TED talk on the Google Ngram viewer explains how this tool is itself an outstanding contemporary example of how, 'thanks to the explosion in data acquisition, storage and analysis capabilities in recent decades there seems to be a greater paucity of hypotheses and inferences than of data itself'.

On these forums, we have ourselves played a bit with the Ngram viewer, in this intro post, as well as here and here.

Staying on topic, feeding some of the keywords of the above discussion into the Ngram viewer calls into question some of the conclusions presented above.

[Ngram chart: 'critical thinking', 'lateral thinking', … up to 2008]

[Ngram chart: 'heuristic', 'falsification', … up to 2008]

For example, in recent decades 'critical thinking' seems to have retained the upper hand over 'lateral thinking', which enjoyed its heyday only during the decade in which it was introduced. So, saying that a corporate and popular romanticization of lateral thinking is ongoing may be an exaggeration.

One other thing may be of more interest and concern. Since the mid-1990s, almost ALL of these terms, which have to do with the organization of thought and investigation, show a downward slope (or an end of growth). Does this mean declining interest in, and declining attention towards, methods to 'make sense of the data we have'? Isn't this especially serious given that it is since the advent of the Internet that more information is at our disposal, and it would be a pity if the right tools to use it are not adequately learnt? Or, more optimistically, I may be missing the right Ngrams, which show upward trends for whatever sort of philosophy of science is being favored in this Information Age. Any guesses or findings?
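
For anyone who wants to dig for the right Ngrams themselves, here is a minimal sketch that queries the JSON endpoint which the Ngram viewer's own page appears to use. Caveat: that endpoint is unofficial and undocumented, so the URL, the corpus label and the response shape are assumptions that may break without notice.

Code:
import json
import urllib.parse
import urllib.request

def ngram_frequencies(phrases, year_start=1900, year_end=2008):
    """Fetch yearly usage frequencies for the given phrases."""
    params = urllib.parse.urlencode({
        "content": ",".join(phrases),
        "year_start": year_start,
        "year_end": year_end,
        "corpus": "en-2019",  # assumed corpus label; may change
        "smoothing": 3,
    })
    url = "https://books.google.com/ngrams/json?" + params
    with urllib.request.urlopen(url) as response:
        series = json.loads(response.read().decode("utf-8"))
    # Each entry carries an 'ngram' name and a 'timeseries' of frequencies.
    return {s["ngram"]: s["timeseries"] for s in series}

data = ngram_frequencies(["critical thinking", "lateral thinking"])
for phrase, series in data.items():
    print(phrase, "->", series[-5:])  # last five years of the range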
#6
Firstly, I would like to mention a meta-problem about the philosophy of science. Is the epistemology contained in the philosophy of science (Hume, Bacon, Popper, Kuhn, Lakatos, etc.) a way to understand how science has progressed thus far, or is it capable of directing us on how it should be done?

Next, let us examine the criticism of Popper's insistence on falsification as a test of science. A prominent critic of falsification as the foremost test for identifying pseudoscience was Imre Lakatos. Falsificationism has the basic requirement that there exist critical tests by which a theory can be falsified. Lakatos argues against this premise. To illustrate, he uses the example of how Neptune was discovered from perturbations in the orbit of Uranus. The observation that led to the discovery of Neptune was that Newtonian calculations of the orbit of Uranus did not conform to measurements. This could be misconstrued as a critical test of Newtonian mechanics, and thus used to conclude that Newtonian mechanics is wrong. However, we know that is not true: the perturbations were due to the hitherto undiscovered Neptune. Falsification is different from an inability to corroborate, and this distinction was not stated by Popper. Thus, Lakatos claims that we do not necessarily have to discard scientific theories whenever we encounter anomalies, unlike what Popper recommends. He takes the more nuanced view that science becomes pseudoscience (or wrong science) if, over time, its predictions diverge further and further from observations and the anomalies grow in number.

Popper does allow for applied methodology and treats it as distinct from the notion of provability/falsifiability. He claims that in applying a theory errors may creep in, and only after accounting for these errors are we supposed to comment on the validity of the theory in question. But this approach is problematic because the distinction between the theory itself and its application is often not clear. In the example above, the anomalies in the orbit of Uranus were established by several independent observations, so there was nothing wrong with the applied methodology. Under strict falsificationism, then, the assertion that Newtonian mechanics is wrong would be perfectly reasonable, and absurdly so.

Lakatos proposes that we discard Popper's view of falsifiability and look instead at 'research programs' as a whole, rather than at individual falsifiable hypotheses, and consider the direction in which a program is going. This is similar to Kuhn's view of 'paradigms' (as Lakatos acknowledges). Thus, continuing with the Uranus-Neptune example, the approach is to exhaustively eliminate all possibilities for the anomaly within the framework of Newtonian mechanics. One of the possible causes of the anomaly in Uranus' orbit then becomes 'a huge mass in the vicinity of Uranus which we are yet to discover'. We see this happening repeatedly in science, especially since Popper's death. Whenever scientists encounter anomalies in existing research programs, they posit a placeholder to account for the anomaly and move on. Only when the foundations of the system are in question, following numerous anomalies, do scientists revisit theories.

Lakatos says in his book, 'The Methodology of Scientific Research Programmes' (spelling due to the British publisher):

Quote:Thus, in a progressive research programme, theory leads to the discovery of hitherto unknown novel facts. In degenerating programmes, however, theories are fabricated only in order to accommodate known facts.

Now Lije said in the opening post,
Quote:The result of all this is that it is taking more time for scientific theories to become established or be proven wrong. A good example is Andrew Wakefield's fraudulent study linking autism with vaccines. It took other teams four years to confirm that the study's conclusions were not valid, and it took over ten years to retract the paper. By then a self-sustaining anti-vaccine movement had formed, which is driving the comeback of diseases like measles.

If we look at it in the framework of research programs and paradigms (Lakatos or Kuhn), Andrew Wakefield's paper was a single hypothesis challenging the status quo, and it shouldn't have been given the publicity in the first place. The press would have done well to wait for the general direction of the research program (which we may broadly describe as 'the effectiveness of vaccines in benefiting humans'). Sure enough, the hypothesis was shown to be pseudoscience as other experimenters examined Wakefield's claim and exposed it as fraudulent. So, rather than a shortcoming of the scientific community (in its disregard for falsifiability), I would think that it is a serious problem with the news media's inability to appreciate the true nature of scientific progress. If the Uranus anomaly had been discovered c. 2000, we would probably have seen headlines reading, 'Is Newtonian mechanics wrong?' As Ioannidis points out in the Atlantic article, it is not just the news media that misinterpret research, but also doctors themselves. The reality is that modern medical research is slower than we would like it to be. Each journal paper does not represent a paradigm shift, and not every year is 1905. While criticizing the news media and doctors, we shouldn't lose sight of our own overeagerness to find the answers to all questions within our lifetimes or, worse, when it comes to medical research, within the duration of our illness.



--------------------------------
Here is a recent addition related to this question. Science writer Jonah Lehrer wrote provocatively in a recent edition of Wired that 'science is failing us.' http://www.wired.com/magazine/2011/12/ff...tion/all/1

He doesn't directly talk about falsifiability of the Popperian kind. He looks at the scientific approach of breaking down a complex phenomenon into simpler parts and building causal relationships between those parts and the phenomenon as a whole. Does additional information acquired in the form of research necessarily lead to scientific progress? The specific examples he considers are the research into the cholesterol-regulating drug torcetrapib, which led nowhere by the end of a multibillion-dollar research project at Pfizer; hormone replacement therapy; the usefulness of Vitamin E supplements; and the (mis)diagnosis of back pain.

Quote:Even when a system is dissected into its basic parts, those parts are still influenced by a whirligig of forces we can’t understand or haven’t considered or don’t think matter.

Of course, by strengthening our standards of competence, one can argue that these are instances of incompetent application of the scientific method rather than failures of the method itself.

In my opinion, Jonah Lehrer, here and elsewhere, is being more alarmist than necessary. In reality, these are not 'failings of science' but 'failings of non-scientists and incompetent scientists'. Specifically, it is a failure of society in general to understand the nature of scientific progress.
#7
(03-Jul-2011, 10:01 PM)arvindiyer Wrote: Thanks to the explosion in data acquisition, storage and analysis capabilities in recent decades there seems to be a greater paucity of hypotheses and inferences than of data itself. Many laboratories, from geophysics to neuroscience to the nascent field of computational social science, have tons of data waiting to be 'mined' and made sense of.

In this NYT op-ed, David Brooks reviews the increasing influence of and confidence in 'Big Data' approaches to predicting human behavior in fields like behavioral economics. Brooks writes:
Quote:The theory of big data is to have no theory, at least about human nature. You just gather huge amounts of information, observe the patterns and estimate probabilities about how people will act in the future.
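
To caricature that 'no theory' stance in code (a made-up toy of mine, not anything from Brooks' article): given a log of observed behavior, one simply counts, and turns the counts into probabilities, with no model of why people act as they do.

Code:
from collections import Counter

# A hypothetical log of observed actions; in a real 'Big Data' setting
# this would be millions of records with no behavioral theory attached.
observed = ["buy", "browse", "buy", "leave", "browse",
            "buy", "leave", "buy", "browse", "buy"]

counts = Counter(observed)
total = sum(counts.values())

# 'Estimate probabilities about how people will act in the future'
# purely from observed frequencies.
for action, count in counts.most_common():
    print(f"P({action}) = {count / total:.2f}")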

This isn't the first time or the first discipline in which such approaches have gained in influence. In his book 'Nerve Endings', Dr. Richard Rapport writes about the late 19th century:

Quote:At the same moment, the pragmatic Oliver Wendell Holmes reasoned that "the law is nothing more or less than what judges do." Holmes did for the law what Darwin did for evolution and what Maxwell did for gases, writes Louis Menand, when "he applied to his own special field the nineteenth-century discovery that the indeterminacy of individual behavior can be regularized by considering people statistically at the level of the mass." We hope for more certainty from the laws of science but are just as often disappointed.

Earlier in the same book, Dr. Rapport describes an epoch in the history of science where 'science pushed aside metaphysics'. Is the Big Data revolution just a neologism attempting to describe the contemporary interplay of rationalism and empiricism, or is it an epoch in its own right in which, in a manner of speaking, 'science pushes aside theory itself'? David Brooks hopes not, and concludes the article with some misgivings tempering the current bravado surrounding Big Data.

Quote:One of my take-aways is that big data is really good at telling you what to pay attention to. It can tell you what sort of student is likely to fall behind. But then to actually intervene to help that student, you have to get back in the world of causality, back into the world of responsibility, back in the world of advising someone to do x because it will cause y.