Saturday, April 25, 2009

Research Methodology

In last week’s posting I discussed the difficulty of establishing your conclusions. It is a good idea to reflect on this aspect before you start your investigation. Without such reflection, there is a danger that your investigation may not be as useful as you hoped.
The result of your reflections will become a section of your thesis called “Research Methodology”. In it, you will set out what your primary research intends to achieve, and how you will go about it. For example, if you plan to conduct a survey, this section will consider how extensive the survey needs to be, who should be selected to participate, and how to avoid bias, ethical problems etc. If you plan to develop some theory, this section will consider what evidence might encourage people to believe that your theoretical contribution will be useful.
There are some general approaches that have been found useful over the years. It often saves a lot of time if you can simply characterise your methods as following one of these standard patterns. There are also subject-specific habits: scientific disciplines, by and large, prefer “quantitative methods”, which involve measuring things and aim at quantitative predictions or formulae. Many quantitative methods involve constructing experiments, to try out some technique under controlled, and possibly well-understood, conditions.
For example, if you want to establish the usefulness of a formula which computes something (a dependent variable) in terms of some input variables, it is very convincing if an experimental design manages to find some way of keeping all but one of the input variables constant, and exploring the dependency of the result on the remaining one. Few scientific disciplines really offer this as a practical methodology: the input variables are rarely independent, and there is always a suspicion that the dependent variable depends on some other factors which have not been taken into account.
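As a sketch of that experimental design, the toy experiment below (the formula and variable names are hypothetical, standing in for a real system) holds all input variables but one constant and sweeps the remaining one to expose the dependency:

```python
def model(x, y, z):
    """Hypothetical formula under test (stands in for the real system)."""
    return 3 * x + y * z

# Fixed baseline values for every input variable.
baseline = {"x": 1.0, "y": 2.0, "z": 5.0}

def sweep(varied, values, inputs=baseline):
    """Vary one input over `values`, keeping all the others fixed,
    and record the response of the dependent variable."""
    results = []
    for v in values:
        trial = dict(inputs, **{varied: v})
        results.append((v, model(**trial)))
    return results

for v, out in sweep("x", [0, 1, 2, 3]):
    print(f"x={v}: result={out}")
```

Real experiments rarely permit this luxury, as the paragraph above notes: if `x` and `y` cannot in fact be varied independently, the sweep confounds their effects.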
Social sciences, media studies etc, are often comfortable with “qualitative” or narrative-based methods. These work well in situations where everybody understands that things are more complicated than we can imagine; nevertheless we hope that by focusing on some aspects of a situation, and describing it in a particular way, we can exhibit some characteristic behaviour or responses that may be useful in aiding our understanding or developing policy. In this case, we would hope to find corroborating examples that could be analysed the same way.
The quantitative/qualitative divide is a rather simple and crude one, however, and there are several methodologies that are in a sense orthogonal to it:
The scientific or hypothetico-deductive method, developed by Francis Bacon (c.1620) and William Whewell (c.1850). Following extensive analysis in the last century by Karl Popper (c.1950), Thomas Kuhn (c.1962), and Paul Feyerabend (c.1975), its current version is more practical than the one suggested by any of these proponents. If you use this method, your research methodology section can be quite short. (It is normal practice to cheat, and refine your hypotheses as you start gathering data.)
Action research, developed by Kurt Lewin around 1944, views the researcher as intervening in a problem situation and learning from the effect of their interventions on the situation and how it is viewed.
Grounded Theory, developed by Glaser and Strauss in 1967, is a radical approach in which the researcher tries to elicit the concepts and the problematic from the evidence itself in a four-stage process, keeping prior theory to a minimum.
All three of these methodologies can be used with both qualitative and quantitative approaches, but they are all but impossible to mix with one another.

Saturday, April 18, 2009

The validity of your conclusions

In this piece I want to examine the sort of conclusions that can be drawn from your research. How much evidence you need, how it is analysed and how the arguments proceed, will all depend on the habits and customs of your academic discipline. But inevitably you will gather some evidence and draw some conclusions.

For the conclusions to be interesting, they should be applicable in situations that replicate some aspects of your evidence. If you studied the use of sarcasm in Shakespeare's plays, people could well find it interesting to take your conclusions and see how they might apply to those of Kyd or Marlowe. If you studied the suspension of integrity constraints in enterprise applications, you might hope that your conclusions could also apply to some other enterprise applications than the particular ones you studied. If you were studying the economic effects of the current crisis in banking, despite the unusual and possibly unique aspects of the current difficulties, academics will attempt to find ways of drawing some general lessons.

After all, if the evidence you examined was so unusual, so conditioned by circumstance, or your generalisation so feeble, the conclusions might end up being too weak to be of interest. The common practice of the discipline will be your guide. If your evidence about fluctuations in the earth's magnetic field was gathered in 2006, it would be unusual to limit your conclusions to that year. But on the other hand, at several stages in the history of the earth, the magnetic field has changed direction, so that there is no logical reason to suppose it might not do so again. Many recipes in computing are validated by analysing their performance on a few well-known data sets: but how can you be sure that similar performance will be observed for other data? More generally, sources of evidence need to be carefully chosen not to have specific traits that limit the generality of the discussion.

Niels Bohr, one of the founders of quantum mechanics, was fascinated by the Heisenberg principle of indeterminacy, and viewed it as complementarity: position and momentum at the quantum level are complementary properties, and the more precisely one of these quantities is known, the greater the uncertainty in the other. In explaining the principle, he drew on other examples of complementarity, famously of truth and clarity. He observed that the more precise we are in describing something completely as it is, the more complex and unclear our description becomes, while a clear statement, such as a soundbite or slogan, will never precisely specify anything. Similarly, the more careful we are in owning up to the limits of the evidence we have gathered, the more we constrain the claimed area of validity of our conclusions.

In his famous discussions with Einstein (1949), Bohr addresses this very aspect: 'simply that by the word "experiment" we refer to a situation where we can tell others what we have done and what we have learned and that, therefore, the account of the experimental arrangement and of the results of the observations must be expressed in unambiguous language'.

This is why research methodology sets out to propose hypotheses or research questions based on previous research results, carefully setting out the parameters of an investigation and proposing in advance the limits on the applicability of the conclusions that may be reached. In practice, authors of PhD theses or published papers always cheat a bit, and refine some of these details during the investigation. Any surprises in, or reservations about, these conclusions can be discussed in the section of the thesis entitled "Suggestions for further work".

This is also what was wrong with Karl Popper's famous metatheory of falsifiability. Today, we say instead that negative results do not falsify a theory: usually they merely qualify it, defining its inapplicability to the particular circumstances of the results, and instigating further research into its limits of applicability, or a new theory that somehow applies better or more generally.


Bohr, N.: "Discussions with Einstein on Epistemological Problems in Atomic Physics", in Albert Einstein: Philosopher-Scientist, Cambridge University Press, 1949.

Popper, K.: Conjectures and Refutations, London: Routledge and Kegan Paul, 1963, pp. 33-39.

Friday, April 10, 2009

Contribution to knowledge

The contribution to knowledge is the main criterion for the award of a PhD. Both words carry a weight of meaning and context, and mean subtly different things in different academic disciplines. Many academic journals apply the same test when deciding whether an article is worthy of publication, but practice varies, so the criterion of publishable quality is not quite as robust as the test for contribution to knowledge. This discussion leads into one of the most difficult areas of PhD supervision, with consequences for the role of the research supervisor as a guide to the ground rules of the discipline.

As discussed in other entries in this blog, the word knowledge implies a limitation to what has been established in the academic discipline. Knowledge is established by means of a rigorous and verifiable process of investigation and analysis; in other words, it is accumulated only through published research. There is a total ban on references to personal experience or other forms of private evidence. To refer to previous knowledge in the discipline, we must also avoid citing published research that is no longer regarded as correct, or that has been superseded or qualified by later work.

In order to contribute to such a body of research, a researcher must ensure that every statement is either established in previous (but still accepted) research, or is safely concluded from the evidence they are contributing. The rules for what counts as a safe conclusion vary from one academic discipline to another, and the process of examination of research is mostly about this aspect. From a logical point of view, such conclusions are a form of induction and this is known to be dangerous.

The word contribution is intended to mean that the research conclusions are not already known. Potentially, and usually in fact, this means that the results are a surprise. (Dretske’s theory of information provides a quantified notion of “surprisal”.) If they are a surprise in the academic discipline then establishing the results will require a convincing rational argument. In examining a PhD thesis, examiners are looking for just such rigour, according to the requirements of the academic discipline. I hope to discuss this aspect next.
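The "surprisal" mentioned above is the standard self-information measure: an event of probability p carries -log2(p) bits of surprise, so the less probable a result, the more informative its occurrence. A minimal sketch (the function name is mine, not Dretske's):

```python
import math

def surprisal(p: float) -> float:
    """Self-information, in bits, of an event with probability p:
    the less probable the event, the more surprising (and the more
    informative) its occurrence."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must lie in (0, 1]")
    return math.log2(1.0 / p)

# A certain event carries no surprise; a 1-in-1024 event carries 10 bits.
print(surprisal(1.0))       # 0.0
print(surprisal(1 / 1024))  # 10.0
```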

Dretske, F.: Knowledge and the Flow of Information, MIT Press, 1981. ISBN 0-262-04063-8.

Saturday, April 4, 2009

Some notes on originality

Here are some ways that your research can be original:

1. You can look at topics that people in your discipline have not looked at before.

2. You can add to knowledge in a way that has not been done before.

3. You can test existing knowledge in an original way.

4. You can provide a single original technique, observation or result in an otherwise unoriginal but competent piece of research.

5. You can bring new evidence to bear on an old issue.

6. You can say something nobody has said before: this will work if you can show how your investigation provides a justification for your new saying.

7. You can carry out empirical work that has not been done before: this will work if you can use it to support existing knowledge or to show the limitations of existing assumptions.

8. You can make a synthesis of things that have not been put together before.

9. You can make a new interpretation of someone else’s material or ideas: this will work if your interpretation can be shown to shed new light on the things they were investigating.

10. You can take a technique and apply it to a new area. People will want to know if it works in that area, or if a new theory is needed there.

11. You can be cross-disciplinary and use different methodologies.

You get the idea - I hope... I am sure you can add lots more ways for your contribution to be original.

No blog entry on originality in research can be regarded as useful if it does not deal with two famous quotations, both of which strike a chord in every piece of research:

Voltaire is quoted as saying: "Originality is nothing but judicious imitation. The most original writers borrowed one from another. The instruction we find in books is like fire. We fetch it from our neighbor’s, kindle it at home, communicate it to others, and it becomes the property of all."

It is no disrespect to the great men of science if we use their ideas as a starting point for our own, and/or come up with alternative or improved theories. It is no disrespect to their stature either if we summarise their ideas, condensing them into a preface for our own contribution. The key aphorism here is the one about dwarfs standing on the shoulders of giants, who can see more and further because the giants lift them up. Poor footnoting in Robert Burton's 1623 preface to The Anatomy of Melancholy confused the scholarly trail here. Isaac Newton famously used the phrase in 1675. For me the phrase stands as the epitome of the twelfth-century renaissance, which is where the earliest known version comes from (John of Salisbury 1159; Roger Bacon 1267).

The real basis of today's research tradition, however, is from Francis Bacon's Instauratio Magna (1620). His lasting contribution to research methodology was to require public evidence for every assertion, and create a path that anyone can follow. I will return to this theme soon.

Merton, Robert K.: On the Shoulders of Giants: A Shandean Postscript (Houghton Mifflin, 1985) 978-0151699629. [The above paragraph on the aphorism was updated in 2013 in homage to the update to the Wikipedia article.]

See Wikiquote for the French original of the Voltaire quotation: "Plusieurs auteurs anglais nous ont copiés, et n'en ont rien dit. Il en est des livres comme du feu de nos foyers; on va prendre ce feu chez son voisin, on l'allume chez soi, on le communique à d'autres, et il appartient à tous." [updated 2013]