Wednesday, September 9, 2009

Agents and Bee Foraging

Some recent work at UWS was inspired by the real or imagined activities of a colony of foraging bees. Things got awkward when the model that was being used for experiments turned out to be different from the way bees actually forage. The supervisors said “all models are wrong, but some are useful” and of course wanted to explore the proposed approach, but the student largely lost interest. To my surprise, some readers felt that the study of unnatural systems was intrinsically repugnant, and that the story illustrated the need for science and religion to work hand in hand.
Suppose that a multi-agent system, with the task of looking for a particular sort of cluster in a large data set, observes a potential sub-cluster. We can imagine an automated step of spawning a new agent trained to look for further evidence of such a cluster. However, it is a bit fanciful to think of this new agent as a specially trained infant bee, as bees may learn the habits of the nest, but do not seem to receive the sort of individual instruction found in species with nuclear families. Other work at UWS examined the development of language in interacting groups of automata, and the introduction of a new word in that experiment is not unlike the introduction of a new agent in this one, since the introduction of the word implies a new subset of individuals that use it.
Leaving aside the biological inspiration, could a commercial system be imagined with similar properties? We could imagine such a system working in the data centre of a large supermarket or bank. If new agents can be spawned in this way, there would undoubtedly be issues of monitoring or control. A novel data cluster might result in a massive generation of new agents which might appear as unexpected additional activity in the system. In a commercial data centre it is possible that such an event would lead to suspicions of an intrusion or system fault.
If the autonomous agents are required to do a lot of status reporting to explain what they are up to, the additional monitoring traffic might create so many external messages as to call into question the wisdom of using agents. On the other hand, if the reporting traffic was cleverly aggregated within the swarm, a coherent report could be made to a monitor that a particular observation led to the deployment of 123,000 agents to investigate the possible existence of a new cluster, and this activity had now ended. Some computing systems build this sort of observable surface over chaotic, Brownian, internal motion; just as the apparently random behaviour of autonomous bees creates a regular-shaped nest. For example, network management systems aggregate event reports that have a shared cause, and (doubtless) Microsoft’s performance reporting systems do something similar.
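The aggregation idea above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any real agent framework: the class, its method names and the observation identifier are all hypothetical, and the swarm is reduced to counters so that the external monitor receives one coherent summary instead of one message per agent.

```python
from collections import defaultdict

class SwarmReporter:
    """Aggregates per-agent status events inside the swarm, so the external
    monitor sees one summary per triggering observation rather than a flood
    of individual agent messages. All names here are illustrative."""

    def __init__(self):
        self._spawned = defaultdict(int)   # observation id -> agents deployed
        self._finished = defaultdict(int)  # observation id -> agents done

    def agent_spawned(self, observation_id):
        self._spawned[observation_id] += 1

    def agent_finished(self, observation_id):
        self._finished[observation_id] += 1

    def summary(self, observation_id):
        """One coherent external report for the monitor."""
        deployed = self._spawned[observation_id]
        done = self._finished[observation_id]
        status = "ended" if done == deployed else "in progress"
        return (f"observation {observation_id}: {deployed} agents deployed "
                f"to investigate a possible new cluster; activity {status}")

reporter = SwarmReporter()
for _ in range(123):                      # stand-in for 123,000 agents
    reporter.agent_spawned("cluster-42")
for _ in range(123):
    reporter.agent_finished("cluster-42")
print(reporter.summary("cluster-42"))
```

The design choice is simply that reporting crosses the swarm boundary only in aggregated form; internally, events remain as chaotic as the agents themselves.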
In this way, the investigation of circumstances for and rules for the creation of a new agent leads to a new and interesting control problem, where the new problem is that of explaining the new situation that has arisen, in terms that make sense to those who have not been tracking all the details…

Monday, September 7, 2009

On Scholarship 2.0

During August Reinventing Academic Publishing Online appeared on Scholarship 2.0. It is a polemic against what its authors see as an exclusive establishment consisting of the "top academic journals" that only the richest universities can afford, and a self-serving institutional system that distorts the academic process in order to make the job of funding bodies and appointing committees easier.
Now there are many misguided people who think that there are such things as "top academic journals" where the best computing research is to be found, and regrettably some of these people do appear to hold positions of power and influence. But the fault is theirs alone.
I believe strongly in the value of computing research conferences: but the large ones have pursued profit at the expense of discrimination. It is easy to find dreadful papers presented at even the best conferences, with half-baked ideas and without results or any pretence at evaluation. But it is precisely the Web and Web 2.0 that allows us to find quality independently of the vehicle used for publication.
I have been following with some interest the response in UK academia to the recent Research Assessment Exercise. The Computing panel noted that excellent articles were to be found in journals with low impact factors, and conversely. They were astonished at the huge number (1247) of refereed journals that submitted articles had appeared in, and were amazed that relatively few university departments had submitted conference publications. They restated their policy that conference papers could be just as good as those found in the "top-rated journals".
Interested readers can follow this debate in the Conference of Heads and Professors of Computing and the consultation about the Research Excellence Framework.
Not only is their analysis of the last RAE excellent: so are the proposals for the next RAE, which will recognise that originality, rigour and impact are not usually found in a single publication. So let's just get on with the research, and leave the task of re-inventing academic publication to those who have time for it.
(Update 9 Nov: and in the meantime, join, which looks like a good Web 2.0 scholarship repository!)

Saturday, July 4, 2009

On Truth and Information

In recent years, several philosophers of computing, such as Fred Dretske (1932-) and Luciano Floridi (1964-), have established to a great many people’s satisfaction that for something to be called information, it should allow us to learn something that is true. They argue that false information is not information, just as a false policeman is not a policeman.

In my June 20 contribution to this blog I rather incautiously mentioned the pursuit of the truth about (the laws of) the physical world, and I hinted that I felt this was not the business of science, or research for that matter. Truth is a great concept and a noble ideal, but as I mentioned in the April 18 contribution, the more truthful we make any statement, the less clear it is, and the narrower the scope of its application. In a way, the only absolutely truthful statements are the formal tautologies of pure mathematics, the necessary truths that depend on nothing, and so add nothing to our knowledge. Conversely, any statement that is not necessarily true is merely (by definition) contingently true. It might be true, that is, if (um..) everything were really as it says. There are few philosophers, and even fewer scientists, who feel that this sort of circular discussion is worthwhile.

And yet, precisely this kind of argument has fascinated people for millennia. Descartes’ necessary truth “cogito, ergo sum” (1641) was the start of his argument to prove the existence of God and the immortality of the soul. A very similar exercise by Bernard Lonergan (1904-1984) achieved a wide currency in 1956. Now the reader should always smell a rat if someone claims to prove a contingent truth from a necessary one. The Australian philosopher David Stove (1927-1994) catalogued a large number of such arguments from Plato to Habermas: explaining that since they want to make everyone accept their opinion, it is a good trick to make it appear merely a logical deduction from a necessary truth. The trick can be made to work, as he explains, with the support of some impressive but contradictory concept, in the same way that division by zero can be used to create a convincing-looking proof. Usually though, such philosophers are vainly trying to use logic to establish some belief which predates their attempt and will outlast their failure.

To return to computing: every piece of information, according to all of us who follow Floridi and Dretske, contains its very own claim of contingent truth. By labelling it as information, we claim that it is not just some sample data: it will allow a suitably placed observer to learn something about the real world (Dretske 1981), at the time the information was constructed. This field of thinking has its own thought-experiments, for example, a bear-track in the woods contains the information that a bear passed that way whether or not anyone ever notices the spoor, and anyone suitably placed to notice it can learn this content.

The raw data from a survey, or a sheaf of newspaper cuttings, may well contain a lot of information that can be drawn to our attention by a suitably placed researcher. Unlike the bear-track in the woods, however, the survey was collected, and the newspaper articles written, by humans who have well-known tendencies to misperception, mistake, misinformation… So while the Internet doubtless contains a lot of useful information, I know it also contains much that is erroneous, ill-informed, and misleading. In research we don’t accept any of it uncritically. We try to stick to good sources of information, we try for honesty in our evidence gathering, and we try to take care in our conclusions, in our mission to increase the stock of knowledge in our academic discipline.

Saturday, June 27, 2009


Whose research is it? Yours. The basic idea will be something that the supervisor is an expert in (or you need another supervisor), but since you are making a new contribution to knowledge, at the end of the process you will be the world expert in your subject.
Your supervisor plays hugely important roles throughout the process though. At the start, they will help you with the relevant literature and established approaches. As the research progresses, they will help with methodology, with planning the research, and with phrasing your research questions. Once you have parts of your thesis in draft, they will provide an invaluable critique of the flow of argument, and the construction of your thesis as a piece of rationally-argued writing. Your supervisor will also play a crucial role in selecting your external examiners, and will be your supporter and eyes and ears during the viva.
Above all, throughout the process, they are following your journey, engaging in the discussions, playing the part of reader of your thesis and papers, reacting in the ways that your audience and your examiners might to the parts of your work that are new and surprising, so that you can fine-tune your arguments and make sure there are no loose ends.
The relationship between student and supervisor can sometimes be stormy - it is always a two-way process, and a second supervisor can sometimes play a useful role in getting things back on track. It can and should be inspiring.

Saturday, June 20, 2009

All models are wrong

.. but some models are useful (George Box et al, 2009, p.61). What makes a model useful? Some theories of science have made grand descriptions in terms of prediction, explanation etc, but it really comes down to a consensus. Today (June 20th) it is reported that the British Government has decided that the spelling rule “I before e except after c” should no longer be taught in schools because the large number of exceptions made it useless. Such a rule is of course one of observation rather than a law of nature, but on close inspection it is easy to find hidden qualifications to any law you care to mention.
Until very recently, many in the scientific community used to imagine that they were discovering the truth about how the physical world works. Whewell (1833, p.256) quotes Lagrange’s opinion “that Newton was fortunate in having the system of the world for his problem, since its theory could be discovered once only”. Now a lake can be discovered only once, but systems are merely constructed, and many refinements and re-interpretations will be possible. Twentieth century physics revealed unimaginable strangeness, needing many alternative and conflicting models to apply to quantum mechanics, diffraction, cosmology, etc., and there was some useful criticism of old notions such as “final causes” (basically, boundary conditions at infinity).
Many researchers in the late nineteenth and early twentieth century searched only for natural laws expressible in terms of differential equations. Since this search followed so closely after the development of the calculus it appears with hindsight that these men with a new hammer suddenly saw nails everywhere.
The same hindsight opens our eyes to the serious untruths in their “natural laws”: on close inspection a natural law does not actually apply everywhere, but only (um..) where it applies (e.g. in the absence of discontinuities, in a neighbourhood of the origin). To be fair, talk of truth or laws was mostly a habit of speech: the models described in these laws are useful in telling us what to expect in the sort of situation for which the model was designed. In other situations or on close inspection we might need a different or more refined model.
As with final causes, or the ether, models can be useful even when they conflict with other models (seem counter-intuitive) or don’t fit with current ideas of causality. For example, classical field theory remains useful, even though we know that action at a distance is impossible, and that there is a better model based on radiation. Non-existent lakes will eventually be removed from atlases, but models will continue as long as somebody finds them useful.

Box, G. E. P.; Luceño, A.; Paniagua-Quiñones, M. d. C. (2009): Statistical Control by Monitoring and Feedback Adjustment, 2nd ed. (Wiley) ISBN 0470148322
Whewell, W. (1833) Astronomy and General Physics: Considered with Reference to Natural Theology

Saturday, June 13, 2009

Evaluation and testing

If your contribution to knowledge is a better way of doing something, coming up with the idea (and maybe implementing it somehow) is only half the battle. The real work will come with evaluation, and this will need a methodology all on its own.
Most new algorithms are tested against data sets found in the literature and things are interesting if in some sense your idea performs better than its predecessors in these tests. But there are often problems with this approach – and I have occasionally seen a conspiracy of silence where anyone can see that the comparison is not entirely fair. After all, if your new algorithm is suited to a particular class of problem not previously addressed, then why test in some previous or different scenario? There are difficulties in using data from a different problem scenario, or comparing with an algorithm that was actually tackling a different issue. This sort of test is rarely more than the researcher’s equivalent of “Hello world”. The conspiracy of silence arises because this test is standard in the literature, and is used for convenience even though everyone knows that the data is unrealistically simplistic, or has been cleaned to remove any real-world difficulties.
On the other hand, artificial data, designed to exhibit the sort of issue that your idea helps to solve, has a value of its own. It suits the sort of hypothesis that starts “Problems reported with this aircraft control system may be associated with …”: your investigation is then about exploring some peculiar feature that might occasionally occur in the data, and comparing how the existing algorithms and yours respond to such a feature, to see whether the hypothesised feature was the source of the actual difficulty.
If your contribution arises from solving a real-world problem, it will probably need a lot more work to collect real-world data and draw real-world conclusions. Space and time considerations limit most PhD theses, and all conference papers, to artificial, toy examples. Maybe, though, some preliminary data can be analysed within the PhD (and lead to a job with the company with the problem, so you can work on the real data), and the evaluation part can be beefed up with actual comments about the value of the contribution from those more familiar with the real-world problem.
In addition, the implementation of the idea in your PhD will probably have the nature of a prototype, which will need to be re-implemented within a real-world control system. Again, space and time considerations make it unlikely that real production software will be used in your PhD or any publication resulting from it. But actual adoption of the approach within the industrial process will count as a complete proof of the value of your contribution, and any progress in this direction should go in your thesis.

Saturday, June 6, 2009

Ethics and bad research

Like “health and safety”, the phrase “research ethics” tends to elicit weary groans from many researchers. A full discussion is obviously out of place in this blog, but it seems obvious that research should not do actual harm without a very convincing argument. What I would like to focus on is whether bad research is ever ethical.
By bad research I mean research that is poorly thought out, where data cannot reliably support the sort of investigation for which they were collected. People have used up time, and incurred costs, for no benefit. In my view such research is always unethical, since its value (roughly zero) does not justify the trouble it has taken. It may cause actual harm, possibly even to the whole process of research, if enough people find it ridiculous. Future funding, or the cooperation of potential subjects, may be affected if research does not seem to be useful.
You therefore need to explain why your research really is useful, and why your subjects have to answer a long list of strange-looking questions. This explanation is for when you approach potential subjects, supervisors and sources of funding. You need to be open about what your research is about and what its expected benefits are. You must not use any deception in your approach to any of these people. Nor can you say (yet) what the conclusions will be. Sometimes (rarely?) it will not be possible to tell your subjects what the hypothesis is without compromising the research outcomes, but you must be able to explain the expected area of benefit of the research and why they have been approached. Also, you should never collect data without discussing these aspects first.
An anecdote may help explain this point. Suppose you are at a management training course and you are given a set of objectives to prioritise. It is late in the day, and they all look important, so you just take the first six and make up some spurious reasons for your choice. You could well be irritated if these priorities are fed back to senior management in your company – in two ways. First, your careless reasoning may be subjected to more scrutiny than you would like, and this reflects badly on you. But more importantly, you fear that these may be the wrong priorities, and if you had known they would be used, you would have taken more care over them. Because of the careless way the data has been collected, you do not know whether your performance will be unfairly judged, or whether the organisation will now change its behaviour as a result of bad data.
The selection of sources of data, whether from human subjects or more generally, requires the greatest care, and has been discussed in an earlier entry in this blog, as the chosen pattern will have a crucial bearing on the scope of validity of your conclusions. What data you collect, and how, will limit its interpretation, and this too has been discussed in the entry on research methodology. You probably won’t need to get ethical clearance unless your research involves living subjects, but the application you make before you start will provide a concise overview of the plan for your research and how the conclusions will be drawn. Even if you don’t need ethical clearance, you should protect your research by thinking these things out.

Saturday, May 30, 2009

The MPhil safety net

If you don’t yet have a contribution to knowledge that can be communicated in a PhD thesis, don’t despair. This might be because you are at an early stage in your research, still looking at the literature to find a suitable gap, or because your investigation hasn’t yet led to any conclusions. The latter case is easier since by this stage you should be fairly sure that the investigation will lead to conclusions that ought to be of interest. Your supervisors will advise you what sort of conclusions will interest your academic audience: it is important to be guided by their advice.
But it does sometimes happen that a line of research leads nowhere, or simply rediscovers something that is already in the literature. This is not a source of any shame: discuss the problem openly with your supervisors. Maybe they can advise on an interesting change of direction, or an improvement to the investigative machinery you are using.
If not, then the options are to start again with a new problem, or to write up the research as an MPhil instead of PhD. MPhil is a perfectly respectable degree, and especially if your work has led to a useful overview of the literature, digest of existing theory, and description of the primary work that you have carried out, then simply submit it for MPhil.
How does the MPhil thesis differ from the PhD? Both have an abstract, introduction, literature review, primary research, conclusions and suggestions for further work. The formal difference is in the primary research section. Both will give an account of the initial investigations into the research question posed in the Introduction. But the PhD thesis will then discuss the process of refinement identifying the contribution to knowledge, and the more detailed investigation that establishes this securely.
If your career is already beyond the PhD stage, and your research is for establishing a track record in an area that is new to you, it is unlikely that anything useful will be gained by attempting to publish results that don’t make a contribution to knowledge. Use the work in your lectures by all means, but don’t waste the time of reviewers and editors.

Saturday, May 23, 2009

Write the abstract

Your abstract will be a small work of art. It will be about 250 words, but will describe your contribution to knowledge in a way that sets it in the context of current work in your academic discipline.
There are major limitations on the style of an abstract. You do not have the luxury of a list of references: the text must be self-contained, and there is obviously little space for any account of your evidence or methods, or for quotations from other work. Even more than the rest of the thesis, the abstract should avoid using the first person, or equivalent phrases like “the author”, since the abstract, like the blurb on the outside of a paperback, is in the style of a review of your work.
The PhD examination is likely to pay particular attention to the wording of the abstract. Everything it promises must be delivered in the thesis in a very obvious way – it is a good idea to ensure that phrases used in the abstract should appear as titles in the table of contents. Thus in miniature it gives an overview of your thesis.
It is a good idea to write a first draft of the abstract quite early on in the research. If you can’t describe your contribution to knowledge in 250 words, you should refine your ideas until you can.
On the other hand, once you have stated your contribution to knowledge in this succinct way, every part of the final thesis will contribute to establishing it. Any material, no matter how clever, that is not directly relevant to this task is likely to be removed from the thesis, either by you or your examiners.

Saturday, May 16, 2009

What are you going to do?

Your primary research might be an experiment, a proof-of-concept prototype, some new insights, a new way of performing, creating or analysing something. Whatever it is, you need to think about how it will look BEFORE you start. How will your academic community look at it? Will they recognise it as an original contribution? How can you convince them it is any good? What counts as being good? – is it being well thought-out, conferring some advantage over current methods, deeper in some sense, more effective in some way?
You need to think first about what tests your community regards as important. Looking at some other contributions in your field, how have they validated or evaluated their contributions? Looking at published reviews or criticisms of other people’s work, what issues excite the interest of the reviewers?
Thinking about such questions will help focus your primary research. Whatever you do, there is no room for messing up. You have one life, and usually just one shot at a full-time PhD. The PhD experience is so much of a trial that you will almost certainly never do another one, though with any luck you will in future supervise many seekers of the way. (Your first successful PhD supervision will be another life-changing experience, but that is a story for another day.)
What would count as messing up? To avoid disaster, think about your sources of data – if there are to be interviews or surveys, choose and plan carefully. If it is an experiment, make sure you think of everything, and calibrate your tools and tests. If it is a prototype, make sure it meets some tangible need and that you have people who understand that need and how to recognise a solution. If it is some new method or technique, make sure it can be compared with existing methods using available or standard comparison techniques.
Once you have answers to all of these questions, look again at your research proposal (prospectus, abstract, manifesto). Does it still look interesting from the viewpoint of a typical member of your academic discipline? If not, you need to go back, and consider how it might be made more interesting, following the rules given in earlier posts.

Saturday, May 9, 2009

The refinement process

Along with the research methodology, the early part of your research will construct an outline plan for your investigation. For PhD, this almost always has at least two stages of primary research. The following notes are intended to apply for any of the possible research methodologies for your PhD - and even for other sorts of doctorate such as those involving performance.
Your carefully chosen literature review leads to the starting points for your investigation, and your research question suggests how the investigation will begin. But at this stage you should have no fixed ideas of how things will turn out - after all, if you already know the answer to your research question, then it is not research. So it is to be expected that following a first stage of your investigation, there will be a process of refinement of the research question and the direction of the investigation.
This refinement is the magically effective ingredient in modern inductive investigation, and was not fully taken into account in some of the founding writings on science (e.g. Whewell), nor in the discussions of them by philosophers such as Rorty or Feyerabend. Some of them would regard such refinement as a kind of cheating, since for them it was crucial always to say in advance how evidence would be collected, examined, and evaluated. For them, if the researchers were to say: "we notice something really interesting here that will now be the focus of the rest of our research", they would be invalidating the whole project up to that point.
But to researchers, it is entirely legitimate to focus on what is new or surprising. For example if the initial results (surprisingly) bear out someone's ideas in a novel setting, then it seems entirely proper to explore the edges of the area where they seem to apply. If the initial results don't seem to match anyone's existing ideas, then further work is suggested to try to sketch out what ideas would fit better.
Rorty tells us not to cheat too much...

Saturday, May 2, 2009

How to argue

So, you have some conclusions, and you want to convince your readers. You fear that some of them will not easily accept your results - perhaps they prefer a different theory or look at things differently. You also, as a minimum, want to establish that your conclusions will represent a real contribution to knowledge.
How do you proceed? The first thing to remember is never to attack other researchers or their ideas, however wrong you think they are. They are merely waiting for you to tell them what you have observed and what can be learned from it.
Secondly, remember that these are your peers: potential admirers and collaborators. Don't talk down to them. They are just as bright as you, but they haven't direct experience of the results of your investigation.
Thirdly, remember that the rules of the game allow for no special skills. Never suggest that you can see something that others cannot, or that you have access to evidence that is not available to others. As Francis Bacon put it (p.350, 61): "Our method of discovering the sciences is such as to leave little to the acuteness and strength of wit, and indeed rather to level wit and intellect. For, as in the drawing of a line or accurate circle by the hand, much depends upon its steadiness and practice, but if a ruler or compass be employed there is little occasion for either; so it is with our method."
Finally, remember that we often see just what we expect. (ibid. p.347, 41) "all the perceptions, both of the senses and the mind, bear reference to man, and not to the universe, and the human mind resembles those uneven mirrors, which impart their own properties to different objects, from which rays are emitted, and distort and disfigure them." Our observations are not any truer than anyone else's.
Instead, think of it as a shared journey of discovery, where we build painstakingly on the results of previous work. As Francis Bacon (p.346, 32) said: "The ancient authors, and all others, are left in undisputed possession of their honours. For we enter into no comparison of capacity or talent, but of method; and assume the part of a guide, rather than of a critic." You must imagine the reader is accompanying you on a walk through the evidence. You are pointing out objects of interest, recalling what others have said about them, encouraging a closer look at certain aspects that you believe lead to new insights.
Because you have studied other academic writing in your discipline, you know the rules of the game, and you know what sort of presentation will be regarded as conclusive. You work with your readers, and avoid combative writing.
Bacon, F. (1620) Instauratio Magna
Rorty, R. (1979) Philosophy and the Mirror of Nature (Wiley) ISBN 978-0631129615

Saturday, April 25, 2009

Research Methodology

In last week’s posting I discussed the difficulty of establishing your conclusions. It is a good idea to reflect on this aspect before you start your investigation. Without such reflection, there is a danger that your investigation may not be as useful as you hoped.
The result of your reflections will become a section of your thesis called “Research Methodology”. In it, you will set out what your primary research intends to achieve, and how you will go about it. For example, if you plan to conduct a survey, this section will consider how extensive the survey needs to be, who should be selected to participate, and how to avoid bias, ethical problems etc. If you plan to develop some theory, this section will consider what evidence might encourage people to believe that your theoretical contribution will be useful.
There are some general approaches that have been found to be useful over the years. It often saves a lot of time if you can simply characterise your methods as following one of these standard patterns. There are also subject-specific habits: scientific disciplines, by and large, prefer “quantitative methods”, which involve measuring things and aim at quantitative predictions or formulae. Many quantitative methods involve constructing experiments, to try out some technique in controlled, and possibly well-understood conditions.
For example, if you want to establish the usefulness of a formula which computes something (a dependent variable) in terms of some input variables, it is very convincing if an experimental design manages to find some way of keeping all but one of the input variables constant, and exploring the dependency of the result on the remaining one. Few scientific disciplines really offer this as a practical methodology: the input variables are rarely independent, and there is always a suspicion that the dependent variable depends on some other factors which have not been taken into account.
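To make the idea concrete, here is a minimal sketch of the one-factor-at-a-time design just described: all inputs but one are pinned to baseline values while the remaining one is swept. The "system" here is an invented stand-in; a real experiment would substitute the actual system being measured (and would have to contend with noise).

```python
def system_under_test(x, y, z):
    # Hypothetical stand-in for the process being measured; a real
    # experiment would call the actual system here and expect noise.
    return 2.0 * x + 0.5 * y - z

def one_factor_sweep(factor, sweep_values, baseline):
    """Vary a single input while holding all the others at baseline."""
    results = []
    for value in sweep_values:
        inputs = dict(baseline)          # copy, so the baseline is untouched
        inputs[factor] = value           # override just the swept factor
        results.append((value, system_under_test(**inputs)))
    return results

baseline = {"x": 1.0, "y": 1.0, "z": 1.0}
for value, output in one_factor_sweep("x", [0.0, 1.0, 2.0], baseline):
    print(f"x={value:.1f} -> {output:.2f}")
```

Of course, this design only works when the inputs really can be fixed independently of one another which, as noted above, is rarely the case in practice.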
Social sciences, media studies etc, are often comfortable with “qualitative” or narrative-based methods. These work well in situations where everybody understands that things are more complicated than we can imagine; nevertheless we hope that by focusing on some aspects of a situation, and describing it in a particular way, we can exhibit some characteristic behaviour or responses that may be useful in aiding our understanding or developing policy. In this case, we would hope to find corroborating examples that could be analysed the same way.
The quantitative/qualitative divide is, however, rather simple and crude, and there are several methodologies that are in a sense orthogonal to it:
The scientific or hypothetico-deductive method, developed by Francis Bacon (c.1620) and William Whewell (c.1850). Following extensive analysis last century by Karl Popper (c.1950), Thomas Kuhn (c.1962), and Paul Feyerabend (c.1975), its current version is more practical than the one suggested by any of these proponents. If you use this method, your research methodology section can be quite short. (It is normal practice to cheat, and refine your hypotheses as you start gathering data.)
Action research, developed by Kurt Lewin around 1944, views the researcher as intervening in a problem situation and learning from the effect of their interventions on the situation and how it is viewed.
Grounded Theory, developed by Glaser and Strauss in 1967, is a radical approach in that the researcher tries to elucidate the concepts and problematic from the evidence in a four-stage process, keeping the prior theory to a minimum.
All three of these methodologies can be used with both qualitative and quantitative approaches. But they are really impossible to mix with one another.

Saturday, April 18, 2009

The validity of your conclusions

In this piece I want to examine the sort of conclusions that can be drawn from your research. How much evidence you need, how it is analysed and how the arguments proceed, will all depend on the habits and customs of your academic discipline. But inevitably you will gather some evidence and draw some conclusions.

For the conclusions to be interesting, they should be applicable in situations that replicate some aspects of your evidence. If you studied the use of sarcasm in Shakespeare's plays, people could well find it interesting to take your conclusions and see how they might apply to those of Kyd or Marlowe. If you studied the suspension of integrity constraints in enterprise applications, you might hope that your conclusions could also apply to some other enterprise applications than the particular ones you studied. If you were studying the economic effects of the current crisis in banking, despite the unusual and possibly unique aspects of the current difficulties, academics will attempt to find ways of drawing some general lessons.

After all, if the evidence you examined is too unusual or too conditioned by circumstance, or your generalisation too feeble, the conclusions may end up too weak to be of interest. The common practice of the discipline will be your guide. If your evidence about fluctuations in the earth's magnetic field was gathered in 2006, it would be unusual to limit your conclusions to that year. But on the other hand, at several stages in the history of the earth, the magnetic field has changed direction, so there is no logical reason to suppose it might not do so again. Many recipes in computing are validated by analysing their performance on a few well-known data sets: but how can you be sure that similar performance will be observed for other data? More generally, sources of evidence need to be carefully chosen so that they do not have specific traits that limit the generality of the discussion.
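One crude way to probe that worry is simply to run the same recipe over several data sets and look at the spread of its scores: a large spread is a warning that conclusions drawn from any one data set may not generalise. A sketch, using an invented scoring method and toy data:

```python
def evaluate(method, datasets):
    """Score a method on several data sets and report the score spread,
    as a rough probe of how far its performance generalises."""
    scores = {name: method(data) for name, data in datasets.items()}
    spread = max(scores.values()) - min(scores.values())
    return scores, spread

# Hypothetical "method": its score is the fraction of positive values found.
def fraction_positive(data):
    return sum(1 for v in data if v > 0) / len(data)

datasets = {
    "benchmark_a": [1, 2, -1, 3],
    "benchmark_b": [-5, -2, 1, 1],
    "benchmark_c": [4, 4, 4, -4],
}
scores, spread = evaluate(fraction_positive, datasets)
print(scores)
print(f"spread: {spread:.2f}")
```

Even this does not settle the question, since the chosen benchmarks may share the very traits that limit generality; it merely makes the issue visible.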

Niels Bohr, one of the founders of quantum mechanics, was fascinated by the Heisenberg principle of indeterminacy, and viewed it as complementarity: position and momentum at the quantum level are complementary properties, and the more precisely one of these quantities is known, the greater the uncertainty in the other. In explaining the principle, he drew on other examples of complementarity, famously of truth and clarity. He observed that the more precise we are in describing something completely as it is, the more complex and unclear our description becomes, while a clear statement, such as a soundbite or slogan, will never precisely specify anything. Similarly, the more careful we are in owning up to the limits of the evidence we have gathered, the more we constrain the claimed area of validity of our conclusions.

In his famous discussions with Einstein (1949), he addresses this very aspect: 'simply that by the word "experiment" we refer to a situation where we can tell others what we have done and what we have learned and that, therefore, the account of the experimental arrangement and of the results of the observations must be expressed in unambiguous language'.

This is why research methodology sets out to propose hypotheses or research questions based on previous research results, carefully setting out the parameters of an investigation and proposing in advance the limits on the applicability of the conclusions that may be reached. In practice, authors of PhD theses or published papers always cheat a bit, and refine some of these details during the investigation. Any surprises in, or reservations about, these conclusions can be discussed in the section of the thesis entitled "Suggestions for further work".

This is also what was wrong with Karl Popper's famous metatheory of falsifiability. Today, we say instead that negative results do not falsify a theory: usually they merely qualify it, defining its inapplicability to the particular circumstances of the results, and instigating further research into its limits of applicability, or a new theory that somehow applies better or more generally.


Bohr, N: Discussions with Einstein on Epistemological Problems in Atomic Physics, in Albert Einstein: Philosopher-Scientist, Cambridge University Press, 1949.

Popper, K: Conjectures and Refutations, London: Routledge and Kegan Paul, 1963, pp. 33-39.

Friday, April 10, 2009

Contribution to knowledge

The contribution to knowledge is the main criterion for the award of PhD. Both words carry a weight of meaning and context, and are subtly different in different academic disciplines. Many academic journals apply the same test when deciding whether an article is worthy of publication, but practice does vary, so that the criterion of publishable quality is not quite as robust as the test for contribution to knowledge. This discussion leads into one of the most difficult areas of PhD supervision, and there are consequences for the role of the research supervisor in acting as a guide through the ground rules of the discipline.

As discussed in other entries in this blog, the word knowledge implies a restriction to what has been established in the academic discipline. Knowledge is established by means of a rigorous and verifiable process of investigation and analysis; in other words, it is accumulated only through published research. There is a total ban on any references to personal experience or other forms of private evidence. To refer to previous knowledge in the discipline we must also avoid citing published research that is no longer regarded as correct, or has been superseded or qualified by later work.

In order to contribute to such a body of research, a researcher must ensure that every statement is either established in previous (but still accepted) research, or is safely concluded from the evidence they are contributing. The rules for what counts as a safe conclusion vary from one academic discipline to another, and the process of examination of research is mostly about this aspect. From a logical point of view, such conclusions are a form of induction and this is known to be dangerous.

The word contribution is intended to mean that the research conclusions are not already known. Potentially, and usually in fact, this means that the results are a surprise. (Dretske’s theory of information provides a quantified notion of “surprisal”.) If they are a surprise in the academic discipline then establishing the results will require a convincing rational argument. In examining a PhD thesis, examiners are looking for just such rigour, according to the requirements of the academic discipline. I hope to discuss this aspect next.

Dretske, F (1981) Knowledge and the Flow of Information, MIT Press. 0-262-04063-8.
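For readers unfamiliar with the term, "surprisal" has a simple quantitative form in standard information theory (on which Dretske builds; the snippet below is that textbook notion, not his full account): the self-information of an event with probability p is -log2(p) bits, so rarer results carry more surprise.

```python
import math

def surprisal_bits(p):
    """Self-information of an event with probability p, in bits.
    A certain event carries no surprise; rarer events carry more."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

print(surprisal_bits(1.0))       # a certain event: 0 bits of surprise
print(surprisal_bits(1 / 1024))  # a 1-in-1024 event: 10 bits
```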

Saturday, April 4, 2009

Some notes on originality

Here are some ways that your research can be original:

1. You can look at topics that people in your discipline have not looked at before.

2. You can add to knowledge in a way that has not been done before.

3. You can test existing knowledge in an original way.

4. You can provide a single original technique, observation or result in an otherwise unoriginal but competent piece of research.

5. You can bring new evidence to bear on an old issue.

6. You can say something nobody has said before: this will work if you can show how your investigation provides a justification for your new saying.

7. You can carry out empirical work that has not been done before: this will work if you can use it to support existing knowledge or to show the limitations of existing assumptions.

8. You can make a synthesis of things that have not been put together before.

9. You can make a new interpretation of someone else’s material or ideas: this will work if your interpretation can be shown to shed new light on the things they were investigating.

10. You can take a technique and apply it to a new area. People will want to know if it works in that area, or if a new theory is needed there.

11. You can be cross-disciplinary and use different methodologies.

You get the idea - I hope... I am sure you can add lots more ways for your contribution to be original.

No blog entry on originality in research can be regarded as useful if it does not deal with two famous quotations, both of which strike a chord in every piece of research:

Voltaire is quoted as saying: "Originality is nothing but judicious imitation. The most original writers borrowed one from another. The instruction we find in books is like fire. We fetch it from our neighbor’s, kindle it at home, communicate it to others, and it becomes the property of all."

It is no disrespect to the great men of science if we use their ideas as a starting point for our own, and/or come up with alternative or improved theories. It is no disrespect to their stature either if we summarise their ideas, condensing them into a preface for our own contribution. The key aphorism here is the one about how dwarfs standing on the shoulders of giants can see more and further because the giants lift them up. Poor footnoting in Robert Burton's 1623 preface to The Anatomy of Melancholy confused the scholarly trail here. Isaac Newton famously used the phrase in 1675. For me the phrase stands as the epitome of the twelfth-century renaissance and this is where the earliest known version is from (John of Salisbury 1159, Roger Bacon 1267).

The real basis of today's research tradition, however, is from Francis Bacon's Instauratio Magna (1620). His lasting contribution to research methodology was to require public evidence for every assertion, and create a path that anyone can follow. I will return to this theme soon.

Merton, Robert K.: On the Shoulders of Giants: A Shandean Postscript (Houghton Mifflin, 1985) 978-0151699629. [The above paragraph on the aphorism was updated in 2013 in homage to the update to the Wikipedia article.]

See Wikiquote: "Plusieurs auteurs anglais nous ont copiés, et n'en ont rien dit. Il en est des livres comme du feu de nos foyers; on va prendre ce feu chez son voisin, on l'allume chez soi, on le communique à d'autres, et il appartient à tous." [updated 2013]

Saturday, March 28, 2009

Your contribution

At this point, you have an area of interest to you, and you have a feeling for the expectations of your particular subject area. Putting this another way, you have a set of starting points in the form of existing published papers, and you know what aspects made them publishable.

Here comes the big question. What will you do? You will conduct an investigation, and your account of it and its conclusions will get published. As we have seen, although the computing discipline is all about software, the actual software is not considered published research.

If you are developing new algorithms or software frameworks, think carefully about the journals you want your work to get into. Do they publish that sort of thing? They will probably want them to be evaluated, so think how you can show your work is better than what is currently available.

If you are developing new theory, mathematical approaches, or decision models, then the journals of interest to you are likely not to be interested in the software itself. How will you validate the new mathematical approach?

If you are applying existing ideas in a challenging area, or considering snags in them, then presumably you are coming up with some fresh approaches. Will your suggestions be useful to others? Will your modified success criteria command general acceptance? Can you write up your case study in a way that allows your approaches to work in other situations?

How much can your experiments be generalised? Can you show that the evaluation data you used were in some way typical? The time to consider all of these issues is now, before you start work. By thinking things through at this stage, you will be considering your own work from the viewpoint of a potential reader. Why this test? Why that interviewee? Why that data?

For students we call this the research methodology section - a title that is really too grand for simply ensuring that what we plan to do will be of publishable quality. Being clear on what we can contribute is the first step towards framing the research question, which we will consider next.

Saturday, March 21, 2009

Secondary Research

By now you have a potential topic, and a set of relevant, recent research papers. Soon you will need to start thinking what your contribution to this topic might be. First though, the existing work needs a good deal more study.

Critically analyse each of the papers in your collection: if one of them is really interesting from your point of view, look to see what papers have cited it. Carefully consider what is the contribution of the paper. Has it given you a theoretical framework for investigation? How has it been validated? What primary evidence does it review? What tests does it apply? How does it demonstrate its value?

Partly these questions allow you to assess the research itself. More importantly for your future research, they allow you to learn about the research culture that is current in your discipline: what sort of experimental analysis is regarded as conclusive, what sorts of aspects are the ones to take into consideration, and what sort of contributions are seen as valuable. You will almost certainly find some aspects of this a little strange. Rather than dismiss them in favour of your own prejudices, though, look in nearby articles to see if conforming to the approach and culture is a price that you will need to pay for getting your ideas published.

There probably aren't enough details in the paper, especially if it is computing related. Retrace the steps it describes and check that you understand all of them. If there is source code, look at it, install it, compile it, and try to trace parts of the execution. If you are going to use this as a starting point of your research, then you really need to be able to repeat the tests it applies. If you get stuck, or really need to contact the authors for their data or code, check that there aren't other avenues to explore before you pester the authors. Bear in mind that not everyone is willing to share their code, for reasons that might be justifiable.

All of this comes under the heading of secondary research, because no matter how much time this takes, none of this is your original work. Even if you improve their description of their framework, implement it on another platform, or run more interesting tests, it is still secondary work.

Just occasionally there might be some primary research that directly follows from such a paper. Looking at its evidence base, or combining it with evidence from another source, you may be able to make, and validate, some significant improvement to their approach, or re-interpret their approach from a very different point of view.

It will be more usual, though, for your primary research to address a research question that is not directly considered in the papers you have found. That will be the next topic in this blog.

Saturday, March 14, 2009

Finding a research topic

As discussed below, we need to base our research on existing research. That means we need to find a topic that other people have recently been researching, and look at their work.
It may be that what we would like to research is simply not in the current research literature. This can happen with software technologies or products. We would like to know more about them, apply them to new situations or take them further. But it won't count as research unless we can find a group of researchers who think it does. This seems a bit like chicken-and-egg - how does a new topic get started?
Maybe we need to find an angle that is in the current research. A good place to start for this is Google Scholar (under more> in the main Google search pages). Search for your idea there and look at the sort of documents that Google Scholar provides.
Now be careful: Google Scholar is a great start, but it contains many things that are not research papers. Watch out for things that seem to be in the academic community and look for how academic researchers describe topics that might be relevant to your line of enquiry.
Try limiting your search to recent documents (since 2006, say).
The next step is to access the Web of Knowledge. All researchers use this. To use it, you need to access it from your university, or from home with your Athens or Shibboleth login. You can use it to review your own publications, to see which publications are sensible to publish in, to find the very latest research and to see who is working on it.
From there you can create a research profile online, identifying all your own publications. Best of all, if you have a research track record you can see how many times your papers are being cited (and by whom): nowadays this is taken as the ultimate measure of your standing in the research community.
You can search for articles by title, content, authors etc. This will all take you a while but don't give up. Everything you need for research is guaranteed to be there somewhere. If you can't find it at first, go back and read the notes above: if it counts it will be there, but you may need to use different words to describe the area of interest.

Saturday, March 7, 2009

Software and Research

It can be disappointing for people with good programming skills that software development per se is not regarded as research at all.

To count as research, work must (a) be based on an existing body of knowledge in an academic discipline, (b) investigate some question along a path others can follow, (c) form conclusions based on public evidence, and (d) result in a contribution to knowledge according to the standards of the academic community. As I argued in my chapter in "Interdisciplinary Research" (Atkinson et al 2006), this has been the plan for research since the founding of the Royal Society and the other national academies.

Naturally, the identities and expectations of academic communities change over time. The verifiability requirements imposed by different communities are very diverse: the rules for experimental design in medical research are very different from the rules for experimental design in marine biology or the criteria for evidence used in social economics. In the computing community we no longer see articles with titles such as "XYZZY: An Interactive Whizzywig" in our respected journals (and maybe we never did, despite examples such as Stallman 1981).

As a result, research applicants who approach us with ideas such as "I want to use JavaFX on the Semantic Web" pose us a particular problem. It can be easy to see the kind of innovative software they plan to produce, but it fails the research criteria given above. For example, we immediately fail to find examples of research literature that they can use as starting points for their research: the manifest starting point of JavaFX is not really found anywhere in the research literature. At the time of writing (7 March 2009) it can be found in the title of just one MSc thesis and no journal articles. The Semantic Web is of course a different story, and so patiently we try to persuade the student to start from one of the lines of research in the literature, and maybe find a way to bring JavaFX into the work at some stage.

If software itself is not seen as a contribution to knowledge, it is more obviously useful for other purposes within the research paradigm. It can demonstrate the viability of a mathematical or analytical approach. Programs can be used to conduct experiments on some other research idea. Most such experiments, however, are carried out using throw-away software. Some new way of visualizing something, or a new framework or pattern might be of interest, but the research would be about the impact or usefulness of the approach rather than the actual code.

So, what are the options for computing research? We can
  • Develop some new theoretical approach or algorithm for problems of some type

  • Investigate the best way of doing something (reasonably general)

  • (Re)evaluate some suggestions by other researchers in some slightly different context

  • Examine the demand for some kind of computing approach

We should try to avoid

  • Discussing or comparing solutions from different vendors
  • Improving the efficiency of existing solutions

Comments on these ideas are welcome.


Atkinson, J., Crowe, M. (eds)(2006) Interdisciplinary Research: Diverse Approaches in Science, Technology, Health and Society, Wiley, Chichester, 978-1-86156-47-2: also Chapter 1: Research Today

Stallman, R.: EMACS: The Extensible, Customizable Display Editor. ACM Conference on Text Processing, 1981.

Update 11 April 2009: Interestingly, the recent Research Assessment Exercise (RAE2008) in the UK did accept some published software as high-quality research outputs. But I believe the comments above contain good advice for PhD students.

(Re-) Starting Research in Computing

My idea for this blog is to share ideas for getting going in Computing Research, as a sort of self-help group. Like many academics (I suppose) my own research has got stuck and needs restarting. Over the decades I have gained a great deal of research experience, leading projects, supervising PhD students, examining... and have recently been busy on other things.
I also enjoy writing software too much, and teaching diverse new topics, and neither of these helps to maintain a research focus.
So now I need to choose a new topic, and in case it proves useful to other academics in the same position I plan to document the process. I'm 61 but that's not too old - I still learn new things, and just this week passed one of Sun's professional exams. I have some ideas for different angles to write about in this blog, and hope to add an article about once a week.
Comments and contributions to this blog are welcome - please email me. But put the blog title in the subject line please, as I have a strong email filter.