Daniel Sarewitz on scientific excellence

The following post is based on a talk given by Daniel Sarewitz (Arizona State University) titled

‘Research for policy: Is there an essential tension between scientific quality and societal value?’

June 18, 2014, The Hague, The Netherlands

Assessments of the quality of scientific research are often treated as self-explanatory. Common wisdom has it that scientific progress speaks for itself, as recent advances in engineering, medicine, and theoretical physics seem to show. In his lecture, Daniel Sarewitz (Professor of Science and Society and Co-Founder and Co-Director of the Consortium for Science, Policy & Outcomes at Arizona State University) showed that criteria of scientific excellence are usually applied with very little discussion, and that expert communities rely on a narrow framework when evaluating excellence. Can scientific excellence be determined on the basis of criteria developed by scientists themselves? If not, who should assess scientific quality?

As Sarewitz argued, scientific quality, or scientific excellence, is becoming a pressing concern across the scientific enterprise. A recent article in The Economist, ‘How Science Goes Wrong’, identified unreliable, non-replicable, and carelessly conducted research yielding biased results as a salient feature of contemporary science. In the Netherlands, ministers tend to praise Dutch scientists for their competitive ‘edge’ in acquiring European research funding, which is taken as a sound indicator of scientific excellence. However, discontent is brewing among Dutch scientists. A number of prominent Dutch researchers recently launched a joint initiative under the moniker ‘Science in Transition’. Those involved claim that science needs fundamental reform, pointing to a variety of issues, such as the present-day perverse focus on citation counts and publication numbers as indicators of excellence.

Thus, there is no shortage of discontent. But how did science end up in this situation? According to Sarewitz, an important part of the problem concerns how scientific excellence is judged: the scientific community still largely relies on a single set of narrow, supposedly unambiguous criteria. The presumption is that scientific quality can be assessed by scientists themselves, as long as they remain close to their field of expertise. This adage of ‘cobbler, stick to thy last’ can also be used to fend off the involvement of ‘non-scientists’, such as policymakers, and public participation more generally. However, studies of scientific results indicate that research considered excellent by the standards of scientific communities (‘internalist’ criteria) may be of little value outside the laboratory. For example, it is generally assumed that the effects of a medicine on laboratory animals predict its effects on the human body. Although this often turns out not to be the case in practice, the results of such experiments can still be judged scientifically sound, or even ‘excellent’.

In recent years, the entanglements of scientific knowledge, technological innovation, and socio-economic developments have become more intricate. The community of scientists is growing larger and expanding geographically. Scientists work on ever more complex issues, use more sophisticated research instruments, and often transgress disciplinary boundaries. One attempt to deal with these developments comes from the scientific enterprise itself in the form of bibliometric assessment, which takes as its starting point the idea that the number of citations by one’s scientific peers is an indicator of scientific excellence. However, using an example from the biomedical literature, Sarewitz showed how scientific communities are biased in that they tend to defend their interests in various ways, for example by replicating findings that confirm favorable results. Positive bias tends to push out research methods and approaches in favor of others more in line with the interests of a particular scientific enterprise. Ultimately, positive bias could mean that science is no longer able to explain phenomena, because the results of research are not properly evaluated or updated in light of new findings and unexpected results. Science becomes self-caricaturing.

In short, on the one hand, science is pushing further into highly relevant, real-world problems characterized by uncertainty and complexity. On the other hand, positive bias and the defense of interests particular to scientific communities preclude the very process many would consider ‘scientifically sound’, i.e. delivering rigorous answers by means of controlled and replicable experiments. The current state of science provides counterproductive incentives in this respect, such as the aforementioned emphasis on citations as an indicator of excellence, resulting in a large, opaque scientific enterprise unable to address the questions it should be answering.

Given this state of the scientific enterprise, Sarewitz claims that scientific excellence can no longer be judged on the basis of internalist criteria alone, i.e. using the practices, techniques, and tools of the scientific community itself. Leaving the attribution of excellence to be resolved by the scientific community also becomes a harder position to maintain: can the problems of judging excellence be solved by improving assessment within the confines of closed scientific communities? More and more people would answer this question with a resounding ‘no’. Internalist criteria are insufficient in this regard: scientists need to look beyond the boundaries of their own communities to assess the quality of their work. Expert communities tend to operate within a narrow normative and political framework that prevents them from making broader, more inclusive assessments.

But who should be up at bat? Who, other than scientists themselves, has the ability to assess scientific quality? Difficult as these questions are to answer, there are indications that a new kind of science is in the making. Examples of improved social accountability and legitimacy of science include the use of social media to connect patients with as-yet incurable diseases to virtual clinical trials, and ‘responsible innovation’ movements that urge scientists working in areas such as nanotechnology and synthetic biology to consider the repercussions of their work.

How will these changes take place vis-à-vis the vested interests that will continue to adhere to internalist criteria? Sarewitz is confident that science will change for the better. The crisis currently brewing shows clearly that internalist criteria are running out of steam. The disadvantageous outcomes of scientific research will damage the integrity and legitimacy of science to such an extent that change is inevitable. The challenge ahead is to build a set of shared norms that provides a sufficient counterweight to internalist criteria and allows for a more open and heterogeneous scientific enterprise.

Links
The Economist article ‘How science goes wrong’
Science in Transition