RFO UPF

print version ISSN 1413-4012

RFO UPF vol. 18 no. 2, Passo Fundo, May/Aug. 2013

 

Editorial

 

Strengthening citations for evaluating scientific quality

 

Gilson Luiz Volpato

 

Every scientist wants to ask questions and find answers, solving puzzles on varying scales. This search is not a solo journey. Although intense competition for solutions emerges, the scientific system requires at least a minimum of cooperation among scientists. This need leads to publication, a system that both exchanges information and certifies authorship. Publication is thus part of the scientific enterprise: inquiring about the natural world (including the human social world, through the social sciences), finding answers, and communicating the results to peers.

In such a scientific environment, evaluation systems arise. Who has the best academic performance? Who judges that? How can we evaluate it reliably? In this editorial, I focus on this topical and relevant issue by proposing a system that I developed from the foundations of science itself. One assumption is that any assessment cannot contradict the basis of science, i.e., the genuine basis of "doing science" as described above.

What does a scientific citation mean? A scientific text is an argument defending conclusions, and within that argument, citations lead the reader to the referenced text. In fields such as philosophy, citations generally indicate the authority of the information. In science, citations function differently: they lead readers to the empirical support for the referenced information. Thus, information in a scientific text obtains support from empirical evidence and represents a fundamental agreement among scientists. A reference showing only who said something lacks scientific character (empirical support); it is a logical fallacy known as an "appeal to authority". Therefore, in scientific discourse, citations empirically strengthen the information and the resulting argument. This strengthening is usually positive: the author accepts the information and uses it to solidify and expand some knowledge. Critiques of cited information are also valuable in science, although they are less frequent. Even when literature is cited to be criticized, that literature was important enough to merit criticism; the information improved an argument and thus also contributed to building knowledge. However, the absence of citations of a scientific text certainly indicates a weakness: the text might be incorrect or irrelevant, or it may have been ignored or simply never found (in any case, it did not impact science1).

The above considerations justify why the number of citations receives great attention in the several dozen indexes used today to evaluate scientific activity. However, because citations depend on human actions (those of scientists), they involve ethical and epistemological problems. Not all citations in a scientific study correspond to the theoretical background I discussed above. Even so, an article was cited for some reason, and this confers merit on the cited study. Despite some reported frauds [e.g., citation stacking (see Van Noorden, 2013) and excessive self-citation] and citation mistakes (Todd et al., 2010, including data for some areas), these faults most likely do not substantially inflate citation-based indexes over a career. Furthermore, we also have indexes that consider other aspects of citations, such as the internationalization index (Kosmulski, 2010), which indicates how many different countries have cited a particular scientist or journal. Despite all of these flaws, the scientific community still uses citations as good metrics, as I defended above, because citations indicate the active participation of scientists in selecting information. Therefore, my main premise is that the citation profile is a genuine indicator of scientific quality. A citation is like a vote in a democratic election: voting has problems, just as citation does, but we still prefer voting to other systems.

The problems that impinge on citations are difficult social issues because their solutions require ethical transformations that are unimaginable in our current scientific environment. Here, I propose a feasible change that minimizes such problems and strengthens the role of citations in the indexes used to evaluate scientific quality: a change in the database system.

Several bibliographic databases are currently available in which scientists look for peer-reviewed scientific literature while developing their research. Each database (e.g., Web of Science, MEDLINE, Scopus) aggregates only a small portion of the whole universe of scientific literature. Each has its own rules for accepting literature (journals, books, etc.), including some criteria that are not related to science: editorial profile, production quality, number of journals already covered in the database, number of articles published per year, and so on. Some databases are more recognized than others, a situation that biases searches for scientific literature: famous databases are preferred, so their literature is privileged, and important work may be neglected simply because it was published in journals outside them.

Considering my reasoning in this editorial, I strongly defend that the world's journals on a topic of interest should be available to the scientific community (the genuine evaluator) for every literature review. Although science requires this, in practice it does not occur. Only a few databases are consistently consulted, a bias that leaves many international papers out of the scientific discourse.

Thus, I propose the creation of a single universal system for searching the scientific literature, one that is reliable and freely available to any scientist. This system would be a joint effort of countries envisioning a world with more equal access to scientific knowledge. In this proposal, every peer-reviewed journal must be included (except those excluded due to fraud). Journals in prestigious databases (e.g., Web of Science, Scopus, MEDLINE) and those in less prestigious ones would thus have the same opportunity to be found by readers, giving each reader access to all of the available literature on her/his issue of interest (the reader is thus the only selection barrier for literature citation, as it should be!). Exclusion due to language would remain, but this problem can be expected to be gradually corrected by authors once the database is unique. This system could also ease the cost of downloading papers from non-open-access journals, because a universal database increases competition among articles, and journals should rapidly shift production costs from readers to authors. A universal system would certainly give greater visibility to each journal (especially frequently neglected ones), providing a more honest and trustworthy basis for literature searches. It is crucial to strengthening citations for any citation-based index: without such a correction to the databases, any new citation-based index will still incorporate the substantial discrimination against certain journals (and papers) that the databases impose. The only acceptable exclusion criterion is the one scientists themselves apply when performing their studies.

 

References

1. Kosmulski, M. Hirsch-type index of international recognition. Journal of Informetrics 2010; 4(3):351-357.