VAD

The Journal Impact Factor in African Studies


The Journal Impact Factor (JIF) is the best-known bibliometric indicator of academic journal prestige and influence. Important African Studies-related journals such as African Affairs (Oxford UP), The Journal of African Cultural Studies (Taylor and Francis) and The Journal of Modern African Studies (Cambridge UP) display their impact factors on their homepages. However, the JIF is not an appropriate measure of the quality of academic work, especially in African Studies.

The Journal Impact Factor was originally developed by Eugene Garfield to help librarians decide which journals to acquire for their collections. Impact factors are published annually as a commercial product by Clarivate Analytics as part of the Journal Citation Reports (JCR). The data source is the set of journals included in the “Core Collection” of the multidisciplinary citation database “Web of Science” and the citations recorded there (Schütrumpf 2019).

The JIF indicates how often, on average, recent articles in a journal have been cited in a given year. For the JIF of a given year, the number of citations received that year by items the journal published in the two preceding years is divided by the number of citable items the journal published in those two years.
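The calculation described above can be sketched in a few lines of code. This is only an illustration of the two-year formula; the figures are invented, not taken from any real journal.

```python
def journal_impact_factor(citations, citable_items):
    """Two-year JIF for year Y: citations received in Y by items the
    journal published in Y-1 and Y-2, divided by the number of citable
    items it published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 80 citable items in the two preceding years,
# which together received 120 citations this year.
jif = journal_impact_factor(citations=120, citable_items=80)
print(jif)  # 1.5
```

Note that the result is a property of the journal as a whole, not of any single article published in it.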

[Image: database schema (Pixabay, CC)]

The higher the value of the JIF, the greater the prestige and influence of a journal. The JIF has become a mark of quality – not only for the journals, but also for academics themselves, especially when it comes to decision-making on their recruitment or promotion.

However, this use of the JIF is fundamentally flawed: as early as the 1990s it was clear that the average is distorted by a very small number of frequently cited articles, which makes it an inappropriate statistic for saying anything about individual articles (and their authors) (Tennant et al. 2019). The average not only conceals large differences between articles in the same journal; citations are also not necessarily good measures of research quality (Curry 2018).

The persistent, widespread use of the JIF as a generally accepted tool for evaluating journals seems to rest primarily on the fact that it is easy to understand and quick to look up, rather than on any actual relationship to research quality.

Disadvantages of the JIF for African Studies

The JIF has additional disadvantages that are particularly relevant to African Studies: its coverage is geographically and linguistically limited.

Geographically: the use of the JIF and journal-ranking metrics conflates the reach of a journal with its quality, since academic journals from Africa, Latin America and Southeast Asia are barely covered by the “Web of Science” citation database (2018).

Linguistically: a further disadvantage of the JIF is that it (almost entirely) excludes research published in languages other than English.

Use of the JIF

The concerns about the misuse of the JIF in research evaluation are legitimate because many universities, especially research-intensive institutions, continue to promote the use of the JIF in assessments of researchers (McKiernan, Alperin and Fleerackers 2019).

Although the practice is inappropriate, many countries, including for example South Africa, use a two-tier rating system that automatically assigns a higher score (e.g. Type A) to articles published in JIF-rated journals indexed in international databases, and a lower score (e.g. Type B) to those published locally.

A good ten years ago, the German Research Foundation (DFG 2010) spoke out against using the JIF as a qualitative criterion for evaluating academic achievement. More recently (2019), the Web of Science Group of Clarivate Analytics, the organization that calculates the JIF, published a report explaining how misuse of the JIF can “disguise actual research performance”.

Alternatives to the JIF

Instead of the JIF, more informative and easily accessible article-level bibliometric indicators can be used, such as “Altmetric”. A number of initiatives now propose alternative systems for research assessment, including the Leiden Manifesto and the San Francisco Declaration on Research Assessment (DORA). Recent developments (2019) around Plan S, an initiative for Open Access publishing, call for the implementation of such alternatives as well as fundamental changes in how research is conducted.
