Assessing Scholarly Influence: Proposing New Metrics

Association for Information Systems, AIS Electronic Library (AISeL), ICIS 2008 Proceedings, International Conference on Information Systems (ICIS) 2008

Recommended Citation: Truex III, Duane P.; Cuellar, Michael J.; and Takeda, Hirotoshi, "Assessing Scholarly Influence: Proposing New Metrics" (2008). ICIS 2008 Proceedings. This material is brought to you by the International Conference on Information Systems (ICIS) at AIS Electronic Library (AISeL). It has been accepted for inclusion in ICIS 2008 Proceedings by an authorized administrator of AIS Electronic Library (AISeL).

ASSESSING SCHOLARLY INFLUENCE: PROPOSING NEW METRICS

Évaluer l'influence académique : proposition de nouvelles métriques

Completed Research Paper

Duane P. Truex III, Georgia State University, 35 Broad Street, Atlanta, GA
Michael J. Cuellar, North Carolina Central University, 1801 Fayetteville St., Durham, NC
Hirotoshi Takeda, Georgia State University, 35 Broad Street, Atlanta, GA, and Paris Dauphine University

Abstract

This study examines the use of the Hirsch family of indices to assess the scholarly influence of IS researchers. It finds that while the top-tier journals are important indications of a scholar's impact, they are neither the only nor indeed the most important sources of scholarly influence. In effect, other ranking studies, by narrowly bounding the venues they include, privilege certain venues by declaring them more influential than they are when one includes broader measures of scholarly impact. Such studies distort the discourse.
For instance, contrary to the common view that to be influential one must publish in a very limited set of US journals, our results show that scholars publishing in top-tier European IS journals exert influence similar to that of authors publishing in MIS Quarterly, ISR, and Management Science, even though they do not publish in those venues.

Keywords: Researcher Ranking, Citation Analysis, Hirsch Index, h-index, Contemporary Hirsch Index, hc-index, g-index

Résumé

Cette étude examine l'utilisation des indices de la famille de Hirsch pour l'évaluation de l'influence académique des chercheurs en systèmes d'information (SI) en fonction de leurs publications. Nos résultats montrent que, bien que publier dans des journaux de première catégorie soit une indication pertinente de l'influence académique, cela n'est ni la seule, ni la plus importante des sources d'influence académique. Ils montrent également que le champ des SI devrait adopter les indices de la famille de Hirsch. En effet, en restreignant fortement le nombre de revues incluses dans leurs analyses, de nombreuses études de classement privilégient certaines revues et postulent que ces dernières sont plus influentes qu'elles ne le sont en réalité, lorsque l'on inclut de plus larges mesures d'influence académique. Ainsi, de telles études biaisent le débat. Par exemple, notre étude remet en question la croyance répandue qu'il est absolument nécessaire de publier dans les meilleures revues américaines pour développer une influence académique. Elle montre notamment que l'impact des chercheurs qui publient dans les meilleures revues européennes en SI représente une influence académique comparable à celle de chercheurs publiant dans MIS Quarterly, ISR et Management Science, quand bien même ils ne publient pas dans ces revues.

Twenty Ninth International Conference on Information Systems, Paris 2008
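For readers unfamiliar with the indices named in the keywords, the Hirsch family can be sketched in a few lines. The definitions below follow the commonly cited formulations (Hirsch's h-index, Egghe's g-index, and the contemporary hc-index with its usually quoted parameters γ=4, δ=1); the function names and example data are illustrative assumptions, not taken from this paper.

```python
# Sketch of the Hirsch-family indices, assuming the standard formulations.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers together have >= g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def hc_index(papers, current_year, gamma=4, delta=1):
    """Contemporary h-index: discount each paper's citations by its age,
    then take the h-index of the discounted scores.
    papers is a list of (publication_year, citation_count) pairs."""
    scores = [gamma * (current_year - year + 1) ** (-delta) * c
              for year, c in papers]
    return h_index(scores)
```

For example, a scholar whose papers are cited [10, 8, 5, 4, 3] times has h = 4 (four papers each with at least four citations) and g = 5; under the contemporary index, a recently published paper contributes more than an equally cited older one, which is why the hc-index is said to favor currently influential scholars.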
Short Abstract in Native Language

This paper argues that existing methods of assessing scholarly influence are biased, and that the IS field should adopt the Hirsch family of indices as better measures of scholarly influence. It demonstrates these measures using a set of scholars who publish in the European Journal of Information Systems (EJIS) and the Information Systems Journal (ISJ) and who were identified in Lowry et al. (2007).

Résumé

Cette étude montre que les méthodes permettant d'évaluer l'influence académique des chercheurs en systèmes d'information (SI) sont biaisées ; de fait, les chercheurs en SI devraient probablement adopter les indices de la famille de Hirsch, qui constituent de meilleures mesures de l'influence académique. L'étude démontre la pertinence de ces mesures en utilisant un ensemble d'articles parus dans l'European Journal of Information Systems (EJIS) et Information Systems Journal (ISJ), précédemment identifiés par Lowry et al. (2007).

Introduction

The object of this paper is to propose a new method for the assessment of scholarly influence. We argue that existing methods are subjective and methodologically suspect. We therefore suggest that the IS field take advantage of the 80 years of work done by the Information Science discipline in assessing scholarly influence. In that line, we argue that the IS field should assess scholarly influence using the Information Science-based Hirsch family of indices rooted in the Google Scholar search engine. We believe that by adopting this methodology, the IS field can overcome many of the issues related to bias (Walstrom et al. 1995) and politics (Gallivan et al. 2007).

Ranking methodology has come under scrutiny in studies and editorial pieces in both US and European journals (Alexander et al. 2007; Baskerville 2008; Clark et al. 2007b; Molinari et al. 2008; Peffers et al. 2003; Rainer et al.
2005). There have been arguments that journal rankings use an unfair method dependent on where one conducts one's research, or that the process of ranking itself forces researchers to focus on safe or even trivial topics (Powell et al. 2008). European researchers have argued that journal rankings tend to exaggerate the importance of North American journals and institutions (Baskerville 2008; Harzing 2008b; Kateratanakul et al. 2003; Mingers et al. 2007; Powell et al. 2008; Willcocks et al. 2008). This point has also been made in the fields of accounting (Lee et al. 1999) and management (Collin et al. 1996). Other authors challenge the efficacy of any single measure for adjudging the worth of a scholar and espouse a need to bring the whole process under control. Still other studies advocate removing from consideration the practitioner and non-research publications that conflate the assessment of a scholar's research contributions (Gallivan et al. 2007).

The research described in this paper arises from a stream of inquiry that takes all these issues and challenges to be serious and essential questions for our discipline. We take this task on for several reasons. Firstly, just as financial analysts require vocabularies and tools by which they can compare the performance and worth of firms in the same industry, and indices to compare firms in different and at times disparate industries (e.g., IBM and General Motors Corp.), university administrators require vocabularies and metrics to compare scholars across disparate disciplines. Secondly, as a field we need measures that enable us to assess our own scholarly influence relative to other fields. Thirdly, within our field, the competitive hiring, tenure, and promotion processes suggest that there needs to be something besides purely subjective or political processes to make career-altering decisions.
Finally, and perhaps most importantly for us, we feel strongly that it is the breadth, depth, and persistence of a scholar's work that should be considered as part of a person's intellectual legacy, not just a single number representing a ranking or a hit rate. To that end we are looking to understand and apply a set of measures that help consider a scholar's legacy. This paper is but one stage in that larger program of inquiry. We think that such a collection of measures would likely include various analyses of a scholar's publications, including where, when, and with whom the scholar has published, along with other measures of the network of influence the scholar has had. The latter element would require various types of citation and co-citation analyses. But in this present work we develop a single component of the larger proposed technique set: we examine how the Hirsch family of citation statistics may provide a fairer measure of scholarly influence than current approaches do.

A caveat: we are not addressing the issue of scholarly quality directly. We certainly recognize that any field may maintain certain measures for assessing quality, but we see that issue as important work in process which we do not tackle at this time. We are of the opinion that the two notions of influence and quality are, however, often confounded in the literature. The argument goes as follows: journal x is ranked among the best; author y publishes in this journal and author z does not; author y is therefore better. This scenario is flawed for two reasons. First, there is little objective evidence that publications in top-tier journals are necessarily consistently of higher quality than articles published in other venues (Singh et al. 2007).
In fact, this present study suggests that influence, often used as a surrogate measure for quality, may be venue-agnostic. Secondly, the ranking of "best" and "top-tier" journals is a political process and one with inherent biases. Walstrom, Hardgrave, and Wilson (1995) tested some of these biases. Using consumer behavior theory, they developed a theory of bias in journal rankings produced by surveys of academics. They surveyed IS researchers to test six hypotheses, derived from their theory, about biases that affect ranking decisions, and found bias arising from underlying discipline, familiarity, and research interest. Other charges of systematic bias are leveled at survey approaches to journal ranking. For instance, given a list of journals, respondents are inclined to select from the list provided even if that list is incomplete, a phenomenon called the anchoring effect (Chua et al. 2002). Another example of bias in ranking studies is that respondents may take a variety of considerations into account instead of simply assessing journal influence: they may apply differential weights to rigor vs. relevance or to methodological approaches, as well as personal preferences such as whether they themselves have been published in the journal (Podsakoff et al. 2005). Thus research supports the notion that current methods of journal ranking are systematically biased.

Indeed, the entire concept of quality is problematic. It is not possible to provide a definition of quality that will be universally adopted. Quality is, in fact, a social construction. As Introna (2004) suggests, publication in a journal, even a top-tier journal, is not a designation of quality but rather a sign of successful conformance to a regime of truth or a paradigm of methodology (Kuhn 1996). We therefore argue that citation-based statistics measure only how well a publication source (e.g.
an author, a journal, or an institution) is successful in negotiating the publication process in various venues. Historically, some venues have been harder to negotiate, and this difficulty has been taken as a measure of the quality of the article. However, in this paper we eschew discussion of quality and refer instead to the idea of influence, which we view as the uptake of the ideas in the article as measured by its citations.[1]

In this present work we also explore another bias, namely the question of regional, linguistic, and cultural difference in publication and scholarly influence. That is, we explicitly examine the question of whether publication in adjudged top-tier US versus top-tier European journals signals a difference in scholarly influence.

The paper proceeds as follows. In the next section we briefly examine the literature exploring measures of the scholarly influence of individual scholars. We then point out weaknesses in these measures and propose the Hirsch family of statistics as an improvement on the metrics used to assess influence. We discuss the findings, examine the limitations of the study, and show how it provides pointers to our continued project seeking a better set of means to assess scholarly influence.

[1] We were motivated to undertake this research program by the way the larger discourse on academic importance and quality has begun to turn in our field, particularly in the United States. Relative hierarchies and importance rankings of academic researchers, departments, and journals are being reinforced and reified, with positions given and accepted unproblematically. One position in particular rankles: there is (in our reading and hearing) an unwillingness to accept into the discourse that sources of knowledge generation, dissemination, and quality assessment exist other than those of the established paradigm.
To the extent that the discourse is being closed rather than opened on these points, we believe we must respond by testing and challenging prevailing assumptions. Members of this authorial team are critical social theoretic researchers, and they shape part of the research agenda. Our position is that quality is not an objective measure; quality is a community-sanctioned measure, and statements about quality are, in part at least, political statements. Other papers in this research stream will explicitly address the question of quality and the ways power elites in the journal-ranking discourse are reified by repetition.

Literature Review

As Gallivan and Benbunan-Fich (2007) point out, our field has a rich tradition of research about research, with more than 40 published works addressing the issue of journal rankings and scholarly output. Interest in this topic is not limited to our own field. The question of measuring research output by publication counts is prevalent in many of the social sciences (Bar-Ilan 2008; Collin et al. 1996; Lee et al. 1999). This recognition of the importance of such metrics is accompanied by disaffection with extant methods, each of which is seen to privilege one class of researcher or one class of journals. Thus our own work joins a chorus of work seeking a holy grail of scholarly achievement assessment.

Those papers typically fall into one of two broad categories. The first stream considers the relative importance of specific publication venues: the so-called journal ranking studies. Examples of these articles are well reviewed in Gallivan and Benbunan-Fich (2007) and include, among others: Alexander et al. 2007; Baskerville 2008; Clark et al. 2007b; Ferratt et al. 2007; Geary et al. 2004; Hardgrave et al. 1997; Harzing 2008b; Kodrzycki et al. 2005; Korobkin 1999; Kozar et al. 2006; Lowry et al. 2004; Martin 2007; Mingers et al.
2007; Mylonopoulos et al. 2001; Nelson 2006; Nerur et al. 2005; Peffers et al. 2003; Podsakoff et al. 2005; Rainer et al. 2005; Walstrom et al. 2001; Walstrom et al. 1995; Whitman et al. 1999; Willcocks et al. 2008. The second, more sparsely populated stream examines the productivity of individual, and on occasion collections of, researchers. Examples from this stream include: Athey et al. 2000; Chua et al. 2002; Clark et al. 2007a; Gallivan et al. 2007; Huang et al. 2005; Lowry et al. 2007b; Lyytinen et al. 2007. The two streams are interrelated because one approach used to assess scholarly worth has been citation counts in top-tier journals. A third stream of papers focuses primarily on the metrics and methods used in the first two streams, or proposes improvements or replacements for those extant methods. Examples include: Abt 2000; Banks 2006; Bar-Ilan 2008; Batista et al. 2006; Bornmann et al. 2005; Bornmann et al. 2006; Bourke et al. 1996; Braun et al. 2006; Egghe 2005; Egghe 2006; Egghe 2007b; Egghe et al. 2006; Glanzel 2006; Liang 2006; Molinari et al. 2008; Raan 2006; Saad 2006; Schubert 2007; van Raan 2006; Zanotto 2006.

To illustrate the first stream, we point to three successive Walstrom and Hardgrave articles (Hardgrave et al. 1997; Walstrom et al. 2001; Walstrom et al. 1995). They created a survey instrument asking respondents to rate a list of journals and to add missing journals from an auxiliary list or from experience. The responses, gathered from a sample of IS academics selected from sources such as the ISWorld Directory of MIS Faculty, were then averaged to create a mean score for each journal, and these scores were arranged in a ranking table.

A second example, in the scholar and institution assessment stream and typifying the citation-analysis approach, is provided by Lowry, Karuga and Richardson (2007b).
They counted citations to articles published in MIS Quarterly, Information Systems Research, and the IS articles published in Management Science, as retrieved from Thomson's Web of Science. They counted authors and institutions using unweighted, weighted, and geometric methods of assessing the authors' contributions (Chua et al. 2002). They then reported the most frequently cited authors, institutions, institutions by journal, and articles, with each reported segment broken out by three 5-year eras.

As seen in the examples provided above, current methods used to derive rank orders of scholarly influence typically fall into one of two categories: survey analysis methods and scientometric methods. Survey analysis takes perspective data from researchers in the field. Scientometric methods analyze scholarly output using data drawn from databases; typically, scientometric analysis involves citation analysis using some library database. The survey methodology has been under scrutiny for its subjective nature and a perceived North American bias (Gallivan et al. 2007; Lyytinen et al. 2007; Willcocks et al. 2008). Recent studies have begun to explore the notion of the North American centricity of IS research outlets. Lyytinen et al. (2007) noted the relative paucity of participation by non-North American authors in leading journals: European IS scholars, who represent 25% of all IS scholars, account for only 8-9% of those published in the field's top-tier journals. Gallivan and Benbunan-Fich (2007) noted that Huang and Hsu's (2005) highly cited list of the top 30 IS scholars included no Europeans and only two women, and set out to examine why. Thus IS scholars have begun to examine the ways in which we assemble rankings of IS journal impact and scholar influence, to see if there is systematic bias in the method.
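The unweighted, weighted, and geometric author-credit schemes mentioned above can be sketched as follows. The exact weights used by Chua et al. (2002) and Lowry et al. (2007b) are not reproduced in this paper; the geometric formula below, in which author i of n receives a share 2^(n-i)/(2^n - 1), is one common formulation and should be read as an assumption, as should the function name.

```python
def author_credit(n_authors, scheme="geometric"):
    """Per-author credit shares for a paper with n_authors authors in
    byline order. The geometric weights are one common formulation,
    assumed here for illustration."""
    if scheme == "unweighted":
        # Every author receives full credit for the paper.
        return [1.0] * n_authors
    if scheme == "weighted":
        # Credit is split equally among the authors.
        return [1.0 / n_authors] * n_authors
    if scheme == "geometric":
        # Earlier authors receive geometrically more credit;
        # shares 2^(n-i) / (2^n - 1) for author i of n sum to 1.
        denom = 2 ** n_authors - 1
        return [2 ** (n_authors - i) / denom for i in range(1, n_authors + 1)]
    raise ValueError(f"unknown scheme: {scheme}")
```

On a three-author paper, for example, the geometric shares are 4/7, 2/7, and 1/7: first authorship is rewarded without reducing any co-author's credit to zero, which is the usual rationale for preferring it over a straight or equal count.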
Survey methods are generally thought to have four other flaws. The first has been termed the problem of path dependency (Galliers et al. 2007): studies about journal rankings necessarily draw on previous ranking studies, which in turn draw on earlier ones. With each survey certain journals reappear and are imprinted, or reified, in the study methodology. Thus we have a kind of reification by repetition in the way studies are conducted, making it relatively more difficult for newer or niche journals to break into the rankings. Another way to look at this is that the conduct of ranking studies, whereby the researcher must replicate and extend previous work, provides consistency from study to study but also breeds a kind of conformity. Secondly, and related to the first problem, a number of factors lend to make c