Differentiation strategy and rankings in higher education: Role of rankings in building a strategy

Differentiation Strategy and Rankings in Higher Education: Role of Rankings in Building a Strategy, by Magdalena Iordache-Platis. In: Dima A. (ed.) Doing Business in Europe (2018). Contributions to Management Science. Springer, Cham.

Abstract (emphasis mine):

The contemporary higher education environment is dominated by uncertainty. Institutions do not disappear overnight in this industry, but study programmes do decline, sometimes dramatically. Presently, ranking methodologies and indicators contribute to different and dynamic positioning of institutions at national or international level, based on a particular approach or a field-based one. Building a proper development strategy is a complex task for academic leadership. The chapter reveals the need to integrate the information provided by rankings into the decisions and actions of higher education institutions in order to achieve sustainable development. The main objectives of the chapter are to understand the dynamism of the contemporary competitive environment in the higher education sector, to clarify the differentiation strategy as a solution for remaining stable on the educational market, and to identify the role of rankings in defining an effective strategy. The topic is relevant for students, contributing to their knowledge of differentiation strategy in general and of its applications in higher education in particular; they will not only become more aware of the many possible implementations of a differentiation strategy, but also better decision-makers when choosing among educational providers.

International rankings: conceptual clarifications

Interesting excerpts:

Therefore, considering all the aforementioned connections between ranking dimensions and institutional missions, the steps to follow to generate the change towards the differentiation should be:

- determine the higher education option for the ranking dimension
- assess the current state of the ranking dimension
- define possible institutional changes
- predict the competitor’s changes related to the chosen dimension
- implement the change.

A differentiation strategy is a way of competing in which institutions look for uniqueness by selecting one or several ranking dimensions. Higher education institutions become able to perform better on the market, but only where students or other stakeholders are aware of the differentiation, according to the specific objectives. If students do not know or do not trust rankings, having a differentiation strategy and investing in it is equivalent to having no differentiation at all. In other words, a differentiation strategy is worth building and developing only if the students, as its beneficiaries, are aware of it and understand it properly. In this context, communication to the public is most important. Media and the institutional press office contribute to the strategy building. If the communication is direct, continuous and clear, the strategy is effective. In case of a lack of communication, the differentiation does not reach the potential public and its impact becomes minor.

Model of ranking-based differentiation strategy for higher education institutions (Source: Author)
University Ranking

Measuring the academic reputation through citation networks via PageRank

Open access preprint Measuring the academic reputation through citation networks via PageRank, by Massucci and Docampo, arXiv (2018).

Abstract:

The objective assessment of the prestige of an academic institution is a difficult and hotly debated task. In the last few years, different types of University Rankings have been proposed to quantify the excellence of different research institutions in the world. Albeit met with criticism in some cases, the relevance of university rankings is being increasingly acknowledged: indeed, rankings are having a major impact on the design of research policies, both at the institutional and governmental level.

Yet, the debate on what rankings are exactly measuring is enduring. Here, we address the issue by measuring a quantitative and reliable proxy of the academic reputation of a given institution and by evaluating its correlation with different university rankings. Specifically, we study citation patterns among universities in five different Web of Science Subject Categories and use the PageRank algorithm on the five resulting citation networks. The rationale behind our work is that scientific citations are driven by the reputation of the reference, so that the PageRank algorithm is expected to yield a rank which reflects the reputation of an academic institution in a specific field.

Our results make it possible to quantify the prestige of a set of institutions in a certain research field based only on hard bibliometric data. Given the volume of the data analysed, our findings are statistically robust and less prone to bias, at odds with the ad-hoc surveys often employed by ranking bodies in order to attain similar results. Because our findings correlate extremely well with the ARWU Subject rankings, the approach we propose in our paper may open the door to new academic ranking methodologies that go beyond current methods by reconciling the qualitative evaluation of academic prestige with its quantitative measurement via publication impact.

The institutional network of cross-citations in the Telecommunication Engineering WoS category. Each node of the network is an academic institution featured both in the Telecommunications ARWU GRAS and as an affiliation in at least one publication of the Telecommunication Engineering WoS category. Edges are citations from a publication produced by an institution to those authored by another one (10% of the total edges are plotted). The node size is proportional to the number of publications.
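For readers who want to see the mechanics behind the paper’s approach, below is a minimal sketch (my illustration, not the authors’ code) of an institution-level PageRank computation on a small cross-citation network like the one pictured above. The institution names and citation counts are hypothetical; networkx provides the graph structure and the PageRank implementation.

```python
# Hypothetical institution-level citation network; an edge (A, B, w) means
# publications from A cite publications from B a total of w times.
import networkx as nx

citations = [
    ("Univ A", "Univ B", 120),
    ("Univ A", "Univ C", 35),
    ("Univ B", "Univ A", 80),
    ("Univ B", "Univ C", 60),
    ("Univ C", "Univ A", 15),
]

G = nx.DiGraph()
G.add_weighted_edges_from(citations)

# Weighted PageRank with the usual damping factor: institutions cited heavily
# by other well-cited institutions receive higher scores.
scores = nx.pagerank(G, alpha=0.85, weight="weight")

for inst, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{inst}: {score:.3f}")
```

In the paper this kind of computation is run separately on each of the five Web of Science Subject Category networks, so the resulting reputation scores are field-specific.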
Ranking, Research, University Ranking

China’s science, technology, engineering, and mathematics (STEM) research environment

China’s science, technology, engineering, and mathematics (STEM) research environment: A snapshot, by Xueying Han and Richard P. Appelbaum, PLOS One (2018).

Abstract (emphasis mine):

In keeping with China’s President Xi Jinping’s “Chinese Dream,” China has set a goal of becoming a world-class innovator by 2050. China’s higher education Science, Technology, Engineering, and Math (STEM) research environment will play a pivotal role in influencing whether China is successful in transitioning from a manufacturing-based economy to an innovation-driven, knowledge-based economy. Past studies on China’s research environment have been primarily qualitative in nature or based on anecdotal evidence. In this study, we surveyed STEM faculty from China’s top 25 universities to get a clearer understanding of how faculty members view China’s overall research environment. We received 731 completed survey responses, 17% of which were from individuals who received terminal degrees from abroad and 83% of which were from individuals who received terminal degrees from domestic institutions of higher education. We present results on why returnees decided to study abroad, returnees’ decisions to return to China, and differences in perceptions between returnees and domestic degree holders on the advantages of having a foreign degree. The top five challenges to China’s research environment identified by survey respondents were: a promotion of short-term thinking and instant success (37% of all respondents); research funding (33%); too much bureaucratic or governmental intervention (31%); the evaluation system (27%); and a reliance on human relations (26%). Results indicated that while China has clearly made strides in its higher education system, there are numerous challenges that must be overcome before China can hope to effectively produce the kinds of innovative thinkers that are required if it is to achieve its ambitious goals. We also raise questions about the current direction of education and inquiry in China, particularly indications that government policy is turning inward, away from openness that is central to innovative thinking.

Research

Measuring Student Success: A Value-Added Approach

Book chapter: Pounder J.S. (2018) Measuring Student Success: A Value-Added Approach. In: Fardoun H., Downing K., Mok M. (eds) The Future of Higher Education in the Middle East and Africa. Springer, Cham.

Abstract:

The notion of what constitutes a ‘quality’ university has been challenged by the 2014 Gallup-Purdue Survey (Great Jobs, Great Lives: The 2014 Gallup-Purdue Index Report, Gallup, Inc., 2014). This survey of 30,000 US university alumni revealed that engagement and feelings of well-being beyond the university and into the workplace have little to do with the prestige of the university and much to do with having caring professors and being afforded opportunities for experiential learning. The Survey has shifted the focus from what university professors value to what students value. Assuming universities are interested in what students think, the issue then becomes one of assessing ‘value added’, and this paper examines one university’s approach to addressing this issue.

Student

Are university rankings useful to improve research? A systematic review

Open access Are university rankings useful to improve research? A systematic review, by Vernon, Balas, and Momani, PLOS One (2018).

Abstract (emphasis mine):

Introduction
Concerns about the reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and identify opportunities to support research quality and performance improvement.

Methods
A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: covering at least 100 doctoral-granting institutions, being produced on an ongoing basis, including both global and US universities, publishing the rank calculation methodology in English, and calculating ranks independently. Ranking systems also had to include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also explored.

Results
A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total rank weight is attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.

Discussion
No single ranking system provides a comprehensive evaluation of research and academic quality. A combined approach using the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may give institutions more effective feedback for research improvement. Rankings that rely extensively on subjective reputation and “luxury” indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.

Conflicting global rankings of an illustrative research university (per most recent published results, 2016).
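To put the weighting figures in the Results above in context (roughly 76% research versus 24% academic or teaching quality for systems that disclose weights), here is a minimal sketch of how ranking systems typically fold indicators into a composite score: normalize each indicator against the best performer, then apply fixed weights. The indicator names, values, and weights are hypothetical and not taken from any specific ranking system.

```python
# Hypothetical indicator values for one university (not from any real ranking).
indicators = {
    "publications": 12000,
    "citations_per_paper": 9.4,
    "reputation_survey": 62.0,
    "student_staff_ratio": 14.0,   # lower is better
}
# Best observed value on each indicator, used for 0-100 normalization.
best = {"publications": 25000, "citations_per_paper": 15.0,
        "reputation_survey": 100.0, "student_staff_ratio": 5.0}

# Hypothetical weights: 76% on research indicators, 24% on teaching-related ones.
weights = {"publications": 0.38, "citations_per_paper": 0.38,
           "reputation_survey": 0.12, "student_staff_ratio": 0.12}

def normalized(name, value):
    # Scale to 0-100 against the best performer; invert "lower is better" indicators.
    if name == "student_staff_ratio":
        return 100.0 * best[name] / value
    return 100.0 * value / best[name]

composite = sum(weights[k] * normalized(k, v) for k, v in indicators.items())
print(f"Composite score: {composite:.1f} / 100")
```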
University Ranking

Implementation of preference ranking organization method for enrichment evaluation on selection system of student’s achievement

Open access Implementation of preference ranking organization method for enrichment evaluation (Promethee) on selection system of student’s achievement, by Karlitasari, Suhartini, and Nurrosikawati, IOP Conference Series: Materials Science and Engineering (2018) Volume 332, conference 1.

Abstract:

Selection of Student Achievement is conducted every year, starting from the level of the Study Program and Faculty up to the University; the first-ranked student is then sent to the Kopertis level. The criteria used for the selection are Academic and Scientific work, Organizational, Personality, and English. For the selection of Student Achievement to be objective, in addition to the jury, decision-support methods are expected to be used so that the determination of Student Achievement is more optimal. One such method is the Promethee method. Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is a ranking method in Multi-Criteria Decision Making (MCDM). PROMETHEE has the advantage that a preference type is defined for each criterion, which makes it possible to compare alternatives with one another on the same criterion. The assumption of one alternative's dominance over another on a criterion is expressed in PROMETHEE through the values of the relationships between alternative ranking values. Based on the calculation results, for the 7 applicants, ranks 1, 2, and 3 did not change between the manual and Promethee matrices; only positions 4 to 7 changed. However, after the sensitivity test, almost all criteria showed a high level of sensitivity. Although this does not affect which student will be sent to the next level, it can have a psychological impact on the student achievement candidates.
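To make the method concrete, here is a minimal PROMETHEE II sketch (my illustration, not the authors’ code) using the “usual” preference function: each pair of applicants is compared criterion by criterion, weighted preference indices are aggregated, and applicants are ranked by net outranking flow. The applicant scores and criterion weights below are hypothetical.

```python
import numpy as np

# Rows = applicants, columns = criteria (all to be maximized); hypothetical scores
# on Academic, Scientific work, Organizational, Personality, and English.
scores = np.array([
    [85, 70, 60, 75, 80],
    [78, 82, 65, 70, 72],
    [90, 60, 70, 68, 75],
    [70, 75, 80, 72, 65],
])
weights = np.array([0.30, 0.25, 0.15, 0.15, 0.15])  # hypothetical, sum to 1

n = scores.shape[0]
pi = np.zeros((n, n))  # aggregated preference index pi(a, b)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = scores[a] - scores[b]
        pref = (d > 0).astype(float)   # "usual" criterion: 1 if a beats b, else 0
        pi[a, b] = np.dot(weights, pref)

phi_plus = pi.sum(axis=1) / (n - 1)    # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)   # negative (entering) flow
net_flow = phi_plus - phi_minus        # PROMETHEE II net flow

print("Applicants ranked best to worst:", np.argsort(-net_flow))
print("Net flows:", np.round(net_flow, 3))
```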

Student

THE to launch new innovation ranking

From THE’s website (excerpts):

Times Higher Education is developing plans for a pioneering ranking focused on universities’ impact on society, to be launched at THE’s Innovation and Impact Summit in South Korea in April 2019.

Confirming that the 2019 Innovation and Impact Summit will be held in partnership with one of the world’s leading research and technology universities, Korea Advanced Institute of Science and Technology (KAIST), Phil Baty, THE’s editorial director for global rankings, also announced that THE has begun to collect new data on university-business interactions, with plans to consult on a range of new performance metrics for a ranking to be launched and debated at the summit.

“Times Higher Education is delighted to be building on its 15 years of experience in global rankings and data analysis to develop, in full partnership with the global university community, new performance indicators in this exciting emerging area. After exploring some of our ideas at our inaugural Innovation and Impact Summit in 2017 with the Hong Kong Polytechnic University, there is no better place to scrutinise and explore the resulting datasets and analyses than at one of the world’s most high-impact institutions, KAIST, in a country that completely transformed its economy in a matter of decades through research and innovation.”

Currently, THE collects data on the research income universities attract from business and industry, which forms one of 13 performance indicators in its World University Rankings. But a new pilot data collection exercise has been initiated in parallel to data collection for the World University Rankings, covering:

- Income from business consultancy
- Turnover of all active spin-off activities
- Number of active spin-off companies (active for at least three years), broken down by those with some institutional ownership and those not owned by the institution.

The data could be combined with a range of existing datasets, for example on university-industry co-authored research publications, and patent and licensing data, to form a final ranking.

University Ranking

Predicting U.S. News & World Report ranking of regional universities in the South using public data

Predicting U.S. News & World Report ranking of regional universities in the South using public data, Ph.D. dissertation by Angela E. Henderson (2017)

Process chart showing steps used by USNWR to calculate the 2016 Best Colleges institutional rankings.

Abstract:

Using correlational analyses and multiple regression, this study draws on U.S. News & World Report’s (USNWR) 2016 college rankings data and data from the National Center for Education Statistics’ (NCES) Integrated Postsecondary Education Data System (IPEDS) to examine variables that explain institutional peer assessment score and rank. The study focused on the 97 institutions included in USNWR’s 2016 Best Regional Universities (South) ranking list.

Analyses in this study addressed four major foci: 1) correlations between USNWR subfactor data values and selected IPEDS proxies, 2) IPEDS variables that explained variance in peer assessment score, 3) IPEDS variables that explained variance in rank, and 4) the extent to which rank could be predicted based on these results.

The results of this study indicated three main findings. First, USNWR subfactors with direct or indirect IPEDS proxies were highly correlated with the identified proxies. Second, more than 85% of the variation in peer assessment score could be explained by five or fewer proxy variables, which differ depending on institution sector (private or public). Third, more than 85% of the variation in institutional rank could be explained by five proxy variables, without including the peer assessment score subfactor. Collectively, the findings suggest USNWR rankings are no more than a reflection of institutional outcomes and financial resources.

Percentage of predicted ranks classified into same decile as actual ranking.
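As an illustration of the dissertation’s general approach (not its actual data, variables, or code), the sketch below fits a multiple regression of rank on a few IPEDS-style proxy variables and then checks how often predicted ranks land in the same decile as actual ranks, mirroring the figure caption above. All variable names and data are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 97  # the 97 Best Regional Universities (South) institutions

# Synthetic stand-ins for IPEDS proxy variables (purely illustrative).
X = pd.DataFrame({
    "graduation_rate": rng.uniform(30, 90, n),
    "retention_rate": rng.uniform(60, 95, n),
    "instructional_spend_per_fte": rng.uniform(4_000, 20_000, n),
    "pct_classes_under_20": rng.uniform(20, 80, n),
})
# Synthetic rank loosely driven by the proxies plus noise (rank 1 = best).
quality = (0.5 * X["graduation_rate"] + 0.3 * X["retention_rate"]
           + 0.001 * X["instructional_spend_per_fte"] + rng.normal(0, 3, n))
y = quality.rank(ascending=False).astype(int)

model = LinearRegression().fit(X, y)
pred = model.predict(X)

# Share of institutions whose predicted rank falls in the same decile as the actual rank.
same_decile = (pd.qcut(y, 10, labels=False) == pd.qcut(pred, 10, labels=False)).mean()
print("R^2:", round(model.score(X, y), 3))
print("Same-decile agreement:", round(same_decile, 3))
```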
University Ranking

Universities’ Global Ranking Criteria Modification According to the Analysis of Their Websites

Universities’ Global Ranking Criteria Modification According to the Analysis of Their Websites, by Mohammed Al-Hagery, IJCSNS (2017), Vol. 17, No. 12, pp. 67-78.

Modified Indicators and Weights for ARWU

Abstract:

Global universities are subject to academic ranking every year. One of the common ranking schemes applied annually is the Academic Ranking of World Universities (ARWU), developed by a team of researchers and experts. The ARWU is composed of a set of common criteria related to academic tasks and does not include any indicator related to recent technology, such as universities’ websites. In fact, little is known about the relationship between universities’ global ranking and their website features. Therefore, this research aimed at updating the current ranking model by adding a new criterion reflecting website features related to content and structure. The research treats universities as two classes, ranked and unranked. The process includes extracting and analyzing website datasets, visualizing the initial results, studying the relationship and any significant differences between the two classes, and modifying the ARWU by updating the criteria list and their weights. A special software tool was applied to analyze the websites and extract the required data. This research contributes to modifying and enhancing the ARWU model to be more comprehensive than the current one. Involving universities’ websites in the ranking process will encourage universities to improve their websites in order to achieve a higher ranking among leading universities. Furthermore, it gives all universities a good chance to participate in the global ranking competition, especially universities that have excellent outcomes and well-designed websites.

Design of the ARWU Modified Model Components
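As a rough sketch of the kind of adjustment the paper argues for (my assumption about the form, not the paper’s actual modified weights), the snippet below adds a “Website” criterion to the standard ARWU indicators and rescales the existing weights proportionally so that the total stays at 100%. The 10% weight assigned to the new criterion is hypothetical.

```python
# Standard ARWU indicators and weights (percent).
arwu_weights = {"Alumni": 10, "Award": 20, "HiCi": 20, "N&S": 20, "PUB": 20, "PCP": 10}

website_weight = 10  # hypothetical weight for the new website criterion

# Shrink the existing weights proportionally to make room for the new criterion.
scale = (100 - website_weight) / sum(arwu_weights.values())
modified = {name: round(w * scale, 1) for name, w in arwu_weights.items()}
modified["Website"] = website_weight

print(modified)  # {'Alumni': 9.0, 'Award': 18.0, ..., 'Website': 10}
```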
University Ranking