If there is one thing academia loves above all else, it is to quantify things and attach numbers to them. From citation counts to journal impact factors to the ubiquitous H-index, all are attempts to quantify scholarly influence. Given the amount of time and effort a senior academic spends in “raising” an aspiring one, there was never any doubt that an index would eventually appear to measure this important dimension of academic life: mentorship.
The attraction of such an index is obvious. Mentorship - training and preparing younger people (according to their abilities) to become the next generation of torch-bearers - is a serious responsibility and often one of the most time-consuming activities in academic life. Yet it is also the least visible form of scholarly labour, attracting little or no recognition. Apart from the number of PhD students supervised, and attempts by some professional societies (such as the Royal Astronomical Society, UK) to recognise exceptional individual mentors, mentorship has largely remained outside the world of bibliometric indicators.
The recently proposed "Mentorship Index" (M-Index), along with an online calculator (https://jef.works/Mentorship-Index) created by Jean Fan, a biomedical engineer at Johns Hopkins University, attempts to fill this void by quantifying a scientist's contribution to mentoring junior researchers. The index is defined as the number of publications in which an academic appears as the last author (the mentor) while the first author (the mentee) has fewer than ten total publications.
This rests on the assumption that a first author with fewer than ten publications is typically a junior researcher - an undergraduate, a graduate student, or a postdoctoral fellow - who carried out most of the work, whereas the last author is usually the supervisor of the project.
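For readers who want to see the definition in concrete terms, here is a minimal sketch of the computation in Python. The data structures (an ordered author list per paper and a lookup table of each author's total publication count) are hypothetical stand-ins; the actual calculator at jef.works/Mentorship-Index builds this information from OpenAlex records.

```python
# Minimal sketch of the M-Index as described above (assumed data structures).

def m_index(publications, mentor, total_pubs, threshold=10):
    """Count papers on which `mentor` is last author and the first
    author has fewer than `threshold` total publications."""
    count = 0
    for authors in publications:      # each entry: ordered author list
        if len(authors) < 2:
            continue                  # single-author papers have no mentor/mentee pair
        first, last = authors[0], authors[-1]
        if last == mentor and total_pubs.get(first, 0) < threshold:
            count += 1
    return count

# Example with made-up data:
pubs = [
    ["A. Student", "B. Postdoc", "C. Mentor"],
    ["D. Veteran", "C. Mentor"],
]
counts = {"A. Student": 3, "D. Veteran": 120}
print(m_index(pubs, "C. Mentor", counts))   # -> 1
```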
Unfortunately, the usefulness (or lack thereof) of the M-Index depends heavily on this assumption about authorship convention, which does not hold across disciplines. Within Physics itself, practices vary widely between sub-fields. In High Energy Physics, for example, author ordering is strictly alphabetical (as it is in Mathematics). Astrophysics, on the other hand, is a mixed bag: small collaborations mostly follow the first-author convention assumed in designing the M-Index, while large collaborations (such as those working on Gravitational Waves) adopt alphabetical ordering. The same researcher may use different author-ordering conventions depending on the context.
Other factors also make the premise of identifying a mentor through the last-author position problematic. Consider, for example, the scientists who manage major experimental or instrumentation facilities. Their names typically appear on numerous publications because they enable the observations or experiments. Their role is essential, but it is not the same as mentoring a junior researcher. Nevertheless, the index would count such papers as evidence of mentorship.
Consider another scenario, not uncommon in large research groups. A graduate student may develop a methodology or a piece of code that the group continues to use even after the student has graduated. Depending on the situation, the original developer may continue to be listed as the last author on subsequent papers in recognition of their foundational contribution. Here the authorship clearly reflects intellectual lineage rather than active supervision.
The M-Index cannot (yet) track such nuances.
The database underlying the index introduces additional complications. The metric relies on records from OpenAlex. While OpenAlex is a promising initiative, its coverage and author-identification systems are still evolving. In Astrophysics, for example, the benchmark literature resource remains the NASA Astrophysics Data System (ADS), which integrates journal publications, datasets, and preprints from arXiv into a specialised discovery platform. OpenAlex simply doesn't hold a candle to NASA-ADS.
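To make the dependence on OpenAlex concrete, here is a rough sketch of how the relevant records might be pulled from its public REST API (https://api.openalex.org). The author ID is a placeholder, and the field names follow the published "works" schema as I understand it; this illustrates the kind of query involved, not the actual code behind the calculator.

```python
import requests

AUTHOR_ID = "A0000000000"  # placeholder OpenAlex author ID

# Fetch works in which this author appears (first page only, for brevity),
# then filter locally for papers where they hold the last-author position.
resp = requests.get(
    "https://api.openalex.org/works",
    params={"filter": f"authorships.author.id:{AUTHOR_ID}", "per-page": 200},
)
last_author_works = [
    w for w in resp.json()["results"]
    if any(a["author_position"] == "last" and a["author"]["id"].endswith(AUTHOR_ID)
           for a in w["authorships"])
]

# The first author's total output could then be looked up via
# https://api.openalex.org/authors/<id>, whose record includes "works_count".
print(len(last_author_works))
```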
More fundamentally, mentorship may resist simple numerical representation.
Good mentorship involves far more than co-authoring papers. It includes guiding research direction, helping students develop intellectual independence, supporting them through failures, and advising them on career choices. None of these activities can be reliably inferred from authorship lists.
Conversely, there are well-known cases in which senior researchers demand authorship on papers while contributing little to the work. A metric based purely on co-authorship may therefore end up rewarding precisely the behaviour that is the antithesis of good mentorship.
Of course, none of this means the idea should be dismissed entirely. Many years ago, when citation indices first became widely available, we young researchers watched (in horrified fascination) as reputed senior scientists indulged in acrimonious games of citation-index one-upmanship. I am happy to see that my contemporaries are displaying a decidedly more mature response to the M-Index, treating it as an interesting experiment rather than a serious evaluation tool.
With careful refinement, perhaps by incorporating other indicators such as supervision records, student outcomes, or collaborative networks, this index might eventually be transformed into something more meaningful. Until then, the scientific community would do well to treat it with caution. After all, not everything that matters in academia can be captured by a number.
