Tag Archives: altmetrics

Alternative Metrics ARE Common Metrics

3 Mar

A few weeks back, I was invited by the good folks at Altmetric to take part in a webinar discussing my use of alternative metrics in my work as an evaluator for the UMass Center for Clinical & Translational Science. The webinar is available online, but for those who might want to see my slides and read the transcript of my part, here you go:

Slide01

Slide02

I thought that I’d start with an overview of what’s happening regarding evaluation from the national perspective, since NCATS, the National Center for Advancing Translational Sciences, as the overseer of the CTSA program, steers the ship, so to speak. For those unaware, NCATS is the Center within NIH that oversees the 62 CTSA programs across the country.

The Common Metrics Initiative is a fairly new – or I should say renewed and retooled – working group coordinated by some of the Principal Investigators from the CTSA sites. FYI, the proper jargon for a site is now “hub.” So when you see the word “hub,” it refers to an individual CTSA site, such as the UMass Center for Clinical & Translational Science, where I work. “Consortium” refers to all of the sites as a whole.

The Common Metrics Initiative came about in an effort to better measure, evaluate, and disseminate the impact of translational science and the concepts behind it. If you think about it, the idea of translational science is that by eliminating some of the barriers and obstacles that exist between biomedical research, clinical research, and clinical practice, discoveries that improve health will move from the lab bench to the bedside, i.e. patient care, faster. The question of how we measure the truth of this idea is what’s behind establishing a set of common metrics – a uniform, standard set of metrics that measures the speed, efficiency, and quality of this large practice called translational science.

With multiple centers, a seemingly infinite number of programs and research projects, and countless individuals involved as researchers, clinicians, students, and subjects, you can easily imagine how difficult it is to come up with a common set of metrics that everyone will collect and analyze. But it’s certainly an important thing to do, not only so that we can evaluate our respective individual hubs, but also so that we can compare across hubs.

Briefly, there are four key areas that have been identified as targets for the implementation of common metrics – workforce development (this involves training opportunities for individuals to learn, among other things, how to conduct clinical research); resources and services of each CTSA site; the collective impact of all the programs, functions, and such of an individual site; and the larger CTSA consortium as a whole.

Slide03

For today’s talk, I’m going to focus on the area where alternative metrics are most useful. NCATS defines the resources and services each hub offers as the following:

  • Biomedical informatics
  • Biostatistics, Epidemiology, Research Design and Ethics (commonly known as “the BERD”)
  • Pilot project funding
  • Regulatory Knowledge and Support
  • Community Engagement in Research
  • And Navigational Support – how well those administering hubs connect people to the resources and services that they need.

Slide04

Further, I want to focus on the first three items within this area: biomedical informatics (BMI), the BERD, and pilot funding.

Slide05

As the evaluator for the UMCCTS, my job is basically all about answering questions. It’s a good thing that I was a librarian already, since answering questions is a librarian’s forte. I can also say that one of the things that I love most about being a librarian is answering interesting questions, and my role now certainly offers up a few interesting ones like these:

How effective are our resources and services – from bioinformatics to the parts of the BERD – in contributing to translational research?

When we give people money for pilot research, how well does this research then generate funding for further research? And then, what’s the impact of that research? How is it transforming health care practice and, ultimately, health?

And then the big elephant in the room, not mentioned on this slide, how do we go about answering these questions? The idea of identifying and analyzing a core set of common metrics is one attempt, but what should those metrics be?

These are big, difficult, and very interesting questions.

Slide06

Of course, we start with the usual suspects. We count things. How many new projects are initiated? How many people are involved? How many trans-disciplinary collaborations are formed? How many students and new investigators are mentored and trained? How many publications result from the research done? How much new grant funding is obtained to further the work? Remember, the pilot funds offered by CTSA hubs are seed funds. They are meant to help get projects started, not fund them forever.

But what else besides these common metrics can we look to in order to draw a bigger picture of the success of our work? This, you guessed it, is where we turn to alternative metrics, or altmetrics.

Slide07

So let’s take an example. Here’s a paper authored by one of our researchers and funded, in part, through the resources and services of the UMCCTS. When we’re counting publications as a measure of success, it’s one that I can count. The other thing that I can count that’s fairly traditional is the number of citing articles. We know that this is a relatively good marker for impact – someone citing your work means that they used your work, in some way, to further their own. So the original work is having an effect. In this case, I could point out that 141 other publications needed to use this publication somehow. So we’ve got a reach, in the simplest terms, of 141 – 141 people, projects, research studies, something. This much we can say from these two metrics: 1 paper, 141 citations.
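Counts like these don’t have to be gathered by hand. As one illustrative sketch (not the tooling described in this talk), the script below asks NCBI’s E-utilities ELink service for the PubMed articles that cite a given PMID; the endpoint and the `pubmed_pubmed_citedin` link name are taken from NCBI’s public documentation, and the JSON shape should be verified against a live response.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def extract_citing(data: dict) -> list:
    """Pull citing-article PMIDs out of an ELink JSON response."""
    pmids = []
    for linkset in data.get("linksets", []):
        for db in linkset.get("linksetdbs", []):
            if db.get("linkname") == "pubmed_pubmed_citedin":
                pmids.extend(db.get("links", []))
    return pmids

def citing_pmids(pmid: str) -> list:
    """Fetch PMIDs of PubMed articles that cite the given PMID."""
    params = urlencode({
        "dbfrom": "pubmed", "db": "pubmed",
        "linkname": "pubmed_pubmed_citedin",
        "id": pmid, "retmode": "json",
    })
    with urlopen(f"{EUTILS}?{params}") as resp:
        return extract_citing(json.load(resp))
```

`len(citing_pmids("12345678"))` would then give one citation count, though note PubMed’s citing set is narrower than Scopus’s, so the two tools will report different numbers for the same paper.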

But as we all know, today’s communication tools allow for much broader – and easier – dissemination of science. One of my goals in my work (you could see it as a challenge and an opportunity, too) is to help researchers, funders, and other stakeholders appreciate the value of these other tools – to help them see how these tools give us a whole set of other metrics that can help us evaluate the impact of the work.

Slide08

This particular paper is always a good example because you can clearly see, via the Altmetric tool, how far it’s traveled beyond the strict confines of scholarly, scientific publications. It’s also reached news outlets and social media users. It’s reached a wide cross-section of people – the general public, patients, other health care practitioners, other researchers in different disciplines. These are also important. We can argue over the level of importance, as compared to citations, but it’s difficult to ignore them – to claim that they have nothing to say when it comes to the measurable impact of this one paper.
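The attention data behind that donut is also queryable. As a minimal sketch, Altmetric offers a free public details endpoint keyed by DOI; the URL and field names below follow Altmetric’s published API documentation, but treat them as assumptions to check against the current docs before relying on them.

```python
import json
from urllib.request import urlopen

def fetch_altmetric(doi: str) -> dict:
    """Query Altmetric's public details endpoint for one DOI."""
    with urlopen(f"https://api.altmetric.com/v1/doi/{doi}") as resp:
        return json.load(resp)

def summarize(record: dict) -> dict:
    """Reduce a full Altmetric record to a few attention counts
    (field names per Altmetric's API docs)."""
    return {
        "score": record.get("score", 0),
        "news_outlets": record.get("cited_by_msm_count", 0),
        "tweeters": record.get("cited_by_tweeters_count", 0),
        "all_posts": record.get("cited_by_posts_count", 0),
    }
```

A call like `summarize(fetch_altmetric("10.1001/jama.2014.648"))` (DOI shown is a placeholder) would return the same news-outlet and social-media counts that the donut visualizes.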

The other reason that I like to use this particular paper as an example, besides its impressive Altmetric donut, is that the final author listed – one of the co-PIs for this work – also happens to be the PI for our CTSA site. She’s my boss. The big boss. She’s one of the PIs involved in the evaluation initiatives for NCATS. The first time that I demoed the donut for her, she loved it. How could she not? Beyond the simple satisfaction of seeing one’s work being shared, it’s also a great ego boost. Researchers, in case you don’t know, are a little bit competitive by nature. They like to see a good score, a good result, a big donut… you name it. They like it.

For those of us trying to cast altmetrics in a favorable light within very traditional disciplines, being able to show this type of example to your stakeholders – in this instance, my boss – works.

Slide09

So day to day, I spend a lot of time at my rock pile doing these sorts of things. I establish collections of publications related to the different groups within the UMCCTS. I maintain those collections regularly – using Collections within My NCBI in PubMed, or Scopus and SciVal, two tools available to me thanks to the library of UMass Medical School. I collect data related to the common metrics outlined by NCATS, but I also collect the altmetrics. I track them all. And then I report on them all via progress reports and infographics (my latest love). It’s an ongoing – never-ending – project, but it’s certainly interesting to step back from time to time and look at the big picture, the story, that all of these metrics, together, tell us.
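The reporting step of that workflow can be sketched, too. The snippet below is purely illustrative – the group names and metric fields are hypothetical, not the UMCCTS’s actual schema – but it shows the shape of the roll-up: per-publication records go in, one summary row per group comes out as CSV for a progress report.

```python
import csv
import io

def group_report(pubs):
    """Roll per-publication metric dicts up into totals per group.
    Each input dict is assumed to carry 'group', 'citations', and
    'altmetric_score' keys (illustrative field names)."""
    totals = {}
    for p in pubs:
        t = totals.setdefault(p["group"],
                              {"papers": 0, "citations": 0, "altmetric": 0})
        t["papers"] += 1
        t["citations"] += p.get("citations", 0)
        t["altmetric"] += p.get("altmetric_score", 0)
    return totals

def to_csv(totals) -> str:
    """Render the group totals as a small CSV table."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["group", "papers", "citations", "altmetric"])
    for group, t in sorted(totals.items()):
        writer.writerow([group, t["papers"], t["citations"], t["altmetric"]])
    return buf.getvalue()
```

In practice the input rows would come from the collection tools above (PubMed, Scopus, Altmetric) rather than being typed in by hand.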

I recently finished the final progress report for the first 5-year funding cycle of our CTSA. It really was impressive to see where we are today, in comparison to where we were just 8 or 9 years ago, when the idea of establishing a clinical and translational science center at UMass Med first took hold.

Slide10

“Telling the story” is what my PI reminds me is my job. Using common and alternative metrics, I can tell the story of this one clinical research scholar who, over the past several years, has published 18 papers related to work she’s done in this program. These papers have been cited, she’s worked with many colleagues as co-authors, she’s developed a number of effective collaborations, she’s presented her work locally, regionally, and nationally, and she’s received several new grants to help her continue in her area of research. She’s also reached the public, patients, and other health care providers through multiple means.

Based on all of these metrics, I can write a pretty good story of how well this one doctor is utilizing the resources of the UMCCTS to inform practice and improve care. In a nutshell, I can tell a story of the impact of her research. If I repeat the same for each of our clinical scholars, or a group of researchers utilizing one of our research cores, or one or more of our pilot-funded projects … the story, the picture, gets bigger and, hopefully, clearer. Our Center is making a difference. That’s what we want to show. And that’s possible through the use of all of these tools and metrics.

Slide11

Finally, I want and need to give a shout-out to my former colleagues and friends over at the Lamar Soutter Library here at UMass Med. I worked in the Library for 10 years before moving to the UMCCTS a little over a year ago now. It’s the work that I did in the library that first enabled me to build a relationship with our Center, and then inspired me to approach them to do the evaluation work that I do for them now. Kudos to the LSL for all of the initiatives carried out related to scholarly communications and research impact. I think together we’re helping change the environment around here and raising the level of awareness and acceptance of altmetrics.

Slide12

Add it Up

10 Oct

The theme for this month’s international Open Access Week celebration is “Redefining Impact,” and it will focus on alternative metrics (altmetrics) and the emerging realization that there are better ways to measure the reach of one’s research than simply counting how many times a published article about it is cited. Publication certainly has value, but in today’s world, with so many faster, far-reaching, and varied means of communication, scientists and others in academia need to recognize – and track – how well their work is or isn’t getting to its intended audience (and, perhaps, beyond).

Similarly, with funding harder to come by and the cost of everything – healthcare, education, food, clothing, and shelter – rising sharply, the pressure is on from administrators, funding bodies, and the general public for these expensive endeavors to demonstrate their value. If the NIH gives a scientist several million dollars to carry out research, the expectation is that the outcomes will be worthy of that grant funding. If you pay $100,000+ for a four-year education, you expect to walk across the stage four years later with more than a piece of paper claiming you have a degree. More and more, we want demonstrable value for our investments.

In such an environment, we have to begin to investigate these altmetrics. For libraries, which have traditionally tracked gate counts, circulation statistics, reference transactions, collection size, and other straightforward numbers that measured … well … numbers, it is past time to look at alternative means of really answering the question(s) of our worth. As Steve Hiller, director of assessment for the University of Washington Libraries, noted in an article for Information Outlook last year, we need to ask, “What makes a library good?” We need to look at these traditional metrics and ask if they’re truly yielding measures that matter. (Information Outlook, 16(5), Sept/Oct 2012)

Yet, value is a difficult thing to measure in numbers, of course, and this is what makes the task so difficult and, often, elusive. There are many articles and blog posts and online discussions on the topic of assessing the value of libraries, written by people with much more expertise in the area than I have. If you’re interested, I’d recommend Megan Oakleaf’s white paper, The Value of Academic Libraries, that grew from the ACRL-commissioned study of the same name, as a starting point.

In this post, I want to ask instead how we measure not the value of libraries, but of librarians. What are the altmetrics that we need to collect on ourselves to demonstrate that the work we do matters to our patrons? As an academic librarian, I’ve built my portfolio using the very tools that we tell researchers to use themselves. I have my ORCID profile, my ResearcherID, and my ImpactStory. I’ve registered this blog with ScienceSeeker. I have a LinkedIn account. I put my presentations on SlideShare. I tweet prodigiously (@mandosally).
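Those profiles are machine-readable, too. As one small, hedged example: ORCID exposes a public read API, and the sketch below counts the distinct works on a record via the v3.0 `works` endpoint; the URL, the `Accept` header, and the `group` field follow ORCID’s public documentation, so verify them against the current docs before use.

```python
import json
from urllib.request import Request, urlopen

def count_works(data: dict) -> int:
    """Each entry in 'group' is one distinct work (versions of the
    same work from multiple sources are grouped together)."""
    return len(data.get("group", []))

def orcid_work_count(orcid_id: str) -> int:
    """Count the works on a public ORCID record, e.g.
    orcid_work_count('0000-0002-1825-0097')."""
    req = Request(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
    )
    with urlopen(req) as resp:
        return count_works(json.load(resp))
```

It’s a blunt number, of course – which is rather the point of this post: counting works is easy, while counting value is not.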

These things have been successful in raising my profile within my profession. They’ve garnered me a small band of loyal followers, invitations to speak at conferences and to be part of webcasts, the opportunity to teach classes to a number of library staffs, and the odd-but-thrilling connection with a few real celebrities. It’s all wonderful stuff and I wouldn’t trade a bit of it. But… what does any of it say regarding my value as a librarian to the research community that I serve here at the University of Massachusetts Medical School? How do I measure that? What altmetrics are there that I can track and collect and show to my administrators to prove to them that I am, in fact, adding value to the work of the people that I serve and thus, ultimately, to the library?

I thought of this question earlier in the week when I was putting together a traditional altmetric profile (how’s that for an oxymoron?) for a faculty member here. As part of OA Week, we want to give a presentation on altmetrics and my library profile just isn’t going to cut it for an audience of researchers, so I asked Dr. Sherry Pagoto if I could use her as a guinea pig to set up all of the previously-mentioned profiles for her. Her reply was, “Cool! Yeah, I’d love to see this data (I think!).  I’ve been wanting to set this kind of thing up but haven’t gotten to it, so this will be fun!” Later, when I had her ImpactStory profile pretty well done, I tweeted it (of course!) and it prompted this “transaction”:

[Screenshot: the tweet exchange about Sherry’s ImpactStory profile]

To me, this is an unequivocal demonstration of my value as a librarian on that particular day. I did my job and I did it very well and I have the proof, in a tweet, of this fact. Great, isn’t it? But short of taking screen shots of tweets and email replies, short of catching conversations with grateful patrons on video and posting them to YouTube, short of saving notes and phone messages and journal entries describing “good days”, how do I systematically capture all of this “value”? It’s a challenge. It’s perhaps THE challenge that any and all of us who work in information, innovation, and intellect, and the service roles that operate in those realms, face. It’s perplexing.

This week I’ve been reading Kim Dority’s book, Rethinking Information Work, and her sentiment that ultimately we are all self-employed really resonated with me.

And believe it or not, this is good news. Because if we understand that regardless of our current employment situation we are solely responsible for the well-being of our careers (and paychecks), that means we can take control. We can focus not on lifetime employment, but on lifetime employability.

One thing that I often find myself saying to colleagues, particularly newer grads from library schools, is that when you successfully embed yourself in the work of your patrons, your own value – and your job security – rises much more than if you were only trying to prove your value to your library directors and managers. This is because, if you want to talk numbers, there are more of them than there are library directors and managers. The word gets out that you’re worth having around – that you can do this and that and the other thing that they never knew before. And suddenly, you have done for yourself what can’t easily be captured on any annual evaluation, but is worth much more. You have made yourself employable, regardless of any circumstance. In a time of tight budgets and job cuts and the very real struggles of librarians to keep their libraries open, this is likely the biggest asset you can have.

Perhaps for a long time, librarians depended upon their libraries for their value. We counted on the intrinsic value of the institution to give us worth. Perhaps today, however, it’s the institution that is dependent upon those of us who work in it to bring that value back. And this is why, I believe, we need to shift the discussion from measuring the value of libraries to measuring the value of librarians. Those are the altmetrics that I’m still waiting to see emerge.