Archive | Evaluation

What Goes Around, Comes Around…

13 Oct

At this time last year, meaning the month of October, I was feeling like a real world traveler, spanning the globe from Massachusetts to DC to Edinburgh and Stirling, Scotland, taking in a trio of really thought-provoking meetings in some wonderful venues. It was something and I vowed to myself that I would do my best to do the same, i.e. travel to an international conference, every year for the rest of my professional days. Taking part in conferences and meetings with people from other parts of the world opened my eyes – and my mind – to a whole host of new perspectives. I was inspired.

Well, here we are, a year later – surely one of the strangest years I could imagine – and the same conferences are all taking place. They’re still international in scope and the content is terrific, but alas, like everything else these days, I’m attending them via a screen, zooming in from my home or my office. And like everything else these days, it’s just missing something for me.

All that said, I’m grateful to be well, grateful to be working, and grateful that I have the means to keep on going. I know that too many people all over the world lack this good fortune right now.

But back to conferencing, one thing I enjoy most about attending a conference is feeling that charge of excitement and enthusiasm that comes with hearing intellectually stimulating stuff. I find myself writing down a dozen ideas for research studies. I come away with a stack of readings. I think, “Why the heck didn’t I get that PhD?”

Well, I didn’t because I thought that, at 39, I was too old to pursue one. I talked myself out of it. And let’s just say that coming up on 20 years hence, I’m not talking myself into it now. BUT, reflecting on a number of the talks and research presentations that I’ve taken in over the last week (NIH Bibliometrics and Assessment Symposium) and this (Transforming Research 2020), I realized something fascinating. At least to me. I realized that way back in 2002, when I had a question about a certain pattern that I observed in exercise physiology research and publications, and I followed it up with an independent study … by golly, I was doing bibliometric analysis!

I’ve always tied this experience to ending up earning a library science degree and pursuing my current career, but only within the past couple of weeks have I put together the pieces and seen (1) how much they truly were aligned and (2) how research continues on in the area. So here’s what happened:

As a grad student at Ithaca College, working on my MS in exercise physiology, I attended a regional meeting of the American College of Sports Medicine. At the meeting, a grad student (female) presented her research that examined the effect of a particular supplement on a group of subjects performing a particular physical training task. After she finished, an established faculty member (older, white, male) asked her about the subjects of her study. In short, her subject pool had consisted of only females. He questioned her on the legitimacy of generalizing any findings of a study that had not included males and said any study using women needed to specifically state that it was a study on women.

Next up, a grad student (male) presented his research that examined the effect of a particular supplement on a group of subjects performing a particular physical training task. His subject pool contained only males. I bet you’re ahead of me in guessing that, well, he didn’t get the same question regarding the generalization of his findings, nor about how he titled his research.

And this happened again. And again. I looked through the program and took note of this oddity, and first chance I got, I asked my mentor, “What the hell is up with that?!” Thus was the seed for my independent study, “Current trends in exercise science research: A feminist cultural studies analysis.” I went to the library, went to the stacks, pulled 20 years worth of volumes of several prominent exercise science journals off the shelves, and began taking note of every title of every study looking at the effects of some intervention on training outcomes. (No Scopus or Web of Science, friends. I’m talking bound journals, paper, and pencil. This took a while!)

[As an aside, my thesis topic also looked at sex differences, but related to factors of muscle fatigue, not words.]

Fast forward 20 years and I’m sitting in conferences attended by biomedical researchers, publishers, bibliometrics and research assessment practitioners, and librarians and here are some of the titles of studies authored and/or cited by the speakers so far:

Plus, the topic of the effects of COVID-19 on the female workforce in research and medicine, well, that’s already targeted for study. Stay tuned for the many studies that will surely be published on this.

So what does all of this mean? Well, personally, I find it really interesting that a little spark that I noticed so long ago didn’t find only me. I think had I followed it up with that doctorate, I’d likely be doing this very research today with some of these same people. And honestly, I had no idea that was a possibility. It’s nice to know people are still studying and writing about the topic. It’s also frustrating and infuriating that it goes on, but… that’s another post.

All in all, the topic of diversity, equity, and inclusion is being discussed an awful lot today (rightly so), but it’s been a topic for a long, long time. As one speaker said, “We know a lot about what we know. But where is the change?” That’s the real question, isn’t it? And it’s where the real work is. Time to get busy.

Alternative Metrics ARE Common Metrics

3 Mar

A few weeks back, I was invited by the good folks at Altmetric to take part in a webinar to discuss my use of alternative metrics in my work as an evaluator for the UMass Center for Clinical & Translational Science. The webinar is available online, but for those who might want to see my slides and read the transcript from my part, here you go:

Slide01

Slide02

I thought that I’d start with an overview of what’s happening regarding evaluation from the national perspective, since NCATS, the National Center for Advancing Translational Sciences, as the overseer of the CTSA program, steers the ship, so to speak. For those unaware, NCATS is the Center within NIH that oversees the 62 CTSA programs across the country.

The Common Metrics Initiative is a fairly new – or I should say re-newed/re-tooled – working group coordinated by some of the Principal Investigators from the CTSA sites. FYI, the proper jargon for a site is now “hub.” So when you see the word “hub,” it refers to an individual CTSA site, such as the UMass Center for Clinical & Translational Science, where I work. Consortium refers to all of the sites, as a whole.

The Common Metrics Initiative came about in an effort to better measure, evaluate, and disseminate the impact of translational science and the concepts behind it. If you think about it, the idea of translational science is that by eliminating some of the barriers and obstacles that exist between biomedical research, clinical research, and clinical practice, discoveries that improve health will move from the lab bench to the bedside, i.e. patient care, faster. The question of how we measure the truth of this idea is what’s behind establishing a set of common metrics – a uniform, standard set of metrics that measure the speed, efficiency, and quality of this large practice called translational science.

With multiple centers, a seemingly infinite number of programs and research projects, and countless individuals involved as researchers, clinicians, students, subjects… you can easily imagine how difficult it is to come up with a common set of metrics that everyone will collect and analyze. But it’s certainly an important thing to do, not only so that we can evaluate our respective individual hubs, but also so that we can compare across hubs.

Briefly, there are four key areas that have been identified as targets for the implementation of common metrics – workforce development (this involves training opportunities for individuals to learn, among other things, how to conduct clinical research); resources and services of each CTSA site; the collective impact of all the programs, functions, and such of an individual site; and the larger CTSA consortium as a whole.

Slide03

For today’s talk, I’m going to focus on the area where alternative metrics are most useful. NCATS defines the different resources and services each hub offers as the following:

  • Biomedical informatics
  • Biostatistics, Epidemiology, Research Design and Ethics (commonly known as “the BERD”)
  • Pilot project funding
  • Regulatory Knowledge and Support
  • Community Engagement in Research
  • And Navigational Support – how well those administering hubs connect people to the resources and services that they need.

Slide04

Further, I want to focus on the first three bits within this area: BMI, the BERD, and Pilot Funding.

Slide05

As the evaluator for the UMCCTS, my job is basically all about answering questions. It’s a good thing that I was a librarian already, since answering questions is the librarian’s forte. I can also say that one of the things that I love most about being a librarian is answering interesting questions, and my role now certainly offers up a few interesting questions like these:

How effective are our resources and services – from bioinformatics to the parts of the BERD – in contributing to translational research?

When we give people money for pilot research, how well does this research then generate funding for further research? And then, what’s the impact of that research? How is it transforming health care practice and, ultimately, health?

And then the big elephant in the room, not mentioned on this slide, how do we go about answering these questions? The idea of identifying and analyzing a core set of common metrics is one attempt, but what should those metrics be?

These are big, difficult, and very interesting questions.

Slide06

Of course, we start with the usual suspects. We count things. How many new projects are initiated? How many people are involved? How many trans-disciplinary collaborations are formed? How many students and new investigators are mentored and trained? How many publications result from the research done? How much new grant funding is obtained to further the work? Remember, the pilot funds offered by CTSA hubs are seed funds. They are meant to help get projects started, not fund them forever.

But what else, besides these common metrics, can we look to in order to draw a bigger picture of the success of our work? This, you guessed it, is where we look at alternative metrics, or altmetrics.

Slide07

So let’s take an example. Here’s a paper authored by one of our researchers and funded, in part, through the resources and services of the UMCCTS. When we’re counting publications as a measure of success, it’s one that I can count. The other thing that I can count that’s fairly traditional is the number of citing articles. We know that this is a relatively good marker for impact – someone citing your work means that they used your work, in some way, to further their own. So the original work is having an effect. In this case, I could point out that 141 other publications made use of this publication in some way. So we’ve got a reach, in the simplest terms, of 141 – 141 people, projects, research studies, something. This we can say from these two metrics: 1 paper, 141 citations.

But as we all know, today’s communication tools allow for much broader – and easier – dissemination of science. One of my goals, in my work, (you could see it as a challenge and/or an opportunity, too), is to help researchers and funders and other stakeholders appreciate the value of these other tools. To help them see how these tools give us a whole set of other metrics that can help us evaluate the impact of the work.

Slide08

This particular paper is always a good example because you can clearly see, via the Altmetric tool, how far it’s traveled beyond the strict confines of scholarly, scientific publications. It’s also reached news outlets and social media users. It’s reached a wide cross-section of people – the general public, patients, other health care practitioners, other researchers in different disciplines. These are also important. We can argue over the level of importance, as compared to citations, but it’s difficult to ignore them – to claim that they have nothing to say when it comes to the measurable impact of this one paper.
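For readers curious how that Altmetric attention data can be pulled programmatically, here is a minimal sketch against Altmetric’s public details endpoint. The URL pattern is the free, unauthenticated DOI lookup; the response field names (score, news, tweet, and post counts) are my assumptions about the API’s JSON and should be checked against Altmetric’s current documentation. The sample record below is invented, not the paper discussed above.

```python
# Sketch: summarizing an Altmetric details-API record for a DOI.
# A live call would be: json.load(urlopen(details_url(doi))) via urllib.
# Field names are assumptions; verify against the Altmetric API docs.

def details_url(doi: str) -> str:
    """Build the public Altmetric details endpoint for a DOI."""
    return f"https://api.altmetric.com/v1/doi/{doi}"

def summarize(record: dict) -> dict:
    """Reduce a raw Altmetric record to the counts an evaluator might report."""
    return {
        "score": record.get("score", 0),
        "news": record.get("cited_by_msm_count", 0),       # mainstream media
        "tweets": record.get("cited_by_tweeters_count", 0),
        "posts": record.get("cited_by_posts_count", 0),     # all posts
    }

if __name__ == "__main__":
    # Invented sample record standing in for a parsed JSON response.
    sample = {"score": 321.5, "cited_by_msm_count": 12,
              "cited_by_tweeters_count": 250}
    print(summarize(sample))
```

A wrapper like this makes it easy to fold attention counts into the same progress reports that already track publications and citations.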

The other reason that I like to use this particular paper as an example, besides its impressive Altmetric donut, is because the final author listed – one of the co-PIs for this work – also happens to be the PI for our CTSA site. She’s my boss. The big boss. She’s one of the PIs involved in the evaluation initiatives for NCATS. The first time that I demoed the Donut for her, she loved it. How could she not? Apart from the unbiased reaction that it’s good to see one’s work being shared, it’s also a great ego boost. Researchers, in case you don’t know, are a little bit competitive by nature. They like to see a good score, a good result, a big donut… you name it. They like it.

For those of us trying to reach the goal of bringing altmetrics into favorable light within very traditional disciplines, being able to show this type of example to your stakeholder, in this instance, my boss … it works.

Slide09

So day to day, I spend a lot of time at my rock pile doing these sorts of things. I establish collections of publications related to different groups within the UMCCTS. I maintain those collections regularly – using Collections within MyNCBI in PubMed, or Scopus and SciVal – two tools available to me thanks to the Library of UMass Medical School. I collect data related to the common metrics outlined by NCATS, but I also collect the altmetrics. I track them all. And then I report on them all via progress reports and infographics (my latest love). It’s an ongoing – never ending – project, but it’s certainly interesting to step back from time to time and look at the big picture, the story, that all of these metrics, together, tell us.
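One way collection maintenance like this can be partly automated is through NCBI’s public E-utilities API, the machine interface behind PubMed. This is a sketch under assumptions, not the workflow described above (which uses MyNCBI Collections, Scopus, and SciVal): the affiliation query is hypothetical, and the snippet parses a canned ESearch response rather than making a live call.

```python
# Sketch: refreshing a publication collection via NCBI E-utilities (ESearch).
# The query string below is a made-up example, not a real search strategy.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(term: str, retmax: int = 100) -> str:
    """Build an ESearch URL for a PubMed query."""
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term, "retmax": retmax})

def pmids_from_response(xml_text: str) -> list:
    """Pull the PMID list out of an ESearch XML response."""
    root = ET.fromstring(xml_text)
    return [e.text for e in root.iter("Id")]

if __name__ == "__main__":
    # A live call would fetch this URL and feed the XML to pmids_from_response.
    print(esearch_url('"Example Med Sch"[Affiliation] AND 2015[PDAT]'))
    sample = ("<eSearchResult><IdList>"
              "<Id>12345</Id><Id>67890</Id>"
              "</IdList></eSearchResult>")
    print(pmids_from_response(sample))  # ['12345', '67890']
```

Scheduled against a saved query, something like this can flag new PMIDs for review before they are added to a curated collection.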

I recently finished the final progress report for the first 5-year funding cycle of our CTSA. It really was impressive to see where we are today, in comparison to where we were just 8 or 9 years ago, when the idea of establishing a clinical and translational science center at UMass Med first took hold.

Slide10

“Telling the story” is what my PI reminds me is my job. Using common and alternative metrics, I can tell the story of this one clinical research scholar who, over the past several years, has published 18 papers related to work she’s done in this program. These papers have been cited, she’s worked with many colleagues as co-authors, she’s developed a number of effective collaborations, she’s presented her work locally, regionally, and nationally, and she’s received several new grants to help her continue in her area of research. She’s also reached the public, patients, and other health care providers through multiple means.

Based on all of these metrics, I can write a pretty good story of how well this one doctor is utilizing the resources of the UMCCTS to inform practice and improve care. In a nutshell, I can tell a story of the impact of her research. If I repeat the same for each of our clinical scholars, or a group of researchers utilizing one of our research cores, or one or more of our pilot-funded projects … the story, the picture, gets bigger and, hopefully, clearer. Our Center is making a difference. That’s what we want to show. And that’s possible through the use of all of these tools and metrics.

Slide11

Finally, I want and need to give a shout-out to my former colleagues and friends over at the Lamar Soutter Library here at UMass Med. I worked in the Library for 10 years before moving to the UMCCTS a little over a year ago now. It’s the work that I did in the library that first enabled me to build a relationship with our Center, and then inspired me to approach them to do the evaluation work that I do for them now. Kudos to the LSL for all of the initiatives carried out related to scholarly communications and research impact. I think together we’re helping change the environment around here and raising the level of awareness and acceptance of altmetrics.

Slide12

The Art of Collaboration

12 Nov

[The following is my monthly column for the November issue of the UMCCTS newsletter.]

One of the goals of the UMCCTS is to promote and facilitate collaboration across departments and disciplines, thus effectively reducing barriers between the basic and clinical sciences, and ultimately speeding the pathway between the discovery and implementation of new treatments, therapies, and the like that improve health. One means of demonstrating collaboration is through co-authorship. The networks that develop between authors of publications give us a picture of how individuals are connected and where collaborations exist.

Social network analysis is the process of investigating social structures through the use of network and graph theories. It characterizes networked structures in terms of nodes (individual actors, people, or things within the network) and the ties or edges (relationships or interactions) that connect them. (Wikipedia, Social Network Analysis)

For this month’s column, let’s look at an example of a social network analysis that shows the co-authorship relationships between members of the Division of Health Informatics and Implementation Science in the Department of Quantitative Health Sciences (QHS). QHS is one of the newest departments at UMMS, with several of the senior faculty arriving on campus only about 6 years ago. The research that the Department does in developing innovative methodologies, epidemiological research, outcomes measurement science, and biostatistics is integral to the nature of clinical translational research. By examining the co-authorship relationships of members of the Health Informatics group, we get a snapshot of how well these faculty members are connecting with other departments, other disciplines, and even other institutions. In short, we see how and where collaborations have developed and thus how well the UMCCTS goal of building them is being met.

To do this analysis, we first need to identify all of the publications authored by at least one of the Division’s faculty members for the period of time that s/he has been part of the Division, as well as all of the unique co-authors associated with these papers. In doing this, I found 221 publications authored by 716 different individuals. Using Sci2, a toolset developed at Indiana University, I was able to analyze the patterns and create a visualization showing the connections between the co-authors.

Informatics Division CoAuthor Network

One thing that we clearly see is that several faculty members are prominent hubs in the network, meaning they co-author many papers with many people. Drs. Houston and Allison are the most obvious examples here. We can also see that a number of branches grow from the periphery. At the base of each of these is a faculty member from the Division (counterclockwise from upper right, Drs. Cutrona, Hogan, Shimada, Mattocks, and Yu). Finally, we note that even hubs that are less connected to the clustered middle, e.g. Drs. Yu and Pelletier, are still linked, representing the reach of the collaborative network that the Division has formed over the past years.
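The core of the analysis — turning author lists into a network and ranking authors by how many distinct co-author links they have — can be sketched in a few lines. This is a toy illustration of the general technique, not the Sci2 workflow or the Division’s real data: the papers and counts below are invented.

```python
# Sketch: build a co-authorship network from paper author lists, then
# rank authors by degree (number of distinct co-author links) to find
# the prominent "hubs." The papers below are invented examples.
from collections import Counter
from itertools import combinations

papers = [
    ["Houston", "Allison", "Cutrona"],
    ["Houston", "Shimada"],
    ["Allison", "Hogan", "Yu"],
    ["Houston", "Allison"],
]

edges = Counter()   # (author_a, author_b) -> number of joint papers
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        edges[(a, b)] += 1

degree = Counter()  # author -> number of distinct co-author links
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The most-connected authors are the prominent hubs in the network graph.
print(degree.most_common(2))  # [('Allison', 4), ('Houston', 3)]
```

Tools like Sci2 layer layout and visualization on top of exactly this kind of edge list, which is what produces the hub-and-branch picture described above.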

Tools like Sci2, Scopus, SciVal, and ISI Web of Science provide another way, i.e. a visual demonstration, of the success of our programs and the impact of the translational science being done by the members of the UMCCTS.

Sci2 Team. (2009). Science of Science (Sci2) Tool. Indiana University and SciTech Strategies, https://sci2.cns.iu.edu.