Archive | Neuroimaging Project RSS feed for this section

Follow the Leader

17 Sep

I read a really interesting post on the Harvard Business Review’s blog yesterday titled, “Convincing Employees to Use New Technology.” Any regular reader of my blog knows that I’m fascinated with new technologies, behavior change, and the intersection of the two. I’m particularly interested in how they come into play in science and in libraries, the two places where I spend my working hours. For all that technology has done to reshape both of these areas, I continue to be amazed at how reluctant many scientists and librarians are to try new things and adopt them into their work habits and processes. Despite a growing body of evidence that helps us see which tools work well and which don’t, which behavior changes improve efficiency and which create distraction, and how we can more effectively advance our information dissemination, sharing, and networking, many still say, “No thank you!”

The post from HBR hits on several reasons that might explain the reluctance, not the least of which is the lack of investment that companies, organizations, and institutions place in the adoption of these tools.

The real return on digital transformation comes from embedding new work practices into the processes, work flows, and ultimately the culture of organizations. But even in cases where the value of adoption is understood, cost containment often takes over. Faced with limited budgets, companies focus on the most tangible part first – deploying the technology. Adoption is left for later, and often “later” never comes. (Didier Bonnet)

I’ve observed this pattern on multiple occasions, but one of the clearest was when I was working on a study involving the use of Twitter to help people lose weight. The idea was that the microblogging service could be used to develop a free, easy-to-access, online support group that could supplement in-person meetings of people in a weight loss group. What we learned, though, was that unless people were already active users of Twitter, we needed to build in time and effort to help participants develop communication behavior patterns that involved Twitter. Without this, we were really seeking two behavior changes instead of one, i.e. the intended behavior changes around diet and exercise, but also the adoption of a social media tool. (See “Tweeting it off: characteristics of adults who tweet about a weight loss attempt,” Pagoto et al, Journal of the American Medical Informatics Association, 2014 Jun 13.)

I’m sure that you can think of your own experiences where your organization or department or library or university implemented a new intranet or new personal profile pages or a blog. “It’s a GREAT IDEA!” everyone thinks, but then, lacking much motivation or incentive to contribute to it, the new, great idea slowly finds its way to the big cloud of wikis that went nowhere. Over time, we become jaded and cynical, and whenever we hear someone suggest the next newfangled idea, we immediately think, “Yeah, right. Like that ever works.”

Yet, recognizing this, I think the HBR post hits on a fact that can, in time, truly make a difference in the adoption of tools:

Lead by example. You can influence the transition to new digital ways of working by modeling the change you want to see happen – and by encouraging your colleagues to do so. For instance, actively participating on digital platforms and experimenting with new ways of communicating, collaborating, and connecting with employees. It is the first important step to earning the right to engage your organization. Coca-Cola faced huge challenges when it deployed its internal social collaboration platform. Only when Coca-Cola’s senior executives became engaged on the platform did the community become active. As the implementation leader put it, “With executive engagement, you don’t have to mandate activity.” (Didier Bonnet)

From the Journal of Cell Biology. Used with permission https://www.flickr.com/photos/thejcb/4117496025/

One of the scientific communities doing a lot of leading here is the neuroscience community. When I began working on the neuroimaging project, I was thrilled to see how active this community is online. They have well-developed data repositories, online journals, information portals, and resources for cloud computing. (See NITRC, as an example.) They have an awareness of and openness to the ideas of sharing; to moving their science forward by using the tools that make sharing so much easier today. Indeed, I was brought on to the neuroimaging project to help improve a few processes along these lines.

And then this morning, I saw an announcement of another new online tool launched for the neuroscience community, this one an extension of the Public Library of Science’s (PLoS) Neuro Community, a site on the platform Medium*, “created as a collaborative workspace for reporting news and discussion coming out of this year’s Society for Neuroscience Annual Meeting on November 15–20, 2014.” Moving past “simply” tweeting a meeting, the Society instead is thinking ahead and building a place for openly sharing, contributing, and reflecting before the meeting happens. And it will be successful. You know why? Because those who initiate these tools in the neuroscience community are the leaders of the community. They have been a part of their past investments, seen the payoff, and thus continue to invest more for the future.

We need this same kind of leadership in libraries, in the Academy, and in other areas of science. Those of us who see and/or have experienced the value of implementing new technologies into our work need to be fairly tireless in banging the drum for them. We need to continue to lead by example and hopefully, in time, we will all reap the rewards.

*I’ve become a big fan of Medium over the past months as a place to keep up with a lot of interesting stories on the Web.

Repeat After Me

22 Aug

Reproduction

Reproducibility is the ability of an entire experiment or study to be reproduced, either by the researcher or by someone else working independently. It is one of the main principles of the scientific method and relies on ceteris paribus. Wikipedia

I was going to start this post with a similar statement in my own words, but couldn’t resist the chance to quote Latin. It always makes you sound so smart. But regardless of whether these are a Wikipedia author’s words or my own, the point is the same – one of the foundations of good science is the ability to reproduce the results.

My work for the neuroimaging project involves developing a process for researchers in this field to cite their data in such a way that makes their work more easily reproducible. The current practice of citing data sets alone doesn’t always make reproducibility possible. A researcher might take different images from a number of different data sets to create an entirely new data set, in which case citing the previous sets in whole doesn’t tell exactly which images are being used. Thus, this gap can make the final research harder to replicate, as well as more difficult to review. We think that we may have a way to help fix this problem and that’s what I’ve been working on for the past few months.
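The gap described above can be made concrete with a small sketch. Assuming a hypothetical manifest format (every identifier and field name here is invented for illustration, not the project’s actual scheme), a derived dataset could record exactly which images it draws from each source set, so a citation can be generated at the image level rather than for whole datasets:

```python
# Hypothetical sketch: a derived neuroimaging dataset that records,
# per source dataset, exactly which images it reuses. All identifiers
# below are illustrative placeholders.

def cite_sources(derived):
    """Return one citation string per source dataset, listing the
    specific images reused from it (image-level granularity)."""
    citations = []
    for source in derived["sources"]:
        ids = ", ".join(source["images"])
        citations.append(f"{source['dataset']} (images: {ids})")
    return citations

derived_set = {
    "name": "example-derived-set-01",
    "sources": [
        {"dataset": "doi:10.1234/set-A", "images": ["img-0007", "img-0042"]},
        {"dataset": "doi:10.1234/set-B", "images": ["img-0103"]},
    ],
}

for line in cite_sources(derived_set):
    print(line)
```

The point of the sketch is only that a reviewer or replicator could reconstruct the exact inputs from the manifest, which citing the parent datasets in whole cannot guarantee.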

At the same time, I’ve been working on a systematic review with the members of the mammography study team. This work has me locating and reading and discussing a whole slew of articles about the use of telephone call reminders to increase the rate of women receiving a mammogram within current clinical guidelines. It also has me wondering about the nature of clinical research and the concept of reproducible science, for in all of my work, I’ve yet to come across any two studies that are exactly alike. In other words, it doesn’t seem to be common practice for anyone to repeat anyone else’s study. And I can’t help but wonder why this is so.

I imagine it has something to do with funding. Will a funding agency award money to a proposal that seeks to repeat something; something unoriginal? Surely they are more apt to look to fund new ideas.

Maybe it has to do with scientific publishing. Like funding agencies, publishers probably much prefer to publish new ideas and new findings. Who wants to read an article that says the same thing as one they read last year?

Of course, it may also be that researchers look to improve on previous studies, rather than simply repeat them. This is what I see in all of the papers I’ve found for this particular systematic review. The methods are tweaked from study to study; the populations differ just a bit, the length of time varies, etc. It makes sense. The goal of this body of research is to determine what intervention works the best and in changing things slightly, you might just find the answer. What has me baffled about this process, though, is that as we continue to tweak this aspect or that aspect of a study’s methodology, when and/or how do we ever discover what aspect actually works and then put it into practice? 

Working on this particular review, I’ve collected easily 50+ relevant articles, yet as we pull them together – consolidate them to discover any conclusions – the task seems, at times, impossible. Too often, despite the relevancy of the articles to the question asked, what you really end up comparing is apples to oranges. How does this get to the heart of scientific discovery? How does it influence or generate “best practice”? I can’t help but wonder.

Yesterday, during my library’s monthly journal club, we discussed an article that had been recommended to me by one of the principal investigators on the mammography study. “How to Read a Systematic Review and Meta-analysis and Apply the Results to Patient Care” is the latest User’s Guide on the subject from the Journal of the American Medical Association (JAMA). It prompted a lively session about everything from how research is done, to how medical students are taught to read the literature, to how the media portrays medical news. I recommend it.

Of course, there are many explanations for my question and many factors at play. My wondering and our journal club discussion don’t afford any concrete solution or answer, but I still feel it’s a worthwhile topic for medical librarians to think about. If you have any thoughts, please keep the discussion going in the comments section below.

Back to the Starting Square

2 May

One of my favorite singer-songwriters is Lucy Wainwright Roche. Fans of folk music who don’t know Lucy may well know her familiar last names. The daughter of Suzzy Roche and Loudon Wainwright III, she comes by her musical gifts honestly. One of my favorites of her songs is “Starting Square.” It’s a song about seeing an old love again and taking note of the changes that happen after relationships end. That’s my take, anyway. And it’s summed up in the line,

I can tell you can tell it from there
That I may have been everywhere
But I’m back
Back to the starting square

Enjoy Lucy singing it.

I may not have been everywhere in the first round of informationist work, but as I met with the principal investigator of my latest grant-funded project this week, I did feel like I’m back at square one. This latest project is really very different from the mammography study that I’ve worked on for the past couple of years. This supplemental grant is to provide informationist services to the larger grant entitled, “A Knowledge Environment for Neuroimaging in Child Psychiatry.” Our ultimate goal (and there are more than a few steps to take before we’ll get there) is “to establish best practices and standards around data sharing in the discipline of neuroinformatics so that it becomes possible to generate accurate, easy to obtain quantitative metrics that give credit to the original source of data.” In short, it’s a project that will hopefully deliver a means for researchers to cite their data for both the purpose of data sharing and to make the science reproducible. I’ll work on determining the proper level of identification for neuroimages, the best identifier for the images (is it a DOI?), and the most efficient means of organizing and naming new data sets that are derived from bits and pieces of multiple other data sets.
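One piece of the naming question can be sketched in code. Purely as an illustration (this is one possible approach, not the project’s chosen method), a derived dataset could be given a stable, content-based identifier by hashing the sorted list of source image identifiers, so that the same selection of images always yields the same name regardless of the order in which they were gathered:

```python
# Illustrative only: derive a deterministic identifier for a new
# dataset from the set of source images it contains. Sorting first
# makes the result independent of selection order.
import hashlib

def derived_set_id(image_ids):
    """Stable ID from a collection of source image identifiers
    (hypothetical naming scheme, not an established standard)."""
    canonical = "\n".join(sorted(image_ids))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return "derived-" + digest[:12]

a = derived_set_id(["setA/img-0042", "setB/img-0103"])
b = derived_set_id(["setB/img-0103", "setA/img-0042"])  # same images, different order
```

Whether the published identifier should then be a DOI pointing at such a manifest is exactly the kind of question the project is meant to work out.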

During our first meeting, the PI showed me a whole bunch of really interesting websites and told me of many interesting projects happening in this area (directly and tangentially). I came back to my desk and promptly created a new folder of bookmarks for this work. So now… I’m back to the starting square. I’ve got a mountain of stuff to read and watch and become familiar with. It’s like the first day of class. The first assignments. And I need a new notebook!

I include a few of the resources below, if you’re interested in the topic and want to play a little catch up, too. Enjoy!