Should You Allow Viewer Comments on Your Science Video?

In the previous post, I began exploring how online commenters can influence reader and viewer opinions about a science article or video. I pointed to two studies showing that negative comments can sway how readers perceive a science news article or how viewers perceive a public service announcement. In this post, I’d like to talk a bit more about what you can do as a creator of online content to minimize potential distortion of your science message—focusing specifically on videos.

Given the potentially negative impact of fractious commenters, what should the scientist videographer do? Allow all comments on posted videos, regardless of civility or agreement with your video content? Block all comments? Compromise by filtering out personal attacks, off-topic discussion, and profanity-ridden diatribes? Something else?

As I mentioned in the previous post, some science outlets have chosen to completely block all commenting on their videos. The U.S. Geological Survey, for example, blocks commenting on its YouTube channel but allows voting. This decision means that people who’ve watched the science videos that I published through USGS have no way of asking questions or providing me with feedback (e.g., about whether the video was clear or confusing). I do get some information from the statistics—number of views, likes/dislikes, embeds, and shares—that tells me something about how my video is being used. However, without specific comments, I don’t know what someone liked about my video or whether someone else disliked the way I presented some aspect of the topic. That is important information for improving my videos and making them more understandable to my target audience.

Other science agencies, such as NASA, do allow commenting on their videos. And they don’t seem to police the comments, as there are off-topic or abstruse comments on some videos. Not surprising, though, considering their channel has over 500,000 subscribers and over 3,000 videos.

Most of us don’t have the level of interaction with our videos that NASA has. Our situation is often quite different from that of a science agency. So what should the individual video maker do?

The decision depends on several factors, specific to each situation. There is no right way or best way to go. But I can tell you what I do and why. On both my YouTube channel and this blog, I allow only approved comments. I’ve set the preferences so that I’m able to review each comment and then decide whether to allow it or delete it. Any comments that contain profanity, personal attacks, off-topic statements, misinformation, or spam are immediately deleted. If anyone repeatedly makes insulting comments on my channel, I have the option of banning them (which blocks them from ever posting a comment to my channel again). I can also report spam or abuse to YouTube, as well as set up a “blacklist” of keywords that identifies unwelcome comments. Note also that all commenters must have an account to comment on a video, which means no one can comment anonymously—anyone can hover over a commenter’s name or pseudonym and find their account and any content they’ve posted. See this page for help setting comment moderation preferences on YouTube.
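To make the keyword “blacklist” idea concrete, here is a toy sketch of how such a filter works: a comment matching any flagged term is held for review instead of being published automatically. The flagged terms and the `moderate` helper below are hypothetical illustrations, not YouTube’s actual mechanism.

```python
# Toy illustration of keyword-based comment moderation.
# BLOCKED_TERMS and moderate() are hypothetical examples only.

BLOCKED_TERMS = {"spamlink", "buy now", "idiot"}  # hypothetical flagged terms

def moderate(comment: str) -> str:
    """Return 'hold' if the comment contains any flagged term, else 'approve'."""
    text = comment.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "hold"
    return "approve"

print(moderate("Great tutorial, thanks!"))      # approve
print(moderate("Buy now at spamlink dot com"))  # hold
```

Real platforms layer more on top of this (account history, machine-learned spam scores), but the basic idea—flag first, publish only after human review—is the same one described above.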

By the way, I do not respond to the insulting commenters. Many are trolls who are trying to get a rise out of the content creator or to cause general mayhem on a site. Others are people (I suspect teenaged boys) with the manners of a warthog. You might be tempted to lash out and write a scathing putdown, but that just feeds the fire and invites more insults. Instead, by simply deleting their comment without reacting to it, you consign them to an electronic purgatory where they are uncertain about whether their offensive missive even hit its target. And they are essentially helpless to do anything about it.

On this blog, my main problem is spam. For this, I use a plug-in (Akismet) to trap spam—and there’s a lot of it; over the past six months, there have been 210,000 comments identified as spam posted to this site. Still, a few slip through, but I can easily delete the three or four that the spam filter misses.

Sometimes there are borderline comments that are difficult to categorize. Some spammers are getting better at hiding their true nature behind an almost human-sounding comment, and a few of these slip through. Over time, though, I’ve gotten much better at spotting them. If I’m suspicious, I’ll usually delete the comment, figuring that if it is legitimate and important, the commenter will try again. That’s rarely happened, however.

So that’s how I take care of the trolls and spammers. What about people who are just critical of my videos?

Most of the comments I’ve received on The Scientist Videographer blog and on my YouTube channel are positive. However, a few commenters have pointed out errors or provided constructive criticism that I’m happy to correct or otherwise address. I’m grateful to all who have taken the time to write thoughtful, constructive comments. So, I allow all comments that point out an error in a video, simply disagree with something, offer an alternative opinion, or complain about not understanding something. I also try to reply to such comments—I view these as another opportunity to inform users about some aspect of the topic or to explain a particular point in greater detail. Even if the tone of the comment borders on aggressive/aggrieved/sarcastic, my reply is polite and respectful—and I try to resolve whatever the problem is or point the commenter to another site where they can find the information they need. My goal is to turn a complaint into a lesson or opportunity for additional learning by all viewers.

In summary, I consider the time it takes to filter comments to be time well spent—for the following reasons.

1. As I said above, I welcome critical comments as opportunities to provide more information to viewers or readers. I often get questions about why I chose one video approach over another, for example. My reply then provides details that could not be easily included in the video, and this addition enriches my original posting.

2. Critical comments help me improve my videos. The nice comments that thank me for a video tutorial are great, but they don’t help me improve. It’s the critical comments that show me where I could have done better. If you are a scientist who’s published papers, you already know this.

3. I put a lot of time and effort into creating content for my blog and video channel and also spend time optimizing that content for search engines. The comment section is an important component of each online posting, and as such deserves some attention. If comments influence how future viewers/readers perceive the quality of my videos or blog posts, then it’s essential that I ensure the comment section is not bogged down with spam or off-topic conversations.

Do Online Comments Sway Opinions About Science?

During the question-and-answer period after a recent seminar I gave on “Communication Tools and Strategies for the 21st Century Scientist”, someone asked whether I thought there was a downside to scientists communicating their work through video, social media, and other online activities. I was not sure at first what specific downside the questioner had in mind. There are obviously a number of pitfalls that scientists may encounter when engaging directly with the public, and previous posts have addressed some of these (here and here, for example).

One issue that I’d like to explore in this post is the phenomenon of online commenters who distort the viewpoint of readers/viewers about a science topic. This “commenter effect” is of concern to the scientist videographer who regularly posts science videos online. One decision that must be made during posting is whether to allow user engagement (i.e., allowing viewers to vote (likes/dislikes) and/or comment on the video).

We’ve all seen online science articles and videos that are followed by a comments section filled with rants, digressions, personal attacks, and inaccurate statements by people with absolutely no expertise in science, not to mention the specific subject under discussion. These negative comments often hijack the discussion and obscure any legitimate exchange of information and opinions. More recently, however, studies (see below) have suggested a more troubling effect, which is that commenters can sway the opinions of others about a science topic—even when the source of the original information is highly credible. In other words, people’s confidence about the science being presented by legitimate sources could be shaken by commenter opinions.

It’s pretty scary that someone with no scientific credentials (and possibly ulterior motives) can be more persuasive than a science professional with years of experience. This cartoon captures the conundrum perfectly.

The online magazine Popular Science stopped accepting comments on new articles on September 24, 2013, explaining: “Comments can be bad for science. That’s why, here at PopularScience.com, we’re shutting them off.” The explanation goes on to say, “Even a fractious minority wields enough power to skew a reader’s perception of a story.” The explanatory post referred to a study of this phenomenon in which participants read a fictitious blog post on nanotechnology and were then presented with a comments section containing either insulting or civil comments. The study found that the nasty comments “…not only polarized readers, but they often changed a participant’s interpretation of the news story itself.”

Two of the authors, in a New York Times article, stated, “In the beginning, the technology gods created the Internet and saw that it was good….Then someone invented “reader comments” and paradise was lost.”

Another study (see the preprint here)—perhaps of more relevance to the scientist videographer—examined how perceptions of a Public Service Announcement (PSA) were influenced by user comments. Groups of study participants were shown two fictitious PSAs, one advocating vaccination (attributed to the CDC) and another warning against vaccination (attributed to an anti-vaccination group). The results indicated that, in addition to the PSA message itself, the consumer’s opinion of the PSA was influenced by commenters’ reactions to it. In another experiment, the commenters were identified as a doctor, a lobbyist, or a student. This information about commenter identity affected the consumer’s opinion and appeared to moderate the influence of comments; for example, when a commenter’s expertise was high (the doctor), their comments had greater influence on consumers than those of the other commenters (the lobbyist and student), regardless of the position the PSA advocated.

The take-away from these two studies might be that allowing commenting by everyone, regardless of expertise or intent, is counter-productive to science communication. That seems to be the thinking behind some decisions to block all commenting on science articles and videos. However, I think this reaction needs more scrutiny. Science benefits from an open exchange of ideas and opinions. Completely blocking input from the public cuts off a potential source of information and questions that could stimulate further research. It also sends the wrong message (that science communication is a one-way street) and dampens potential public interest in a science topic.

I think there is a compromise that will allow commenting but may minimize the negative influence described above. I’ll explore this option in the next post.

In the end, my response to the seminar question was that if scientists don’t participate in the conversation, other voices will gladly fill the gap—voices that don’t necessarily represent the best interests of science or scientists. I think that is a far more dangerous situation than the one in which scientists engage the public and make their work accessible and understandable by the broader audience.

Communication Down Under

As you may have guessed from the title of this post, I was recently in Australia (one of my favorite places in the world–despite the abundance of poisonous/dangerous animals). This was a combination work-holiday trip, and the following description provides some highlights:

Keynote Address at the University of Wollongong

I gave a keynote address, “Communication Tools and Strategies for the 21st Century Scientist”, at the Australian Mangrove & Salt Marsh Network conference held at the University of Wollongong in Wollongong, Australia. This was a great opportunity not only to reconnect with colleagues who study my favorite ecosystems, but also to share my ideas about communication. You can see a version of this talk on the Prezi website (a few things were changed to fit this particular audience). I got lots of great feedback from the audience, especially from students who had a number of questions about developing their own communication strategies. I also made contact with other scientists active in science communication.

Seminar at The University of New South Wales

In addition to the keynote address, I also visited Ben McNeil at the University of New South Wales in Sydney and gave a seminar entitled “How Video Can Enhance the Communication of Science”. Ben is a co-founder of Thinkable, an online platform that facilitates crowdfunding of research and requires proposers to pitch their projects in a 3-minute video. We had interacted previously by email and Skype, so this trip was a chance to meet in person and share ideas about scientists using video. I also made some great contacts with science communicators at the seminar. Read more about Thinkable in this earlier post.

Also, Thinkable is currently running a competition with a $5,000 award; the submission deadline has passed, but voting is ongoing until April 30, 2015. I mention this competition because you can see how some researchers are structuring video proposals and perhaps get some ideas for your next video (and don’t forget to vote!).

Videography

I also had the opportunity before the conference to do some traveling around South Australia and try out my iPhone 6 to shoot video. In the photo, I’m setting up the phone to shoot a time lapse of the surf on Kangaroo Island. I’ll be sharing my experiences and some of those videos in later posts.

Mishaps

As you may have noticed, I’ve not written any posts recently. Between traveling and a computer crash (aaaarrgggghhhh), I’ve not been able to write much of substance. I was really hamstrung without access to my photo and movie files. Fortunately, I had everything backed up….unfortunately, the backup was at home 9,000 miles away. However, I’m now all set up with a new laptop and ready to catch up with my writing.

More about this trip later….

Communicating Science Through Storytelling: A Double-Edged Sword?

In this series of posts, I’m talking about pitfalls in framing a science message for a non-specialist audience. In the last post, I described a study that showed an association between university press releases that make exaggerated claims about biomedical research and related news stories that also misrepresented the science. I also discussed why workshops designed to teach scientists to “sell their research” should include caveats regarding exaggeration of research findings. In this post, I want to explore one unintended outcome of having one’s research described in the popular press.

When research is described by the media with eye-catching titles, it may attract the attention of anti-science political groups. In the US, scientific research is a popular target of politicians seeking to highlight wasteful government spending. Scientific research projects funded by U.S. Federal grants and described as “Mountain Lions on a Treadmill”, “Synchronized Swimming for Sea Monkeys”, or “Watching Grass Grow” sound like a joke at best and a waste of taxpayer money at worst. The latter is what U.S. Senator Tom Coburn, M.D. (R-Oklahoma), would like Americans to believe. Coburn publishes an annual Wastebook, which purports to reveal wasteful spending by government officials (e.g., “Bureaucrats Gone Wild”), programs (e.g., “State Department Tweets @ Terrorists”), and scientific researchers (e.g., see titles above).

I heard about Coburn’s Wastebook recently in podcasts produced by Science Friday, which talked about attacks on science and scientists and why scientists should push back when it happens.

Science Gone Wrong?

The 2014 Wastebook highlights several research projects funded by Federal agencies such as the National Science Foundation (NSF) and gives them funny-sounding titles designed to bamboozle the taxpayer into believing that these studies are worthless wastes of time and money. Those of you who remember Senator William Proxmire and his Golden Fleece awards (1975-1988) will recognize the simplistic and often incorrect or incomplete characterizations of research projects targeted for ridicule. The research singled out by waste-busting efforts like this usually turns out to be far from wasteful. If you read the descriptions of scientific research in the Wastebook report and then google the research grants for the bigger picture, it quickly becomes clear that these entries misrepresent the science and offer no legitimate reason why the projects should not have been funded:


Image Credit: U.S. Fish and Wildlife Service (Public Domain)

1. The so-called “Mountain Lions on a Treadmill” project is actually aimed at promoting conservation of large predators by developing a new tool to track them. The NSF awarded $855,758 for the four-year, multi-investigator study that investigated energy use, hunting behavior, and spatial movements of mountain lions. For part of the study, the lions were indeed trained to use a treadmill so that researchers could test and calibrate a new wildlife collar to track predator movements in the wild (contrary to the Wastebook claim, however, the NSF funds were apparently not used for the treadmill work, according to one Principal Investigator (PI)). Why is the research important? Better tracking data could lead, for example, to better ways to predict and therefore avoid contacts between wild predators and human communities—saving people, pets, livestock, and the predators themselves.

The Wastebook questioned the expenditure, saying that “…scarce resources should be used to pay down the debt or on higher priorities, such as emerging biological threats that could pounce on anyone of us.” On the surface, this statement sounds reasonable—if the choice were really between the mountain lion study and the suggested alternatives. However, this argument is based on a false dichotomy and plays on the ignorance of the taxpayer about how research funds are allocated.

You can hear what one of the lead investigators, Terrie Williams, has to say about attacks on her project and on science in general in this podcast at Science Friday. Read her op-ed in the LA Times.

Image Credit: © Hans Hillewaert, via Wikimedia Commons

2. Another research project that made Coburn’s 2014 Wastebook list was entitled “Synchronized Swimming for Sea Monkeys”, a name that conjures up frivolous images unrelated to the actual purpose of the 4-year study, funded by NSF, the Office of Naval Research, and the US-Israel Binational Science Foundation ($307,524). The principal investigator is John Dabiri, a Caltech professor of aeronautics and bioengineering. The objective of the project is to investigate the role of swimming zooplankton in affecting ocean circulation. Forces influencing ocean mixing and circulation are important to understand because they are used in model simulations to study climate change. The scientists used lasers to simulate ocean conditions in an experimental tank, tricking brine shrimp (commonly called sea monkeys) into swimming; they were then able to measure how much water was moved by the swirling shrimp. They chose this laboratory approach to avoid the exorbitant costs of an ocean-going study, something the Wastebook failed to mention. Instead, the Wastebook description trivialized the research by suggesting to the reader: “With kits available online and many toy stores, you can try to train your own team of synchronized swimming Sea Monkeys for as little as $12”.

You can hear a podcast by Dabiri giving his response to the Wastebook claim that his research is wasteful spending of government funds at Science Friday.

3. “Watching Grass Grow” targets a research project designed to determine how best to harvest a salt marsh plant (Spartina alterniflora) used widely in coastal restoration projects so that the donor marsh is not damaged in the process. This one caught my eye because I’ve conducted research on this plant myself. The Wastebook refers to this cordgrass study as “just another weed of government waste” and fails to mention the important role this plant plays in our nation’s coastal zone. Salt marshes dominated by this species are important stabilizers of shorelines, buffers against storms, and nursery grounds supporting fisheries, to name a few values. The plant is being removed from its natural habitat for transplantation to other areas to reduce erosion or to create or restore marsh, but this activity may damage the donor marsh in the process, which is obviously counterproductive. The investigators will determine what percentage of the donor marsh can be removed and still recover. The $10,000 provided by the U.S. Fish and Wildlife Service is an extremely modest expenditure to figure out how to avoid damage to an important natural resource.

Those are just three examples of research projects featured in Coburn’s report, but they serve to illustrate the nature of this type of attack on science. These false claims of waste were propagated by the media: many news outlets repeated the material in the Wastebook without bothering to determine whether these were truly examples of wasteful government spending. There may be legitimate reasons not to fund a particular research project, but such reasons are not offered in the Wastebook. Instead, the research is caricatured and compared to more worthy-sounding, but totally unrelated, causes. And without an explanation of the vetting process that research projects go through, the taxpayer is left to assume that these funding decisions were wrong-headed, based on esoteric rather than meritorious criteria, or made carelessly. Of course, most of the research targeted by the Wastebook went through a rigorous review process in which scientific peers and expert panelists scrutinized the proposed project and the PI’s qualifications. Given the intense scrutiny that research proposals are put through and the low percentage that end up being funded, it’s unlikely that a truly unworthy study would be funded—and certainly not the number listed in the Wastebook.

How to Avoid Having Your Research Featured in the 2015 Wastebook

You may wonder, as I did, how Coburn and his staff became aware of these research projects and why they were selected for inclusion in the 2014 Wastebook. Well, they may have started with the list of Federally funded projects at www.grants.gov, where you can search all awards and find the names of the PIs and other information. But it’s more likely that Coburn’s office became aware of these particular research projects because they were mentioned in online media. The descriptions in the Wastebook contain information, particularly quotes from specific sources, that is identical to that found in online news articles and press releases. And particularly telling is how these news items are titled.

For example, the “Synchronized Swimming for Sea Monkeys” description in the Wastebook contains several quotes originally published in the following online articles or press releases:

Williams-Hedges, Deborah. “Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation.” Caltech.edu. The California Institute of Technology. Press release. 30 September 2014.

Boyle, Alan. “Laser-Guided Sea-Monkeys Reveal How Critters Boost Ocean’s Waves.” NBCnews.com. NBC News. 30 September 2014.

Lee, Jane J. “Laser-Guided Sea-Monkeys Show That Tiny Animals Can Move Mountains of Seawater.” Nationalgeographic.com. National Geographic Society. 30 September 2014.

Geggel, Laura. “Tiny Sea Monkeys Create Giant Ocean Currents.” Livescience.com. Purch. 30 September 2014.

So there’s no mystery about how Coburn’s staff came to use the term “sea monkey”, or where they got some of the verbiage used in preparing the 2014 Wastebook. For example, the Wastebook repeated critical comments made by another researcher about Dabiri’s project in the National Geographic article—which helped call into question the validity of the research: “Christian Noss, an environmental physicist at the University of Koblenz-Landau in Germany, says that he’s not convinced the effect would scale up from the laboratory to the ocean.” That was a legitimate comment in the original article, but it was used out of context in the Wastebook description. The Wastebook further trivialized the research by suggesting that taxpayers could grow and train their own sea monkeys. Where did they get that idea? The Caltech press release said, “Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home.”

“Mountain Lions on a Treadmill” opens with this quote: “‘People just didn’t believe you could get a mountain lion on a treadmill, and it took me three years to find a facility that was willing to try,’ exclaimed Terrie Williams, a University of California-Santa Cruz professor.” This quote can also be found in a news story prepared by the scientist’s university press office: Tim Stephens, “Study of mountain lion energetics shows the power of the pounce,” University of California-Santa Cruz News Center, October 2, 2014.

The “Watching Grass Grow” title and main message can also be traced to an article in an online newspaper (Patterson, Steve. “In Guana Marsh, Research Sheds New Light On Old Florida Environment.” Jacksonville.com. The Florida Times-Union. 11 August 2014.). The news article’s first sentence states: “To some folks, watching grass grow could seem sort of tedious, especially if they just planted it.” The first sentence in the Wastebook description states: “The Federal government is literally paying people to watch grass grow.”

Ouch.

What this all boils down to, it seems to me, is that how these research projects were characterized in press releases and later by popular press articles provided fodder for Coburn’s Wastebook—and may have put it on the senator’s radar in the first place.

In an attempt to make the research palatable to the general public, the authors of these press releases and news articles used terminology that unfortunately handed Coburn a weapon that could be turned against the science and its funders. John Dabiri, in his podcast for Science Friday, said that explaining science is a “two-edged sword”. On the one hand, it’s important to show that research is not an esoteric endeavor conducted in an ivory tower; doing so requires describing research in easy-to-understand terms. However, in making the research understandable, it’s easy to create caricatures that can later be misused by people with political agendas.

The lesson for scientists is to get involved in sharing your research with the public so that you have some control over how it is described in the popular media. Think carefully about how to characterize your research before an interview by a news outlet or your institution’s press office. Although anyone’s words can be twisted and used against them, some preparation can help you avoid saying something you later regret. If you don’t feel comfortable with a particular interview question, politely decline to answer or offer something else more relevant: “That’s really outside my area of expertise; however, I can describe how that works in my field of study.”

If your institutional press office is planning to write a press release about your work, insist that you be involved in the writing or at least see a copy prior to release. Too few scientists bother to do this, perhaps thinking it’s not their job. My view is that it is the scientist’s job to ensure that their research is described accurately to the public and not exaggerated in an attempt to make it sound more interesting or relevant. Let me hasten to add here that I’ve not always been successful in controlling the message in such interactions—so I realize that the scientist is not always to blame for what ends up in a press release or news story. However, most press officers and journalists are interested in accuracy and will appreciate the scientist’s input.

The best way to control the message is to take the lead in explaining your research to the general public. You might write a popular article, prepare a fact sheet, or shoot a video. In doing so, of course, you want to take care in characterizing your topic and methods and in explaining why your research is important to society. Avoid making any unsubstantiated or exaggerated claims about your research. Especially avoid using titillating, lascivious, or silly words to describe your research in an attempt to attract more attention to your topic. You might get more than you bargained for.

Communicating Science Through Storytelling: Beware Exaggeration


Image Credit: NASA, www.nasa.gov

Scientists are being encouraged to develop better communication skills and to engage the public by finding the story in their research and tailoring it for a wider audience. I’ve certainly encouraged readers to design their science videos so that they tell a compelling story: “The Stories We Tell”; “Silver Linings, Kurt Vonnegut, and Telling Science Stories”; “I’m Not Interesting, But My Research Is”; and “Can Scientists Be Taught To Talk and Act Like Normal People?” However, there’s a downside to reframing one’s research to be more interesting to a broader audience.

Three recent events have made me ponder the dangers inherent in designing science messages that appeal to non-scientific audiences. I’ll describe the first two events in this post and the third in a follow-up post.

Press Releases That Exaggerate the Science

The first event that got me going on this topic was the publication of a paper in the British Medical Journal (BMJ), which reported an association between exaggerated news stories about biomedical or health-related research and press releases that inflated or extrapolated the findings beyond what was reported in the original paper. Of 462 press releases issued by 20 leading universities in the UK, 40% contained exaggerated advice, 33% contained exaggerated causal claims, and 36% contained unsupported inference from animal research to humans. Of the related news stories written by journalists, 58%, 81%, and 86% expressed similar exaggerations in those three ways, respectively. By comparison, news stories related to more accurate press releases had exaggeration rates of only 17%, 18%, and 10%. The results pointed to a previously unrecognized source of inaccurate news stories about medical research findings: press releases prepared by the scientists’ own institutions.

Of course, this is only one study, which was based on a correlative approach. That is, the correlation between press releases and news stories may not indicate a cause and effect relationship, and the exaggeration may not be the fault of press officers or journalists but could instead reflect a third factor—scientists who themselves exaggerated their work in interviews, for example. This paper did not address these points. Also, this study focused on biomedical research in the UK; we don’t know whether these findings are representative of science in general.

Whatever the reason for the results, though, this BMJ paper definitely raises important issues about how research is described in the news media, who is designing the message, and how exaggeration gets propagated.

Workshops That (Inadvertently) Teach Scientists to Exaggerate

The second event that caught my attention was a post on the British Ecological Society’s blog: Maximising the reach of your research paper. The post described a workshop at a recent BES meeting that provided some “practical tips on how to sell your research”. The workshop participants designed tweet-length messages for six research papers to be distributed to three hypothetical audiences: the general public, interested non-specialists such as policy makers, and other scientists.

As I was reading this, I was nodding and thinking that this workshop was a worthy effort and a nice way to teach researchers to share their work more broadly. But then I got to the examples of messages that the workshop participants designed and saw a problem.

Here are the workshop-generated examples of messages about a paper reporting that foraging by bumblebees is affected by pesticides:

Public: “Chemical cocktails intoxicate bumblebees and may lead to increasing food prices”

Policy: “Pesticides make bees worse at foraging for food: may affect bee survival and pollination rates”

Scientists: “New study of bumblebee foraging uses RFID tags to highlight the importance of looking at long-term, realistic exposure to sub-lethal effects of pesticides”

The messages written for policy makers and scientists appear to be accurate reflections of the original paper. I can’t say the same for the public message. That message replaced scientific terms with words more familiar to the average person (cocktail, intoxicate) that would presumably attract more attention than the technical terms (pesticides, impaired foraging behavior). It also suggests that cocktail-imbibing bumblebees will somehow lead to higher food prices, presumably to show how the research might affect the average consumer. I see three problems with this message. First, it omits a key word, pesticide, which was the focus of the research, and substitutes the vague phrase “chemical cocktail”. Second, its “titillating” terms are inaccurate or confusing: “cocktail” gives no hint that a pesticide is involved, and together with “intoxicate” it implies that alcohol is one of the ingredients while downplaying the chemicals’ toxicity. Third, the message makes an exaggerated claim about effects on food prices; the researchers did not examine how impaired bumblebee foraging affects the cost of food.

In other words, the public message not only fails to accurately convey what the researchers studied and found, it makes an unsubstantiated causal claim, similar to the problems noted in exaggerated press releases and news stories.

Now, I’m not saying that these workshop organizers set out to teach participants how to exaggerate their research. Nor am I bashing the idea of teaching scientists how to communicate their research to the public. I realize that the questionable bumblebee message was the result of a workshop exercise by participants with limited experience. Maybe the workshop organizers critiqued all the messages, including this one, and pointed out where they needed improvement. I don’t know. I wasn’t there. All I know is that this example shows how easy it is to exaggerate research findings, especially in a tweet-length message.

My intent here is not to discourage such workshops or to criticize this particular workshop, but to make people aware of some of the pitfalls in designing such messages. Clearly, making exaggerated claims in a press release or tweet is not the way to “sell” your research. I’m all for using everyday terms in place of scientific jargon, but not at the expense of accuracy or clarity. Also, I think the public recognizes hyped verbiage and is getting tired of it.

Had I not first read the BMJ paper about exaggerated press releases and news stories, I probably would not have noticed anything wrong with the bumblebee message or started thinking about why we need to take more care in designing our stories to convey a science message. Fortunately, I did, because I think the problem is likely to get worse as more scientists try to “sell their research” through storytelling and social media.

There is clearly an art to designing short messages that are both accurate and appealing to the general public. You may be wondering, as I did, how was this bumblebee research described in real press releases and in the popular press? Did they use titillating terms or make exaggerated claims? You decide. Here are links to the research institutions’ press releases and several online news stories about the study:

Radio tags reveal how pesticides are impairing bumblebees’ ability to forage (Imperial College London)

Bee foraging chronically impaired by pesticide exposure: Study (University of Guelph)

Bee foraging chronically impaired by pesticide exposure: Study (Science Daily)

Environment: Bumblebees lose foraging skills after exposure to systemic neonicotinoid pesticides (Summit County Citizens Voice)

Commonly Used Pesticide Has Detrimental Consequences On Bumblebee Foraging (IFLScience)

Bee foraging skills impaired by neonicotinoid pesticides (CBC News)

As you can see, the two press releases and subsequent news stories (I’ve included only those on the first search page in Google) about the bumblebee research all used titles that accurately conveyed the research findings. The news stories repeated or slightly rephrased the titles put out by the researchers’ institutions. They did not substitute more flamboyant terms for “pesticide” or “bee foraging” and did not make unsubstantiated claims as to how impaired bumblebee foraging might affect humans. I could not find any other news stories that took major liberties with the message or that used questionable words to describe the research. I did see one news story that had this title: Cocktail of pesticides increases bee deaths, says study (Farmers Weekly). Unlike the workshop message, however, this title makes it clear what the cocktail refers to.

In our haste to jump on the science communication bandwagon, we may be making some mistakes (I certainly have). It’s important for scientists to participate in science communication and engage the public; however, we should not get so carried away in making our research palatable to non-specialists that we compromise our scientific ideals. Those of us who are encouraging and training students to be better communicators have an obligation to point out pitfalls. We especially need to reiterate the idea that all of our communications, both technical and non-technical, should be held to a high standard. Otherwise, we risk losing the trust of the public.

This article by Virginia Hughes is an excellent summary of this issue and contains some good advice for scientists and science communicators on how to avoid misrepresenting science.

That’s not all…

There is another danger in putting out inflated research stories, especially with titillating or lascivious titles: Such research may get noticed by anti-science political groups. And those “catchy” titles? They become weapons that are turned against the scientists who conducted the research. In the next post, I’ll describe the third event that has made me rethink how we design our science messages for public consumption.