How to Improve Your On-Camera Delivery in Science Videos

Picture this scenario:

A middle-aged scientist in a white lab coat is speaking on film about his research on cancer. He’s sitting in a well-equipped laboratory and looks very authoritative. The camera gradually moves from a broad view of the room to a close-up of the scientist. He begins by saying, “I’m really passionate about my work and want to share my findings with you in this video.” The only problem is that this cancer researcher does not look or sound passionate! Far from it. Instead, he sounds like a robot. He speaks in a monotone, does not smile or show any other facial expression, uses no hand gestures, sits stiffly, and does not make eye contact with the viewer (his eyes are looking down or off camera). Things don’t get any better as he continues to explain the details of his research.

Now, I can sympathize with this guy because this is how my early attempts at making videos about my research looked and sounded. I’ve improved since then, but I still find it really difficult not to come across on camera like Mr. Spock (played by Leonard Nimoy in the original Star Trek series), who had difficulty showing emotion due to his Vulcan ancestry.

So what’s our excuse?

I think there are three basic reasons why some scientists come across on camera as stiff and robotic: personality, training, and fear of the camera.

People who are naturally gregarious or funny tend to come across well on camera, whereas someone who is introverted may seem stiff or robotic. It’s possible to work against your natural demeanor, but you will likely find it difficult. I’m a naturally reserved, quiet person and feel terribly awkward when I try to be more extroverted.

I also have to fight years of training and experience talking to audiences of scientists, during which I cultivated a demeanor of calm confidence and authority. My talks at conferences and in seminars have been successful because those audiences expected a serious, academic delivery. But what works for an audience of scientists can be a detriment on camera, where a serious, authoritative demeanor can be misread as arrogance or a nerdy attitude. In addition, the camera not only adds ten pounds to your apparent body weight, it drains your energy. When being filmed, you need to be more personable and bring more energy than you would with a live audience. If you are like me and have a more reserved demeanor, you will have to work much harder than a colleague who is naturally gregarious and likeable.

Also, many people—even experienced speakers—freeze up when the camera is turned on them. They get that “rabbit in the headlights” look, and their bodies seem to turn to stone. In my case, whenever a camera was turned on, I found it difficult to gather my thoughts and speak coherently. This reaction is a bit like stage fright and can leave you looking like someone with “Stuck in Their Heads” syndrome. Extreme self-consciousness is the culprit here.

After watching many, many videos made by science professionals (or videos in which a scientist appears), I realized that there were quite a few people out there with the “Stuck in Their Heads” problem. I’ve wanted to make a video tutorial about how to improve on-camera delivery, but put it off because I did not think I was the best person to tackle this topic. I thought it was better to hear tips about on-camera delivery from someone who does it well. However, it finally occurred to me that people might want to hear how a scientist with this problem has faced it and eventually improved.

In the video below, I briefly explain what I think are the main problems someone faces when trying to speak on camera and a few ideas of how to overcome them (direct link to video).

As you saw, there are several ways to improve your on-camera delivery if you are having problems. I focused on the most common issues and how to overcome them. My take-home message is not to give up if your delivery is poor at first. Keep practicing and you will improve. Even though I’m not as engaging, likable, or convincing as, say, Neil deGrasse Tyson, and never will be, I have improved. More importantly, I feel less self-conscious and thus more comfortable speaking on camera.

One bonus of learning to speak with more energy and confidence on camera is that it can help you in other stressful speaking situations, such as a job interview seminar or a TED talk. If you have an upcoming presentation, film yourself practicing your talk and try to apply some of the tips I cover in the video. I think you’ll find it’s well worth the effort.

Can Artificial Intelligence Help Scientists Be Better Communicators?

This post is part of a series about Artificial Intelligence (AI). In this concluding post, I explore the possibilities of AI to help scientists be better communicators.

As I’ve talked about before, many scientists have difficulty communicating their work in a way that is interesting and compelling, both intellectually and emotionally. This situation is improving, as more people recognize the importance of addressing the growing anti-science movement in the U.S. and the need for credible and articulate scientists to state the case for science. Once upon a time, scientists could safely remain in their ivory towers and talk among themselves about science. But no longer. Scientists are increasingly called upon to talk to the media (the AAAS has even published media interview tips for scientists), to testify before Congressional committees, to give public lectures, and to explain the “broader impacts” of their research on society. Consequently, efforts are underway to train the next generation of scientists to be better communicators (e.g., through academic programs focused on science communication). Science students today also seem to have a greater interest in developing better communication skills than when I was a student (just my personal impression).

Despite the advances in communication technology and the emphasis on new media, though, I find that students still struggle with many of the same issues that plagued earlier generations when it comes to explaining science. And as the volume of science information grows exponentially, staying abreast of the literature and of communication technology will become increasingly difficult for these future scientists.

The following are a few ways in which AI may help.

Designing a More Effective Science Message

One of the difficulties faced by a science communicator is how to design a message that resonates with a particular target audience. Few scientists are trained in communication theory, and most rely on their default mode—explaining their work as they would for a technical audience. But what if the intended audience is not trained in science? How would you know if your message is appropriate in content and tone? AI might help, for example, with the tone. The IBM Watson Tone Analyzer “uses linguistic analysis to detect three types of tones from text: emotion, social tendencies, and language style. Emotions identified include things like anger, fear, joy, sadness, and disgust. Identified social tendencies include things from the Big Five personality traits used by some psychologists. These include openness, conscientiousness, extroversion, agreeableness, and emotional range. Identified language styles include confident, analytical, and tentative.” One of the intended uses is to optimize a message for a particular audience. A message that shows strong emotions and is less analytical in style may be perceived more favorably by the general public, for example. You can try it out by pasting a piece of text into a dialog box; you get an analysis of the overall tone of the message as well as a sentence-by-sentence breakdown, along with links to additional information about what a particular tone conveys and how to improve the tone of a message.
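For the technically inclined, here is roughly what querying the service programmatically looked like. This is only a sketch: the endpoint, version date, and response fields reflect my reading of the 2016-era documentation and may differ from the current service, and the credentials are placeholders.

```python
# Rough sketch of calling the Watson Tone Analyzer REST API (2016-era).
# Endpoint, version date, and response fields are best-effort assumptions;
# "user" and "pass" stand in for real service credentials.
import requests

URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

def analyze_tone(text, username, password):
    """Send a piece of text and return the service's tone analysis."""
    response = requests.post(
        URL,
        params={"version": "2016-05-19"},
        auth=(username, password),
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()

result = analyze_tone("I'm really passionate about my work...", "user", "pass")
# Document-level scores, grouped into emotion / social / language categories.
for category in result["document_tone"]["tone_categories"]:
    for tone in category["tones"]:
        print(f'{tone["tone_name"]}: {tone["score"]:.2f}')
```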

Finding Appropriate Material for Your Science Message

AI may be particularly useful in reducing the time involved in finding material to include in a message and in locating media that can be freely reused (such as works in the public domain) or purchased for a fee. I know I spend a lot of time searching for footage, images, animations, and music that I can freely use in a science video. Because I can’t review everything available, I probably miss a lot of really good material. Search engines can locate photos or videos posted on the Internet based on keywords and criteria (e.g., size, resolution, format). However, I may still end up with thousands of candidates, not necessarily ranked according to my needs. Artificial intelligence systems may improve such searches. Google, which in the past used algorithms (rules set by humans) to respond to search queries, is transitioning to deep neural networks, which can learn to handle new search queries and other tasks, such as figuring out where a photo was taken. Improvements in search tools could make finding the right media for a video or other information product much easier.
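As a toy illustration (not any real search API), here is the kind of filtering and ranking an improved search tool would automate for you: discarding media you can’t legally reuse and ordering the rest by how well they match your message.

```python
# Toy illustration: rank candidate media by license, resolution, and
# keyword overlap. A real AI-backed search would learn these preferences
# rather than have them hand-coded.
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    title: str
    license: str              # e.g., "public domain", "CC-BY", "rights-managed"
    width: int
    height: int
    tags: set = field(default_factory=set)

def score(item, keywords, min_width=1280):
    """Higher is better; items that can't be used score below zero."""
    if item.license not in {"public domain", "CC-BY"}:
        return -1                       # not freely reusable
    if item.width < min_width:
        return -1                       # too low-resolution for video
    return len(item.tags & keywords)    # relevance = keyword overlap

candidates = [
    MediaItem("estuary at dawn", "CC-BY", 1920, 1080, {"wetland", "sunrise"}),
    MediaItem("lab bench", "rights-managed", 3840, 2160, {"laboratory"}),
]
keywords = {"wetland", "marsh", "sunrise"}
ranked = sorted(candidates, key=lambda m: score(m, keywords), reverse=True)
print([m.title for m in ranked if score(m, keywords) >= 0])  # ['estuary at dawn']
```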

Creating Media for Your Science Message

Another way AI might help is in creating new media, such as art or music, that can be used in an information product. If you need a painting or a jingle but are not artistically or musically inclined, you may one day be able to generate what you need using an AI system trained for the task. An example of artwork created by an AI system and a 3-D printer is a new painting by Rembrandt…or rather, one created by a computer based on information from 346 of Rembrandt’s paintings. The video below shows the amazing process by which this 3-D painting was created:

There are also efforts to develop AI systems that can compose music. Google is apparently working on such a system, although not everyone is impressed with the result. If you want to play around with a music-composing system (based on Wolfram’s cellular automata), check out Wolfram Tones. You can select a music style and change up the instruments and other aspects to create a unique tune.

Teaching Science Professionals to Communicate Like Normal People

Scientists are traditionally taught to maintain a serious demeanor when speaking to an audience of their peers so that they are judged to be credible sources of information. But this approach doesn’t work so well with the average person. By hiding our emotions, we can come across on camera as “robotic,” for lack of a better term. So it’s rather ironic to consider whether an AI can help scientists be better communicators.

The computer, Watson, was trained to “recognize” different emotions displayed in a film and to assess and rank the ones that would work best in a movie trailer about that film. In the same way, an AI system could be trained to evaluate video footage showing scientists or students conducting their research or discussing the challenges they faced and select the best clips in terms of conveying emotion (enthusiasm, humor, curiosity, tension). But we don’t really need a computer to tell us which footage shows a particular emotion—we are much better at this than any machine or program currently available.

However, an in-depth analysis of a person’s on-camera delivery might be used to train science professionals to be better communicators. A video clip could be fed into a computer like Watson to be assessed on the basis of both content and tone. The speaker would be evaluated and scored according to various criteria. They could then try to alter some aspect of their performance and see how it affects their scores. This immediate feedback from a machine might be a faster, more efficient, and less painful way for someone to improve their communication skills. A problem could be identified early and eliminated before it becomes a habit.
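To make that feedback loop concrete, here is a purely hypothetical sketch. None of these functions correspond to a real service; the criteria, scores, and file names are all invented for illustration.

```python
# Hypothetical coaching loop: score each take on several delivery criteria
# and re-record until all of them clear a target. The assessment here just
# returns random numbers; a real system would analyze the clip itself.
import random

CRITERIA = ["energy", "eye_contact", "vocal_variety", "clarity"]

def assess_clip(clip_path):
    """Stand-in for an AI assessment; returns a 0-100 score per criterion."""
    return {c: random.randint(40, 100) for c in CRITERIA}

def coaching_loop(target=70, max_takes=10):
    for take in range(1, max_takes + 1):
        scores = assess_clip(f"take_{take}.mp4")   # hypothetical file name
        weak = [c for c, s in scores.items() if s < target]
        if not weak:
            print(f"Take {take} passes:", scores)
            return scores
        print(f"Take {take}: work on", ", ".join(weak))

coaching_loop()
```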

I’m not aware of any system that can analyze a video of a person speaking, but there are AI-based personality tests that use what someone has written (an essay, a letter). One example is the Watson-based service, Personality Insights. You can see the outcome for famous people (Gandhi, Barack Obama) or you can insert your own text. I gave it a try by inserting the text from one of my blog posts. Here’s what it said about me:

You are unconventional, somewhat indirect and skeptical. You are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes. You are philosophical: you are open to and intrigued by new ideas and love to explore them. And you are unstructured: you do not make a lot of time for organization in your daily life. Your choices are driven by a desire for discovery. You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.

This analysis is eerily correct about some aspects of my personality. However, I got different results when I tried different text. For example, a second blog post generated the statement that my “choices are driven by a desire for organization,” the opposite of the preceding analysis. Other aspects remained the same: authority-challenging, love of discovery, going my own way rather than following others. It’s necessary to provide sufficient text for a strong analysis, and the service warns you if you’ve given too little. It also provides a more in-depth breakdown of the various traits that were analyzed.
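For those curious, here is roughly what querying the service from code might have looked like. As with the Tone Analyzer sketch above, the endpoint, version date, and response fields are best-effort assumptions based on the documentation of the time, not working instructions.

```python
# Rough sketch of calling the Watson Personality Insights REST API (2016-era).
# Endpoint, version date, and field names are assumptions; the credentials
# and input file are placeholders.
import requests

URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"

def personality_profile(text, username, password):
    """Submit plain text and return the personality profile as JSON."""
    response = requests.post(
        URL,
        params={"version": "2016-10-20"},
        auth=(username, password),
        headers={"Content-Type": "text/plain"},
        data=text.encode("utf-8"),
    )
    response.raise_for_status()
    return response.json()

with open("blog_post.txt") as f:            # e.g., the text of a blog post
    profile = personality_profile(f.read(), "user", "pass")

# Big Five traits reported as percentiles relative to the sampled population.
for trait in profile["personality"]:
    print(f'{trait["name"]}: {trait["percentile"]:.0%}')
```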

As I said above, I don’t think there are any AI systems that can analyze a person’s performance in a video. However, it seems that this might be possible using a combination of existing AI systems, such as the movie trailer analysis and the personality test described above.

Conclusion

In this post, I’ve mentioned only a few ways AI might be used to improve science communication. Some of these systems, such as better search engines for locating media, are already available to us. Others will need more work and testing. I started this series with a hypothetical, futuristic scenario about a scientist using an AI system to create a video proposal. I don’t know whether such a thing will ever exist or be widely used in the scientific community. But it was fun thinking about it and learning more about AI systems.

This post is the final part of a series about Artificial Intelligence (AI) and its potential role in science communication. You can find the first post in the series here.

What Does Artificial Intelligence Say About Human Creativity?

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In this post (part 4), I talk about creativity and how it relates to AI.

In the previous posts, I’ve been talking about the computer Watson and how it helped create a trailer for the movie Morgan. Is this “cognitive movie trailer” evidence of AI creativity or the potential to mimic human creativity? In other words, can a human be replaced by a machine—in this case a trailer editor who uses skill and imagination to create something new?

Let’s first consider what creativity is. The dictionary defines creativity as the ability to make new things or think of new ideas. But is it a trait only exhibited by humans? Is it an attribute that some people have and others don’t? Is it an occasional mental state that we enter? Can one learn to be more creative? I’m not sure of the answers to all these questions, but perhaps it’s more helpful to ask what creativity is not. It’s not problem solving, which is a process whereby a known “rule” or “algorithm” is applied to reach a solution. Being able to understand and apply a rule is different from discovering the rule.

In the case of the computer Watson, we can see that understanding what a movie trailer is and identifying the best scenes from the movie Morgan fall into the realm of problem solving, not creativity. A human stepped in to do the actual film editing, which suggests that the “creative” aspect of putting together the trailer (sequencing the clips and adding components such as music and text) required a person with the requisite editing skills and imagination. However, I don’t think a human was essential for the editing once the scenes were selected.

A movie trailer template could have provided a guide with placeholders for media and text, much the way iMovie trailers are created. In this screenshot, you can see an iMovie trailer template, which guides the choice of video clips and text. Scenes are suggested, as are text titles that form a story. Such a template could have been used along with the ten selected scenes from Morgan to produce a finished trailer. However, such an ability in an AI could not be called creative. Although some decision-making would be involved in selecting which scene goes into each placeholder, those steps would be guided by a set of rules—in other words, problem solving, not creativity. Also, templates would produce an assembly line of movie trailers that all follow the same format, rather than a unique trailer with sequences, pacing, music, and other features individually selected by an editor using his or her knowledge, skill, and imagination.
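A toy sketch makes the point. Filling a fixed template from a pool of emotion-tagged scenes takes nothing more than a few lines of rule-following code (the template slots and clip tags below are invented for illustration):

```python
# Toy illustration of template-driven trailer assembly: each slot asks for
# a scene with a given emotion, and the first matching scene is dropped in.
# Pure rule-following -- no editorial imagination required.

TEMPLATE = ["tenderness", "tenderness", "suspense", "suspense", "fear"]

def fill_template(template, scenes):
    """scenes: list of (clip_id, emotion_tag) pairs, e.g. from an AI tagger."""
    remaining = list(scenes)
    trailer = []
    for wanted in template:
        for scene in remaining:
            if scene[1] == wanted:
                trailer.append(scene[0])
                remaining.remove(scene)
                break
    return trailer

scenes = [("clip_03", "suspense"), ("clip_01", "tenderness"),
          ("clip_07", "fear"), ("clip_05", "tenderness"), ("clip_09", "suspense")]
print(fill_template(TEMPLATE, scenes))
# -> ['clip_01', 'clip_05', 'clip_03', 'clip_09', 'clip_07']
```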

I think we are a long way from machines that think and create like humans. However, we are at a point where AI can be used to enhance human skills and help us perform tasks involving vast amounts of information. Artificial intelligence systems are already at work aiding, for example, the analysis of medical images, the detection of suspicious credit card charges, and automated telephone customer service. The real question is not whether AI can replicate human thinking or creativity but how AI can help humans create new things or think of new ideas faster and more efficiently.

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In the next and final post (part 5), I’ll discuss how AI might help scientists be better communicators.

How Did Artificial Intelligence (AI) Help Create a Movie Trailer?

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In this third post, I describe how the computer, Watson, helped create a movie trailer.

Before we get to the Watson movie trailer, let’s first think about how movie trailers are made. Movie trailers are designed to convince people to go see a particular movie. Superficially, trailers appear to be a condensed version of the film, but good trailers are carefully designed to raise expectations and to appeal to the viewer’s emotions. Most trailers follow a typical formula, modified for the genre, such as Action/Adventure, Comedy, Drama/Thriller, or Horror. Many trailers begin by introducing the characters and the setting of the film. Next to appear are the obstacles that change that world and set the characters on a new course. This may be followed by increasingly exciting, funny, or tension-filled scenes to ramp up the viewer’s desire to find out what happens. The specifics—selection of clips, the way they are cut (rapid-fire or slow-reveal), the fonts used for text titles, narration, music, and other choices—differ among movie genres.

All, however, are built in more or less the same way by the trailer editor. The editor first watches the original movie carefully and deconstructs it into its basic visual and audio components, then slices the audio and video into segments that can be rearranged to build the trailer. Next comes the choice of the best elements to use. Is the acting superb? The cinematography? The story? Editors often select the elements that highlight the merits of the film or that have the most emotional impact on a viewer.

Not surprisingly, the AI-enhanced trailer for the movie Morgan was created in much the same way as a regular trailer. The first step, however, was to train Watson to understand what a movie trailer is and what features of a movie are used in movie trailers. The IBM team did this through machine learning and Watson APIs (Application Programming Interfaces, i.e., programming instructions). Basically, each of 100 movie trailers was dissected into component scenes, which were then subjected to the following analysis: (1) Visual (identification of people, objects, and environment), (2) Audio (narrator and character voices, music), and (3) Composition (scene location, framing, lighting). Each scene was tagged with one of 24 emotions (based on the visual and audio analysis) and further categorized as to type of shot and other features.
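To picture what those annotations might look like, here is a hypothetical sketch of a single tagged scene. IBM described the three analysis channels and the emotion tags; the field names and example values are my own invention.

```python
# Hypothetical structure for a per-scene annotation. The visual / audio /
# composition channels and the emotion tag follow IBM's description of the
# training step; everything else is invented for illustration.
from dataclasses import dataclass

@dataclass
class SceneAnnotation:
    trailer_id: str
    start: float                 # seconds into the trailer
    end: float
    people: list                 # visual channel
    objects: list
    environment: str
    voices: list                 # audio channel
    music: str
    location: str                # composition channel
    framing: str
    lighting: str
    emotion: str                 # one of the 24 emotion tags
    shot_type: str

scene = SceneAnnotation(
    "trailer_042", 12.0, 15.5,
    people=["woman"], objects=["glass wall"], environment="lab",
    voices=["character"], music="low strings", location="interior",
    framing="close-up", lighting="dim",
    emotion="suspense", shot_type="slow push-in",
)
```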

Once Watson was trained, it was fed the full-length movie, Morgan. Based on its knowledge of what makes up a movie trailer—particularly a suspenseful one—Watson then selected ten segments as the best candidates for a trailer. These ten turned out to be scenes belonging to two broad categories of emotion: tenderness or suspense. Because the system was not taught to be a movie editor, a human editor was brought in to finish the trailer. The human editor ordered the segments suggested by Watson and also added titles and music. [see reference below for additional details]
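The selection step, in equally hypothetical form (IBM didn’t publish the ranking criteria, so the confidence score below is a stand-in):

```python
# Hypothetical selection step: keep scenes tagged with the target moods and
# take the ten the tagger scored highest. "confidence" is a stand-in for
# whatever ranking criteria the real system used.
def pick_candidates(scenes, moods=("suspense", "tenderness"), n=10):
    """scenes: list of (clip_id, emotion, confidence) tuples."""
    eligible = [s for s in scenes if s[1] in moods]
    eligible.sort(key=lambda s: s[2], reverse=True)
    return [clip_id for clip_id, _, _ in eligible[:n]]

scenes = [("clip_01", "tenderness", 0.91), ("clip_02", "joy", 0.80),
          ("clip_03", "suspense", 0.88), ("clip_04", "suspense", 0.45)]
print(pick_candidates(scenes, n=3))   # -> ['clip_01', 'clip_03', 'clip_04']
```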

Here’s the trailer that resulted, along with some explanations of how it was done (direct link to video):

As you saw, the end result looks and sounds like a typical movie trailer. The big question is whether this cognitive movie trailer does what a good trailer should: make us want to see the movie.

If you like science fiction films that explore questions about human engineering or artificial intelligence, then this trailer might appeal to you. The trailer does convey through the ten selected scenes that Morgan is an engineered creation that goes rogue—a story we’ve heard before. However, we are left in the dark about what exactly Morgan’s problem is (other than being locked up) and how the humans will deal with it. Many trailers fail by showing too much of the story. For example, the official Morgan trailer shows a lot more of the movie, which makes the story sound similar to another film, Ex Machina (an engineered human-like entity is confined in a futuristic laboratory, tested for flaws, runs amok, kills or maims one or more people, and escapes into the world). But by limiting what’s revealed, the Watson-enhanced trailer makes us think that maybe this story will differ from previous movies and be worth seeing.

I thought the computer-selected segments were interesting in that they not only conveyed a range of emotions (happiness, tenderness, suspense, fear), but many did so in a subtle way (a smile, a hand gesture, a slight gasp, a head turn). No scenes seemed to be selected from the latter part of the movie, which would have given too much of the story away. I don’t know whether this was because the Watson system ranked scenes near the end lower than those from the beginning and middle.

In the end, I think the Watson-enhanced trailer is pretty good and perhaps better in some ways than the official trailer created entirely by a human.

For more information about the making of the Morgan movie trailer, see this article: Smith, J.R. 2016. IBM research takes Watson to Hollywood with first “cognitive movie trailer”. Think <https://www.ibm.com/blogs/think/2016/08/31/cognitive-movie-trailer/>

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In the next post (part 4), I’ll talk a bit about what AI means for human creativity.

What is Watson and What Does It Have to Do with Videos?

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In this second post (part 2), I describe Watson, a computer that was trained to assist in the making of a movie trailer.

In the previous post (part 1), I explained that IBM’s computer system, Watson, was used to help a Hollywood film studio make a trailer for the movie, Morgan. But what is Watson? According to the IBM website, Watson is “a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data”. Translating that into everyday language: Watson is a computer that can answer tricky questions like the ones posed on the game show Jeopardy!. In 2011, Watson beat two reigning champions, providing answers to Jeopardy! clues—for example, “Even a broken one of these on your wall is right twice a day” (correct reply: “What is a clock?”)—and winning $1,000,000 (which was donated to two charities).

Actually, Watson is a cluster of computers (90 servers and 2,880 processor cores) running something called DeepQA software. Despite its performance on Jeopardy!, Watson does not “think” like a human and arrives at an answer to a question differently. Tons of information from various sources were input, providing Watson with an enormous information base to analyze. For the game show, Watson used more than 100 algorithms to come up with a set of reasonable answers to a question. It then ranked those answers and searched its information database for evidence in support of each answer. The answer with the most evidence was given the highest confidence. When the confidence was not high enough during the Jeopardy! game, though, Watson declined to answer rather than risk losing money.
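Schematically (and this is only a schematic of the logic just described, not DeepQA’s actual code), the answer-selection step looks something like this:

```python
# Schematic of DeepQA-style answer selection: generate candidate answers,
# score each by its supporting evidence, and only "buzz in" when the best
# answer's confidence clears a threshold. The scores below are made up.
def choose_answer(candidates, evidence_score, threshold=0.5):
    scored = [(answer, evidence_score(answer)) for answer in candidates]
    best, confidence = max(scored, key=lambda pair: pair[1])
    if confidence < threshold:
        return None, confidence        # abstain rather than risk losing money
    return best, confidence

answer, conf = choose_answer(
    ["a clock", "a watch", "a sundial"],
    evidence_score=lambda a: {"a clock": 0.92, "a watch": 0.55, "a sundial": 0.10}[a],
)
print(answer, conf)   # -> a clock 0.92
```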

Despite fears that AI will eliminate jobs or go rogue and destroy humankind, as depicted in the Terminator series, Watson is viewed by its developers as a way to augment human intelligence and to reduce the time spent on tasks involving large amounts of information. IBM prefers the term Augmented Intelligence (systems that enhance and scale human intelligence) to Artificial Intelligence (systems that replicate human intelligence). There are many ways in which AI can augment information-intensive fields such as medicine, telecommunications, weather forecasting, and financial services. Since the Jeopardy! match, Watson has been used to create cognitive apps and computing tools for businesses and healthcare professionals.

It’s not difficult, then, to imagine AI systems aiding scientific research and especially the communication of those findings in a more efficient way. More and more people are getting their information, particularly about science, in the form of video, but many science professionals have little time or incentive to devote to learning and using new communication tools. A system that can reduce the time involved in making a video and simultaneously enhance the quality could greatly improve communication of science and its importance to society. The first cognitive movie trailer, aided by the computer, Watson, is a “proof of concept” in this regard.

For more information about Watson and its preparation for the Jeopardy! game show, see this article: Ferrucci, D., et al. 2010. Building Watson: An overview of the DeepQA project. AI Magazine 31(3): 59-79.

This post is part of a series about Artificial Intelligence (AI) and its potential role in science communication. In the next post (part 3), I’ll describe how Watson helped create a movie trailer.