Costly and time-consuming, computer-animated sequences that feature menacing spaceships, ferocious goblin hordes and spectacular magical explosions are as important to the success of video games and motion pictures as any actor's performance.
Thanks to newly developed animation techniques that use artificial intelligence, Brigham Young University computer scientists have figured out how to create today's entertainment quicker and cheaper than ever before. Their approach opens doors for more convincing characters that teach themselves how to interact with their environment, thus lightening an animator's workload.
"There are two extremes in animating characters -- you can let the animator do everything, which is really tedious, or you can let the computer do it, in which case you tend to get animation that doesn't look natural or convincing," said Parris Egbert, the BYU associate professor of computer science leading the project. "We've tried to find a happy medium, where the computer does most of the work, but keeps it realistic."
The research, published in the April 27 issue of the journal "Computer Animation and Virtual Worlds," explains the team's new approach to animation.
Here's how it works: an animator gives a computer a goal and several examples of how to manipulate a character in situations it may encounter. The computer takes its assigned task, compares it to the animator's examples and takes action. Even when presented with a situation it has never faced before, the computer extrapolates a course of action that compares favorably with what a human animator would choose. The result is an animation sequence that takes a fraction of the time to complete but still looks natural.
"The computer says to itself, 'I'm going to do this because it seems to fit with what the animator did,'" said Egbert. "You don't give the computer an exhaustive set of examples -- that would defeat the time-saving nature of this approach."
But allowing a computer to choose its own actions on the way to a goal requires a lot of processing power, said Egbert. Enter the team's second innovation -- offline learning.
By examining the way humans develop expertise in any particular situation, researchers were able to retool a computer's artificial intelligence to work in a similar manner.
"Take typing as an example," said Jonathan Dinerstein, a BYU computer science graduate student working on the project. "At first, a typist looks at a keyboard and picks out the keys he wants using sight. It takes a long time and is inefficient."
But soon a typist has memorized each key's location and can type a sentence quickly and automatically, with little thought of which key he is striking, said Dinerstein.
Similarly, the researchers set up their computer's artificial intelligence to learn how to manipulate the characters offline, away from the actual animation task.
"The computer looks at the possibilities it may be presented with and develops plans of action," said Dinerstein. "When the time comes for the computer to animate characters in a virtual setting, it doesn't have to think about it. It says, 'I've been in a situation like this before, I don't have to think about it. I know exactly what to do.'"
As a result, the computer acts quickly and uses little processing power.
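One way to sketch this offline/runtime split is policy precomputation: the expensive deliberation over possible situations happens once, ahead of time, and the animation loop reduces to a cheap table lookup, much like the practiced typist. The state space and planning rule below are invented stand-ins; the article does not specify the team's actual learning algorithm.

```python
# Hypothetical state space: discretized distances from an obstacle.
STATES = range(10)

def deliberate(state):
    """Stand-in for expensive offline planning over one situation."""
    return "leap" if state < 3 else "run"

# Offline phase: "look at the possibilities" once and cache a plan
# of action for every situation the character may be presented with.
policy = {state: deliberate(state) for state in STATES}

def act(state):
    """Runtime phase: no thinking, just a constant-time lookup --
    "I've been in a situation like this before. I know exactly what to do."
    """
    return policy[state]

print(act(1))  # "leap"
print(act(7))  # "run"
```

Shifting the planning cost offline is what frees up runtime processing power for the uses the next paragraph describes.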
This approach has implications for video game developers, who could use limited processing power to improve their products -- more detailed backgrounds, better sound effects and smarter characters. It could also speed up the animation of large battle scenes in movies, where thousands of seemingly independent characters are created to visually impress an audience.
An added benefit: developers could animate scenes of this scale in real time on a personal computer, because each character would have previously learned how to act, said Dinerstein. This would save hours of processing time and would produce characters that would even stand up to the scrutiny of an on-screen close-up.
Additionally, the researchers say their approach makes way for games that truly learn from a player's actions, becoming more difficult to defeat as a game progresses.
"You could fool it once, but never again," said Egbert. "It would recognize what you were doing the second time and stop you. Furthermore, it could use your own techniques against you and even take the examples you provide to create new moves you'd never displayed before."
Hugo de Garis and Nelson Dinerstein, professors of computer science at Utah State University, join Egbert and Dinerstein on the study.