EA Using AI To Create Automated Realistic Animations In Games

"Electronic Arts - Internet Matters corporate partner" | Source: Internet Matters


  • Electronic Arts was recently awarded a patent describing a system for generating realistic character movements in video games without relying on pre-recorded motion capture data.
  • The system utilises a pose prediction model that can anticipate the position of a character based on its past pose and joint information, reducing the need for laborious and costly motion capture.
  • The innovation lies in incorporating environmental data to modify the predicted character pose based on the specific virtual environment, considering factors such as terrain and obstacles.
  • The system takes into account control signals and character-surface interactions to refine predicted poses and resolve motion artefacts, enhancing realism.

Earlier today, we came across a recently published patent from Electronic Arts titled “DYNAMIC LOCOMOTION ADAPTATION IN RUNTIME GENERATED ENVIRONMENTS,” filed in December 2021 in the name of Electronic Arts Inc. The patent, published earlier this month, describes a system for generating realistic animated character movements in video games without relying on pre-recorded motion capture data.

Block diagram of an example dynamic animation generation system generating a character pose for a frame of an electronic game. | Source: Patent Public Search

“Use of pose prediction models enables runtime animation to be generated for an electronic game. The pose prediction model can predict a character pose of a character based on joint data for a pose of the character in a previous frame,” reads the abstract for the patent.

“Further, by using environment data, it is possible to modify the prediction of the character pose based on a particular environment in which the character is located. Advantageously, the use of machine learning enables prediction of character movement in environments in which it is difficult or impossible to obtain motion capture data.”

Instead of depending on pre-recorded motion capture data, which can be both laborious and costly, especially when certain movements have not been previously recorded, the system utilises a pose prediction model. This model can anticipate the position of a character by analysing the character’s past pose and joint information.

However, the innovation lies in the incorporation of environmental data to modify the predicted character pose based on the specific environment in which the character is located. By considering characteristics of the virtual environment in the video games, such as terrain or obstacles, the system can adjust the predicted character movements to match the surroundings.

Detail block diagram of the dynamic animation generation system. | Source: Patent Public Search

The method involves several steps. First, the system receives the character’s initial pose in a particular frame of the animation. Then, it receives virtual environment labels describing characteristics of the environment that affect the character’s motion within it.

The system applies the initial character pose and environment labels to a pose prediction model, which generates a second character pose associated with the character in that environment. Based on the second character pose, the system generates the subsequent frame of the animation. This process continues for each frame, creating a sequence of animated movements for the character.
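The frame-by-frame loop described above can be sketched in a few lines of Python. Everything here, including the names, the list-of-floats joint representation, and the predictor signature, is an illustrative assumption rather than something specified in the patent:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pose:
    """Joint data (e.g. rotations or positions) for one animation frame."""
    joints: List[float]

# The pose prediction model is treated as a black box here; in practice
# it would be a trained machine-learning model.
PosePredictor = Callable[[Pose, List[str]], Pose]

def generate_animation(initial_pose: Pose,
                       env_labels: List[str],
                       predict: PosePredictor,
                       num_frames: int) -> List[Pose]:
    """Roll the predictor forward: each predicted pose, together with
    the environment labels, seeds the prediction for the next frame."""
    frames = [initial_pose]
    for _ in range(num_frames - 1):
        frames.append(predict(frames[-1], env_labels))
    return frames
```

The key property the patent describes is visible in the loop: each frame's output feeds back as the next frame's input, so a whole animation sequence emerges from one initial pose plus the environment labels.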

The system can also take into account control signals, such as user input or character motion during the previous frame, to further refine the predicted poses. Additionally, the system can handle interactions between the character and surfaces within the virtual environment.

It receives information about contacts between the character and surfaces and uses this data to resolve any motion artefacts that may occur due to the predicted character pose conflicting with the environment.
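As a toy illustration of how contact data could be used to remove such artefacts, consider clamping joints back onto a contacting surface. The patent does not specify this algorithm; the clamping approach below is purely an assumption chosen for simplicity:

```python
def resolve_contacts(joint_heights, ground_height):
    """If the predicted pose pushes any joint below the contacting
    surface, clamp it back onto the surface so the character no longer
    penetrates the environment. Illustrative only."""
    return [max(h, ground_height) for h in joint_heights]
```

A real system would likely adjust the whole pose (for example via inverse kinematics) rather than clamping joints independently, but the goal is the same: reconcile the predicted pose with the surfaces the character is touching.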

Furthermore, the pose prediction model can provide deformation data associated with the character’s interaction with the virtual environment. This data can be used to modify the depiction of the virtual environment in the animation frames, enhancing the realism of the character’s movements.

The patent also mentions that the pose prediction model can be trained using motion capture data captured in real-world environments. This allows the model to learn from real-world interactions and generalise its predictions to virtual environments with different characteristics.
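One common way to train such a model from motion capture is supervised next-pose prediction over captured clips. The patent does not detail its training procedure, so this sketch only shows the general idea, with hypothetical names:

```python
def make_training_pairs(mocap_frames, env_labels):
    """Convert a motion-capture clip into supervised (input, target)
    pairs: the pose at frame t, plus labels describing the capture
    environment, is used to predict the pose at frame t + 1."""
    return [((mocap_frames[t], env_labels), mocap_frames[t + 1])
            for t in range(len(mocap_frames) - 1)]
```

Because the environment labels are part of the input, a model trained this way can in principle generalise: at runtime it is given labels for a virtual environment it never saw during capture.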

In January, we uncovered an Electronic Arts patent that addressed a comparable technology aimed at generating dynamic animations for video games. This technology involved employing a neural network to extract approximate motion data, including pose information, from real-life videos of a character in motion. However, the patent did not make any reference to predicting the character’s pose based on the particular environment in which the character is situated.

The patent represents a notable advancement in character animation for video games. By combining pose prediction models with environment data, the approach allows dynamic, context-aware character animations to be generated in real time, reducing the reliance on pre-recorded motion capture data while enhancing the realism and immersion of character movements.

However, it is important to remember that this technology currently exists only as a patent, which offers no guarantee that Electronic Arts will develop or implement it. Time will tell whether the company intends to integrate the proposed system into its existing or upcoming video game technologies.

What do you think about this? Do tell us your opinions in the comments below!

Similar Reads: EA Allowing Players To Customise In-Game Character Animations
