Coolest Thing I’ve Seen at GDC: Software That Animates Anything


Star Trek’s holodeck is one of science fiction’s most seductive technologies: you give a few verbal instructions to a computer and, boom, you’re on a street in 1940s San Francisco, or wherever else you want to be. We may never have holograms you can touch, but the part where a computer generates any requested 3D scene is being worked on by a small studio in London.

At the Game Developers Conference in San Francisco on Wednesday, Anything World CEO Gordon Midwood asked me what I wanted to see. I said I wanted to see a donkey, and seconds later a donkey was walking around on the screen in front of us. Sure, it walked a bit like a horse, and yes, all it did was wander around a field, but those are just details. The software kept its basic promise: I asked for a donkey, and a donkey appeared.

For the next demonstration, Midwood took his hands off the keyboard. “Let’s make an underwater world and add 100 sharks and a dolphin,” he said into a microphone. Seconds later, I was looking at a dolphin that had shown up at the wrong party, surrounded by 100 swimming sharks.

Developers looking to use Anything World as a game development or prototyping tool will integrate it into an engine like Unity, but as Midwood demonstrated, it can also conjure scenes, objects, and creatures on the fly. It was the coolest thing I’ve seen on the GDC show floor, and others have already noticed its potential. Roblox is exploring a deal with the company, and Ubisoft is already using the software for prototyping, as well as for a collaborative project called Rabbids Playground.

How it works

With so much blockchain talk haunting GDC, the sight of an older tech buzzword was heartening: Anything World uses machine learning algorithms developed in part during a University of London research project that lasted more than a year. In short, the team built automated methods to teach a system to analyze 3D models from sources like Sketchfab and classify, segment, organize, and animate them (or not) in a way that is meaningful to human beings. At present, it can process more than 500,000 models.

Of course, Anything World sometimes gets it wrong: the software once thought a table was a quadruped, and another time it mistook the top of a pineapple for the legs of a spider, which was “scary,” Midwood said.

It’s early days (at least compared to Star Trek: The Next Generation, set in the 2360s), but even at this rather crude stage, it’s fun to see how a machine learning system matches the 3D models it’s given against what it “knows” about animal locomotion. I felt oddly proud of my trotting donkey, as if I were somehow responsible for bringing it to life just by asking.

For non-developers, Midwood thinks Anything World has potential as a super-accessible game-making tool, or just as a fun, useful thing to have on hand. For example, you could use it to create green screen backdrops on the fly while streaming, or treat it like a holodeck computer, putting on a VR headset and asking for a relaxing scene.

Meta (the company formerly known as Facebook) demonstrated something similar last month, but without animated creatures. In response, Anything World published a parody demo. Interpreting what people want at a natural language level is perhaps one of the end goals of all software, so it’s no surprise that there’s competition in the “make 3D things appear by asking for them” business. However, Anything World’s technology seems more capable than Meta’s at the moment. It’s also a fairly small company, with six machine learning experts and nine other people in technical roles working on the tool.

In the future, Anything World plans to release versions with more faithful models and animations. An Unreal Engine version is on the way, with plans to make use of Epic’s Quixel assets, and the company also intends to release its own consumer app. Right now, it’s available for use with Unity.

Anything World is a far cry from a Star Trek computer’s understanding of the physical world, and I doubt it knows anything about 1940s San Francisco, but there’s no reason to think donkeys will walk as much like horses tomorrow as they do today. Midwood won’t promise me a holodeck just yet, but he’s confident that the system’s ability to label and animate 3D models will only become more granular and sophisticated.

The shark-infested waters that were generated for me. (Image credit: Anything World)
