Gabe Newell Q&A (Half-Life 2)



Knight
05-27-2003, 10:50 AM
Gabe Newell Q&A
Saturday, May 24, 2003
Article by: Shawn Wallace


Recently we were afforded the privilege of speaking with Gabe Newell, Valve Software's Managing Director and head of the Half-Life 2 project. We fired off a few quick questions to him concerning Source, the engine powering Half-Life 2, and what we can expect from it.

GamingNEXT - After all of the movies, both engine/tech-oriented and pure gameplay, that came from E3, it's safe to say that the entire gaming world is now clamoring for Half-Life 2. Can you tell us a bit about the technology used to power HL2?

Gabe Newell - I did a presentation on the engine for Vivendi a little while back. It was about 100 pages long, and we got through it in about 4 hours. I'm not sure really how to condense that all down, but I'll try. We usually break it down into humans, graphics, interactivity, and AI. Now obviously there's overlap. For example there's a special shader for people's teeth. That could go in the graphics section or the humans section. There's a lot of intelligence in moving creatures over an LOD mesh - so is that AI, interactivity, or graphics? You get the idea.

For humans we wanted to make them look realistic and look consistent. Consistency is an important characteristic, as you need to make their skin tones look as "realistic" as their walk cycle. If something is too good, it actually breaks the illusion of humanness you're trying to create. There are lots and lots of details that go into their skinning and muscles to make it look right. We probably spent more time on their eyes than anything else - for example, you have to model them as ellipsoids rather than spheres to make them look right as they rotate within the eye socket. The facial expression system is pretty cool in a lot of ways, not the least of which is that it blends together multiple inputs yet always maintains consistency with a set of rules about what are valid potential facial states. In other words, you can push random numbers through the expression system and you won't get a face that a human can't create, and you will get believable transitions between them.
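To make that constraint idea concrete, here is a minimal C++ sketch of blending several expression inputs while a rule pass keeps the result inside valid facial states. The channel names, the averaging scheme, and the pairwise rules are illustrative assumptions, not Valve's actual facial animation code.

```cpp
// Hypothetical sketch of constrained expression blending: several animation
// inputs contribute weights to named facial channels, and a rule pass clamps
// the result so any combination of inputs still maps to a face a human could
// actually make. Channel names and rules are invented for illustration.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct FacialRule {
    std::string a, b;   // two channels that fight each other
    float maxSum;       // their combined weight may not exceed this
};

using FacePose = std::map<std::string, float>;  // channel -> weight in [0, 1]

// Average several input poses, clamp each channel, then enforce pairwise rules.
FacePose BlendExpressions(const std::vector<FacePose>& inputs,
                          const std::vector<FacialRule>& rules) {
    FacePose out;
    for (const auto& pose : inputs)
        for (const auto& [channel, weight] : pose)
            out[channel] += weight / static_cast<float>(inputs.size());

    for (auto& [channel, weight] : out)
        weight = std::clamp(weight, 0.0f, 1.0f);

    // If two channels exceed their allowed combined weight, scale both down.
    for (const auto& rule : rules) {
        float sum = out[rule.a] + out[rule.b];
        if (sum > rule.maxSum && sum > 0.0f) {
            float scale = rule.maxSum / sum;
            out[rule.a] *= scale;
            out[rule.b] *= scale;
        }
    }
    return out;
}

int main() {
    // Even arbitrary inputs produce a pose that respects the rules.
    std::vector<FacePose> inputs = {
        {{"jaw_open", 0.9f}, {"smile", 0.8f}},
        {{"jaw_open", 0.7f}, {"lips_pressed", 0.9f}},
    };
    std::vector<FacialRule> rules = {{"jaw_open", "lips_pressed", 0.8f}};
    for (const auto& [channel, weight] : BlendExpressions(inputs, rules))
        std::cout << channel << " = " << weight << "\n";
}
```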

On the graphics side, there's one issue that hasn't really been discussed much, which is scalability. When John Carmack and Michael Abrash created the Quake 1 engine, the fundamental problem was achieving an absolute performance level. When they set out to build an engine that would run a 3D software renderer at 15 FPS on their target CPU, no one in the world thought they could do it. As is typical with those guys, they proved everybody wrong. They had a secondary problem, which was optimizing for consistent framerates - it was more important to run at 15 FPS all the time than to run at 20 FPS until you hit a dynamic light and then drop to 5 FPS. Nowadays the problem is scalability. The difference between high-end and low-end hardware is getting wider, and the differences don't necessarily correlate - something with twice the triangle throughput may not have twice the memory bandwidth. A lot of the tricky work in Source is getting it to work across a wide variety of scenes (indoor, urban, outdoor) and across a wide variety of hardware. Not only do you have to run acceptably fast on a TNT or an Intel 810-based PC, but you also have to fully exploit the capabilities of the current and next generation of high-end cards.
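Here is a minimal sketch of that scalability point, assuming geometry and texture detail are chosen from separate hardware measurements because triangle throughput and memory bandwidth don't necessarily scale together. The thresholds, struct names, and tiers are invented for illustration; they are not Source's actual settings.

```cpp
// Scalability sketch: geometry detail keys off triangle throughput, texture
// detail keys off memory bandwidth, rather than one global "quality" knob.
// All numbers and types here are illustrative assumptions.
#include <cstdio>

struct GpuProfile {
    float trianglesPerSec;   // measured or queried geometry throughput
    float memBandwidthGBs;   // measured or queried memory bandwidth
};

struct DetailSettings {
    int geometryLod;         // 0 = full-detail meshes ... 3 = coarsest meshes
    int textureMipBias;      // 0 = full-size textures ... 2 = quarter size
};

DetailSettings ChooseDetail(const GpuProfile& gpu) {
    DetailSettings s{};
    // Geometry detail depends only on triangle throughput.
    if      (gpu.trianglesPerSec > 50e6f) s.geometryLod = 0;
    else if (gpu.trianglesPerSec > 20e6f) s.geometryLod = 1;
    else if (gpu.trianglesPerSec > 5e6f)  s.geometryLod = 2;
    else                                  s.geometryLod = 3;

    // Texture detail depends only on memory bandwidth.
    if      (gpu.memBandwidthGBs > 8.0f)  s.textureMipBias = 0;
    else if (gpu.memBandwidthGBs > 3.0f)  s.textureMipBias = 1;
    else                                  s.textureMipBias = 2;
    return s;
}

int main() {
    // A card with lots of triangle power but modest bandwidth keeps full
    // geometry while dropping a texture mip level.
    GpuProfile card{60e6f, 4.0f};
    DetailSettings s = ChooseDetail(card);
    std::printf("geometry LOD %d, texture mip bias %d\n",
                s.geometryLod, s.textureMipBias);
}
```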

Physics is sort of the high sex-appeal feature for interactivity, so we tend to talk about that, but there are a lot of other things you have to do. For example, there are things we use called "soundscapes" that use the player's actions to drive a bunch of the ambient audio and musical events. Most people won't notice it, but it makes the world seem more reactive, in the same way that a good movie score makes the movie seem scarier or more dramatic.
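A rough illustration of the soundscape idea, assuming the player's current zone selects a set of ambient layers: the zone names, sound lists, and the SoundscapeSystem class below are made up for the example rather than taken from the engine.

```cpp
// Soundscape sketch: the player's location or action selects ambient sounds,
// so the audio reacts to what the player does instead of looping on its own.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Soundscape {
    std::vector<std::string> ambientLoops;   // always-on background layers
    std::vector<std::string> randomOneShots; // occasional accent sounds
};

class SoundscapeSystem {
public:
    void Register(const std::string& zone, Soundscape s) {
        table_[zone] = std::move(s);
    }
    // Called whenever the player crosses into a new zone or changes state.
    void OnPlayerZoneChanged(const std::string& zone) {
        auto it = table_.find(zone);
        if (it == table_.end() || zone == currentZone_) return;
        currentZone_ = zone;
        for (const auto& loop : it->second.ambientLoops)
            std::cout << "start loop: " << loop << "\n";  // stand-in for audio calls
    }
private:
    std::map<std::string, Soundscape> table_;
    std::string currentZone_;
};

int main() {
    SoundscapeSystem sounds;
    sounds.Register("canal_tunnel", {{"water_drip", "wind_low"}, {"metal_creak"}});
    sounds.Register("street",       {{"city_hum"}, {"distant_dog"}});
    sounds.OnPlayerZoneChanged("canal_tunnel");  // player walks into the tunnel
    sounds.OnPlayerZoneChanged("street");        // player climbs back up to street level
}
```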

GamingNEXT - How can the Source engine handle both indoor and outdoor environments so well?

Gabe Newell - A lot of the flexibility in the engine involves making sure you are using the triangle and memory bandwidth you've got to render what's most important in the scene.
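One way to picture that budgeting, as a hedged sketch: rank visible objects by how much of the screen they cover, then spend a fixed triangle budget on the most prominent ones first, pushing the rest to coarser LODs. The VisibleObject struct, the budget number, and the ranking heuristic are assumptions for illustration, not engine code.

```cpp
// Budgeting sketch: objects that dominate the view get the finest LODs the
// triangle budget allows; everything else falls back to coarser meshes.
#include <algorithm>
#include <cstdio>
#include <vector>

struct VisibleObject {
    const char* name;
    float screenCoverage;   // fraction of the screen this object covers
    int triangles[3];       // triangle counts for LOD 0 (full) .. 2 (coarse)
    int chosenLod = 2;      // default to the coarsest mesh
};

void AssignLods(std::vector<VisibleObject>& objects, int triangleBudget) {
    // Biggest on-screen objects get first claim on the budget.
    std::sort(objects.begin(), objects.end(),
              [](const VisibleObject& a, const VisibleObject& b) {
                  return a.screenCoverage > b.screenCoverage;
              });
    int used = 0;
    for (auto& obj : objects) {
        for (int lod = 0; lod < 3; ++lod) {   // try the finest LOD first
            if (used + obj.triangles[lod] <= triangleBudget) {
                obj.chosenLod = lod;
                used += obj.triangles[lod];
                break;
            }
        }
        // If nothing fits, the object keeps the default coarse LOD.
    }
}

int main() {
    std::vector<VisibleObject> scene = {
        {"strider",  0.30f, {12000, 6000, 2000}},
        {"barrel",   0.02f, {1500, 700, 200}},
        {"building", 0.45f, {20000, 9000, 3000}},
    };
    AssignLods(scene, 30000);
    for (const auto& obj : scene)
        std::printf("%s -> LOD %d\n", obj.name, obj.chosenLod);
}
```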

http://www.gamingnext.com/articles/index.asp?id=17