Reputation: 415
I'm confused about the boundary between the application (CPU side) and the GPU side. Can someone help me understand what the application is generally responsible for in a game?
My understanding is that the application submits frames for the GPU to render, a process that involves the vertex shader, rasterization, and the pixel shader (in the most basic rendering path). This leads me to believe that the GPU has no concept of what happens from frame to frame.
Does this mean the application is keeping track of where all the objects are in world space? And if the user moves a character (for example), does the application determine the new location and therefore submit a new transform to the GPU?
This is especially confusing because I've read that the vertex shader can be used for things like morphing, which is basically animating a model over time by blending between two static poses.
Upvotes: 4
Views: 1092
Reputation:
I haven't kept in touch with the latest state-of-the-art engines, but last time I checked, almost all of the game state is typically stored on the CPU, and data is written from the CPU to the GPU far more often than it is read back.
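To make that concrete, here's a minimal sketch of the per-frame pattern from the question (plain C++; `uploadModelMatrix` is a made-up placeholder standing in for a real API call such as `glUniformMatrix4fv` or a constant-buffer update): the CPU owns the character's world-space position, computes a new transform when input moves it, and pushes that transform to the GPU for that frame's draw call.

```cpp
#include <array>
#include <cstdio>

// 4x4 column-major matrix, the layout most graphics APIs expect.
using Mat4 = std::array<float, 16>;

Mat4 translation(float x, float y, float z) {
    return {1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            x, y, z, 1};
}

// Hypothetical stand-in for a real upload call such as glUniformMatrix4fv
// or a Direct3D constant-buffer update.
void uploadModelMatrix(const Mat4& m) {
    std::printf("uploading transform (%.2f, %.2f, %.2f) to the GPU\n",
                m[12], m[13], m[14]);
}

struct Character {
    float x = 0, y = 0, z = 0;  // world-space state lives on the CPU
};

int main() {
    Character player;
    // One iteration of the game loop: the CPU decides where the object is,
    // then hands the GPU a fresh transform for this frame's draw call.
    float inputDx = 1.0f, dt = 1.0f / 60.0f, speed = 3.0f;
    player.x += inputDx * speed * dt;  // CPU-side game logic
    uploadModelMatrix(translation(player.x, player.y, player.z));
    // ...issue the draw call for the character's mesh here...
}
```

The arrows mostly point one way: the CPU remains the source of truth for where things are, and the GPU never reports positions back.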
That can even include redundant data stored on the CPU (and I'm not talking about what drivers do). For example, an engine might store triangle meshes on both the CPU and the GPU, keeping the CPU copy around to do things like frustum culling, collision detection, and picking. One reason for this redundancy is that it would be too difficult, if not impossible in some cases, to write the bulk of the game logic in GPU code. You might be able to accelerate parts of collision detection on the GPU, but you can't write your entire physics system in GPU code and have it push collision events to the audio system to play a sound, since the GPU can't talk to the audio hardware. The other reason is that the GPU is generally more limited in memory, so the CPU might store all the images for a game map (I'm including the hard drive as "CPU memory") while only some of the active texture data is stored redundantly on the GPU (possibly at a lower resolution depending on the circumstances).
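As a rough sketch of that redundancy (a bounding-sphere frustum test in plain C++, with a made-up `gpuBuffer` handle standing in for something like a VBO id; not any particular engine's layout), the engine keeps its own copy of the geometry so the CPU can cull without ever reading anything back from the GPU:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct Plane { Vec3 n; float d; };  // points with dot(n, p) + d >= 0 are inside

struct Mesh {
    std::vector<Vec3> positions;  // CPU-side copy, kept for culling/collision/picking
    unsigned gpuBuffer = 0;       // handle to the GPU copy (e.g. a VBO id); placeholder
    Vec3 center{};                // bounding sphere derived from the CPU copy
    float radius = 0;
};

// CPU-side frustum culling against a bounding sphere: no GPU read-back needed,
// because the CPU never gave up its own copy of the geometry.
bool isVisible(const Mesh& m, const Plane (&frustum)[6]) {
    for (const Plane& p : frustum) {
        float dist = p.n.x * m.center.x + p.n.y * m.center.y + p.n.z * m.center.z + p.d;
        if (dist < -m.radius) return false;  // completely outside this plane
    }
    return true;
}

int main() {
    Mesh mesh;
    mesh.positions = {{-1, -1, 0}, {1, -1, 0}, {0, 1, 0}};
    mesh.center = {0, 0, 0};
    mesh.radius = 1.5f;
    // A trivially permissive "frustum" just to exercise the test.
    Plane frustum[6] = {{{0, 0, 1}, 100}, {{0, 0, -1}, 100}, {{1, 0, 0}, 100},
                        {{-1, 0, 0}, 100}, {{0, 1, 0}, 100}, {{0, -1, 0}, 100}};
    bool visible = isVisible(mesh, frustum);
    (void)visible;  // only meshes that pass would get a draw call submitted
}
```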
The GPU is still a very specialized piece of hardware. You can't even write Pong on the GPU, after all, since it can't directly read user input from, say, a keyboard, mouse, or gamepad, nor play audio, load files from a hard drive, or anything like that. The CPU is still the "master brain" that coordinates everything.
As for things like tweening and skinning, those are often computed on the GPU, but that's not really "state management". The CPU might still store the matrices for each bone in a bone hierarchy, then just ship those matrices and the undeformed vertex positions to the GPU and let it do the blending. In that scenario it's not so much game state being stored/managed on the GPU as the GPU computing data on the fly on a per-frame basis, which it can do extremely fast, and that data doesn't even need to be persistently stored in the first place. The CPU doesn't even read the results back in those cases.
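Here's a minimal sketch of the CPU side of that (assuming a flat bone array where parents precede children; `uploadBonePalette` is a hypothetical placeholder for something like a `glUniformMatrix4fv` or storage-buffer update): the CPU owns the hierarchy and the animation state, and each frame it flattens that into a palette of matrices for the vertex shader to apply to the undeformed vertices.

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>;  // column-major 4x4

Mat4 identity() {
    return {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
    return r;
}

struct Bone {
    int parent;        // index into the same array; -1 for the root
    Mat4 localPose;    // animated each frame by CPU-side animation code
    Mat4 inverseBind;  // fixed at load time
};

// Placeholder for uploading a uniform/storage buffer of matrices; hypothetical name.
void uploadBonePalette(const std::vector<Mat4>& palette) { (void)palette; }

int main() {
    // Tiny two-bone "skeleton"; parents precede children in the array.
    std::vector<Bone> bones = {
        {-1, identity(), identity()},
        { 0, identity(), identity()},
    };

    // Per frame: walk the hierarchy on the CPU, build the skinning palette,
    // and ship it to the GPU along with the (unchanged) bind-pose vertices.
    std::vector<Mat4> world(bones.size()), palette(bones.size());
    for (size_t i = 0; i < bones.size(); ++i) {
        world[i] = bones[i].parent < 0
                       ? bones[i].localPose
                       : mul(world[bones[i].parent], bones[i].localPose);
        palette[i] = mul(world[i], bones[i].inverseBind);
    }
    uploadBonePalette(palette);  // the vertex shader blends these per vertex
}
```

The morphing case from the question is analogous: the CPU would just ship a blend weight each frame and let the vertex shader interpolate between the two stored poses.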
Typically the GPU isn't used that much for managing state. It's more often used for computing things on the fly, really fast, where it's well suited to doing that. If state is stored there, it's often temporary state that can be discarded and regenerated, because the CPU already has enough data to do so. An exception would be some GPGPU software I've seen that actually stored some application state exclusively on the GPU, with no copy whatsoever on the CPU, and with the CPU doing more reads from the GPU than writes to it, but I don't think games are doing that nearly as much.
So for the most part, yes, the GPU is typically rather oblivious to the game world and its state. The CPU just uses it to temporarily store some discardable data here and there, like texture data and VBOs built from images and meshes it already has copies of, and uses it to compute and output a lot of discardable data on the fly really fast. It's not used that often to store and output persistent data.
If I try to come up with a crude analogy: the business managers at a pizza restaurant persistently store customer records like addresses. They might temporarily give an address to a pizza delivery guy with a fast motorcycle so he can deliver a pizza to that customer, but they aren't going to leave it exclusively to the delivery guys to keep track of every customer's address, since that would lead to too much back-and-forth communication (plus the delivery guys might not be able to remember every single address they've delivered to, while the managers have computers with databases that can store a boatload of customer data). It's mostly one-way communication from business manager to delivery guy: "Hey pizza guy with the fast motorcycle, go deliver a pizza to this address." It's a similar thing from CPU to GPU: "Hey GPU, go calculate this for me really fast, and here's the data you need to do it."
Upvotes: 5