Reputation: 61
Let’s say I have created a simple program in C, using GTK, that brings up a label. When this program is run using ./a.out
from the command line, I am aware that a new process is forked, execve is called, etc.
But when exactly in the process of running a program is my GUI, with my label, drawn to the screen? At what point is X11 interfaced with? I am struggling to understand exactly when such GUIs are drawn in terms of the steps of the Linux process life cycle.
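For concreteness, the kind of program I mean is essentially this (a minimal sketch assuming GTK 3; my real code may differ slightly). Compile with something like gcc main.c $(pkg-config --cflags --libs gtk+-3.0):

    #include <gtk/gtk.h>

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);              /* opens the connection to the display */

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *label  = gtk_label_new("Hello, world");
        gtk_container_add(GTK_CONTAINER(window), label);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_widget_show_all(window);         /* the window gets mapped and drawn */
        gtk_main();                          /* event loop; redraws on draw events */
        return 0;
    }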
To help illustrate my understanding, this is the link I am using to try to understand the general process life cycle; it contains no information on when GUIs are drawn, however: http://glennastory.net/?p=870
Upvotes: 1
Views: 103
Reputation: 2884
The system for screen rendering on Linux has changed often. An extensive, though not necessarily entirely reliable, summary can be found at: https://en.wikipedia.org/wiki/Direct_Rendering_Infrastructure.
The information is all there but it assumes a certain level of knowledge from the reader. I can summarize what I understand from the link and give some further information.
In the classic X Window System architecture the X Server is the only process with exclusive access to the graphics hardware, and therefore the one which does the actual rendering on the framebuffer. All that X clients do is communicate with the X Server to dispatch rendering commands. Those commands are hardware independent, meaning that the X11 protocol provides an API that abstracts the graphics device so the X clients don't need to know or worry about the specifics of the underlying hardware. Any hardware specific code lives inside the Device Dependent X, the part of the X Server that manages each type of video card or graphics adapter and which is also often called the video or graphics driver.
What this says is that the X server is started as a privileged (root) process. A non-root process communicates with the X server using Xlib, and Xlib itself talks to the X server using socket system calls. This provides a secure form of IPC with the X server. The socket interface to the X server is device independent: your non-root process calls the same functions regardless of the underlying graphics card.
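To make that step concrete, here is a minimal sketch of my own (not from the article) of what a toolkit like GTK ultimately does on X11: open a socket connection to the X server and send it device independent requests. Compile with something like gcc xlabel.c -lX11:

    #include <stdio.h>
    #include <X11/Xlib.h>

    int main(void)
    {
        /* Connects to the X server, typically over a UNIX domain socket
           such as /tmp/.X11-unix/X0 (or TCP for a remote display). */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot connect to X server\n"); return 1; }

        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 200, 100, 0, 0,
                                         WhitePixel(dpy, DefaultScreen(dpy)));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);                 /* request: show the window */

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);             /* read events back from the socket */
            if (ev.type == Expose)            /* the server asks us to (re)draw */
                XDrawString(dpy, win, DefaultGC(dpy, DefaultScreen(dpy)),
                            20, 50, "label", 5);
        }
    }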
The device independent portion of the X server makes calls into the device dependent portion. The device dependent portion is essentially a user mode driver implementation. It is not the same as the kernel driver, because it is called from a user mode root process. The kernel driver is a separate entity that is loaded into the kernel at boot. It is written either by a third party working from the graphics card documentation or, more likely, by the graphics card vendor itself. The kernel driver is essentially a fancy character device: it responds to ioctl system calls and performs PCI reads/writes on registers that make the graphics card do DMA operations and drive the screen.
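As an illustration of the "fancy character device" point, here is a sketch of mine (not from the article), assuming a DRM capable card exposing /dev/dri/card0 and the kernel DRM headers installed: the device file is opened like any other file and queried with an ioctl.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>      /* DRM_IOCTL_VERSION, struct drm_version */

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* the driver's character device */
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        char name[64] = {0};
        struct drm_version v;
        memset(&v, 0, sizeof v);
        v.name = name;
        v.name_len = sizeof name - 1;

        /* Handled by the vendor specific kernel module bound to the card;
           it answers with its driver name, e.g. "i915", "amdgpu", "nouveau". */
        if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
            printf("kernel DRM driver: %s (%d.%d.%d)\n", name,
                   v.version_major, v.version_minor, v.version_patchlevel);
        else
            perror("DRM_IOCTL_VERSION");

        close(fd);
        return 0;
    }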
The user mode portion of the driver (the device dependent portion of the X server) is also implemented by the graphics card vendor. It has to be this way because the X server doesn't (and shouldn't) know anything about how the driver works. The X server therefore defines generic entry points that its device independent portion calls, and it is these generic functions that the graphics card vendor implements.
The rise of 3D rendering has shown the limits of this architecture. 3D graphics applications tend to produce large amounts of commands and data, all of which must be dispatched to the X Server for rendering. As the amount of inter-process communication (IPC) between the X client and X Server increased, the 3D rendering performance suffered to the point that X driver developers concluded that in order to take advantage of 3D hardware capabilities of the latest graphics cards a new IPC-less architecture was required. X clients should have direct access to graphics hardware rather than relying on a third party process to do so, saving all the IPC overhead. This approach is called "direct rendering" as opposed to the "indirect rendering" provided by the classical X architecture. The Direct Rendering Infrastructure was initially developed to allow any X client to perform 3D rendering using this "direct rendering" approach.
What this says is that the X server's indirect (socket based) rendering was not good enough for 3D, so direct rendering was implemented. After all, once you install a driver on your system you are already trusting whatever it does, so you can count on a vendor written driver to cooperate with the X server and render only where it should. This is exactly what happens. There are still kernel and user mode portions of the driver: the kernel portion stays the same, but the user mode portion becomes an implementation of a 3D rendering API such as OpenGL. This is exactly what is stated a bit further on:
the DRI client —an X client performing "direct rendering"— needs a hardware specific "driver" able to manage the current video card or graphics adapter in order to render on it. These DRI drivers are typically provided as shared libraries to which the client is dynamically linked. Since DRI was conceived to take advantage of 3D graphics hardware, the libraries are normally presented to clients as hardware accelerated implementations of a 3D API such as OpenGL, provided by either the 3D hardware vendor itself or a third party such as the Mesa 3D free software project.
Afterwards, it says:
the X Server provides an X11 protocol extension —the DRI extension— that the DRI clients use to coordinate with both the windowing system and the DDX driver.[9] As part of the DDX (device dependent X) driver, it's quite common that the X Server process also dynamically links to the same DRI driver that the DRI clients, but to provide hardware accelerated 3D rendering to the X clients using the GLX extension for indirect rendering (for example remote X clients that can't use direct rendering). For 2D rendering, the DDX driver must also take into account the DRI clients using the same graphics device.
The X server still does indirect rendering with the GLX extension.
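A quick way to see the difference from a client's point of view (a sketch of mine, assuming the Xlib and GLX development headers; link with -lX11 -lGL): the GLX API can tell you whether a context is direct (DRI) or indirect (rendering requests go through the X server).

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

        /* Last argument asks for a direct context; the server/driver may
           still fall back to indirect rendering. */
        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
        printf("direct rendering: %s\n",
               glXIsDirect(dpy, ctx) ? "yes (DRI)" : "no (indirect, via the X server)");

        glXDestroyContext(dpy, ctx);
        XCloseDisplay(dpy);
        return 0;
    }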
the access to the video card or graphics adapter is regulated by a kernel component called the Direct Rendering Manager (DRM).[10] Both the X Server's DDX driver and each X client's DRI driver must use DRM to access to the graphics hardware. DRM provides synchronization to the shared resources of the graphics hardware —resources such as the command queue, the card registers, the video memory, the DMA engines, ...— ensuring that the concurrent access of all those multiple competing user space processes don't interfere with each other. DRM also serves as a basic security enforcer that doesn't allow any X client to access the hardware beyond what it needs to perform the 3D rendering.
Like I said earlier, there are kernel and user mode portions of the graphics card driver. The DRM is partly generic (the same for every graphics card) and partly specific to one graphics card. This is made possible by how PCI devices work (practically all graphics cards today are PCI, if not all peripherals). PCI devices have some common registers that must be present on every device; these correspond to the generic portion of the DRM. Among the common registers are the BAR registers, which point to device specific memory or I/O regions; these correspond to the specific portion of the DRM. The graphics card vendor thus provides a kernel module in the form of a character device. Using the PCI IDs of the graphics card, the kernel detects that it must use this kernel module to drive it. The module then presents a virtual character device file to user mode, which can be opened so that further ioctl operations can be performed on it.
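You can see this binding from user space through sysfs (again a sketch of mine, assuming the usual /sys/class/drm/card0 layout): the card's PCI vendor/device IDs and the kernel module that claimed it are exposed there.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void print_file(const char *path)
    {
        char buf[64] = {0};
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%-40s %s", path, buf);      /* sysfs values end with '\n' */
        if (f) fclose(f);
    }

    int main(void)
    {
        /* PCI IDs the kernel used to pick the driver for this card. */
        print_file("/sys/class/drm/card0/device/vendor");
        print_file("/sys/class/drm/card0/device/device");

        /* The 'driver' symlink points at the kernel module that was bound. */
        char link[256] = {0};
        ssize_t n = readlink("/sys/class/drm/card0/device/driver",
                             link, sizeof link - 1);
        if (n > 0) {
            const char *base = strrchr(link, '/');
            printf("%-40s %s\n", "bound kernel driver:", base ? base + 1 : link);
        }
        return 0;
    }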
The DRI2 extension provides other core operations for the DRI clients, such as finding out which DRM device and driver should they use (DRI2Connect) or getting authenticated by the X Server in order to be able to use the rendering and buffer facilities of the DRM device (DRI2Authenticate).
What this says is that the DRI clients still need to be authenticated by the X server before they can actually render (i.e. make DRM system calls).
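The client side of that authentication can be sketched with libdrm (my own illustration, assuming libdrm is installed; compile with gcc auth.c -ldrm): the client asks the DRM device for a "magic" token, which a real DRI client would then hand to the X server in a DRI2Authenticate request; the server, as DRM master, calls drmAuthMagic() so the kernel accepts that client's rendering ioctls.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>      /* drmGetMagic(), drm_magic_t, from libdrm */

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

        drm_magic_t magic;
        if (drmGetMagic(fd, &magic) == 0)
            /* This token would be sent to the X server via DRI2Authenticate. */
            printf("DRM magic token: %u\n", magic);
        else
            perror("drmGetMagic");

        close(fd);
        return 0;
    }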
At least, this is what I understand from the Wikipedia page. I cannot be completely sure, but it must be something along those lines.
Upvotes: 1