hunter

Reputation: 308

How to tell stride of shm/fd to xcb_shm_get_image()?

I'm using xcb-shm to capture the screen. I create a drm/gbm buffer with the same resolution as the screen and pass its fd to xcb_shm_attach_fd(). Sometimes the drm/gbm buffer I create has a stride that is not equal to xcb's geometry->width * 4, which breaks everything.

When I capture at 1920x1080 resolution everything is fine, since the drm/gbm buffer I create has a stride equal to 1920 * 4. But when I create a drm/gbm buffer for my first 1366x768 monitor, the stride is not 1366 * 4; it's 1408 * 4. When I hand that drm/gbm buffer to xcb to capture the screen, I get broken results. I create the buffer with gbm_bo_create() and read its stride with gbm_bo_get_stride().

How can I tell xcb_shm_get_image() the stride of the shm/fd buffer? Or is there a faster/similar way to capture the screen that takes the shm/fd stride into account?
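For reference, this is roughly the buffer setup; the device path, format and usage flags here are placeholders rather than my exact code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <gbm.h>

    int main(void)
    {
        /* Placeholder device path; the real code picks the DRM device differently. */
        int drm_fd = open("/dev/dri/renderD128", O_RDWR);
        struct gbm_device *gbm = gbm_create_device(drm_fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1366, 768,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_LINEAR);

        /* The stride is driver-chosen; on my hardware this prints
         * 5632 (= 1408 * 4) instead of 5464 (= 1366 * 4). */
        printf("stride = %u, width * 4 = %u\n",
               gbm_bo_get_stride(bo), 1366 * 4);

        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        return 0;
    }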

Upvotes: 0

Views: 86

Answers (1)

Uli Schlachter

Reputation: 9877

Well, okay. Here is an attempt at an explanation. I do not think the following contains any new information for you.

First: SHM uses the same pixel format as normal X11, so xcb_shm_get_image() gives you the same format as xcb_get_image(). How the pixel data is formatted depends on the depth and the visual. Information about both is in the result of xcb_get_setup() and is printed by /usr/bin/xdpyinfo.

Let's look at an example from my X11 server. xdpyinfo says this about pixmap formats:

supported pixmap formats:
    depth 1, bits_per_pixel 1, scanline_pad 32
    depth 4, bits_per_pixel 8, scanline_pad 32
    depth 8, bits_per_pixel 8, scanline_pad 32
    depth 15, bits_per_pixel 16, scanline_pad 32
    depth 16, bits_per_pixel 16, scanline_pad 32
    depth 24, bits_per_pixel 32, scanline_pad 32
    depth 32, bits_per_pixel 32, scanline_pad 32
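
In case you want the same list programmatically, it is available from the result of xcb_get_setup(); a small, untested sketch:

    /* Print the server's pixmap formats, i.e. the same list
     * that xdpyinfo shows above. */
    #include <stdio.h>
    #include <xcb/xcb.h>

    int main(void)
    {
        xcb_connection_t *conn = xcb_connect(NULL, NULL);
        const xcb_setup_t *setup = xcb_get_setup(conn);

        xcb_format_iterator_t it = xcb_setup_pixmap_formats_iterator(setup);
        for (; it.rem; xcb_format_next(&it))
            printf("depth %u, bits_per_pixel %u, scanline_pad %u\n",
                   it.data->depth, it.data->bits_per_pixel,
                   it.data->scanline_pad);

        xcb_disconnect(conn);
        return 0;
    }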

The pixmap formats tell you how pixel data is stored. "These days", we only care about depth 24 and depth 32. Both of them use 32 bits per pixel, i.e. 4 bytes. At the end of a scanline, the data is padded to a multiple of 32 bits. Since each pixel already has a size of 32 bits, this basically means that there is no padding.
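Put as code, the stride X11 expects follows from these two fields; here is the computation as a small helper (my naming, not an Xlib/xcb function):

    #include <stdint.h>

    /* Bytes per scanline as X11 lays it out: width * bits_per_pixel,
     * rounded up to a multiple of scanline_pad bits, converted to bytes.
     * scanline_pad is a power of two (8, 16 or 32). */
    static uint32_t x11_stride(uint32_t width, uint32_t bits_per_pixel,
                               uint32_t scanline_pad)
    {
        uint32_t bits = width * bits_per_pixel;
        bits = (bits + scanline_pad - 1) & ~(scanline_pad - 1);
        return bits / 8;
    }

    /* x11_stride(1366, 32, 32) == 5464 == 1366 * 4: no extra padding,
     * unlike the GBM buffer's 1408 * 4 from the question. */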

Next, what is the default visual of my X11 server? Why this one? Just because there are so many visuals and I have to pick one somehow.

screen #0:
[...]
  default visual id:  0x21
  visual:
    visual id:    0x21
    class:    TrueColor
    depth:    24 planes
    available colormap entries:    256 per subfield
    red, green, blue masks:    0xff0000, 0xff00, 0xff
    significant bits in color specification:    8 bits

This visual uses a depth of 24. As we saw above, this means that there are four bytes per pixel and no extra padding at the end of a scanline.
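
The same lookup can be done through xcb by walking the screen's depths until you find the root visual; another untested sketch:

    /* Find the root visual of the first screen and print its depth
     * and channel masks, mirroring the xdpyinfo output above. */
    #include <stdio.h>
    #include <xcb/xcb.h>

    int main(void)
    {
        xcb_connection_t *conn = xcb_connect(NULL, NULL);
        xcb_screen_t *screen =
            xcb_setup_roots_iterator(xcb_get_setup(conn)).data;

        xcb_depth_iterator_t d = xcb_screen_allowed_depths_iterator(screen);
        for (; d.rem; xcb_depth_next(&d)) {
            xcb_visualtype_t *v = xcb_depth_visuals(d.data);
            int n = xcb_depth_visuals_length(d.data);
            for (int i = 0; i < n; i++)
                if (v[i].visual_id == screen->root_visual)
                    printf("visual 0x%x: depth %u, masks %#x %#x %#x\n",
                           v[i].visual_id, d.data->depth,
                           v[i].red_mask, v[i].green_mask, v[i].blue_mask);
        }

        xcb_disconnect(conn);
        return 0;
    }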

Put differently: An image of size wxh takes w*h*4 bytes and each line takes w*4 bytes.
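So if your buffer's stride is larger than w*4 (as with the 1408-pixel GBM stride from the question), the SHM GetImage request has no field to express that; one workaround is to attach a tightly packed buffer and copy rows into the strided buffer yourself. A sketch of such a row-by-row copy (my naming, assuming both buffers are mapped):

    #include <stdint.h>
    #include <string.h>

    /* Copy a tightly packed w*4-bytes-per-line image, as the X server
     * writes it, into a destination with a larger stride, row by row.
     * dst_stride would be gbm_bo_get_stride() in the question's setup. */
    static void copy_with_stride(uint8_t *dst, size_t dst_stride,
                                 const uint8_t *src,
                                 uint32_t width, uint32_t height)
    {
        size_t src_stride = (size_t)width * 4; /* depth 24/32, no padding */
        for (uint32_t y = 0; y < height; y++)
            memcpy(dst + y * dst_stride, src + y * src_stride, src_stride);
    }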

Note that https://www.x.org/releases/X11R7.7/doc/xextproto/shm.html#USE_OF_SHARED_MEMORY_PIXMAPS says:

Unlike X images, for which any image format is usable, the shared memory extension supports only a single format (i.e. XYPixmap or ZPixmap) for the data stored in a shared memory pixmap.

However, I doubt any X11 server actually makes use of this. XYPixmaps are weird...

Upvotes: 0
