Reputation: 300
Binder clients and servers use the binder driver to send and receive data. From reading the binder-related source code, I can see that an app process sends and receives data via ioctl(BINDER_WRITE_READ), and that the binder driver reads data with copy_from_user and writes data with copy_to_user.
Since the binder driver implements a character device and sends/receives data via ioctl(BINDER_WRITE_READ), why does binder need mmap? After mmap, the app process could read/write data from/to the mmap-ed shared memory, and ioctl(BINDER_WRITE_READ) would not be necessary.
My question is: why does binder not use the mmap-ed shared memory to send/receive data, instead of using ioctl(BINDER_WRITE_READ)?
It seems the only job mmap does is to allocate a memory buffer. If that is the case, the buffer could simply be allocated in binder_open, and binder_mmap would not be needed.
Upvotes: 0
Views: 728
Reputation: 12239
It seems the only job mmap does is to allocate a memory buffer. If that is the case, the buffer could simply be allocated in binder_open, and binder_mmap would not be needed.
mmap is needed here, as the point is not just to allocate a buffer for the kernel, but to allocate memory shared between the userspace program and the kernel. The kernel also needs to verify in mmap that this region is read-only and cannot be made writable with mprotect.
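You can observe this from userspace with a small experiment (a quick sketch, not from the binder sources; the 128 KiB mapping size is an arbitrary choice, and it assumes you have permission to open /dev/binder):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/binder");
        return 1;
    }

    size_t map_size = 128 * 1024;   /* arbitrary size for this sketch */

    /* A writable mapping is rejected by binder_mmap (EPERM)... */
    void *p = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    printf("writable mmap: %s\n", p == MAP_FAILED ? strerror(errno) : "succeeded");

    /* ...so the mapping has to be created read-only. */
    p = mmap(NULL, map_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("read-only mmap");
        return 1;
    }

    /* It also cannot be upgraded later: the driver clears VM_MAYWRITE,
     * so this mprotect fails with EACCES. */
    if (mprotect(p, map_size, PROT_READ | PROT_WRITE) != 0)
        perror("mprotect PROT_WRITE");

    munmap(p, map_size);
    close(fd);
    return 0;
}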
Since the binder driver implements a character device and sends/receives data via ioctl(BINDER_WRITE_READ), why does binder need mmap? After mmap, the app process could read/write data from/to the mmap-ed shared memory, and ioctl(BINDER_WRITE_READ) would not be necessary.
The mmap region is read-only for userspace; the app cannot write to it. This will make more sense if we go over how a transaction works and what this buffer is actually used for.
A userspace program first opens /dev/binder and calls mmap to map this read-only memory. Then a transaction is initiated with the BINDER_WRITE_READ ioctl command. The data for this command is as follows:
struct binder_write_read {
    binder_size_t write_size;       /* bytes to write */
    binder_size_t write_consumed;   /* bytes consumed by driver */
    binder_uintptr_t write_buffer;
    binder_size_t read_size;        /* bytes to read */
    binder_size_t read_consumed;    /* bytes consumed by driver */
    binder_uintptr_t read_buffer;
};
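As a rough illustration of how this struct is used from userspace (a sketch only; it mirrors what servicemanager's binder_loop does when it registers itself with BC_ENTER_LOOPER, and assumes the UAPI header <linux/android/binder.h> is available):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   /* BINDER_WRITE_READ, binder_write_read, BC_*, BR_* */

/* Send a single argument-less command word (e.g. BC_ENTER_LOOPER) to the
 * driver. 'fd' is an open /dev/binder descriptor. Sketch only. */
static int send_one_command(int fd, uint32_t cmd)
{
    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));

    /* write_buffer is an ordinary userspace buffer holding the command
     * stream; here the stream is just one command word. */
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&cmd;
    bwr.write_size   = sizeof(cmd);

    /* read_size == 0: don't ask the driver for any responses this time. */
    bwr.read_buffer = 0;
    bwr.read_size   = 0;

    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}

/* Usage: send_one_command(fd, BC_ENTER_LOOPER); */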
This gets handled by binder_thread_write:
struct binder_write_read bwr;
// ...
binder_thread_write(proc, thread, bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
You can see that the write_buffer is actually a userspace buffer:
static int binder_thread_write(struct binder_proc *proc,
                               struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size,
                               binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
This is the same for the read_buffer. These two buffers are not related to the buffer that was previously mmaped.
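To see how that userspace buffer is consumed, here is a condensed paraphrase (not the literal kernel source) of the command loop in binder_thread_write:

void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;

while (ptr < end) {
    uint32_t cmd;

    /* read the next command word out of the userspace write_buffer */
    if (get_user(cmd, (uint32_t __user *)ptr))
        return -EFAULT;
    ptr += sizeof(uint32_t);

    switch (cmd) {
    /* ... each BC_* command then copies its argument (if any) from
     * right after the command word with copy_from_user ... */
    }

    *consumed = ptr - buffer;
}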
The write_buffer is used for sending commands (not the same as ioctl commands) to the binder driver, and the read_buffer is for receiving responses from the driver. One of these commands is BC_TRANSACTION, which gets handled in the binder_transaction function. The argument of the BC_TRANSACTION command is located right after the command in the write_buffer, and it has the following structure:
struct binder_transaction_data {
    /* The first two are only used for bcTRANSACTION and brTRANSACTION,
     * identifying the target and contents of the transaction.
     */
    union {
        /* target descriptor of command transaction */
        __u32 handle;
        /* target descriptor of return transaction */
        binder_uintptr_t ptr;
    } target;
    binder_uintptr_t cookie;    /* target object cookie */
    __u32 code;                 /* transaction command */

    /* General information about the transaction. */
    __u32 flags;
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;    /* number of bytes of data */
    binder_size_t offsets_size; /* number of bytes of offsets */

    /* If this transaction is inline, the data immediately
     * follows here; otherwise, it ends with a pointer to
     * the data buffer.
     */
    union {
        struct {
            /* transaction data */
            binder_uintptr_t buffer;
            /* offsets from buffer to flat_binder_object structs */
            binder_uintptr_t offsets;
        } ptr;
        __u8 buf[8];
    } data;
};
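To make that layout concrete, here's a userspace sketch of building such a command stream by hand (the target handle and transaction code are placeholders; in a real app, libbinder's IPCThreadState assembles this for you):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Sketch: build a write_buffer containing one BC_TRANSACTION command word
 * followed immediately by its binder_transaction_data argument, then hand
 * it to the driver. Error handling omitted. */
static int send_transaction(int fd, const void *payload, size_t payload_len)
{
    struct {
        uint32_t cmd;                       /* BC_TRANSACTION */
        struct binder_transaction_data tr;  /* its argument, right after */
    } __attribute__((packed)) writebuf;

    memset(&writebuf, 0, sizeof(writebuf));
    writebuf.cmd = BC_TRANSACTION;
    writebuf.tr.target.handle = 0;          /* placeholder target handle */
    writebuf.tr.code = 1;                   /* placeholder transaction code */
    writebuf.tr.data_size = payload_len;
    writebuf.tr.offsets_size = 0;           /* no binder objects/fds in payload */
    /* data.ptr.buffer is yet another plain userspace pointer */
    writebuf.tr.data.ptr.buffer = (binder_uintptr_t)(uintptr_t)payload;
    writebuf.tr.data.ptr.offsets = 0;

    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&writebuf;
    bwr.write_size = sizeof(writebuf);

    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}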
Looking at binder_transaction, we can see that this structure contains more userspace pointers:
const void __user *user_buffer = (const void __user *)(uintptr_t)tr->data.ptr.buffer;
This is also true for tr->data.ptr.offsets. These buffers are still not the region that was mmaped.
Inside binder_transaction, we see calls to binder_alloc_new_buf. This is where the mmaped region is first used. In the remainder of the function, tr->data.ptr.buffer and tr->data.ptr.offsets will be "translated" into a form usable by the receiving/target process (for example, if we're sending a file descriptor, we need to translate that into a new file descriptor in the receiving process). The translated results are then copied to the target's mmaped region with binder_alloc_copy_to_buffer.
switch (hdr->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
    struct flat_binder_object *fp;

    fp = to_flat_binder_object(hdr);
    ret = binder_translate_binder(fp, t, thread);
    if (ret < 0 ||
        binder_alloc_copy_to_buffer(&target_proc->alloc, t->buffer,
                                    object_offset, fp, sizeof(*fp))) {
        // ...
    }
} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
    struct flat_binder_object *fp;

    fp = to_flat_binder_object(hdr);
    ret = binder_translate_handle(fp, t, thread);
    if (ret < 0 ||
        binder_alloc_copy_to_buffer(&target_proc->alloc, t->buffer,
                                    object_offset, fp, sizeof(*fp))) {
        // ...
    }
} break;
case BINDER_TYPE_FD: {
    struct binder_fd_object *fp = to_binder_fd_object(hdr);
    binder_size_t fd_offset = object_offset +
        (uintptr_t)&fp->fd - (uintptr_t)fp;
    int ret = binder_translate_fd(fp->fd, fd_offset, t, thread, in_reply_to);

    fp->pad_binder = 0;
    if (ret < 0 ||
        binder_alloc_copy_to_buffer(&target_proc->alloc, t->buffer,
                                    object_offset, fp, sizeof(*fp))) {
        // ...
    }
} break;
// ...
The sending process's mmap region is not used when sending a transaction. It will only be used when receiving a transaction.
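For completeness, here's a sketch of the receiving side (simplified; this is essentially what servicemanager's binder_parse does): the driver fills read_buffer with a BR_TRANSACTION whose binder_transaction_data now points into the receiver's own mmaped region, and the receiver later releases that space with BC_FREE_BUFFER.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Sketch of the receiving side. read_buffer is an ordinary userspace buffer
 * that the driver fills with BR_* commands; for BR_TRANSACTION, the attached
 * binder_transaction_data's data.ptr.buffer points INTO this process's
 * read-only mmap region, so the payload is read in place. */
static int receive_one(int fd)
{
    uint32_t readbuf[128];
    struct binder_write_read bwr;

    memset(&bwr, 0, sizeof(bwr));
    bwr.read_buffer = (binder_uintptr_t)(uintptr_t)readbuf;
    bwr.read_size = sizeof(readbuf);

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;

    uint8_t *ptr = (uint8_t *)readbuf;
    uint8_t *end = ptr + bwr.read_consumed;

    while (ptr < end) {
        uint32_t cmd;
        memcpy(&cmd, ptr, sizeof(cmd));
        ptr += sizeof(cmd);

        if (cmd == BR_NOOP)
            continue;
        if (cmd != BR_TRANSACTION)
            break;                          /* sketch: other BR_* not handled */

        struct binder_transaction_data tr;
        memcpy(&tr, ptr, sizeof(tr));
        ptr += sizeof(tr);

        /* The payload lives in the mmaped region; no extra copy needed. */
        const void *payload = (const void *)(uintptr_t)tr.data.ptr.buffer;
        (void)payload;                      /* ... handle the transaction ... */

        /* Tell the driver it can reclaim that part of the mmaped region. */
        struct {
            uint32_t cmd;
            binder_uintptr_t buffer;
        } __attribute__((packed)) freebuf = { BC_FREE_BUFFER, tr.data.ptr.buffer };

        memset(&bwr, 0, sizeof(bwr));
        bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&freebuf;
        bwr.write_size = sizeof(freebuf);
        ioctl(fd, BINDER_WRITE_READ, &bwr);
    }
    return 0;
}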
Hopefully it is now clear why the mmap-ed region alone can't be used to send and receive data, and why the ioctl is still needed.
Upvotes: 3