I am looking for advice on how to get efficient and high performance asynchronous IO working for my application that runs on Ubuntu Linux 14.04.
My app processes transactions and creates a file on disk/flash. As the app progresses through transactions, additional blocks are created that must be appended to the file on disk/flash. The app also needs to frequently read blocks of this file as it processes new transactions. Each transaction might need to read a different block from this file in addition to creating a new block that has to be appended to it. There is an incoming queue of transactions, and the app can continue to process transactions from the queue to build a deep enough pipeline of IO ops to hide the latency of read accesses or write completions on disk or flash. For a read of a block (which was put in the write queue by a previous transaction) that has not yet been written to disk/flash, the app will stall until the corresponding write completes.
I have an important performance objective – the app should incur the lowest possible latency to issue the IO operation. My app takes approximately 10 microseconds to process each transaction and be ready to issue a write to or a read from the file on disk/flash. The additional latency to issue an asynchronous read or write should be as small as possible so that the app can complete processing each transaction at a rate as close to 10 usecs per transaction as possible, when only a file write is needed.
We are experimenting with an implementation that uses io_submit to issue write and read requests. I would appreciate any suggestions or feedback on the best approach for our requirement. Is io_submit going to give us the best performance to meet our objective? What should I expect for the latency of each write io_submit and the latency of each read io_submit?
Using our experimental code (running on a 2.3 GHz Haswell Macbook Pro, Ubuntu Linux 14.04), we are measuring about 50 usecs for a write io_submit when extending the output file. This is too long and we aren't even close to our performance requirements. Any guidance to help me launch a write request with the least latency will be greatly appreciated.
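A stripped-down sketch of how such a measurement can be made with libaio follows; this is illustrative rather than our actual code, and the file name, block size and lack of O_DIRECT are placeholders:

    /* Hypothetical timing harness: measures the latency of one write io_submit()
     * when appending a 4 KiB block to a test file.
     * Build: gcc -O2 measure.c -laio  (requires libaio) */
    #include <libaio.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    static double usecs_between(const struct timespec *t0, const struct timespec *t1)
    {
        return (t1->tv_sec - t0->tv_sec) * 1e6 + (t1->tv_nsec - t0->tv_nsec) / 1e3;
    }

    int main(void)
    {
        io_context_t ctx = 0;
        if (io_setup(128, &ctx) < 0) { perror("io_setup"); return 1; }

        /* "testfile" is a placeholder path; O_DIRECT could be added to the flags */
        int fd = open("testfile", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        void *buf = NULL;
        if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE) != 0) return 1;  /* aligned, in case O_DIRECT is used */
        memset(buf, 'x', BLOCK_SIZE);

        struct iocb cb;
        struct iocb *cbs[1] = { &cb };
        io_prep_pwrite(&cb, fd, buf, BLOCK_SIZE, 0);   /* offset 0 == current end of the new file */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int rc = io_submit(ctx, 1, cbs);               /* the call being timed */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("io_submit returned %d after %.1f usec\n", rc, usecs_between(&t0, &t1));

        /* reap the completion so the write actually finishes */
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);

        io_destroy(ctx);
        close(fd);
        free(buf);
        return 0;
    }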
Upvotes: 17
Views: 9843
Linux AIO (sometimes known as KAIO or libaio) is something of a black art where experienced practitioners know the pitfalls but for some reason it's taboo to tell someone about gotchas they don't already know. From scratching around on the web and experience I've come up with a few examples where Linux's asynchronous I/O submission via io_submit() may become (silently) synchronous, thereby turning it into a blocking (i.e. no longer fast) call:

- You're submitting buffered (i.e. non-direct) I/O, so you're at the mercy of the kernel's caching: the submit can go synchronous when what you're reading isn't already in the page cache, or when the write cache is full and the new write can't be accepted until some existing writeback has completed.
- You asked for direct I/O but the filesystem decides to ignore the O_DIRECT "hint" (e.g. how you submitted the I/O didn't meet O_DIRECT alignment constraints, filesystem or particular filesystem's configuration doesn't support O_DIRECT) and it chooses to silently perform buffered I/O instead, resulting in the case above.
- The filesystem has to do some other synchronous operation to fulfil the request (e.g. reading or updating metadata via the journal, most commonly because an "allocating write" is extending the file), so io_submit() again will turn into a blocking call while the other operation completes. The Seastar framework contains a small lookup table of filesystem specific cases.
- You're submitting too much outstanding I/O: the disk/controller has a maximum number of requests it can process at the same time, and there are maximum request queue sizes (see the /sys/block/[disk]/queue/nr_requests documentation and the un(der)documented /sys/block/[disk]/device/queue_depth) within the kernel. Making I/O requests back up and exceed the size of the kernel queues leads to blocking (a quick way to inspect these limits is sketched after this list).
  - If you submit I/Os that are "too large" (e.g. bigger than /sys/block/[disk]/queue/max_sectors_kb, but the true limit may be something smaller like 512 KiB) they will be split up within the block layer and go on to chew up more than one request.
  - The system global maximum number of concurrent AIO requests (see the /proc/sys/fs/aio-max-nr documentation) can also have an impact, but the result will be seen in io_setup() rather than io_submit().
- The submission has to take a lock (e.g. the inode's i_rwsem) that is already in use.

The list above is not exhaustive.
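As an aside, the block-layer limits mentioned above are easy to inspect. A throwaway sketch (the device name "sda" is an example, adjust for your system):

    /* Print the kernel queue limits relevant to AIO back-pressure. */
    #include <stdio.h>

    static void print_limit(const char *path)
    {
        char buf[64];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%-45s %s", path, buf);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        print_limit("/sys/block/sda/queue/nr_requests");
        print_limit("/sys/block/sda/queue/max_sectors_kb");
        print_limit("/sys/block/sda/device/queue_depth");
        print_limit("/proc/sys/fs/aio-max-nr");
        return 0;
    }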
With >= 4.14 kernels the RWF_NOWAIT flag can be used to make some of the blocking scenarios above noisy. For example, when using buffering and trying to read data not yet in the page cache, the RWF_NOWAIT flag will cause submission to fail with EAGAIN when blocking would otherwise occur. Obviously you still a) need a 4.14 (or later) kernel that supports this flag and b) have to be aware of the cases it doesn't cover. I notice there are patches that have been accepted or are being proposed to return EAGAIN in more scenarios that would otherwise block, but at the time of writing (2019) RWF_NOWAIT is not supported for buffered filesystem writes.
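To illustrate, here is a sketch against the raw kernel AIO ABI (rather than libaio), assuming a >= 4.14 kernel and headers; error handling is omitted and the helper names are made up:

    /* Submit a read with RWF_NOWAIT so io_submit() fails with EAGAIN
     * instead of blocking when the request can't be started immediately. */
    #include <linux/aio_abi.h>   /* struct iocb, IOCB_CMD_PREAD, aio_context_t */
    #include <linux/fs.h>        /* RWF_NOWAIT */
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>

    static void prep_nowait_pread(struct iocb *cb, int fd, void *buf,
                                  size_t len, long long off)
    {
        memset(cb, 0, sizeof *cb);
        cb->aio_lio_opcode = IOCB_CMD_PREAD;
        cb->aio_fildes     = fd;
        cb->aio_buf        = (uint64_t)(uintptr_t)buf;
        cb->aio_nbytes     = len;
        cb->aio_offset     = off;
        cb->aio_rw_flags   = RWF_NOWAIT;   /* EAGAIN rather than a blocking submit */
    }

    /* If this returns -1 with errno == EAGAIN, the submission would have
     * blocked; handle the request another way instead of stalling. */
    static long submit_one(aio_context_t ctx, struct iocb *cb)
    {
        struct iocb *list[1] = { cb };
        return syscall(SYS_io_submit, ctx, 1, list);
    }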
If your kernel is >= 5.1, you could try using io_uring, which does far better at not blocking on submission (it's an entirely different interface and was new in 2019).
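For completeness, a minimal sketch of the same kind of "append one block" submission done through liburing (kernel >= 5.1, build with -luring; the file name and sizes are placeholders, not anything from the question):

    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(64, &ring, 0) < 0) { perror("io_uring_queue_init"); return 1; }

        int fd = open("testfile", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        void *buf = malloc(BLOCK_SIZE);
        memset(buf, 'x', BLOCK_SIZE);
        struct iovec iov = { .iov_base = buf, .iov_len = BLOCK_SIZE };

        /* queue one write at offset 0; the submission itself should not block */
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_writev(sqe, fd, &iov, 1, 0);
        io_uring_submit(&ring);

        /* later: reap the completion */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        printf("write completed: res=%d\n", cqe->res);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        close(fd);
        free(buf);
        return 0;
    }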
There are also filesystem specific quirks that can lead to io_submit() blocking/slowness situations:

- Writes on some filesystems can fail with ENOSPC due to lack of large amounts of contiguous free space.
- A filesystem may silently fall back to buffered I/O when it can't honour O_DIRECT rather than failing the open() call.
- Filesystems that support transparent compression (e.g. Btrfs) fall back to buffered I/O when O_DIRECT is requested on compressed files.
- With the 0.8 release, ZFS on Linux (ZoL) went from rejecting O_DIRECT to "accepting" it by falling back to buffered I/O (see point 3 in the commit message). There's further discussion from the lead up to the commit in the ZFS on Linux "Direct IO" GitHub issue. In the "NVMe Read Performance Issues with ZFS (submit_bio to io_schedule)" issue someone suggests they are getting closer to submitting a change that enables a proper zerocopy O_DIRECT. If such a change were accepted, it would end up in some future version of ZoL greater than 0.8.2.
- Filesystems typically still have to do synchronous metadata work for O_DIRECT allocating writes, which shows up as io_submit() delays. There's also an LWN article talking about an earlier version of the no-wait AIO patch set and some of the cases it doesn't cover (but note that buffered reads were covered by it in the end).
Hopefully this post helps someone (and if it does help you, could you upvote it? Thanks!).
Upvotes: 39
I speak as an author of proposed Boost.AFIO here.
Firstly, Linux KAIO (io_submit) is almost always blocking unless O_DIRECT is on and no extent allocation is required, and if O_DIRECT is on you need to be reading and writing in 4 KiB multiples on 4 KiB aligned boundaries, otherwise you force the device to do a read-modify-write. You will therefore gain nothing from Linux KAIO unless you rearchitect your application to be O_DIRECT and 4 KiB aligned I/O friendly.
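To make "O_DIRECT and 4 KiB aligned I/O friendly" concrete, here's a sketch; the file name is a placeholder and a synchronous pwrite() is used only to keep it short, but the same alignment rules apply to io_submit:

    /* The buffer address, the transfer size and the file offset must all be
     * multiples of the (assumed) 4 KiB block size. */
    #define _GNU_SOURCE          /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define ALIGN 4096

    int main(void)
    {
        int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) return 1;

        void *buf = NULL;
        if (posix_memalign(&buf, ALIGN, ALIGN) != 0) return 1;   /* aligned address */
        memset(buf, 'x', ALIGN);

        /* the size (ALIGN) and the offset (0, a multiple of ALIGN) are also aligned;
         * violating any of the three typically means EINVAL or a fallback to a
         * slower path, depending on the filesystem */
        ssize_t n = pwrite(fd, buf, ALIGN, 0);

        close(fd);
        free(buf);
        return n == ALIGN ? 0 : 1;
    }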
Secondly, never ever extend an output file during a write: you force an extent allocation and possibly a metadata flush. Instead, fallocate the file's maximum extent to some suitably large value up front, and keep an internal atomic counter of the end of file. That should reduce the problem to just extent allocation, which for ext4 is batched and lazy; more importantly, you won't be forcing a metadata flush. That should mean KAIO on ext4 will be async most of the time, but will unpredictably synchronise as it flushes delayed allocations to the journal.
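A sketch of that idea; the 1 GiB preallocation size, path handling and helper names are illustrative, not prescriptive:

    /* Preallocate once, then hand out append offsets from an atomic counter
     * so ordinary writes never extend the file. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdatomic.h>
    #include <unistd.h>

    #define PREALLOC_BYTES (1ULL << 30)   /* 1 GiB */
    #define BLOCK_SIZE     4096

    static _Atomic unsigned long long logical_eof;   /* bytes actually used */

    static int open_preallocated(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        /* allocate the extents up front; mode 0 also extends the file size */
        if (fallocate(fd, 0, 0, PREALLOC_BYTES) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    /* Each transaction reserves its append slot without touching the file size. */
    static unsigned long long reserve_block(void)
    {
        return atomic_fetch_add(&logical_eof, BLOCK_SIZE);
    }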
Thirdly, the way I'd probably approach your problem is to use atomic append (O_APPEND) without O_DIRECT or O_SYNC, so that you append updates to an ever-growing file in the kernel's page cache, which is very fast and concurrency safe. From time to time you then garbage collect: work out which data in the log file is stale and deallocate its extents using fallocate(FALLOC_FL_PUNCH_HOLE) so physical storage doesn't grow forever. This pushes the problem of coalescing writes to storage onto the kernel, where much effort has been spent on making it fast, and because it's an always-forward-progress write you will see writes hit physical storage in close to the order they were written, which makes power loss recovery straightforward. This latter option is how databases do it, and indeed how journalling filing systems do it, and despite the likely substantial redesign of your software, this algorithm has been proven to be the best balance of latency to durability in the general purpose case.
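A sketch of the append-plus-hole-punching scheme; record layout, sizes and error handling are left out, and the helper names are placeholders:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>    /* FALLOC_FL_PUNCH_HOLE, FALLOC_FL_KEEP_SIZE */
    #include <unistd.h>

    static int open_log(const char *path)
    {
        /* every write() lands atomically at the current end of file */
        return open(path, O_RDWR | O_APPEND | O_CREAT, 0644);
    }

    static ssize_t append_record(int fd, const void *rec, size_t len)
    {
        return write(fd, rec, len);   /* goes to the page cache, returns quickly */
    }

    /* Garbage collection: deallocate a stale, block-aligned region of the log.
     * The file's logical size is unchanged; reads of the hole return zeroes. */
    static int punch_stale(int fd, off_t offset, off_t len)
    {
        return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, len);
    }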
In case all the above seems like a lot of work, the OS already provides all three techniques rolled together in a highly tuned implementation, better known as memory maps: 4 KiB aligned I/O, O_DIRECT, never extending the file, all async I/O. On a 64-bit system, simply fallocate the file to a very large size and mmap it into memory. Read and write as you see fit. If your I/O patterns confuse the kernel's paging algorithms, which can happen, you may need a touch of madvise() here and there to encourage better behaviour. Less is more with madvise(), trust me.
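A sketch of the mmap approach; the size, path and the particular madvise() hint are illustrative only:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define FILE_BYTES (1ULL << 30)   /* 1 GiB, pick something "very large" */

    int main(void)
    {
        int fd = open("testfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;
        if (fallocate(fd, 0, 0, FILE_BYTES) != 0) return 1;   /* never extend later */

        unsigned char *map = mmap(NULL, FILE_BYTES, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) return 1;

        /* optional hint: we expect to touch the file mostly front-to-back */
        madvise(map, FILE_BYTES, MADV_SEQUENTIAL);

        /* "write" block 0 and "read" it back by plain memory access; the kernel
         * pages data in and writes it back asynchronously on our behalf */
        memset(map, 'x', 4096);
        unsigned char first = map[0];

        munmap(map, FILE_BYTES);
        close(fd);
        return first == 'x' ? 0 : 1;
    }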
An awful lot of people try to duplicate mmaps using various O_DIRECT algorithms without realising that mmaps can already do everything they need. I'd suggest exploring mmaps first; if Linux won't behave, try FreeBSD, which has a much more predictable file I/O model; only then delve into the realm of rolling your own I/O solution. Speaking as someone who does these all day long, I'd strongly recommend you avoid them whenever possible; filing systems are pits of devils of quirky and weird behaviour. Leave the never-ending debugging to someone else.
Upvotes: 14