Reputation: 1049
I have doubts regarding disk sectors and blocks. A sector is a unit of normally 512 bytes or 1k, 2k, 4k etc., depending on the hardware. A filesystem block is a group of sectors.
Suppose I am storing a file of 5KB. How will this be written onto the disk if a sector is 512 bytes and a block is 4KB?
4KB of that file is written into one block, and the remaining 1KB is written into another 4KB block. Now 3KB of that second block is unusable.
Will it be usable in the future, or will it be wasted? If I write ten 5KB files to the disk, will 30KB be wasted, or does this 30KB count towards disk usage?
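For example, I believe the difference should show up when comparing a file's apparent size with its actual disk usage (assuming a filesystem with 4KB blocks, such as ext4; the file name is just a placeholder):
$ dd if=/dev/zero of=test.dat bs=1K count=5
$ sync
$ du --apparent-size -B1 test.dat
5120    test.dat
$ du -B1 test.dat
8192    test.dat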
Upvotes: 3
Views: 6461
Reputation: 22497
It is a well-established fact that files are stored on disk in multiples of the "block" size.
The concept of a block began as a simple way for physical sectors on disk to be represented logically in the filesystem. Each sector had its own header, data area and ECC which made it the smallest piece of disk that could be independently represented logically.
As time went by, with the advent of caches on the HDD controller, it became easier to have logical blocks spanning multiple physical sectors. This increased on-disk sequential I/O, resulting in better throughput.
Today, a block is the smallest unit of disk-space that the filesystem allocates. Files are typically stored using one or more blocks on disk.
For each file, the leftover space (if any) in the last block is used whenever the file is modified and "grows", requiring additional disk-space to store the newly added content.
Any additional space requirement (beyond what the free space in the current last block can accommodate) is satisfied by requesting additional blocks on the disk and logically linking the new set of blocks to continue following the current last block of the file.
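A rough way to observe this last-block slack being consumed (assuming an ext4 filesystem with 4 KiB blocks; file names are just placeholders):
$ dd if=/dev/zero of=demo bs=1K count=5          # create a 5 KiB file
$ sync
$ stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' demo
size=5120 bytes, allocated=16 blocks of 512 bytes
$ dd if=/dev/zero bs=1K count=2 >> demo          # grow by 2 KiB into the slack
$ sync
$ stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' demo
size=7168 bytes, allocated=16 blocks of 512 bytes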
An advantage of allocating disk-space in blocks is that fragmentation is reduced. Consider the alternative, where there is no concept of blocks and disk-space is allocated exactly as required, i.e. the amount of disk-space allocated is exactly the file size.
In such a setup, each time even a single character is added to the file, one would need to find a free fragment of disk-space (wherever it happens to be), write the new content there, and logically "link" that fragment to the end of the file.
All this meta-data, i.e. the additional information about the "links", requires disk-space too. This constitutes a fixed overhead for each such "link", and hence it is imperative that such "links" be kept to a minimum. The concept of allocating disk-space in "blocks" limits this overhead to a pre-determined amount:
Maximum number of files on disk = raw disk-space / block-size
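As a rough worked example (the 500 GB figure is purely illustrative), with 4096-byte blocks, at most this many non-empty files can exist on the disk:
$ echo $(( 500 * 1000 * 1000 * 1000 / 4096 ))
122070312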
Also, reading a file scattered across such fragments requires random seeking, which reduces throughput, as repositioning the disk head is the most time-consuming task involved in disk I/O. Frequent random seeking is also likely to wear out the disk faster (remember dancing HDDs?) and must be avoided as much as possible.
Further advantages of this approach:
- Using blocks, disk reads are sequential up to the block size. Fewer seeks = higher read throughput.
- Blocks can be mapped onto memory pages in a simple way, which results in higher write throughput as well.
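One quick way to see this block/page relationship on a typical Linux box (the device name is an assumption; adjust for your system):
$ getconf PAGESIZE
4096
$ sudo blockdev --getbsz /dev/sda
4096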
Upvotes: 6
Reputation: 18467
The first thing to note is that the block size is almost always larger than your sector size.
To determine your sector size, run the following command:
root@ubuntu:~# fdisk -l | grep -E "Sector size"
Sector size (logical/physical): 512 bytes / 512 bytes
The sector size will almost always be either 512 bytes or 4096 bytes, depending on when you purchased your drive.
To determine your block size, run the following command:
root@ubuntu:~# blockdev --getbsz /dev/sda
4096
The block size will typically be 4096 bytes on most modern OSes. You can change this if desired.
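Changing it generally means re-creating the filesystem; for example, on ext4 the block size can be chosen at mkfs time (this is destructive, and the device name below is only a placeholder):
root@ubuntu:~# mkfs.ext4 -b 4096 /dev/sdXN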
Any file that does not completely fill its last block will result in some wasted space. This is normal and expected.
http://linux.die.net/man/8/blockdev
http://www.linuxforums.org/forum/miscellaneous/5654-linux-disk-block-size-help-please.html
Upvotes: 1