Detailed Analysis of the S3C2440 Block Device Driver Framework (Twenty)

Publisher: 心满愿望 | Last updated: 2020-07-17 | Source: eefocus

Goal of this section:


By analyzing the block device driver framework under the 2.6 kernel, we learn how to write a block device driver.


1. What we learned before were character device drivers. Let's review them first.


Character device driver:


When the application layer reads or writes (read()/write()) a character device, data is transferred byte by byte (character by character), with no buffer in between. The amount of data involved is small, and the device cannot be accessed at random positions. Examples: buttons, LEDs, mice, keyboards, etc.


2. Next, we will start learning block device drivers


Block devices:


A block device is a type of I/O device. When the application layer reads or writes such a device, data is transferred in units of the sector size; if the amount of data to be read or written is smaller than a sector, a buffer is required. Data at any position on the device can be accessed randomly. Examples: ordinary files (*.txt, *.c, etc.), hard disks, USB flash drives, and SD cards.


3. Block device structure:


Segment: consists of several blocks; it corresponds to a memory page, or part of a memory page, in the Linux memory-management mechanism.

Block: the basic unit of data handling for the kernel or the file system; usually made up of one or more sectors. (This is a Linux software concept.)

Sector: the basic unit of the block device hardware; usually between 512 bytes and 32768 bytes, with 512 bytes as the default. (A worked example follows below.)
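
To get a concrete feel for the three units (assuming the common case of 4 KB pages, 1 KB blocks and 512-byte sectors): one segment spans at most one 4 KB page, i.e. up to four 1 KB blocks, and each 1 KB block covers 1024 / 512 = 2 sectors.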


4. Let's take a txt file as an example and briefly walk through the block device I/O process:


For example, suppose we want to write a very small amount of data to some position in a txt file. Since a block device transfers data in whole sectors, but we must not corrupt the other data in the file, a buffer is introduced: the whole sector is first read into the buffer, the buffered data is modified, and then the entire buffer is written back to the sector that backs that part of the txt file. If we write small amounts of data to the file many times, the same sector would be read and written over and over, wasting a lot of time on disk I/O. The kernel therefore provides a request-queue mechanism: before the txt file is closed, the accumulated read/write requests are optimized, sorted and merged, which improves the efficiency of accessing the disk.
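
As a rough user-space illustration of this read-modify-write idea (only a sketch: the kernel does this with buffer_head/bio structures, not with these system calls, and the 512-byte sector size is an assumption):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_SIZE 512

/* Write 'len' bytes at byte offset 'pos' without disturbing the rest of the
 * sector. Assumes the write stays within a single sector. */
static int write_small(int fd, off_t pos, const char *data, size_t len)
{
    char buf[SECTOR_SIZE];
    off_t sec_start = (pos / SECTOR_SIZE) * SECTOR_SIZE;

    if (pread(fd, buf, SECTOR_SIZE, sec_start) != SECTOR_SIZE)   /* read the whole sector */
        return -1;
    memcpy(buf + (pos - sec_start), data, len);                  /* modify only a few bytes */
    if (pwrite(fd, buf, SECTOR_SIZE, sec_start) != SECTOR_SIZE)  /* write the whole sector back */
        return -1;
    return 0;
}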


(PS: In the kernel, the elv_merge() function is used to optimize, sort, and merge the requests in the queue; it is analyzed later.)


5. Next, start analyzing the block device framework


When we write data to a *.txt file, the file system converts this into accesses to sectors on the block device by calling the ll_rw_block() function; from this function we enter the device layer.


5.1 Let's analyze the ll_rw_block() function (/fs/buffer.c) first


void ll_rw_block(int rw, int nr, struct buffer_head *bhs[])
//rw: read/write flag, nr: length of bhs[], bhs[]: array of buffers to be read/written
{
    int i;

    for (i = 0; i < nr; i++) {
        struct buffer_head *bh = bhs[i];        //take each of the nr buffer_heads
        ... ...
        if (rw == WRITE || rw == SWRITE) {
            if (test_clear_buffer_dirty(bh)) {
                ... ...
                submit_bh(WRITE, bh);           //submit the buffer_head with the WRITE flag
                continue;
            }
        } else {
            if (!buffer_uptodate(bh)) {
                ... ...
                submit_bh(rw, bh);              //submit the buffer_head with the other flags
                continue;
            }
        }
        unlock_buffer(bh);
    }
}

 

The buffer_head structure is the buffer descriptor; it holds all the information about a buffer. The structure is as follows:


struct buffer_head {
    unsigned long b_state;                  //buffer state flags
    struct buffer_head *b_this_page;        //next buffer in the same page (circular list)
    struct page *b_page;                    //the page this buffer is mapped into
    sector_t b_blocknr;                     //logical block number
    size_t b_size;                          //block size
    char *b_data;                           //pointer to the data within the page

    struct block_device *b_bdev;            //the block device (represents an independent disk device)

    bh_end_io_t *b_end_io;                  //I/O completion method
    void *b_private;                        //data reserved for b_end_io

    struct list_head b_assoc_buffers;       //list of associated mappings
    struct address_space *b_assoc_map;      //the address_space this buffer is associated with

    atomic_t b_count;                       //buffer usage count
};
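
To make the role of the buffer_head a bit more concrete, here is a minimal sketch of how a 2.6-kernel caller might read one block through ll_rw_block() (assumptions: bdev, blocknr and blocksize are already known; real file systems normally go through wrappers such as sb_bread() instead):

struct buffer_head *bh;

bh = __getblk(bdev, blocknr, blocksize);    //get (or create) the buffer_head for this block
if (!buffer_uptodate(bh)) {
    ll_rw_block(READ, 1, &bh);              //submit the read request
    wait_on_buffer(bh);                     //sleep until b_end_io marks the I/O complete
}
/* bh->b_data now points at the block's contents; release it with brelse(bh) when done */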

 

5.2 Then we enter submit_bh(); the submit_bh() function is as follows:


int submit_bh(int rw, struct buffer_head *bh)
{
    struct bio *bio;                //define a bio (block input/output), i.e. one block-device I/O request
    ... ...
    bio = bio_alloc(GFP_NOIO, 1);   //allocate the bio

    /* Construct the bio from the buffer_head (bh) */
    bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9);  //starting sector = block number * (block size / 512)
    bio->bi_bdev = bh->b_bdev;                    //the corresponding block device
    bio->bi_io_vec[0].bv_page = bh->b_page;       //the physical page holding the buffer
    bio->bi_io_vec[0].bv_len = bh->b_size;        //length of the data (one block)
    bio->bi_io_vec[0].bv_offset = bh_offset(bh);  //byte offset of the buffer within the page

    bio->bi_vcnt = 1;               //number of bio_vec entries
    bio->bi_idx = 0;                //current index into bi_io_vec
    bio->bi_size = bh->b_size;      //total size of the I/O (one block)

    bio->bi_end_io = end_bio_bh_io_sync;    //set the I/O completion callback
    bio->bi_private = bh;                   //remember which buffer_head this bio belongs to

    ... ...
    submit_bio(rw, bio);            //submit the bio
    ... ...
}

The submit_bh() function builds a bio from the bh and then calls submit_bio() to submit it.
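
A quick worked example of the bi_sector arithmetic above (assuming the common case of 1 KB blocks on a device with 512-byte sectors): bh->b_size >> 9 = 1024 / 512 = 2, so logical block number 100 maps to starting sector bi_sector = 100 * 2 = 200.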


5.3 The submit_bio() function is as follows:


void submit_bio(int rw, struct bio *bio)
{
    ... ...
    generic_make_request(bio);
}

Finally, generic_make_request() is called to submit the bio to the request queue of the corresponding block device; generic_make_request() mainly handles the submission of the bio.


5.4 The generic_make_request() function is as follows:


void generic_make_request(struct bio *bio)
{
    if (current->bio_tail) {                    //bio_tail is not NULL: a bio is already being submitted
        *(current->bio_tail) = bio;             //append the current bio to the previous bio->bi_next
        bio->bi_next = NULL;                    //terminate the list: bio->bi_next = NULL
        current->bio_tail = &bio->bi_next;      //point bio_tail at this bio->bi_next, so the next bio is appended there
        return;
    }
    BUG_ON(bio->bi_next);
    do {
        current->bio_list = bio->bi_next;
        if (bio->bi_next == NULL)
            current->bio_tail = &current->bio_list;
        else
            bio->bi_next = NULL;
        __generic_make_request(bio);            //submit the bio
        bio = current->bio_list;
    } while (bio);
    current->bio_tail = NULL; /* deactivate */
}


From the code and comments above we can see that __generic_make_request() is only called on the first (outermost) entry into generic_make_request(), when current->bio_tail is still NULL; bios submitted recursively while that call is active are simply appended to the list and processed by the outer do/while loop.


__generic_make_request() first obtains the request queue q from the block_device that the bio belongs to, then checks whether that device is a partition; if it is, the sector address is remapped relative to the whole disk. Finally it calls the queue's member function make_request_fn to complete the submission of the bio.


5.5 The __generic_make_request() function is as follows:


static inline void __generic_make_request(struct bio *bio)
{
    request_queue_t *q;
    ... ...
    int ret;
    ... ...
    do {
        ... ...
        q = bdev_get_queue(bio->bi_bdev);   //get the request queue q from bio->bi_bdev
        ... ...
        ret = q->make_request_fn(q, bio);   //hand the bio to the queue's make_request_fn
    } while (ret);
}
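
The partition check mentioned above is done by blk_partition_remap(), called from __generic_make_request(). Shown here slightly simplified, so the exact code may differ between 2.6.x versions:

static inline void blk_partition_remap(struct bio *bio)
{
    struct block_device *bdev = bio->bi_bdev;

    if (bdev != bdev->bd_contains) {            //bdev is a partition, not the whole disk
        struct hd_struct *p = bdev->bd_part;
        bio->bi_sector += p->start_sect;        //shift the sector by the partition's start
        bio->bi_bdev = bdev->bd_contains;       //redirect the bio to the whole-disk device
    }
}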

What is this q->make_request_fn() function, and what does it do? Searching the kernel source for make_request_fn shows that it is initialized from the mfn parameter inside the blk_queue_make_request() function.

Searching further for callers of blk_queue_make_request(), to see what is passed as mfn, we find that it is called from the blk_init_queue_node() function, which passes __make_request.

So in the end, q->make_request_fn() executes the __make_request() function.
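
The relevant lines in block/ll_rw_blk.c of the 2.6 kernels look roughly like this (heavily simplified; details vary between 2.6.x versions):

request_queue_t *blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{
    request_queue_t *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
    ... ...
    q->request_fn = rfn;                        //the driver's request handler, see section 7 below
    q->queue_lock = lock;
    ... ...
    blk_queue_make_request(q, __make_request);  //this is where make_request_fn becomes __make_request
    ... ...
    return q;
}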



5.6 Let's take a look at what the __make_request() function does with the submitted request queue q and bio


static int __make_request(request_queue_t *q, struct bio *bio)
{
    struct request *req;        //a request on the block device's own request queue
    ... ...
    //(1) Try to merge the incoming bio into an existing request on queue q (sorting/merging is done by the elevator)
    el_ret = elv_merge(q, &req, bio);
    ... ...
    init_request_from_bio(req, bio);    //merging failed: build a new request from this bio

    add_request(q, req);                //add the new request to the request queue q
    ... ...
    __generic_unplug_device(q);         //(2) unplug the queue and run its request handler
}

1) The elv_merge() function above implements the kernel's elevator (I/O scheduler) merge algorithm. Like a real elevator, which serves floors in one direction before turning around, it reorders requests instead of handling them in arrival order.


For example, suppose there are 6 requests in the request queue, in arrival order:


4 (write), 2 (read), 5 (write), 3 (read), 6 (write), 1 (read)    // write: write data out to a sector, read: read data in from a sector


After sorting and merging, the writes to sectors 4, 5 and 6 are carried out together first, and then the reads of sectors 1, 2 and 3.


2) The __generic_unplug_device() function above is as follows:

 

void __generic_unplug_device(request_queue_t *q)
{
    if (unlikely(blk_queue_stopped(q)))
        return;

    if (!blk_remove_plug(q))
        return;

    q->request_fn(q);
}

Finally, the queue's request_fn() member is called, i.e. the request queue's processing function provided by the driver.


6. Summary of the framework analyzed in this section: ll_rw_block() -> submit_bh() -> submit_bio() -> generic_make_request() -> __generic_make_request() -> q->make_request_fn() (= __make_request()) -> elv_merge() / add_request() -> __generic_unplug_device() -> q->request_fn().


7. Here q->request_fn is a function pointer of type request_fn_proc, the request-handling function type.
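
In the 2.6 headers (include/linux/blkdev.h) these types are declared along the following lines (quoted from memory, so treat it as a sketch):

typedef void (request_fn_proc) (request_queue_t *q);                   //the driver's request handler
typedef int (make_request_fn) (request_queue_t *q, struct bio *bio);   //e.g. __make_request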



7.1 Where does this q->request_fn of the request queue come from?


We can refer to the in-tree block device driver drivers/block/xd.c.


In its entry function we find the following:


static struct request_queue *xd_queue;                  //define a request queue xd_queue

xd_queue = blk_init_queue(do_xd_request, &xd_lock);     //allocate and initialize the request queue

 

The prototype of the blk_init_queue() function is as follows:


request_queue_t *blk_init_queue(request_fn_proc *rfn, spinlock_t *lock)
// *rfn:  pointer to the request-handling function (of type request_fn_proc) that processes the requests on the queue
// *lock: spinlock protecting access to the queue, defined with the DEFINE_SPINLOCK() macro

Obviously, do_xd_request() gets hooked up as xd_queue->request_fn, and the initialized request queue is returned.
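
For orientation, a typical 2.6-style request handler looks roughly like the sketch below (this is not the real xd.c code; my_dev_transfer() is a hypothetical helper that moves the data for one request):

static void do_xd_request(request_queue_t *q)
{
    struct request *req;

    while ((req = elv_next_request(q)) != NULL) {   //take requests off the queue one at a time
        if (!blk_fs_request(req)) {                 //skip anything that is not a normal fs request
            end_request(req, 0);
            continue;
        }
        //req->sector: start sector, req->current_nr_sectors: number of sectors,
        //req->buffer: data buffer, rq_data_dir(req): READ or WRITE
        my_dev_transfer(req->sector, req->current_nr_sectors,
                        req->buffer, rq_data_dir(req));   //hypothetical transfer helper
        end_request(req, 1);                        //complete the request, marking it "uptodate"
    }
}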
