· Device initialization and release;
· Providing various device services;
· Exchanging data between the kernel and the device;
· Detecting and handling errors that occur during device operation.
A Linux device driver is organized as a set of functions that perform different tasks. Through these functions, Linux makes device operations look like file operations. From the application's perspective, a hardware device is just a device file, which the application can manipulate like an ordinary file with calls such as open(), close(), read(), and write().
Linux divides devices into two main categories: character devices and block devices. A character device sends and receives data character by character, while a block device transfers entire buffers of data. When a read/write request is issued to a character device, the actual hardware I/O generally happens immediately; a block device, by contrast, uses a region of system memory as a buffer. If a user process's request can be satisfied from the buffer, the requested data is returned; otherwise the driver's request function is called to perform the actual I/O. Block devices are mainly intended for slow devices such as disks.
1. Memory allocation
Because a Linux driver runs in the kernel, a device driver cannot use the user-level malloc()/free() functions to allocate and release memory; it must use the kernel-level kmalloc()/kfree() functions instead. The prototype of kmalloc() is:
void *kmalloc(size_t size, int priority);
The size parameter is the number of bytes to allocate; kmalloc() can allocate at most about 128 KB. The priority parameter specifies what should happen if kmalloc() cannot satisfy the request immediately:
GFP_KERNEL means wait, that is, kmalloc() may sleep while the kernel moves pages to the swap area to satisfy your memory needs. GFP_ATOMIC means do not wait: if memory cannot be allocated immediately, NULL is returned. On success the return value points to the starting address of the allocated memory; on failure it is NULL.
The memory allocated by kmalloc() needs to be released by the kfree() function. kfree() is defined as:
#define kfree(n) kfree_s((n), 0)
The prototype of the kfree_s() function is:
void kfree_s(void *ptr, int size);
The ptr parameter is the pointer to the allocated memory returned by kmalloc(), and size is the number of bytes to release; if size is 0, the kernel determines the size of the block automatically.
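As an illustration, the allocation pattern just described might look like the following kernel-side sketch. It follows the 2.4-era API discussed in the text; `my_dev_init`, `my_dev_cleanup`, and `BUF_SIZE` are invented names, and the fragment runs only inside the kernel, not as a normal program:

```c
#include <linux/slab.h>     /* kmalloc(), kfree() */
#include <linux/errno.h>    /* ENOMEM */

#define BUF_SIZE 1024       /* well under kmalloc()'s ~128 KB limit */

static char *buf;

static int my_dev_init(void)
{
    /* GFP_KERNEL: the call may sleep until memory becomes available */
    buf = kmalloc(BUF_SIZE, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;     /* kmalloc() returns NULL on failure */
    return 0;
}

static void my_dev_cleanup(void)
{
    kfree(buf);             /* release what kmalloc() allocated */
}
```

In interrupt context a driver would pass GFP_ATOMIC instead, since sleeping is not allowed there.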
2. Interrupts
Many devices generate interrupts, so the driver of such a device must provide an interrupt service routine for the interrupt requests raised by the hardware. Just as it registers its basic entry points, the driver asks the kernel to associate a specific interrupt request with the interrupt service routine. In Linux, this is done with the request_irq() function:
int request_irq(unsigned int irq, void (*handler)(int), unsigned long type, char *name);
The irq parameter is the interrupt request number; handler is a pointer to the interrupt service routine; type selects a normal or a fast interrupt (for a normal interrupt, type value 0, the kernel may run the scheduler after the service routine returns to decide which process runs next; for a fast interrupt, type value SA_INTERRUPT, the interrupted program resumes immediately after the service routine returns); and name is the name of the device driver.
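A hedged sketch of how a driver might hook an interrupt line follows. Note that the actual 2.4 kernel request_irq() takes five arguments (the handler also receives a dev_id and a pt_regs pointer), a slightly richer form than the simplified prototype quoted above; MY_IRQ, my_isr, and "mydev" are invented for illustration, and the code runs only inside the kernel:

```c
#include <linux/sched.h>       /* request_irq() declaration in 2.4 */
#include <linux/interrupt.h>   /* SA_INTERRUPT */

#define MY_IRQ 7               /* example interrupt line */

static void my_isr(int irq, void *dev_id, struct pt_regs *regs)
{
    /* acknowledge the hardware and do the minimum work here */
}

static int my_setup_irq(void)
{
    /* flags of 0 request a normal interrupt; SA_INTERRUPT would
     * request a fast one, as described in the text */
    int ret = request_irq(MY_IRQ, my_isr, 0, "mydev", NULL);
    if (ret)
        return ret;            /* line busy or irq number invalid */
    return 0;
}

static void my_teardown_irq(void)
{
    free_irq(MY_IRQ, NULL);    /* release the line on unload */
}
```

Registering in the driver's init function and freeing on unload, as sketched here, prevents a stale handler from being called after the module is gone.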
4. Block device driver
Writing a block device driver is a substantial project, considerably harder than writing a character device driver: thousands of lines of code may handle only a simple block device, while a character device can sometimes be handled in dozens of lines. Completing this work therefore requires solid groundwork. The following example, the driver for the mtdblock block device, shows how to write a block device driver; we explain the process by analyzing its code (for reasons of length, much of the code is omitted and only the essential skeleton is retained):
#include <linux/mtd/mtd.h>
#include <linux/devfs_fs_kernel.h>
static void mtd_notify_add(struct mtd_info* mtd);
static void mtd_notify_remove(struct mtd_info* mtd);
static struct mtd_notifier notifier = {
mtd_notify_add,
mtd_notify_remove,
NULL
} ;
static devfs_handle_t devfs_dir_handle = NULL;
static devfs_handle_t devfs_rw_handle[MAX_MTD_DEVICES];
static struct mtdblk_dev {
struct mtd_info *mtd; /* Locked */
int count;
struct semaphore cache_sem;
unsigned char *cache_data;
unsigned long cache_offset;
unsigned int cache_size;
enum { STATE_EMPTY, STATE_CLEAN, STATE_DIRTY } cache_state;
} *mtdblks[MAX_MTD_DEVICES];
static spinlock_t mtdblks_lock;
/* this lock is used just in kernels >= 2.5.x */
static spinlock_t mtdblock_lock;
static int mtd_sizes[MAX_MTD_DEVICES];
static int mtd_blksizes[MAX_MTD_DEVICES];
static void erase_callback(struct erase_info *done)
{
wait_queue_head_t *wait_q = (wait_queue_head_t *)done->priv;
wake_up(wait_q);
}
static int erase_write (struct mtd_info *mtd, unsigned long pos,
int len, const char *buf)
{
struct erase_info erase;
DECLARE_WAITQUEUE(wait, current);
wait_queue_head_t wait_q;
size_t retlen;
int ret;
/*
* First, let's erase the flash block.
*/
init_waitqueue_head(&wait_q);
erase.mtd = mtd;
erase.callback = erase_callback;
erase.addr = pos;
erase.len = len;
erase.priv = (u_long)&wait_q;
set_current_state(TASK_INTERRUPTIBLE);
add_wait_queue(&wait_q, &wait);
ret = MTD_ERASE(mtd, &erase);
if (ret) {
set_current_state(TASK_RUNNING);
remove_wait_queue(&wait_q, &wait);
printk (KERN_WARNING "mtdblock: erase of region [0x%lx, 0x%x] on \"%s\" failed\n",
pos, len, mtd->name);
return ret;
}
schedule(); /* Wait for erase to finish. */
remove_wait_queue(&wait_q, &wait);
/*
* Next, write the data to flash.
*/
ret = MTD_WRITE (mtd, pos, len, &retlen, buf);
if (ret)
return ret;
if (retlen != len)
return -EIO;
return 0;
}
static int write_cached_data (struct mtdblk_dev *mtdblk)
{
struct mtd_info *mtd = mtdblk->mtd;
int ret;
if (mtdblk->cache_state != STATE_DIRTY)
return 0;
DEBUG(MTD_DEBUG_LEVEL2, "mtdblock: writing cached data for \"%s\" "
"at 0x%lx, size 0x%x\n", mtd->name,
mtdblk->cache_offset, mtdblk->cache_size);
ret = erase_write (mtd, mtdblk->cache_offset,
mtdblk->cache_size, mtdblk->cache_data);
if (ret)
return ret;
mtdblk->cache_state = STATE_EMPTY;
return 0;
}
static int do_cached_write (struct mtdblk_dev *mtdblk, unsigned long pos,
int len, const char *buf)
{
…
}
static int do_cached_read (struct mtdblk_dev *mtdblk, unsigned long pos,
int len, char *buf)
{
…
}
static int mtdblock_open(struct inode *inode, struct file *file)
{
…
}
static release_t mtdblock_release(struct inode *inode, struct file *file)
{
int dev;
struct mtdblk_dev *mtdblk;
DEBUG(MTD_DEBUG_LEVEL1, "mtdblock_release\n");
if (inode == NULL)
release_return(-ENODEV);
dev = minor(inode->i_rdev);
mtdblk = mtdblks[dev];
down(&mtdblk->cache_sem);
write_cached_data(mtdblk);
up(&mtdblk->cache_sem);
spin_lock(&mtdblks_lock);
if (!--mtdblk->count) {
/* It was the last usage. Free the device */
mtdblks[dev] = NULL;
spin_unlock(&mtdblks_lock);
if (mtdblk->mtd->sync)
mtdblk->mtd->sync(mtdblk->mtd);
put_mtd_device(mtdblk->mtd);
vfree(mtdblk->cache_data);
kfree(mtdblk);
} else {
spin_unlock(&mtdblks_lock);
}
DEBUG(MTD_DEBUG_LEVEL1, "ok\n");
BLK_DEC_USE_COUNT;
release_return(0);
}
/*
* This is a special request_fn because it is executed in a process context
* to be able to sleep independently of the caller. The
* io_request_lock (for <2.5) or queue_lock (for >=2.5) is held upon entry
* and exit. The head of our request queue is considered active so there is
* no need to dequeue requests before we are done.
*/
static void handle_mtdblock_request(void)
{
struct request *req;
struct mtdblk_dev *mtdblk;
unsigned int res;
for (;;) {
INIT_REQUEST;
req = CURRENT;
spin_unlock_irq(QUEUE_LOCK(QUEUE));
mtdblk = mtdblks[minor(req->rq_dev)];
res = 0;
if (minor(req->rq_dev) >= MAX_MTD_DEVICES)
panic("%s : minor out of bound", __FUNCTION__);
if (!IS_REQ_CMD(req))
goto end_req;
if ((req->sector + req->current_nr_sectors) > (mtdblk->mtd->size >> 9))
goto end_req;
// Handle the request
switch (rq_data_dir(req))
{
int err;
case READ:
down(&mtdblk->cache_sem);
err = do_cached_read (mtdblk, req->sector << 9,
req->current_nr_sectors << 9,
req->buffer);
up(&mtdblk->cache_sem);
if (!err)
res = 1;
break;
case WRITE:
// Read only device
if ( !(mtdblk->mtd->flags & MTD_WRITEABLE) )
break;
// Do the write
down(&mtdblk->cache_sem);
err = do_cached_write (mtdblk, req->sector << 9,req->current_nr_sectors << 9, req->buffer);
up(&mtdblk->cache_sem);
if (!err)
res = 1;
break;
}
end_req:
spin_lock_irq(QUEUE_LOCK(QUEUE));
end_request(res);
}
}
static volatile int leaving = 0;
static DECLARE_MUTEX_LOCKED(thread_sem);
static DECLARE_WAIT_QUEUE_HEAD(thr_wq);
int mtdblock_thread(void *dummy)
{
…
}
#define RQFUNC_ARG request_queue_t *q
static void mtdblock_request(RQFUNC_ARG)
{
/* Don't do anything, except wake the thread if necessary */
wake_up(&thr_wq);
}
static int mtdblock_ioctl(struct inode * inode, struct file * file,
unsigned int cmd, unsigned long arg)
{
struct mtdblk_dev *mtdblk;
mtdblk = mtdblks[minor(inode->i_rdev)];
switch (cmd) {
case BLKGETSIZE: /* Return device size */
return put_user((mtdblk->mtd->size >> 9), (unsigned long *) arg);
case BLKFLSBUF:
if(!capable(CAP_SYS_ADMIN))
return -EACCES;
fsync_dev(inode->i_rdev);
invalidate_buffers(inode->i_rdev);
down(&mtdblk->cache_sem);
write_cached_data(mtdblk);
up(&mtdblk->cache_sem);
if (mtdblk->mtd->sync)
mtdblk->mtd->sync(mtdblk->mtd);
return 0;
default:
return -EINVAL;
}
}
static struct block_device_operations mtd_fops =
{
owner: THIS_MODULE,
open: mtdblock_open,
release: mtdblock_release,
ioctl: mtdblock_ioctl
};
static void mtd_notify_add(struct mtd_info* mtd)
{
…
}
static void mtd_notify_remove(struct mtd_info* mtd)
{
if (!mtd || mtd->type == MTD_ABSENT)
return;
devfs_unregister(devfs_rw_handle[mtd->index]);
}
int __init init_mtdblock(void)
{
int i;
spin_lock_init(&mtdblks_lock);
/* this lock is used just in kernels >= 2.5.x */
spin_lock_init(&mtdblock_lock);
#ifdef CONFIG_DEVFS_FS
if (devfs_register_blkdev(MTD_BLOCK_MAJOR, DEVICE_NAME, &mtd_fops))
{
printk(KERN_NOTICE "Can't allocate major number %d for Memory Technology Devices.\n",
MTD_BLOCK_MAJOR);
return -EAGAIN;
}
devfs_dir_handle = devfs_mk_dir(NULL, DEVICE_NAME, NULL);
register_mtd_user(&notifier);
#else
if (register_blkdev(MAJOR_NR,DEVICE_NAME,&mtd_fops)) {
printk(KERN_NOTICE "Can't allocate major number %d for Memory Technology Devices.\n",
MTD_BLOCK_MAJOR);
return -EAGAIN;
}
#endif
/* We fill it in at open() time. */
for (i=0; i< MAX_MTD_DEVICES; i++) {
mtd_sizes[i] = 0;
mtd_blksizes[i] = BLOCK_SIZE;
}
init_waitqueue_head(&thr_wq);
/* Allow the block size to default to BLOCK_SIZE. */
blksize_size[MAJOR_NR] = mtd_blksizes;
blk_size[MAJOR_NR] = mtd_sizes;
BLK_INIT_QUEUE(BLK_DEFAULT_QUEUE(MAJOR_NR), &mtdblock_request, &mtdblock_lock);
kernel_thread (mtdblock_thread, NULL, CLONE_FS|CLONE_FILES|CLONE_SIGHAND);
return 0;
}
static void __exit cleanup_mtdblock(void)
{
leaving = 1;
wake_up(&thr_wq);
down(&thread_sem);
#ifdef CONFIG_DEVFS_FS
unregister_mtd_user(&notifier);
devfs_unregister(devfs_dir_handle);
devfs_unregister_blkdev(MTD_BLOCK_MAJOR, DEVICE_NAME);
#else
unregister_blkdev(MAJOR_NR, DEVICE_NAME);
#endif
blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
blksize_size[MAJOR_NR] = NULL;
blk_size[MAJOR_NR] = NULL;
}
module_init(init_mtdblock);
module_exit(cleanup_mtdblock);
From the source code above, we can see that block devices register and release themselves in much the same way that character devices do with register_chrdev() and unregister_chrdev():
int register_blkdev(unsigned int major, const char *name, struct block_device_operations *bdops);
int unregister_blkdev(unsigned int major, const char *name);
However, where register_chrdev() takes a pointer to a file_operations structure, register_blkdev() takes a pointer to a block_device_operations structure. This structure defines the same open, release, and ioctl methods as for character devices, but no read or write operations, because all I/O involving block devices is normally buffered by the system.
The block driver must ultimately provide a mechanism to perform the actual block I/O operations. In Linux, the methods used for these I/O operations are called "requests". During the block device registration process, the request queue needs to be initialized. This is done through blk_init_queue, which creates the queue and associates the driver's request function with the queue. During the cleanup phase of the module, the blk_cleanup_queue function should be called.
The relevant code in this example is:
BLK_INIT_QUEUE(BLK_DEFAULT_QUEUE(MAJOR_NR), &mtdblock_request, &mtdblock_lock);
blk_cleanup_queue(BLK_DEFAULT_QUEUE(MAJOR_NR));
Each device has a default request queue, which can be obtained when necessary with the BLK_DEFAULT_QUEUE(major) macro. This macro looks up the corresponding default queue in the global array of blk_dev_struct structures (the array is called blk_dev), which is maintained by the kernel and indexed by major device number. The blk_dev_struct structure is defined as follows:
struct blk_dev_struct {
/*
* queue_proc has to be atomic
*/
request_queue_t request_queue;
queue_proc *queue;
void *data;
};
The request_queue member contains the I/O request queue after initialization, and the data member can be used by the driver to save some private data.
request_queue is defined as:
struct request_queue
{
/*
* the queue request freelist, one for reads and one for writes
*/
struct request_list rq[2];
/*
* Together with queue_head for cacheline sharing
*/
struct list_head queue_head;
elevator_t elevator;
request_fn_proc * request_fn;
merge_request_fn * back_merge_fn;
merge_request_fn * front_merge_fn;
merge_requests_fn * merge_requests_fn;
make_request_fn * make_request_fn;
plug_device_fn * plug_device_fn;
/*
* The queue owner gets to use this for whatever they like.
* ll_rw_blk doesn't touch it.
*/
void *queuedata;
/*
* This is used to remove the plug when tq_disk runs.
*/
struct tq_struct plug_tq;
/*
* Boolean that indicates whether this queue is plugged or not.
*/
char plugged;
/*
* Boolean that indicates whether current_request is active or
* not.
*/
char head_active;
/*
* Is meant to protect the queue in the future instead of
* io_request_lock
*/
spinlock_t queue_lock;
/*
* Tasks wait here for free request
*/
wait_queue_head_t wait_for_request;
};
(Figure omitted: the relationship among blk_dev, blk_dev_struct, and request_queue.)
(Figure omitted: the registration and release process of block devices.)
5. Summary
This chapter has described the entry functions of a Linux device driver, along with memory allocation and interrupt handling in drivers, and has used examples to illustrate how to develop drivers for character devices and block devices.
Latest update time: 2024-11-16 15:38