Zircon - Fuchsia Kernel Analysis - Boot (Platform Initialization)


Introduction


Zircon is the kernel of Google's new operating system Fuchsia. It is based on LK (Little Kernel), which has long served as the core of Android bootloaders; Zircon adds an MMU, system calls, and other features on top of it.


Zircon currently supports two CPU architectures, x86/x64 and ARM. Below I take ARM64 as an example and walk line by line through the early startup of the Zircon kernel, to see how Zircon completes platform initialization on ARM64. This stage is implemented in assembly.


I should say up front that I normally work on Android development and my knowledge of ARM is limited. This reading of the source also draws on other material, and some mistakes are inevitable; I hope readers will bear with me.

Annotated Zircon kernel source code (incomplete):

https://github.com/ganyao114/zircon/tree/doc


ARM64


First, we need to briefly review the ARM64 background involved. Although I have had some brief contact with embedded ARM before, ARM64 is indeed much more complicated in comparison.



Privileged Mode/Exception Level


In ARM32, seven processor modes such as SVC distinguish the CPU's working mode. Low-level software such as the operating system runs in a high-privilege mode, while ordinary user programs run in the low-privilege user mode.


ARM64 is similar, but the modes are unified into four exception levels, EL0 through EL3.


EL Architecture:


  • Privilege decreases from EL3 to EL2 to EL1 to EL0; EL0 is the unprivileged execution level.

  • EL2 has no Secure state, only Non-secure state. EL3 exists only in Secure state and controls switching EL0 and EL1 between the two security states.

  • EL0 and EL1 must be implemented, EL2 and EL3 are optional.



Regarding the actual use of the four privilege levels in system software:

| EL | Purpose|
|------|------|
| EL0 | Run user programs|
| EL1 | Operating system kernel|
| EL2 | Hypervisor (can be understood as running multiple virtual kernels on it) |
| EL3 | Secure Monitor (ARM Trusted Firmware) |

Impact of Secure State:



Multi-core


In a multi-core processor, the CPU core with ID 0 is the prime core, also called the BSP (bootstrap processor); the other cores are non-prime cores, or APs (application processors). The prime core is responsible for power-on and kernel initialization, while the AP cores only need to complete their own configuration.

ARM MP architecture diagram:


  • CPU cores communicate with each other via inter-processor interrupts (IPIs).

  • Each CPU core can see the same memory bus and data. Generally, L1 cache is exclusive to each core, and L2/L3 is shared by all cores.

  • All CPU cores share the same I/O peripherals and interrupt controller, which distributes interrupts to the appropriate CPU core based on the configuration.



Registers


This article only describes the registers that appear in the boot code below.


Common patterns in the kernel code


With the brief ARM64 background above, we can make sense of the boot code. The following patterns appear frequently.


Determine whether it is a prime CPU core:

mrs     cpuid, mpidr_el1
ubfx    cpuid, cpuid, #0, #15 /* mask Aff0 and Aff1 fields */ // Aff0 records the CPU ID, Aff1 records hyper-threading support
cbnz    cpuid, .Lno_save_bootinfo // if this is not the prime core (core 0), it does not boot the kernel and does not need to save boot parameters; go straight to per-core initialization


The first two lines extract the Aff0 and Aff1 fields of mpidr_el1 into cpuid.


In the third line, if cpuid is 0 this is the prime CPU core (and the first hardware thread, although hyper-threading is not implemented here), so execution falls through to save the boot parameters; otherwise it branches away.
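As a rough C sketch of the same check (illustrative only: the function name is invented here, and the field widths follow the MPIDR_EL1 affinity layout):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative decode of MPIDR_EL1 affinity fields (not Zircon code). */
static bool is_prime_core(uint64_t mpidr) {
    uint64_t aff0 = mpidr & 0xff;          /* core ID                 */
    uint64_t aff1 = (mpidr >> 8) & 0xff;   /* cluster / thread level  */
    return aff0 == 0 && aff1 == 0;         /* core 0 boots the kernel */
}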


Get label/data address


This requires some explanation, because the kernel is linked based on virtual addresses. In the early stages of kernel booting, which is the process described in this article, the MMU is turned off, and the kernel actually runs on physical addresses during this period.


Therefore, this code must be position-independent (PIC). Besides keeping values in registers as much as possible, when it does have to access memory it cannot rely on the link-time addresses; the actual address of any data or label must be computed with instructions.

Zircon simplifies this operation into a macro:

.macro adr_global reg, symbol
// get the base address of the 4 KB page containing symbol
adrp \reg, \symbol
// add the symbol's offset within that page (its low 12 bits)
add \reg, \reg, #:lo12:\symbol
.endm


The first line gets the base address of the 4K memory page containing the symbol.


The second line adds the offset to the base address to get the actual address of the symbol.
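In C terms, the pair of instructions computes something like the following (an analogy only; the real macro does this PC-relative, so it yields the run-time address even though the kernel was linked at a different one):

#include <stdint.h>

/* C analogy of adr_global: 4 KB page base plus the offset within the page. */
static uint64_t adr_global_analogy(uint64_t symbol_addr) {
    uint64_t page_base = symbol_addr & ~0xfffULL;  /* adrp         */
    uint64_t page_off  = symbol_addr &  0xfffULL;  /* add #:lo12:  */
    return page_base + page_off;
}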


Determine the current EL

mrs x9, CurrentEL
cmp x9, #(0b01 << 2)
// if not equal, we are not at exception level 1; jump to .notEL1
bne .notEL1

Looping

str     xzr, [page_table1, tmp, lsl #3]

add     tmp, tmp, #1

cmp     tmp, #MMU_KERNEL_PAGE_TABLE_ENTRIES_TOP

bne     .Lclear_top_page_table_loop


Equivalent to

for(int tmp = 0;tmp < MMU_KERNEL_PAGE_TABLE_ENTRIES_TOP;tmp++) {

    page_table1[tmp] = 0;

}


Boot process overview


The early stage of booting, before the kernel enters the C++ world, is mainly divided into the following steps:


  • Initialize the exception configuration at EL1 - EL3

  • Create the boot-stage page tables

  • Prepare to turn on the MMU

  • Turn on the MMU

  • Configure the stack and enter the C world



Startup sequence and code


In a multi-core system, much of the initialization only needs to be done by the prime core; the other cores only complete their own configuration.

| Prime core | Other cores |
|------|------|
| Save kernel boot parameters | Skip |
| Initialize the exception configuration for EL1 - EL3 | Same |
| Initialize caches | Same |
| Fix up the kernel base address | Skip |
| Clear the .bss section | Skip |
| Create the boot-stage page tables | Spin-wait for the page tables to be ready |
| Preparation before turning on the MMU | Same |
| Turn on the MMU (code above runs at physical addresses, code below at virtual addresses) | Same |
| Reconfigure the kernel stack pointer | Configure the other CPUs' stack pointers |
| Jump into the C world to continue initialization | Sleep and wait to be woken |



Save kernel boot parameters


start.S - _start

mrs     cpuid, mpidr_el1 // Aff0 records the CPU ID, Aff1 records hyper-threading support
cbnz    cpuid, .Lno_save_bootinfo // non-prime cores (not core 0) do not boot the kernel and skip saving the boot parameters; they go straight to per-core initialization

/* save x0 in zbi_paddr */
// the prime core runs this path: compute the address of zbi_paddr and store x0 (the ZBI's physical address) there; the stores below work the same way
adrp    tmp, zbi_paddr
str     x0, [tmp, #:lo12:zbi_paddr]

/* save entry point physical address in kernel_entry_paddr */
adrp    tmp, kernel_entry_paddr
adr     tmp2, _start
str     tmp2, [tmp, #:lo12:kernel_entry_paddr]

adrp    tmp2, arch_boot_el
mrs     x2, CurrentEL
str     x2, [tmp2, #:lo12:arch_boot_el]

// in short, x0 - x4 now hold the parameters needed for kernel initialization, ready for the later jump into the C world
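Conceptually, these stores fill in a few globals defined elsewhere in the kernel; a rough C-style view (the types here are assumptions, only the symbol names come from the assembly above):

uint64_t zbi_paddr;           /* physical address of the ZBI boot image, taken from x0 */
uint64_t kernel_entry_paddr;  /* physical address of _start                            */
uint64_t arch_boot_el;        /* raw CurrentEL value the kernel was entered at         */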

Initialize EL1 - EL3


asm.S - arm64_elX_to_el1


The configuration of each EL requires the CPU to be in the corresponding EL state.


EL1


If the CPU is already in EL1, no configuration is needed and the routine returns directly.

// read the current exception level
mrs x9, CurrentEL
cmp x9, #(0b01 << 2)
// if not EL1, jump to .notEL1
bne .notEL1

/* Already in EL1 */
// already in EL1: return directly
ret

EL2


The main configurations in EL2 state are:

  • Configure the exception vector table for EL2

  • Configure the timers

  • Clear the EL2 translation table register

  • Configure the SPSR and ELR registers. See the register description above for these two.


In fact, Zircon makes no particular use of EL2 at this point, so the initialization here mostly just writes benign default values.

/* Setup the init vector table for EL2. */
   // compute the base address of the EL2 exception vector table
   adr_global x9, arm64_el2_init_table
   // install it as the EL2 vector base
   msr vbar_el2, x9

   /* Ensure EL1 timers are properly configured, disable EL2 trapping of
       EL1 access to timer control registers.  Also clear virtual offset.
   */
   // check and configure the timers
   mrs x9, cnthctl_el2
   orr x9, x9, #3
   msr cnthctl_el2, x9
   msr cntvoff_el2, xzr

   /* clear out stage 2 translations */
   // clear vttbr_el2, which holds the base of the stage-2 translation table used under EL2
   // for non-secure EL0 -> EL1 memory accesses
   msr vttbr_el2, xzr //http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100403_0200_00_en/lau1457340777806.html

   // When an exception takes the CPU into EL2, SPSR_EL2 (Saved Program Status Register, EL2) records
   // the processor state and ELR_EL2 (Exception Link Register, EL2) records the address to return to.
   // Here we set initial values for SPSR_EL2 and ELR_EL2.
   adr x9, .Ltarget
   msr elr_el2, x9
   // see the ELR description above
   mov x9, #((0b1111 << 6) | (0b0101)) /* EL1h runlevel */
   msr spsr_el2, x9
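The value written to spsr_el2 (and later to spsr_el3) can be decoded against the standard ARMv8 SPSR layout; the constants below are for illustration only and are not Zircon macros:

/* Breakdown of ((0b1111 << 6) | 0b0101):
 *   bits [3:0] = 0b0101 -> M field: return to EL1 using SP_EL1 ("EL1h")
 *   bits [9:6] = 0b1111 -> D, A, I, F exception masks set on return
 */
#define SPSR_M_EL1H     0b0101
#define SPSR_DAIF_MASK  (0b1111 << 6)
#define SPSR_BOOT_VALUE (SPSR_DAIF_MASK | SPSR_M_EL1H)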

EL3


The main task in the EL3 state is to configure the security state, HVC availability, and execution state (instruction set) of EL0/EL1; everything else is left at benign defaults, as with EL2 above.


  • Set EL0/EL1 to the Non-secure state

  • Enable the HVC instruction

  • Make EL2/EL1 use the AArch64 instruction set

cmp x9, #(0b10 << 2)

// currently at exception level 2: jump to .inEL2
beq .inEL2

// not in EL2, so we are in EL3
/* set EL2 to 64bit and enable HVC instruction */
// scr_el3 controls exception routing and security for EL0/EL1/EL2 (a 1 bit enables the feature)
//http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.100403_0200_00_en/lau1457340777806.html
// if SCR_EL3.RW == 1, EL2/EL1 use AArch64, otherwise AArch32
mrs x9, scr_el3
// put EL0/EL1 in the non-secure state; EL0/EL1 cannot access secure memory
orr x9, x9, #SCR_EL3_NS
// enable the HVC instruction
// on HVC, see http://www.wowotech.net/armv8a_arch/238.html
orr x9, x9, #SCR_EL3_HCE
// set SCR_EL3.RW == 1 so that EL2/EL1 use AArch64
orr x9, x9, #SCR_EL3_RW
msr scr_el3, x9

// ELR (Exception Link Register) holds the address at which an exception entered ELx; on exception
// return, ELR_ELx (x = 1/2/3) restores the PC. The ELR of the level the exception was taken to is
// the one used; since no exception is ever taken to EL0, there is no ELR_EL0.
adr x9, .Ltarget
// the exception return address is .Ltarget
msr elr_el3, x9

// set spsr_el3
mov x9, #((0b1111 << 6) | (0b0101)) /* EL1h runlevel */
msr spsr_el3, x9

// configure EL1 and prepare to drop down into EL1
b   .confEL1

Return from EL3 to EL1

/* disable EL2 coprocessor traps */
mov x9, #0x33ff
msr cptr_el2, x9

/* set EL1 to 64bit */
// make EL1 use AArch64, same idea as SCR_EL3.RW above
mov x9, #HCR_EL2_RW
msr hcr_el2, x9

/* disable EL1 FPU traps */
mov x9, #(0b11<<20)
msr cpacr_el1, x9

/* set up the EL1 bounce interrupt */

// configure the EL1 stack pointer
mov x9, sp
msr sp_el1, x9

isb
// simulated exception return: executing this instruction drops the CPU into EL1
eret



Initialize the cache


// invalidate all caches
bl      arch_invalidate_cache_all

/* enable caches so atomics and spinlocks work */
// enable the caches so that atomic operations and spinlocks work
mrs     tmp, sctlr_el1
// enable the instruction cache
orr     tmp, tmp, #(1<<12) /* Enable icache */
// enable the data cache
orr     tmp, tmp, #(1<<2)  /* Enable dcache/ucache */
msr     sctlr_el1, tmp
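For reference, the SCTLR_EL1 bits touched in this file sit at the standard ARMv8 positions (the names below are illustrative, not Zircon macros):

#define SCTLR_EL1_M (1u << 0)   /* MMU enable - set later, when the MMU is turned on */
#define SCTLR_EL1_C (1u << 2)   /* data/unified cache enable                         */
#define SCTLR_EL1_I (1u << 12)  /* instruction cache enable                          */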

Fix up the relocated kernel base address


This work is done by the prime CPU; the other CPUs enter a spin wait:

// load the address of kernel_relocated_base,
// the kernel's relocated base, i.e. the virtual address at which the kernel starts
adr_global  tmp, kernel_relocated_base
// load its value into kernel_vaddr
ldr     kernel_vaddr, [tmp]

// Load the base of the translation tables.
// it seems that in Zircon one translation table covers 1 GB of physical memory,
// so tt_trampoline here acts like a first-level page table(?)
adr_global page_table0, tt_trampoline
// the kernel virtual-address translation table
adr_global page_table1, arm64_kernel_translation_table

// Send secondary cpus over to a waiting spot for the primary to finish.
// non-prime CPU cores jump to .Lmmu_enable_secondary and wait there while the prime core runs the code below
cbnz    cpuid, .Lmmu_enable_secondary
// only the prime CPU core executes the code below

// The fixup code appears right after the kernel image (at __data_end in
// our view).  Note this code overlaps with the kernel's bss!  It
// expects x0 to contain the actual runtime address of __code_start.
// put the kernel's relocated start address into x0
mov     x0, kernel_vaddr
// jump to __data_end *
// __data_end points at the apply_fixups routine in image.S
bl      __data_end


FUNCTION(apply_fixups)

   // This is the constant address the kernel was linked for.

   movlit x9, KERNEL_BASE

   sub x0, x0, x9



// The generated kernel-fixups.inc invokes this macro for each run of fixups.

.macro fixup addr, n, stride

   adr x9, FIXUP_LOCATION(\addr)

.if \n >= 4 && \stride == 8

   // Do a loop handling adjacent pairs.

   mov x16, #(\n / 2)

0:  fixup_pair

   subs x16, x16, #1

   b.ne 0b

.if \n % 2

   // Handle the odd remainder after those pairs.

   fixup_single 8

.endif

.elseif \n >= 2 && \stride == 8

   // Do a single adjacent pair.

   fixup_pair

.if \n == 3

   // Do the third adjacent one.

   fixup_single 8

.endif

.elseif \n > 1

   // Do a strided loop.

   mov x16, #\n

0:  fixup_single \stride

   subs x16, x16, #1

   b.ne 0b

.else

   // Do a singleton.

   fixup_single 8

.endif

.endm



.macro fixup_pair

   ldp x10, x11, [x9]

   add x10, x10, x0

   add x11, x11, x0

   stp x10, x11, [x9], #16

.endm



.macro fixup_single stride

   ldr x10, [x9]

   add x10, x10, x0

   str x10, [x9], #\stride

.endm



#include "kernel-fixups.inc"



   ret



DATA(apply_fixups_end)

END_FUNCTION(apply_fixups)
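In rough C terms, the fixup pass rebases every recorded pointer by the difference between the run-time load address and the link-time KERNEL_BASE. The sketch below is only an illustration of the idea: fixup_locations and num_fixups are hypothetical stand-ins for the data that the generated kernel-fixups.inc encodes.

#include <stddef.h>
#include <stdint.h>

extern uint64_t *fixup_locations[];   /* hypothetical: addresses recorded by the build */
extern size_t num_fixups;             /* hypothetical: number of recorded locations    */

static void apply_fixups_sketch(uint64_t runtime_code_start, uint64_t link_time_base) {
    uint64_t delta = runtime_code_start - link_time_base;  /* the sub against KERNEL_BASE */
    for (size_t i = 0; i < num_fixups; i++) {
        *fixup_locations[i] += delta;  /* each recorded word is shifted by the delta */
    }
}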


Clearing the .bss section


// clear the kernel .bss section; the fixup code above overlaps the .bss, so its contents must be zeroed here
.Ldo_bss:
   // see kernel.ld
   // compute the start address of the kernel .bss section
   adr_global tmp, __bss_start
   // compute the end address of the kernel .bss section
   adr_global tmp2, _end
   // compute the size of the .bss section
   sub     tmp2, tmp2, tmp
   // if the size is 0, skip to .Lbss_loop_done
   cbz     tmp2, .Lbss_loop_done

// otherwise zero it, 16 bytes per iteration
.Lbss_loop:
   sub     tmp2, tmp2, #16
   stp     xzr, xzr, [tmp], #16
   cbnz    tmp2, .Lbss_loop
.Lbss_loop_done:
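In C terms, the loop above simply zeroes the whole section (16 bytes per iteration):

// Equivalent C: zero the .bss (the assembly does it with stp xzr, xzr, 16 bytes at a time)
memset(__bss_start, 0, _end - __bss_start);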


Create the page table for the boot phase


First, clear the memory in the table:

// clear the kernel virtual-address translation table
.Lclear_top_page_table_loop:

   // walk every entry in the translation table and set it to 0
   /**
       equivalent to:
       for (int tmp = 0; tmp < MMU_KERNEL_PAGE_TABLE_ENTRIES_TOP; tmp++) {
           page_table1[tmp] = 0;
       }

       on the xzr register, see https://community.arm.com/processors/f/discussions/3185/wzr-xzr-register-s-purpose
   **/
   str     xzr, [page_table1, tmp, lsl #3]
   add     tmp, tmp, #1
   cmp     tmp, #MMU_KERNEL_PAGE_TABLE_ENTRIES_TOP
   bne     .Lclear_top_page_table_loop


During the initialization phase, three ranges need to be mapped:

  • The first is the identity mapping, which maps physical addresses to the same physical addresses. It is needed while the MMU is being turned on (the ARM architecture strongly recommends it).

  • The second is the kernel image mapping, covering the addresses the kernel needs in order to run (kernel text, rodata, data, bss, and so on).

  • The third is a mapping of a large run of physical memory at the base of the kernel's address space (the physmap). All three mappings are installed with the same helper, shown below.
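The helper's prototype, taken from the comment in start.S below, maps directly onto the x0 - x4 argument registers used at each call site:

void arm64_boot_map(pte_t* kernel_table0,  /* x0: top-level page table    */
                    vaddr_t vaddr,         /* x1: virtual address to map  */
                    paddr_t paddr,         /* x2: physical address to map */
                    size_t len,            /* x3: length of the range     */
                    pte_t flags);          /* x4: PTE attribute flags     */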


Because building the page tables calls into C code, the stack pointer must be set up for the CPU first:

.Lbss_loop_done:

   /* set up a functional stack pointer */
   // set the kernel stack pointer, ready to call C code
   adr_global tmp, boot_cpu_kstack_end
   mov     sp, tmp

   /* make sure the boot allocator is given a chance to figure out where
    * we are loaded in physical memory. */
   bl      boot_alloc_init

   /* save the physical address the kernel is loaded at */
   // save the kernel's physical start address in the global kernel_base_phys
   adr_global x0, __code_start
   adr_global x1, kernel_base_phys
   str     x0, [x1]

   /* set up the mmu according to mmu_initial_mappings */

   /* clear out the kernel translation table */
   mov     tmp, #0
Mapping physical memory:

// prepare to call the C function arm64_boot_map
   // 1. this function maps physical memory for the kernel
   // first set up its five arguments: registers x0 - x4 hold the function parameters
   /* void arm64_boot_map(pte_t* kernel_table0, vaddr_t vaddr, paddr_t paddr, size_t len, pte_t flags); */
   /* map a large run of physical memory at the base of the kernel's address space */
   mov     x0, page_table1
   mov     x1, KERNEL_ASPACE_BASE
   mov     x2, 0
   mov     x3, ARCH_PHYSMAP_SIZE
   movlit  x4, MMU_PTE_KERNEL_DATA_FLAGS
   // call arm64_boot_map *
   bl      arm64_boot_map

Map the kernel's own memory:

// 2. map the kernel's own addresses
/* map the kernel to a fixed address */
/* note: mapping the kernel here with full rwx, this will get locked down later in vm initialization; */
mov     x0, page_table1
mov     x1, kernel_vaddr
adr_global x2, __code_start
adr_global x3, _end
sub     x3, x3, x2
mov     x4, MMU_PTE_KERNEL_RWX_FLAGS
bl      arm64_boot_map

Signal that the page tables are ready:

// mark the page tables as set up, telling the other CPU cores they may continue
adr_global tmp, page_tables_not_ready
str     wzr, [tmp]
// the prime CPU core jumps to .Lpage_tables_ready
b       .Lpage_tables_ready
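Conceptually, the secondary cores spin on this flag before enabling the MMU. A rough C equivalent of their wait (the actual loop is the assembly at .Lmmu_enable_secondary, which is not shown in this article):

// Secondary cores: wait until the prime core stores 0 (the str wzr above)
while (page_tables_not_ready) {
    /* spin */
}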


Prepare to turn on the MMU


Some configuration is required before enabling the MMU.


Clean up junk data


The MMU and cache state needs to be reset to clear stale contents. Before the kernel code runs, the bootloader may have used the MMU and caches, so stale entries may be left behind in the I-cache and TLB.

// invalidate the TLB to flush any stale entries
   /* Invalidate TLB */
   tlbi    vmalle1is

Initialize the memory attribute configuration


Memory attributes attach a small set of properties to a region of memory, and each property affects how that memory may be read and written.

Memory access policies can be quite involved. For example, a region may map a FIFO device with strict ordering requirements on its reads and writes; for such a region the memory attributes must forbid CPU reordering, caching, and similar optimizations, which are meaningless there and would break the correctness of the accesses.

movlit  tmp, MMU_MAIR_VAL

msr     mair_el1, tmp



/* Initialize TCR_EL1 */

/* set cacheable attributes on translation walk */

/* (SMP extensions) non-shareable, inner write-back write-allocate */

movlit  tmp, MMU_TCR_FLAGS_IDENT

msr     tcr_el1, tmp


Take a look at Zircon's default Memory Attribute configuration:

/* Default configuration for main kernel page table:

*    - do cached translation walks

*/




/* Device-nGnRnE memory */

#define MMU_MAIR_ATTR0                  MMU_MAIR_ATTR(0, 0x00)

#define MMU_PTE_ATTR_STRONGLY_ORDERED   MMU_PTE_ATTR_ATTR_INDEX(0)



/* Device-nGnRE memory */

#define MMU_MAIR_ATTR1                  MMU_MAIR_ATTR(1, 0x04)

#define MMU_PTE_ATTR_DEVICE             MMU_PTE_ATTR_ATTR_INDEX(1)



/* Normal Memory, Outer Write-back non-transient Read/Write allocate,

* Inner Write-back non-transient Read/Write allocate

*/


#define MMU_MAIR_ATTR2                  MMU_MAIR_ATTR(2, 0xff)

#define MMU_PTE_ATTR_NORMAL_MEMORY      MMU_PTE_ATTR_ATTR_INDEX(2)



/* Normal Memory, Inner/Outer uncached, Write Combined */

#define MMU_MAIR_ATTR3                  MMU_MAIR_ATTR(3, 0x44)

#define MMU_PTE_ATTR_NORMAL_UNCACHED    MMU_PTE_ATTR_ATTR_INDEX(3)



#define MMU_MAIR_ATTR4                  (0)

#define MMU_MAIR_ATTR5                  (0)

#define MMU_MAIR_ATTR6                  (0)

#define MMU_MAIR_ATTR7                  (0)



#define MMU_MAIR_VAL                    (MMU_MAIR_ATTR0 | MMU_MAIR_ATTR1 | \

                                        MMU_MAIR_ATTR2 | MMU_MAIR_ATTR3 | \

                                        MMU_MAIR_ATTR4 | MMU_MAIR_ATTR5 | \

                                        MMU_MAIR_ATTR6 | MMU_MAIR_ATTR7 )



#define MMU_TCR_IPS_DEFAULT MMU_TCR_IPS(2) /* TODO: read at runtime, or configure per platform */



/* Enable cached page table walks:

* inner/outer (IRGN/ORGN): write-back + write-allocate

*/


#define MMU_TCR_FLAGS1 (MMU_TCR_TG1(MMU_TG1(MMU_KERNEL_PAGE_SIZE_SHIFT)) | \

                       MMU_TCR_SH1(MMU_SH_INNER_SHAREABLE) | \

                       MMU_TCR_ORGN1(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_IRGN1(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_T1SZ(64 - MMU_KERNEL_SIZE_SHIFT))

#define MMU_TCR_FLAGS0 (MMU_TCR_TG0(MMU_TG0(MMU_USER_PAGE_SIZE_SHIFT)) | \

                       MMU_TCR_SH0(MMU_SH_INNER_SHAREABLE) | \

                       MMU_TCR_ORGN0(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_IRGN0(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_T0SZ(64 - MMU_USER_SIZE_SHIFT))

#define MMU_TCR_FLAGS0_IDENT \

                      (MMU_TCR_TG0(MMU_TG0(MMU_IDENT_PAGE_SIZE_SHIFT)) | \

                       MMU_TCR_SH0(MMU_SH_INNER_SHAREABLE) | \

                       MMU_TCR_ORGN0(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_IRGN0(MMU_RGN_WRITE_BACK_ALLOCATE) | \

                       MMU_TCR_T0SZ(64 - MMU_IDENT_SIZE_SHIFT))

#define MMU_TCR_FLAGS_IDENT (MMU_TCR_IPS_DEFAULT | MMU_TCR_FLAGS1 | MMU_TCR_FLAGS0_IDENT)
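If we assume MMU_MAIR_ATTR(index, value) places the 8-bit attribute value into byte number index of MAIR_EL1 (the standard MAIR_EL1 layout), the combined default value works out to:

/* Hypothetical expansion, assuming MMU_MAIR_ATTR(i, v) == ((uint64_t)(v) << ((i) * 8)):
 *   MMU_MAIR_VAL == 0x0000000044ff0400
 *     Attr0 = 0x00  Device-nGnRnE
 *     Attr1 = 0x04  Device-nGnRE
 *     Attr2 = 0xff  Normal memory, write-back
 *     Attr3 = 0x44  Normal memory, non-cacheable (write-combined)
 */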


Turn on the MMU


Turning on the MMU itself is very simple. The code above this point runs at physical addresses; the code after it runs at virtual addresses.

// barrier: make sure the configuration above has taken effect
isb

/* Read SCTLR */
// read the current value of SCTLR_EL1
mrs     tmp, sctlr_el1

/* Turn on the MMU */
// set the MMU enable bit
orr     tmp, tmp, #0x1

/* Write back SCTLR */
// write the modified value back to SCTLR_EL1
msr     sctlr_el1, tmp


Note that enabling the MMU is a read-modify-write of SCTLR_EL1: the current value is read, the MMU enable bit (M, bit 0) is set, and the value is written back.


The barrier (isb) keeps instructions from being reordered across the MMU switch, which would otherwise cause logic errors.



Ready to jump into the C world


Reset the stack pointer


Before that, the stack pointer must be reset, because it now has to hold a virtual address:

// reset the prime CPU's kernel stack pointer; with the MMU now on, it must be a virtual address
// set up the boot stack for real
adr_global tmp, boot_cpu_kstack_end
mov     sp, tmp


Set up the stack guard


The stack guard set up here is a canary value stored at a known offset from the thread pointer; if code later finds that the value has been overwritten, the stack has overflowed and an exception is raised.


The guard value is reached through tpidr_el1:

// set up the stack guard: point tpidr_el1 at a fake thread structure for the boot CPU
adr_global tmp, boot_cpu_fake_thread_pointer_location
msr     tpidr_el1, tmp

// set the per cpu pointer for cpu 0
adr_global x18, arm64_percpu_array

// Choose a good (ideally random) stack-guard value as early as possible.
bl      choose_stack_guard
mrs     tmp, tpidr_el1
str     x0, [tmp, #ZX_TLS_STACK_GUARD_OFFSET]
// Don't leak the value to other code.
mov     x0, xzr

The other CPUs set up the stack, initialize, and go to sleep:

.Lsecondary_boot:

   // set up the stack pointer for the other CPU cores
   bl      arm64_get_secondary_sp
   cbz     x0, .Lunsupported_cpu_trap
   mov     sp, x0
   msr     tpidr_el1, x1

   bl      arm64_secondary_entry

.Lunsupported_cpu_trap:
   // unsupported cores (and any core that returns here) park in this loop, waiting for events
   wfe
   b       .Lunsupported_cpu_trap

The prime core enters the C world:

// jump to the kernel's C entry point
bl  lk_main
// lk_main should never return; park here if it does
b   .


About kernel initialization


Most of this part is done in C/C++ and will be analyzed in the next article.


 