In AArch64 there is a dedicated register for configuring the memory attribute table: MAIR. This article explains its role, centered on the MAIR register.
For virtual memory addresses, we know the layout is as follows:
Here we only care about AttrIndx[2:0], which is described as follows:
Stage 1 here refers to the translation walk from TTBRx down to the PTE; bits [4:2] of the descriptor index into the attr fields of the MAIR register. The MAIR register looks like this, taking EL1 as an example:
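As a quick illustration, here is a minimal C sketch (illustrative only, not kernel code) of how AttrIndx is pulled out of a descriptor and used to select an attr byte:

#include <stdint.h>

/* AttrIndx[2:0] lives in bits [4:2] of a stage 1 block/page descriptor */
static inline unsigned attr_index( uint64_t descriptor )
{
    return (unsigned) ( ( descriptor >> 2 ) & 0x7 );
}

/* MAIR_ELx holds eight attr bytes; AttrIndx selects one of them */
static inline uint8_t mair_attr( uint64_t mair, unsigned index )
{
    return (uint8_t) ( mair >> ( index * 8 ) );
}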
MAIR holds 8 attr fields, as follows:
The meaning of each attr value is as follows:
The attributes fall into two classes, Device memory and Normal memory, as follows:
They are distinguished by the dd bits, as follows:
Here we meet the three concepts G, R, and E (Gathering, Reordering, Early write acknowledgement), explained as follows:
A supplement on memory shareability:
In the TCR_EL1 register, look at the SH1 field, bits [29:28]:
Here we can see:
On AArch64, CPUs are grouped into clusters, and the sharing rules follow the cluster layout:
Here inner means within one cluster, and outer means across clusters.
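Since the encoding is easier to remember in code form, here is a sketch of the SH field values (per the ARMv8-A manual), mapped onto the cluster terminology above:

/* SH1 (TCR_EL1 bits [29:28]) encoding; 0b01 is reserved */
enum tcr_sh {
    TCR_SH_NON_SHAREABLE   = 0, /* 0b00: not shared */
    TCR_SH_OUTER_SHAREABLE = 2, /* 0b10: shared across clusters */
    TCR_SH_INNER_SHAREABLE = 3  /* 0b11: shared within a cluster */
};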
Normal memory, on the other hand, is encoded as 0booooiiii, explained as follows:
Here oooo is the high nibble (Outer attributes) and iiii the low nibble (Inner attributes). For Normal memory, we can see the following cacheability attributes:
Write-through means a write updates both the cache and memory at the same time, so cache and memory always agree.
Write-back means a write only updates the cache; the memory update is deferred until the cache line is evicted and written back. In this case a dirty bit is tracked: if the dirty bit is 1, the cache and memory contents differ.
Besides the cacheable write policies, there is also a transient flag.
If transient is set, it indicates the memory will be used only briefly, and this hint can be used to prefer those cache lines for replacement.
Beyond these, also note the R and W flags, explained as follows:
Here R and W stand for Read-Allocate and Write-Allocate respectively:
If both are 0, it means No-Allocate:
Suppose MAIR_EL1 is set to 0xffffffffff440400. The breakdown works out as follows:
For the 0x00 byte, dd is 00, which gives:
That is, Device memory with no Gathering, no Reordering, and no Early write acknowledgement (Device-nGnRnE).
For the 0x04 byte, dd is 01, which gives:
That is, Device memory with no Gathering and no Reordering, but Early write acknowledgement allowed (Device-nGnRE).
0x44 corresponds to the following:
For oooo:
For iiii:
That is, Normal memory that is non-cacheable both inner (within a cluster) and outer (across clusters), so all reads and writes bypass the cache.
0xff corresponds to the following:
For oooo:
For iiii:
Here R and W are both 1, explained as follows:
This is Normal memory, non-transient, write-back, with Read-Allocate and Write-Allocate. In short, all accesses to this memory go through the cache, for both reads and writes.
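To make the breakdown above mechanical, here is a small host-side C sketch (illustrative only, not RTEMS code) that decodes a MAIR_EL1 value using the encodings described above:

#include <stdint.h>
#include <stdio.h>

static const char *policy( uint8_t nibble )
{
    /* 0b0100 is Non-cacheable; otherwise the top two bits select the
       write policy and bits [1:0] are the R/W allocate hints */
    if ( nibble == 0x4 ) return "Non-cacheable";
    switch ( nibble >> 2 ) {
    case 0: return "Write-Through Transient";
    case 1: return "Write-Back Transient";
    case 2: return "Write-Through Non-transient";
    case 3: return "Write-Back Non-transient";
    }
    return "?";
}

static void decode_attr( unsigned i, uint8_t attr )
{
    if ( ( attr & 0xf0 ) == 0 && ( attr & 0x3 ) == 0 ) {
        /* Device memory: dd = attr[3:2] */
        static const char *dev[] = {
            "Device-nGnRnE", "Device-nGnRE", "Device-nGRE", "Device-GRE"
        };
        printf( "Attr%u = 0x%02x: %s\n", i, attr, dev[ ( attr >> 2 ) & 0x3 ] );
    } else {
        /* Normal memory: 0booooiiii, oooo = outer, iiii = inner */
        printf( "Attr%u = 0x%02x: Normal, outer %s (R=%u W=%u), inner %s (R=%u W=%u)\n",
                i, attr,
                policy( attr >> 4 ), ( attr >> 5 ) & 1, ( attr >> 4 ) & 1,
                policy( attr & 0xf ), ( attr >> 1 ) & 1, attr & 1 );
    }
}

int main( void )
{
    uint64_t mair = 0xffffffffff440400ULL; /* the example value above */

    for ( unsigned i = 0; i < 8; i++ )
        decode_attr( i, (uint8_t) ( mair >> ( i * 8 ) ) );
    return 0;
}

Running it on 0xffffffffff440400 reproduces the classification above: Attr0 is Device-nGnRnE, Attr1 is Device-nGnRE, Attr2 is Normal non-cacheable, and Attr3 through Attr7 are Normal write-back with read/write allocate.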
RTEMS memory initialization was already touched on in the bootcard walkthrough. To see how the system actually hands out memory, this section takes the C library allocation functions as an example and explores their behavior.
Recalling the RTEMS initialization (bootcard) call flow, _Malloc_Initialize sets up the heap. It is implemented as follows:
void _Malloc_Initialize( void )
{
  RTEMS_Malloc_Heap = ( *_Workspace_Malloc_initializer )();
}
Here the return value of the function pointer _Workspace_Malloc_initializer is assigned; tracing it shows its default is _Workspace_Malloc_initialize_separate, implemented as follows:
Heap_Control *_Workspace_Malloc_initialize_separate( void )
{
  return _Malloc_Initialize_for_one_area( &_Malloc_Heap );
}
Note the global variable static Heap_Control _Malloc_Heap: inside _Malloc_Initialize_for_one_area it is handed to RTEMS_Malloc_Heap, and _Heap_Initialize is called on it. _Heap_Initialize has already been analyzed; the key thing to keep in mind here is the Heap_Control structure:
struct Heap_Control {
  Heap_Block free_list;
  uintptr_t page_size;
  uintptr_t min_block_size;
  uintptr_t area_begin;
  uintptr_t area_end;
  Heap_Block *first_block;
  Heap_Block *last_block;
  Heap_Statistics stats;
};
Each chunk of memory handed out by malloc corresponds to a Heap_Block:
struct Heap_Block {
  uintptr_t prev_size;
  uintptr_t size_and_flag;
  Heap_Block *next;
  Heap_Block *prev;
};
This looks a bit like glibc's allocator, but in simplified form. Briefly:
prev_size is the size of the previous block; it is invalid when bit 0 of size_and_flag is set (HEAP_PREV_BLOCK_USED), i.e. when the previous block is in use.
size_and_flag packs the size of the whole block together with the HEAP_PREV_BLOCK_USED flag.
next is the address of the next block (doubly linked list).
prev is the address of the previous block (doubly linked list).
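A small sketch of how the size and flag can be unpacked (the helper names here are illustrative; HEAP_PREV_BLOCK_USED itself is the RTEMS constant, bit 0):

#include <stdbool.h>
#include <stdint.h>

#define HEAP_PREV_BLOCK_USED ( (uintptr_t) 1 )

/* size_and_flag with bit 0 cleared is the block size */
static inline uintptr_t heap_block_size( uintptr_t size_and_flag )
{
    return size_and_flag & ~HEAP_PREV_BLOCK_USED;
}

/* bit 0 set means the previous block is in use (prev_size invalid) */
static inline bool heap_prev_used( uintptr_t size_and_flag )
{
    return ( size_and_flag & HEAP_PREV_BLOCK_USED ) != 0;
}

For the first block seen in the walkthrough below, size_and_flag = 0x21 decodes to a size of 0x20 (32 bytes) with the previous-used flag set.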
To exercise malloc, we use the following test program:
static void test_early_malloc( void )
{
  void *p;
  char *q;
  void *r;
  void *s;
  void *t;

  p = malloc( 1 );
  rtems_test_assert( p != NULL );
  free( p );

  q = calloc( 1, 1 );
  rtems_test_assert( q != NULL );
  rtems_test_assert( p != q );
  rtems_test_assert( q[0] == 0 );
  free( q );

  /*
   * This was added to address the following warning.
   * warning: pointer 'q' used after 'free'
   */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wuse-after-free"
  r = realloc( q, 128 );
  rtems_test_assert( r == q );
#pragma GCC diagnostic pop

  s = malloc( 1 );
  rtems_test_assert( s != NULL );
  free( s );

  t = realloc( r, 256 );
  rtems_test_assert( t != NULL );
  rtems_test_assert( t != r );
  free( t );
}
For the demonstration, set a gdb breakpoint at test_early_malloc. First, look at _Malloc_Heap and its address:
(gdb) p _Malloc_Heap
$28 = {free_list = {prev_size = 0, size_and_flag = 0, next = 0x1342a0, prev = 0x1342a0},
  page_size = 16, min_block_size = 32, area_begin = 1262239, area_end = 1072431104,
  first_block = 0x1342a0, last_block = 0x3febfff0,
  stats = {lifetime_allocated = 0, lifetime_freed = 0, size = 1071168848,
    free_size = 1071168848, min_free_size = 1071168848, free_blocks = 1,
    max_free_blocks = 1, used_blocks = 0, max_search = 0, searches = 0,
    allocs = 0, failed_allocs = 0, frees = 0, resizes = 0}}
(gdb) p &_Malloc_Heap
$30 = (Heap_Control *) 0x102c90 <_Malloc_Heap>
Keep in mind this 0x102c90 and the first malloc address 0x1342b0. Printing:
(gdb) x/2g 0x1342a0
0x1342a0: 0x000000003fec0000 0x000000003fd8bd51
After malloc(1) the code receives virtual address 0x1342b0, the value of p. Inspecting p's block:
(gdb) x/4g p-0x10
0x1342a0: 0x000000003fec0000 0x0000000000000021
0x1342b0: 0x0000000000102c90 0x0000000000102c90
So prev_size starts out as area_end (0x3fec0000); size_and_flag is 0x21, whose low bit is the HEAP_PREV_BLOCK_USED flag (in use) and whose size part is 32 bytes. next and prev are equal, so this is the first block, and 0x102c90 is exactly the address of the default heap, _Malloc_Heap.
From this we can tell a minimum block is 32 bytes, so malloc(1) consumes one minimum-size block.
Next, calloc(1,1) returns q = 0x1342d0, which is exactly 0x1342b0 + 0x20. All as expected. Printing q:
(gdb) x/4g q-0x10
0x1342c0: 0x0000000000000000 0x0000000000000021
0x1342d0: 0x0000000000102c00 0x0000000000102c90
Then realloc( q, 128 ) grows q. Before the call the size was 32; afterwards:
(gdb) x/4g r-0x10
0x1342c0: 0x0000000000000000 0x0000000000000091
0x1342d0: 0x0000000000100140 0x00000000001342b0
Here we see 0x91: 0x90 is 144, exactly 128 + 16, where 16 is the size of the block header, i.e. uintptr_t prev_size plus uintptr_t size_and_flag. All as expected.
Then s = malloc( 1 ); this should naturally land at 0x1342d0 + 0x90, i.e. 0x134360. Inspecting memory:
(gdb) x/4g s-0x10
0x134350: 0x0000000000000000 0x0000000000000021
0x134360: 0x0000000000102c90 0x0000000000102c90
No surprises. Finally, t = realloc( r, 256 ) grows r to 256 bytes. Because the block for s sits right behind r, t ends up at a new address:
(gdb) x/4g t-0x10
0x134370: 0x0000000000000000 0x0000000000000111
0x134380: 0x0000000000134360 0x00000000001342b0
Since this came from realloc, the list's next points to 0x0000000000134360 and prev is 0x00000000001342b0; the size is 0x110, i.e. 272 = 256 + 16. All as expected.
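All the sizes observed in this walkthrough follow one rounding rule. The sketch below just reproduces the observed arithmetic (page_size = 16 and min_block_size = 32 come from the gdb dump above); the actual RTEMS logic lives in _Heap_Allocate and friends and may differ in detail:

#include <stdint.h>

/* 16-byte header (prev_size + size_and_flag) plus the payload,
   rounded up to page_size, never below min_block_size */
static uintptr_t block_size_for( uintptr_t alloc_size,
                                 uintptr_t page_size,       /* 16 */
                                 uintptr_t min_block_size ) /* 32 */
{
    uintptr_t size = alloc_size + 2 * sizeof( uintptr_t );

    size = ( size + page_size - 1 ) & ~( page_size - 1 );
    return size < min_block_size ? min_block_size : size;
}

/* block_size_for( 1, 16, 32 )   == 32,  matches malloc( 1 )       */
/* block_size_for( 128, 16, 32 ) == 144, matches realloc( q, 128 ) */
/* block_size_for( 256, 16, 32 ) == 272, matches realloc( r, 256 ) */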
This demonstrates the memory behavior behind malloc in RTEMS: a simplified take on glibc-style allocation, and easy to follow.
System startup begins at start.S, and RTEMS is no different. This section briefly walks through RTEMS's start.S.
With the disabled macro-guarded sections stripped out, start.S looks like this:
#include <rtems/asm.h>
#include <rtems/score/percpu.h>
#include <bspopts.h>

/* Global symbols */
.globl _start
.section ".bsp_start_text", "ax"

/* Start entry */
_start:
  mov x5, x1    /* machine type number or ~0 for DT boot */
  mov x6, x2    /* physical address of ATAGs or DTB */

  /* Initialize SCTLR_EL1 */
  mov x0, XZR
  msr SCTLR_EL1, x0

  mrs x0, CurrentEL  /* bits [3:2] hold the EL; 0x4 here means EL1 */
  cmp x0, #(1<<2)
  b.eq .L_el1_start

.L_el1_start:
  bl _AArch64_Get_current_processor_for_system_start

  /*
   * Check that this is a configured processor. If not, then there is
   * not much that can be done since we do not have a stack available for
   * this processor. Just loop forever in this case.
   */
  ldr x1, =_SMP_Processor_configured_maximum /* x1 = address of _SMP_Processor_configured_maximum, whose value is 1 */
  ldr w1, [x1]        /* load its value into w1; w1 is now 1 */
  cmp x1, x0
  bgt .Lconfigured_processor

.Lconfigured_processor:
  /*
   * Get current per-CPU control and store it in PL1 only Thread ID
   * Register (TPIDR_EL1).
   */
  ldr x1, =_Per_CPU_Information /* x1 = address of _Per_CPU_Information */
  add x1, x1, x0, lsl #PER_CPU_CONTROL_SIZE_LOG2
  msr TPIDR_EL1, x1   /* point the EL1 thread ID register at this CPU's per-CPU structure */

  /* Calculate interrupt stack area end for current processor */
  ldr x1, =_ISR_Stack_size      /* x1 = stack size 0x2000 (the symbol's address encodes the size) */
  add x3, x0, #1                /* x3 = CPU index + 1, i.e. 1 here */
  mul x1, x1, x3                /* x1 = 0x2000 * 1 = 0x2000 */
  ldr x2, =_ISR_Stack_area_begin /* start of the ISR stack area */
  add x3, x1, x2                /* x3 = _ISR_Stack_area_begin + 0x2000 */

  /* Disable interrupts and debug */
  msr DAIFSet, #0xa

  /*
   * SPx: the stack pointer corresponding to the current exception level
   * Normal operation for RTEMS on AArch64 uses SPx and runs on EL1
   * Exception operation (synchronous errors, IRQ, FIQ, System Errors) uses SP0
   */
  ldr x1, =bsp_stack_exception_size /* x1 = bsp_stack_exception_size */

  /* Switch to SP0 and set exception stack */
  msr spsel, #0
  mov sp, x3          /* sp = the stack end computed above */

  /* Switch back to SPx for normal operation */
  msr spsel, #1
  sub x3, x3, x1

  /* Set SP1 stack used for normal operation */
  mov sp, x3          /* normal-operation stack, below the exception stack */

  /* Stay in EL1 mode */

  /* Read CPACR */
  mrs x0, CPACR_EL1
  /* Enable EL1 access permissions for CP10 */
  orr x0, x0, #(1 << 20)
  /* Write CPACR */
  msr CPACR_EL1, x0
  isb

  /* Branch to start hook 1 */
  bl bsp_start_hook_1

  /* Branch to boot card */
  mov x0, #0
  bl boot_card

The helper called above, _AArch64_Get_current_processor_for_system_start, is implemented as:

FUNCTION_ENTRY(_AArch64_Get_current_processor_for_system_start)
  /* Return the affinity level 0 reported by the MPIDR_EL1 */
  mrs x0, mpidr_el1   /* read the CPU affinity value */
  and x0, x0, #0xff
  ret
FUNCTION_END(_AArch64_Get_current_processor_for_system_start)
Comments have been added above where needed. The main steps are:
1. Clear SCTLR_EL1 and confirm execution is at EL1.
2. Read the current processor index from MPIDR_EL1 and check that it is a configured processor.
3. Point TPIDR_EL1 at this CPU's per-CPU control structure.
4. Compute the end of this CPU's ISR stack area, disable interrupts and debug, and set up the SP0 (exception) and SP1 (normal operation) stacks.
5. Enable EL1 access to CP10 (FPU/SIMD) via CPACR_EL1.
6. Call bsp_start_hook_1 and then boot_card.
By default, an RTEMS application begins execution at Init. Starting from that observation, this section explains how the Init function comes to run.
The Init function is declared as follows:
#ifndef CONFIGURE_INIT_TASK_ENTRY_POINT
  rtems_task Init( rtems_task_argument );
  #define CONFIGURE_INIT_TASK_ENTRY_POINT Init

  #ifndef CONFIGURE_INIT_TASK_ARGUMENTS
    extern const char *bsp_boot_cmdline;
    #define CONFIGURE_INIT_TASK_ARGUMENTS \
      ( (rtems_task_argument) &bsp_boot_cmdline )
  #endif
#endif
So by default the entry point is the Init function and its argument is the global bsp_boot_cmdline, which boot_card sets:
void boot_card( const char *cmdline )
{
  bsp_boot_cmdline = cmdline;
}
To see where Init comes from, first look at this global variable:
const rtems_initialization_tasks_table _RTEMS_tasks_User_task_table = {
  CONFIGURE_INIT_TASK_NAME,
  CONFIGURE_INIT_TASK_STACK_SIZE,
  CONFIGURE_INIT_TASK_PRIORITY,
  CONFIGURE_INIT_TASK_ATTRIBUTES,
  _CONFIGURE_ASSERT_NOT_NULL(
    rtems_task_entry,
    CONFIGURE_INIT_TASK_ENTRY_POINT
  ),
  CONFIGURE_INIT_TASK_INITIAL_MODES,
  CONFIGURE_INIT_TASK_ARGUMENTS
};
Its type is:
typedef struct {
  rtems_name          name;
  size_t              stack_size;
  rtems_task_priority initial_priority;
  rtems_attribute     attribute_set;
  rtems_task_entry    entry_point;
  rtems_mode          mode_set;
  rtems_task_argument argument;
} rtems_initialization_tasks_table;
Printing its actual values:
(gdb) p/x _RTEMS_tasks_User_task_table
$36 = {
  name = 0x55493120,
  stack_size = 0x2000,
  initial_priority = 0x1,
  attribute_set = 0x1,
  entry_point = 0x19100,
  mode_set = 0x0,
  argument = 0x1028a0
}
Interpreted:
name: 'U', 'I', '1', ' '
stack_size: 8192
initial_priority: 1
attribute_set: 1
entry_point: the address of the Init function
mode_set: the scheduling mode (ASR)
argument: points to bsp_boot_cmdline
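The name value is just four ASCII characters packed most-significant-byte first, as rtems_build_name does; a quick sketch to verify:

#include <stdint.h>
#include <stdio.h>

/* equivalent of rtems_build_name: four ASCII chars, MSB first */
static uint32_t build_name( char a, char b, char c, char d )
{
    return ( (uint32_t) a << 24 ) | ( (uint32_t) b << 16 ) |
           ( (uint32_t) c << 8 )  |   (uint32_t) d;
}

int main( void )
{
    /* prints 0x55493120, matching the gdb dump above */
    printf( "0x%08x\n", build_name( 'U', 'I', '1', ' ' ) );
    return 0;
}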
Next, look at rtems_task_start, which constructs a Thread_Entry_information:
Thread_Entry_information entry = {
  .adaptor = _Thread_Entry_adaptor_numeric,
  .Kinds = {
    .Numeric = {
      .entry = entry_point,
      .argument = argument
    }
  }
};
Note two members of this structure: adaptor and Kinds.Numeric.entry.
The adaptor is implemented as:
void _Thread_Entry_adaptor_numeric( Thread_Control *executing )
{
  const Thread_Entry_numeric *numeric = &executing->Start.Entry.Kinds.Numeric;

  ( *numeric->entry )( numeric->argument );
}
Through this adaptor wrapper, what actually gets called is the entry_point function pointer that was passed to rtems_task_start.
Now note the Thread_Entry_information entry: it is passed to _Thread_Start( the_thread, &entry, &lock_context ), which copies it straight into the thread's Entry member:
the_thread->Start.Entry = *entry;
From the code above we know that Entry points at Init, but we have not yet seen how the thread actually starts executing it, so we keep digging.
Pay particular attention to the_thread: _Thread_Load_environment builds the thread's context:
_Context_Initialize(
  &the_thread->Registers,
  the_thread->Start.Initial_stack.area,
  the_thread->Start.Initial_stack.size,
  the_thread->Start.isr_level,
  _Thread_Handler,
  the_thread->is_fp,
  the_thread->Start.tls_area
);
The context initialization is implemented as:
void _CPU_Context_Initialize(
  Context_Control *the_context,
  void *stack_area_begin,
  size_t stack_area_size,
  uint64_t new_level,
  void (*entry_point)( void ),
  bool is_fp,
  void *tls_area
)
{
  (void) new_level;

  the_context->register_sp = (uintptr_t) stack_area_begin + stack_area_size;
  the_context->register_lr = (uintptr_t) entry_point;
  the_context->isr_dispatch_disable = 0;

  the_context->thread_id = (uintptr_t) tls_area;

  if ( tls_area != NULL ) {
    the_context->thread_id = (uintptr_t) _TLS_Initialize_area( tls_area );
  }
}
Note that the lr register is set to entry_point, which here is _Thread_Handler. In other words, once the thread is fully initialized, its initial x30 (lr) points at _Thread_Handler.
From the function above, two things matter: the address of the_thread and the address of the_context. In gdb:
(gdb) p the_thread
$2 = (Thread_Control *) 0x1056e8 <_RTEMS_tasks_Objects>
(gdb) p &the_thread->Registers
$4 = (Context_Control *) 0x105920 <_RTEMS_tasks_Objects+568>
Now return to _Thread_Start_multitasking, where the first thread other than idle starts to run:
void _Thread_Start_multitasking( void )
{
  Per_CPU_Control *cpu_self = _Per_CPU_Get();
  Thread_Control  *heir;

  heir = _Thread_Get_heir_and_make_it_executing( cpu_self );

  _CPU_Start_multitasking( &heir->Registers );
}
On AArch64 this is actually _AArch64_Start_multitasking, implemented in cpu_asm.S:
DEFINE_FUNCTION_AARCH64(_AArch64_Start_multitasking)
  mov x1, x0
  GET_SELF_CPU_CONTROL reg_2

  /* Switch the stack to the temporary interrupt stack of this processor */
  add sp, x2, #(PER_CPU_INTERRUPT_FRAME_AREA + CPU_INTERRUPT_FRAME_SIZE)

  /* Enable interrupts */
  msr DAIFClr, #0x2

  b .L_check_is_executing
GET_SELF_CPU_CONTROL fetches TPIDR_EL1, i.e. the current CPU's per-CPU information:
.macro GET_SELF_CPU_CONTROL REG
#ifdef RTEMS_SMP
  /* Use Thread ID Register (TPIDR_EL1) */
  mrs \REG, TPIDR_EL1
#else
  ldr \REG, =_Per_CPU_Information
#endif
.endm
So _AArch64_Start_multitasking reads the per-CPU value, sets up sp, enables interrupts, and then branches to .L_check_is_executing, which is implemented as follows:
.L_check_is_executing:
#ifdef RTEMS_SMP
  /* Check the is executing indicator of the heir context */
  add x3, x1, #AARCH64_CONTEXT_CONTROL_IS_EXECUTING_OFFSET
  ldaxrb w4, [x3]
  cmp x4, #0
  bne .L_get_potential_new_heir

  /* Try to update the is executing indicator of the heir context */
  mov x4, #1
  stlxrb w5, w4, [x3]
  cmp x5, #0
  bne .L_get_potential_new_heir
  dmb SY
#endif

/* Start restoring context */
.L_restore:
  ldr x3, [x1, #AARCH64_CONTEXT_CONTROL_THREAD_ID_OFFSET]
  ldr x4, [x1, #AARCH64_CONTEXT_CONTROL_ISR_DISPATCH_DISABLE]

#ifdef AARCH64_MULTILIB_VFP
  add x5, x1, #AARCH64_CONTEXT_CONTROL_D8_OFFSET
  ldp d8, d9, [x5]
  ldp d10, d11, [x5, #0x10]
  ldp d12, d13, [x5, #0x20]
  ldp d14, d15, [x5, #0x30]
#endif

  msr TPIDR_EL0, x3
  str w4, [x2, #PER_CPU_ISR_DISPATCH_DISABLE]

  ldp x19, x20, [x1]
  ldp x21, x22, [x1, #0x10]
  ldp x23, x24, [x1, #0x20]
  ldp x25, x26, [x1, #0x30]
  ldp x27, x28, [x1, #0x40]
  ldp fp, lr, [x1, #0x50]
  ldr x4, [x1, #0x60]
  mov sp, x4
  ret
Let us first look at what .L_check_is_executing does:
add x3, x1, #AARCH64_CONTEXT_CONTROL_IS_EXECUTING_OFFSET /* x3 = x1 + 0xb8 */
ldaxrb w4, [x3]      /* load-acquire exclusive: read the byte at [x3] into w4; 0 expected here */
cmp x4, #0           /* compare x4 with 0 */
bne .L_get_potential_new_heir /* non-zero means someone already set it: avoid re-activating a running thread */

/* Try to update the is executing indicator of the heir context */
mov x4, #1           /* x4 = 1 */
stlxrb w5, w4, [x3]  /* store-release exclusive x4 to [x3]; w5 = 0 on success, 1 on failure */
cmp x5, #0           /* did the store succeed? */
bne .L_get_potential_new_heir /* non-zero: the store-release failed, pick a new heir */
dmb SY               /* memory barrier: no reordering across this point */
So the code atomically sets a flag in the heir's context to claim the thread for execution.
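In C11 terms, the LDAXRB/STLXRB pair amounts to an acquire/release compare-and-swap on the is_executing byte. A hedged sketch (the names are illustrative, not the RTEMS source):

#include <stdatomic.h>
#include <stdbool.h>

/* claim the heir context: succeeds only if is_executing was 0,
   setting it to 1; mirrors the ldaxrb/stlxrb + dmb sequence */
static bool try_claim_is_executing( atomic_uchar *is_executing )
{
    unsigned char expected = 0;

    return atomic_compare_exchange_strong_explicit(
        is_executing, &expected, 1,
        memory_order_acq_rel, memory_order_acquire );
}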
Next, look at the context restore code:
/* Start restoring context */
.L_restore:
  ldr x3, [x1, #AARCH64_CONTEXT_CONTROL_THREAD_ID_OFFSET]     /* x3 = value at x1 + 0x70 */
  ldr x4, [x1, #AARCH64_CONTEXT_CONTROL_ISR_DISPATCH_DISABLE] /* x4 = value at x1 + 0x68 */

#ifdef AARCH64_MULTILIB_VFP
  add x5, x1, #AARCH64_CONTEXT_CONTROL_D8_OFFSET /* x5 = x1 + 0x78 */
  ldp d8, d9, [x5]
  ldp d10, d11, [x5, #0x10]
  ldp d12, d13, [x5, #0x20]
  ldp d14, d15, [x5, #0x30]
#endif

  msr TPIDR_EL0, x3  /* TPIDR_EL0 = x3 */
  str w4, [x2, #PER_CPU_ISR_DISPATCH_DISABLE] /* store w4 at x2 + PER_CPU_ISR_DISPATCH_DISABLE */

  ldp x19, x20, [x1]        /* load x19 and x20 from the context */
  ldp x21, x22, [x1, #0x10] /* and so on for the remaining pairs */
  ldp x23, x24, [x1, #0x20]
  ldp x25, x26, [x1, #0x30]
  ldp x27, x28, [x1, #0x40]
  ldp fp, lr, [x1, #0x50]
  ldr x4, [x1, #0x60]
  mov sp, x4                /* sp = the saved stack pointer */
  ret                       /* return through the restored lr */
The ldp sequence above maps onto this structure:
typedef struct {
  uint64_t register_x19;
  uint64_t register_x20;
  uint64_t register_x21;
  uint64_t register_x22;
  uint64_t register_x23;
  uint64_t register_x24;
  uint64_t register_x25;
  uint64_t register_x26;
  uint64_t register_x27;
  uint64_t register_x28;
  uint64_t register_fp;
  uint64_t register_lr;
  uint64_t register_sp;
  uint64_t isr_dispatch_disable;
  uint64_t thread_id;
#ifdef AARCH64_MULTILIB_VFP
  uint64_t register_d8;
  uint64_t register_d9;
  uint64_t register_d10;
  uint64_t register_d11;
  uint64_t register_d12;
  uint64_t register_d13;
  uint64_t register_d14;
  uint64_t register_d15;
#endif
#ifdef RTEMS_SMP
  volatile bool is_executing;
#endif
} Context_Control;
So this step loads the thread's saved context back into the CPU registers. Pay particular attention to lr, which as shown earlier was set to entry_point:
the_context->register_lr = (uintptr_t) entry_point;
So the next function to look at is:
void _Thread_Handler( void )
which invokes the adaptor callback:
( *executing->Start.Entry.adaptor )( executing );
Per the analysis above, the adaptor is _Thread_Entry_adaptor_numeric, which in turn calls entry, i.e. entry_point: the Init function pointer.
With that, the path from initialization to the Init function is complete.
gdb can be used to debug the RTEMS operating system; this section shows how.
There are three ways to set the auto-load safe-path:
You can register a specific path of your choosing as the safe-path:
# vim ~/.gdbinit
add-auto-load-safe-path /home/user
You can also allow every path as the safe-path:
# vim ~/.gdbinit
set auto-load safe-path /
Or you can set it via a startup option:
# aarch64-rtems6-gdb -iex "set auto-load safe-path /" build/aarch64/zynqmp_qemu/testsuites/samples/ticker.exe
RTEMS can be run under QEMU with -s, which by default starts a gdb server that a remote gdb can attach to for debugging:
# qemu-system-aarch64 -no-reboot -nographic -s -serial mon:stdio -machine xlnx-zcu102 -m 4096 -kernel build/aarch64/zynqmp_qemu/testsuites/samples/ticker.exe
Once QEMU has booted RTEMS, connect via 127.0.0.1:
# aarch64-rtems6-gdb build/aarch64/zynqmp_qemu/testsuites/samples/ticker.exe
(gdb) target extended-remote 127.0.0.1:1234
On a successful connection you will see something like:
Remote debugging using 127.0.0.1:1234
_CPU_Thread_Idle_body (ignored=0) at ../../../cpukit/score/cpu/aarch64/aarch64-thread-idle.c:46
46        while ( true ) {
(gdb) bt
#0  _CPU_Thread_Idle_body (ignored=0) at ../../../cpukit/score/cpu/aarch64/aarch64-thread-idle.c:46
#1  0x000000000001edd0 in _Thread_Handler () at ../../../cpukit/score/src/threadhandler.c:164
#2  0x000000000001ece0 in ?? ()
At this point gdb is attached remotely.
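From here, normal gdb usage applies; for example, the breakpoint used in the malloc walkthrough earlier could be set like this:

(gdb) break test_early_malloc
(gdb) continue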
To check for pretty-printing support, dump the .debug_gdb_scripts section:
# aarch64-rtems6-objdump -s -j .debug_gdb_scripts build/aarch64/zynqmp_qemu/testsuites/samples/ticker.exe

build/aarch64/zynqmp_qemu/testsuites/samples/ticker.exe:     file format elf64-littleaarch64

Contents of section .debug_gdb_scripts:
 0000 04676462 2e696e6c 696e6564 2d736372  .gdb.inlined-scr
 0010 6970740a 696d706f 72742073 79730a69  ipt.import sys.i
 0020 6d706f72 74206f73 2e706174 680a7379  mport os.path.sy
 0030 732e7061 74682e61 7070656e 64286f73  s.path.append(os
 0040 2e706174 682e6a6f 696e2867 64622e50  .path.join(gdb.P
 0050 5954484f 4e444952 2c202772 74656d73  YTHONDIR, 'rtems
 0060 2729290a 696d706f 72742072 74656d73  ')).import rtems
 0070 2e707072 696e7465 72206173 20707072  .pprinter as ppr
 0080 696e7465 720a00                      inter..
Then load it with pprinter.py:
(gdb) source ../out/share/gdb/python/rtems/pprinter.py