2025-04-07

RTEMS supports not only the bitmap-based priority scheduler but also a simple priority scheduler. Compared with the priority scheduler, the simple one drops the bitmap and maintains the scheduling order directly through thread priorities and the FIFO property of a queue. This post describes how the simple priority scheduler is implemented in RTEMS.

Scheduler structure

The _Scheduler_simple_Initialize function shows how the scheduler's context structure is set up:

void _Scheduler_simple_Initialize( const Scheduler_Control *scheduler )
{
  Scheduler_simple_Context *context =
    _Scheduler_simple_Get_context( scheduler );

  _Chain_Initialize_empty( &context->Ready );
}

First a Scheduler_simple_Context structure is needed; its members are:

typedef struct {
  /**
   * @brief Basic scheduler context.
   */
  Scheduler_Context Base;

  /**
   * @brief One ready queue for all ready threads.
   */
  Chain_Control Ready;
} Scheduler_simple_Context;

As you can see, the context contains only the base scheduler context and a single chain, and this chain is the entire implementation of the algorithm. Since everything is built on the chain, it must be initialized first:

static inline void _Chain_Initialize_empty( Chain_Control *the_chain )
{
  Chain_Node *head;
  Chain_Node *tail;

  _Assert( the_chain != NULL );

  head = _Chain_Head( the_chain );
  tail = _Chain_Tail( the_chain );

  head->next = tail;
  head->previous = NULL;
  tail->previous = head;
}

Scheduling

For a queue-based scheduler, the schedule operation simply fetches the highest-priority element from the front of the queue:

static inline Thread_Control *_Scheduler_simple_Get_highest_ready(
  const Scheduler_Control *scheduler
)
{
  Scheduler_simple_Context *context =
    _Scheduler_simple_Get_context( scheduler );

  return (Thread_Control *) _Chain_First( &context->Ready );
}

Because insertions keep the queue ordered by priority (FIFO among equal priorities), the highest-priority element is always at the queue head, so it can be fetched directly from head->next.
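
For reference, _Chain_First is just a thin accessor over the chain head; a minimal sketch consistent with the chain API used above (the actual RTEMS inline may differ in detail):

static inline Chain_Node *_Chain_First( const Chain_Control *the_chain )
{
  /* The first element sits immediately after the permanent head node. */
  return _Chain_Immutable_head( the_chain )->next;
}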

Blocking a task

For a queue-based scheduler, blocking a task means extracting it from the queue so that it is no longer on the ready queue:

static inline void _Chain_Extract_unprotected( Chain_Node *the_node )
{
  Chain_Node *next;
  Chain_Node *previous;

  _Assert( !_Chain_Is_node_off_chain( the_node ) );

  next = the_node->next;
  previous = the_node->previous;
  next->previous = previous;
  previous->next = next;

#if defined(RTEMS_DEBUG)
  _Chain_Set_off_chain( the_node );
#endif
}

Readying a task

For a queue-based scheduler, readying a task has two parts:

  • compare the task to insert against the elements in the ready queue to find the position where its priority is less than or equal to the next element's
  • insert it into the queue at that position

The traversal of the ready queue looks like this:

while ( next != tail && !( *order )( key, to_insert, next ) ) {
  previous = next;
  next = _Chain_Next( next );
}

Here order is the predicate that decides, for each traversed element, whether the less-than-or-equal position has been reached. For the simple scheduler it is implemented as:

static inline bool _Scheduler_simple_Priority_less_equal(
  const void       *key,
  const Chain_Node *to_insert,
  const Chain_Node *next
)
{
  const unsigned int   *priority_to_insert;
  const Thread_Control *thread_next;

  (void) to_insert;
  priority_to_insert = (const unsigned int *) key;
  thread_next = (const Thread_Control *) next;

  return *priority_to_insert <= _Thread_Get_priority( thread_next );
}

With the insertion position found, a plain chain insert finishes the job:

static inline void _Chain_Insert_unprotected(
  Chain_Node *after_node,
  Chain_Node *the_node
)
{
  Chain_Node *before_node;

  _Assert( _Chain_Is_node_off_chain( the_node ) );

  the_node->previous = after_node;
  before_node = after_node->next;
  after_node->next = the_node;
  the_node->next = before_node;
  before_node->previous = the_node;
}

Updating a priority

For this queue-based priority scheduler, updating a priority consists of the following steps:

  • extract the task to update from the ready queue
  • as when readying a task, find the position in the queue matching the new priority
  • insert the task there
  • perform a schedule

The extraction step is the same _Chain_Extract_unprotected used when blocking; finding the position and inserting are identical to readying a task; and the final schedule step lets the highest-priority task run.
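
Putting the pieces together, a minimal sketch of the update flow, using only the helpers shown above (the function name and the inlined search loop are simplified from the snippets; this is not the verbatim RTEMS function):

/* Sketch: re-position a thread in the ready chain after a priority change. */
static void simple_update_priority_sketch(
  Scheduler_simple_Context *context,
  Thread_Control           *thread,
  unsigned int              new_priority
)
{
  Chain_Node       *node;
  Chain_Node       *previous;
  Chain_Node       *next;
  const Chain_Node *tail;

  node = &thread->Object.Node;

  /* Step 1: take the thread out of the ready queue. */
  _Chain_Extract_unprotected( node );

  /* Step 2: find the first node whose priority is >= new_priority. */
  previous = _Chain_Head( &context->Ready );
  next = _Chain_First( &context->Ready );
  tail = _Chain_Immutable_tail( &context->Ready );
  while ( next != tail &&
          !_Scheduler_simple_Priority_less_equal( &new_priority, node, next ) ) {
    previous = next;
    next = _Chain_Next( next );
  }

  /* Step 3: re-insert the thread at that position. */
  _Chain_Insert_unprotected( previous, node );

  /* Step 4: a subsequent schedule picks the new queue head. */
}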

Summary

At this point the logic of the simple priority scheduler is fully clear. Unlike the priority scheduler it uses no bitmap, so the design is simpler and the memory footprint smaller, since the bitmap costs some memory. The flip side is time: a bitmap lookup is O(1), while walking the ready chain is O(n).
The simple priority scheduler thus trades some performance for a smaller footprint, which makes it a good fit for very simple task systems.

2025-04-03

In RTEMS, memory can be allocated from a region. Such memory is independent of what malloc hands out, which enables managing separate memory areas. This post shows how to use regions.

1. Purpose

A region is a standalone memory area with a fixed starting address, size and set of attributes. Using a region avoids the performance cost of memory reclamation and compaction on the main allocator, and different tasks can each be given their own region.

Moreover, if an allocation from a region cannot be satisfied, RTEMS can put the task on a wait queue until freed memory becomes available.

2. Implementation

The region creation code is as follows:

rtems_status_code rtems_region_create(
  rtems_name      name,
  void           *starting_address,
  uintptr_t       length,
  uintptr_t       page_size,
  rtems_attribute attribute_set,
  rtems_id       *id
)
{
  rtems_status_code  return_status;
  Region_Control    *the_region;

  if ( !rtems_is_name_valid( name ) ) {
    return RTEMS_INVALID_NAME;
  }

  if ( id == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  if ( starting_address == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  the_region = _Region_Allocate();

  if ( !the_region )
    return_status = RTEMS_TOO_MANY;
  else {
    _Thread_queue_Object_initialize( &the_region->Wait_queue );

    if ( _Attributes_Is_priority( attribute_set ) ) {
      the_region->wait_operations = &_Thread_queue_Operations_priority;
    } else {
      the_region->wait_operations = &_Thread_queue_Operations_FIFO;
    }

    the_region->maximum_segment_size = _Heap_Initialize(
      &the_region->Memory,
      starting_address,
      length,
      page_size
    );

    if ( !the_region->maximum_segment_size ) {
      _Region_Free( the_region );
      return_status = RTEMS_INVALID_SIZE;
    } else {
      the_region->attribute_set = attribute_set;

      *id = _Objects_Open_u32(
        &_Region_Information,
        &the_region->Object,
        name
      );
      return_status = RTEMS_SUCCESSFUL;
    }
  }

  _Objects_Allocator_unlock();
  return return_status;
}

2.1 Parameters

name: name of the region

starting_address: start address of the memory area to manage

length: size of the region in bytes

page_size: page size of the region

attribute_set: attributes, including the wait-queue scheduling discipline

id: returned region id

2.2 Wait queue disciplines

RTEMS provides two disciplines for the region's wait queue: FIFO, i.e. first in, first out, and priority, i.e. priority-based ordering. Their operation tables are:

const Thread_queue_Operations _Thread_queue_Operations_FIFO = {
  .priority_actions = _Thread_queue_Do_nothing_priority_actions,
  .enqueue = _Thread_queue_FIFO_enqueue,
  .extract = _Thread_queue_FIFO_extract,
  .surrender = _Thread_queue_FIFO_surrender,
  .first = _Thread_queue_FIFO_first
};

const Thread_queue_Operations _Thread_queue_Operations_priority = {
  .priority_actions = _Thread_queue_Priority_priority_actions,
  .enqueue = _Thread_queue_Priority_enqueue,
  .extract = _Thread_queue_Priority_extract,
  .surrender = _Thread_queue_Priority_surrender,
  .first = _Thread_queue_Priority_first
};

Threads end up on this wait queue, blocked, when the region cannot satisfy an allocation.
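
For example, a caller can opt in to blocking until memory becomes available. A hedged sketch (RTEMS_WAIT and RTEMS_TIMEOUT are the standard RTEMS option/status names; the size and timeout values are made up):

void *segment;
rtems_status_code sc;

/* Block for up to 100 clock ticks if the region cannot satisfy the
   request right now; RTEMS_NO_WAIT would instead fail immediately. */
sc = rtems_region_get_segment( id, 4096, RTEMS_WAIT, 100, &segment );
if ( sc == RTEMS_TIMEOUT ) {
  /* No segment became available within the timeout. */
}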

2.3 Initializing the heap

_Heap_Initialize sets up a heap as the region's Memory member, covering the area at starting_address.

2.4 Setting attributes and allocating an id

The region structure's attribute field is then set, and _Objects_Open_u32 assigns the region id. The region structure looks like this:

typedef struct {
  Objects_Control                Object;
  Thread_queue_Control           Wait_queue;           /* waiting threads */
  const Thread_queue_Operations *wait_operations;
  uintptr_t                      maximum_segment_size; /* in bytes */
  rtems_attribute                attribute_set;
  Heap_Control                   Memory;
} Region_Control;

3. Testing region allocation

Testing is straightforward: create a region with rtems_region_create and allocate segments from it:

static const rtems_name name = rtems_build_name('K', 'Y', 'L', 'I');

#define CONFIGURE_MAXIMUM_REGIONS 1

static void test_region_create(void)
{
  rtems_status_code sc;
  rtems_id          id;
  void             *seg_addr1;
  void             *seg_addr2;

  sc = rtems_region_create(
    name,
    (void *) 0x4000000,
    640,
    16,
    RTEMS_DEFAULT_ATTRIBUTES,
    &id
  );
  printf( "sc=%d\n", sc );

  sc = rtems_region_get_segment(
    id, 32, RTEMS_DEFAULT_OPTIONS, RTEMS_NO_TIMEOUT, &seg_addr1
  );
  printf( "sc=%d seg_addr=%p\n", sc, seg_addr1 );

  sc = rtems_region_get_segment(
    id, 32, RTEMS_DEFAULT_OPTIONS, RTEMS_NO_TIMEOUT, &seg_addr2
  );
  printf( "sc=%d seg_addr=%p\n", sc, seg_addr2 );

  sc = rtems_region_return_segment( id, seg_addr1 );
  printf( "sc=%d\n", sc );

  sc = rtems_region_return_segment( id, seg_addr2 );
  printf( "sc=%d\n", sc );
}

3.1 Number of regions

The number of regions must be configured via the CONFIGURE_MAXIMUM_REGIONS macro; its value is the maximum number of regions the system can create.
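
In a typical RTEMS test application these configuration macros sit next to the test code, before the confdefs.h include; a minimal sketch (driver and task counts are placeholders):

#define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
#define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

#define CONFIGURE_MAXIMUM_TASKS   1
#define CONFIGURE_MAXIMUM_REGIONS 1

#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_INIT
#include <rtems/confdefs.h>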

3.2 Minimum size

A region needs a minimum amount of memory to work: 64 bytes in practice, or more precisely at least 52 bytes. Below that, rtems_region_create fails with an invalid-size status.
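
A quick way to observe this (a sketch reusing the test scaffold above; the 16-byte length is deliberately far below the minimum):

rtems_status_code sc;
rtems_id          tiny_id;

/* Too small for _Heap_Initialize to carve out a first block, so the
   create call is expected to fail with RTEMS_INVALID_SIZE. */
sc = rtems_region_create(
  name, (void *) 0x4000000, 16, 16, RTEMS_DEFAULT_ATTRIBUTES, &tiny_id
);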

3.3 Getting a segment

rtems_region_get_segment obtains a segment from the region, which the program can then use directly.

3.4 Returning a segment to the region

rtems_region_return_segment returns a segment to the region so that it can be allocated again.

3.5 Memory layout of the test program

The resulting memory layout:

(gdb) x/14g 0x4000000
0x4000000:  0x0000000004000280  0x0000000000000031
0x4000010:  0x0000000004000060  0x0000000000105d68
0x4000020:  0x0000000000000000  0x0000000000000000
0x4000030:  0x0000000000000030  0x0000000000000030
0x4000040:  0x0000000000105d68  0x0000000000105d68
0x4000050:  0x0000000000000000  0x0000000000000000
0x4000060:  0x0000000000000000  0x0000000000000211

Reading this against the heap block structure shows the allocations are all as expected.

4. Summary

The experiment above makes the allocation behavior and principle of RTEMS regions clear: memory blocks at a fixed address are handed out for programs to use.

2025-04-03

Besides malloc, RTEMS also supports region allocation; both are heap-based. A partition, in contrast, is a memory pool managed directly through a linked list, with no per-block bookkeeping. This post covers partition memory allocation.

1. Test code

Reusing the region test code with minor changes gives a partition test:

#define CONFIGURE_MAXIMUM_PARTITIONS 1

static void test_partition_create(void)
{
  rtems_status_code sc;
  rtems_id          id;
  void             *partition_buf1;
  void             *partition_buf2;

  sc = rtems_partition_create(
    name,
    (void *) 0x4000000,
    128,
    32,
    RTEMS_DEFAULT_ATTRIBUTES,
    &id
  );
  printf( "sc=%d \n", sc );

  sc = rtems_partition_get_buffer( id, &partition_buf1 );
  printf( "sc=%d buf1=%p\n", sc, partition_buf1 );

  sc = rtems_partition_get_buffer( id, &partition_buf2 );
  printf( "sc=%d buf2=%p\n", sc, partition_buf2 );

  sc = rtems_partition_return_buffer( id, partition_buf1 );
  printf( "sc=%d \n", sc );

  sc = rtems_partition_return_buffer( id, partition_buf2 );
  printf( "sc=%d \n", sc );
}

2. Running the test

Running it gives the following:

(gdb) x/14xg 0x4000000
0x4000000:  0x0000000004000020  0x00000000001056d0
0x4000010:  0x0000000000000000  0x0000000000000000
0x4000020:  0x0000000004000040  0x00000000001056d0
0x4000030:  0x0000000000000000  0x0000000000000000
0x4000040:  0x0000000004000060  0x00000000001056d0
0x4000050:  0x0000000000000000  0x0000000000000000
0x4000060:  0x00000000001056d8  0x0000000004000040

As you can see, I requested 128 bytes of memory divided into 32-byte buffers, so each buffer is 0x20 bytes. With the start address at 0x4000000, the split is:

0x4000000/0x4000020/0x4000040/0x4000060

Note that partition buffers carry no block header, saving 16 bytes of overhead per buffer. They also take no part in memory reclamation or compaction; the pool is simply carved up however you like, which makes it the most efficient approach overall.

Looking closely at the partition memory, the free buffers are managed through a 16-byte linked list node stored in each buffer. Let's turn to the code.
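
This is visible in the dump above: while a buffer sits on the free list, its first 16 bytes are a Chain_Node. We can inspect one directly (a sketch; the address comes from this test run):

/* A free buffer doubles as a Chain_Node on the partition's free list:
   the first word is the next pointer, the second the previous pointer. */
const Chain_Node *n = (const Chain_Node *) 0x4000000;
printf( "next=%p prev=%p\n", (void *) n->next, (void *) n->previous );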

3. Code walkthrough

3.1 create

The partition creation code is as follows:

rtems_status_code rtems_partition_create(
  rtems_name      name,
  void           *starting_address,
  uintptr_t       length,
  size_t          buffer_size,
  rtems_attribute attribute_set,
  rtems_id       *id
)
{
  Partition_Control *the_partition;

  if ( !rtems_is_name_valid( name ) ) {
    return RTEMS_INVALID_NAME;
  }

  if ( id == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  if ( starting_address == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  if ( length == 0 ) {
    return RTEMS_INVALID_SIZE;
  }

  if ( buffer_size == 0 ) {
    return RTEMS_INVALID_SIZE;
  }

  if ( length < buffer_size ) {
    return RTEMS_INVALID_SIZE;
  }

  /*
   * Ensure that the buffer size is an integral multiple of the pointer size so
   * that each buffer begin meets the chain node alignment.
   */
  if ( buffer_size % CPU_SIZEOF_POINTER != 0 ) {
    return RTEMS_INVALID_SIZE;
  }

  if ( buffer_size < sizeof( Chain_Node ) )
    return RTEMS_INVALID_SIZE;

  /*
   * Ensure that the buffer area starting address is aligned on a pointer
   * boundary so that each buffer begin meets the chain node alignment.
   */
  if ( (uintptr_t) starting_address % CPU_SIZEOF_POINTER != 0 ) {
    return RTEMS_INVALID_ADDRESS;
  }

#if defined(RTEMS_MULTIPROCESSING)
  if ( !_System_state_Is_multiprocessing ) {
    attribute_set = _Attributes_Clear( attribute_set, RTEMS_GLOBAL );
  }
#endif

  the_partition = _Partition_Allocate();

  if ( !the_partition ) {
    _Objects_Allocator_unlock();
    return RTEMS_TOO_MANY;
  }

#if defined(RTEMS_MULTIPROCESSING)
  if (
    _Attributes_Is_global( attribute_set ) &&
    !( _Objects_MP_Allocate_and_open(
      &_Partition_Information,
      name,
      the_partition->Object.id,
      false
    ) )
  ) {
    _Objects_Free( &_Partition_Information, &the_partition->Object );
    _Objects_Allocator_unlock();
    return RTEMS_TOO_MANY;
  }
#endif

  _Partition_Initialize(
    the_partition,
    starting_address,
    length,
    buffer_size,
    attribute_set
  );

  *id = _Objects_Open_u32(
    &_Partition_Information,
    &the_partition->Object,
    name
  );

#if defined(RTEMS_MULTIPROCESSING)
  if ( _Attributes_Is_global( attribute_set ) )
    _Partition_MP_Send_process_packet(
      PARTITION_MP_ANNOUNCE_CREATE,
      the_partition->Object.id,
      name,
      0 /* Not used */
    );
#endif

  _Objects_Allocator_unlock();
  return RTEMS_SUCCESSFUL;
}

The key call here is

_Partition_Initialize( the_partition, starting_address, length, buffer_size, attribute_set );

_Partition_Initialize calls _Chain_Initialize, whose implementation is:

void _Chain_Initialize(
  Chain_Control *the_chain,
  void          *starting_address,
  size_t         number_nodes,
  size_t         node_size
)
{
  size_t      count = number_nodes;
  Chain_Node *head = _Chain_Head( the_chain );
  Chain_Node *tail = _Chain_Tail( the_chain );
  Chain_Node *current = head;
  Chain_Node *next = starting_address;

  _Assert( node_size >= sizeof( *next ) );

  head->previous = NULL;

  while ( count-- ) {
    current->next = next;
    next->previous = current;
    current = next;
    next = (Chain_Node *) _Addresses_Add_offset( (void *) next, node_size );
  }

  current->next = tail;
  tail->previous = current;
}

This sets up a doubly linked list: count is the number of buffers, and the loop threads a node through each buffer in turn. The node itself is very simple:

typedef struct Chain_Node {
  /** This points to the node after this one on this chain. */
  struct Chain_Node *next;
  /** This points to the node immediate prior to this one on this chain. */
  struct Chain_Node *previous;
} Chain_Node;

This matches the memory layout seen in the test run exactly.

3.2 get buffer

rtems_partition_get_buffer is implemented as follows:

rtems_status_code rtems_partition_get_buffer(
  rtems_id   id,
  void     **buffer
)
{
  Partition_Control *the_partition;
  ISR_lock_Context   lock_context;
  void              *the_buffer;

  if ( buffer == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  the_partition = _Partition_Get( id, &lock_context );

  if ( the_partition == NULL ) {
#if defined(RTEMS_MULTIPROCESSING)
    return _Partition_MP_Get_buffer( id, buffer );
#else
    return RTEMS_INVALID_ID;
#endif
  }

  _Partition_Acquire_critical( the_partition, &lock_context );

  the_buffer = _Partition_Allocate_buffer( the_partition );

  if ( the_buffer == NULL ) {
    _Partition_Release( the_partition, &lock_context );
    return RTEMS_UNSATISFIED;
  }

  the_partition->number_of_used_blocks += 1;
  _Partition_Release( the_partition, &lock_context );
  *buffer = the_buffer;
  return RTEMS_SUCCESSFUL;
}

Once you are familiar with the code this is quite simple. The call of interest is _Partition_Allocate_buffer, which calls _Chain_Get_unprotected and in turn _Chain_Get_first_unprotected, implemented as follows:

static inline Chain_Node *_Chain_Get_first_unprotected(
  Chain_Control *the_chain
)
{
  Chain_Node *head;
  Chain_Node *old_first;
  Chain_Node *new_first;

  _Assert( !_Chain_Is_empty( the_chain ) );

  head = _Chain_Head( the_chain );
  old_first = head->next;
  new_first = old_first->next;

  head->next = new_first;
  new_first->previous = head;

#if defined(RTEMS_DEBUG)
  _Chain_Set_off_chain( old_first );
#endif

  return old_first;
}

So getting a buffer returns head->next and rewires head->next to head->next->next; it is effectively a list "delete head" operation.

3.3 return buffer

rtems_partition_return_buffer is implemented as follows:

rtems_status_code rtems_partition_return_buffer(
  rtems_id  id,
  void     *buffer
)
{
  Partition_Control *the_partition;
  ISR_lock_Context   lock_context;

  the_partition = _Partition_Get( id, &lock_context );

  if ( the_partition == NULL ) {
#if defined(RTEMS_MULTIPROCESSING)
    return _Partition_MP_Return_buffer( id, buffer );
#else
    return RTEMS_INVALID_ID;
#endif
  }

  _Partition_Acquire_critical( the_partition, &lock_context );

  if ( !_Partition_Is_address_a_buffer_begin( the_partition, buffer ) ) {
    _Partition_Release( the_partition, &lock_context );
    return RTEMS_INVALID_ADDRESS;
  }

  _Partition_Free_buffer( the_partition, buffer );
  the_partition->number_of_used_blocks -= 1;
  _Partition_Release( the_partition, &lock_context );
  return RTEMS_SUCCESSFUL;
}

Likewise, the call of interest is _Partition_Free_buffer, which leads to _Chain_Append_unprotected:

static inline void _Chain_Append_unprotected(
  Chain_Control *the_chain,
  Chain_Node    *the_node
)
{
  Chain_Node *tail;
  Chain_Node *old_last;

  _Assert( _Chain_Is_node_off_chain( the_node ) );

  tail = _Chain_Tail( the_chain );
  old_last = tail->previous;

  the_node->next = tail;
  tail->previous = the_node;
  old_last->next = the_node;
  the_node->previous = old_last;
}

This is simply a tail insertion on the doubly linked list (list add tail).

4. Summary

That covers the RTEMS partition memory operations. When no fragmentation management is needed, partition allocation amounts to a memory pool: simple and efficient.

2025-04-03

One step in bootcard is heap initialization. To understand memory initialization in more detail, this post takes _Heap_Initialize as the entry point and examines what it actually does.

1. Which heaps RTEMS initializes by default

From the earlier analysis of the boot flow we know RTEMS initializes several heaps:

  • _Workspace_Handler_initialization calls _Heap_Initialize
  • _Malloc_Initialize calls _Heap_Initialize
  • the cache_coherent_heap set up via add_area calls _Heap_Initialize

Three heap areas in total. This post takes the malloc heap as the example and focuses on its _Heap_Initialize call.

2. Code walkthrough

uintptr_t _Heap_Initialize(
  Heap_Control *heap,
  void         *heap_area_begin_ptr, // start address
  uintptr_t     heap_area_size,      // size
  uintptr_t     page_size
)
{
  Heap_Statistics *const stats = &heap->stats;
  uintptr_t const heap_area_begin = (uintptr_t) heap_area_begin_ptr;
  uintptr_t const heap_area_end = heap_area_begin + heap_area_size;
  uintptr_t first_block_begin = 0;
  uintptr_t first_block_size = 0;
  uintptr_t last_block_begin = 0;
  uintptr_t min_block_size = 0;
  bool area_ok = false;
  Heap_Block *first_block = NULL;
  Heap_Block *last_block = NULL;

  // If page_size is not set, use the CPU alignment; otherwise align the
  // requested page size up to the CPU alignment.
  if ( page_size == 0 ) {
    page_size = CPU_ALIGNMENT;
  } else {
    page_size = _Heap_Align_up( page_size, CPU_ALIGNMENT );

    if ( page_size < CPU_ALIGNMENT ) {
      /* Integer overflow */
      return 0;
    }
  }

  // Compute the minimum block size.
  min_block_size = _Heap_Min_block_size( page_size );

  // Determine the addresses of the first and the last block.
  area_ok = _Heap_Get_first_and_last_block(
    heap_area_begin,
    heap_area_size,
    page_size,
    min_block_size,
    &first_block,
    &last_block
  );
  if ( !area_ok ) {
    return 0;
  }

  memset(heap, 0, sizeof(*heap));

#ifdef HEAP_PROTECTION
  heap->Protection.block_initialize = _Heap_Protection_block_initialize_default;
  heap->Protection.block_check = _Heap_Protection_block_check_default;
  heap->Protection.block_error = _Heap_Protection_block_error_default;
#endif

  // Size of the first block: start of the last block minus start of the
  // first block.
  first_block_begin = (uintptr_t) first_block;
  last_block_begin = (uintptr_t) last_block;
  first_block_size = last_block_begin - first_block_begin;

  // Set up the first block. Per the block layout, prev_size is set to the
  // end of the heap area, and next/prev point at the free list tail/head,
  // which effectively initializes the free list.
  /* First block */
  first_block->prev_size = heap_area_end;
  first_block->size_and_flag = first_block_size | HEAP_PREV_BLOCK_USED;
  first_block->next = _Heap_Free_list_tail( heap );
  first_block->prev = _Heap_Free_list_head( heap );
  _Heap_Protection_block_initialize( heap, first_block );

  // Record the heap information: page size, minimum block size, heap
  // bounds, first and last block addresses; and make first_block the
  // first element of the free list.
  /* Heap control */
  heap->page_size = page_size;
  heap->min_block_size = min_block_size;
  heap->area_begin = heap_area_begin;
  heap->area_end = heap_area_end;
  heap->first_block = first_block;
  heap->last_block = last_block;
  _Heap_Free_list_head( heap )->next = first_block;
  _Heap_Free_list_tail( heap )->prev = first_block;

  // Set the last block's prev_size to the first block's size.
  /* Last block */
  last_block->prev_size = first_block_size;
  last_block->size_and_flag = 0;
  _Heap_Set_last_block_size( heap );
  _Heap_Protection_block_initialize( heap, last_block );

  // Statistics: total size, free size, block counts.
  /* Statistics */
  stats->size = first_block_size;
  stats->free_size = first_block_size;
  stats->min_free_size = first_block_size;
  stats->free_blocks = 1;
  stats->max_free_blocks = 1;

  _Heap_Protection_set_delayed_free_fraction( heap, 2 );

  _HAssert( _Heap_Is_aligned( heap->page_size, CPU_ALIGNMENT ) );
  _HAssert( _Heap_Is_aligned( heap->min_block_size, page_size ) );
  _HAssert(
    _Heap_Is_aligned( _Heap_Alloc_area_of_block( first_block ), page_size )
  );
  _HAssert(
    _Heap_Is_aligned( _Heap_Alloc_area_of_block( last_block ), page_size )
  );

  // Return the size of the first block.
  return first_block_size;
}

As the comments show, heap initialization mainly constructs the Heap_Control structure, computes the Heap_Block extents to lay out the blocks, fills in each block's bookkeeping information, and finally sets the heap statistics.
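
As a quick illustration, the function can be exercised on a static buffer; a minimal usage sketch (the buffer and page size are arbitrary):

static Heap_Control example_heap;
static char         example_area[ 4096 ];

void example_heap_setup( void )
{
  /* Returns the usable size of the single initial free block,
     or 0 if the area is too small or misaligned. */
  uintptr_t size = _Heap_Initialize(
    &example_heap,
    example_area,
    sizeof( example_area ),
    16 /* page size; 0 falls back to CPU_ALIGNMENT */
  );

  (void) size;
}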

3. Verifying the constructed data

With gdb we can observe how the heap is constructed. First the Heap_Control structure, defined as:

struct Heap_Control {
  Heap_Block free_list;
  uintptr_t page_size;
  uintptr_t min_block_size;
  uintptr_t area_begin;
  uintptr_t area_end;
  Heap_Block *first_block;
  Heap_Block *last_block;
  Heap_Statistics stats;
#ifdef HEAP_PROTECTION
  Heap_Protection Protection;
#endif
};

The actual contents, before and after initialization:

(gdb) p _Malloc_Heap
$1 = {free_list = {prev_size = 0, size_and_flag = 0, next = 0x0, prev = 0x0},
  page_size = 0, min_block_size = 0, area_begin = 0, area_end = 0,
  first_block = 0x0, last_block = 0x0, stats = {lifetime_allocated = 0,
    lifetime_freed = 0, size = 0, free_size = 0, min_free_size = 0,
    free_blocks = 0, max_free_blocks = 0, used_blocks = 0, max_search = 0,
    searches = 0, allocs = 0, failed_allocs = 0, frees = 0, resizes = 0}}
(gdb) p _Malloc_Heap
$2 = {free_list = {prev_size = 0, size_and_flag = 0, next = 0x1342a0,
    prev = 0x1342a0}, page_size = 16, min_block_size = 32,
  area_begin = 1262239, area_end = 1072431104, first_block = 0x1342a0,
  last_block = 0x3febfff0, stats = {lifetime_allocated = 0,
    lifetime_freed = 0, size = 1071168848, free_size = 1071168848,
    min_free_size = 1071168848, free_blocks = 1, max_free_blocks = 1,
    used_blocks = 0, max_search = 0, searches = 0, allocs = 0,
    failed_allocs = 0, frees = 0, resizes = 0}}

The Heap_Block structure:

struct Heap_Block {
  uintptr_t prev_size;
  uintptr_t size_and_flag;
  Heap_Block *next;
  Heap_Block *prev;
};

The actual first and last blocks:

(gdb) p *(Heap_Block *)0x1342a0
$5 = {prev_size = 1072431104, size_and_flag = 1071168849,
  next = 0x102c90 <_Malloc_Heap>, prev = 0x102c90 <_Malloc_Heap>}
(gdb) p *(Heap_Block *)0x3febfff0
$6 = {prev_size = 1071168848, size_and_flag = 18446744072638382768,
  next = 0x0, prev = 0x0}

The Heap_Statistics structure:

typedef struct {
  /**
   * @brief Lifetime number of bytes allocated from this heap.
   *
   * This value is an integral multiple of the page size.
   */
  uint64_t lifetime_allocated;

  /**
   * @brief Lifetime number of bytes freed to this heap.
   *
   * This value is an integral multiple of the page size.
   */
  uint64_t lifetime_freed;

  /**
   * @brief Size of the allocatable area in bytes.
   *
   * This value is an integral multiple of the page size.
   */
  uintptr_t size;

  /**
   * @brief Current free size in bytes.
   *
   * This value is an integral multiple of the page size.
   */
  uintptr_t free_size;

  /**
   * @brief Minimum free size ever in bytes.
   *
   * This value is an integral multiple of the page size.
   */
  uintptr_t min_free_size;

  /** @brief Current number of free blocks. */
  uint32_t free_blocks;

  /** @brief Maximum number of free blocks ever. */
  uint32_t max_free_blocks;

  /** @brief Current number of used blocks. */
  uint32_t used_blocks;

  /** @brief Maximum number of blocks searched ever. */
  uint32_t max_search;

  /** @brief Total number of searches. */
  uint32_t searches;

  /** @brief Total number of successful allocations. */
  uint32_t allocs;

  /** @brief Total number of failed allocations. */
  uint32_t failed_allocs;

  /** @brief Total number of successful frees. */
  uint32_t frees;

  /** @brief Total number of successful resizes. */
  uint32_t resizes;
} Heap_Statistics;

The statistics were already part of the heap dump above; extracted here, before and after initialization:

(gdb) p _Malloc_Heap->stats
$10 = {lifetime_allocated = 0, lifetime_freed = 0, size = 0, free_size = 0,
  min_free_size = 0, free_blocks = 0, max_free_blocks = 0, used_blocks = 0,
  max_search = 0, searches = 0, allocs = 0, failed_allocs = 0, frees = 0,
  resizes = 0}
(gdb) p _Malloc_Heap->stats
$11 = {lifetime_allocated = 0, lifetime_freed = 0, size = 1071168848,
  free_size = 1071168848, min_free_size = 1071168848, free_blocks = 1,
  max_free_blocks = 1, used_blocks = 0, max_search = 0, searches = 0,
  allocs = 0, failed_allocs = 0, frees = 0, resizes = 0}

4. Summary

We now have a clear picture of how RTEMS initializes its heaps. These structures will be very useful when debugging the heap later.

2025-04-03

RTEMS supports blocking task scheduling. The name sounds a little odd; in plain terms, when a task hits an I/O or wait event it can be put into the blocked state and removed from the ready queue.

1. Test program

Blocking task scheduling can be tested in two ways:

  • actively waiting/sleeping
  • suspending the task

These two approaches map onto two functions: rtems_task_suspend and rtems_task_wake_after.

This post tests blocking through rtems_task_wake_after; rtems_task_suspend works the same way and is not demonstrated here.

To test rtems_task_wake_after we can build on an existing validation program: in its Process_asr function, change rtems_task_wake_after( RTEMS_YIELD_PROCESSOR ); to status = rtems_task_wake_after( rtems_clock_get_ticks_per_second() );
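
The change looks roughly like this (a sketch; Process_asr and status come from the original validation program):

/* Before: ticks == 0, so the call only yields the processor. */
rtems_task_wake_after( RTEMS_YIELD_PROCESSOR );

/* After: sleep for one second's worth of ticks, which blocks the task. */
status = rtems_task_wake_after( rtems_clock_get_ticks_per_second() );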

In this configuration 100 ticks equal 1 s. With the argument changed from 0 to 100 ticks, rtems_task_wake_after actively blocks the thread, as the code shows:

rtems_status_code rtems_task_wake_after(
  rtems_interval ticks
)
{
  /*
   * It is critical to obtain the executing thread after thread dispatching is
   * disabled on SMP configurations.
   */
  Thread_Control  *executing;
  Per_CPU_Control *cpu_self;

  cpu_self = _Thread_Dispatch_disable();
  executing = _Per_CPU_Get_executing( cpu_self );

  if ( ticks == 0 ) {
    _Thread_Yield( executing );
  } else {
    _Thread_Set_state( executing, STATES_WAITING_FOR_TIME );
    _Thread_Wait_flags_set( executing, THREAD_WAIT_STATE_BLOCKED );
    _Thread_Add_timeout_ticks( executing, cpu_self, ticks );
  }

  _Thread_Dispatch_direct( cpu_self );
  return RTEMS_SUCCESSFUL;
}

When ticks is non-zero, _Thread_Set_state( executing, STATES_WAITING_FOR_TIME ) is called. That function takes the thread state lock and then calls _Thread_Set_state_locked, implemented as follows:

States_Control _Thread_Set_state_locked(
  Thread_Control *the_thread,
  States_Control  state
)
{
  States_Control previous_state;
  States_Control next_state;

  _Assert( state != 0 );
  _Assert( _Thread_State_is_owner( the_thread ) );

  previous_state = the_thread->current_state;
  next_state = _States_Set( state, previous_state );
  the_thread->current_state = next_state;

  if ( _States_Is_ready( previous_state ) ) {
    _Scheduler_Block( the_thread );
  }

  return previous_state;
}

It first merges the blocking state into the thread's current state:

previous_state = the_thread->current_state;
next_state = _States_Set( state, previous_state );
the_thread->current_state = next_state;

Then, if the previous state was ready, it blocks the task directly:

_Scheduler_Block( the_thread );

2. How blocking is implemented in the scheduler

_Scheduler_Block is implemented as follows:

static inline void _Scheduler_Block( Thread_Control *the_thread )
{
  Chain_Node              *node;
  const Chain_Node        *tail;
  Scheduler_Node          *scheduler_node;
  const Scheduler_Control *scheduler;
  ISR_lock_Context         lock_context;

  node = _Chain_First( &the_thread->Scheduler.Scheduler_nodes );
  tail = _Chain_Immutable_tail( &the_thread->Scheduler.Scheduler_nodes );

  scheduler_node = SCHEDULER_NODE_OF_THREAD_SCHEDULER_NODE( node );
  scheduler = _Scheduler_Node_get_scheduler( scheduler_node );

  _Scheduler_Acquire_critical( scheduler, &lock_context );
  ( *scheduler->Operations.block )(
    scheduler,
    the_thread,
    scheduler_node
  );
  _Scheduler_Release_critical( scheduler, &lock_context );

  node = _Chain_Next( node );

  while ( node != tail ) {
    scheduler_node = SCHEDULER_NODE_OF_THREAD_SCHEDULER_NODE( node );
    scheduler = _Scheduler_Node_get_scheduler( scheduler_node );

    _Scheduler_Acquire_critical( scheduler, &lock_context );
    ( *scheduler->Operations.withdraw_node )(
      scheduler,
      the_thread,
      scheduler_node,
      THREAD_SCHEDULER_BLOCKED
    );
    _Scheduler_Release_critical( scheduler, &lock_context );

    node = _Chain_Next( node );
  }
}

As shown here, the Scheduler_Control is obtained from the thread's first scheduler node and its block operation is called directly; the remaining scheduler nodes are then traversed and the withdraw_node callback invoked on each.

We can inspect the installed operations, including withdraw_node, in gdb:

(gdb) p *(Scheduler_Control *)0x2d248
$18 = {context = 0x100820 <_Configuration_Scheduler_priority_dflt>,
  Operations = {initialize = 0x1eaf0 <_Scheduler_priority_Initialize>,
    schedule = 0x1ef10 <_Scheduler_priority_Schedule>,
    yield = 0x1f130 <_Scheduler_priority_Yield>,
    block = 0x1ebd0 <_Scheduler_priority_Block>,
    unblock = 0x1efd0 <_Scheduler_priority_Unblock>,
    update_priority = 0x1ecf0 <_Scheduler_priority_Update_priority>,
    map_priority = 0x1ea30 <_Scheduler_default_Map_priority>,
    unmap_priority = 0x1ea40 <_Scheduler_default_Unmap_priority>,
    ask_for_help = 0x0, reconsider_help_request = 0x0, withdraw_node = 0x0,
    make_sticky = 0x22720 <_Scheduler_default_Sticky_do_nothing>,
    clean_sticky = 0x22720 <_Scheduler_default_Sticky_do_nothing>,
    pin = 0x22760 <_Scheduler_default_Pin_or_unpin_do_nothing>,
    unpin = 0x22760 <_Scheduler_default_Pin_or_unpin_do_nothing>,
    add_processor = 0x0, remove_processor = 0x0,
    node_initialize = 0x1eb40 <_Scheduler_priority_Node_initialize>,
    node_destroy = 0x1ea50 <_Scheduler_default_Node_destroy>,
    release_job = 0x1ea60 <_Scheduler_default_Release_job>,
    cancel_job = 0x1ea70 <_Scheduler_default_Cancel_job>,
    start_idle = 0x1ea80 <_Scheduler_default_Start_idle>,
    set_affinity = 0x22770 <_Scheduler_default_Set_affinity>},
  maximum_priority = 255, name = 1431323680,
  is_non_preempt_mode_supported = true}

Here withdraw_node is not implemented (NULL). A thread's scheduler nodes are created one per configured scheduler, so with a single scheduler each thread has exactly one scheduler_node, as the following definition suggests:

#define _CONFIGURE_SCHEDULER_COUNT RTEMS_ARRAY_SIZE( _Scheduler_Table )

const size_t _Scheduler_Count = _CONFIGURE_SCHEDULER_COUNT;

Hence in _Thread_Initialize_scheduler_and_wait_nodes we find the loop:

do {
  ......
} while ( scheduler_index < _Scheduler_Count );

As for the block operation itself, it simply removes the task from the ready queue.
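
Conceptually, for the default priority scheduler the block operation boils down to the following (a simplified sketch with a hypothetical helper name, not the verbatim _Scheduler_priority_Block):

static void block_sketch( Thread_Control *the_thread )
{
  /* 1. Take the thread off its ready queue. */
  _Chain_Extract_unprotected( &the_thread->Object.Node );

  /* 2. For the bitmap-based priority scheduler, additionally clear the
     priority's bit in the bit map once its ready queue becomes empty,
     so the next schedule still finds the highest remaining priority
     in O(1). */

  /* 3. If the blocked thread was the executing one, pick the heir via
     the scheduler's get-highest-ready operation and request dispatch. */
}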