
The priority scheduling algorithm, as its name suggests, schedules tasks by their priority: the highest-priority ready task always gets the processor. This article describes the default RTEMS scheduler, the priority scheduler.

1. Basic Data Structures

By default the priority scheduler uses a bitmap to make scheduling decisions in O(1). Its context structure is as follows:

typedef struct {
  /**
   * @brief Basic scheduler context.
   */
  Scheduler_Context Base;

  /**
   * @brief Bit map to indicate non-empty ready queues.
   */
  Priority_bit_map_Control Bit_map;

  /**
   * @brief One ready queue per priority level.
   */
  Chain_Control Ready[ RTEMS_ZERO_LENGTH_ARRAY ];
} Scheduler_priority_Context;

The bitmap structure is defined as:

typedef struct {
  /**
   * @brief Each sixteen bit entry in this word is associated with one of the
   * sixteen entries in the bit map.
   */
  Priority_bit_map_Word major_bit_map;

  /**
   * @brief Each bit in the bit map indicates whether or not there are threads
   * ready at a particular priority.
   *
   * The mapping of individual priority levels to particular bits is processor
   * dependent as is the value of each bit used to indicate that threads are
   * ready at that priority.
   */
  Priority_bit_map_Word bit_map[ 16 ];
} Priority_bit_map_Control;

From these two data structures we can observe the following:

  • The common scheduler context is stored in Base, which holds fields such as the Lock and the Processor_mask.
  • The bitmap tracks which priorities have ready tasks: the major level is recorded in major_bit_map and the minor level in bit_map.
  • The O(1) bitmap lookup relies on the CLZ instruction (on AArch64).
  • Each priority value computed from the bitmap indexes one Ready chain.
  • Ready holds the per-priority chains for up to 256 priority levels.
  • The combination of an array and linked chains makes it possible to locate the highest-priority task quickly.

2. The CLZ Instruction

The key to the fast priority lookup is the CLZ instruction, which counts the number of zero bits, starting from the most significant bit, until the first 1 is encountered, as illustrated below.

(Figure: illustration of the CLZ, count leading zeros, instruction)

RTEMS uses the GNU built-in version, as shown below:

static inline unsigned int _Bitfield_Find_first_bit( unsigned int value )
{
  unsigned int bit_number;

#if ( CPU_USE_GENERIC_BITFIELD_CODE == FALSE )
  _CPU_Bitfield_Find_first_bit( value, bit_number );
#elif defined(__GNUC__)
  bit_number = (unsigned int) __builtin_clz( value )
    - __SIZEOF_INT__ * __CHAR_BIT__ + 16;
#else
  if ( value < 0x100 ) {
    bit_number = _Bitfield_Leading_zeros[ value ] + 8;
  } else {
    bit_number = _Bitfield_Leading_zeros[ value >> 8 ];
  }
#endif

  return bit_number;
}

_Bitfield_Find_first_bit therefore returns the number of leading zeros before the first set bit, counted within a 16-bit word.
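
To make the lookup concrete, here is a small self-contained sketch of the two-level O(1) search. It assumes the generic 16x16 split (a major word selecting one of 16 minor words, with bit 0 being the most significant bit); it mirrors the mechanism described above but is an illustration, not the RTEMS source.

#include <stdint.h>
#include <stdio.h>

/* Leading zeros within a 16-bit word, i.e. the bit number of the first set
 * bit counted from the most significant bit. The argument must be non-zero,
 * since __builtin_clz(0) is undefined; a 32-bit int is assumed. */
static unsigned int find_first_bit16( uint16_t value )
{
  return (unsigned int) __builtin_clz( value ) - 16u;
}

/* Two-level lookup: the major word tells which of the 16 minor words is
 * non-empty, the chosen minor word tells which priority in that group is
 * ready. With the 16x16 split the priority is major * 16 + minor, and 0 is
 * the highest priority. */
static unsigned int highest_ready_priority(
  uint16_t       major_map,
  const uint16_t minor_maps[ 16 ]
)
{
  unsigned int major = find_first_bit16( major_map );
  unsigned int minor = find_first_bit16( minor_maps[ major ] );

  return major * 16u + minor;
}

int main( void )
{
  uint16_t minor_maps[ 16 ] = { 0 };
  uint16_t major_map;

  /* mark priority 37 (major 2, minor 5) as ready */
  minor_maps[ 2 ] |= (uint16_t) ( 0x8000u >> 5 );
  major_map = (uint16_t) ( 0x8000u >> 2 );

  printf( "highest ready priority: %u\n",
          highest_ready_priority( major_map, minor_maps ) );
  return 0;
}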

3. The Bitmap

Because of the two-level structure, setting or updating the bitmap involves both the major word and the minor word selected by the major index. The add operation is:

static inline void _Priority_bit_map_Add(
  Priority_bit_map_Control     *bit_map,
  Priority_bit_map_Information *bit_map_info
)
{
  *bit_map_info->minor |= bit_map_info->ready_minor;
  bit_map->major_bit_map |= bit_map_info->ready_major;
}

The major_bit_map update is straightforward: ready_major is OR-ed into it directly. The minor word, however, has to be selected from the array by the major index, which is why the initialization sets up the minor pointer as follows.

In _Priority_bit_map_Initialize_information:

bit_map_info->minor = &bit_map->bit_map[ _Priority_Bits_index( major ) ];

Here we can see that minor stores the address of bit_map->bit_map[ x ].

Similarly, the remove operation on the bitmap is easy to follow:

static inline void _Priority_bit_map_Remove(
  Priority_bit_map_Control     *bit_map,
  Priority_bit_map_Information *bit_map_info
)
{
  *bit_map_info->minor &= bit_map_info->block_minor;

  if ( *bit_map_info->minor == 0 )
    bit_map->major_bit_map &= bit_map_info->block_major;
}

Note that the minor and major bits are cleared with a bitwise AND; the detail that makes this work is in how the masks are prepared:

mask = _Priority_Mask( major );
bit_map_info->ready_major = mask;
bit_map_info->block_major = (Priority_bit_map_Word) ~mask;

mask = _Priority_Mask( minor );
bit_map_info->ready_minor = mask;
bit_map_info->block_minor = (Priority_bit_map_Word) ~mask;

The values used with the AND are the complemented masks, so the AND clears exactly the bit that the corresponding OR sets.
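
A tiny worked example of this set/clear pattern, assuming the generic mapping where a priority p splits into major = p / 16 and minor = p % 16 and _Priority_Mask( b ) is 0x8000 >> b (bit 0 is the most significant bit); CPU-specific variants may differ:

#include <stdint.h>
#include <stdio.h>

int main( void )
{
  unsigned int priority = 37;
  unsigned int minor    = priority % 16;                  /* 5 */

  uint16_t mask        = (uint16_t) ( 0x8000u >> minor ); /* _Priority_Mask( minor )               */
  uint16_t ready_minor = mask;                            /* OR-ed in by _Priority_bit_map_Add     */
  uint16_t block_minor = (uint16_t) ~mask;                /* AND-ed in by _Priority_bit_map_Remove */

  uint16_t word = 0;
  word |= ready_minor;                                    /* add: priority 37 is now ready         */
  printf( "after add:    0x%04x\n", word );               /* prints 0x0400                         */
  word &= block_minor;                                    /* remove: the same bit is cleared       */
  printf( "after remove: 0x%04x\n", word );               /* prints 0x0000                         */
  return 0;
}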

4. The Array of Chains

The "zipper" structure that combines an array with linked lists looks like this:

(Figure: the Ready array, one chain of ready tasks per priority level)

Each priority level corresponds to one array entry, and each entry is a chain used as a FIFO queue, namely:

Chain_Control Ready[ RTEMS_ZERO_LENGTH_ARRAY ];

5. The Queue

Each priority level is managed as a FIFO queue: every insertion appends at the tail, every lookup takes the head, and removing a node from a doubly linked chain is itself O(1).

It follows that all ready-queue operations of the priority scheduler are O(1). The code is as follows:

/* lookup */
static inline Chain_Node *_Chain_First( const Chain_Control *the_chain )
{
  return _Chain_Immutable_head( the_chain )->next;
}

/* insert */
static inline void _Chain_Append_unprotected(
  Chain_Control *the_chain,
  Chain_Node    *the_node
)
{
  Chain_Node *tail;
  Chain_Node *old_last;

  _Assert( _Chain_Is_node_off_chain( the_node ) );

  tail = _Chain_Tail( the_chain );
  old_last = tail->previous;

  the_node->next = tail;
  tail->previous = the_node;
  old_last->next = the_node;
  the_node->previous = old_last;
}

/* remove */
static inline void _Chain_Extract_unprotected( Chain_Node *the_node )
{
  Chain_Node *next;
  Chain_Node *previous;

  _Assert( !_Chain_Is_node_off_chain( the_node ) );

  next = the_node->next;
  previous = the_node->previous;
  next->previous = previous;
  previous->next = next;

#if defined(RTEMS_DEBUG)
  _Chain_Set_off_chain( the_node );
#endif
}

6. Scheduler Initialization

Based on the analysis above, initializing the scheduler simply means clearing the bitmap and setting every ready chain to the empty state:

void _Scheduler_priority_Initialize( const Scheduler_Control *scheduler )
{
  Scheduler_priority_Context *context =
    _Scheduler_priority_Get_context( scheduler );

  _Priority_bit_map_Initialize( &context->Bit_map );
  _Scheduler_priority_Ready_queue_initialize(
    &context->Ready[ 0 ],
    scheduler->maximum_priority
  );
}

_Priority_bit_map_Initialize simply zeroes the bit_map structure with memset.
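
In other words, the initialization amounts to zeroing the whole structure. A minimal sketch of that idea (an equivalent formulation, not the verbatim source):

#include <string.h>

/* Clearing the bitmap means no priority level has ready threads. */
static void priority_bit_map_initialize_sketch( Priority_bit_map_Control *bit_map )
{
  memset( bit_map, 0, sizeof( *bit_map ) );
}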

_Scheduler_priority_Ready_queue_initialize walks over the whole Ready array and initializes each entry as an empty chain, as follows:

static inline void _Chain_Initialize_empty( Chain_Control *the_chain )
{
  Chain_Node *head;
  Chain_Node *tail;

  _Assert( the_chain != NULL );

  head = _Chain_Head( the_chain );
  tail = _Chain_Tail( the_chain );

  head->next = tail;
  head->previous = NULL;
  tail->previous = head;
}
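
For reference, the per-priority iteration itself is conceptually just a loop over the Ready array that empties each chain; the following is a sketch under that assumption, not the verbatim _Scheduler_priority_Ready_queue_initialize source:

/* One empty chain per priority level, 0 .. maximum_priority. */
static void ready_queue_initialize_sketch(
  Chain_Control    *ready_queues,
  Priority_Control  maximum_priority
)
{
  Priority_Control index;

  for ( index = 0; index <= maximum_priority; ++index ) {
    _Chain_Initialize_empty( &ready_queues[ index ] );
  }
}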

7. Making a Task Ready

Making a task ready means inserting it into the chain and then updating the priority bitmap. The corresponding functions are:

_Scheduler_priority_Ready_queue_update
_Scheduler_priority_Ready_queue_enqueue

update initializes or updates the ready-queue pointer and the bitmap information for the given priority, while enqueue appends the node to the ready chain and sets the bitmap bits with an OR, as shown below:

static inline void _Scheduler_priority_Ready_queue_update(
  Scheduler_priority_Ready_queue *ready_queue,
  unsigned int                    new_priority,
  Priority_bit_map_Control       *bit_map,
  Chain_Control                  *ready_queues
)
{
  ready_queue->current_priority = new_priority;
  ready_queue->ready_chain = &ready_queues[ new_priority ];

  _Priority_bit_map_Initialize_information(
    bit_map,
    &ready_queue->Priority_map,
    new_priority
  );
}

static inline void _Scheduler_priority_Ready_queue_enqueue(
  Chain_Node                     *node,
  Scheduler_priority_Ready_queue *ready_queue,
  Priority_bit_map_Control       *bit_map
)
{
  Chain_Control *ready_chain = ready_queue->ready_chain;

  _Chain_Append_unprotected( ready_chain, node );
  _Priority_bit_map_Add( bit_map, &ready_queue->Priority_map );
}

8. Blocking a Task

Blocking is the inverse of making a task ready: _Scheduler_priority_Ready_queue_extract removes the task from its ready chain; if the chain contained only this one node, the chain is re-initialized to empty and the bitmap bits are cleared, as follows:

static inline void _Scheduler_priority_Ready_queue_extract(
  Chain_Node                     *node,
  Scheduler_priority_Ready_queue *ready_queue,
  Priority_bit_map_Control       *bit_map
)
{
  Chain_Control *ready_chain = ready_queue->ready_chain;

  if ( _Chain_Has_only_one_node( ready_chain ) ) {
    _Chain_Initialize_empty( ready_chain );
    _Chain_Initialize_node( node );
    _Priority_bit_map_Remove( bit_map, &ready_queue->Priority_map );
  } else {
    _Chain_Extract_unprotected( node );
  }
}

9. Picking the Next Task

Scheduling consists of taking the highest-priority task off the ready queues and marking it as the one to execute; the corresponding function is _Scheduler_priority_Get_highest_ready:

static inline Thread_Control *_Scheduler_priority_Get_highest_ready(
  const Scheduler_Control *scheduler
)
{
  Scheduler_priority_Context *context =
    _Scheduler_priority_Get_context( scheduler );

  return (Thread_Control *) _Scheduler_priority_Ready_queue_first(
    &context->Bit_map,
    &context->Ready[ 0 ]
  );
}

_Scheduler_priority_Ready_queue_first simply returns the head node of the highest-priority non-empty ready chain.
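
Conceptually it combines the bitmap lookup with the chain head, roughly as in the sketch below. _Priority_bit_map_Get_highest is the RTEMS bitmap query, but the body shown here is an illustration rather than the verbatim source:

/* The bitmap yields the highest non-empty priority level; the corresponding
 * ready chain yields the first (oldest) node at that level. */
static Chain_Node *ready_queue_first_sketch(
  Priority_bit_map_Control *bit_map,
  Chain_Control            *ready_queues
)
{
  unsigned int index = _Priority_bit_map_Get_highest( bit_map );

  return _Chain_First( &ready_queues[ index ] );
}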

10. Updating a Task's Priority

A priority update involves the following four steps:

  • Extract the task from its current ready chain.
  • Update the ready-queue pointer and the bitmap information for the new priority.
  • If the new priority value requests append ordering (SCHEDULER_PRIORITY_IS_APPEND), enqueue the task at the tail of the chain.
  • Otherwise enqueue it at the head of the chain.

The implementation is as follows:

_Scheduler_priority_Ready_queue_extract(
  &the_thread->Object.Node,
  &the_node->Ready_queue,
  &context->Bit_map
);

_Scheduler_priority_Ready_queue_update(
  &the_node->Ready_queue,
  unmapped_priority,
  &context->Bit_map,
  &context->Ready[ 0 ]
);

if ( SCHEDULER_PRIORITY_IS_APPEND( new_priority ) ) {
  _Scheduler_priority_Ready_queue_enqueue(
    &the_thread->Object.Node,
    &the_node->Ready_queue,
    &context->Bit_map
  );
} else {
  _Scheduler_priority_Ready_queue_enqueue_first(
    &the_thread->Object.Node,
    &the_node->Ready_queue,
    &context->Bit_map
  );
}

11. Summary

At this point we can see that the priority scheduling algorithm is built on a bitmap plus per-priority queues; it selects tasks by their priority, and every operation is O(1) overall.

The algorithm is preemptive, and tasks of equal priority are served in FIFO order.


As discussed earlier, RTEMS reschedules under certain conditions; likewise, under certain conditions a task can call yield to give up the processor. This is done through rtems_task_wake_after: with a nonzero tick count the task is suspended for that interval (measured in system clock ticks) and automatically becomes ready again when the interval expires, while a tick count of zero simply yields the processor.

1. The Yield Function

Yielding is implemented by _Scheduler_Yield. Its implementation is similar to _Scheduler_Schedule, but unlike a plain schedule, which does not touch the ready queue, a yield removes the calling task from its ready queue and re-inserts it at the tail. As a result, every time a task yields it ends up at the back of its priority's queue.
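
A sketch of that extract-then-append sequence, expressed with the helpers introduced in the priority scheduler article (an illustration of the idea, not the verbatim _Scheduler_priority_Yield source):

/* Take the executing thread's node off its ready chain, put it back at the
 * tail, then let the scheduler pick the heir again. Threads of the same
 * priority that were queued behind it now run first. */
static void priority_yield_sketch(
  Chain_Node                     *thread_node,
  Scheduler_priority_Ready_queue *ready_queue,
  Priority_bit_map_Control       *bit_map
)
{
  _Scheduler_priority_Ready_queue_extract( thread_node, ready_queue, bit_map );
  _Scheduler_priority_Ready_queue_enqueue( thread_node, ready_queue, bit_map );
  /* ...followed by re-evaluating the highest-priority ready thread */
}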

2. Test Code

Yielding can be tested in many scenarios, as long as the test shows that, unlike a plain reschedule, a yield takes the calling task out of its ready queue and re-inserts it. This article again uses the signal-based test to exercise yielding. The relevant directive is:

rtems_status_code rtems_task_wake_after( rtems_interval ticks )
{
  /*
   * It is critical to obtain the executing thread after thread dispatching is
   * disabled on SMP configurations.
   */
  Thread_Control  *executing;
  Per_CPU_Control *cpu_self;

  cpu_self = _Thread_Dispatch_disable();
  executing = _Per_CPU_Get_executing( cpu_self );

  if ( ticks == 0 ) {
    _Thread_Yield( executing );
  } else {
    _Thread_Set_state( executing, STATES_WAITING_FOR_TIME );
    _Thread_Wait_flags_set( executing, THREAD_WAIT_STATE_BLOCKED );
    _Thread_Add_timeout_ticks( executing, cpu_self, ticks );
  }

  _Thread_Dispatch_direct( cpu_self );
  return RTEMS_SUCCESSFUL;
}

As shown, a nonzero ticks value makes the task wait; a value of 0 yields the processor instead. Based on the code above, the test code is as follows:

rtems_asr Process_asr( rtems_signal_set the_signal_set )
{
  rtems_status_code status;

  printf(
    "ASR - ENTRY - signal => %08" PRIxrtems_signal_set "\n",
    the_signal_set
  );

  switch ( the_signal_set ) {
    case RTEMS_SIGNAL_16:
    case RTEMS_SIGNAL_17:
    case RTEMS_SIGNAL_18 | RTEMS_SIGNAL_19:
      break;
    case RTEMS_SIGNAL_0:
    case RTEMS_SIGNAL_1:
      puts( "ASR - rtems_task_wake_after - yield processor" );
      status = rtems_task_wake_after( RTEMS_YIELD_PROCESSOR );
      directive_failed( status, "rtems_task_wake_after yield" );
      break;
    case RTEMS_SIGNAL_3:
      Asr_fired = TRUE;
      break;
  }

  printf(
    "ASR - EXIT - signal => %08" PRIxrtems_signal_set "\n",
    the_signal_set
  );
}

This code has to be combined with the test code from the "RTEMS scheduler - voluntary scheduling" article; this article only provides the implementation of the signal callback Process_asr.

As the handler shows, for signals 0 and 1 the CPU is yielded, which lets the other tasks run.

To show two tasks yielding to each other, the test code has task2 send a signal and then yield as well:

rtems_task Task_2( rtems_task_argument argument )
{
  rtems_status_code status;

  puts( "TA2 - rtems_signal_send - RTEMS_SIGNAL_17 to TA1" );
  status = rtems_signal_send( Task_id[ 1 ], RTEMS_SIGNAL_17 );
  directive_failed( status, "rtems_signal_send" );

  puts( "TA2 - rtems_task_wake_after - yield processor" );
  status = rtems_task_wake_after( RTEMS_YIELD_PROCESSOR );
  directive_failed( status, "rtems_task_wake_after" );

  puts( "TA2 - rtems_signal_send - RTEMS_SIGNAL_18 and RTEMS_SIGNAL_19 to TA1" );
  status = rtems_signal_send( Task_id[ 1 ], RTEMS_SIGNAL_18 | RTEMS_SIGNAL_19 );
  directive_failed( status, "rtems_signal_send" );

  TEST_END();
  rtems_test_exit( 0 );
}

Task2 sends signal 17, yields, and then sends signals 18 and 19. The resulting log is:

TA1 - rtems_signal_catch - RTEMS_INTERRUPT_LEVEL( 0 )
TA1 - rtems_signal_send - RTEMS_SIGNAL_16 to self
ASR - ENTRY - signal => 00010000
ASR - EXIT - signal => 00010000
TA1 - rtems_signal_send - RTEMS_SIGNAL_0 to self
ASR - ENTRY - signal => 00000001
ASR - rtems_task_wake_after - yield processor
TA2 - rtems_signal_send - RTEMS_SIGNAL_17 to TA1
TA2 - rtems_task_wake_after - yield processor
ASR - ENTRY - signal => 00020000
ASR - EXIT - signal => 00020000
ASR - EXIT - signal => 00000001

From the log we can see that after TA1 sends RTEMS_SIGNAL_0 to itself, its ASR yields; TA2 then gets to run, sends RTEMS_SIGNAL_17 to TA1 and yields in turn. The resulting order of Process_asr events is:

TA1 first sends RTEMS_SIGNAL_0; inside the ASR for that signal it yields, which is why the log shows ASR - ENTRY - signal => 00000001 with no matching EXIT yet. Once RTEMS_SIGNAL_17 has been delivered, the ASR handler logs ASR - ENTRY - signal => 00020000 and ASR - EXIT - signal => 00020000. Finally, because yielding inserts the task at the tail of its ready chain, the last line printed is

ASR - EXIT - signal => 00000001

3. Summary

With the test code and log above, the yield logic is clear: yielding gives up the CPU and inserts the task at the tail of its ready queue.


Still using the rtems_task_wake_after test case: if ticks is nonzero, the thread goes to sleep and waits. When the wait time expires, the thread is unblocked, which ends up calling the scheduler's _Scheduler_Unblock operation. This article continues the previous test case and walks through the RTEMS unblock path.

1. Test Program

Inside rtems_task_wake_after, the thread's state and wait flags are set atomically, and then a timeout of ticks is armed, as follows:

_Thread_Set_state( executing, STATES_WAITING_FOR_TIME );
_Thread_Wait_flags_set( executing, THREAD_WAIT_STATE_BLOCKED );
_Thread_Add_timeout_ticks( executing, cpu_self, ticks );

2. Code Walkthrough

Arming the timeout installs the watchdog callback _Thread_Timeout, as shown below:

static inline void _Thread_Add_timeout_ticks(
  Thread_Control    *the_thread,
  Per_CPU_Control   *cpu,
  Watchdog_Interval  ticks
)
{
  ISR_lock_Context lock_context;

  _ISR_lock_ISR_disable_and_acquire( &the_thread->Timer.Lock, &lock_context );

  the_thread->Timer.header = &cpu->Watchdog.Header[ PER_CPU_WATCHDOG_TICKS ];
  the_thread->Timer.Watchdog.routine = _Thread_Timeout;
  _Watchdog_Per_CPU_insert_ticks( &the_thread->Timer.Watchdog, cpu, ticks );

  _ISR_lock_Release_and_ISR_enable( &the_thread->Timer.Lock, &lock_context );
}

The _Thread_Timeout function is:

void _Thread_Timeout( Watchdog_Control *the_watchdog )
{
  Thread_Control *the_thread;

  the_thread = RTEMS_CONTAINER_OF(
    the_watchdog,
    Thread_Control,
    Timer.Watchdog
  );
  _Thread_Continue( the_thread, STATUS_TIMEOUT );
}

When the timeout fires, execution enters the key function _Thread_Continue:

void _Thread_Continue( Thread_Control *the_thread, Status_Control status )
{
  Thread_queue_Context queue_context;
  Thread_Wait_flags    wait_flags;
  bool                 unblock;

  _Thread_queue_Context_initialize( &queue_context );
  _Thread_queue_Context_clear_priority_updates( &queue_context );
  _Thread_Wait_acquire( the_thread, &queue_context );

  wait_flags = _Thread_Wait_flags_get( the_thread );

  if ( wait_flags != THREAD_WAIT_STATE_READY ) {
    Thread_Wait_flags wait_class;
    bool              success;

    _Thread_Wait_cancel( the_thread, &queue_context );

    the_thread->Wait.return_code = status;

    wait_class = wait_flags & THREAD_WAIT_CLASS_MASK;
    success = _Thread_Wait_flags_try_change_release(
      the_thread,
      wait_class | THREAD_WAIT_STATE_INTEND_TO_BLOCK,
      THREAD_WAIT_STATE_READY
    );

    if ( success ) {
      unblock = false;
    } else {
      _Assert(
        _Thread_Wait_flags_get( the_thread ) ==
          ( wait_class | THREAD_WAIT_STATE_BLOCKED )
      );
      _Thread_Wait_flags_set( the_thread, THREAD_WAIT_STATE_READY );
      unblock = true;
    }
  } else {
    unblock = false;
  }

  _Thread_Wait_release( the_thread, &queue_context );
  _Thread_Priority_update( &queue_context );

  if ( unblock ) {
    _Thread_Wait_tranquilize( the_thread );
    _Thread_Unblock( the_thread );

#if defined(RTEMS_MULTIPROCESSING)
    if ( !_Objects_Is_local_id( the_thread->Object.id ) ) {
      _Thread_MP_Free_proxy( the_thread );
    }
#endif
  }
}

Here we can clearly see the check wait_flags != THREAD_WAIT_STATE_READY. If the thread is not already in the READY wait state, the code first tries an atomic transition from INTEND_TO_BLOCK to READY with success = _Thread_Wait_flags_try_change_release; if that succeeds the thread never actually blocked and no unblock is needed. If it fails, the thread is in the BLOCKED state, so its wait flags are set to READY with _Thread_Wait_flags_set( the_thread, THREAD_WAIT_STATE_READY ).

After the thread is marked ready, the unblock flag is set to true (unblock = true;), and based on this flag _Thread_Unblock( the_thread ) is finally called.

The call chain of _Thread_Unblock is: _Thread_Unblock ---> _Thread_Clear_state ---> _Thread_Clear_state_locked ---> _Scheduler_Unblock

_Scheduler_Unblock puts the task back into the ready queue.
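
For the default priority scheduler this boils down to the enqueue helper shown earlier plus a preemption check, roughly as in the following sketch (an illustration, not the verbatim _Scheduler_priority_Unblock source):

/* Put the thread back on its ready chain and set its bitmap bit; if it now
 * outranks the current heir, a dispatch follows. */
static void priority_unblock_sketch(
  Chain_Node                     *thread_node,
  Scheduler_priority_Ready_queue *ready_queue,
  Priority_bit_map_Control       *bit_map
)
{
  _Scheduler_priority_Ready_queue_enqueue( thread_node, ready_queue, bit_map );
  /* if the unblocked thread's priority is numerically lower (i.e. higher)
   * than the heir's, it becomes the new heir and preemption is requested */
}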

3. Test Results

The backtrace when the thread's timeout fires is:

#0  _Thread_Timeout (the_watchdog=0x105be0 <_RTEMS_tasks_Objects+2024>) at ../../../cpukit/score/src/threadtimeout.c:110
#1  0x0000000000022234 in _Watchdog_Do_tickle (header=header@entry=0x1023a8 <_Per_CPU_Information+808>, first=0x105be0 <_RTEMS_tasks_Objects+2024>, now=102, lock=lock@entry=0x102398 <_Per_CPU_Information+792>, lock_context=lock_context@entry=0x105178 <_ISR_Stack_area_begin+8056>) at ../../../cpukit/score/src/watchdogtick.c:66
#2  0x00000000000222f4 in _Watchdog_Tick (cpu=0x102080 <_Per_CPU_Information>) at ../../../cpukit/score/src/watchdogtick.c:105
#3  0x0000000000026c3c in rtems_timecounter_tick () at ../../../cpukit/include/rtems/timecounter.h:101
#4  Clock_driver_timecounter_tick (arg=<optimized out>) at ../../../bsps/aarch64/include/../../shared/dev/clock/clockimpl.h:124
#5  Clock_isr (arg=<optimized out>) at ../../../bsps/aarch64/include/../../shared/dev/clock/clockimpl.h:237
#6  0x0000000000026d9c in bsp_interrupt_dispatch_entries (entry=0x1026e8 <arm_gt_interrupt_entry>) at ../../../bsps/include/bsp/irq-generic.h:571
#7  bsp_interrupt_handler_dispatch_unchecked (vector=30) at ../../../bsps/include/bsp/irq-generic.h:627
#8  bsp_interrupt_dispatch () at ../../../bsps/shared/dev/irq/arm-gicv2.c:98
#9  0x0000000000029410 in .AArch64_Interrupt_Handler () at ../../../cpukit/score/cpu/aarch64/aarch64-exception-interrupt.S:87

After _Thread_Timeout runs, the call stack looks like:

#0  _Scheduler_priority_Unblock (scheduler=0x2d248 <_Scheduler_Table>, the_thread=0x1059e0 <_RTEMS_tasks_Objects+1512>, node=0x105de0 <_RTEMS_tasks_Objects+2536>) at ../../../cpukit/include/rtems/score/schedulerimpl.h:108
#1  0x0000000000025008 in _Scheduler_Unblock (the_thread=0x1059e0 <_RTEMS_tasks_Objects+1512>) at ../../../cpukit/include/rtems/score/schedulerimpl.h:344
#2  _Thread_Clear_state_locked (the_thread=the_thread@entry=0x1059e0 <_RTEMS_tasks_Objects+1512>, state=state@entry=805396479) at ../../../cpukit/score/src/threadclearstate.c:65
#3  0x0000000000025070 in _Thread_Clear_state (the_thread=the_thread@entry=0x1059e0 <_RTEMS_tasks_Objects+1512>, state=805396479) at ../../../cpukit/score/src/threadclearstate.c:81
#4  0x0000000000021b18 in _Thread_Unblock (the_thread=0x1059e0 <_RTEMS_tasks_Objects+1512>) at ../../../cpukit/include/rtems/score/threadimpl.h:1098
#5  _Thread_Continue (the_thread=0x1059e0 <_RTEMS_tasks_Objects+1512>, status=STATUS_TIMEOUT) at ../../../cpukit/score/src/threadtimeout.c:91

4. Summary

By calling rtems_task_wake_after with a 1-second timeout we can exercise the scheduler's block and unblock paths: after the timeout callback runs and the blocked state is cleared, _Scheduler_Unblock is called directly.


RTEMS provides directives for updating a task's priority. The most common case is calling rtems_task_set_priority explicitly to adjust a task's priority; of course, other thread-related operations can also change the priority. This article explains priority updates in the simplest and most common way.

1. Function Implementation

The implementation of rtems_task_set_priority is as follows:

rtems_status_code rtems_task_set_priority(
  rtems_id             id,
  rtems_task_priority  new_priority,
  rtems_task_priority *old_priority_p
)
{
  Thread_Control          *the_thread;
  Thread_queue_Context     queue_context;
  const Scheduler_Control *scheduler;
  Priority_Control         old_priority;
  rtems_status_code        status;

  if ( old_priority_p == NULL ) {
    return RTEMS_INVALID_ADDRESS;
  }

  _Thread_queue_Context_initialize( &queue_context );
  _Thread_queue_Context_clear_priority_updates( &queue_context );
  the_thread = _Thread_Get( id, &queue_context.Lock_context.Lock_context );

  if ( the_thread == NULL ) {
#if defined(RTEMS_MULTIPROCESSING)
    return _RTEMS_tasks_MP_Set_priority( id, new_priority, old_priority_p );
#else
    return RTEMS_INVALID_ID;
#endif
  }

  _Thread_Wait_acquire_critical( the_thread, &queue_context );

  scheduler = _Thread_Scheduler_get_home( the_thread );
  old_priority = _Thread_Get_priority( the_thread );

  if ( new_priority != RTEMS_CURRENT_PRIORITY ) {
    status = _RTEMS_tasks_Set_priority(
      the_thread,
      scheduler,
      new_priority,
      &queue_context
    );
  } else {
    _Thread_Wait_release( the_thread, &queue_context );
    status = RTEMS_SUCCESSFUL;
  }

  *old_priority_p = _RTEMS_Priority_From_core( scheduler, old_priority );
  return status;
}

This directive mainly looks up the thread, takes the wait lock, updates the priority, and finally releases the lock and returns the old priority. The function we care about is therefore _RTEMS_tasks_Set_priority, whose call path is:

_RTEMS_tasks_Set_priority--->_Thread_Priority_update--->_Scheduler_Update_priority
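
As a side note, the RTEMS_CURRENT_PRIORITY branch above means the directive can also be used purely as a query. A minimal usage sketch:

#include <rtems.h>
#include <stdio.h>
#include <inttypes.h>

/* Query the calling task's own priority without changing it. */
static void show_own_priority( void )
{
  rtems_task_priority current;
  rtems_status_code   sc;

  sc = rtems_task_set_priority( RTEMS_SELF, RTEMS_CURRENT_PRIORITY, &current );
  if ( sc == RTEMS_SUCCESSFUL ) {
    printf( "current priority: %" PRIu32 "\n", (uint32_t) current );
  }
}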

2. Test Program

To test priority adjustment, the priority can be changed at any time after the tasks are created, using the task id. Part of the test code is:

status = rtems_task_start( Task_id[ 1 ], Task_1, 0 );
directive_failed( status, "rtems_task_start of TA1" );

rtems_task_priority previous_priority;
status = rtems_task_set_priority( Task_id[ 1 ], 253, &previous_priority );
printf( "Kylin: set priority 254 ret=%d prev=%d \n", status, previous_priority );

status = rtems_task_start( Task_id[ 2 ], Task_2, 0 );
directive_failed( status, "rtems_task_start of TA2" );

status = rtems_task_set_priority( Task_id[ 2 ], 254, &previous_priority );
printf( "Kylin: set priority 255 ret=%d prev=%d \n", status, previous_priority );

By adjusting the priorities of task1 and task2, the order in which they run changes. The logs are as follows.

Task1 ahead of task2:

Kylin: set priority 254 ret=0 prev=4
Kylin: set priority 255 ret=0 prev=4
TA1 - rtems_signal_send - RTEMS_SIGNAL_16 to self
TA1 - rtems_signal_send - RTEMS_SIGNAL_0 to self
TA2 - rtems_signal_send - RTEMS_SIGNAL_17 to self
TA2 - rtems_signal_send - RTEMS_SIGNAL_18 and RTEMS_SIGNAL_19 to self

Task2 ahead of task1:

Kylin: set priority 254 ret=0 prev=4
Kylin: set priority 255 ret=0 prev=4
TA2 - rtems_signal_send - RTEMS_SIGNAL_17 to self
TA2 - rtems_signal_send - RTEMS_SIGNAL_18 and RTEMS_SIGNAL_19 to self
TA1 - rtems_signal_send - RTEMS_SIGNAL_16 to self
TA1 - rtems_signal_send - RTEMS_SIGNAL_0 to self

3. Summary

This article gives a simple demonstration of adjusting task priorities; since the different scheduler implementations have not been examined yet, the experiment is kept simple.


In RTEMS, a scheduler node is bound to every task when the task is created, and _Scheduler_Node_initialize is the function that initializes it. This article uses the task creation directive rtems_task_create to walk through that initialization.

1. Code Flow

As in the examples, a task is created by calling rtems_task_create, for instance:

status = rtems_task_create(
  Task_name[ 1 ],
  4,
  RTEMS_MINIMUM_STACK_SIZE * 2,
  RTEMS_DEFAULT_MODES,
  RTEMS_DEFAULT_ATTRIBUTES,
  &Task_id[ 1 ]
);

This article focuses on the scheduler node initialization, whose call path is:

rtems_task_create--->_RTEMS_tasks_Create--->_Thread_Initialize--->_Thread_Try_initialize--->_Thread_Initialize_scheduler_and_wait_nodes--->_Scheduler_Node_initialize

2. Key Code

During the initialization of every task, its scheduler nodes must be initialized. The core code is:

do {
  Priority_Control priority;

  if ( scheduler == config->scheduler ) {
    priority = config->priority;
    home_scheduler_node = scheduler_node;
  } else {
    /*
     * Use the idle thread priority for the non-home scheduler instances by
     * default.
     */
    priority = _Scheduler_Map_priority(
      scheduler,
      scheduler->maximum_priority
    );
  }

  _Scheduler_Node_initialize(
    scheduler,
    scheduler_node,
    the_thread,
    priority
  );

  /*
   * Since the size of a scheduler node depends on the application
   * configuration, the _Scheduler_Node_size constant is used to get the next
   * scheduler node. Using sizeof( Scheduler_Node ) would be wrong.
   */
  scheduler_node = (Scheduler_Node *)
    ( (uintptr_t) scheduler_node + _Scheduler_Node_size );

  ++scheduler;
  ++scheduler_index;
} while ( scheduler_index < _Scheduler_Count );

/*
 * The thread is initialized to use exactly one scheduler node which is
 * provided by its home scheduler.
 */
_Assert( home_scheduler_node != NULL );
_Chain_Initialize_one(
  &the_thread->Scheduler.Wait_nodes,
  &home_scheduler_node->Thread.Wait_node
);
_Chain_Initialize_one(
  &the_thread->Scheduler.Scheduler_nodes,
  &home_scheduler_node->Thread.Scheduler_node.Chain
);
  • If the current scheduler is the task's home scheduler, use the configured priority and remember this node as the home scheduler node.
  • Otherwise use the scheduler's maximum_priority (the idle priority) by default.
  • Initialize the scheduler node with the chosen priority.
  • Advance to the next scheduler node and increment the scheduler pointer; if several schedulers are configured, the loop initializes a node for each of them (the stride used here is sketched below).
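
The stride-based stepping in the last bullet can be illustrated in isolation; the names below are hypothetical and only the pointer arithmetic matters:

#include <stdint.h>
#include <stddef.h>

/* The configured node size is larger than the base Scheduler_Node, so the
 * next node is reached by adding that runtime size, never a compile-time
 * sizeof. */
static void *next_scheduler_node( void *node, size_t configured_node_size )
{
  return (void *) ( (uintptr_t) node + configured_node_size );
}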

3. Summary

From the code above we can see that scheduler nodes are initialized whenever a task is created. Since this step has no directly observable effect, no dedicated test code is provided.