On 32-bit microcontrollers such as the ST STM32F103 (ARM core) or the GigaDevice GD32VF103 (RISC-V core) there are many registers for dealing with peripherals.
What surprises me is that peripheral registers that need more than 16 bits are split into two registers, i.e. a high and a low register, although the word size (and thus the natural register size) of the CPU is 32 bits! Example: RTC_CNTH and RTC_CNTL (for reading the current RTC counter value).
Thus, code for reading/writing them gets tedious and error-prone, and there is the issue of non-atomic access (see the workaround sketched after the example). Example:
static uint32_t get_rtc_counter(void)
{
    uint32_t r = RTC_CNTL;              /* read the low 16 bits first */
    r |= (uint32_t)RTC_CNTH << 16;      /* then merge in the high 16 bits */
    return r;                           /* can be torn by a carry in between */
}
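If the counter carries from 0x0000FFFF to 0x00010000 between the two reads, the combined value is off by 65536. A common workaround (a hedged sketch, not something the reference manual prescribes; it assumes RTC_CNTH/RTC_CNTL read as 32-bit words with zeroed upper halves) is to re-read the high half until it is stable:

#include <stdint.h>

/* Re-read RTC_CNTH until it is stable across the RTC_CNTL read,
   so a carry between the two halves cannot produce a torn value. */
static uint32_t get_rtc_counter_safe(void)
{
    uint32_t hi, lo;
    do {
        hi = RTC_CNTH;
        lo = RTC_CNTL;
    } while (hi != RTC_CNTH);   /* a carry occurred mid-read: retry */
    return (hi << 16) | lo;
}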
The GigaDevice user manual even marks the upper 16 bits of these registers as reserved ('must be kept at reset value') - so each of them occupies a full 32-bit word and is accessible as a 32-bit register!
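For illustration, the whole RTC block can be modelled in C like this (offsets as I read them from the STM32F103 reference manual RM0008; the struct and macro names are my own sketch, not the vendor's header):

#include <stdint.h>

/* STM32F103 RTC register block at 0x40002800: every 16-bit register
   occupies a full 32-bit word, with the upper half reserved. */
struct rtc_regs {
    volatile uint32_t CRH;   /* 0x00 control high        */
    volatile uint32_t CRL;   /* 0x04 control low         */
    volatile uint32_t PRLH;  /* 0x08 prescaler load high */
    volatile uint32_t PRLL;  /* 0x0C prescaler load low  */
    volatile uint32_t DIVH;  /* 0x10 divider high        */
    volatile uint32_t DIVL;  /* 0x14 divider low         */
    volatile uint32_t CNTH;  /* 0x18 counter high        */
    volatile uint32_t CNTL;  /* 0x1C counter low         */
    volatile uint32_t ALRH;  /* 0x20 alarm high          */
    volatile uint32_t ALRL;  /* 0x24 alarm low           */
};
#define RTC ((struct rtc_regs *)0x40002800u)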
Since two vendors are doing this with different MCUs, I'm interested in the reasoning behind it.
There must be some technical advantage to using 16-bit registers like this when interacting with peripherals, even if the CPU itself is 32-bit.
Reasons I can think of:
- when ST started with the STM32 it just reused the peripheral blocks/designs from a previous non-32-bit MCU family, such as the STM8 or ST10
- GigaDevice copied some STM32F parts for their ARM MCU family and just re-used those peripheral blocks in their GD32VF103 RISC-V family
But perhaps there are better/real technical reasons behind that.
Perhaps there are other 32-bit MCUs available which don't do this.
Update (2022-05-10): FWIW, the SiFive E310 32-bit RISC-V MCU does things differently! It generally uses the whole 32 bits of its memory-mapped peripheral registers and doesn't split fields into 16-bit halves. For example, its RTC counter is 48 bits wide and is thus split into a lower 32-bit and an upper 16-bit part.
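Even there, though, a counter wider than the 32-bit word forces a two-word read, so the same consistency dance applies. A hedged sketch with hypothetical register names (the real E310 names and addresses are in SiFive's manual):

#include <stdint.h>

/* Hypothetical names for the E310's split RTC counter: a 32-bit low
   word plus a 16-bit high word in its own 32-bit register. */
extern volatile uint32_t RTC_COUNT_LO;
extern volatile uint32_t RTC_COUNT_HI;

static uint64_t get_rtc_counter48(void)
{
    uint32_t hi, lo;
    do {
        hi = RTC_COUNT_HI;
        lo = RTC_COUNT_LO;
    } while (hi != RTC_COUNT_HI);   /* retry if the high word carried */
    return ((uint64_t)hi << 32) | lo;
}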
To answer the question, one needs to know a little semiconductor development history and the design principles of the ARM RISC architecture.
Historically, ARM processors provided a 32-bit instruction set. However, a fixed 32-bit instruction width has a cost in terms of the firmware's memory footprint: a program built with the 32-bit Instruction Set Architecture (ISA) needs more bytes of flash storage, and flash memory is comparatively power-hungry and expensive, especially on the mature process nodes used by MCUs like the STM32Fx series (as compared to the newer nodes of the STM32Lx and STM32Gx series). The 32-bit ISA therefore has an impact on power consumption and on the overall cost of the MCU.
To address such issues, ARM introduced the Thumb 16-bit instruction set, which is a subset of the most commonly used 32-bit instructions. Thumb instructions are each 16 bits long and are automatically “translated” to the corresponding 32-bit ARM instructions. This means that 16-bit Thumb instructions are transparently expanded (from the developer's point of view) to full 32-bit ARM instructions in real time, without performance loss.
ARM introduced the Thumb-2 instruction set afterwards (in 2003), which mixes 16-bit and 32-bit instructions in one operation state. Thumb-2 is a variable-length instruction set and offers many more instructions than Thumb, while achieving similar code density.
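To see the density difference yourself, compile the same C file for ARM state and for Thumb state and compare section sizes (a hedged sketch assuming the GNU Arm Embedded toolchain; the exact savings vary with the code):

#include <stdint.h>

/* density.c - build twice and compare the .text sizes:
 *   arm-none-eabi-gcc -mcpu=arm7tdmi -marm   -Os -c density.c -o arm.o
 *   arm-none-eabi-gcc -mcpu=arm7tdmi -mthumb -Os -c density.c -o thumb.o
 *   arm-none-eabi-size arm.o thumb.o
 * ARM7TDMI is chosen because it executes both states; Cortex-M parts
 * are Thumb/Thumb-2 only. The Thumb object is typically noticeably
 * smaller, since most instructions shrink from 4 bytes to 2. */
uint32_t checksum(const uint8_t *p, uint32_t n)
{
    uint32_t s = 0;
    while (n--)
        s += *p++;
    return s;
}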
Cortex-M3/4/7 cores support the full Thumb and Thumb-2 instruction sets.