STM32H7 SPI DMA Low Level - sends only one Frame


I have a problem with STM32H723 SPI with DMA and Low Level drivers.

The issue: it sends only one frame; after that first frame, every transmission is aborted with a Transfer Error interrupt.

The SPI should be triggered every 100 ms and should send a frame of 10 bytes. I got it running using HAL, so the hardware is OK (I was able to Rx/Tx data), but I cannot get it running with the Low Level drivers. My guess is that I forgot something in the SPI / DMA interrupt handlers. It works when the program is (re-)started, but after the first frame the DMA remains blocked/locked and triggers a "transfer error" upon the next request.

Hardware Setup

SPI: Full Duplex Master, 8 bit, Motorola, MSB first, NSS output hardware, FIFO threshold 1 data (screenshot: SPI setup)

DMA: one stream each for Rx and Tx, normal mode, data width = 1 byte, memory increment (screenshot: DMA setup)

Basic initialisation and code generation is done by STM32CubeMX.

Code

Basic init, called once to configure the DMA

void myDMA_init(void) {
   // DMA basic configuration
   LL_DMA_ConfigAddresses( DMA1, LL_DMA_STREAM_2, 
      LL_SPI_DMA_GetRxRegAddr(SPI2), 
      (uint32_t)RxBuffer,
      LL_DMA_GetDataTransferDirection(DMA1, LL_DMA_STREAM_2));

   LL_DMA_ConfigAddresses(DMA1, LL_DMA_STREAM_3,
      (uint32_t)TxBuffer,
      LL_SPI_DMA_GetTxRegAddr(SPI2),
      LL_DMA_GetDataTransferDirection(DMA1, LL_DMA_STREAM_3));
}

Send function: called periodically in the main loop

void SendData(void) {
   // configure RX DMA
   uint32_t rxAddr = (uint32_t)RxBuffer;
   LL_DMA_SetMemoryAddress(DMA1, LL_DMA_STREAM_2, rxAddr);
   LL_DMA_SetDataLength(DMA1, LL_DMA_STREAM_2, DataSize);
   LL_DMA_EnableStream(DMA1, LL_DMA_STREAM_2);
   LL_SPI_EnableDMAReq_RX(SPI2);
   // configure TX DMA
   uint32_t txAddr = (uint32_t)TxBuffer;
   LL_DMA_SetMemoryAddress(DMA1, LL_DMA_STREAM_3, txAddr);
   LL_DMA_SetDataLength(DMA1, LL_DMA_STREAM_3, DataSize);
   LL_DMA_EnableStream(DMA1, LL_DMA_STREAM_3);
   LL_SPI_SetTransferSize(SPI2, DataSize);
   LL_SPI_EnableDMAReq_TX(SPI2); 
   // enable Transfer Complete and Transfer Error IT for RxDMA Stream
   LL_DMA_EnableIT_TC(DMA1, LL_DMA_STREAM_2);
   LL_DMA_EnableIT_TE(DMA1, LL_DMA_STREAM_2);
   // enable and start SPI Master Transfer
   LL_SPI_Enable(SPI2);
   LL_SPI_StartMasterTransfer(SPI2);
}

Note: DataSize = 10; uint8_t TxBuffer[10] and uint8_t RxBuffer[10] are global variables. TxBuffer is filled before each send, RxBuffer is cleared before each send. The SendData function works once; I can see the transmission on the scope.
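For completeness, a minimal sketch of the globals described above, plus a hypothetical helper that prepares the buffers before each call to SendData(); the fill pattern and the name PrepareBuffers are assumptions for illustration, not part of the original code:

```c
#include <stdint.h>
#include <string.h>

/* Globals as described in the note above (sizes taken from the text). */
#define DataSize 10u
uint8_t TxBuffer[DataSize];
uint8_t RxBuffer[DataSize];

/* Hypothetical helper: fill TxBuffer with a simple counter-based test
 * pattern and clear RxBuffer, as done before each SendData() call. */
void PrepareBuffers(uint8_t frameCounter) {
    for (uint32_t i = 0; i < DataSize; i++) {
        TxBuffer[i] = (uint8_t)(frameCounter + i); /* recognisable pattern on the scope */
    }
    memset(RxBuffer, 0, DataSize); /* clear receive buffer before the transfer */
}
```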

Furthermore, I have three interrupt handlers:

Handler for DMA1 Stream 2 (Rx DMA)

void DMA1S2_IRQHandler(void) {
   // clear Transfer Complete and Transfer Error IRQ-Flag
   if(LL_DMA_IsActiveFlag_TC2(DMA1)) LL_DMA_ClearFlag_TC2(DMA1);
   if(LL_DMA_IsActiveFlag_TE2(DMA1)) LL_DMA_ClearFlag_TE2(DMA1);
   LL_DMA_DisableIT_TC(DMA1, LL_DMA_STREAM_2);
   LL_DMA_DisableIT_TE(DMA1, LL_DMA_STREAM_2);
   // enable SPI End of Transfer Interrupt
   LL_SPI_EnableIT_EOT(SPI2);
}

Handler for DMA1 Stream 3 (Tx DMA)

void DMA1S3_IRQHandler(void) {
   // clear Transfer Complete and Transfer Error IRQ-Flag
   if(LL_DMA_IsActiveFlag_TC3(DMA1)) LL_DMA_ClearFlag_TC3(DMA1);
   if(LL_DMA_IsActiveFlag_TE3(DMA1)) LL_DMA_ClearFlag_TE3(DMA1);
   LL_DMA_DisableIT_TC(DMA1, LL_DMA_STREAM_2);
   LL_DMA_DisableIT_TE(DMA1, LL_DMA_STREAM_2);
}

and the Handler for the SPI Interrupt(s):

void SPI2_myIRQHandler(void) {
   // clear the End of Transfer Interrupt Flag
   LL_SPI_ClearFlag_EOT(SPI2);
   LL_SPI_DisableIT_EOT(SPI2);
   // Disable SPI2
   LL_SPI_Disable(SPI2);
   // Disable DMA1 Stream 2 and Rx DMA request
   LL_DMA_DisableStream(DMA1, LL_DMA_STREAM_2);
   LL_SPI_DisableDMAReq_RX(SPI2);
   // Disable DMA1 Stream 3 and Tx DMA request
   LL_DMA_DisableStream(DMA1, LL_DMA_STREAM_3);
   LL_SPI_DisableDMAReq_TX(SPI2);
}

Behaviour:

On the first run, the Rx DMA interrupt calls the handler; the Transfer Complete flag is set, but no Transfer Error. At the end of the handler the SPI End of Transfer interrupt is enabled, which fires almost without delay. On the second and all following runs the Rx DMA interrupt handler is called with both TE and TC flags set. The SPI EOT interrupt is never triggered again. On the scope I can see that one frame has been sent out, but that's it.

The question: does anyone have an idea why the DMA aborts immediately with a Transfer Error after the first run? Thanks!

Annotations:

  • this is just a test program to get the SPI working. It is not the final production code. The intention is to keep it as simple as possible to figure out how this stuff works
  • this is an (improved/reviewed) crosspost from the ST forums, where I posted it initially, but where my question was marked as spam for a reason I don't understand
  • the main loop is just a HAL_Delay, a blinking LED, an update of TxBuffer, and the call to the SPI send function.

1 Answer

Answer by Chris_B:

I found a solution.

My error was that I tried to close both the Rx and the Tx DMA in the SPI interrupt handler.

I also had to enable the Transfer Complete and Transfer Error interrupts for both DMA streams (Stream 2 Rx and Stream 3 Tx) in the send function:

void SendData(void) {
   // configure RX DMA
   uint32_t rxAddr = (uint32_t)RxBuffer;
   LL_DMA_SetMemoryAddress(DMA1, LL_DMA_STREAM_2, rxAddr);
   LL_DMA_SetDataLength(DMA1, LL_DMA_STREAM_2, DataSize);
   LL_DMA_EnableStream(DMA1, LL_DMA_STREAM_2);
   LL_SPI_EnableDMAReq_RX(SPI2);
   // configure TX DMA
   uint32_t txAddr = (uint32_t)TxBuffer;
   LL_DMA_SetMemoryAddress(DMA1, LL_DMA_STREAM_3, txAddr);
   LL_DMA_SetDataLength(DMA1, LL_DMA_STREAM_3, DataSize);
   LL_DMA_EnableStream(DMA1, LL_DMA_STREAM_3);
   LL_SPI_SetTransferSize(SPI2, DataSize);
   LL_SPI_EnableDMAReq_TX(SPI2); 
   // enable Transfer Complete and Transfer Error IT for RxDMA Stream
   LL_DMA_EnableIT_TC(DMA1, LL_DMA_STREAM_2);
   LL_DMA_EnableIT_TE(DMA1, LL_DMA_STREAM_2);
   // enable Transfer Complete and Transfer Error IT for TxDMA Stream
   LL_DMA_EnableIT_TC(DMA1, LL_DMA_STREAM_3);
   LL_DMA_EnableIT_TE(DMA1, LL_DMA_STREAM_3);
   // enable and start SPI Master Transfer
   LL_SPI_Enable(SPI2);
   LL_SPI_StartMasterTransfer(SPI2);
}

Both interrupt handler functions (which are called from the auto-generated stm32h7xx_it.c) need to disable their DMA stream and clear the DMA request flag in the SPI configuration. The Rx DMA Transfer Complete handler also enables the SPI End-of-Transfer interrupt:

void DMA1S2_IRQHandler(void) {
   // handle Rx DMA Transfer Complete / Error interrupt:
   // disable the Rx stream and its SPI DMA request
   LL_DMA_DisableStream(DMA1, LL_DMA_STREAM_2);
   LL_SPI_DisableDMAReq_RX(SPI2);
   // clear interrupt flags
   if(LL_DMA_IsActiveFlag_TC2(DMA1)) LL_DMA_ClearFlag_TC2(DMA1);
   if(LL_DMA_IsActiveFlag_TE2(DMA1)) LL_DMA_ClearFlag_TE2(DMA1);
   LL_DMA_DisableIT_TC(DMA1, LL_DMA_STREAM_2);
   LL_DMA_DisableIT_TE(DMA1, LL_DMA_STREAM_2);
   // enable SPI End of Transfer interrupt
   LL_SPI_EnableIT_EOT(SPI2);
}

The Tx DMA Transfer Complete interrupt always arrives before the Rx DMA Transfer Complete interrupt:

void DMA1S3_IRQHandler(void) {
   // handle Tx DMA Transfer Complete / Error interrupt:
   // disable the Tx stream and its SPI DMA request
   LL_DMA_DisableStream(DMA1, LL_DMA_STREAM_3);
   LL_SPI_DisableDMAReq_TX(SPI2);
   // clear interrupt flags
   if(LL_DMA_IsActiveFlag_TC3(DMA1)) LL_DMA_ClearFlag_TC3(DMA1);
   if(LL_DMA_IsActiveFlag_TE3(DMA1)) LL_DMA_ClearFlag_TE3(DMA1);
   LL_DMA_DisableIT_TC(DMA1, LL_DMA_STREAM_3);
   LL_DMA_DisableIT_TE(DMA1, LL_DMA_STREAM_3);
}

The SPI EOT interrupt handler simply shuts down the SPI:

void SPI2_myIRQHandler(void) {
   // clear the End of Transfer Interrupt Flag
   LL_SPI_ClearFlag_EOT(SPI2);
   LL_SPI_DisableIT_EOT(SPI2);
   // Disable SPI2
   LL_SPI_Disable(SPI2);
   // add "transmission complete" handler here
}

Now this leads me to a follow-up question. Obviously the Tx DMA was the troublemaker: it needed to be shut down immediately after it raised its Transfer Complete interrupt.

My question is: why is this? Why does the DMA not stop automatically when it has transferred the amount of data defined with

LL_DMA_SetDataLength(DMA1, LL_DMA_STREAM_2, DataSize);

Setting a data length seems to make little sense if this limit has no effect on the behaviour of the peripheral. I'd expect the DMA to stop all transfers once the "remaining bytes" counter reaches zero.

RM0468 Rev 3, page 619, chapter 15.3.8 (Source, destination and transfer modes) states:

Memory-to-peripheral mode Figure 81 describes this mode. When this mode is enabled (by setting the EN bit in the DMA_SxCR register), the stream immediately initiates transfers from the source to entirely fill the FIFO. Each time a peripheral request occurs, the contents of the FIFO are drained and stored into the destination. When the level of the FIFO is lower than or equal to the predefined threshold level, the FIFO is fully reloaded with data from the memory. The transfer stops once the DMA_SxNDTR register reaches zero, when the peripheral requests the end of transfers (in case of a peripheral flow controller) or when the EN bit in the DMA_SxCR register is cleared by software.

Does anyone have an answer to this question, or is there still a bug in my code that prevents the DMA from stopping transfers automatically?
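One thing worth ruling out (a sketch of a precaution, not a verified fix for this problem): RM0468 requires that, after clearing a stream's EN bit, software read the bit back and wait until it returns 0 before reconfiguring the stream; re-enabling a stream whose previous operation has not fully stopped is a plausible source of an immediate Transfer Error. A small helper, assuming only the LL DMA API used elsewhere in this post:

```c
/* Disable a DMA stream and wait until the hardware confirms EN == 0,
 * as RM0468 requires before the stream may be reconfigured.
 * Call this for both streams before the SetMemoryAddress /
 * SetDataLength calls in SendData(). */
static void WaitStreamDisabled(DMA_TypeDef *dma, uint32_t stream) {
    LL_DMA_DisableStream(dma, stream);
    while (LL_DMA_IsEnabledStream(dma, stream)) {
        /* busy-wait: EN stays set until any ongoing transfer is torn down */
    }
}
```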