Project Swiftlet
Summary:
☑ Sonar generation working
☑ Sonar transmitter working
☑ Sonar receiver working
☑ Sonar analog sampling working
Here is what the Sonar Tx signal looks like from the ARM chip's TPM (Timer/PWM Module) before it gets fed into the full-bridge driver.
There is an app note AN5121 "Generating a Fixed Number of PWM Pulses Using TPM and DMA", but the code is complicated and ties up a DMA channel.
I used a timer overflow interrupt to count the PWM cycles and stop the PWM. I have decided to use 'Fast Interrupts' (i.e. they bypass the RTOS) as the overhead is a bit too much at 40,000 interrupts/sec.
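Here is roughly what that looks like in code. This is only a minimal sketch assuming CMSIS-style TPM0 register names on a KL-series part; the header name and pulse count are placeholders, and the NVIC/fast-interrupt plumbing and PWM channel setup are left out.

```c
#include "MKL26Z4.h"            /* assumption: KL-series CMSIS device header */

#define PING_PULSES  8U         /* hypothetical: number of 40 kHz pulses per ping */

static volatile uint32_t pulses_left;

void sonar_ping_start(void)
{
    pulses_left = PING_PULSES;
    TPM0->SC |= TPM_SC_TOIE_MASK;       /* enable the overflow interrupt        */
    TPM0->SC |= TPM_SC_CMOD(1);         /* start the counter: PWM starts toggling */
}

/* One overflow per PWM period = 40,000 interrupts/sec while pinging. */
void TPM0_IRQHandler(void)
{
    TPM0->SC |= TPM_SC_TOF_MASK;        /* clear the overflow flag (write-1-to-clear) */
    if (--pulses_left == 0U) {
        TPM0->SC &= ~TPM_SC_CMOD_MASK;  /* stop the counter, which stops the PWM */
        TPM0->SC &= ~TPM_SC_TOIE_MASK;  /* no more interrupts needed             */
    }
}
```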
The sonar transmitter driver circuit runs on a different power rail. Now that the power management circuit works, the driver also works.
Notice that the ringing from the transducer persists even though the 74LVC08 driver's pins are driven low. With the sonar receiver circuit barnacles done on the PCB, here is what a ping and its return echo look like.
The waveform is now symmetrical, as the LMV324 opamp I used has rail-to-rail output. (Old project log that shows the module hacking here.) Here is the current consumption profile during the ping. Each of the spikes on the right plateau is a ping.
Current consumption profile captured by Mooshimeter
Some back of the envelope calculations:
Speed of sound: 343.2 m/s
Range of ultrasonic module: 5 m
Playback speed: 32768/4 = 8192 samples/s
Sampling rate: 8192 x 20 = 163840 samples/s
Sample size = 163840 samples/s x 2 bytes/sample x 5 m x 2 / 343.2 m/s ≈ 9549 bytes per channel, so ~18.7 kB, which would fit inside the ARM's 32 kB of internal RAM for now. By reducing the ADC resolution to 8-bit, the number of samples stored in memory can be doubled, so the sampling rate can be doubled.
There is an external 64kB SPI SRAM should a higher sample rate at 12-bit or higher resolution be required in the future.
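For what it's worth, the same back-of-the-envelope arithmetic can be written as compile-time constants. This is just a sanity-check sketch; all of the names below are made up for illustration.

```c
#include <stdint.h>

#define SOUND_SPEED_MM_S   343200UL               /* 343.2 m/s expressed in mm/s */
#define RANGE_MM           5000UL                 /* 5 m maximum range           */
#define PLAYBACK_SPS       (32768UL / 4UL)        /* 8192 samples/s playback     */
#define SAMPLE_SPS         (PLAYBACK_SPS * 20UL)  /* 163840 samples/s capture    */
#define BYTES_PER_SAMPLE   2UL                    /* 16-bit ADC samples          */

/* Round trip is 2 * range / speed of sound; buffer = rate * bytes * time. */
#define CHANNEL_BYTES \
    ((SAMPLE_SPS * BYTES_PER_SAMPLE * 2UL * RANGE_MM) / SOUND_SPEED_MM_S)

/* CHANNEL_BYTES works out to ~9.5 kB per side, so ~19 kB of the 32 kB RAM. */
```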
Made some progress with analog sampling of the echo waveform. Here is the current consumption profile. (The first plateau was from the previous session.) The spike is about 20mA, and that's the ping + ADC + DMA. The peak value is probably higher, but it is beyond the sampling rate/bandwidth of the data logger.
Right now this is done from a debug shell, so the individual steps are farther apart and easier to see.
I saw some numbers fill up the ADC buffer in the debugger, but I have to visualize the collected raw data somehow. Printing the contents as ASCII text and importing it into Excel, this is what I get. That looks like the familiar sonar echo waveform.
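The dump itself is nothing fancy; something along these lines, assuming printf is retargeted to the debug shell and using a hypothetical adc_buffer for the sample storage.

```c
#include <stdio.h>
#include <stdint.h>

#define ADC_BUF_LEN  4096U                    /* hypothetical buffer length */
extern volatile uint16_t adc_buffer[ADC_BUF_LEN];

/* Print one sample per line so the captured text imports as a single Excel column. */
void dump_adc_buffer(void)
{
    for (uint32_t i = 0; i < ADC_BUF_LEN; i++) {
        printf("%u\r\n", adc_buffer[i]);
    }
}
```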
This is what it looks like in detail when I plot the first 2 blips. Pretty sweet, eh?
Looks like I used the FLL frequency instead of the peripheral bus clock for calculating the sampling rate and was off by a factor of 2. The following shows the correct sampling frequency.
The following shows the data points on the waveform. This is sampled at 163840 samples/sec, giving roughly 4 data points per cycle for the 40kHz signal.
The sampling rate can easily be doubled (or quadrupled up to the 800+ksps spec if I crank up the FLL from 21MHz to 48MHz), but then I won't be able to fit the left and right sides into the ARM's internal memory. I could either drop the resolution down to 8-bit, reduce the range, or do some fancy DMA streaming of the data into the external SPI SRAM.
For now, I think I'll leave it as is.
I have written the raw data into a binary file and imported it into Audacity. Here is what the spectrum plot looks like when playing back at 1/20th the speed.
There is a bit of low frequency humming. Since we are dealing with a slowed-down 40kHz signal with a very narrow bandwidth (limited by the transducers), we can pretty much ignore anything outside of 2kHz +/- 500Hz. This might not even be an issue, as those tiny speakers already work like a high pass filter, attenuating frequencies below 800Hz.
"Firmware progress 3 - More sonar work" Original post date: 09/17/2015
I have modified the code so that the samples from the left and right sides are collected in separate passes and interleaved to form "stereo" samples. They are collected separately as the echoes might interfere with each other. There is also a big savings on the hardware, as both sides are muxed together and share the same amplifier and ADC.
This is my ultra-low-tech setup. The separation is about the width of the Dollar Store safety glasses. I'll probably be using this for most of the tests.
The left side hits the edge of the grey wall and the left side of the door frame, while the right side hits the right side of the door frame.
This is the "stereo" waveforms collected, exported to Audacity, normalized and high pass filtered. To be quite honest, my brains can't quite processing that. I don't know how long it would take for me to do it.
Looks like the 22K resistor on the left side gives a cleaner waveform. I soldered in a 22K on the right side as well (note the lack of ripples in the after picture). So yeah, I definitely want them.
Right now I use one buffer for sampling mono audio and another one twice as big to hold the interleaved "stereo" samples.
There is an interesting algorithm to merge mono audio samples in place, which can lower the storage requirement, allowing a microcontroller with less memory to be used: http://num3ric.calepin.co/interleave-an-array-in-place.html
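The straightforward two-buffer version I'm using now looks roughly like this (sketch only, with hypothetical names and sizes; the in-place algorithm linked above would get rid of the second buffer):

```c
#include <stdint.h>

#define MONO_LEN  2048U                      /* hypothetical samples per side */

/* Merge separately captured left/right passes into L,R,L,R,... order. */
void interleave_stereo(const int16_t *left, const int16_t *right,
                       int16_t *stereo /* holds 2 * MONO_LEN samples */)
{
    for (uint32_t i = 0; i < MONO_LEN; i++) {
        stereo[2U * i]      = left[i];
        stereo[2U * i + 1U] = right[i];
    }
}
```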
Next step: writing code to play back to the stereo DAC over I2S. Lots of reference manual pages to read. :(
Sonar samples were captured for the HaD Semifinal video, so I might as well put the uncompressed .wav files on GitHub for everyone. The actual samples are 8-bit, 8192 samples/s, captured running from a Li-ion battery, with no post-processing except normalization.
My low-tech setup for the test.
"DMA tricks for Kinetis KL" Original post date: 09/28/2015
Normally this stuff is easy with the MK series eDMA, but the KL series has a very limited DMA that doesn't have minor/major transfer loops. It can only transfer 8/16/32 bits of data into consecutive locations, i.e. we can't tell the DMA to skip every other byte in the transfer.
The DMA controller does offer linking, which lets you trigger a new DMA transfer from a DMA event. We are going to take advantage of it to break the operation into painfully small steps using multiple DMA channels.
Interleaving 2 separate ADC sample streams into stereo samples using DMA transfers (see the sketch after the list):
1. DMA1: ADC 8-bit samples (left samples) into the buffer using 16-bit word transfers.
The MSB is filled with zeros, which we'll overwrite with the right samples later.
2. Now the fun part for the right side audio samples:
- DMA1: Transfer the previous sample into a 16-bit variable. Source: 16-bit, increment; Destination: 16-bit; link to DMA2.
- DMA2: Transfer the new ADC sample into the MSB of the 16-bit variable. Source: 8-bit, fixed address; Destination: 8-bit, fixed address; link to DMA3.
- DMA3: Transfer the 16-bit word back into the sample buffer. Source: 16-bit, fixed address; Destination: 16-bit, increment.
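To make the data flow a bit more concrete, here is the per-sample effect of that chain written out in plain C. The names (right_adc_result, stereo_buf, scratch) are hypothetical stand-ins, the actual KL DMA register setup is not reproduced here, and the MSB addressing assumes the Cortex-M0+'s little-endian byte order.

```c
#include <stdint.h>

/* stereo_buf already holds the left samples in the low byte of each 16-bit
 * word (step 1); right_adc_result stands in for the ADC result register. */
extern volatile uint8_t  right_adc_result;   /* hypothetical ADC data register */
extern uint16_t          stereo_buf[];       /* left samples from step 1       */
static uint16_t          scratch;            /* the 16-bit staging variable    */

/* What the DMA1 -> DMA2 -> DMA3 chain accomplishes for one right-side ADC
 * sample at index i. In hardware, each line is a separate linked transfer. */
static void dma_chain_effect(uint32_t i)
{
    scratch = stereo_buf[i];                      /* DMA1: 16-bit, source increments  */
    ((uint8_t *)&scratch)[1] = right_adc_result;  /* DMA2: 8-bit into the MSB         */
    stereo_buf[i] = scratch;                      /* DMA3: 16-bit, dest. increments   */
}
```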