Midi SynthFpga

On this wiki page you will find information about the design of a sound synthesiser developed on an FPGA.

HW Requirements

This project has been developed on Jack Gassett's Papilio One 500K board.

SW Requirements

- A Linux box with libftdi, and access rights allowing you to read/write the Papilio board's serial interface
- pmidi
- https://github.com/Thomasb81/Midi_SynthFpga (check out the master branch)

Compile

This software uses the ALSA sound library (asound): to compile and run it, you must have this library installed on your Unix system. libasound2 is usually already installed, but you will probably have to install the development package, libasound2-dev, to get the headers properly installed.

$cd midi_if
$make

Execution

$./midi_papilio_if

In another terminal:

$pmidi -p 129:0 your_midi_file.mid

You may need to use a different MIDI port than I do. You can use aconnect to find it:

$aconnect -i

Expected result

In case of failure, you can find a demo video on YouTube here: video

Drum support

Drum support has been implemented; it requires an SD card. Basically, when a drum sample needs to be played, a state machine fetches the corresponding sample from the SD card and puts it in a FIFO, from which a sound mixer can periodically (at 48 kHz) take the next drum sample to mix with the other sounds. The original inspiration was found in this project: http://www.sk-electronics.com/www/index.php/opensourceprojects/68-sksynth, but it uses a memory chip instead of an SD card.
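To make the data flow concrete, here is a minimal C sketch of that drum path, assuming a FIFO of 16-bit PCM words: the SD card state machine pushes samples in as it reads them, and the 48 kHz mixer tick pops one sample and adds it to the synthesiser output. The depth, the names and the saturation step are illustrative assumptions, not taken from the actual HDL.

  #include <stdint.h>

  #define FIFO_LEN 512                      /* depth of the drum sample FIFO */

  static int16_t  fifo[FIFO_LEN];
  static unsigned rd, wr;                   /* read / write indices          */

  /* Producer side: the state machine streaming a sample from the
     SD card pushes 16-bit PCM words into the FIFO as room appears. */
  int fifo_push(int16_t s)
  {
      if ((wr + 1) % FIFO_LEN == rd)
          return 0;                         /* full: the SD fetch must wait  */
      fifo[wr] = s;
      wr = (wr + 1) % FIFO_LEN;
      return 1;
  }

  /* Consumer side: called once per 48 kHz tick by the mixer; the drum
     sample is simply added to the mix of the synthesiser channels.   */
  int16_t mix_tick(int16_t synth_mix)
  {
      int32_t out = synth_mix;
      if (rd != wr) {                       /* FIFO not empty                */
          out += fifo[rd];
          rd = (rd + 1) % FIFO_LEN;
      }
      if (out >  32767) out =  32767;       /* saturate instead of wrapping  */
      if (out < -32768) out = -32768;
      return (int16_t)out;
  }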

Source code: https://github.com/Thomasb81/Midi_SynthFpga/tree/drum

Details of the SD card controller support can be found here.

To convert a WAV file into a usable sample, there is a 2-step procedure with sox:

  1. convert the sample rate:
    $sox orgi.wav -b 16 48k.wav rate 48k 
  2. produce raw data:
    $sox 48k.wav -b 16 --encoding unsigned-integer --endian little 48k.raw

Discussion

Software part

pmidi is a MIDI player used to read the MIDI file and send it to the ALSA MIDI interface of your Linux box. midi_papilio_if acts as a sequencer that re-routes some MIDI events to /dev/ttyUSB1 (the serial device of my Papilio board). Only note events are sent; if the MIDI file contains other events, they are simply ignored. When an event is recognised, the meaningful data is fetched from the ALSA structure to rebuild the byte-level MIDI protocol, then sent to the board through the serial interface. All the timing is handled by pmidi, which sends the MIDI events as specified in the file.
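As an illustration of this re-routing, here is a minimal C sketch built on the ALSA sequencer API. It is not the actual midi_papilio_if source: the port name is an assumption, and the 3 Mbaud serial configuration and all error handling are omitted.

  #include <alsa/asoundlib.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main(void)
  {
      snd_seq_t *seq;
      snd_seq_open(&seq, "default", SND_SEQ_OPEN_INPUT, 0);
      snd_seq_set_client_name(seq, "midi_papilio_if");
      snd_seq_create_simple_port(seq, "in",
              SND_SEQ_PORT_CAP_WRITE | SND_SEQ_PORT_CAP_SUBS_WRITE,
              SND_SEQ_PORT_TYPE_MIDI_GENERIC);

      int fd = open("/dev/ttyUSB1", O_WRONLY);  /* Papilio serial device   */

      for (;;) {
          snd_seq_event_t *ev;
          snd_seq_event_input(seq, &ev);        /* wait for the next event */

          unsigned char msg[3];
          switch (ev->type) {
          case SND_SEQ_EVENT_NOTEON:            /* rebuild 0x9n note vel   */
              msg[0] = 0x90 | ev->data.note.channel;
              msg[1] = ev->data.note.note;
              msg[2] = ev->data.note.velocity;
              write(fd, msg, 3);
              break;
          case SND_SEQ_EVENT_NOTEOFF:           /* rebuild 0x8n note vel   */
              msg[0] = 0x80 | ev->data.note.channel;
              msg[1] = ev->data.note.note;
              msg[2] = ev->data.note.velocity;
              write(fd, msg, 3);
              break;
          default:                              /* everything else ignored */
              break;
          }
      }
  }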

HW part

MIDI commands are received through a UART at 3 Mbaud. This "high" speed serial interface is supposed to be fast enough to carry the MIDI events. The recognised MIDI commands are decoded and sent to the synthesizer.

The synthesizer is composed of 2 LUTs (Look-Up Tables). The first one holds a sample: basically one period of a sine wave with additional harmonics to avoid too much whistling. The idea is to put a value from this sample on a DAC (Digital to Analog Converter) at the sampling period, in my case 48 kHz. The second LUT contains, for each note I wish to reproduce, the increment used to find the next sample value to put on the DAC. This way, depending on the note (i.e. the frequency), I downsample my stored sample; in other words, the observed frequency of the sample changes with the note to play. At this rate, playing 16 channels would need 32 LUTs while the FPGA only has 20 RAM blocks, so some resources need to be shared, which implies memorizing, per channel, the position of the last sample played. Additionally, some HW implements ADSR, which is used for volume control. Finally, a channel mixer mixes the different channels.
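This two-LUT scheme is a classic phase-accumulator (wavetable) design. Here is a minimal C model of it; the table size, fixed-point width and choice of harmonic are illustrative assumptions, not values from the actual design.

  #include <stdint.h>
  #include <math.h>

  #define WAVE_LEN  1024                  /* one period of the waveform   */
  #define FS        48000.0               /* sampling frequency in Hz     */
  #define FRAC_BITS 16                    /* fractional phase bits        */

  static int16_t  wave[WAVE_LEN];         /* LUT 1: the stored sample     */
  static uint32_t note_inc[128];          /* LUT 2: phase increment/note  */

  static void init_luts(void)
  {
      /* Fundamental plus one harmonic, to avoid a pure whistle. */
      for (int i = 0; i < WAVE_LEN; i++)
          wave[i] = (int16_t)(20000 * sin(2 * M_PI * i / WAVE_LEN)
                   +  5000 * sin(4 * M_PI * i / WAVE_LEN));
      /* MIDI note n -> frequency -> how far to step per 48 kHz tick. */
      for (int n = 0; n < 128; n++) {
          double f = 440.0 * pow(2.0, (n - 69) / 12.0);
          note_inc[n] = (uint32_t)(f / FS * WAVE_LEN * (1 << FRAC_BITS));
      }
  }

  /* Called once per 48 kHz tick for each active channel: advance the
     channel's phase accumulator and return the next DAC value.       */
  static int16_t next_sample(uint32_t *phase, uint8_t note)
  {
      *phase += note_inc[note];
      return wave[(*phase >> FRAC_BITS) % WAVE_LEN];
  }

The per-channel state that has to be memorized when the LUTs are shared is exactly the phase accumulator above, one per channel.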

Most of the ideas on sound generation are explained here: hamsterworks synth


Current limitations

The drum channel is played like any other instrument, so some songs can sound odd. The synthesiser can only play 1 note per channel, while the MIDI protocol allows up to 127 notes per channel, and some songs use that! The FPGA is at nearly 75% resource usage and takes about 15 minutes to synthesize. The main reason is that too much logic is spent storing per-note sample state; it then becomes too costly to route the signals. Another issue is that each channel has its own dedicated HW, even if it is not used by the song we are going to play... We have an efficiency problem in the use of our HW.

The poly branch [1] is an attempt to resolve those issues, but the real world shows bugs that are very difficult to find in simulation. The point is that I used a DPRAM (Dual Port RAM) as a circular buffer to handle note status. One port is used to write note events; the other one is used to read the notes at 48 kHz and send them into the sound pipeline. The difficulty comes from the FSM that manages this buffer. On a note-on event there is no issue: I add a note to the buffer. But on a note-off, I need to search the circular buffer for the matching note-on and mark it as off. The current status is that, on some songs, after 20 or 30 seconds of playing I am no longer able to retrieve the note-on.
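To make the FSM's job concrete, here is a minimal C model of that circular note buffer; the depth and field names are assumptions, and the HDL has to perform the same backwards search one step per clock instead of in a software loop.

  #include <stdint.h>

  #define BUF_LEN 32                       /* depth of the circular buffer */

  struct slot {
      uint8_t active;                      /* 1 while the note is sounding */
      uint8_t channel;
      uint8_t note;
  };

  static struct slot buf[BUF_LEN];
  static unsigned wr;                      /* write index, wraps modulo BUF_LEN */

  void note_on(uint8_t ch, uint8_t note)
  {
      buf[wr] = (struct slot){ .active = 1, .channel = ch, .note = note };
      wr = (wr + 1) % BUF_LEN;             /* may overwrite the oldest slot */
  }

  void note_off(uint8_t ch, uint8_t note)
  {
      /* Search backwards from the newest entry for the matching note-on:
         this is the step whose HDL FSM proved so hard to get right.     */
      for (unsigned i = 0; i < BUF_LEN; i++) {
          unsigned idx = (wr + BUF_LEN - 1 - i) % BUF_LEN;
          if (buf[idx].active && buf[idx].channel == ch
                              && buf[idx].note == note) {
              buf[idx].active = 0;
              return;                      /* matching note-on found */
          }
      }
      /* Not found: the symptom observed after 20-30 s on some songs. */
  }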

To get past this new difficulty, I am thinking of wiping out the whole buggy FSM part and handling it in SW...

Tips that have nothing to do here, but that I log before I forget them

To compile the ZPU gcc toolchain available here [2] on a native 64-bit Linux workstation, the build.sh script needs to be modified. Otherwise, one of the ZPU tools that has just been compiled, and that is used to build the rest of the toolchain, will crash.

binutils needs to be compiled with the following additional flag in the CFLAGS variable:

-D_FORTIFY_SOURCE=0