Maximum clock frequency on DE1-SOC - verilog

What is the maximum clock frequency that can be generated with Altera PLLs in DE1-SOC board?

I can't find a reference to a maximum PLL output frequency in any of the Cyclone V documentation. However, it appears (from my own experimentation) that the Altera PLL megafunction/IP core won't produce a generated clock with a frequency faster than 1.6 GHz (1600 MHz).
That said, I doubt you'll be able to clock any Cyclone V circuitry (even fully pipelined) that quickly.

Related

play audio with pwm of a attiny85

I'm trying to understand how to implement audio playback from scratch on an ATtiny85. The goal is to play a short sound (a cat meowing, so I want it to remain recognizable) from an array representing the strength of the audio signal sampled at a fixed interval.
As far as I understand, the signal strength is linearly mapped to the voltage of the analogue audio signal. As far as I know, sound cards are digital-to-analogue converters, but the ATtiny85 probably doesn't have one.
I'm curious whether I can use PWM to play the sound back. Since PWM changes the average voltage by changing the duty cycle of alternating high and low phases of the signal, it would most likely result in a drop in audio quality. WAV sampling rates can range between 1 Hz and 4.3 GHz according to Google. The ATtiny85 has an internal clock with a frequency of up to 8 MHz (which I hope is the same for its PWM generator).
Considering reconfiguring the timer and PWM settings as well as looping over the array, what is the maximum sampling rate of audio I can reliably play? And should I even try to do it with PWM, or are there better options?
Given a system clock of 8 MHz, you can use PWM to generate mono (single-channel) audio.
Consider a PWM period of 1000 clocks, giving you about 10 bit resolution. The sample rate will be 8000 Hz then, which gives you some kind of lo-fi audio.
If you reduce your signal resolution to 8 bits, you'll get a sample rate of 8 MHz / 2^8 = 31.25 kHz. This gets near hi-fi.
Synchronize your sample output with the PWM generator, and use an appropriate analogue filter.
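A minimal sketch of that scheme in C (avr-gcc) is shown below. It assumes the ATtiny85 actually runs at 8 MHz (CKDIV8 fuse cleared) and that the output filter sits on OC0A (PB0); the sample array, its name and its length are placeholders, not anything from the question. Timer0 runs in fast PWM with no prescaler, so the PWM carrier and the sample rate are both 8 MHz / 256 = 31.25 kHz, and the overflow interrupt loads the next 8-bit sample.

    /* 8-bit PWM audio sketch for an ATtiny85 at 8 MHz (CKDIV8 fuse cleared).
     * Timer0 in fast PWM, no prescaler: carrier = F_CPU/256 = 31.25 kHz,
     * and the overflow interrupt doubles as the 31.25 kHz sample tick. */
    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/pgmspace.h>

    #define NUM_SAMPLES 2048                    /* placeholder length */
    const uint8_t samples[NUM_SAMPLES] PROGMEM = { 128 };  /* replace with your PCM data */

    static volatile uint16_t idx = 0;

    ISR(TIM0_OVF_vect)                          /* fires every 256 clocks = 31.25 kHz */
    {
        OCR0A = pgm_read_byte(&samples[idx]);   /* next 8-bit sample -> duty cycle */
        if (++idx >= NUM_SAMPLES)
            idx = 0;                            /* loop the sound */
    }

    int main(void)
    {
        DDRB  |= _BV(PB0);                      /* OC0A as output */
        TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00);  /* fast PWM, non-inverting */
        TCCR0B = _BV(CS00);                     /* no prescaler */
        TIMSK  = _BV(TOIE0);                    /* overflow interrupt = sample tick */
        sei();
        for (;;) { }                            /* everything happens in the ISR */
    }

A simple RC low-pass between PB0 and the speaker driver (cut-off of a few kHz, well below the 31.25 kHz carrier) then recovers the analogue signal.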
Many years ago I built a digital door bell with a sample rate of 8 kHz and 8-bit samples. It played nice sounds at telephone quality. The microcontroller was an 8051 derivative, and it used an R-2R ladder as the DAC.
A simple sine wave can be generated by using a 50% PWM signal and varying the frequency. Given some filtering effect through the speaker, it would mimic a single-tone audio signal.
Making more advanced tones (needed for natural sound) quickly gets more complicated, and the duty cycle of the signal can also be used to trick the human ear into hearing harmonics. Check out the Arduino function tone() for some inspiration.
Be careful when connecting a small speaker to the Arduino; preferably a transistor/buffer/small amplifier should be placed between the Arduino and the speaker.

Clock Conversion for RTL Verilog (FPGA) Synthesizable Code

Converting a 12 MHz system clock signal on an FPGA to a 1 MHz output signal at a 50% duty cycle.
I understand that I need to divide by 2 at a 50/50 duty cycle to get 6 MHz, then divide by 2 again to get to 3 MHz, and then divide by 3 to get to 1 MHz. Is this the correct method?
Also, how would I implement this in RTL Verilog code?
Is this the correct method?
No. First, operating on clocks in logic is often difficult to route appropriately, especially in multiple stages. Second, it is especially difficult to divide a clock by 3 and get a 50% duty cycle without either negative-edge or DDR flip-flops, both of which are often unavailable in FPGA fabric.
The correct method is to use your FPGA's clocking resources. Most modern FPGAs will have one or more onboard DLLs or PLLs which can be used to manage clock signals.
On Xilinx parts, these resources are known as the DCM, PLL, and/or MMCM, and can be instantiated using the Clocking Wizard IP core.
On Altera/Intel parts, these resources can be configured through the PLL and other megafunctions.
On Lattice parts, these resources are known as the sysCLOCK PLL, and can be configured using IPexpress.

Beaglebone Black; Wrong SPI Frequency

I'm new to programming the BeagleBone Black and to Linux in general, so I'm trying to figure out what's happening when I set up an SPI connection. I'm running Linux beaglebone 3.8.13-bone47.
I have set up an SPI connection using a Device Tree Overlay, and I'm now running spidev_test.c to test the connection. For the application I'm making, I need a fairly specific frequency, but when I run spidev_test and measure the frequency of the bits shifted out, I don't get the expected frequency.
I'm sending an SPI packet containing 0xAA, and in spidev_test I've modified "spi_ioc_transfer.speed_hz" to 4000000 (4 MHz). But I'm measuring a data transfer frequency of 2.98 MHz. I'm seeing the same result with other speeds as well; deviations are usually around 25-33%.
How come the measured speed doesn't match the assigned speed?
How is the speed assigned in "speed_hz" defined?
How precise should I expect the frequency to be?
Thank you :)
Actually, if you look closely on the DSO you can see that each clock cycle takes approximately 312.5 ns, which makes the clock frequency 3.2 MHz. Maybe the channel you're monitoring i
As for the variation between the expected and actual speed:
In the microcontrollers I've worked with, all the peripherals, including the SPI, derive their clock from the master clock supplied to the MCU (in your case MPU). The master frequency divided by some prescaler gives the frequency for peripheral operation, and each peripheral then uses its own prescaler to control the baud rate.
So in your case, if the master frequency is not what you expect, that could lead to the behavior mentioned above.
So you have two options:
1. Correct the MPU core frequency.
2. Use trial and error to find the value that has to be given in the SPI test program to get the desired frequency.
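For reference, here is a minimal sketch in C (not the original spidev_test.c) of how the speed is requested and read back through spidev. speed_hz is only a request: the controller driver picks the closest rate it can actually generate from its reference clock. On the AM335x the SPI functional clock is, as far as I know, 48 MHz with power-of-two dividers, which would make 3 MHz the nearest achievable rate below a requested 4 MHz and would explain the measurement. The device path is an assumption that depends on your overlay, and on some kernels the read-back value simply echoes the stored setting rather than the rounded hardware rate.

    /* Request a SPI speed, read back what the driver stored, and do a 1-byte
     * transfer, similar in spirit to spidev_test.c. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/spi/spidev.h>

    int main(void)
    {
        int fd = open("/dev/spidev1.0", O_RDWR);    /* adjust to your overlay */
        if (fd < 0) { perror("open"); return 1; }

        uint32_t requested = 4000000, stored = 0;
        if (ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &requested) < 0)
            perror("set speed");
        if (ioctl(fd, SPI_IOC_RD_MAX_SPEED_HZ, &stored) < 0)
            perror("get speed");
        printf("requested %u Hz, driver reports %u Hz\n", requested, stored);

        uint8_t tx = 0xAA, rx = 0;
        struct spi_ioc_transfer tr;
        memset(&tr, 0, sizeof tr);
        tr.tx_buf        = (unsigned long)&tx;
        tr.rx_buf        = (unsigned long)&rx;
        tr.len           = 1;
        tr.speed_hz      = requested;               /* per-transfer request, same rounding */
        tr.bits_per_word = 8;
        if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 1)
            perror("transfer");

        close(fd);
        return 0;
    }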

How clock_gettime achieves nano seconds resolution?

Which system hardware timer does the clock_gettime function in Linux use internally to give nanosecond-level resolution back to the user code when invoked to measure the elapsed time for a given segment of code?
Modern CPUs run at clock frequencies of several GHz, and a frequency of 1 GHz corresponds to a clock period of 1 ns, so running a (wide) counter at 1 GHz gives a time resolution in nanoseconds. This does not mean the time is as accurate as it is displayed; the value merely has that high a resolution.
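To illustrate the user-space side, here is a small sketch in C: clock_getres() reports the advertised resolution and clock_gettime() fills a struct timespec with second and nanosecond fields. Which hardware backs the clock (TSC, HPET, ACPI PM timer, ...) is chosen by the kernel's clocksource and is not visible through this API; on older glibc you may need to link with -lrt.

    /* Query the advertised resolution and time a code segment with CLOCK_MONOTONIC. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec res, t0, t1;

        clock_getres(CLOCK_MONOTONIC, &res);
        printf("advertised resolution: %ld ns\n", res.tv_nsec);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        volatile long x = 0;
        for (long i = 0; i < 1000000; i++)      /* the code segment being measured */
            x += i;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                     + (t1.tv_nsec - t0.tv_nsec);
        printf("elapsed: %lld ns\n", ns);
        return 0;
    }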

tuning pid in systems with delay

I need to tune PI(D) gains in a system which has a quite large delay. It's a common temperature controller, but the temperature probe is far away from the heater. Some further info:
the response of the probe is delayed about 10 seconds from any change on the heater
the temperature is sampled at 1 Hz, with a resolution of 0.01 °C
the heater is controlled by PWM at 1 Hz, with 10-bit resolution
the goal is to maintain the oscillation below ±0.05 °C
Currently I'm using the controller as PI. I can't avoid oscillations. The higher the gain, the smaller and faster the oscillations. Still too high (about ±0.15 °C).
Reducing the P and I gains leads to very long and deep oscillations.
I think this is due to the delay.
The settling time is not a problem, it may take all the time it needs.
I'm puzzling over how to get the system to work. Suppose I use only the I term: when the probe reaches the target value and the I output starts to decrease, the temperature will keep rising for some time. I cannot use the derivative term because the variations are too slow and the dError is very close to zero (if I set the dGain to a huge value there is too much noise).
Any idea?
Try P-only. How fast are the proportional-only oscillations? If you can't tune Kp small enough to get no oscillations, then your heater is overpowered for your system.
If the dead time of the system is on the order of 10 s, the time constant (T_i) for the integral term should be 3.3 times the dead time, using the Ziegler-Nichols open-loop PI rule ( https://controls.engin.umich.edu/wiki/index.php/PIDTuningClassical#Ziegler-Nichols_Open-Loop_Tuning_Method_or_Process_Reaction_Method: ), and the integral gain should then be Ki = Kp/T_i. So with a dead time of 10 s, Ki should be Kp/33 or smaller.
If you are getting integral-only oscillations, then the integral is winding up and down quicker than the process responds, and it should be even smaller.
Also -- think of the units of the different terms. It might not be the delay causing your problems so much as the resolution of the measurement and control systems. If you're driving a (for example) 100 W heater with a 1/1024-resolution PWM, you've got 0.1 W of resolution per PWM count that you are trying to adjust based on 0.01 °C temperature differences. At less than Kp = 100 PWM counts/degree (or 10 W/degree) you don't have enough resolution in the PWM to make changes in response to a 0.01 °C error. At Kp = 10 PWM counts/degree you might need a 0.10 °C change to result in an actual change in the PWM power. Can you use a higher-resolution PWM?
Thinking of it the other way, if you want to operate a system over a range of 30 °C at 0.01 °C, I'd think you would want at least a 15-bit PWM to have 10 times the resolution in the controlled system. With only 10 bits of PWM you only get about 1 °C of total range with control at 10x the resolution of the measurements.
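To make those numbers concrete, below is a PI loop sketch at the 1 Hz sample rate with a 10-bit PWM output, using Ki = Kp/T_i with T_i = 3.3 times the ~10 s dead time, plus a simple anti-windup clamp. The gains, setpoint and the stand-in plant model are placeholders for illustration, not tuned values for this system.

    /* PI step at 1 Hz with a 10-bit PWM output and conditional-integration
     * anti-windup. Gains and the stand-in plant are placeholders. */
    #include <stdio.h>
    #include <stdint.h>

    #define PWM_MAX 1023.0      /* 10-bit PWM */
    #define KP      50.0        /* PWM counts per degree C (placeholder) */
    #define TI      33.0        /* s, 3.3 x the ~10 s dead time */
    #define KI      (KP / TI)   /* PWM counts per (degree C * s) */

    /* Crude stand-in plant so the sketch runs on its own; replace these two
     * hooks with the real temperature read-out and PWM write. */
    static double plant_temp = 20.0;
    static double read_temperature_c(void)      { return plant_temp; }
    static void   set_heater_pwm(uint16_t duty) { plant_temp += 0.0005 * duty - 0.02; }

    static uint16_t pi_step(double setpoint_c)   /* call once per second */
    {
        static double integral = 0.0;            /* in PWM counts */
        double error = setpoint_c - read_temperature_c();
        double p = KP * error;

        /* Freeze the integrator while the output is saturated; with a 10 s
         * dead time an unchecked integrator winds up badly. */
        if (p + integral > 0.0 && p + integral < PWM_MAX)
            integral += KI * error;              /* dt = 1 s */

        double out = p + integral;
        if (out > PWM_MAX) out = PWM_MAX;
        if (out < 0.0)     out = 0.0;

        set_heater_pwm((uint16_t)(out + 0.5));
        return (uint16_t)(out + 0.5);
    }

    int main(void)
    {
        for (int t = 0; t < 600; t++) {          /* 10 simulated minutes at 1 Hz */
            uint16_t duty = pi_step(50.0);
            printf("t=%3d s  temp=%.2f C  pwm=%u\n", t, plant_temp, duty);
        }
        return 0;
    }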
Normally for large delays you have two options: lower the gains of the system or, if you have a model of the plant you are controlling, use a Smith Predictor.
I would start by modelling your system (using open-loop steps on the input) to quantify the delay and the time constant of your plant, then check whether the sampling of the temperature and the PWM rate are OK.
Notice that if your PWM frequency is too low in comparison to the plant dynamics, you will have sustained oscillations because of the slow PWM. You can check this by applying a constant input to your PWM (with no controller, open loop).
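For reference, a sketch of the Smith predictor structure for a first-order-plus-dead-time plant is given below. The model gain, time constant, dead time and PI gains are placeholders to be replaced by the values identified from the open-loop step test, and the simulated plant simply reuses the model so the program runs on its own.

    /* Smith predictor sketch: the PI acts on the undelayed model output,
     * corrected by (measurement - delayed model output). All numbers are
     * placeholders. */
    #include <stdio.h>

    #define DT       1.0        /* s, controller period (1 Hz sampling) */
    #define N_DEAD   10         /* dead time in samples (~10 s) */
    #define K_PLANT  0.05       /* degree C per PWM count (placeholder) */
    #define TAU      60.0       /* s, plant time constant (placeholder) */
    #define KP       40.0       /* PI gains (placeholders) */
    #define KI       0.5
    #define PWM_MAX  1023.0

    static double first_order(double y, double u)   /* one DT step of the model */
    {
        return y + DT / TAU * (K_PLANT * u - y);    /* y = rise above ambient */
    }

    int main(void)
    {
        double ambient = 20.0, setpoint = 50.0;
        double y_model = 0.0, y_plant = 0.0;        /* undelayed model / simulated plant */
        double model_hist[N_DEAD] = {0};            /* delay line for the model */
        double plant_hist[N_DEAD] = {0};            /* transport delay of the plant */
        double integral = 0.0;
        int head = 0;

        for (int t = 0; t < 900; t++) {
            double measured = ambient + plant_hist[head];    /* what the probe sees */
            double model_delayed = ambient + model_hist[head];

            /* Smith predictor feedback: undelayed model + model mismatch */
            double feedback = ambient + y_model + (measured - model_delayed);
            double error = setpoint - feedback;

            double u = KP * error + integral;
            if (u > 0.0 && u < PWM_MAX)
                integral += KI * error * DT;         /* simple anti-windup */
            if (u > PWM_MAX) u = PWM_MAX;
            if (u < 0.0)     u = 0.0;

            /* advance model and simulated plant, push into the delay lines */
            y_model = first_order(y_model, u);
            y_plant = first_order(y_plant, u);       /* stand-in for the real heater */
            model_hist[head] = y_model;
            plant_hist[head] = y_plant;
            head = (head + 1) % N_DEAD;

            if (t % 60 == 0)
                printf("t=%3d s  probe=%.2f C  pwm=%.0f\n", t, measured, u);
        }
        return 0;
    }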
EDIT: Didn't see that the problem was already solved, but I'll leave this here for reference.
