I am very new to Python and programming in general, so I am hoping someone can point me in the right direction in my search for a solution.
In controlling a linear actuator, I need to be sure the speed is constant over varying loads.
The motor controller receives byte values from 0 (stopped) to 127 (full speed) over serial to control the speed.
The position feedback is a voltage from 0 to 5 V, with 5 V being fully retracted and 0 V being fully extended.
I have looked at simple-pid, but I'm not sure I understand how I would apply it in this case.
Essentially, every 0.x seconds I would check the position relative to the previous position, calculate the speed, and then update the speed value being sent over the serial port so that I reach my speed setpoint.
Could someone detail what should go where, and specifically what the tunings are for?
https://pypi.org/project/simple-pid/
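From the README, this is roughly what I am imagining. The loop below is untested, the tuning values are guesses, and read_position_volts/send_serial_byte are made-up placeholders for whatever my ADC and serial code actually look like:

    import time
    from simple_pid import PID

    # Hypothetical hardware helpers -- replace with real ADC/serial code.
    def read_position_volts():
        """Return actuator position as 0-5 V (5 V = fully retracted)."""
        return 2.5

    def send_serial_byte(value):
        """Send one speed byte (0 = stopped, 127 = full speed)."""
        pass

    TARGET_SPEED = 0.5   # desired speed in volts of travel per second (a guess)
    DT = 0.1             # the "every 0.x seconds" loop period

    # The tunings: Kp reacts to the current speed error, Ki accumulates error
    # so load-induced droop is cancelled over time, Kd damps overshoot.
    pid = PID(1.0, 0.1, 0.05, setpoint=TARGET_SPEED)
    pid.output_limits = (0, 127)   # clamp output to the controller's byte range

    prev_pos = read_position_volts()
    while True:
        time.sleep(DT)
        pos = read_position_volts()
        speed = (prev_pos - pos) / DT   # V/s; positive while extending (5 V -> 0 V)
        prev_pos = pos
        send_serial_byte(int(pid(speed)))  # feed measured speed, send new command

Is that the right shape for it?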
Thanks!
I've been trying to find a way to capture keyboard input during runtime simulation of my Verilog code. Is this even possible?
I have taken a look at resources like asic-world and the Quick Reference for Verilog found on Google, but have found nothing regarding a way to take keyboard input.
There seems to be a fundamental misunderstanding here about the difference between using a hardware description language to simulate a design and using that same description to implement the design in actual hardware. It's like drawing a picture of a pinwheel, blowing on the picture, and expecting the pinwheel to start turning.
You can certainly build a 3-D model of that pinwheel, simulate the force of the wind on that model, watch it turn, and then send the model to a 3-D printer to get your physical pinwheel. I suppose you could even put wind sensors in front of your monitor and write a program that converts a value from the sensors into a value used in the simulation. The point is, the simulator has no knowledge that the value came from someone blowing on the monitor; it just sees a parameter value change.
Unless you are designing the keyboard hardware yourself and simulating that, there really is not much point in taking keyboard input from a computer and using that to stimulate your design in simulation. The operating system has already abstracted away the keyboard hardware and provides you with a string of character codes. The reason you are simulating in the first place is to verify the functionality of your design. If you find a problem, you are going to want to replay the exact same stimulus until you fix your problem.
As with the pinwheel example, I do know it's possible to set up a program that reads keyboard input and provides it as stimulus to a simulation. But that involves inter-process communication (IPC) and tool-specific knowledge to set up.
Let's say I have two separate recordings of the same concert (each created on a user's phone and then uploaded to our server). These recordings are aligned according to their creation timestamps. However, when the recordings are played together or quickly toggled between, it becomes clear that the creation timestamps must be off, because there is a perceptible delay.
Since the timestamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn audio signal processing to solve this problem, but I recognize that may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are off by an unknown amount? If yes, a general outline of how this would work and some keywords would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle and some keywords would be appreciated.
There are two separate issues you need to deal with. Issue 1 is aligning the start times of the recordings. I doubt you can expect both users to have pressed record at the exact same moment; even if they did, they may be at different distances from the speakers, and sound takes time to travel. Aligning the start times by hand is pretty trivial, since the human brain is good at comparing the similarities of sounds. Programmatically it's a different story; you might try something like cross-correlation, or look over on dsp.stackexchange.com. There is no exact method, though.
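As a starting point, here is a minimal sketch of the cross-correlation idea using numpy/scipy, assuming both recordings are already decoded to mono arrays at the same sample rate (an illustration, not a robust aligner):

    import numpy as np
    from scipy.signal import correlate

    def find_offset(ref, other, rate):
        """Seconds that `other` started recording after `ref` (negative: before)."""
        n = int(min(len(ref), len(other), 30 * rate))  # compare up to ~30 s; full songs get slow
        corr = correlate(ref[:n], other[:n], mode="full")
        lag = np.argmax(corr) - (n - 1)                # sample shift at the correlation peak
        return lag / rate

    # offset = find_offset(recording_a, recording_b, 44100)
    # then pad `recording_b` with `offset` seconds of silence (or trim) before mixing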
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to run at exactly the same rate, so even if you synchronize the start times, the two recordings will eventually drift apart. The time it takes to drift noticeably is a function of the difference between the two clock frequencies; if they are relatively close, you may not notice in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. Plenty of audio recording apps let you time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching", or again have a look at dsp.stackexchange.com.
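For a rough estimate of how much to stretch, you could reuse the find_offset sketch from above: measure the offset in a window near the start and again near the end, and how much the offset grows tells you the relative clock-rate error (again just an illustration, and it assumes the recordings are already roughly start-aligned):

    def estimate_stretch(ref, other, rate, window_s=30):
        """Factor by which `other` should be time-stretched to match `ref`."""
        n = int(rate * window_s)
        start_off = find_offset(ref[:n], other[:n], rate)
        end_off = find_offset(ref[-n:], other[-n:], rate)
        elapsed = (len(ref) - n) / rate      # time between the two windows
        # Offset growth per second of playback ~= relative clock-rate error.
        return 1.0 + (end_off - start_off) / elapsed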
I realize neither of these is a direct answer, just a suggestion of where to look.
Take a look at this document, which describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.
I'm trying to interface a Nexys3 board with a VmodTFT via a VHDCI connector. I am pretty new to FPGA design, although I have experience with microcontrollers. I am trying to approach the whole problem as an FSM, but I've been stuck on this for quite some time now. What signals constitute my power-up sequence? When do I start sampling data? I've looked at the relevant datasheets and they don't make things much clearer. Any help would be greatly appreciated. (P.S.: I use Verilog for the design.)
EDIT:
Sorry for the vagueness of my question. Here's specifically what I am looking at.
For starters, I am going to overlook the touch module. I want to look at the whole setup as an FSM. I am assuming the following states:
1. Setup connection or handshake signals
2. Switch on the LCD
3. Receive pixel data
4. Display video
5. Power off the LCD
Would this be a reasonable FSM? My main concerns are with interpreting the signals. Table 5 in the VmodTFT_rm manual shows a list of signals; however, I am having trouble understanding which signals are for what (this is my first time working with display modules). I am going to assume everything prefixed with TFT_ is for the display and everything with TP_ is for the touch panel (please correct me if I'm wrong). So which signals would I be changing in each state, and which would act as inputs?
Now what changes should I make to accommodate the touch panel too?
I understand I am probably asking for too much, but I would greatly appreciate a push in the right direction, as I have been stuck on this for a long time.
Your question could be filled out a little better; it's not clear exactly what's giving you trouble.
I see two relevant docs online (you may have seen these):
Schematic: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_sch.pdf
User Guide: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_rm.pdf
The user guide explains which signals are part of the power-up sequence:
1. You must wait between 0.5 ms and 100 ms after driving TFT-EN before you can drive DE and the pixel bus.
2. You must wait 0 to 200 ms after setting up valid pixel data before enabling the display (with DISP).
3. You must wait 160 ms after enabling DISP before you start pulsing LED-EN (the PWM controls the backlight).
Admittedly the documentation doesn't look great and some of the signal names are not consistent, but I think you can figure it out from there.
After reading the user guide to understand what the signals do, look at the schematic to find the mapping between the signal names and the VHDCI pinout. Then, when you connect the VHDCI pinout to your FPGA, look at your FPGA board's manual to find the mapping between pins on the VHDCI connector and balls of the FPGA. Finally, use the FPGA's configuration settings to map an FPGA ball to a logical Verilog input on your top module.
Hope that clears things up a bit, but please clarify your question about what you don't understand.
I am using an MPU6050 IMU to map the path of a device (with the starting point as the origin). For this I need to convert the accelerometer and gyroscope readings into Cartesian coordinates. I think I need to continuously sample the accelerometer readings and keep adding (integrating) each sample to the previous point for each axis. At startup the previous point will be (0, 0, 0).
I know how this works on paper, but I don't think it will be that simple. How will I know when the device is moving backwards, i.e. towards the origin?
The MPU6050 provides acceleration and gyro readings on all axes, and I have used this to fetch the values, but I don't know how to continue. So what I need is an "inertial navigation system" which takes the acceleration and angular velocity vectors, as well as the current position, as input and returns the new position. I know this will have errors, but I am not concerned about that for now.
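For reference, this is the naive double integration I have in mind, written as a Python sketch (untested; the gravity handling and the first-order gyro integration are my own assumptions, and I know the drift will build up quickly):

    import numpy as np

    def skew(v):
        """Skew-symmetric matrix, so that skew(a) @ b == np.cross(a, b)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    GRAVITY = np.array([0.0, 0.0, 9.81])  # world frame, z up (sign convention assumed)

    def dead_reckon(samples, dt):
        """samples: iterable of (accel_body m/s^2, gyro_body rad/s) tuples.
        Returns the list of world-frame positions, starting at the origin."""
        R = np.eye(3)       # body-to-world rotation; assumes the device starts level
        vel = np.zeros(3)
        pos = np.zeros(3)
        path = [pos.copy()]
        for accel_body, gyro_body in samples:
            # 1. Integrate the gyro to track orientation (first-order update).
            R = R @ (np.eye(3) + skew(np.asarray(gyro_body) * dt))
            # 2. Rotate acceleration into the world frame and remove gravity.
            accel_world = R @ np.asarray(accel_body) - GRAVITY
            # 3. Integrate twice: acceleration -> velocity -> position.
            vel = vel + accel_world * dt
            pos = pos + vel * dt   # a negative velocity moves back toward the origin
            path.append(pos.copy())
        return path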
If someone can guide me on this, that would be great. Any hints or pointers will be appreciated.
Kiran G
Kiran,
To answer that question, it would be good to know what kind of gyro you are using or willing to use. The approach is very different depending on whether the output is an analog signal (a voltage or current loop) or some kind of (normally serial) bus.
Please note that you will most likely also have to filter the signal based on the expected dynamics of the environment.
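For example, the filtering could be as simple as a low-pass filter on the raw samples. A small Python sketch, purely as an illustration (the 5 Hz cutoff is an arbitrary placeholder you would choose from the expected motion dynamics):

    from scipy.signal import butter, lfilter

    def lowpass(data, rate_hz, cutoff_hz=5.0, order=2):
        """Low-pass filter raw IMU samples; cutoff chosen from expected dynamics."""
        b, a = butter(order, cutoff_hz / (rate_hz / 2.0))  # normalized to Nyquist
        return lfilter(b, a, data)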
I am developing software that can automatically record and extract every word in my voice, and I am using the PortAudio library for it. But I am stuck on detecting the sound: I set the silence value to zero, so that any sample equal to zero must be the start or end point of a word. But when I ran it, the program detected far too many words. I think it's because the values I read with PortAudio are raw data, so they can't be processed like that. Am I right? How can I fix it? By the way, I am coding in C++ :D
To detect the presence of a signal in a PCM stream, you have to be able to distinguish it from the background noise. As dprogramz said, the noise floor of your sound card is not perfect, so there will always be some noise in the recording (even with no mic connected).
The solution is to use a VOX or VAD algorithm to detect the presence of your voice. VOX can be tricky, since in most consumer-grade electronics the noise floor is just low enough to sound like "silence" to the human ear relative to the signal. This means the difference in amplitude between the noise floor and the signal may be slight. If your sound card has AGC turned on, this can make it even more difficult, since the noise floor may move. Having said that, VOX can be implemented successfully on consumer-grade equipment; it just takes more effort to establish the threshold. Done well, the threshold is recalculated periodically while the stream is active.
If I were doing this, I'd implement a VAD algorithm. Since your objective is to detect your own voice, this should give a reliable result regardless of the equipment you use.
I don't think it's because it is a raw value. Raw sound data is simply a stream of amplitude samples.
However, the value will rarely (if ever) be exactly zero, because you have to take into account the small amount of electrical noise made by the mic. Figure out the "idle" dB level of your mic (just measure the level when you aren't talking into it). You then need to set a silence threshold (below a certain dB level for a certain number of samples) to detect the beginning and end of each word. Attempting to detect an exact zero value is going to be near impossible.
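Something like this is what that threshold logic looks like in practice (sketched in Python for brevity, though the same logic ports directly to C++; the threshold, frame size, and hangover count are placeholder values you would tune against your mic's idle level):

    import numpy as np

    FRAME_MS = 20          # 10-30 ms frames are typical for speech
    SILENCE_DBFS = -45.0   # placeholder; set just above your mic's idle level
    HANGOVER = 10          # quiet frames required before we declare silence

    def frame_dbfs(frame):
        """RMS level of one frame of float samples in [-1, 1], in dBFS."""
        rms = np.sqrt(np.mean(np.square(frame))) + 1e-12  # avoid log(0)
        return 20.0 * np.log10(rms)

    def split_words(signal, rate):
        """Yield (start, end) sample indices of segments above the threshold."""
        frame_len = int(rate * FRAME_MS / 1000)
        start, quiet = None, 0
        for i in range(0, len(signal) - frame_len, frame_len):
            if frame_dbfs(signal[i:i + frame_len]) > SILENCE_DBFS:
                if start is None:
                    start = i          # first loud frame: a word begins
                quiet = 0
            elif start is not None:
                quiet += 1
                if quiet >= HANGOVER:  # enough quiet frames: the word ended
                    yield (start, i)
                    start, quiet = None, 0
        if start is not None:
            yield (start, len(signal))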