Problem switching MOSFETs in a brushless motor drive

I have designed a brushless motor drive, the schematic below shows the connection of gate drive and MOSFETs:
schematic
In general, the drive works well, but there are some problems with the switching of the MOSFETs.
As shown in the picture below, the U-phase MOSFETs suffer a momentary shoot-through (short circuit) at the moment of turn-off, which I attribute to the Miller effect; at high currents this makes the MOSFETs extremely hot and eventually destroys them. The blue waveform corresponds to the upper MOSFET and the yellow waveform to the lower MOSFET.
U-phase gate-source waveform
This problem only happens on the U phase; as shown in the figure below, the W phase does not have it:
W-phase gate-source waveform
I have tried many remedies, such as putting a capacitor in series with the gate resistor and adding a P-channel MOSFET between the gate and source of the MOSFET, but none of them worked.
Can anyone help? Why does this happen only for the U phase?


Color management - what exactly does the monitor ICC profile do, and where does it sit in the color conversion chain?

I'm reading and watching everything I can about color management and color science, and something that's not making sense to me is the distinction between scene-referred and display-referred workflows. Isn't everything display-referred, because your monitor is converting everything you see into something it can display?
While reading this article, I came across this image:
So, if I understand this right, to follow a linear workflow I should apply an inverse power function to any imported JPG/PNG/etc. files that contain color data, to make their gamma linear. I then work on the image, and when I'm ready to export, say to sRGB saved as a PNG, the export will bake the original transfer function back in.
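For what it's worth, a minimal NumPy sketch of that linearization step, assuming an 8-bit sRGB-encoded image loaded into an array (note the real sRGB curve is piecewise, not a pure power function):

import numpy as np

def srgb_to_linear(img_u8):
    """Decode an 8-bit sRGB-encoded image to linear light (floats in 0..1).

    Uses the piecewise sRGB EOTF; a plain x**2.2 is only an approximation.
    """
    x = img_u8.astype(np.float64) / 255.0
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    """Re-encode linear light back to sRGB before saving an 8-bit PNG."""
    y = np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.clip(np.round(y * 255.0), 0, 255).astype(np.uint8)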
But even while it's linear and I'm working on it, isn't my monitor converting everything I see into what it can display? Isn't it basically applying its own LUT? Isn't there already a gamma curve that the monitor itself is applying?
Also, from input to output, how many color space conversions take place, say, if I'm working in the ACEScg color space? If I import a JPG texture, I linearize it and bring it into ACEScg. I work on it, and when I render it out, the renderer applies a view transform to convert it from ACEScg to sRGB, and then what I'm seeing is my monitor converting from sRGB to its own ICC profile, right (which is always happening, since everything I see goes through my monitor's ICC profile)?
Finally, if I add a tone-mapping S-curve, where does that conversion sit in that image?
I'm not sure your question is about programming, and the question doesn't have much to do with its title.
In any case:
Light (photons) behaves linearly: the intensity of two lights together is the sum of the intensity of each light. For this reason a lot of image manipulation is done in linear space. Note: camera sensors often have a nearly linear response.
Eyes perceive brightness roughly as with a gamma exponent of 2, so gamma encoding is useful for compression (less visible noise with fewer bits of information). By coincidence the CRT phosphors also had a similar response (otherwise the engineers would have found some other method: in the past such things were settled by a lot of experiments and feedback from users, across many settings).
Screens expect images with a standardized gamma correction (nowadays it depends on the port, the settings and the image format). Some screens can cope with many different colour spaces. Note: we no longer have CRTs, so the screen converts the data from the expected gamma to the monitor's own gamma (possibly with a different value for each channel), so a sort of LUT (it may be done purely electronically, without an actual table). Screens are set up so that a standard signal gives the expected light. (There are standards, test images and methods to measure the expected behaviour, so there is some implicit gamma correction of the already gamma-corrected values. It was always so: on old electronic monitors/TVs a technician had internal knobs to adjust individual colours, general settings, etc.)
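To make the "sort of LUT" concrete, here is a toy sketch of a per-channel 1D lookup table applied to 8-bit pixel values, which is the same principle a monitor or video-card gamma table works on (the gamma values below are made up):

import numpy as np

def build_gamma_lut(gamma, size=256):
    """Build a 1D LUT: 256 input codes mapped through a power curve."""
    x = np.linspace(0.0, 1.0, size)
    return np.round((x ** gamma) * (size - 1)).astype(np.uint8)

def apply_lut(img_u8, lut):
    # Indexing with the pixel values *is* the table lookup.
    return lut[img_u8]

# Example: remap an image encoded for gamma 2.2 onto a panel whose native
# response is gamma 2.4 (made-up numbers), i.e. correction exponent 2.2/2.4.
lut = build_gamma_lut(2.2 / 2.4)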
Note: professionals outside computer graphics often use the term opto-electronic transfer function (OETF) for the camera side (light to signal) and the inverse, the electro-optical transfer function (EOTF), for converting an (electrical) signal back to light, e.g. in the screen. I find this way of naming the "gamma" shows quickly what it really is: just a conversion between an analogue electrical signal and light intensity.
The input image has its own colour space. You assume a JPEG here, but often you have much more information (RAW or log, S-Log, ...). You then convert to your working colour space (which may be linear, as in our example). If you display the working image directly, you will see distorted colours. But you may not be able to show it at all, because you will probably use more than 8 bits per channel (colour); 16 or 32 bits is common, often as half-float or single-precision float.
And I lost part of my answer (after the last autosave). The rest was also complex, but the answer is already too long. In short: you can calibrate the monitor in two ways. The best way (if you have a monitor that can be "hardware calibrated") is to modify the tables inside the monitor, so it is nearly all transparent (the internal gamma function is simply adapted to give better colours); you still get an ICC profile, but for other reasons. Or you use the easy calibration, where the bytes of an image are transformed on your computer to get better colours (by a program, or nowadays often by the operating system, either directly or by telling the video card to do it). You should carefully check that only one component does the colour correction.
Note: in your program you should save the image as sRGB (or Adobe RGB), i.e. with a standard ICC profile, and practically never with your screen's ICC profile, just for consistency with other images. It is then the OS, or soft-proofing, etc. that converts for your screen; and if the image does carry your screen's ICC profile, the OS colour management will simply see that the ICC-image to ICC-output transform is a trivial conversion (just copying the values).
So take into account that at every step there is an expected colour space and gamma. All programs expect it, and it may be changed later. There may be some unnecessary calculation, but it makes things simpler: you do not have to keep track of every step's expectations yourself.
And there are many more details. The ICC profile is also used to characterize your monitor (its achievable gamut), which can be used for some colour-management tasks. Rendering intents are just the methods used for colour correction when the image has out-of-gamut colours (either keep the nearest colour, so you lose shading but gain accuracy, or scale all colours and expect your eyes to adapt: they do, if you view just one image at a time). The devil is in such details.

Clock domain crossing signals and Jitter requirement

I am reading the DVCon 2006 paper "Pragmatic Simulation-Based Verification of Clock Domain Crossing Signals and Jitter using SystemVerilog Assertions" by Mark Litterick. I am confused by some of the statements:
Page 2, Section 4.2: "Input data values must be stable for three destination clock edges."
The paper seems to imply positive edges, since that is what the property p_stability appears to check.
But the paper by Clifford Cummings (CDC Design and Verification Techniques Using SystemVerilog) gives this requirement as 1.5x, so he is suggesting 2 positive edges and 1 negative edge. Can someone confirm whether the paper means positive edges?
Page 5, Section 6, Figure 11: the synchronizer with jitter emulation allows a random 3-clock delay. For a single-bit input, how do we get a 3-clock delay? I can see that being useful for a multi-bit input where there is some skew, but not for a single bit.
property p_stability;
  @(posedge clk) // NOTE POSITIVE EDGE
  // if d_in changed at this edge, it must then stay stable for the next
  // two positive edges, i.e. three consecutive positive edges in total
  !$stable(d_in) |=> $stable(d_in)[*2];
endproperty
I can confirm the intent of the original statement is 3 positive edges; let me explain why. It is quite straightforward to identify the potential for a pulse that is two positive edges wide to be filtered: specifically, if the actual pulse (let's say a high level) violates the setup time for the first edge and the hold time for the second edge, then RTL simulation would see the signal as high for two clock edges, but it could be filtered out completely due to metastability. If the verification remains in the event-driven simulation domain, then a safe verification margin is to say we can (only) guarantee propagation if the signal is observed for 3 consecutive positive edges.
Now the reality, in the time domain rather than the event-driven domain, is that the pulse width must be strictly greater than the clock period plus the setup and hold times, which is more than two edges but less than three. But you would need a temporal check to validate that, not an event-based check.
(for the second question, I need to go back to the paper myself)
Hope that helps,
Mark
I read both papers a while back. My understanding is that Clifford Cummings's statement is more accurate: a D-input pulse width greater than 1.5x the receiving clock period is the minimum requirement. This guarantees two positive sampling edges plus some margin for setup and hold time.

Using microphone input to create a music visualization in real time on a 3D globe

I am involved in a side project that has a loop of LEDs around 1.5 m in diameter with a rotor on the bottom which spins the loop. A Raspberry Pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in taking a microphone input and turning it into a column of pixels which is rendered on the loop in real time. The goal is to see if we can have it react to music in real time. So far I've come up with this idea:
Use an FFT to quickly turn the input sound into a function that maps certain pixels to certain colors based on the amplitude of the resulting spectrum at each frequency, so the equator of the globe would respond to the strength of the lower-frequency sound, progressing upwards towards the poles, which would respond to high-frequency sound.
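A rough NumPy sketch of that mapping (the sample rate, frame size and number of LED rows are assumptions, and the band edges are just one reasonable choice):

import numpy as np

SAMPLE_RATE = 44100     # assumed microphone sample rate
CHUNK = 1024            # samples per analysis frame (~23 ms at 44.1 kHz)
N_LATITUDES = 32        # hypothetical number of LED rows from equator to pole

def frame_to_column(samples):
    """Turn one audio frame into per-latitude brightness values (0..1).

    Low frequencies map to the equator (index 0), high frequencies to the pole.
    """
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)

    # Group FFT bins into N_LATITUDES log-spaced bands (hearing is roughly logarithmic).
    edges = np.logspace(np.log10(40), np.log10(SAMPLE_RATE / 2), N_LATITUDES + 1)
    column = np.empty(N_LATITUDES)
    for i in range(N_LATITUDES):
        band = spectrum[(freqs >= edges[i]) & (freqs < edges[i + 1])]
        column[i] = band.mean() if band.size else 0.0

    # Compress the dynamic range and normalize so the LEDs don't saturate.
    column = np.log1p(column)
    return column / (column.max() + 1e-9)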
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it won't seem to the observer to be responding to the specific song they are also hearing.
Without detecting the beat or some overall characteristic of the music that people recognize, it might be difficult for observers to tell that the output is correlated with the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but possibly also helpful, because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the Raspberry Pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music, tone and whatnot. Part of my problem is that although I know the Raspberry Pi is the newest model, I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of about 0.25 s would be acceptable. I feel like I'm in over my head, so I thought I'd ask you for input.
Thanks!
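On the performance worry: a 1024-sample FFT every ~23 ms is comfortably within a Raspberry Pi's reach using NumPy alone, so a 0.25 s budget is generous. A hedged sketch of the capture loop, assuming PyAudio for input (any ALSA-capable capture library would do) and reusing the hypothetical frame_to_column() from the sketch above:

import numpy as np
import pyaudio  # assumes the PyAudio package and a working ALSA input device

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=44100,
                 input=True, frames_per_buffer=1024)

try:
    while True:
        raw = stream.read(1024, exception_on_overflow=False)
        samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
        column = frame_to_column(samples)   # hypothetical helper from the sketch above
        # push_column_to_leds(column)       # hardware-specific, left as a stub
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()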

Using cross-correlation to detect the beginning of a signal

I am using cross-correlation to find where an audio signal occurs within a recording. When doing this, the point of highest correlation is always found somewhere within the signal in the recording, but I'm looking for a way to find the point where that signal BEGINS in the recording. Does anybody know of a way to go about doing this, or if cross-correlation will even do the job? Thanks in advance.
If your signal is stationary, then instead of looking for a maximum using a single cross-correlation window, try looking for the maximum difference between two adjacent, signal-sized cross-correlation windows. If the prior window shows a very low correlation and the current window shows a very high correlation, then the likelihood that you are right at a transition edge is good.
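A minimal NumPy sketch of that idea, with made-up names; it brute-forces a normalized correlation score at every offset and then looks for the biggest jump between adjacent template-sized windows:

import numpy as np

def find_onset(recording, template):
    """Estimate where `template` begins inside `recording`.

    Scores every offset with a normalized correlation, then picks the offset
    where the window starting there matches well while the window just
    before it matched poorly (the "adjacent windows" idea above).
    """
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)

    # Brute-force normalized correlation at each offset; for long recordings
    # an FFT-based correlation (e.g. scipy.signal.fftconvolve) would be faster.
    m = len(recording) - n + 1
    scores = np.empty(m)
    for i in range(m):
        w = recording[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = float(np.dot(t, w)) / n

    diffs = scores[n:] - scores[:-n]   # current window minus the one before it
    return int(np.argmax(diffs)) + n   # estimated start sample of the signal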

PWM Current calculation and dependency on frequency

I am using a PIC16F877A to drive a solid-state relay connected to a 300 W starter motor (R = 50 milliohms, L = 50 mH).
I tried varying the frequency and duty cycle to reduce the inrush current. It worked: my current dropped to almost half.
I know that the average voltage for a PWM signal is V times the duty cycle, but I am not driving the motor directly, only through a relay. Can anyone give me a formula for calculating the current to the motor, for validation?
Regs,
cj
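As a rough sanity check only (not a substitute for real motor data, and with several assumed values), a sketch that numerically integrates a bare R-L model of the motor under an ideal PWM voltage, ignoring back-EMF and the relay's own behaviour:

import numpy as np

# Assumed values: R and L from the question, plus a guessed 12 V supply and
# PWM settings. Back-EMF is ignored, so this is an upper-bound style estimate.
V = 12.0          # supply voltage (assumption)
R = 0.050         # 50 milliohms, from the question
L = 0.050         # 50 mH, from the question
F_PWM = 1000.0    # PWM frequency (assumption)
DUTY = 0.5        # duty cycle (assumption)
DT = 1e-6         # integration step, 1 microsecond

def simulate(t_end=0.2):
    """Euler-integrate di/dt = (v - R*i) / L for an ideal PWM drive voltage.

    v = 0 during the off time assumes a freewheeling path; with a bare relay
    and no diode the inductor would instead force a voltage spike or arc.
    """
    period = 1.0 / F_PWM
    steps = int(t_end / DT)
    i = 0.0
    currents = np.empty(steps)
    for k in range(steps):
        t = k * DT
        v = V if (t % period) < DUTY * period else 0.0
        i += DT * (v - R * i) / L
        currents[k] = i
    return currents

i_t = simulate()
# With L/R = 1 s the current ramps slowly toward the no-back-EMF average D*V/R.
print(f"current after 0.2 s: {i_t[-1]:.1f} A, heading toward D*V/R = {DUTY * V / R:.1f} A")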
I think you would need a datasheet of the motor with its electrical and mechanical characteristics to determine the current, but that would still be a theoretical value. In the real world you will have the wires, contacts and so on that add additional resistance and will "help" to limit the starting current. But don't choose the wires too small, and use a fuse for safety reasons. This should help you to choose the right wires: American Wire Gauge
If it's a DC motor, there is a better and quite simple solution.
Because of mechanical wear and the limited switching frequency, you should not use a relay. A better solution would be a field-effect transistor (FET) suited to the application, switching at a PWM frequency of about 20 kHz so it does not produce any annoying humming or whining sounds in the motor. Depending on the transistor, you will need a driver circuit for the FET to operate well, dropping only a small amount of power (passive cooling might still be needed).
For a smooth start of the motor with a minimum of peak current, you should apply a linear duty-cycle sweep from 0 to 100%. The optimum duration of the sweep depends on the motor and the mechanical load; discover your optimum by trying different sweep durations.
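A minimal sketch of that sweep, assuming a hypothetical set_duty() callback that programs the PWM hardware:

import time

def soft_start(set_duty, sweep_time=2.0, steps=100):
    """Ramp the PWM duty cycle linearly from 0 to 100% over sweep_time seconds.

    set_duty is a hypothetical callback that programs the PWM hardware; on a
    PIC16F877A that would mean writing the CCP1 duty-cycle registers, and the
    ramp would normally live in a timer interrupt rather than a sleep loop.
    Tune sweep_time experimentally for the motor and its mechanical load.
    """
    for k in range(steps + 1):
        set_duty(k / steps)            # duty as a fraction, 0.0 .. 1.0
        time.sleep(sweep_time / steps)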
