The problem is this: I want to display the data transmitted by the DHT11 on the LCD, but I can't get it to work.
I am simulating the circuit in Proteus.
This is the main code: https://pastecode.io/s/nuw0hxkc
LCD library: https://pastecode.io/s/xh93auwq
DHT11 library: https://pastecode.io/s/7xma86jp
I don't understand where I'm wrong.
With the debugger I obtained these values for the variables:
I_RH=223
D_RH=225
I_Temp=225
D_Temp=225
CheckSum=225
I found the problem. In the Proteus simulation I had to set the CKSEL fuses to the internal 8 MHz oscillator.
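To illustrate why the fuse setting matters (a minimal sketch; the ATmega-class MCU and the PD2 data pin are assumptions, not taken from my actual code): the DHT11 protocol is timing-critical, so the F_CPU the delay routines compile against must match the clock the CKSEL fuses actually select.

/* Minimal sketch, assuming an ATmega-class AVR with the DHT11 data line
 * on PD2. F_CPU must match the fuse-selected clock (internal 8 MHz here);
 * if the fuses still select 1 MHz, every _delay_us() runs 8x too long and
 * the sensor returns garbage like the values above. */
#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

static void dht11_start(void)
{
    DDRD  |= (1 << PD2);     /* drive the data line as output    */
    PORTD &= ~(1 << PD2);    /* pull low for at least 18 ms      */
    _delay_ms(20);
    PORTD |= (1 << PD2);     /* release the line...              */
    _delay_us(30);           /* ...and wait 20-40 us             */
    DDRD  &= ~(1 << PD2);    /* switch to input for the response */
}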
I am trying to demodulate a modulated FM signal. As you can see in the code below, I use fmmod to modulate the signal, but when I use fmdemod or ademodce, neither returns anything resembling the original signal. For fmdemod, I believe the problem is that you cannot pass freqdev as you can to fmmod, so it does not perform the exact inverse. With ademodce I have no idea why it is not working. Is there another Octave function I can use to recover the original signal, or how do I use either of these two correctly to do it?
See example code below:
pkg load communications;                 % provides fmmod/fmdemod/ademodce
[sound2,fs]=audioread('sound2.wav');
fc=7500;                                 % carrier frequency in Hz
freqdev=100;                             % frequency deviation in Hz
dt=1/fs;
len=length(sound2)*dt;
t=0:dt:len;
t=t(1:end-1);                            % one time point per sample
FMmod=fmmod(sound2.',fc,fs,freqdev);
FMDemod=ademodce(FMmod,fs,"fm",freqdev); % or fmdemod(FMmod.',fc,fs) -- neither works
sound(sound2,fs)
sound(FMDemod,fs)
subplot(2,2,1),plot(t,sound2),title('Original Sound');
subplot(2,2,2),plot(t,FMmod,'r'),title('FM Modulated');
subplot(2,2,3),plot(t,FMDemod,'g'),title('FM De-Modulated');
subplot(2,2,4),plot(t,FMmod-sound2.'),title('Modulated Minus Original');
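One alternative I am considering (a rough sketch on my part, assuming the signal package's hilbert function; I have not verified it) is to demodulate manually by differentiating the instantaneous phase of the analytic signal:

pkg load signal;                 % provides hilbert()
z = hilbert(FMmod(:));           % analytic signal of the FM waveform
ph = unwrap(angle(z));           % instantaneous phase in radians
inst_f = diff(ph)*fs/(2*pi);     % instantaneous frequency in Hz
msg = (inst_f - fc)/freqdev;     % remove the carrier, rescale by the deviation
msg(end+1) = msg(end);           % pad back to the original length
sound(msg,fs)                    % should resemble sound2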
I am attempting to write a Python script using the angr binary analysis library (http://angr.io/). I have written code that successfully loads a core dump of the process I want to play with, by passing the ELFCore backend (http://angr.io/api-doc/cle.html#cle.backends.elf.elfcore.ELFCore) to the project constructor, something like the following:
ap = angr.Project("corefile", main_opts={'backend': 'elfcore'})
What I am wondering is, how do I now "run" the program forward from the state (registers and memory) which was defined by the core dump? For example, when I attempted to create a SimState using the above project:
ss = angr.sim_state.SimState(project=ap)
ss.regs.rip
I got back that rip was uninitialized, even though it was certainly initialized in the core dump, i.e. at the point when the core dump was generated.
Thanks in advance for any help!
Alright! I figured this out. Being a total angr n00b, this may not be the best way of doing it, but since nobody offered a better way, this is what I came up with.
First...
ap = angr.Project("corefile", main_opts={'backend': 'elfcore'}, rebase_granularity=0x1000)
ss = angr.factory.AngrObjectFactory(ap).blank_state()
The rebase_granularity was needed because my core file had the stack mapped high in the address range, and angr refuses to map things above your main binary (my core file, in this case).
From inspecting the angr source (and playing at a Python terminal) I found that at this point the above state has its memory mapped out the way the core file defined it, but the registers are not yet set appropriately. Therefore I needed to proceed with:
import warnings
import cle

# Find the ELFCore object among everything the loader mapped
elfcore_object = None
for o in ap.loader.all_objects:
    if type(o) == cle.backends.elf.elfcore.ELFCore:
        elfcore_object = o
        break
if elfcore_object is None:
    raise RuntimeError("no ELFCore object found in the loader")

# Copy the register values from the elfcore_object into the sim state,
# realizing that not all of the registers will be supported
# (particularly some segment registers)
for regval in elfcore_object.initial_register_values():
    try:
        setattr(ss.regs, regval[0], regval[1])
    except Exception:
        warnings.warn("could not set register %s" % regval[0])

# Get a simulation manager
simgr = ap.factory.simgr(ss)
Now, I was able to run forward from here using the state defined by the core dump as my starting point...
for ins in ap.factory.block(simgr.active[0].addr).capstone.insns:
    print(ins)
simgr.step()
# ...repeat
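Wrapped into a loop, the repeat looks like this (same names as above, nothing new assumed):

# Step a handful of basic blocks, printing the disassembly each time
for _ in range(10):
    if not simgr.active:
        break
    for ins in ap.factory.block(simgr.active[0].addr).capstone.insns:
        print(ins)
    simgr.step()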
My problem is taking the mean of all features from different frames in one sample .wav file. I am trying cFunctionals in the "chroma_fft.conf" file, which belongs to the latest openEAR framework. For the best explanation, I am writing out the essential part of what I put in "chroma_fft.conf" below:
[componentInstances:cComponentManager]
instance[functL1].type = cFunctionals
[functL1:cFunctional]
reader.dmLevel = chroma
writer.dmLevel = func
frameMode = full
frameSize=0
frameStep=0
functionalsEnabled = Means
Means.amean = 1
[csvSink:cCsvSink]
reader.dmLevel = func
..NOT-IMPORTANT......
..NOT-IMPORTANT......
However, when I run it from the command prompt on Windows, I get this error:
"(ERROR) [1] in configManager : base instance of field 'functL1.reader.dmInstance' not found in configmanager!"
Very similar code runs successfully from "emo_large.conf", but this one gives the error. If anybody knows how to use the openSMILE audio feature extractor, could you advise why this errors out and how to use "cFunctionals" properly to take the mean, variance, moments, etc. of large feature sets?
Thanks!
In this case you have a typo in
[functL1:cFunctional]
which should be
[functL1:cFunctionals]
I admit the error message
"(ERROR) [1] in configManager : base instance of field 'functL1.reader.dmInstance' not found in configmanager!"
is not intuitive, but it refers to the fact that openSMILE expects a configuration section functL1 of type cFunctionals in the config in order to read the mandatory (sub-)field functL1.reader.dmInstance, which it then cannot find, because the section (due to the typo) is not defined.
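With that one-character fix, the functionals section from your question would read:

[componentInstances:cComponentManager]
instance[functL1].type = cFunctionals

[functL1:cFunctionals]
reader.dmLevel = chroma
writer.dmLevel = func
frameMode = full
frameSize = 0
frameStep = 0
functionalsEnabled = Means
Means.amean = 1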
Cheers,
Florian
I'm trying to decode H.264 video in hardware with the Stagefright library.
I have used an example from here. I'm getting the decoded data in a MediaBuffer. For rendering MediaBuffer->data() I tried AwesomeLocalRenderer from AwesomePlayer.cpp,
but the picture on the screen is distorted.
Here is the link to the original and corrupted pictures.
I also tried this from the example:
sp<MetaData> metaData = mVideoBuffer->meta_data();
int64_t timeUs = 0;
metaData->findInt64(kKeyTime, &timeUs);
native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
err = mNativeWindow->queueBuffer(mNativeWindow.get(),
                                 mVideoBuffer->graphicBuffer().get(), -1);
But my native code crashes. I can't get the real picture; it is either corrupted or a black screen.
Thanks in advance.
If you are using a HW accelerated decoder, then the allocation on the output port of your component would have been based on a Native Window. In other words, the output buffer is basically a gralloc handle which has been passed by the Stagefright framework. (Ref: OMXCodec::allocateOutputBuffersFromNativeWindow). Hence, the MediaBuffer being returned shouldn't be interpreted as a plain YUV buffer.
In case of AwesomeLocalRenderer, the framework performs a software color conversion when mTarget->render is invoked as shown here. If you trace the code flow, you will find that the MediaBuffer content is directly interpreted as YUV buffer.
For HW accelerated codecs, you should be employing AwesomeNativeWindowRenderer. If you have any special conditions for employing AwesomeLocalRenderer, please do highlight the same. I can refine this response appropriately.
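For reference, a minimal sketch of the switch (modeled on how AwesomePlayer.cpp wires it up in AOSP; the exact constructor and method signatures vary across Android versions, so treat this as an assumption rather than a drop-in):

// Hedged sketch: hand the gralloc-backed MediaBuffer to the native window
// renderer instead of doing a software YUV copy and conversion.
sp<AwesomeRenderer> renderer =
    new AwesomeNativeWindowRenderer(mNativeWindow, 0 /* rotationDegrees */);
renderer->render(mVideoBuffer);   // queues the underlying graphicBuffer
mVideoBuffer->release();          // the renderer no longer needs it
mVideoBuffer = NULL;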
P.S.: For debugging purposes, you could also refer to this question, which captures the methods to dump the YUV data and analyze it.
I am trying to use the following function:
PCHAR pVID_PID="vid_04D8&pid_fc5f";
DWORD n= MPUSBGetDeviceCount(pVID_PID);
It is imported from mpusbapi.h, where it is defined as such:
DWORD (*MPUSBGetDeviceCount)(PCHAR pVID_PID);
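(Since the header only declares a function pointer, I bind it to the DLL at runtime first; this sketch follows Microchip's sample code as I understand it, so the exported symbol name is an assumption:)

// Assumed binding step, per Microchip's MPUSBAPI examples; the export
// name "_MPUSBGetDeviceCount" may differ in your DLL version.
HINSTANCE lib = LoadLibraryA("mpusbapi.dll");
if (lib != NULL) {
    MPUSBGetDeviceCount =
        (DWORD (*)(PCHAR)) GetProcAddress(lib, "_MPUSBGetDeviceCount");
}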
I was wondering where I can find my USB device's VID & PID information. The one I am using doesn't seem to be working, because cout << n outputs 0! Thanks in advance for any help.
You will normally find them in the driver '.inf' files associated with the device you are using.
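For example (illustrative lines only, not from your actual driver), a Microchip-style .inf typically lists the VID/PID in its device section:

; Illustrative .inf excerpt -- section and device names will differ
[DeviceList]
%DESCRIPTION%=DriverInstall, USB\VID_04D8&PID_FC5F

You can also read them directly in Windows Device Manager: open the device's Properties, go to the Details tab, and select the "Hardware Ids" property.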