How to get the angle value using the X-CUBE-MEMSMIC1 expansion pack? - audio

I am working on a project with 4 MEMS microphones and I want to get the angle of a sound (sound source localization) using the X-CUBE-MEMSMIC1 audio expansion pack, but I don't know how to get it.
Anyone who has experience with MEMS technology and sound source localization, please help me.
I have some of the functions here; please tell me how to use them in my main():
uint32_t AcousticSL_Process(int32_t * Estimated_Angle, AcousticSL_Handler_t * pHandler); //taken from AcousticSL.c
static float32_t GCC_GetAngle(libSoundSourceLoc_Handler_Internal * SLocInternal, int32_t * out_angles); //taken from libSoundSourceLoc.c
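I can't test your exact board, but here is a minimal sketch of how ST's AcousticSL middleware is usually driven, based on the pattern in the pack's sample applications. The handler fields, the AcousticSL_getMemorySize/AcousticSL_Init/AcousticSL_Data_Input calls and the ACOUSTIC_SL_NO_AUDIO_DETECTED macro are written from memory of those samples, so treat every name here as an assumption and check it against the headers in your version of the pack:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "acoustic_sl.h"

static AcousticSL_Handler_t slHandler;
static int32_t estimatedAngle;

/* One-time setup, e.g. at the top of main() before audio capture starts. */
static void SoundSourceLoc_Init(void)
{
    slHandler.channel_number     = 4;      /* four MEMS microphones             */
    slHandler.M12_distance       = 150;    /* mic pair spacing, library units   */
    slHandler.M34_distance       = 150;
    slHandler.sampling_frequency = 16000;  /* Hz, must match your PDM/PCM path  */
    slHandler.samples_to_process = 512;
    AcousticSL_getMemorySize(&slHandler);  /* fills internal_memory_size        */
    slHandler.pInternalMemory = malloc(slHandler.internal_memory_size);
    AcousticSL_Init(&slHandler);
}

/* Call from the audio-capture callback with one PCM buffer per microphone. */
void AudioCallback(int16_t *m1, int16_t *m2, int16_t *m3, int16_t *m4)
{
    /* Data_Input signals when enough samples are buffered for one estimate. */
    if (AcousticSL_Data_Input(m1, m2, m3, m4, &slHandler))
    {
        AcousticSL_Process(&estimatedAngle, &slHandler);
        if (estimatedAngle != ACOUSTIC_SL_NO_AUDIO_DETECTED)
            printf("estimated angle: %ld degrees\n", (long)estimatedAngle);
    }
}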

Related

Android Java get wav file frames

I need to get waveform data from a wav file, but my code returns the wrong waveform (I compared my results with the waveform shown in FL Studio).
This is my code:
path = "/storage/emulated/0/FLM User Files/My Samples/808 (16).wav";
waveb = FileUtil.readFile(path);
waveb = waveb.substring((int)(waveb.indexOf("data") + 4), (int)(waveb.length()));
byte[] b = waveb.getBytes();
for (int i = 0; i < (int)(b.length / 4); i++) {
    map = new HashMap<>();
    map.put("value", String.valueOf((long)((b[i*4] & 0xFF) + ((b[i*4+1] & 0xFF) << 8))));
    map.put("byte", String.valueOf((long)(b[i*4])));
    l.add(map);
}
listview1.setAdapter(new Listview1Adapter(l));
((BaseAdapter)listview1.getAdapter()).notifyDataSetChanged();
My results: (screenshot omitted)
FL Studio Mobile results: (screenshot omitted)
I'm not sure I can help, given what I know off of the top of my head, but perhaps this will trigger some ideas in your search for a solution.
It looks to me like you are assuming the sound file is 16-bit stereo, little-endian, and that you are only attempting to inspect one track of the stereo frame. Can you confirm this?
There's at least one way this plan could go awry: the .wav header may be an odd number of bytes in length, and you might not be properly parsing frame boundaries as a result. As an experiment, maybe try a different offset when you index the b[] array, for example b[i*4 + 1] and b[i*4 + 2] instead of b[i*4] and b[i*4 + 1]. This won't solve the general problem of parsing .wav headers, but it could at least get you closer to understanding the situation.
It sure looks like Java's AudioInputStream is not available on Android, and every search I've run asking whether there is an Android equivalent turns up unanswered.
I've used AudioTrack for the playback of raw PCM, but I don't know an Android equivalent for reading wav files. The AudioRecord class and read() methods look interesting as the read methods store PCM data in a short array, but I've never used them, and they seem to be hard-coded to the microphone for input.
There used to be a Google Group: andraudio@googlegroups.com. IDK if it is still around. I used to go there and occasionally ask about things.
Maybe there is code you can use from Oboe or libGDX? The latter makes use of OpenAL and is for cross-platform development, with Android as one of the target platforms. I have not looked into either for this question.
If you do find the answer, it would be great to post it as a solution. This seems to be a matter that many have tried to solve and given up on.
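To make the 16-bit little-endian idea above concrete, here is a minimal sketch that reads the file as raw bytes instead of going through a String (a String round-trip can silently corrupt binary data, which may be part of the problem) and extracts the left channel of each stereo frame. The 44-byte header offset is an assumption about a canonical wav file; a robust parser would walk the chunk list instead:
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

class WavUtil {
    // Sketch: assumes 16-bit stereo little-endian PCM after a 44-byte header.
    static short[] readLeftChannel(String path) throws IOException {
        File f = new File(path);
        byte[] bytes = new byte[(int) f.length()];
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            in.readFully(bytes);                 // raw bytes, no String round-trip
        }
        int dataStart = 44;                      // canonical header size (assumption)
        int frameCount = (bytes.length - dataStart) / 4; // 4 bytes per stereo frame
        short[] left = new short[frameCount];
        for (int i = 0; i < frameCount; i++) {
            int lo = bytes[dataStart + i * 4] & 0xFF;  // low byte, unsigned
            int hi = bytes[dataStart + i * 4 + 1];     // high byte keeps its sign
            left[i] = (short) ((hi << 8) | lo);
        }
        return left;
    }
}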

How to convert GIS DMS coordinates

I know this is a very broad subject; still, I would like to convert DMS GIS coordinates, for example:
33° 0' 10'' , 33° 40' 30''
into EPSG:3857 format, i.e.:
3689865.02422557637, 3212878.5986975324
(this is not the calculated conversion, just an example of the formats).
I know there are calculations/conversions in map suppliers' tools (ESRI, etc.). I'm looking for either of these, if at all possible:
a Node.js module (proj4js? I looked into it but couldn't find a way to do this);
an ASP.NET Core framework feature or NuGet package?
Yeah, proj4js can do this. First convert your DMS coordinates into decimal degrees, then tell proj4js to convert from WGS84 to EPSG:3857.
Happily, proj4js ships with this conversion built in, so you don't have to look up the datum strings online.
const proj4 = require("proj4");
// TODO: that's not the correct conversion of the original DMS to decimal degrees :)
console.log(proj4("WGS84", "EPSG:3857", [33.01, 33.4]));
outputs
[ 3674656.3910859604, 3948518.4270993923 ]
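For the DMS-to-decimal step that the TODO above glosses over, a small helper like the following would complete the pipeline. It's a sketch: it leaves out the sign handling you'd need for southern/western hemispheres:
const proj4 = require("proj4");

// decimal degrees = degrees + minutes/60 + seconds/3600
function dmsToDecimal(degrees, minutes, seconds) {
  return degrees + minutes / 60 + seconds / 3600;
}

// 33° 0' 10'' , 33° 40' 30''  -- proj4 expects [lon, lat]
const lon = dmsToDecimal(33, 0, 10);   // 33.00277...
const lat = dmsToDecimal(33, 40, 30);  // 33.675
console.log(proj4("WGS84", "EPSG:3857", [lon, lat]));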

Multiple Screens with Qt

I want to have a single Qt application showing two windows on different display outputs (screens) on my Ubuntu 14.04 computer. Does someone know how to do that?
The documentation of Qt for Embedded Linux is all I could find so far, but it did not really help me.
Edit:
Based on your comments, I've done this, but it doesn't work as it should:
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    QQuickView view1(QUrl(QStringLiteral("qrc:/Screen1.qml")));
    qDebug() << app.screens().length();
    QScreen* screen1 = app.screens().at(0);
    QScreen* screen2 = app.screens().at(1);
    view1.setGeometry(0, 0, 200, 200);
    view1.setScreen(screen1);
    view1.show();
    QQuickView view2(QUrl(QStringLiteral("qrc:/Screen2.qml")));
    view2.setGeometry(0, 0, 200, 200);
    view2.setScreen(screen2);
    view2.show();
    return app.exec();
}
The debug output is: 2
This code puts both views on the same display output, although the qDebug output gives the correct number of display outputs with correct names.
Your mistake is the geometry: in these two lines of code, you place both windows at the same position:
view1.setGeometry(0,0,200,200);
view2.setGeometry(0,0,200,200);
Instead of this, set the position explicitly (I'm not sure whether you also need the size):
view1.setGeometry(screen1->geometry().x(), screen1->geometry().y(), 200, 200);
view2.setGeometry(screen2->geometry().x(), screen2->geometry().y(), 200, 200);
To change only the position, rather than both the position and the size, you can use the function move().
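For instance (written from memory like the rest of this answer, so double-check the calls):
view1.move(screen1->geometry().topLeft());
view2.move(screen2->geometry().topLeft());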
P.S. There may be some small typos as I wrote this code by memory, but the main idea should be clear for you.
I suggest you take a look at this question and this answer to another question. Also refer to the documentation of QDesktopWidget. Hope that helps!

How to get moving object's mask using OpenCV BackgroundSubtractorMOG2

I want to mask out the moving objects in a video.
I found that OpenCV has some built-in background subtractors that could save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't: after invoking the method, fgmask contains only the contour of the mask instead. That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
int main(int argc, char *argv[])
{
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false); // history, varThreshold, no shadow detection
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", CV_WINDOW_AUTOSIZE);
    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);          // -1 = automatic learning rate
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video: (screenshot omitted)
p.s. My working environment is OpenCV 2.4.3 on OS X 10.8 and Xcode 4.5.2 with the Apple LLVM 4.1 compiler.
If you want to get whole objects filled with white pixels in the foreground, then I'd first ask you about your experience so far.
My question is: with the code you posted, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement (see the sketch after this answer).
The first is the history parameter, which you configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It controls how long the motion history of an object is kept in the foreground, so if you move completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want, but it is erased gradually. So it cannot give you a 100% solution.
The second parameter is the learning rate. In the call you mentioned, bg(frame, fmask, -1), the -1 is the learning rate. You can set it between 0.0 and 1.0; the default -1 means it is chosen automatically. When you set it to 0, the background model is never updated, so you get what you want for objects that were not part of the frame at the start of the video. You could call these "foreign objects"; they stay covered with white pixels.
Explore your testing with the information above and share your experience.
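A minimal sketch of both suggestions applied to the question's loop (same OpenCV 2.4 API; the 300-frame history and the webcam index 0 are just illustrative values):
#include <opencv2/opencv.hpp>

int main()
{
    // Longer history (300 frames); learning rate 0 freezes the background
    // model after initialization, so objects that enter the scene later
    // ("foreign objects") stay filled with white pixels.
    cv::BackgroundSubtractorMOG2 bg(300, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        bg(frame, fmask, 0.0);         // learning rate 0 instead of -1
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}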

How can I save an OpenGL drawing with OpenGL?

I draw a screen with OpenGL commands, and I must save this screen in .bmp or .png format, but I can't manage it. I am using glReadPixels, but I don't know how to continue. How can I save the drawing in C++ with OpenGL?
Here it comes! You must include WinGDI.h (on Windows, including the GL headers usually pulls it in via windows.h).
#include <windows.h>   // BITMAPFILEHEADER / BITMAPINFOHEADER (WinGDI)
#include <GL/gl.h>
#include <cstdio>
#include <cstdlib>

void SaveAsBMP(const char *fileName)
{
    FILE *file;
    unsigned long imageSize;
    GLbyte *data = NULL;
    GLint viewPort[4];
    GLenum lastBuffer;
    BITMAPFILEHEADER bmfh;
    BITMAPINFOHEADER bmih;
    bmfh.bfType = 0x4D42;                  // "BM" in little-endian order
    bmfh.bfReserved1 = 0;
    bmfh.bfReserved2 = 0;
    bmfh.bfOffBits = 54;                   // 14-byte file header + 40-byte info header
    glGetIntegerv(GL_VIEWPORT, viewPort);
    // BMP rows are padded to a multiple of 4 bytes; the GL_PACK_ALIGNMENT of 4
    // below makes glReadPixels produce exactly the same row stride.
    unsigned long rowSize = (viewPort[2] * 3 + 3) & ~3ul;
    imageSize = rowSize * viewPort[3];
    bmfh.bfSize = imageSize + sizeof(bmfh) + sizeof(bmih);
    data = (GLbyte*)malloc(imageSize);
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glGetIntegerv(GL_READ_BUFFER, (GLint*)&lastBuffer);
    glReadBuffer(GL_FRONT);
    // BMP stores pixels bottom-up in BGR order, which GL_BGR matches directly.
    glReadPixels(0, 0, viewPort[2], viewPort[3], GL_BGR, GL_UNSIGNED_BYTE, data);
    glReadBuffer(lastBuffer);
    file = fopen(fileName, "wb");
    bmih.biSize = 40;
    bmih.biWidth = viewPort[2];
    bmih.biHeight = viewPort[3];
    bmih.biPlanes = 1;
    bmih.biBitCount = 24;
    bmih.biCompression = 0;                // BI_RGB, uncompressed
    bmih.biSizeImage = imageSize;
    bmih.biXPelsPerMeter = 45089;          // see the follow-up note below
    bmih.biYPelsPerMeter = 45089;
    bmih.biClrUsed = 0;
    bmih.biClrImportant = 0;
    fwrite(&bmfh, sizeof(bmfh), 1, file);
    fwrite(&bmih, sizeof(bmih), 1, file);
    fwrite(data, imageSize, 1, file);
    free(data);
    fclose(file);
}
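Usage is straightforward: call it once your frame has been presented, e.g. SaveAsBMP("screenshot.bmp"); right after swapping buffers, since the function reads from GL_FRONT.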
Unless you're feeling particularly ambitious (or perhaps masochistic) you probably want to use a library like DevIL that already supports this. The current version can load and/or save in both PNG and BMP formats, along with a few dozen others.
Compared to something like IJG, it is oriented much more heavily toward working with OpenGL or DirectX (e.g., it can load a file fairly directly into an OpenGL texture, or vice versa).
I know you're asking for raster formats, but an indirect way would be to first output vector graphics through gl2ps (http://www.geuz.org/gl2ps/). Examples of usage are provided with the package and on the site (http://www.geuz.org/gl2ps/#tth_sEc3).
Then, the vector output can be converted to the format of your choice using another tool (Inkscape, Image/GraphicsMagick, etc.) or library. An added benefit is you can convert to bitmaps of any resolution in the future.
One thing needs to be fixed:
bmih.biXPelsPerMeter = bmih.biYPelsPerMeter = 0;
Otherwise, some picture editors cannot open the file correctly.
