I need to get waveform data from a .wav file, but my code returns the wrong waveform (I compared my results with the waveform shown in FL Studio).
This is my code:
path = "/storage/emulated/0/FLM User
Files/My Samples/808 (16).wav";
waveb = FileUtil.readFile(path);
waveb = waveb.substring((int) (waveb.indexOf("data") + 4), (int)(waveb.length()));
byte[] b = waveb.getBytes();
for(int i= 0; i < (int)(b.length/4); i++) {
map = new HashMap<>();
map.put("value", String.valueOf((long)((b[i*4] & 0xFF) +
((b[i*4+1] & 0xFF) << 8))));
map.put("byte", String.valueOf((long)(b[i*4])));
l.add(map);
}
listview1.setAdapter(new
Listview1Adapter(l));
( (BaseAdapter)listview1.getAdapter()).notifyDataSetChanged();
My results: (screenshot omitted)
FL Studio Mobile results: (screenshot omitted)
I'm not sure I can help, given what I know off the top of my head, but perhaps this will trigger some ideas in your search for a solution.
It looks to me like you are assuming the sound file is 16-bit stereo, little-endian, and that you are only attempting to inspect one track of the stereo frame. Can you confirm this?
There's at least one way this plan could go awry: the .wav header may be an odd number of bytes in length, and you might not be properly parsing frame boundaries as a result. As an experiment, maybe try a different offset when you index the b[] array, for example b[i*4 + 1] and b[i*4 + 2] instead of b[i*4] and b[i*4 + 1]. This won't solve the general problem of parsing .wav headers, but it could at least get you closer to understanding the situation.
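For the general problem, here is a minimal sketch of walking the RIFF chunks to find where "data" really begins. It is C++ rather than Java, but the byte-level logic translates directly; the helper name is mine. Note also that a chunk header is 8 bytes (a 4-byte id plus a 4-byte little-endian size), so the samples start 8 bytes after "data", not 4:

#include <cstdint>
#include <cstdio>
#include <cstring>

// Walk the RIFF chunks to find the offset and size of the "data" chunk,
// instead of string-searching for "data" (those four bytes can also occur
// inside other chunks or inside the sample data itself).
long findDataChunk(FILE *f, uint32_t *dataSize)
{
    unsigned char hdr[12];                                     // "RIFF" <size> "WAVE"
    if (fread(hdr, 1, 12, f) != 12) return -1;
    if (memcmp(hdr, "RIFF", 4) != 0 || memcmp(hdr + 8, "WAVE", 4) != 0) return -1;

    unsigned char chunk[8];                                    // 4-byte id + 4-byte size
    while (fread(chunk, 1, 8, f) == 8) {
        uint32_t size = chunk[4] | (chunk[5] << 8) |
                        (chunk[6] << 16) | ((uint32_t)chunk[7] << 24);
        if (memcmp(chunk, "data", 4) == 0) {
            *dataSize = size;
            return ftell(f);                                   // first sample byte
        }
        fseek(f, (long)size + (size & 1), SEEK_CUR);           // chunks are word-aligned
    }
    return -1;
}

// A 16-bit little-endian sample is then (int16_t)(b[i] | (b[i + 1] << 8));
// for 16-bit stereo, the left channel sits at offset 0 and the right at
// offset 2 within each 4-byte frame.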
It sure looks like Java's AudioInputStream is not available on Android, and all my searches asking whether there is an Android equivalent turn up unanswered.
I've used AudioTrack for the playback of raw PCM, but I don't know an Android equivalent for reading wav files. The AudioRecord class and read() methods look interesting as the read methods store PCM data in a short array, but I've never used them, and they seem to be hard-coded to the microphone for input.
There used to be a Google Group: andraudio@googlegroups.com. I don't know if it is still around. I used to go there and occasionally ask about things.
Maybe there is code you can use from Oboe or libGDX? The latter makes use of OpenAL and is for cross-platform development, with Android as one of the target platforms. I have not looked into either for this question.
If you do find the answer, it would be great to post it as a solution. This seems to be a matter that many have tried to solve and given up on.
I'm developing a Windows 8.1 Store App. I have a CommandBar control with a couple of AppBarButtons inside. Using the standard icons is easy, I just set the Icon property to the appropriate string like so:
<AppBarButton Icon="Download" Label="Download Files"/>
I'd like to use a couple of custom icons from the very nice free collection Modern UI Icons. Ideally, I'd like to be able to set the Icon property in much the same way:
<AppBarButton Icon="transit.distance.to" Label="Distance to destination"/>
This would refer to this icon: PNG / XAML
Is this possible?
If not, what are the alternatives?
Tim Heuer proposes using a font file, although at present the font files available here only cover a subset of the icons, and this code is also quite unreadable:
<FontIcon FontFamily="ms-appx:///modernuiicons.ttf#Modern-UI-Icons---Social" Margin="0,2,0,0" Glyph="" FontSize="37.333" />
Would you believe that shows a Twitter icon?!
Tim Heuer also proposes using vector data, and one of the commenters explains how the vector data can be rolled into a style. I could do that, but then I would have to copy and paste the path data for each icon I want to include?
Should I be using the PNG files, as explained in this question? That looks pretty messy as well.
What a nightmare!
I'm not sure what the nightmare part is: you want to use a custom icon that isn't among the 200+ supplied defaults. You have options:
1. FontIcon. Supply your own font. You note that you don't like that the code feels unreadable. Unicode ranges are universally used for symbol fonts, and I agree that Unicode isn't human-readable, but a simple code comment would help ;-) Fonts give you the most ease and flexibility, because they are also vectors.
2. PathIcon. You convert your image into vector geometries that XAML can render. This is the second-best option, but it also requires some fine-tuning of the vectors to get right. For people not familiar with working with geometries, this can be annoying at first. Blend and Inkscape are helpful tools here.
3. BitmapIcon. This would allow you to use your PNG, but you must now supply multiple versions of it for different scales and states. This is my least favorite option because it requires the most work, though for some it may be the simplest. The problem you will hit is that BitmapIcon has an issue with non-rectangular shapes (which your icon appears to be), so it won't have the fidelity you seek, due to a bug in rasterizing.
4. Contact the Modern UI Icons author and see if he can add it to the font file, so you can use option #1 :-)
Maybe this is what you're looking for:
<AppBarButton Label="Transit">
<AppBarButton.Icon>
<PathIcon Data="F1 M 3.912,17.38C 4.89067,17.38 5.688,18.2653 5.688,19.3586C 5.688,20.448 4.89067,21.3333 3.912,21.3333C 2.92667,21.3333 2.136,20.448 2.136,19.3586C 2.136,18.2653 2.92667,17.38 3.912,17.38 Z M 16,17.38C 16.984,17.38 17.776,18.2653 17.776,19.3586C 17.776,20.448 16.984,21.3333 16,21.3333C 15.016,21.3333 14.224,20.448 14.224,19.3586C 14.224,18.2653 15.016,17.38 16,17.38 Z M 21.3333,18.9626L 18.464,18.9626C 18.292,17.62 17.2547,16.5933 16,16.5933C 14.7453,16.5933 13.708,17.62 13.536,18.9626L 6.37467,18.9626C 6.20267,17.62 5.16667,16.5933 3.912,16.5933C 2.656,16.5933 1.62,17.62 1.448,18.9626L 0,18.9626L 0,10.2706C 0,9.396 0.636,8.69196 1.42133,8.69196L 19.5573,8.69196C 20.3387,8.69196 20.9787,9.396 20.9787,10.2706M 20.4427,10.2706L 19.1973,10.2706L 19.1973,15.8013L 20.62,15.8013M 17.776,13.432L 17.776,10.2706L 14.224,10.2706L 14.224,13.432M 13.5107,10.2706L 9.95333,10.2706L 9.95333,13.432L 13.5107,13.432M 9.24533,10.2706L 5.688,10.2706L 5.688,13.432L 9.24533,13.432M 4.97867,10.2706L 1.42133,10.2706L 1.42133,13.432L 4.97867,13.432M 14.5787,2.36932L 12.4427,0L 15.2867,0L 17.776,2.45862L 17.776,0L 19.1973,0L 19.1973,6.31732L 17.776,6.31732L 17.776,3.85864L 15.2867,6.31732L 12.4427,6.31732L 14.5787,3.948L 7.73467,3.948C 7.41733,5.31195 6.30267,6.31732 4.97867,6.31732C 3.40667,6.31732 2.136,4.90533 2.136,3.16132C 2.136,1.41064 3.40667,0 4.97867,0C 6.30267,0 7.41733,1.00531 7.73467,2.36932L 14.5787,2.36932 Z " HorizontalAlignment="Center" VerticalAlignment="Center"/>
</AppBarButton.Icon>
</AppBarButton>
Hope this helps!
I'm trying to decode H.264 video using hardware with the Stagefright library.
I have used an example from here. I'm getting the decoded data in a MediaBuffer. For rendering MediaBuffer->data() I tried AwesomeLocalRenderer from AwesomePlayer.cpp,
but the picture on screen is distorted.
Here is the link to the original and the corrupted picture.
I also tried this from the example:
sp<MetaData> metaData = mVideoBuffer->meta_data();
int64_t timeUs = 0;
metaData->findInt64(kKeyTime, &timeUs);
native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
err = mNativeWindow->queueBuffer(mNativeWindow.get(),
                                 mVideoBuffer->graphicBuffer().get(), -1);
But my native code crashes. I can't get a real picture; it is either corrupted or a black screen.
Thanks in advance.
If you are using a HW accelerated decoder, then the allocation on the output port of your component would have been based on a Native Window. In other words, the output buffer is basically a gralloc handle which has been passed by the Stagefright framework. (Ref: OMXCodec::allocateOutputBuffersFromNativeWindow). Hence, the MediaBuffer being returned shouldn't be interpreted as a plain YUV buffer.
In case of AwesomeLocalRenderer, the framework performs a software color conversion when mTarget->render is invoked as shown here. If you trace the code flow, you will find that the MediaBuffer content is directly interpreted as YUV buffer.
For HW accelerated codecs, you should be employing AwesomeNativeWindowRenderer. If you have any special conditions that require AwesomeLocalRenderer, please do point them out, and I can refine this response accordingly.
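For illustration, here is a hedged sketch of how AwesomePlayer::initRenderer_l() makes this choice in AOSP (approximate, from memory; these internals vary between Android releases, so check the sources for your version):

// Hardware OMX decoders output gralloc-backed buffers, so they go
// through the native window; software decoders emit real YUV data
// and go through the color-converting local renderer.
if (!strncmp(component, "OMX.", 4)) {
    mVideoRenderer = new AwesomeNativeWindowRenderer(mNativeWindow, rotationDegrees);
} else {
    mVideoRenderer = new AwesomeLocalRenderer(mNativeWindow, meta);
}

// Later, once per decoded frame:
mVideoRenderer->render(mVideoBuffer);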
P.S.: For debugging purposes, you could also refer to this question, which captures methods for dumping the YUV data and analyzing it.
I want to mask out the moving objects in a video.
I found that OpenCV has some built-in BackgroundSubtractors which could potentially save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't. After invoking the above method, the fgmask variable contains the "contour of the mask" instead. That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
int main(int argc, char *argv[])
{
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cvNamedWindow("mask", CV_WINDOW_AUTOSIZE);
    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, -1);
        // wrap the Mat headers for the C GUI API (no data copy)
        IplImage _frame = frame;
        IplImage _fmask = fmask;
        cvShowImage("mask", &_fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
A snapshot of the output video is: (screenshot omitted)
P.S. My working environment is OpenCV 2.4.3 on OS X 10.8 and Xcode 4.5.2 with the Apple LLVM 4.1 compiler.
If you want the whole objects in the foreground filled with white pixels, then I would first ask about your experience so far.
My question is: with the code you mentioned above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It maintains the motion history of the object in the foreground, so if you have moved completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want. But the pixels are erased gradually, so it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you mentioned, bg(frame, fmask, -1), the -1 is your learning rate. You can set it between 0.0 and 1.0; the default is -1. When you set it to 0, the background model is never updated, so you get what you want for objects that were not part of the frame at the start of the video. You could call this kind of object a "foreign object"; foreign objects stay covered with white pixels. See the sketch below.
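To make that concrete, here is a minimal sketch assuming OpenCV 2.4; the morphological close is my addition, to fill small holes so the mask becomes solid white blobs rather than a fringe:

#include <opencv2/opencv.hpp>

int main()
{
    cv::BackgroundSubtractorMOG2 bg(300, 16.0, false);  // longer history
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(7, 7));
    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, 0);  // learning rate 0: background model is never updated
        // close small holes so moving objects become solid white blobs
        cv::morphologyEx(fmask, fmask, cv::MORPH_CLOSE, kernel);
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}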
Experiment with the information I have mentioned above and share your findings.
So I load a color .png file that was taken with an iPhone using cvLoadImage. After it's been loaded, when I immediately display it in my X11 session, the image is definitely darker than the original .png file.
I currently use this to load the image:
IplImage *img3 = cvLoadImage( "bright.png", 1);
For the second parameter I have tried all of the following:
CV_LOAD_IMAGE_UNCHANGED
CV_LOAD_IMAGE_GRAYSCALE
CV_LOAD_IMAGE_COLOR
CV_LOAD_IMAGE_ANYDEPTH
CV_LOAD_IMAGE_ANYCOLOR
but none of these has worked. Grayscale definitely made the image grayscale. But as suggested by http://www.cognotics.com/opencv/docs/1.0/ref/opencvref_highgui.htm, even using CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR to load the image as faithfully as possible resulted in a darker image being displayed.
Does anyone have any ideas on how to get the original image to display properly?
Thanks a lot in advance.
Yes, OpenCV does not apply gamma correction.

#include <math.h>

typedef double qreal;   // Qt's floating-point typedef; double on most platforms

// from: http://gegl.org/
// value: a normalized channel in the range 0.0-1.0
static inline qreal
linear_to_gamma_2_2 (qreal value)
{
    if (value > 0.0030402477)
        return 1.055 * pow (value, 1.0 / 2.4) - 0.055;
    return 12.92 * value;
}

// from: http://gegl.org/
static inline qreal
gamma_2_2_to_linear (qreal value)
{
    if (value > 0.03928)
        return pow ((value + 0.055) / 1.055, 2.4);
    return value / 12.92;
}
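If the missing gamma step is indeed what you are seeing, here is a minimal sketch of applying the curve above to an 8-bit image with a lookup table; it reuses linear_to_gamma_2_2 from the snippet, and the rest is my assumption of how you would wire it up:

#include <opencv2/opencv.hpp>

// Build a 256-entry lookup table from the gamma curve once, then remap
// every 8-bit channel of the loaded image in one call.
cv::Mat applyGamma22(const cv::Mat &src)
{
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i) {
        double linear = i / 255.0;
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(255.0 * linear_to_gamma_2_2(linear));
    }
    cv::Mat dst;
    cv::LUT(src, lut, dst);   // applies per channel on 8-bit images
    return dst;
}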
It only happens when you load it in OpenCV? Opening with any other viewer doesn't show a difference?
I can't confirm this without a few tests but I believe the iPhone display gamma is 1.8 (source: http://www.colorwiki.com/wiki/Color_on_iPhone#The_iPhone.27s_Display). Your X11 monitor probably is adjusted for 2.2 (like the rest of the world).
If this theory holds, yes, images are going to appear darker on X11 than on the iPhone. You may change your monitor calibration or do some image processing to account for the difference.
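As a hedged sketch of that image-processing route, assuming both displays use simple power-law gammas (real sRGB curves have a linear toe, so this is only an approximation), you would decode with 1.8 and re-encode with 2.2:

#include <cmath>

// Re-encode a channel calibrated for a gamma-1.8 display so that it
// looks the same on a gamma-2.2 display. value is normalized to 0.0-1.0.
static inline double gamma_1_8_to_2_2(double value)
{
    return pow(value, 1.8 / 2.2);   // pow(pow(value, 1.8), 1.0 / 2.2)
}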
Edit:
I believe OpenCV really does not apply gamma correction. My reference to this is here:
http://permalink.gmane.org/gmane.comp.lib.opencv.devel/837
You might want to implement it yourself or "correct" it with ImageMagick. This page instructs you on how to do so:
http://www.4p8.com/eric.brasseur/gamma.html
I usually load an image with:
cvLoadImage("file.png", CV_LOAD_IMAGE_UNCHANGED);
One interesting test you could do to detect whether OpenCV is really messing with the image data is simply to create another image with cvCreateImage(), copy the data into the newly created image, and save it to another file with cvSaveImage(). A sketch follows below.
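Here is a minimal sketch of that test (file names are placeholders; assuming OpenCV 2.4's C API):

#include <opencv2/opencv.hpp>

int main()
{
    // Round-trip the pixels through a fresh buffer and back to disk.
    IplImage *img = cvLoadImage("bright.png", CV_LOAD_IMAGE_UNCHANGED);
    IplImage *copy = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
    cvCopy(img, copy, NULL);
    cvSaveImage("bright_copy.png", copy);
    // If bright_copy.png matches bright.png pixel for pixel, the loaded
    // data is intact and the darkening happens at display time.
    cvReleaseImage(&img);
    cvReleaseImage(&copy);
    return 0;
}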
Maybe it's just a display error. Of course, I would also suggest updating to the most recent version of OpenCV.