I have created an 8-bit YUV player for the packed YUY2 format using the SDL library. Here is part of the code:
handle->texture = SDL_CreateTexture(handle->renderer, SDL_PIXELFORMAT_YUY2, SDL_TEXTUREACCESS_STREAMING, width, height);
SDL_UpdateTexture(handle->texture, NULL, pDisplay->Ydata, handle->width * 2);
In that, while creating the texture, the pixel format is SDL_PIXELFORMAT_YUY2 and the texture is updated with a pitch of twice the width, so it plays fine.
But when it comes to 10-bit YUV, the video plays distorted and greenish.
What I have tried is changing the pitch to (handle->width * 2 * 2), but with no success.
Someone also suggested converting the 10-bit values to 8-bit, but I don't want to do that.
Please help me play the 10-bit packed YUY2 YUV format.
Does SDL support rendering pixel depths greater than 8 bits?
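For reference, the 10-bit-to-8-bit down-conversion that was suggested (and that I would prefer to avoid) would look roughly like this, assuming each 10-bit sample is stored in the low bits of a little-endian 16-bit word:
```
#include <stdint.h>
#include <stddef.h>

/* Repack 10-bit YUY2 (one 16-bit word per sample, 10 significant bits) into
 * 8-bit YUY2 by dropping the two least significant bits. The output buffer can
 * then be passed to SDL_UpdateTexture with a pitch of width * 2, exactly like
 * the 8-bit path above. */
void repack_10bit_to_8bit(const uint16_t *src, uint8_t *dst, int width, int height)
{
    size_t samples = (size_t)width * height * 2; /* Y plus interleaved U/V */
    for (size_t i = 0; i < samples; i++)
        dst[i] = (uint8_t)(src[i] >> 2);
}
```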
I've tried to convert an SVG file to PNG with antialiasing off in Magick++, but I wasn't successful. However, I was able to convert the SVG file to PDF with another program and then use the ImageMagick convert command to convert the PDF file to PNG.
How can I use ImageMagick to do it? The command I use for converting PDF to PNG is this:
convert +antialias -interpolate Nearest -filter point -resize 1000x1000 "img.pdf" PNG24:"filter.png"
Is there any way to use Magick++ to do that or better, convert SVG to PNG directly with antialiasing off?
Thanks in advance.
Edit:
The answer given in this post doesn't work for me, possibly because I'm using a colored SVG instead of a 1-bit alpha channel. Also, as I mentioned in my question, I'm looking for a way to do this in Magick++.
Magick++ has the Magick::Image::textAntiAlias & Magick::Image::strokeAntiAlias methods available, but they would only be useful if you're parsing the SVG and rebuilding the image one SVG element at a time (i.e. rolling your own SVG engine).
As @ccprog pointed out in the comments, once the decoder utility rasterizes the vectors the damage is done, and setting the flags would have no effect on the resulting resize.
Without seeing the SVG, I can only speculate what the problem is. I would suggest setting the document size before reading the SVG content.
For example, read the image at a smaller size, then resample up.
Magick::Image img;
img.size(Magick::Geometry(100, 100)); // Decode to a small context
img.read("input.svg");
img.interpolate(Magick::NearestInterpolatePixel);
img.filterType(Magick::PointFilter);
img.resize(Magick::Geometry(600, 600));
img.write("PNG24:output#100x100.png");
Or render at a larger size than the final image.
Magick::Image img;
img.size(Magick::Geometry(1000, 1000)); // Decode to a larger context
img.read("input.svg");
img.interpolate(Magick::NearestInterpolatePixel);
img.filterType(Magick::PointFilter);
img.resize(Magick::Geometry(600, 600));
img.write("PNG24:output#1000x1000.png");
Update from comments
For PostScript (PDF) & TrueType antialiasing, you would set Magick::Image::textAntiAlias (or Magick::Image::antiAlias if using IM6) to false. Just ensure that the density is set high enough to allow for any overhead.
Magick::Image img;
img.density(Magick::Point(300));
#if MagickLibVersion < 0x700
img.antiAlias(false);
#else
img.textAntiAlias(false);
#endif
img.interpolate(Magick::NearestInterpolatePixel);
img.filterType(Magick::PointFilter);
img.read("input.pdf");
img.resize(Magick::Geometry(1000, 1000));
img.write("PNG24:output.png");
I am trying to analyze a wav file in Python and get the RMS value from it. I am using audioop.rms to get the value. When I went to do this, I did not know what fragment and width stood for. I am new to audioop and hope somebody can explain this. I am also wondering if there is a better way to do this in Python.
Update: I have done some research and found out that fragment stands for the wav data. I still need to figure out what width means.
A fragment is just a chunk of data. Width is the size, in bytes, of each sample in that data: 8-bit data has width 1, 16-bit data has width 2, and so on.
```
import alsaaudio, audioop

# Capture mono, 16-bit little-endian audio at 8 kHz from the default ALSA device.
self.input = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NONBLOCK)
self.input.setchannels(1)
self.input.setrate(8000)
self.input.setformat(alsaaudio.PCM_FORMAT_S16_LE)
self.input.setperiodsize(300)

length, data = self.input.read()
# Width is 2 because each S16_LE sample is 2 bytes.
avg_i = audioop.avg(data, 2)
```
In the example I am setting the ALSA capture card to S16_LE (signed 16-bit little-endian), so I have to set the width to 2. The fragment is just the data captured by ALSA; in your case, the data read from the wav file is your fragment.
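For the wav-file case in your question, a minimal sketch (assuming a file called input.wav) could look like this:
```
import wave
import audioop

# Read the whole wav file as raw bytes; those bytes are the "fragment".
wav = wave.open("input.wav", "rb")
width = wav.getsampwidth()                 # bytes per sample, e.g. 2 for 16-bit audio
frames = wav.readframes(wav.getnframes())
wav.close()

# width tells audioop how to interpret the bytes in the fragment.
print("RMS:", audioop.rms(frames, width))
```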
I'm trying to convert a YUV422 image (YUV422_8_UYVY, unsigned, unpacked, 16bpp) into a JPEG using FFmpeg. This is the code I am following.
Image size: 2448x2050.
Original YUV image: I am not able to upload it as the format is YUV; the attached original was decoded with the ffmpeg command-line tool.
Reconstructed image (also 2448x2050): produced by the code above.
The reconstructed image does not look like the original.
My format is UYVY, whereas the supported format is AV_PIX_FMT_YUVJ420P,
so what should be the correct format for a UYVY input image?
pCodecCtx->pix_fmt = AV_PIX_FMT_?????
If I use pCodecCtx->pix_fmt = AV_PIX_FMT_UYVY422;
I get an error saying:
[mjpeg @ 00c0b2a0] specified pixel format uyvy422 is invalid or not supported
You say the image format is "unpacked" (??), but at the same time you call it YUV422_8_UYVY, which suggests it's packed (i.e. not planar). The output you're getting suggests that it's packed.
FFmpeg's image encoders, in general, do not support packed input. You first need to make it planar. You have two options:
convert it to planar YUV-4:2:2 (AV_PIX_FMT_YUVJ422P) and input that into the encoder;
convert it to planar YUV-4:2:0 (AV_PIX_FMT_YUVJ420P) and input that into the encoder.
The first will preserve chroma subsampling (better quality), but the second will have better downstream support (in other applications, to decode the image). To convert the image, you use libswscale. The output image from that conversion can be input into the FFmpeg encoder.
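For illustration, a rough libswscale sketch of the second option; uyvy, width and height are placeholders for your packed input buffer and image dimensions:
```
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>

/* Convert one packed UYVY422 frame into newly allocated planar YUVJ420P
 * buffers. The caller frees dstData[0] with av_freep() when done, and copies
 * dstData/dstStride into the AVFrame that is fed to the MJPEG encoder. */
static int uyvy_to_yuvj420p(const uint8_t *uyvy, int width, int height,
                            uint8_t *dstData[4], int dstStride[4])
{
    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_UYVY422,
                                            width, height, AV_PIX_FMT_YUVJ420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;

    const uint8_t *srcData[4]   = { uyvy, NULL, NULL, NULL };
    const int      srcStride[4] = { width * 2, 0, 0, 0 }; /* 2 bytes per pixel when packed */

    if (av_image_alloc(dstData, dstStride, width, height,
                       AV_PIX_FMT_YUVJ420P, 16) < 0) {
        sws_freeContext(sws);
        return -1;
    }

    sws_scale(sws, srcData, srcStride, 0, height, dstData, dstStride);
    sws_freeContext(sws);
    return 0;
}
```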
I'm working on a VC++ and OpenCV application. I load images into a pictureBox, perform some OpenCV operations on them, and assign the processed image back to the pictureBox. I assign the loaded image to an IplImage for processing. I wrote this code to load the image selected in openFileDialog1 into an IplImage, binarize it, and reassign the binarized image back to the pictureBox.
code:
const char* fileName = (const char*)(void*)Marshal::StringToHGlobalAnsi(openFileDialog1->FileName);
IplImage *img = cvLoadImage(fileName, CV_LOAD_IMAGE_COLOR);
int width = img->width;
int height = img->height;
IplImage *grayScaledImage = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
cvCvtColor(img, grayScaledImage, CV_RGB2GRAY);
cvThreshold(grayScaledImage, grayScaledImage, 128, 256, CV_THRESH_BINARY);
this->pictureBox1->Image = gcnew System::Drawing::Bitmap(
    grayScaledImage->width, grayScaledImage->height, grayScaledImage->widthStep,
    System::Drawing::Imaging::PixelFormat::Format24bppRgb,
    (System::IntPtr)grayScaledImage->imageData);
But I can't find a pixel format that displays the binary image correctly. Any help with that?
Original Image:
Converted image:
You seem to be creating an RGB image (System::Drawing::Imaging::PixelFormat::Format24bppRgb) but copying grayscale data into it; presumably the System::Drawing::Bitmap constructor doesn't do a conversion, or isn't doing it properly.
Edit: Some more explanation.
Your greyscale image is stored in memory as one byte per pixel: Y0, Y1, Y2, Y3, ..., Y639 (using Y for brightness and assuming a 640-pixel-wide image).
You have told the .NET image class that this is Format24bppRgb, which would be stored as one red, one green and one blue byte per pixel (3 bytes = 24bpp). So the class takes your image data and assumes that Y0, Y1, Y2 are the red, green, blue values for the first pixel, Y3, Y4, Y5 for the next, and so on.
This uses up 3x as many bytes as your image has, so after 1/3 of the row it starts reading the next row, and so on, which gives you the three repeated pictures.
PS: the fact that you have turned it into a binary image just means that the Y values are either 0 or 255; it doesn't change the data size or shape.
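One way around this (a sketch of one possible fix, not the only one; displayImage is a new variable) is to expand the single-channel result back to three channels before constructing the Bitmap, so the byte layout actually is 24bpp:
```
// Expand the binarized single-channel image to 3 channels so the memory layout
// matches what Format24bppRgb expects (three bytes per pixel).
IplImage *displayImage = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
cvCvtColor(grayScaledImage, displayImage, CV_GRAY2BGR);

// Note: the Bitmap shares displayImage's buffer, so keep displayImage alive
// (don't cvReleaseImage it) while the pictureBox is showing it.
this->pictureBox1->Image = gcnew System::Drawing::Bitmap(
    displayImage->width, displayImage->height, displayImage->widthStep,
    System::Drawing::Imaging::PixelFormat::Format24bppRgb,
    (System::IntPtr)displayImage->imageData);
```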
OpenCV provides functions to convert Bayer to RGB, but how do I use CV_BayerBG2BGR and the other similar conversion codes?
I used the code below, but I get an error stating that the channel number is invalid, since I am using an RGB image as originalImage. How does this conversion actually work?
void main()
{
    // Declare and load the image.
    // Assume we have a sample image *.png
    IplImage *originalImage = cvLoadImage("bayer-image.jpg", -1);
    // The image size is said to be 320x240
    IplImage *bayer2RGBImage = cvCreateImage(cvSize(100, 100), 8, 3);
    cvCvtColor(originalImage, bayer2RGBImage, CV_BayerBG2BGR);
    // Save the converted image to file.
    cvSaveImage("test-result.jpg", bayer2RGBImage);
    // Release the memory for the images that were created.
    cvReleaseImage(&originalImage);
    cvReleaseImage(&bayer2RGBImage);
}
Furthermore, I'd like to convert a common RGB image to Bayer format (let's say bilinear) too. Does OpenCV provide this function as well?
Any help would be really appreciated.
Thanks in advance.
Unfortunately, OpenCV does not provide a BGR-to-Bayer conversion; only the reverse conversion (Bayer to BGR) is available.
If you need a conversion to Bayer format, you will have to implement it yourself or use another library.
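For illustration only, here is a rough sketch of a manual BGR-to-Bayer mosaic using the same C API as your code. It assumes an 8-bit, 3-channel input and one particular 2x2 layout (B G on even rows, G R on odd rows), so double-check that it matches whichever CV_Bayer*2BGR code you plan to use for demosaicing later:
```
// Build a single-channel Bayer mosaic from a BGR image by keeping, for each
// pixel, only the channel that the chosen Bayer pattern samples there.
IplImage *bgrToBayer(IplImage *bgr)
{
    IplImage *bayer = cvCreateImage(cvGetSize(bgr), IPL_DEPTH_8U, 1);
    for (int y = 0; y < bgr->height; y++) {
        for (int x = 0; x < bgr->width; x++) {
            int channel;                                  // 0 = B, 1 = G, 2 = R
            if (y % 2 == 0)
                channel = (x % 2 == 0) ? 0 : 1;           // even row: B G B G ...
            else
                channel = (x % 2 == 0) ? 1 : 2;           // odd row:  G R G R ...
            CV_IMAGE_ELEM(bayer, uchar, y, x) =
                CV_IMAGE_ELEM(bgr, uchar, y, x * 3 + channel);
        }
    }
    return bayer;
}
```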