I am trying my hand at the DirectX 11 template in VS 2015 in VC++. I am using
a D3D11_MAPPED_SUBRESOURCE with Map and Unmap to update a texture.
Now I have a separate file in my project where I am reading pixels, and I need to upload them to this texture.
I am using a struct to hold the texture data:
struct Frames {
    int text_Width;
    int text_height;
    unsigned int text_Sz;
    unsigned char* text_Data;
};
I want to know how I can use this struct from a separate file to upload the texture data in my DirectX-based spinning cube file.
You don't mention what format the data is, which is essential to knowing how to do this, but let's assume your text_Data points to an array of R8G8B8A8 data (i.e. each pixel is 32 bits with 8 bits each of Red, Green, Blue, and Alpha, in that order from LSB to MSB). If so, it would look like:
Frames f = ...; // your structure
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = UINT(f.text_Width);
desc.Height = UINT(f.text_height);
desc.MipLevels = desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = f.text_Data;
initData.SysMemPitch = UINT( 4 * f.text_Width );
initData.SysMemSlicePitch = UINT( f.text_Sz );
Microsoft::WRL::ComPtr<ID3D11Texture2D> pTexture;
HRESULT hr = d3dDevice->CreateTexture2D( &desc, &initData, &pTexture );
if (FAILED(hr))
...
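Since you mention Map/Unmap: if you want to update the texture every frame rather than initialize it once, a rough sketch would be to create it with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE instead, then copy row by row, respecting the driver's RowPitch. Here, context is assumed to be your ID3D11DeviceContext:

// Assumes the texture was created with desc.Usage = D3D11_USAGE_DYNAMIC
// and desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE.
D3D11_MAPPED_SUBRESOURCE mapped = {};
hr = context->Map(pTexture.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (SUCCEEDED(hr))
{
    const UINT rowBytes = UINT(4 * f.text_Width); // tightly packed source rows
    for (int y = 0; y < f.text_height; ++y)
    {
        // The GPU row pitch can be larger than the source pitch, so copy per row.
        memcpy(static_cast<unsigned char*>(mapped.pData) + y * mapped.RowPitch,
               f.text_Data + y * rowBytes,
               rowBytes);
    }
    context->Unmap(pTexture.Get(), 0);
}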
Note this is covered on MSDN in the How to use Direct3D 11 topics, although the sample code style there is a little dated.
Take a look at the DirectX Tool Kit for DirectX 11 and the tutorials in particular. There's no reason to write your own loader when you can just use DDSTextureLoader or WICTextureLoader.
In FFmpeg, AVPicture is used to store image data using a data pointer and linesizes. This means all subtitles are stored in the form of pictures inside FFmpeg. Now I have a DVB subtitle and I want to dump the pictures of the subtitles stored in AVPicture into a buffer. I know these subtitle images can be dumped using for, fopen and sprintf, but I do not know how to dump the subtitle itself. I have to dump the subtitles in .ppm file format.
Can anyone help me dump the subtitle pictures from AVSubtitle into a buffer?
This process looks complex but is actually very simple.
AVSubtitle is a generic format that supports text and bitmap modes. The DVB subtitle format is, as far as I know, bitmap only, and the bitmap format can differ, e.g. 16-color or 256-color mode, known as the CLUT_DEPTH.
I believe (in current FFmpeg) the bitmaps are stored in the AVSubtitleRect structure, which is a member of AVSubtitle.
I assume you have valid AVSubtitle packet(s), and if I understand correctly you can do the following and it should work:
1) Check pkt->rects[0]->type. The pkt here is a valid AVSubtitle packet. It must be of type SUBTITLE_BITMAP.
2) If so, the bitmap width and height can be read from pkt->rects[0]->w and pkt->rects[0]->h.
3) The bitmap data itself will be in pkt->rects[0]->data[0].
4) The CLUT_DEPTH can be read from pkt->rects[0]->nb_colors.
5) And the CLUT itself (the color table) will be in pkt->rects[0]->data[1].
With these data, you can construct a valid .bmp file that is viewable on a Windows or Linux desktop, but I leave that part to you.
PPM Info
First check this info about PPM format:
https://www.cs.swarthmore.edu/~soni/cs35/f13/Labs/extras/01/ppm_info.html
What I understand is that the PPM format uses RGB values (24 bits / 3 bytes per pixel). It looks to me like all you have to do is construct a header according to the data obtained from the AVSubtitle packet above, and write a conversion function from dvbsub's indexed color buffer to RGB. I'm pretty sure there is some ready-to-use code out there, but I'll explain anyway.
The picture frame data dvbsub uses is linear and every pixel is 1 byte (even in 16-color mode). This byte value is actually an index into the RGB(A) values stored in the Color Look-Up Table (CLUT); in 16-color mode there are 16 entries of 4 bytes each, where the first 3 are the R, G, B values and the 4th is alpha (the transparency value; if PPM doesn't support this, ignore it). A sketch of the conversion follows below.
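To make that concrete, here is a minimal, untested sketch of such a conversion. It assumes, as described above, that the indexed bitmap is in rects[0]->data[0], the CLUT is in rects[0]->data[1] with 4 bytes per entry in R, G, B, A order, and that each row is w bytes with no padding; writePPM and its parameter names are hypothetical:

#include <cstdio>
#include <cstdint>

// Hypothetical helper: expand an indexed DVB subtitle bitmap through its CLUT
// and write it as a binary (P6) PPM. PPM has no alpha, so the 4th CLUT byte
// is simply dropped.
void writePPM(const char* path, int w, int h,
              const uint8_t* indexed,  // rects[0]->data[0]
              const uint8_t* clut)     // rects[0]->data[1], 4 bytes per entry
{
    FILE* f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", w, h);  // PPM header: magic, size, max value
    for (int i = 0; i < w * h; ++i)
    {
        const uint8_t* rgba = clut + 4 * indexed[i];  // CLUT lookup
        fputc(rgba[0], f);  // R
        fputc(rgba[1], f);  // G
        fputc(rgba[2], f);  // B (rgba[3] is alpha, ignored)
    }
    fclose(f);
}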
I'm not sure if the decoded subtitle still has encoded YUV values; I remember it should be in plain RGBA format.
The encode_dvb_subtitles function in FFmpeg shows how this encoding is done, if you need it.
https://github.com/FFmpeg/FFmpeg/blob/a0ac49e38ee1d1011c394d7be67d0f08b2281526/libavcodec/dvbsub.c
Hope that helps.
As this is where I ended up when searching for answers on how to create a thumbnail of an AVSubtitle, here is what I ended up using in my test application. The code is optimized for readability only. I got some help from this question, which had some sample code.
Using avcodec_decode_subtitle2() I get an AVSubtitle structure. This contains a number of rectangles. First I iterate over the rectangles to find the max of x + w and y + h to determine the width and height of the target frame, roughly as sketched below.
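For reference, the sizing and allocation step looks roughly like this (sub is the decoded AVSubtitle):

// Determine the target frame size from the subtitle rectangles.
int width = 0, height = 0;
for (unsigned int i = 0; i < sub.num_rects; ++i) {
    if (sub.rects[i]->x + sub.rects[i]->w > width)
        width = sub.rects[i]->x + sub.rects[i]->w;
    if (sub.rects[i]->y + sub.rects[i]->h > height)
        height = sub.rects[i]->y + sub.rects[i]->h;
}

// Allocate the RGBA target frame.
AVFrame* frame = av_frame_alloc();
frame->format = AV_PIX_FMT_RGBA;
frame->width  = width;
frame->height = height;
av_frame_get_buffer(frame, 0);  // fills frame->data / frame->linesize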
The color table in data[1] is RGBA, so I allocate an AVFrame called frame in AV_PIX_FMT_RGBA format and shuffle the pixels over to it:
struct [[gnu::packed]] rgbaPixel {
uint8_t r;
uint8_t g;
uint8_t b;
uint8_t a;
};
// Copy the pixel buffers
for (unsigned int i = 0; i < sub.num_rects; ++ i) {
AVSubtitleRect* rect = sub.rects[i];
for (int y = 0; y < rect->h; ++ y) {
int dest_y = y + rect->y;
// data[0] holds index data
uint8_t *in_linedata = rect->data[0] + y * rect->linesize[0];
// In AVFrame, data[0] holds the pixel buffer directly
uint8_t *out_linedata = frame->data[0] + dest_y * frame->linesize[0];
rgbaPixel *out_pixels = reinterpret_cast<rgbaPixel*>(out_linedata);
for (int x = 0; x < rect->w; ++ x) {
// data[1] contains the color map
// compare libavcodec/dvbsubenc.c
uint8_t colidx = in_linedata[x];
uint32_t color = reinterpret_cast<uint32_t*>(rect->data[1])[colidx];
// Now store the pixel in the target buffer
out_pixels[x + rect->x] = rgbaPixel{
.r = static_cast<uint8_t>((color >> 16) & 0xff),
.g = static_cast<uint8_t>((color >> 8) & 0xff),
.b = static_cast<uint8_t>((color >> 0) & 0xff),
.a = static_cast<uint8_t>((color >> 24) & 0xff),
};
}
}
}
I did manage to push that AVFrame through an image encoder to output it as a bitmap image, and it looked OK. I did get green areas where the alpha channel is, but that might be an artifact of the settings in the JPEG encoder I used.
I am trying to do some basic drawing with Skia. Since I'm working on grayscale images, I want to use the corresponding color type. The minimal example I want to use is:
int main(int argc, char * const argv[])
{
int width = 1000;
int height = 1000;
float linewidth = 10.0f;
SkImageInfo info = SkImageInfo::Make(
width,
height,
SkColorType::kAlpha_8_SkColorType,
SkAlphaType::kPremul_SkAlphaType
);
SkBitmap img;
img.allocPixels(info);
SkCanvas canvas(img);
canvas.drawColor(SK_ColorBLACK);
SkPaint paint;
paint.setColor(SK_ColorWHITE);
paint.setAlpha(255);
paint.setAntiAlias(false);
paint.setStrokeWidth(linewidth);
paint.setStyle(SkPaint::kStroke_Style);
canvas.drawCircle(500.0f, 500.0f, 100.0f, paint);
bool success = SkImageEncoder::EncodeFile("B:\\img.png", img,
SkImageEncoder::kPNG_Type, 100);
return 0;
}
But the saved image does not contain the circle that was drawn. If I replace kAlpha_8_SkColorType with kN32_SkColorType I get the expected result. How can I draw the circle onto an 8-bit grayscale image? I'm working with Visual Studio 2013 on a 64-bit Windows machine.
kN32_SkColorType type result
kAlpha_8_SkColorType result
You should use kGray_8_SkColorType rather than kAlpha_8_SkColorType.
kAlpha_8_SkColorType is used for bitmap masks.
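So a minimal fix would be to change only the image info, along these lines (a sketch; kGray_8 carries no alpha channel, so kOpaque is the matching alpha type):

SkImageInfo info = SkImageInfo::Make(
    width,
    height,
    SkColorType::kGray_8_SkColorType,  // true 8-bit grayscale
    SkAlphaType::kOpaque_SkAlphaType   // gray pixels have no alpha channel
);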
My problem is getting at the pixels in a window. I can't find a way to do this. I am using standard Windows functions and Direct2D (not DirectDraw).
I am using standard initialization of new window:
WNDCLASS wc;
wc.style = CS_OWNDC;
wc.lpfnWndProc = WndProc;
wc.cbClsExtra = 0;
wc.cbWndExtra = 0;
wc.hInstance = hInstance;
wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
wc.hCursor = LoadCursor(NULL, IDC_ARROW);
wc.hbrBackground = (HBRUSH)(6);
wc.lpszMenuName = 0;
wc.lpszClassName = L"WINDOW";
RegisterClass(&wc);
hWnd = CreateWindow(L"WINDOW", L"Game", WS_OVERLAPPEDWINDOW,100,100,1024,768,NULL,NULL,hInstance,NULL);
Then I create a D2D1factory object and draw the bitmap in the window:
HWND hWnd = NULL;
srand((unsigned int)time((time_t)NULL));
ID2D1Factory* factory = NULL;
ID2D1HwndRenderTarget* rt = NULL;
CoInitializeEx(NULL,COINIT_MULTITHREADED);
My_CreateWindow(&hWnd, hInstance);
My_CreateFactory(&hWnd, factory, rt);
D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED,&factory);
factory->CreateHwndRenderTarget(RenderTargetProperties(), HwndRenderTargetProperties(hWnd,SizeU(800,600)), &rt);
// tons of code
rt->BeginDraw();
rt->DrawBitmap(background[0], RectF(0,0,800,600));
rt->DrawBitmap(GIFHorse[(int)iterator],
    RectF(0+MyX-100,0+MyY-100,100+MyX,100+MyY));
rt->DrawBitmap(Star[(int)iterator], RectF(0 + starX, 0 + starY, 50 + starX, 50+starY));
rt->EndDraw();
I need to calculate hits (I am trying to make a simple game), so that's why I need access to all the pixels in the window.
I'm thinking about other algorithms, but they're harder to implement and I want to start with something easier.
I know about the GetPixel() function, but I can't understand what I should use for the HDC.
There is no simple and easy way to get to D2D pixels.
A simple collision detection can be implemented using geometries. If you have outlines of your objects in ID2D1Geometry objects you can compare them using ID2D1Geometry::CompareWithGeometry.
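For example, a hit test between two circular objects could look roughly like this (an untested sketch; the positions and radii are placeholders for your own values, and factory is your ID2D1Factory):

// Build geometries for the two objects.
ID2D1EllipseGeometry* player = NULL;
ID2D1EllipseGeometry* star = NULL;
factory->CreateEllipseGeometry(
    D2D1::Ellipse(D2D1::Point2F(MyX, MyY), 100.0f, 100.0f), &player);
factory->CreateEllipseGeometry(
    D2D1::Ellipse(D2D1::Point2F(starX + 25.0f, starY + 25.0f), 25.0f, 25.0f), &star);

// Ask Direct2D how the two outlines relate.
D2D1_GEOMETRY_RELATION relation = D2D1_GEOMETRY_RELATION_UNKNOWN;
player->CompareWithGeometry(star, D2D1::Matrix3x2F::Identity(), &relation);
bool hit = (relation != D2D1_GEOMETRY_RELATION_DISJOINT &&
            relation != D2D1_GEOMETRY_RELATION_UNKNOWN);

player->Release();
star->Release();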
If you insist on the pixels, a way to access them is from the DC obtainable via IDXGISurface1::GetDC
In this case you need to use the DXGI surface render target and the swap chain instead of the hwnd render target.
Step by step (a rough sketch of the whole chain follows the list):
D3D11CreateDeviceAndSwapChain gives you the swap chain.
IDXGISwapChain::GetBuffer gives you the DXGI surface.
ID2D1Factory::CreateDxgiSurfaceRenderTarget gives you the render target.
IDXGISurface1::GetDC gives you the DC. Read the remarks on this page!
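Put together, the chain might look roughly like this. This is an untested sketch with error handling omitted; note the GDI-compatible flags that GetDC requires, per the remarks mentioned above:

DXGI_SWAP_CHAIN_DESC scd = {};
scd.BufferCount = 1;
scd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;  // D2D-compatible format
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
scd.OutputWindow = hWnd;
scd.SampleDesc.Count = 1;
scd.Windowed = TRUE;
scd.Flags = DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE;     // required for GetDC

IDXGISwapChain* swapChain = NULL;
ID3D11Device* device = NULL;
ID3D11DeviceContext* context = NULL;
D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL,
    D3D11_CREATE_DEVICE_BGRA_SUPPORT,                // needed for D2D interop
    NULL, 0, D3D11_SDK_VERSION, &scd, &swapChain, &device, NULL, &context);

IDXGISurface1* surface = NULL;
swapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&surface);

ID2D1RenderTarget* rt = NULL;
D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
    D2D1_RENDER_TARGET_TYPE_DEFAULT,
    D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED),
    0.0f, 0.0f, D2D1_RENDER_TARGET_USAGE_GDI_COMPATIBLE);
factory->CreateDxgiSurfaceRenderTarget(surface, &props, &rt);

HDC hdc = NULL;
surface->GetDC(FALSE, &hdc);   // now e.g. GetPixel(hdc, x, y) works
// ... read pixels ...
surface->ReleaseDC(NULL);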
I am writing a Unity3D plugin that reads data from an MP3 file, feeds the PCM data to Unity so that it can play it inside the engine. On iOS, I use the AVAssetReaderAudioMixOutput class to decode and read the data, and on Android/Windows, I use FMOD.
I have set up a program on both Windows and iOS which uses FMOD to play back the music, just like Unity3D does.
I am having trouble getting the same results on both iOS and Windows, and I can't seem to find the difference in audio output settings/format that would cause it.
So first, these are the settings that I set for my output audio stream, which are the same settings as Unity3D uses:
FMOD_CREATESOUNDEXINFO exinfo2;
memset(&exinfo2, 0, sizeof(FMOD_CREATESOUNDEXINFO));
exinfo2.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
exinfo2.decodebuffersize = 44100;
exinfo2.length = 44100 * 1 * sizeof(float) * 100;
exinfo2.numchannels = 1;
exinfo2.defaultfrequency = 44100;
exinfo2.format = FMOD_SOUND_FORMAT_PCMFLOAT;
exinfo2.pcmreadcallback = pcmreadcallback;
result = system_->createStream("./1.mp3", FMOD_LOOP_NORMAL | FMOD_SOFTWARE | FMOD_OPENUSER | FMOD_CREATESTREAM, &exinfo2, &sound2_);
ERRCHECK(result);
result = system_->playSound(FMOD_CHANNEL_FREE,sound2_,false,0);
Basically: 1 channel, 32-bit floating-point PCM data. This is set in both the iOS and Windows playback programs.
Now, on iOS, I set the AVAssetReaderAudioMixOutput audio settings like this:
NSDictionary *audioSetting = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithFloat:44100.0],AVSampleRateKey,
[NSNumber numberWithInt:1],AVNumberOfChannelsKey, //how many channels has original?
[NSNumber numberWithInt:32],AVLinearPCMBitDepthKey, //was 16
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithBool:YES], AVLinearPCMIsFloatKey, //was NO
[NSNumber numberWithBool:0], AVLinearPCMIsBigEndianKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSData data], AVChannelLayoutKey, nil];
I set AVLinearPCMIsFloatKey to 1 so that the PCM data is floating point, and set the bit depth to 32 and the channel count to 1, so that everything matches the FMOD output settings.
I read the data and write it into a circular buffer:
float* convertedBuffer = (float * ) audioBufferList.mBuffers[0].mData;
//We don't need the audioconverter on iOS
//Fill up the circular buffer
for(int i = 0; i < numSamples; i++)
{
circularAudioBuffer_[bufferWritePosition_] = convertedBuffer[i];
bufferWritePosition_++;
if(bufferWritePosition_ >= circularBufferSize_)
bufferWritePosition_ = 0;
}
Then read the data from the buffer and write it into the audio stream in the pcmreadcallback:
float *writeBuffer = (float *)data;
for(int i = 0; i < dataLength; i++)
{
writeBuffer[i] = circularAudioBuffer_[bufferReadPosition_];
bufferReadPosition_++;
if(bufferReadPosition_ >= circularBufferSize_)
bufferReadPosition_ = 0;
}
With this, the audio plays perfectly, and the range of values inside the circular buffer is 0.0-1.0f
Now, on windows, I initialize the sound from which I read the data like this:
exinfo.cbsize = sizeof(FMOD_CREATESOUNDEXINFO);
exinfo.decodebuffersize = 44100;
exinfo.numchannels = 1;
exinfo.defaultfrequency = 44100;
exinfo.format = FMOD_SOUND_FORMAT_PCMFLOAT;
Setting the same parameters: 1 channel, 32-bit floating point. I read the data and write it into the buffer:
FMOD_RESULT result = sound_->readData(rawBuffer, N4, &bytesRead);
float* floatBuffer = (float*) rawBuffer;
for(int j = 0; j < N; j++)
{
circularAudioBuffer_[bufferWritePosition_++] = floatBuffer[j];
if(bufferWritePosition_ >= circularBufferSize_)
bufferWritePosition_ = 0;
}
Now, when I read the data, I get very high or very low floating-point values (about 1e34 or -1e33). In the test program, I can't hear anything in the output.
I can switch the input and output sound format to PCM32 and it plays fine in the test program, but it can't be read properly in Unity3D (it screeches a lot, though I can make out the song).
Can anyone help me figure this out and make it work properly using the PCMFLOAT format? Thanks!
TL;DR: I can't read data from FMOD sound with PCMFLOAT format!
From FMOD support (FMOD forums):
The specified output format doesn't matter. The FMOD codec will always return PCM16, while the iOS codec returns PCM floats. So I need to convert them:
(float)pcm16in[j] / 32768.0f;
In addition to this, I was (accidentally) initializing the output stream with an MP3 file, which made it so that I couldn't change the output format.
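For illustration, the read loop on Windows then becomes something like this (a sketch based on the snippets above, with the same buffer names):

FMOD_RESULT result = sound_->readData(rawBuffer, N4, &bytesRead);
// FMOD hands back PCM16, so reinterpret and normalize to [-1.0, 1.0].
int16_t* pcm16in = reinterpret_cast<int16_t*>(rawBuffer);
for (unsigned int j = 0; j < bytesRead / sizeof(int16_t); ++j)
{
    circularAudioBuffer_[bufferWritePosition_++] = (float)pcm16in[j] / 32768.0f;
    if (bufferWritePosition_ >= circularBufferSize_)
        bufferWritePosition_ = 0;
}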
I'm calling the cvFindContours function inside a separate thread that I've created to handle all the OpenCV work, while another thread is kept for the OpenGL stuff.
I noticed that my cvFindContours call always returns 0 when this code is executed inside the separate thread. It worked fine before, when executed in the main thread itself. I used breakpoints and watches to evaluate value changes; everything else (all variables) gets values, except for contourCount (value: 0).
Any clue?
// header includes go here
CvCapture* capture = NULL;
IplImage* frame = NULL;
IplImage* image;
IplImage* gray;
IplImage* grayContour;
CvMemStorage *storage;
CvMemStorage *storagepoly; // missing from the original snippet; used by cvApproxPoly below
CvSeq *firstcontour=NULL;
CvSeq *polycontour=NULL;
int contourCount = 0;
DWORD WINAPI startOCV(LPVOID vpParam){
capture = cvCaptureFromCAM(0); // NOTE 1
capture = cvCaptureFromCAM(0);
frame = cvQueryFrame(capture);
image = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U,3);
gray = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U,1);
grayContour = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U,1);
storage = cvCreateMemStorage (0);
storagepoly = cvCreateMemStorage (0);
firstcontour=NULL;
while(1){
frame = cvQueryFrame(capture);
cvCopy(frame,image);
cvCvtColor(image,gray,CV_BGR2GRAY);
cvSmooth(gray,gray,CV_GAUSSIAN,3);
cvThreshold (gray, gray, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
cvNot(gray,gray);
cvCopy(gray,grayContour);
contourCount=cvFindContours (grayContour, storage, &firstcontour, sizeof (CvContour),
CV_RETR_CCOMP);
polycontour=cvApproxPoly(firstcontour,sizeof(CvContour),storagepoly,CV_POLY_APPROX_DP,3,1); // Error starts here (Pls refer to stack trace)
}
// goes on...
}
int main(int argc, char** argv){
DWORD qThreadID;
HANDLE ocvThread = CreateThread(0,0,startOCV, NULL,0, &qThreadID);
initGL(argc, argv); //some GL intitialization functions
glutMainLoop(); // draw some 3D objects
CloseHandle(ocvThread);
return 0;
}
NOTE 1: these lines had to be duplicated due to the error mentioned in How to avoid "Video Source -> Capture source" selection in OpenCV 2.3.0 - Visual C++ 2008
Environment:
OpenCV 2.3.0
Visual C++ 2008
EDIT
Traces
opencv_core230d.dll!cv::error(const cv::Exception & exc={...}) Line 431 C++
opencv_imgproc230d.dll!cvPointSeqFromMat(int seq_kind=20480, const void * arr=0x00000000, CvContour * contour_header=0x01a6f514, CvSeqBlock * block=0x01a6f4f4) Line 47 + 0xbd bytes C++
opencv_imgproc230d.dll!cvApproxPoly(const void * array=0x00000000, int header_size=88, CvMemStorage * storage=0x017e7b40, int method=0, double parameter=3.0000000000000000, int parameter2=1) Line 703 + 0x28 bytes C++
Project.exe!startOCV(void * vpParam=0x00000000) Line 267 + 0x24 bytes C++
All of this boils down to the check CV_Assert( arr != 0 && contour_header != 0 && block != 0 ) in cvPointSeqFromMat, and it fails because the arr it requires is NULL (note array=0x00000000 in the trace).
Your variable contourCount is not doing what you think it's doing. From the contours.cpp source file:
/*F///////////////////////////////////////////////////////////////////////////////////////
// Name: cvFindContours
// Purpose:
// Finds all the contours on the bi-level image.
// Context:
// Parameters:
// img - source image.
// Non-zero pixels are considered as 1-pixels
// and zero pixels as 0-pixels.
// step - full width of source image in bytes.
// size - width and height of the image in pixels
// storage - pointer to storage where will the output contours be placed.
// header_size - header size of resulting contours
// mode - mode of contour retrieval.
// method - method of approximation that is applied to contours
// first_contour - pointer to first contour pointer
// Returns:
// CV_OK or error code
// Notes:
//F*/
You are getting CV_OK == 0, which means it ran successfully. cvFindContours does not return the number of contours found; it merely lets you know whether it failed or not. You should use the CvSeq* first_contour to figure out the number of contours detected, as sketched below.
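A sketch of counting them (contours returned by cvFindContours are linked via h_next, matching the variable names in your code):

int numContours = 0;
for (CvSeq* c = firstcontour; c != NULL; c = c->h_next)
    ++numContours;   // counts top-level contours only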
Hope that helps!