How to compile shader with newest Effect Framework? - graphics

I usually use the Microsoft DirectX SDK (June 2010), which is deprecated. I have downloaded VS2015, which includes the newest DirectX SDK headers, and the latest FX11 (Effects 11) from GitHub, but now I don't know how to compile shaders.
Before, I could compile a shader like this:
ID3D10Blob *compiledShader = 0;
ID3D10Blob *compilationMsgs = 0;
result = D3DX11CompileFromFile("SolidColor.fx", 0, 0, 0, "fx_5_0", shaderFlags,
0, 0, &compiledShader, &compilationMsgs, 0);
if (compilationMsgs != 0)
{
MessageBox(0, (char*)compilationMsgs->GetBufferPointer(), 0, 0);
compilationMsgs->Release();
compilationMsgs = 0;
}
if (FAILED(result))
{
MessageBox(0, "error", 0, 0);
return false;
}
result = D3DX11CreateEffectFromMemory(compiledShader->GetBufferPointer(), compiledShader->GetBufferSize(),
0, m_pd3dDevice, &m_pFx);
compiledShader->Release();
if (FAILED(result))
{
MessageBox(0, "error", 0, 0);
return false;
}
m_pTechnique = m_pFx->GetTechniqueByName("ColorTech");
m_pFxWorldViewProj = m_pFx->GetVariableByName("gWorldViewProj")->AsMatrix();
But how do I compile a shader now? Should I use D3DX11CompileEffectFromFile or D3DX11CreateEffectFromFile? Please give sample code, thank you.

It turns out to be easy: the single function D3DX11CompileEffectFromFile is all that's needed.
//compile shader
ID3DBlob* errorBlob = nullptr;
DWORD shaderFlags = D3DCOMPILE_ENABLE_STRICTNESS;
#if defined _DEBUG || defined DEBUG
shaderFlags |= D3DCOMPILE_DEBUG; // keep strictness, add debug info
#endif
hr = D3DX11CompileEffectFromFile(L"color.fx", nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE, shaderFlags,
0, m_pd3dDevice, &m_pFx, &errorBlob);
if (FAILED(hr))
{
// compiler errors come back as an ANSI string; the blob may be null if the file was not found
MessageBoxA(nullptr, errorBlob ? (LPCSTR)errorBlob->GetBufferPointer() : "compile failed", "error", MB_OK);
if (errorBlob) errorBlob->Release();
return hr;
}
m_pTechnique = m_pFx->GetTechniqueByName("ColorTech");
m_pFxWorldViewProj = m_pFx->GetVariableByName("gWorldViewProj")->AsMatrix();
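If you prefer to keep the old two-step flow (compile to a blob first, then create the effect), FX11 also provides D3DX11CreateEffectFromMemory. Here is a minimal sketch, assuming the Effects11 library from GitHub is built and linked and reusing the shaderFlags value from above; the fx_5_0 profile is deprecated but still accepted by the D3DCompile API:
ID3DBlob* fxBlob = nullptr;
ID3DBlob* errors = nullptr;
// effect profiles have no single entry point, so pass nullptr for the entry point
hr = D3DCompileFromFile(L"color.fx", nullptr, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                        nullptr, "fx_5_0", shaderFlags, 0, &fxBlob, &errors);
if (FAILED(hr))
{
    if (errors) { MessageBoxA(nullptr, (LPCSTR)errors->GetBufferPointer(), "error", MB_OK); errors->Release(); }
    return hr;
}
hr = D3DX11CreateEffectFromMemory(fxBlob->GetBufferPointer(), fxBlob->GetBufferSize(),
                                  0, m_pd3dDevice, &m_pFx);
fxBlob->Release();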

Related

Is there a way to create GLX context after Xlib's window creation?

I'm trying to create the OpenGL (GLX) context after the Xlib window creation, i.e. to separate Xlib window creation and OpenGL context creation into two different phases.
The Win32 window/OpenGL context creation was rather simple, but I couldn't find any resource that illustrates the same process with Xlib/OpenGL on Linux.
This is how it's done for Xlib on Linux:
GLint glxAttribs[] = {
GLX_RGBA,
GLX_DOUBLEBUFFER,
GLX_DEPTH_SIZE, 24,
GLX_STENCIL_SIZE, 8,
GLX_RED_SIZE, 8,
GLX_GREEN_SIZE, 8,
GLX_BLUE_SIZE, 8,
GLX_SAMPLE_BUFFERS, 0,
GLX_SAMPLES, 0,
None
};
XVisualInfo* visual = glXChooseVisual(display, screenId, glxAttribs);
XSetWindowAttributes windowAttribs;
windowAttribs.border_pixel = BlackPixel(display, screenId);
windowAttribs.background_pixel = WhitePixel(display, screenId);
windowAttribs.override_redirect = True;
windowAttribs.colormap = XCreateColormap(display, RootWindow(display, screenId), visual->visual, AllocNone);
windowAttribs.event_mask = ExposureMask;
window = XCreateWindow(display, RootWindow(display, screenId), 0, 0, 320, 200, 0, visual->depth, InputOutput, visual->visual, CWBackPixel | CWColormap | CWBorderPixel | CWEventMask, &windowAttribs);
This is how it's done on Windows:
const WindowsWindow* pWin32Window = (const WindowsWindow*)pOwnerWindow;
HWND windowHandle = pWin32Window->GetWin32WindowHandle();
HDC windowDeviceContext = pWin32Window->GetWin32WindowDeviceContext();
/*
* Create pixel format
*/
PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd),1 };
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.nVersion = 1;
pfd.cColorBits = OpenGLDeviceUtilsWin32::GetColorBits(desc.SwapchainBufferFormat);
pfd.cAlphaBits = OpenGLDeviceUtilsWin32::GetAlphaBits(desc.SwapchainBufferFormat);
pfd.cDepthBits = OpenGLDeviceUtilsWin32::GetDepthBits(desc.SwapchainDepthStencilBufferFormat);
pfd.cStencilBits = OpenGLDeviceUtilsWin32::GetStencilBits(desc.SwapchainDepthStencilBufferFormat);
pfd.cAuxBuffers = 3;
pfd.iLayerType = PFD_MAIN_PLANE;
const int pixelFormatIndex = ChoosePixelFormat(windowDeviceContext, &pfd);
ASSERT(pixelFormatIndex != 0,"OpenGLDevice","Invalid pixel format");
ASSERT(SetPixelFormat(windowDeviceContext, pixelFormatIndex, &pfd), "OpenGLDevice", "Win32 window rejected the specified pixel format");
HGLRC tempContext = wglCreateContext(windowDeviceContext);
ASSERT(tempContext != NULL, "OpenGLDevice", "Creation of wgl dummy context failed!");
wglMakeCurrent(windowDeviceContext, tempContext);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL;
wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
ASSERT(wglCreateContextAttribsARB != NULL, "OpenGLDevice", "WGL get proc address failed!");
But I would expect something like this:
Create the Xlib window
Check the GLX attribs to see whether the window can support that pixel format
Create the GLX context using that pixel format
Instead it goes as:
Create the window with your specific GLX attribs
Create the GLX context
I wonder if there is a way to create the window without letting Xlib know we are going to use it for OpenGL, and to do the OpenGL-specific setup after the window creation step.
I'm trying to create OpenGLx context after the Xlib's window creation.
I don't really see your problem. On Win32 the usual stanza is:
Create window
Select pixelformat
Set pixelformat on window
Get HDC from window and use it to create context
On GLX the stanza is:
Select visual for window
Create window that's compatible with visual
Create OpenGL context with the selected visual
Take note that in both Win32 and GLX there is no hard tie between the window and the OpenGL context. As long as the pixelformat/visual of an OpenGL context and a window are compatible, you can use them with each other.
The only difference between GLX and Win32 is how the pixelformat/visual is communicated to OpenGL context creation. In GLX it's done directly; in Win32 the pixelformat is communicated in a rather convoluted way by means of the HDC of a window. And take note that in order to obtain a modern OpenGL context you actually have to go the route of OpenGL context creation with attributes, which works exactly the same in Win32 and GLX (with Win32 needing the added step of creating a dummy OpenGL context first in order to obtain the function pointer to wglCreateContextAttribsARB, whereas the GLX counterpart glXCreateContextAttribsARB can be queried without one).
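To make the ordering concrete, here is a minimal sketch of the GLX stanza described above, using the legacy glXCreateContext for brevity (the attribute-based path is shown in the full example further down):
// 1. Select a visual first
Display* display = XOpenDisplay(NULL);
GLint attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
XVisualInfo* vi = glXChooseVisual(display, DefaultScreen(display), attribs);
// 2. Create a window that is compatible with that visual
XSetWindowAttributes swa = {};
swa.colormap = XCreateColormap(display, RootWindow(display, vi->screen), vi->visual, AllocNone);
swa.event_mask = ExposureMask;
Window win = XCreateWindow(display, RootWindow(display, vi->screen), 0, 0, 640, 360, 0,
                           vi->depth, InputOutput, vi->visual, CWColormap | CWEventMask, &swa);
// 3. Create the OpenGL context with the selected visual and bind it to the window
GLXContext ctx = glXCreateContext(display, vi, NULL, GL_TRUE);
glXMakeCurrent(display, win, ctx);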
Honestly, I do not understand your motivation.
Many implementations, GLFW for example, get the Visual from the GLX/EGL APIs (including glXChooseFBConfig) and use it when creating the window. The GLX/EGL part can be abstracted away behind wrappers, so I don't see the need to go to the trouble of avoiding it.
That being said, it is still possible to avoid it, so I wrote the sample code for you.
// To build, execute the command below.
// c++ -Wall -Wextra -std=c++17 -o main main.cpp -lX11 -lGLX -lGL
#include <cstdio>
#include <chrono>
#include <thread>
#include <sys/time.h>
#include <unistd.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>
#include <GL/glxext.h>
#define OGL_MAJOR_VERSION 3
#define OGL_MINOR_VERSION 3
#define WINDOW_WIDTH 640
#define WINDOW_HEIGHT 360
#define FPS 60
static double get_time() {
static timeval s_tTimeVal;
gettimeofday(&s_tTimeVal, NULL);
double time = s_tTimeVal.tv_sec * 1000.0; // sec to ms
time += s_tTimeVal.tv_usec / 1000.0; // us to ms
return time;
}
struct TestWindowConfig {
int width = 640;
int height = 360;
};
class TestWindow final {
public:
explicit TestWindow(const TestWindowConfig& config) : m_config(config) {}
virtual ~TestWindow() {
if (m_display) {
if (m_xid) {
XDestroyWindow(m_display, m_xid);
}
XCloseDisplay(m_display);
}
}
bool create() {
m_display = XOpenDisplay(NULL);
if (!m_display) {
fprintf(stderr, "XOpenDisplay() failed\n");
return false;
}
XSetWindowAttributes x_attr;
x_attr.override_redirect = False;
x_attr.border_pixel = 0;
m_xid = XCreateWindow(m_display, DefaultRootWindow(m_display), 0, 0, m_config.width,
m_config.height, 0, CopyFromParent, InputOutput, CopyFromParent,
CWOverrideRedirect | CWBorderPixel, &x_attr);
if (!m_xid) {
fprintf(stderr, "XCreateWindow() failed\n");
return false;
}
XStoreName(m_display, m_xid, "X11-GLX Sample");
XMapWindow(m_display, m_xid);
m_wm_delete_window = XInternAtom(m_display, "WM_DELETE_WINDOW", True);
XSetWMProtocols(m_display, m_xid, &m_wm_delete_window, 1);
return true;
}
void show() const {
if (m_display && m_xid) {
XMapRaised(m_display, m_xid);
}
}
bool poll_events() {
if (!m_display) {
fprintf(stderr, "Display is null\n");
return false;
}
while (XPending(m_display) > 0) {
XEvent ev;
XNextEvent(m_display, &ev);
if (ev.type == ClientMessage) {
if ((Atom)ev.xclient.data.l[0] == m_wm_delete_window) {
m_should_close = true;
}
}
}
return true;
}
bool should_close() const { return m_should_close; }
Display* display() const { return m_display; }
Window xid() const { return m_xid; }
int screen_id() const { return DefaultScreen(m_display); }
private:
TestWindowConfig m_config;
Display* m_display = nullptr;
Window m_xid = 0;
Atom m_wm_delete_window;
bool m_should_close = false;
};
class TestGLContext final {
public:
explicit TestGLContext() = default;
virtual ~TestGLContext() = default;
bool create(const TestWindow& window) {
// clang-format off
int visual_attr[] = {
GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
GLX_RENDER_TYPE, GLX_RGBA_BIT,
GLX_RED_SIZE, 8,
GLX_GREEN_SIZE, 8,
GLX_BLUE_SIZE, 8,
GLX_ALPHA_SIZE, 8,
GLX_DEPTH_SIZE, 0,
GLX_STENCIL_SIZE, 0,
GLX_DOUBLEBUFFER, True,
None
};
// clang-format on
int cfg_count;
auto fb_configs =
glXChooseFBConfig(window.display(), window.screen_id(), visual_attr, &cfg_count);
if (!fb_configs || (cfg_count < 1)) {
fprintf(stderr, "glXChooseFBConfig(): No config found\n");
return false;
}
PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
(PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
(const GLubyte*)"glXCreateContextAttribsARB");
if (!glXCreateContextAttribsARB) {
fprintf(stderr, "Failed to load glXCreateContextAttribsARB\n");
return false;
}
// clang-format off
int ctx_attr[] = {
GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
GLX_CONTEXT_MAJOR_VERSION_ARB, OGL_MAJOR_VERSION,
GLX_CONTEXT_MINOR_VERSION_ARB, OGL_MINOR_VERSION,
0, 0
};
// clang-format on
m_ctx = glXCreateContextAttribsARB(window.display(), fb_configs[0], NULL, True, ctx_attr);
if (!m_ctx) {
fprintf(stderr, "Failed to create GLX Context\n");
return false;
}
m_should_destroy = true;
return true;
}
bool make_current(const TestWindow& window) {
if (glXMakeCurrent(window.display(), window.xid(), m_ctx) != True) {
fprintf(stderr, "glXMakeCurrent() Failed\n");
return false;
}
return true;
}
void swap_buffers(const TestWindow& window) { glXSwapBuffers(window.display(), window.xid()); }
static void* get_proc_address(const char* name) {
return reinterpret_cast<void*>(glXGetProcAddress((const GLubyte*)name));
}
void destroy(const TestWindow& window) {
glXDestroyContext(window.display(), m_ctx);
m_should_destroy = false;
}
bool should_destroy() const { return m_should_destroy; }
private:
GLXContext m_ctx;
bool m_should_destroy = false;
};
int main() {
// 1. Prepare Window and OpenGL Context
// In a normal design, TestWindow would own its GLContext.
// But, to fit your needs, the two are kept separate here.
TestWindowConfig config{.width = WINDOW_WIDTH, .height = WINDOW_HEIGHT};
TestWindow window{config};
TestGLContext glctx{};
if (!window.create()) {
return 1;
}
if (!glctx.create(window) || !glctx.make_current(window)) {
if (glctx.should_destroy()) {
glctx.destroy(window);
}
return 1;
}
// 2. Load OpenGL functions
// In normal cases, you are recommended to use a loader library like glad.
// In this example, I omitted the loading part.
//
// if (!gladLoadGLLoader((GLADloadproc)glctx.get_proc_address)) {
// fprintf(stderr, "Failed to load OpenGL functions\n");
// return 1;
// }
// 3. Show the window and call OpenGL APIs
// As above, there are various problems in this implementation for real use.
window.show();
double last_time = get_time();
while (true) {
if (!window.poll_events() || window.should_close()) {
break;
}
auto delta_ms = get_time() - last_time;
if (auto diff = (1000.0 / FPS) - delta_ms; diff > 0) {
std::this_thread::sleep_for(std::chrono::milliseconds((long)diff));
continue;
}
// fprintf(stderr, "delta: %f\n", delta_ms);
glViewport(0, 0, config.width, config.height);
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glctx.swap_buffers(window);
last_time = get_time();
}
glctx.destroy(window);
return 0;
}

How to absolutely ensure Direct3D9 hook overlay rendering on top

I'm trying to hook IDirect3DDevice9::Present or ::EndScene and render my own overlay (which, for now, is just a simple rectangle) on top of everything else in a D3D9 application, but my overlay seems to be appearing and disappearing quite randomly. The drawing code I'm currently using is:
struct CUSTOMVERTEX {
float x, y, z, rhw;
DWORD color;
};
#define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
void draw() {
// the positions are just for testing purposes, so they don't really make sense
CUSTOMVERTEX vertices[] = {
{ 0, 0, 0, 1.0f, D3DCOLOR_XRGB(255, 255, 255) },
{ 0, cursor_pos.y+500, 0, 1.0f, D3DCOLOR_XRGB(127, 255, 255) },
{ cursor_pos.x, cursor_pos.y, 0, 1.0f, D3DCOLOR_XRGB(255,255, 255) },
{ cursor_pos.x, 600, 0, 1.0f, D3DCOLOR_XRGB(127, 0, 0) }
};
if (vBuffer == 0) return;
VOID* pVoid;
vBuffer->Lock(0, 0, &pVoid, 0);
memcpy(pVoid, vertices, sizeof(vertices));
vBuffer->Unlock();
d3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
D3DMATRIX orthographicMatrix;
D3DMATRIX identityMatrix;
// MAKE_D3DMATRIX should be equivalent to D3DXMATRIX constructor
D3DMATRIX viewMatrix = MAKE_D3DMATRIX(
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, 1, 0,
((float)-(get_window_width()/2)), ((float)-(get_window_height()/ 2)), 0, 1
);
// MAKE_ORTHO_LH is equivalent to D3DMatrixOrthoLH
MAKE_ORTHO_LH(&orthographicMatrix, (FLOAT)get_window_width(), (FLOAT)get_window_height(), -1.0, 1.0);
// and this to D3DMatrixIdentity
MAKE_IDENTITY(&identityMatrix); // and this to D3DMatrixIdentity
d3dDevice->SetTransform(D3DTS_PROJECTION, &orthographicMatrix);
d3dDevice->SetTransform(D3DTS_WORLD, &identityMatrix);
d3dDevice->SetTransform(D3DTS_VIEW, &viewMatrix);
d3dDevice->SetRenderState(D3DRS_ZENABLE, false);
d3dDevice->SetFVF(CUSTOMFVF);
d3dDevice->SetStreamSource(0, vBuffer, 0, sizeof(CUSTOMVERTEX));
d3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);
d3dDevice->SetRenderState(D3DRS_ZENABLE, true);
}
I can't seem to figure out why this would cause the overlay to pop in and out of existence at seemingly random points in time. What am I missing?
I found a foolproof method to render my stuff directly to the backbuffer (on CPU):
IDirect3DSwapChain9 *sc;
if (FAILED(d3dDevice->GetSwapChain(0, &sc))) {
PRINT("GetSwapChain failed\n");
return;
}
IDirect3DSurface9 *s;
if (FAILED(sc->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &s))) {
PRINT("GetBackBuffer failed\n");
return;
}
D3DLOCKED_RECT r;
if (FAILED(s->LockRect(&r, NULL, D3DLOCK_DONOTWAIT))) {
PRINT("LockRect failed\n");
return;
}
// here, find out pixel format and manipulate
// pixel buffer through r->pBits, using r->Pitch accordingly
s->UnlockRect();
s->Release();
sc->Release();
This will most probably incur a performance penalty, but in my application it doesn't really make a difference. A more optimal way would be to harness the GPU's hardware-accelerated rendering capabilities, but I currently lack the D3D knowledge to do so.
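As a side note on the original flicker: the host application keeps changing device state between frames, so an overlay drawn from a hook usually has to set every state it relies on and restore the application's state afterwards. A minimal sketch of that idea using a state block (not from the original post; variable names are hypothetical, and draw() is the overlay routine from the question):
IDirect3DStateBlock9* stateBlock = nullptr;
// CreateStateBlock(D3DSBT_ALL, ...) records the application's current device state
if (SUCCEEDED(d3dDevice->CreateStateBlock(D3DSBT_ALL, &stateBlock))) {
    // set everything the overlay depends on; the host app may have changed any of it
    d3dDevice->SetRenderState(D3DRS_ZENABLE, FALSE);
    d3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
    d3dDevice->SetTexture(0, NULL);
    d3dDevice->SetPixelShader(NULL);
    d3dDevice->SetVertexShader(NULL);
    draw();                 // overlay drawing code from the question
    stateBlock->Apply();    // put the application's state back
    stateBlock->Release();
}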

Multithread decoding of Video PID of Mpeg2Ts using FFMPEG

I'm working on an app in VC++ to display the video frames of a video PID of an MPEG-2 TS stream using FFmpeg, and I need to do the same for another MPEG-2 stream simultaneously, using multiple threads. My source code is:
int main (int argc, char* argv[])
{
av_register_all();
avformat_network_init();
pFormatCtx = avformat_alloc_context();
if(avformat_open_input(&pFormatCtx,filepath,NULL,NULL)!=0){
printf("Couldn't open input stream.\n");
return -1;
}
if(avformat_find_stream_info(pFormatCtx,NULL)<0){
printf("Couldn't find stream information.\n");
return -1;
}
videoindex=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO){
videoindex=i;
break;
}
if(videoindex==-1){
printf("Didn't find a video stream.\n");
return -1;
}
pCodecCtx=pFormatCtx->streams[videoindex]->codec;
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL){
printf("Codec not found.\n");
return -1;
}
if(avcodec_open2(pCodecCtx, pCodec,NULL)<0){
printf("Could not open codec.\n");
return -1;
}
pFrame=av_frame_alloc();
pFrameYUV=av_frame_alloc();
out_buffer=(uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
packet=(AVPacket *)av_malloc(sizeof(AVPacket));
//Output Info-----------------------------
printf("--------------- File Information ----------------\n");
av_dump_format(pFormatCtx,0,filepath,0);
printf("-------------------------------------------------\n");
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
#if OUTPUT_YUV420P
fp_yuv=fopen("output.yuv","wb+");
#endif
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
printf( "Could not initialize SDL - %s\n", SDL_GetError());
return -1;
}
screen_w = pCodecCtx->width;
screen_h = pCodecCtx->height;
//SDL 2.0 Support for multiple windows
screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
screen_w, screen_h, SDL_WINDOW_OPENGL);
if(!screen) {
printf("SDL: could not create window - exiting:%s\n",SDL_GetError());
return -1;
}
sdlRenderer = SDL_CreateRenderer(screen, -1, 0);
//IYUV: Y + U + V (3 planes)
//YV12: Y + V + U (3 planes)
sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,pCodecCtx->width,pCodecCtx->height);
sdlRect.x=0;
sdlRect.y=0;
sdlRect.w=screen_w;
sdlRect.h=screen_h;
//SDL End----------------------
BYTE buffer [4] ;
int nSize = 0 ;
int nByteCnt = 0 ;
int nPreviuosPos = 0 ;
mpgfile = fopen ("D:\\00_Projects\\Farzan II\\SampleData\\Yahsat1996V_N_PID(2101).pes", "rb");
while(av_read_frame(pFormatCtx, packet)>=0 /*&& nSize > 0*/)
{
if(packet->stream_index==videoindex)
{
ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
if(ret < 0)
{
printf("Decode Error.\n");
return -1;
}
if(got_picture)
{
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
pFrameYUV->data, pFrameYUV->linesize);
#if OUTPUT_YUV420P
y_size=pCodecCtx->width*pCodecCtx->height;
fwrite(pFrameYUV->data[0],1,y_size,fp_yuv); //Y
fwrite(pFrameYUV->data[1],1,y_size/4,fp_yuv); //U
fwrite(pFrameYUV->data[2],1,y_size/4,fp_yuv); //V
#endif
//SDL---------------------------
#if 0
SDL_UpdateTexture( sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0] );
#else
SDL_UpdateYUVTexture(sdlTexture, &sdlRect,
pFrameYUV->data[0], pFrameYUV->linesize[0],
pFrameYUV->data[1], pFrameYUV->linesize[1],
pFrameYUV->data[2], pFrameYUV->linesize[2]);
#endif
SDL_RenderClear( sdlRenderer );
SDL_RenderCopy( sdlRenderer, sdlTexture, NULL, &sdlRect);
SDL_RenderPresent( sdlRenderer );
//SDL End-----------------------
//Delay 40ms
SDL_Delay(40);
}
}
av_free_packet(packet);
}
//flush decoder
//FIX: Flush Frames remained in Codec
while (1) {
ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
if (ret < 0)
break;
if (!got_picture)
break;
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
pFrameYUV->data, pFrameYUV->linesize);
#if OUTPUT_YUV420P
int y_size=pCodecCtx->width*pCodecCtx->height;
fwrite(pFrameYUV->data[0],1,y_size,fp_yuv); //Y
fwrite(pFrameYUV->data[1],1,y_size/4,fp_yuv); //U
fwrite(pFrameYUV->data[2],1,y_size/4,fp_yuv); //V
#endif
//SDL---------------------------
SDL_UpdateTexture( sdlTexture, &sdlRect, pFrameYUV->data[0], pFrameYUV->linesize[0] );
SDL_RenderClear( sdlRenderer );
SDL_RenderCopy( sdlRenderer, sdlTexture, NULL, &sdlRect);
SDL_RenderPresent( sdlRenderer );
//SDL End-----------------------
//Delay 40ms
SDL_Delay(40);
}
sws_freeContext(img_convert_ctx);
#if OUTPUT_YUV420P
fclose(fp_yuv);
#endif
SDL_Quit();
av_frame_free(&pFrameYUV);
av_frame_free(&pFrame);
avcodec_close(pCodecCtx);
avformat_close_input(&pFormatCtx);
return 0;
}
It works well when I call it in one thread, but after calling this function from multiple threads an access violation occurs. Is there anyone who can guide me to a solution?
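One likely culprit (an assumption, since the declarations are not shown): pFormatCtx, pCodecCtx, pFrame, the SDL objects and the other variables in the listing appear to be globals, so two threads decoding at once overwrite each other's state. A minimal sketch of keeping the decoder state per thread instead (hypothetical struct and function names; the FFmpeg calls are the same ones used in the listing):
extern "C" {
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}
#include <thread>

// Each decoding thread owns its own contexts instead of sharing globals.
struct DecoderContext {
    AVFormatContext* fmt = nullptr;
    AVCodecContext*  codec = nullptr;
    AVFrame*         frame = nullptr;
    AVFrame*         frameYUV = nullptr;
    SwsContext*      sws = nullptr;
    int              videoIndex = -1;
};

// Hypothetical per-thread entry point: open, decode and render one stream.
int decode_stream(const char* path)
{
    DecoderContext ctx;   // lives on this thread's stack, never shared
    if (avformat_open_input(&ctx.fmt, path, NULL, NULL) != 0)
        return -1;
    // ... find the video stream, open the codec and run the read/decode loop
    //     exactly as in the listing above, but using ctx.* everywhere ...
    avformat_close_input(&ctx.fmt);
    return 0;
}

// av_register_all()/avformat_network_init() should be called once, before the threads start:
// std::thread t1(decode_stream, "stream1.ts"), t2(decode_stream, "stream2.ts");
// t1.join(); t2.join();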

SDL2/SDL.h causing undefined references in Non-SDL code

I've been having an awfully strange problem that I cannot seem to grasp. I'm almost convinced that this is a compiler bug.
xTech : xIncludes.hh
#ifndef _xIncludes_
#define _xIncludes_
#define SDL_MAIN_HANDLED
#include <string.h>
#include <stdio.h>
#include <stdarg.h>
#include <stdint.h>
#include <vector>
#include <SDL2/SDL.h>
#if defined _WIN32
#include <winsock.h>
#endif
#endif
xTech : xSound.cc
#include "xSound.hh"
int xOGGStreamSource::_stream(ALuint Buffer) {
char data[BufferSize];
int size = 0;
int section;
int result;
while (size < BufferSize) {
result = ov_read(&_oggstream, data + size, BufferSize - size, 0, 2, 1, &section);
if (result > 0)
size += result;
else
if (result < 0)
return result;
else
break; //This seems a little redundant.... deal with it after it works.
}
if (size == 0) return 0;
alBufferData(Buffer, _format, data, size, _vorbisinfo->rate);
return 1;
}
void xOGGStreamSource::_empty() {
int queued;
alGetSourcei(_source, AL_BUFFERS_QUEUED, &queued);
while (queued--) {
ALuint Buffer;
alSourceUnqueueBuffers(_source, 1, &Buffer);
}
}
int xOGGStreamSource::Open(xString path) {
int result;
_oggfile = xOpenFile(path, "rb");
if (_oggfile.Buffer == NULL) {
xLogf("Audio", "Error in OGG File '%s', file does not exist.", path);
return -3;
}
if ((result = ov_open(_oggfile.Buffer, &_oggstream, NULL, 0)) < 0) {
xLogf("Audio", "Error in OGG File '%s', file is non-OGG.", path);
xCloseFile(_oggfile);
return -2;
}
_vorbisinfo = ov_info(&_oggstream, -1);
_vorbiscomment = ov_comment(&_oggstream, -1);
if (_vorbisinfo->channels == 1)
_format = AL_FORMAT_MONO16;
else
_format = AL_FORMAT_STEREO16;
alGenBuffers(2, _buffers);
alGenSources(1, &_source);
return 1;
}
void xOGGStreamSource::Close() {
alSourceStop(_source);
_empty();
alDeleteSources(1, &_source);
alDeleteBuffers(2, _buffers);
ov_clear(&_oggstream);
}
int xOGGStreamSource::Playback() {
if (Playing()) return 1;
if (!_stream(_buffers[0])) return 0;
if (!_stream(_buffers[1])) return 0;
alSourceQueueBuffers(_source, 2, _buffers);
alSourcePlay(_source);
return 1;
}
int xOGGStreamSource::Playing() {
ALenum state;
alGetSourcei(_source, AL_SOURCE_STATE, &state);
return (state == AL_PLAYING);
}
int xOGGStreamSource::Update(xVec3f_t Pos, xVec3f_t Vloc, xVec3f_t Dir, float Vol) {
int processed;
int active = 1;
alSource3f(_source, AL_POSITION, Pos.X, Pos.Y, Pos.Z);
alSource3f(_source, AL_VELOCITY, Vloc.X, Vloc.Y, Vloc.Z);
alSource3f(_source, AL_DIRECTION, Dir.X, Dir.Y, Dir.Z);
alSourcef (_source, AL_GAIN, Vol);
alSourcei (_source, AL_SOURCE_RELATIVE, AL_TRUE);
alGetSourcei(_source, AL_BUFFERS_PROCESSED, &processed);
while(processed--) {
ALuint Buffer;
alSourceUnqueueBuffers(_source, 1, &Buffer);
active = _stream(Buffer);
alSourceQueueBuffers(_source, 1, &Buffer);
}
return active;
}
xSound::xSound(xOGGStreamSource xss) { _source = xss; }
int xSound::PlaySound(float Volume, xVec3f_t Location) {
if (!_source.Playback()) return -3;
while(_source.Update(Location, xVec3f_t(0,0,0), xVec3f_t(0,0,0), Volume)) {
if (!_source.Playing()) {
if (!_source.Playback()) return -2;
else return -1;
}
}
_source.Close();
return 1;
}
xSoundManager::xSoundManager(){}
int xSoundManager::Init() {
_device = alcOpenDevice(NULL);
if (!_device) return -2;
_context = alcCreateContext(_device, NULL);
if (alcMakeContextCurrent(_context) == ALC_FALSE || !_context) return -1;
if (!Volume) {
xLogf("Error", "Volume in Audio is not set properly. Setting to default");
Volume = DEFAULT_VOLUME;
}
alListenerf(AL_GAIN, Volume);
if (!BufferSize) {
xLogf("Error", "Buffer size in Audio is not set properly. Setting to default");
BufferSize = DEFAULT_BUFFER_SIZE;
}
return 0;
}
xSound* xSoundManager::LoadOGG(xString file) {
xOGGStreamSource ogg;
if (ogg.Open(file) < 0) return NULL;
return new xSound(ogg);
}
xTechLibTest : main.cc
int main() {
xSetLogFile("xTechLibTest.log");
xSoundManager* audio = new xSoundManager();
if (audio->Init() < 0) return -1;
xSound* testsound1 = audio->LoadOGG("testsound.ogg");
if (testsound1 == NULL) return -2;
testsound1->PlaySound(1.0, xVec3f_t(1.0,0.5,0.3));
}
The above code and everything associated with it (string implementations, etc.) works fine, no problems at all. That is, until I include SDL.h: I then get undefined references for every function I defined, even though the linker could find them with no problem before. It seems that the mere inclusion of SDL.h completely nullifies any definition I make. Any ideas what's going on here?
Have you properly included the linkage to the SDL libraries?
If you have built the binaries yourself, you need to include the path and the library. On a Linux system, if you have built the static libraries yourself, you will have a binary called libSDL2.a, but to link you need to specify SDL2 as your linked library.
Also, as a side note, do you have a redundant include guard on your xSound.hh file (via #ifdef _xsound_ ...)?
P.S. It will help other users if you specify how your environment is set up: compiler, OS, IDE.
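For reference, a hypothetical GCC/Clang link line for the project above might look like the following (the library set is assumed from the code: SDL2, OpenAL and vorbisfile; adjust file names and paths to your setup):
c++ -std=c++11 -o xTechLibTest main.cc xSound.cc $(sdl2-config --cflags --libs) -lopenal -lvorbisfile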
It would be useful to see the output from your compiler/linker.
I've had similar problems with network-related code when using Cygwin on a Windows machine: sockets worked fine without SDL, but as soon as I included SDL, the whole lot broke with messages saying that certain header files and references couldn't be found.
I'm not certain, but I think it has something to do with the way that SDL has its own main macros (there is a post about it here: simple tcp echo program not working when SDL included?).
I may be wrong, but is this similar to what you are seeing?
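If SDL's main macro is indeed the issue: on some platforms SDL.h redefines main to SDL_main, and the usual opt-out (which xIncludes.hh above already attempts with SDL_MAIN_HANDLED) looks roughly like this sketch for SDL 2.x:
#define SDL_MAIN_HANDLED      // must appear before the SDL header so SDL does not redefine main
#include <SDL2/SDL.h>

int main(int argc, char* argv[]) {
    SDL_SetMainReady();       // tell SDL we provide our own main
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        return 1;
    }
    // ... application code ...
    SDL_Quit();
    return 0;
}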

How to construct the HttpSendRequest method of WinInet

I have a dummy web service with URI = http://localhost/IO_100_Service.svc/xml?id={id} which returns data in XML format.
I want to call this service using the WinInet API in VC++. Can anybody help me with how to construct the "HttpSendRequest" call to add the header and data needed to call this service?
Here is some sample code you should be able to modify to your needs. I tested it with VS2005 using a command-line template project.
#include <windows.h>
#include <tchar.h>
#include <wininet.h>
#pragma comment(lib, "wininet.lib") // WinInet import library
/// ....
HINTERNET hIntSession =
::InternetOpen(_T("MyApp"), INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
HINTERNET hHttpSession =
InternetConnect(hIntSession, _T("api.twitter.com"), 80, 0, 0, INTERNET_SERVICE_HTTP, 0, NULL);
HINTERNET hHttpRequest = HttpOpenRequest(
hHttpSession,
_T("GET"),
_T("1/statuses/user_timeline.xml?screen_name=twitterapi"),
0, 0, 0, INTERNET_FLAG_RELOAD, 0);
LPCTSTR szHeaders = _T("Content-Type: text/html\r\nMySpecialHeader: whatever");
CHAR szReq[1024] = "";
if( !HttpSendRequest(hHttpRequest, szHeaders, _tcslen(szHeaders), szReq, strlen(szReq))) {
DWORD dwErr = GetLastError();
/// handle error
}
CHAR szBuffer[1025];
DWORD dwRead=0;
while(::InternetReadFile(hHttpRequest, szBuffer, sizeof(szBuffer)-1, &dwRead) && dwRead) {
szBuffer[dwRead] = 0;
OutputDebugStringA(szBuffer);
dwRead=0;
}
::InternetCloseHandle(hHttpRequest);
::InternetCloseHandle(hHttpSession);
::InternetCloseHandle(hIntSession);
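To point this at the service from the question (assuming it listens on plain HTTP port 80 on localhost, and using id=100 purely as an example value), only the connect and request-open calls change; everything else stays the same:
HINTERNET hHttpSession =
    InternetConnect(hIntSession, _T("localhost"), INTERNET_DEFAULT_HTTP_PORT,
                    0, 0, INTERNET_SERVICE_HTTP, 0, NULL);
HINTERNET hHttpRequest = HttpOpenRequest(
    hHttpSession,
    _T("GET"),
    _T("/IO_100_Service.svc/xml?id=100"),   // {id} replaced with an example value
    0, 0, 0, INTERNET_FLAG_RELOAD, 0);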
