How to save the contents of a COM object to a file (Visual C++)

I'm working my way through a DirectX11 tutorial and one of the "extra credit" exercises is to compile a shader and save the compiled code to a .shader file.
The program I ended up writing is super simple so I'll just list it here instead of explaining it in English.
The main program:
#include <windows.h>
#include <windowsx.h>
#include <d3d11.h>
#include <d3dx11.h>
#include <d3dx10.h>
#pragma comment (lib, "d3d11.lib")
#pragma comment (lib, "d3dx11.lib")
#pragma comment (lib, "d3dx10.lib")
int WINAPI WinMain(HINSTANCE hInstance,
                   HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine,
                   int nCmdShow)
{
    ID3D10Blob *VS, *Errors;
    D3DX11CompileFromFile(L"shaders.shader", 0, 0, "VShader", "vs_4_0", 0, 0, 0, &VS, &Errors, 0);
    if(Errors)
        MessageBox(NULL, L"The vertex shader failed to compile.", L"Error", MB_OK);
    else
        MessageBox(NULL, L"The vertex shader compiled successfully.", L"Message", MB_OK);
    HANDLE sHandle = CreateFile(L"compiledShader.shader",
                                GENERIC_WRITE,
                                FILE_SHARE_WRITE,
                                NULL,
                                CREATE_ALWAYS,
                                FILE_ATTRIBUTE_NORMAL,
                                NULL);
    if(WriteFile(sHandle, VS->GetBufferPointer(), VS->GetBufferSize(), NULL, NULL)) // <--- break point
        MessageBox(NULL, L"The compiled shader was saved successfully to 'compiledShader.shader'", L"Message", MB_OK);
    else
        MessageBox(NULL, L"The vertex shader was not saved successfully.", L"Error", MB_OK);
    return 0;
}
And the shader:
float4 VShader(float4 position : POSITION) : SV_POSITION
{
    return position;
}
Since the contents of the VS blob are the compiled code, my first instinct is that I need to save the contents of that blob to a .shader file. The only file writing I've done before used fstream in basic C++ on Linux, so I'm unsure how to approach this. I've dug through the MSDN library for various file-writing functions, but I couldn't find anything that would let me write the contents of a COM object to a file.
Let me know if you need any more information, and thanks in advance for your help.
EDIT: the code has been updated per the suggestions of the current answer. I'm not 100% sure I'm using these functions properly, as I had to work out how from their descriptions on MSDN, but I think I'm on the right track. It creates the file, but when I try to write to it I get the following error:
First-chance exception at 0x755cdeef in shader compiler.exe: 0xC0000005: Access violation writing location 0x00000000.
Unhandled exception at 0x755cdeef in shader compiler.exe: 0xC0000005: Access violation writing location 0x00000000.
The program '[5604] shader compiler.exe: Native' has exited with code -1073741819 (0xc0000005).
I feel like it's a simple mistake that is causing this problem. Any ideas?
EDIT #2: Here is what gets written to the file "compiledShader.shader"
Compiled shader code?
Does this look correct? If so I'll close the question as answered and worry about whatever is causing the generic access error on my own.

Look at the ID3D10Blob interface for the content (GetBufferPointer and GetBufferSize), and check the CreateFile, WriteFile and CloseHandle Win32 functions.

Related

OpenGL EGL eglGetDisplay keeps return EGL error 0x3008(EGL_BAD_DISPLAY )

My Ubuntu version is 16.04, and I first installed mesa-common-dev, libgl1-mesa-dev, libglm-dev, and libegl1-mesa-dev. Then I installed NVIDIA-Linux-x86_64-440.64.run with OpenGL support.
But when I try to run a toy example, I keep getting this assertion failure: main: Assertion 'display != EGL_NO_DISPLAY' failed
/* Compile with gcc -g3 -o example example.c -lX11 -lEGL */
#include <assert.h>
#include <stdio.h>
#include <X11/Xlib.h>
#include <EGL/egl.h>
#include <EGL/eglplatform.h>

void printEGLError();

int main(void) {
    Display* x_display = XOpenDisplay(NULL);
    EGLDisplay display = eglGetDisplay(x_display);
    // EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    assert(display != EGL_NO_DISPLAY);
    EGLint major, minor;
    eglInitialize(display, &major, &minor);
    const char *string = eglQueryString(display, EGL_CLIENT_APIS);
    assert(string);
    printf("%s\n", string);
    return 0;
}
/* Use printEGLError to show a description of the last EGL error.
   The descriptions are taken from the eglGetError manual. */
#define ERROR_DESC(...) fprintf(stderr, "%s\n", __VA_ARGS__); break

void printEGLError() {
    switch (eglGetError()) {
    case EGL_SUCCESS:
        ERROR_DESC("The last function succeeded without error.");
    case EGL_NOT_INITIALIZED:
        ERROR_DESC("EGL is not initialized, or could not be initialized, for the specified EGL display connection.");
    case EGL_BAD_ACCESS:
        ERROR_DESC("EGL cannot access a requested resource (for example a context is bound in another thread).");
    case EGL_BAD_ALLOC:
        ERROR_DESC("EGL failed to allocate resources for the requested operation.");
    case EGL_BAD_ATTRIBUTE:
        ERROR_DESC("An unrecognized attribute or attribute value was passed in the attribute list.");
    case EGL_BAD_CONTEXT:
        ERROR_DESC("An EGLContext argument does not name a valid EGL rendering context.");
    case EGL_BAD_CONFIG:
        ERROR_DESC("An EGLConfig argument does not name a valid EGL frame buffer configuration.");
    case EGL_BAD_CURRENT_SURFACE:
        ERROR_DESC("The current surface of the calling thread is a window, pixel buffer or pixmap that is no longer valid.");
    case EGL_BAD_DISPLAY:
        ERROR_DESC("An EGLDisplay argument does not name a valid EGL display connection.");
    case EGL_BAD_SURFACE:
        ERROR_DESC("An EGLSurface argument does not name a valid surface (window, pixel buffer or pixmap) configured for GL rendering.");
    case EGL_BAD_MATCH:
        ERROR_DESC("Arguments are inconsistent (for example, a valid context requires buffers not supplied by a valid surface).");
    case EGL_BAD_PARAMETER:
        ERROR_DESC("One or more argument values are invalid.");
    case EGL_BAD_NATIVE_PIXMAP:
        ERROR_DESC("A NativePixmapType argument does not refer to a valid native pixmap.");
    case EGL_BAD_NATIVE_WINDOW:
        ERROR_DESC("A NativeWindowType argument does not refer to a valid native window.");
    case EGL_CONTEXT_LOST:
        ERROR_DESC("A power management event has occurred. The application must destroy all contexts and reinitialise OpenGL ES state and objects to continue rendering.");
    }
}
More information: my graphics card is a Titan Xp. I tried running sudo service lightdm stop and removed all remote desktop software, but the problem still exists. Can anyone help?
For those who may be confused by this problem: just unset DISPLAY. This may save your day.

Unicode conversion in C++17 [duplicate]

A bit of background: my task required converting a UTF-8 XML file to UTF-16 (with the proper header, of course). So I searched for the usual ways of converting UTF-8 to UTF-16, and found that one should use the templates from <codecvt>.
But now that it is deprecated, I wonder what the new common way of doing the same task is?
(Don't mind using Boost at all, but other than that I prefer to stay as close to standard library as possible.)
Don't worry about that.
According to the same information source:
this library component should be retired to Annex D, along side <strstream>, until a suitable replacement is standardized.
So, you can still use it until a new, standardized, more secure version is available.
The std::codecvt template from <locale> itself isn't deprecated. For UTF-8 to UTF-16, there is still the std::codecvt<char16_t, char, std::mbstate_t> specialization.
However, since std::wstring_convert and std::wbuffer_convert are deprecated along with the standard conversion facets, there isn't any easy way to convert strings using the facets.
So, as Bolas already answered: Implement it yourself (or you can use a third party library, as always) or keep using the deprecated API.
Since nobody has really answered the question and provided usable replacement code, here is some, though it is Windows-only:
#include <string>
#include <stdexcept>
#include <Windows.h>

std::wstring string_to_wide_string(const std::string& string)
{
    if (string.empty())
    {
        return L"";
    }

    const auto size_needed = MultiByteToWideChar(CP_UTF8, 0, &string.at(0), (int)string.size(), nullptr, 0);
    if (size_needed <= 0)
    {
        throw std::runtime_error("MultiByteToWideChar() failed: " + std::to_string(size_needed));
    }

    std::wstring result(size_needed, 0);
    MultiByteToWideChar(CP_UTF8, 0, &string.at(0), (int)string.size(), &result.at(0), size_needed);
    return result;
}

std::string wide_string_to_string(const std::wstring& wide_string)
{
    if (wide_string.empty())
    {
        return "";
    }

    const auto size_needed = WideCharToMultiByte(CP_UTF8, 0, &wide_string.at(0), (int)wide_string.size(), nullptr, 0, nullptr, nullptr);
    if (size_needed <= 0)
    {
        throw std::runtime_error("WideCharToMultiByte() failed: " + std::to_string(size_needed));
    }

    std::string result(size_needed, 0);
    WideCharToMultiByte(CP_UTF8, 0, &wide_string.at(0), (int)wide_string.size(), &result.at(0), size_needed, nullptr, nullptr);
    return result;
}
The new way is... you write it yourself. Or just rely on deprecated functionality. Hopefully, the standards committee won't actually remove codecvt until there is a functioning replacement.
But at present, there isn't one.

#pragma section and attributes

Just trying to create a new section and set its attributes with #pragma, which returns this warning:
warning C4330: attribute 'write' for section '.mysec' ignored
Simple code:
#include <windows.h>
#include <stdio.h>
#pragma section(".mysec",execute,read,write)
__declspec(allocate(".mysec")) UCHAR var[] = {0xDE, 0xAD, 0xBE, 0xEF};
void main() { return; }
linker options: /DYNAMICBASE:NO, /FIXED, /NXCOMPAT:NO, /OPT:NOREF
OS/tools: Win x64 / msvc++ 110
I read some articles on MSDN, in particular http://msdn.microsoft.com/en-us/library/50bewfwa(v=vs.110).aspx, but didn't find an answer.
Thanks.
I think this is due to the execute flag. I don't think you can have a section that contains writable code in Windows by default; writable, executable memory would be a security issue, and so the compiler ignores the write attribute and emits C4330.
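If a writable, executable section is genuinely required, one commonly suggested workaround is to keep the #pragma section attributes to what the compiler accepts and set the final section flags with a linker /SECTION directive instead. This is an untested sketch for MSVC, reusing the section name from the question:

```cpp
#include <windows.h>

#pragma section(".mysec", execute, read)   // compiler accepts this subset (no C4330)
__declspec(allocate(".mysec")) UCHAR var[] = {0xDE, 0xAD, 0xBE, 0xEF};

// Ask the linker to mark the section Execute/Read/Write anyway:
#pragma comment(linker, "/SECTION:.mysec,ERW")

int main() { return 0; }
```

The E, R, and W letters in /SECTION correspond to the execute, read, and write section attributes.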

Gdiplus::Image Object and boost::shared_ptr

I have a simple Image cache class in my MFC application, to keep track of images loaded from the file system:
typedef boost::shared_ptr<Gdiplus::Image> ImagePtr;
typedef std::map<std::string, ImagePtr> ImageMap;
Whenever an image is requested by file name, it is loaded and cached, or, if it is already loaded, the appropriate ImagePtr is returned.
The problem occurs when I exit my application, and the shared pointer gets destructed. I get an access violation here, in checked_delete.hpp:
// verify that types are complete for increased safety
template<class T> inline void checked_delete(T * x)
{
    // intentionally complex - simplification causes regressions
    typedef char type_must_be_complete[ sizeof(T)? 1: -1 ];
    (void) sizeof(type_must_be_complete);
    delete x; // <-------- violation here!!
}
Is GDI+ managing these objects for me? If so, what do I need to do to my shared_ptr so that it doesn't call delete? Or is something else awry?
Thanks in advance!
That might be a symptom of calling GdiplusShutdown before the pointers are destroyed.

error C2065: 'CComQIPtr' : undeclared identifier

I'm still feeling my way around C++, and am a complete ATL newbie, so I apologize if this is a basic question. I'm starting with an existing VC++ executable project that has functionality I'd like to expose as an ActiveX object (while sharing as much of the source as possible between the two projects).
I've approached this by adding an ATL project to the solution in question, and in that project have referenced all the .h and .cpp files from the executable project, added all the appropriate references, and defined all the preprocessor macros. So far so good. But I'm getting a compiler error in one file (HideDesktop.cpp). The relevant parts look like this:
#include "stdafx.h"
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <WinInet.h> // Shell object uses INTERNET_MAX_URL_LENGTH (go figure)
#if _MSC_VER < 1400
#define _WIN32_IE 0x0400
#endif
#include <atlbase.h> // ATL smart pointers
#include <shlguid.h> // shell GUIDs
#include <shlobj.h> // IActiveDesktop
#include "stdhdrs.h"
struct __declspec(uuid("F490EB00-1240-11D1-9888-006097DEACF9")) IActiveDesktop;
#define PACKVERSION(major,minor) MAKELONG(minor,major)
static HRESULT EnableActiveDesktop(bool enable)
{
    CoInitialize(NULL);
    HRESULT hr;
    CComQIPtr<IActiveDesktop, &IID_IActiveDesktop> pIActiveDesktop; // <- Problematic line (throws errors 2065 and 2275)
    hr = pIActiveDesktop.CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER);
    if (!SUCCEEDED(hr))
    {
        return hr;
    }
    COMPONENTSOPT opt;
    opt.dwSize = sizeof(opt);
    opt.fActiveDesktop = opt.fEnableComponents = enable;
    hr = pIActiveDesktop->SetDesktopItemOptions(&opt, 0);
    if (!SUCCEEDED(hr))
    {
        CoUninitialize();
        // pIActiveDesktop->Release();
        return hr;
    }
    hr = pIActiveDesktop->ApplyChanges(AD_APPLY_REFRESH);
    CoUninitialize();
    // pIActiveDesktop->Release();
    return hr;
}
This code is throwing the following compiler errors:
error C2065: 'CComQIPtr' : undeclared identifier
error C2275: 'IActiveDesktop' : illegal use of this type as an expression
error C2065: 'pIActiveDesktop' : undeclared identifier
The two weird bits: (1) CComQIPtr is defined in atlcomcli.h, which is included in atlbase.h, which is included in HideDesktop.cpp; and (2) this file is only throwing these errors when it's referenced in my new ATL/AX project: it's not throwing them in the original executable project, even though they have basically the same preprocessor definitions. (The ATL AX project, naturally enough, defines _ATL_DLL, but I can't see where that would make a difference.)
My current workaround is to use a normal "dumb" pointer, like so:
IActiveDesktop *pIActiveDesktop;
HRESULT hr = ::CoCreateInstance(CLSID_ActiveDesktop,
                                NULL, // no outer unknown
                                CLSCTX_INPROC_SERVER,
                                IID_IActiveDesktop,
                                (void**)&pIActiveDesktop);
And that works, provided I remember to release it. But I'd rather be using the ATL smart stuff.
Any thoughts?
You may have forgotten the ATL namespace:
ATL::CComQIPtr
