Draw a 32-bit bitmap with alpha channel from resources using Direct2D - visual-c++

I have a legacy MFC application that displays some images (32-bit BMPs with alpha channel information) by pre-multiplying the images and using the CDC::AlphaBlend method.
I would like to introduce some new graphics using Direct2D but I don't want to migrate all the images to png or other formats.
I managed to draw a BMP image from a file, but I'm having trouble getting the image from resources, and the displayed image does not use the alpha channel information.
So could anybody help me out with this?
This is my code to create the bitmap:
hr = pIWICFactory->CreateDecoderFromFilename(
    L"D:\\image.bmp",
    NULL,
    GENERIC_READ,
    WICDecodeMetadataCacheOnDemand,
    &pDecoder);
if (SUCCEEDED(hr))
{
    // Create the initial frame.
    hr = pDecoder->GetFrame(0, &pSource);
}
if (SUCCEEDED(hr))
{
    // Create a Direct2D bitmap from the WIC bitmap.
    hr = pRenderTarget->CreateBitmapFromWicBitmap(
        pSource,
        NULL,
        ppBitmap);
}
This is the code to draw the bitmap:
m_pRenderTarget->DrawBitmap(
    m_pBitmap,
    D2D1::RectF(0.0f, 0.0f, size.width, size.height));

You'll need to make an IStream from the resource to pass to IWICImagingFactory::CreateDecoderFromStream.
Since resources are available in memory (assuming the module that contains them is loaded), the easiest way to do that is to create an IWICStream object using IWICImagingFactory::CreateStream and initialize it using IWICStream::InitializeFromMemory.
To get the size of the resource and a pointer to the first byte, use the FindResource, LoadResource, LockResource, and SizeofResource functions.
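A minimal sketch of that sequence, with error handling trimmed (IDB_MYIMAGE and the custom "BMP" resource type are placeholders for whatever your .rc file declares; note the image should be embedded as a custom type rather than RT_BITMAP, so the BITMAPFILEHEADER stays intact for the WIC decoder):
HMODULE hModule = GetModuleHandle(NULL); // or the module that holds the resource
HRSRC hrsrc = FindResource(hModule, MAKEINTRESOURCE(IDB_MYIMAGE), L"BMP");
HGLOBAL hGlobal = LoadResource(hModule, hrsrc);
BYTE* pData = static_cast<BYTE*>(LockResource(hGlobal));
DWORD cbData = SizeofResource(hModule, hrsrc);

IWICStream* pStream = NULL;
hr = pIWICFactory->CreateStream(&pStream);
if (SUCCEEDED(hr))
    hr = pStream->InitializeFromMemory(pData, cbData);
if (SUCCEEDED(hr))
    hr = pIWICFactory->CreateDecoderFromStream(
        pStream,
        NULL,
        WICDecodeMetadataCacheOnDemand,
        &pDecoder);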
If your bitmap's header uses BI_BITFIELDS to specify a format with alpha data, I believe WIC will respect that. I don't have any experience with Direct2D, so I can't say if you need to do anything further to make it use alpha data.
If you can't use BI_BITFIELDS (or if that doesn't work), you can write your own IWICBitmapSource implementation that wraps the frame's IWICBitmapSource. You should be able to pass most calls directly to the frame source, and supply your own GetPixelFormat method that returns the real format of your image data. Alternatively, you can create an IWICBitmap with the format you want, lock the bitmap, and copy in the pixel data from the frame source.
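One more option that often sorts out the alpha problem without a custom source: Direct2D render targets generally want premultiplied BGRA (GUID_WICPixelFormat32bppPBGRA), so you can try running the decoded frame through WIC's built-in format converter before creating the Direct2D bitmap. A sketch:
IWICFormatConverter* pConverter = NULL;
hr = pIWICFactory->CreateFormatConverter(&pConverter);
if (SUCCEEDED(hr))
    // Convert from whatever the frame's format is to premultiplied 32-bit BGRA.
    hr = pConverter->Initialize(pSource, GUID_WICPixelFormat32bppPBGRA,
        WICBitmapDitherTypeNone, NULL, 0.0f, WICBitmapPaletteTypeCustom);
if (SUCCEEDED(hr))
    hr = pRenderTarget->CreateBitmapFromWicBitmap(pConverter, NULL, ppBitmap);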

Related

How to apply shaders and generate images only once?

I'm trying to apply a pixelation shader to my textures, and I need it to be applied only once; after that I can reuse the shader-generated images as textures over and over without having to recalculate them every single time.
So how do I take a few images -> apply a shader and render them only once each time the game loads -> and use them as my textures?
So far I've managed to find the shader to apply:
shader_type canvas_item;
uniform int amount = 40;
void fragment()
{
    vec2 grid_uv = round(UV * float(amount)) / float(amount);
    vec4 text = texture(TEXTURE, grid_uv);
    COLOR = text;
}
But I have no idea how to render out the images using it.
Shaders reside in the GPU, and their output goes to the screen. To save the image, the CPU would have to see the GPU output, and that does not happen… Usually. And since it does not go through the CPU, the performance is good. Usually. Well, at least it is better than if the CPU was doing it all the time.
Also, are you sure you don't want to get a pixel-art look by other means? Such as removing the filter from the texture, changing the stretch mode and working at a small resolution, and perhaps enabling pixel snap? No? Watch How to make a silky smooth camera for pixelart games in Godot. Still no? Ok...
Anyway, for what you want, you are going to need a Viewport.
Viewport setup
What you will need is to create a Viewport. Don't forget to set its size. You may also want to set render_target_v_flip to true, which flips the image vertically; if you find the output image upside down, it is because you need to toggle render_target_v_flip.
Then place whatever you want to render as a child of the Viewport.
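For reference, the same setup can be done from code; a rough sketch (the size, the update mode, and node_to_render are example values, not requirements):
var viewport := Viewport.new()
viewport.size = Vector2(256, 256)
viewport.render_target_v_flip = true
viewport.render_target_update_mode = Viewport.UPDATE_ALWAYS
add_child(viewport)
viewport.add_child(node_to_render) # whatever you want rendered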
Rendering
Next, you can read the texture from the Viewport, convert it to an image, and save it to a PNG. I'm doing this in a tool script attached to the Viewport, using an exported property as a workaround to trigger the code from the inspector panel. My code looks like this:
tool
extends Viewport

export var save:bool setget do_save

func do_save(new_value) -> void:
    var image := get_texture().get_data()
    var error := image.save_png("res://output.png")
    if error != OK:
        push_error("failed to save output image.")
You can, of course, export a FILE path String to ease changing it in the inspector panel. Here I'm handling common edge cases:
tool
extends Viewport

export(String, FILE) var path:String
export var save:bool setget do_save

func do_save(_new_value) -> void:
    var target_path := path.strip_edges()
    var folder := target_path.get_base_dir()
    var file_name := target_path.get_file()
    var extension := target_path.get_extension()
    if file_name == "":
        push_error("empty file name.")
        return
    if not (Directory.new()).dir_exists(folder):
        push_error("output folder does not exist.")
        return
    if extension != "png":
        target_path += "png" if target_path.ends_with(".") else ".png"
    var image := get_texture().get_data()
    var error := image.save_png(target_path)
    if error != OK:
        push_error("failed to save output image.")
        return
    print("image saved to ", target_path)
Another option is to use ResourceSaver:
tool
extends Viewport

export var save:bool setget do_save

func do_save(new_value) -> void:
    var image := get_texture().get_data()
    var error := ResourceSaver.save("res://image.res", image)
    if error != OK:
        push_error("failed to save output image.")
This will only work from the Godot editor, and the output will only work in Godot, since you get a Godot resource file. Still, I find the idea of using Godot to generate images interesting, and I suggest going with ResourceSaver if you want to automate generating them for Godot.
About saving resources from tool scripts
In the examples above, I'm assuming you are saving to a resource path. This is because the intention is to use the output image as a resource in Godot. Using a resource path has a couple of implications:
This might not work in an exported game (since the goal is to improve the workflow, this is OK).
Godot would need to re-import the resource, but will not notice it changed.
We can deal with the second point from an EditorPlugin. If that is what you are writing, you can tell Godot to scan for changes like this:
get_editor_interface().get_resource_filesystem().scan()
And if you are not, you can cheat by creating an empty EditorPlugin. The idea is to do this:
var ep = EditorPlugin.new()
ep.get_editor_interface().get_resource_filesystem().scan()
ep.free()
By the way, you will want to cache the EditorPlugin instead of making a new one each time. Or better yet, cache the EditorFileSystem you get from get_resource_filesystem.
Automation
Now, I'm aware that it can be cumbersome to have to place things inside the Viewport. It might be OK for your workflow if you don't need to do it all the time.
But what about automating it? Well, regardless of the approach, you will need a tool script that creates a hidden Viewport, takes a Node, and checks whether it has a shader; if it does, the script temporarily moves the Node to the Viewport, gets the rendered texture (get_texture()), sets it as the texture of the Node, removes the shader, and returns the Node to its original position in the scene. Or, instead of looking for a shader on the Node, always apply a shader to whatever Node you are given, perhaps loaded as a resource instead of hard-coded. See the sketch after the note below.
Note: I believe you need to let an idle frame pass between adding the Node to the Viewport and getting the texture, so the texture updates. Or was it two idle frames? Well, if one does not work, try adding another one.
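Here is a rough, untested sketch of that idea (Godot 3 API; bake_pixelated and its arguments are placeholders, and the script is assumed to live on a node inside the tree):
func bake_pixelated(sprite:Sprite, viewport:Viewport):
    var parent := sprite.get_parent()
    parent.remove_child(sprite)
    viewport.add_child(sprite)
    # Let the Viewport render; two idle frames to be safe.
    yield(get_tree(), "idle_frame")
    yield(get_tree(), "idle_frame")
    var image := viewport.get_texture().get_data()
    image.flip_y() # or set render_target_v_flip on the Viewport
    var baked := ImageTexture.new()
    baked.create_from_image(image)
    viewport.remove_child(sprite)
    parent.add_child(sprite)
    sprite.material = null # drop the shader, keep the baked result
    sprite.texture = baked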
About making an EditorPlugin
As you know, you can create an addon from the project settings. This will create an EditorPlugin script for you. There you can either add an option to the Tools menu (with add_tool_menu_item) or add a control to the editor's toolbar (with add_control_to_container), and have it act on the current selection in the edited scene (you can either use get_selection, or override the edit and handles methods). You may also want to make an undo entry for that; see get_undo_redo. A minimal sketch of the Tools-menu route follows.
Alternatively, you can have the plugin keep track of (or look for) the Nodes it has to act upon, and then work in the build virtual method, which runs when the project is about to run. I haven't worked with the build virtual method, so I don't know if it has any quirks or gotchas to be aware of.
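For the Tools-menu route, something like this (the menu label and callback name are placeholders):
tool
extends EditorPlugin

func _enter_tree() -> void:
    add_tool_menu_item("Bake pixelated textures", self, "_on_bake")

func _exit_tree() -> void:
    remove_tool_menu_item("Bake pixelated textures")

func _on_bake(_user_data) -> void:
    for node in get_editor_interface().get_selection().get_selected_nodes():
        print("would bake: ", node.name)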

Getting image from dicom .DCM file in node.js

I would like to generate a thumbnail image from a .dcm file (Dicom) in node.js.
So far I've found a Node module called dicom-parser that extracts the metadata from a .dcm file.
My test case :
var dicom = require('dicom-parser');
var fs = require('fs');

var dicomFileAsBuffer = fs.readFileSync('./FullPano.dcm');
var dataSet = dicom.parseDicom(dicomFileAsBuffer);
var pixelData = new Uint8Array(
    dataSet.byteArray.buffer,
    dataSet.elements.x00880200.items[0].dataSet.elements.x7fe00010.dataOffset,
    dataSet.elements.x00880200.items[0].dataSet.elements.x7fe00010.length);
fs.writeFileSync('test5.jpg', pixelData); // <----- not working :'(
But the pixelData stored in the tag x00880200 -> x7fe00010 is not in a standard format such as JPEG or PNG. The idea here is to get the thumbnail of a .dcm image directly from a file, on the fly, server-side in Node.js.
From the DICOM documentation (see below), the tag (0088,0200) holds the data for the icon, aka thumbnail:
Icon Image Sequence - Tag (0088,0200) - Type 3 - This icon image is representative of the Image. Only a single Item is permitted in this Sequence.
I've come across the cornerstone libs: cornerstone-js and wado-image-loader. But neither works in a Node.js environment (I made an issue about that). These libs can generate the "main" image of a .dcm, but only once the .dcm file is loaded on the client side, in JS. My requirement is to do that in Node.js, for the icon/thumbnail.
If you are trying to save the image icon as a JPG, that may be your issue:
Only monochrome and palette color images shall be used. Samples per Pixel (0028,0002) shall have a Value of 1, Photometric Interpretation (0028,0004) shall have a Value of either MONOCHROME1, MONOCHROME2 or PALETTE COLOR, Planar Configuration (0028,0006) shall not be present. (source)
I'm not familiar with node.js, but the data in the Icon Image Sequence may not be appropriate for that call.
Note also that you are getting an optional, small, thumbnail of the image, not the actual image data, which can be found in the Pixel Data attribute (7FE0,0010).
A bit late but, if you are still looking for an answer, you can use dcmjs-imaging (full disclosure, I am the author). The library implements a DICOM image and overlay rendering pipeline, for Node.js and browser.
The library supports uncompressed data but also, optionally, decodes all major transfer syntaxes using a native WebAssembly module.
Given that you have already fetched the DICOM bytes into an ArrayBuffer, you can use the following Node.js example to render the image into an RGBA pixel ArrayBuffer.
// Import objects
const dcmjsImaging = require('dcmjs-imaging');
const { DicomImage, NativePixelDecoder } = dcmjsImaging;
// Optionally register the native-decoders WebAssembly module.
// If native decoders are not registered, only uncompressed
// transfer syntaxes can be rendered. Note that await needs
// an async context in CommonJS Node.js.
await NativePixelDecoder.initializeAsync();
// Create an ArrayBuffer with the contents of the DICOM P10 byte stream.
const image = new DicomImage(arrayBuffer);
// Render image.
const renderingResult = image.render();
// Rendered pixels in an RGBA ArrayBuffer.
const renderedPixels = renderingResult.pixels;
// Rendered width.
const width = renderingResult.width;
// Rendered height.
const height = renderingResult.height;
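From there, if what you need is an actual PNG thumbnail on disk, one option is to encode the RGBA buffer with a third-party package such as pngjs (a sketch, reusing width, height, and renderedPixels from the snippet above):
const fs = require('fs');
const { PNG } = require('pngjs');

// pngjs expects RGBA data, which is exactly what render() produced.
const png = new PNG({ width: width, height: height });
png.data = Buffer.from(renderedPixels);
png.pack().pipe(fs.createWriteStream('thumbnail.png'));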

Awesomium offscreen rendered to Leadwerks texture

I'm using the Leadwerks game engine and I'm trying to get Awesomium to render to a Leadwerks Texture, but I'm having no luck. Below is the code where I create a texture and allocate an unsigned char* buffer that I copy the Awesomium surface into via CopyTo(), then set as the Leadwerks Texture's pixels. The screen stays black, so clearly I'm not understanding something here. Any ideas on what I'm missing?
Texture* uiTex = Texture::Create(window->GetWidth(), window->GetHeight());
unsigned char* pixels = (unsigned char*)malloc(uiTex->GetMipmapSize(0));
// copy surface to LE texture and draw that texture to screen
BitmapSurface* surface = static_cast<BitmapSurface*>(view->surface());
surface->CopyTo(pixels, 1024, 32, true, false);
uiTex->SetPixels((char*)pixels);
context->DrawImage(uiTex, 0, 0);
I know this is old, but if some unlucky bastard like myself stumbles upon this in the future, there might as well be a little guidance.
Begin by checking that everything has been set up correctly in Awesomium. Otherwise everything will seem fine, but the surface returned will simply be null.
Save the Awesomium surface to a file and inspect it, to verify that it gets generated in the first place.
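Something along these lines should do the dump (a sketch, assuming Awesomium 1.7's BitmapSurface::SaveToPNG; the output path is just an example):
Awesomium::BitmapSurface* surface =
    static_cast<Awesomium::BitmapSurface*>(view->surface());
// surface() can be null until the view has actually rendered something.
if (surface != NULL)
    surface->SaveToPNG(Awesomium::WSLit("C:\\temp\\awesomium_dump.png"), true);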

iOS: how to get image dimensions without opening it

In an iOS app, I need to provide image filters based on their size (width/height), think of something similar to "Large, Medium, Small" in Google images search. Opening each image and reading its dimensions when creating the list would be very performance intensive. Is there a way to get this info without opening the image itself?
Damien DeVille answered the question below; based on his suggestion, I am now using the following code:
NSURL *imageURL = [NSURL fileURLWithPath:imagePath];
if (imageURL == nil)
    return;
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)imageURL, NULL);
if (imageSourceRef == NULL)
    return;
CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
CFRelease(imageSourceRef);
NSLog(@"%@", (NSDictionary *)props);
CFRelease(props);
You can use ImageIO to achieve that.
If you have your image URL (or from a file path create a URL with +fileURLWithPath on NSURL) you can then create an image source with CGImageSourceCreateWithURL (you will have to bridge cast the URL to CFURLRef).
Once you have the image source, you can get a CFDictionaryRef of properties of the image (that you can again bridge cast to NSDictionary) by calling CGImageSourceCopyPropertiesAtIndex. What you get is a dictionary with plenty of properties about the image including the pixelHeight and pixelWidth.
You can pass 0 as the index. The index exists because some files might contain various embedded images (such as a thumbnail, or multiple frames like in a GIF).
Note that by using an image source, the full image won't have to be loaded into memory but you will still be able to access its properties.
Make sure you import the ImageIO header and add the framework to your project.
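For the size filter itself, the relevant keys are kCGImagePropertyPixelWidth and kCGImagePropertyPixelHeight; for example, reading them from the dictionary before it is released:
NSDictionary *properties = (NSDictionary *)props;
NSNumber *width = [properties objectForKey:(NSString *)kCGImagePropertyPixelWidth];
NSNumber *height = [properties objectForKey:(NSString *)kCGImagePropertyPixelHeight];
NSLog(@"%@ x %@", width, height);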
Just one addition:
if you want to use a web URL like http://www.example.com/image.jpg, you should use
[NSURL URLWithString:imagePath] instead of [NSURL fileURLWithPath:imagePath].

How to get webcam video stream bytes in c++

I am targeting Windows machines. I need to get access to the pointer to the byte array describing the individual streaming frames from an attached USB webcam. I saw the PlayCap DirectShow sample from the Windows SDK, but I don't see how to get to the raw data; frankly, I don't understand how the video actually gets to the window. Since I don't really need anything other than the video capture, I would prefer not to use OpenCV.
Visual Studio 2008 c++
Insert the sample grabber filter. Connect the camera source to the sample grabber and then to the null renderer. The sample grabber is a transform, so you need to feed the output somewhere, but if you don't need to render it, the null renderer is a good choice.
You can configure the sample grabber using ISampleGrabber. You can arrange a callback to your app for each frame, giving you either a pointer to the bits themselves, or a pointer to the IMediaSample object which will also give you the metadata.
You need to implement ISampleGrabberCB on your object, and then you need something like this (pseudocode; a sketch of the callback object follows at the end):
IFilterInfoPtr m_pFilterInfo;
ISampleGrabberPtr m_pGrabber;

m_pGrabber = pFilter;
m_pGrabber->SetBufferSamples(false);
m_pGrabber->SetOneShot(false);

// Force the grabber to 24-bit RGB mode.
AM_MEDIA_TYPE mt;
ZeroMemory(&mt, sizeof(mt));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
m_pGrabber->SetMediaType(&mt);

m_pGrabber->SetCallback(this, 0);
// SetCallback increments a refcount on ourselves, but we own the
// grabber, so this is circular -- release the extra reference here,
// and AddRef again before calling SetCallback(NULL) at teardown.
Release();
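And a minimal sketch of the callback object itself (ISampleGrabberCB is declared in qedit.h in older Windows SDKs; SampleCB runs on the streaming thread):
class FrameGrabber : public ISampleGrabberCB
{
    volatile LONG m_ref;
public:
    FrameGrabber() : m_ref(1) {}

    // Called once per frame with the sample and its metadata.
    STDMETHODIMP SampleCB(double sampleTime, IMediaSample* pSample)
    {
        BYTE* pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cbData = pSample->GetActualDataLength();
            // pData now points at cbData bytes of raw RGB24 pixels
            // (bottom-up rows with DWORD-aligned stride).
        }
        return S_OK;
    }

    // Used instead of SampleCB when SetCallback(this, 1) is chosen.
    STDMETHODIMP BufferCB(double, BYTE*, long) { return E_NOTIMPL; }

    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
        {
            *ppv = this;
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_ref); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG ref = InterlockedDecrement(&m_ref);
        if (ref == 0) delete this;
        return ref;
    }
};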
