at::Tensor to UIImage - pytorch

I have a PyTorch model and I'm trying to run it on iOS. I have the following code:
at::Tensor tensor2 = torch::from_blob(imageBuffer2, {1, 1, 256, 256}, at::kFloat);
c10::InferenceMode guard;
auto output = _impl.forward({tensor1, tensor2});
torch::Tensor tensor_img = output.toTuple()->elements()[0].toTensor();
My question is: how can I convert tensor_img to a UIImage?
I found this function in the PyTorch documentation:
- (UIImage*)convertRGBBufferToUIImage:(unsigned char*)buffer
                            withWidth:(int)width
                           withHeight:(int)height {
    char* rgba = (char*)malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i] = buffer[3 * i];
        rgba[4 * i + 1] = buffer[3 * i + 1];
        rgba[4 * i + 2] = buffer[3 * i + 2];
        rgba[4 * i + 3] = 255;
    }
    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider,
                                    NULL,
                                    YES,
                                    renderingIntent);
    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }
    CGContextRef context = CGBitmapContextCreate(pixels,
                                                 width,
                                                 height,
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpaceRef,
                                                 bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
    }
    UIImage* image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    if (pixels) {
        free(pixels);
    }
    return image;
}
@end
If I understand correctly, that function can convert an unsigned char* buffer to a UIImage. I think I need to convert my tensor_img to an unsigned char*, but I don't understand how to do it.

The first snippet is the torch bridge and the second is the UIImage helper, which I call from Swift. Anyway, I resolved the issue, so we can close this. Code example:
// floatBuffer holds the tensor's float data (e.g. tensor_img.data_ptr<float>()),
// and width/height match the model output (256 x 256 in the question).
NSMutableArray* results = [NSMutableArray new];
for (int i = 0; i < 3 * width * height; i++) {
    [results addObject:@(floatBuffer[i])];
}
NSMutableData* data = [NSMutableData dataWithLength:sizeof(float) * 3 * width * height];
float* buffer = (float*)[data mutableBytes];
for (int j = 0; j < 3 * width * height; j++) {
    buffer[j] = [results[j] floatValue];
}
return buffer;
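For reference, here is a minimal sketch (my addition, not from the original thread) that goes straight from the output tensor to the unsigned char* RGB buffer that convertRGBBufferToUIImage expects, assuming tensor_img is a 1x3xHxW float tensor with values in [0, 1]:
// Sketch: 1x3xHxW float tensor in [0,1] -> interleaved HxWx3 uint8 buffer.
at::Tensor rgb = tensor_img.squeeze(0)       // 3xHxW
                     .mul(255).clamp(0, 255) // scale to [0,255]
                     .to(at::kByte)          // float -> uint8
                     .permute({1, 2, 0})     // HxWx3 (interleaved RGB)
                     .contiguous();
unsigned char* rgbBuffer = rgb.data_ptr<unsigned char>(); // valid only while `rgb` is alive
int height = (int)rgb.size(0);
int width = (int)rgb.size(1);
UIImage* image = [self convertRGBBufferToUIImage:rgbBuffer withWidth:width withHeight:height];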

Related

D2D1 and D3D11 interoperability issues in a multi-thread environment with a shared ID3D11Device

Under D2D, I use an ID2D1RenderTarget object to draw lines and images in separate threads; even rendering to another ID2D1RenderTarget object in the same thread always works fine.
But to support multi-threaded D3D / Media Foundation, I use a global ID3D11Device and set up D2D/D3D interop.
If only one thread is running it works fine, but if two threads (each with its own CDxRender window) run at the same time, an error is reported that I don't know how to solve:
D2D DEBUG ERROR - An attempt to draw to an inaccessible target has been detected.
A debug breakpoint is triggered in MFVerifDemo.exe. Pressing F5 to continue yields:
D3D11 ERROR: ID3D11DeviceContext::Draw: A Vertex Shader is always required when drawing, but none is currently bound. [ EXECUTION ERROR #341: DEVICE_DRAW_VERTEX_SHADER_NOT_SET]
D3D11 ERROR: ID3D11DeviceContext::Draw: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled and RasterizedStream is not D3D11_SO_NO_RASTERIZED_STREAM) but position is not provided by the last shader before the Rasterization Unit. [ EXECUTION ERROR #362: DEVICE_DRAW_POSITION_NOT_PRESENT]
D3D11: Removing Device.
1. Use D3D11CreateDevice to create a global variable _d3d11_device, and wrap command-list submission in ExecuteCommandList:
CComPtr<ID3D11Device> _d3d11_device;
HRESULT CreateD3DDevice(ID3D11Device** d3d11_device)
{
    CComPtr<ID3D11DeviceContext> d3d11_immedContex;
    //Describe our buffer (leftover from swap-chain setup; not used for device creation)
    DXGI_MODE_DESC bufferDesc;
    ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC));
    bufferDesc.Width = 0;
    bufferDesc.Height = 0;
    bufferDesc.RefreshRate.Numerator = 60;
    bufferDesc.RefreshRate.Denominator = 1;
    bufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
    //Create the shared device with BGRA support for D2D interop
    HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, D3D11_CREATE_DEVICE_DEBUG | D3D11_CREATE_DEVICE_BGRA_SUPPORT, NULL, NULL, D3D11_SDK_VERSION, d3d11_device, NULL, &d3d11_immedContex);
    RETURN_ON_FAIL(hr);
    CComPtr<ID3D11Multithread> D3DDevMT;
    hr = (*d3d11_device)->QueryInterface(IID_PPV_ARGS(&D3DDevMT));
    RETURN_ON_FAIL(hr);
    D3DDevMT->SetMultithreadProtected(TRUE);
    return hr;
}
void ExecuteCommandList(CComPtr<ID3D11DeviceContext> deferred_context)
{
    CComPtr<ID3D11CommandList> command_list;
    HRESULT hr = deferred_context->FinishCommandList(FALSE, &command_list);
    RETURN_ON_FAIL2(hr);
    CComPtr<ID3D11DeviceContext> immediate_context;
    _d3d11_device->GetImmediateContext(&immediate_context);
    if (immediate_context)
    {
        std::unique_lock<std::mutex> lck(_mutex);
        immediate_context->ExecuteCommandList(command_list, FALSE);
    }
}
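A hypothetical call site for the wrapper (my sketch, mirroring what DrawScene does below): each window records into its own deferred context and submits through the mutex-guarded immediate context:
// Per-thread usage sketch: record on a deferred context owned by this
// window/thread, then let ExecuteCommandList serialize the submission.
CComPtr<ID3D11DeviceContext> deferred;
HRESULT hr = _d3d11_device->CreateDeferredContext(0, &deferred);
if (SUCCEEDED(hr))
{
    // ... record draw calls on `deferred` ...
    ExecuteCommandList(deferred);
}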
2. The window class CDxRender defines a series of member variables:
const int Output_Width = 960;
const int Output_Height = 720;
const UINT32 fps = 50;
struct DefVertBuffer
{
    DirectX::XMFLOAT3 pos;
    DirectX::XMFLOAT4 color;
    DirectX::XMFLOAT2 texCoord;
};
CComPtr<IDXGISwapChain1> _swap_chain1;
CComPtr<ID3D11DeviceContext> _deferred_context;
CComPtr<ID2D1Factory> _d2d_factory;
CComPtr<ID3D11RenderTargetView> _render_target_view;
CComPtr<ID3D11Texture2D> _back_texture2d;
CComPtr<IDXGISurface> _back_dxgi_surface;
CComPtr<ID2D1RenderTarget> _back_render_target;
CComPtr<ID2D1Bitmap> _back_d2d1_bitmap;
CComPtr<ID3D11SamplerState> _sampler_state;
CComPtr<ID3D11InputLayout> _input_layout;
CComPtr<ID3D11VertexShader> _vertex_shader;
CComPtr<ID3D11PixelShader> _pixel_shader;
CComPtr<ID3D11Buffer> _vert_buffer;
CComPtr<ID3D11Buffer> _index_buffer;
3. The main implementation is as follows:
HRESULT CDxRender::InitD3d11()
{
    RECT client_rect;
    GetClientRect(&client_rect);
    DirectX::XMMATRIX WVP;
    WVP = DirectX::XMMatrixOrthographicOffCenterLH(0.0f, static_cast<float>(client_rect.right - client_rect.left), static_cast<float>(client_rect.bottom - client_rect.top), 0.0f, 0.1f, 100.0f);
    //Create our _swap_chain1
    CComPtr<IDXGIFactory2> dxgi_factory2;
    HRESULT hr = GetDxgiFactoryFromD3DDevice(_d3d11_device, &dxgi_factory2);
    RETURN_ON_FAIL(hr);
    //Describe our _swap_chain1
    DXGI_SWAP_CHAIN_DESC1 swapChainDesc = { 0 };
    swapChainDesc.Width = client_rect.right - client_rect.left; // Match the size of the window.
    swapChainDesc.Height = client_rect.bottom - client_rect.top;
    swapChainDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // This is the most common swap chain format.
    swapChainDesc.Stereo = FALSE;
    swapChainDesc.SampleDesc.Count = 1; // Don't use multi-sampling.
    swapChainDesc.SampleDesc.Quality = 0;
    swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    swapChainDesc.BufferCount = 2; // Use double-buffering to minimize latency.
    swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // All Windows Store apps must use this SwapEffect.
    swapChainDesc.Flags = 0;
    swapChainDesc.Scaling = DXGI_SCALING_NONE;
    swapChainDesc.AlphaMode = DXGI_ALPHA_MODE_IGNORE;
    hr = dxgi_factory2->CreateSwapChainForHwnd(_d3d11_device, m_hWnd, &swapChainDesc, nullptr, nullptr, &_swap_chain1);
    RETURN_ON_FAIL(hr);
    D2D1_FACTORY_OPTIONS options;
    options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
    hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, options, &_d2d_factory);
    RETURN_ON_FAIL(hr);
    hr = _d3d11_device->CreateDeferredContext(0, &_deferred_context);
    RETURN_ON_FAIL(hr);
    D3D11_BUFFER_DESC bufDs;
    memset(&bufDs, 0, sizeof(bufDs));
    bufDs.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    bufDs.Usage = D3D11_USAGE_DEFAULT;
    bufDs.ByteWidth = sizeof(DefVertBuffer) * 4;
    D3D11_SUBRESOURCE_DATA subData;
    std::vector<DefVertBuffer> temptBuffer;
    DefVertBuffer temptData;
    // lb
    temptData.pos = { -1.0f, -1.0f, 0.0f };
    temptData.color = { 1.0f, 0.0f, 0.0f, 1.0f };
    temptData.texCoord = { 0.0f, 1.0f };
    temptBuffer.push_back(temptData);
    // lt
    temptData.pos = { -1.0f, 1.0f, 0.0f };
    temptData.color = { 1.0f, 0.0f, 0.0f, 1.0f };
    temptData.texCoord = { 0.0f, 0.0f };
    temptBuffer.push_back(temptData);
    // rt
    temptData.pos = { 1.0f, 1.0f, 0.0f };
    temptData.color = { 1.0f, 0.0f, 0.0f, 1.0f };
    temptData.texCoord = { 1.0f, 0.0f };
    temptBuffer.push_back(temptData);
    // rb
    temptData.pos = { 1.0f, -1.0f, 0.0f };
    temptData.color = { 1.0f, 0.0f, 0.0f, 1.0f };
    temptData.texCoord = { 1.0f, 1.0f };
    temptBuffer.push_back(temptData);
    subData.pSysMem = temptBuffer.data();
    hr = _d3d11_device->CreateBuffer(&bufDs, &subData, &_vert_buffer);
    RETURN_ON_FAIL(hr);
    unsigned __int32 indexBuff[] =
    {
        0, 1, 2,
        0, 2, 3
    };
    bufDs.BindFlags = D3D11_BIND_INDEX_BUFFER;
    bufDs.ByteWidth = sizeof(unsigned __int32) * 6;
    subData.pSysMem = indexBuff;
    hr = _d3d11_device->CreateBuffer(&bufDs, &subData, &_index_buffer);
    RETURN_ON_FAIL(hr);
    std::vector<D3D11_INPUT_ELEMENT_DESC> layout;
    layout.push_back({ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 });
    layout.push_back({ "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 });
    layout.push_back({ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 28, D3D11_INPUT_PER_VERTEX_DATA, 0 });
    hr = _d3d11_device->CreateInputLayout(layout.data(), layout.size(), g_VertexShader, ARRAYSIZE(g_VertexShader), &_input_layout);
    RETURN_ON_FAIL(hr);
    hr = _d3d11_device->CreateVertexShader(g_VertexShader, ARRAYSIZE(g_VertexShader), NULL, &_vertex_shader);
    RETURN_ON_FAIL(hr);
    hr = _d3d11_device->CreatePixelShader(g_PixelShader, ARRAYSIZE(g_PixelShader), NULL, &_pixel_shader);
    RETURN_ON_FAIL(hr);
    D3D11_SAMPLER_DESC samplerDesc;
    ZeroMemory(&samplerDesc, sizeof(D3D11_SAMPLER_DESC));
    samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
    samplerDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    samplerDesc.MinLOD = 0;
    samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
    hr = _d3d11_device->CreateSamplerState(&samplerDesc, &_sampler_state);
    RETURN_ON_FAIL(hr);
    return hr;
}
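One detail worth flagging (an observation, not a confirmed fix): InitD3d11 creates the D2D factory with D2D1_FACTORY_TYPE_SINGLE_THREADED even though two CDxRender windows render concurrently against the shared device. A sketch of the multithreaded alternative, which makes the factory serialize access to the D2D objects created from it:
// Sketch: multithreaded D2D factory for use from more than one thread.
D2D1_FACTORY_OPTIONS options;
options.debugLevel = D2D1_DEBUG_LEVEL_INFORMATION;
hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_MULTI_THREADED, options, &_d2d_factory);
RETURN_ON_FAIL(hr);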
HRESULT CDxRender::Resize(const UINT32& w, const UINT32& h)
{
    _render_target_view = nullptr;
    _back_texture2d = nullptr;
    _back_dxgi_surface = nullptr;
    _back_d2d1_bitmap = nullptr;
    _back_render_target = nullptr;
    HRESULT hr = _swap_chain1->ResizeBuffers(2, w, h, DXGI_FORMAT_B8G8R8A8_UNORM, 0);
    RETURN_ON_FAIL(hr);
    hr = _swap_chain1->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&_back_texture2d);
    RETURN_ON_FAIL(hr);
    hr = _d3d11_device->CreateRenderTargetView(_back_texture2d, NULL, &_render_target_view);
    RETURN_ON_FAIL(hr);
    _deferred_context->OMSetRenderTargets(1, &_render_target_view.p, NULL);
    hr = _back_texture2d->QueryInterface(IID_PPV_ARGS(&_back_dxgi_surface));
    RETURN_ON_FAIL(hr);
#if 1
    UINT dpi = GetDpiForWindow(m_hWnd);
    D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_DEFAULT, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), static_cast<float>(dpi), static_cast<float>(dpi));
    hr = _d2d_factory->CreateDxgiSurfaceRenderTarget(_back_dxgi_surface, &props, &_back_render_target);
    RETURN_ON_FAIL(hr);
    float dpi_x = 0.f;
    float dpi_y = 0.f;
    _back_render_target->GetDpi(&dpi_x, &dpi_y);
    D2D1_SIZE_U src_sizeU = _back_render_target->GetPixelSize();
    D2D1_BITMAP_PROPERTIES prop = D2D1::BitmapProperties(D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi_x, dpi_y);
    hr = _back_render_target->CreateBitmap(src_sizeU, prop, &_back_d2d1_bitmap);
    RETURN_ON_FAIL(hr);
#endif
    return hr;
}
void CDxRender::PreRender(const UINT32& w, const UINT32& h)
{
    unsigned __int32 stride = sizeof(DefVertBuffer);
    unsigned __int32 offset = 0;
    _deferred_context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    _deferred_context->IASetIndexBuffer(_index_buffer, DXGI_FORMAT_R32_UINT, 0);
    _deferred_context->IASetVertexBuffers(0, 1, &_vert_buffer.p, &stride, &offset);
    _deferred_context->PSSetShader(_pixel_shader, 0, 0);
    _deferred_context->VSSetShader(_vertex_shader, 0, 0);
    _deferred_context->IASetInputLayout(_input_layout);
    _deferred_context->PSSetSamplers(0, 1, &_sampler_state.p);
    _deferred_context->OMSetRenderTargets(1, &_render_target_view.p, nullptr);
    D3D11_VIEWPORT viewPort;
    unsigned int viewPort_index = 1ui32;
    _deferred_context->RSGetViewports(&viewPort_index, &viewPort);
    viewPort.Width = static_cast<float>(w);
    viewPort.Height = static_cast<float>(h);
    viewPort.MinDepth = 0.0f;
    viewPort.MaxDepth = 1.0f;
    _deferred_context->RSSetViewports(1, &viewPort);
}
void CDxRender::DrawScene()
{
    RECT client_rect;
    GetClientRect(&client_rect);
    PreRender(client_rect.right - client_rect.left, client_rect.bottom - client_rect.top);
    FLOAT bgArray[][4] =
    {
        { 1.f, 0.f, 0.f, 1.0f },
        { 0.f, 1.f, 0.f, 1.0f },
        { 0.f, 0.f, 1.f, 1.0f },
        { 1.f, 0.f, 1.f, 1.0f },
        { 1.f, 1.f, 0.f, 1.0f },
        { 0.f, 1.f, 1.f, 1.0f },
        { 1.f, 1.f, 1.f, 1.0f },
        { 0.f, 0.f, 0.f, 1.0f },
    };
    INT64 counter = (_capture_counter / (5 * fps)) % ARRAYSIZE(bgArray);
    FLOAT* bgColor = bgArray[counter];
#if 1
    CComPtr<ID2D1SolidColorBrush> whiteBrush;
    _back_render_target->CreateSolidColorBrush(D2D1::ColorF(1.f, 1.f, 0.f, 1.f), &whiteBrush);
    // draw the text
    D2D1_SIZE_F border = _back_render_target->GetSize();
    float _offset_x = (rand() % 20 - 10) / 10.f;
    float _offset_y = (rand() % 20 - 10) / 10.f;
    _back_render_target->BeginDraw();
    _back_render_target->Clear(D2D1::ColorF(bgColor[0], bgColor[1], bgColor[2], bgColor[3]));
    _back_render_target->FillRectangle(D2D1::RectF(200 - 50 * _offset_y, 200 - 50 * _offset_x, 800 + 50 * (_offset_x + _offset_y) / 2, 600 - 50 * (_offset_x + _offset_y) / 2), whiteBrush);
    _back_render_target->DrawRectangle(D2D1::RectF(100 + 50 * _offset_x, 100 + 50 * _offset_y, border.width - 100 - 50 * (_offset_x + _offset_y) / 2, border.height - 100 + 50 * (_offset_x + _offset_y) / 2), whiteBrush, 10.f);
    HRESULT hr = _back_render_target->EndDraw(); ///<------------ the debug breakpoint fires here
#endif
    ExecuteCommandList(_deferred_context);
    //Present the backbuffer to the screen
    _swap_chain1->Present(0, 0);
}
4. PixelShader.hlsl:
struct PS_Input
{
    float4 pos : SV_POSITION;
    float4 color : COLOR;
    float2 texCoord : TEXCOORD;
};
Texture2D objTexture : register(t0);
SamplerState objSamplerState;
float4 main(PS_Input input) : SV_TARGET
{
    /*float4 retColor = 0;
    retColor = objTexture.Sample(objSamplerState, input.texCoord) * input.color;
    clip(retColor.a - 0.3);
    return retColor;*/
    return input.color;
}
5. VertexShader.hlsl:
struct VS_Input
{
    float4 pos : POSITION;
    float4 color : COLOR;
    float2 texCoord : TEXCOORD;
};
struct VS_Output
{
    float4 pos : SV_POSITION;
    float4 color : COLOR;
    float2 texCoord : TEXCOORD;
};
VS_Output main(VS_Input input)
{
    VS_Output output;
    output.pos = input.pos; /*mul(input.pos, WVP);*/
    output.color = input.color;
    output.texCoord = input.texCoord;
    return output;
}

directx 11 omnidirectional shadow mapping: shadows projected incorrectly

I implemented omnidirectional shadow mapping in C++ / DirectX 11, following the algorithm from the book "HLSL Development Cookbook" by Doron Feinstein. But when my screen resolution (and everything derived from it) differs from the resolution of the shadow map, the shadows land in the wrong place and are projected incorrectly. How can I fix it?
XMMATRIX* PointLight::GetCubeViewProjection()
{
    XMMATRIX lightProjection, positionMatrix, spotView, toShadow;
    RebuildWorldMatrixPosition();
    XMFLOAT3 worldPosition = this->GetWorldPosition();
    positionMatrix = XMMatrixTranslation(-worldPosition.x, -worldPosition.y, -worldPosition.z);
    lightProjection = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, SHADOW_NEAR_PLANE, m_radius);
    // Cube +X
    spotView = XMMatrixRotationY(XM_PI + XM_PIDIV2);
    toShadow = positionMatrix * spotView * lightProjection;
    m_cubeViewProjection[0] = XMMatrixTranspose(toShadow);
    // Cube -X
    spotView = XMMatrixRotationY(XM_PIDIV2);
    toShadow = positionMatrix * spotView * lightProjection;
    m_cubeViewProjection[1] = XMMatrixTranspose(toShadow);
    // Cube +Y
    spotView = XMMatrixRotationX(XM_PIDIV2);
    toShadow = positionMatrix * spotView * lightProjection;
    m_cubeViewProjection[2] = XMMatrixTranspose(toShadow);
    // Cube -Y
    spotView = XMMatrixRotationX(XM_PI + XM_PIDIV2);
    toShadow = positionMatrix * spotView * lightProjection;
    m_cubeViewProjection[3] = XMMatrixTranspose(toShadow);
    // Cube +Z
    toShadow = positionMatrix * lightProjection;
    m_cubeViewProjection[4] = XMMatrixTranspose(toShadow);
    // Cube -Z
    spotView = XMMatrixRotationY(XM_PI);
    toShadow = positionMatrix * spotView * lightProjection;
    m_cubeViewProjection[5] = XMMatrixTranspose(toShadow);
    return m_cubeViewProjection;
}
cbuffer WorldMatrixBuffer : register(b0)
{
    matrix worldMatrix;
};
// vertex shadow gen shader
float4 PointShadowGenVS(float4 Pos : POSITION) : SV_Position
{
    Pos.w = 1.0f;
    return mul(Pos, worldMatrix);
}
// geometry shadow gen shader
cbuffer ShadowMapCubeViewProj : register(b0)
{
    float4x4 cubeViewProj[6] : packoffset(c0);
};
struct GS_OUTPUT
{
    float4 Pos : SV_POSITION;
    uint RTIndex : SV_RenderTargetArrayIndex;
};
[maxvertexcount(18)]
void PointShadowGenGS(triangle float4 InPos[3] : SV_Position, inout TriangleStream<GS_OUTPUT> OutStream)
{
    for (int iFace = 0; iFace < 6; ++iFace)
    {
        GS_OUTPUT output;
        output.RTIndex = iFace;
        for (int v = 0; v < 3; ++v)
        {
            output.Pos = mul(InPos[v], cubeViewProj[iFace]);
            OutStream.Append(output);
        }
        OutStream.RestartStrip();
    }
}
// point light pixel shader
float PointShadowPCF(float3 toPixel)
{
    float3 toPixelAbs = abs(toPixel);
    float z = max(toPixelAbs.x, max(toPixelAbs.y, toPixelAbs.z));
    float depth = (lightPerspectiveValues.x * z + lightPerspectiveValues.y) / z;
    return pointShadowMapTexture.SampleCmpLevelZero(PCFSampler, toPixel, depth).x;
}
float shadowAttenuation = PointShadowPCF(worldPosition - lightPosition);
The problem was an incorrect viewport: it must have the same width and height as the shadow map, and I had created it incorrectly.
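A minimal sketch of that fix (deviceContext and SHADOW_MAP_SIZE are assumed names, not from the original code): bind a viewport matching the shadow map before rendering the cube-map faces:
// The shadow-pass viewport must match the shadow map, not the screen.
D3D11_VIEWPORT shadowViewport = {};
shadowViewport.TopLeftX = 0.0f;
shadowViewport.TopLeftY = 0.0f;
shadowViewport.Width = static_cast<float>(SHADOW_MAP_SIZE);
shadowViewport.Height = static_cast<float>(SHADOW_MAP_SIZE);
shadowViewport.MinDepth = 0.0f;
shadowViewport.MaxDepth = 1.0f;
deviceContext->RSSetViewports(1, &shadowViewport);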

PGraphics set different colormode not working

I have the following code:
import processing.video.*;
import oscP5.*;
import netP5.*;
//sending the data to wekinator
int numPixelsOrig, numPixels, boxWidth = 64, boxHeight = 48, numHoriz = 640/boxWidth, numVert = 480/boxHeight;
color[] downPix = new color[numHoriz * numVert];
PGraphics buffer;
Capture video;
OscP5 oscSend;
//receiving data from the wekinatorino
ArrayList<Blob> blobs = new ArrayList<Blob>();
int amt = 5;
int cons1 = 200, cons2 = 150;
float xspeed, yspeed, radius;
float p1, p2, p3, p4, p5, p6;
OscP5 oscRecieve;
NetAddress dest;
void setup() {
  colorMode(RGB, 100);
  size(600, 600, P2D);
  buffer = createGraphics(600, 600, P3D);
  buffer.beginDraw();
  buffer.colorMode(HSB, 100);
  buffer.endDraw();
  String[] cameras = Capture.list();
  if (cameras == null) {
    // Capture.list() can return null; fall back to the default capture device
    video = new Capture(this, 640, 480);
    video.start();
    numPixelsOrig = video.width * video.height;
  } else if (cameras.length == 0) {
    exit();
  } else {
    video = new Capture(this, 640, 480);
    video.start();
    numPixelsOrig = video.width * video.height;
  }
  oscSend = new OscP5(this, 9000);
  oscRecieve = new OscP5(this, 12000);
  dest = new NetAddress("127.0.0.1", 6448);
}
void draw() {
  //println(blobs.size());
  if (video.available() == true) {
    video.read();
    video.loadPixels();
    int boxNum = 0;
    int tot = boxWidth * boxHeight;
    for (int x = 0; x < 640; x += boxWidth) {
      for (int y = 0; y < 480; y += boxHeight) {
        float red = 0, green = 0, blue = 0;
        for (int i = 0; i < boxWidth; i++) {
          for (int j = 0; j < boxHeight; j++) {
            int index = (x + i) + (y + j) * 640;
            red += red(video.pixels[index]);
            green += green(video.pixels[index]);
            blue += blue(video.pixels[index]);
          }
        }
        downPix[boxNum] = color(red/tot, green/tot, blue/tot);
        fill(downPix[boxNum]);
        int index = x + 640 * y;
        red += red(video.pixels[index]);
        green += green(video.pixels[index]);
        blue += blue(video.pixels[index]);
        noStroke();
        rect(x, y, boxWidth, boxHeight);
        boxNum++;
      }
    }
    if (frameCount % 2 == 0)
      sendOsc(downPix);
  }
  if (blobs.size() < amt) {
    blobs.add(new Blob(new PVector(random(100 + (width - 200)), random(100 + (height - 200))), new PVector(-3, 3), random(100, 300)));
  }
  for (int i = blobs.size() - 1; i >= 0; i--) {
    if (blobs.size() > amt) {
      blobs.remove(i);
    }
  }
  buffer.beginDraw();
  buffer.loadPixels();
  for (int x = 0; x < buffer.width; x++) {
    for (int y = 0; y < buffer.height; y++) {
      int index = x + y * buffer.width;
      float sum = 0;
      for (Blob b : blobs) {
        float d = dist(x, y, b.pos.x, b.pos.y);
        sum += 10 * b.r / d;
      }
      buffer.pixels[index] = color(sum, 255, 255); //constrain(sum, cons1, cons2)
    }
  }
  buffer.updatePixels();
  buffer.endDraw();
  for (Blob b : blobs) {
    b.update();
  }
  //if () {
  //  xspeed = map(p1, 0, 1, -5, 5);
  //  yspeed = map(p2, 0, 1, -5, 5);
  //  radius = map(p3, 0, 1, 100, 300);
  //  cons1 = int(map(p4, 0, 1, 0, 255));
  //  cons2 = int(map(p5, 0, 1, 0, 255));
  //  amt = int(map(p6, 0, 1, 1, 6));
  //  for (Blob b : blobs) {
  //    b.updateAlgorithm(xspeed, yspeed, radius);
  //  }
  //}
  image(buffer, 0, 0);
}
void sendOsc(int[] px) {
  //println(px);
  OscMessage msg = new OscMessage("/wek/inputs");
  for (int i = 0; i < px.length; i++) {
    msg.add(float(px[i]));
  }
  oscSend.send(msg, dest);
}
void oscEvent(OscMessage theOscMessage) {
  if (theOscMessage.checkAddrPattern("/wek/outputs") == true) {
    if (theOscMessage.checkTypetag("fff")) {
      p1 = theOscMessage.get(0).floatValue();
      p2 = theOscMessage.get(1).floatValue();
      p3 = theOscMessage.get(2).floatValue();
      p4 = theOscMessage.get(2).floatValue();
      p5 = theOscMessage.get(2).floatValue();
      p6 = theOscMessage.get(2).floatValue();
    } else {
    }
  }
}
void mousePressed() {
  xspeed = random(-5, 5);
  yspeed = random(-5, 5);
  radius = random(100, 300);
  cons1 = int(random(255));
  cons2 = int(random(255));
  amt = int(random(6));
  for (Blob b : blobs) {
    b.updateAlgorithm(xspeed, yspeed, radius);
  }
}
class Blob {
  PVector pos;
  PVector vel;
  float r;
  Blob(PVector pos, PVector vel, float r) {
    this.pos = pos.copy();
    this.vel = vel.copy();
    this.r = r;
  }
  void update() {
    pos.add(vel);
    if (pos.x > width || pos.x < 0) {
      vel.x *= -1;
    }
    if (pos.y > height || pos.y < 0) {
      vel.y *= -1;
    }
  }
  void updateAlgorithm(float vx, float vy, float nr) {
    vel.x = vx;
    vel.y = vy;
    r = nr;
  }
}
Then I create some graphics in the buffer element, but the graphics aren't using my HSB color mode, with the result that I only see blue and white...
So how do I correct my code, i.e. change the colorMode of a PGraphics element to HSB?
According to the PGraphics reference:
"The beginDraw() and endDraw() methods (see above example) are necessary to set up the buffer and to finalize it."
Therefore you should try this:
buffer = createGraphics(600, 600, P3D);
buffer.beginDraw();
buffer.colorMode(HSB, 100);
buffer.endDraw();
Here's a full test sketch to run and compare:
PGraphics buffer;
void setup(){
  colorMode(RGB, 100);
  size(600, 600, P2D);
  //draw test gradient in RGB buffer
  noStroke();
  for(int i = 0; i < 10; i++){
    fill(i * 10, 100, 100);
    rect(0, i * 60, width, 60);
  }
  buffer = createGraphics(600, 600, P3D);
  buffer.beginDraw();
  buffer.colorMode(HSB, 100);
  buffer.endDraw();
  //draw test gradient in HSB buffer
  buffer.beginDraw();
  buffer.noStroke();
  for(int i = 0; i < 10; i++){
    buffer.fill(i * 10, 100, 100);
    buffer.rect(0, i * 60, width, 60);
  }
  buffer.endDraw();
  //finally render the buffer on screen, offset to the right for comparison
  image(buffer, 300, 0);
}

how to extract the detected region in opencv?

Hi everyone. I have shown my object-tracking code below; it also displays the background-subtraction result. I am using a frame-differencing method. My problem is that I have to extract the moving object from the colour video file. I have done the segmentation, but for detection I want to extract the region around which I have drawn the bounding box. Can anybody help me with this? Thank you in advance.
int main(int argc, char* argv[])
{
    CvSize imgSize;
    //CvCapture *capture = cvCaptureFromFile("S:\\offline object detection database\\video1.avi");
    CvCapture *capture = cvCaptureFromFile("S:\\offline object detection database\\SINGLE PERSON Database\\Walk1.avi");
    if(!capture){
        printf("Capture failure\n");
        return -1;
    }
    IplImage* frame = 0;
    frame = cvQueryFrame(capture);
    if(!frame)
        return -1;
    imgSize = cvGetSize(frame);
    IplImage* greyImage = cvCreateImage(imgSize, IPL_DEPTH_8U, 1);
    IplImage* colourImage;
    IplImage* movingAverage = cvCreateImage(imgSize, IPL_DEPTH_32F, 3);
    IplImage* difference;
    IplImage* temp;
    IplImage* motionHistory = cvCreateImage(imgSize, IPL_DEPTH_8U, 3);
    CvRect bndRect = cvRect(0,0,0,0);
    CvPoint pt1, pt2;
    CvFont font;
    int prevX = 0;
    int numPeople = 0;
    char wow[65];
    int avgX = 0;
    bool first = true;
    int closestToLeft = 0;
    int closestToRight = 320;
    for(;;)
    {
        colourImage = cvQueryFrame(capture);
        if(!colourImage)
        {
            break;
        }
        if(first)
        {
            difference = cvCloneImage(colourImage);
            temp = cvCloneImage(colourImage);
            cvConvertScale(colourImage, movingAverage, 1.0, 0.0);
            first = false;
        }
        else
        {
            cvRunningAvg(colourImage, movingAverage, 0.020, NULL);
        }
        cvConvertScale(movingAverage, temp, 1.0, 0.0);
        cvAbsDiff(colourImage, temp, difference);
        cvCvtColor(difference, greyImage, CV_RGB2GRAY);
        cvThreshold(greyImage, greyImage, 80, 250, CV_THRESH_BINARY);
        cvSmooth(greyImage, greyImage, 2);
        cvDilate(greyImage, greyImage, 0, 1);
        cvErode(greyImage, greyImage, 0, 1);
        cvShowImage("back", greyImage);
        CvMemStorage* storage = cvCreateMemStorage(0);
        CvSeq* contour = 0;
        cvFindContours(greyImage, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
        for( ; contour != 0; contour = contour->h_next)
        {
            bndRect = cvBoundingRect(contour, 0);
            pt1.x = bndRect.x;
            pt1.y = bndRect.y;
            pt2.x = bndRect.x + bndRect.width;
            pt2.y = bndRect.y + bndRect.height;
            avgX = (pt1.x + pt2.x) / 2;
            if(avgX > 90 && avgX < 250)
            {
                if(closestToLeft >= 88 && closestToLeft <= 90)
                {
                    if(avgX > prevX)
                    {
                        numPeople++;
                        closestToLeft = 0;
                    }
                }
                else if(closestToRight >= 250 && closestToRight <= 252)
                {
                    if(avgX < prevX)
                    {
                        numPeople++;
                        closestToRight = 220;
                    }
                }
                cvRectangle(colourImage, pt1, pt2, CV_RGB(255,0,0), 1);
            }
            if(avgX > closestToLeft && avgX <= 90)
            {
                closestToLeft = avgX;
            }
            if(avgX < closestToRight && avgX >= 250)
            {
                closestToRight = avgX;
            }
            prevX = avgX;
        }
        // release the per-frame contour storage to avoid leaking memory each iteration
        cvReleaseMemStorage(&storage);
        cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, 0.8, 0.8, 0, 2);
        cvPutText(colourImage, _itoa(numPeople, wow, 10), cvPoint(60, 200), &font, cvScalar(0, 0, 300));
        cvShowImage("My Window", colourImage);
        cvShowImage("fore", greyImage);
        cvWaitKey(10);
    }
    cvReleaseImage(&temp);
    cvReleaseImage(&difference);
    cvReleaseImage(&greyImage);
    cvReleaseImage(&movingAverage);
    cvDestroyWindow("My Window");
    cvReleaseCapture(&capture);
    return 0;
}
In OpenCV's legacy C API, you can extract a region of interest from an image with the following call. In your code you would simply add this line, and the image would then be treated, for most purposes, as if it contained only the extracted region:
cvSetImageROI(colourImage, bndRect);
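To actually pull that region out into its own image with the legacy API, a sketch along these lines should work (using the bndRect computed in the loop above):
// Copy the ROI into a standalone image, then reset the ROI.
cvSetImageROI(colourImage, bndRect);
IplImage* extracted = cvCreateImage(cvSize(bndRect.width, bndRect.height),
                                    colourImage->depth, colourImage->nChannels);
cvCopy(colourImage, extracted, NULL);
cvResetImageROI(colourImage);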
In the OpenCV 2.0 API, your old image and "extracted region" image would be stored in separate Mat objects, but point to the same data:
Mat colourImage, extractedregion;
colourImage = imread("test.bmp");
extractedregion = colourImage(bndRect); // Creates only a header, no new image data
Many helpful OpenCV tutorials use the legacy API, but you should prefer the new one.
I know how to do it with the new OpenCV interface, rather than the "legacy" interface you are using.
It would be like this:
cv::Mat frame_m(frame);
...
cv::Mat region_m = frame_m(cv::Rect(bndRect));
IplImage region = region_m; // use &region when an IplImage* is needed.
If you don't want to mix interfaces, it is time to learn the new one.
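Note that frame_m(cv::Rect(...)) only creates a header over the same pixels. If you need the detected region as an independent image (to save it, say), a small sketch under that assumption:
// Deep-copy the bounding-box region so it owns its pixel data.
cv::Mat frame_m(colourImage);                           // wraps the IplImage, no copy
cv::Mat extracted = frame_m(cv::Rect(bndRect)).clone(); // deep copy of the ROI
cv::imwrite("detected_region.png", extracted);          // hypothetical output path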

resize UIImage in table cell programmatically

How can I resize the UIImage in a table cell?
cell.imageView.layer.masksToBounds = YES;
cell.imageView.layer.cornerRadius = 5.0;
cell.imageView.frame = CGRectMake(0, 0, 20, 30);
cell.imageView.image = [UIImage imageNamed:@"Diseases.png"];
It is not working.
Use this method: pass in the image and the target size, get the scaled image back, and show it in the cell.
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
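For example, a hypothetical call site in tableView:cellForRowAtIndexPath: (using the image from the question):
UIImage *original = [UIImage imageNamed:@"Diseases.png"];
cell.imageView.image = [[self class] imageWithImage:original scaledToSize:CGSizeMake(20, 30)];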
You can also make a custom UITableViewCell and design it to look however you want.
This worked for me.
///////////////////////////////// TO RESIZE IMAGE //////////////////////////////////
////////// FIRST WRITE THE DESIRED SIZE AND CALL THE FUNCTION BELOW //////////
CGSize newSize;
newSize.width = 30;
newSize.height = 30;
UIImage *imgSelectedNew = [self imageWithImage:savedImage scaledToSize:newSize];
- (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize
{
    //UIImage *sourceImage = sourceImage;//= self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor < heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        } else if (widthFactor > heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }
    // this is actually the interesting part:
    UIGraphicsBeginImageContext(targetSize);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (newImage == nil)
        NSLog(@"could not scale image");
    return newImage;
}
