SharpDX: Creating TextureCube from set of images - direct3d

So I have 6 separate images and I would like to build a TextureCube out of them. What's the best way to go about it? Right now this is what I have, but I'm getting a memory access violation when the TextureCube tries to create itself.
SharpDX.DataBox[] textureData = new SharpDX.DataBox[6];
for (int i = 0; i < 6; i++)
{
    Texture2D tex = Texture2D.Load(device, resources[i].Filepath);
    SharpDX.Direct3D11.Texture2D staged = (SharpDX.Direct3D11.Texture2D)tex.ToStaging();
    textureData[i] = new SharpDX.DataBox(staged.NativePointer, staged.Description.Width, 0);
}
SharpDX.Toolkit.Graphics.TextureCube cube = SharpDX.Toolkit.Graphics.TextureCube.New(
    device,
    2048,
    PixelFormat.R8G8B8A8.UNorm,
    textureData);
The texture load is working fine. I can load a texture and then create a TextureCube from a single image with no problem. That's not much use, though. When I try to create a cube using the raw data from my 6 separate images I get the memory exception.
Result Message:
Test method ResourceLoadingTests.LoadCubemapAndSave threw exception:
System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Result StackTrace:
at SharpDX.Direct3D11.Device.CreateTexture2D(Texture2DDescription& descRef, DataBox[] initialDataRef, Texture2D texture2DOut)
at SharpDX.Direct3D11.Texture2D..ctor(Device device, Texture2DDescription description, DataBox[] data)
at SharpDX.Toolkit.Graphics.Texture2DBase..ctor(GraphicsDevice device, Texture2DDescription description2D, DataBox[] dataBoxes)
at SharpDX.Toolkit.Graphics.TextureCube..ctor(GraphicsDevice device, Texture2DDescription description2D, DataBox[] dataBoxes)
at SharpDX.Toolkit.Graphics.TextureCube.New(GraphicsDevice device, Int32 size, PixelFormat format, DataBox[] textureData, TextureFlags flags, ResourceUsage usage)
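One thing that stands out: SharpDX.DataBox expects a pointer to the first pixel plus the row pitch in bytes, while the code above passes staged.NativePointer (the COM interface pointer, not pixel memory) and the width in pixels, which could explain the access violation inside CreateTexture2D. Below is a minimal sketch of one way to get valid DataBoxes by mapping each staging texture; the cast of the Toolkit device to the D3D11 device and the MapSubresource overload used here are assumptions, not verified against this exact Toolkit version.
// Sketch only: map each staging texture and use the returned DataBox
// (DataPointer + RowPitch) as the cube face's initial data.
var d3dDevice = (SharpDX.Direct3D11.Device)device;   // assumed conversion from the Toolkit GraphicsDevice
var context = d3dDevice.ImmediateContext;
var staging = new SharpDX.Direct3D11.Texture2D[6];
SharpDX.DataBox[] textureData = new SharpDX.DataBox[6];
for (int i = 0; i < 6; i++)
{
    Texture2D tex = Texture2D.Load(device, resources[i].Filepath);
    staging[i] = (SharpDX.Direct3D11.Texture2D)tex.ToStaging();
    // The DataBox returned by Map carries a pointer to the pixel data and the row pitch in bytes.
    textureData[i] = context.MapSubresource(staging[i], 0,
        SharpDX.Direct3D11.MapMode.Read, SharpDX.Direct3D11.MapFlags.None);
}
SharpDX.Toolkit.Graphics.TextureCube cube = SharpDX.Toolkit.Graphics.TextureCube.New(
    device, 2048, PixelFormat.R8G8B8A8.UNorm, textureData);
for (int i = 0; i < 6; i++)
    context.UnmapSubresource(staging[i], 0);   // safe to unmap once the cube has copied the data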

Related

Pyueye image saving with wrong resolution

I'm personally pretty new to programming and I am trying to save a high-MP image from an IDS camera using the pyueye module with Python.
My code works to save the image, but the problem is that it saves it as a 1280x720 image inside a 4192x3104 file.
I have no idea why it's saving the small image inside the larger file, and I'm asking if anyone knows what I am doing wrong and how I can fix it so the image is the whole 4192x3104.
from pyueye import ueye
import ctypes
hcam = ueye.HIDS(0)
pccmem = ueye.c_mem_p()
memID = ueye.c_int()
hWnd = ctypes.c_voidp()
ueye.is_InitCamera(hcam, hWnd)
ueye.is_SetDisplayMode(hcam, 0)
sensorinfo = ueye.SENSORINFO()
ueye.is_GetSensorInfo(hcam, sensorinfo)
ueye.is_AllocImageMem(hcam, sensorinfo.nMaxWidth, sensorinfo.nMaxHeight,24, pccmem, memID)
ueye.is_SetImageMem(hcam, pccmem, memID)
ueye.is_SetDisplayPos(hcam, 100, 100)
nret = ueye.is_FreezeVideo(hcam, ueye.IS_WAIT)
print(nret)
FileParams = ueye.IMAGE_FILE_PARAMS()
FileParams.pwchFileName = "python-test-image.bmp"
FileParams.nFileType = ueye.IS_IMG_BMP
FileParams.ppcImageMem = None
FileParams.pnImageID = None
nret = ueye.is_ImageFile(hcam, ueye.IS_IMAGE_FILE_CMD_SAVE, FileParams, ueye.sizeof(FileParams))
print(nret)
ueye.is_FreeImageMem(hcam, pccmem, memID)
ueye.is_ExitCamera(hcam)
The size of the image depends on the sensor size of the camera. By printing sensorinfo.nMaxWidth and sensorinfo.nMaxHeight you will get the maximum size of the image the camera captures. I think it depends on the model of the camera; for me it is 2056x1542.
Could you please elaborate on the last sentence of the question?
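For reference, a minimal sketch of printing those values, using only the calls already present in the question's code (untested here; whether you need .value to get plain integers depends on the pyueye version):
from pyueye import ueye
import ctypes

hcam = ueye.HIDS(0)
hWnd = ctypes.c_voidp()
ueye.is_InitCamera(hcam, hWnd)

sensorinfo = ueye.SENSORINFO()
ueye.is_GetSensorInfo(hcam, sensorinfo)
# These may print as ctypes-style wrappers; if so, .value gives the plain numbers.
print(sensorinfo.nMaxWidth, sensorinfo.nMaxHeight)

ueye.is_ExitCamera(hcam)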

How does an ioctl() call the driver code

I am working on a testing tool for nvme-cli (written in C, runs on Linux).
For SSD validation purposes, I was actually looking for a custom command (e.g. an I/O command: write, then read the same data back, and finally compare whether both are the same).
For the read, the ioctl() function is used as shown in the code below.
struct nvme_user_io io = {
    .opcode   = opcode,
    .flags    = 0,
    .control  = control,
    .nblocks  = nblocks,
    .rsvd     = 0,
    .metadata = (__u64)(uintptr_t) metadata,
    .addr     = (__u64)(uintptr_t) data,
    .slba     = slba,
    .dsmgmt   = dsmgmt,
    .reftag   = reftag,
    .appmask  = appmask,
    .apptag   = apptag,
};
err = ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io);
Can I see where exactly the control of execution goes, in order to understand the read?
Also, I want to have another command that looks like
err = ioctl(fd, NVME_IOCTL_WRITE_AND_COMPARE_IO, &io);
so that I can internally do a write, then read the same location, and finally compare the two to ensure that the disk contains only the data that I wanted to write.
Since I am new to NVMe/ioctl(), if there are any mistakes please correct me.
nvme_io() is the main command handler; it accepts as a parameter the NVMe opcode that you want to send to your device. According to the standard, you have separate commands (opcodes) for read, write and compare. You could either send those commands separately, or add a vendor-specific command to calculate what you need.
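To illustrate the "send those commands separately" option: the same NVME_IOCTL_SUBMIT_IO ioctl from the question can carry different opcodes (in the NVM command set, Write is 0x01, Read is 0x02 and Compare is 0x05). A rough sketch, with metadata and protection-info fields zeroed and error handling kept minimal; it is not taken from the nvme-cli sources:
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

/* Submit one NVM I/O command (write/read/compare) for nblocks+1 blocks starting at slba. */
static int submit_io(int fd, __u8 opcode, void *data, __u64 slba, __u16 nblocks)
{
    struct nvme_user_io io;

    memset(&io, 0, sizeof(io));
    io.opcode  = opcode;                 /* 0x01 = Write, 0x02 = Read, 0x05 = Compare */
    io.addr    = (__u64)(uintptr_t)data;
    io.slba    = slba;
    io.nblocks = nblocks;                /* zero-based: 0 means one block */

    return ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io);
}

/* Write a buffer, then let the drive compare it against the same LBA range. */
static int write_then_compare(int fd, void *buf, __u64 slba, __u16 nblocks)
{
    int err = submit_io(fd, 0x01, buf, slba, nblocks);    /* Write   */
    if (err)
        return err;
    return submit_io(fd, 0x05, buf, slba, nblocks);       /* Compare */
}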

Is there any way to sample audio using OpenSL on android with different sampling rates and buffer sizes?

I have downloaded the audio-echo app from the Android NDK portal for OpenSL. Due to the lack of documentation, I'm not able to identify how to change the sampling rate and buffer size of the audio in and out.
If anybody has any idea on how to:
Change the buffer size and sampling rate in OpenSL
Read the buffers so they can be fed to C code for processing
Feed the processed output to the output module of OpenSL so it reaches the speakers
An alternative, I feel, is to read at the preferred sampling rate and buffer size but downsample/upsample in the code itself, using a circular buffer to get the desired data. But how do we read and feed the data in OpenSL?
In the OpenSL ES API, there are calls to create either a Player or a Recorder:
SLresult (*CreateAudioPlayer) (
SLEngineItf self,
SLObjectItf * pPlayer,
SLDataSource *pAudioSrc,
SLDataSink *pAudioSnk,
SLuint32 numInterfaces,
const SLInterfaceID * pInterfaceIds,
const SLboolean * pInterfaceRequired
);
SLresult (*CreateAudioRecorder) (
SLEngineItf self,
SLObjectItf * pRecorder,
SLDataSource *pAudioSrc,
SLDataSink *pAudioSnk,
SLuint32 numInterfaces,
const SLInterfaceID * pInterfaceIds,
const SLboolean * pInterfaceRequired
);
Note that both of these take a SLDataSource *pAudioSrc parameter.
To use a custom playback rate or recording rate, you have to set up this data source properly.
I use an 11 kHz playback rate using this code:
// Configure data format.
SLDataFormat_PCM pcm;
pcm.formatType = SL_DATAFORMAT_PCM;
pcm.numChannels = 1;
pcm.samplesPerSec = SL_SAMPLINGRATE_11_025;
pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
pcm.containerSize = 16;
pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
// Configure Audio Source.
SLDataSource source;
source.pFormat = &pcm;
source.pLocator = &bufferQueue;
To feed data to the speakers, a buffer queue is used that is filled by a callback. To set this callback, use SLAndroidSimpleBufferQueueItf, documented in section 8.12 (SLBufferQueueItf) of the OpenSL ES specification.
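For the "read the buffers and feed them to C code" part, the usual Android pattern is to register a callback on the player's SLAndroidSimpleBufferQueueItf and re-enqueue a buffer each time the previous one has been consumed. A sketch, with object creation, Realize and GetInterface omitted; the names and buffer size are placeholders:
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

#define FRAMES_PER_BUFFER 1024                 /* placeholder; smaller buffers = lower latency */

static short playBuffer[FRAMES_PER_BUFFER];    /* mono 16-bit PCM, matching the format above */
static SLAndroidSimpleBufferQueueItf bqPlayerBufferQueue;  /* obtained via GetInterface on the player */

/* Called by OpenSL whenever the previously enqueued buffer has been consumed. */
static void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    /* Fill playBuffer with the next block of samples (your processing code goes here),
       then hand the buffer back to the queue. */
    (*bq)->Enqueue(bq, playBuffer, sizeof(playBuffer));
}

/* After realizing the player object and fetching bqPlayerBufferQueue: */
static void startPlayback(void)
{
    (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
    /* Prime the queue once; every later refill happens inside the callback. */
    (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, playBuffer, sizeof(playBuffer));
}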

VTK using VoxelModeller to build Voxel Space of Point Cloud

Here's what I'd like to do: I have a .pcd file (the PCL standard format) in which a point cloud is stored, and I would like to build a voxel representation of it and then extract an isosurface. If I'm not wrong, I should follow this example, http://www.vtk.org/Wiki/VTK/Examples/Cxx/Modelling/MarchingCubes, where I should set my pcd as input to vtkVoxelModeller instead of the sphere.
So I tried in this way:
//-------------------------------------------------------------------------
// loading Point Cloud
//-------------------------------------------------------------------------
pcl::PointCloud<PointType>::Ptr cloud (new pcl::PointCloud<PointType>);
std::string inputFilename = "GiraffeHead_2.pcd";
if (pcl::io::loadPCDFile<PointType> (inputFilename.c_str(), *cloud) == -1)
{
    PCL_ERROR ("Couldn't read file test_pcd.pcd \n");
    return (-1);
}
PointType min_pt,max_pt;
pcl::getMinMax3D(*cloud,min_pt,max_pt);
...
//-------------------------------------------------------------------------
// copying Point Cloud into PolyData
//-------------------------------------------------------------------------
vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
for (size_t i = 0; i < cloud->points.size (); ++i)
    points->InsertNextPoint(cloud->points[i].x, cloud->points[i].y, cloud->points[i].z);
vtkSmartPointer<vtkPolyData> PCData = vtkSmartPointer<vtkPolyData>::New();
PCData->SetPoints(points);
The rest of the code is taken from the example; the only modifications I make are to set the bounds according to my surface and:
voxelModeller->SetInputConnection(PCData->GetProducerPort());
When I run the executable I get an empty window :(
Since I'm a newbie with VTK and I strongly need it for my research project, I'd be very glad if someone could explain to me what I'm doing wrong and point out a correct solution.
Thanks
I found out that this tutorial was deprecated.
Following:
http://www.vtk.org/Wiki/VTK/Examples/Cxx/PolyData/ImplicitModeller
and
http://www.paraview.org/Wiki/ParaView/PCL_Plugin
did the trick!
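For anyone hitting the same wall: GetProducerPort() belongs to the pre-VTK-6 pipeline, which is why that MarchingCubes page was marked deprecated. A rough sketch of the vtkImplicitModeller route from the first link, reusing the PCData poly data built in the question; the sample dimensions, maximum distance and contour value are placeholders to tune:
#include <vtkSmartPointer.h>
#include <vtkPolyData.h>
#include <vtkImplicitModeller.h>
#include <vtkContourFilter.h>

// PCData is the vtkPolyData filled from the point cloud in the question.
vtkSmartPointer<vtkImplicitModeller> modeller =
    vtkSmartPointer<vtkImplicitModeller>::New();
modeller->SetInputData(PCData);                 // VTK >= 6 replacement for GetProducerPort()
modeller->SetSampleDimensions(64, 64, 64);      // voxel grid resolution (placeholder)
modeller->SetMaximumDistance(0.1);              // fraction of the bounding box diagonal (placeholder)
modeller->SetModelBounds(PCData->GetBounds());  // use the cloud's own bounds

// Extract an isosurface from the implicit distance volume.
vtkSmartPointer<vtkContourFilter> surface =
    vtkSmartPointer<vtkContourFilter>::New();
surface->SetInputConnection(modeller->GetOutputPort());
surface->SetValue(0, 0.02);                     // iso value to tune (placeholder)
surface->Update();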

How can I reload a wav file using OpenAL (Java)

I made an application which loads a .wav file, but it loads it only once.
I want to be able to reload a wav file after the first track ends. How can I do that in Java?
Also, if I hit play twice after the end of the track (while the selected track is loaded) it throws an
IllegalThreadStateException.
if (threadStatus != 0 || this != me)
    throw new IllegalThreadStateException();
The code I use to load the file is this:
OpenAL openal = new OpenAL();
source = openal.createSource(new File(Audioplayer.path));
System.out.println(source.toString());
source.play();
source.setGain(0.75f); // 75% volume
source.setPitch(0.85f); // 85% of the original pitch
source.setPosition(0, 0, 0); // -1 means 1 unit to the left
source.setLooping(false); // Loop the sound effect
j=source.getBuffer();
System.out.println(j);
for (i = 1; i <= 10000; i++) {
    Thread.sleep(1); // Wait for 10 seconds
}
Thread.sleep(10000); // Wait for 10 seconds
source.close();
openal.close();
Just surround that area with a loop and it will re-run that bit of code every time it finishes.
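A sketch of what that loop could look like, built only from the calls in the question's code; Source is assumed to be the wrapper's source type, playCount and trackLengthMillis are placeholders, and the fixed sleep is only a stand-in for real end-of-track detection:
// Reload the file into a fresh source on every pass instead of reusing a closed one.
void playRepeatedly(int playCount, long trackLengthMillis) throws Exception {
    OpenAL openal = new OpenAL();
    try {
        for (int pass = 0; pass < playCount; pass++) {
            Source source = openal.createSource(new File(Audioplayer.path));
            source.setGain(0.75f);
            source.setLooping(false);
            source.play();
            Thread.sleep(trackLengthMillis);  // crude: wait roughly one track length
            source.close();                   // release this pass's source before reloading
        }
    } finally {
        openal.close();
    }
}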
