DICOM dataset not loading correctly - ami.js

I am trying to load a DICOM series in all three directions (sagittal, axial, and coronal). The dataset is oriented axially, and displaying all the slices in axial orientation works fine. But when displaying the sagittal and coronal views, AMI.js only renders correctly up to the number of axial slices. You might understand better what I mean with pictures: picture one shows the rendering up to slice 147, and picture two shows slice 148. The series has 147 DICOM images.
correct display of a slice
incorrect display of slice 148
Do you have any idea why this happens and what I have to change so that all slices are displayed correctly? Displaying another set of data, where the DICOMs are in the coronal direction, works fine.

From the screenshots, it seems that the "intensityAuto" flag ("stackHelper.slice.intensityAuto") is set to "true".
If "intensityAuto" is "true", the slice "helper" tries to fetch the window level of the related frame; otherwise, it uses the values from the "stack". In addition, if "intensityAuto" is "true", the window level changes each time you change slices, which is not what you want in most cases.
If you try to fetch the window level from a frame that doesn't exist, the behavior is undefined.
That is consistent with your issue: you have 147 "frames", so the slice "helper" can retrieve a window level for the first 147 frames in each direction. If you try to access a frame index greater than 147, you get unexpected results.
In your case, you should turn "auto intensity" off in all the slice helpers and manage the window level yourself: get the window level from the stack and update the slice helpers manually whenever you want to adjust it.
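For reference, "window level" here just means the usual window center/width mapping from raw intensities to display values. A minimal sketch of that mapping in Python (generic DICOM-style windowing, not AMI.js API):

```python
import numpy as np

def apply_window(pixels, window_center, window_width):
    """Map raw intensities to [0, 1] using a window center/width pair.

    This is the standard DICOM-style windowing formula, shown only to
    illustrate what the slice helper does with the window level it fetches;
    it is not AMI.js code.
    """
    low = window_center - window_width / 2.0
    # Values below the window clamp to 0, above it to 1, linear in between.
    return np.clip((np.asarray(pixels, dtype=float) - low) / window_width, 0.0, 1.0)
```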
HTH,
Nicolas

Related

How do 3d engines decide where polygons go in a model?

I am trying to build my own 3D engine from a 2D one. So far everything works fine, but it's very inefficient because the wireframe model has lines between every point on the shape. I've been doing some research but haven't been able to find anything about what dictates where polygons go for optimal rendering.
Here is what a cube looks like in my program:
Is there some mathematical way to remove all the extra geometry?
Any advice really helps, thanks.
OK, so after longer than I'd like to admit, I figured out that you don't need to order faces by their z coordinate. Instead, just take the surface normal of each face and only render the face if the normal's component toward the camera is above a value (most of the time 0). Also, you'll want to use premade triangles from object files instead of assigning them to the faces yourself.
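A minimal sketch of that back-face test (plain Python/NumPy with hypothetical data, assuming counter-clockwise vertex winding for front faces):

```python
import numpy as np

def is_front_facing(v0, v1, v2, camera_pos):
    """Return True if the triangle (v0, v1, v2) faces the camera.

    Assumes counter-clockwise vertex winding for front faces; flip the
    comparison if your models use the opposite convention.
    """
    v0, v1, v2 = (np.asarray(v) for v in (v0, v1, v2))
    normal = np.cross(v1 - v0, v2 - v0)        # face surface normal
    to_camera = np.asarray(camera_pos) - v0    # direction from the face to the camera
    return float(np.dot(normal, to_camera)) > 0.0

# Only draw the triangles that face the camera:
# visible = [tri for tri in triangles if is_front_facing(*tri, camera_pos)]
```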

automatically repairing stuck pixels in scan data

I'm working on developing software for an electron microscope. It works by focusing a beam of electrons at a specific part of the sample and then recording the image using a sensor.
The scan data is saved as a 4D array where the first 2 dimensions are the location (x and y) at which the beam was focused and the other 2 dimensions are the raw sensor output which is a 2D image.
While analyzing the data, I realized that there are some stuck pixels, which I would ideally be able to repair automatically in software.
Here is an example:
As you can see, the data shape is 256,256,256,256 which means we scanned 256x256 points and the sensor data is a 256x256 image.
On the right data browser window (called nav), you can see the scan location, which is 0,0 (also marked on the left window by scanY and scanX). I drew circles around a few of the stuck pixels. Here is another scan location for reference:
I can automatically detect these pixels by unraveling the sensor data and checking for locations where the scan values are always the same, but I'm not sure how to repair these.
My first guess was reading the data from all the pixels that are next to this pixel and averaging them, then storing the average value instead of the stuck value, but I'm not sure if this is a good approach.
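A hedged sketch of exactly that idea with NumPy, assuming the array is indexed as data[scan_y, scan_x, det_y, det_x]; a median of the detector-space neighbourhood is used instead of a plain mean, since the median is robust to the stuck value itself:

```python
import numpy as np

def repair_stuck_pixels(data):
    """data: 4D array (scan_y, scan_x, det_y, det_x). Returns a repaired copy."""
    data = np.asarray(data)
    flat = data.reshape(-1, data.shape[2], data.shape[3])   # (n_scan_positions, det_y, det_x)

    # A detector pixel is "stuck" if it has the same value at every scan position.
    stuck = np.all(flat == flat[0], axis=0)

    repaired = data.copy()
    for y, x in zip(*np.nonzero(stuck)):
        # 3x3 detector-space neighbourhood around the stuck pixel, clipped at the border.
        y0, y1 = max(y - 1, 0), min(y + 2, data.shape[2])
        x0, x1 = max(x - 1, 0), min(x + 2, data.shape[3])
        patch = data[:, :, y0:y1, x0:x1].reshape(data.shape[0], data.shape[1], -1)
        # Replace the stuck value at every scan position with the neighbourhood median.
        repaired[:, :, y, x] = np.median(patch, axis=-1)
    return repaired
```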
How does professional software such as Photoshop "repair" or "hide" a defect in a picture? Are there any known algorithms for this issue? I did a bit of searching but didn't manage to find much.

Turn an image into lines and circles

I need to be able to turn a black and white image into series of lines (start, end points) and circles (start point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
The problem is, I don't want to overcomplicate things - I could represent any image with loads of small lines, but that would take a long time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.), but none gave reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in here, but I'd suggest OpenCV if possible. If not, most decent CV libraries ought to support the features that I'm about to describe here.
You don't say if the input is already composed of simple shapes (lines and polygons) or not. Assuming that it's not, i.e. it's a photo or a frame from a video for example, you'll need to do some edge extraction to find the lines that you are going to model. Use a Canny or other edge detector to convert the image into a set of edges.
I suggest that you then extract circles, as they are the richest feature that you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, you need to remove them from the edge image (to avoid duplicating them in the line-processing step below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment that it's part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV) to extract line segments, which gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
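A minimal sketch of that pipeline in Python with OpenCV (the file name and all thresholds are placeholders that will need tuning for your images):

```python
import cv2
import numpy as np

# Load the input as grayscale and extract edges with Canny.
img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Locate circles with the Hough circle transform (it runs its own edge detection internally).
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=40, minRadius=5, maxRadius=100)

# Erase the detected circles from the edge image so the line stage does not pick them up again.
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(edges, (int(x), int(y)), int(r), 0, thickness=5)

# Extract line segments with the probabilistic Hough transform.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=20, maxLineGap=5)

# circles holds (x, y, radius) triples; lines holds (x1, y1, x2, y2) segments for the output device.
```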

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js with a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point 0,0,0. Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
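A rough sketch of Solution 1's normalization (plain NumPy, approximating the bounding sphere by the centroid plus the farthest point rather than computing the true minimal bounding sphere):

```python
import numpy as np

def normalize_points(points, target_radius=5.0):
    """Translate and scale points so they fit inside a sphere of target_radius at the origin."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                              # approximate sphere center
    radius = np.linalg.norm(points - center, axis=1).max()    # farthest point from it
    scale = target_radius / radius if radius > 0 else 1.0
    return (points - center) * scale
```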
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values for my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
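One way to derive those values dynamically (a hedged sketch of the idea, not the actual graphosaurus code): compute the data's bounding sphere and fit the near/far planes tightly around it relative to the camera, since keeping the far/near ratio small is what preserves depth-buffer precision.

```python
import numpy as np

def near_far_for_points(points, camera_pos, margin=1.05):
    """Pick near/far planes that just enclose the point cloud as seen from camera_pos."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()
    dist = np.linalg.norm(np.asarray(camera_pos, dtype=float) - center)

    near = max((dist - radius) / margin, dist * 1e-3)   # keep near strictly positive
    far = (dist + radius) * margin
    return near, far
```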
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles: if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

Counting foreground objects in a binary image

I have an image sequence (video). I would like to count the number of objects in the sequence, but the main objective is to count each object only once, not in each and every frame, since an object may exist for several frames. My idea is to count the objects as they exit the screen, because there are fewer occlusions there. I am thinking of doing this by scanning the bottom part of the image for non-zero pixels.
I have a CV_FILLED binary image (from the rectangle function) where I want to do the scanning, then create an instance of an object if one is found. But this scanning will not check each and every pixel along the horizontal line, just certain sections.
For example, we could do it over ranges, say certain columns, then skip by a margin.
A sample binary image I have is attached. This is an image obtained from the feed. I do not want to count only the objects in this image, but also those that are still coming.
A full picture of detected objects is attached here. Your guidance or constructive criticism is welcome.
* I do not want to use CVBlob
If you don't want to use cvBlobLib, you could use the contour detection that is part of OpenCV.
There is a tutorial on the website.
The doc for the method is here. Your image seems pretty simple, but if you get blobs with occlusions, you will want to look at the CV_RETR_EXTERNAL constant to get only the outer contours.
That is what I usually use, even though it needs a bit more work to use the results of the method.
Hope this helps.
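A minimal sketch of that with the OpenCV Python bindings (the file name is a placeholder; note that OpenCV 3.x returns an extra value from findContours):

```python
import cv2

# Load the binary frame and keep only the outer contour of each blob.
binary = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
contours, _hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                        cv2.CHAIN_APPROX_SIMPLE)
print("blobs in this frame:", len(contours))
```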
If the squares do not overlap at the bottom, I suggest the following:
Scan the very bottom row of the image and identify the connected runs of white pixels. Each white run will correspond to one square. Save the center of each white line segment and its length. In the next frame, do the same and associate the corresponding line segments with the previous ones (same length and center very close). When you cannot find a corresponding line segment anymore, the square has moved out of the image, which means you can increase your square counter by one. Note that line segments at the right and left ends of the row will have decreasing length with every frame.
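A hedged sketch of that bookkeeping with NumPy (the matching rule and the tolerance are illustrative, not a complete tracker):

```python
import numpy as np

def bottom_segments(binary_frame):
    """Return (center, length) for each run of white pixels on the bottom row."""
    row = binary_frame[-1] > 0
    edges = np.diff(row.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if row[0]:
        starts = np.r_[0, starts]
    if row[-1]:
        ends = np.r_[ends, row.size]
    return [((s + e) / 2.0, e - s) for s, e in zip(starts, ends)]

def count_exits(prev_segments, cur_segments, max_center_shift=10):
    """Count segments from the previous frame that have no close match in the current one."""
    exited = 0
    for center, _length in prev_segments:
        if not any(abs(center - c) <= max_center_shift for c, _l in cur_segments):
            exited += 1          # that square has left the image
    return exited
```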
Thx guys. I managed to solve this already. I used small ROIs along the paths of the squares and checked countNonZero() within each ROI.
I kept checking with boolean variables to see whether the ROI still had white pixels; if not, I incremented the counter. It worked well, and I was able to count.
Thx for your input...
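For reference, a minimal sketch of that ROI check (the ROI coordinates and the frame source are placeholders):

```python
import cv2

# Hypothetical ROI along one square's path: x, y, width, height.
roi_x, roi_y, roi_w, roi_h = 100, 220, 40, 20
object_in_roi = False
counter = 0

frames = []  # placeholder: fill with the binary frames from your video feed

for frame in frames:
    roi = frame[roi_y:roi_y + roi_h, roi_x:roi_x + roi_w]
    has_white = cv2.countNonZero(roi) > 0
    if object_in_roi and not has_white:
        counter += 1             # the square just left this ROI
    object_in_roi = has_white
```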

Resources