Skimage/LAB-Transformation and grayscale images - transform

I am converting several images from RGB color space to LAB color space using skimage.rgb2lab.
In general this works fine, except for some grayscale images: they only contain 2 color channels and are thus a 2-dimensional array, while rgb2lab requires a 3-dimensional array.
Is there any way to transform a grayscale image with only 2 channels into LAB space?

There are some slightly odd aspects to your question, but in an effort to try and move you along, I would suggest you synthesize a zero-filled additional channel and stack that with your existing 2 channels to make 3. Along these lines:
import numpy as np

extraChannel = np.zeros_like(twoChannel[..., 0])      # zero-filled channel, same height and width
threeChannel = np.dstack((twoChannel, extraChannel))  # stack along the last axis to get (H, W, 3)
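If the input is really a single luminance channel (possibly plus an alpha channel), another option is to drop the alpha and replicate the gray values into three identical channels before converting. A minimal sketch, where example.png is a placeholder filename:

from skimage import io, color

img = io.imread("example.png")            # placeholder input
if img.ndim == 3 and img.shape[-1] == 2:  # gray + alpha: keep only the gray channel
    img = img[..., 0]
rgb = color.gray2rgb(img)                 # replicate the single channel into R, G and B
lab = color.rgb2lab(rgb)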


Stitch Multiple Sequentially Taken Images

I have a set of images of a pipe taken by a camera that rotates 360 degrees and captures an image every x degrees, which produces a constant overlap between the images. I need to stitch these images together so that they can be analysed as one big image, perhaps a panoramic image or orthomosaic. Here's a couple of examples:
Because it's a pipe, there's a slight curve in each image, so I am thinking we can first "unroll" each image. After that, perhaps all those images can be stitched together.
I have tried unwrapping using a "six-point" method (you define the cross-section of the cylinder with 3 points from the top and 3 from the bottom), like unwrapping a sticker on a bottle, which is not terrible (and can of course be improved). Here's how the "unwrapping" looks:
Second, SIFT is not working well for stitching; I think it's because the images are quite similar in nature. I am not sure how best to stitch them, and this is where I need help. I need to align the crests of the pipe and stitch the images seamlessly - there could be up to 90 or 120 images. Would love any help here. Thanks.
This is the output of a piece of software, which is quite bad:
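One possible starting point for the alignment step (not from the original thread, just a sketch): when the unwrapped frames are mostly translated copies of each other, phase correlation can estimate the pairwise shift without relying on SIFT-style keypoints. Assuming the unwrapped frames are already loaded with OpenCV:

import cv2
import numpy as np

def pairwise_shift(frame_a, frame_b):
    # Estimate the (dx, dy) translation between two overlapping frames
    a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY).astype(np.float32)
    b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY).astype(np.float32)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response  # response is a confidence score

# Accumulate the offsets of consecutive frames and paste each frame into a
# wide canvas at its accumulated offset to build up the panorama.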

Can I create a color image with a GAN consisting of only FC layers?

I understand that in order to create a color image, the three-channel information of the input data must be maintained inside the network. However, the data must be flattened to pass through a linear layer. If so, can a GAN consisting only of FC layers generate only black-and-white images?
Your fully connected network can generate whatever you want, even three-channel outputs. However, the question is: does it make sense to do so? Once flattened, your input inherently loses the spatial and feature consistency that is naturally available when it is represented as an RGB map.
Remember that an RGB image can be thought of as a 3-element feature vector describing each spatial location of a 2D image. In other words, each of the three channels gives additional information about a given pixel; treating these channels as separate entities is a loss of information.
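To make the first point concrete, here is a minimal PyTorch sketch (the layer sizes and image resolution are made up) of a fully connected generator whose flat output is simply reshaped into a three-channel image; whether it learns coherent images is, as noted above, a separate question:

import torch
import torch.nn as nn

class FCGenerator(nn.Module):
    def __init__(self, latent_dim=100, channels=3, height=32, width=32):
        super().__init__()
        self.shape = (channels, height, width)
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 512),
            nn.ReLU(),
            nn.Linear(512, channels * height * width),
            nn.Tanh(),  # outputs in [-1, 1], as is common for GAN images
        )

    def forward(self, z):
        flat = self.net(z)
        return flat.view(-1, *self.shape)  # reshape to (batch, 3, H, W)

# FCGenerator()(torch.randn(8, 100)).shape  ->  torch.Size([8, 3, 32, 32])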

CycleGAN with 1-channel tiffs both as input and as output

I am running CycleGAN with different types of tiffs in trainA and trainB. The tiffs are 256x256 pixels in size and have 1 channel per pixel. I am using tiffs to have a wide range of values.
I changed the code as suggested in the pytorch-CycleGAN-and-pix2pix
repo (https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/320 and similar), but what I get out during training in ./checkpoints are three-channel PNGs. Do you think it would be possible to change the code so that it goes from 1-channel tiff to 1-channel tiff with no information loss? As far as I understand, at present the code converts the imported files to PNGs along the way. In other words: I would like my tensors to be [256*256*int_range,1]. Thanks for the help!
Have you tried adding the parameters --input_nc 1 --output_nc 1 during training? It will convert the number of channels from 3 to 1.
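For example, a training invocation along these lines (the dataroot and experiment name are placeholders for your own setup):

python train.py --dataroot ./datasets/my_tiffs --name my_experiment --model cycle_gan --input_nc 1 --output_nc 1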

Reducing / Enhancing known features in an image

I am a microbiology student new to computer vision, so any help will be extremely appreciated.
This question involves microscope images that I am trying to analyze. The goal is to count bacteria in an image, but I need to pre-process the image first to enhance any bacteria that are not fluorescing very brightly. I have thought about several different techniques, like enhancing the contrast or sharpening the image, but they aren't exactly what I need.
I want to reduce the noise (the black spaces) to 0 on the RGB scale and enhance the green spaces. I originally started writing a for loop in OpenCV with threshold limits to change each pixel, but I know that there is a better way.
Here is an example that I did in Photoshop of the original image vs. what I want.
Original Image and enhanced Image.
I need to learn to do this in a Python environment so that I can automate the process. As I said, I am new, but I am familiar with Python's OpenCV, mahotas, numpy, etc., so I am not attached to a particular package. I am also very new to these techniques, so I would appreciate it even if you just point me in the right direction.
Thanks!
You can have a look at histogram equalization. This would emphasize the green and reduce the black range; OpenCV has a tutorial on it. Afterwards you can experiment with different thresholding mechanisms to find the one that best isolates the bacteria.
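A minimal sketch of what that might look like with OpenCV, applied to the green channel only; the filename, the CLAHE parameters and the threshold value are assumptions to tune (CLAHE is the adaptive, tile-based variant of histogram equalization, plain cv2.equalizeHist(green) also works):

import cv2

img = cv2.imread("bacteria.png")  # placeholder filename; OpenCV loads images as BGR
green = img[:, :, 1]              # keep only the green channel

# Contrast-limited adaptive histogram equalization on the green channel
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# A simple threshold afterwards pushes the dark background to 0
_, mask = cv2.threshold(enhanced, 50, 255, cv2.THRESH_BINARY)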
Use TensorFlow:
Create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better).
Create a positive and a negative set of images.
Update the default TensorFlow example with your images.
Make sure you have a bunch of convolution layers (see the sketch below).
Train and test.
TensorFlow is perfect for such tasks and you don't need to worry about different intensity levels.
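For illustration, here is a minimal Keras sketch of the kind of small convolutional classifier this answer has in mind; the patch size, layer sizes and the commented-out training call are assumptions, not part of the original answer:

import tensorflow as tf
from tensorflow.keras import layers

# Tiny binary classifier: does a 32x32 patch contain a bacterium or not?
model = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=10, validation_split=0.2)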
I initially tried histogram equalization but did not get the desired results, so I used adaptive thresholding with a mean filter:
import cv2  # assuming img already holds the grayscale microscope image
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied the median filter:
median = cv2.medianBlur(th, 5)
Finally I applied morphological closing with the ellipse kernel:
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
The OpenCV documentation on these operations will help you modify this result however you want.
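If the end goal is the count itself, here is a short follow-up sketch using connected-component labelling on the cleaned-up mask; the minimum area is an assumed value you would tune for your magnification:

num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
min_area = 20  # assumed value, tune for your images
# Label 0 is the background; keep only components above the area threshold
count = sum(1 for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)
print("bacteria counted:", count)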

How to average a stack of images together using Pillow? [duplicate]

For example, I have 100 pictures with the same resolution, and I want to merge them into one picture. For the final picture, the RGB value of each pixel is the average of the 100 pictures' values at that position. I know the getdata function can work in this situation, but is there a simpler and faster way to do this in PIL (Python Imaging Library)?
Let's assume that your images are all .png files and they are all stored in the current working directory. The python code below will do what you want. As Ignacio suggests, using numpy along with PIL is the key here. You just need to be a little bit careful about switching between integer and float arrays when building your average pixel intensities.
import os, numpy
from PIL import Image

# Access all PNG files in the current directory
allfiles = os.listdir(os.getcwd())
imlist = [filename for filename in allfiles if filename[-4:] in [".png", ".PNG"]]

# Assuming all images are the same size, get dimensions of first image
w, h = Image.open(imlist[0]).size
N = len(imlist)

# Create a numpy array of floats to store the average (assume RGB images)
arr = numpy.zeros((h, w, 3), dtype=float)

# Build up average pixel intensities, casting each image as an array of floats
for im in imlist:
    imarr = numpy.array(Image.open(im), dtype=float)
    arr = arr + imarr / N

# Round values in array and cast as 8-bit integer
arr = numpy.array(numpy.round(arr), dtype=numpy.uint8)

# Generate, save and preview final image
out = Image.fromarray(arr, mode="RGB")
out.save("Average.png")
out.show()
The image below was generated from a sequence of HD video frames using the code above.
I find it difficult to imagine a situation where memory is an issue here, but in the (unlikely) event that you absolutely cannot afford to create the array of floats required for my original answer, you could use PIL's blend function, as suggested by @mHurley, as follows:
# Alternative method using PIL blend function
avg = Image.open(imlist[0])
for i in range(1, N):
    img = Image.open(imlist[i])
    avg = Image.blend(avg, img, 1.0 / float(i + 1))
avg.save("Blend.png")
avg.show()
You could derive the correct sequence of alpha values, starting with the definition from PIL's blend function:
out = image1 * (1.0 - alpha) + image2 * alpha
Think about applying that function recursively to a vector of numbers (rather than images) to get the mean of the vector. For a vector of length N, you would need N-1 blending operations, with N-1 different values of alpha.
However, it's probably easier to think intuitively about the operations. At each step you want the avg image to contain equal proportions of the source images from earlier steps. When blending the first and second source images, alpha should be 1/2 to ensure equal proportions. When blending the third with the average of the first two, you would like the new image to be made up of 1/3 of the third image, with the remainder made up of the average of the previous images (the current value of avg), and so on.
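As a quick sanity check of that recursion on plain numbers rather than images (the values below are arbitrary), blending with alpha = 1/(i+1) at step i reproduces the arithmetic mean exactly:

values = [3.0, 7.0, 5.0, 9.0]
avg = values[0]
for i, v in enumerate(values[1:], start=1):
    alpha = 1.0 / (i + 1)
    avg = avg * (1.0 - alpha) + v * alpha
print(avg, sum(values) / len(values))  # both print 6.0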
In principle this new answer, based on blending, should be fine. However I don't know exactly how the blend function works. This makes me worry about how the pixel values are rounded after each iteration.
The image below was generated from 288 source images using the code from my original answer:
On the other hand, this image was generated by repeatedly applying PIL's blend function to the same 288 images:
I hope you can see that the outputs from the two algorithms are noticeably different. I expect this is because of the accumulation of small rounding errors during repeated application of Image.blend.
I strongly recommend my original answer over this alternative.
One can also use numpy's mean function for averaging. The code looks better and works faster.
Here is a comparison of timing and results for 700 noisy grayscale images of faces:
import numpy as np
from PIL import Image

def average_img_1(imlist):
    # Assuming all images are the same size, get dimensions of first image
    w, h = Image.open(imlist[0]).size
    N = len(imlist)
    # Create a numpy array of floats to store the average (grayscale images)
    arr = np.zeros((h, w), dtype=float)
    # Build up average pixel intensities, casting each image as an array of floats
    for im in imlist:
        imarr = np.array(Image.open(im), dtype=float)
        arr = arr + imarr / N
    out = Image.fromarray(arr)
    return out

def average_img_2(imlist):
    # Alternative method using PIL blend function
    N = len(imlist)
    avg = Image.open(imlist[0])
    for i in range(1, N):
        img = Image.open(imlist[i])
        avg = Image.blend(avg, img, 1.0 / float(i + 1))
    return avg

def average_img_3(imlist):
    # Alternative method using numpy mean function
    images = np.array([np.array(Image.open(fname)) for fname in imlist])
    arr = np.array(np.mean(images, axis=0), dtype=np.uint8)
    out = Image.fromarray(arr)
    return out
average_img_1()
100 loops, best of 3: 362 ms per loop
average_img_2()
100 loops, best of 3: 340 ms per loop
average_img_3()
100 loops, best of 3: 311 ms per loop
By the way, the results of averaging are quite different. I think the first method loses information during averaging, and the second one has some artifacts.
average_img_1
average_img_2
average_img_3
In case anybody is interested in a blueprint numpy solution (I was actually looking for it), here's the code:
mean_frame = np.mean(frames, axis=0)  # frames is a list or array of equally sized frames
I would consider creating an array of x by y RGB values, all starting at (0, 0, 0), adding in the RGB value of each pixel from each file, dividing all the values by the number of images, and then creating the image from that. You will probably find that numpy can help.
I ran into MemoryErrors when trying the method in the accepted answer. I found a way to optimize it that seems to produce the same result. Basically, you blend one image in at a time, instead of adding them all up and dividing.
avg = Image.open(images_to_blend[0])
# Blend one image in at a time; alpha = 1/(i+1) keeps avg a true running mean
for i, im in enumerate(images_to_blend[1:], start=1):  # assuming your list holds filenames, not images
    img = Image.open(im)
    avg = Image.blend(avg, img, 1.0 / (i + 1))
avg.save(blah)
This does two things: you don't have to keep two very dense copies of the image while you're turning the image into an array, and you don't have to use 64-bit floats at all, so you get similarly high precision with smaller numbers. Note that the per-step alpha has to be 1/(i+1), as derived in the accepted answer; with a constant alpha of 1/N the first image would dominate the blend and the result would not be a true average.
