FreeImage: converting a 24-bit PNG to 1-bit black/white gives grey/white? (grayscale)

I am using FreeImage (freeimage.sf.net) to load a 24-bit PNG and want to save it as a black/white PNG, using:
FIBITMAP bib = FreeImage.Load("<24bit_png>");
FIBITMAP conv = FreeImage.ConvertColorDepth(bib, FREE_IMAGE_COLOR_DEPTH.FICD_01_BPP_THRESHOLD, (byte) 128);
FreeImage.Unload(bib); bib.SetNull();
FreeImage.Save(FREE_IMAGE_FORMAT.FIF_PNG, conv, "<out_png>", FREE_IMAGE_SAVE_FLAGS.PNG_Z_BEST_COMPRESSION);
FreeImage.Unload(conv); conv.SetNull();
But the result is not black/white; it is #0b120c / white.
Why does the threshold not create a black/white palette?

The line
FIBITMAP conv = FreeImage.ConvertColorDepth(bib,
    FREE_IMAGE_COLOR_DEPTH.FICD_01_BPP_THRESHOLD, (byte) 128);
builds the two-colour palette from the most-used colours in the image (hence #0b120c rather than pure black). Replacing it with
FIBITMAP bw = FreeImage.Threshold(bib, (byte) 128);
will create a truly black-and-white image.
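If you just want to sanity-check what a hard 128 cut-off should produce, the same idea can be sketched in Python with Pillow (a different library, shown purely for comparison; the filenames are placeholders):
from PIL import Image

# Greyscale first, then hard-threshold at 128 and store as a 1-bit image
img = Image.open("input.png").convert("L")
bw = img.point(lambda p: 255 if p >= 128 else 0).convert("1")
bw.save("output.png")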

Save vtkVolume as STL file (3D data)?

I'm writing a function using VTK. The process is as follows:
load a DICOM series,
keep only the parts that are given a value by the vtkPiecewiseFunction (only those are visible),
write an STL file.
I need a function that writes a vtkVolume out as STL data. How can I save vtkVolume data to an STL file?
My code is as below:
// Read the DICOM series
vtkDICOMImageReader *reader = vtkDICOMImageReader::New();
reader->SetDirectoryName("Dicom_Series");
reader->Update();

// Opacity transfer function
vtkPiecewiseFunction* opacitytransfer = vtkPiecewiseFunction::New();
opacitytransfer->AddPoint(-700, 0.0);
opacitytransfer->AddPoint(-101, 0.0);
opacitytransfer->ClampingOff();

// Colour transfer function
vtkColorTransferFunction* colortranster = vtkColorTransferFunction::New();
colortranster->AddRGBPoint(-700, 0.0, 0.0, 0.0);
colortranster->AddRGBPoint(-101, 54.0 / 255.0, 154.0 / 255.0, 254.0 / 255.0);
colortranster->AddRGBPoint(-100, 237.0 / 255.0, 204.0 / 255.0, 159.0 / 255.0);
colortranster->ClampingOff();

// Volume property
vtkVolumeProperty* volumeproperty = vtkVolumeProperty::New();
volumeproperty->SetColor(colortranster);
volumeproperty->SetScalarOpacity(opacitytransfer);
volumeproperty->ShadeOn();

// Ray-cast mapper and volume
vtkFixedPointVolumeRayCastMapper* volumeMapper = vtkFixedPointVolumeRayCastMapper::New();
volumeMapper->SetInputConnection(reader->GetOutputPort());
vtkVolume* volume1 = vtkVolume::New();
volume1->SetMapper(volumeMapper);
volume1->SetProperty(volumeproperty);

// Renderer, render window and interactor
vtkRenderer* aRenderer = vtkRenderer::New();
aRenderer->AddVolume(volume1);
aRenderer->SetBackground(0, 0, 0);
vtkRenderWindow *renWin = vtkRenderWindow::New();
renWin->AddRenderer(aRenderer);
renWin->SetSize(600, 600);
renWin->Render();
vtkRenderWindowInteractor *iren = vtkRenderWindowInteractor::New();
iren->SetRenderWindow(renWin);
iren->Initialize();
iren->Start();
You cannot write out a volume image as an STL file. STL is a surface mesh format, which is a completely different data type from an image. You need to extract an iso-surface from your volume. To do so, you can use VTK's MarchingCubes filter.
Here's an example:
https://lorensen.github.io/VTKExamples/site/Cxx/Modelling/MarchingCubes/
And here's the documentation for the vtkMarchingCubes class:
https://vtk.org/doc/nightly/html/classvtkMarchingCubes.html
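As a rough illustration, a minimal sketch of that pipeline using VTK's Python bindings (the isovalue of -100 is only a placeholder; pick one that matches your data):
import vtk

# Read the DICOM series (directory name taken from the question)
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("Dicom_Series")
reader.Update()

# Extract an iso-surface from the volume with marching cubes
mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, -100)  # isovalue is an assumption, tune for your data
mc.Update()

# Write the resulting surface mesh as STL
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(mc.GetOutputPort())
writer.SetFileName("volume_surface.stl")
writer.Write()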

What type of array is returned by tiff.imread()?

I am trying to get the RGB value of pixels from the TIFF image. So, what I did is:
import tifffile as tiff
a = tiff.imread("a.tif")
print (a.shape) #returns (1295, 1364, 4)
print(a) #returns [[[205 269 172 264]...[230 357 304 515]][[206 270 174 270] ... [140 208 183 286]]]
But we know that pixel colour values range from 0 to 255 for RGB, so I don't understand what this array is returning: some values are bigger than 255, and why are there 4 values per pixel?
By the way, the array size is 1295*1364, i.e. the size of the image.
The normal reasons for a TIFF (or any other image) to be 4-bands are that it is:
RGBA, i.e. it contains Red, Green and Blue channels plus an alpha/transparency channel, or
CMYK, i.e. it contains Cyan, Magenta, Yellow and Black channels - this is most common in the print industry, where "separations" are used in 4-colour printing, or
that it is multi-band imagery, such as satellite images with Red, Green, Blue and Near Infra-red bands, e.g. Landsat MSS (Multi Spectral Scanner) or somesuch.
Note that some folks use TIFF files for topographic information, bathymetric information, microscopy and other purposes.
The likely reason for the values being greater than 255 is that it is 16-bit data, though it could be 10-bit, 12-bit, 32-bit, float, double or something else.
Without access to your image, it is not possible to say much more. With access to your image, you could use ImageMagick at the command-line to find out more:
magick identify -verbose YourImage.TIF
Sample Output
Image: YourImage.TIF
  Format: TIFF (Tagged Image File Format)
  Mime type: image/tiff
  Class: DirectClass
  Geometry: 1024x768+0+0
  Units: PixelsPerInch
  Colorspace: CMYK <--- check this field
  Type: ColorSeparation <--- ... and this one
  Endianess: LSB
  Depth: 16-bit
  Channel depth:
    Cyan: 16-bit <--- ... and this
    Magenta: 1-bit <--- ... this
    Yellow: 16-bit <--- ... and this
    Black: 16-bit
  Channel statistics:
    ...
  ...
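You can also check the sample type from Python itself, since tifffile returns a NumPy array (using the filename from the question):
import tifffile as tiff

a = tiff.imread("a.tif")
print(a.dtype)           # e.g. uint16 for 16-bit samples
print(a.min(), a.max())  # the actual value range of the data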
You can scale the values like this:
from tifffile import imread
import numpy as np

# Open image (tifffile already returns a NumPy array)
img = imread('image.tif')

# Convert to float so the channels can be scaled
npimg = np.array(img, dtype=np.float64)

# Scale the 16-bit R, G, B channels down to a 0-255 range
npimg[:,:,0] /= 256
npimg[:,:,1] /= 256
npimg[:,:,2] /= 256

# Scale the 16-bit alpha channel to 0-1
npimg[:,:,3] /= 65535

print(np.mean(npimg[:,:,0]))
print(np.mean(npimg[:,:,1]))
print(np.mean(npimg[:,:,2]))
print(np.mean(npimg[:,:,3]))

Image Processing: Merging images with PIL.paste

I have two lists of PNG images, list _c and list _v. I want to paste _v on _c using code like:
from PIL import Image
import glob

background = [Image.open(path, 'r') for path in glob.glob(list_c_path)]
foreground = [Image.open(path, 'r') for path in glob.glob(list_v_path)]

for im in range(len(background)):
    pasted = background[im].paste(foreground[im], (0, 0), foreground[im])
This code won't work, but it will give you an idea of what I want. I also need the images to be read in grayscale format before they are pasted.
Here's a sample of a background image:
Here's a sample of a foreground image:
And this is the desired result:
I pasted these images using this code:
background = Image.open('1000_c.png')
foreground = Image.open('1000_v.png')
background.paste(foreground, (0, 0), foreground)
background.save('example.png')
How can I achieve this?
Thanks in advance.
Mmmm... your result images are identical to your foreground images because although the foreground images have an alpha/transparency layer, they are fully opaque and completely conceal your backgrounds. You need to have a rethink!
You can use ImageMagick in the Terminal to inspect your images. So, let's look at your foreground image:
identify -verbose fg.png
Sample Output
Image: fg.png
  Format: PNG (Portable Network Graphics)
  Mime type: image/png
  Class: DirectClass
  Geometry: 118x128+0+0
  Units: Undefined
  Colorspace: sRGB
  Type: PaletteAlpha <--- Image does have alpha/transparency layer
  Base type: Undefined
  Endianess: Undefined
  Depth: 8-bit
  Channel depth:
    Red: 8-bit
    Green: 8-bit
    Blue: 8-bit
    Alpha: 1-bit
  Channel statistics:
    Pixels: 15104
    Red:
      min: 30 (0.117647)
      ...
    ...
    Alpha:
      min: 255 (1) <--- ... but alpha layer is fully opaque
      max: 255 (1)
      mean: 255 (1)
      standard deviation: 0 (0)
      kurtosis: 8.192e+51
      skewness: 1e+36
      entropy: 0
So there is no point pasting a fully opaque image over a background as it will fully conceal it.
If we punch a transparent hole in your foreground image with ImageMagick:
convert fg.png -region 100x100+9+14 -alpha transparent fg.png
It now looks like this:
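(For reference, roughly the same hole could be punched with Pillow instead of ImageMagick - same 100x100 region at offset +9+14; the filename is assumed:)
from PIL import Image, ImageDraw

fg = Image.open("fg.png").convert("RGBA")
alpha = fg.getchannel("A")
draw = ImageDraw.Draw(alpha)
draw.rectangle([9, 14, 9 + 100, 14 + 100], fill=0)  # make this region fully transparent
fg.putalpha(alpha)
fg.save("fg.png")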
And if we then run your code:
#!/usr/local/bin/python3
from PIL import Image
background = Image.open('bg.png')
foreground = Image.open('fg.png')
background.paste(foreground, (0, 0), foreground)
background.save('result.png')
It works:
So the moral of the story is that your foreground image either needs some transparency to allow the background to show through, or you need to use some blending mode to choose one or the other of the foreground and background images at each location, or to choose some combination - e.g. the average of the two, or the brighter of the two.
If you want to average the two images, or in fact do any other blending mode, you could consider using Pillow's ImageChops module (see its documentation). So, an average would look like this:
#!/usr/local/bin/python3
from PIL import Image, ImageChops
bg = Image.open('bg.png')
fg = Image.open('fg.png')
# Average the two images, i.e. add and divide by 2
result = ImageChops.add(bg, fg, scale=2.0)
result.save('result.png')
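And for the "brighter of the two" option mentioned above, ImageChops provides lighter(); a small sketch along the same lines (same assumed filenames):
from PIL import Image, ImageChops

bg = Image.open('bg.png')
fg = Image.open('fg.png')

# Keep the brighter of the two images at each pixel
result = ImageChops.lighter(bg, fg)
result.save('lighter.png')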

Does lovell/sharp support color modulation (hue, saturation, brightness)?

I have an application where I need to change the colors of an image by altering the hue, saturation and lightness values.
Following is the sample image:
When I pass HSL values of 90, 100 and 50 respectively, it should return an image as follows:
Any idea how to achieve this in node sharp?
Thanks in advance.
Answered here:
https://github.com/jcupitt/libvips/issues/770
Summary: at the command-line you can do:
$ vips colourspace red-shirt.jpg x.v lch
$ vips linear x.v green-shirt.jpg "1.5 1.5 1" "0 0 120"
to swap to LCh colourspace and adjust hue and chroma, or in node-vips you can do:
var vips = require('vips');

var image = vips.Image.newFromFile(process.argv[2]);
image = image
    .colourspace('lch')
    .add([0, 0, 120])
    .multiply([1.5, 1.5, 1]);
image.writeToFile(process.argv[3]);
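The same libvips pipeline can also be written with the Python binding, pyvips, if that is easier to experiment with (filenames follow the command-line example above):
import pyvips

image = pyvips.Image.new_from_file('red-shirt.jpg')
# Move to LCh so lightness, chroma and hue are separate axes,
# then scale L and C by 1.5 and rotate the hue by 120 degrees
image = image.colourspace('lch') * [1.5, 1.5, 1] + [0, 0, 120]
image.write_to_file('green-shirt.jpg')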

R simplify heatmap to pdf

I want to plot a simplified heatmap that is not so difficult to edit with the scalable vector graphics program I am using (Inkscape). The original heatmap, as produced below, contains lots of rectangles, and I wonder if they could be merged together in the different sectors to simplify the output PDF file:
nentries = 100000
ci = rainbow(nentries)
set.seed(1)
mean = 10

## Generate some data (4 factors)
i = data.frame(
  a = round(abs(rnorm(nentries, mean - 2))),
  b = round(abs(rnorm(nentries, mean - 1))),
  c = round(abs(rnorm(nentries, mean + 1))),
  d = round(abs(rnorm(nentries, mean + 2)))
)
minvalue = 10

# Discretise values to 1 or 0
m0 = matrix(as.numeric(i > minvalue), nrow = nrow(i))
# Remove rows with all zeros
m = m0[rowSums(m0) > 0, ]
# Reorder with 1,1,1,1 on top
ms = m[order(as.vector(m %*% matrix(2^((ncol(m) - 1):0), ncol = 1)), decreasing = TRUE), ]

rowci = rainbow(nrow(ms))
colci = rainbow(ncol(ms))
colnames(ms) = LETTERS[1:4]

limits = c(which(!duplicated(ms)), nrow(ms))
l = length(limits)
toname = round((limits[-l] + limits[-1]) / 2)
freq = (limits[-1] - limits[-l]) / nrow(ms)
rn = rep("", nrow(ms))
for (i in toname) rn[i] = paste(colnames(ms)[which(ms[i, ] == 1)], collapse = "")
rn[toname] = paste(rn[toname], ": ", sprintf("%.5f", freq), "%")

heatmap(ms,
        Rowv = NA,
        labRow = rn,
        keep.dendro = FALSE,
        col = c("black", "red"),
        RowSideColors = rowci,
        ColSideColors = colci)
dev.copy2pdf(file = "/tmp/file.pdf")
Why don't you try RSvgDevice? Using it you could save your image as an SVG file, which is much more convenient for Inkscape than PDF.
I use the Cairo package for producing svg. It's incredibly easy. Here is a much simpler plot than the one you have in your example:
require(Cairo)
CairoSVG(file = "tmp.svg", width = 6, height = 6)
plot(1:10)
dev.off()
Upon opening in Inkscape, you can ungroup the elements and edit as you like.
Example (point moved, swirl added):
I don't think we (the internet) are being clear enough on this one.
Let me just start off with a successful export example:
png("heatmap.png") # Ruby devs: think of this as a bit like opening a `File.open("asdfsd") do |f|` block
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=terrain.colors(256), scale="column", margins=c(5,10))
dev.off()
The dev.off() bit, in my mind, is like the end call of a Ruby block or method, in that the output of the last plotting call enclosed between png() and dev.off() is what gets dumped into the PNG file.
For example, if you ran this code:
png("heatmap4.png")
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=terrain.colors(32), scale="column", margins=c(5,15))
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=greenred(32), scale="column", margins=c(5,15))
dev.off()
it would output the 2nd heatmap (the greenred color scheme; I just tested it) to the heatmap4.png file, just like a Ruby method returns its last line by default.
