GraphViz: set page width

I'm using GraphViz to determine the control locations in my C# application, but I'm unable to tell the GraphViz dot generator the width of the output.
These are the parameters of the dot file I'm using:
digraph x {
autosize=false;
size="25.7,8.3!";
resolution=100;
node [shape=rect];
(edge definitions come here)
...
But this seems to have no effect on the generated plaintext file.
Am I missing something to set the page width?
Regards

I added a -> b to your example. Here's the plaintext output I get:
digraph x {
graph [autosize=false, size="25.7,8.3!", resolution=100];
node [label="\N", shape=rect];
graph [bb="0,0,54,108"];
a [pos="27,90", width=0.75, height=0.5];
b [pos="27,18", width=0.75, height=0.5];
a -> b [pos="e,27,36.104 27,71.697 27,63.983 27,54.712 27,46.112"];
}
As you can see, the size and resolution attributes are included in the output.
You may change the values of size and resolution, but this won't change anything other than those attributes in the plaintext output. The positions of all nodes and edges are relative to the bounding box (bb) of the graph.
However, if you decide for example to output a png, graphviz will use this information to scale the bounding box according to your size and resolution attributes and calculate the final image size.
In this example, the resulting png will be 444 by 831 pixels (8.3 inches at a resolution of 100 dpi results in 830 pixels; the extra pixel on top is probably due to a rounding error).
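The height arithmetic is easy to check directly (a trivial sketch; the values mirror the size and resolution attributes above):

```python
# size="25.7,8.3!" caps the height at 8.3 inches; resolution=100 is the dpi.
height_inches = 8.3
dpi = 100

# Graphviz scales the bounding box so the height fills the limit; the
# actual png is 831 px tall, one more than this due to rounding.
height_pixels = round(height_inches * dpi)  # 830
```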
You may find more detailed examples about size attribute and the resulting image size in this answer.

Related

Remove empty raster - stars package R

I have loaded a raster using the stars package and then created tiles over the raster. Now I subset the raster based on these tiles.
tiles[[i]] <- st_bbox(c(xmin=x0,ymin=y0,xmax=x1,ymax=y1),crs=st_crs(r))
crop_tiles[[i]] <- r[tiles[[i]]]
Here r is the raster loaded using read_stars("filename.tif").
Now I want to remove the empty tiles from crop_tiles, i.e. the tiles where all raster values are NA.
You can check if all values of a stars object named r are NA with:
all(is.na(r[[1]]))
and then remove those tiles for which the above returns TRUE.
For more specific code, please provide reproducible sample data in your question, thanks.

Creating a surface plot from an Unstructured grid vtk file using Vedo

I have an unstructured grid vtk file that contains three different types of cells (tetrahedral, wedge and hexahedral). The file contains multiple scalars (8 attributes such as pressure, temperature, etc.) and a single vector (U,V,W), and I am trying to create a surface plot from it for one scalar or vector at a time, using the Vedo python wrapper for vtk. The vtk file contains a scalar or vector value for each cell, along with the point coordinates.
I have read the documentation over and over, along with the examples here: https://vtkplotter.embl.es/content/vtkplotter/index.html. These are the things that I have tried, and the challenge I had with each method:
Method 1: Loading the file as a TetMesh
vp = Plotter()
test = load('Case_60.vtk')
vp.show(test)
This method doesn't plot scalar values and only shows points; there is no solid surface. I tried using a cuttertool() with it, but it throws an error saying a non-tetrahedral cell was encountered.
Method 2: Using the UGrid
ug = UGrid('Case_60.vtk')
show(ug)
This method plots a surface with a solid color, but does not seem to be picking up the scalars.
What is the proper way for me to display surface plot and display the scalar value for each cell? Is Vedo able to do what I'm trying to do?
You might need to specify which array is to be used for coloring, e.g.:
from vedo import *
ug = UGrid(datadir+'limb_ugrid.vtk')
print(ug.getArrayNames())
ug.selectCellArray('chem_0')
show(ug, axes=True)
If this doesn't work for your mesh, please submit an issue here.

Wrong width and height when using inset_axes and transData

I want to plot an inset by specifying the width and height in the data reference frame. However, when converting these values to inches (as required by inset_axes) using transData.transform, the inset axes don't respect the given width and height. Any idea why? Here is my code:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from matplotlib.projections import get_projection_class,get_projection_names
fig,ax_main=plt.subplots()
width=0.5
height=0.5
x_center=0
y_center=0
new_width,_=(ax_main.transData.transform([width,0])-ax_main.transData.transform([0,0]))/72
_,new_height=(ax_main.transData.transform([0,height])-ax_main.transData.transform([0,0]))/72
print(new_width,new_height)
### Process
ax_val = inset_axes(ax_main, width=new_width, height=new_height, loc=3,
                    bbox_to_anchor=(x_center, y_center),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
Though I don't know in how far the result is unexpected, the problem might be due to a wrong conversion. transData transforms from data coordinates to pixel space. You divide the result by 72; the result of this may or may not be inches, depending on whether the figure dpi is 72 or not. By default the dpi is set to the value of the rc param "figure.dpi", and that is 100 for a fresh matplotlib install (assuming you haven't changed your rc params).
To be on the safe side,
either set your figure dpi to 72, plt.subplots(dpi=72)
or divide by the figure dpi, (ax_main.... ) / fig.dpi
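A minimal sketch of the second option (this only demonstrates the pixel-to-inch conversion; the Agg backend is used so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; not part of the fix itself
import matplotlib.pyplot as plt

fig, ax_main = plt.subplots()  # dpi defaults to rcParams["figure.dpi"]
width = 0.5

# transData maps data coordinates to display pixels; dividing the pixel
# distance by the actual figure dpi (not a hard-coded 72) gives inches.
px = (ax_main.transData.transform([width, 0])
      - ax_main.transData.transform([0, 0]))[0]
new_width = px / fig.dpi  # inches, whatever the dpi happens to be
```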
However, much more generally, it seems you want to set the width and height of the inset_axes in data coordinates. So best don't specify the size in inches at all. Rather use the bounding box directly.
ax_val = inset_axes(ax_main, width="100%", height="100%", loc=3,
                    bbox_to_anchor=(x_center, y_center, width, height),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
I updated the inset_axes documentation as well as the example a couple of months ago, so hopefully this case should also be well covered. However feel free to give feedback in case some information is still missing.
Even more interesting here might be the new option in matplotlib 3.0, to not use the mpl_toolkits.axes_grid1.inset_locator, but the Axes.inset_axes method. It's still noted to be "experimental" but should work fine.
ax_val = ax_main.inset_axes((x_center,y_center,width,height), transform=ax_main.transData)

Pixel tracing not working properly

So I have an image with a curve, and a function that looks for one pixel and then connects the adjacent pixels (forming a path), but somehow, when the path is going south-west or west, it does not find any pixels even if there is one.
The function goes like this:
pixels = img.getpixels()
window = ((1, 0), (0, -1), (-1, 0), (0, 1), (1, 1), (1, -1), (-1, -1), (-1, 1))
objs = []
obj = [pixels.pop()]
while pixels:
    p = obj[-1]
    neighbours = tuple((x + p[0], y + p[1]) for x, y in window)
    for n in neighbours:
        if n in pixels:
            obj.append(n)
            pixels.remove(n)
            break
    if p == obj[-1]:
        objs.append(obj)
        obj = [pixels.pop()]
and an example of the failure:
the original image
the objects image
each different color represents a different "object"
Should it not be showing just one object?
I appreciate any thoughts on this, thanks very much! :D
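For comparison, here is a minimal sketch of the 8-connected grouping the code above is attempting, using an explicit stack so a component keeps growing even after one dead end (group_pixels is a hypothetical helper, assuming pixels is a set of (x, y) tuples):

```python
def group_pixels(pixels):
    """Split a set of (x, y) coordinates into 8-connected components."""
    window = ((1, 0), (0, -1), (-1, 0), (0, 1),
              (1, 1), (1, -1), (-1, -1), (-1, 1))
    pixels = set(pixels)
    objs = []
    while pixels:
        stack = [pixels.pop()]  # seed a new component
        obj = []
        while stack:
            x, y = stack.pop()
            obj.append((x, y))
            for dx, dy in window:
                n = (x + dx, y + dy)
                if n in pixels:       # claim each neighbour exactly once
                    pixels.remove(n)
                    stack.append(n)
        objs.append(obj)
    return objs
```

Unlike the original loop, which only ever extends the path from the last pixel appended, this version revisits earlier pixels via the stack, so a single curve comes out as a single object.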

How to display a binary image in pictureBox VC++

I'm working on a VC++ and OpenCV application. I'm loading images into a pictureBox and performing some OpenCV operations on them: I assign the loaded image to an IplImage for processing, then assign the processed image back into the picture box. I wrote this code to load the image selected from the openFileDialog into an IplImage, binarize it, and then reassign the binarized image back to the pictureBox.
code:
const char* fileName = (const char*)(void*)
    Marshal::StringToHGlobalAnsi(openFileDialog1->FileName);
IplImage *img = cvLoadImage(fileName, CV_LOAD_IMAGE_COLOR);
int width = img->width;
int height = img->height;
IplImage *grayScaledImage = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 1);
cvCvtColor(img, grayScaledImage, CV_RGB2GRAY);
cvThreshold(grayScaledImage, grayScaledImage, 128, 255, CV_THRESH_BINARY);
this->pictureBox1->Image = gcnew System::Drawing::Bitmap(
    grayScaledImage->width, grayScaledImage->height, grayScaledImage->widthStep,
    System::Drawing::Imaging::PixelFormat::Format24bppRgb,
    (System::IntPtr)grayScaledImage->imageData);
But I can't find a format which displays a binary image. Any help with that?
Original Image:
Converted image:
You seem to be creating an RGB image (System::Drawing::Imaging::PixelFormat::Format24bppRgb) but copying a grayscale image into it; presumably the System::Drawing::Imaging function doesn't do conversion, or isn't doing it properly.
Edit: Some more explanation.
Your greyscale image is stored in memory as one byte for each pixel: Y0, Y1, Y2, Y3, ..., Y639 (we use Y for brightness, and assume a 640-pixel-wide image).
You have told the .net image class that this is Format24bppRgb, which would be stored as one red, one green and one blue byte per pixel (3 bytes = 24 bpp). So the class takes your image data and assumes that Y0,Y1,Y2 are the red, green and blue values for the first pixel, Y3,Y4,Y5 for the next, and so on.
This uses up 3x as many bytes as your image has, so after 1/3 of a row it starts reading the next row, and so on - which gives you the three repeated pictures.
PS: the fact that you have turned it into a binary image just means that the Y values are either 0 or 255 - it doesn't change the data size or shape.
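The row-wrapping effect described above can be reproduced with NumPy (a hypothetical 6-pixel-wide buffer, not the actual image data):

```python
import numpy as np

# A 3-row, 6-pixel-wide grayscale "image": one byte per pixel.
gray = np.arange(18, dtype=np.uint8).reshape(3, 6)

# Reinterpreting the same 6 bytes per row as 24bpp RGB consumes 3 bytes
# per pixel, so each row now holds only 6 / 3 = 2 pixels: the picture is
# squeezed to a third of its width, which is why it appears three times
# side by side when rendered at the original width.
rgb_view = gray.reshape(3, 2, 3)

# The first "RGB pixel" is really the first three grayscale values.
first_pixel = rgb_view[0, 0]
```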
