I have a dot file generated by a tool called egypt. The dot file contains many nodes and edges, so if I use it to draw a picture, the result is very hard to see clearly because there are too many nodes. What I actually need is only the subgraph starting from one node, not the whole picture.
Is there any way to draw a subgraph from a specified node (such as start_node) using this dot file?
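For reference, one way to do this kind of filtering is a short script outside graphviz itself. The sketch below assumes Python with networkx and pydot installed; the file name and node name are placeholders.

```python
# Sketch: extract the subgraph reachable from one node of a dot file.
# Assumes networkx + pydot are installed; names are placeholders.
import networkx as nx

# read_dot returns a MultiDiGraph for digraphs; collapsing to DiGraph
# is fine for visualization purposes.
G = nx.DiGraph(nx.drawing.nx_pydot.read_dot("callgraph.dot"))

# All nodes reachable from start_node, plus start_node itself.
keep = nx.descendants(G, "start_node") | {"start_node"}

nx.drawing.nx_pydot.write_dot(G.subgraph(keep), "subgraph.dot")
```

The resulting subgraph.dot can then be rendered with dot as usual.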
I have an SVG image consisting exclusively of straight line segments connected by nodes. I wish to apply a transform to the coordinates of those nodes, of the form (x,y) --> (x',y') such that x' = f(x,y), y' = g(x,y), but only in a certain region of the xy plane.
My question is this: if a node inside this region is connected by a straight line segment to a node outside the region, how do I split that line segment at the boundary of the region so that two new nodes are created, as close as possible to each other on either side of the boundary, leaving a tiny gap in the original line segment at the boundary? The idea is to turn the original single line segment into two line segments, with four nodes in total instead of two, and then apply the transform to the two nodes of the segment inside the region but not to the two nodes of the segment outside it.
Note: the mathematics behind this is not the issue; the issue is programming it. This splitting business will of necessity require the creation of additional paths, which I don't quite know how to program. Any programming language would be fine, so long as it gets the job done. If it is of any help, I am using Inkscape for editing the SVG. Thanks.
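For what it's worth, here is a minimal Python sketch of just the splitting step, assuming the region is given by an arbitrary inside/outside test; the SVG parsing and path rewriting are left out, and the gap size is a placeholder.

```python
# Sketch: split a segment p0->p1 where it crosses the boundary of a region.
# inside(p) is a hypothetical point-in-region test; eps is the gap size,
# expressed in the segment parameter t.
def split_at_boundary(p0, p1, inside, eps=1e-3, iters=50):
    """Return two sub-segments ((p0, a), (b, p1)) with a tiny gap at the
    boundary. Assumes exactly one of p0, p1 lies inside the region."""
    assert inside(p0) != inside(p1)
    point = lambda t: (p0[0] + t * (p1[0] - p0[0]),
                       p0[1] + t * (p1[1] - p0[1]))
    # Bisect the parameter t until the boundary crossing is bracketed tightly.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if inside(point(mid)) == inside(p0):
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2.0
    # Pull the two new endpoints slightly apart, one on each side.
    return (p0, point(t - eps)), (point(t + eps), p1)

# Example: region is the unit disk; the transform would then be applied only
# to the endpoints of the sub-segment that lies inside.
inside_disk = lambda p: p[0] ** 2 + p[1] ** 2 < 1.0
seg_a, seg_b = split_at_boundary((0.0, 0.0), (2.0, 0.0), inside_disk)
```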
I need to be able to turn a black and white image into a series of lines (start and end points) and circles (center point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
The problem is, I don't want to overcomplicate things - I could represent any image with loads of small lines, but that would take a lot of time to draw, so I basically want to approximate the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.) but none gave reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in, but I'd suggest OpenCV if possible. If not, most decent CV libraries ought to support the features I'm about to describe.
You don't say whether the input is already composed of simple shapes (lines and polygons) or not. Assuming it's not, i.e. it's a photo or a frame from a video for example, you'll need to do some edge extraction to find the lines that you are going to model. Use a Canny or other edge detector to convert the image into a series of edges.
I suggest you then extract circles, as they are the richest feature you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, remove them from the edge image (to avoid duplicating them in the line-processing step below).
Now, for each 'on' pixel in the edge image, you want to find the longest line segment it is a part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV), which extracts line segments and gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
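A rough Python/OpenCV sketch of that pipeline is below; every threshold and Hough parameter is a placeholder you'd need to tune to your images.

```python
# Sketch: edges -> circles -> line segments, using OpenCV in Python.
# All thresholds and Hough parameters are placeholders to tune per image.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)

# Circles first: the richest feature we can model directly.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=40, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Erase each detected circle from the edge image so the line
        # extraction below does not model it a second time.
        cv2.circle(edges, (int(x), int(y)), int(r), 0, thickness=3)

# Probabilistic Hough: line segments with min-length / max-gap control.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=20, maxLineGap=5)

# Render the model into a new image (or save the coordinates instead).
out = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(out, (int(x), int(y)), int(r), 255, 1)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(out, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)
cv2.imwrite("vectorized.png", out)
```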
I am working on a 3d application and am currently looking for a way to project a line segment defined by two points in screen-space onto a three-dimensional polygonal mesh (in my case a triangle mesh). The goal is to find the intersection points in world-space of the line segment with the edges of the mesh.
I can only think of two ways to do this, but neither is ideal. The first is to sample the line segment (in screen-space) at small intervals and ray trace at those intervals to find the world-space coordinates where the ray hits the mesh, but this does not easily give me the intersection points of the line segment with the mesh edges.
The other way I can think of is to somehow back-project the mesh into screen-space, find the intersections there (in 2d) and then project those intersection points back to 3d. The problem with this is that the screen-space coordinate system may change between the selection of the first and second endpoints of the line segment (due to moving the camera).
If any of that was confusing, then here is an image that approximately shows what I'm trying to do (the white dots indicate the points that I want to find). However, in my case the yellow curve is simply a line segment.
[Yunjin Lee, et al. "Mesh scissoring with minima rule and part salience." 2005]
Any help is very much appreciated.
Here's my suggestion:
Project the screen line into world space (getting a plane in world space).
Intersect the plane with the triangles in the mesh, getting a set of edges.
Add the edges to a data structure that keeps only the parts of the edges that are closest to the camera plane (see the diagram below, in which the red line segments and their endpoints are the ones we want to keep). This is like building up an image via a Z-buffer, except that because we know that this set is piecewise linear, we don't have to rasterize it, we can just maintain a sorted list of endpoints.
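Here is a small numpy sketch of step 2, clipping one triangle against the plane; the plane itself (from step 1) and the mesh traversal are assumed to be given.

```python
# Sketch: intersect a world-space plane with one triangle, yielding the
# points (0, 1 or 2) where the plane cuts the triangle's edges.
# The plane n . x = d comes from projecting the screen line into world space.
import numpy as np

def plane_triangle_intersection(n, d, tri, eps=1e-9):
    """tri: (3,3) array of vertices. Returns the intersection points."""
    n = np.asarray(n, dtype=float)
    tri = np.asarray(tri, dtype=float)
    dist = tri @ n - d                      # signed distance of each vertex
    points = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = dist[i], dist[(i + 1) % 3]
        if da * db < -eps:                  # this edge crosses the plane
            t = da / (da - db)
            points.append(a + t * (b - a))
        elif abs(da) <= eps:                # this vertex lies on the plane
            points.append(a)
    # Deduplicate near-identical points (vertex-on-plane cases).
    unique = []
    for p in points:
        if not any(np.linalg.norm(p - q) <= 1e-7 for q in unique):
            unique.append(p)
    return unique

# Example: the plane z = 0 cutting a triangle that straddles it.
seg = plane_triangle_intersection([0, 0, 1], 0.0,
                                  [[0, 0, -1], [1, 0, 1], [0, 1, 1]])
```

The resulting edges would then be fed into the sorted-endpoint structure described in step 3 to keep only the parts nearest the camera.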
I have a 3D volume given by a binary space partition tree. Usually these are made from polygon models, and the split polygons are already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node has nothing but its cut plane (given by a normal and an origin distance, for example). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualisation I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves into large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node's plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface, you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task, so I propose using qhalf, which is part of qhull. Documentation of the input and output of qhalf can be found here. Of course, you can use qhull (and the functionality of qhalf) as a library.
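If Python is an option, scipy exposes this part of qhull directly; here is a minimal sketch for one convex leaf cell of the tree, with placeholder planes.

```python
# Sketch: recover the vertices of one convex BSP leaf from its cut planes.
# scipy.spatial.HalfspaceIntersection wraps qhull's qhalf. Each halfspace is
# a row [a, b, c, offset] meaning a*x + b*y + c*z + offset <= 0, and an
# interior point of the cell must be supplied.
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

# Placeholder planes: the unit cube 0 <= x, y, z <= 1.
halfspaces = np.array([
    [-1.0,  0.0,  0.0,  0.0],   # -x <= 0
    [ 1.0,  0.0,  0.0, -1.0],   #  x <= 1
    [ 0.0, -1.0,  0.0,  0.0],
    [ 0.0,  1.0,  0.0, -1.0],
    [ 0.0,  0.0, -1.0,  0.0],
    [ 0.0,  0.0,  1.0, -1.0],
])
interior = np.array([0.5, 0.5, 0.5])

hs = HalfspaceIntersection(halfspaces, interior)
# The convex hull of the recovered vertices gives the faces of the cell.
mesh = ConvexHull(hs.intersections)
print(mesh.simplices)  # triangles indexing into mesh.points
```

You would run this per leaf cell, which avoids pushing giant polyhedra down through every cut in the tree.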
I have a Java applet that allows users to import a JPEG and world file from the local system. The user can then draw lines by clicking on the imported image. Each endpoint of each line has a set of X/Y and lat/long values. The X/Y values are in standard Java coordinate space; the applet uses an affine transform calculated from the world file to determine the lat/long for every point on the canvas.
I have a requirement that allows a user to type a distance into a text field and use an arrow key to draw a line in a certain direction (up, down, left, right) from a single selected point on the screen. I know how to determine the lat/long of a point given a source lat/long, a distance, and a bearing.
So if a user types "100" in the text field and presses the right arrow key, a line should be drawn 100 feet to the right of the currently selected point.
My issue is that I don't know how to convert the distance (which is in feet) into a distance in pixels, which would then tell me where to plot the point.
tcarobruce,
You are correct; the inverse transform algorithm is what I needed. Since I use Java, I was able to replace my home-made transform algorithm with the java.awt.AffineTransform class, which has an inverse transform method.
This seems to have solved my issue.
Thanks.
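For anyone landing here later, the same idea in numpy terms (the world-file coefficients below are made up; java.awt.AffineTransform#inverseTransform does the equivalent in Java):

```python
# Sketch: map a destination lat/long back to pixel coordinates by inverting
# the world-file affine transform. Coefficients are made-up placeholders.
import numpy as np

# Forward transform (homogeneous): pixel (col, row, 1) -> world (lon, lat, 1).
M = np.array([[1e-5,  0.0, -122.0],
              [0.0,  -1e-5,  45.0],
              [0.0,   0.0,    1.0]])
M_inv = np.linalg.inv(M)

def world_to_pixel(lon, lat):
    col, row, _ = M_inv @ np.array([lon, lat, 1.0])
    return col, row

# dest_lon/dest_lat come from the source lat/long + distance + bearing step
# the poster already has; the difference from the source pixel gives the
# pixel offset for the 100 ft line.
dest_lon, dest_lat = -121.9995, 44.9998
print(world_to_pixel(dest_lon, dest_lat))
```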
I guess you are certain your users are always uploading a raster image in the lat/lon WGS84 projection? In that case you can set up a fixed coordinate transformation.
If you ever consider digitizing images from other sources with other projections, you might want to take a look at the open-source GeoTools library: http://www.geotools.org/
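GeoTools itself is Java; purely to illustrate the fixed-transformation idea, here is the equivalent in Python with pyproj (the EPSG codes are examples, not a recommendation):

```python
# Sketch: reproject a point from WGS84 lat/lon into a projected CRS, which is
# the kind of fixed transformation you'd set up if all inputs share one CRS.
from pyproj import Transformer

# WGS84 geographic -> UTM zone 10N (example projected CRS).
t = Transformer.from_crs("EPSG:4326", "EPSG:32610", always_xy=True)
x, y = t.transform(-122.0, 45.0)   # lon, lat -> easting/northing in meters
print(x, y)
```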