I am working on a Fortran 90 program which computes the magnetic field of a static set of current-carrying wires. Its output is the magnetic field vector at many points, written to a file with columns x, y, z, v_x, v_y, v_z. I am able to plot this with gnuplot, e.g.:
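For example, with something along these lines (the file name is just a placeholder for my output file):

splot "field.dat" using 1:2:3:4:5:6 with vectors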
But now I want the program to output isosurfaces (surfaces on which the modulus of the magnetic field vector is constant), like this (an example found on the internet; it does not correspond to the first image).
Can I do this with a second program, or with some existing utility that converts my 6-column file into ... some format that can be drawn as a set of surfaces? Another way, I think, is to rewrite the first program to compute the isosurfaces directly. Please recommend which way is better and how exactly I can do it.
I think MathGL can do this easily. It is a cross-platform GPL plotting library which also has a Fortran interface. Here you can call the vector-field and isosurface plotting routines one after the other.
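For instance, a rough sketch using MathGL's C++ API (the grid resolution, the dummy single-wire field, and the isosurface level are placeholders; the Fortran interface offers equivalent calls):

#include <mgl2/mgl.h>
#include <cmath>

int main() {
    const int n = 50;                                  // placeholder grid resolution
    mglData bx(n, n, n), by(n, n, n), bz(n, n, n), mod(n, n, n);

    // Fill the arrays from your computed field; a dummy field of a straight
    // wire along the z axis is used here as a stand-in.
    for (int k = 0; k < n; ++k)
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                double x = -1.0 + 2.0 * i / (n - 1);
                double y = -1.0 + 2.0 * j / (n - 1);
                double r2 = x * x + y * y + 1e-6;
                long idx = i + n * (j + (long)n * k);
                bx.a[idx] = -y / r2;
                by.a[idx] =  x / r2;
                bz.a[idx] = 0.0;
                mod.a[idx] = std::sqrt(bx.a[idx] * bx.a[idx] + by.a[idx] * by.a[idx]);
            }

    mglGraph gr;
    gr.Rotate(50, 60);                 // view angles
    gr.Alpha(true);  gr.Light(true);
    gr.Vect(bx, by, bz);               // the vector field
    gr.Surf3(2.0, mod);                // isosurface |B| = 2 (placeholder level)
    gr.WritePNG("field.png");
}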
Let's assume I have 2 netCDF data files with data for the same region (South America, Africa, etc.) but with different grid sizes, e.g. 0.5 degree x 0.5 degree in one and 1.0 degree x 1.0 degree in the other.
I want to increase or decrease the grid size to a different value, such as 0.25 x 0.25 or 1.0 x 1.0 degrees, so that I can use the files easily for raster calculations, comparisons, etc.
Is there a way to do this using a bash script, CDO, etc.?
Sample data can be downloaded from here: https://www.dropbox.com/sh/0vdfn20p355st3i/AABKYO4do_raGHC34VnsXGPqa?dl
Can different methods be used for this, such as bilinear or cubic interpolation?
This is quite easy with ArcGIS and other software, but is there a way to do it for a big netCDF file with large datasets?
Assume that this is just a subset of the data; what I will later be converting is a whole set of yearly data.
The resulting file should be a .nc file with the grid size changed as defined by the user.
You can use cdo to remap grids, e.g. to a regular 1 degree grid you can use:
cdo remapcon,r360x180 input.nc output.nc
As well as first-order conservative remapping (remapcon), other options are:
remapbil : bilinear interpolation
remapnn : nearest neighbour interpolation
remapcon2 : 2nd order conservative remapping
It is also possible to remap one file to the grid used in another if you prefer:
cdo remapcon,my_target_file.nc in.nc out.nc
EDIT 2021: new video available...
To answer the comment below asking which method to use: for a full guide to these interpolation methods and the issues to look out for regarding subsampling when coarse-graining data, you can refer to my "regridding and interpolation" video guide on YouTube.
In general, if you are interpolating from high resolution to low resolution ("coarse gridding") by more than a factor of 2, you don't want to use bilinear interpolation, as it will essentially subsample the field. This is especially problematic for non-smooth, highly heterogeneous fields such as precipitation. In those cases I would always suggest using a conservative method (remapcon or remapcon2). See my video guide for details.
Another tip for speed: if you are performing the same interpolation procedure on many input files with the same resolution, you can calculate the interpolation weights once using genbil, gencon, etc., and then apply the remapping with those weights inside the loop over the files (see the example below). This is much faster, as generating the weights is the slow part of remapcon.
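A minimal sketch of that workflow for a regular 1 degree target grid (the file names here are placeholders):

cdo gencon,r360x180 file1.nc weights.nc
cdo remap,r360x180,weights.nc file1.nc file1_1deg.nc
cdo remap,r360x180,weights.nc file2.nc file2_1deg.nc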
NCO's ncremap has a one-line solution as well. Consider regridding a.nc to be on the same grid as b.nc. We will name the answer c.nc (this is the regridded a.nc).
ncremap -d b.nc a.nc c.nc
To choose conservative instead of bilinear interpolation (the default), use -a:
ncremap -a conserve -d b.nc a.nc c.nc
For one of my classes, I made a 3D graphing application (using Visual Basic). It takes in a string (z=f(x,y)) as input, parses it into RPN notation, then evaluates and graphs the equation. While it did work, it took about 20 seconds to graph. I would have liked to add slide bars to rotate the graph vertically and horizontally, but it was definitely too slow to allow that.
Does anyone know what programming languages would be best for this type of thing? Ideally, I will be able to smoothly rotate the function once it is graphed.
Also, I'm trying to find a better way to rotate the function. Right now, I evaluate it at a bunch of points and then plot the points to the screen. Every time the graph is rotated, the function must be re-evaluated and all the new points plotted. This takes just as long as the original graphing process, as it basically treats it as a completely new function.
Lastly, I need a better way to display the graph. Currently (using VB with Visual Studio) I plot 200,000 points to a chart, but this does not look great by any means. Eventually, I would like to be able to change the color based on height and apply other graphics manipulations to make it look better.
To be clear, I am not asking for someone to do any of this for me, but rather the means to go about coding this in an efficient way. I will greatly appreciate any advice anyone can give to help with any of these three concerns.
So I will explain how I would go about it using C++ and OpenGL. This doesn't mean those are the tools you must use; it's just that they are standard graphics tools.
Your function's surface is essentially a 2D manifold, which has the nice property of an intuitive mapping to a 2D space, commonly referred to as UV mapping.
What you should do is pick the ranges of the rectangular domain you want to display (minimum x, maximum x, minimum y, maximum y) and make 2 nested for loops of the form:
// Pseudocode: sample the domain on a regular grid
for (x = min_x; x <= max_x; x += step_x)
    for (y = min_y; y <= max_y; y += step_y)
        store 3D point (x, y, f(x, y))
Store all of these points in a container (a std::vector works fine in C++) and this will be your "mesh".
This is done once, prior to rendering. You then render those points using, for example, GL_POINTS, and rotate the graph mesh using rotations on the GPU.
This will only show scattered points, not a surface.
If you also wish to show the surface of your function, and not just the points, you can triangulate that set of points fairly easily.
Group each set of 4 contiguous vertices (i.e. the vertices at indices <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1>) and create the 2 triangles:
(<x,y>, <x+1,y>, <x,y+1>), (<x+1,y>, <x+1,y+1>, <x,y+1>)
This will triangulate and fill the surface of your mesh; a C++ sketch of the mesh and index generation follows below.
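A minimal sketch of how that could look in C++ (the grid resolution, domain, and function f are placeholder choices, not from the original question):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

int main() {
    // Placeholder function and domain; substitute the parsed user function.
    auto f = [](float x, float y) { return std::sin(x) * std::cos(y); };
    const int   nx = 200, ny = 200;          // grid resolution
    const float minX = -5.f, maxX = 5.f;
    const float minY = -5.f, maxY = 5.f;

    // Build the vertex "mesh" once, row by row.
    std::vector<Vec3> vertices;
    vertices.reserve(nx * ny);
    for (int j = 0; j < ny; ++j) {
        for (int i = 0; i < nx; ++i) {
            float x = minX + (maxX - minX) * i / (nx - 1);
            float y = minY + (maxY - minY) * j / (ny - 1);
            vertices.push_back({x, y, f(x, y)});
        }
    }

    // Two triangles per grid cell, expressed as indices into 'vertices'.
    std::vector<unsigned> indices;
    indices.reserve((nx - 1) * (ny - 1) * 6);
    for (int j = 0; j < ny - 1; ++j) {
        for (int i = 0; i < nx - 1; ++i) {
            unsigned v00 = j * nx + i;            // <x,   y>
            unsigned v10 = j * nx + i + 1;        // <x+1, y>
            unsigned v01 = (j + 1) * nx + i;      // <x,   y+1>
            unsigned v11 = (j + 1) * nx + i + 1;  // <x+1, y+1>
            indices.insert(indices.end(), {v00, v10, v01});   // first triangle
            indices.insert(indices.end(), {v10, v11, v01});   // second triangle
        }
    }
    // 'vertices' and 'indices' can now be uploaded once to GPU buffers
    // (e.g. glBufferData) and redrawn with a new rotation matrix each frame.
}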
Essentially you only need to build your mesh once; after that, rendering should run at 60 fps for something with 20,000 vertices, regardless of whether you render only the points or the triangles as well.
The programming language is mostly not relevant, so VB itself is probably not the issue. You can have the same problems in Python, C#, C++, etc. Of course, you must master the programming language you choose.
One key aspect is using the right algorithms and data structures. Proper use of memory allocation and memory layout to maximize CPU (and GPU) cache usage is also key. Then you must take advantage of the platform and hardware capabilities (GPU and multithreading). For the last point you definitely need to use a graphics library such as OpenGL or Vulkan.
STL is the most popular 3D model file format for 3D printing. It records the triangular surfaces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format: each triangle is represented by 12 floating-point numbers. The first 3 define the normal vector, and the next 9 define the three vertices. But here is one question. Three vertices are sufficient to define a triangle, and the normal vector can be computed by taking the cross product of two edge vectors (each pointing from one vertex to another), as sketched below.
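For illustration, a minimal C++ sketch of that cross-product computation (the Vec3 type and facetNormal name are my own, not part of the STL specification):

#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product of the two edge vectors (v1 - v0) and (v2 - v0), normalized
// to unit length: the facet normal an STL writer would store.
// (Degenerate triangles with zero area are not handled here.)
Vec3 facetNormal(Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 a{v1.x - v0.x, v1.y - v0.y, v1.z - v0.z};
    Vec3 b{v2.x - v0.x, v2.y - v0.y, v2.z - v0.z};
    Vec3 n{a.y * b.z - a.z * b.y,
           a.z * b.x - a.x * b.z,
           a.x * b.y - a.y * b.x};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}

By convention, the vertices are listed counter-clockwise when viewed from outside the solid, so a normal computed this way (right-hand rule) points outward.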
I know that a normal vector can be useful in rendering, and by including it the program doesn't have to compute the normal vectors every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose? Would that produce wrong results in the rendering software?
On the other hand, 3 vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the size of the file by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why does the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and draws triangles according to the vertices. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong surface-normal direction, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and it creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.
I have a spatial dataset consisting of a large number of point measurements (n = 10^4) taken along regular grid lines (500 m x 500 m) and some arbitrary lines and blocks in between. Individual measurements were taken with a spacing of about 0.3-1.0 m (varying) along these lines (see the example showing every 10th point).
The data can be assumed to be normally distributed but show strong small-scale variability in some regions, and there is a trend with elevation (r = 0.5) that can easily be removed.
Regardless of the coding platform, I'm looking for a good or "the optimal" way to interpolate these points to a regular 25 m x 25 m grid over the entire area of interest (5000 m x 7000 m). I know about the wide range of kriging techniques, but I wondered whether somebody has a specific idea on how to handle the oversampling along lines combined with the rather large gaps between the lines.
Thank you for any advice!
Leo
Kriging does not perform well when the points to interpolate from lie on a regular grid, because a wide range of different inter-point distances is needed to estimate the covariance model well.
Your case is a bit particular... The oversampling along the lines is not a problem at all. The main problem is the big holes you have in your grid. I think these holes will create problems whatever interpolation technique you use.
However, it is difficult to predict a priori whether kriging will behave well. I advise you to try it anyway.
Kriging is only suited to interpolation. You cannot extrapolate with a kriging metamodel, so you won't be able to predict values in, for example, the bottom-left part of your figure (because you have no points there).
To perform kriging, I advise you to use one of the following tools (depending on which languages you're more familiar with):
DiceKriging package in R (the one I prefer)
fields package in R (which is more specialized in spatial fields)
DACE toolbox in MATLAB
Bonus: a link to a reference book about kriging which is available online: http://www.gaussianprocess.org/
PS: This type of question is more statistics oriented than programming and may be better suited to the stats.stackexchange.com website.
Using with vectors in gnuplot, I can plot nice vector fields from data sets consisting of four columns. What are my options if, instead of a velocity vector field, I want to plot stream traces? Does gnuplot have built-in functionality to accomplish this?
Of course I know that I can calculate the stream traces externally based on the vector field, but I would like to have it automated in gnuplot. How should I approach this (if it is possible)?
Nope, gnuplot doesn't have that ability. There is really a huge difference in processing between plotting a vector field and plotting streamlines. The vector field depends only on the local point, whereas streamlines need to be integrated from previously computed points -- something gnuplot doesn't do.
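For what it's worth, a minimal sketch of how such an external calculation could look in C++ (forward Euler integration of a placeholder analytic 2D field; for real data you would sample or interpolate your four-column file instead):

#include <cmath>
#include <cstdio>

// Placeholder 2D vector field; replace with interpolation of your data columns.
void field(double x, double y, double &vx, double &vy) {
    vx = -y;    // a simple circulating field as a stand-in
    vy =  x;
}

int main() {
    double x = 1.0, y = 0.0;       // seed point of the trace (placeholder)
    const double h = 0.01;         // integration step
    for (int i = 0; i < 2000; ++i) {
        std::printf("%g %g\n", x, y);   // two columns gnuplot can plot "with lines"
        double vx, vy;
        field(x, y, vx, vy);
        double speed = std::hypot(vx, vy);
        if (speed < 1e-12) break;       // stop at stagnation points
        x += h * vx / speed;            // step along the normalized field direction
        y += h * vy / speed;
    }
    return 0;
}

The two-column output can then be overlaid on the vector plot with something like: plot "trace.dat" with lines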