Best way to work with rotated model grids (MF6) in Flopy? - flopy

I want to model a system that consists of glaciofluvial sediment deposited in a NE-SW orientation (which is also the direction of groundwater flow), using an MF6 DISV grid built through Flopy. To reduce the risk of numerical instability, I want to rotate the model grid 45 degrees counter-clockwise. However, rotating the model grid displaces it relative to other model-pertinent information, such as digital elevation maps, boundary conditions, and observation data.
What is the recommended approach for working with rotated model grids? Is it possible to rotate the grid, refine it, and then rotate it back to its original position before appending boundary conditions and elevations? Or is the modeller expected to rotate all the other data as well (boundary conditions, observation data, etc.)? If the latter is true, that would mean a lot of extra work for a complex model.
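For context, here is a minimal sketch of the kind of setup I have in mind (the coordinates, grid dimensions, and package arguments are placeholders of my own, so treat it as an assumption about the workflow rather than a working model):

    import flopy

    sim = flopy.mf6.MFSimulation(sim_name="rotated", sim_ws="model_ws")
    gwf = flopy.mf6.ModflowGwf(sim, modelname="rotated")
    dis = flopy.mf6.ModflowGwfdis(
        gwf,
        nlay=1, nrow=100, ncol=100, delr=25.0, delc=25.0,
        top=0.0, botm=-10.0,
        xorigin=550000.0,   # lower-left corner in real-world coordinates (placeholder)
        yorigin=6650000.0,
        angrot=45.0,        # counter-clockwise grid rotation, in degrees
    )

    # The model grid then reports real-world cell coordinates, so DEMs, boundary
    # shapefiles, and observation points could stay unrotated and be intersected
    # with the grid (e.g. via flopy.utils.GridIntersect) in world coordinates.
    print(gwf.modelgrid.xcellcenters[0, 0], gwf.modelgrid.ycellcenters[0, 0])

Is this (attaching the rotation as grid metadata and leaving everything else in world coordinates) the intended workflow?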
Ultimately, I want to perform parameter estimation / history matching using PEST/PEST++. What could be the consequence of having a rotated model grid when performing model calibration?
Very curious to hear your opinions and recommendations.

Related

How to approximate low-res 3D density map to smooth models?

3D density maps can of course be plotted as heatmaps, but here the data itself is homogeneous (near 0) except for a small part (a 2D cross-section, for example):
This should give a letter-'E' shape as the 2D "model". The original data is not saved as a point cloud, however.
A naive approach would be to keep the pixels above a certain value and then smooth the border. However, this does not take into account that border pixels carry only part of the signal.
Another would be to use some point-cloud-based algorithms that come with modelling software, but then the point cloud's probability function would still be discontinuous at pixel borders, and would not account for only one side having signal.
Is there any tested solution to this (the example is 2D; the actual case is many 2D slices that compose a low-res 3D density map)? I was thinking of giving border pixels an area proportional to their signal, with the border defined from the gradient. Any suggestions?
I was thinking of model visualization results similar to this (it seems to be based on an established point-cloud algorithm):
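In the meantime, here is a rough sketch of the interpolation direction I was considering (my own toy example; the array values and the 0.5 level are made up), since marching squares already gives a sub-pixel boundary by interpolating the iso-level between pixel centres:

    import numpy as np
    from skimage import measure

    # Toy low-res density slice shaped like a letter 'E' (values are made up)
    density = np.zeros((20, 20))
    density[4:16, 4:7] = 1.0     # spine
    density[4:6, 7:14] = 1.0     # top arm
    density[9:11, 7:14] = 1.0    # middle arm
    density[14:16, 7:14] = 1.0   # bottom arm

    # Marching squares: the 0.5 iso-contour is linearly interpolated between
    # pixel centres, so partially filled border pixels pull the contour inward
    contours = measure.find_contours(density, level=0.5)
    for c in contours:
        print(c.shape)           # (n_points, 2) polygon with fractional coordinates

Stacking such per-slice contours and meshing them would be one route to a smooth 3D model, though it still does not weight border pixels by how much signal they carry.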

Is labelling images with polygon better than square?

I aim to build an object detection model, and I labelled the data with square boxes.
If I label the images with polygons, will it be better than squares?
(I am labelling images of people wearing safety helmets or not.)
I did try labelling a few images with polygon shapes, but after exporting the txt file for YOLO,
it has only 4 values per object, the same as when labelled with a square shape.
How do those values represent the area that I labelled accurately?
1 0.573748 0.018953 0.045332 0.036101
1 0.944520 0.098375 0.108931 0.167870
You have labelled your object in a polygon format, but when you converted it to YOLO format, the information in the labelling was reduced. The picture below shows what I suppose has happened;
...where you have done a polygon-shape annotation (black shape). The conversion has kept only the polygon's bounding box, i.e. the smallest and largest x- and y-values among the polygon's coordinate points. In the YOLO txt format that box is then written as the normalised centre-x, centre-y, width, and height, which is why only four numbers per object remain.
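To make that concrete, here is a small sketch (my own, not taken from any particular labelling tool) of how a polygon collapses to a YOLO-format box in such a conversion:

    # Hypothetical conversion: only the polygon's bounding box survives, written
    # as normalised centre-x, centre-y, width, height (the YOLO txt layout)
    def polygon_to_yolo(points, img_w, img_h, class_id=1):
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        x_min, x_max = min(xs), max(xs)
        y_min, y_max = min(ys), max(ys)
        x_centre = (x_min + x_max) / 2 / img_w
        y_centre = (y_min + y_max) / 2 / img_h
        width = (x_max - x_min) / img_w
        height = (y_max - y_min) / img_h
        return f"{class_id} {x_centre:.6f} {y_centre:.6f} {width:.6f} {height:.6f}"

    # Made-up helmet polygon in a 1920x1080 image
    print(polygon_to_yolo([(1090, 12), (1135, 30), (1120, 55), (1075, 45)], 1920, 1080))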
A good description of the idea behind the labelling and the dataset is given in https://www.youtube.com/watch?v=h6s61a_pqfM.
In short: for your purpose (and for efficiency), I propose you do fast and convenient annotation using rectangles only, with no time-consuming polygon annotation.
The YOLO version you are using very likely only supports rectangular (box) annotations.
See this video showing square vs. polygon quality of results for detection, and the problem of the annotation time required to create custom datasets.
To use polygonal masks, may I suggest switching to YOLOv3-Polygon or YOLOv5-Polygon.

Detecting center and area of shapes in an image

I am working with the GD library, and I'm looking for a way to detect the pixel nearest to the centre of each shape, as well as the total area covered by each shape, in a monochrome black-and-white image.
I'm having difficulty coming up with an efficient algorithm to do this. If you have done something similar to this in the past, I'd be grateful for any solution that would help.
Check out the binary image library
Essentially: apply an Otsu threshold to separate the foreground from the background, then label connected components. That particular image looks very clean, but you might need morphological operations to clean it up a bit and get rid of small holes and other artifacts.
Then you get the area trivially (count the pixels in each component) or almost as trivially (use a weighted area function that penalises edge pixels). The centre is just the mean of each component's pixel coordinates.
http://malcolmmclean.github.io/binaryimagelibrary/
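If you would rather do it in Python, a minimal sketch of the same pipeline with OpenCV (my substitution for the library above; the filename is a placeholder):

    import cv2
    import numpy as np

    img = cv2.imread("dots.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

    # Otsu threshold separates the foreground dots from the background
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Optional morphological clean-up of small holes and artifacts
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Connected components give the area and centroid of each blob in one call
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        cx, cy = centroids[i]
        print(f"blob {i}: area={area}, centre=({cx:.1f}, {cy:.1f})")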
@MalcolmMcLean is right, but there are remaining difficulties (if you are after maximum accuracy).
If you threshold with Otsu, there are a few pairs of "kissing" dots which will form a single blob using connected component analysis.
In addition, Otsu thresholding will discard some of the partially filled edge pixels, so the weighted averages will be inaccurate. A cure would be to increase the threshold (up to 254 is possible), but that worsens the problem of the kissing dots.
A workaround is to keep a low threshold and dilate the blobs individually to obtain suitable masks that cover all edge pixels. Even so, slight inaccuracies will result in the vicinity of the kissing points.
Blob splitting by the watershed transform is also possible, but more care is required to handle the shared pixels. I doubt that a perfect solution is possible.
An alternative is the use of subpixel edge detection and least-squares circle fitting (after blob detection with a very low threshold to separate the dots). By avoiding the edge pixels common to two circles, you can probably achieve excellent results.
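For the circle-fitting step, a least-squares (Kasa-style algebraic) fit is only a few lines. This sketch is my own and assumes you already have sub-pixel edge points for a single dot:

    import numpy as np

    def fit_circle(xs, ys):
        # Fit x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        b = -(xs**2 + ys**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        r = np.sqrt(cx**2 + cy**2 - F)
        return cx, cy, r

    # Synthetic noisy edge points on a circle of radius 10 centred at (30, 40)
    t = np.linspace(0, 2 * np.pi, 50)
    xs = 30 + 10 * np.cos(t) + np.random.normal(0, 0.1, t.size)
    ys = 40 + 10 * np.sin(t) + np.random.normal(0, 0.1, t.size)
    print(fit_circle(xs, ys))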

How to compute a 3d miniature model from a large set of 3d geometric models

I want to import a set of 3D geometries into the current scene. The imported geometry contains tons of basic components which together may represent an
entire building. The product manager wants the entire building to be displayed
as a 3D miniature (colours and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data in reasonable time and at reasonable memory cost?
// worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a type of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to achieve effects like bump mapping, so that when the view position changes, the texture changes and makes the viewer feel as if they were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS; any ideas?
An option is to generate side views of the building at a suitable resolution, using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. And in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations, with the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
It can also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
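If you do get access to a Z-buffer, here is a rough sketch (my own, assuming the depth read-back arrives as a 2D array) of turning it into a normal map for the embossing/bump-mapping idea:

    import numpy as np

    def depth_to_normal_map(depth, strength=1.0):
        # Gradients of the elevation field give the surface slope per texel
        dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
        nx = -dz_dx * strength
        ny = -dz_dy * strength
        nz = np.ones_like(nx)
        norm = np.sqrt(nx**2 + ny**2 + nz**2)
        normals = np.stack([nx / norm, ny / norm, nz / norm], axis=-1)
        # Pack into the usual 0..255 RGB normal-map encoding
        return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

    depth = np.random.rand(256, 256)   # placeholder for a real Z-buffer read-back
    normal_map = depth_to_normal_map(depth)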

Use Kalman filter to track the position of an object, but need to know the position of that object as an input of Kalman filter. What is going on?

I am trying to teach myself how to use a Kalman filter for tracking an object (a ball) moving in a video sequence, so please explain it to me as if I were a child.
Using some algorithms (colour analysis, optical flow, ...), I am able to get a binary image of each video frame in which the tracked object is white pixels and the background is black pixels -> I know the object size, centroid, and position -> I just draw a bounding box around the object -> finished. Why do I need a Kalman filter here?
OK, somebody told me that because I cannot detect the object in every video frame due to noise, I need to use a Kalman filter to estimate the position of the object. OK, fine. But as far as I know, I need to provide inputs to the Kalman filter: the previous state and a measurement.
Previous state (so I think it is the position, velocity, acceleration, ... of the object in the previous frame) -> OK, this is fine with me.
Measurement of the current state: here is what I cannot understand. What can the measurement be?
- The position of the object in the current frame? That seems absurd, because if I already know the position of the object, all I need to do is draw a simple bounding box (rectangle) around it. Why would I need a Kalman filter anymore? Therefore, it seems impossible to take the position of the object in the current frame as the measurement value.
- "Kalman Filter Based Tracking in an Video Surveillance System" article says
The main role of the Kalman filtering block is to assign a tracking
filter to each of the measurements entering the system from the
optical flow analysis block.
If you read the full paper, you will see that the author takes the maximum number of blobs and the minimum size of a blob as inputs to the Kalman filter. How can those parameters be used as measurements?
I think I am going in circles now. I want to use a Kalman filter to track the position of an object, but I need to know the position of that object as an input to the Kalman filter. What is going on?
And one more question: I don't understand the term "number of Kalman filters". In a video sequence, if there are two objects to track, do I need to use two Kalman filters? Is that what it means?
You don't use the Kalman filter to give you an initial estimate of something; you use it to give you an improved estimate based on a series of noisy estimates.
To make this easier to understand, imagine you're measuring something that is not dynamic, like the height of an adult. You measure once, but you're not sure of the accuracy of the result, so you measure again for 10 consecutive days, and each measurement is slightly different, say a few millimeters apart. So which measurement should you choose as the best value? I think it's easy to see that taking the average will give you a better estimate of the person's true height than using any single measurement.
OK, but what has that to do with the Kalman filter?
The Kalman filter is essentially taking an average of a series of measurements, as above, but for dynamic systems. For instance, let's say you're measuring the position of a marathon runner along a race track, using information provided by a GPS + transmitter unit attached to the runner. The GPS gives you one reading per minute. But those readings are inaccurate, and you want to improve your knowledge of the runner's current position. You can do that in the following way:
Step 1) Using the last few readings, you can estimate the runner's velocity and estimate where he will be at any time in the future (this is the prediction part of the Kalman filter).
Step 2) Whenever you receive a new GPS reading, do a weighted average of the reading and of your estimate obtained in step 1 (this is the update part of the Kalman filter). The result of the weighted average is a new estimate that lies in between the predicted and measured position, and is more accurate than either by itself.
Note that you must specify the model you want the Kalman filter to use in the prediction part. In the marathon runner example you could use a constant velocity model.
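A minimal numeric sketch of those two steps for the runner example (all positions, noise levels, and readings below are made up):

    import numpy as np

    dt = 60.0                                  # one GPS reading per minute (seconds)
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity state transition
    H = np.array([[1.0, 0.0]])                 # we only measure position
    Q = np.diag([1.0, 0.01])                   # process noise (model uncertainty)
    R = np.array([[25.0**2]])                  # GPS noise, std. dev. ~25 m

    x = np.array([[0.0], [3.0]])               # state: position 0 m, velocity 3 m/s
    P = np.eye(2) * 100.0                      # initial state uncertainty

    for z in [190.0, 355.0, 540.0]:            # noisy GPS positions, one per minute
        # Step 1: predict where the runner should be now
        x = F @ x
        P = F @ P @ F.T + Q
        # Step 2: weighted average of the prediction and the new reading
        y = np.array([[z]]) - H @ x            # innovation (measurement - prediction)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain (the weighting)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        print(f"position: {x[0, 0]:.1f} m, velocity: {x[1, 0]:.2f} m/s")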
The purpose of the Kalman filter is to mitigate the noise and other inaccuracies in your measurements. In your case, the measurement is the x,y position of the object that has been segmented out of the frame. If you could perfectly segment out the ball, and only the ball, from the background in every frame, there would be no need for the Kalman filter, since your measurements would in effect contain no noise.
In most applications, perfect measurements cannot be guaranteed for a number of reasons (change in lighting, change in background, other moving objects, etc.) so there needs to be a way of filtering the measurements to produce the best estimate of the true track.
What the Kalman Filter does is use a model to predict what the next position should be assuming the model holds true, and then compares that estimate to the actual measurement you pass in. The actual measurement is used in conjunction with the prediction and noise characteristics to form the final position estimate and update a characterization of the noise (measure of how much the measurements are differing from the model).
The model could be anything that models the system you are trying to track. A common model is a constant velocity model which just assumes that the object will continue to move with the same velocity as in the previous estimate. This is not to say that this model will not track something with a changing velocity since the measurements will reflect the change in velocity and affect the estimate.
There are a number of ways you can attack the problem of tracking multiple objects at once. The simplest way is to use an independent Kalman filter for each track. This is where the Kalman filter really starts to pay off because if you are using the simple approach of just using the centroid of a bounding box, what happens if the two objects cross one another? Can you again differentiate which object is which after they separate? With the Kalman filter, you have the model and prediction that will help keep the track correct when other objects are interfering.
There are also more advanced ways of tracking multiple objects jointly like a JPDAF.
Jason has given a good start on what the Kalman filter is. In regard to your question as to how the paper can use the maximum number of blobs and the minimum size of a blob, this is exactly the power of the Kalman filter.
A measurement need not be a position, a velocity, or an acceleration. A measurement can be any quantity that you can observe at a time instance. If you can define a model that predicts your measurement at the next time instance given the current measurement, the Kalman filter can help you mitigate the noise.
I would suggest you look into more introductory materials on Image Processing and Computer Vision. These materials will almost always cover Kalman filter.
Here is a SIGGRAPH course on trackers. It is not introductory but should give you a more in-depth look at the topic.
http://www.cs.unc.edu/~tracker/media/pdf/SIGGRAPH2001_CoursePack_08.pdf
In the case that you can find the ball exactly in every frame, you don't need a Kalman filter. But just because you find some blob which is likely the ball, it doesn't mean that the centre of that blob will be the exact centre of the ball. Think of that as your measurement error. Also, if you happen to pick out the wrong blob, using a Kalman filter will help prevent you from trusting that one wrong measurement. Like you said before, if you can't find the ball in a frame, you can also use the filter to estimate where it is likely to be.
Here are some of the matrices you will need, and my guess at what they would be for you. Since the x and y position of the ball is independent, I think it is easier to have two filters, one for each. Both would look kinda like this:
x = [position ; velocity] //This is the output of the filter
P = [1, 0 ; 0 ,1] //This is the uncertainty of the estimation, I am not quite sure what you should have to start, but it will converge once the filter is running.
F = [1, dt ; 0, 1] When you compute F*x this will predict the next location of the ball. Notice that this assumes the ball keeps moving with the same velocity as before, and just updates the position.
Q = [0, 0 ; 0, vSigma^2] This is the "process noise", one of the matrices you tune to make the filter perform well. In your system, velocity can change at any time, but position never changes except through the velocity. vSigma should be the standard deviation of those velocity changes (so the matrix entry is its variance).
z = [position in x or y] This is your measurement.
H = [1, 0] This is how your state maps onto the measurement. Since you only measure position, this 1x2 row just picks out the position component of the state.
R = [?] I think you will only need a scalar for R, which is the variance of your measurement error.
With those matrices you should be able to plug them into the formulas that are everywhere for Kalman filters.
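If you would rather not code the update equations by hand, here is a sketch of the same per-axis setup using the filterpy library (my choice of library, not part of the answer above; the noise values and centroids are placeholders):

    import numpy as np
    from filterpy.kalman import KalmanFilter

    def make_axis_filter(dt=1.0, v_sigma=2.0, meas_sigma=3.0):
        kf = KalmanFilter(dim_x=2, dim_z=1)
        kf.x = np.array([[0.0], [0.0]])                   # x = [position; velocity]
        kf.P = np.eye(2)                                  # initial uncertainty
        kf.F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity prediction
        kf.Q = np.array([[0.0, 0.0], [0.0, v_sigma**2]])  # process noise on velocity
        kf.H = np.array([[1.0, 0.0]])                     # we only measure position
        kf.R = np.array([[meas_sigma**2]])                # measurement noise
        return kf

    kf_x, kf_y = make_axis_filter(), make_axis_filter()
    for zx, zy in [(12.0, 40.0), (15.1, 38.2), (18.9, 36.5)]:   # made-up ball centroids
        kf_x.predict(); kf_x.update(zx)
        kf_y.predict(); kf_y.update(zy)
        print(kf_x.x[0, 0], kf_y.x[0, 0])                 # filtered x and y positions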
Some good things to read:
Kalman filtering demo
Another great intro; read the page linked to in the third paragraph
I had this question a few weeks ago. I hope this answer helps other people.
If you can get a good segmentation at each frame (the whole ball), you don't need to use a Kalman filter. But segmentation can give you a set of unconnected blobs (only a few parts of the ball). The problem is knowing which parts (blobs) belong to the object and which are just noise. Using a Kalman filter, we can assign blobs near the estimated position as parts of the object. E.g. if the ball has a radius of 10 pixels, blobs at a distance greater than 15 should not be considered part of the object.
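A tiny sketch of that gating idea (the numbers are made up):

    import numpy as np

    predicted = np.array([120.0, 85.0])          # ball centre predicted by the filter
    blob_centroids = np.array([[118.0, 87.0],    # fragment of the ball
                               [123.0, 82.0],    # fragment of the ball
                               [300.0, 40.0]])   # unrelated noise blob

    gate_radius = 15.0
    dists = np.linalg.norm(blob_centroids - predicted, axis=1)
    ball_blobs = blob_centroids[dists <= gate_radius]
    measurement = ball_blobs.mean(axis=0)        # fused centroid fed to the filter update
    print(measurement)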
The Kalman filter uses the previous state to predict the current state, and uses the current measurement (the current object position) to improve its next prediction. E.g. if a vehicle is at position 10 (previous state) and moves with a velocity of 5 m/s, the Kalman filter predicts the next position to be 15. But if we measure the position of the object and find it at position 18, then, to improve the estimation, the Kalman filter updates the velocity towards 8 m/s.
In summary, the Kalman filter is mainly used to solve the data association problem in video tracking. It is also good for estimating the object position, because it takes into account the noise in the source and in the observation.
And for your final question, you are right. It corresponds to the number of objects to track (one Kalman filter per object).
In vision applications, it is common to use your per-frame detection results as the measurement; for example, the location of the ball in each frame is a good measurement.
