Example creating a MF-USG model in flopy

I am interested in creating a MF-USG model in flopy with a quadtree grid. I have created the DISU using gridgen, but now I am stuck on how to assign properties and boundaries to the model, as all the examples of unstructured grids I can find are for MF6.
This is where I am at...
m = flopy.modflow.Modflow(model_name=model_name, version="mfusg", structured=False, model_ws=model_ws)
m.dis = g.get_disu(m, nper=1, perlen=1000, nstp=100, tsmult=1.2, steady=False)
Can anyone point me to an example of building a MF-USG model with properties and BCs?

The MODFLOW-USG packages that flopy supports are DISU and SMS.
See the GitHub FloPy Supported Packages page.
I guess it will be hard, if not impossible, to find a fully MODFLOW-USG oriented example with flopy.
Changing to MODFLOW 6 might be a better alternative, especially now that it supports groundwater transport, subsidence, and density-driven flow, while sharing most of the underlying theory.

Related

How to get the inference compute graph of the pytorch model?

I want to hand-write a framework to perform inference of a given neural network. The network is quite complicated, so to make sure my implementation is correct, I need to know exactly how the inference process is carried out on the device.
I tried to use torchviz to visualize the network, but what I got seems to be the back-propagation compute graph, which is really hard to understand.
Then I tried to convert the PyTorch model to ONNX format, following the instructions enter link description here, but when I tried to visualize it, it seems that the original layers of the model had been separated into very small operators.
I just want to get the result like this
How can I get this? Thanks!
Have you tried saving the model with torch.save (https://pytorch.org/tutorials/beginner/saving_loading_models.html) and opening it with Netron? The last view you showed is a view of the Netron app.
You can also try the torchview package, which provides several features that are especially useful for large models. For instance, you can set the display depth (the depth in the nested hierarchy of modules).
It is also based on forward propagation.
github repo
Disclaimer: I am the author of the package
Note: the accepted input format for the tool is a PyTorch model.

What does this command do for a BERT transformer?

!pip install transformers
from transformers import InputExample, InputFeatures
What are InputExample and InputFeatures here?
thanks.
Check out the documentation.
Processors
This library includes processors for several traditional tasks. These
processors can be used to process a dataset into examples that can be
fed to a model.
And
class transformers.InputExample
A single training/test example for simple sequence classification.
As well as
class transformers.InputFeatures
A single set of features of data. Property names are the same names as
the corresponding inputs to a model.
So basically, InputExample is just a raw input, and InputFeatures is the (numerical) feature representation of that input that the model consumes.
I couldn't find any tutorial explicitly explaining this, but you can check out Chapter 4 ("From text to features") in this tutorial, where it is nicely explained with an example.
From my experience, the transformers library has an absolute ton of classes and structures, so going too deep into the technical implementation makes it easy to get lost. For starters, I would recommend getting an idea of the broader picture by making some example projects work and checking out their 🤗 Course.
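The raw-input-versus-features distinction described above can be sketched in plain Python. The field names below mirror the ones in the transformers documentation quoted earlier (guid, text_a, text_b, label; input_ids, attention_mask), but these classes and the toy tokenizer are simplified stand-ins for illustration, not the real library code:

```python
from dataclasses import dataclass
from typing import List, Optional

# Simplified stand-ins for transformers.InputExample / InputFeatures,
# just to illustrate the raw-input -> numerical-features relationship.
@dataclass
class InputExample:
    guid: str                      # unique id for the example
    text_a: str                    # first (or only) sequence
    text_b: Optional[str] = None   # optional second sequence (e.g. for NLI)
    label: Optional[str] = None    # gold label, if any

@dataclass
class InputFeatures:
    input_ids: List[int]           # token ids the model actually consumes
    attention_mask: List[int]      # 1 for real tokens, 0 for padding
    label: Optional[int] = None    # label encoded as an integer

# A toy "tokenizer": maps each word to a made-up id (NOT the real BERT tokenizer).
def toy_featurize(example: InputExample, vocab: dict, label_map: dict) -> InputFeatures:
    ids = [vocab.get(w, 0) for w in example.text_a.split()]
    return InputFeatures(
        input_ids=ids,
        attention_mask=[1] * len(ids),
        label=label_map.get(example.label),
    )

example = InputExample(guid="train-1", text_a="great movie", label="pos")
features = toy_featurize(example, vocab={"great": 7, "movie": 12},
                         label_map={"pos": 1, "neg": 0})
print(features.input_ids)  # [7, 12]
print(features.label)      # 1
```

The real classes carry a few more fields (e.g. token_type_ids) and the real featurization involves subword tokenization, padding, and truncation, but the raw-to-numerical mapping is the same idea.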

Is there a way to extract geometric information out of cad model for feature recognition?

I am really new to this, so any help is appreciated. Basically, I am trying to use PyTorch Geometric to identify topological features within 3D CAD models (i.e. slots, pockets, holes, etc.), but in order to do that I need to represent the CAD model as a graph in PyTorch Geometric.
For the input data, I am thinking of using adjacency of the faces to identify the features within the model.
Below is an example of what I want to achieve from a 3D model: the relationships between the faces are represented in graph format.
So after getting the above graph I want to feed that to the algorithm for the graph classification.
The issue I am facing is how to extract that adjacency information from the CAD model (i.e. say face 1 is connected to face 3; I take the two faces as two nodes in the graph and connect them with an edge because the faces are touching) and make a graph out of it, as shown in the image above.
I did come across a tool called pythonOCC, but I am not sure whether I can use it to extract the adjacency information; if possible, please suggest what I can do with that tool.
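The graph-building half of this task can be sketched in plain Python. The face pairs below are invented for illustration; in practice they would come from the CAD kernel (in pythonOCC, roughly by mapping each B-rep edge to the faces that share it), which is not shown here:

```python
# Sketch: turning face-adjacency pairs into a graph for PyTorch Geometric.
# The adjacency pairs are assumed input here -- in practice they would come
# from a CAD kernel such as pythonOCC by finding faces that share an edge.

# Hypothetical output of the CAD-kernel step: pairs of touching face ids.
adjacent_faces = [(1, 3), (1, 2), (2, 4), (3, 4)]

# PyTorch Geometric stores graphs in COO format: edge_index = [sources, targets].
# An undirected face-adjacency edge becomes two directed edges.
sources, targets = [], []
for a, b in adjacent_faces:
    sources += [a, b]
    targets += [b, a]
edge_index = [sources, targets]

print(edge_index)
# [[1, 3, 1, 2, 2, 4, 3, 4], [3, 1, 2, 1, 4, 2, 4, 3]]
```

From there, `edge_index` can be wrapped in a `torch.tensor` and passed to a PyTorch Geometric `Data` object together with per-face node features.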

Using edge features for GCN in DGL

I'm trying to implement a graph convolutional network (GCN) in the Deep Graph Library (DGL) package for Python. In many papers, edges have discrete features, and each possible value is associated with a different weight matrix or set of weight matrices. An example would be here. Is anyone familiar with how to implement a model like this in DGL? The DGL team's example of GCNs for graph classification does not use edge features, and neither does another example I found online.
Not sure whether the question still needs an answer, but I guess it boils down to how to implement models like R-GCN or HGT with DGL. Some of these layers come built-in with DGL here, but it is also easy to implement your own computations.
The following explanation only makes sense if you know DGL's basic computational process during a forward pass through a graph layer (message, reduce, apply_node); if not, DGL has good tutorials on that as well. To extend the usual graph computation to, for example, edges of different types, you need to create a heterogeneous graph object and call multi_update_all on that graph object. You can pass that function a dictionary which specifies the computation per edge type.
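To make the per-edge-type computation concrete, here is a minimal plain-Python sketch (no DGL) of the R-GCN-style update described above: each edge type r has its own weight matrix W_r, and a node sums W_r · h_j over its neighbors j under each relation, which is essentially what multi_update_all orchestrates. The toy graph, relations, and weights are made up for illustration:

```python
# Toy graph: 3 nodes with 2-dim features and two edge types.
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
edges = {                      # relation -> list of (src, dst)
    "cites": [(0, 2), (1, 2)],
    "likes": [(2, 0)],
}
W = {                          # relation -> its own 2x2 weight matrix
    "cites": [[1.0, 0.0], [0.0, 1.0]],   # identity, for a readable result
    "likes": [[0.0, 1.0], [1.0, 0.0]],   # swaps the two components
}

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Message + reduce, done separately per edge type and summed at the target --
# the computation multi_update_all performs given a per-edge-type dictionary.
new_h = {i: [0.0, 0.0] for i in h}
for rel, edge_list in edges.items():
    for src, dst in edge_list:
        msg = matvec(W[rel], h[src])          # relation-specific transform
        new_h[dst] = [a + b for a, b in zip(new_h[dst], msg)]

print(new_h[2])  # h[0] + h[1] under "cites" identity -> [1.0, 1.0]
print(new_h[0])  # h[2] swapped under "likes" -> [1.0, 1.0]
```

A real R-GCN layer adds self-loops, normalization, and a nonlinearity on top of this sum, but the relation-specific weight idea is the same.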

Best practice for using a Spark-generated MLlib model as a server

I am trying to find out what the proper way is to use a model generated by Spark+MLlib (in this case a Collaborative Filtering Recommendation Engine) to provide predictions quickly, on demand, and as a server.
My current solution is to run an instance of Spark continuously for this purpose, but I wanted to know whether there are better solutions to this, perhaps a solution that does not require a running Spark. Perhaps there is a way to load and use a generated model by Spark without involving Spark?
You can export a model via pmml and then take that model and use it in another application.
Now I have found the way. First, we can save the ALS model's product features and user features via model.productFeatures() and model.userFeatures().
Then the product features look like this (an id on one line, followed by its comma-separated factor vector):
209699159874445020
0.0533636957407,-0.0878632888198,0.105949401855,0.129774808884,0.0953511446714,0.16420891881,0.0558457262814,0.0587058141828
So we can load the product features and user features into two dicts in Python and build a server with Tornado that predicts ratings using those two dicts. I will show the code as an example.
import numpy as np

def predict(item_id, user_id):
    # the feature arrays keep the id in column 0, so skip it with 1:
    gf = goods_features[item_id_index[item_id], 1:]
    uf = user_features[user_id_index[user_id], 1:]
    # the predicted rating is the dot product of item and user factor vectors
    return float(np.dot(gf, uf))
In conclusion: we need to persist the ALS model ourselves, and it isn't as difficult as we thought. Any suggestions are welcome.
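For completeness, a feature dump in the format shown above (an id line followed by a comma-separated factor vector) can be parsed into such a dict with a few lines of plain Python. The sample records below are shortened and partly invented for illustration:

```python
# Stand-in for the saved feature file: alternating id / vector lines.
raw = """209699159874445020
0.0533636957407,-0.0878632888198,0.105949401855,0.129774808884
42
0.5,-0.25,0.125,1.0"""

def parse_features(text):
    """Parse alternating id/vector lines into {id: [floats]}."""
    features = {}
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    for id_line, vec_line in zip(lines[0::2], lines[1::2]):
        features[int(id_line)] = [float(x) for x in vec_line.split(",")]
    return features

product_features = parse_features(raw)
print(len(product_features))     # 2
print(product_features[42][0])   # 0.5
```

The same parser works for the user-feature dump, after which the two dicts can be handed to the predict function above.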
