I would like to use ParaView for postprocessing of FE models. However, I am missing an essential feature in the VTK format, one which probably exists, but I don't know its name or how it is implemented in VTK.
In FE models it is common to group some nodes/elements. Depending on the program these are named differently: Groups, Sets, Selections, ... . Basically, they are just arrays of reference numbers for quick selection. For example: a tube could have the selections "inlet", "outlet" and "wall". Is there any possibility to store such a selection in the VTK format? The goal would be to be able to apply filters only to this node selection, for example to get results only from certain nodes.
By the way, I do the export of my calculated data to VTK on my own, because my FE program does not have native support for the VTK format. So I am more interested in the required data structure than in a workflow for program XY.
In VTK, you cannot apply filters to only a subset of a data object. What you need is to split your data into several objects for processing.
I see two ways to do that:
1. Create one dataset per selection and assemble them into a MultiBlockDataSet, one block per selection. Then you can use vtkExtractBlock to apply filters to a specific part.
2. Add a PartId array to your data. Then you can use thresholding to extract the region of interest.
I advise using option 1, as it carries more semantic information. A minimal sketch of both approaches is below.
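This sketch uses the VTK Python bindings; the grid variables and selection names are placeholders for your own data.

    import vtk

    # --- Option 1: one dataset per selection, stored as named blocks ---
    # inlet_grid, outlet_grid, wall_grid are your vtkUnstructuredGrid objects (placeholders).
    multiblock = vtk.vtkMultiBlockDataSet()
    for i, (name, grid) in enumerate([("inlet", inlet_grid),
                                      ("outlet", outlet_grid),
                                      ("wall", wall_grid)]):
        multiblock.SetBlock(i, grid)
        multiblock.GetMetaData(i).Set(vtk.vtkCompositeDataSet.NAME(), name)

    writer = vtk.vtkXMLMultiBlockDataWriter()
    writer.SetFileName("model.vtm")          # .vtm = XML multi-block format
    writer.SetInputData(multiblock)
    writer.Write()

    # --- Option 2: one grid plus an integer PartId cell array for thresholding ---
    # part_of_cell maps each cell index to a part id, e.g. 0=inlet, 1=outlet, 2=wall (placeholder).
    part_id = vtk.vtkIntArray()
    part_id.SetName("PartId")
    for c in range(whole_grid.GetNumberOfCells()):
        part_id.InsertNextValue(part_of_cell[c])
    whole_grid.GetCellData().AddArray(part_id)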
I have text data from two different groups. In total I have around 4,000 text passages of around 300 words each.
I am searching for a tool that allows me to analyze the difference between these two groups.
In the best case, this tool can analyze different dimensions, e.g. the length of sentences, usage of superlatives, perspective of the narrator, usage of the passive voice, clear and objective writing vs. hedging and imprecise writing.
In Python, you can use the nltk or spaCy packages to process the texts so that you can analyze them (using pandas, for example). But there's no ready-made software (as far as I know) that will do all of that for you. You're going to have to write your own code.
For example, you would create a pandas dataframe with a row for each text, with its group ('A' or 'B' or whatever) as one column and the raw text as another. Then you use nltk to tokenize the text and do whatever other preprocessing you want, storing the clean, tokenized text in another column. Then you can add a column for, for example, sentence length (which you can compute using nltk). From there you'll be able to get the means of the two groups, standard deviations, the statistical significance of the difference, etc.
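For instance, here is a minimal sketch of that setup (the texts, column names and group labels are made up; it assumes nltk's punkt tokenizer data is available):

    import nltk
    import pandas as pd
    from scipy import stats

    nltk.download("punkt")  # sentence/word tokenizer models

    # One row per text, with its group label and the raw passage.
    df = pd.DataFrame({
        "group": ["A", "A", "B", "B"],               # your ~4000 rows go here
        "text": ["First passage ...", "Second passage ...",
                 "Third passage ...", "Fourth passage ..."],
    })

    def mean_sentence_length(text):
        sentences = nltk.sent_tokenize(text)
        return sum(len(nltk.word_tokenize(s)) for s in sentences) / len(sentences)

    df["mean_sentence_length"] = df["text"].apply(mean_sentence_length)

    # Per-group mean and standard deviation, plus a Welch t-test for the difference.
    print(df.groupby("group")["mean_sentence_length"].agg(["mean", "std"]))
    a = df.loc[df["group"] == "A", "mean_sentence_length"]
    b = df.loc[df["group"] == "B", "mean_sentence_length"]
    print(stats.ttest_ind(a, b, equal_var=False))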
It's straightforward for something like sentence length, but the other features you mention are more difficult. What does it mean for a text to be clear and objective, or hedged and imprecise? That means nothing on its own: you have to decide what exactly you mean by that, and what features characterize it. For example, you could make a list of hedgers ('I think', 'may', 'might', 'I'm not sure but', etc.) and then count their frequency in each text.
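Continuing the sketch above, a crude hedge count could look like this (the word list is only an example and would need to be tailored to your texts):

    import re

    HEDGES = ["i think", "might", "perhaps", "i'm not sure"]  # extend as needed

    def hedge_count(text):
        lowered = text.lower()
        # \b word boundaries avoid matching e.g. "mighty" when looking for "might"
        return sum(len(re.findall(r"\b" + re.escape(h) + r"\b", lowered)) for h in HEDGES)

    df["hedge_count"] = df["text"].apply(hedge_count)
    # Normalize by passage length so longer texts don't dominate.
    df["hedges_per_100_words"] = 100 * df["hedge_count"] / df["text"].str.split().str.len()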
Something like "perspective of the narrator" might need to be annotated manually, depending on what you mean by it. If you just mean 1st person vs. 3rd person, that could be easy to identify (compare the 'I's vs. the 'he/she's), but anything more subtle than that, I'm not sure how you'd do it.
Good luck with your project!
After some searching I've learned it is possible to create multiple Vertex Buffers, each for a specific 3D model, and set them in the Input Assembler to be read by my shaders, or at least that is what I could understand. But by reading Microsoft's documentation I got very confused about how to do this the right way (this is what I was reading): it says I can pass an array of Vertex Buffers to the IA stage, but it also says that the maximum number of Vertex Buffers my Input Assembler can take in D3D11 is 32. What would I do if I needed 50 different models rendered at the same time? It would also help if someone could clarify how pOffset works in this situation with multiple models; as far as I understand it should always be 0, since the beginning of my buffers is always the vertex data, but I may have understood that wrong. Lastly, I want to add that I've already rendered buffers consisting of multiple models together, but I don't know exactly how to deal with many individual models.
The short answer is: You don't try to draw all your models in one Draw call.
You are free to organize rendering in many ways, but here is one approach:
A 'model' consists of one or more 'meshes'. Each mesh is a collection of vertices (in a VB), indices (in an IB), and some material information associated with each 'subset' of indices.
To draw:
    foreach M in models
        foreach mesh in M
            foreach part in mesh
                Set shaders based on material
                Set VB/IB based on mesh
                DrawIndexed
Since this is a number of nested loops, there are several ways to improve the performance. For example, you might just queue up the information instead of actually calling DrawIndexed, then sort by material. Then call DrawIndexed from the sorted queue.
For alpha-blending to appear correct, you have to do at least two rendering passes: a first pass to render opaque things, then a second pass to render alpha-blended things.
You may also want to combine all the content in a given model into one VB and one IB with offsets rather than use individual resources.
You may have the same model in multiple locations in the world, so you may have many model instances sharing the same mesh data. In this case, sorting by VB/IB as well as material could be useful. If you are drawing the same model in many locations (100s or 1000s), then you should look into hardware instancing.
An example implementation of this can be found in DirectX Tool Kit as Model, ModelMesh, and ModelMeshPart.
How can you get information about which variables are design vars, objectives or constraints from the information saved by recorders? It would be useful to print this information to a file to track optimization progress during a run. It looks like the RecordingManager.record_iteration doesn't really allow for this at the moment, since you only pass the root system and a metadata dict meant for optimizer settings.
Would it be possible to add an argument to RecordingManager.record_iteration called e.g. optproblem, which is a dictionary of dictionaries with desvars, constraints and objectives?
A simple OptimizationRecorder could then dump out column-formatted files with these quantities for easy plotting during the optimization.
This is something we have on our list of to-dos for the near future. Our current planned approach is to augment the metadata (already being saved) of variables with labels identifying them as des-vars, objectives, and constraints. Then you could pull that information out as part of a custom case recorder if you want. We plan on doing it this way because it doesn't require modifying the recorder's API at all. I think we'll have something like this implemented in the next month or so.
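Until that is in place, the column-formatted dump itself is simple enough to sketch; everything below is hypothetical and independent of the recorder API, which differs between OpenMDAO versions. It just assumes you can get name-to-value dictionaries for the design variables, objectives and constraints at each iteration:

    def dump_iteration(filename, desvars, objectives, constraints, write_header=False):
        # desvars/objectives/constraints: {name: float} for the current iteration (hypothetical hook).
        columns = {**desvars, **objectives, **constraints}
        with open(filename, "a") as f:
            if write_header:
                f.write(" ".join(columns) + "\n")
            f.write(" ".join("%.8e" % v for v in columns.values()) + "\n")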
I want to perform attribute selection in Weka, but my dataset is rather big, and the program runs quite a while. That's why I want to see the current best set of attributes found. How do I do it?
For example, the genetic search has a "Report Frequency" parameter, but all the results are shown only after the whole search is finished, which is not what I need.
There is no progress bar, so I don't even know how long I will have to wait...
Feature (attribute) selection is a standard problem in the data-mining and machine-learning domains.
If you want to select a good set of attributes, you must preprocess your data by ranking attributes based on their quality. Ranking methods based on statistical measures, such as the p-metric or the t-statistic, are popular. One cannot simply select attributes at random from a large set without some intuition about the nature of the attributes.
If you do not need to run attribute selection on your whole dataset, you could run it on a smaller sample of your dataset (simply edit your ARFF file).
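If you go the sampling route, Weka's unsupervised Resample instance filter can do it for you; a rough sketch of doing it by hand outside Weka (plain Python, placeholder file names) would be:

    import random

    # Write a random ~10% sample of the data rows of an ARFF file, keeping the header.
    with open("full.arff") as f:
        lines = f.readlines()

    split = next(i for i, line in enumerate(lines)
                 if line.strip().lower().startswith("@data")) + 1
    header, data = lines[:split], [l for l in lines[split:] if l.strip()]

    sample = random.sample(data, k=max(1, len(data) // 10))

    with open("sample.arff", "w") as f:
        f.writelines(header + sample)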
I am going to build an interactive choropleth map of Bangladesh. The goal of this project is to build a map system and populate it with different types of data. I have read the documentation of OpenLayers, Leaflet and D3. I need some advice to find the right path; the solution must be reasonably optimized.
The map I am going to create will be something like the following: http://nasirkhan.github.io/bangladesh-gis/asset/base_files/bd_admin_3.html. It is prepared based on Leaflet.js, but it is not mandatory to work with this library. I tried Leaflet because it is easy to use, and I found the expected solution within a very short time.
The requirement of the project is to prepare a choropleth map where I can display the related data. For example, I have to show the population of all the divisions of Bangladesh; at the same time there should be some options so that I can show the literacy rate, male-female ratio and so on.
The solution I am working on now has some issues, like the huge load time; and if I want to load a second dataset, I have to load the same huge geolocation data again. How can I optimize or avoid this situation?
Leaflet has a layers control feature. If you cut your data down to just what is required, split it into different layers, and allow the user to select the layers they are interested in viewing, that might cut down on the loading of the data. Another option is to simplify the shapes of the polygons.
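Simplification can be done once, offline, before the data ever reaches the browser. A small sketch with geopandas (assumed to be available; the tolerance and file names are placeholders, and the tolerance is in the coordinate units of the data):

    import geopandas as gpd

    # Load the division polygons, simplify them, and write a much smaller GeoJSON.
    gdf = gpd.read_file("bd_admin_3.geojson")
    gdf["geometry"] = gdf.geometry.simplify(tolerance=0.01, preserve_topology=True)
    gdf.to_file("bd_admin_3_simplified.geojson", driver="GeoJSON")

You can also keep the geometry in one file and the per-division figures (population, literacy rate, male-female ratio) in small separate files keyed by division id, so switching datasets never reloads the geometry.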