Does dgl.nn.pytorch.GraphConv aggregate edge information? - pytorch

I am using DGL for a node classification task, but all the information I have is mainly edge feature information.
In this article (https://distill.pub/2021/gnn-intro/), the author mentions aggregating edge information, so I would like to ask whether GraphConv aggregates edge information internally.
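For reference, GraphConv's forward pass accepts at most an optional scalar weight per edge (its edge_weight argument in recent DGL versions); it does not consume full edge feature vectors, for which one would typically write a custom message function with dgl.function. The kind of aggregation at issue can be sketched in plain Python (toy graph and features made up for illustration; the real layer additionally applies degree normalization and a learned linear transform):

```python
# Sketch of GCN-style neighbor aggregation, the core of what
# dgl.nn.pytorch.GraphConv computes (minus degree normalization
# and the learned linear transform). Edge information enters only
# as an optional *scalar* weight per edge; edge feature vectors
# are not aggregated.

def aggregate(num_nodes, edges, feats, edge_weights=None):
    """Sum each node's in-neighbor features, optionally scaled
    by a scalar weight per edge.

    edges:        list of (src, dst) pairs
    feats:        list of per-node feature vectors
    edge_weights: optional list of scalars, one per edge
    """
    dim = len(feats[0])
    out = [[0.0] * dim for _ in range(num_nodes)]
    for i, (src, dst) in enumerate(edges):
        w = 1.0 if edge_weights is None else edge_weights[i]
        for k in range(dim):
            out[dst][k] += w * feats[src][k]
    return out

# Toy graph: edges 0 -> 2 and 1 -> 2, so node 2 sums its neighbors.
edges = [(0, 2), (1, 2)]
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
print(aggregate(3, edges, feats))
# -> [[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]]
print(aggregate(3, edges, feats, edge_weights=[2.0, 3.0]))
# -> [[0.0, 0.0], [0.0, 0.0], [2.0, 3.0]]
```

So if your task depends on multi-dimensional edge features, GraphConv alone will not see them; you would need a layer that passes messages over edge data.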

Related

UML package diagram for relationship between systems

I have this simple diagram; it doesn't follow any particular type of UML diagram. Its goal is to show all the parts of our solution and how they're related.
In the image: the web scraper scrapes data from some websites and stores it in the database. The web application receives filter options and implements them using a REST API that returns data to be exported as xlsx and csv. The API uses the database populated by the web scraper.
I need to make a new diagram of the process highlighted above, using UML. I had a suggestion to use a package diagram, so I made this version:
Edit: In the image: Sources -> Web Scraper -> Database -> API (Filters (type of filters)) -> Front end (results, search options) -> User
Is it the right way of making a package diagram? I couldn't find a similar example or specific rules for this case.
Are packages the right modeling tool for your needs?
Packages are namespaces and aim to structure a model. A package diagram therefore does not represent a process with data flows (dynamic behavior). The relations between packages are namespace relations such as «import» and «merge», and dependencies.
Your package diagram certainly shows a valid decomposition of your design with nested packages. But you would normally not represent users (usuário) or flows of data (dados) coming from a database (Banco de dados).
What are the better alternatives in UML?
Your initial diagram shows in one picture, using some flowcharting symbols, very different things:
conceptual classes of objects such as sources, filters, or files
components such as the web scraper, the database, the front-end, and the back-end
flows of objects, like the web scraper feeding the database that is queried by the back-end, or interactions, like the front-end supplying filters and the back-end providing data.
If you want to represent this in UML you need to clarify the focus, because UML requires some precision since it separates structure and behavior. The answer will depend on what you want to show:
the flow of processes and data? Use an activity diagram (behavior). This is perfect to show the flow from the source to the end-result, but not so easily the parts of the system that are involved.
the relationship between components? Use a component diagram (structure). This is perfect for identifying the components, how they are nested, and how their interfaces are connected. But it does not show the order in which all this happens.
the interaction between components? Use communication or sequence diagrams (behavior). Here you see what the components exchange and in what order, but not so well how the components are structured.
Spontaneously, I'd go for components, since I have the impression that this dominates your original diagram. But in the end, you may use different diagrams for showing the different aspects.
Other alternatives
If you're looking for a single diagram to combine the different thoughts of your original diagram, alternatively to UML, you could consider C4 model diagrams.
It's less precise than UML, but very convenient for communicating the big picture of a system architecture. The C4 context diagram and the C4 container diagram in particular allow you to show the system's main components and some high-level relations (including data flows) between them.
The good news is that C4 relies on UML for the more detailed design of the identified components.

How to encode a taxonomy in Weaviate contextionary

I would like to create a semantic context for my data before vectorizing the actual data in Weaviate (https://github.com/semi-technologies/weaviate).
Let's say we have a taxonomy with a set of domain-specific concepts together with links to their related concepts. Could you advise me on the best way to encode not only those concepts but also the relations between them using the contextionary?
Depending on your use case, there are a few possible answers:
You can create the "semantic context" in a Weaviate schema and use a vectorization module to vectorize the data according to this schema.
You have domain-specific concepts in your data that the out-of-the-box vectorization modules don't know about (e.g., specific abbreviations).
You want to capture the semantic context of (i.e., vectorize) the graph itself before adding it to Weaviate.
The first is the easiest and most straightforward; the last is the most esoteric.
Create a schema and use a vectorizer for your data
In your case, you would create a schema based on your taxonomy and load the data using an out-of-the-box vectorizer (this configurator helps you build a Docker Compose file).
I would recommend starting with this anyway, because it will determine your data model and how you can search through and/or classify data. It might even be the case that for your use case this step already solves the problem because the out-of-the-box vectorizers are (bias alert) pretty decent.
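As a minimal sketch of what such a schema could look like (the class and property names here are made up for illustration, not taken from your taxonomy), each concept can carry a text definition for the vectorizer plus a cross-reference back to the same class for its related concepts:

```python
# Sketch of a Weaviate schema modeling a taxonomy: each Concept has
# a free-text definition (which the vectorizer module embeds) and a
# cross-reference ("relatedConcepts") to other Concept objects, which
# encodes the taxonomy's links as graph relations in Weaviate.

schema = {
    "classes": [
        {
            "class": "Concept",
            "description": "A domain-specific concept from the taxonomy",
            "vectorizer": "text2vec-contextionary",
            "properties": [
                {
                    "name": "name",
                    "dataType": ["text"],
                    "description": "Preferred label of the concept",
                },
                {
                    "name": "definition",
                    "dataType": ["text"],
                    "description": "Free-text definition, used for vectorization",
                },
                {
                    # Cross-reference: dataType names another class.
                    "name": "relatedConcepts",
                    "dataType": ["Concept"],
                    "description": "Links to related concepts in the taxonomy",
                },
            ],
        }
    ]
}

# With the Python client this would then be pushed roughly as
# (assuming a local instance; not run here):
#   import weaviate
#   client = weaviate.Client("http://localhost:8080")
#   client.schema.create(schema)
```

The cross-reference property is what lets you traverse related concepts in GraphQL queries later, so the taxonomy's structure survives as queryable relations rather than just text.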
Domain-specific concepts
At the moment of writing, Weaviate has two vectorizers, the contextionary and the transformers modules.
If you want to extend Weaviate with custom context, you can extend the contextionary or fine tune and distribute custom transformers.
If you do this, I would still highly recommend taking the first step, because it will simply improve the results.
Capture semantic context of your graph
I don't think this is what you want, but it is possible, and quite esoteric. In principle, you can store your vectorized graph in Weaviate, but you need to generate the vectors on your own. For example, at the moment of writing, we are looking at RDF2Vec.
PS:
Because people often ask about the role of ontologies and taxonomies in Weaviate, I've written this blog post.

Retrieving tread depth when stairs not of Stairs class

I'm looking at the publicly available model "210 King - Autodesk Toronto.rvt" which I upgraded from 2016 to 2018 (original 2016 version here). When I select a stairs object in the model, it has an "Actual Tread Depth" in the Properties Panel.
I want to access this tread depth in the API. In the sample project that ships with Revit, the stairs are of class Autodesk.Revit.DB.Architecture.Stairs (derived from Element) which has an ActualTreadDepth property. But in this model, the stairs are all just objects of class Element. Casting them to Stairs throws an exception.
Two questions:
How can I access the tread depth?
Why aren't these of class Stairs? (I'm new to the Revit API)
You can access parameter values directly on the Element class. There is no need to cast to Stairs. That makes no difference whatsoever to the parameter access.
If you are new to the Revit API, please take a look at the getting started material. That will answer this question in more depth, and many others as well.
The answer to your 'why' question won't help much, I'm afraid: historical reasons, the Revit BIM paradigm, underlying product features, you name it. There are often several different ways to represent objects in Revit. Element is the catch-all base class, as you have noted.
If the model you are working with was created using Element to represent the stairs, they may not have the property you are looking for. In that case, you may have to resort to other means to determine a useful value, e.g., (pretty complex) geometrical analysis.

Train or Custom Word Entity Types?

I was looking through the documentation and testing Google's Natural Language API and noticed it gets a number of people, events, organizations, and locations incorrect - it appears to be using Wikipedia as a major data source so if it is not in Wikipedia it seems to have trouble identifying the type of various words. Also, if certain words appear in a name (proper noun) it seems to always identify an entity as a certain type which is not always correct.
For instance: "Congress" seems to always identify as an organization [government] even when it is part of an event name. The name "WordCamp" shows as a location, but it is an event.
Is there a way to train the Natural Language engine or provide a custom set of organizations, locations, events, etc. so that it provides more accurate type information for entities that are not extremely popular?
I am the Product manager for this product. Custom entity types are not currently supported. As per your comment about not getting some entity types right, this is true for any NLP system but our goal is to keep improving. We are working on ways for you to provide us feedback on instances that we get wrong to improve our accuracy and will share the details shortly. Note we have trained our models on multiple data sources and not just Wikipedia data. The API returns the most relevant Wikipedia article for an entity detected so if an entity has multiple interpretations, we will only return the most commonly used interpretation.

Commenting Core Data Model

Is there a way to add comments to the entities, attributes, and relationships in the data model for Core Data? As my model becomes more complex, it would be useful to have comments, but I haven't seen any way to do this. Of course, it would also be helpful if the comments can be retained through migrations.
There is no way to add comments directly to your Core Data model, but when viewing the model in Xcode you can switch between the Table and Graph editor styles in the lower right corner of the editor. This view should serve well enough as documentation for your model if you've built it logically.
Of course, you could always add a comment section in your App Delegate covering the structure of your Core Data model, but I suppose you already know this.
