Background:
I know, via PyPI, about DOAP. Is there something similar for describing software defects?
Toby has a bug ontology. It covers the sort of information you'd find in a bug tracker, and it is thus more concerned with the process of handling defects than with classifying the issue in depth.
In a similar vein, Tabulator is moving its issue list to an RDF form.
I saw this Change Management specification mentioned recently. I don't know exactly how it's being used, but it appears to be intended for bug/issue tracking. The vocabulary itself is outlined here: http://open-services.net/bin/view/Main/CmSpecificationV2
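To make the idea concrete, here is a rough sketch (Python with rdflib) of how a defect might be described by combining DOAP with the OSLC Change Management vocabulary. The specific class and property choices (oslc_cm:ChangeRequest, the Dublin Core fields, and the link back to the project) are my assumptions rather than something the spec mandates, so check them against the vocabulary above.

```python
# Sketch: describing a defect against a DOAP project with rdflib.
# The OSLC CM terms and the dcterms link back to the project are
# assumptions based on the spec above -- verify against the vocabulary.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, DCTERMS

DOAP = Namespace("http://usefulinc.com/ns/doap#")
OSLC_CM = Namespace("http://open-services.net/ns/cm#")

g = Graph()
g.bind("doap", DOAP)
g.bind("oslc_cm", OSLC_CM)
g.bind("dcterms", DCTERMS)

project = URIRef("http://example.org/projects/widget")
bug = URIRef("http://example.org/bugs/42")

g.add((project, RDF.type, DOAP.Project))
g.add((project, DOAP.name, Literal("Widget")))

g.add((bug, RDF.type, OSLC_CM.ChangeRequest))
g.add((bug, DCTERMS.title, Literal("Crash when saving an empty file")))
g.add((bug, DCTERMS.description, Literal("Steps to reproduce: ...")))
# Linking the defect back to the project it affects (property choice is illustrative).
g.add((bug, DCTERMS.relation, project))

print(g.serialize(format="turtle"))
```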
Related
I am trying to develop an OWL ontology based on different UML file resources presented in XMI format. After reading around the internet for a while, it seems that almost all the available tools and approaches are outdated, and even when trying some of them they don't produce the expected outcome.
Since this ontology plays a really important role in our project, I wanted to know: what are the best approaches/tools to use in order to convert UML to OWL?
I have looked into this myself as well and I have found no tools that can do this satisfactorily. The problems I ran into were that the tools used an old version of UML, did not support all UML features, used OWL 1 rather than OWL 2, or were supported by only an old version of Protégé.
I resorted to doing the translation by hand, which for most UML constructs is not too difficult. For this purpose I have done a write-up on UML vs. OWL, which gives an intuitive explanation of why some of the translation is done in a certain way, as well as providing a reference for translating UML to OWL.
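To give a flavour of what the hand translation looks like, here is a minimal sketch (Python with rdflib; the class, attribute, and association names are made up) that maps a UML class with an attribute and an association onto an owl:Class, an owl:DatatypeProperty, and an owl:ObjectProperty. Which additional axioms you add (cardinalities, disjointness, etc.) depends on the semantics you want.

```python
# Minimal hand-translation sketch: UML class/attribute/association -> OWL.
# "Order", "Customer", "orderDate", "placedBy" are made-up example names.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL, XSD

EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# UML class -> owl:Class
for cls in ("Order", "Customer"):
    g.add((EX[cls], RDF.type, OWL.Class))

# UML attribute (Order.orderDate : Date) -> owl:DatatypeProperty
g.add((EX.orderDate, RDF.type, OWL.DatatypeProperty))
g.add((EX.orderDate, RDFS.domain, EX.Order))
g.add((EX.orderDate, RDFS.range, XSD.date))

# UML association (Order placedBy Customer) -> owl:ObjectProperty
g.add((EX.placedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.placedBy, RDFS.domain, EX.Order))
g.add((EX.placedBy, RDFS.range, EX.Customer))

print(g.serialize(format="turtle"))
```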
There's an OMG spec now available at https://www.omg.org/spec/MOF2RDF/
I haven't yet found an open-source tool implementing this directly (i.e. converting from UML/XMI to OWL/RDF), but there are EMF-related activities that may be relevant (I haven't tried them), e.g.:
https://github.com/ghillairet/emftriple
You'll probably never get exactly what you want unless you do it by hand, as Henriette mentioned. One viable option is using COGS, which I've found to work pretty well.
The catch is that it is related to Rot's answer in that it supports the OMG specification. If it's not much work to make sure that your UML conforms to that specification, it may save some time in the long run. Here's an example of an OWL file produced by COGS.
Regardless of the tool being used (e.g. Sparx EA, Visual Paradigm, etc.), how do you capture questions raised during analysis or modeling?
Do you simply use notes, or is there a standard approach? For example, how do I capture the question "Is there a need to back up data to an off- or on-premise vendor?"
There are whole books about these topics (esp. Robertson & Robertson, Mastering the Requirements Process, ISBN-13: 978-0321815743), so I seriously doubt that there will be a short answer here that extensively covers your question.
(deleted text here after 1st comment)
What do you mean by "questions that need to be answered in UML"? There are of course scenarios where UML notation is very useful. But it remains one modeling language of many, and there are alternatives not only regarding the tool but also regarding the notation.
Edit
For open questions, you are probably best off using diagram notes, as you suggested yourself.
But that is a UML-internal view. I'd use an issue tracker like Atlassian Jira to get a better overview and generally better usability. You can then use an add-in to sync with EA.
Normally, your questions will result from requirements. You can put your open questions into the "notes" field of requirement elements.
UML does not provide any diagram type to capture requirements, so you'll have to rely on modeling non-UML requirement elements.
The requirement elements in Sparx EA are not (!) standardized but are a proprietary solution by Sparx. They are somewhat similar to the Requirements Diagram in OMG SysML (Systems Modeling Language).
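If you want to script this, Sparx EA exposes a COM automation interface; below is a rough sketch (Windows only, using pywin32) of creating a Requirement element and putting an open question into its Notes field. The model path and package name are placeholders, and you should double-check the calls against the EA automation documentation.

```python
# Rough sketch: adding a Requirement element with an open question in its Notes
# via the Sparx EA COM automation interface (Windows + pywin32 required).
# "C:/models/project.eap" and "Requirements" are placeholders.
import win32com.client

repo = win32com.client.Dispatch("EA.Repository")
repo.OpenFile("C:/models/project.eap")

# Assume a package named "Requirements" in the first model root.
model = repo.Models.GetAt(0)
package = model.Packages.GetByName("Requirements")

req = package.Elements.AddNew("Off-site backup", "Requirement")
req.Notes = "Open question: Is there a need to back up data to an off- or on-premise vendor?"
req.Update()
package.Elements.Refresh()

repo.CloseFile()
repo.Exit()
```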
The two highest-priced editions of EA also offer SysML support, where you can explicitly create such requirement diagrams using the correct SysML syntax.
SysML is an extension to UML, so they'll work fine together. You can also create <<trace>> relationships.
For the other editions of EA there is a SysML plugin. The same is true for MagicDraw.
I found a method by which documents can be captured using Sparx EA. Hopefully it helps someone in the future: Requirements Gathering
Can you suggest diagramming software where I can draw diagrams like this one (as much out-of-the-box as possible):
These types of diagrams are usually used to document communication protocols, so one can easily understand how packets/frames are composed.
Probably the best (but not out-of-the-box) solution is to draw this in a vector graphics editor (for example, Inkscape).
I stumbled upon this question looking for a solution to the same problem, and I thought it might be useful to someone else if I share my findings.
Somewhat different PDU structure diagrams can be drawn with packetdiag, in the WaveDrom editor, or described with TeX.
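For reference, here is roughly what driving packetdiag looks like: a sketch in Python that writes a .diag file and calls the packetdiag command-line tool from the nwdiag package to render a simplified, TCP-header-style layout. The field layout and syntax details are illustrative, so check the packetdiag documentation before relying on them.

```python
# Sketch: rendering a simplified TCP-header-style packet layout with packetdiag
# (pip install nwdiag). The .diag syntax and field layout here are illustrative.
import subprocess
from pathlib import Path

DIAG_SOURCE = """
packetdiag {
  colwidth = 32

  0-15: Source Port
  16-31: Destination Port
  32-63: Sequence Number
  64-95: Acknowledgment Number
  96-99: Data Offset
  100-111: Flags
  112-127: Window Size
}
"""

src = Path("tcp_header.diag")
src.write_text(DIAG_SOURCE)

# packetdiag writes tcp_header.png next to the source file by default.
subprocess.run(["packetdiag", str(src)], check=True)
```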
There is also some relevant info in the answers to a related question: https://stackoverflow.com/questions/2034636/best-way-to-document-a-packet-protocol (how packet structures are described in RFCs). The question is deleted now, but you can find it in the Web Archive.
It would be great if you could share the way you solved this problem.
I am looking for a good SERM modeling tool for Linux. Is there any? Which is best?
Open ModelSphere may be the lone mature tool that fits the requirements...
open source (under GPL)
runs on Linux: cross-platform, actually, since it is Java-based.
standard fare for typical modeling tools, including forward and backward engineering and validation.
No explicit support for SERM, but the ability to introduce new notations. Several of the notations readily included appear to be relatively close to [what I understand] SERM to be.
This last point might be the show-stopper... hopefully this suggestion can be a starting point.
Disclosure:
I'm no modeling wizard, merely an occasional user of the modeling tools typically included in software development IDEs. Also, I'm not versed in SERM in particular, and I am unsure of its subtle (or not so subtle) differences from other modeling metamodels.
I would typically remain an interested spectator of this type of question, but in view of the little attention it has received so far (and in view of the +50 bounty, right!) I'm posting the above with the intent of maybe stirring things up a bit. I'll be glad to delete this answer, amend it as suggested, and otherwise try to help generate traffic in this direction.
If nothing else, this answer may prompt Anton Bessonov to elaborate a bit on the specific uses and capabilities that would be relevant to his quest.
I am looking for some resources pertaining to the parsing and understanding of English (or human language in general). While this is obviously a fairly complicated and wide field of study, I was wondering if anyone had any book or internet recommendations for studying the subject. I am aware of the basics, such as searching for copulas to draw word relationships, but I will be sure to thoroughly read anything you recommend.
Thanks.
Check out WordNet.
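If you want to explore WordNet programmatically, NLTK ships a WordNet corpus reader; here is a small sketch (you need to download the wordnet corpus once) for looking up word senses and hypernyms:

```python
# Small sketch: querying WordNet through NLTK's corpus reader.
# Requires: pip install nltk, then a one-time download of the wordnet corpus.
import nltk
nltk.download("wordnet", quiet=True)

from nltk.corpus import wordnet as wn

# All senses of "bank", with their glosses.
for synset in wn.synsets("bank"):
    print(synset.name(), "-", synset.definition())

# Hypernyms ("is-a" parents) of the first sense of "dog".
dog = wn.synsets("dog")[0]
print([h.name() for h in dog.hypernyms()])
```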
You probably want a book like "Representation and Inference for Natural Language - A First Course in Computational Semantics"
http://homepages.inf.ed.ac.uk/jbos/comsem/book1.html
Another way is to look at existing tools that already do the job on the basis of research papers: http://nlp.stanford.edu/index.shtml
I've used this tool once, and it's very nice. There's even an online version that lets you parse English and draws dependency trees and so on.
So you can start taking a look at their papers or the code itself.
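If you would rather call the Stanford tools from code than use the online demo, one option is Stanza, the Python library from the Stanford NLP group; here is a small sketch (it downloads an English model on first use) that prints a dependency parse:

```python
# Small sketch: dependency parsing with Stanza (Stanford NLP's Python library).
# Requires: pip install stanza; the English model is downloaded on first use.
import stanza

stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("The cat sat on the mat.")
for sentence in doc.sentences:
    for word in sentence.words:
        # word.head is 1-based; 0 means the word attaches to the root.
        head = sentence.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(f"{word.text} --{word.deprel}--> {head}")
```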
Anyway, take into consideration that in any field, what you get from such generic tools is almost always not what you want, in the sense that the semantics attributed by such tools is not what you would expect. In most cases, given a specific constrained domain, it's preferable to roll your own parser and do your best to avoid any ambiguities beforehand.
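As a toy illustration of what rolling your own parser for a constrained domain can look like, here is a sketch using NLTK's chart parser with a tiny hand-written grammar; the grammar and vocabulary are made up and only cover this toy domain:

```python
# Toy sketch: a hand-written grammar for a tiny constrained domain,
# parsed with NLTK's chart parser. Grammar and vocabulary are made up.
import nltk

grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V NP
    Det -> 'the' | 'a'
    N   -> 'user' | 'report' | 'file'
    V   -> 'opens' | 'deletes'
""")

parser = nltk.ChartParser(grammar)
tokens = "the user opens a report".split()

for tree in parser.parse(tokens):
    tree.pretty_print()
```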
The process that you describe is called natural language understanding. There are various algorithms and software tools that have been developed for this purpose.