Protégé: exporting inferences obtained from SWRL rules does not work

I noticed in Protégé 5 that all the inferences obtained by SWRL rules
cannot be exported by using the "export inferred axioms as ontology" tool (with all the options enabled). For example, consider the following ontology:
`https://pastebin.com/ZCMgxzRs` .
The inference "a instanceOf B" is not exported, as you can see from the results here:
`https://pastebin.com/AaABJQt4` .
Is there any way to export such type of inferences?

The problem disappeared after a clean install of the tool.

Related

How to get rid of automatic type annotations in LunarVim from Haskell Language Server?

I use LunarVim for editing Haskell code. The automatic type and import hints are quite annoying. How do I turn them off?
For example, after import Text.ParserCombinators.Parsec, the following hint is automatically shown right after the import statement: import Text.ParserCombinators.Parsec ( car, noneof, string , ... ). If the types for a function are not specified, a hint with the inferred types is automatically shown after the first line of the function.
HLS is very helpful, but the code looks cluttered due to those hints. It would be great to disable only the hints keeping all the HLS functionality. The default LunarVim setup is used with some plugins unrelated to Haskell and some changes are made in themes.
Thanks.
On Linux, under ~/.config/lvim/lsp-settings, try creating a file haskell.json with the following setting:
{
  "haskell.plugin.importLens.globalOn": false
}
You can do it from within Lunarvim with :LspSettings haskell
This should fix the import hints. As for the inferred types, I couldn't find a specific option.
However, you can run haskell-language-server generate-default-config to print the default configuration and check the definitions of these options in https://haskell-language-server.readthedocs.io/en/latest/configuration.html#configuring-your-editor
Note: although I have lunarvim installed, I prefer the vanilla neovim with some plugins installed, such as Coc.

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python code. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in their signatures.
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find it? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on this. If the code analyzer is found in python.el, can I try to modify it and make Emacs use a local version in preference to the installed one?
EDIT:
After making the initial post I finally made a breakthrough in figuring out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el is indeed responsible for parsing Python files for semantic. It also internally uses python-wy to recognize a large portion of the language components. Unfortunately, that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (in the Emacs git tree, admin/grammars/python.wy), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for python buffers unless you use its other features (personally I only use it for C/C++).
When the semantic mode-specific libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you want semantic support in, e.g. in your semantic hook (or alternatively via the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))

Is there a way to make Haddock render per-argument docs for type class methods?

It turns out that Haddock does not render per-argument docs for type class
methods:
class Foo a where
  foo
    :: Int -- ^ This string will be ignored by Haddock
    -> a
This causes certain issues for users of a library I maintain, because the methods in my case have quite lengthy signatures. I have always had the descriptions in the source formatted like that (it certainly works for ordinary functions), but it turns out Haddock does not display them for class methods (and does not complain about them either).
Is there a way to display the per-argument docs with Haddock? Some workaround perhaps?
OK, this was a regression. This should work (and did work in version 2.16.1), but stopped working in 2.17.1 and later.
I have reported this: https://github.com/haskell/haddock/issues/647; it should be fixed in version 2.18 (you can see there is a PR for it already).
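Until a fixed Haddock version is available, one common workaround is to describe the arguments in prose in the method's top-level doc comment, which Haddock does render for class methods. A minimal sketch (the class and wording here are illustrative, not taken from the library in question):

```haskell
class Foo a where
  -- | Construct a value from a size hint.
  --
  --   The 'Int' argument is a size hint; the result is the constructed
  --   value. Prose like this in the method's own doc comment is rendered,
  --   unlike per-argument @-- ^@ comments on class methods.
  foo :: Int -> a
```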

Haddock link to functions in non-imported modules

In module B I have documentation with a link 'A.foo', linking to the foo member of module A. In module A I import module B. Haddock renders this as a link to A.html#t:foo, namely pointing at the type foo (which does not exist), not the function foo, which is at A.html#v:foo.
Why does Haddock link to t: for variables that start with a lower case letter? Is that a bug? For 'A.Foo' I can see that it could be a type or a constructor, so there are namespacing issues. For foo it seems a variable is at least most plausible.
Is there any way to fake a link? I am writing this in code samples, so I need it to be rendered as foo. I tried anchors, but they render as the module name, and for direct hyperlinks you have no control over the displayed text.
I considered a post processor (replacing t:[a-z] with v:), but that requires a custom Setup.hs which causes problems and is quite ugly.
I couldn't find any Haddock command line flags to obtain a more reasonable behavior, such as specifying that foo is a variable.
I can't add an import of A to B without introducing circular imports, which is vile to add purely for documentation.
I am running into this problem in the Shake documentation, where as an example removeFilesAfter does not get the right link.
I can partially answer the first question (Why?); not sure if it is a bug or desired behaviour.
When Haddock resolves references in LexParseRn.rename, it tries to look up the identifier in the environment (via lookupGRE_RdrName). This ought to fail here, since A is not imported into B. Next it looks at what the thing could mean (using dataTcOccs from GHC's RnEnv). The relevant lines are:
dataTcOccs :: RdrName -> [RdrName]
-- Return both the given name and the same name promoted to the TcClsName
-- namespace. This is useful when we aren't sure which we are looking at.
dataTcOccs rdr_name
  [...]
  | isDataOcc occ || isVarOcc occ
  = [rdr_name, rdr_name_tc]
  [...]
  where
    occ = rdrNameOcc rdr_name
    rdr_name_tc = setRdrNameSpace rdr_name tcName
so it returns the name first interpreted as whatever it was before (likely a link to a value), and then interpreted as a type constructor. How can a regular name be a type constructor? My guess is that this was added when TypeOperators were reformed in GHC 7.6, which now do share the namespace with value-level operators.
Then haddock matches on the result: If the first one is a type constructor, use that, otherwise use the second. So either it was a type constructor before, then this is used. Or it was not, but then the modified version generated by dataTcOccs is to be used.
It seems to me that Haddock should just always use the first option here; the code is likely a misguided copy of how multiple results are handled when they can actually be resolved.
This was Haddock bug #228 and Neil's Haddock bug #253, and the fix has been upstream for a few months. You can build GHC HEAD and rebuild your documentation, or wait for 7.8 and do it then.

Persistent model types in Fay code

I'm using the Yesod scaffolded site (yesod 1.1.9.2) and spent a few hours yesterday wrapping my head around basic usage of Fay with Yesod. I think I now understand the intended workflow for using Fay to add a chunk of AJAX functionality to a page (I'm going to be a little pedantic here just because someone else might find the step-by-step helpful):
Add a data constructor Example a to SharedTypes.Command.
In the expression case readFromFay Command of ... in Handler.Fay.onCommand, add a case that matches on my new data constructor.
Create a Fay file 'Example.hs' in /fay, patterned after fay/Home.hs. Somewhere in here, use the expression call (Example "foo") $ myFayCallback.
Define a route and handler for the page that will use the Javascript I'm generating. In the handler, use $(fayFile' (ConE 'ScriptR) "Example.hs").
My question: In the current Yesod/Fay architecture, how should I go about sharing my Persistent model types with my Fay code?
Using import Model in a Fay file doesn't work -- when I try to load the page that's using this Fay file, I get an error in the browser (Fay's standard way of alerting me to errors, I guess) indicating that it couldn't find module 'Model' but that it only searched the following directories:
projectroot/cabal-dev//share/fay-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0
projectroot/fay
projectroot/fay-shared
I also tried importing and re-exporting Model in SharedTypes.hs, but that produced the same error.
Is there a way to do this? If not, why not? (I'm a relative noob in both Haskell and Yesod, so the answer to the "why not?" question would be really helpful.)
EDIT:
I just realized that mentioning Persistent in this question's title might be misleading. To be clearer about what I'm trying to do: I just want to be able to represent data in my Fay code using the same datatypes Yesod defines for my models. E.g. if I define a model as follows in config/models...
Foo
    bar BarId
    textThatCanBeNull Text Maybe
    deriving Show
... I want to be able to define an AJAX 'command' that receives and/or returns a value of type Foo and have my Fay code deal in Foos without me having to write any de/serialization code. I understand that I won't be able to use any of Persistent's query functionality directly from my Fay code; I only mentioned Persistent in the title because I mentally associate everything in Model.hs and config/models with Persistent.
This currently is not supported; there are many features leveraged by Persistent which Fay does not support (e.g., Template Haskell). For now, it probably makes sense to have an intermediate data type which is shared by both Fay and Yesod and convert your Persistent data to/from that type.
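A minimal sketch of such an intermediate type, assuming the Foo model from the question (the name FooShared, its fields, and the conversion functions are hypothetical; the conversions are left as commented signatures because they depend on your actual Model types):

```haskell
-- fay-shared/SharedTypes.hs: a plain record that both GHC and Fay can
-- compile (no Template Haskell, no Persistent types).
data FooShared = FooShared
  { fsBar  :: Int          -- the BarId rendered as a plain Int
  , fsText :: Maybe String -- the 'Text Maybe' field from config/models
  } deriving (Show, Eq)

-- Server side only (e.g. in Handler.Fay): convert between the Persistent
-- entity and the shared type before answering a Command.
-- toShared   :: Entity Foo -> FooShared
-- fromShared :: FooShared  -> Foo
```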
