Hackage is great and is always the first place to look when studying how the functions of a specific package are defined and used.
However, we frequently need to consult several packages at once, e.g. Control.Monad, Data.List, and so on, and switch between them. The easy way to do that is to open multiple tabs in Chrome, but as the number of tabs grows, the package names on the tabs can no longer be shown fully.
So, is there a website that organizes Haskell package documentation javadoc-style, where we can browse and select packages in the right frame and show the content of the documentation in the left frame? It would be even more appreciated if it could save and list frequently used package documentation in another frame.
I tend to use Dash, but technically that's just downloading the docs from Hackage. You can also build docs locally (if the library has them), but I find those a little cumbersome.
Problem
I am developing a Rust program which has a GTK3 GUI using the given rust-gtk-binding.
The program should be cross-platform (at least Linux and Windows).
The GUI should be able to show custom plain texts and small LaTeX snippets to allow the use of math environments (small meaning the size of one formula displayed as an element).
Therefore, I need a way to convert LaTeX-code into something which can be displayed by the GUI.
Ideas and their problems
I can see two approaches to displaying LaTeX:
Compile the LaTeX source into a PDF and then into some image type. It should be possible to use Ghostscript to get the image. But I do not know how to generate the PDF in a way that is lightweight (does not include rather large distributions like MiKTeX) and cross-platform. This option could also be overkill, as there is no need to dynamically download special packages; good math support would be sufficient. On the positive side, rendering an image in GTK should be easy.
Use KaTeX, which should be sufficient for math environments. I was able to install matching crates and generate HTML source from some formulas. But here it becomes difficult to render the result, as GTK has no native way of displaying HTML. Since it would be difficult to integrate an HTML engine into the GUI, it would be ideal to find a tool that can render HTML to an image type, which could then be displayed.
Now I have two ways, both using an intermediate step, where for plain LaTeX the first step is difficult and for KaTeX the second step poses a problem. For neither approach's difficult step could I find a feasible solution.
Are there any libraries or similar that I could not find, or are there any different approaches?
It would be perfectly sufficient to render a single formula; I just want to avoid massive overkill like using a complete LaTeX compiler or half a browser just to render HTML.
After searching for and evaluating many more approaches, I arrived at a solution that is reasonably good while still having some major drawbacks:
First of all, I use TinyTeX as the LaTeX environment. I did not restrict the use of LaTeX to, e.g., math environments. TinyTeX supports the major platforms while being lightweight and portable. Additional LaTeX packages have to be installed manually, which lets me decide which ones are shipped with my application.
The downside is that, while TinyTeX is lightweight for a LaTeX environment, it is still rather big for its purpose here (about 250 MB).
I installed the required packages to use \documentclass[preview]{standalone}, which yields an already-cropped PDF.
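For reference, a minimal test.tex for this setup could look like the following; the formula itself is just an example:

```latex
\documentclass[preview]{standalone}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
  \int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
\end{equation*}
\end{document}
```

The preview option of the standalone class crops the page to the content, so no manual cropping of the PDF is needed afterwards.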
Afterwards, I use Ghostscript to produce a PNG image from the generated PDF. I did not use language bindings and instead just went with std::process::Command.
The following lines should be sufficient to convert test.tex into test.png, with the portable TinyTeX installation and Ghostscript's gswin64c.exe present in subfolders of the project's directory under Windows. (As TinyTeX and Ghostscript also exist for other operating systems, the example can easily be adapted to work elsewhere.)
use std::process::Command;

fn main() {
    // Compile test.tex into test.pdf using the bundled TinyTeX.
    let output = Command::new("TinyTex\\bin\\win32\\pdflatex.exe")
        .args(&["test.tex"])
        .output()
        .expect("failed to run pdflatex");
    println!("{}", String::from_utf8(output.stdout).unwrap());

    // Render test.pdf into test.png using the bundled Ghostscript.
    let output = Command::new("gs\\gswin64c.exe")
        .args(&[
            "-dNOPAUSE",
            "-dBATCH",
            "-sDEVICE=png16m",
            "-r1000",
            "-sOutputFile=test.png",
            "test.pdf",
        ])
        .output()
        .expect("failed to run Ghostscript");
    println!("{}", String::from_utf8(output.stdout).unwrap());
}
Of course this is not particularly good or robust code at this stage, but it shows how to proceed with the given problem, and I wanted to leave it here in case anyone with a similar problem finds this post.
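To make the snippet cross-platform, the hard-coded Windows paths can be chosen at compile time. A small sketch; the Unix-side names pdflatex and gs are assumptions (they hold when the tools are on PATH, while a bundled installation would use its own relative paths):

```rust
// Pick platform-appropriate executable paths for pdflatex and Ghostscript.
// The Windows paths match the bundled layout above; the Unix names assume
// the tools are on PATH -- adjust for a bundled installation.
fn tool_paths() -> (&'static str, &'static str) {
    if cfg!(target_os = "windows") {
        ("TinyTex\\bin\\win32\\pdflatex.exe", "gs\\gswin64c.exe")
    } else {
        ("pdflatex", "gs")
    }
}

fn main() {
    let (pdflatex, gs) = tool_paths();
    println!("pdflatex: {pdflatex}, ghostscript: {gs}");
}
```

Since cfg! is evaluated at compile time, the unused branch is simply dead code on each platform; the spawning code from above stays identical.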
In our software, we want the user to open a project file containing a variable number of items (similar to a Visual Studio project), and they should be able to extract and insert these items from external sources (from another project, for example). I know the user should open projects, save projects, and extract and insert items, but in terms of UML use case diagrams, I don't know how to represent the last three use cases:
Either as extensions of the first one, which only exist after the first use case has occurred.
Or by representing the use case Open Project as an included use case of the other three.
In the picture I have the two Use Case Diagrams. Are they both good?
Ask yourself: is Open Project a use case? What is the added value? I guess there is none at all. So if there is no added value, it is not a use case. And if it is not a use case, you don't need a bubble.
I don't think either one of your solutions is correct.
The diagram with the extends indicates that we might save a project while opening a project, which seems odd to me.
The diagram on the right says the inverse: while saving a project we also open a project. Again, that seems wrong to me.
From my viewpoint these use cases all need to be separate use cases without an extend or include relation between them. They all seem to be somewhat on the same level. I can imagine that each of the use cases could be triggered by a single menu option.
When I right-click a symbol and run the Find Usages ReSharper command, ReSharper will often seem to spend most of its time searching resx files. The majority of the time during which the progress dialog is visible, the file names shown in that dialog are various .resx files in the solution.
Is ReSharper actually searching .resx files? Why would it do this, and why would it take so long? Can I alter this behavior?
Setup: ReSharper 8.2.0.2160 C# Edition. Visual Studio 2013 Premium.
What I've tried:
Adding resx as a mask to the Code Inspection Items To Skip R# preference, but that doesn't make a difference
Find Usages Advanced, uncheck the Textual Occurrences option. Doesn't make a difference
The answer to this is all due to ReSharper's underlying architecture. When processing files, ReSharper will build an abstract syntax tree, and each node in the tree can have one or more references to an element in the semantic model. In other words, the Foo in the expression new Foo(42) will have a reference to the semantic element describing the class called Foo. Each file will have lots of references, as any usage of an element (variable, parameter, method, property, CSS class, HTML colour, file system path, and more) has one or more references to the declaration of that element.
These references are very powerful. They enable Ctrl+Click navigation, by simply navigating to the target(s) of the reference. They can provide code completion by displaying the candidate targets that would satisfy a reference at the current location in code.
And of course, they also power Find Usages, by finding all references that target a particular element. But this requires working backwards, from the target to the reference. The brute force approach would require checking the target of every reference in every file, trying to find the target. This clearly wouldn't scale, and the data set is too big to cache.
To make Find Usages run in a sane timescale, ReSharper also maintains a word index, of all words used in all files (this also helps with normal Go To navigation). When you invoke Find Usages on a symbol (such as EnterDate in the screenshot in the question), ReSharper uses the word index to narrow down the files it needs to search - it would look up EnterDate, and only use those files that include the word. Once it has a reduced subset of files to search, it needs to find any references that target the original element. To do this, it walks the syntax tree of each file in the subset. Each node is checked for references that match the name of the symbol we're looking for, e.g. EnterDate. If it matches, the reference is resolved, and the target is checked to see if it matches the same element - the EnterDate class, or property or whatever it actually was. If it does point to the expected target, it is added to the collection of usages, and gets displayed to the user.
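The narrowing step described above can be illustrated with a toy inverted word index. This is only a sketch of the idea, not ReSharper's actual data structures:

```rust
use std::collections::{HashMap, HashSet};

// Toy inverted index: word -> set of files containing that word.
struct WordIndex {
    words: HashMap<String, HashSet<String>>,
}

impl WordIndex {
    fn new() -> Self {
        WordIndex { words: HashMap::new() }
    }

    // Index every whitespace-separated word of a file's text.
    fn add_file(&mut self, file: &str, text: &str) {
        for word in text.split_whitespace() {
            self.words
                .entry(word.to_string())
                .or_insert_with(HashSet::new)
                .insert(file.to_string());
        }
    }

    // Find Usages would first narrow the search to these candidate files;
    // only they would then have their syntax trees walked and their
    // references resolved against the target element.
    fn candidate_files(&self, word: &str) -> HashSet<String> {
        self.words.get(word).cloned().unwrap_or_default()
    }
}

fn main() {
    let mut index = WordIndex::new();
    index.add_file("Form1.cs", "class EnterDate { }");
    index.add_file("Strings.resx", "EnterDate label text");
    index.add_file("Other.cs", "class Foo { }");
    println!("{:?}", index.candidate_files("EnterDate"));
}
```

In this model a .resx file whose text happens to contain EnterDate becomes a candidate, which is why such files show up in the Find Usages progress dialog even when the real usages are elsewhere.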
(Things are slightly more complex than this, in that a reference might have multiple names, e.g. if you try and find usages on [Pure], ReSharper needs to find any usages of both Pure or PureAttribute. Fortunately, these alternative names are also stored in the word index, and used to help reduce the files to be searched. When checking the references, all of the alternative names are checked)
So, if you have a .resx file that contains the text EnterDate, it will be searched for a reference to the EnterDate element you're looking for - ReSharper will walk the syntax tree of the .resx file, and check each reference to see if it matches EnterDate.
We check all files, even if it seems obvious to the user that the target element can't possibly be used in that file, because we allow references to be cross language. That is, a VB file can reference a C# element, or a HTML file can reference a CSS element, or an XAML file reference a C# method element, and so on. So there is no filtering on "sensible" usage. For example, if EnterDate is a class, you as a user can tell that it's not likely to be in a .resx file, but ReSharper has no way of knowing that. After all, it's perfectly fine to use a class name in the type attribute of a web.config file, or as a parameter to typeof in a VB file. Or a plugin could add a reference provider that allows for usages of typenames in .resx files. So, we keep things simple and search all candidate references, even though it might look odd in the progress dialog.
I saw that there is a Lua plugin for Eclipse, that there is a doc page on the awesome main page (api_doc), and that all the .lua files are in /usr/share/awesome/lib.
So I thought it must be possible to create a library or Execution Environment so that one has tab completion and doc view.
So I tried making my own Execution Environment:
wrote the standard .rockspec file
downloaded the documentation, made an offline version of it, and put it in the docs/ folder
zipped the files and folders from /usr/share/awesome/lib
zipped it all up
tried it out ... and it failed.
When I try to view the documentation for a .lua file, I get "Note: This element has no attached documentation."
Questions: Am I totally wrong in my approach (because I have the feeling I am)? Is there any way to edit the rc.lua with tab completion and doc view?
Koneki will probably take a while to set up, but it's definitely worth it. Going for the ".doclua" approach (using version 1.2) would certainly work, but I doubt that using a script to generate the information you need would work out in the long run.
Most likely, you'll spend a bit of time defining what kind of object you're dealing with every time you come across one. The right thing to do would be to take the time to see whether the object/module/inner type inherits from another object, so you can actually get more completion features as you keep using autocomplete to go from one object to another by pressing "dot" + Ctrl+Space.
In an ideal world, one person could do this right once and share it with others, so they can enjoy a fully featured autocomplete editor.
Found a solution for Eclipse.
First off, the idea of setting up an Execution Environment was the wrong one, as was the whole approach of downloading the documentation.
For more information on that, visit the Eclipse wiki for Lua Development Tools.
The right thing to do is to add a source folder that contains the /usr/share/awesome/lib directory.
The bad news is that my comment from above was totally right, which means one has to adapt each .lua file in /usr/share/awesome/lib to meet the requirements of the documentation language described here.
Then editing the rc.lua (which one can add to the project in Eclipse) works with tab completion and doc view.
Since the documentation language used in the lib files is similar to the one used by Lua Development Tools, one does not have to change many things. Maybe there are even scripts for that.
We are using Cimplicity to operate some installations at our plant. The frontend consists of a lot of .cim files, which are the screens presented to the operator. These files are built with CimEdit, which is basically a graphical click-and-drag program for assembling the screens. Each object you drag onto a screen has the option to run a script, which brings me to my problem.
Because each screen contains a lot of small scripts and functions, it is hard to keep track of what does what. For example, I'm trying to figure out where a certain table in my database is being accessed or updated. Since the files all seem to be compressed (or something similar), I can't use a regular 'search the contents of this file' search.
Things I've tried so far: searching with Windows, with the content option enabled, and also with the compression option. Neither had any success. That makes sense because, as I said, the files seem to be compressed, so the actual script is not stored as plain text.
So, my question in short:
How do I search all the scripts of (preferably multiple) cimplicity screens?
Any tips on how to search compressed files are also very much appreciated.
I stumbled upon another Stack Overflow post while searching for a better Windows search tool and ended up finding this one: https://superuser.com/questions/26593/best-way-to-confidently-search-files-and-contents-in-windows-without-using-an
That post recommends Agent Ransack, and it is indeed possible to search through the .cim files with this tool.
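If a dedicated tool is not an option, a small script that scans files for a raw byte sequence (the way grep or strings treat binary files) can sometimes find identifiers that survive uncompressed inside such containers. A minimal sketch in Rust; the naive substring scan and the .cim directory layout are illustrative assumptions, not Cimplicity specifics:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Return true if `needle` (assumed non-empty) occurs anywhere
// in the raw bytes of the file at `path`.
fn file_contains(path: &Path, needle: &[u8]) -> io::Result<bool> {
    let bytes = fs::read(path)?;
    Ok(bytes.windows(needle.len()).any(|window| window == needle))
}

// Scan every regular file directly inside `dir` and collect the
// paths of those whose bytes contain `needle`.
fn scan_dir(dir: &Path, needle: &[u8]) -> io::Result<Vec<String>> {
    let mut hits = Vec::new();
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_file() && file_contains(&path, needle)? {
            hits.push(path.display().to_string());
        }
    }
    Ok(hits)
}

fn main() -> io::Result<()> {
    // Report every file in the current directory mentioning "MyTable".
    for hit in scan_dir(Path::new("."), b"MyTable")? {
        println!("{hit}");
    }
    Ok(())
}
```

This only helps when the text is stored verbatim somewhere in the file; if the script sections are truly compressed, the byte pattern will not match and a tool like Agent Ransack remains the better bet.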