Are there any good dependency management tools that aren't language specific? [closed]

I'm looking for a dependency management tool that isn't specific to Java or any other particular language.
We use SystemVerilog, a hardware description language, to create stand-alone modules. We tag releases of those modules at various milestones. Higher level designs frequently pull in other modules using Subversion tags.
We attempted to use Subversion externals to automate things, so that when you check out a module you get its dependencies as well. But by the time you get to the system level, there are so many nested externals that it takes an hour to run svn update. Clearly that approach isn't working.
Basically, I want to be able to say, "My module depends on this version of module A, this version of module B, and this version of module C." The tool would do the work of checking out the dependencies, checking out the dependencies of the dependencies, and making sure that there are no conflicting dependencies (e.g. two versions of the same module).
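For illustration, the kind of manifest I have in mind might look something like this (the format and module names are invented, not an existing tool):

    # deps.manifest (hypothetical format)
    module: my_module
    depends:
      - moduleA @ tags/release-2.1
      - moduleB @ tags/release-1.0
      - moduleC @ tags/release-3.4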
Are there any tools out there that work well with an arbitrary language and Subversion?

I'm not feeling the pain of dependency tracking that you are describing, which means I probably don't fully understand your problem.
One approach is to keep all versions of modules in separate files in the same library. For example, you can have adder_0_0.sv for the first version of a full adder HDL module, which would describe a module called adder_0. If you find a bug in the module, you can create a file called adder_0_1.sv also describing adder_0. You will be able to use adder_0_1.sv instead of adder_0_0.sv. If you want to change the interface, either by adding or removing ports, or changing the semantics of the ports, then you can create a file called adder_1_0.sv describing a module called adder_1. Note that adder_0 and adder_1 cannot be used interchangeably.
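As a sketch of that convention (the adder itself is invented for illustration):

    // adder_0_0.sv -- first release of the adder_0 interface
    module adder_0 (input logic [7:0] a, b, output logic [8:0] sum);
      assign sum = a + b;
    endmodule

    // adder_1_0.sv -- the interface changed (carry-in added), so the module name changes too
    module adder_1 (input logic [7:0] a, b, input logic cin, output logic [8:0] sum);
      assign sum = a + b + cin;
    endmodule

A bug-fix release adder_0_1.sv would re-declare adder_0, so any given build compiles only one of the adder_0_*.sv files.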
The philosophy behind this approach is that all of these files are write-once. You just keep adding new files to your library. Any project that uses this library just checks out the whole library and uses whichever files it needs. Dependency management is only as complicated as putting the right filenames in the right project description file for whatever simulation or synthesis tool you use. No special dependency management tool is required. The fewer separate libraries you have, the easier it will be to manage them.

Once I had a tool flow that looked for source files in two places. First, it looked in the local directory. Second, it looked in the "master" area, which was a shared, read-only directory with all the code checked out for everyone to use. If I needed to modify some code, I checked out only the module I needed; the script would then pick up that module from my local workspace and read the rest of the code from the master area. This was all custom scripting, no off-the-shelf tools, but it wasn't too hard.
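A hypothetical re-creation of that lookup order (the paths and names are invented, not the original scripts):

    #!/usr/bin/env python3
    # Resolve a source file: the local workspace wins over the master area.
    import os

    LOCAL  = os.path.expanduser("~/workspace/src")   # my checked-out modules
    MASTER = "/proj/master/src"                      # shared, read-only checkout

    def resolve(filename):
        """Return the first path that exists, searching local then master."""
        for root in (LOCAL, MASTER):
            candidate = os.path.join(root, filename)
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError(filename)

    print(resolve("adder.sv"))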
If you get this working, you might be able to go further and compile the master area code into shared, master libraries. This could really speed up compile time.


Should I have traces within Alloy "submodules"?

This is a "best practices" question, rather than a question on the language itself.
I am working on breaking my project into modules, following the advice in the Software Abstractions book. I have signatures, facts, and predicates for each small piece of my larger model in their own separate files, and I have them properly included in my "main file".
For a few of the smaller modules, I found it useful to "prove out" the design by adding traces at that level. Traces are just facts though, so when I run the traces I have created in my main file, I get odd results ("two time steps at once", etc).
I see two options (after either of which I can just complete my main file traces):
comment out the traces in "sub modules"
make the "sub module" traces a bit less restrictive to allow for the option for nothing to happen
Do either of these look correct? Or is this usually done differently?
One of the best things I discovered while using Alloy is to not use facts but to use predicates instead. You can control which predicates are active in a run or assert, and you can always combine predicates, so it is not much of an inconvenience.
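A minimal sketch of the idea (the model is invented, using Alloy 6 temporal syntax):

    sig Switch {}
    var sig On in Switch {}        // which switches are on; varies over time

    pred init   { no On }
    pred flip   { some s: Switch | On' = On + s or On' = On - s }
    pred traces { init and always flip }

    // Each command decides whether the trace constraints apply:
    run withTraces    { traces } for 3
    run unconstrained { }        for 3

Had traces been a fact, both commands would be forced to obey it; as a predicate, only withTraces opts in.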

Archiving scenarios and revisiting stories [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 5 years ago.
Improve this question
Working on a large and complex application, I wonder where and whether we should be storing scenarios to document how the software works.
When we discuss problems with an existing feature, it's hard to see what we have already done, and it would be hard to look back with a scrum tool such as TFS. We have some tests, but these are not visible to the product owner. Should we be looking to pull out some vast story/scenario list, amending and updating it as we go, or is this not agile?
We have no record of how the software works other than the code, some unit tests, some test cases, and a few out-of-date user guides.
We tend to use our automated acceptance tests to document this. As we work on a user story we also develop automated tests and this is part of our Definition of Done.
We use SpecFlow for the tests and these are written as Given, When, Then scenarios that are easy to read and understand and can be shared with the product owners.
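For instance, a scenario in one of our .feature files might read like this (this particular example is invented):

    Feature: Account withdrawal
      Scenario: Withdraw within the available balance
        Given an account with a balance of 100
        When the user withdraws 40
        Then the remaining balance should be 60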
These tests add a lot of value: they form our automated regression suite, so regression testing is quicker and easier, and because they are constantly kept up to date as we develop new stories, they also act as documentation of how the system works.
You might find it useful to have a look at a few blogs around Specification by Example, which is essentially what we are trying to do.
A few links I found useful in the past are:
http://www.thoughtworks.com/insights/blog/specification-example
http://martinfowler.com/bliki/SpecificationByExample.html
Apart from the tests, we also used a wiki for documentation. The REST API in particular was documented with request/response examples, as was other software behaviour (results of long discussions, hard-to-remember details).
Since you want to be able to match a description of what you've done to the running software, it sounds like you should put that description in version control along with the software. Start with a docs/ directory, then add detail as you need it. I do this frequently, and it just works. If you want to make it web-servable, set up a web server somewhere to check out the docs every so often and point the document root at the working copy's docs/ directory.
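A minimal version of that setup, assuming Subversion and cron (the paths are hypothetical):

    # crontab entry: refresh the served working copy every 30 minutes
    */30 * * * * svn update -q /var/www/project

Then point the web server's document root at /var/www/project/docs.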

What is Device Tree? Advantages & Disadvantages? [closed]

What is Device Tree in Linux?
What are the advantages and disadvantages of Device Tree?
If anyone knows Device Tree in detail, please help answer the above questions.
The device tree is a description of the hardware components in a system. Here is the list of device tree files in Linux for the ARM architecture:
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/arm/boot/dts?id=refs/tags/v3.10
From here:
http://devicetree.org/Device_Tree_Usage
The device tree is a simple tree structure of nodes and properties. Properties are key-value pairs, and a node may contain both properties and child nodes.
The nodes of the tree describe parameters that the Linux kernel, or other software such as U-Boot, uses to initialize the hardware.
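For example, a node describing a memory-mapped UART might look like this (an invented fragment, not taken from a real board file):

    / {
        soc {
            uart0: serial@101f0000 {
                compatible = "arm,pl011";     /* tells the kernel which driver binds */
                reg = <0x101f0000 0x1000>;    /* base address and size */
                status = "disabled";          /* off by default; a board can enable it */
            };
        };
    };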
Some of the advantages include:
Simple to change the configuration of the system without having to recompile any source code.
Can easily add support for new hardware (for example if you have a new rev of a board which only changes some minor components, you may be able to run the same software load as previous revs of the board, with only small changes to the .dts file on the new board...)
Can reuse existing .dts files with include statements, and can override previously defined functionality. For example, if you include a .dtsi (device tree include file) that defines a hardware component but has it disabled, you can just create a new node in your higher-level .dts file that does nothing but enable that component (see the fragment after this list).
They (can) provide easy to read and understand descriptions of the hardware, and can give hardware components descriptive names.
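As a sketch of that include-and-override pattern (the file names are invented; the &uart0 label refers to the fragment above):

    /* board.dts */
    #include "soc.dtsi"       /* defines uart0 with status = "disabled" */

    &uart0 {
        status = "okay";      /* board-level override: just enable the node */
    };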
Some of the disadvantages include:
Not so easy to write a new .dts file, because it requires very detailed knowledge of the hardware.
Even if you know all the details of the hardware, it may be hard to figure out the exact syntax to express what you want to do (i.e. the documentation is lacking in many respects).
For me, writing a .dts file is almost 100% trial and error: pulling examples from other .dts files, seeing what they do, and checking whether the result gets closer to what I want. Often the examples are all I have to work with, and there isn't much in the way of an explanation of what is going on.

What application help systems (like CHM files) exist on Linux/GTK? [closed]

On Windows, CHM is a very good option.
Is there anything other than delivering a static set of HTML pages and making a primitive call to a web browser (which is itself a problem on Linux)? That approach offers no full-text search, no separate bookmarks, and it opens a new browser tab for each help call.
The GNOME Yelp program is what is used for GTK/GNOME applications. It supports a number of formats, but not CHM directly. They have started to define their own markup, named Mallard, but I don't know the status of that effort.
I'd still recommend static HTML as the best option (and of course man pages!). For example, you can use Sphinx to write beautiful documentation with full-text search support!
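Getting started with Sphinx is roughly this (a sketch; the exact prompts vary by version):

    pip install sphinx
    sphinx-quickstart docs     # answer a few prompts to scaffold the project
    make -C docs html          # the generated HTML includes client-side full-text search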
There are CHM viewers available on Linux, though frankly, as a Linux user, I'd prefer to get static HTML pages.
Some examples are chmsee and kchmviewer.
AFAIK there is no universal system. Depending on your desktop environment (GNOME/KDE) there might be help systems, but they are usually based on loose files and use full-blown browsers (usually WebKit-based).
For Lazarus, a CHM-based help system and embedded browser were created, including CHM write support.
The reasons to avoid loose static HTML were mostly:
The 60,000-lemma static documentation took too long to install on lighter systems or on systems with specialist filesystems.
CHM removes slack and adds compression.
We also support non-POSIX and OS X systems, and little filesystem-related problems (charsets/encoding, separators, path depth, etc.) and case-insensitive filesystems on *nix caused a lot of grief. The CHM-based help solved that, allowing one set of routines to access help data on all systems.
Indexing and the TOC are B-tree based and can easily be merged at runtime from independently produced help sets. In general, integrating independently produced help files is an underappreciated aspect of help systems, yet key to open platforms.
Native full-text search.
Having your own viewer also gives you the ability to take advantage of extra features on top of the base system.
I'm not mentioning the Lazarus system in the hope that you adopt it, since at the moment it is too oriented toward a development system (SDK), and the viewer is not even available as a separate package. I mention it mainly to illustrate the problems of loose HTML.
I haven't investigated what KDE/GNOME/Eclipse use as their help systems for a while, though. If I had to restart from scratch, that's where I would look first.
If I had to create something myself quickly, I would use zipped static HTML, a single gzipped file with metadata/indexes, and the lightest browser (Konqueror?) I could find. Not ideal, not like Windows, but apparently the best Linux can offer.

Verilog linting tools? [closed]

What are some good linting tools for Verilog? I'd prefer one that can be configured to either handle or ignore certain vendor-specific primitives like LUTs, PLLs, etc.
I recently tried verilator-3.810, but out of the box it needs a little help with the primitives.
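One way to give it that help (a sketch; the port list here is invented, not the primitive's real interface) is to lint against empty stub modules for the vendor primitives, e.g. with verilator --lint-only -Wall stubs.v top.v:

    // stubs.v -- empty stand-in so the linter can resolve a vendor primitive
    /* verilator lint_off UNDRIVEN */
    module PLL_BASE (input CLKIN, output CLKOUT, output LOCKED);
      // intentionally empty: the ports exist only so instantiations type-check
    endmodule
    /* verilator lint_on UNDRIVEN */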
So what (linting) tools do you use to deal with the not-so-strict syntax of Verilog?
I have never used a free linting tool, such as the one you mentioned (verilator).
My only experience has been with (expensive) commercial linting tools. Thus far, every one I have used has required me to spend time to customize the rule-set to filter out checks which I consider unimportant. For example, by default, every tool generates many warnings related to signal naming conventions. Since these in no way affect how RTL is synthesized to gates or lead to simulation issues, I choose to disable them.
The Spyglass tool (Atrenta) seems to have the widest range of capabilities, but also requires quite a bit of set-up. I like the Hal tool (Cadence) because it is very easy to start using right away (but, it too requires some set-up).
In my experience, it's generally not worth it. Anything I've tried needs loads of initial setup, because out of the box they try to check everything. But each shop has its own coding standards, so you spend loads of time seasoning the linter to taste. Then once you try to integrate IP or code from another section of the company (which generally has a different idea of nice code), the linter goes mental, so you end up saying: wire im_happy = Verdi_happy & simulator_happy & synth_happy;
I also have used Spyglass and, as toolic indicated, it requires setting up a run script just to check even one file, and the default checks complain about useless things like unloaded bits on array data types. Conformal will also output quite a bit of detail for its RTL warnings, and you would have to exclude certain modules anyway if formal verification is part of your flow. Like Spyglass, it requires a bit of setup.
Despite having access to these tools, I only use them at the very end. During coding and validation I use VCS with lint checks turned on and fix anything Verdi complains about. This catches quite a bit and does not require any configuration or script files to use. Neither is free (or cheap).
Ascent Lint from Real Intent is pretty good. It runs fast and is easy to set up.
