How to show the state space in SPIN model-checking

The "Automata View" in iSpin (v. 1.1.4) shows .. exactly what?
It seems it is just a graph of the control flow of one process.
How would I get the full state space of the system?
E.g., in Ben-Ari's Principles of the Spin Model Checker, I want Figure 4.1; or in the Overview, I want Fig. 1.

The generated pan program supports the -d and -D command-line arguments, which print the state tables in ASCII and dot format, respectively.
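For concreteness, here is a minimal sketch of that workflow (assuming Spin 6.x, a C compiler, and Graphviz; model.pml is a placeholder file name):

spin -a model.pml                    # generate the verifier sources (pan.c and friends)
cc -o pan pan.c                      # compile the verifier
./pan -d                             # print the state tables as ASCII
./pan -D > model.dot                 # print the state tables in dot format
dot -Tpng model.dot -o model.png     # render the graph with Graphviz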

The book mentions (in Appendix 2) a tool spinSpider which is part of
jspin.
I could compile it from source, but did not manage to run it successfully (the error messages are unhelpful and the book does not explain its usage).
Anyway, spinSpider seems deprecated in favour of VMC in Erigone.
It is not clear whether it has the same functionality (draw the complete state graph).
I could compile it but not run it, as VMC seems specific to Erigone and incompatible with Spin; e.g., it says "Can't find file check.pml.trc". Is this the ".trail" file?


How can I find some manuals about rocket-chip?

I'm learning the rocket-chip code base, but I find it difficult to read because of the complex relationships between its parts, so I need some manual to help me. Unfortunately, there seem to be few manuals about it. Could anyone point me to manuals that would help with reading rocket-chip's code?
Working with Rocket-Chip requires you to understand the following things really well:
Diplomacy -- how Rocket-Chip implements the Diplomacy framework that's used to negotiate parameters during circuit elaboration and propagate them through the chip. Look at Henry Cook's Ph.D. dissertation at U.C. Berkeley for context on this, or better yet, conference talks on the subject.
TileLink -- specifically the way the Rocket-Chip uses Chisel to implement the TileLink specification
Functional programming in Scala -- the Rocket-Chip code base makes extensive use of Scala language features such as case classes, pattern matching, higher-order functions, partial functions, anonymous functions, trait mix-ins, and so on. So you'll see things like important variables defined through a map function, pattern-matched against a case statement, and filled in by implicit parameters that you can't easily locate, or even determine, at code-writing time.
Chisel -- Specifically the data types and constructors for making circuit components. Chisel is the easy part.
I don't know of any kind of manual, but here are some things to help you understand the architecture:
Draw pictures of object hierarchies -- the object hierarchies can be 10 to 12 levels deep and there are over a thousand relevant classes and objects to know about (just within Rocket-Chip's src/main/scala folder, grep -rn "class" | wc -l returns 1126, grep -rn "object" | wc -l returns 533, and grep -rn "trait" | wc -l returns 196). There are barely any comments, so you need to see how each class, object, and trait is used. Ctrl+click to follow the superclass (where it says extends RocketSubsystemModuleImp), and draw out the class hierarchies. That will help you build a big-picture sense of where behavior comes from.
Use IntelliJ's structure panel -- this can help you keep the most relevant objects, classes, and their variables in mind, at least for a given file you have open.
Look for usage -- Again in IntelliJ, Ctrl+hover on a class to see the hover window that says "show uses for BlahBlahBlah". Ctrl+click that, and then browse to the different places it is used.
Use grep and find -- If you come across a pattern or construction that's unfamiliar, search for that pattern throughout the code base to see how it's used.
Break it, then fix it -- If you're using IntelliJ, remove the ._ after the import statements, and replace it with the exact classes that are being imported. To find those classes, you can comment that entire line out, then see what breaks (as in what gets a red squiggly line underneath it). Then replace the line, and Ctrl+click the broken class or constructor to follow each one to where it is implemented in the code. Read the source to determine the parameters and how it works.
// import freechips.rocketchip.tilelink._
// change to:
import freechips.rocketchip.tilelink.{TLToAXI4, TLToAHB}
Debug -- I would say debug, so you can see how the objects go together at runtime, but I don't yet have a grasp on how to get that information, since the circuit elaborates through Makefiles, scripts, and sbt, rather than just through sbt.
Write tests -- Use the scalatest framework or chisel test specs to ask discrete questions about how things get built. This will help you determine properties of individual variables, objects, or configurations, but you'll need to have a lot of structural knowledge first, plus an expectation of what each value you're testing against should be. That requires a lot of the above first.
I would recommend you have a look at chisel3; the Rocket-Chip RISC-V core is written in it. I've added a couple of links below to get you started:
Chisel_Homepage, Chisel_Github, Chisel_Tutorial.
There is also RISCV-mini, a three-stage RISC-V core that was put together for learning purposes; it is linked below, though I'm not sure how up to date it is.
It may also be worth looking at some example FPGA projects once you get comfortable with it. Microsemi, a Microchip company, has a selection of RISC-V cores and an ecosystem to go with them; I'll link the MiV Ecosystem below too. 3_Stage_RISCV_mini, MIV_ECOSYSTEM.
Hope this helps,
Ciaran

How to protect my script from being copied and modified?

I created an expect script for a customer, and I fear he will customize it however he wants without coming back to me, so I tried to encrypt it, but I didn't find a way to do it.
Then I tried to convert it to an executable, but some commands, like the "send" command, were not recognized by ActiveTcl, even though the script works perfectly on Red Hat.
So is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
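For instance, wrapping the same starkit for deployment on Linux is just a matter of pointing -runtime at a different tclkit (the path below is purely illustrative):

tclkit sdx.kit wrap myapp -runtime /path/to/tclkit-linux-x86_64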
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler out of the ActiveState TclDevKit. It's a code-obscurer formally (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to store the essential code on your server and run it as a back-end; just give the user a front-end application to make the requests. This way the essential processes are under your control, and the user cannot access that code.

What exactly is the Link-edit step

Question
What exactly does the link-edit step in my COBOL compiler do?
After the code is compiled, there is a link edit step performed. I am not really sure what this step does.
Background Information
Right out of school (3 years ago) I got a job as a mainframe application developer. Having learned nothing about mainframes in school, I have many gaps in my knowledge. Around my shop, we kind of have a "black box" attitude of "we don't need to know how a lot of this stuff works, it just does". I am trying to understand why we need this link-edit step if the program has already compiled successfully.
The linkedit/binderer step makes an executable program out of the output from the compiler (or the Assembler).
If you look at the output data set on SYSLIN from your COBOL compile step (if it is a temporary dataset, you can override it to an FB, LRECL 80 sequential dataset to be able to look at it) you'll see "card images", which contain (amongst some other stuff) the machine-code generated by the compiler.
These card-images are not executable. The code is not even contiguous, and many things like necessary runtime modules are missing.
The Program Binder (PGM=HEWL) takes the object code (card-images) from the compiler/assembler and does everything necessary (according to the options it was installed with, further options you provide, and other libraries which may contain object code, loadmodules or Program Objects) to create an executable program.
There used to be a thing called the Linkage Editor which accomplished this task. Hence linkedit, linkedited. Unfortunately, in English, bind does not conjugate in the same way as edit. There's no good word, so I use Binderer, and Bindered, partly to rail against the establishment which decided to call it the Program Binder (not to be so confused with Binding for DB2, either).
So, today, by linkedit people mean "use of the Program Binder". It is a process to make the output from your compile/assemble into an executable program, which can be a loadmodule, or a Program Object (Enterprise COBOL V5+ can only be bindered into Program Objects, not loadmodules), or a DLL (not to be confused with .dll).
It is worth looking at the SYSLIN output and the SYSPRINT output from the binder step, and consulting manuals/presentations on the Program Binder, which will give you an idea of what goes in and what happens (look up any IEW messages, especially for non-zero-RC executions of the step, by sticking the message in a browser search box). From the documentary material you'll start to get an idea of the breadth of the subject also. The Binder is able to do many useful things.
Here's a link to a useful diagram, some more detailed explanation, and the name of the main reference document on the binder for application programmers: z/OS MVS Program Management: User's Guide and Reference
The program management binder
As an end-note, the reason they are "card images" is because... back in olden times, the object deck from compiler/assembler would be punched onto physical cards. Which would then be used as input cards to the linkage editor. I'm not sorry that I missed out on having to do that ever...
In addition to Bill's (great) answer, I think it is worth also mentioning the related topics below ...
Static versus dynamic linking
If a (main) program 'calls' a subprogram, then such a call can happen either 'dynamically' or 'statically':
dynamic: at run-time (when the main program is executed), the then-current content of the subprogram is loaded and executed.
static: at link-time (when the main program is (re-)linked), the then-current content of the subprogram is included (= resolved) in the main program.
Link-edit control cards
The actual creation of the load module (the output of the link-edit step) can be controlled by special directives for the link-editor (a small illustrative example follows this list), such as:
Entry points to be created.
Name of the load module to be created.
Includes (of statically linked subprograms) to be performed.
Alias-members to be created.
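For illustration, a hedged sketch of what such directives might look like in the binder's SYSIN (the DD and member names are made up):

  INCLUDE OBJLIB(SUBRTN1)
  ENTRY   MAINPGM
  ALIAS   MYPROG2
  NAME    MYPROG(R)

Here INCLUDE resolves a statically linked subprogram from the OBJLIB DD, ENTRY sets the entry point, ALIAS creates an additional member name, and NAME names the load module, with (R) replacing any existing member.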
Storing the link-edit output in PDS or PDSE
The actual output (load module) can be stored in members located in either PDS or PDSE libraries. In doing so, you need to think ahead a bit about which format (PDS or PDSE) best fits your requirements, especially when it comes to concatenating multiple libraries (e.g. a preprod environment for testing purposes).

LibreOffice: determine source code part responsible for printing

I am trying to implement some additional functionality to the LibreOffice printing process (some special info should be added automatically to the margins of every printed page). I am using RHEL 6.4 with LibreOffice 4.0.4 and Gnome 2.28.
My purpose is to research the data flow between LibreOffice and system components and determine which source codes are responsible for printing. After that I will have to modify these parts of code.
Now I need advice on methods of source code research. I found plenty of tools, and from my point of view:
strace seems to be very low-level;
gprof requires binaries recompiled with "-pg" CFLAGS; I have no idea how to do that with LibreOffice;
systemtap can probe syscalls only, can't it?
callgrind + Gprof2Dot work quite well together but produce strange results (see below).
For instance, here is the call graph from the callgrind output with Gprof2Dot visualisation. I started callgrind with this command:
valgrind --tool=callgrind --dump-instr=yes --simulate-cache=yes --collect-jumps=yes /usr/lib64/libreoffice/program/soffice --writer
and received four output files:
-rw-------. 1 root root 0 Jan 9 21:04 callgrind.out.29808
-rw-------. 1 root root 427196 Jan 9 21:04 callgrind.out.29809
-rw-------. 1 root root 482134 Jan 9 21:04 callgrind.out.29811
-rw-------. 1 root root 521713 Jan 9 21:04 callgrind.out.29812
The last one (pid 29812) corresponds to the running LibreOffice Writer GUI application (I determined this with strace and ps aux). I pressed Ctrl+P and the OK button, then closed the application, hoping to see in the logs the function responsible for initialising the printing process.
The callgrind output was processed with the Gprof2Dot tool according to this answer. Unfortunately, I can see in the picture neither the actions I am interested in nor the call graph as such.
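For reference, the post-processing itself is a one-liner (a sketch, assuming gprof2dot and Graphviz are on the PATH; the input file name is the last one from the listing above):

gprof2dot -f callgrind callgrind.out.29812 | dot -Tpng -o callgraph.png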
I would appreciate any info about the proper way of resolving such a problem. Thank you.
The proper way of solving this problem is remembering that LibreOffice is open source. The whole source code is documented and you can browse documentation at docs.libreoffice.org. Don't do that the hard way :)
Besides, remember that the printer setup dialog is not LibreOffice-specific, rather, it is provided by the OS.
What you want is a tool to identify the source code of interest. Test Coverage (TC) tools can provide this information.
What TC tools do is determine what code fragments have run when the program is exercised; think of it as collecting a set of code regions. Normally TC tools are used in conjunction with (interactive/unit/integration/system) tests, to determine how effective the tests are. If only a small amount of code has been executed (as detected by the TC tool), the tests are interpreted as ineffective or incomplete; if a large percentage has been covered, one has good tests and reasonable justification for shipping the product (assuming all the tests passed).
But you can use TC tools to find the code that implements features. First, you execute some test (or perhaps manually drive the software) to exercise the feature of interest, and collect TC data. This tells you the set of all the code exercised, if the feature is used; it is an overestimation of the code of interest to you. Then you exercise the program, asking it to do some similar activity, but which does not exercise the feature. This identifies the set of code that definitely does not implement the feature. Compute the set difference of the code-exercised-with-feature and ...-without to determine code which is more focused on supporting the feature.
You can naturally get tighter bounds by running more exercises-feature and more doesn't-exercise-feature and computing differences over unions of those sets.
There are TC tools for C++, e.g., "gcov". Most of them, I think, won't let/help you compute such set differences over the results; many TC tools seem not to have any support for manipulating covered-sets. (My company makes a family of TC tools that do have this capability, including computing coverage-set differences, and including C++.)
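As a rough illustration of the set-difference idea with plain gcov and shell tools (a sketch only: it assumes the target was rebuilt with -fprofile-arcs -ftest-coverage, and app_instrumented, source.cxx and the two scenario flags are placeholders):

rm -f *.gcda                                   # reset the coverage counters (.gcda files sit next to the object files)
./app_instrumented --scenario-with-feature     # exercise the feature of interest
gcov source.cxx                                # writes source.cxx.gcov
awk -F: '$1 ~ /[0-9]/ {print $2+0}' source.cxx.gcov | sort -u > with.txt

rm -f *.gcda                                   # reset again
./app_instrumented --scenario-without-feature  # similar run, feature not used
gcov source.cxx
awk -F: '$1 ~ /[0-9]/ {print $2+0}' source.cxx.gcov | sort -u > without.txt

comm -23 with.txt without.txt                  # line numbers executed only when the feature runs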
If you actually want to extract the relevant code, TC tools don't do that. They merely tell you what the code is by designating text regions in source files. Most test coverage tools only report covered lines as such text regions; this is partly because the machinery many test coverage tools use is limited to line numbers recorded by the compiler.
However, one can have test coverage tools that are precise in reporting text regions in terms of starting file/line/column to ending file/line/column (ahem, my company's tools happen to do this). With this information, it is fairly straightforward to build a simple program to read source files and extract literally the code that was executed. (This does not mean that the extracted code is a well-formed program! For instance, the data declarations won't be included in the executed fragments although they are necessary.)
OP doesn't say what he intends to do with such code, so the set of fragments may be all that is needed. If he wants to extract the code and the necessary declarations, he'll need more sophisticated tools that can determine the declarations needed. Program transformation tools with full parsers and name resolvers for source code can provide the necessary capability for this. This is considerably more complicated to use than just test coverage tools with ad hoc text extraction.

Can a LabVIEW VI tell whether one of its output terminals is wired?

In LabVIEW, is it possible to tell from within a VI whether an output terminal is wired in the calling VI? Obviously, this would depend on the calling VI, but perhaps there is some way to find the answer for the current invocation of a VI.
In C terms, this would be like defining a function that takes arguments which are pointers to where to store output parameters, but will accept NULL if the caller is not interested in that parameter.
As was said, you can't do this in the natural way, but there's a workaround using data value references (requires LV 2009). It is the same idea as giving a NULL pointer for an output argument: the result is passed in as a data value reference (which is the pointer), and the subVI checks it for Not a Reference. If it is null, it does nothing.
Here is the SubVI (case true does nothing of course):
And here is the calling VI:
Images are VI snippets so you can drag and drop on a diagram to get the code.
I'd suggest you're going about this the wrong way. If the compiler is not smart enough to avoid the calculation on its own, make two versions of this VI. One that does the expensive calculation, one that does not. Then make a polymorphic VI that will allow you to switch between them. You already know at design time which version you want (because you're either wiring the output terminal or not), so just use the correct version of the polymorphic VI.
Alternatively, pass in a variable that switches on or off a Case statement for the expensive section of your calculation.
Like Underflow said, the basic answer is no.
You can have a look here to get what is probably the most official and detailed answer which will ever be provided by NI.
Extending your analogy, you can do this in LV, except LV doesn't have the concept of null that C does. You can see an example of this here.
Note that the code in the link Underflow provided will not work in an executable, because the diagrams are stripped by default when building an EXE and because the RTE does not support some of the properties and methods used there.
Sorry, I see I misunderstood the question. I thought you were asking about an input, so the idea I suggested does not apply. The restrictions I pointed do apply, though.
Why do you want to do this? There might be another solution.
Generally, no.
It is possible to do a static analysis on the code using the "scripting" features. This would require pulling the calling hierarchy, and tracking the wire references.
Pulling together a trial of this, there are some difficulties. Multiple identical sub-vi's on the same diagram are difficult to distinguish. Also, terminal references appear to be accessible mostly by name, which can lead to some collisions with identically named terminals of other vi's.
NI has done a bit of work on a variation of this problem; check out this.
In general, the LV compiler optimizes the machine code in such a way that unused code is not even built into the executable.
This does not apply to subVIs (because there's no way of knowing that you won't try to use the value of the indicators somehow, although LV could do it if it removes the FP when building an executable, and possibly does), but there is one way you can get it to apply to a subVI - inline the subVI, which should allow the compiler to see the outputs aren't used. You can also set its priority to subroutine, which will possibly also do this, but I wouldn't recommend that.
Officially, in-lining is only available in LV 2010, but there are ways of accessing the private VI property in older versions. I wouldn't recommend it, though, and it's likely that 2010 has some optimizations in this area that older versions did not.
P.S. In general, the details of the compiling process are not exposed and vary between LV versions as NI tweaks the compiler. The whole process is supposed to have been given a major upgrade in LV 2010 and there should be a webcast on NI's site with some of the details.
