Default Run Command in Alloy5: what's the equivalent in Alloy4?

I'm working on a project exam and having trouble understanding what the equivalent of the default Run command in Alloy5 is when I switch to Alloy4.
If I write a model in Alloy5 without any command (no check, no run) and click 'Execute', it executes the following command:
"Run Default for 4 but 4 int, 4 seq expect 1"
This satisfies the goal of my project: checking whether a described model has instances. No need to check properties or anything else, just whether there are instances.
In Alloy4, if I click 'Execute', it tells me "No commands defined", so there is no predefined run command.
My question is: what is the equivalent of that Alloy5 default Run command in Alloy4?
I'm going crazy, since I have to work with the Alloy4 API (in Java) and can't figure out how to solve this problem (because isSolvable() doesn't work if I don't put a command in the model).

The following should work:
run {} for 4
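If you are driving Alloy4 from its Java API, the same idea carries over: when the parsed module contains no commands, append an empty run to the source and parse again. A minimal sketch against the Alloy 4.x compiler API (the model path "model.als" is a placeholder, and error handling is omitted):

import java.nio.file.Files;
import java.nio.file.Paths;

import edu.mit.csail.sdg.alloy4.A4Reporter;
import edu.mit.csail.sdg.alloy4compiler.ast.Command;
import edu.mit.csail.sdg.alloy4compiler.ast.Module;
import edu.mit.csail.sdg.alloy4compiler.parser.CompUtil;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Options;
import edu.mit.csail.sdg.alloy4compiler.translator.A4Solution;
import edu.mit.csail.sdg.alloy4compiler.translator.TranslateAlloyToKodkod;

public class HasInstances {
    public static void main(String[] args) throws Exception {
        A4Reporter rep = new A4Reporter();
        String source = new String(Files.readAllBytes(Paths.get("model.als")));

        // Mimic Alloy5's implicit default: if the model defines no
        // command, append an empty run so there is something to solve.
        Module world = CompUtil.parseEverything_fromString(rep, source);
        if (world.getAllCommands().isEmpty()) {
            source += "\nrun {} for 4\n";
            world = CompUtil.parseEverything_fromString(rep, source);
        }

        A4Options opts = new A4Options();
        opts.solver = A4Options.SatSolver.SAT4J;

        // Execute the (possibly appended) command and report satisfiability
        Command cmd = world.getAllCommands().get(0);
        A4Solution sol = TranslateAlloyToKodkod.execute_command(
                rep, world.getAllReachableSigs(), cmd, opts);
        System.out.println("Has instances: " + sol.satisfiable());
    }
}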

Related

XSCT executes command in interactive shell but not within script

First, take note that I am using the Xilinx SDK 2018.2 on Kubuntu 22.04 because of my company's policy. I know from research that the command I'm using is deprecated in newer versions, but in the version I am using it works flawlessly - kind of. But read for yourself:
My task is to automate all steps of the FPGA build to create a pipeline which automatically builds and tests the FPGAs. To achieve this, I need to build the code - this works flawlessly in XSDK. For automation, this also has to work on the command line, so I followed the manual to find out how this is achieved. Everything works as expected if I type it into the interactive prompt, as shown here:
user@ubuntuvm:~$ xsct
****** Xilinx Software Commandline Tool (XSCT) v2018.2
**** Build date : Jun 14 2018-20:18:43
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
xsct%
Then I can enter the commands I need to import all needed files and projects (hw, bsp, main project). With this toolset, everything works as expected.
Because I want to automate it via a pipeline, I decided to pack this into a script for easier access. The script contains exactly the commands I entered in the interactive shell and therefore looks like this:
user@ubuntuvm:~/gitrepos/repository$ cat ../autoBuildScript.tcl
setws /home/user/gitrepos/repository
openhw ./hps_packages/system.hdf
openbsp ./bsp_packages/system.mss
importprojects ./sources/mainApp
importprojects ./bsp_packages
importprojects ./hps_packages
regenbsp -bsp ./bsp_packages/system.mss
projects –clean
projects -build
The commands are identical to the ones I entered via the interactive CLI tool; the only difference is that they are now packed into a script. Yet now the build no longer completes. I get the following error:
user@ubuntuvm:~/gitrepos/repository$ xsct ../autoBuildScript.tcl
INFO: [Hsi 55-1698] elapsed time for repository loading 1 seconds
Starting xsdk. This could take few seconds... done
'mainApp' will not be imported... [ALREADY EXIST]
'bsp_packages' will not be imported... [ALREADY EXIST]
'hps_packages' will not be imported... [ALREADY EXIST]
/opt/Xilinx/SDK/2018.2/gnu/microblaze/lin
unexpected arguments: –clean
while executing
"error "unexpected arguments: $arglist""
(procedure "::xsdb::get_options" line 69)
invoked from within
"::xsdb::get_options args $options"
(procedure "projects" line 12)
invoked from within
"projects –clean"
(file "../autoBuildScript.tcl" line 8)
I only inserted projects -clean because I had gotten the error with projects -build before and wanted to check whether it also happens with another argument.
On the internet I didn't really find anything matching my specific problem. I also stuck strictly to the official manual, in which the command is used just as I use it - but there it works.
I've also checked the line endings (set to UNIX) because I suspected xsct might be reading a newline character or something similar, with no result. This error also occurs when I create the bsp and hardware from scratch. To me the error looks like an internal Xilinx one, but let me know what you think.
So, it appears that I just fixed the problem on my own. Thanks to everyone reading for being my rubber ducky.
Apparently, version 2018.2 of XSDK has a few bugs, including inconsistent command interpretation. For some reason the command works in the interactive shell but not in the script - because the command is in its short form. I just learned from a Xilinx tutorial that projects -build - even though it works - is apparently not the full command. You usually need to clarify that the command comes from the SDK, like this: sdk projects -build. The interactive shell seems to ignore this fact for some reason - and so does the script for any command except projects. Therefore, I added the "sdk" prefix to all the commands I used from the SDK, just to be safe.
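For reference, the script from above with the prefix added is roughly what I ended up with (only projects strictly needed it, but I prefixed every SDK command to be safe):

sdk setws /home/user/gitrepos/repository
sdk openhw ./hps_packages/system.hdf
sdk openbsp ./bsp_packages/system.mss
sdk importprojects ./sources/mainApp
sdk importprojects ./bsp_packages
sdk importprojects ./hps_packages
sdk regenbsp -bsp ./bsp_packages/system.mss
sdk projects -clean
sdk projects -build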
I cannot believe that I just spent 2 days debugging an error whose fix consists of 3 letters (+1 whitespace).
Thanks everybody for reading, and have a nice day.

Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I'm looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled through SCM, because it's tedious to configure Active Choices on each job that may need it. And with it being separate from the Jenkinsfile, one job ends up being defined in multiple places. :(
I am looking to verify whether this is possible, because I can't get the syntax right if it is, and I haven't been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can't get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I've tried having the function return a List, because the docs say it requires a list, and I've also (illogically) tried returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
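For completeness, a minimal declarative pipeline using that newline-joined form (the parameter name, the choices, and the stage are illustrative):

pipeline {
    agent any
    parameters {
        // a single String, one choice per line
        choice(name: 'TARGET_ENV', choices: 'TESTING\nSTAGING\nPRODUCTION', description: 'Deployment target')
    }
    stages {
        stage('Show selection') {
            steps {
                echo "Selected: ${params.TARGET_ENV}"
            }
        }
    }
}

Note this only covers static choices; as the next answer explains, computing the choices dynamically at parameter time is a different story.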
First, we need to understand that Jenkins starts running your pipeline code only after presenting you with the Parameters page. Once you've set up the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It definitely does not do the following: allocate a node, set the variables, run some of your code until the parameters clause is reached, stop all that, present you with the GUI, and then continue where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first build option is Build and not Build with Parameters. Jenkins hasn't run your code yet, so it has no idea whether there are any parameters. On the first run it will remember the parameters (and any choices, if there were any) as configured for that run, so on the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (and not "current and absolutely up-to-date") situation fits you, your code may need minor changes to work — second time. (I don't know what exactly lib.myfunc() returns but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

Show progress in an azure-pipeline output

So I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the connected system goes through its normal network cycle inspecting all the devices on the local network (not very important for the question). While waiting, I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
the python code looks like this and works just fine when I run it locally:
import sys  # i below is the counter of the surrounding wait loop

sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("." * (i % 3 + 1)) + "\r")
sys.stdout.flush()
however the output shown in the azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The Azure pipeline log console is not user-interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is now executing and not finished yet. For simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team. Hopefully they find a way to fix it in a future sprint.
I have seen this once but never actually used it myself. It can be done in both Bash and PowerShell; I'm not sure whether it works from inside a Python script, so you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then show up again in that next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) works by echoing special logging commands. There's a great description to be found here: Logging commands
What you want to do is something just like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The "vso" is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step's context to another, which is done with ##vso[task.setvariable variable=name]value
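For example, a sketch of handing a value from one Bash step to a later one (myVar and its value are illustrative):

# step 1: export the variable to the job context
echo "##vso[task.setvariable variable=myVar]some-value"

# step 2, a later step in the same job: read it back with macro syntax
echo "The value is $(myVar)"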

How to get Flow type checker to detect changes in my files?

So Flow only works correctly the first time I run it, and then I have to restart my computer before it'll work correctly again.
Specifically, the problem I'm seeing is that we are using the Flow language to add type annotations to our JS code. Our linter script is set up to run Flow type checking among other things. However, when I fix an issue in my code and then rerun the linter script, it still comes back with the exact same errors... BUT when it shows the piece of code where the error is supposed to be, it actually shows my updated code with the fix.
As an example, I copied a file into the project that I didn't think I really needed, but thought I might, so I copied it in just in case. It then came up with a bunch of linter errors, so I decided to just delete the file since I didn't really need it. But when I run "yarn lint --fix" again, it's still complaining about that file, EVEN THOUGH THE FILE DOESN'T EXIST! Interestingly, where the linter output is supposed to show the code for those errors, it's now just blank.
Or another example, let's say I had a couple of functions in my code:
100: function foo() {}
...
150: function bar() {}
And foo has a lot of errors because it was some throwaway code I don't need anymore, so I just delete it. The new code looks like:
100: function bar() {}
Well I rerun the linter and get an error like:
Error ------------------------ function foo has incorrect
something...blah blah
src/.../file.js
100| function bar() {}
I also tested this out on a coworker's machine and they got the same behavior that I did. So it's not something specific to my machine, although it could be specific to our project?
Note: There doesn't appear to be a tag for Flow, but I couldn't post without including at least one tag, so I used flowlang even though that's actually a different language :-( I'm assuming that anyone looking for flow would also use that tag since it's the closest.
The first time you launch Flow, it starts up a background process that is then used for subsequent type checking. Unfortunately this background process is extremely slow, and buggy to boot. On Linux you can run:
killall flow
to stop the background process. Then if you rerun the Flow type checker, it will actually see all your latest changes.
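Alternatively, Flow has its own subcommand to shut down the server for the current project root, which should be equivalent for this purpose and avoids killing unrelated Flow processes:

flow stop

The next flow (or linter) invocation then starts a fresh server that sees the current state of your files.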

Run init.d script conditionally based on hostname

What would be the best way to conditionally run an init.d script on Linux based on hostname? I'm working with New Relic, and some of the servers simply don't need it installed, but they're all otherwise basic copies of one another. This is Ubuntu.
I've tried (and failed) to put in a host conditional, but for the life of me I can't get it working. I threw exits at the top of the file as well as in the start function, but it seems to fire up every time. Without knowing completely how those scripts are fired, I'm a little confused about how to alter it so it doesn't fire if the server name isn't something like production, etc.
Any guidance would be super helpful.
Put this at the top of the script you would like to disable:
if [ $(hostname) != "goodhost" ]
then
exit
fi
replacing "goodhost" with the actual name of the host where the script is supposed to run.
Does that solve the problem?
