PowerShell: command-line applications not working after calling a method from a module - SharePoint

I have created a PowerShell module (.psm1) that includes a few other PowerShell scripts. We use it for SharePoint.
So basically, here's what happens:
I have a deploy script that retrieves the module location from the registry
It loads the module using the Import-Module cmdlet (using -force switch)
This module in turn loads the SharePoint 2010 snap-in and a few other scripts that I created
It runs a deployment script that references functions from the included scripts
It also runs a command line application and sends the output directly to the screen
The script will usually work the first time. However, after a few tries the command-line tool will stop working and stop sending output to the screen altogether. And if I try to run any command-line tool (not a cmdlet) after running my script, it doesn't work anymore: no output, nothing is done. It's just the same as hitting Enter on a blank prompt. Anything PowerShell-specific, and GUI applications, will work fine, but running any console application produces no visible result. The only solution is to close my PowerShell session and open it again; it will usually work once, and then I have to close it again. Our users certainly won't be happy about that.
The most notable things about the script:
Script blocks are used extensively (for logging); a script block is sent to a handler that executes it using Invoke-Command and logs the step
It manipulates SharePoint objects
All objects are properly disposed of
No static variables are created or changed
There are a few global variables shared across all scripts
What I have tried:
I stripped my code down to a bare minimum: loading an XML file and restarting a few Windows services, but I'm still getting this intermittently. I have no idea which part of the code could cause it. I would love to post the code, but our company policy forbids it, so my apologies.
Update as per the comment below:
Here's roughly how I use script blocks. I have the function below, which is used every time I want to make the user aware of a task I'm executing and what its outcome is.
function DoTask($someString, $scriptBlock, $param)
{
    try
    {
        OutputTaskDescription $someString
        Invoke-Command $scriptBlock -ArgumentList $param
        OutputResultOK
    }
    catch
    {
        OutputResultError $_.ToString()
    }
}
it could then be used like this:
$stringVar = "something"
$spSite = New-SPSite
DoTask 'Deploying something' -param $spSite -ScriptBlock {
    DoSomethingToObject $stringVar
    DoSomethingToObject $spSite.Name
}
it would then output something like:
Deploying Something ------------- OK
Deploying Something ------------- ERROR
Also notice that I pass $spSite in the argument list but just use the string directly. I still don't understand how this works, but it seems I can access all primitive-typed variables even without passing them as arguments, whereas I have to pass more complex objects as params, or else they don't have any value.
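For illustration, here is a minimal sketch (not the original deployment code; $complexObj just stands in for the SPSite object). Values supplied via -ArgumentList only become visible inside the script block if it declares a param() block (or reads $args), while other variables used in a script block literal are resolved from the scope the block was created in when it runs in-process:
# Minimal sketch, assuming a plain local Invoke-Command; $complexObj is a hypothetical stand-in for the SPSite.
$stringVar = "something"                              # resolved from the enclosing scope at run time
$complexObj = [pscustomobject]@{ Name = "MySite" }    # stand-in for the SharePoint object

$block = {
    param($site)                                      # receives the value supplied via -ArgumentList
    "string from enclosing scope: $stringVar"
    "object passed as argument:   $($site.Name)"
}

Invoke-Command -ScriptBlock $block -ArgumentList $complexObj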
Update:
After much searching and days of pain, I have found others with the same problem. My code exhibits the exact same symptoms as described here:
http://connect.microsoft.com/PowerShell/feedback/details/496326/stability-problem-any-application-run-fails-with-lastexitcode-1073741502
I guess there is no solution yet to this problem.

After a little while I noticed that if I had run some very memory-intensive functions, I too got that behavior where everything you try to execute just returns to the prompt. I'd recommend setting Set-PSDebug -Trace 2 to see what those functions are actually doing. I fixed my issue by doing this and figuring out how to make my functions more efficient.
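For reference, a minimal way to wrap tracing around the suspect code (the script name below is just a placeholder):
Set-PSDebug -Trace 2     # 2 = trace script lines, variable assignments and function calls
.\Deploy-Stuff.ps1       # hypothetical stand-in for the function or script under investigation
Set-PSDebug -Off         # turn tracing back off afterwards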

Related

Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I'm looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled with SCM, because it's tedious to configure Active Choices on each job that may need them. And with it being separate from the Jenkinsfile, it's then one job defined in multiple places. :(
I am looking to verify whether this is possible, because I can't get the syntax right (if it is possible), and I haven't been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can’t get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I’ve tried having the function return a List, because it requires a list according to the docs, and I’ve also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
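A minimal sketch of that format in a declarative pipeline (the artifact names here are hypothetical; a list returned by something like lib.myfunc() could be turned into this shape with join('\n')):
pipeline {
    agent any
    parameters {
        // one String with newline separators, not a Groovy list
        choice(name: 'Artifacts', choices: 'artifact1.zip\nartifact2.zip\nartifact3.zip')
    }
    stages {
        stage('Show selection') {
            steps {
                echo "Selected: ${params.Artifacts}"
            }
        }
    }
}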
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand how Jenkins runs your pipeline: it starts by presenting you with the Parameters page. Once you've set up the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with the GUI, and then continuing where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it has no idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (rather than "current and absolutely up-to-date") situation works for you, your code may need only minor changes to work the second time around. (I don't know what exactly lib.myfunc() returns, but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

Golem task responds with runtime error 2, can't determine the cause

The repo for all the code I've been using is updated here. When I run the requestor script it exits with runtime error 2 (file not found). I am not sure how to further debug or fix this. So far I've converted my code over to a python-slim Docker image to better mirror the example. It also works when I spin up the Docker image locally: typing and running "/golem/work/imageclassifier.py --trainmodel" works from root. I switched all my code to use absolute paths. I also made sure the shebang (#!) line uses Linux line endings rather than Windows ones, which was giving me errors before. I also fixed a bug where my script returned error code 2 when called with no args; it now passes.
import pickle  # import shown for completeness; the surrounding training script is omitted

clf.fit(trainDataGlobal, trainLabelsGlobal)
pkl_file = "classifier.pkl"
with open(pkl_file, 'wb') as file:
    pickle.dump(clf, file)
is the only piece I could think of that might cause the issue, but as far as I can tell this is the proper way to pickle something in Python. The requestor script is also heavily based on the simple service example, and I tried to mirror my design on that. I just need help getting more information while debugging, or guidance on how to move forward from here.

Show progress in an azure-pipeline output

So I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the connected system goes through its normal network cycle inspecting all the devices on the local network (not very important for the question). However, while I'm waiting I show a message in the terminal with "..." going from "." to ".." to "...", just to show that the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The Azure Pipelines log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team. Hopefully they will find a way to fix it in a future sprint.
I have seen it once but never actually used it myself: this can be done in both Bash and PowerShell. I'm not sure whether it works from inside a Python script; you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could drag the numeric value (however many percent) along into the next step, but the progress counter would then show up again in that next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) works by echoing special logging commands. There's a great description to be found here: Logging commands
What you want to do is something like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
    sleep 1
    echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task.... "VSO" is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=myVar]value

Azure Devops logging commands in release pipeline

I am trying to customize the output of my release pipeline by setting some environment variables in a task.
I found the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell
which however does not seem to work.
What I am doing is simply creating a pipeline with a single task (either Bash or PowerShell), and declaring the commands specified in the link there, using the inline version of the task.
Has anyone already successfully managed to make these commands work?
Am I doing something wrong and/or incomplete?
Does anyone have a better way to customise the pipeline with relevant information from a task? E.g. through the release name, or the description and/or tag of the specific release?
Edit:
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
This is the code, copied and pasted. I also tried it as a script, and it does not work either.
I use a hosted Windows agent.
When you set a new variable with the logging command, the variable is available only in the following tasks, not in the same task.
So split your script into 2 tasks; put the last 3 lines in the second task, and you will see that it works:
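For example, the doc sample above could be split roughly like this (each block goes into its own inline PowerShell task):
# Task 1: set the variables with logging commands
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"

# Task 2: the variables set in Task 1 are now available here
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
# the secret can still be read through macro syntax in the task's inputs: $(secretSauce)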
This also puzzled me for a while. In the end I found out that if you want to modify $env:Path, you can use the special command task.prependpath with the logging command syntax, like "##vso[task.prependpath]local directory path". You can find more of these special commands in their source:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md

How to get Flow type checker to detect changes in my files?

So Flow only works correctly the first time I run it, and then I have to restart my computer before it'll work correctly again.
Specifically, the problem I'm seeing is this: we are using Flow to add type annotations to our JS code, and our linter script is set up to run Flow type checking among other things. However, when I fix an issue in my code and then rerun the linter script, it still comes back with the exact same errors... BUT when it shows the piece of code where the error is supposed to be, it actually shows my updated code that's fixed.
So as an example, I had a file I copied into the project that I didn't think I really needed, but thought I might, so I copied it in just in case. Well, it came up with a bunch of linter errors, so I decided to just delete the file since I didn't really need it. Then I ran "yarn lint --fix" again, but it's still complaining about that file, EVEN THOUGH THE FILE DOESN'T EXIST! Interestingly, where the linter output is supposed to show the code for those errors, it's just blank.
Or another example, let's say I had a couple of functions in my code:
100: function foo() {}
...
150: function bar() {}
And foo has a lot of errors because it was some throwaway code I don't need anymore, so I just deleted it. The new code looks like:
100: function bar() {}
Well I rerun the linter and get an error like:
Error ------------------------ function foo has incorrect
something...blah blah
src/.../file.js
100| function bar() {}
I also tested this out on a coworker's machine and they got the same behavior that I did. So it's not something specific to my machine, although it could be specific to our project?
Note: There doesn't appear to be a tag for Flow, but I couldn't post without including at least one tag, so I used flowlang even though that's actually a different language :-( I'm assuming that anyone looking for flow would also use that tag since it's the closest.
The first time you launch Flow it starts up a background process that is then used for subsequent type checking. Unfortunately this background process is extremely slow, and buggy to boot. On Linux you can run:
killall flow
to stop the background process. Then, if you rerun the Flow type checker, it will actually see all your latest changes.
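If killall isn't available (or you're not on Linux), Flow's own CLI can do the same thing; a quick sketch:
flow stop       # shut down the Flow background server for the current project root
flow status     # starts a fresh server if needed and reports errors against your latest code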
