For years, I've been using the following code across multiple ColdFusion environments:
<cfthread
    action = "run"
    name = "#Local.cachedFilename#"
    src = "#Arguments.src#"
    >
    <!--- Process image --->
    <cfset Local.objImage = This.processImage(
        src = Arguments.src
        ) />
</cfthread>
I've come to reuse my component in a different environment today and, for the first time, I've hit an error: Arguments.src does not exist inside the thread.
A bit of Googling turned up an answer: I should be using the attributes scope inside a thread, so the processImage call now uses Attributes.src instead of Arguments.src.
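For reference, the fixed thread body looks like this (identical to the code above apart from the scope):
<cfthread
    action = "run"
    name = "#Local.cachedFilename#"
    src = "#Arguments.src#"
    >
    <!--- Inside the thread, the tag's attributes arrive in the Attributes scope --->
    <cfset Local.objImage = This.processImage(
        src = Attributes.src
        ) />
</cfthread>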
This works fine. All is well. But I'm confused.
I wrote this code for Railo. It worked fine. I ported it over to CF10, it worked fine. I ran it on CF11. It worked fine.
The first time I've come across an error is on a particular box, also running CF10.
So my question is: was there an update somewhere, or some particular set of circumstances, that allowed me to use the arguments scope inside a cfthread? Essentially, if I'm supposed to be using the attributes scope, how has this worked fine for years?!
CFThread is a tag, not a function call. It therefore has attributes, not arguments. It would appear that Railo/Lucee incorrectly makes the attributes available in an arguments scope as well. Adobe's behavior is correct IMO; you just got away with it by luck on Railo in the past.
Disclaimer: I can achieve the behavior I'm looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled via SCM, because it's tedious to configure Active Choices on each job that needs them. And with it being separate from the Jenkinsfile, it's then one job defined in multiple places. :(
I am looking to verify whether this is even possible, because I can't get the syntax right, and I haven't been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can't get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I've tried having the function return a List, because the docs say it requires a list, and I've also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
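So if lib.myfunc() returns a List of Strings, a sketch along these lines should work (untested; and per the explanation below, a freshly computed list only becomes visible on the next run):
def artifactList = lib.myfunc() // assumed to return a List of Strings

pipeline {
    agent any
    parameters {
        // join the list into a single newline-separated String
        choice(name: 'Artifacts', choices: artifactList.join('\n'))
    }
    stages {
        stage('Use it') {
            steps {
                echo "Chosen artifact: ${params.Artifacts}"
            }
        }
    }
}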
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand how Jenkins runs a parameterized pipeline: it starts by presenting you with the Parameters page. Only once you've set the parameters and pressed Build is a node allocated, are the variables set, and does your code start to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with the GUI, and then continuing where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it doesn't have any idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If a "somewhat recent" (rather than "current and absolutely up-to-date") list fits your needs, your code may need only minor changes to work, from the second run onward. (I don't know what exactly lib.myfunc() returns, but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).
So Flow only works correctly the first time I run it, and then I have to restart my computer before it'll work correctly again.
Specifically, the problem I'm seeing is that we are using the Flow language to add type annotations to our JS code. Our linter script is set up to run Flow type checking among other things. However, when I fix an issue in my code and then rerun the linter script, it still comes back with the exact same errors... BUT when it shows the piece of code where the error is supposed to be, it actually shows my updated code that's fixed.
So as an example, I had a file I copied into the project that I didn't think I really needed, but maybe I would. So I copied it in just in case. Well, then it came up with a bunch of linter errors, so I decided to just delete the file since I didn't really need it. So then I run "yarn lint --fix" again, but it's still complaining about that file, EVEN THOUGH THE FILE DOESN'T EXIST! Now interestingly, where the linter output is supposed to show the code for those errors, it's just blank.
Or another example, let's say I had a couple of functions in my code:
100: function foo() {}
...
150: function bar() {}
And foo has a lot of errors because it was some throwaway code I don't need anymore, so I just delete it. So the new code looks like:
100: function bar() {}
Well I rerun the linter and get an error like:
Error ------------------------ function foo has incorrect
something...blah blah
src/.../file.js
100| function bar() {}
I also tested this out on a coworker's machine and they got the same behavior that I did. So it's not something specific to my machine, although it could be specific to our project?
Note: There doesn't appear to be a tag for Flow, but I couldn't post without including at least one tag, so I used flowlang even though that's actually a different language :-( I'm assuming that anyone looking for flow would also use that tag since it's the closest.
The first time you launch Flow, it starts a background process that is then used for subsequent type checking. Unfortunately this background process is extremely slow, and buggy to boot. On Linux you can run:
killall flow
To stop the background process. Then if you rerun the flow type checker, it will actually see all your latest changes.
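Flow's own CLI has equivalents, if memory serves, which avoid killing the process by name:
flow stop    # shut the background server down cleanly
flow check   # one-shot full check that picks up your latest changes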
Of all the weirdness in our current XPages project, this one currently takes the cake:
We have created a few Java beans in our current project. Inside Domino Designer they are all stored below Code >> Java, so it is clear that they are automatically part of the project's classpath. All our beans belong to a package structure de.edcom.* (that's what we have been using forever without any problems). The objects are mostly called from SSJS using the full package names (they aren't registered as managed beans for various reasons), as in
var o = de.edcom.myObject.someMethod();
None of my previous XPages projects had this problem; it just worked. In the current project, however, the XSP runtime all of a sudden started interpreting the package name as a String object, giving us this runtime error:
Unknown member 'edcom' in Java class 'java.lang.String'
The SSJS code line in question looks like this:
return de.edcom.TOC.buildTOC();
We absolutely don't have any clue as to what could be causing this, why it happens only in this project, and why it sometimes IS working, but mostly isn't.
There's one difference between this project and earlier ones, and that is localization: users can switch between "english" and "german" locales, and of course we are using code like
context.setLocaleString("de")
and of course we have several JavaScript code fragments checking the locale setting, as in
if(context.getLocaleString()==="de"){...
This morning we have in fact renamed / refactored all Java beans to different package names (com.edcom.*), and since then the error hasn't appeared (fingers crossed!).
But then again I think this is just too stupid, there can't really be a connection, or can it?
EDIT:
I tried using importPackage(), in conjunction with an xe:objectData datasource (as recommended by Adrian and Paul in their answers), but I'm still receiving that "unknown member 'edcom' in Java class 'java.lang.String'" message, now only at a different position in the code, at my line saying importPackage(de.edcom).
I'll be switching back to the "com.edcom" package and keep looking for a better solution; unfortunately, searching for the string "de" inside the entire code yields close to 12,000 matches; no way to find the real reason for this in that haystack.
EDIT #2:
Looks like we finally found the dreaded "de" variable: it was well hidden in a computed custom control property; I don't have a clue why all the File Searches that I performed over the last few days couldn't find this one.
Anyway, it is very good to know that we have to be even more careful when naming our SSJS variables; I never would have thought that an SSJS variable name could interfere with TLD parts of Java package names. We will probably make it an internal policy that our variables must be named "vDe", "vCom", "vIt" etc. instead of just short lowercase letters...
You probably used a variable de (which is a String) in another SSJS script that ran before the one that hits the problem.
I've seen similar issues where a variable that is not explicitly declared in a script block can inherit its value from another script block.
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
    <xp:this.beforeRenderResponse><![CDATA[#{javascript:
        var ex1 = "Hello World";
        var ex2 = "Bye bye"}]]>
    </xp:this.beforeRenderResponse>
    <xp:this.afterRenderResponse><![CDATA[#{javascript:
        print("value ex1: " + ex1);
        print("value ex2: " + ex2);}]]>
    </xp:this.afterRenderResponse>
</xp:view>
results in:
[1CA8:000C-4354] 10.06.2016 14:33:01 HTTP JVM: value ex1: Hello World
[1CA8:000C-4354] 10.06.2016 14:33:01 HTTP JVM: value ex2: Bye bye
So you should use the importPackage() function to import the references to your Java classes, or much better, use managed beans or dataContexts.
SSJS resolves names against variables stored in its scopes, and anything with a dot in it is resolved against those variables first. It sounds like localization stores the translations in a variable named "de"; that would explain your problem.
Maybe importPackage(de.edcom) and then using return TOC.buildTOC(); would resolve the problem. I would consider that better practice, but either way in SSJS you're risking variable name collisions.
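In other words, something along these lines (untested, reusing the class name from the question):
importPackage(de.edcom);
// after the import the class resolves by its short name, so the call
// site no longer spells out the collision-prone package prefix
return TOC.buildTOC();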
Personally, I prefer to back every XPage with a controller Java class (I use Jesse Gallagher's frostillicus framework, and so it's always accessible with the variable pageController), so my SSJS just calls pageController.myMethod(), which then avoids all name collisions and allows Java imports to ensure I map to the right Java class. There are more basic ways of doing it, e.g. with an xe:dataObject at the top of every XPage.
I have a very strange problem with the constructor of the AptPkg::Cache object in the Precise package of libapt-pkg-perl (v0.1.25).
The Perl script is designed to download a Debian package for three different architectures (i386, armel, armhf). For each architecture I do the following (sketched in code after the list):
Configure AptPkg::Config '$_config' with the right parameters and package-lists for the desired architecture.
Create the cache object with AptPkg::Cache->new .
Call the method AptPkg::Cache->policy to create the AptPkg::Policy object.
Call the method AptPkg::Policy->candidate("program-name") .
Download the package for the selected architecture.
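A rough sketch of that loop (simplified and untested; 'program-name' is a placeholder and the real script sets more arch-specific configuration):
use AptPkg::Config '$_config';
use AptPkg::Cache;

for my $arch (qw(i386 armel armhf)) {
    $_config->set('APT::Architecture', $arch);        # step 1: arch-specific config
    my $cache  = AptPkg::Cache->new;                  # step 2: build the cache
    my $policy = $cache->policy;                      # step 3: get the policy object
    my $cand   = $policy->candidate('program-name');  # step 4: find the candidate
    # step 5: download the candidate version for this architecture ...
}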
This works very well with Ubuntu Lucid, but with Ubuntu Precise I can only download the package for the first architecture defined. For the other two architectures there is no installation candidate (the method AptPkg::Policy->candidate("Package-Name") doesn't return an object).
I tried to build a workaround and found one that makes the script work for all three architectures in Precise without problems:
If I create the cache object (with AptPkg::Cache->new) twice in a row it works and the script downloads the debian package for all three architectures:
my $cache = AptPkg::Cache->new;
$cache = AptPkg::Cache->new;
I'm sure that the problem has something to do with the method AptPkg::Cache->new, because I checked everything else that could cause the problem, twice. All config variables are set correctly, and I even get a different hash from AptPkg::Cache->new for each architecture, but it seems that I am overlooking something important.
I'm not very familiar with Perl, so I am asking you guys if someone can explain why the script works with the workaround but not without it. Furthermore, it looks quite strange to have the same line of code twice in your script.
Maybe you hit this bug - https://bugs.launchpad.net/ubuntu/+source/libapt-pkg-perl/+bug/994509
There is a script there to test if you're affected. If it's something else consider submitting a bug report.
edit: Just saw this is 11 months old :/
I have created a PowerShell module (.psm1) file that includes a few other PowerShell scripts. We use it for SharePoint.
So basically, here's what happens:
I have a deploy script that retrieves the module location from the registry
It loads the module using the Import-Module cmdlet (using -force switch)
This module in turn loads the SharePoint 2010 snap-in and a few other scripts that I created
It runs a deployment script that references functions from the included scripts
It also runs a command-line application and sends the output directly to the screen
The script will usually work the first time. However, after a few tries the command-line tool will stop working and stop sending output to the screen altogether. And if I try to run a command-line tool (not a cmdlet) after running my script, it doesn't work anymore: no output, nothing is done. It's just the same as hitting Enter on a blank prompt. Anything PowerShell-specific, and GUI applications, will work fine, but running any console application will not produce any conceivable result. The only solution is to close my PowerShell session and open it again; it will usually work once and then I have to close it again. Our users certainly won't be happy about that..
The most 'notable' things in the script:
script blocks are used extensively (for logging): a script block is sent to a handler that executes it using InvokeCommand and logs the step
it manipulates SharePoint objects
all objects are properly disposed of
no static variables are created nor changed
There are a few global variables shared across all scripts
What I have tried:
I stripped my code down to a bare minimum: loading an XML file and restarting a few Windows services, but I'm still getting this intermittently. I have no idea which part of the code could cause this. I would love to post the code, but our company policy forbids me to, so my apologies..
Update as per the comment below:
Here's roughly how I use script blocks. I have the function below, which is used every time I want to make the user aware of a task that I'm executing and what its outcome is.
function DoTask($someString, $scriptBlock, $param)
{
    try
    {
        OutputTaskDescription $someString
        InvokeCommand $scriptBlock -ArgumentList $param
        OutputResultOK
    }
    catch
    {
        OutputResultError $_.ToString()
    }
}
It could then be used like this:
$stringVar = "something"
$SpSite = New-SPSite
DoTask 'Deploying something' -param $SpSite -ScriptBlock {
    dosomethingToObject $stringVar
    dosomethingToObject $SpSite.Name
}
it would then output something like:
Deploying Something ------------- OK
Deploying Something ------------- ERROR
Also notice that I pass $SpSite in the argument list while I just use the string directly. I still don't understand how this works, but it seems like I can access all primitive-typed variables even without passing them as arguments, whereas I have to pass more complex objects as params, or else they don't have any value.
Update:
After much searching and days of pain, I have found others with the same pain. My code exhibits the same exact symptoms as described here:
http://connect.microsoft.com/PowerShell/feedback/details/496326/stability-problem-any-application-run-fails-with-lastexitcode-1073741502
I guess there is no solution yet to this problem.
After a little while I noticed that if I'd run some very memory-intensive functions, I too got that behavior where everything you try to execute just returns to the prompt. I'd recommend setting Set-PSDebug -Trace 2 to see what those functions are actually doing. I fixed my issue by doing this and figuring out how to make my functions more efficient.
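For example, wrapped around the suspect calls:
Set-PSDebug -Trace 2   # trace each script line, variable assignment and function call
# ... run the suspect function here ...
Set-PSDebug -Off       # switch tracing back off when done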