Groovy: Manipulate variable from within each

I'd like to manipulate a variable in Groovy from within the closure of each, like this:
def stringTest = ''
def foo = ['one', 'two', 'three']
foo.each {
    stringTest.concat(it)
}
println stringTest
But this gives me the following error:
| Error 2013-03-13 15:26:12,330 [http-bio-8080-exec-2] ERROR
errors.GrailsExceptionResolver - NoSuchMethodError occurred when
processing request: [GET] /Reporting-Web/reporting/show/1
reporting.web.AppFiguresService$_getProductIDs_closure2.(Ljava/lang/Object;Ljava/lang/Object;Lgroovy/lang/Reference;)V.
Stacktrace follows: Message: Executing action [show] of controller
[com.xyz.reporting.ReportingController] caused exception: Runtime
error executing action Line | Method
->> 195 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter | 895 | runTask in java.util.concurrent.ThreadPoolExecutor$Worker |
918 | run in '' ^ 680 | run . . in java.lang.Thread
Caused by ControllerExecutionException: Runtime error executing action
->> 195 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter | 895 | runTask in java.util.concurrent.ThreadPoolExecutor$Worker |
918 | run in '' ^ 680 | run . . in java.lang.Thread
Caused by InvocationTargetException: null
->> 195 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | 63 | doFilter in grails.plugin.cache.web.filter.AbstractFilter | 895 | runTask in java.util.concurrent.ThreadPoolExecutor$Worker |
918 | run in '' ^ 680 | run . . in java.lang.Thread
Caused by NoSuchMethodError:
reporting.web.Foo$_getProductIDs_closure2.(Ljava/lang/Object;Ljava/lang/Object;Lgroovy/lang/Reference;)V
->> 77 | getProductIDs in reporting.web.Foo$$ENzya8Hg
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | 45 | show in com.xyz.reporting.Foo | 195 | doFilter in grails.plugin.cache.web.filter.PageFragmentCachingFilter | 63 |
doFilter in grails.plugin.cache.web.filter.AbstractFilter | 895 |
runTask in java.util.concurrent.ThreadPoolExecutor$Worker | 918 |
run in '' ^ 680 | run . . in java.lang.Thread
I'm quite new to Groovy; any help would be great here!

This works...
def stringTest = ''
def foo = ['one', 'two', 'three']
foo.each {
    stringTest += it
}
println stringTest

The concat() method returns a new string, so you have to assign the result:
stringTest = stringTest.concat(it)
It is not possible to modify the loop variable itself from within the closure.
Edit: the error message is a Grails error, raised while the controller was executing the show() action.

Java strings are immutable. You can collect the concatenated string:
def stringTest = ''
def foo = ['one', 'two', 'three']
stringTest = foo.collect { stringTest + it }.join()
assert stringTest == "onetwothree"
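The same immutability argument can be checked in Python, whose strings behave the same way (a cross-language analogy for illustration, not Groovy code):

```python
# Python's str is immutable too: operators and methods return a new
# string instead of mutating the receiver, just like Java's concat().
string_test = ""
for it in ["one", "two", "three"]:
    string_test + it  # result is discarded, mirroring the original bug
assert string_test == ""  # nothing was accumulated

string_test = ""
for it in ["one", "two", "three"]:
    string_test = string_test + it  # reassign, mirroring stringTest += it
assert string_test == "onetwothree"
```

The fix is the same in both languages: capture the value the method returns, because the original string is never changed in place.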

Related

How to deploy a Logic App Standard that use an XSLT map/schema

Say I have a Logic App Standard app with a workflow that contains something like this:
I have the following structure for my files:
| .gitignore
| azure-pipelines.yml
|
+---Integrations
| +---Artifacts
| | +---Maps
| | | myMap.xslt
| | |
| | \---Schemas
| +---LogicApp
| | LogicApp-template.json
| |
| +---RequiredIntegerations
| | storage-template.json
| |
| \---Workflows
| | azure.parameters.json
| | connections.json
| | host.json
| | parameters.json
| |
| \---testXsltWorkflow
| workflow.json
|
\---releasePipeline
| commit.yml
| release.yml
| requiredDeployment.yml
|
+---commitJobs
| integrations.yml
|
\---releaseJobs
deployIntegrations.yml
I have tried to deploy the map folder in the same way I deploy the workflows inside the yaml file (zip deploy):
I build the map Artifact like this:
# Build Maps Artifacts
- job: Build_Maps
  displayName: Build_Maps
  dependsOn: Build_LA_Artifact
  pool:
    vmImage: 'ubuntu-18.04'
  # Copy the files to project_output folders
  steps:
    - task: CopyFiles@2
      displayName: 'Create project folder'
      inputs:
        SourceFolder: 'Integrations/'
        Contents: |
          Artifacts/**
        TargetFolder: 'project_output'
    ## Compress the files in the project_output/Artifacts folder into a zip
    ## (i.e. zip the workflow files) and save it in the artifact staging folder
    - task: ArchiveFiles@2
      displayName: 'Create project zip'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)/project_output/Artifacts'
        includeRootFolder: false
        archiveType: 'zip'
        archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        replaceExistingArchive: true
    ## Publish the maps zip as a pipeline artifact
    - task: PublishPipelineArtifact@1
      displayName: 'Publish project zip artifact'
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
        artifact: 'mapsArtifact'
        publishLocation: 'pipeline'
And I deploy the artifacts like this:
# deploy maps
- deployment: DeployMapsArtifact
  dependsOn: DeployLogicAppArtifact
  pool:
    vmImage: 'ubuntu-18.04'
  environment: "${{ parameters.environment }}"
  strategy:
    runOnce:
      deploy:
        steps:
          - download: current
            artifact: 'mapsArtifact'
          - task: AzureFunctionApp@1
            displayName: 'Deploy logic app maps'
            inputs:
              azureSubscription: "${{ parameters.connectedServiceName }}"
              appType: 'functionApp'
              appName: "${{ parameters.LA_name }}-${{ parameters.environment }}"
              package: '$(Pipeline.Workspace)/mapsArtifact/*.zip'
              deploymentMethod: 'zipDeploy'
The deployment succeeds, but I still can't see my XSLT map in the Maps tab of my Standard logic app.

Remove duplicated lines based on field and length

I need to remove duplicated lines based on the field between parentheses (e.g. (265394673718132736)), keeping the longer of the duplicates.
Example:
SERVER: 1 - (265394673718132736) - NO - ['OK', 'GROUP1']
SERVER: 2 - (284906813495967745) - NO - ['OK', 'GROUP1']
SERVER: 3 - (184387362225258496) - NO - ['OK', 'GROUP2']
SERVER: 4 - (118642771161645056) - NO - ['OK', 'GROUP1', 'SAR']
SERVER: 4 - (118642771161645056) - NO - ['OK', 'GROUP1']
SERVER: 5 - (234329090943877122) - NO - ['OK', 'GROUP4', 'SAR']
SERVER: 5 - (234329090943877122) - NO - ['OK', 'GROUP4', 'SAR', 'NO']
SERVER: 6 - (287039745190658069) - NO - ['OK', 'GROUP6']
SERVER: 7 - (280378736145072130) - NO - ['OK', 'GROUP3']
Desired result:
SERVER: 1 - (265394673718132736) - NO - ['OK', 'GROUP1']
SERVER: 2 - (284906813495967745) - NO - ['OK', 'GROUP1']
SERVER: 3 - (184387362225258496) - NO - ['OK', 'GROUP2']
SERVER: 4 - (118642771161645056) - NO - ['OK', 'GROUP1', 'SAR']
SERVER: 5 - (234329090943877122) - NO - ['OK', 'GROUP4', 'SAR', 'NO']
SERVER: 6 - (287039745190658069) - NO - ['OK', 'GROUP6']
SERVER: 7 - (280378736145072130) - NO - ['OK', 'GROUP3']
EDIT:
Tried with:
cat test | cut -f1 -d ":" --complement | sort -u -t'-' -k2,2
But I need to remove the shorter line specifically, not an arbitrary one.
awk '{a[$4]=length(a[$4])<length?$0:a[$4]}END{for(x in a)print a[x]}' file
does the job.
Note that the order of lines in the output is not preserved.
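If awk is not available, the same keep-the-longest logic can be sketched in Python (the field index and the tie-breaking mirror the one-liner above; the function name is mine):

```python
def dedupe_keep_longest(lines):
    # Key on the 4th whitespace-separated field, i.e. the "(ID)" column,
    # and keep the longest line seen for each key. As with the awk
    # one-liner, input order is not preserved in general.
    best = {}
    for line in lines:
        key = line.split()[3]
        if len(line) > len(best.get(key, "")):
            best[key] = line
    return list(best.values())
```

Called on the sample input, it keeps one line per ID, always the longer variant when duplicates differ in length.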

what is the difference between cmd and idle when using tqdm?

Recently I wanted to add a simple progress bar to my script, and I used tqdm for that. What puzzles me is that the output is different when I run it in IDLE versus in cmd.
For example, this
from tqdm import tqdm
import time
def test():
    for i in tqdm(range(100)):
        time.sleep(0.1)
gives the expected output in cmd:
30%|███ | 30/100 [00:03<00:07, 9.14it/s]
but in IDLE the output looks like this:
0%| | 0/100 [00:00<?, ?it/s]
1%|1 | 1/100 [00:00<00:10, 9.14it/s]
2%|2 | 2/100 [00:00<00:11, 8.77it/s]
3%|3 | 3/100 [00:00<00:11, 8.52it/s]
4%|4 | 4/100 [00:00<00:11, 8.36it/s]
5%|5 | 5/100 [00:00<00:11, 8.25it/s]
6%|6 | 6/100 [00:00<00:11, 8.17it/s]
7%|7 | 7/100 [00:00<00:11, 8.12it/s]
8%|8 | 8/100 [00:00<00:11, 8.08it/s]
9%|9 | 9/100 [00:01<00:11, 8.06it/s]
10%|# | 10/100 [00:01<00:11, 8.04it/s]
11%|#1 | 11/100 [00:01<00:11, 8.03it/s]
12%|#2 | 12/100 [00:01<00:10, 8.02it/s]
13%|#3 | 13/100 [00:01<00:10, 8.01it/s]
14%|#4 | 14/100 [00:01<00:10, 8.01it/s]
15%|#5 | 15/100 [00:01<00:10, 8.01it/s]
16%|#6 | 16/100 [00:01<00:10, 8.00it/s]
17%|#7 | 17/100 [00:02<00:10, 8.00it/s]
18%|#8 | 18/100 [00:02<00:10, 8.00it/s]
19%|#9 | 19/100 [00:02<00:10, 8.00it/s]
20%|## | 20/100 [00:02<00:09, 8.00it/s]
21%|##1 | 21/100 [00:02<00:09, 8.00it/s]
22%|##2 | 22/100 [00:02<00:09, 8.00it/s]
23%|##3 | 23/100 [00:02<00:09, 8.00it/s]
24%|##4 | 24/100 [00:02<00:09, 8.00it/s]
25%|##5 | 25/100 [00:03<00:09, 8.00it/s]
26%|##6 | 26/100 [00:03<00:09, 8.00it/s]
27%|##7 | 27/100 [00:03<00:09, 8.09it/s]
28%|##8 | 28/100 [00:03<00:09, 7.77it/s]
29%|##9 | 29/100 [00:03<00:09, 7.84it/s]
30%|### | 30/100 [00:03<00:08, 7.89it/s]
31%|###1 | 31/100 [00:03<00:08, 7.92it/s]
32%|###2 | 32/100 [00:03<00:08, 7.94it/s]
33%|###3 | 33/100 [00:04<00:08, 7.96it/s]
34%|###4 | 34/100 [00:04<00:08, 7.97it/s]
35%|###5 | 35/100 [00:04<00:08, 7.98it/s]
36%|###6 | 36/100 [00:04<00:08, 7.99it/s]
37%|###7 | 37/100 [00:04<00:07, 7.99it/s]
38%|###8 | 38/100 [00:04<00:07, 7.99it/s]
39%|###9 | 39/100 [00:04<00:07, 8.00it/s]
40%|#### | 40/100 [00:04<00:07, 8.00it/s]
41%|####1 | 41/100 [00:05<00:07, 8.00it/s]
I also get the same result if I make my own progress bar, like this:
import sys

def progress_bar_cmd(count, total, suffix="", *, bar_len=60, file=sys.stdout):
    filled_len = round(bar_len * count / total)
    percents = round(100 * count / total, 2)
    bar = "#" * filled_len + "-" * (bar_len - filled_len)
    file.write("[%s] %s%s ...%s\r" % (bar, percents, "%", suffix))
    file.flush()

for i in range(101):
    time.sleep(1)
    progress_bar_cmd(i, 100, "range 100")
Why is that, and is there a way to fix it?
Limiting ourselves to ASCII characters, the program output of your second code is the same in both cases: a stream of ASCII bytes. The language definition does not and cannot specify what an output device or display program will do with those bytes, in particular with control characters such as '\r'.
The Windows Command Prompt console at least sometimes interprets '\r' as 'return the cursor to the beginning of the current line without erasing anything'.
In a Win10 console:
>>> import sys; out=sys.stdout
>>> out.write('abc\rdef')
def7
However, when I run your second code, with the missing time import added, I do not see the overwrite behavior, but see the same continued line output as with IDLE.
C:\Users\Terry>python f:/python/mypy/tem.py
[------------------------------------------------------------] 0.0% ...range 100[#-----------------------------------------------------------] ...
On the third hand, if I shorten the write to file.write("[%s]\r" % bar), then I do see one output overwritten over and over.
The tk Text widget used by IDLE only interprets \t and \n, but not other control characters. To some of us, this seems appropriate for a development environment, where erasing characters is less appropriate than in a production environment.
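The 'return to column 0 without erasing' rule is easy to model. Here is a toy simulation of how such a console renders a stream containing '\r' (a sketch of the behavior described above, not how any real terminal is implemented):

```python
def render_line(raw):
    # Interpret '\r' as "move the cursor back to column 0"; subsequent
    # characters overwrite what is there, but nothing is erased.
    screen = []
    col = 0
    for ch in raw:
        if ch == "\r":
            col = 0
        else:
            if col < len(screen):
                screen[col] = ch
            else:
                screen.append(ch)
            col += 1
    return "".join(screen)

print(render_line("abc\rdef"))  # 'def' overwrites 'abc' completely
```

Note what happens when the second write is shorter than the first: the tail of the old text survives, which is why a long trailing suffix in the format string can keep a bar from appearing to overwrite cleanly.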

Grep is working differently accessing a server from different machines

I have a file with values separated by tabs. When a value is not present I put a '-' in the corresponding field.
Each line begins with an identifier. I'm simply searching for lines matching a given identifier, using grep on machine A (Linux) from two different client machines (B and C), and I get two different results. In particular, from one of the machines some consecutive '-' characters are missing.
The two client machines run Ubuntu Linux (B) and Mac OS X (C).
Here is an example:
INPUT FILE:
comp10034_c0_seq1 281 - UniRef90_B7GCX2 276 3e-29 640 98.220640569395 13.90625 Predicted_protein Phaeodactylum_tricornutum - - GO:0006200 ATP_catabolic_process GO:0005524 ATP
binding GO:0016020 membrane pfam00005 138-230 1.00e-09 - - - 93 - 0 0.136126 0
comp10036_c0_seq1 315 - - - - - - - - - - - - - - -- - - - - - - - - 77 + 2 0.00277103 0
comp10037_c0_seq1 350 - - - - - - - - - - - - - - -- - - - - - - - - 77 + 2 0.738719 0
comp6261_c0_seq1 1227 - UniRef90_K0R0D8 519 1e-82 186 42.2982885085575 98.9247311827957 Uncharacterized_protein Thalassiosira_ oceanica - - - - - - - - - - - - - -- 350 + 1 0.0034993 0
GREP FROM MACHINE B
grep 'comp6261_c0_seq1' file.txt
RESULT:
comp6261_c0_seq1 1227 - UniRef90_K0R0D8 519 1e-82 186 42.2982885085575 98.9247311827957 Uncharacterized_protein Thalassiosira_oceanica - - - - - - - - - - - - - -- 350 + 1 0.0034993 0
GREP FROM MACHINE C
grep 'comp6261_c0_seq1' file.txt
RESULT:
comp6261_c0_seq1 1227 - UniRef90_K0R0D8 519 1e-82 186 42.2982885085575 98.9247311827957 Uncharacterized_protein Thalassiosira_oceanica - 350 + 1 0.0034993 0
P.S.
Here in the forum, tabs are not visible, so I chose to write the fields separated by spaces.
Either your input files are different on each machine, or your input file contains control characters that are interpreted differently on each machine. Run diff and cat -v on your input files to discover which is true.
If the files are identical, perhaps the grep isn't. Check to see if your grep is a link, alias or builtin (a shell function). Try running:
which grep
`which grep` 'comp6261_c0_seq1' file.txt
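If cat -v output is awkward to compare across machines, the same check can be sketched in Python: flag every line that contains control bytes other than tab and the newline itself (the function name is mine; point it at your actual input file):

```python
def find_control_chars(path):
    # Return (line_number, repr) pairs for lines containing control
    # bytes other than tab (0x09) and newline (0x0A) -- roughly the
    # characters `cat -v` would make visible, including '\r' (0x0D).
    hits = []
    with open(path, "rb") as f:
        for lineno, line in enumerate(f, 1):
            if any(b < 0x20 and b not in (0x09, 0x0A) for b in line):
                hits.append((lineno, repr(line)))
    return hits
```

Running this on the file on both machines (or on both copies of the file) shows immediately whether stray control characters, rather than grep itself, explain the differing output.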

Gpars withExistingPool Error jsr166y.ForkJoinPool not found

I have updated from 'org.codehaus.gpars:gpars:1.0.0' to 'org.codehaus.gpars:gpars:1.1.0'. My code works fine in 1.0.0 but in version 1.1.0 I cannot find jsr166y.ForkJoinPool anymore.
How do I get the correct ForkJoinPool?
The code I am using is:
import groovyx.gpars.GParsPool
import jsr166y.ForkJoinPool
class Test {
    def pool = new ForkJoinPool()

    def executeAsync(args, closure = null) {
        if (!closure) {
            closure = args
            args = null
        }
        GParsPool.withExistingPool(pool) {
            closure.callAsync(args)
        }
    }
}
I have to import java.util.concurrent.ForkJoinPool to get the ForkJoinPool class. But at runtime I get the following error:
| Error 2013-08-01 13:26:45,807 [http-nio-8080-exec-4] ERROR
errors.GrailsExceptionResolver - ClassNotFoundException occurred when processing
request: [POST] /testpackage/test/saveAll - parameters:
jsr166y.ForkJoinPool. Stacktrace follows:
Message: jsr166y.ForkJoinPool
Line | Method
->> 175 | findClass in org.codehaus.groovy.tools.RootLoader
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 423 | loadClass in java.lang.ClassLoader
| 147 | loadClass . . . . . . . . in org.codehaus.groovy.tools.RootLoader
| 356 | loadClass in java.lang.ClassLoader
| 2451 | privateGetDeclaredMethods in java.lang.Class
| 1810 | getDeclaredMethods in ''
| 46 | getLocked . . . . . . . . in org.codehaus.groovy.util.LazyReference
| 33 | get in ''
| 318 | saveAll . . . . . . . . . in testpackage.UploadImageController
| 195 | doFilter in
grails.plugin.cache.web.filter.PageFragmentCachingFilter
| 63 | doFilter . . . . . . . . in grails.plugin.cache.web.filter.AbstractFilter
| 1145 | runWorker in java.util.concurrent.ThreadPoolExecutor
| 615 | run . . . . . . . . . . . in java.util.concurrent.ThreadPoolExecutor$Worker
^ 722 | run in java.lang.Thread
| Compiling 1 source files.
Update your BuildConfig.groovy dependencies as follows:
compile 'org.codehaus.gpars:gpars:1.1.0'
compile 'org.codehaus.jsr166-mirror:jsr166y:1.7.0'
This should work for you.
