I've been stuck on this for a few days: I'm trying to run a bash script which runs off of its first argument (maybe I should give up all hope, haha).
The syntax for running the script can be assumed to be:
sudo bash script argument
or, since it has og+x set, it can be run as just:
sudo script argument
In Go I'm running it using the following:
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    argument := os.Args[1] // assuming the script's argument is passed to the Go program itself
    c := exec.Command("/bin/bash", "script " + argument)
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err)
    }
    os.Exit(0)
}
I have had absolutely no luck; I've tried loads of other variations for this as well...
exec.Command("/bin/sh", "-c", "sudo script", argument)
exec.Command("/bin/sh", "-c", "sudo script " + argument) (my first try)
exec.Command("/bin/bash", "-c", "sudo script" + argument)
exec.Command("/bin/bash", "sudo script", argument)
exec.Command("/bin/bash sudo script" + argument)
With most of these I'm met with '/bin/bash sudo etc.': no such file or directory, or Error: exit status 1. I have even gone as far as writing a Python wrapper that looks for an argument and executes the bash script with subprocess. To rule out the path to the script not being resolved, I have tried all of the above with the full path to the script rather than just the script name.
For the sake of my remaining hair, what am I doing wrong here? How can I better diagnose this problem so that I get more information than just exit status 1?
You don't need to call bash/sh at all; simply pass each argument separately. Also, to see the actual error you have to capture the command's stderr. Here's a working example:
package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

func main() {
    c := exec.Command("sudo", "ls", "/tmp")
    stderr := &bytes.Buffer{}
    stdout := &bytes.Buffer{}
    c.Stderr = stderr
    c.Stdout = stdout
    if err := c.Run(); err != nil {
        fmt.Println("Error: ", err, "|", stderr.String())
    } else {
        fmt.Println(stdout.String())
    }
    os.Exit(0)
}
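Applied back to the question's scenario, the same pattern would look roughly like this (just a sketch; the script path and how the argument reaches the Go program are assumptions):
argument := os.Args[1]                                 // assuming the argument is passed to the Go program itself
c := exec.Command("sudo", "/path/to/script", argument) // sudo, the script and its argument each passed separately, no shell needed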
Related
I have written a program which should continue executing only if the remote server is reachable; otherwise it should terminate. For that I wrote the logic below, but it did not behave as expected.
Below is the code snippet.
str := "false"
Comd1 := fmt.Sprintf("ping -c 1 %s > /dev/null && echo true || echo false", serIP)
op, err := exec.Command("/bin/sh", "-c", Comd1).Output()
fmt.Println(string(op))
if err != nil || str == string(op) {
    fmt.Println(err)
    return
}
Whenever the server IP is not reachable, the program is expected to enter the if block and terminate, but it always skips the condition and continues executing, which later leads to a panic (expected, since the remote server is not reachable).
Any suggestion on how to compare the string with the []byte output would be highly appreciated.
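For what it's worth, one way to make the comparison work is to trim the output before comparing (a minimal sketch, assuming the only mismatch is the trailing newline that echo appends; it needs the strings package):
op, err := exec.Command("/bin/sh", "-c", Comd1).Output()
out := strings.TrimSpace(string(op)) // echo's output is "false\n", so trim it before comparing
if err != nil || out == str {
    fmt.Println(err)
    return
}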
I'm building a large project with SCons. For reasons outside the scope of this topic (long story) I need to pass the object-file options to the final link command inside a file.
Eg:
gcc -o program.elf #objects_file.txt -T linker_file.ld
This command works; I've tested it manually. But now I need to run it from within the project's build files. My first approach has been to collect all the options into a file in the following way:
dbg_exe = own_env.Program('../' + target_path, components)
own_env.AddPreAction(dbg_exe, 'echo \'$SOURCES\' > objects_file.txt')
Note: $SOURCES contains all the object files I need.
As expected, the command seems to be executed (I see it printed in the terminal), but for some reason it apparently never runs, since I can't find objects_file.txt anywhere.
Curiously, if I copy and paste the printed line into the same terminal, the command succeeds, so I suppose the constructed syntax is correct.
I also tried a shorter test:
own_env.AddPreAction(dbg_exe, 'ls -l > salida_ls.txt')
... and another surprise: this time I get an error in the console:
scons: done reading SConscript files.
scons: Building targets ...
ls -l > salida_ls.txt
ls: cannot access '>': No such file or directory
ls: cannot access 'salida_ls.txt': No such file or directory
A simple 'ls -l' works fine.
Any idea why these kinds of shell commands don't work as expected? Is the > redirection symbol affecting SCons?
Some possibly useful information:
OS: Windows 10
Terminal: mingw32
SCons: v2.3.1
After searching, I've found out that this is related to the redefinition of the SPAWN construction variable:
def w32api_spawn(sh, escape, cmd, args, e_env):
    print "CMD value"
    print sh
    print escape
    print cmd
    print args
    print e_env
    print " ********************************** "
    if cmd == "SHELL":
        return SCons.Platform.win32.spawn(sh, escape, args[1], args[1:], e_env)
    cmdline = cmd + ' ' + string.join(args[1:], ' ')
    startupinfo = subprocess.STARTUPINFO()
    startupinfo.dwFlags |= _subprocess.STARTF_USESHOWWINDOW
    proc = subprocess.Popen(
        cmdline,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        startupinfo=startupinfo,
        shell=False,
        env=None
    )
    data, err = proc.communicate()
    print data
    rv = proc.wait()
    if rv:
        print "====="
        print err
        print "====="
    return rv
Looks like you'll need to swap back to the default SPAWN for that Program().
Add this to the top of that SConscript
from SCons.Platform.win32 import spawn
Then replace the logic you pasted above with:
dbg_exe = own_env.Program('../' + target_path, components, SPAWN=spawn)
own_env.AddPreAction(dbg_exe, 'echo \'$SOURCES\' > objects_file.txt')
This assumes that you're only building on win32. If that's not true you'll need to conditionally add the SPAWN to your Program() above only when you're on win32.
Finally I found a workaround: running a native Python function to build the file I needed. Unfortunately I can't afford to spend more time on this issue; I didn't find the real cause or a proper solution, but it's clear it is not related to normal SCons behaviour, only to the trick performed in the SPAWN override.
scons_common.GenerateObjectsFile('../' + objects_file, components)
I want a function for getting directory entries on Linux. I use ioutil.ReadDir and usually it is fast.
But if I want to read some mounted virtual file system on /run/user/1000/gvfs/, this function becomes slow. If the directory has many file entries I need to wait a long time.
I can use the ls command in a terminal and the result is the same.
When I tried ls -U -a -p -1 I got line-by-line output immediately.
I tried running this in Go with exec.Command, but it didn't work asynchronously: Go waits for the full program output. What did I do wrong?
m.cmd = exec.Command("ls", "-U", "-a", "-p", "-1")
// for example some "slow" directory:
m.cmd.Dir = "/run/user/1000/gvfs/dav:host=webdav.yandex.ru,ssl=true,user=...../"
reader, _ := m.cmd.StdoutPipe()
bufReader := bufio.NewReader(reader)
go func() {
    m.cmd.Start()
    for {
        line, _, err := bufReader.ReadLine()
        if err != nil {
            break
        }
        linestr := string(line)
        if linestr != "./" && linestr != "../" {
            fmt.Println(linestr)
        }
    }
}()
I need line by line printing immediately in Go.
Try ls -U -a -p -1 | cat in a terminal to see whether you still get line-by-line output.
Go doesn't control ls; ls does line-by-line writing if ls chooses to do so, and ls chooses not to do that when its output is a pipe. You could allocate a pty pair and use that, but that's the wrong way to do this.
ioutil.ReadDir first reads the entire directory (by calling Readdir(-1)), then sorts the file names. If you use os.Open to open the directory, then call the Readdir or Readdirnames function with a small (but not negative) number, you should get something more to your liking.
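A minimal sketch of that approach, reading the entries in small batches and printing them as they arrive (the gvfs path is taken from the question; the batch size of 64 is arbitrary):
package main

import (
    "fmt"
    "io"
    "os"
)

func main() {
    dir, err := os.Open("/run/user/1000/gvfs/") // the slow directory from the question
    if err != nil {
        fmt.Println(err)
        return
    }
    defer dir.Close()
    for {
        // Read at most 64 names per call instead of the whole directory at once.
        names, err := dir.Readdirnames(64)
        for _, name := range names {
            fmt.Println(name)
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            fmt.Println(err)
            return
        }
    }
}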
I have something like this in a Jenkinsfile (Groovy), and I want to record the stdout and the exit code in variables in order to use the information later.
sh "ls -l"
How can I do this, especially as it seems that you cannot really run any kind of Groovy code inside the Jenkinsfile?
The latest version of the pipeline sh step allows you to do the following:
// Git committer email
GIT_COMMIT_EMAIL = sh (
    script: 'git --no-pager show -s --format=\'%ae\'',
    returnStdout: true
).trim()
echo "Git committer email: ${GIT_COMMIT_EMAIL}"
Another feature is the returnStatus option.
// Test commit message for flags
BUILD_FULL = sh (
    script: "git log -1 --pretty=%B | grep '\\[jenkins-full]'",
    returnStatus: true
) == 0
echo "Build full flag: ${BUILD_FULL}"
These options were added based on this issue.
See official documentation for the sh command.
For declarative pipelines (see comments), you need to wrap the code in a script step:
script {
    GIT_COMMIT_EMAIL = sh (
        script: 'git --no-pager show -s --format=\'%ae\'',
        returnStdout: true
    ).trim()
    echo "Git committer email: ${GIT_COMMIT_EMAIL}"
}
Current Pipeline version natively supports returnStdout and returnStatus, which make it possible to get output or status from sh/bat steps.
An example:
def ret = sh(script: 'uname', returnStdout: true)
println ret
See the official documentation.
The quick answer is this:
sh "ls -l > commandResult"
result = readFile('commandResult').trim()
I think there exists a feature request to be able to get the result of the sh step directly, but as far as I know there is currently no other option.
EDIT: JENKINS-26133
EDIT 2: Not quite sure since which version, but the sh/bat steps can now return the standard output; simply:
def output = sh returnStdout: true, script: 'ls -l'
If you want to get the stdout AND know whether the command succeeded or not, just use returnStdout and wrap it in an exception handler:
Scripted pipeline:
try {
    // Fails with non-zero exit if dir1 does not exist
    def dir1 = sh(script:'ls -la dir1', returnStdout:true).trim()
} catch (Exception ex) {
    println("Unable to read dir1: ${ex}")
}
output:
[Pipeline] sh
[Test-Pipeline] Running shell script
+ ls -la dir1
ls: cannot access dir1: No such file or directory
[Pipeline] echo
Unable to read dir1: hudson.AbortException: script returned exit code 2
Unfortunately hudson.AbortException is missing any useful method to obtain that exit status, so if the actual value is required you'd need to parse it out of the message (ugh!)
Contrary to the Javadoc (https://javadoc.jenkins-ci.org/hudson/AbortException.html), the build is not failed when this exception is caught. It fails when it's not caught!
Update:
If you also want the STDERR output from the shell command, Jenkins unfortunately fails to properly support that common use-case. A 2017 ticket JENKINS-44930 is stuck in a state of opinionated ping-pong whilst making no progress towards a solution - please consider adding your upvote to it.
As to a solution now, there could be a couple of possible approaches:
a) Redirect STDERR to STDOUT with 2>&1
- but it's then up to you to parse that out of the main output, and you won't get the output at all if the command failed, because you're in the exception handler.
b) Redirect STDERR to a temporary file (whose name you prepare earlier) with 2>filename (but remember to clean up the file afterwards), i.e. the main code becomes:
def stderrfile = 'stderr.out'
try {
    def dir1 = sh(script:"ls -la dir1 2>${stderrfile}", returnStdout:true).trim()
} catch (Exception ex) {
    def errmsg = readFile(stderrfile)
    println("Unable to read dir1: ${ex} - ${errmsg}")
}
c) Go the other way: set returnStatus=true instead, dispense with the exception handler, and always capture the output to a file, i.e.:
def outfile = 'stdout.out'
def status = sh(script:"ls -la dir1 >${outfile} 2>&1", returnStatus:true)
def output = readFile(outfile).trim()
if (status == 0) {
    // output is the directory listing from stdout
} else {
    // output is the error message from stderr
}
Caveat: the above code is Unix/Linux-specific - Windows requires completely different shell commands.
This is a sample case, which I believe will make sense!
node('master') {
    stage('stage1') {
        def commit = sh(returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]} "
    }
}
For those who need to use the output in subsequent shell commands, rather than Groovy, something like this example could be done:
stage('Show Files') {
    environment {
        MY_FILES = sh(script: 'cd mydir && ls -l', returnStdout: true)
    }
    steps {
        sh '''
            echo "$MY_FILES"
        '''
    }
}
I found the examples on code maven to be quite useful.
All the above methods will work, but to use the variable as an environment variable inside your code you need to export it first.
script {
    sh " 'shell command here' > command"
    command_var = readFile('command').trim()
    sh "export command_var=$command_var"
}
Replace the shell command with the command of your choice. Now, if you are using Python code, you can just call os.getenv("command_var"), which will return the output of the shell command executed previously.
How to read a shell variable in Groovy / how to assign a shell return value to a Groovy variable.
Requirement: open a text file, read the lines using shell, store the values in Groovy, and get the parameters from each line.
Here ',' is the delimiter.
Ex: releaseModule.txt
./APP_TSBASE/app/team/i-home/deployments/ip-cc.war/cs_workflowReport.jar,configurable-wf-report,94,23crb1,artifact
./APP_TSBASE/app/team/i-home/deployments/ip.war/cs_workflowReport.jar,configurable-temppweb-report,394,rvu3crb1,artifact
========================
Here I want to get the module name, the 2nd parameter (configurable-wf-report), the build number, the 3rd parameter (94), and the commit id, the 4th parameter (23crb1).
def module = sh(script: """awk -F',' '{ print \$2 "," \$3 "," \$4 }' releaseModules.txt | sort -u """, returnStdout: true).trim()
echo module
List lines = module.split( '\n' ).findAll { !it.startsWith( ',' ) }
def buildid
def Modname
lines.each {
    List det1 = it.split(',')
    buildid = det1[1].trim()
    Modname = det1[0].trim()
    tag = det1[2].trim()
    echo Modname
    echo buildid
    echo tag
}
If you don't have a single sh command but a block of sh commands, returnStdout won't work.
I had a similar issue where I applied something which is not a clean way of doing this, but it eventually worked and served the purpose.
Solution:
In the shell block, echo the value and write it into some file.
Outside the shell block and inside the script block, read this file, trim it, and assign it to any local/params/environment variable.
Example:
steps {
    script {
        sh '''
            # Using '>' so that a new file is created every time, holding the newest value of PATH
            echo $PATH > path.txt
        '''
        path = readFile(file: 'path.txt')
        path = path.trim()   // local Groovy variable assignment
        // One can assign these values to env and params as below -
        env.PATH = path      // if you want to assign it to an env var
        params.PATH = path   // if you want to assign it to a params var
    }
}
The easiest way is to do it this way:
my_var=`echo 2`
echo $my_var
Output:
2
Note that this is not a simple single quote but a backquote ( ` ).
It was suggested in the IS newsgroup to use /D=, but using the iscc.exe that came with version 5.2.3 I get an "Unknown option:" error.
Then in the script, how do you use the value of the command line parameter?
You do, as MicSim says, need the preprocessor. It's included in the latest ISPack. Once it's installed, iscc supports /D.
You can then use the values defined like this (assuming you'd done /DVERSION_NAME=1.23):
AppVerName=MyApplication v{#VERSION_NAME}
From the Inno Setup help file:
Inno Setup Preprocessor replaces the standard Inno Setup Command Line Compiler (ISCC.exe) by an extended version. This extended version provides extra parameters to control Inno Setup Preprocessor.
The "extra parameters" include the /d option.
The point of the answer by @Steven Dunn is to solve the problem with another layer of abstraction: instead of calling iscc your_script.iss directly from the terminal, call your_script.ps1 -YourVar "value", process the switch, write a #define to the .iss file, and then compile it with iscc. This was not articulated well, and I don't think the function shown to parse command-line arguments added much value. However, I'm giving credit where credit is due.
As @jdigital mentioned, ISPP has the /d switch, but ISPP can't be run directly from the terminal (AFAIK). Hence, something like the secondary scripted approach hinted at by @Steven Dunn is necessary.
You can achieve this by adding placeholders to an existing .iss script and then overwrite them accordingly:
.iss Template
; -- template.iss --
#define MyVar ""
...
.ps1 Script
#requires -PSEdition Core
# create_iss_script.ps1
param(
    [Parameter(Mandatory=$true)][String]$MyVar,
    [Parameter(Mandatory=$true)][String]$OutFile
)
$File = '.\template.iss'
$LineToReplace = '#define MyVar ""'
$NewLine = "#define MyVar ""${MyVar}"""
$FileContent = Get-Content -Path $File -Raw
$FileContent.Replace($LineToReplace, $NewLine) | Set-Content -Path $OutFile
Example Terminal Usage
PS> .\create_iss_script.ps1 -MyVar "HelloWorld!" -OutFile "helloworld.iss"
PS> iscc .\helloworld.iss
or run the iscc step from within your .ps1 script, if you prefer.
If you want to parse command-line arguments from code in Inno Setup, then use a method similar to this. Just call the compiled installer from the command line as follows:
C:\MyInstallDirectory>MyInnoSetup.exe -myParam parameterValue
Then you can call GetCommandLineParam like this wherever you need it:
myVariable := GetCommandLineParam('-myParam');
//==================================================================
{ Allows for standard command line parsing assuming a key/value organization }
function GetCommandlineParam(inParam: String): String;
var
    LoopVar: Integer;
    BreakLoop: Boolean;
begin
    { Init the variables to known values }
    LoopVar := 0;
    Result := '';
    BreakLoop := False;
    { Loop through the passed-in parameters to find the key }
    while ((LoopVar < ParamCount) and
           (not BreakLoop)) do
    begin
        { Determine if the looked-for parameter is the next value }
        if ((ParamStr(LoopVar) = inParam) and
            ((LoopVar + 1) <= ParamCount)) then
        begin
            { Set the return result equal to the next command line parameter }
            Result := ParamStr(LoopVar + 1);
            { Break the loop }
            BreakLoop := True;
        end;
        { Increment the loop variable }
        LoopVar := LoopVar + 1;
    end;
end;
Hope this helps...