I'd like to write a JSON file using bash, but it doesn't seem to be working well.
My code:
sudo echo -e "Name of your app?\n"
sudo read appname
sudo cat "{apps:[{name:\"${appname}\",script:\"./cms/bin/www\",watch:false}]}" > process.json
Issue : -bash: process.json: Permission denied
Generally speaking, don't do this. Use a tool that already knows how to quote values correctly, like jq:
jq -n --arg appname "$appname" '{apps: [ {name: $appname, script: "./cms/bin/www", watch: false}]}' > process.json
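For example, entering myapp at the prompt would produce properly quoted, pretty-printed JSON (a hypothetical run):
{
  "apps": [
    {
      "name": "myapp",
      "script": "./cms/bin/www",
      "watch": false
    }
  ]
}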
That said, your immediate issue is that sudo only applies to the command, not to the redirection. One workaround is to use tee to write to the file instead.
echo '{...}' | sudo tee process.json > /dev/null
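Putting the two together, a minimal sketch that prompts for the name, builds the JSON with jq, and writes the file via sudo tee (assuming you actually need root to own process.json):
read -p "Name of your app? " appname
jq -n --arg appname "$appname" \
  '{apps: [{name: $appname, script: "./cms/bin/www", watch: false}]}' |
  sudo tee process.json > /dev/null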
To output text, use echo rather than cat (which outputs data from files or streams).
Aside from that, you will also have to escape the double-quotes inside your text if you want them to appear in the result.
echo -e "Name of your app?\n"
read appname
echo "{apps:[{name:\"${appname}\",script:\"./cms/bin/www\",watch:false}]}" > process.json
If you need to process more than just a simple line, I second @chepner's suggestion to use a JSON tool such as jq.
Your -bash: process.json: Permission denied comes from the fact you cannot write to the process.json file. If the file does not exist, check that your user has write permissions on the directory. If it exists, check that your user has write permissions on the file.
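For example, you can check both in one go:
ls -ld .             # write permission on the directory is needed to create the file
ls -l process.json   # write permission on the file is needed to overwrite it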
I have the following shell script running an inotifywait command. I want to echo output to the console upon every modify event.
The script:
#!/bin/sh
while inotifywait -e modify -r -m ./ --exclude '\.sh$'; do
echo test
done
When I change one file in the specified directory, I get the standard output from inotifywait:
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
./postgres/ MODIFY postgres_test.go
./postgres/ MODIFY postgres_test.go
I have two questions:
Why is the modified event registered twice? I only updated the file once.
Why is "test" not being printed to the console in which I'm running the script?
I had a similar issue. I resolved the second part by restructuring my while: in monitor mode (-m), inotifywait never exits, so your loop's condition never finishes and the body never runs. Piping the output into a read loop instead processes each event as it arrives:
inotifywait -e modify -r -m ./ --exclude '\.sh$' |
while read E; do
echo "----------------hello $E"
done
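If you also want the event details, inotifywait's default output is the watched directory, the event name, and the filename separated by spaces, so you can split them directly in the read (a sketch; this assumes filenames without spaces):
inotifywait -e modify -r -m ./ --exclude '\.sh$' |
while read -r dir event file; do
    echo "event=$event on $dir$file"
done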
I've set up a penetration testing VM and am trying to practice privilege escalation.
I'm currently trying to read a file. I do not have access to the user's home directory where the file is located but I have permissions to run /usr/bin/perl as the user/admin.
My understanding is that I could run the following command to essentially cat the file and see what's inside, using the perl permissions granted to me. However, it doesn't seem to be working and gives no result back.
james@linuxtest:~$ sudo -l
Matching Defaults entries for james on linuxtest:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User james may run the following commands on linuxtest:
(james2) /usr/bin/perl
james@linuxtest:~$ sudo -u james2 perl -e 'print 'cat /home/james/test.txt''
I expected the result to be the contents of the file, or at least an error of some sort, but I got no result at all. Am I making a stupid mistake here?
I think you wanted
sudo -u james2 perl -e 'print `cat /home/james/test.txt`'
Backticks are used to execute a shell command and capture its output.
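You can see the difference in isolation with a hypothetical command that needs no sudo at all:
perl -e 'print `echo hello from a subshell`'   # backticks: runs the command, prints its output
perl -e 'print "echo hello from a subshell"'   # quotes: prints the literal string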
That's a weird way of doing
sudo -u james2 perl -e 'system "cat /home/james/test.txt"'
which is a weird way of doing
sudo -u james2 cat /home/james/test.txt
And since you're root, that's a weird way of doing
cat /home/james/test.txt
As the title says: in Linux, how can I feed input to bash when I do sudo bash?
Let's say I have a bash script that reads a name.
The way I execute the script is through sudo using:
cat read-my-name-script.sh | sudo bash
Let's just say this is how I execute the script over the network.
Now I want to fill in the name automatically. Is there a way to feed the input? I tried cat read-my-name-script.sh < name-input-file | sudo bash, where name-input-file is a file containing the input the user would otherwise type.
I am new to Linux; I'm learning to automate input and wanted to create an input file that the user can fill in and feed to my script.
This is convoluted, but might do what you want.
sudo bash -c "$(cat read-my-name.sh)" <name-input-file
The -c says the next quoted argument is the command string to run (so bash reads the script as a string on the command line, instead of from a file), and the calling shell interpolates the contents of the file inside the double quotes before the sudo command gets evaluated. So if read-my-name.sh contains
#!/bin/bash
read -p "I want your name please"
then the command gets expanded into
sudo bash -c '#!/bin/bash
read -p "I want your name please"' <name-input-file
(where of course at this time the shell has actually removed the outer double quotes altogether; I put in single quotes in their place instead to show how this would look as actually executable, syntactically valid code).
I think you need this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done <name-input-file
So each line of name-input-file will be passed as an argument to sudo bash read-my-name-script.sh.
If your argument list is located on an HTTP server, you can do this:
while read -r arg; do sudo bash read-my-name-script.sh "$arg";done < <(wget -q -O- http://some/address/in/internet/name-input-file)
UPD
add [[ -f name-input-file ]] && readarray -t args <name-input-file
to read-my-name-script.sh
and use "${args[@]}" as the arguments to commands in the script.
For example, echo "${args[@]}", or cmd "${args[0]}" "${args[1]}" ... "${args[100]}" in any order.
In this case you can use
wget -q -O- http://some/address/in/internet/read-my-name-script.sh | bash
to run your script with arguments from name-input-file without saving the script to the local machine.
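For illustration, a minimal sketch of read-my-name-script.sh with that change (the echo is just a hypothetical placeholder for your real command):
#!/bin/bash
# load arguments from name-input-file, if present
[[ -f name-input-file ]] && readarray -t args <name-input-file
echo "Hello, ${args[@]}"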
I have a script which runs this command successfully. I am using the same command in another script, which gives me an error on this line (.md5: Permission denied).
I am running the previous script with sudo.
for i in ${NAME}*
do
sudo md5sum $i | sed -e "s/$i/${NAME}/" > ${NAME}.md5${i/#${NAME}/}
done
So you want to redirect output as root. It doesn't matter that you executed the command with sudo: redirection is not part of the command's execution, so it is performed not by the user running the command, but by your current user.
The common trick is to use tee:
for i in ${NAME}*
do
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/}
done
Note: I dropped the sudo from md5sum, as you probably don't need it.
Note: tee outputs in two directions: the specified file and stdout. If you want to suppress the output on stdout, redirect it to /dev/null.
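For example:
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/} > /dev/null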
You take the output of sudo md5sum $i and pipe it to a sed which is not running as root. sudo doesn't even know this sed exists.
But that's not the problem, because the sed does not need root permissions. The problem is > ${NAME}.... This redirects the output of sed to the file with this name. But the redirection is actually executed by your shell, which is running as your user. And because > is a shell built-in operator, you cannot prefix it with sudo.
The simple solution is to use tee. tee is a program (so you can run it with sudo) which writes its input to the standard output and also to a file (like a T-pipe, hence the name).
So you can just:
for i in ${NAME}*
do
md5sum $i | sed -e "s/$i/${NAME}/" | sudo tee ${NAME}.md5${i/#${NAME}/}
done
Note this will also dump all hashes to your standard output.
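An alternative worth mentioning is to run the entire pipeline, redirection included, in a root shell with sh -c, so no tee is needed (a sketch; the quoting assumes filenames contain no single quotes or characters that would confuse sed):
for i in ${NAME}*
do
    sudo sh -c "md5sum '$i' | sed -e 's/$i/${NAME}/' > '${NAME}.md5${i/#${NAME}/}'"
done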
I need to download only defined files with wget and ftp.
For example:
1. I retrieve a recursive listing of all files using:
echo ls -R | ftp ftp://user:password@host > ./list.txt
2. Then I parse the result and get a list with an absolute path for each file:
/path/to-the/file-1
/path/to-the/file-2
etc.
3. And now I need to download all the files from the resulting list using wget and ftp.
And I don't want to create a separate FTP session for each file download process.
Please give your advice. Thank you.
Update:
For recursive download I'm using this: wget -r ftp://user:password@host:/ -nH -P /download/path. It works great, but I need to pass a file with a list of remote files to download via FTP within one FTP session.
Sorry, I missed the "single session" part when I commented. I think you need to have your script generate a second script to run a single FTP session.
So, your script will not do any FTP itself; it will just write another script that does all the transfers in a single session. The generated script will do this:
ftp -n <SOMEADDRESS> <<EOS
quote USER <USERNAME>
quote PASS <PASSWORD>
bin
get file1 localname1
get file2 localname2
...
get fileN localnameN
quit
EOS
Then it will execute that script by doing:
bash < thatScript
So your script will look like this:
#!/bin/bash
ScriptName=funkyFTPer
cat - <<END > $ScriptName
ftp -n 192.168.0.1 <<EOS
quote USER freddy
quote PASS frog
END
# Your selection code goes here ***PHNQZ***
echo get file1 localname1 >> $ScriptName
echo get file2 localname2 >> $ScriptName
echo get fileN localnameN >> $ScriptName
echo quit >> $ScriptName
echo EOS >> $ScriptName
echo "Now run bash < $ScriptName"
Then delete the script as it contains your password. Or you can put the password in your .netrc file.
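A .netrc entry for this example would look something like this (hypothetical credentials; the file must live in your home directory and be chmod 600):
machine 192.168.0.1
login freddy
password frog
With that in place you can drop the -n flag and the two quote USER/PASS lines, since ftp will log in automatically.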
As regards creating directories locally, you can do that in the first script using mkdir -p. The -p has the advantage that it creates all directories in between in one go and doesn't get upset if they already exist.
So, looking at the area of code marked ***PHNQZ*** above: let's say your code decides it needs the file freddy/frog/c.txt; you could do:
remotename="freddy/frog/c.txt"
localdir=${remotename%/*} # Get just directory part using "bash Parameter Substitution"
mkdir -p "$localdir" # make directory and all parts in between
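Putting it together, the selection code at ***PHNQZ*** could loop over the list from step 2, create each local directory, and append one get per file (a sketch assuming list.txt holds one remote path per line):
while read -r remotename; do
    localdir=${remotename%/*}             # strip the filename to get the directory part
    mkdir -p "./$localdir"                # create the local directory tree
    echo "get $remotename ./$remotename" >> $ScriptName
done < list.txt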