How to create a shell script file using tee [duplicate] - linux

How can I write a here document to a file in Bash script?

Read the Advanced Bash-Scripting Guide Chapter 19. Here Documents.
Here's an example which will write the contents to a file at /tmp/yourfilehere
cat << EOF > /tmp/yourfilehere
These contents will be written to the file.
This line is indented.
EOF
Note that the final 'EOF' (the limit string) must not have any whitespace in front of it, because then the limit string will not be recognized.
In a shell script you may want to use indentation to keep the code readable, but this has the undesirable effect of indenting the text inside your here document as well. In that case, use <<- (note the trailing dash) to strip leading tab characters. (To test this you will need to replace the leading whitespace with actual tab characters, since real tabs cannot be shown here.)
#!/usr/bin/env bash
if true ; then
    cat <<- EOF > /tmp/yourfilehere
        The leading tab is ignored.
    EOF
fi
If you don't want to interpret variables in the text, then use single quotes:
cat << 'EOF' > /tmp/yourfilehere
The variable $FOO will not be interpreted.
EOF
To pipe the heredoc through a command pipeline:
cat <<'EOF' | sed 's/a/b/'
foo
bar
baz
EOF
Output:
foo
bbr
bbz
... or to write the heredoc to a file using sudo:
cat <<'EOF' | sed 's/a/b/' | sudo tee /etc/config_file.conf
foo
bar
baz
EOF

Instead of using cat and I/O redirection, it can be useful to use tee:
tee newfile <<EOF
line 1
line 2
line 3
EOF
It's more concise, plus unlike the redirect operator it can be combined with sudo if you need to write to files with root permissions.
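As a quick sketch of that combination (the destination path is only an example), writing a root-owned file while discarding the echo to stdout:
sudo tee /etc/example.conf >/dev/null <<EOF
line 1
line 2
EOF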

Note:
the following condenses and organizes other answers in this thread, especially the excellent work of Stefan Lasiewski and Serge Stroobandt
Lasiewski and I recommend Ch 19 (Here Documents) in the Advanced Bash-Scripting Guide
The question (how to write a here document (aka heredoc) to a file in a bash script?) has (at least) 3 main independent dimensions or subquestions:
Do you want to overwrite an existing file, append to an existing file, or write to a new file?
Does your user or another user (e.g., root) own the file?
Do you want to write the contents of your heredoc literally, or to have bash interpret variable references inside your heredoc?
(There are other dimensions/subquestions which I don't consider important. Consider editing this answer to add them!) Here are some of the more important combinations of the dimensions of the question listed above, with various different delimiting identifiers--there's nothing sacred about EOF, just make sure that the string you use as your delimiting identifier does not occur inside your heredoc:
To overwrite an existing file (or write to a new file) that you own, substituting variable references inside the heredoc:
cat << EOF > /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
EOF
To append to an existing file (or write to a new file) that you own, substituting variable references inside the heredoc:
cat << FOE >> /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
FOE
To overwrite an existing file (or write to a new file) that you own, with the literal contents of the heredoc:
cat << 'END_OF_FILE' > /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
END_OF_FILE
To append to an existing file (or write to a new file) that you own, with the literal contents of the heredoc:
cat << 'eof' >> /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
eof
To overwrite an existing file (or write to a new file) owned by root, substituting variable references inside the heredoc:
cat << until_it_ends | sudo tee /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, with the variable contents substituted.
until_it_ends
To append to an existing file (or write to a new file) owned by user=foo, with the literal contents of the heredoc:
cat << 'Screw_you_Foo' | sudo -u foo tee -a /path/to/your/file
This line will write to the file.
${THIS} will also write to the file, without the variable contents substituted.
Screw_you_Foo

To build on @Livven's answer, here are some useful combinations.
variable substitution, leading tab retained, overwrite file, echo to stdout
tee /path/to/file <<EOF
${variable}
EOF
no variable substitution, leading tab retained, overwrite file, echo to stdout
tee /path/to/file <<'EOF'
${variable}
EOF
variable substitution, leading tab removed, overwrite file, echo to stdout
tee /path/to/file <<-EOF
${variable}
EOF
variable substitution, leading tab retained, append to file, echo to stdout
tee -a /path/to/file <<EOF
${variable}
EOF
variable substitution, leading tab retained, overwrite file, no echo to stdout
tee /path/to/file <<EOF >/dev/null
${variable}
EOF
the above can be combined with sudo as well
sudo -u USER tee /path/to/file <<EOF
${variable}
EOF

When root permissions are required
When root permissions are required for the destination file, use | sudo tee instead of >:
cat << 'EOF' | sudo tee /tmp/yourprotectedfilehere
The variable $FOO will *not* be interpreted.
EOF
cat << EOF | sudo tee /tmp/yourprotectedfilehere
The variable $FOO *will* be interpreted.
EOF
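If you need to append to a root-owned file rather than overwrite it, tee -a works the same way (the target path is again just an example):
cat << 'EOF' | sudo tee -a /tmp/yourprotectedfilehere
These lines are appended to the end of the file.
EOF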

For future people who may have this issue the following format worked:
(cat <<- _EOF_
LogFile /var/log/clamd.log
LogTime yes
DatabaseDirectory /var/lib/clamav
LocalSocket /tmp/clamd.socket
TCPAddr 127.0.0.1
SelfCheck 1020
ScanPDF yes
_EOF_
) > /etc/clamd.conf

For those looking for a pure bash solution (or a need for speed), here's a simple solution without cat:
# here-doc tab indented
{ read -r -d '' || printf >file '%s' "$REPLY"; } <<-EOF
foo bar
EOF
or, for an easy "mycat" function (which also avoids leaving REPLY in the environment):
mycat() {
    local REPLY
    read -r -d '' || printf '%s' "$REPLY"
}
mycat >file <<-EOF
foo bar
EOF
Quick speed comparison of "mycat" vs OS cat (1000 loops >/dev/null on my OSX laptop):
mycat:
real 0m1.507s
user 0m0.108s
sys 0m0.488s
OS cat:
real 0m4.082s
user 0m0.716s
sys 0m1.808s
NOTE: mycat doesn't handle file arguments, it just handles the problem "write a heredoc to a file"
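For reference, the timing above came from a loop roughly like the following (a sketch; the exact harness used is an assumption):
time for i in {1..1000}; do
    mycat >/dev/null <<EOF
foo bar
EOF
done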

For instance, you could use it like this:
First (making an ssh connection):
while read pass port user ip files directs; do
    sshpass -p$pass scp -o 'StrictHostKeyChecking no' -P $port $files $user@$ip:$directs
done <<____HERE
PASS PORT USER IP FILES DIRECTS
. . . . . .
. . . . . .
. . . . . .
PASS PORT USER IP FILES DIRECTS
____HERE
Second (executing commands):
while read pass port user ip; do
    sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
Third (executing commands):
Script=$'
#Your commands
'
while read pass port user ip; do
    sshpass -p$pass ssh -o 'StrictHostKeyChecking no' -p $port $user@$ip "$Script"
done <<___HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
___HERE
Fourth (using variables):
while read pass port user ip fileoutput; do
    sshpass -p$pass ssh -o 'StrictHostKeyChecking no' -p $port $user@$ip fileoutput=$fileoutput 'bash -s' <<ENDSSH1
#Your command > $fileoutput
#Your command > $fileoutput
ENDSSH1
done <<____HERE
PASS PORT USER IP FILE-OUTPUT
. . . . .
. . . . .
. . . . .
PASS PORT USER IP FILE-OUTPUT
____HERE

If you want to keep the heredoc indented for readability:
$ perl -pe 's/^\s*//' << EOF
line 1
line 2
EOF
The built-in method for supporting indented heredoc in Bash only supports leading tabs, not spaces.
Perl can be replaced with awk to save a few characters, but the Perl one is probably easier to remember if you know basic regular expressions.
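For example, a roughly equivalent awk version (a sketch; it strips leading spaces and tabs the same way) could be:
awk '{ sub(/^[ \t]+/, ""); print }' << EOF
    line 1
    line 2
EOF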

In addition, if you're writing to a file, it can be a good idea to check whether your write succeeded or failed. For example:
if ! echo "contents" > ./file ; then
    echo "ERROR: failed to write to file" >&2
    exit 1
fi
To do the same with a heredoc, there are two possible approaches.
1)
if ! cat > ./file << EOF
contents
EOF
then
    echo "ERROR: failed to write to file" >&2
    exit 1
fi
2)
if ! cat > ./file ; then
    echo "ERROR: failed to write to file" >&2
    exit 1
fi << EOF
contents
EOF
You can test the error case in the above code by replacing the destination file ./file with /file (assuming you're not running as root).

I like the following method of basic redirection for its concision, readability and presentation in an indented script:
<<-End_of_file >file
→ foo bar
End_of_file
Where → represents a tab character.
This is standard Bourne shell redirection without forking any cat or tee process.
But it does not work with current bash, even when invoked as /bin/sh.
It has kept working with /bin/zsh for more than 20 years.

Related

Output of a command within here-document

I have a script which contains the code block:
cat << EOF > new_script.sh
...
echo "$(pwd)" >> log.txt
...
EOF
The script new_script.sh is set to run at a later time. Bash recognizes the $(pwd) within the script and evaluates it before it looks at the entire EOF block, so the pwd of the current directory is output instead of the pwd of new_script.sh when it is run. Why is this the case (what logic does bash use to know to evaluate $(command)) and what is the best solution to this?
By escaping the $ as \$, you can solve this issue.
cat << EOF > new_script.sh
...
echo "\$(pwd)" >> log.txt
...
EOF
Unless you put single quotes around the EOF marker, the contents of the here-doc are treated like a double-quoted string, so all variables and command substitutions are expanded immediately.
To leave them as literals, use
cat << 'EOF' > new_script.sh
...
echo "$(pwd)" >> log.txt
...
EOF

What's the difference between the parameters obtained by read and $1?

echo -n "*.xcodeproj directory: ";
read fileDirectory;
echo -n $fileDirectory;
fileExtension="pbxproj";
find $fileDirectory -name "*.${fileExtension}";
It shows "find: XXXX: No such file or directory" (where XXXX is the fileDirectory value).
However if I replace read fileDirectory by
fileDirectory=$1
It works.
So what's the difference?
$1 is the first argument passed to the bash script, or to a function inside the script,
for example:
mybashfunction /dirtofind
inside the function if you write:
echo "$1"
It should print:
/dirtofind
Edit 1:
You must place the shebang at the beginning of your file
~$ cat a.sh
#!/bin/bash
echo -n "*.xcodeproj directory: ";
read fileDirectory;
echo -n $fileDirectory;
fileExtension="pbxproj";
find "$fileDirectory" -name "*.${fileExtension}";
~$ chmod +x a.sh
~$ ./a.sh
*.xcodeproj directory: /home
/home/home/leonardo/Qt/Tools/QtCreator/share/qtcreator/qbs/share/qbs/examples/cocoa-touch-application/CocoaTouchApplication.xcodeproj/project.pbxproj
/home/leonardo/Qt/Tools/QtCreator/share/qtcreator/qbs/share/qbs/examples/cocoa-application/CocoaApplication.xcodeproj/project.pbxproj
:~$
Works like a charm here. Place the shebang:
#!/bin/bash
Edit 2
Yes, you can use eval. Your script would look like this:
#!/bin/bash
echo -n "*.xcodeproj directory: ";
read fileDirectory;
echo -n $fileDirectory;
fileExtension="pbxproj";
eval fileDirectory=$fileDirectory
find "$fileDirectory" -name "*.${fileExtension}";
read reads data from STDIN (by default), not from positional parameters (arguments).
As you are passing the data as first argument ($1) to the script, read would not catch it; it would catch the input you are providing interactively.
Just to note, you should quote your variable expansions to avoid word splitting and pathname expansion; these are unwanted in most cases.
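A quick illustration of why the quoting matters (the directory name is made up):
fileDirectory="My Xcode Projects"        # contains a space
find $fileDirectory -name '*.pbxproj'    # word splitting passes "My" and "Xcode Projects" as separate arguments
find "$fileDirectory" -name '*.pbxproj'  # quoting passes the directory name as a single argument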

How can I store a command in a variable in a shell script?

I would like to store a command to use at a later time in a variable (not the output of the command, but the command itself).
I have a simple script as follows:
command="ls";
echo "Command: $command"; #Output is: Command: ls
b=`$command`;
echo $b; #Output is: public_html REV test... (command worked successfully)
However, when I try something a bit more complicated, it fails. For example, if I make
command="ls | grep -c '^'";
The output is:
Command: ls | grep -c '^'
ls: cannot access |: No such file or directory
ls: cannot access grep: No such file or directory
ls: cannot access '^': No such file or directory
How could I store such a command (with pipes/multiple commands) in a variable for later use?
Use eval:
x="ls | wc"
eval "$x"
y=$(eval "$x")
echo "$y"
Do not use eval! It has a major risk of introducing arbitrary code execution.
BashFAQ-50 - I'm trying to put a command in a variable, but the complex cases always fail.
Put it in an array and expand all the words with double quotes, "${arr[@]}", so that IFS does not split the words due to Word Splitting.
cmdArgs=()
cmdArgs=('date' '+%H:%M:%S')
Then inspect the contents of the array with declare -p, which shows each command parameter in a separate index. If an argument contains spaces, quoting it while adding it to the array prevents it from being split due to Word Splitting.
declare -p cmdArgs
declare -a cmdArgs='([0]="date" [1]="+%H:%M:%S")'
and execute the commands as
"${cmdArgs[#]}"
23:15:18
Or, altogether, use a bash function to run the command:
cmd() {
    date '+%H:%M:%S'
}
and call the function as just
cmd
POSIX sh has no arrays, so the closest you can come is to build up a list of elements in the positional parameters. Here's a POSIX sh way to run a mail program
# POSIX sh
# Usage: sendto subject address [address ...]
sendto() {
    subject=$1
    shift
    first=1
    for addr; do
        if [ "$first" = 1 ]; then set --; first=0; fi
        set -- "$@" --recipient="$addr"
    done
    if [ "$first" = 1 ]; then
        echo "usage: sendto subject address [address ...]"
        return 1
    fi
    MailTool --subject="$subject" "$@"
}
Note that this approach can only handle simple commands and their arguments. It can't handle redirections, pipelines, for/while loops, if statements, etc.
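If you do need a pipeline or a redirection, a simple workaround (sketched here with the pipeline from the question) is to wrap it in a function rather than a variable:
count_lines() {
    ls | grep -c '^'
}
count_lines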
Another common use case is when running curl with multiple header fields and payload. You can always define args like below and invoke curl on the expanded array content
curlArgs=('-H' "keyheader: value" '-H' "2ndkeyheader: 2ndvalue")
curl "${curlArgs[#]}"
Another example,
payload='{}'
hostURL='http://google.com'
authToken='someToken'
authHeader='Authorization:Bearer "'"$authToken"'"'
Now that the variables are defined, use an array to store your command args:
curlCMD=(-X POST "$hostURL" --data "$payload" -H "Content-Type:application/json" -H "$authHeader")
and now do a proper quoted expansion
curl "${curlCMD[#]}"
var=$(echo "asdf")
echo $var
# => asdf
Using this method, the command is immediately evaluated and its return value is stored.
stored_date=$(date)
echo $stored_date
# => Thu Jan 15 10:57:16 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 10:57:16 EST 2015
The same with backtick
stored_date=`date`
echo $stored_date
# => Thu Jan 15 11:02:19 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 11:02:19 EST 2015
Using eval in the $(...) will not make it evaluated later:
stored_date=$(eval "date")
echo $stored_date
# => Thu Jan 15 11:05:30 EST 2015
# (wait a few seconds)
echo $stored_date
# => Thu Jan 15 11:05:30 EST 2015
Using eval, it is evaluated when eval is used:
stored_date="date" # < storing the command itself
echo $(eval "$stored_date")
# => Thu Jan 15 11:07:05 EST 2015
# (wait a few seconds)
echo $(eval "$stored_date")
# => Thu Jan 15 11:07:16 EST 2015
# ^^ Time changed
In the above example, if you need to run a command with arguments, put them in the string you are storing:
stored_date="date -u"
# ...
For Bash scripts this is rarely relevant, but one last note: be careful with eval. Only eval strings you control, never strings coming from an untrusted user or built from untrusted user input.
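As a minimal illustration of the risk (the injected string is hypothetical; the dangerous line is left commented out):
user_input='$(rm -rf ~/important)'   # attacker-controlled text
# eval "echo $user_input"            # eval would run the embedded command substitution
printf '%s\n' "$user_input"          # printing it without eval is harmless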
For bash, store your command like this:
command="ls | grep -c '^'"
Run your command like this:
echo $command | bash
Not sure why so many answers make it complicated!
Use alias name='string to execute'.
Example:
alias dir='ls -l'
dir
[pretty list of files]
I tried various different methods:
printexec() {
    printf -- "\033[1;37m$\033[0m"
    printf -- " %q" "$@"
    printf -- "\n"
    eval -- "$@"
    eval -- "$*"
    "$@"
    "$*"
}
Output:
$ printexec echo -e "foo\n" bar
$ echo -e foo\\n bar
foon bar
foon bar
foo
bar
bash: echo -e foo\n bar: command not found
As you can see, only the third one, "$@", gave the correct result.
I faced this problem with the following command:
awk '{printf "%s[%s]\n", $1, $3}' "input.txt"
I need to build this command dynamically:
The target file name input.txt is dynamic and may contain space.
The awk script inside {} braces printf "%s[%s]\n", $1, $3 is dynamic.
Challenge:
Avoid extensive quote escaping logic if there are many " inside the awk script.
Avoid parameter expansion for every $ field variable.
The solutions using the eval command and associative arrays do not work here, due to bash variable expansion and quoting.
Solution:
Build bash variable dynamically, avoid bash expansions, use printf template.
# dynamic variables, values change at runtime.
input="input file 1.txt"
awk_script='printf "%s[%s]\n" ,$1 ,$3'
# static command template, preventing double-quote escapes and avoid variable expansions.
awk_command=$(printf "awk '{%s}' \"%s\"\n" "$awk_script" "$input")
echo "awk_command=$awk_command"
awk_command=awk '{printf "%s[%s]\n" ,$1 ,$3}' "input file 1.txt"
Executing variable command:
bash -c "$awk_command"
An alternative that also works is a here string:
bash <<< "$awk_command"
As you don't specify any scripting language, I would recommend tcl, the Tool Command Language, for this kind of purpose.
Then in the first line, add the appropriate shebang:
#!/usr/local/bin/tclsh
with the appropriate location, which you can retrieve with which tclsh.
In tcl scripts, you can call operating system commands with exec.
#!/bin/bash
#Note: this script works only when you use Bash. So, don't remove the first line.
TUNECOUNT=$(ifconfig |grep -c -o tune0) #Some command with "Grep".
echo $TUNECOUNT #This will return 0
#if you don't have tune0 interface.
#Or count of installed tune0 interfaces.
First of all, there are functions for this. But if you prefer variables then your task can be done like this:
$ cmd=ls
$ $cmd # works
file file2 test
$ cmd='ls | grep file'
$ $cmd # not works
ls: cannot access '|': No such file or directory
ls: cannot access 'grep': No such file or directory
file
$ bash -c $cmd # runs, but without quotes only ls is executed (the pipeline is lost)
file file2 test
$ bash -c "$cmd" # also works
file
file2
$ bash <<< $cmd
file
file2
$ bash <<< "$cmd"
file
file2
Or via a temporary file
$ tmp=$(mktemp)
$ echo "$cmd" > "$tmp"
$ chmod +x "$tmp"
$ "$tmp"
file
file2
$ rm "$tmp"
Be careful when storing a command with X=$(Command):
the command is executed immediately, even before the variable is ever used. To check and confirm this, you can do:
echo test;
X=$(for ((c=0; c<=5; c++)); do
sleep 2;
done);
echo note the 5 seconds elapsed
It is not necessary to store commands in variables, even if you need to use them later. Just execute them as normal. If you store them in variables, you will need some kind of eval statement or have to invoke an unnecessary shell process to "execute your variable".

save wild-card in variable in shell script and evaluate/expand them at runtime

I am having trouble running the script below (in Cygwin on win 7 mind you).
Let's call it "myscript.sh".
When I run it, the following is what I input:
yearmonth: 2011-03
daypattern: 2{5,6,7}
logfilename: error*
query: WARN
#! /bin/bash
yearmonth=''
daypattern=''
logfilename=''
sPath=''
q=''
echo -n "yearmonth: "
read yearmonth
echo -n "daypattern: "
read daypattern
echo -n "logfilename: "
read logfilename
echo -n "query: "
read q
cat "$yearmonth/$daypattern/$logfilename" | grep --color $q
The output I get is:
cat: /2011-03/2{5,6,7}/error*: No such file or directory
However, if I enter daypattern=25 OR daypattern=26 etc. the script will work.
Also, of course if I type the command in the shell itself, the wildcards are expanded as expected.
But this is not what I want.
I want to be able to PROMPT the user to enter the expressions as they need, and then later, in the script, execute these commands.
Any ideas how this can be possible?
Your help is much appreciated.
Try eval, this should work for the {a,d} and * cases
eval grep --color $q ${yearmonth}/${daypattern}/${logfilename}
Use quotes to prevent wildcard expansion:
$ a="*.py"
$ echo $a
google.py pair.py recipe-523047-1.py
$ echo "$a"
*.py

Shell script that writes a shell script

Two questions: how can I write a shell variable from this script into its child script?
Are there any easier ways to do this?
If you can't follow what I'm doing, I'm:
1) starting with a list of directories whose names will be stored as values taken by $i
2) cd'ing to every value of $i and ls'ing its contents
3) echoing its contents into a new script with the name of the directory via cat
4) using echo and cat to write a new script that contains the ls'd values of $i and sends them all to a blogging email address called $i@tumblr.com
#/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
cd $SPECIALPATH/$i
echo ("#/bin/sh \n read -d '' directives <<EOF \n") | cat >> $i.sh
ls | cat >> $i.sh
echo ("EOF \n for q in $directives \n do \n uuencode $q $q | sendmail $i \n done \n") | cat >> $i.sh
# NB -- I am asking the script to write the shell variable $i into the new
# script, called $i.sh, as the email address specified, in the middle of an
# echo statement... I am well aware that it doesn't work as is
chmod +x $i.sh
./$i.sh
done
You are abusing felines a lot - you should simply redirect, rather than pipe to cat which appends.
You can avoid the intermediary $i.sh file by bundling all the output that goes to the file into a single I/O redirection that pipes directly into a shell - no need to clean up the intermediate file (you didn't show that happening) or to run the chmod operation.
I would have done this using braces:
{
echo "..."
ls
echo "..."
} | sh
However, when I looked at the script in that form, I realized that wasn't necessary. I've left the initial part of your script unchanged, but the loop is vastly simpler like this:
#/bin/sh
read -d '' commands <<EOF
#list of directories goes here
dir1
dir2
dir3
etc...
EOF
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I'm assuming the sendmail command works - it isn't the way I'd try sending email. I'd probably use mailx or something similar, and I'd avoid using uuencode too (I'd use a base-64 encoding, left to my own devices):
do uuencode $q $q | mailx -s "File $q" $i@tumblr.com
The script also uses parentheses around the cd command. It means that the cd command and what follows is run in a sub-shell, so the parent script does not change directory. In this case, with an absolute pathname for $SPECIALPATH, it would not matter much. But as a general rule, it often makes life easier if you isolate directory changes like that.
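For example, the parent shell's working directory is untouched (the paths are illustrative):
cd /tmp
( cd /etc && pwd )   # prints /etc
pwd                  # still prints /tmp; the parent shell never changed directory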
I'd probably simplify it still further for general reuse (though I'd need to add something to ensure that SPECIALPATH is set appropriately):
#/bin/sh
for i in "$@"
do
    (
    cd $SPECIALPATH/$i
    ls |
    while read q
    do uuencode $q $q | sendmail $i
    done
    )
done
I can then invoke it with:
script-name $(<list-of-dirs)
That means that without editing the script, it can be reused for any list of directories.
Intermediate step 1:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
    echo "read -d '' directives <<EOF"
    ls
    echo "EOF"
    echo "for q in \$directives"
    echo "do"
    echo "  uuencode \$q \$q | sendmail $i"
    echo "done"
    } |
    sh
    )
done
Personally, I find it easier to read the generated script if the code that generates it makes the generated script clear - using multiple echo commands. This includes indenting the code.
Intermediate Step 2:
for i in $commands
do
    (
    cd $SPECIALPATH/$i
    {
    echo "ls |"
    echo "while read q"
    echo "do"
    echo "  uuencode \$q \$q | sendmail $i"
    echo "done"
    } |
    sh
    )
done
I don't need to read the data into a variable in order to step through each item in the list once - simply read each line in turn. The while read mechanism is often useful for splitting up a line into multiple variables too: while read var1 var2 var3 junk will read the first field into $var1, the second into $var2, the third into $var3, and if there's anything left over, it goes into $junk. If you've generated the data accurately, there won't be any junk; but sometimes you have to deal with other people's data.
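A small illustration of that splitting behaviour (the input line is made up):
echo "alice 22 ftp extra words here" |
while read var1 var2 var3 junk
do
    echo "var1=$var1 var2=$var2 var3=$var3 junk=$junk"
done
# prints: var1=alice var2=22 var3=ftp junk=extra words here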
If the generated script is meant to be temporary, I would not use files. Besides, chmoding them to be executable sounds unsafe. When I needed to parallelize my scripting, I used a bash script to form a set of commands (in an array, split the array in two, then implode the array) into a single \n-separated string, and then pass that to a new bash instance.
Basically, in bash:
for orig in "$@"
do
    commands="$commands echo \"echoing stuff here for argument $orig\" \n"
done
echo -e $commands | bash
And a small tip: if the script doesn't need supervising, throw in a & after the piped bash to make your first script quit and do the rest of the work forked background.
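That is, roughly (same pipeline, with a trailing ampersand; sketch only):
echo -e $commands | bash &   # run the generated commands in the background and return immediately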
If you export a variable
export VAR1=FOO
it'll be present in any child processes.
If you take a look at the init scripts, /etc/init.d/*, you'll notice that many source another file full of "external" definitions. You could set up a file like that and have your child script source these files.
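A minimal sketch of both ideas (the file name definitions.sh is just an assumption):
export VAR1=FOO
bash -c 'echo "$VAR1"'      # the child process sees FOO
# or keep shared settings in a file and source it from the child script:
. ./definitions.sh
echo "$VAR1"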
