Is there any security vulnerability in this code?

It goes like this:
exec("echo '" + param + "' > /etc/test")
Note that it runs with root privileges, and param has the following characters filtered out:
'
"
;
`
&

Your question will not get a yes/no answer without much more context. That's okay; instead, let's look for weaknesses and highlight areas of concern.
Issue 1: it is almost always better to avoid passing untrusted/externally-supplied data into a shell command; the shell provides a dangerous level of power if someone manages to subvert your command with bad input. Open and write the file in your own code rather than using a shell to do it.
Issue 2: it's impossible to say if that exec() would be safe without knowing what shell the command line will be evaluated by.
Issue 3: if you must pass data via a shell (but see Issue 1), robust filtering is vital. From your brief description of the filtering, though, it sounds like you are blacklisting known-bad characters. This is a poor approach; it is always better to whitelist known-good characters and keep that to a very limited set (such as alphanumerics).
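For instance, here is a minimal shell sketch combining both recommendations: whitelist known-good characters, then write the file directly instead of interpolating into an echo command line. The character set and target path are illustrative only:

case $param in
    *[!A-Za-z0-9._-]*)
        echo "invalid input" >&2      # reject anything outside the whitelist
        exit 1
        ;;
esac
printf '%s\n' "$param" > /etc/test    # plain redirection; $param is never re-parsed by a shell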

If you do not specify the PATH, then you cannot be sure what command is executed. You also did not say how you sanitize param. There are a few things to watch out for:
Why do you use exec/echo at all? Any programming language can write to files directly.
the "single string" exec() method needs to deal with parsing/escaping. If you use a method which uses a string array, you dont need to be concerned with it.
In your case /etc is a safe directory, but if you use the same code on a less protected directory, you might follow a symlink and destroy whatever an attacker points it at.
Be aware that the content of your newly created file contains blank-delimited file names, but filenames can themselves contain blanks, so a malicious attacker can fabricate new (non-existing) names. (In fact, every character besides \0, including line breaks, can be part of a filename.)
You need to set up a clean environment (i.e. only add whitelisted entries): LD_PRELOAD, PATH, IFS, CWD/PWD and SHELL can all be tainted.
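For example, a minimal sketch of launching a command with a clean environment; the target path and variable values are illustrative only:

# start from an empty environment (env -i) and add only whitelisted entries;
# /usr/local/bin/mytool is a hypothetical target, invoked by absolute path
env -i PATH=/usr/bin:/bin HOME=/root /usr/local/bin/mytool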

Tcl/Tk log file has many ^H characters when called by a Jenkins job

I have a Tcl/Tk expect script whose log information is written to an external log file.
I can execute it on a Linux server successfully without anything going wrong, and the log file does not contain any weird ^H. But when the script is called by a Jenkins job running on the same Linux server, the log file contains a lot of ^H characters and the expect times out.
What could the possible reason be?
The ^H is actually the backspace character, U+0008, and it is used in terminals (and terminal emulators) to move the current character insertion position one place to the left. This is used in turn to simulate various kinds of effects, such as making a character bold (by double-striking) or giving it an underline (by putting a _ on the same cell). Think of it as going to a traditional teletype, which prints things into the same position twice. It's definitely a hang-over from the old days.
It seems that Jenkins (or maybe whatever it is calling, quite possibly maven though definitely not necessarily!) is using that device to back up over what it has written so it can write a new value there instead, probably for something like a simple download progress meter. Things that are writing to what they think is a terminal sometimes do this. You'll just have to cope. A \b in your Expect regular expressions will match it, though it is possibly unwise to do so, as whatever is being overwritten is transient info. If the characters are being written to a file, the col program (part of the nroff/groff suite) can be used to strip them; that might be easier.
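For example, assuming the Jenkins output was captured to a file called jenkins.log (the name is just a placeholder), col -b drops the backspaces and keeps only the last character written to each position:

col -b < jenkins.log > jenkins-clean.log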
Be aware that there are other ways that software can achieve the same effect, such as writing just a carriage return (which puts the cursor back at the start of the current line).

Add comments next to files in Linux

I'm interested in simply adding a comment next to my files in Linux (Ubuntu). An example would be:
info user ... my_data.csv Raw data which was sent to me.
info user ... my_data_cleaned.csv Raw data with duplicates filtered.
info user ... my_data_top10.csv Cleaned data with only top 10 values selected for each ID.
So, sort of the way you can comment commits in Git. I don't particularly care about searching on these tags, filtering them, etc. - just seeing them when I list files in a directory. Bonus if the comments/tags follow the document around as I copy or move it.
Most filesystem types support extended attributes where you could store comments.
So for example to create a comment on "foo.file":
xattr -w user.comment "This is a comment" foo.file
The attributes can be copied/moved with the file, but be aware that many utilities require special options to copy the extended attributes.
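For instance, with GNU cp and rsync (option names vary by platform and version, so treat these as a sketch):

cp --preserve=xattr foo.file bar.file    # GNU cp: keep extended attributes on copy
rsync -X foo.file backup/                # rsync: -X/--xattrs preserves extended attributes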
Then, to list files with comments, use a script or program that grabs the extended attribute. Here is a simple example to use as a starting point; it just lists the files in the current directory:
#!/bin/sh
# List every file in the current directory, appending the user.comment
# extended attribute when one is set.
for FILE in *; do
    comment=$(xattr -p user.comment "$FILE" 2>/dev/null)
    if [ -n "$comment" ]; then
        echo "$FILE Comment: $comment"
    else
        echo "$FILE"
    fi
done
The xattr command is really slow and poorly written (it doesn't even return an error status), so I suggest something else if possible: use setfattr and getfattr in a more complex script than what I have provided, or maybe a custom ls command that is aware of the user.comment attribute.
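For example, the equivalent operations with the attr tools look like this (a sketch, storing the comment under the same user.comment name):

setfattr -n user.comment -v "This is a comment" foo.file    # write the attribute
getfattr --only-values -n user.comment foo.file             # read just the value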
This is a moderately serious challenge. Basically, you want to add attributes to files, keep the attributes when the file is copied or moved, and then modify ls to display the values of these attributes.
So, here's how I would attack the problem.
1) Store the information in an SQLite database. You can probably get away with one table. The table should contain the complete path to the file and your comment. I'd name the database something like ~/.dirinfo/dirinfo.db. I'd store it in a subfolder, because you may find later on that you need other information in this folder. It'd be nice to use inodes rather than pathnames, but inodes change too frequently (editors that save by write-and-rename allocate a new one, for example). Still, you might be able to do something where you store both the inode and the pathname, and retrieve by pathname only if the retrieval by inode fails, in which case you'd then update the inode information.
2) Write a bash script to create/read/update/delete the comment for a given file.
3) Write another bash function or script that works with ls. I wouldn't call it "ls" though, because you don't want to mess with all the command line options that are available to ls. You're going to be calling ls always as ls -1 in your script, possibly with some sort options, such as -t and/or -r. Anyway, your script will call ls -1 and loop through the output, displaying the file name, and the comment, which you'll look up using the script from 2). You may also want to add file size, but that's up to you.
4) Write functions to replace mv and cp (and ln?). These would be wrapper functions that update the information in your table and then call the regular Unix versions of these commands, passing along any arguments received by the functions (i.e. "$@"). If you're really paranoid, you'd also do it for things like scp, which can be used (inefficiently) to copy files locally. Still, it's unlikely you'll catch all the possibilities. What if someone else, who doesn't have your wrapper function, does a mv on your file? What if some script moves the file by calling /bin/mv? You can't easily get around these kinds of issues.
Or if you really wanted to get adventurous, you'd write some C/C++ code to do this. It'd be faster, and honestly not all that much more challenging, provided you understand fork() and exec(). SQLite does have a C API (it is itself a C library), so you'd have to tangle with that too, but since you only have one database and one table, that shouldn't be too challenging.
You could do it in Perl, too, but I'm not sure it would be much easier in Perl than in bash. Your actual code isn't that complex, and you're not likely to be doing any crazy regex work or string manipulation. There are just lots of small pieces to fit together.
Doing all of this is much more work than should be expected for a person answering a question here, but I've given you the overall design. Implementing it should be relatively easy if you follow the design above and can live with the constraints.
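To make step 2) concrete, here is a hypothetical sketch using the sqlite3 command-line tool, assuming realpath and the sqlite3 CLI are available. The schema and helper names are my own invention, not part of the design above:

#!/bin/bash
# Create/read/delete a per-file comment in ~/.dirinfo/dirinfo.db,
# keyed by the file's absolute path.
db="$HOME/.dirinfo/dirinfo.db"
mkdir -p "$(dirname "$db")"
sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS comments (path TEXT PRIMARY KEY, comment TEXT);'

set_comment() {                        # set_comment FILE "some comment"
    local path comment
    path=$(realpath "$1")
    comment=${2//\'/\'\'}              # double any single quotes for the SQL literal
    sqlite3 "$db" "INSERT OR REPLACE INTO comments VALUES ('$path', '$comment');"
}

get_comment() {                        # get_comment FILE
    sqlite3 "$db" "SELECT comment FROM comments WHERE path = '$(realpath "$1")';"
}

del_comment() {                        # del_comment FILE
    sqlite3 "$db" "DELETE FROM comments WHERE path = '$(realpath "$1")';"
}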

Linux Sort in wrong order

I have a problem using Linux sort when I run it on input like this:
07TLF\1\1\10040
07TLF\1\1\10060
07TLF03\10060
07TLF03\10080
07TLF\1\1\10320
07TLF\1\1\10360
07TLFS\10280
(\1 is the Ctrl+A character.) As you can see, the 3rd and 4th rows are in the wrong place (they should come 5th and 6th).
If I use LC_COLLATE=C, everything works, but in other similar cases sort behaves correctly even if I use a different LC_COLLATE.
I invoke sort from a C program via the system() call, but I get the same behavior running it in bash.
Can you help me?
Thanks
From the manpage of sort (1):
NAME
sort - sort lines of text files
I don't think Ctrl+A belongs in anything that can be called a "text file". If you have special requirements - such as sorting special characters - you have to either use a utility that serves the requirement or implement it yourself if none exists.
Since there is no "natural order" for special characters, you will have to define what you want the order to be. You can either set the locale to make sort behave according to the rules defined there, or implement your own sort algorithm if none of the locales matches your requirements.
By the way: using system() is not a good idea at all if you can avoid it. You never know which version of sort is installed on the target system and which rules it uses.
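If byte-wise ordering is what you want regardless of the ambient locale, you can force the C locale just for the sort invocation (a sketch, assuming the input is in a file called input.txt; LC_ALL overrides LC_COLLATE and every other locale category):

LC_ALL=C sort input.txt > sorted.txt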

Loading untrusted configuration variables

I am working on a bash prompt project which acts upon different states in the directory you are currently in.
What I need now is a way to load a mini configuration from the current directory, which is easily done with . .params.conf, except that this method is extremely insecure: anyone with write access to the directory can create a .params.conf file that executes commands as you when you stumble into that directory.
What is the best way of loading variables from a file like this?
The variables are mostly going to hold a true/false state, so I am not parsing them in any way that could execute code.
Possible solutions:
A loop loading each predefined variable is possible, but I want to keep the code readable.
Putting the whole file into a bash array would be the best solution, but how can I populate a key/value dict in bash like this?
If only bash could source a file while loading just the variables...
I don't know yet what the format of the params.conf file will look like, but I think the easiest would be one parameter per line, key and value separated by a space, where the value can itself contain spaces.
eval can be very insecure and can still execute malicious code. It's better to use declare:
while read -r varname value; do
    # declare performs a plain assignment; the value is never
    # re-parsed or executed as shell code
    declare "$varname=$value"
done < .params.conf
If your values do not contain quotes, it is rather easy:
while read -r varname value; do
    # the single quotes keep $value from being expanded by eval,
    # but only as long as $value itself contains no single quotes
    eval $varname="'$value'"
done < .params.conf
If there are quotes inside the values, you have to be more careful and add some escaping.
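A more defensive variation on the declare loop (a sketch; the key names in the whitelist are hypothetical) is to accept only variable names you know in advance, so a hostile .params.conf cannot overwrite things like PATH:

while read -r varname value; do
    case $varname in
        show_git_status|show_jobs|prompt_color)    # hypothetical whitelist of known keys
            declare "$varname=$value" ;;
        *)
            echo "ignoring unknown key: $varname" >&2 ;;
    esac
done < .params.conf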

Could I have any issue using the @ (at sign) in a *nix directory?

I'm developing a website and want to name a directory after the user's username, which is going to be their email address (so I don't have to generate new IDs, etc.).
I've made some tests and it seems to work fine. Also, I didn't find any documentation against using the "@" in a directory name, but could I run into some problem in the future with this approach?
I mean, might some browser not be able to upload images from this directory, or some other problem?
If you plan to run Perl scripts (and possibly other languages) against those files, you will need to remember to escape the @ sign. It's not a huge problem, but I personally would not do it.
More importantly, if the path is visible to the browser, you would be disclosing the user's email address to the whole world.
I would suggest using something like an MD5 hash of the user's email instead. It is (relatively) unique, and you can recalculate it very easily if you need to. Gravatar uses this approach, for instance. See: http://en.gravatar.com/site/implement/hash/
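For example, Gravatar's recipe (trim whitespace, lowercase, then MD5) can be reproduced in the shell; a sketch assuming GNU coreutils' md5sum and a placeholder address:

printf '%s' 'user@example.com' | tr 'A-Z' 'a-z' | md5sum | cut -d' ' -f1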
No, there should be no problems: browsers just read the file, and they don't care that much about the name, only about the file content (the headers are what matter).
Historically, some remote filesystems have used the @ to "escape" from normal path processing and do "interesting" stuff.
Some version control systems use @ to denote a certain version of a path (e.g. Subversion, ClearCase).
Some other tools use @ in "user@remote_host" arguments; AFAIK rsync is one of them, which might bite you. You should check whether any such tool is used somewhere on your site for backup, syncing, or the like.
So - I would not use that character within filenames.
