How to edit a file owned by root on SSH connect - linux

Hi, I'm sure there is some way of doing what I want, but maybe I'm just attacking it the wrong way. I hope someone can help.
I have a dev box that I SSH in to from several other machines. In order to debug remotely I need to configure my debugger with my client machine's IP, which changes when I log in from different machines. I'm getting bored of doing this manually all the time so thought I'd try and automate it.
I'm creating a script that is automatically run upon SSH connection that will modify a configuration setting in a PHP ini file. The problem is the PHP ini files are all owned by root so I'm not sure how to modify those files if I'm just logging in as a normal user.
There's not really a security concern with my dev box so I could just change the owner of the ini file, but I wanted it to be more automated than that.
My current attempt is a Python script located in my home dir, which is called from .bashrc when I connect via SSH. I don't see how I can gain root privileges from there; I am pretty new to Linux, though. I thought maybe there would be some other method I'm not aware of.

You have a file that is owned by root. You clearly need either to find a way to mark the file as modifiable by you, or a way to elevate your privileges so that you are allowed to modify it.
This leads to the two traditional Unix approaches to doing this. They are:
To create a group with which to mark the file, e.g. initdebug; chgrp/chmod the file so it has the initdebug group and is group-writable; and add yourself to the initdebug group so you can use the group write permission to modify the file (see the sketch after these lists).
To create a very small, audited binary executable (this won't work with a script) that performs the specific modification you want (for simplicity I would suggest copying one of a selection of root-owned PHP ini files into the right place). Then chown the file so it is owned by root, and set the suid bit on the executable so it executes as root.
You can also combine the two approaches, either:
Not making yourself a member of the initdebug group or setting suid on the executable, but rather setting the group of the executable to initdebug and setting its sgid bit; or,
Keeping the executable suid root but making it executable only by the initdebug group, and therefore only by users added to that group.
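For the first (group) approach, the setup is roughly the following, run as root. The ini path and the user name are examples, not taken from the question; substitute your own.
groupadd initdebug
usermod -aG initdebug youruser                  # log out and back in for the new group to take effect
chgrp initdebug /etc/php5/conf.d/xdebug.ini     # example path to the ini file you want writable
chmod g+w /etc/php5/conf.d/xdebug.ini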
The security trade-off is in the ease/risk of privilege escalation should someone compromise your account. If there is a stack/heap overflow or similar vulnerability in the executable and it is executing as root, you are in trouble. Likewise, if the PHP ini file can be modified to open a remote vulnerability, and an attacker can modify the ini file directly, you are also in trouble.
As I suspect the latter is possible, you are probably best off with a small executable.
Note: As I alluded to above, Unix does not acknowledge the s[ug]id bits on scripts (defined as anything using the #!... interpreter syntax). Hence, you will have to use something you can compile down to a binary. That probably means C, C++, Java (using gcj), ML, Scheme (MIT), or Haskell (GHC).
If you haven't done any C or C++ programming before, I would recommend one of the others, as a suid binary is not a project with which to learn C/C++. If you don't know any of the other languages, I would recommend either ML or Java as the easiest in which to write something small and simple.
(By the way, http://en.wikipedia.org/wiki/List_of_compilers includes a list of alternative compilers you can use. Just make sure it compiles to native code, not bytecode; as far as the OS is concerned, a bytecode VM is just another interpreter.)
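If you do go the suid-binary route, the installation side looks roughly like this. The helper name and path are placeholders for whatever binary you compile; this mirrors the combined option above (suid root, executable only by the initdebug group).
chown root:initdebug /usr/local/bin/setdebugip
chmod 4750 /usr/local/bin/setdebugip            # suid root; only owner and the initdebug group may execute it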

You can do this by adding your user to the sudoers file on the machine you want to access remotely.
For an example, see my blog:
http://nanamo3lyana.blogspot.com/2012/06/give-priviledge-normal-user-as-root.html
Then, in your automatically run script, prefix the command with sudo.
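For example, a sudoers entry along these lines; the user name and script path are placeholders, and the file should always be edited with visudo.
youruser ALL=(root) NOPASSWD: /usr/local/bin/update-debug-ip.sh
With that in place, the script run from .bashrc can call:
sudo /usr/local/bin/update-debug-ip.sh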

Related

How to move lots of dotfiles staying at /home without breaking programs?

With more and more programs installed on my computer, I am tired of seeing lots of dotfiles, while at the same time I have to access them often. For various reasons I won't hide dotfiles when browsing files. Is there a way to move them to a better place where I want them to stay (e.g. ~/.config/$PROGCONF) without affecting programs while they are running?
Symlinks still leave entries behind, which is far from what I expect. I expect that operations like listdirs() won't show the files, while opening them uses a redirection.
"For some reason it won't hide dotfiles when browsing files.":
That depends on the file manager you use. Nautilus hides them by default, and most file managers have an option to show/hide hidden files. The ls command by default omits hidden files (files starting with a dot); it lists all files with the -a option.
"Is there a way to move them to a better place":
Programs which support the XDG Base Directory specification can store their config files in ~/.config/$PROGRAM_NAME/. If the program doesn't support that and expects the config file to be present in the home directory, there is little you can do (maybe you can give us a list of which programs' config files you want to move). The process differs for each program.
Let me give an example with vim. Its config file is ~/.vimrc. Let's say you move the file to ~/.config/vim/.vimrc. You can make vim read that file by launching vim with the following command.
vim -u ~/.config/vim/.vimrc
You can modify the .desktop entry or create a new shell script to launch vim using the above command and put it inside /usr/local/bin/ or create shell functions / aliases. You can read more about changing vim's config file location in this SO question.
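For instance, an alias in your ~/.bashrc, using the path from the example above:
alias vim='vim -u ~/.config/vim/.vimrc'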
This arch wiki article has application specific information.
"without affecting programs while running":
It depends on a few factors, namely the file system used, the program we are dealing with, and so on.
Generally, deleting or moving a file only unlinks the file name from an inode, and programs read and write files using inodes. Read more here. Most programs read the config file at startup and load the values into memory; they rarely read the config file again. So if you move your config file while the program is running (assuming the program supports the config in both places), you won't see a difference until the program is restarted.
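You can see the unlinking behaviour with a quick shell experiment; the file names here are arbitrary.
echo "key = old" > demo.conf
tail -f demo.conf &                  # the reader now holds the inode open
mv demo.conf moved.conf              # the rename only changes the directory entry
echo "key = new" >> moved.conf       # still appears in the tail output: same inode, new name
kill %1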
"I expect that operations like listdirs() won't show the files"
I am assuming you are talking about os.listdir() in Python. If the files are present, os.listdir() will list them; there is little you can change about that. But you can write custom functions to omit the hidden files from the listing.
This SO question can help with that.

why is bash(.sh) script not executable by default

Why is a bash (.sh) script not executable by default?
I understand that when we touch a file in Linux, it is created for reading purposes.
But file name extensions such as .sh and .csh are meant for execution.
Wouldn't it be ideal to create them in executable mode?
The question might sound redundant, but I still thought of asking it :)
Ultimately the answer to this question is that that's not what touch was designed to do.
In fact, touch was not even designed to create files; its primary purpose in life is to change file timestamps. It is merely a side effect of that purpose, and the fact that the designers decided to be a little more generous and create the target file if it doesn't already exist (and if you didn't provide the -c option), that it allows you to create files at all.
It should also be mentioned that there are other techniques that create files, such as redirection (echo 'echo '\''I am a script.'\'';' >|script.sh;). The act of file creation is a generic one, and the whole concept of a file is a generic one. A file is just a byte stream; what goes in the byte stream is unspecified at the file abstraction layer. As @AdamGent mentioned, Windows requires certain types of executable files to have certain extensions in order to be executed properly, but even in Windows, you can put executable code in non-executable-extensioned files, and you can put non-executable content in executable-extensioned files. There's no enforcement of file name / file content correspondence at the file layer.
All of that being said, it would often be a convenience if you could easily create a script in Unix that automatically has executable permission set. I've actually written a script to allow me to edit a new file in vim, and then have its permissions set to executable after write-quitting. The reason this potential convenience has not been standardized into a utility is likely related to the concern about security; you don't want people to accidentally make files executable, because that raises the risk of security holes.
You can always write your own program to create a file and make it executable, perhaps based on the extension of the file name.
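For example, a small shell function along those lines; the name newscript is made up.
newscript() {
    printf '#!/bin/bash\n\n' > "$1"   # start the file with a shebang line
    chmod +x "$1"                     # mark it executable straight away
    "${EDITOR:-vi}" "$1"              # open it for editing
}
# usage: newscript deploy.sh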
Another thing that can be added here is that even shell scripts don't always need to be executable. For example, if you write a script that is only intended to be sourced from existing shell processes (via the source or classic . builtins), then the script does not need to be executable at all. Thus, there are cases where the file extension itself does not provide enough information to determine what the appropriate permissions are for the file.
There is nothing in the file name that says a file is even a script. Common practice perhaps says that .sh and .csh are scripts but there is no such rule.
What makes a file an executable script is the magic number in the beginning of the file. The magic number #! (the shebang, which has many other names as well) means the file is a script. For example
#!/bin/bash
When you try to execute the file (it must then also be set to executable in the permissions), the Linux kernel looks at the first two bytes of the file and finds the magic number (#!). It then knows it is supposed to execute whatever comes after the Shebang with the name of the file as argument followed by the rest of the arguments passed.
If you type the following in the shell
blablabla.sh lol1 lol2
The shell recognizes that you are trying to execute something so it invokes the kernel
exec blablabla.sh lol1 lol2
The kernel finds the shebang, and it becomes
exec /bin/bash blablabla.sh lol1 lol2
By exec I mean one of the exec family of system calls.
Other fun names for #! include sha-bang, hashbang, pound-bang, hash-exclam and hash-pling
Because the .sh script extension is completely arbitrary in Unix environments. It's only a convention. You can name your script whatever you like as long as it has the executable bit set. This is unlike Windows, where I believe it's required (.com, .exe, and I think .dll).
touch just changes the timestamp of the file; it does not care what the extension of the file is. In fact, most tools in Unix do not care about file extensions.

How to manage users in linux without using adduser/useradd

I have to create a Perl script that allows me to add/delete/modify users on my Debian system. I've read that this is not recommended, but I've been asked to do it without using system calls, so no adduser/useradd allowed. This script should check if a user has been added to or deleted from a MySQL database and then add the user to the system. Maybe by modifying /etc/passwd directly? Is there any Perl module to do this?
Thanks in advance
On most distributions nowadays, there is also a file /etc/shadow that is used in conjunction with /etc/passwd, so you most likely would have to add a line to /etc/shadow as well. In addition, you most likely would have to create a home directory and a group for the new user, as well as a few other steps. See http://www.tldp.org/LDP/sag/html/adduser.html for the steps involved in creating a new user manually.
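In shell form, those steps look roughly like this. This is a sketch only, run as root; the user name, UID and GID are invented, and proper locking of /etc/passwd and /etc/shadow is omitted.
echo 'newuser:x:1501:1501:New User:/home/newuser:/bin/bash' >> /etc/passwd
echo 'newuser:!:19000:0:99999:7:::' >> /etc/shadow     # '!' keeps the account locked until a password is set
echo 'newuser:x:1501:' >> /etc/group
mkdir -p /home/newuser && cp -r /etc/skel/. /home/newuser
chown -R newuser:newuser /home/newuser
passwd newuser                                         # set the real password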
Having said all of the above, I'm curious to know what the objection is to simply using useradd called from the perl script. If it is done safely (i.e. inputs are sanitized to prevent a command line injection attack), this approach seems to be a lot simpler and less error prone than trying to script perl to add a user by doing all of the above steps.
A quick CPAN search turns up Unix::Passwd::File. This module can be used to read and manipulate password file entries.

Determining standard file locations under Linux

Is there a standard way of determining file locations under Linux? Even better, are there any POSIX API's which allow the retrieval of standard file locations?
For example, how can I determine a user's home directory? Or, how can I determine the proper location for system configuration files?
I know that typically these locations would be "/home/username" or "/etc/". Should I just hardcode the paths as such?
The path to the current user's home directory is in the environment variable HOME. (I know systems where home dirs are spread over several partitions (say, /vol/vol[number]/[first letter]/[user name]) and not located in /home/.)
For other users, there's getpwent (and getpwent_r), which pull the home directory from the passwd entry.
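From a shell script, the same lookups are available through the getent command; the user name below is just an example.
echo "$HOME"                          # current user's home directory
getent passwd alice | cut -d: -f6     # home directory of another user, from the passwd entry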
For the other directories, there is the File System Hierarchy Standard, which most Linux distros adhere to and some other OSen as well.
I don't think there's an API for this. Thus, if a system does things differently, you're on your own -- good luck! ;-)
The current user's home directory can be found in the HOME environment variable. For other users, you can use the getpwnam or getpwuid functions (or the _r variants) to look up another specified user's home directory, among other things.
I know that you didn't ask this, but if you're looking to find the location of an executable, you can use which.

linux script, standard directory locations

I am trying to write a bash script to do a task. I have done pretty well so far and have it working to an extent, but I want to set it up so it's distributable to other people, and I will be opening it up as open source, so I want to start doing things the "conventional" way. Unfortunately I'm not all that sure what the conventional way is.
Ideally I want a link to an in-depth online resource that discusses this and surrounding topics, but I'm having difficulty finding keywords that will locate this on Google.
At the start of my script I set a bunch of global variables that store the names of the dirs that it will be accessing. This means I can modify the dirs quickly, but these are programming shortcuts, not user shortcuts; I can't tell the users that they have to fiddle with this stuff. Also, I need individual users' settings not to get wiped out on every upgrade.
Questions:
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate? Tweak the reference in the source of the dev version?
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be? The user will need to access some of this data, but it's mostly for internal use. I personally work in Cygwin, and I like to keep this data on a separate drive, so the path is weird; I suspect many users could find this. As a default, however, I'm thinking ~/gpsdata/ -- would this be normal, or should I hard-code a system that asks the user at first run where to put it, and stores this in the settings folder? Whatever happens, I'm going to have to store the directory reference in a file in the settings folder.
The program needs a data "inbox", i.e. a folder into which the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/? Though there will always be an option to add a file or folder on the command line to use as well (it processes files from all locations listed, including the "inbox").
Where should the script itself go? It's already smart enough that it can create all of its ancillary/settings files (once I figure out the "correct" directory) if run with "./foo --setup". I could shove it in /usr/bin/ or /bin or ~/.foo/bin (and add that to the path). What's normal?
I need to store login details for a web service that it will connect to (using curl -u, if it matters). I plan on including a setting whereby it asks for a username and password on every execution, but it currently stores them in plain text in a file in ~/.foo/ -- I know, this is not good. The web service (osm.org) does support OAuth, but I have no idea how to get curl to use it -- getting curl to speak to the service in the first place was a hack. Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with notepad?
Sorry for the list of questions, I believe they are closely related enough for a single post. This is all stuff that stabbing at, but would like clarification/confirmation over.
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate?
Have a default of ~/.foo, and an option (for example --config-directory) that you can use to override the default while developing.
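A minimal sketch of that pattern; the option name is the example above, and for brevity the flag is only checked as the first argument.
CONFIG_DIR="$HOME/.foo"                   # the normal default
if [ "$1" = "--config-directory" ]; then
    CONFIG_DIR="$2"                       # development override
    shift 2
fi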
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be?
If your script is running under a normal user account, this will have to be somewhere in the user's home directory; elsewhere, you'll have no write permissions. Perhaps ~/.foo/tracklog or something? Again, add a command line option, and also an option in the configuration file, to override this.
I'm not a fan of your ~/gpsdata default; I don't want my home directory cluttered with all sorts of directories that programs created without my consent. You see this happen on Windows a lot, and it's really annoying. (Saved games in My Documents? Get out of here!)
The program needs a data "inbox", i.e. a folder into which the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/?
As stated above, I'd prefer ~/.foo/inbox. Also with command-line option and configuration file option to change this.
But do you really need an inbox? If the user needs to run the script manually over some files, it might be better just to accept those file names on the command line. They could just be processed wherever, without having to move them to a "magic" location.
Where should the script itself go?
This is usually up to the packaging system of the particular OS you're running on. When installing from source, /usr/local/bin is a sensible default that won't interfere with package managers.
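For a source install, something like the following, assuming the script is called foo:
sudo install -m 0755 foo /usr/local/bin/foo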
Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with notepad?
Yes, there is. But it's better not to, because it creates a false sense of security. Without a master password or something, secure storage is not possible! Pidgin, for example, explicitly stores passwords in plain text, so that users won't make any false assumptions about their passwords being stored "securely". So it's best just to store them in plain text, complain if the file is world-readable, and add a clear note to the manual to warn the user what's going on.
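The "complain if world-readable" check can be as simple as this; the file name is illustrative, and stat -c is the GNU coreutils form.
CRED_FILE="$HOME/.foo/credentials"
if [ "$(stat -c %a "$CRED_FILE")" != "600" ]; then
    echo "warning: $CRED_FILE should be chmod 600 (owner read/write only)" >&2
fi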
Bottom line: don't try to reinvent the wheel. There have been thousands of scripts and programs that faced the same issues; most of them ended up adopting the same conventions, and for good reasons. Look at what they do, and mimic them.
You can start with the Filesystem Hierarchy Standard. I'm not sure how well followed it is, but it does provide some guidance. In general, I try to use the following:
$HOME/.foo/ is used for user-specific settings - it is hidden
$PREFIX/etc/foo/ is for system-wide configuration
$PREFIX/foo/bin/ is for system-wide binaries
sym-links from $PREFIX/foo/bin are added to $PREFIX/bin/ for ease of use
$PREFIX/foo/var/ is where variable data would live - this is where your input spools and log files would live
$PREFIX should default to /opt/foo even though almost everyone seems to plop stuff in /usr/local by default (thanks GNU!). If someone wants to install the package in their home directory, then substitute $HOME for $PREFIX. At least that is my take on how this should all work.
