Printer config file in *nix

Are the settings or configuration specifics of a printer on a *nix system using CUPS stored in a file? My assumption is yes, as *nix systems seem to use files for everything as opposed to using a registry system as does Windows. If so, where are such files located? Are they capable of having their file permissions modified, and if so, what could cause such a thing to occur in a non-manual way?
This question relates to one of my other questions (it explores one particular theory toward an answer there), but is decidedly separate.

Check /etc/cups; for printers, the file is printers.conf.
They can have their permissions modified, since they usually belong to the lp group rather than a single user. Check cron jobs, system updates, and any other CUPS interface that your distribution provides.
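You can check quickly from a shell (exact paths and ownership vary a bit by distribution):
ls -l /etc/cups/printers.conf   # typically root:lp, mode 600
grep -r cups /etc/cron* 2>/dev/null   # cron jobs that might touch CUPS files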

Related

Best practices for maintaining configuration of embedded linux system for production

I have a small single-board computer which will be running a Linux distribution and some programs, and has specific user configuration, directory structure, permission settings, etc.
My question is, what is the best way to maintain the system configuration for release? In my time thinking about this problem I've thought of a few ideas but each has its downsides.
Configure the system and burn the image to an ISO file for distribution.
This one has the advantage that the system will be configured precisely the way I want it, but committing an ISO file to a repository is less than desirable since it is quite large, and checking out a new revision means reflashing the system.
Install a base OS (which is version-locked) and write a shell script to configure the settings from scratch.
This one has the advantage that I could maintain the script in a repository and pick up config changes by pulling changes to the script and running it again; however, now I have to maintain a shell script to configure a system, and it's another place where something can go wrong.
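A minimal sketch of what I have in mind (package names, user names and paths are just placeholders):
#!/bin/sh
set -e  # abort on the first failure

# create the application user and directory layout
useradd --system --create-home appuser
mkdir -p /opt/myapp/bin /opt/myapp/data

# install version-pinned packages (Debian-style syntax assumed)
apt-get install -y mypackage=1.2.3

# apply configuration and permissions
cp configs/myapp.conf /etc/myapp.conf
chown -R appuser:appuser /opt/myapp
chmod 750 /opt/myapp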
I'm wondering what the best practices are in embedded in general so that I can maybe implement a good deployment and maintenance strategy.
Embedded systems tend to have a long lifetime. Do not be surprised if you need to refer to something released today in ten years' time. Make an ISO of the whole setup (source code, diagrams, everything) and store it away redundantly. Someone will be glad you did a decade from now. Just pretend it's going to last forever and that you'll have to answer a question or research a defect in ten years.

How to "safely" allow others to work on my server?

I sometimes need to pay someone to perform some programming which exceeds my expertise, and sometimes that someone is a person I don't know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a VPS (a Linux guest) on my main physical server (also running Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Does that seem secure? Are there better options? Thank you.
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail", a user cannot see outside of the jail. You've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point is YOU decide what the developer can and can't see).
You can use a directory for the chroot, or you could use my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system in a small file (say, 5 GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then we can mkfs.ext4 jailFile. Finally, we must mount it and include any files we wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies into /bin and such), either manually or with a tool.
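Putting those steps together (the mount point and jail contents here are just examples):
dd if=/dev/zero of=jailFile bs=1M count=5120
mkfs.ext4 jailFile

# mount the file through a loop device and populate a minimal tree
mkdir -p /mnt/jail
mount -o loop jailFile /mnt/jail
mkdir -p /mnt/jail/bin /mnt/jail/lib /mnt/jail/lib64
cp /bin/bash /mnt/jail/bin/
ldd /bin/bash   # shows which shared libraries bash needs copied in too

# enter the jail
chroot /mnt/jail /bin/bash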
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail is appealing to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a plain directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with the OpenSSH server.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
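A rough sketch, using either POSIX ACLs (if your filesystem has them enabled) or a dedicated group; the account and file names are hypothetical:
# restricted account for the developer
useradd -m contractor

# fine-grained access via an ACL
setfacl -m u:contractor:rw /etc/httpd/conf/httpd.conf

# or the traditional route: a shared group on the file
groupadd webconf
usermod -aG webconf contractor
chgrp webconf /etc/httpd/conf/httpd.conf
chmod g+rw /etc/httpd/conf/httpd.conf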
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file within a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.
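A rough sketch of that workflow (the repository layout and paths are hypothetical):
# on the server: review the contractor's changes before deploying
cd /srv/apache-config
git fetch origin
git diff HEAD origin/main   # inspect every proposed change
git merge origin/main

# deploy only if the config parses cleanly
cp httpd.conf /etc/httpd/conf/httpd.conf
apachectl configtest && apachectl graceful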

Linux file synchronization between computers

I'm looking for software which will allow me to synchronize files in specific folders between my Linux boxes. I have searched a lot of topics and what I've found is Unison. It looks pretty good, but it is not under development anymore and does not allow me to see file change history.
So the question is - what is the best linux file synchronizer, that:
(required) will synchronize only selected folders
(required) will synchronize computers at given time (for example each hour)
(required) will be intelligent - will remember what was deleted and when, and will ask me if I want to delete it on the remote machine too.
(optionally) will keep track of changes and allow to see history of changes
(optionally) will be multiplatform
Rsync is probably the de facto standard.
I see Unison is based on the rsync algorithm -- not sure if rsync alone can achieve number 3 above.
Also, see this article with detailed information about rsync, including available GUIs for it.
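For example, a basic invocation covering numbers 1 and 2 when run from cron (the paths and host are placeholders). Note that --delete removes remote files without asking, which is why number 3 is doubtful with rsync alone:
# push a selected folder to the remote box, preserving metadata
rsync -az --delete ~/work/ user@otherbox:~/work/
# crontab entry to run it at the top of every hour
0 * * * * rsync -az --delete ~/work/ user@otherbox:~/work/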
While I agree rsync is the de facto Swiss Army knife for Linux users, I found two other projects more interesting, especially for a use case where I have two workstations in different locations plus a laptop, all three machines for work, so I felt the pain here. I found a really nice project called:
https://syncthing.net/
I run it on a public server with VPN access where my machines are always connected, and it simply works. It has a GUI for monitoring purposes (basic, but enough info available).
The second is paid, but has similar functionality built in on top:
https://www.resilio.com/
Osync is probably what you're looking for (see http://www.netpower.fr/osync )
Osync is actually rsync based but will handle number 3 above without trouble.
Number 4, keeping track of modified files, can be more or less achieved by adding the --verbose parameter, which will log file updates.
Actually, only number 5 won't work: Osync runs on most Unix flavors, but not Windows.
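For requirement 2, a cron entry is the usual approach; assuming osync's config-file invocation (the config path is hypothetical, so check the osync documentation for your version):
# bidirectional sync at the top of every hour
0 * * * * /usr/local/bin/osync.sh /etc/osync/sync.conf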

Hard-coding paths in Linux

Coming from Windows background here.
Is it an acceptable practice for GUI Linux applications to store their data files (not user-specific) at hard-coded locations (e.g. /etc/myapp/stuff)? I couldn't find any syscalls that would return the preferred directory for app data. Is there a convention out there as to what goes where?
/opt/appname/stuff according to the Linux Filesystem Hierarchy Standard
Your distribution's packaging system likely provides ways to handle common installation paths. What distribution are you using?
Generally speaking, yes there is a convention. On most Linux systems, application configuration files are typically located at /etc/appname/. You'll want to consult the LSB (Linux Standard Base) and the Linux FHS (Filesystem Hierarchy Standard) for their respective recommendations.
Also, if you are targeting your application towards a specific Linux distro, then that distro vendor probably has their own specific recommendations as far as packaging and related-conventions are concerned. You'll want to look at your distro vendor's developer pages for more information.
Configuration files for processes with elevated privileges are generally stored in /etc. Data files for processes with elevated privileges (Web Server, Mail Server, Chat Server, etc.) are generally stored in /var. And that's where consistency ends. Some folks say you start with the location to store them (/etc|/var) then have an appname sub-folder for your app, then continue from there as necessary.
If you're not a system daemon with elevated privileges, your only consistent choice is a dot directory in the launching user's home directory. I think the Free Desktop Standards (XDG) specify ~/.config for per-user configuration, and ~/.cache for replaceable static and/or generated data you need to save.
Looking at my Home Directory, a few key dot directories I have are:
~/.cache
~/.config
~/.irssi
~/.maildir
~/.mozilla
~/.kde
~/.ssh
~/.vnc
[edit]
While not a syscall, the XDG specifications I reference are at http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
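In practice, honoring that spec just means checking the XDG environment variables and falling back to the documented defaults; a shell sketch ("appname" is a placeholder):
# per the XDG Base Directory spec: use $XDG_CONFIG_HOME if set,
# otherwise fall back to ~/.config (likewise for the cache)
CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}/appname"
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/appname"
mkdir -p "$CONFIG_DIR" "$CACHE_DIR"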
There are certain conventions.
System-wide, readable/editable (text-based) configuration files go in /etc/appname/.
System-wide, per-machine binary data files that change (e.g. binary databases) go in /var/*/appname/ - /var/cache/appname/, /var/spool/appname/ and /var/lib/appname/ are the most common.
System-wide binary data files that could notionally be shared between machines (e.g. graphics and sound files) go in /usr/share/appname/.
The full paths that Unix/Linux/GNU applications use to store config files and other data are usually set when an application is configured prior to compilation. These paths then get hard-coded into the compiled binary (you can see examples of this by running strings(1) over some existing executables).
That is, these types of paths are build-time configurable, not run-time configurable by default. Many apps will support command line options to specify where a configuration file is, and that configuration file will usually contain paths for other application resources. This allows an application to run with minimal configuration (built-in paths) but also allows a site to customise the paths completely.
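For example, with an autotools-based build the standard locations are chosen at configure time ("myapp" here is a stand-in):
# build-time path selection (standard autoconf flags)
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
# the chosen paths end up hard-coded in the resulting binary:
strings myapp | grep /etc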
Under Linux, only the basic services (opening a file, networking, interprocess communication, etc.) are provided as system calls. The rest is done using libraries.
If you are coding a GUI application, you should look into your toolkit's documentation to see if it provides a mechanism for managing defaults. Both KDE and GNOME have one, for instance.

User-dependent file content

For some unfortunate reasons, I have to convert a proprietary binary library from a one-user-per-workstation setup to a multi-user-per-workstation setup.
Current setup. A user runs a program linked against a library. This library reads a system-wide configuration file (using a hard-coded path, i.e. /usr/local/thelib/main.conf) which itself contains several paths to several working directories. The working directories themselves contain a bunch of user data files.
Desired outcome. Being able to manage several users on the same workstation. Of course, a user must not be able to read or alter any other user's data through the library, which should be taken care of by Unix permissions if I manage to feed the library a different working directory for each user.
The library might be used by several users at the same time, so symlinking the configuration file into /usr/local at runtime is not an option.
I was thinking of using FUSE in order to provide different content for the file /usr/local/thelib/main.conf, depending on an environment variable or the current Unix user. The environment variable would then be used as a switch inside the code producing the configuration file.
I'm comfortable using Python, Perl or C.
The workstation is running an up-to-date GNU/Linux Debian or Ubuntu distribution with a pretty recent kernel.
So, what do you think:
would you use FUSE?
would you produce another kind of wrapper - using chroot(2), as suggested below by janneb?
use something else allowed by Linux?
I sort of know that I would be able to produce something functional, but I'd like the community's advice since I don't want to reinvent the wheel right now.
Thanks.
Florian
You could use LD_PRELOAD to load a small stub that intercepts open() calls and opens ~/.main.conf instead (assuming the library is a shared object). Then in your application startup routine, check that LD_PRELOAD is set to the correct value, and if not, restart the app with the correct environment.
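A minimal sketch of such a stub, written out and compiled from the shell. The intercepted path comes from the question; everything else is illustrative, and a real version would also need to cover fopen(), open64(), etc. if the library uses them:
cat > stub.c <<'EOF'
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = dlsym(RTLD_NEXT, "open");

    mode_t mode = 0;
    if (flags & O_CREAT) {   /* a mode argument is only present with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    /* redirect the library's hard-coded path to a per-user file */
    static char buf[4096];
    const char *home = getenv("HOME");
    if (home && strcmp(path, "/usr/local/thelib/main.conf") == 0) {
        snprintf(buf, sizeof buf, "%s/.main.conf", home);
        path = buf;
    }
    return real_open(path, flags, mode);
}
EOF

gcc -shared -fPIC -o stub.so stub.c -ldl
LD_PRELOAD="$PWD/stub.so" ./theapp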
A simple way would be for the app to call chroot() before calling the library init function(s). E.g. if you chroot into $HOME/theapp then each user can have a private config file in $HOME/theapp/usr/local/thelib/main.conf as well as private working dirs somewhere under $HOME/theapp.
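Under that scheme each user's tree just mirrors the hard-coded path, e.g. (the template name is hypothetical; note that calling chroot() requires root privileges, so the app would need to drop them afterwards):
# per-user jail tree mirroring the library's hard-coded path
mkdir -p "$HOME/theapp/usr/local/thelib"
cp /usr/local/thelib/main.conf.template "$HOME/theapp/usr/local/thelib/main.conf"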
