I am currently working on a configuration application for an embedded device and was wondering if there is a standard and accepted way of keeping application settings in Linux, something analogous to the Registry system in Windows.
GNOME seems to have a gnome-settings system that some graphical applications use, but I am going to be working on a headless, embedded device. The best advice I could find so far seems to be that I should just keep it under /etc.
Is there a universally accepted way of keeping app/user settings in Linux or is it simply a case of keeping it in a file under /etc?
Thanks.
Generally, global settings go in /etc, and per-user settings go in a dot-file or dot-directory under the user's home directory. So you have (from memory) /etc/bashrc and ~/.bashrc for bash.
Oh, and not all applications put their configuration under /etc. That's usually for applications that need some level of system management, as opposed to purely user applications like a word processor or game.
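As a rough sketch of that convention, a hypothetical application could read a system-wide file first and then let a per-user file override it (both file names here are made up, purely for illustration):

# System-wide defaults (hypothetical path)
[ -r /etc/myapp.conf ] && . /etc/myapp.conf
# Per-user overrides, read last so they take precedence (also hypothetical)
[ -r "$HOME/.myapprc" ] && . "$HOME/.myapprc"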
Is there any software that works like this?
It runs as a standalone program. No installation is needed; thus, it can be used as an Ansible module.
After running the program on a remote Linux machine, I can open up a web browser and load a web page served by the program. The program provides features similar to a file explorer, an IDE-level code editor, a debugger, etc. In terms of a debugger, there is already something similar: gdbgui.
There is another way, such as GNOME, KDE or X11. However, these require many packages to be installed. I don't want them installed, because my Linux machines are kept small and secure.
You might consider having some terminal emulator running inside a browser. Such things exist; e.g. libonion has oterm as an example application. Then you can do all the things that a command-line interface through a Unix shell provides (of course, you won't be able to run GUI applications, e.g. X11 clients such as GTK or Qt applications).
You could also consider some Webmin-like tool.
Notice that you don't need to have a desktop environment on a remote Linux machine. Most of them (e.g. internet servers) have only a command-line interface.
Learn more about X11: you could have an X11 server on your laptop (e.g. under Windows if so needed) and remotely run X11 clients (that is, GUI applications) with ssh -X on your remote Linux system.
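For instance (the host name is only a placeholder):

# Run on the laptop that has an X11 server; -X enables X11 forwarding over ssh
ssh -X user@remote.example.com
# Then, in the remote shell, launch a GUI client; its window opens on the laptop
xterm &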
However, these require many packages to be installed. I don't want them installed, because my Linux machines are kept small and secure.
I don't understand that requirement. On my VPS, running in some OVH datacenter, I do have X11 client applications (notably emacs). I don't believe that lowers the security of my system, and the disk space consumed by X11 applications and libraries is small enough these days. And of course I use standard commands (like cp(1), mv(1), rm(1), grep(1), find(1), less(1), file(1), sed(1), ...) to manage files. Any graphical file manager is useless (and I never use them, having used Unix since 1986).
You really should learn how to use the command line on Linux. It is incredibly powerful.
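For instance, a couple of everyday one-liners (the paths are only examples):

# List every .log file under /var/log larger than 10 MB
find /var/log -name '*.log' -size +10M
# Recursively search the ssh configuration for a setting
grep -r ListenAddress /etc/ssh/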
Can anyone tell me how I might best mirror selected files and folders to a NAS (Network Attached Storage) box from a Linux workstation in real time?
These are very large files (> 50 GB) that are being continually modified, so I would like to transfer only those portions of the files that have been changed, added or deleted.
FYI: these files are actually VirtualBox virtual hard disk (VDI) files.
I discovered that my Synology DS211J NAS can run an rsync service, so I enabled that and used lsyncd for the live mirror of the VirtualBox VMs... it all works very well.
Rsync only synchronises the parts of files that have changed and so is very efficient at synchronising large files.
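For large VM images, something along these lines is a reasonable starting point (the source path and the NAS module name are just examples; lsyncd drives rsync underneath, and similar options can usually be passed through its configuration):

# --inplace updates only the changed blocks of an existing destination file,
# which avoids rewriting a whole 50 GB copy on every sync
rsync -av --inplace "/home/user/VirtualBox VMs/" rsync://nas/backup/vms/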
Of the solutions that #awm mentioned, only drbd provides block-level, real-time synchronization. The other tools will meet your goal of only propagating deltas, but they operate asynchronously. In fact, rsync will work just as well in this case, since you're not trying to provide bi-directional synchronization.
For drbd to provide block-level replication, you need to install the drbd kernel modules and userspace tools on both the workstation and the NAS... which means this solution is only appropriate if your NAS is actually a fairly generic Linux box over which you have a great deal of control.
Beforehand, I just want to suggest that you don't do this. You can easily bottleneck your network and NAS and cause all sorts of problems on your host.
That being said, these claim they can do it:
Unison can be found at: http://www.cis.upenn.edu/~bcpierce/unison/
PeerSoft can do it too: http://www.peersoftware.com/products/peersync/peersyncserver/overview.aspx
Maybe - http://www.drbd.org/
I want to code/debug collaboratively on a shared Linux terminal (so that both parties can see what is being typed into it, and its output) with someone in a remote location. What is the best way to achieve this?
Options might be:
1. Setting up a tunnel that would allow me to VNC into my partner's system. Is this possible without using proprietary software?
2. Using a web-based terminal, which allows limited disk space to upload a source code file, and then compile/debug it (I don't even know if such a service is available).
In case option 2 is not available, what is the best way to set up a publicly accessible (over the internet, but secure) VNC server in Ubuntu?
You can achieve this by using ssh + screen.
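A minimal sketch, assuming both of you can log in to the same account on the same host (the session name and host are arbitrary):

# Person A starts a named screen session on the shared host
screen -S pairing
# Person B logs in to the same host and account...
ssh shareduser@shared-host.example.com
# ...and attaches to the same session without detaching person A
screen -x pairing
# Both terminals now show the same shell; either side can type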
I have a general "feeling" that applications open faster on Windows than on Linux. I know this is too vague/non-scientific, but if I were to compare the load time of an application, e.g. VLC, on Windows and Linux, how would I go about it? Also, I would like to study the differences in the loading mechanisms used by Windows and Linux for binaries, so any reference would be very much appreciated.
The Linux loader can give you lots of information about the binding process.
LD_DEBUG=help ls
See the ld.so(8) man page for more details.
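For example, to capture library-loading and symbol-binding activity (plus relocation statistics) for a real run, redirect the output, which goes to stderr, into a file (using vlc here only because the question mentions it):

# libs = library search, bindings = symbol resolution, statistics = relocation timing
LD_DEBUG=libs,bindings,statistics vlc 2> ld-debug.log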
To really measure this you'd need to be able to flush the file cache on each OS before measuring.
One thing that Windows does is immediately after bootup it begins loading a list of frequently used DLLs and applications into file cache. This is called SuperFetch and it works pretty well.
Linux distros sometimes have a similar list that is preloaded into file cache by a program called readahead. The problem with the Linux distros is that this list is fixed at install time and isn't automatically updated, so it usually only includes programs such as the default user desktop, web browser, email application, etc.
To flush the file cache on Linux, run the following command as root:
echo 3 > /proc/sys/vm/drop_caches
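With that, a rough cold-start versus warm-start comparison might look like the following (vlc://quit is a VLC pseudo-item that should make it exit right after starting; substitute whatever start-and-exit invocation your application supports):

sync                                # write dirty pages out first
echo 3 > /proc/sys/vm/drop_caches   # drop page/dentry/inode caches (as root)
time vlc vlc://quit                 # cold start: binary and libraries read from disk
time vlc vlc://quit                 # warm start: everything is now in the file cache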
To flush the file cache on Windows? I don't know, I will need to look.
On most operating systems today, the default is that when we install a program, it is given access to many resources that it may not need, and that its user may not intend to give it access to. For example, when one installs a closed-source program, in principle there is nothing to stop it from reading the private keys in ~/.ssh and sending them to a malicious third party over the internet, and unless the user is a security expert proficient in using tracing programs, he will likely not be able to detect such a breach.
With the proliferation of closed-source programs being installed on computers, what actions are different operating systems taking to solve the problem of sandboxing third-party programs?
Are there any operating systems designed from the ground up with security in mind, where every program or executable has to declare, in a format clearly readable by the user, what resources it requires to run, so that the OS runs it in a sandbox where it has access only to those resources? For example, an executable would have to declare that it requires access to a certain directory or file on the filesystem, that it has to reach certain domains or IP addresses over the network, that it requires a certain amount of memory, etc. If the executable's behaviour goes beyond its declared system resource requirements, the operating system should prevent it from accessing the undeclared resources.
This is the beauty of virtualization. Anyone performing testing or operating a questionable application would be wise to use a virtual machine.
Virtual Machines:
Provide the advantages of a full operating system without direct hardware access
Can crash or fail and be restarted without affecting the host machine
Are cheap to deploy and configure for a variety of environments
Are great for using applications designed for other platforms
Sandbox applications that may attempt to access other private data on your computer
With the seamless modes that virtualization programs such as VirtualBox provide, you can take advantage of a virtual machine's sandboxing in a nearly seamless fashion.
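As an illustration, a throwaway VM for trying out a questionable application can be set up entirely from the command line (the VM name, OS type, sizes and disk file are arbitrary, and exact VBoxManage options vary a little between VirtualBox versions):

# Register a new VM and give it some RAM plus NAT-only networking
VBoxManage createvm --name sandbox --ostype Ubuntu_64 --register
VBoxManage modifyvm sandbox --memory 2048 --nic1 nat
# Create a 20 GB virtual disk and attach it to a SATA controller
VBoxManage createmedium disk --filename sandbox.vdi --size 20480
VBoxManage storagectl sandbox --name SATA --add sata
VBoxManage storageattach sandbox --storagectl SATA --port 0 --device 0 --type hdd --medium sandbox.vdi
# Boot it (attach an installer ISO first, or use --type headless on a server)
VBoxManage startvm sandbox --type gui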
You have just described MAC (Mandatory Access Control) in your last paragraph.
I was always curious about that too.
Nowadays mobile OSes like Android do have sandboxing built in. When installing an app, it asks for permission to access a set of resources/features. Windows does too, as far as I know, at least to some extent. It is more permissive, though.
Ironically, Linux and others seem to be far, far behind when it comes to "software-based permissions" and are stuck in the past, which is a pity... at least, as far as I know. I would be pleased for someone to prove me wrong and show me a "usable" open-source system where application sandboxing/privileges are built in. Currently, as far as I know, permissions are solely user-based.
I think this awareness that not only users need rights to access documents, but executables also need rights to access resources, has been missing for several decades. It might have avoided the plague of viruses and security issues of our century.