On most operating systems today, the default is that when we install a program, it is given access to many resources that it may not need and that its user may not intend to give it. For example, when one installs a closed-source program, in principle there is nothing to stop it from reading the private keys in ~/.ssh and sending them to a malicious third party over the internet, and unless the user is a security expert proficient with tracing tools, they will likely not be able to detect such a breach.
With the proliferation of closed-source programs being installed on computers, what actions are different operating systems taking to solve the problem of sandboxing third-party programs?
Are there any operating systems designed from the ground up with security in mind, where every program or executable has to declare, in a format clearly readable by the user, what resources it requires to run, so that the OS runs it in a sandbox where it has access only to those resources? For example, an executable would have to declare that it requires access to a certain directory or file on the filesystem, that it needs to reach certain domains or IP addresses over the network, that it requires a certain amount of memory, etc. If the executable lies in its declaration of resource requirements, the operating system should prevent it from accessing those resources.
This is the beauty of virtualization. Anyone testing or running a questionable application would be wise to use a virtual machine.
Virtual Machines:
Provide the advantages of a full operating system without direct hardware access
Can crash or fail and be restarted without affecting the host machine
Are cheap to deploy and configure for a variety of environments
Are great for running applications designed for other platforms
Sandbox applications that may attempt to access private data on your computer
With the seamless modes that virtualization programs such as VirtualBox provide, you can take advantage of a virtual machine's sandboxing in a nearly seamless fashion.
You have just described MAC (Mandatory Access Control) in your last paragraph.
I was always curious about that too.
Nowadays, mobile OSes like Android do have sandboxing built in. When installing an app, it asks for permission to access a set of resources/features. Windows does too, as far as I know, at least to some extent, though it is more permissive.
Ironically, Linux and others seem to be far, far behind when it comes to "software-based permissions" and are stuck in the past, which is a pity... at least, as far as I know. I would be pleased for someone to prove me wrong and show me a usable open-source system where application sandboxing/privileges are built in. Currently, as far as I know, permissions are solely user-based.
I think this awareness that not only users need rights to access documents but also executables need rights to access resources has been missing for several decades. It might have avoided a plague of viruses and security issues of our century.
I'm learning about security and wondering: if I'm using a VM on any host (Windows or GNU/Linux), is it possible for someone (either on the same network or not) to gather information about my host (IP, MAC address, location, etc.)?
Is it possible only with a certain hypervisor (Hyper-V, VirtualBox, VMware, etc.) or a certain host?
I read that using Tails in a Virtual Machine is not that secure, because the host can compromise a guest and vice versa. How come?
Whether or not it is possible clearly depends on the virtual machine monitor (VMM), the host OS settings, and the security mechanisms available and used.
Ideally, on a host system that strictly adheres to the Popek and Goldberg conditions for virtualization, it is possible to write a hypervisor that provides full isolation. However, this fully applies only to the simple hypothetical machine used in that paper; it says nothing about multicore systems, networking, or timing issues.
In reality, software defects, hardware bugs/errata/oddities and, last but not least, poor configuration lead to situations where at least some information about the host can be collected from within a guest.
If both the host and the guest are on the same network segment, one can easily learn some properties of the host by, for example, using the nmap network scanner and comparing the results against known OS signatures.
If there is no network access but some other resources are shared (e.g., files in a shared virtual folder), careful study of them can reveal a lot. For instance, if you see c:\program files shared with a Linux guest, it gives a hint about the host's OS.
Certain properties of many hypervisors can be seen simply by inspecting system resources. Examples: the CPUID instruction can report "KVMKVMKVM" on KVM; the disk vendor string can tell you that it is a "Virtutech drive" for Wind River Simics; the presence of Xen's paravirtual devices is an obvious hint that the host may be running Xen; etc. Typically, commercial hypervisors do not have the goal of mimicking real hardware as closely as possible.
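To make the CPUID point concrete, here is a minimal sketch in C (GCC or Clang on x86) that checks the hypervisor-present bit and dumps the vendor signature leaf. The strings it prints depend on the VMM, and a hypervisor configured to hide itself simply won't show up here.

```c
/* Minimal sketch: query the hypervisor vendor signature that many VMMs
 * expose via CPUID leaf 0x40000000 (KVM reports "KVMKVMKVM", VMware
 * "VMwareVMware", Xen "XenVMMXenVMM", Hyper-V "Microsoft Hv", ...).
 * On bare metal the hypervisor-present bit (CPUID.1:ECX[31]) is clear. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: ECX bit 31 is the "hypervisor present" flag. */
    __cpuid(1, eax, ebx, ecx, edx);
    if (!(ecx & (1u << 31))) {
        puts("No hypervisor bit set (bare metal, or a VMM hiding itself).");
        return 0;
    }

    /* Leaf 0x40000000: EBX, ECX, EDX hold a 12-byte vendor signature. */
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    char vendor[13];
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &ecx, 4);
    memcpy(vendor + 8, &edx, 4);
    vendor[12] = '\0';
    printf("Hypervisor vendor string: %s\n", vendor);
    return 0;
}
```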
If a VMM is set up to hide such apparent giveaways, there are still numerous differences in behavior between real hardware and virtualized hardware. One of the most famous detection techniques is Red Pill, but there are many similar ones, and at least some of them are documented in academic papers available on the net.
It is hard to account for every difference in the behavior of machine instructions because of the complexity of the underlying host architecture. For example, the architecture manual for the very popular Intel 64 and IA-32 systems has more than 4000 pages describing their official behavior. Not all corner cases are apparent, easy to implement, documented, well-defined, well-studied, or free of errata. AMD's processors, which implement "the same" x86 architecture, have their own AMD64 manual, and things are not always pretty and unambiguously defined across the two books. The same applies to IBM's, ARM's, and MIPS's processors.
Going deeper, there are timing and side-channel details of computer operation which are even harder to account for when writing a hypervisor. The signal-to-noise ratio for timing analysis is lower (i.e., it is easier to make a mistake and detect real hardware as a VMM, or vice versa), but the approach is still valid. The security community has only recently started widely exploring timing-based attacks (see the Spectre/Meltdown family), and VMMs are not excluded from being potential targets of such attacks.
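As a hedged illustration of the timing point (this is not the original Red Pill, which inspects the IDT base with SIDT): on most hypervisors CPUID forces a VM exit, so timing it with RDTSC tends to show a much larger cycle count inside a guest than on bare metal. The threshold below is purely illustrative, and the measurement is noisy by nature.

```c
/* Rough timing-based check (x86, GCC or Clang). CPUID usually causes a
 * VM exit, so the measured cycle count is often far higher in a guest.
 * The 1000-cycle threshold is an arbitrary illustrative value. */
#include <stdio.h>
#include <cpuid.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    unsigned int a, b, c, d;
    unsigned long long best = ~0ULL;

    /* Take the minimum over many runs to reduce scheduling noise. */
    for (int i = 0; i < 100; i++) {
        unsigned long long t0 = __rdtsc();
        __cpuid(0, a, b, c, d);
        unsigned long long t1 = __rdtsc();
        if (t1 - t0 < best)
            best = t1 - t0;
    }

    printf("Minimum CPUID round-trip: %llu cycles\n", best);
    puts(best > 1000 ? "Looks virtualized (maybe)."
                     : "Looks like bare metal (maybe).");
    return 0;
}
```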
I read that using Tails in a Virtual Machine is not that secure, because the host can compromise a guest and vice versa. How come?
Unless something is formally proven, it cannot be relied upon 100%. Software can rarely be proven to be formally correct. In governmental/military applications, a sense of "security" can be assured by passing certain very strict certifications. Only a few specialized operating systems are certified to be free of bugs, and Tails is not one of them. Besides all the reasons above, there can simply be a bug or misconfiguration in it that allows data to leak to those who look for it.
I don't have any knowledge of the Linux/Unix environment, so to gain some understanding I am putting this question to all the developers and Unix/Linux technical people.
By applications I mean the IDEs used by developers, especially:
Visual Studio
IntelliJ IDEA Community Edition
PyCharm Community Edition
Eclipse
And other peripheral apps used by developers, gamers, and network engineers.
To some experienced Linux users, my question might seem baseless, but please consider me a beginner with Linux. Thank you in advance.
The term "application" is a very vague, fuzzy one these days. It does not describe some artifact with a certain internal structure and way how to invoke it but merely the general fact that it is something that can be "used".
Different types of applications are in wide spread use on today's systems, that is why I asked for a clarification of your usage of the term "application" in the comments. The examples you then gave are diverse though they appear comparable at first sight.
A correct and general answer to your question would be:
An application can be used in different Linux-based environments if that environment provides the necessary preconditions to do so.
So the core of your question shifts towards whether different flavors of Linux-based systems offer similar execution environments. Actually, it makes sense to extend that question to operating systems in general; from an application's point of view, the difference between today's alternatives is relatively small.
A more detailed answer has to distinguish between the different types of applications, or better, between their different preconditions. Those can be derived from the architectural platform the application is built on. The following is a bit simplified, but should express what the situation actually is:
Take for example IntelliJ IDEA and the Eclipse IDE. Both are Java-based IDEs. Java can be seen as a kind of abstraction layer that offers a very similar execution environment on different systems. Therefore both IDEs can typically be used on all systems offering such a "Java runtime environment", though differences in behavior will exist where necessary. Those differences are either programmed into the IDEs or originate from the fact that certain components (for example, file selection dialogs) are not actually part of the application but of the chosen platform. Naturally, these may look and behave differently on different platforms.
There is, however, another aspect that is important here, especially with regard to Linux-based environments: the diversity of what is today referred to as "Linux". Unlike pure operating systems like MS-Windows or Apple's Mac OS X, which both follow a centralized and restrictively controlled approach, we find differences between the various Linux flavors that go far beyond things like component versions and their availability. Freedom of choice allows for flexibility, but also results in a slightly more complex reality. Here that means different Linux flavors do indeed offer different environments:
different hardware architectures: unlike MS-Windows and Mac OS X, the system can be used not only on Intel x86-based hardware, but on a variety of maybe 120 completely different hardware architectures.
the graphical user interface (GUI or desktop environment, so windows, panels, buttons, ...) is not an integral part of the operating system in the Linux (Unix) world, but a separate add-on. That means you can choose.
the amount of base components available in installations of different Linux flavors differs vastly. For example, there are full-fledged, fat desktop flavors like openSUSE, Red Hat, or Ubuntu, but there are also minimalistic variants like Raspbian, Damn Small Linux, Puppy, or Scientific Linux, distributions specialized in certain tasks like firewalling, or even variants tailored for embedded devices like washing machines or moon rockets. Obviously they offer different environments for applications. They only share the same operating system core, the "kernel", which is all that the name "Linux" actually refers to.
...
However, given all that diversity with its positive and negative aspects, the Linux community has always been extremely clever and active and has crafted solutions to handle exactly these situations. That is why all modern desktop-targeting distributions come with a mighty software management system these days. It controls dependencies between software packages and makes sure that those dependencies are met or resolved when attempting to install some package, for example an additional IDE as in your example. So the system would take care of installing a working Java environment if you attempted to install one of the two Java-based IDEs mentioned above. That mechanism only works, however, if the package to be installed is correctly prepared for the distribution. This is where the usage of Linux-based systems differs dramatically from other operating systems: here come repositories, how to search, select, and install available and usable software packages for a system, and so on, all a bit too wide a field to be covered here.

Basically: if the producer of a package does their homework (or someone else does it for them) and correctly "packages" the product, then the dependencies are correctly resolved. If, however, the producer only dumps a raw bunch of files, maybe as a ZIP archive, and insists on a "wild" installation as typically done on MS-Windows-based systems, writing files into the local file system by handing administrative rights to some bundled "installer" script that can do whatever it wants (including breaking, ruining, or corrupting the system it is executed on), then the system's software management is bypassed and the outcome is often "broken".
However, no sane Linux user or administrator would follow such a path and install such software. That would show a complete lack of understanding of how their own system actually works and would mean abandoning all the advantages and comfort it offers.
To make a complex story simple:
An "application" usually can be used in different Linux based environments if that application is packaged in a suitable way and the requirements like runtime environment posed by the application are offered by that system.
I hope that shed some light on a non trivial situation ;-)
Have fun!
When we talk about information security, we usually think that a system that relies on secure hardware is safer than one that relies on secure software for the same security function. Why? Won't secure hardware have bugs in it as well?
Thanks
It depends upon your system. What type of system are you talking about?
A stand-alone system, a server, an application system, etc.? If you are talking about a server, for example, developing a firewall in software is not enough; we have to use various hardware devices as well and secure the server against different hazards.
When we talk about a stand-alone application, there can be a firewall, password security, and also user lock devices. So every system has its own type of security requirements.
Is it possible to create a GUI firewall that works like its Windows and Mac counterparts? On a per-program basis, with a popup notification window when a specific program wants to send/receive data over the network.
If not, then why? What does the Linux kernel lack that prevents such programs from existing?
If yes, then why aren't there such programs?
P.S. This is a programming question, not a user one.
Yes, it's possible. You will need to set up firewall rules to route traffic through a userspace daemon; it'll involve quite a bit of work.
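To give an idea of what that daemon looks like, here is a minimal sketch in C using libnetfilter_queue, assuming an iptables rule such as iptables -A OUTPUT -p tcp --syn -j NFQUEUE --queue-num 0 so that new outgoing connections are handed to userspace. A real per-program firewall would additionally map each packet back to the owning process (e.g., via /proc/net/tcp and /proc/<pid>/fd) and show a GUI prompt, which is where most of the work lies.

```c
/* Sketch of a userspace verdict daemon built on libnetfilter_queue.
 * Build with: gcc fw.c -lnetfilter_queue
 * Every queued packet reaches on_packet(), which accepts or drops it. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/netfilter.h>                      /* NF_ACCEPT, NF_DROP */
#include <libnetfilter_queue/libnetfilter_queue.h>

static int on_packet(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
                     struct nfq_data *nfa, void *data)
{
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;

    /* A real firewall would identify the owning application here and
     * consult its rule set or prompt the user. This sketch just accepts. */
    return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
}

int main(void)
{
    struct nfq_handle *h = nfq_open();
    if (!h) { fprintf(stderr, "nfq_open failed\n"); return 1; }

    struct nfq_q_handle *qh = nfq_create_queue(h, 0, &on_packet, NULL);
    if (!qh) { fprintf(stderr, "nfq_create_queue failed\n"); return 1; }

    /* Ask the kernel to copy full packets to userspace. */
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

    char buf[4096];
    int fd = nfq_fd(h);
    for (;;) {
        int n = recv(fd, buf, sizeof(buf), 0);
        if (n > 0)
            nfq_handle_packet(h, buf, n);
    }
}
```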
Because they're pretty pointless: if the user understands which programs they should block from network access, they could just as well use one of the many existing friendly netfilter/iptables frontends to configure this.
It is possible, there are no restrictions, and at least one such application exists.
I would like to clarify a couple of points though.
If I understood this article correctly, the firewalls mentioned here so far, and the iptables this question is tagged under, are packet filters that accept or drop packets depending mostly on the IP addresses and ports they come from or are sent to.
What you describe looks more like mandatory access control to me. There are several utilities for that purpose in Linux: SELinux, AppArmor, Tomoyo.
If I had to implement the graphical utility you describe, I would pick, for example, AppArmor, which supports whitelists and, to some extent, dynamic profiling, and try to build a GUI for it.
openSUSE's YaST features a graphical interface for AppArmor setup and 'learning', but it is specific to that distribution.
So Linux users and administrators have several ways to control network (and file) access on a per-application basis.
Why the graphical frontends for MAC are so few is another question. Probably it's because Linux desktop users tend to trust the software they install from repositories and have fewer reasons to control it this way (if an application is freely distributed, it has fewer reasons to call home, and packages are normally reviewed before they get into repositories), while administrators and power users are fine with the command line.
As desktop Linux gets more popular and people install more software from the AUR, PPAs, or even gnome-look.org, where packages and scripts are not reviewed that carefully (if at all), the demand for this type of software (user-friendly, simple-to-configure MAC) might grow.
To answer your third point:
There is such a program, which provides zenity popups; it is called Leopard Flower:
http://sourceforge.net/projects/leopardflower
Yes. Everything is possible.
There are real antivirus programs for Linux, so there could be firewalls with a GUI as well. But as a Linux user, I can say that such a firewall is not needed.
I reached this question as I am currently trying to migrate from a Mac to Linux. There are a lot of applications I run on my Mac and on my Linux PC. Some of them I trust fully, but others I do not. Whether or not they are installed from a source that checks them, do I have to trust them just because someone else did? No, I am old enough to choose for myself.
In times where privacy is getting more and more complicated to achieve, and distributions exist that show we should not trust everyone, I like to be in control of what my applications do. This control might not end at the connection to the network/internet, but that is what this question (and mine) is about.
I have used Little Snitch on Mac OS X over the past years, and I was surprised how often an application likes to access the internet without me even noticing: to check for updates, to call home, ...
Now that I would like to switch to Linux, I have tried to find the same thing, as I want to be in control of what leaves my PC.
During my research I found a lot of questions about this topic. This one, in my opinion, best describes what it is about. The question for me is the same: I want to know when an application tries to send or receive information over the network/internet.
Solutions like SELinux and AppArmor might be able to allow or deny such connections, but configuring them means a lot of manual work, and they do not inform you when a new application tries to connect somewhere. You have to know in advance which applications you want to deny access to the network.
The existence of Douane (How to control internet access for each program? and DouaneApp.com) shows that there is a need for an easy solution. There is even a distribution which seems to have such a feature included. I am not sure what Subgraph OS (subgraph.com) is using, but they state something like this on their website, and it reads exactly like the initial question: "The Subgraph OS application firewall allows a user to control which applications can initiate outgoing connections. When an unknown application attempts to make an outgoing connection, the user will be prompted to allow or deny the connection on a temporary or permanent basis. This helps prevent malicious applications from phoning home."
As it seems to me, there are only two options at the moment: one, compile Douane manually myself, or two, switch distribution to Subgraph OS. As one of the answers states, everything is possible, so I am surprised there is no other solution. Or is there?
I hear repeatedly that while NFS-style file systems are available on IBM mainframes, they are often not enabled, presumably to minimize the security risks of the mainframe vis-a-vis the rest of the world.
Given that I'd like to produce PC-based tools that reach out and process files on the mainframe, this makes a simple problem ("open NFS file '\mainframe\foo'") much harder; what can I count on to provide file system access in a networked environment?
(Linux systems offer NFS, and SMB via Samba, pretty much as standard, so this is easy.)
IBM offers Rational Developer for Z (RDz), an Eclipse variant used by IBM COBOL programmers.
RDz seems to have direct access to the IBM mainframe file system. What are they using to do that? Why isn't that available to me, and if it is, what is it?
RDz has a started task (a daemon in UNIX-speak) which runs on the z/OS host and accepts connections from the Eclipse plug-in. The protocol is proprietary, so you're unlikely to be able to find out much information about it.
And RDz isn't just for COBOL programmers. It's used in many shops where people want to store all their source code on the mainframe; why maintain two separate repositories? That's why it has those longname/shortname and ASCII/EBCDIC translations, to turn those ungodly Java paths into our beautifully elegant 8-character member names and allow us to read them under z/OS, although the ISPF editor's "source ascii" command has alleviated that last concern somewhat.
If you want to do a similar thing, you'll need to code up your own started task to accept incoming connections from your clients. This isn't as hard as it sounds: you'll actually be doing it in a UNIX environment, since USS (UNIX System Services, the renamed OpenMVS) comes with z/OS as part of the base operating system software, and it allows you to access both USS files and z/OS datasets/members transparently.
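For what it's worth, the skeleton of such a started task is plain POSIX socket code, since USS provides the usual UNIX APIs. The sketch below is just a generic listener (the port number and greeting are made up); the real work, meaning an agreed protocol, authentication, and ASCII/EBCDIC and data set handling, is only hinted at in the comments.

```c
/* Generic sketch of a listener that a started task could run under USS.
 * This is ordinary POSIX C, not RDz's proprietary protocol; port 4711
 * and the greeting are arbitrary placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4711);               /* arbitrary example port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 8) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        if (cli < 0)
            continue;
        /* Real work goes here: authenticate, read a request, open a USS
         * file or a z/OS data set, translate encodings, send the reply. */
        const char greeting[] = "hello from the started task\n";
        write(cli, greeting, sizeof(greeting) - 1);
        close(cli);
    }
}
```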
Then, you'll need to convince the mainframe shops that your started task is not a security risk. Let me know how that works out for you :-)
You may find it easier to just make NFS a prerequisite of your software. Then, at least, it's IBM's security problem, not yours.
RDz talks to z/OS via Remote Systems Explorer (RSE). z/OS offers SMB, NFS, FTP, and SCP, as well as other remote access methods.