Linux host specifications

This is a general good-to-know question, not directly related to programming.
I have been asked to find a Linux host that is exactly the same in specifications as our current production host.
What exactly does 'same' specification mean?
What are the parameters/factors I should compare (if possible, with commands on how to dig them up in Linux)?
I know quite a few of them already, but it would not hurt to consolidate them in one place.

'Exactly the same in specifications' means that the Linux host should have the same environment, in whatever respects are relevant to the system you're hosting in your production environment.
What is relevant depends on the system. For example, if you're testing a security system, then both environments must have the same users configured and the same firewall configuration (and all other variables affecting security). But if you're testing a system that performs heavy graphics manipulation, the firewall is probably not relevant, so it might be allowed to have a different configuration; you would, however, need the same processor and memory.
So in essence, it means that you must use your best judgement to decide what 'same specifications' means for your particular use.
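Since the question asks for commands, here is a rough checklist of what is usually compared, with the standard Linux commands that expose each item (a sketch, not an exhaustive list; output and package names vary by distribution):

    # CPU model, core count, architecture
    lscpu

    # Memory and swap
    free -h

    # Disks, partitions and filesystems
    lsblk
    df -hT

    # Distribution and kernel version
    cat /etc/os-release
    uname -a

    # Installed packages (Debian/Ubuntu vs. RHEL/CentOS)
    dpkg -l
    rpm -qa

    # Network interfaces and routing
    ip addr
    ip route

    # Running services and listening ports
    systemctl list-units --type=service --state=running
    ss -tulpn

Diffing the output of these on both hosts is a quick way to spot where the 'same specification' claim breaks down.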

Related

Assembly security

I'm currently offering an assembly compile service for some people. They can enter their assembly code in an online editor and compile it. When they compile it, the code is sent to my server with an AJAX request, gets compiled, and the output of the program is returned.
However, I'm wondering what I can do to prevent any serious damage to the server. I'm quite new to assembly myself so what is possible when they run their script on my server? Can they delete or move files? Is there any way to prevent these security issues?
Thank you in advance!
Have a look at http://sourceforge.net/projects/libsandbox/. It is designed for doing exactly what you want on a Linux server:
This project provides API's in C/C++/Python for testing and profiling simple (single process) programs in a restricted environment, or sandbox. Runtime behaviours of binary executable programs can be captured and blocked according to configurable / programmable policies.
The sandbox libraries were originally designed and utilized as the core security module of a full-fledged online judge system for ACM/ICPC training. They have since then evolved into a general-purpose tool for binary program testing, profiling, and security restriction. The sandbox libraries are currently maintained by the OpenJudge Alliance (http://openjudge.net/) as a standalone, open-source project to facilitate various assignment grading solutions for IT/CS education.
If this is a tutorial service, so the clients just need to test miscellaneous assembly code and do not need to perform operations outside of their program (such as reading or modifying the file system), then another option is to permit only a selected subset of instructions. In particular, do not allow any instructions that can make system calls, and allow only limited control-transfer instructions (e.g., no returns, branches only to labels defined within the user’s code, and so on). You might also provide some limited ways to return output, such as a library call that prints whatever value is in a particular register. Do not allow data declarations in the text (code) section, since arbitrary machine code could be entered as numerical data definitions.
Although I wrote “another option,” this should be in addition to the others that other respondents have suggested, such as sandboxing.
This method is error prone and, if used, should be carefully and thoroughly designed. For example, some assemblers permit multiple instructions on one line. So merely ensuring that the text in the first instruction field of a line was acceptable would miss the remaining instructions on the line.
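For illustration only, here is the kind of pre-filter described above, sketched in shell. The file name submission.s is made up, the blacklist is deliberately naive, and it suffers from exactly the weaknesses just mentioned (multiple instructions per line, data definitions in the text section, and so on), so treat it as a starting point rather than a defense:

    #!/bin/sh
    # Reject submissions containing mnemonics that can reach the kernel or reconfigure it.
    # A real filter should whitelist allowed instructions instead of blacklisting bad ones.
    if grep -Eiq '\b(syscall|sysenter|int|into|iret|lgdt|lidt|wrmsr)\b' submission.s; then
        echo "submission rejected: disallowed instruction" >&2
        exit 1
    fi
    # Otherwise assemble and link (GNU as/ld shown as an example toolchain)
    as -o submission.o submission.s && ld -o submission submission.o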
Compiling and running someone else's arbitrary code on your server is exactly that, arbitrary code execution. Arbitrary code execution is the holy grail of every malicious hacker's quest. Someone could probably use this question to find your service and exploit it this second. Stop running the service immediately. If you wish to continue running this service, you should compile and run the program within a sandbox. However, until this is implemented, you should suspend the service.
You should run the code in a virtual machine sandbox, because if the code is malicious, the sandbox will prevent it from damaging your actual OS. Some virtual machine platforms include VirtualBox and Xen. You could also perform some sort of signature detection on the code to search for known malicious functionality, though any form of signature detection can be beaten.
This is a link to VirtualBox's homepage: https://www.virtualbox.org/
This is a link to Xen: http://xen.org/

memory safety and security - sandboxing arbitrary program?

In some languages (Java, C# without unsafe code, ...) it is (or should be) impossible to corrupt memory - there is no manual memory management, etc. This allows them to restrict the resources available to applications (access to files, access to the network, maximum memory usage, ...) quite easily - e.g. Java applets (Java Web Start). It's sometimes called sandboxing.
My question is: is this possible with native programs (e.g. written in a memory-unsafe language like C or C++, and without having the source code)? I don't mean a simple, bypassable sandbox, or anti-virus software.
I can think of two possibilities:
run the application as a different OS user and set restrictions for that user. Disadvantage - you would need many users, one for every combination of parameters and access rights?
(somehow) limit the (OS API) functions that can be called
I don't know whether either of these possibilities allows (at least in theory) full protection, without any possibility of bypass.
Edit: I'm interested more in the theory - I don't care that e.g. some OS has some undocumented functions, or how to sandbox any particular application on a given OS. For example, I want to sandbox an application and allow only two functions: get a char from the console, put a char to the console. How would it be possible to do that unbreakably, with no possibility of bypassing?
Answers mentioned:
Google Native Client, which uses a subset of x86 - in development, together with (possibly?) PNaCl, the Portable Native Client
full VM - obviously overkill, imagine tens of programs...
In other words, could native (unsafe-memory-access) code be used within a restricted environment, e.g. in a web browser, with 100% (at least in theory) security?
Edit2: Google Native Client is exactly what I would like - any language, safe or unsafe, running at native speed, in a sandbox, even in a web browser. Everybody can use whatever language they want, on the web or on the desktop.
You might want to read about Google's Native Client which runs x86 code (and ARM code I believe now) in a sandbox.
You pretty much described AppArmor in your original question. There are quite a few good videos explaining it which I highly recommend watching.
Possible? Yes. Difficult? Also yes. OS-dependent? Very yes.
Most modern OSes support various levels of process isolation that can be used to achieve what you want. The simplest approach is to simply attach a debugger and break on all system calls, then filter these calls in the debugger. This, however, is a large performance hit, and is difficult to make safe in the presence of multiple threads. It is also difficult to implement safely on OSes where the low-level syscall interface is not documented - such as Mac OS or Windows.
The Chrome browser folks have done a lot of work in this field. They've posted design docs for Windows, Linux (in particular the SUID sandbox), and Mac OS X. Their approach is effective but not totally foolproof - there may still be some minor information leaks between the outer OS and the guest application. In addition, some of the OSes require specific modifications to the guest program to be able to communicate out of the sandbox.
If some modification to the hosted application is acceptable, Google's native client is worth a look. This restricts the compiler's code generation choices in such a way that the loader can prove that it doesn't do anything nasty. This obviously doesn't work on arbitrary executables, but it will get you the performance benefits of native code.
Finally, you can always simply run the program in question, plus an entire OS to itself, in an emulator. This approach is basically foolproof, but adds significant overhead.
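As a very rough, low-tech sketch of the 'restrict and observe' idea on Linux (not a real sandbox, and certainly not bypass-proof), you can run the program as an unprivileged user with hard resource limits, log its system calls, or let systemd apply a seccomp syscall filter. The path /path/to/untrusted is a placeholder:

    # Run as the unprivileged 'nobody' user, with CPU/memory/file-size limits and a wall-clock timeout
    sudo -u nobody bash -c 'ulimit -t 5 -v 262144 -f 1024; timeout 10s /path/to/untrusted'

    # Log every system call the program (and its children) makes, for later inspection
    strace -f -o untrusted.trace /path/to/untrusted

    # Kernel-enforced syscall filtering via a transient systemd unit (seccomp under the hood)
    systemd-run --pipe -p DynamicUser=yes -p SystemCallFilter=@system-service /path/to/untrusted

None of this gives the theoretical, unbreakable guarantee the question asks about; it only narrows what the process can do.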
Yes this is possible IF the hardware provides mechanisms to restrict memory accesses. Desktop processors usually are equipped with an MMU and access levels, so the OS can employ these to deny access to any memory address a thread should not have access to.
Virtual memory is implemented by the very same means: any access to memory currently swapped out to disk is trapped, the memory fetched from disk and then the thread is continued. Virtualization takes it a little farther, because it also traps accesses to hardware registers.
All the OS really needs to do is use those features properly, and it will be impossible for any code to break out of the sandbox. Of course, this is much easier said than done, mostly because the OS takes liberties in favor of performance, there are oversights in what certain OS calls can be used to do, and, last but not least, there are bugs in the implementation.

What is sandboxing?

I have read the Wikipedia article, but I am not really sure what it means, and how similar it is to version control.
It would be helpful if somebody could explain in very simple terms what sandboxing is.
A sandpit or sandbox is a low, wide container or shallow depression filled with sand in which children can play. Many homeowners with children build sandpits in their backyards because, unlike much playground equipment, they can be easily and cheaply constructed. A "sandpit" may also denote an open pit sand mine.
Well, a software sandbox is no different from a sandbox built for a child to play in. By providing a sandbox to a child we simulate the environment of a real playground (in other words, an isolated environment), but with restrictions on what the child can do - because we don't want the child to get hurt or to cause trouble for others. :) Whatever the reason, we just want to put restrictions on what the child can do, for security reasons.
Now, coming to our software sandbox: we let any software (child) execute (play), but with some restrictions on what it (he) can do, so that we can feel safe and secure about what the executing software can do.
You've seen and used antivirus software, right? It is also a kind of sandbox. It puts restrictions on what any program can do. When malicious activity is detected, it stops the program and informs the user that "this application is trying to access such-and-such resources. Do you want to allow it?".
Download a program named Sandboxie and you can get hands-on experience with a sandbox. Using this program you can run any program in a controlled environment.
The red arrows indicate changes flowing from a running program into your computer. The box labeled Hard disk (no sandbox) shows changes by a program running normally. The box labeled Hard disk (with sandbox) shows changes by a program running under Sandboxie. The animation illustrates that Sandboxie is able to intercept the changes and isolate them within a sandbox, depicted as a yellow rectangle. It also illustrates that grouping the changes together makes it easy to delete all of them at once.
Now, from a programmer's point of view, a sandbox restricts the API calls that are allowed to the application. In the antivirus example, we are limiting system calls (the operating system API).
Another example would be online coding arenas like TopCoder. You submit code (a program), but it runs on the server. For the safety of the server, they have to limit the level of API access the program has. In other words, they need to create a sandbox and run your program inside it.
If you have a proper sandbox, you can even run a virus-infected file, stop all the malicious activity of the virus, and see for yourself what it is trying to do. In fact, this is often the first step an antivirus researcher takes.
This definition of sandboxing basically means having test environments (developer integration, quality assurance, stage, etc). These test environments mimic production, but they do not share any of the production resources. They have completely separate servers, queues, databases, and other resources.
More commonly, I've seen sandboxing refer to something like a virtual machine -- isolating some running code on a machine so that it can't affect the base system.
For a concrete example: suppose you have an application that deals with money transfers. In the production environment, real money is exchanged. In the sandboxed environment, everything runs exactly the same, but the money is virtual. It's for testing purposes.
Paypal offers such a sandboxed environment, for example.
For the "sandbox" in software development, it means to develop without disturbing others in an isolated way.
It is not similiar to version control. But some version control (as branching) method can help making sandboxes.
More often we refer to the other sandbox.
In anyway, sandbox often mean an isolated environment. You can do anything you like in the sandbox, but its effect won't propagate outside the sandbox. For instance, in software development, that means you don't need to mess with stuff in /usr/lib to test your library, etc.
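As a small, concrete example of that last point (the library name and prefix are made up): build and install the library into a private prefix instead of the system directories, and point the test build at that copy:

    # Install into a per-user "sandbox" prefix instead of /usr/lib
    ./configure --prefix="$HOME/sandbox"
    make && make install

    # Build a test program against the sandboxed copy and run it
    cc -I"$HOME/sandbox/include" -L"$HOME/sandbox/lib" -o testprog testprog.c -lmylib
    LD_LIBRARY_PATH="$HOME/sandbox/lib" ./testprog

Delete $HOME/sandbox and the experiment is gone; nothing outside it was touched.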
A sandbox is an isolated testing environment that enables users to run programs or execute files without affecting the application, system, or platform on which they run. Software developers use sandboxes to test new programming code. Cybersecurity professionals in particular use sandboxes to test potentially malicious software. Without sandboxing, an application or other system process could have unlimited access to all the user data and system resources on a network.
Sandboxes are also used to safely execute malicious code to avoid harming the device on which the code is running, the network, or other connected devices. Using a sandbox to detect malware offers an additional layer of protection against security threats, such as stealthy attacks and exploits that use zero-day vulnerabilities.

Hardened BSD from Scratch

I am aware of the Hardened Linux from Scratch project, which provides step-by-step instructions for building your own customized and hardened Linux system entirely from source. I would like to know: what is the equivalent in BSD?
As Richard said, OpenBSD is definitely worth a go; it is my #1 choice for everything dedicated to firewalls and gateways. For other services I tend to stick to FreeBSD, although there is no obvious reason for it - just personal preference.
But I would like to point out that if you want more secure hosting of a service, the 'from scratch' part of the concept can be done much better using jails. In essence, you create a limited FreeBSD environment on top of a full FreeBSD install. Into that limited environment you only copy/link the binaries and files that the service requires to run.
Because the hosted service has no access to any other files/binaries, all the potential security flaws in those things aren't open to exploitation. If by chance your application gets 'rooted', it will not go beyond the boundaries of the jail.
See it as a sandbox on steroids with negligible performance penalties.
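For anyone who has not set one up before, here is a minimal sketch of what a jail definition looks like in /etc/jail.conf on a reasonably recent FreeBSD (the name, path, hostname and address below are made up, and the jail's userland is assumed to already be installed under the given path):

    # /etc/jail.conf
    www {
        path = "/usr/jails/www";
        host.hostname = "www.example.org";
        ip4.addr = "192.0.2.10";
        exec.start = "/bin/sh /etc/rc";
        exec.stop  = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }

Start it with 'service jail start www' (or 'onestart' if jails are not enabled in rc.conf), list running jails with 'jls', and get a shell inside with 'jexec www /bin/sh'.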
OpenBSD is hardened "by default" from the installation. Only the admin opens it up... component by component.
[UPDATE] While I have not read the document for hardening Linux, some of the same things might apply... for example, they both use OpenSSH, so the strategies there would be the same. Where there is module overlap, the same advice would apply.
You don't really do BSD 'from scratch'. All of the major projects come with a complete system in a single source repository, so you're not grabbing a kernel from here, binutils and a compiler from over there, and C libraries, standard utilities and X from yet other places.
It is generally easier to get all the source and rebuild the entire system than with your average Linux distro, but that's not really customizing anything.
You could try to do something nuts, like perhaps trying to get the OpenBSD userland to run on a NetBSD kernel with FreeBSD ports, but you'd be on your own and it certainly wouldn't be 'hardened'.
HardenedBSD is a fork of the FreeBSD project with the aim of implementing PIE, RELRO, SafeStack and CFI hardening. Some goals have been reached; others are still very much work in progress. I wouldn't consider it 'ready for production' yet, but it is usable as a desktop (this also depends on your production environment's requirements).
Repo: https://github.com/HardenedBSD
Everything, including "make buildworld/buildkernel", is the same as on FreeBSD, and the Handbook does a good job of explaining this. You'll have a bit of reading to do, though, even coming from Linux-land. Building your own ports is an entire topic in itself.
Re jails, the statement above is not entirely correct. While they certainly add an important security layer, Unix systems (I don't know about Linux) [quoting here] "lack kernel exploit mitigations. If an attacker gains access to a jail, it's not too much work to pivot to other jails or escalate privileges via a kernel exploit." Don't misunderstand me - I place almost every service in a jail as much as possible.
As to the "hardened by default" comment: it's all in the sysctl settings, which can be tweaked on every *BSD flavor, but security measures are pretty much useless if the sysadmin does not take the time to read the docs (a few examples are sketched below).
If you are interested, your homework: https://www.freebsd.org/doc/handbook/
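To make the sysctl point above concrete, here are a few of the security-related knobs that exist on FreeBSD (and therefore on HardenedBSD). This is a sketch, not a recommended baseline - read what each one does before changing it:

    # Show the current value of a knob
    sysctl security.bsd.see_other_uids

    # Hide other users' processes from unprivileged users
    sysctl security.bsd.see_other_uids=0
    sysctl security.bsd.see_other_gids=0

    # Restrict reading the kernel message buffer and debugging other processes
    sysctl security.bsd.unprivileged_read_msgbuf=0
    sysctl security.bsd.unprivileged_proc_debug=0

    # Persist a setting across reboots
    echo 'security.bsd.see_other_uids=0' >> /etc/sysctl.conf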

Best security practices in Linux

What security best-practices would you strongly recommend in maintaining a Linux server? (i.e. bring up a firewall, disable unnecessary services, beware of suid executables, and so on.)
Also: is there a definitive reference on SELinux?
EDIT: Yes, I'm planning to put the machine on the Internet, with at least openvpn, ssh and apache (at the moment, without dynamic content), and to provide shell access to some people.
For SELinux I've found SELinux by Example to be really useful. It goes quite in-depth into keeping a server as secure as possible and is pretty well written for such a wide topic.
In general though:
Disable anything you don't need. The wider the attack surface, the more likely you'll have a breach (see the sketch after this list).
Use an intrusion detection system (IDS) layer in front of any meaningful servers.
Keep servers in a different security zone from your internal network.
Deploy updates as fast as possible.
Keep up to date on 0-day attacks for your remotely-accessible apps.
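As a quick way to act on the first point on a systemd-based distribution (the service name in the last line is only an example):

    # What is currently listening on the network?
    ss -tulpn

    # Which services are enabled to start at boot?
    systemctl list-unit-files --type=service --state=enabled

    # Stop and disable something you don't need
    systemctl disable --now cups.service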
The short answer is, it depends. It depends on what you're using it for, which in turn influences how much effort you should put into securing the thing.
There are some handy hints in the answers to this question:
Securing a linux webserver for public access
If you're not throwing the box up onto the internet, some of those answers won't be relevant. If you're throwing it up onto the internet and hosting something even vaguely interesting on it, those answers are far too laissez-faire for you.
There's an NSA document "NSA Security Guide for RHEL5" available at:
http://www.nsa.gov/ia/_files/os/redhat/rhel5-guide-i731.pdf
which is pretty helpful and at least systematic.
Limit the installed software to only what you really use
Limit the rights of the users, through sudo, ACLs, kernel capabilities and SELinux/AppArmor/PaX policies
Enforce the use of strong passwords (no dictionary words, no birth dates, etc.)
Make LXC containers, chroots or vserver jails for the "dangerous" applications
Install some IDS, e.g. Snort for the network traffic and OSSEC for the log analysis
Monitor the server
Encrypt your sensitive data (TrueCrypt is a gift of the gods)
Patch your kernel with grsecurity: this adds a really nice level of paranoia
That's more or less what I would do.
Edit: I added some ideas that I previously forgot to mention...
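Related to the question's mention of suid executables and the 'limit the rights of the users' item above, a quick audit sketch (the path in the last line is only an example):

    # List all setuid/setgid binaries on the root filesystem; review anything unexpected
    find / -xdev \( -perm -4000 -o -perm -2000 \) -type f -exec ls -l {} \; 2>/dev/null

    # Strip the setuid bit from a binary that you decide should not have it
    chmod u-s /usr/bin/some_binary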
1.) Enable only necessary and relevant ports.
2.) Regularly scan the network traffic in and out.
3.) Regularly scan the IP addresses accessing the server and verify, from logs/traces, whether any unusual data activity is associated with those IP addresses.
4.) If some critical and confidential data or code needs to be present on the server, consider encrypting it.
-AD
Goals:
The hardest part is always defining your security goals. Everything else is relatively easy at that point.
Probing/research:
Consider the same approach that attackers would take, i.e. network reconnaissance (nmap is pretty helpful for that).
More information:
SELinux by Example is a helpful book; finding a good centralized source of SELinux information is still hard.
I have a small list of links that I find useful from time to time: http://delicious.com/reverand_13/selinux
Helpful solution/tools:
As most people will say, less is more. For an out-of-the-box, stripped-down system with SELinux, I would suggest CLIP (http://oss.tresys.com/projects/clip). A friend of mine used it in an academic simulation in which the box was under direct attack from other participants. I recall the story concluded very favorably for said box.
You will want to become familiar with writing SELinux policy. Modular policy can also be helpful. Tools such as SLIDE and seedit (which I have not tried) may help you.
Don't run a DNS server unless you have to. BIND has been a hotspot of security issues and exploits.
Hardening a Linux server is a vast topic, and it primarily depends on your needs.
In general, you need to consider the following groups of concern (I'll give example of best practices in each group):
Boot and Disk
Ex1: Disable booting from external devices.
Ex2: Set a password for the GRUB bootloader - Ref.
File system partitioning
Ex1: Separate user partitions (/home, /tmp, /var) from OS partitions.
Ex2: Set nosuid on those partitions, in order to prevent privilege escalation via the setuid bit (see the sketch after this list).
Kernel
Ex1: Update security patches.
Ex2: Read more in here.
Networking
Ex1: Close unused ports.
Ex2: Disable IP forwarding.
Ex3: Disable send packet redirects.
Users / Accounts
Ex1: Enforce strong passwords (hashed with SHA-512).
Ex2: Set up password aging and expiration.
Ex3: Restrict users from reusing old passwords.
Auditing and logging
Ex1: Configure auditd - ref.
Ex2: Configure logging with journald - ref.
Services
Ex1: Remove unused services like FTP, DNS, LDAP, SMB, DHCP, NFS, SNMP, etc.
Ex2: If you're using a web server like Apache or nginx - don't run them as root - read more here.
Ex3: Secure SSH ref.
Software
Make sure you remove unused packages.
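A few of the examples above, expressed as commands. This is a sketch for a typical modern distribution - verify the exact paths, defaults and package manager for yours (the user name and device in the examples are placeholders):

    # Networking (Ex2/Ex3): disable IP forwarding and sending of ICMP redirects
    sysctl -w net.ipv4.ip_forward=0
    sysctl -w net.ipv4.conf.all.send_redirects=0
    # persist these in /etc/sysctl.conf or /etc/sysctl.d/

    # Users/Accounts (Ex2): password aging for an existing account
    chage --maxdays 90 --mindays 7 --warndays 14 alice
    chage -l alice    # review the result

    # Partitioning (Ex2): mount /tmp with nosuid (often also nodev) via /etc/fstab
    # /dev/sdXN  /tmp  ext4  defaults,nosuid,nodev  0 2

    # Software: review installed packages and remove what you don't use
    apt list --installed     # Debian/Ubuntu
    rpm -qa                  # RHEL/CentOS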
Read more:
https://www.computerworld.com/article/3144985/linux-hardening-a-15-step-checklist-for-a-secure-linux-server.html
https://www.tecmint.com/linux-server-hardening-security-tips/
https://cisofy.com/checklist/linux-security/
https://www.ucd.ie/t4cms/UCD%20Linux%20Security%20Checklist.pdf
https://www.cyberciti.biz/faq/linux-kernel-etcsysctl-conf-security-hardening/
https://securecompliance.co/linux-server-hardening-checklist/
Now specifically for SELinux:
First of all, make sure that SELinux is enabled on your machine.
Continue with the following guides:
https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts
https://linuxtechlab.com/beginners-guide-to-selinux/
https://www.computernetworkingnotes.com/rhce-study-guide/selinux-explained-with-examples-in-easy-language.html
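For the first step above (making sure SELinux is actually enabled), a quick sketch on a Red Hat-family system:

    # Current status and mode
    sestatus
    getenforce

    # Switch between enforcing and permissive at runtime (does not survive a reboot)
    setenforce 1    # Enforcing
    setenforce 0    # Permissive

    # Make the mode persistent: set SELINUX=enforcing in /etc/selinux/config
    grep ^SELINUX= /etc/selinux/config

    # When something is blocked, check recent denials (requires auditd)
    ausearch -m AVC -ts recent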

Resources