Allow RDP to public webserver?

Is it a huge security flaw to allow users to connect to your server via Remote Desktop? Right now I have a setup where I only allow a couple of IP addresses to connect via the RDP port, but I am thinking of removing this restriction and allowing all IPs to connect, so I can RDP from my iPhone if there is some problem while I'm not at home.
So as long as I have a secure password, do you think this is a bad idea? Is there anything else I can do to make it a bit more secure but still be able to connect from "wherever"? Is it, for example, possible to set up a page that I must visit that "allows anyone to log in for 2 hours"? Some kind of security-by-obscurity thingy?
Thankful for any help I can get.

Maybe you should post this question on Server Fault. But anyway:
If you are using only a username/password as the access method, then it will be very easy for an attacker to lock out your user (or all users; they don't even need terminal access rights). So yes, it would be a huge security flaw. There are lots of ways to protect against this threat and still make RDP available from wherever, but I am not familiar with any of them.

It's very common to implement two-factor authentication for any remote access to corporate servers. In many companies you'll see RSA tokens used as the second factor, although I prefer SMS; it doesn't matter which, as long as you have two of the three factors in play: something you know, something you have, something you are.
If your company doesn't want to implement a second factor, then I still wouldn't recommend a publicly exposed RDP interface. It's open to brute-force attacks, OS exploits, or just plain old denial of service (if I blast your public interface with traffic, it will slow down legitimate machine use within your company). At a minimum I would look into tunneling over SSH, perhaps with client-side certificate authentication, or I would implement port knocking to reach the server interface in the first place.
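For the SSH-tunneling suggestion, the classic form is ssh -L 13389:localhost:3389 user@server, after which you point the RDP client at localhost:13389 and never expose TCP 3389 itself. Below is a rough sketch of the same idea in Python using the third-party sshtunnel package; the host name, key path and ports are placeholders, not anything from the question.

    # Sketch: forward a local port to the server's RDP port over SSH,
    # so TCP 3389 never has to be exposed to the internet directly.
    # Requires the third-party "sshtunnel" package (pip install sshtunnel).
    from sshtunnel import SSHTunnelForwarder

    # All values below are placeholders for illustration.
    tunnel = SSHTunnelForwarder(
        ("server.example.com", 22),               # public SSH endpoint
        ssh_username="admin",
        ssh_pkey="/path/to/id_ed25519",           # key-based auth, no passwords
        remote_bind_address=("127.0.0.1", 3389),  # RDP on the server itself
        local_bind_address=("127.0.0.1", 13389),  # local port for the RDP client
    )

    tunnel.start()
    print("Point your RDP client at 127.0.0.1:%d" % tunnel.local_bind_port)
    # ... run your RDP session, then:
    # tunnel.stop()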

It is a security flaw, but not such a huge one. The traffic is encrypted, so recovering a username or password from it is not as immediate as with plain-text protocols such as FTP. It is just a little bit less secure than SSH.
It obviously has the same weaknesses as any other remote access (possible brute-force or DoS attacks). You should also use a non-default account name to avoid making the attacker's job easier.
Your idea of opening access only after visiting some page is not bad either. It looks like a variant of the classic port-knocking mechanism (but be careful not to open a bigger hole in the process).
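If you want to experiment with the "visit a page to open access for 2 hours" idea from the question, here is a rough sketch of what it could look like on a Windows box: a tiny HTTP listener on an unrelated port that, when a secret path is requested, adds a Windows Firewall rule scoped to the visitor's IP and removes it two hours later. The path, rule name and listener port are made up for illustration, and the script needs administrative rights; treat it as a starting point, not a hardened solution.

    # Sketch: "knock" page that temporarily allows the visitor's IP to reach RDP.
    # Assumes Windows Firewall (netsh advfirewall) and administrative rights.
    # The secret path and listener port below are invented examples.
    import subprocess
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SECRET_PATH = "/open-sesame-2h"   # hypothetical; pick something unguessable
    RULE_NAME = "Temp RDP allow"
    OPEN_SECONDS = 2 * 60 * 60

    def allow_rdp(ip):
        # Add an inbound allow rule for TCP 3389, limited to the visitor's IP.
        subprocess.run([
            "netsh", "advfirewall", "firewall", "add", "rule",
            "name=" + RULE_NAME, "dir=in", "action=allow",
            "protocol=TCP", "localport=3389", "remoteip=" + ip,
        ], check=True)
        # Schedule removal of the rule after two hours.
        threading.Timer(OPEN_SECONDS, remove_rule).start()

    def remove_rule():
        # Deletes every rule with this name (fine for a single-user sketch).
        subprocess.run([
            "netsh", "advfirewall", "firewall", "delete", "rule",
            "name=" + RULE_NAME,
        ], check=False)

    class KnockHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == SECRET_PATH:
                allow_rdp(self.client_address[0])
                body = b"RDP opened for your IP for 2 hours."
            else:
                body = b"Nothing here."
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8321), KnockHandler).serve_forever()  # port is arbitrary

The obvious caveat: anyone who learns the path gets the same two-hour window, so this only supplements a strong password and lockout policy, it does not replace them.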

Azure Windows Virtual Machine RDP security

Hello & many thanks in advance.
I'm a complete beginner with Azure and have followed the tutorial in creating a VM. I access it via RDP.
I switched on the event logger and I can see that there are multiple attempts to log in to the admin account every couple of seconds or so.
Just wondering if there is a way to secure against this?
Thanks
Will
There are multiple ways to secure it; take a look at the link below:
https://learn.microsoft.com/en-us/azure/security/fundamentals/iaas#restrict-direct-internet-connectivity
There are several different things to consider here. First, we should identify what specifically makes RDP a favorite target of cyberattacks.
The biggest known weakness of RDP is that it requires open ports, the default values of which are widely known. This is why you are seeing all those login attempts: cybercriminals are constantly scanning port 3389 on every known IP address looking for a vulnerability. The best thing you can do here is to change the default port to something else.
Secondly, RDP password requirements are often not enforced. A McAfee report found that the most common passwords for vulnerable RDP services were "123456" and "password."
Finally, and perhaps most frustrating, RDP is just a really old protocol that was not designed for the modern internet. There is a laundry list of known RDP vulnerabilities, many of which organizations simply neglect to address.
There are a few things you can do as a savvy admin:
Change your RDP port
Put your RDP server behind a firewall and/or a VPN
Enable strong password requirements
Enable multi-factor authentication
Apply all available security patches
Use a modern zero-trust access service like Twingate, Perimeter 81, or Zscaler to limit both access and discoverability by unauthorized users.
I tried to cover these topics in a blog post I wrote for my company (Twingate), which provides a fairly good summary of the situation and some other ideas to secure your RDP server. Hope this is helpful!
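As a concrete illustration of the first item in the list above, the port RDP listens on is stored in the registry under HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber. A sketch of reading and changing it from Python is below; the new port number is just an example, the script must run with administrative rights, and you still need to add a matching firewall rule and restart the Remote Desktop service (or reboot) for the change to take effect.

    # Sketch: read and change the RDP listening port in the Windows registry.
    # Run with administrative rights; restart the Remote Desktop service (or
    # reboot) afterwards, and open the new port in the firewall.
    import winreg

    KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"
    NEW_PORT = 33890  # example value only

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
        current, _ = winreg.QueryValueEx(key, "PortNumber")
        print("Current RDP port:", current)
        winreg.SetValueEx(key, "PortNumber", 0, winreg.REG_DWORD, NEW_PORT)
        print("RDP port changed to", NEW_PORT)

Keep in mind that moving the port mainly cuts down the background scanning noise; combine it with the other items above rather than relying on it by itself.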

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where should I look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over many years and still be hacked one day (I speak from experience), or you can pay a lot for a security solution and never be hacked. So you always need to weigh the effort against the impact.
Basically, in your case you need to protect four "doors":
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not via HTTP; for example, through an SSH port with a weak password, or by someone taking your computer or hard disk. In some cases you need to properly encrypt the volume.)
Points 1 and 4 are not specific to Prolog, and point 4 is the only one with anything specific to local servers.
Protecting at the HTTP protocol level means not allowing requests that could take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy such as nginx, which can block attacks at this level, including some types of DoS. The browser then talks to nginx, and nginx forwards the request to your server only if it is a well-formed HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL certificate and enable SSL (HTTPS) in the reverse proxy, not in your SWI-Prolog server. HTTPS encrypts all information on the wire; the proxy then communicates with SWI-Prolog over plain HTTP.
Think about authorization. Some schemes can be broken very easily. You need to study this topic; there is a lot of information available. I think it is the most important part.
The application-input problem: the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to clean up all possible injections; take existing code and adapt it to Prolog.
Also, test all input fields with very long strings, different character sets, and so on.
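The answer above is about a SWI-Prolog server, but the input-checking idea is language independent. Here is a small Python sketch of the kind of "entry" procedure it describes; the length limit, character whitelist and field names are invented for illustration.

    # Sketch: defensive validation of form input before it reaches the
    # application logic. Limits and field names are examples only.
    import re

    MAX_LEN = 200
    NAME_RE = re.compile(r"[A-Za-z0-9 ,.'-]+")   # whitelist, not blacklist

    def clean_field(value: str) -> str:
        """Reject oversized or unexpected input instead of trying to 'fix' it."""
        if len(value) > MAX_LEN:
            raise ValueError("input too long")
        if not NAME_RE.fullmatch(value):
            raise ValueError("input contains unexpected characters")
        return value

    # Example usage with a fake form dict:
    form = {"name": "Ada Lovelace", "comment": "x" * 5000}
    for field, value in form.items():
        try:
            clean_field(value)
        except ValueError as err:
            print(f"rejected {field}: {err}")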
As you can see, security is not easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically determined to get your information, all of the methods mentioned matter, but that is the rare case. Most often, hackers just scan the internet and try known exploits against every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of an attacker being interested in SWI-Prolog internals is extremely low (an attacker would have to study the server code carefully to find a way in).
So I think you will find adequate methods to protect all sensitive data.
But please never use passwords built from combinations of dictionary words, and never reuse the same password for more than one purpose; that is the most important rule of security. For the same reason you shouldn't give your users access to all information; protection should be part of the application-level design.
The concerns specific to a local server are a good firewall, a proper network setup, and encryption of the hard drive partition in case the machine itself can be stolen.
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort: mainly you need to check your router/firewall setup and the fourth "door" in my list.
If you have a very limited number of known users, you can simply ask them to use a VPN rather than hardening your server as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog-injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
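The precaution Anne describes, inspecting the request and refusing anything that did not come from localhost, carries over to other HTTP stacks as well. A minimal Python sketch of the same check (not the SWI-Prolog code she refers to) could look like this:

    # Sketch: serve only requests that originate from the local machine,
    # mirroring the "reject anything but localhost" precaution above.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class LocalOnlyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            client_ip = self.client_address[0]
            if client_ip not in ("127.0.0.1", "::1"):
                self.send_error(403, "local access only")
                return
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello from the local GUI server\n")

    if __name__ == "__main__":
        # Binding to 127.0.0.1 (rather than 0.0.0.0) is the stronger measure:
        # the socket is then never reachable from other machines at all.
        HTTPServer(("127.0.0.1", 8080), LocalOnlyHandler).serve_forever()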
I think the SWI-Prolog HTTP package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have the power of Prolog at your fingertips...
Of course, you need to properly manage HTTP access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this topic, in which Anne Ogborn shares her experience...

Protecting web-server from ssh brute force attacks

I have a server acting as a reverse-proxy connected directly to the internet. I access this computer through ssh on a non-standard port. It doesn't seem too secure. If someone found the ssh port they could brute-force it and gain access to the computer.
Is there a more secure way to configure this?
Brute-forcing SSH is very slow and time consuming by design. With OpenSSH (most implementations are similar) there is a delay of a couple of seconds after submitting an incorrect password, and after three failures the connection is dropped. This makes it unbelievably slow to brute-force a password of even moderate entropy. Generally, brute forcing of SSH is not a problem as long as your server doesn't have a ton of threads accepting connections and your password is sufficiently complex.
Nevertheless, @demure's suggestion to use denyhosts or sshguard would not only put your mind at ease, it would also help you detect attacks and take action if necessary. @Steve's suggestion to use public key authentication is also an excellent preventive measure. If you're truly paranoid, consider requiring both a password and public key authentication to achieve two-factor authentication: something you have and something you know.
I would recommend a tool like denyhosts or sshguard, which watch your logs for failed logins and auto-ban IPs based on the rules you set.
I use fail2ban, and I also wrote a utility myself in C# (using Mono) that adds the IP of anyone attacking my SSHD to /etc/hosts.deny as well. Now I can see there are some persistent attackers whose IP addresses keep getting refused; they always come back the next day, though. DigitalOcean has a good tutorial for setting up fail2ban here.
Edit: I changed my login from username/password to certificate-based authentication, and that seems to have reduced most of the attacks, but some people even try to brute-force the certificate. I have since updated my code based on the new logs, and that seems to be working fine.
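For readers who would rather not pull in C#/Mono, here is a rough Python sketch of the same idea described above: scan the auth log for failed SSH logins and append repeat offenders to /etc/hosts.deny. The log path, threshold and log-line pattern are assumptions, and tools like fail2ban, denyhosts and sshguard do all of this far more robustly.

    # Sketch: count failed SSH logins per IP in the auth log and block
    # repeat offenders via /etc/hosts.deny. Paths, threshold and the log
    # line format are assumptions; prefer fail2ban/sshguard in practice.
    import re
    from collections import Counter

    AUTH_LOG = "/var/log/auth.log"      # /var/log/secure on some distros
    HOSTS_DENY = "/etc/hosts.deny"
    THRESHOLD = 5

    FAILED_RE = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

    def find_offenders():
        counts = Counter()
        with open(AUTH_LOG) as log:
            for line in log:
                match = FAILED_RE.search(line)
                if match:
                    counts[match.group(1)] += 1
        return [ip for ip, n in counts.items() if n >= THRESHOLD]

    def block(ips):
        with open(HOSTS_DENY) as f:
            already = f.read()
        with open(HOSTS_DENY, "a") as f:
            for ip in ips:
                if ip not in already:
                    f.write(f"sshd: {ip}\n")

    if __name__ == "__main__":
        block(find_offenders())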
Brute-force SSH attacks can suck the resources from a server. The problem can be especially exasperating on low-powered servers with a minimal amount of processor (CPU) power and memory (RAM).
We developed a solution that downloads IP address black lists of known SSH attackers and adds them to the /etc/hosts.deny file.
The solution is released as an open-source project named am-deny-hosts on GitHub. The project consists of a set of shell scripts that do the job without taking up a lot of time, CPU, or memory.
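The core of that approach fits in a few lines. The sketch below shows the general shape in Python rather than shell; the blacklist URL is a placeholder, not one the am-deny-hosts project actually uses.

    # Sketch: fetch a published list of known SSH attackers and merge it
    # into /etc/hosts.deny. The URL below is a placeholder, not the one
    # the am-deny-hosts project uses.
    import urllib.request

    BLACKLIST_URL = "https://example.com/ssh-attackers.txt"  # placeholder
    HOSTS_DENY = "/etc/hosts.deny"

    def fetch_blacklist(url):
        with urllib.request.urlopen(url, timeout=30) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        # Skip blank lines and comments; assume one address per line.
        return [line.strip() for line in text.splitlines()
                if line.strip() and not line.startswith("#")]

    def merge_into_hosts_deny(ips):
        with open(HOSTS_DENY) as f:
            existing = set(f.read().splitlines())
        new_lines = [f"sshd: {ip}" for ip in ips if f"sshd: {ip}" not in existing]
        with open(HOSTS_DENY, "a") as f:
            if new_lines:
                f.write("\n".join(new_lines) + "\n")

    if __name__ == "__main__":
        merge_into_hosts_deny(fetch_blacklist(BLACKLIST_URL))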

Shared SSL - Better or worse than resorting to OpenID?

I am working on a project that requires user login/registration. I'd like to avoid setting up a private SSL certificate, since I am using a shared hosting provider and would like to host multiple domains off of the same plan (but since a private SSL certificate requires a dedicated IP, I can only have one certificate per plan, and I would still like to secure all of my sites).
I am debating between
resorting to OpenID (although for a non-technical audience all the complaints I found on SO would be further multiplied)
using my host's shared SSL (which will pop up those annoying certificate warnings in the browser saying that the sites don't match).
Which seems like the better option? Or would you suggest running away from both, sucking it up, and purchasing additional/better hosting plans?
From my experience in dealing with SO and a fairly simple site using Google App Engine (and their authentication system), I'd give the following advice:
Do NOT use OpenID for identification. It can work for authentication with your own identity management, but there are issues as soon as you try to identify a specific user.
It's amazing how many OpenIDs people will have, so be prepared to support multiple OpenID auth URLs (definitely more than 1, probably more than 2).
If high security is a requirement, be very wary of OpenID. Many people will use providers that they normally only use for low-security tasks (and therefore have weak passwords). This particular issue struck Jeff Atwood directly (his account was stolen due to exactly this mistake)!
Keep things simple for your users. If you do go with OpenID, emphasize one or two providers that they likely already have (eg, Google), and then provide a deemphasized selection for generic providers. Don't make the more simple-minded users think about OpenID.
Along with that thinking, a simple "Login with your Google Account" button works surprisingly well. I thought people would find it confusing to login to a third party site with their google account, but in practice this has not been a problem with our .appspot.com domain.
The bottom line is that you shouldn't expect your users to prefer OpenID, but it can be an acceptable compromise. I don't think that showing an invalid certificate is a reasonable option for many end-users.
Of course, the separate-certificates option is the cleanest, but you have to decide if that's really worth it for the value gained. I'm a cheapskate and would tend to avoid it myself. :)
Why not roll your own from the ground up? If your database is accessible from each domain, you could keep one user store that every domain could access.
Is there a particular reason you do not want to create your own user model? It's easy to do, but you may have other factors leaning you towards something like OpenID that I am not aware of.
If you use the shared SSL's URL, you shouldn't get the popups; that's the whole point of shared SSL. What you lose is the identity of your site's URL when the user jumps to the secure connection.
I would talk to your hosting provider about your options when it comes to private SSL. They're really not that expensive (even free if you're ok with poor IE support). I've been with shared providers in the past that would allocate you a dedicated IP for use with SSL for a tiny extra fee (like $2/mo).
To me, the extra $54 per year ($30 for the cert + $24 for the IP) was well worth the peace of mind for me and my users.

Client Server socket security

Assume we have a server S and a few clients (C), and whenever a client updates the server, an internal database on the server is updated and replicated to the other clients. This is all done using sockets in an intranet environment.
I believe that an attacker can fairly easily sniff this plain-text traffic. My colleagues believe I am overly paranoid because we are behind a firewall.
Am I being overly paranoid? Do you know of any exploit (link please) that took advantage of a situation such as this, and what can be done differently? The clients were rewritten in Java, but the server is still using C++.
Is there anything that can be done in code to protect against such an attack?
Inside your company's firewall, you're fairly safe from direct hack attacks from the outside. However, statistics that I won't trouble to dig out claim that most of the damage to a business' data is done from the INside. Most of that is simple accident, but there are various reasons for employees to be disgruntled and not found out; and if your data is sensitive they could hurt your company this way.
There are also boatloads of laws about how to handle personal ID data. If the data you're processing is of that sort, treating it carelessly within your company could also open your company up to litigation.
The solution is to use SSL connections. You want to use a pre-packaged library for this. You provide private/public keys for both ends and keep the private keys well hidden with the usual file access privileges, and the problem of sniffing is mostly taken care of.
SSL provides both encryption and authentication. Java has it built in and OpenSSL is a commonly used library for C/C++.
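To give a feel for how little code the "pre-packaged library" route involves, here is a minimal Python sketch of wrapping a client socket in TLS; the Java equivalent is SSLSocketFactory and the C/C++ side would typically go through OpenSSL. The host name, port and CA file path are placeholders.

    # Sketch: a TLS-wrapped client socket, the moral equivalent of using
    # SSLSocket in Java or OpenSSL in C++. Host and CA file are placeholders.
    import socket
    import ssl

    SERVER_HOST = "dbserver.internal.example"   # placeholder
    SERVER_PORT = 9443
    CA_FILE = "/etc/pki/internal-ca.pem"        # the CA that signed the server cert

    context = ssl.create_default_context(cafile=CA_FILE)
    # For mutual authentication, the client can present its own cert too:
    # context.load_cert_chain("/etc/pki/client.pem", "/etc/pki/client.key")

    with socket.create_connection((SERVER_HOST, SERVER_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=SERVER_HOST) as tls_sock:
            tls_sock.sendall(b"UPDATE record 42\n")   # application protocol unchanged
            reply = tls_sock.recv(4096)
            print("server said:", reply)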
Your colleagues are naïve.
One high-profile attack occurred at Heartland Payment Systems, a credit card processor that one would expect to be extremely careful about security. Assuming that internal communications behind their firewall were safe, they failed to use something like SSL to ensure their privacy. Hackers were able to eavesdrop on that traffic, and extract sensitive data from the system.
Here is another story with a little more description of the attack itself:
Described by Baldwin as "quite a sophisticated attack," he says it has been challenging to discover exactly how it happened. The forensic teams found that hackers "were grabbing numbers with sniffer malware as it went over our processing platform," Baldwin says. "Unfortunately, we are confident that card holder names and numbers were exposed." Data, including card transactions sent over Heartland's internal processing platform, is sent unencrypted, he explains: "As the transaction is being processed, it has to be in unencrypted form to get the authorization request out."
You can do many things to prevent a man-in-the-middle attack. For most internal data within a firewall/IDS-protected network, you really don't need to secure it. However, if you do wish to protect the data, you can do the following:
Use PGP encryption to sign and encrypt messages
Encrypt sensitive messages
Use hash functions to verify that the message sent has not been modified.
It is good standard operating procedure to secure all data; however, securing data has significant costs. With secure channels you need to have a certificate authority and allow for extra processing on all machines involved in the communication.
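As a concrete example of the third item in that list, here is a short Python sketch of attaching an HMAC to each message so the receiver can detect tampering. The shared key would have to be distributed securely out of band; the key and message shown are placeholders.

    # Sketch: verify that a message was not modified in transit by attaching
    # an HMAC computed with a shared secret key. Key/message are placeholders.
    import hmac
    import hashlib

    SHARED_KEY = b"distribute-this-securely-out-of-band"  # placeholder

    def sign(message: bytes) -> bytes:
        return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)   # constant-time comparison

    # Sender side:
    msg = b"replicate: customer 17 updated"
    tag = sign(msg)

    # Receiver side:
    print(verify(msg, tag))                   # True
    print(verify(msg + b" (tampered)", tag))  # False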
You're being paranoid. You're talking about data moving across an, ideally, secured internal network.
Can information be sniffed? Yes, it can. But it would be sniffed by someone who has already breached network security and gotten inside the firewall, which can be done in innumerable ways.
Basically, for the vast majority of businesses, there is no reason to encrypt internal traffic. There are almost always far, far easier ways of getting information from inside the company without even approaching "sniffing" the network. Most such "attacks" come from people who are simply authorized to see the data in the first place and already have a credential.
The solution is not to encrypt all of your traffic, the solution is to monitor and limit access, so that if any data is compromised, it is easier to detect who did it, and what they had access to.
Consider also that the sysadmins and DBAs pretty much have carte blanche over the entire system anyway, since inevitably someone always needs that kind of access. It's simply not practical to encrypt everything to keep it away from prying eyes.
Finally, you're making much ado about something that is just as likely written on a sticky note tacked to the bottom of someone's monitor anyway.
Do you have passwords on your databases? I certainly hope the answer to that is yes. Nobody would say that password-protecting a database is overly paranoid, so why wouldn't you have at least the same level of security* on the same data flowing over your network? Just like an unprotected DB, unprotected data flowing over the network is vulnerable not only to sniffing but is also mutable by a malicious attacker. That is how I would frame the discussion.
*By same level of security I mean use SSL as some have suggested, or simply encrypt the data using one of the many available encryption libraries around if you must use raw sockets.
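To illustrate the footnote's second option, encrypting the payload itself is also only a few lines with a library. The Python sketch below uses the third-party cryptography package's Fernet recipe; interoperating with the C++ server would of course require a matching implementation on that side, so treat this purely as an illustration of the idea.

    # Sketch: application-level symmetric encryption of the payload before it
    # goes over a raw socket. Requires the third-party "cryptography" package.
    # Both ends need the same key, distributed securely out of band.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice: generate once, store safely
    cipher = Fernet(key)

    # Sender side: encrypt before send()
    plaintext = b"customer 17: balance=1200.50"
    token = cipher.encrypt(plaintext)

    # Receiver side: decrypt after recv()
    print(cipher.decrypt(token))       # b"customer 17: balance=1200.50"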
Just about every "important" application I've used relies on SSL or some other encryption methodology.
Just because you're on the intranet doesn't mean you may not have malicious code running on some server or client that may be trying to sniff traffic.
At a minimum, the attacker needs access to a device inside your network from which they can sniff all of the traffic, or at least the traffic between a client and the server.
Anyway, if the attacker is already inside, sniffing is just one of the problems you'll have to take into consideration.
I don't know of many companies that use secure sockets between clients and servers inside an intranet, mostly because of the higher cost and lower performance.
