HTTPS over WireGuard - security

I have a server that runs a web server. I have set up a WireGuard VPN between my machine and the server.
Do I need to serve my web server over HTTPS for security, or is WireGuard enough?

You should be fine as long as your web server is listening on the WireGuard interface only (i.e. it is bound to the IP address of the WireGuard interface), so that it is not reachable from outside the VPN. You do not technically need to wrap everything in another encryption layer such as HTTPS, as the only way to establish a connection to the web server is through the VPN, which already provides encryption and authentication.
Beware, though, that VPN + HTTP does not offer exactly the same security features as VPN + HTTPS; there are some subtleties. For example, if your private VPN key is leaked, it could be used to perform a man-in-the-middle attack on your connection, whereas with VPN + HTTPS a potential attacker would also need to break through HTTPS, which means either getting privileged access to your machine (since the TLS session keys are derived from an ephemeral key exchange on each handshake) or forging a valid CA-signed certificate for your web server's domain (generally not possible). Whether or not you care about this additional layer of security is really up to you.
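As a concrete illustration of "listening on the WireGuard interface only", here is a minimal nginx sketch, assuming (purely for the example) that the server's WireGuard address is 10.8.0.1; any web server that lets you choose a bind address works the same way:

```
# Hypothetical nginx snippet (goes inside the http {} context, e.g. a conf.d file).
# Bind only to the WireGuard address, not 0.0.0.0, so the site is
# unreachable from outside the tunnel.
server {
    listen 10.8.0.1:80;
    server_name internal.example;

    location / {
        root /var/www/internal;
    }
}
```

You can then confirm with something like `ss -tlnp` on the server that nothing is listening on the public address.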

Related

Secure WebSocket Proxy for Unsecured WebSocket Server

I have a server (that I don't control, but which is on my network) that uses unsecured WebSockets to communicate. Rather than allow communication directly with that server from outside the network, I want to set up a secure proxy that uses secure WebSockets to receive requests from outside the network and then forwards them on to the real server within the network. That way, the unsecured traffic never leaves the network and any communication with the outside is done via the secure proxy.
What would be the best way of achieving this?
If you're talking about having the internal communication still go over regular HTTP, but only having communication with the external world go over HTTPS, then this is a common practice. HAProxy supports this, and the general term for it is "terminating SSL" or "terminating TLS".
You can read more about it here: TLS termination proxy
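For a rough idea of what that looks like in practice, here is a hedged haproxy.cfg sketch; the addresses, backend port, and certificate path are made up for illustration, and HAProxy passes the WebSocket upgrade through like any other HTTP request:

```
# Hypothetical sketch: terminate TLS for wss:// clients, forward plain ws:// internally.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Keep idle WebSocket connections open once upgraded.
    timeout tunnel  1h

frontend wss_in
    # Clients outside the network connect here with wss://
    bind :443 ssl crt /etc/haproxy/certs/proxy.example.com.pem
    default_backend ws_internal

backend ws_internal
    # Plain ws:// to the internal server; unencrypted traffic never leaves the network.
    server ws1 192.168.1.50:8080 check
```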

Azure App Service Architecture understanding: IP-based SSL

Regarding this MSDN article: https://msdn.microsoft.com/en-us/magazine/mt793270
The Scale Unit Network Configuration section has the following sentences:
In the case of IP-based SSL, a given application is allocated a dedicated IP address for only inbound traffic, which is associated with the Cloud Service deployment. Please note: Front ends terminate SSL connection for all HTTPS requests for all applications and any type of certificate. The front end then forwards the request to the designated worker for a given application.
But when does "Front ends terminate SSL connection for all HTTPS requests for all applications and any type of certificate" happen?
Does it happen right after we configure IP-based SSL?
Or does it happen to all traffic, always, under IP-based SSL?
Or something else?
It happens for all traffic. All HTTPS traffic, irrespective of whether you are using IP-based SSL, an SSL cert from an external CA, or the built-in Azure SSL (azurewebsites.net), is terminated at the front end of each scale unit, and traffic from the front end to the worker will always be plain HTTP. On the way back, the response is re-encrypted at the front end before it goes out, using the SSL cert uploaded for the specific domain or the Azure-provided cert.

IIS Central Cert Store - Outbound Traffic

I have an F5 load-balanced 4-server cluster environment that I'm building, so I'm looking to centralize our certificates to avoid needing to install them all on every server. Windows 2012 / IIS 8 seems to have a central certificate store, but that only secures my endpoint in IIS for inbound traffic.
What about for outbound traffic? They all will be initiating TLS transactions to external entities, so I need a way to store all these on a single server and have each of the IIS boxes "tap into" that cert store for the private and public keys that are necessary to send that TLS message.
Any suggestions?
You're looking for an HSM, which the F5 will support; IIS also supports a few major vendors (Thales and SafeNet both have IIS-supported HSMs). They're not cheap, from what I remember, but that's exactly what you're looking for.
If you don't want to go that route, you can opt for the dirty solution of using the BIG-IP as your cert store and relying on self-signed certs on the IIS pool members.
Inbound: incoming traffic terminates on the BIG-IP using a client SSL profile with the valid CA-signed cert. The BIG-IP then re-encrypts to IIS using a generic server SSL profile. Not pretty, but it works.
Outbound: you would have to use the BIG-IP as the default gateway of the IIS servers so that you can direct the outbound TLS from the BIG-IP instead of from IIS directly.
Devcentral: SSL Acceleration - Can I encrypt outbound traffic
Hope this helps.
-Chase

Java - Can we ignore SSL verification for the local network

Can we ignore SSL verification for the local network? My case is:
I have two applications deployed on a system. These two applications cannot communicate over the internet due to some security constraints; they can only communicate using their private IPs. But the certificate issued by the CA is valid only for the public IP (the one accessible from the internet), so when they try to make an HTTPS connection, it throws a Subject Alternative Name invalid exception.
I cannot use an alternate certificate.
Can we configure the Java / JREs of the applications to ignore SSL validation?
Or please suggest an alternative solution, if any.
It sounds to me like you might just be better off using HTTP on the local network.
If you need transport layer security on your LAN, you can probably use a VPN or SSH tunnel instead. And it sounds to me like you don't really need this, as you're OK with ignoring SSL handshake errors, which makes using SSL in the first place kind of moot.
You can set up your servers to listen on two ports: one for external requests over HTTPS, and one for internal requests over plain HTTP.
You can either set up your firewalls so that HTTP is only available from LAN IPs, or only listen on localhost and use a VPN or SSH tunnel to the target server and make the requests via the tunnel.
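As a minimal sketch of the two-listener idea in Java (the address 10.0.0.5 and port 8080 are placeholders, and the JDK's built-in com.sun.net.httpserver is used only to keep the example self-contained; your applications would do the equivalent in whatever server they already embed):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical example: a plain-HTTP listener bound only to the private
// LAN address, so internal app-to-app calls never touch TLS or the
// public certificate. The public HTTPS endpoint stays as it is.
public class InternalHttpListener {
    public static void main(String[] args) throws Exception {
        HttpServer internal = HttpServer.create(
                new InetSocketAddress("10.0.0.5", 8080), 0);

        internal.createContext("/api", exchange -> {
            byte[] body = "internal ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        internal.start();
        System.out.println("Internal HTTP listener bound to 10.0.0.5:8080");
    }
}
```

Because the internal client talks plain HTTP to the private IP, there is no certificate to validate and nothing to "ignore"; binding to the private address only (or the firewall) is what keeps that listener off the internet.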

Is there any security risk if we install SSL Certificate at the load balancer instead of the servers?

I'm doing a bit of research on this, so: is there any security risk if we have the SSL certificate installed on the load balancer instead of the servers? And what is the industry best practice for installing SSL certificates: on the server, the load balancer, or an ADC?
This is probably better off on Server Fault, but I'll give it a shot here.
There's no increased security risk for the SSL certificate itself just because you put it on the load balancer, assuming the load balancer is configured correctly and won't serve up the private key. That risk exists on any server, load balancer or not: a new OS compromise or attack might, although it's unlikely, allow that to happen.
However, depending on how you do it, traffic behind the load balancer could be sent unencrypted if the load balancer only talks HTTP to the content servers. So you need to configure the forwarded connections to use HTTPS as well, as sketched below, either by using internal certificates and your own CA or by installing the externally facing HTTPS cert on the content servers (and you'll need to do this if you're aiming for PCI compliance).
Remember there's also a load risk: encryption is expensive, and putting the cert on the load balancer increases the, errr, load on it. If the load balancer is already overstretched, this may be the final straw. If you're handling lots of transactions, you tend to see a hardware SSL device sitting in front of the load balancer that takes care of the SSL traffic, then talks HTTP to the load balancer, which talks HTTP to the content servers. (Again, this needs to be HTTPS if you are aiming for PCI compliance.)
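To make the "re-encrypt behind the balancer" point concrete, here is a hedged haproxy.cfg sketch, assuming HAProxy as the balancer and an internal CA for the backend certificates; all names, addresses, and file paths are illustrative only (defaults and timeouts omitted for brevity):

```
# Hypothetical sketch: terminate client TLS on the balancer,
# then re-encrypt to the content servers.
frontend https_in
    mode http
    bind :443 ssl crt /etc/haproxy/certs/www.example.com.pem
    default_backend web_servers

backend web_servers
    mode http
    # 'ssl verify required' makes the balancer validate each backend's
    # certificate against the internal CA instead of trusting it blindly.
    server web1 10.0.0.11:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem check
    server web2 10.0.0.12:443 ssl verify required ca-file /etc/haproxy/internal-ca.pem check
```

The same pattern applies whether the hop behind the balancer carries your own CA's certificates or a copy of the public-facing one.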
Here are the implications I could think of:
Unless you re-encrypt the traffic between the SSL accelerator and the final server, traffic on the internal network will be in clear text. That could make other security flaws more dangerous. Depending on what your legal and contractual requirements are regarding the data you're transferring, it might be unacceptable.
You will lose the ability to use X.509 certificates to identify the clients. This could be a problem or not, depending on what you're doing.
As for certificate management, you're storing the private key on the SSL accelerator instead of the server. This could actually be an advantage, because if a web server gets compromised, the attacker will still have no access to the private key and therefore will not be able to steal it.
There are many levels of load balancing. In the most popular configurations you have no choice but to put the cert on the load balancer.
For example, if you are proxying HTTP traffic in the load balancer, it has to terminate the SSL connection, so it must have the cert.
Normally, the load balancer lives in a secure zone, so you don't have to use SSL between the balancer and your server. If that's not the case, you can use SSL again, but you defeat the purpose of the SSL acceleration feature on most switches.
