Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I'm currently building an application and I want to make sure HTTPS is used throughout. It is a web application in Go (Golang), and I would like to know how to get legitimate certificates so that the application is secure.
I would say it depends on how the application is going to be deployed.
Hosting the application on a VPS / private server as a systemd service?
You could look into Certbot if you want to manage SSL renewal automatically. You'll still need to provide the certificate to your application, or use an HTTP proxy such as NGINX to expose your application over HTTPS.
This approach works, but it can be painful, as you'll need to install and manage Certbot (and possibly NGINX) on your server.
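For example, if Certbot has already issued a certificate for your domain, the Go application can load it directly with the standard library. Here is a minimal sketch, assuming Certbot's default /etc/letsencrypt/live layout and a placeholder domain:

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over HTTPS"))
	})

	// Paths follow Certbot's default layout; "example.com" is a placeholder domain.
	certFile := "/etc/letsencrypt/live/example.com/fullchain.pem"
	keyFile := "/etc/letsencrypt/live/example.com/privkey.pem"

	// Serve the application directly over TLS on port 443.
	log.Fatal(http.ListenAndServeTLS(":443", certFile, keyFile, mux))
}
```

Note that the process needs read access to those files and the right to bind port 443, and it has to be restarted after each renewal, which is part of why a proxy in front is often simpler.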
Another good option is Traefik, a proxy server with built-in Let's Encrypt support: you get free, automatically renewed SSL just by installing the service and writing a small configuration file.
I would personally choose the external proxy approach here, and Traefik in particular. Managing HTTPS shouldn't be the job of your web application but of an external proxy, so that if one day you need to scale the application, it won't be painful.
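To give a rough idea of what that looks like with Traefik v2 and its file provider, here is a sketch; the domain, e-mail address, and backend port are placeholders, so treat this as an outline rather than a drop-in configuration:

```yaml
# traefik.yml (static configuration) -- a minimal sketch
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com            # placeholder address
      storage: /etc/traefik/acme.json
      httpChallenge:
        entryPoint: web

providers:
  file:
    filename: /etc/traefik/dynamic.yml
```

```yaml
# /etc/traefik/dynamic.yml -- routes HTTPS traffic to the Go application
http:
  routers:
    myapp:
      rule: "Host(`example.com`)"       # placeholder domain
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: myapp
  services:
    myapp:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8080"  # the Go app, listening on plain HTTP
```

With something like this in place, the Go application only listens on plain HTTP on localhost, and Traefik obtains, renews, and serves the certificates.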
Well, you have a few options. I found it easy to use ZeroSSL to get a trusted certificate, but there are many other ways to do so. You can also use Certbot, but it requires several dependencies to be installed.
If you are getting a certificate for an FQDN, you can use Let's Encrypt, which supports many clients, including Certbot. You can find the list at https://letsencrypt.org/docs/client-options/, but please remember it won't work without an FQDN.
Closed. This question is not about programming or software development. It is not currently accepting answers.
I use No-IP for a domain name for my website, which is hosted on my Raspberry Pi. I use port forwarding so the router forwards incoming traffic to the Raspberry Pi. However, I just purchased a DV SSL certificate for the No-IP domain. I used the https module to create a secure server and supplied my Express.js app as the request handler. Whenever I try to access the HTTPS version of my No-IP domain, I either get a connection timed out error when using mobile data, or a "this site uses an unsupported protocol" error when using WiFi.
Make sure that your SSL certificate is configured correctly. You can use an online SSL checker to verify that your certificate is valid and installed correctly.
The errors you get usually indicate issues with your SSL installation and server configuration. Scan your SSL certificate with a checker tool such as SSL Labs, look for potential red flags, and check which TLS versions are supported.
If you're 100% confident your router is properly configured, check your firewall. It may be blocking HTTPS traffic.
You could also try checking your server logs for potential errors.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I have a Virtual Private Server (VPS) running Debian 10. On this server there is an application (whose code cannot realistically be modified) which opens a TCP/IP port (let's say 6000). The application has a simple database of users and passwords, and all incoming messages MUST be HTTP.
Obviously, at this point I am more than worried about the security of the communication (which in fact does not exist due to the plain nature of HTTP).
My first thought was to drop all packets on that port on the eth0 interface (which is exposed to the Internet), create an OpenVPN server on my VPS, and connect to this VPN all clients that want to use my application. The problem here is that these clients will most likely be Android devices, and it will not be practical to upload certificates to each device and do the other configuration magic needed to establish the VPN connection. I also would not like to implement OpenVPN in a dedicated Android app.
Another thought was that maybe there is an application I could start on the VPS that implements the following logic:
Android app <--HTTPS--> UnknownApp (on the VPS side) <--HTTP--> port 6000 (my original unsafe app, also on the VPS side)
Is it feasible to implement such a scenario? Of course I could write such an app on my own, but I would prefer to use a tested and reliable solution.
The application you are looking for is stunnel. It does exactly what you described; it is well tested, based on well-known libraries, and production ready.
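For reference, a minimal stunnel configuration for the layout above could look roughly like this; the certificate paths are placeholders, and port 6000 is the existing plain-HTTP application from the question:

```
; /etc/stunnel/stunnel.conf -- a minimal sketch, not a drop-in file
; TLS is terminated here; decrypted traffic is forwarded as plain HTTP to port 6000.
[https-frontend]
accept = 443
connect = 127.0.0.1:6000
cert = /etc/stunnel/fullchain.pem
key = /etc/stunnel/privkey.pem
```

Combined with the firewall rule you already planned (dropping external packets to port 6000 on eth0), clients can then reach the application only through the TLS side.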
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I got a dedicated server for a project (Angular/Node.js). I have already configured CentOS Web Panel and, through it, my domain with a Let's Encrypt SSL certificate; the frontend, the backend API, and everything else is up and running so far.
The backend API runs through Jenkins/PM2 and is reachable at my ip:3333, but I need it served over SSL, for example as https://api.example.com:3333; otherwise I get this error in my project: "This request has been blocked; the content must be served over HTTPS."
If I try https://example.com:3333 or https://subdomain.example.com:3333, I get ERR_SSL_PROTOCOL_ERROR, which I guess is expected since CentOS Web Panel seems to apply the certificate only to the main domain.
So, how can I point a domain or subdomain at the service on port 3333 and apply SSL to it? If I can't, how should I proceed to get the service running over SSL? Do I really need this configuration server-side, or is it a matter of the app?
Any idea on how to proceed? I'm not sure which configuration I should share.
Thanks in advance.
It turned out to be caused by a misconfiguration in my Apache reverse proxy to the custom port.
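For anyone hitting the same problem, the usual shape of such an Apache reverse-proxy vhost is roughly the following; the domain and certificate paths are placeholders, and mod_ssl, mod_proxy, and mod_proxy_http must be enabled:

```apache
# A sketch of an Apache vhost that terminates TLS and proxies to the backend on port 3333.
<VirtualHost *:443>
    ServerName api.example.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/api.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/api.example.com/privkey.pem

    # Forward HTTPS requests to the Node.js backend listening on plain HTTP.
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3333/
    ProxyPassReverse / http://127.0.0.1:3333/
</VirtualHost>
```

The frontend can then call https://api.example.com without the :3333 port, which also avoids the mixed-content error.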
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I'm trying to devise a method to protect my HAProxy SSL certificates while at rest on disk so that if the load-balancer host gets hacked, the SSL certificates will not be sitting there ripe for the attacker to pluck.
I realize that at the very least, the certificates must be available in memory in order to be used by HAProxy to negotiate SSL connections. However, I’d like to do whatever is possible to keep the certificates secure.
How can I set up the ssl-cert directory so that it is protected and/or encrypted, and available to HAProxy only when it needs the information (presumably when the service is started)?
Currently I see two ways this could be achieved.
Use some sort of Linux/*nix filesystem-level encryption.
This means modifying the HAProxy init/upstart script to require a specific password or key file to exist on disk. That password is then used to extract the certs from an encrypted archive file (e.g. RAR or something?) into the HAProxy /etc/haproxy/certs directory. After the HAProxy service has started, use srm to securely delete the password/key file along with the /etc/haproxy/certs directory.
Create an external API / service-management layer which runs on a different (super-secured) host. This service would store the certificates and orchestrate load-balancer restarts and reloads: it would rsync the HAProxy certs directory over, restart or reload HAProxy via ssh, and then run srm over ssh to securely erase the /etc/haproxy/certs directory.
I’d appreciate feedback on these ideas, any relevant experience, or any other way this security goal can be achieved.
Additional resources:
Here is a relevant related question on SO regarding multi-SSL HAProxy.
HAProxy SSL termination documentation
Although this isn't the right forum for your question, here's an answer:
Simply protect your SSL private keys with a passphrase.
Upon starting HAProxy, your SSL library will ask for the passphrase.
Keep in mind that you will need to type the passphrase every time you start/restart HAProxy.
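If you go that route, one way to add a passphrase to an existing unencrypted key is with OpenSSL (the file names here are placeholders):

```
# Re-encrypt an existing RSA private key with AES-256; OpenSSL will prompt for a passphrase.
openssl rsa -aes256 -in haproxy.key -out haproxy.key.enc
```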
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
I have moved this question to serverfault where it might be more appropriate.
See https://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps
We host about 30 websites on typical shared hosting plans, using ASP.NET and SQL Server 2000/2005/2008.
I am now wondering about hosting all of these websites using our own virtual private server such as http://www.crystaltech.com/vps.aspx
This is clearly cheaper but comes with a lot of questions I need answers to:
1) Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches, etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.
2) When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host:
e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com
These name servers are going to have to change when we have a VPS. I don’t understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?
3) The control panel we have for shared hosting comes with DNS management like this:
(screenshot of the DNS management panel; source: yart.com.au)
What software would I need to install to provide this for each site we host on a VPS?
4) The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily:
(screenshot of the POP email interface; source: yart.com.au)
Is this something that can be easily set up at a VPS so clients can manage their own email addresses?
Is there software we need to install to make this work?
1) It depends on your applications, visitor patterns, required resources, etc. In general I'd say that if you don't have the expertise, prefer scalable hosting solutions or managed dedicated servers (which can be quite expensive, but cheaper if you require very high availability).
Personally I host a few dozen websites on my VPS, and generally it is very easy to manage manually (after all, it is Windows Server; you have a GUI and PowerShell). That is, until you hit a problem or someone hacks you.
2) You can always use free or paid DNS services, or run your own DNS server on the VPS (not recommended). Your VPS host might provide DNS servers; ask them.
3) You can buy Plesk or cPanel and manage your websites the same way.
4) Same.
Everything you ask can be set up initially by your VPS provider. They will install control panels that will allow you to easily manage your websites, while having full server access as well.
You can have the best of both worlds. I use EuroVPN at www.eurovpn.com; they offer Semi-Managed plans on their VPSs (they have a sister company, EcoVPS, for people who don't want this support). When I say semi-managed, the proactive monitoring is done by you, but you can always raise a ticket if you get stuck or there's a problem, and an engineer (1st/2nd and 3rd line) connects in using RDP to do the work for you.
Also, they give Plesk for "free".