How did Google detect that the request was tampered with or intercepted - security

I started Fiddler, and when I tried to access google.com I got the error below.
Google was able to detect that the request was coming from an untrusted tool, or something along those lines. Can anyone explain how they do this, or give any hint about it, so that we could apply the same to our own web sites?
Once I closed Fiddler, it started working fine again.
Thanks in advance
Jonathon

It's all explained in the "what does it mean" section: Fiddler has sent your browser its own SSL certificate so that it can intercept the request (it more or less decrypts the request using its own certificate, then re-encrypts it using Google's).
Chrome comes preloaded with public keys that it expects to see in the certificate chain for web sites, including of course google.* ones, so it can detect that Fiddler's certificate is not one coming from Google.
See http://blog.stalkr.net/2011/08/hsts-preloading-public-key-pinning-and.html

Related

SSL/TLS for React Frontend and Express Backend?

I have been learning more about web development, so this is likely a dumb question, or I do not have the knowledge to search for the answer properly.
I am revamping my current personal website (hosted on GitHub Pages). I am making a React frontend, which will be served via GitHub Pages, with an Express backend (likely through cyclic). I want to add SSL/TLS encryption for encrypted communication between the frontend and backend.
To my knowledge, SSL works via the server sending its certificate to the client. It will also send its public key, so that the client can use the key to encrypt a message and send it to the server, which uses the private key to decrypt it. To me this means that I would definitely need to get a certificate for my backend.
However, I have some knowledge of how RSA encryption works (I know this is not the same), and it seems like this means that messages from the server to the client would not be secure. Would this mean that if I needed messages to be encrypted in that direction, I would need to add a certificate on the client as well? I personally cannot think of an example, but I am sure there might be one.
First, is my assumption correct? If this is the case, how would I do so in both the general sense and with the services I am using?
Thank you for any help, and I apologize for any mistakes I made; I figured I would put out my thought process.
GitHub Pages will do the SSL/HTTPS for you as part of configuring your custom domain. See "Securing your GitHub Pages site with HTTPS".
In the "Code and automation" section of the sidebar, click Pages.
Under "GitHub Pages," select Enforce HTTPS.
If you were using your own servers, most people use Nginx to terminate SSL. Node.js can do it, but most often Nginx is used as a reverse proxy and SSL termination point. (Note that only the server needs a certificate: the TLS handshake establishes symmetric session keys, so traffic in both directions is encrypted.)
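For reference, SSL termination with Nginx in front of a Node app looks roughly like this (a sketch: api.example.com is a made-up domain, port 3000 a made-up app port, and the certificate paths assume certbot/Let's Encrypt defaults):

```nginx
server {
    listen 443 ssl;
    server_name api.example.com;  # hypothetical domain

    # Paths assume certbot's default layout
    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    location / {
        # TLS terminates here; traffic to the Node app is plain HTTP on localhost
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The Express app itself then listens on plain HTTP and never touches certificates, which keeps key management in one place.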

How do I confirm Man in the Middle attack with these hints?

I have an app installed on my Android device that shows me whether a site's SHA256 fingerprint has been changed. It often shows that it has been altered when I run it for YouTube.com, and it once showed this for Instagram.com. I tried using a VPN and it didn't show up afterwards.
The app basically says that it detects the SSL interception of web traffic which will decrypt an encrypted session. The test is accomplished by comparing the HTTPS certificate fingerprint of the website on your device vs the fingerprint shown on an external server.
I'm curious if it is really a concern as I do a lot of private video calls on Instagram. Are those getting recorded or anything without my knowledge?
PS: I do not have any shady app on my device.
Check the actual certificate the sites return. Certificates will expire after a while, meaning they get replaced with new versions.
Besides that, bigger sites with multiple datacenters, such as YouTube (Google) and Instagram (Facebook), might even use different certificates for different regions. This would explain why it doesn't show up while using a VPN. Also because of IP routing, special server configurations, ... you might end up connecting to different servers/regions (with different certificates) from day to day or so.
Assuming that the certificate is properly signed, valid and not revoked, you should be fine, even if the fingerprint changes. For malicious people to perform a man-in-the-middle attack with a valid SSL, they'd either need to have a valid certificate themselves (which would get revoked), access to the site's servers (which is a lost cause) or add a malicious root certificate to your device (which is a whole other problem).
The test is accomplished by comparing the HTTPS certificate fingerprint of the website on your device vs the fingerprint shown on an external server.
Mind that the external server might also have a different or outdated fingerprint compared to yours, for any of the reasons above or others.
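One mundane source of false alarms when comparing fingerprints from two vantage points is formatting: tools print the same SHA-256 fingerprint with different case and separators ("AB:CD:…" vs "abcd…"). A small sketch of normalizing before comparing (illustrative, not the app's actual logic):

```javascript
// Strip separators and lowercase, so "AB:CD:12" and "abcd12" compare equal.
function normalizeFingerprint(fp) {
  return fp.replace(/[^0-9a-fA-F]/g, '').toLowerCase();
}

function sameFingerprint(a, b) {
  return normalizeFingerprint(a) === normalizeFingerprint(b);
}

console.log(sameFingerprint('AB:CD:12', 'abcd12')); // true
console.log(sameFingerprint('AB:CD:12', 'abcd13')); // false
```

Only after ruling out formatting and the multi-datacenter/rotation reasons above would a mismatch start to look suspicious.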

WebRTC over Local Network

I'm building a React website that uses WebRTC to make audio/video calls to other devices, only on my local network. Because getUserMedia requires HTTPS, I'm running into issues where I basically have to bypass the SSL warnings (the "visit website anyway" buttons), which I don't want to do.
I'm using my laptop to act as the connection broker/signaling server to allow the clients to connect with each other--if I downgrade the capabilities to HTTP for text chat only, this works great--but the whole purpose is to use audio/video, so I need that SSL layer.
My question is: how do I setup the SSL layer properly so that I don't have to bypass the warnings and accept a self-signed certificate?
Strictly speaking, the self-signed certificate does work and I can do this using it, but it seems self-defeating, so it's not really the way I want to go.
Again, this is only for intranet usage, so I don't know if that makes it easier or harder, but that's my constraint.
EDIT:
The server is written in NodeJS. I've found some documentation suggesting that Node can be given additional CAs (e.g. NODE_EXTRA_CA_CERTS). Is this something that I can leverage? Would a client html page utilize this in any meaningful way?
This link seems promising: https://engineering.circle.com/https-authorized-certs-with-node-js-315e548354a2. The main thing I'm not understanding is how I would utilize that ca: fs.readFileSync('ca-crt.pem') line for a given request, as it seems like the code there is actually making the request (but one would have already been made to the server in my case, no?). https://nodejs.org/api/https.html#https_https_request_options_callback seems to indicate something similar, as well.
It is totally possible to register a domain name, and then point it at something in the Private Address Range. I do this for local development sometimes. I registered pion.io and got a wildcard cert via LetsEncrypt.
You could also use mkcert. Then either in /etc/hosts or in your router itself you can give a FQDN to your signaling/web server.
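The mkcert route sketched above looks roughly like this (the hostname rtc.home.test and the LAN IP are made-up placeholders; the mkcert commands are shown as comments since they modify the local trust store):

```shell
# One-time: create a local CA and add it to the system/browser trust stores
#   mkcert -install
# Issue a cert for a made-up intranet hostname
#   mkcert rtc.home.test    # writes rtc.home.test.pem / rtc.home.test-key.pem
# Then map that hostname to the signaling laptop, in /etc/hosts or router DNS:
HOSTS_ENTRY="192.168.1.10 rtc.home.test"   # hypothetical LAN IP
echo "$HOSTS_ENTRY"
```

Point the Node signaling server at the generated cert/key pair and serve the page from that hostname; browsers on machines that trust the mkcert CA will then show no warnings.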
There is also the --unsafely-treat-insecure-origin-as-secure argument for Chromium. I haven't used it lately, though, so I'm not sure if it still works.

Security Implication on req.pipe nodejs

I am building a basic CORS proxy, and in one of the use cases I need to pipe the request, so I thought of using pipe with Request.js as shown in the image below.
I am not an expert in security. Could someone list the possible security implications of the above code?
If you look closer, you will notice that your client's request is being sent to mysite.com (req.pipe(x);). mysite.com can access your clients' cookies (they are sent along with the request headers). If it is a malicious website, it can use those cookies to impersonate your users on your website. Think of it as giving someone your computer right after logging in to Stack Overflow: they don't have to know your username and password to do stuff on Stack Overflow after that. Giving away your session cookies is basically the same thing.
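One mitigation is to scrub credentials from the headers before piping the request upstream. A sketch (the header names are real, but the helper and the sample headers are illustrative, not part of the original code):

```javascript
// Headers that carry client credentials and must not reach the upstream host.
const SENSITIVE_HEADERS = ['cookie', 'authorization', 'proxy-authorization'];

function stripSensitiveHeaders(headers) {
  const clean = { ...headers };
  for (const name of SENSITIVE_HEADERS) delete clean[name];
  return clean;
}

// Example: the proxied request no longer carries the client's session cookie.
const incoming = {
  host: 'myproxy.example',   // hypothetical proxy host
  cookie: 'session=abc123',
  accept: 'application/json',
};
console.log(stripSensitiveHeaders(incoming));
// { host: 'myproxy.example', accept: 'application/json' }
```

In the piped setup, you would apply this to the outgoing request's headers rather than forwarding req's headers verbatim.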

Encrypting Amazon S3 URL over the network to secure data access

I want to host copyrighted data in an Amazon S3 bucket (to have more bandwidth available than my servers can handle) and provide access to this copyrighted data for a large number of authorized clients.
My problem is:
I create signed, expiring HTTPS URLs for these resources on the server side
these URLs are sent to clients via an HTTPS connection
when a client uses these URLs to download the content, the URLs can be seen in the clear by any man-in-the-middle
In detail, the URLs are created via a Ruby on Rails server using the fog gem.
The mobile clients I'm talking about are iOS devices.
The proxy I've used for my test is mitmproxy.
The URL I generated looked like this:
https://mybucket.s3.amazonaws.com/myFileKey?AWSAccessKeyId=AAA&Signature=BBB&Expires=CCC
I'm not a network or security expert, but I had found resources stating that nothing goes over an HTTPS connection in the clear (for instance, cf. Are HTTPS headers encrypted?). Is it a misconfiguration of my test that led to the URL being visible in the clear? Any tip on what could have gone wrong here? Is there a real chance I can prevent S3 URLs from going over the network in the clear?
So firstly, when sending a request over SSL all parameters are encrypted. If you were to look at the traffic going through a normal proxy you wouldn't be able to read them.
However, many proxies allow interception of SSL data by creating dummy certificates. This is exactly what mitmproxy does. You may well have enabled this and not realised it (although you would have had to install a client-side certificate to do this).
The bottom line is that your AWS URLs could be easily intercepted by somebody looking to reverse engineer your app, either through a proxy or by tapping into the binary itself. However, this isn't a 'bad thing' per se: Amazon themselves know this happens, and that's why they're not sending the secret key directly in the URL itself, but using a signature.
I don't think this is a huge problem for you: after all, you're creating URLs that expire, so even if someone can get hold of them through a proxy they'll only be able to access the URL for as long as it is valid. To access your resources post-expiry would require direct access to your secret key. Now, it actually turns out this isn't impossible (since you've probably hard-coded it into your binary), but it's difficult enough that most users won't be bothering with it.
I'd encourage you to be realistic with your security and copyright prevention: when you've got client-side native code it's not a matter of if it gets broken but when.