Docusign Demo: https://account-d.docusign.com IP address keeps changing

Does anyone know what the IP range is for https://account-d.docusign.com? They cycle IPs every few minutes.
Our security team is constantly having to update our firewall.

The IP addresses will change, and will continue to change, as DocuSign adds servers, locations, etc.
Is there no way to have the firewall use the URL instead of the IP address?
You cannot expect IP addresses to be static.
This page - https://www.docusign.com/trust/security/esignature - lists the IP address ranges as:
North America-based and demo accounts (current and continuing):
NEW: 209.112.104.1 through 209.112.104.254
64.207.216.1 through 64.207.219.254
162.248.184.1 through 162.248.187.254
European Union-based accounts (current and continuing):
185.81.100.1 through 185.81.103.254
Australian-based accounts (current and continuing):
13.72.248.93
13.72.249.142
13.70.141.103
13.70.136.159
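For reference, one way to express those published ranges as CIDR blocks in an outbound firewall rule is sketched below (illustrative iptables only, not DocuSign guidance; verify the ranges against the trust page before deploying):
# Illustrative only: allow outbound HTTPS to the published DocuSign ranges (CIDR equivalents)
iptables -A OUTPUT -p tcp --dport 443 -d 209.112.104.0/24 -j ACCEPT   # 209.112.104.1-254
iptables -A OUTPUT -p tcp --dport 443 -d 64.207.216.0/22  -j ACCEPT   # 64.207.216.1-64.207.219.254
iptables -A OUTPUT -p tcp --dport 443 -d 162.248.184.0/22 -j ACCEPT   # 162.248.184.1-162.248.187.254
iptables -A OUTPUT -p tcp --dport 443 -d 185.81.100.0/22  -j ACCEPT   # EU: 185.81.100.1-185.81.103.254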

Related

how to create spf record as spf.example.com

Usually we get an SPF record from a third party in the form spf.thirdpartyexample.com. Now, if we install our own email server, we create an SPF record with ip4:xx.xx.xxx.x (reference link from Validity), for example:
v=spf1 -mx -ptr include:_spf.mx.cloudflare.net -all
So my question is:
how do we create an SPF record as spf.example.com for our own server, in the same way we get one from a third-party vendor?
Do you mean to "authorize" your own domain(s) in your BIND/DNS?
If so, then you should have
yourdomain.tld. IN TXT "v=spf1 a mx ~all"
in your BIND zone configuration file.
v=spf1 says this is an SPF record.
"a" says that the IP of yourdomain.tld is allowed to send mail for yourdomain.tld.
"mx" says that the IP of the MX server of yourdomain.tld is allowed to send mail.
"~all" says that SPF queries that do not match any other mechanism return "softfail": messages not sent from an approved server (the server with IP aaa.bbb.ccc.ddd, i.e. the A record of yourdomain.tld, matched by "a", or the server with IP www.xxx.yyy.zzz, i.e. the IP of the MX host of yourdomain.tld, matched by "mx") should still be accepted but may be subjected to greater scrutiny.
So, if your DNS records look like
yourdomain.tld. IN A aaa.bbb.ccc.ddd
mail.yourdomain.tld. IN A www.xxx.yyy.zzz
yourdomain.tld. IN MX 10 mail.yourdomain.tld.
the SPF record says:
if the mail is sent from aaa.bbb.ccc.ddd (the "a"), that's OK;
if the mail is sent from www.xxx.yyy.zzz (the "mx"), that's OK.
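To publish a reusable record at spf.example.com the way a third-party vendor does, a minimal sketch looks like the following (hypothetical hostnames, documentation addresses from 203.0.113.0/24): put the sending IPs in a TXT record on spf.example.com, then pull it in from the main domain with include:
spf.example.com.  IN TXT "v=spf1 ip4:203.0.113.10 ip4:203.0.113.11 -all"
example.com.      IN TXT "v=spf1 a mx include:spf.example.com ~all"
Anyone who relays mail through your server can then add include:spf.example.com to their own SPF record, exactly like the vendor pattern in the question.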

Automatic https when using caddy for domain and ip address together

I am trying to access my website over HTTPS using both its domain name and its static IP address.
When I use just the domain it works as expected, but when I add the IP address as follows:
my.domain.com {
    respond "Hello from domain"
}

10.20.30.40 {
    tls internal
    respond "Hello"
}
it does not work. Moreover, if I use tls internal for a different port:
my.domain.com {
    respond "Hello from domain"
}

:8080 {             # <----------- here I use a port
    tls internal    # <----------- and tls internal
    respond "Hello"
}
accessing the site by its domain name in a browser now warns that the certificate is not publicly trusted. I assume the tls internal in the second block affected the first block. Is that right? Why?
Anyway, my main question is how to access my website over HTTPS with Caddy both via the domain name and via the IP address, even if I need to use different ports. I know that, for historical reasons, IP addresses cannot get publicly trusted certs, so it is fine if the IP address uses a self-signed cert.
Caddy version: v2.3.0

private and public ip using AWS api gateway + lambda + Nodejs

I am trying to get the user's private IP and public IP in an AWS environment. Based on this answer (https://stackoverflow.com/a/46021715/4283738) there should be an X-Forwarded-For header containing comma-separated IPs, and also per this forum thread (https://forums.aws.amazon.com/thread.jspa?threadID=200198).
But when I deploy my API via API Gateway + Lambda + Node.js v8 and log the JSON of the event and context variables passed to the Node.js handler for debugging (https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod), I am not getting the private IPs.
The Lambda function is:
const AWS = require('aws-sdk');

exports.handler = function (event, context, callback) {
    // Return the raw event and context so their contents can be inspected
    callback(null, {
        "statusCode": 200,
        "body": JSON.stringify({ event, context })
    });
};
API Gateway Details
GET - Integration Request
Integration type -- Lambda Function
Use Lambda Proxy integration -- True
Function API : https://y0gh8upq9d.execute-api.ap-south-1.amazonaws.com/prod
Case 1: You cannot get the user's private IP, for security reasons. If the user is behind NAT or PAT (Network Address Translation or Port Address Translation), the translation happens behind the scenes: the NAT device records the private IP in its table and forwards the request with the public (router) IP.
Case 2: If by "private IP" you mean the address a user has when multiple users share the same public network (Wi-Fi, etc.): there are two IPs, a public one that is common to everyone and, inside that network, another one that is unique to each user.
For example, say there is a Wi-Fi network with public IP 1.1.1.1 and two users, A and B. Since they share the same Wi-Fi, the router has only one IP (public and common to all), but behind the router A and B have different addresses, e.g. 192.168.1.1 and 192.168.1.2, which can be called private.
In both cases, you will get only the public IP (at position 0 in the X-Forwarded-For header).
You can read the X-Forwarded-For header from
event.headers (or event.multiValueHeaders for the multi-value form).
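For example, with a Lambda proxy integration the handler could read it roughly like this (a sketch assuming the standard proxy event shape; event.requestContext.identity.sourceIp is the other documented place that carries the caller's public IP):
// Sketch for a Lambda proxy integration handler (Node.js 8.10+)
exports.handler = async (event) => {
    // Public client IP as seen by API Gateway
    const sourceIp = event.requestContext.identity.sourceIp;
    // X-Forwarded-For is "client, proxy1, proxy2, ..." - the client's public IP comes first
    const xff = (event.headers && event.headers['X-Forwarded-For']) || '';
    const clientIp = xff.split(',')[0].trim();
    return {
        statusCode: 200,
        body: JSON.stringify({ sourceIp, xForwardedFor: xff, clientIp })
    };
};
Either way, the value is the caller's public address; the private/LAN address never leaves the client's network.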
If you could access both, what would be the benefit of having separate private and public IPs?
Likewise, to reach an AWS VPC private subnet you have to go through NAT, and the client never learns the actual IP, for security reasons. I would suggest re-reviewing your requirements.
I'm not sure what has you stuck here; correct me if I'm wrong.
From Wiki:
The X-Forwarded-For (XFF) HTTP header field is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
I set X-Forwarded-For in the header and tested with Postman:
https://imgur.com/a/8QZEdyH
The "X-Forwarded-For" header shows the public ip of the user.
Thats all you get.
Internal IPs are not visible.
Only the "public ip" which is indicated in the header.

Does the TLS cert require a common SAN

Based on the reference link below on configuring HAProxy with TLS:
do I need to have the certificates generated with a common SAN (Subject Alternative Name) on all the target nodes, or
would individual certs without any common SAN work?
https://serversforhackers.com/c/using-ssl-certificates-with-haproxy
Look at https://security.stackexchange.com/questions/172626/chrome-requires-san-names-in-certificate-when-will-other-browsers-ie-follow : some browsers (Chrome) require names to be in the SAN, as they now completely disregard the CN field.
So even for a single-domain certificate you need the domain both in the CN (as that field is not optional) and in the SAN.
It is also in the CA/Browser Forum Baseline Requirements, section 7.1.4.2.1:
Certificate Field: extensions:subjectAltName
Required/Optional: Required
Contents: This extension MUST contain at least one entry.
Each entry MUST be either a dNSName containing the Fully-Qualified
Domain Name or an iPAddress containing the IP address of a server.
The CA MUST confirm that the Applicant controls the Fully-Qualified
Domain Name or IP address or has been granted the right to use it by
the Domain Name Registrant or IP address assignee, as appropriate.
Wildcard FQDNs are permitted.
Note that some other browsers, like Firefox, fall back to the CN instead; see https://bugzilla.mozilla.org/show_bug.cgi?id=1245280 and the beginning of the patch at https://hg.mozilla.org/mozilla-central/rev/dc40f46fae48 for the security.pki.name_matching_mode configuration option.
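As an illustration (not from the linked HAProxy guide), a certificate with the hostname in both the CN and the SAN can be generated per node along these lines; node1.example.com is a placeholder, and -addext needs OpenSSL 1.1.1 or newer:
# Placeholder hostname; requires OpenSSL 1.1.1+ for -addext
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout node1.key -out node1.crt \
  -subj "/CN=node1.example.com" \
  -addext "subjectAltName=DNS:node1.example.com"
Each node can carry its own certificate this way; per the requirements quoted above, what matters is that the name a client connects with appears in that certificate's SAN.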

Azure copy blob using SAS Token (with IP restriction) 403 Forbidden

I am attempting to copy a blob from one URI to another (both within the same storage account); both use a SAS token for credentials. This works fine with a SAS token that doesn't have an IP restriction, but fails when the source blob's SAS token is IP restricted.
Note: it is not failing because I have the IP wrong; other blob operations (list, delete, upload, etc.) work.
Example code:
Uri sourceBlobUri = new Uri("https://mystorage.blob.core.windows.net/a-container/a.json");
Uri targetBlobUri = new Uri("https://mystorage.blob.core.windows.net/a-container-archive/a.json");
var prodTokenSource = "A_SAS_TOKEN_WITH_A_IP_RESTRICTION";
var prodTokenArchive = "A_SAS_TOKEN_WITH_A_IP_RESTRICTION";
StorageCredentials sourceCredentials = new StorageCredentials(prodTokenSource);
StorageCredentials targetCredentials = new StorageCredentials(prodTokenArchive);
CloudBlockBlob sourceBlob = new CloudBlockBlob(sourceBlobUri, sourceCredentials);
CloudBlockBlob targetBlob = new CloudBlockBlob(targetBlobUri, targetCredentials);
await targetBlob.StartCopyAsync(sourceBlob); //Fails 403 error
One guess is that the copy request is originating from within Azure so the IP address is blocked? Should I configure the source SAS Token to accept an IP range from within Azure?
Is there another way to copy blobs that allows use of SAS Tokens?
One guess is that the copy request is originating from within Azure so the IP address is blocked? Should I configure the source SAS Token to accept an IP range from within Azure?
You're absolutely right. A copy is a server-side operation, and the IP address specified in a SAS token is the client's IP address. Because the IP address making the copy request is not one allowed by the SAS, the copy operation fails. You could configure the SAS token to accept an IP range from within Azure, but I am guessing the copy uses some internal IP address, so I am not sure that would work.
Is there another way to copy blobs that allows use of SAS Tokens?
I would recommend not using IP ACLing in the SAS for the copy operation, i.e. do not specify an IP address restriction in the SAS used for copying.
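If the IP restriction has to stay on the source SAS, one possible workaround (a sketch, not something the SDK mandates) is a client-side copy, so both the read and the write originate from your whitelisted IP. It reuses sourceBlob and targetBlob from the code above, needs using System.IO;, and is slower than a server-side copy because the data streams through your machine:
// Client-side copy: both requests come from this machine's IP, so the
// IP-restricted SAS tokens apply. For large blobs, use a temporary
// FileStream instead of a MemoryStream.
using (var stream = new MemoryStream())
{
    await sourceBlob.DownloadToStreamAsync(stream);  // read with the IP-restricted source SAS
    stream.Position = 0;
    await targetBlob.UploadFromStreamAsync(stream);  // write with the target SAS
}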
