Disable TLS 1.0 and TLS 1.1 on CherryPy, Python 3

I am trying to disable TLS 1.0, TLS 1.1, SSL 2, and SSL 3 on my CherryPy server. I have seen the other Stack Overflow posts about how to disable them; however, when I follow the code samples, I get the following error: "ValueError: certfile must be specified for server-side operations". The Windows service is still running, but I cannot load any pages. I've tried adding the certificate_chain as well, but that prevents CherryPy from running at all.
I am running CherryPy as a Windows service, with Python 3.4.4, CherryPy 5.0.1, and pyOpenSSL 19.0.0.
I've tried using the built-in ssl library and pyOpenSSL; both result in the same error.
import cherrypy
import OpenSSL.SSL as ssl

context = ssl.Context(ssl.SSLv23_METHOD)
context.set_cipher_list('ECDHE-RSA-AES256-GCM-SHA384')
context.set_options(ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1 | ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3)
context.use_privatekey_file('myfile.key')
context.use_certificate_file('myfile.cer')

cherrypy.config.update({
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': 0000,  # https, however not using the port 443
        'server.ssl_context': context,
    },
})

Is myfile.cer in PEM format? According to the docs it appears that PEM is the default filetype, which may be the cause of the error.
I'm also trying to figure out how to use ECDHE with CherryPy. With other web servers, using ECDHE requires a curve to generate the ephemeral key instead of a static (RSA-style) key file. CherryPy doesn't appear to have built-in support for this, so it may only be possible with pyOpenSSL. The call to list the supported curves is OpenSSL.crypto.get_elliptic_curves(), and you can specify the curve you want with context.set_tmp_ecdh(curve), as sketched below.
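A short sketch of what that could look like with pyOpenSSL; the curve name prime256v1 is just an example, pick any name returned by get_elliptic_curves():
import OpenSSL.crypto as crypto
import OpenSSL.SSL as SSL

# List the curve names this OpenSSL build supports.
print([curve.name for curve in crypto.get_elliptic_curves()])

context = SSL.Context(SSL.SSLv23_METHOD)
# Use an ephemeral ECDH key on the chosen curve instead of a static key file.
context.set_tmp_ecdh(crypto.get_elliptic_curve('prime256v1'))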

Looks like there may be an issue in the syntax: per the pyOpenSSL documentation the option constants live under OpenSSL.SSL, so ssl.OP_NO_TLSv1 should be SSL.OP_NO_TLSv1. This affects all of the OP_* variables.
Oh, wait... never mind: the question imports OpenSSL.SSL as ssl, so ssl.OP_NO_TLSv1 already resolves correctly.
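For what it's worth, here is a minimal sketch of the same lockdown using the standard-library ssl module instead of pyOpenSSL. It assumes a CherryPy version whose builtin adapter honors server.ssl_context (newer CherryPy/cheroot releases do; 5.0.1 may not), and 8443 is just a placeholder port:
import ssl
import cherrypy

# Start from the SSLv23 method, then mask off the legacy protocols.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
# load_cert_chain() is what satisfies the "certfile must be specified" check.
ctx.load_cert_chain(certfile='myfile.cer', keyfile='myfile.key')
ctx.set_ciphers('ECDHE-RSA-AES256-GCM-SHA384')

cherrypy.config.update({
    'global': {
        'server.socket_host': '0.0.0.0',
        'server.socket_port': 8443,  # placeholder port
        'server.ssl_module': 'builtin',
        'server.ssl_context': ctx,
    },
})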

Related

Configuring Azure PostgreSQL in Gitlab EE

I am looking for some help on how to configure an Azure PostgreSQL DB for a Docker Swarm-based Gitlab instance.
Initially, I followed the documentation at https://docs.gitlab.com/13.6/ee/administration/postgresql/external.html. Yet I came to find out that the default provided user is in the form username, whereas Azure requires it to be in the form username@hostname. I tried passing the username in the gitlab.rb file (gitlab_rails['db_username'] = 'username@hostname'), but it still failed, even after replacing the @ with its URI-encoded form %40.
After some extensive searching, I found this documentation - https://docs.gitlab.com/13.6/ee/administration/environment_variables.html - which suggests using the DATABASE_URL environment variable to set the full connection string in the form postgresql://username:password@hostname:port/dbname. I did, and it solved the issue of Gitlab itself communicating with Azure PostgreSQL (in this case I replaced the username with username%40hostname, according to Azure requirements).
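(For reference, %40 is simply the URL-encoded form of @, which is reserved in the userinfo part of a URL; a quick Python illustration with placeholder credentials:)
from urllib.parse import quote

# '@' must be percent-encoded inside the userinfo part of a connection URL.
user = quote('username@hostname', safe='')
print(user)  # username%40hostname
print('postgresql://{}:password@hostname:5432/dbname'.format(user))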
Alas, the success was short-lived: I then came to find out that neither Puma nor Sidekiq could connect to the database, always throwing the following error:
==> /var/log/gitlab/sidekiq/current <==
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
After some searching, I found that gitlab-ctl is generating the following file when starting the Gitlab instance:
# This file is managed by gitlab-ctl. Manual changes will be
# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
# and run `sudo gitlab-ctl reconfigure`.
production:
  adapter: postgresql
  encoding: unicode
  collation:
  database: <database>
  username: "<username>"
  password:
  host: "/var/opt/gitlab/postgresql"
  port: 5432
  socket:
  sslmode:
  sslcompression: 0
  sslrootcert:
  sslca:
  load_balancing: {"hosts":[]}
  prepared_statements: false
  statement_limit: 1000
  connect_timeout:
  variables:
    statement_timeout:
(database and username were removed)
It pretty much ignores the DATABASE_URL env variable and assumes the now non-existent configuration parameters in gitlab.rb.
So, right now, I'm a bit out of options and was wondering if anyone has had a similar issue and, if so, how you were able to overcome it.
Any help is appreciated.
Thanks in advance.
TL/DR: Pass the username@hostname string directly into gitlab_rails['db_username'] in double quotes. The documentation for connecting to an Azure PostgreSQL database on the official Gitlab page is not correct.
So, after some searching and digging deep into the Gitlab configuration, I came to find out that the issue is very specific and related to the usage of docker secrets.
In my gitlab.rb configuration file, in the database configuration part, I'm using the following:
### GitLab database settings
###! Docs: https://docs.gitlab.com/omnibus/settings/database.html
###! **Only needed if you use an external database.**
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = File.read('/run/secrets/postgresql_database')
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user')
gitlab_rails['db_password'] = File.read('/run/secrets/postgresql_password')
gitlab_rails['db_host'] = File.read('/run/secrets/postgresql_host')
gitlab_rails['db_port'] = File.read('/run/secrets/postgresql_port')
gitlab_rails['db_sslmode'] = 'require'
Now, this exact configuration was used previously for testing purposes and worked (though without an Azure PostgreSQL database). I'm passing the correct secrets to docker, and I've confirmed that the secrets do, in fact, exist.
(Sidenote: I've also established that Gitlab uses the ActiveRecord::Base.establish_connection method from Ruby's ActiveRecord library in order to connect to the database.)
Yet, when using the username@hostname configuration for the user and passing that into the postgresql_user secret, the ActiveRecord::Base.establish_connection method suddenly assumes that the @hostname part is the actual hostname to connect to. And I've confirmed that the secret is being generated correctly inside the docker container.
Now, it gets even stranger: if I pass the username@hostname string directly into the gitlab.rb file - the gitlab_rails['db_username'] parameter - in double quotes, it suddenly starts connecting without complaining.
So, in short: if you are using an Azure PostgreSQL database for a dockerized Gitlab instance and using secrets to pass the configuration to the gitlab.rb file, don't pass the username@hostname through a secret; put it directly in the gitlab.rb file.
I don't know if this is a specific issue of Ruby or of Gitlab (I'm not a Ruby developer), but I did try converting the File.read output to a String and to a symbol, used File.open('filepath', &:readline), and other shenanigans, but nothing worked. So, if anyone out there would care to add the reason for this, please feel free to do so.
Also, the tutorial provided by Azure - https://learn.microsoft.com/pt-pt/azure/postgresql/connect-ruby - doesn't work with Gitlab, since it complains about the %40.
Hope this can help anyone out there.

Python wrapper coinbase api errors

So I am trying to create a new wallet using the Python wrapper for the coinbase api.
My current code is this:
from coinbase.wallet.client import Client

client = Client('API-Key',
                'SECRET',
                api_version='2019-12-30')

# Get your primary coinbase account
primary_account = client.get_primary_account()
address = primary_account.create_address()
print(address)
When trying to use the code above, I always get the error:
coinbase.wallet.error.AuthenticationError: APIError(id=authentication_error): request timestamp expired
My guess is that the wrapper is not passing the right timestamp.
On the GitHub page for this wrapper, it says that the current build is failing. I don't know how to fix this, and the repository hasn't had any recent updates. I tried looking at the client file to see if I could fix it myself, but I have had no luck.
I was facing the same issue, but as I've understood from the various contributions, the problem is due to the difference between the local OS time and the Coinbase server time. Beyond a 30-second difference in epochs, the Coinbase server returns the tedious timestamp-expiration error!
I've found Python code that updates the local Windows time based on various NTP servers, ntp_update_time.py (shared by gilmotta). Launching the ntp_update_time code before running the coinbase client again makes the error disappear, and everything works as indicated in the Coinbase API reference!
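As a quick sanity check before adjusting anything, you can measure the drift between your OS clock and Coinbase's. A rough sketch using the public /v2/time endpoint of the Coinbase API (requests assumed to be installed):
import time
import requests

# Coinbase publishes its server time at /v2/time (no authentication needed).
resp = requests.get('https://api.coinbase.com/v2/time')
server_epoch = int(resp.json()['data']['epoch'])
drift = int(time.time()) - server_epoch
print('local - server drift: {} seconds'.format(drift))
if abs(drift) > 30:
    print('Drift exceeds 30 seconds; resync the OS clock (e.g. via NTP).')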

How to work with Python requests with hosts only supporting TLS 1.0

Using OPENSSL_VERSION: OpenSSL 1.1.0j and trying to connect to a host that seems to support only TLS 1.0 ciphers, I am getting an error in _sslobj.do_handshake().
import OpenSSL
import requests
from urllib.request import urlopen
import ssl
...
url = 'https://slpin.universalservice.org/'
urlopen(url).read()
As you can see from this SSLLabs report, the server you are trying to access is terribly broken. It gets a grade of F (the worst), mainly due to its terribly insecure ciphers:
The only cipher that is not terribly insecure but merely weak uses 3DES. Because of this weakness, the cipher is likely not included in the OpenSSL build on your platform (for example, Debian and Debian-based systems like Ubuntu don't include it).
This means the only way to access the server from your Python script is to use a version of Python linked against an older version of OpenSSL, or against a modern version built with this cipher explicitly included. Even then you would likely need to enable 3DES specifically, since urllib has disabled it for a while. Thus, when Python is built with an OpenSSL that includes 3DES support, the following should work:
import ssl
from urllib.request import urlopen
url = 'https://slpin.universalservice.org/'
ctx = ssl.create_default_context()
ctx.set_ciphers('3DES')
urlopen(url, context = ctx).read()
In my case this gives a 403 Forbidden which matches what I get when I visit this URL with the browser.
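Since the question asks about requests rather than urllib: the same cipher override can be mounted onto a requests session through a transport adapter. A sketch, assuming a urllib3 version whose PoolManager accepts an ssl_context argument:
import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.poolmanager import PoolManager

class LegacyCipherAdapter(HTTPAdapter):
    # Transport adapter that injects a custom SSLContext into the pool.
    def __init__(self, ssl_context, **kwargs):
        self._ssl_context = ssl_context
        super().__init__(**kwargs)

    def init_poolmanager(self, connections, maxsize, block=False, **kwargs):
        self.poolmanager = PoolManager(
            num_pools=connections, maxsize=maxsize, block=block,
            ssl_context=self._ssl_context, **kwargs)

ctx = ssl.create_default_context()
ctx.set_ciphers('3DES')

session = requests.Session()
session.mount('https://', LegacyCipherAdapter(ctx))
print(session.get('https://slpin.universalservice.org/').status_code)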

How to add custom certificate authority (CA) to nodejs

I'm using a CLI tool to build hybrid mobile apps which has a cool upload feature so I can test the app on a device without going through the app store (it's ionic-cli). However, in my company, like so many other companies, TLS requests are re-signed with the company's own custom CA certificate, which I have on my machine in the keychain (OS X). However, nodejs does not use the keychain to get its list of CAs to trust. I don't control the ionic-cli app, so I can't simply pass a { ca: } property to the https module. I could also see this being a problem for any node app I don't control. Is it possible to tell nodejs to trust a CA?
I wasn't sure if this belonged in Information Security or any of the other exchanges...
Node.js 7.3.0 (and the LTS versions 6.10.0 and 4.8.0) added the NODE_EXTRA_CA_CERTS environment variable, which lets you pass a CA certificate file. It is safer than disabling certificate verification with NODE_TLS_REJECT_UNAUTHORIZED.
$ export NODE_EXTRA_CA_CERTS=[your CA certificate file path]
You can specify a command line option to tell node to use the system CA store:
node --use-openssl-ca
Alternatively this can be specified as an environment variable, if you are not running the node CLI directly yourself:
NODE_OPTIONS=--use-openssl-ca
There is an undocumented, seemingly stable, API for appending a certificate to the default list:
const tls = require('tls');
const secureContext = tls.createSecureContext();
// https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt
secureContext.context.addCACert(`-----BEGIN CERTIFICATE-----
MIIEkjCCA3qgAwIBAgIQCgFBQgAAAVOFc2oLheynCDANBgkqhkiG9w0BAQsFADA/
MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
DkRTVCBSb290IENBIFgzMB4XDTE2MDMxNzE2NDA0NloXDTIxMDMxNzE2NDA0Nlow
SjELMAkGA1UEBhMCVVMxFjAUBgNVBAoTDUxldCdzIEVuY3J5cHQxIzAhBgNVBAMT
GkxldCdzIEVuY3J5cHQgQXV0aG9yaXR5IFgzMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAnNMM8FrlLke3cl03g7NoYzDq1zUmGSXhvb418XCSL7e4S0EF
q6meNQhY7LEqxGiHC6PjdeTm86dicbp5gWAf15Gan/PQeGdxyGkOlZHP/uaZ6WA8
SMx+yk13EiSdRxta67nsHjcAHJyse6cF6s5K671B5TaYucv9bTyWaN8jKkKQDIZ0
Z8h/pZq4UmEUEz9l6YKHy9v6Dlb2honzhT+Xhq+w3Brvaw2VFn3EK6BlspkENnWA
a6xK8xuQSXgvopZPKiAlKQTGdMDQMc2PMTiVFrqoM7hD8bEfwzB/onkxEz0tNvjj
/PIzark5McWvxI0NHWQWM6r6hCm21AvA2H3DkwIDAQABo4IBfTCCAXkwEgYDVR0T
AQH/BAgwBgEB/wIBADAOBgNVHQ8BAf8EBAMCAYYwfwYIKwYBBQUHAQEEczBxMDIG
CCsGAQUFBzABhiZodHRwOi8vaXNyZy50cnVzdGlkLm9jc3AuaWRlbnRydXN0LmNv
bTA7BggrBgEFBQcwAoYvaHR0cDovL2FwcHMuaWRlbnRydXN0LmNvbS9yb290cy9k
c3Ryb290Y2F4My5wN2MwHwYDVR0jBBgwFoAUxKexpHsscfrb4UuQdf/EFWCFiRAw
VAYDVR0gBE0wSzAIBgZngQwBAgEwPwYLKwYBBAGC3xMBAQEwMDAuBggrBgEFBQcC
ARYiaHR0cDovL2Nwcy5yb290LXgxLmxldHNlbmNyeXB0Lm9yZzA8BgNVHR8ENTAz
MDGgL6AthitodHRwOi8vY3JsLmlkZW50cnVzdC5jb20vRFNUUk9PVENBWDNDUkwu
Y3JsMB0GA1UdDgQWBBSoSmpjBH3duubRObemRWXv86jsoTANBgkqhkiG9w0BAQsF
AAOCAQEA3TPXEfNjWDjdGBX7CVW+dla5cEilaUcne8IkCJLxWh9KEik3JHRRHGJo
uM2VcGfl96S8TihRzZvoroed6ti6WqEBmtzw3Wodatg+VyOeph4EYpr/1wXKtx8/
wApIvJSwtmVi4MFU5aMqrSDE6ea73Mj2tcMyo5jMd6jmeWUHK8so/joWUoHOUgwu
X4Po1QYz+3dszkDqMp4fklxBwXRsW10KXzPMTZ+sOPAveyxindmjkW8lGy+QsRlG
PfZ+G6Z6h7mjem0Y+iWlkYcV4PIWL1iwBi8saCbGS5jN2p8M+X+Q7UNKEkROb3N6
KOqkqm57TH2H3eDJAkSnh6/DNFu0Qg==
-----END CERTIFICATE-----`);
const sock = tls.connect(443, 'host', { secureContext });
For more information, check out the open issue on the subject: https://github.com/nodejs/node/issues/27079
I'm aware of two npm modules that handle this problem when you control the app:
https://github.com/fujifish/syswide-cas (I'm the author of this one)
https://github.com/coolaj86/node-ssl-root-cas
node-ssl-root-cas bundles its own copies of Node's root CAs and also enables adding your own CAs to trust. It places the certs on the https global agent, so they will only be used by the https module, not pure tls connections. Also, you will need extra steps if you use a custom Agent instead of the global agent.
syswide-cas loads certificates from pre-defined directories (such as /etc/ssl/certs) and uses Node's internal API to add them to the trusted list of CAs, in conjunction with the bundled root CAs. There is no need to use the ca option, since the change is made globally and automatically affects all later TLS calls.
It's also possible to add CAs from other directories/files if needed.
It was verified to work with node 0.10, node 5 and node 6.
Since you do not control the app you can create a wrapper script to enable syswide-cas (or node-ssl-root-cas) and then require the ionic-cli script:
require('syswide-cas'); // this adds your custom CAs in addition to bundled CAs
require('./path/to/real/script'); // this runs the actual script
This answer is more focused towards package maintainers/builders.
One can use this method if you do not want end users to rely on additional environment variables.
When nodejs is built from source, it embeds the Mozilla CA certificate database into the binary itself (by default; this can be overridden). One can add more certificates to this database using the following commands:
# Convert your PEM certificate to DER
openssl x509 -in /path/to/your/CA.pem -outform der -out CA.der
# Add converted certificate to certdata
nss-addbuiltin -n "MyCompany-CA" -t "CT,C,C" < CA.der >> tools/certdata.txt
# Regenerate src/node_root_certs.h header file
perl tools/mk-ca-bundle.pl
# Finally, compile
make install

How does one set a proxy in lazybones?

I'm behind a firewall and lazybones can't reach its repository without a proxy.
I've searched the source and can't seem to find any reference to a proxy that seems to be relevant.
Support was officially added in version 0.8.1 of Lazybones, albeit via a general mechanism to add arbitrary system properties to the application in its configuration file, ~/.lazybones/config.groovy.
You can read about the details in the project README, but in essence, simply add the following to your config.groovy file:
systemProp {
http {
proxyHost = "localhost"
proxyPort = 8181
}
https {
proxyHost = "localhost"
proxyPort = 8181
}
}
You can use the systemProp. prefix to add any system properties to Lazybones, similar to the way it works in Gradle.
Is that what you're looking for? Basically, you need to add some properties to the gradle.properties file.
I am using Cygwin on Windows and I have modified the last line of
~/.gvm/lazybones/current/bin/lazybones
to say
exec "$JAVACMD" "${JVM_OPTS[#]}" -classpath "$CLASSPATH" "-Dhttp.proxyHost=127.0.0.1" "-Dhttp.proxyPort=8888" "-Dhttp.nonProxyHosts=localhost|127.0.0.1" uk.co.cacoethes.lazybones.LazybonesMain "$#"
Please note the quotes around the options. It works very well with my local Fiddler installation.
I have found no better way to enable proxy support due to the way the script is using eval. Maybe a more experienced shell script programmer can come up with a more elegant solution.
I was able to get out through the proxy by setting the JAVA_TOOL_OPTIONS environment variable, which the JVM reports as:
Picked up JAVA_TOOL_OPTIONS: -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080
-Dhttp.nonProxyHosts="lmig.com" -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080
Unfortunately, my environment requires authentication, so I couldn't provide the complete proxy configuration this way. I first ran the "OWASP Zed Attack Proxy (ZAP)", which allowed me to run a proxy on my own machine (at port 8080) that then provided the complete authentication required.
This was then able to run the complete "lazybones list" command, which retrieved the contents of the repositories.
Unfortunately, I was not able to create an application from those templates because bintray required a login (though an anonymous login would do) and I couldn't seem to get an additional level of authentication (I received "Unauthorized" from bintray).
