NodeJS Artillery with self-signed client certificates

I installed the latest versions of Node.js and Artillery. I want to run load tests using Artillery with this YAML config:
config:
  target: "https://my.ip.address:443"
  phases:
    - duration: 60
      arrivalCount: 100
  tls:
    rejectUnauthorized: false
  client:
    key: "./key.pem"
    cert: "./certificate.pem"
    ca: "./ca.cert"
    passphrase: "mypass"
  onError:
    - log: "Error: invalid tls configuration"
  extendedMetrics: true
  https:
    extendedMetrics: true
  logging:
    level: "debug"
scenarios:
  - flow:
      - log: "Current environment is set to: {{ $environment }}"
      - get:
          url: "/myapp/"
          #sslAuth: false
          capture:
            json: "$.data"
            as: "data"
            failOnError: true
            log: "Error: capturing ${json.error}"
          check:
            - json: "$.status"
              as: "status"
              comparison: "eq"
              value: 200
              not: true
              capture:
                json: "$.error"
                as: "error"
                log: "Error: check ${error}"
plugins:
  http-ssl-auth: {}
I run Artillery with:
artillery -e production config_tests.yml
I checked the certificates; they work when used in other applications, and they were generated with OpenSSL.
But all the virtual users fail with the error errors.EPROTO.
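As a sanity check, one way to exercise the same key/cert/ca files outside Artillery is a single request through Node's built-in https module (a minimal sketch; my.ip.address and /myapp/ are the same placeholders as in the config above):
const https = require("https");
const fs = require("fs");

const req = https.request({
  host: "my.ip.address",        // placeholder, same as the Artillery target
  port: 443,
  path: "/myapp/",
  method: "GET",
  key: fs.readFileSync("./key.pem"),
  cert: fs.readFileSync("./certificate.pem"),
  ca: fs.readFileSync("./ca.cert"),
  passphrase: "mypass",
  rejectUnauthorized: false,    // same as the Artillery tls setting
}, (res) => {
  console.log("status:", res.statusCode);
  res.resume();
});

req.on("error", (err) => console.error("TLS/request error:", err));
req.end();
If this request fails as well, the problem is in the TLS setup itself rather than in Artillery.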
Could you please help me find a solution? Thanks in advance!

Related

Failed to connect to all addresses - gRPC with Go and NodeJS

"Failed to connect to all addresses" occurs while adding TLS certs to envoy.yaml, full error:
code: 14,
metadata: Metadata { _internal_repr: {}, flags: 0 },
details: 'failed to connect to all addresses'
Envoy config (Envoy is running on port 50000, and itemService on 50052):
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    common_tls_context:
      tls_certificates:
        - certificate_chain:
            filename: server.cert
          private_key:
            filename: server.key
Client code in Node.js (Next.js, server side via getServerSideProps):
options = {
  key: readFileSync("certs/client.key"),
  cert: readFileSync("certs/ca.crt"),
  csr: readFileSync("certs/client.crt"),
};
const creds = credentials.createSsl(
  options.cert,
  options.key,
  options.csr
);
grpcServer.servicesList.itemsService = new ItemsServiceClient(
  "localhost:50000",
  creds,
  {
    "grpc.ssl_target_name_override": "localhost",
    "grpc.default_authority": "localhost",
  }
);
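For reference, createSsl in the Node gRPC clients takes its arguments in the order (rootCerts, privateKey, certChain), which is what the options object above maps to; spelled out with the same files and clearer names (just a sketch to make the intent explicit):
const { readFileSync } = require("fs");
const { credentials } = require("@grpc/grpc-js"); // or require("grpc"), depending on which client library is in use

// Argument order: (rootCerts, privateKey, certChain)
const creds = credentials.createSsl(
  readFileSync("certs/ca.crt"),     // CA that signed the server certificate
  readFileSync("certs/client.key"), // client private key
  readFileSync("certs/client.crt")  // client certificate chain
);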
The request works normally when I remove the TLS certs from envoy.yaml.
The error I get from the grpcurl tool: Failed to dial target host "localhost:50000" x509: certificate relies on legacy Common Name field, use SANs instead.
When I set GODEBUG=x509ignoreCN=0, the error seems to stay the same.

GitLab CI/CD test stage fails when fetching an external resource with error: dh key too small

I'm trying to work with GitLab CI/CD, but the test stage fails with the following error:
write EPROTO 140044051654592:error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small:../deps/openssl/openssl/ssl/statem/statem_clnt.c:2171:
gitlab-ci.yml
image: node:16.15.1
stages:
  - test
test-job:
  stage: test
  script:
    - npm run test
Note that this is an integration test that calls an external resource with axios, and I have tried setting rejectUnauthorized: false and minVersion: "TLSv1" as suggested here and here:
const axiosOptions = {
  httpsAgent: new https.Agent({
    rejectUnauthorized: false,
    minVersion: "TLSv1",
  }),
};
const axiosInstance = axios.create(axiosOptions);
const response = await axiosInstance.get('https://www.some-domain.com/some-article.html');
This is not a problem with the test itself, as it runs fine on my PC; I suspect it is related to the TLS setup of the GitLab runner image.
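One workaround that gets suggested for the "dh key too small" rejection (untested here, and it deliberately lowers the client's TLS security requirements, so it is a stopgap rather than a fix) is to relax OpenSSL's security level via the agent's cipher string:
const https = require("https");
const axios = require("axios");

// Assumption: the failure comes from OpenSSL's default security level rejecting
// the remote server's small DH key; SECLEVEL=1 relaxes that check.
const axiosInstance = axios.create({
  httpsAgent: new https.Agent({
    ciphers: "DEFAULT@SECLEVEL=1", // OpenSSL cipher-string syntax for a lower security level
    minVersion: "TLSv1",
  }),
});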
Thanks

How can I connect to my GitLab Omnibus LDAP server running in a container?

I have been trying to work on this for weeks.
root@git:/# gitlab-rake gitlab:ldap:check
Checking LDAP ...
LDAP: ... Server: ldapmain
Deprecation warning: Net::LDAP::ConnectionRefused will be deprecated. Use Errno::ECONNREFUSED instead.
Deprecation warning: Net::LDAP::ConnectionRefused will be deprecated. Use Errno::ECONNREFUSED instead.
Could not connect to the LDAP server: Connection refused - connect(2) for 172.17.0.2:389
Checking LDAP ... Finished
This is the relevant part of /etc/gitlab/gitlab.rb:
gitlab_rails['ldap_enabled'] = true
# gitlab_rails['prevent_ldap_sign_in'] = false
###! **remember to close this block with 'EOS' below**
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS'
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'GitLab LDAP'
    host: 'git.example.com'
    port: 389
    uid: 'sAMAccountName'
    bind_dn: 'cn=admin,dc=example,dc=com'
    password: 'example'
    encryption: 'plain' # "start_tls" or "simple_tls" or "plain"
    verify_certificates: false
    active_directory: true
    allow_username_or_email_login: true
    lowercase_usernames: false
    block_auto_created_users: false
    base: 'OU=users,dc=example,dc=com'
    user_filter: ''
    # ## EE only
    group_base: 'cn=my_group,ou=groups,dc=example,dc=com'
    admin_group: 'my_admin_group'
EOS
Is there any workaround for this error? Thanks in advance.

How to collect and pass certificate thumbprint value from win_certificate_store to win_iis_webbinding module

I'm not able to register the certificate thumbprint value from the win_certificate_store module in a format that the win_iis_webbinding module will accept.
Here are my ansible tasks:
- name: Import certificate to Target local cert store
  win_certificate_store:
    path: C:\Certs\{{ansible_hostname}}.cert.p12
    file_type: pkcs12
    password: XXXXXXXXXX
    store_location: LocalMachine
    key_storage: machine
    state: present
  register: cert_import

- name: Debug thumbprints variable
  debug:
    var: cert_import.thumbprints

- name: Bind the issued certificate to Default Web Site in IIS
  win_iis_webbinding:
    name: Default Web Site
    protocol: https
    port: 443
    certificate_hash: "{{ cert_import.thumbprints }}"
    state: present
And this is the output:
TASK [Import certificate to Target local cert store] *******************************************************************************************************************************
task path: /home/weseroot/.ansible/roles/certman/tasks/import_bind_cert.yml:7
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/windows/win_certificate_store.ps1
Pipelining is enabled.
<10.0.0.5> ESTABLISH WINRM CONNECTION FOR USER: weseadmin on PORT 5985 TO 10.0.0.5
EXEC (via pipeline wrapper)
ok: [10.0.0.5] => {
    "changed": false,
    "invocation": {
        "module_args": {
            "file_type": "pkcs12",
            "key_exportable": true,
            "key_storage": "machine",
            "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "path": "C:\\Certs\\w2016-IIS-1.cert.p12",
            "state": "present",
            "store_location": "LocalMachine",
            "store_name": "My",
            "thumbprint": null
        }
    },
    "thumbprints": [
        "C85F1FC23B89DFB88416EDFAE9C91C586515C8ED"
    ]
}
TASK [Debug thumbprints variable] **************************************************************************************************************************************************
task path: /home/weseroot/.ansible/roles/certman/tasks/import_bind_cert.yml:17
ok: [10.0.0.5] => {
    "cert_import.thumbprints": [
        "C85F1FC23B89DFB88416EDFAE9C91C586515C8ED"
    ]
}
TASK [Bind the issued certificate to Default Web Site in IIS] **********************************************************************************************************************
task path: /home/weseroot/.ansible/roles/certman/tasks/import_bind_cert.yml:27
Using module file /usr/local/lib/python3.6/dist-packages/ansible/modules/windows/win_iis_webbinding.ps1
Pipelining is enabled.
<10.0.0.5> ESTABLISH WINRM CONNECTION FOR USER: weseadmin on PORT 5985 TO 10.0.0.5
EXEC (via pipeline wrapper)
The full traceback is:
Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard character pattern is not valid: System.Object[]
At line:157 char:15
+ If (-Not (Test-Path $cert_path) )
+ ~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Test-Path], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.TestPathCommand
ScriptStackTrace:
at <ScriptBlock>, <No file>: line 157
fatal: [10.0.0.5]: FAILED! => {
    "changed": false,
    "msg": "Unhandled exception while executing module: Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard character pattern is not valid: System.Object[]"
}
Any suggestions on how the thumbprint value could be passed to the win_iis_webbinding module in an acceptable format?
OK, I don't know if you are still looking for the answer, but I was. win_certificate_store returns a dictionary with the key 'thumbprints' tied to a list:
TASK [Debug thumbprints variable] **************************************************************************************************************************************************
task path: /home/weseroot/.ansible/roles/certman/tasks/import_bind_cert.yml:17
ok: [10.0.0.5] => {
    "cert_import.thumbprints": [
        "C85F1FC23B89DFB88416EDFAE9C91C586515C8ED"
    ]
}
You will want to do something like this:
- name: Import certificate to Target local cert store
  win_certificate_store:
    path: C:\Certs\{{ansible_hostname}}.cert.p12
    file_type: pkcs12
    password: XXXXXXXXXX
    store_location: LocalMachine
    key_storage: machine
    state: present
  register: cert_import

- name: Debug thumbprints variable
  debug:
    var: cert_import.thumbprints

- name: Bind the issued certificate to Default Web Site in IIS
  win_iis_webbinding:
    name: Default Web Site
    protocol: https
    port: 443
    certificate_hash: "{{ cert_import.thumbprints[-1] }}"
    state: present

Configuring Express Gateway to work with Redis

I'm setting up an instance of Express Gateway to route requests to microservices. It works as expected, but I get the following errors when I try to include Redis in my system config:
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] error: Could not hot-reload gateway.config.yml. Configuration is invalid. Error: A client must be directly provided to the RedisStore
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:gateway] warn: body-parser policy hasn't provided a schema. Validation for this policy will be skipped.
0|apigateway-service | 2020-01-09T18:50:10.118Z [EG:policy] error: Failed to initialize custom express-session store, please ensure you have connect-redis npm package installed
I have installed the necessary packages
npm install redis connect-redis express-session
and have updated the system.config.yml file like so:
# Core
db:
  redis:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
    namespace: EG

plugins:
  # express-gateway-plugin-example:
  #   param1: 'param from system.config'
  health-check:
    package: './health-check/manifest.js'
  body-parser:
    package: './body-parser/manifest.js'

crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10

# OAuth2 Settings
session:
  storeProvider: connect-redis
  storeOptions:
    host: ${REDIS_HOST}
    port: ${REDIS_PORT}
    db: ${REDIS_DB}
  secret: keyboard cat # replace with secure key that will be used to sign session cookie
  resave: false
  saveUninitialized: false

accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
My gateway.config.yml file looks like this:
http:
  port: 8080
admin:
  port: 9876
apiEndpoints:
  accounts:
    paths: '/accounts*'
  billing:
    paths: '/billing*'
serviceEndpoints:
  accounts:
    url: ${ACCOUNTS_URL}
  billing:
    url: ${BILLING_URL}
policies:
  - body-parser
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  accounts:
    apiEndpoints:
      - accounts
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: accounts
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
  billing:
    apiEndpoints:
      - billing
    policies:
      # Uncomment `key-auth:` when instructed to in the Getting Started guide.
      # - key-auth:
      - body-parser:
      - log: # policy name
          - action: # array of condition/actions objects
              message: ${req.method} ${req.originalUrl} ${JSON.stringify(req.body)} # parameter for log action
      - proxy:
          - action:
              serviceEndpoint: billing
              changeOrigin: true
              prependPath: true
              ignorePath: false
              stripPath: true
package.json
{
  "name": "max-apigateway-service",
  "description": "Express Gateway Instance Bootstraped from Command Line",
  "repository": {},
  "license": "UNLICENSED",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "connect-redis": "^4.0.3",
    "express-gateway": "^1.16.9",
    "express-gateway-plugin-example": "^1.0.1",
    "express-session": "^1.17.0",
    "redis": "^2.8.0"
  }
}
Am I missing anything?
In my case, I used AWS ElastiCache for Redis. When I tried to run it, I got the "A client must be directly provided to the RedisStore" error. The problem turned out to be the security group settings: the EC2 instance (server) needs a security group that opens the ElastiCache port, and ElastiCache should use the same security group.
Step 1. Create a new security group and set the inbound rule.
Step 2. Add the security group to the EC2 server.
Step 3. Add the security group to the ElastiCache cluster.
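To confirm the ports are actually reachable once the security groups are in place, a quick connectivity check from the EC2 instance can help. A minimal sketch using the redis 2.x client already listed in package.json, reading the same REDIS_HOST/REDIS_PORT environment variables as the config (assumed to be exported in the shell):
const redis = require("redis");

// Uses the same environment variables referenced in system.config.yml.
const client = redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT || 6379,
});

client.on("error", (err) => {
  console.error("Redis connection failed:", err.message);
  process.exit(1);
});

client.ping((err, reply) => {
  console.log("PING reply:", reply); // "PONG" when the instance is reachable
  client.quit();
});
If this script cannot connect, the security group (or subnet) configuration is still blocking the gateway as well.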
