newman CLI returns "error: unable to get local issuer certificate" in teamcity build - node.js

Using the newman Node.js CLI to run a collection of Postman tests, I get the following error:
error: unable to get local issuer certificate
It is run as part of a TeamCity CI build using the following command:
newman run https://www.getpostman.com/collections/<COLLECTION-ID-HERE>
It runs on Windows, and we have a corporate proxy server (Zscaler).
How do I get newman to work?

Just add --insecure to the run command, i.e.:
newman run https://www.getpostman.com/collections/?apiKey="your-Postman-Api-Key" --insecure
When running the collection from a JSON file, also add --insecure, so your command would be:
newman run .postman_collection.json --insecure

The issue is that newman cannot find (or does not know about) the self-signed SSL certificate used by the proxy server, which is configured in the Windows certificate store. The easiest way to make newman (and in fact any recent Node.js app) aware of the certificate is to use an environment variable:
On Windows:
SET NODE_EXTRA_CA_CERTS=c:\some-folder\certificate.cer
On Linux:
export NODE_EXTRA_CA_CERTS=/c/some-folder/certificate.cer
You may also need to set the proxy server URL itself, with the HTTP_PROXY=http://example.com:1234 environment variable.
Alternatively, the environment variables can be added to the TeamCity build's runtime environment using TeamCity's build parameters feature.
Note that this works for Node.js 7.3.0 and above (and the LTS versions 6.10.0 and 4.8.0).
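Putting both together, a minimal sketch of what the TeamCity build step's command line might look like on Windows (the certificate path and proxy host below are assumptions, not values from the question):
REM certificate exported from the Windows certificate store (example path)
SET NODE_EXTRA_CA_CERTS=c:\certs\zscaler-root.cer
REM corporate proxy (example host and port)
SET HTTP_PROXY=http://proxy.example.com:1234
newman run https://www.getpostman.com/collections/<COLLECTION-ID-HERE>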

Related

unexpected eof when using openssl

I am unable to get a basic react app to work using a self-signed certificate on a local development server (Linux Mint 21, Linux 5.15.0-46-generic).
I installed Node version 18.10, which comes with npm/npx v8.19.2.
I created a basic react app using
npx create-react-app myapp
I generated a certificate and key using openssl. The certificate was generated with CN=sandbox.local, so that I'm not using an IP address.
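For reference, a minimal sketch of the kind of openssl invocation that produces such a self-signed certificate and key (the file names, validity period and subjectAltName are assumptions; the question only states CN=sandbox.local):
# self-signed certificate and key for sandbox.local (names and SAN are assumptions)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout sandbox.local.key -out sandbox.local.pem \
  -subj "/CN=sandbox.local" \
  -addext "subjectAltName=DNS:sandbox.local"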
I then start the app, using
HTTPS=true \
HOST=sandbox.local BROWSER=none \
SSL_CRT_FILE=certification/sandbox.local.pem \
SSL_KEY_FILE=certification/sandbox.local.pem \
npm start
This successfully starts the application, informing me that I can browse the application on https://localhost:3000 (or via the IP address on the network).
Attempting to browse from my local machine (also running Linux Mint 21), in firefox I get:
Secure Connection Failed
An error occurred during a connection to sandbox.local:3000. PR_END_OF_FILE_ERROR
In Chromium:
This page isn’t working
sandbox.local didn’t send any data.
ERR_EMPTY_RESPONSE
And using curl,
curl: (35) error:0A000126:SSL routines::unexpected eof while reading
There's a plethora of pages out there telling me that this is something to do with openssl version 3 expecting a response and not getting one. There's nothing that I've been able to find that tells me how to fix that in the context of a react application.
On the same machine I have a Dash/Plotly application, running with self-signed certificates that were generated in exactly the same way. That application is accessible and works, which leads me to wonder whether it's something in the react/node interaction with openssl. I can resolve sandbox.local both with http and https for other types of applications.
The created app is literally the output of create-react-app. If I install and run it on the same machine, it works fine.
Is there some configuration option I'm missing? I haven't been able to generate an error log file from node, so apart from the SSL error from curl, I'm flying blind. Will update if/when I figure out how to log errors.
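One way to see what the dev server is (or isn't) sending during the TLS handshake, independent of the browser, is to probe it directly (host and port are taken from the question; the tool choice is just a suggestion):
# inspect the TLS handshake against the dev server
openssl s_client -connect sandbox.local:3000 -servername sandbox.local
# an immediate disconnect with no certificate output matches the
# "unexpected eof while reading" error that curl reports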

Docker no basic auth credentials after successful login

I've moved to Linux (Pop!_OS 21.04) on my desktop and I'm having some issues with Docker.
When I'm trying to run docker-compose to pull an image from a private registry I'm getting:
ERROR: Head "https://my.registry/my-image/manifests/latest": no basic auth credentials
Of course, before running this command I ran:
docker login https://my.registry.com -u user -p pass
which returns
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
And my config.json in my .docker folder shows my credentials:
{
  "auths": {
    "my.registry.com": {
      "auth": "XXXXX"
    }
  }
}
To install Docker I followed the instructions on their page https://docs.docker.com/engine/install/ubuntu/
And my version is:
Docker version 20.10.8, build 3967b7d
The same command run on a macOS system with Docker version 20.10.8 works without any issues, so I'm sure my password and all the URLs are correct.
Thanks for any help!
The login command is
docker login my.registry.com
Without the https:// in front of the host. If you still have auth issues doing that:
if the registry uses an unknown TLS certificate, load that certificate on the host and restart the docker engine (see the sketch after this list)
if the registry is http instead of https, configure it as an insecure registry in /etc/docker/daemon.json
if the login is successful, but the pull fails, verify your user has access to the specific repo on the registry
double check your password was correctly entered
check for a network proxy intercepting the request (the http_proxy variable)
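For the unknown-certificate case, a minimal sketch of trusting a registry's CA on the Docker host (the certificate file name and registry hostname are assumptions):
# place the registry's CA certificate where the Docker engine looks for it
sudo mkdir -p /etc/docker/certs.d/my.registry.com
sudo cp registry-ca.crt /etc/docker/certs.d/my.registry.com/ca.crt
# restart the engine so it picks up the certificate
sudo systemctl restart docker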
I reinstalled the whole thing again as the Docker page describes, which didn't work, so I uninstalled it and installed the snap version, which didn't work either, and finally I removed that and went with a simple apt-get install docker.io, which works like a charm! I don't know why it didn't work previously, but I won't lose more sleep over it.
On Ubuntu 20.x, I observed that the credentials are stored in /home/<username>/snap/docker/1125/.docker/config.json.
If older credentials are stored in $HOME/.docker/config.json, they are not used by docker pull. Verify if docker is indeed picking up the credentials from the right config.json location.
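A quick way to check which config.json the CLI is actually reading is to point it at a config directory explicitly (the paths below are assumptions based on the snap layout described above):
# credentials written by the snap-packaged docker
ls ~/snap/docker/current/.docker/config.json
# force the CLI to use a specific config directory for one pull
DOCKER_CONFIG=$HOME/.docker docker pull my.registry.com/my-image:latest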

gitlab CI/CD run commands on external server

I want to use GitLab's CI/CD to deploy my app on an external server. I have the IP, username and password, and I understand I need to connect through SSH. How can I run all the necessary commands on the server side? The server runs on Linux.
Currently I just get the code from the repository and do npm build:prod and npm serve:prod for the API and npm start for the UI. How can I do the same chain of commands with GitLab CI/CD? Or is this even possible? I basically want it to run similarly to how Jenkins works. But since the code is already on GitLab, it might be simpler to let GitLab handle this process instead of installing and setting up Jenkins.
To be able to SSH to your machine from within GitLab CI, you should probably set up SSH key authentication, since you can't just type in the password inside the CI.
When you've got that set up, you have to store the private key in an environment variable so you can use it in the CI job. How to do that can be found here.
The last part is actually executing commands over ssh. That can be done in the following way:
ssh <host> '
command1;
command2;
'
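Putting it together, a minimal sketch of what the deploy job's script could look like, assuming the private key is stored in a CI/CD variable named SSH_PRIVATE_KEY and the server details in DEPLOY_USER and DEPLOY_HOST (these names, the path and the npm scripts are all assumptions):
# load the key from the CI/CD variable into an ssh-agent
eval "$(ssh-agent -s)"
echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
mkdir -p ~/.ssh
ssh-keyscan -H "$DEPLOY_HOST" >> ~/.ssh/known_hosts
# run the build and start commands on the server (path and scripts are placeholders)
ssh "$DEPLOY_USER@$DEPLOY_HOST" '
cd /path/to/app;
npm run build:prod;
npm run serve:prod;
'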

Can't annotate secret for OpenShift private git repo deployment

I tried to follow this guide on deploying an app on OpenShift from my private repository on GitLab. I tried using an SSH authentication key as a deploy key, a GitLab personal access token with api scope, and entering my GitLab credentials directly.
I've been failing at the step of adding an annotation with the repository URI to the secret, using this command.
$ oc annotate secret/mysecretname \
'build.openshift.io/source-secret-match-uri-1=https://gitlab.com/username/reponame.git'
I'm getting the following error as a result.
error: at least one annotation update is required
See 'oc annotate -h' for help and examples.
build.openshift.io/source-secret-match-uri-1=https://gitlab.com/username/reponame.git
The result of oc version is as follows, matching the server version shown both in the CLI and on the About page in the OpenShift web console.
oc v3.10.9
kubernetes v1.10.0+b81c8f8
features: Basic-Auth
Server https://api.starter-us-east-1.openshift.com:443
openshift v3.10.9
kubernetes v1.10.0+b81c8f8

How to configure Chocolatey to use a corporate proxy?

I'm having trouble installing Chocolatey packages from behind a corporate proxy. Internet Explorer is correctly configured but I'm having issues getting it to work through PowerShell.
I can use the WebClient to download pages, e.g. Microsoft.com, but ultimately Chocolatey fails to download packages with the prompt
"Please provide proxy credentials:"
which will not accept my domain login as being valid. Sometimes I just get the error
"Exception calling "DownloadFile" with "2" argument(s): "The remote server returned an error: (407) Proxy Authentication Required."
I have two machines - one of them can download the packages fine, and the other gives the errors above, but they both show Direct access (as below):
PS C:\Windows\system32> netsh winhttp import proxy source=ie
Current WinHTTP proxy settings:
Direct access (no proxy server).
PS C:\Windows\system32> netsh winhttp show proxy
Current WinHTTP proxy settings:
Direct access (no proxy server).
I'm not too sure what is happening here. Any suggestions?
Chocolatey has proxy instructions at https://github.com/chocolatey/choco/wiki/Proxy-Settings-for-Chocolatey and specifically the section on explicit proxy. Ensure you have the proper version of choco installed for that to work. If that is incorrect, we should fix the documentation/choco to make it correct.
For posterity:
Explicit Proxy Settings
Chocolatey has explicit proxy support starting with 0.9.9.9.
You can simply configure 1 or 3 settings and Chocolatey will use a
proxy server. proxy is required and is the location and port of the
proxy server. proxyUser and proxyPassword are optional. The values for
user/password are only used for credentials when both are present.
choco config set proxy <locationandport>
choco config set proxyUser <username>
choco config set proxyPassword <passwordThatGetsEncryptedInFile>
Example
Running the following commands in 0.9.9.9:
choco config set proxy http://localhost:8888
choco config set proxyUser bob
choco config set proxyPassword 123Sup#rSecur3
I had a similar issue, except that Chocolatey wouldn't install in the first place due to the corporate proxy.
Was able to resolve this based on this blog post...
2016-01-22, Duane Newman, Installing Chocolatey behind a corporate proxy (archived here)
...as follows:
Open an elevated command prompt (Windows key -> Type cmd -> right-click on "Command Prompt" and select "Run as Administrator").
Run the following command:
@powershell -NoProfile -ExecutionPolicy Unrestricted -Command "[Net.WebRequest]::DefaultWebProxy.Credentials = [Net.CredentialCache]::DefaultCredentials; iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin
This should install Chocolatey without any errors. To verify it has worked, close the command prompt and open another (so the path environment variable change is picked up) and then run the choco command - if all is OK it should now output the Chocolatey version and help text.
Further note for Node.js: I did the above after installing Node.js with the option ticked to install the extra tools/requirements, including Chocolatey. I was then able to continue the failed installation via Apps & features -> Node.js -> Modify. I then followed the instructions here to configure npm for the corporate proxy.
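For completeness, npm's own proxy settings look like this (the proxy URL is a placeholder, not a value from the answer):
npm config set proxy http://proxy.example.com:8080
npm config set https-proxy http://proxy.example.com:8080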
