How to download a JAR file from a URL using Groovy?

I have a JAR artifact in my Nexus repository and I want to download it using a Groovy script.
I tried this, but it doesn't work:
new File("/dir/to/save/app.jar").withOutputStream { out ->
new URL('http://server.org.com/nexus/repo/1.11/app.jar').eachByte { b ->
out.write(b)
}
But the same script works perfectly when I download something from the Maven Central repository.
What could be the issue? Is there an SSL certificate issue with my Nexus that is breaking the download? When I use curl with the -k flag, which makes the connection insecure, the file downloads perfectly.
I use curl -k <NEXUS_REPO_PATH_TO_FILE> -O app.jar
If it is an SSL issue, how can I suppress it in Groovy?
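For reference, the Groovy-level equivalent of curl -k is to install a trust-all SSLContext (and hostname verifier) before opening the connection. The sketch below assumes the failure really is an untrusted (e.g. self-signed) Nexus certificate and reuses the paths from the question; the safer long-term fix is importing the certificate into the JVM truststore with keytool.

import javax.net.ssl.*
import java.security.SecureRandom
import java.security.cert.X509Certificate

// WARNING: disables certificate and hostname verification for the whole JVM,
// the equivalent of curl -k. Use for testing only.
def trustAll = [
    checkClientTrusted: { X509Certificate[] chain, String authType -> },
    checkServerTrusted: { X509Certificate[] chain, String authType -> },
    getAcceptedIssuers: { new X509Certificate[0] }
] as X509TrustManager

def sslContext = SSLContext.getInstance('TLS')
sslContext.init(null, [trustAll] as TrustManager[], new SecureRandom())
HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.socketFactory)
HttpsURLConnection.setDefaultHostnameVerifier({ hostname, session -> true } as HostnameVerifier)

// Stream the artifact to disk (same paths as in the question, but over https).
new File('/dir/to/save/app.jar').withOutputStream { out ->
    out << new URL('https://server.org.com/nexus/repo/1.11/app.jar').openStream()
}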

Related

newman CLI returns "error: unable to get local issuer certificate" in TeamCity build

Using the newman Node.js CLI to run a collection of Postman tests, I get the following error:
error: unable to get local issuer certificate
It is run as part of a TeamCity CI build using the following command:
newman run https://www.getpostman.com/collections/<COLLECTION-ID-HERE>
It runs on Windows and we have a corporate proxy server (Zscaler).
How do I get newman to work?
Just add --insecure after the collection URL, i.e.:
newman run https://www.getpostman.com/collections/?apiKey="your-Postman-Api-Key" --insecure
When triggering the execution using a JSON file, just add --insecure as well, so your command becomes:
newman run .postman_collection.json --insecure
The issue is that newman cannot find (or does not know about) the self-signed SSL certificate used by the proxy server, which is configured in the Windows certificate store. The easiest way to make newman (and in fact any recent Node.js app) aware of the certificate is the NODE_EXTRA_CA_CERTS environment variable:
On Windows:
SET NODE_EXTRA_CA_CERTS=c:\some-folder\certificate.cer
On Linux:
export NODE_EXTRA_CA_CERTS=/c/some-folder/certificate.cer
You may also need to set the proxy server URL itself, with the HTTP_PROXY=http://example.com:1234 environment variable.
Alternatively, the environment variables can be added to the TeamCity build's runtime environment using TeamCity's build parameters feature.
Note this works for Node.js 7.3.0 and above (and the LTS versions 6.10.0 and 4.8.0).

Making a single file of a private repo on GitLab publicly accessible

I have a bash script file in my private GitLab repo. I want to download the file on Linux with wget, but the download fails because the file is hosted in a private repo, so the request is redirected to the login page.
Is there a way to make this single file publicly accessible? If not, is there a way to include my credentials in the GET URL when fetching the file?
If you can use curl, you can use the GitLab API to get a raw file from the repository. You need to pass your private token as well to fetch the file.
For example:
curl --request GET --header 'PRIVATE-TOKEN: YOUR_PRIVATE_TOKEN' 'https://gitlab.example.com/api/v4/projects/PROJECT_ID/repository/files/FILE_NAME/raw?ref=BRANCH' --output FILE_NAME
As mentioned by @Revkoni, you can use the GitLab API for this:
$ wget --header="PRIVATE-TOKEN: XXXXXXXX" "https://gitlab.example.com/api/v4/projects/PROJECT_ID/repository/files/FILE_NAME/raw?ref=BRANCH"
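For completeness, a minimal Groovy sketch of the same API call, in the spirit of the original question (PROJECT_ID, BRANCH, the token, and the file path are placeholders; note the file path must be URL-encoded):

// Hypothetical values; replace with your project ID, branch, token, and file path.
def filePath = URLEncoder.encode('path/to/script.sh', 'UTF-8')
def url = "https://gitlab.example.com/api/v4/projects/PROJECT_ID/repository/files/${filePath}/raw?ref=BRANCH"
def conn = new URL(url).openConnection()
conn.setRequestProperty('PRIVATE-TOKEN', 'YOUR_PRIVATE_TOKEN')
new File('script.sh').withOutputStream { out -> out << conn.inputStream }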

How to reload the Jenkins configuration from the command line?

I installed and configured Jenkins through configuration management (Ansible). Jobs are created and modules are installed and configured through Ansible. After installing and configuring the crowd2 authorization module, reloading the config via http://localhost/jenkins/reload no longer works, since authorization is required. To generate an authorization token you must first log in, which is not desirable. Can I use root access to reload the config instead?
P.S. Sorry for my English :)
java -jar jenkins-cli.jar -noCertificateCheck -s https://jenkins.example.com:8443/jenkins/ reload-configuration
You can generate a crumb:
curl -u 'admin:password' -X GET http://localhost:8090/crumbIssuer/api/json | jq
Response looks like:
{
  "_class": "hudson.security.csrf.DefaultCrumbIssuer",
  "crumb": "1348b504383211402ce562e0b46b3691",
  "crumbRequestField": "Jenkins-Crumb"
}
Then take the crumb field value and use it in the reload call:
curl -u 'admin:password' -X POST http://localhost:8090/reload -H 'Jenkins-Crumb: 1348b504383211402ce562e0b46b3691'
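If you prefer Groovy (as in the original question), here is a minimal sketch of the same two-step flow; the URL and credentials are the placeholders from the curl examples above.

import groovy.json.JsonSlurper

// Placeholder URL and credentials, matching the curl examples above.
def base = 'http://localhost:8090'
def auth = 'Basic ' + 'admin:password'.bytes.encodeBase64().toString()

// Step 1: fetch a crumb.
def crumbConn = new URL("${base}/crumbIssuer/api/json").openConnection()
crumbConn.setRequestProperty('Authorization', auth)
def crumb = new JsonSlurper().parse(crumbConn.inputStream)

// Step 2: POST to /reload with the crumb header.
def reloadConn = new URL("${base}/reload").openConnection()
reloadConn.requestMethod = 'POST'
reloadConn.setRequestProperty('Authorization', auth)
reloadConn.setRequestProperty(crumb.crumbRequestField, crumb.crumb)
println "Reload returned HTTP ${reloadConn.responseCode}"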
One easy workaround is to use Ansible to restart the Tomcat or the Jenkins service (depending on how Jenkins is hosted).
With this solution, the configuration will be reloaded.
If Ansible is used to create a fresh install of Jenkins, nobody will be using Jenkins. So restarting the service can be an acceptable solution ;)
You can use the Jenkins CLI with the reload-configuration command. For example:
java -jar jenkins-cli.jar -s https://jenkins.example.com/ reload-configuration
Or you could use create-job to create jobs in the first place, removing the need to reload the configuration.
The CLI lets you authenticate with an SSH key, so that may be more amenable to being run from Ansible.
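For instance (a sketch; the key path is an assumption, and the matching public key must be configured on your Jenkins user):

java -jar jenkins-cli.jar -s https://jenkins.example.com/ -i ~/.ssh/id_rsa reload-configuration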
Try this:
java -jar jenkins-cli.jar -s [JENKINS_URL] -auth [USER:PASSWORD] reload-configuration
Go to Manage Jenkins -> Reload Configuration from Disk

Docker 1.6 and Registry 2.0

Has anyone successfully tried the search command with Docker 1.6 and the new registry 2.0?
I've set mine up behind Nginx with SSL, and so far it is working fine. I can push and pull images without problems. But when I try to search for them, all of the following commands give a 404 response:
curl -k -s -X GET https://username:password@my-docker-registry.com/v1/search
404 page not found
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/search
404 page not found
root@ip-10-232-0-191:~# docker search username:password@my-docker-registry.com/hello-world
FATA[0000] Invalid repository name (admin:admin), only [a-z0-9-_.] are allowed
root@ip-10-232-0-191:~# docker search my-docker-registry.com/hello-world
FATA[0000] Error response from daemon: Unexpected status code 404
Does anyone have any ideas why, and what is the correct way to use the Docker client to search the registry for images?
Looking at the API v2.0 documentation, do they simply not support a search function? Seems a bit strange to omit such functionality.
At least something works :)
root@ip-10-232-0-191:~# curl -k -s -X GET https://username:password@my-docker-registry.com/v2/hello-world/tags/list
{"name":"hello-world","tags":["latest"]}
To date, the search API is missing from registry v2.0.1, and the issue is under discussion here. I believe the search API is intended to land in v2.1.
EDIT: the /v2/_catalog endpoint is available in distribution/registry:master
Before the new registry API:
If you are using REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY, you may list the contents of that directory:
user@host:~# tree $REGISTRY_FS_ROOTDIR/docker/registry/v2/repositories -L 2
***/docker/registry/v2/repositories
└── repository1
    └── image1
This may be useful for building a quick web UI, or if you have SSH access to the host storing the repositories:
ssh -T user@host -p <port> tree $REGISTRY_FS_ROOTDIR/docker/registry/ -L 2
Do look at the compose example, which deploys both v1 & v2 registries behind an nginx reverse proxy.
The latest version of Docker Registry, available from https://github.com/docker/distribution, supports the Catalog API (v2/_catalog). This makes it possible to list the repositories in the registry.
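In the Groovy spirit of the original question, a minimal sketch of calling that endpoint (the registry URL and credentials are placeholders):

import groovy.json.JsonSlurper

// Placeholder registry URL and credentials.
def conn = new URL('https://my-docker-registry.com/v2/_catalog').openConnection()
conn.setRequestProperty('Authorization', 'Basic ' + 'username:password'.bytes.encodeBase64().toString())
new JsonSlurper().parse(conn.inputStream).repositories.each { println it }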
If interested, you can try the Docker registry CLI I built to make it easy to use the search features in the new Docker Registry v2 distribution: https://github.com/vivekjuneja/docker_registry_cli
If you're on Windows, here's a PowerShell script to query v2/_catalog with basic HTTP auth:
https://gist.github.com/so0k/b59382ea7fd959cf7040
FYI, to use this you have to docker pull distribution/registry:master instead of docker pull registry:2. The registry:2 image version is currently 2.0.1, which does not include the catalog endpoint.

How to send a file to SharePoint from Linux, creating non-existent directories

I have a problem while sending a file from Linux to SharePoint. Everything is fine if I am uploading to an existing directory; I use this method:
curl --ntlm --user username:password --upload-file myfile.xls https://sharepointserver.com/sites/mysite/myfile.xls
Unfortunately, a problem arises when I point the target to a non-existing directory, like:
curl --ntlm --user username:password --upload-file myfile.xls https://sharepointserver.com/sites/mysite/nonexist/myfile.xls
I would like it to create all the necessary directories on the path. I've tried curl's --create-dirs option, but it doesn't work.
Any ideas how to achieve this? It doesn't actually have to be curl; I can use a different method available on Linux.
As the name (Client URL) suggests, curl will not create directories on remote servers over http/https while uploading files.
For downloads over http/https, the --create-dirs option applies only to the local machine, creating the local directories needed to store the download (for instance, when you are downloading content onto your local Linux machine).
However, when using ftp/sftp to a server, you can create new directories on the remote server.
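For example, a sketch of the ftp/sftp route with curl's --ftp-create-dirs flag, which creates missing remote directories on upload (the host, credentials, and paths are placeholders, and this assumes you have an SFTP-capable server to stage the file on; SharePoint itself does not speak SFTP):

curl --user username:password --ftp-create-dirs --upload-file myfile.xls sftp://fileserver.example.com/upload/newdir/myfile.xls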
