How can Azure Pipelines get sources from internal TFS and external Git? How can I update the proxy? - azure

I am setting up Azure Pipelines. I have a few pipelines that get sources from GitHub, and I am now trying to set up pipelines that reach TFS on the intranet. I created a Service Connection of type "Azure Repos/Team Foundation Server" using this Other Git URL: https://tfs.myCie.com/defaultcollection/MyProject/_versionControl
When I run the pipeline, it takes some time, then it displays a 504 Timeout error while the pipeline is still pending. After a while, it fails with this message in the step "Checkout repository@master to s":
git -c http.proxy="http://myProxy.myCie.com:80" fetch --force --tags --prune --progress --no-recurse-submodules origin
fatal: unable to access 'https://tfs.myCie.com/defaultcollection/myProject/_versionControl/': OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to tfs.oecd.org:443
##[warning] Git fetch failed with exit code 128, back off 3.667 seconds before retry.
The security team says that I should use a PAC file to set up the proxy, which should enable both intranet and Internet calls, but I don't see how to update the proxy settings of my self-hosted Windows agent.
Can I specify a file? Can there be a configuration for Internet and another one for intranet?

I don't see how to update the proxy settings of my Self-Hosted Windows Agent. Can I specify a file?
For the agent you need to create a .proxy file with the proxy URL in the root directory of your agent.
Locate the root directory of your build agent (this is the folder that contains run.exe and the _work folder).
Open a Command Prompt at this location.
Type this command, but replace PROXYIP & PORT with your values:
echo http://PROXYIP:PORT > .proxy
Check that the .proxy file was created in the right place.
Optional: if your proxy needs authentication, you must also set these environment variables:
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password
Restart the service for your build agent.
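To answer the intranet vs. Internet part of the question: as far as I know the agent does not read PAC files, but it does honour a .proxybypass file placed next to the .proxy file. Each line is a regular expression matched against the request URL, and matching hosts are contacted directly instead of through the proxy. A minimal sketch, assuming your internal TFS host really is tfs.myCie.com:
echo tfs\.myCie\.com > .proxybypass
With that, GitHub traffic keeps going through the proxy while the TFS fetch stays on the intranet.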
If you already know you will need a proxy at installation time, you can configure the proxy settings directly when you call config.cmd:
./config.cmd --proxyurl http://127.0.0.1:8888 --proxyusername "user" --proxypassword "password"
For details, please refer to this blog.
Here is the official document you can refer to.

Related

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For deployment, I am using the 'Publish Over SSH' plugin, and I have followed all the steps to configure this plugin as described here: https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the error below:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The link below suggests many different solutions, but none of them works in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have left 'Remote Directory' empty. I don't know whether I have to specify a directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user, '/home/ec2-user/publish', and used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake in my configuration.
In my case, the following steps solved the problem (on Ubuntu 22.04).
Add these two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Restart the sshd service:
sudo service sshd restart
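Before going back to Jenkins, it may help to confirm from the Jenkins box that key authentication now works; the path and address below are just placeholders:
ssh -i /path/to/your-key.pem ec2-user@<web-server-ip>
If that logs you in without asking for a password, the plugin's Test Configuration should pass as well.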
You might consider the following:
a. From the screenshot you've provided, it seems that you have checked the Use password authentication, or use different key option, which requires you to fill in a key and password (the inputs from those fields are what will be used for the SSH connection). If you use the same SSH key and passphrase/password on all of your servers, you can untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified actually exists; otherwise the connection may fail.
Here's the sample config

Windows Puppet agent does not connect to the AWS OpsWorks Puppet Enterprise master

I have created the Puppet master using AWS OpsWorks, and I am able to add Amazon Linux nodes to the Puppet master automatically.
I am having issues trying to add a Windows 64-bit node to my Puppet master by following this link: https://puppet.com/docs/pe/2017.3/installing/installing_agents.html#install-windows-agents-with-the-msi-package
I copied puppet-agent-x64.msi from its location on the Puppet master, /opt/puppetlabs/server/data/packages/public//windows-x86_64-/, to the Windows node and ran the installer to install the agent. The installation is successful, and the Start Menu contains a Puppet folder with shortcuts for running the agent manually, running Facter, and opening a command prompt for use with Puppet tools.
But the Windows node is not showing up in the Puppet web UI, and when I try to run the Puppet agent I get this error:
"Running Puppet agent on demand ...
Error: Could not request certificate: Error 403 on SERVER: Forbidden request: /puppet-ca/v1/certificate/ca (method :get). Please see the server logs for details.
Exiting; failed to retrieve certificate and waitforcert is disabled
Press any key to continue . . ."
You'll need to set allow_unauthenticated_ca to true on your OpsWorks master and then run puppet on it to make the change. Afterwards, you should be able to install the agent even if you're not provisioning from AWS or choose not to use the userdata script.
Steps:
Log in to the console.
Click on Classification.
Under PE Infrastructure, select PE Master.
Go to the Configuration tab.
Look for the class puppet_enterprise::profile::master.
Under Parameters, select allow_unauthenticated_ca and set it to true.
Screenshot:
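If it helps, the change can then be exercised from the command line as well (standard Puppet commands; adjust to your setup).
On the master (applies the classification change):
puppet agent -t
On the Windows node, from an elevated command prompt:
puppet agent -t
The certificate request should now go through instead of returning 403; depending on your autosign settings you may still need to sign it in the console before the node appears.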

Jenkins Error 128 / Git Error 403: Jenkins can't connect to my Bitbucket repository

OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in a terminal... and there it seems to work. I can also push, clone, pull, etc.
You can see on this image that it works.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if the problem came from permissions, and it seems that is not the case.
But I see that there is no output when the "HEAD" argument is used.
Do you think that, because "HEAD" gives no result, Git in Jenkins interprets it as no answer and returns the error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way. I will try to allocate more RAM to see if that does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a permissions problem. I would suggest checking the common mistakes first:
a) try https instead of http - my SCM only accepts https,
b) check whether admin is the correct user - the SCM by default uses scmadmin.
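For example, a quick check from the Jenkins machine could look like this (https, the port and scmadmin are only defaults to try; adjust them to your install):
git ls-remote -h https://scmadmin@192.168.6.102:8005/scm/tes/repository-test.git HEAD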
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory proxy server to "none".
So there is a problem with the proxy.
I thought that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know whether it is related or not, but I also configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I disabled all my proxy settings in Linux, so I was able to run in a terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted its content with vim, saved, restarted, and the error was gone.
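A less drastic fix, if I am not mistaken, is to keep the proxy but add the Bitbucket host to the "No Proxy Host" list under Manage Jenkins > Manage Plugins > Advanced; that list ends up in the same proxy.xml. To inspect the file (assuming the default Jenkins home of /var/lib/jenkins):
sudo cat /var/lib/jenkins/proxy.xml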
There could also be a Git version mismatch.
I would suggest updating Git; maybe that will resolve your issue.

Proxy configuration for OpenShift Origin

I am setting up an OpenShift Origin server. The configuration I am doing relies heavily on the walkthrough description:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
After creating a project, I add a new app like this (successfully):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
OpenShift tries to build immediately, only to fail as follows:
F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access 'https://github.com/openshift/ruby-hello-world.git/': Failed connect to github.com:443; Connection refused
I consulted the documentation about the proxy configuration:
https://docs.openshift.com/enterprise/3.0/admin_guide/http_proxies.html#git-repository-access
I concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy:
...
source:
  type: Git
  git:
    uri: "git://github.com/openshift/ruby-hello-world.git"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
...
With that change the build proceeds.
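If I remember correctly, the same edit can also be made in place with the CLI instead of re-creating the app (bc is the short name for buildconfig):
oc edit bc/ruby-hello-world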
Can the HTTP proxy be configured system wide?
Note: again, I simply downloaded the binaries (client, server) and did not install via Ansible. And I did not find relevant properties in the openshift.local.config folder inside my server binary folder.
After some time I now know enough to answer my own question.
There are two places where one needs to deal with corporate proxy settings.
Docker
This thread will tell you what to do in detail:
Cannot download Docker images behind a proxy
In my case on RHEL 7.2 I needed to edit this file: /etc/sysconfig/docker
I had to add the following entries:
HTTP_PROXY="http://proxy.company.com:4128"
HTTPS_PROXY="http://proxy.company.com:4128"
Then a restart of the docker service was necessary.
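On RHEL 7 that is simply (assuming systemd):
sudo systemctl restart docker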
Origin Proxy
What I missed originally was the place to configure our corporate proxy settings. Currently I have a cluster (1 master, 1 node) installed via ansible.
These are the relevant files to edit on the servers:
* /etc/sysconfig/origin-master
* /etc/sysconfig/origin-node
There are already placeholders in these files:
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
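Uncommented and filled in, that looks roughly like this (proxy values copied from the Docker section above, service names assuming an Ansible install), followed by a restart of the two services:
HTTP_PROXY=http://proxy.company.com:4128
HTTPS_PROXY=http://proxy.company.com:4128
NO_PROXY=master.example.com
sudo systemctl restart origin-master
sudo systemctl restart origin-node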
Documentation:
https://docs.openshift.org/latest/install_config/http_proxies.html

Development build of my Node.js site

I have a production build of my site on a VPS, and I deploy to a bare Git repo which has a hook that checks out the commits into an app directory. I use forever to keep my app running from the app directory.
What I want to do is set up a development build which I can push to. The development build could be hosted under a subdomain on my VPS. However, I'll need an authentication step to prevent anyone and everyone from accessing the development site. How could I put authentication in front of an entire site with few (if any) changes to my application?
Why don't you just run it on a port that isn't exposed to the public? Then you could create an SSH tunnel and access it via localhost.
Add a dev ssh user to your VPS and assign it a password.
Your ssh tunnel would look like this (just adjust your ports accordingly):
ssh -N -L8808:localhost:8808 user@destination.com
You'll be prompted for your password, and then you would leave the terminal session open and go to your dev server via "http://localhost:8808".
Another option (something I typically do) is to have a file checked into your repo named "config.sample.json" with configuration information (in this case your username/password [development] restriction). Then you also set up Git to ignore "config.json" (so you don't accidentally commit it to your repository and have to edit files on your production deployments).
Next, you would write a function that requires that config.json file and uses its configuration data if the file is found; otherwise it loads up as "production".
Then you would deploy your code to your development directory, afterwards copy or rename "config.sample.json" to "config.json", and make any edits needed in that file to set up debugging, access control, etc.
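In shell terms those steps boil down to something like this (filenames taken from the description above):
echo "config.json" >> .gitignore    # done once, committed alongside config.sample.json
cp config.sample.json config.json   # done on the dev box after deploying
Then edit config.json with the development credentials (cp rather than mv keeps the sample file around for the next deploy; either works).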
