Proxy configuration for OpenShift Origin on RHEL

I am setting up an OpenShift Origin server. My configuration relies heavily on the walkthrough description:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
After creating a project, I add a new app like this (successfully):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
OpenShift tries to build immediately, only to fail as follows:
F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access 'https://github.com/openshift/ruby-hello-world.git/': Failed connect to github.com:443; Connection refused
I consulted the documentation about proxy configuration:
https://docs.openshift.com/enterprise/3.0/admin_guide/http_proxies.html#git-repository-access
and concluded that I can simply edit the YAML descriptor for this specific app to include my corporate proxy:
...
source:
  type: Git
  git:
    uri: "git://github.com/openshift/ruby-hello-world.git"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
...
With that change the build proceeds.
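For reference, the same change can be made in place with the oc client (a minimal sketch; bc/ruby-hello-world is assumed to be the BuildConfig name that oc new-app generated):

oc edit bc/ruby-hello-world

and then add the httpProxy and httpsProxy lines under the source section as shown above.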
Can the HTTP proxy be configured system-wide?
Note: again, I simply downloaded the binaries (client and server) and did not install via Ansible. I also did not find any relevant properties in the openshift.local.config folder inside my server binary directory.

After some time I now know enough to answer my own question.
There are two places where one needs to deal with corporate proxy settings.
Docker
This thread will tell you what to do in detail:
Cannot download Docker images behind a proxy
In my case on RHEL 7.2 I needed to edit this file: /etc/sysconfig/docker
I had to add the following entries:
HTTP_PROXY="http://proxy.company.com:4128"
HTTPS_PROXY="http://proxy.company.com:4128"
Then a restart of the docker service was necessary.
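For completeness, a NO_PROXY entry for internal hosts may also be needed, and on RHEL 7 the restart goes through systemd (the values below are placeholders, not my actual settings):

NO_PROXY="localhost,127.0.0.1,.company.com"

sudo systemctl restart docker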
Origin Proxy
What I missed originally was the place to configure our corporate proxy settings. Currently I have a cluster (1 master, 1 node) installed via Ansible.
These are the relevant files to edit on the servers:
* /etc/sysconfig/origin-master
* /etc/sysconfig/origin-node
There are already placeholders in these files:
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
Documentation:
https://docs.openshift.org/latest/install_config/http_proxies.html
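A minimal sketch of what the uncommented entries might look like, followed by the service restarts (the proxy host/port and the NO_PROXY list are placeholders; the origin-master/origin-node service names match the sysconfig files above):

NO_PROXY=master.example.com,.cluster.local,.svc
HTTP_PROXY=http://proxy.company.com:4128
HTTPS_PROXY=http://proxy.company.com:4128

sudo systemctl restart origin-master   # on the master
sudo systemctl restart origin-node     # on the node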

Related

MirrorList Issue: Failed to download metadata for repo 'epel'

Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Empty mirrorlist specified.
The firewall has ports 80, 22, 69, and the http service all configured, and the firewall and httpd services are enabled via systemctl.
I'm not sure if it is the GPG keys that are tripping me up. I have the template of another working RHEL 8 repo and have matched its configuration in /etc/yum.repos.d/, but that does not seem to matter.
However, originally, instead of pointing to a local GPG key file in /etc/pki, it pointed to an HTTP address that I can confirm is up and reachable from my repo server.
Are there any known fixes, or places I can look, to fix this? Am I better off just rebuilding all of the repos?
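For context, a minimal sketch of the kind of repo definition being described (the repo server URL and key path are placeholders, not the actual configuration):

# /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 8
baseurl=http://repo-server.example.com/epel/8/Everything/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8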

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I am a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For deployment, I am using the 'Publish Over SSH' plugin, and I have followed all the steps to configure this plugin as mentioned here: https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the error below:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from the Jenkins server to the web server, and it succeeds.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This is the same key I use to SSH into the web server.
The link below suggests many different solutions, but none works in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty; I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake I'm making in my configuration.
In my case, the following steps solved the problem (solution based on Ubuntu 22.04).
Add these two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Restart the sshd service:
sudo service sshd restart
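It can also help to test the same key manually from the Jenkins host first, to rule out Jenkins-side issues (a sketch; the key path and host are placeholders):

ssh -i /path/to/webserver-key.pem ec2-user@WEB_SERVER_IP "echo connection ok"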
You might consider the following:
a. From the screenshot you’ve provided, it seems that you have checked the 'Use password authentication, or use different key' option, which will require you to add your key and password (inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified is existing otherwise the connection may fail.
Here's the sample config

How can Azure Pipelines get sources from internal TFS and external Git? How can I update the proxy?

I am setting up Azure Pipelines. A few of them get their sources from GitHub, and I am trying to set up pipelines that reach TFS on the intranet. I created a service connection of type “Azure Repos/Team Foundation Server” using this Other Git URL: https://tfs.myCie.com/defaultcollection/MyProject/_versionControl
When I run the pipeline, it takes some time, then it displays a 504 Timeout error, but the pipeline is still pending. After a while, it fails with this message in the step “Checkout repository#master to s”:
git -c http.proxy="http://myProxy.myCie.com:80" fetch --force --tags --prune --progress --no-recurse-submodules origin
fatal: unable to access 'https://tfs. myCie.com/defaultcollection/myProject/_versionControl/': OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to tfs.oecd.org:443
##[warning] Git fetch failed with exit code 128, back off 3.667 seconds before retry.
The security team says that I should use a PAC file to set up the proxy, which should enable both intranet and Internet calls, but I don’t see how to update the proxy settings of my self-hosted Windows agent.
Can I specify a file? Can there be one configuration for the Internet and another one for the intranet?
I don’t see how to update the proxy settings of my Self-Hosted Windows Agent. Can I specify a file?
For the agent you need to create a .proxy file with the proxy URL in the root directory of your agent.
* Locate the root directory of your build agent (this is the folder that contains run.exe and the _work folder).
* Open a Command Prompt at this location.
* Type this command, replacing PROXYIP and PORT with your values:
echo http://PROXYIP:PORT > .proxy
* Check that your .proxy file is created in the right place.
* Optional: if your proxy needs authentication, set these environment variables:
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password
* Restart the service for your build agent.
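Putting it together, assuming the agent is installed in C:\agent, the sequence might look like this (the proxy address and credentials are placeholders):

cd C:\agent
echo http://proxy.company.com:8080 > .proxy
set VSTS_HTTP_PROXY_USERNAME=user
set VSTS_HTTP_PROXY_PASSWORD=password

Then restart the agent service so it picks up the new settings.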
When you know that you need a proxy at the time of the installation, you can configure the proxy settings right when you call config.cmd:
./config.cmd --proxyurl http://127.0.0.1:8888 --proxyusername "user" --proxypassword "password"
For details, please refer to this blog.
Here is the official document you can refer to.

Jenkins Error 128 / Git Error 403: Jenkins can't connect to my Bitbucket repository

OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also hosted on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in a terminal... and there it seems to work. I can also push, clone, pull, etc.
In this image you can see that it works.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if the problem came from permissions, and it seems that it is not the case.
But I see that there is no result when the "HEAD" argument is used.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
That person has the same problem, but in a different way; I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a permissions problem. I would suggest checking the common mistakes first:
a) try https instead of http - my SCM only accepts https,
b) check whether admin is the correct user - the SCM by default uses scmadmin.
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory proxy server to "none".
So there is a problem with the proxy.
I was thinking that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know if it is related or not, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My issue turned out to be linked to my proxy settings.
I had disabled all my proxy settings in Linux, which is why I was able to run in a terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I then found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all of its content with vim, saved, restarted, and the error was gone.
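For anyone hitting the same issue, the check might look roughly like this (the proxy.xml location assumes the jenkins user's home directory, commonly /var/lib/jenkins on package installs):

sudo su - jenkins
cat ~/proxy.xml    # old proxy settings that Jenkins still picks up
git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD    # retest as the jenkins user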
There can also be a git version mismatch.
I would suggest you update git once; maybe it will resolve your issues.

How to deploy shield with Kibana on Bluemix

I am trying to deploy Kibana on the Bluemix PaaS. Because Kibana is a Node.js application, it can be deployed as such on Bluemix. All I have to do is:
* Provide a simple manifest.yml file that details the app name and a couple of other things
* Provide a Procfile that has just one line: web: bin/kibana --port=$PORT
Thus, I can run Kibana on Bluemix. Note that this is pushed via Cloud Foundry.
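For reference, a minimal sketch of those two files (the app name and memory limit are placeholder values, not taken from the actual deployment):

manifest.yml:
applications:
- name: my-kibana
  memory: 512M

Procfile:
web: bin/kibana --port=$PORT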
Also, I was able to install the marvel and sense plugins for Kibana.
Now, I have installed the Shield plugin. This plugin requires an SSL key and an SSL cert file to run. The paths to these files must be provided in the kibana.yml file.
After installation, I tested the shield plugin natively and it worked just fine.
Here is the layout of the directory structure:
bin(d)
config(d)
installedPlugins(d)
node_modules(d)
sslFiles(d)
manifest.yml
Procfile
(d) represents directories. The sslFiles folder contains the ssl key and ssl cert files.
Before I could push to Bluemix, I knew that the paths to the SSL files would have to be relative to the app in Bluemix. Thus, in the kibana.yml file, I specified them as:
kibana.ssl.key: app/sslFiles/kibana.key
kibana.ssl.cert: app/sslFiles/kibana.cert
I did this because, in Bluemix, I could see the following directory structure:
app(d)
    bin(d)
    config(d)
    installedPlugins(d)
    node_modules(d)
    sslFiles(d)
    manifest.yml
    Procfile
Indentation represents containment. So I pushed it to Bluemix using Cloud Foundry, but now I get a "502 Bad Gateway: Registered endpoint failed to handle the request" error. I tried changing the paths to sslFiles/kibana.key, but then I got a "cannot find path sslFiles/kibana.key" staging error.
What is responsible for my 502 error? Is it the path to the sslFiles? If so, how can I properly provide the paths?
