Mirrorlist issue: Failed to download metadata for repo 'epel' - Linux

Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Empty mirrorlist specified.
The firewall has ports 80, 22, and 69 open, along with the http service. Both the firewall and the httpd service are enabled via systemctl.
I'm not sure if it's the GPG keys that are tripping me up. I have the template of another working RHEL 8 repo, and I've matched its configuration, so the contents of /etc/yum.repos.d/ don't seem to be what matters here.
However, originally, instead of pointing to a local GPG key file in /etc/pki,
the repo pointed to an HTTP address that I can confirm is up and running on my repo server.
Are there any known fixes or places I can look to troubleshoot this? Or am I better off just rebuilding all of the repos?
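For reference, here is a minimal sketch of the kind of repo definition I mean in /etc/yum.repos.d/ (the names, paths, and URLs are placeholders, not my actual file):

[epel]
name=EPEL 8 mirror on local repo server
baseurl=http://repo.example.com/epel/8/Everything/x86_64/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
# or, as it was originally, a key served over HTTP from the repo server:
# gpgkey=http://repo.example.com/keys/RPM-GPG-KEY-EPEL-8

Using a baseurl= entry rather than an empty or unreachable mirrorlist= is what would avoid the "Empty mirrorlist specified" error in the first place.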

Related

How to import publicly available jelastic manifests from gitlab repositories in the jelastic dashboard?

I am currently transitioning from github to gitlab. Today, my code is present at both those locations. I have a jps manifest on github:
https://github.com/shopozor/services/blob/master/manifest.jps
and the very same manifest on gitlab:
https://gitlab.hidora.com/softozor/services/blob/master/manifest.jps
In the Jelastic dashboard, I am able to load my github manifest. However, I am not able to load my manifest versioned on gitlab:
What is the problem? Do I have to configure something special somewhere? Both manifests are publicly available. Why can't I import the gitlab manifest?
I also tried to use the raw manifest:
https://gitlab.hidora.com/softozor/services/raw/master/manifest.jps
and I've also tried to get the manifest file by means of the gitlab API, without success.
EDIT
I've tried to load this manifest. There you can see that I am running the command
wget "${baseUrl}/jelastic/postgres/execCmdScript.sh" -O /var/lib/pgsql/script.sh 2>&1
In the jelastic console, that command raises the error
[07:56:54 Shopozor.cluster:2]: ERROR: cmd [sqldb: 62900].response: {"result": 4109, "source": "JEL", "error": "The operation could not be performed.", "errOut": "", "nodeid": 62900, "exitStatus": 4, "out": "--2020-03-27 07:56:53-- https://gitlab.hidora.com/softozor/services/raw/install-postgres-in-dedicated-env/jelastic/postgres/execCmdScript.sh\nResolving gitlab.hidora.com (gitlab.hidora.com)... 10.102.1.82\nConnecting to gitlab.hidora.com (gitlab.hidora.com)|10.102.1.82|:443... failed: Connection refused."}
If I now take a computer that has never been authenticated with gitlab over ssh and run that very same command, it works. That is a bit strange, isn't it? What authentication does Jelastic need? It's all public and available to anyone, except to Jelastic?
After some more research, I was finally able to load my manifests from gitlab into jelastic. The problem is probably due to the gitlab configuration. Loading the jps from the gitlab repo doesn't work over https in the setup I have (which I didn't configure myself; it's CI/CD as a service). It works, however, over http.
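Concretely, that just means importing the manifest via the plain-http form of the raw URL instead of the https one (same path as quoted above; this only works because the repo is public):

https://gitlab.hidora.com/softozor/services/raw/master/manifest.jps    (fails from inside Jelastic: connection refused)
http://gitlab.hidora.com/softozor/services/raw/master/manifest.jps     (works)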

Jenkins Error 128 / Git Error 403: Jenkins can't connect to my Bitbucket repository

OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal, and there it seems to work; I can also push, clone, pull, etc.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if it was a permissions problem, and it doesn't seem to be the case.
But I see that there is no output when the "HEAD" argument is used.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns that error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
That poster has the same problem, but in a different situation; I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a problem with permissions. I would check the common mistakes first:
a) try https instead of http - my SCM only uses https,
b) check whether admin is the correct user - the SCM by default uses scmadmin.
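A quick way to test both of those guesses outside Jenkins, reusing the host and repository path from the question (adjust the user and scheme to your setup):

git ls-remote -h https://scmadmin@192.168.6.102:8005/scm/tes/repository-test.git HEAD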
I ran the exact same command twice: the first time with the proxy configuration which I need to access the internet, and the second time with the proxy server set to "none". Only the second run succeeded.
So there is a problem with the proxy.
I thought the proxy was not used for a NAT connection in VirtualBox...
I found the solution.
I had to reinstall Jenkins so that there is a user named "jenkins" with its own home directory.
I don't know if it is related or not, but I had configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I had disabled all proxy settings in Linux, which is why I was able to run from a terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I then found a "proxy.xml" file in the home directory of the jenkins user. I opened it and saw my old proxy settings.
I deleted all its content with vim, saved, restarted, and the error was gone.
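Roughly the steps, assuming the default Jenkins home of /var/lib/jenkins (the proxy.xml location may differ on your install):

sudo su - jenkins
cat ~/proxy.xml        # the stale proxy settings show up here
rm ~/proxy.xml         # or edit it and strip the proxy entries
exit
sudo systemctl restart jenkins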
There can also be a Git version mismatch.
I would suggest updating Git; maybe that will resolve your issue.

Failed to save forwarding and transfer options : Missing file

I am trying to configure the global forwarding and zone transfer options (Forwarding and Transfers) for BIND via Webmin.
It gives me the following output: Failed to save forwarding and transfer options : Missing file to read at bind8::./bind8-lib.pl line 391.
Has anyone encountered this and know how to fix it?
I managed to fix this issue by removing the BIND module from Webmin and checking the "Remove from users and reset access control settings?" option. I also purged bind from the terminal on the server. I then reinstalled the module, downloaded the packages again through Webmin, and it is now working fine.
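For reference, the terminal side of that was roughly the following (this assumes a Debian/Ubuntu server; package names and the package manager differ on other distributions):

sudo apt-get purge bind9 bind9utils      # remove BIND and its configuration
sudo apt-get install bind9 bind9utils    # reinstall cleanly, then re-add the Webmin module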

Proxy configuration for OpenShift Origin

I am setting up an OpenShift Origin server. The configuration I do relies heavily on the walkthrough description:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
After creating a project, I add a new app like this (successfully):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
OpenShift tries to build immediately, only to fail as follows:
F0222 15:24:58.504626 1 builder.go:204] Error: build error: fatal: unable to access 'https://github.com/openshift/ruby-hello-world.git/': Failed connect to github.com:443; Connection refused
I consulted the documentation about the proxy configuration:
https://docs.openshift.com/enterprise/3.0/admin_guide/http_proxies.html#git-repository-access
I concluded that I can simply edit the YAML descriptor of this specific app to include my corporate proxy:
...
source:
  type: Git
  git:
    uri: "git://github.com/openshift/ruby-hello-world.git"
    httpProxy: http://proxy.example.com
    httpsProxy: https://proxy.example.com
...
With that change the build proceeds.
Can the HTTP proxy be configured system wide?
Note: again, I simply downloaded the binaries (client and server) and did not install via Ansible. I also did not find any relevant properties in the openshift.local.config folder inside my server binary folder.
After some time I now know enough to answer my own question.
There are two places where one needs to deal with corporate proxy settings.
Docker
This thread will tell you what to do in detail:
Cannot download Docker images behind a proxy
In my case on RHEL 7.2 I needed to edit this file: /etc/sysconfig/docker
I had to add the following entries:
HTTP_PROXY="http://proxy.company.com:4128"
HTTPS_PROXY="http://proxy.company.com:4128"
Then a restart of the docker service was necessary.
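On RHEL 7.2 that restart is just the usual systemd command:

sudo systemctl restart docker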
Origin Proxy
What I missed originally was the place to configure our corporate proxy settings. Currently I have a cluster (1 master, 1 node) installed via ansible.
These are the relevant files to edit on the servers:
* /etc/sysconfig/origin-master
* /etc/sysconfig/origin-node
There are already placeholders in these files:
#NO_PROXY=master.example.com
#HTTP_PROXY=http://USER:PASSWORD@IPADDR:PORT
#HTTPS_PROXY=https://USER:PASSWORD@IPADDR:PORT
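Filled in (and uncommented) with the same proxy as in the Docker section, the entries would look roughly like this; afterwards the master and node services need a restart (the service names below match the sysconfig file names on my install, adjust if yours differ):

NO_PROXY=master.example.com
HTTP_PROXY=http://proxy.company.com:4128
HTTPS_PROXY=http://proxy.company.com:4128

sudo systemctl restart origin-master
sudo systemctl restart origin-node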
Documentation:
https://docs.openshift.org/latest/install_config/http_proxies.html

Puppet agent can't find server

I'm new to puppet, but picking it up quickly. Today, I'm running into an issue when trying to run the following:
$ puppet agent --no-daemonize --verbose --onetime
err: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
It would appear the agent doesn't know what server to connect to. I could just specify --server on the command line, but that will be of no use to me when this runs as a daemon in production, so instead, I specify the server name in /etc/puppet/puppet.conf like so:
[main]
server = puppet.<my domain>
I do have a DNS entry for puppet.<my domain> and if I dig puppet.<my domain>, I see that the name resolves correctly.
All puppet documentation I have read states that the agent tries to connect to a puppet master at puppet by default and your options are host file trickery or do the right thing, create a CNAME in DNS, and edit the puppet.conf accordingly, which I have done.
So what am I missing? Any help is greatly appreciated!
D'oh! Need to sudo to do this! Then everything works.
I had to use the --server flag:
sudo puppet agent --server=puppet.example.org
I actually had the same error, but I was using the two Learning Puppet VMs and trying to run the 'puppet agent --test' command.
I solved the problem by opening the /etc/hosts file on both the master and the agent VM and editing the line
***.***.***.*** learn.localdomain learn puppet.localdomain puppet
The IP address (the asterisks) was originally some random number. I had to change it on both VMs so that it was the IP address of the master node.
So, for experienced users, my advice is to check the /etc/hosts file and make sure that the IP addresses in there for the master and the agent not only match but are the same as the IP address of the master.
For other noobs like me, my advice is to read the documentation more carefully. This was a step in the 'setting up an agent VM' process that I totally missed xD
In my case I was getting the same error, but it was because the node's certificate still had to be signed on the puppet master server.
To check for pending certs, run the following:
puppet cert list
"node.domain.com" (SHA256) 8D:E5:8A:2*******"
Sign the cert for the node:
puppet cert sign node.domain.com
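Once the cert is signed, re-running the agent on the node should pick it up; either the --test form mentioned above or the --onetime invocation from the question works:

puppet agent --test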
Had the same issue today on puppet 2.6 on CentOS 6.4
All I did to resolve the issue was to check the usual stuff, such as hosts and resolv.conf, to ensure they were as expected (compared with a working server), and then:
Removed the /var/lib/puppet directory: rm -rf /var/lib/puppet
Cleared the certificate on the puppet master: puppetca --clean servername
Restarted the network service: service network restart
Re-ran puppet
Even though the resolv.conf was identical to the working server, puppet updated resolv.conf and immediately re-signed the certificate and replaced all the puppet lib files.
Everything was fine after that.
