I have installed Artifactory HA on two nodes, and the load balancer is configured to probe Artifactory's health at frequent intervals. The ping fails. The system is built on Azure. Both the Artifactory and nginx services are up and running.
Contents of $ARTIFACTORY_HOME/logs/artifactory.log
2018-10-26 18:59:22,633 [http-nio-8081-exec-5] [ERROR] (o.a.r.r.s.PingResource:76) - Ping failed since the server state is blocked
Contents of $ARTIFACTORY_HOME/logs/request.log
20181026185727|1|REQUEST|<ip>|anonymous|GET|/api/system/ping|HTTP/1.1|503|0
20181026185732|2|REQUEST|<ip>|anonymous|GET|/api/system/ping|HTTP/1.1|503|0
The request is coming in, but I can't make out why this error is happening. Kindly help.
I had the exact same issue; it turned out to be a license problem. As mentioned by @DarthFennec in the comments above, removing the expired trial license and adding the new license allowed the curl ping command to succeed.
/usr/bin/curl -f --insecure -u admin:password -X GET -H 'Content-Type:application/json' http://artent-01.mydomain:8081/artifactory/api/system/ping
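If you need to verify which license the node is actually running before replacing it, Artifactory's REST API can report it. A sketch based on the documented license endpoint, reusing the host and credentials from the ping command above:

/usr/bin/curl --insecure -u admin:password -X GET http://artent-01.mydomain:8081/artifactory/api/system/license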
Related
(a related, and perhaps simpler, problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with a web browser but not with terminal applications (wget, curl or apt update). Any clues? The problem seems to be interpreting the proxy's "PAC file"... is it? How do I translate it into Linux proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
Running env | grep -i proxy in a terminal gives
https_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
and the browser (Firefox) works fine for any URL, but:
wget http://google.com says:

Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26
Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request sent, awaiting response... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
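(From what I've read, curl and wget cannot evaluate a PAC file themselves; the *_proxy variables must point directly at a proxy host and port. If the PAC file resolves to, say, proxy.mydomain:8080 - a hypothetical host - the exports would presumably be:

export http_proxy="http://user:pass@proxy.mydomain:8080"
export https_proxy="http://user:pass@proxy.mydomain:8080"
export ftp_proxy="http://user:pass@proxy.mydomain:8080"

Is that the correct translation of the PAC setup?)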
Notes
(recent update: purging the exported proxy variables changes something, but I have not re-tested everything yet...)
The proxy configuration procedure I used (is there some plug-and-play PAC file tooling? do I even need a PAC file?)
Config procedures used
The machine was initially running with a direct, non-proxied internet connection... then it was moved to the LAN with the proxy.
Added "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd#etc" (supposing that is correct, because it was tested before with the user:pwd#http://pac.domain/proxy.pac syntax and Firefox prompted for a proxy login). (If the current proxy password uses the # character, does it need to change?)
Added the same "export *_proxy" lines to ~root/.profile. (Is that needed?)
(can reboot and test with echo $http_proxy)
Applied the visudo procedure described here.
Rebooted and browsed with Firefox, directly and without needing a login (good, it's working!). Testing env | grep -i proxy shows all the correct values, as expected.
Testing wget and curl as at the beginning of this report: the proxy bug appears.
Testing sudo apt update: same bug.
... after that, one more step: supposing that no such file exists for apt, I created one with sudo nano /etc/apt/apt.conf.d/80proxy and added 3 lines of Acquire::*::proxy "value"; with the value http://user:pass#pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded (file contents reconstructed below).
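The resulting /etc/apt/apt.conf.d/80proxy content, reconstructed from memory (same value on all three lines, pass URL-encoded):

Acquire::http::proxy "http://user:pass#pac._ProxyDomain_/proxy.pac:8080";
Acquire::https::proxy "http://user:pass#pac._ProxyDomain_/proxy.pac:8080";
Acquire::ftp::proxy "http://user:pass#pac._ProxyDomain_/proxy.pac:8080";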
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I am ignoring it for now to focus on the more relevant one)
After the (proxied) cable connection and the proxy configuration in the system (see the "Config procedures used" section above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When I change all the %23 in .profile to #, the wget error changes, but curl's does not. wget changes to "Error parsing proxy URL http://user:pass#pac._ProxyDomain_/proxy.pac:8080: Bad port number"
PS: when the password contained a $, something (the internal export http_proxy command, or a later use of http_proxy) confused it with a shell variable.
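(One thing I still want to test: curl can take the proxy credentials separately from the URL via -x plus -U/--proxy-user, which should sidestep the # and $ parsing problems - assuming the PAC host really is the proxy itself, which I am not sure of:

curl -x http://pac._ProxyDomain_:8080 -U 'user:p#ss' http://google.com

)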
CONTEXT-1.2
Same as CONTEXT-1.1 above, but with a proxy password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After the (proxied) cable connection, with no proxy configuration in the system (but confirmed that the connection works in the browser after the automatic popup login form).
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass#pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested, I checked:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything is commented out except passive_ftp = on
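(Another test I plan to try: download the PAC file itself, bypassing the proxy variables, and read the real proxy endpoints from its PROXY directives. The URL below is my best guess at where the PAC file is served:

wget --no-proxy -q -O - http://pac._ProxyDomain_/proxy.pac | grep -o 'PROXY [^";]*'

)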
Description
I'm crawling the website bjx.com, and all the code runs fine locally. Then I put the code on an Amazon server and ran it, and it failed.
What I have Done
I guess the website may be blocking the server, and I have tried a few things:
1) curl http://guangfu.bjx.com.cn/xtgc/List.aspx?classid=583
2) wget http://guangfu.bjx.com.cn/xtgc/List.aspx?classid=583
The error message is as follows:
Resolving news.bjx.com.cn (news.bjx.com.cn)... 114.113.145.103
Connecting to news.bjx.com.cn (news.bjx.com.cn)|114.113.145.103|:80... failed: Connection timed out.
Retrying.
--2019-04-23 05:45:00-- (try: 2) http://news.bjx.com.cn/list
Connecting to news.bjx.com.cn (news.bjx.com.cn)|114.113.145.103|:80...
A reference:
https://serverfault.com/questions/124952/testing-a-website-from-linux-command-line
My question:
How can I confirm whether the website has blocked me, and if it has, what can I do to solve the issue and crawl the website? Thanks.
How about making the program fail fast with an explicit timeout setting?
For example, to make curl fail if it can't get a response within 10 seconds:
curl -m 10 "http://guangfu.bjx.com.cn/xtgc/List.aspx?classid=583"
And to get past these issues, you can try running the spiders through a proxy or a VPN.
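A quick way to test the blocking theory is to compare a direct request against one routed through a proxy; if only the direct one times out, the server is likely blocking your AWS address range. The proxy address below is a placeholder for one you have access to:

curl -m 10 -s -o /dev/null -w "%{http_code}\n" "http://guangfu.bjx.com.cn/xtgc/List.aspx?classid=583"
curl -m 10 -s -o /dev/null -w "%{http_code}\n" -x http://yourproxy.example:3128 "http://guangfu.bjx.com.cn/xtgc/List.aspx?classid=583"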
NRPE on an Azure server (nrpe-srvr), user nrpe, executing the script /usr/local/naemon/libexec/check_curl_http.php, which I'll call script.
Desired output after ./script -U www.google.com:
Page OK: HTTP Status Code 200 - 11099 bytes in 0.** seconds | time=0.059 size=11099
I get the above output when running the script as root or nrpe.
Running sudo -u nrpe ./script -U www.google.com returns:
Error in opening page! Err:Failed to connect to [ipv6 addr] Network is
unreachable
However running su - nrpe -c './script -U www.google.com' works with the desired result.
Naemon reports:
CHECK_NRPE: Socket timeout after 30 secs
Other NRPE checks to the same host are working, so I think it's something to do with user execution of this specific script. I did have a deny from SELinux, but I adjusted the context; removing the context and setting SELinux to permissive yielded the same error. I enabled the NRPE log files with debugging, but other than "Running command" they don't reveal much. There is a:
WARNING: my_system() seteuid(0): Operation not permitted
in the logs, but according to the support documentation that is "normal" behavior.
I'll post this just in case someone else has this issue, and I'll tag Azure / AWS.
Essentially, cloud providers (mostly) have an internal proxy whose address is stored in the http_proxy and https_proxy environment variables. NRPE by default doesn't load environment variables, which also explains why su - nrpe (a login shell that reads the profile, including the proxy exports) worked while sudo -u nrpe and NRPE itself did not. I don't know if there is an option for this (the docs mention a bug when using a uid instead of a username; I was using a username), but it's simple enough to set the proxy explicitly for checks like this.
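For example, the check's command definition in nrpe.cfg can set the proxy inline for just that check. A sketch, with a placeholder proxy address - use whatever your cloud environment exports in http_proxy:

command[check_curl_http]=env http_proxy=http://10.0.0.1:3128 https_proxy=http://10.0.0.1:3128 /usr/local/naemon/libexec/check_curl_http.php -U www.google.com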
I've successfully completed the ./install.sh command. When I run the next command, ~/mcdev start -o, I receive this error:
Starting Microclimate
Pulling microclimate-file-watcher (ibmcom/microclimate-file-watcher:1809)...
ERROR: Get https://registry-1.docker.io/v2/ibmcom/microclimate-file-watcher/manifests/1809: net/http: TLS handshake timeout
Error starting Microclimate
I have the following running on my PC:
git version 2.17.1 (Apple Git-112); Docker version 18.06.1-ce; macOS High Sierra 10.13.6; latest Microclimate v18.09
My network is up and I don't have any proxy.
What am I missing or doing wrong?
I got this to work by changing to a different wifi/internet connection. I'm still not sure why my home wifi, with no restrictions that I'm aware of, would 'block' the dependency downloads on Microclimate startup.
If anybody has additional info to help find the root cause please feel free to share.
This docker error generally shows up when your internet connection is too slow for Docker, as it times out performing the TLS handshake. This would also explain why moving to another network fixes the issue.
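You can also take Microclimate out of the picture and exercise the registry connection directly, using the image named in the error; if this pull also stalls during the TLS handshake, the network rather than the tool is at fault:

docker pull ibmcom/microclimate-file-watcher:1809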
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I'm trying to connect my Jenkins app, hosted on a VM, to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository I get this:
Failed to connect to repository : Command "usr/bin/git ls-remote -h http://admin#192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin#192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in a terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
The attached screenshot shows that it's true.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see whether the permissions problem came from that; it seems that is not the case.
But I see that there is no output when the "HEAD" argument is used.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns the 403 error?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
That person has the same problem in a different form; I will try allocating more RAM to see if that does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a permissions problem. I would suggest checking these common mistakes first (a quick test follows the list):
a) try https instead of http - my SCM only accepts https;
b) check that admin is the correct user - the SCM by default uses scmadmin.
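Both are quick to test outside Jenkins. For (a), retry the same ls-remote over https, with the host and path taken from your error message:

git ls-remote -h https://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD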
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the damn proxy.
I had thought the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know whether it's related, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I had disabled all my proxy settings in Linux, which is why I was able to run in a terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
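For anyone hunting for the same thing: the settings live in proxy.xml in the Jenkins home directory. The path varies by install; /var/lib/jenkins is a common default:

cat /var/lib/jenkins/proxy.xml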
There could be a git version mismatch.
I would suggest you update git; maybe that will resolve your issue.