When I use the wget command in RHEL 6.5, I get the error
Error parsing proxy URL. Bad port number.
The command used to set the proxy was
export http_proxy="http://username:password@address:port/".
Yes I know this issue can be resolved by using
http_proxy=address wget --proxy-user=username --proxy-password=<password> url.
But I want to install a package, and during installation it will need to download a few other packages, so the proxy should already be set and ready before the installation. How can we resolve this?
The password I used caused this issue, as it had a # in it. I replaced the # with %23 (URL encoding) and now this is working fine.
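For example, with a hypothetical password p#ssword and the question's own address:port placeholders, the export would become:
export http_proxy="http://username:p%23ssword@address:port/"
Only the # inside the password is percent-encoded; the @ separating the credentials from the host stays as it is.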
Make use of --no-proxy.
Example: wget https://dl-ssl.google.com/linux/linux_signing_key.pub --no-proxy
This will ignore the currently configured proxy and try to download whatever we need.
I set up the proxy on Ubuntu using general system settings (manual option).
The problem was that I pasted the proxy URL including a port, while the port is also specified separately there, in a different field.
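To illustrate with placeholder values (not taken from the post), the duplicated port effectively produces something like:
http_proxy=http://proxy.example.com:8080:8080   # URL pasted with its port plus the separate port field: "Bad port number"
http_proxy=http://proxy.example.com:8080        # host in the URL field, port only in its own field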
Related
(a related and perhaps simpler problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with the web browser but not with terminal applications (wget, curl or apt update). Any clues? It seems the problem is interpreting the proxy's "PAC file"... Is it? How do I translate it into Linux's proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
In a terminal, env | grep -i proxy gives
https_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
and browser (Firefox) is working fine for any URL, but:
wget http://google.com says: Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26. Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request sent, awaiting response... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
Notes
(recent update: purging the exported proxy variables changes something, but I have not re-run all the tests since...)
The proxy configuration procedures that I used (is there some plug-and-play PAC-file generator? Do I need a PAC file?)
Config procedures used
The machine was running with a direct, non-proxied internet connection... Then the machine was moved to the LAN with the proxy.
Added "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd@etc" (supposing that is correct, because it was tested before with the user:pwd@http://pac.domain/proxy.pac syntax and Firefox prompted for the proxy login) (if the current proxy password contains a # character, does it need to change?)
Added the "export *_proxy" lines to ~root/.profile as well. (Is this needed?)
(can reboot and test with echo $http_proxy)
visudo procedure described here
Rebooted and browsed with Firefox directly, without needing to log in (good, it is working!). Testing env | grep -i proxy shows all the correct values, as expected.
Testing wget and curl as at the beginning of this report: the proxy bug.
Testing sudo apt update: same bug.
... after that, one more step: supposing that apt does not read those variables, I created /etc/apt/apt.conf.d/80proxy with sudo nano and added 3 lines of Acquire::*::proxy "value"; with value http://user:pass@pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded.
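For reference, a sketch of what that 80proxy file ends up containing, reusing the same placeholder value reported above (one line per protocol):
Acquire::http::Proxy "http://user:etc%23etc@pac._ProxyDomain_/proxy.pac:8080";
Acquire::https::Proxy "http://user:etc%23etc@pac._ProxyDomain_/proxy.pac:8080";
Acquire::ftp::Proxy "http://user:etc%23etc@pac._ProxyDomain_/proxy.pac:8080";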
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I am now ignoring it to focus on the more relevant one)
After connecting the (proxied) cable and configuring the proxy in the system (see the section "Config procedures used" above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When I change all the .profile entries from %23 to #, the error from wget changes, but curl's does not. wget now says "Error parsing proxy URL http://user:pass#pac._ProxyDomain_/proxy.pac:8080: Bad port number".
PS: when the password contained a $, the system (something in the export http_proxy command itself, or in a later use of http_proxy) confused it with a shell variable.
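A side note on that $ case, as a sketch with a made-up password: single quotes keep the shell from expanding $ inside the value, or the $ can be escaped inside double quotes:
export http_proxy='http://user:pa$sword@pac._ProxyDomain_:8080'    # single quotes: $ stays literal
export http_proxy="http://user:pa\$sword@pac._ProxyDomain_:8080"   # escaped $ inside double quotes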
CONTEXT-1.2
Same as CONTEXT-1.1 above, but with a password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After connecting the (proxied) cable, with no proxy configuration in the system (but confirmed that the connection works in the browser after the automatic popup login form).
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass@pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested, I checked:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything in it is commented out except passive_ftp = on
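One more check worth sketching here: the variables above point at the PAC file itself, while wget/curl expect a plain proxy host:port, which is what the PAC file returns internally. Something along these lines could reveal the real endpoint (the grep is only a crude filter, and realproxy.example.com:8080 is a placeholder, not a value from this report):
curl -s http://pac._ProxyDomain_/proxy.pac | grep -i "PROXY"
export http_proxy="http://user:pass@realproxy.example.com:8080"
export https_proxy="$http_proxy"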
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT Network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I am trying to connect my Jenkins app, hosted on a VM, to my Bitbucket Server instance, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
On this image you can see that it's true
Do you have an explanation?
EDIT:
I tried some other things, like using sudo or not, to see if the permission problem came from that, and it seems that is not the case.
But I see that there is no output when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, Git in Jenkins interprets it as no answer and returns the damn error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem, but in a different way. I will try to allocate more RAM to see if that does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a permissions problem. I would first suggest checking common mistakes:
a) try https instead of http (my SCM only uses https),
b) check whether admin is the correct user (the SCM uses scmadmin by default).
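For suggestion (a), you could re-run the very same check with only the scheme swapped (whether port 8005 actually serves HTTPS on your Bitbucket instance is something to verify, not a given):
git ls-remote -h https://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD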
Here I run the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the damn proxy.
I thought the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know if it is linked or not, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I disabled all my proxy settings in Linux, and then I was able to run, in a terminal, the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all its content with vim, saved, restarted, and the error was gone.
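For anyone who wants to reproduce that check, something along these lines should do it (the jenkins user's home location and shell access are assumptions on my part, not details from the post):
sudo su jenkins
env | grep -i proxy     # look for leftover proxy variables
cat ~/proxy.xml         # the stale proxy settings were sitting in this file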
There can be a Git version mismatch.
I would suggest updating Git once; maybe it will resolve your issue.
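For example, on a Debian/Ubuntu host (an assumption; the post does not say which distribution the Jenkins VM runs), checking and upgrading Git would look like:
git --version
sudo apt-get update && sudo apt-get install --only-upgrade git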
My Linux (RHEL 6) server has to use an HTTP proxy to connect to the outside world. While it works for other things like wget, it doesn't work for cabal.
cabal update -v3
shows errors like this:
407 - proxy authentication required cabal: Failed to download
http://hackage.haskell.org/packages/archive/00-index.tar.gz :
ErrorMisc "Unsucessful HTTP code: 407"
I tried to change the http_proxy environment variable to the format http://user:passwd@proxy:port, but it doesn't work either.
The same problem has been asked here
But I'm not allowed to use a proxy server like polipo; is there any other way to make cabal work behind a proxy?
You can use cntlm to talk to the proxy. It will handle the authentication issues. After configuring and installing cntlm, set up the new environment variables by modifying http_proxy, https_proxy, etc.
Your cabal command should work after that.
A more detailed procedure:
Download cntlm from here
It's a C program with no other dependencies, so it is very easy to make; just follow the instructions in the downloaded package.
After installing cntlm, follow this answer from Colonel Panic. Obviously, on Linux you need to change cntlm.exe to ./cntlm; I named the configuration file cntlm.conf.
The default listen port for cntlm is 3128; if you can't use that port, change it to something else, like 53124, then add this to your cntlm.conf or cntlm.ini file:
Listen 127.0.0.1:53124
Start cntlm in the background:
./cntlm -c cntlm.conf
Change your http_proxy environment variable to talk to the cntlm process rather than the real proxy.
export http_proxy=http://127.0.0.1:53124
That's it; cabal will work as well as ever.
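A quick end-to-end check, reusing the example port chosen above:
export http_proxy=http://127.0.0.1:53124
export https_proxy=http://127.0.0.1:53124
cabal update -v3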
You can also set up the http_proxy directly in the system settings:
http_proxy=http://username:password@hostname:port
Is it possible to run npm install behind an HTTP proxy, which uses NTLM authentication? If yes, how can I set the server's address and port, the username, and the password?
I solved it this way (OS: Windows XP SP3):
1. Download CNTLM installer and run it.
2. Find and fill in these fields in cntlm.ini. Do not fill in the Password field; it's never a good idea to store unencrypted passwords in text files.
Username YOUR_USERNAME
Domain YOUR_DOMAIN
Proxy YOUR_PROXY_IP:PORT
Listen 53128
3. Open console, and type these commands to generate password hashes.
> cd c:\the_install_directory_of_cntlm
> cntlm -H
Password: ...type proxy password here...
PassLM D6888AC8AE0EEE294D954420463215AE
PassNT 0E1FAED265D32EBBFB15F410D27994B2
PassNTLMv2 91E810C86B3FD1BD14342F945ED42CD6
4. Copy the above three lines into cntlm.ini, under the Domain field's line. Once more, do not fill in the Password field. Save cntlm.ini.
5. Open the Service Manager (from command line: services.msc), and start the service called "CNTLM Authentication Proxy".
6. In the console, type these lines:
> npm config set proxy http://localhost:53128
> npm config set https-proxy http://localhost:53128
> npm config set registry https://registry.npmjs.org
7. Now npm view, npm install etc. should work. Example:
> npm view qunit
...nice answer, no errors :)
The CNTLM answer was working for me, but connection errors made npm unusable. I fixed them by adding this header in CNTLM:
Header Connection: close
Another alternative is to use Px for Windows, which, like Cntlm and NTLMAps, talks NTLM on your behalf without you having to provide your credentials. It uses the logged-in user's credentials via SSPI.
Rather than running CNTLM, you could instead try running Fiddler when you need to use npm. I've found this works in fairly locked down environments (e.g. investment banks). It's also a tool that is fairly easy to make a business case for (if you need to) since it's invaluable for checking/creating/altering HTTP traffic.
I've had to go this route before due to usage of smartpass authentication - i.e. we didn't actually have passwords. At those locations setting up CNTLM would have been impossible.
You can pass the settings as parameters:
npm --proxy=http://username:password@proxyserver:port --https-proxy=http://username:password@proxyserver:port --registry=http://registry.npmjs.org/ install whateveryouwanttoinstall
CNTLM didn't work for me. I tried all possible combinations. npm was giving an authentication error. Fiddler came to the rescue and saved my time. It is easy to install and configure. Set the Fiddler rule to "Automatically Authenticate". In .npmrc, set these:
registry=http://registry.npmjs.org
proxy=http://127.0.0.1:8888
https-proxy=http://127.0.0.1:8888
http-proxy=http://127.0.0.1:8888
strict-ssl=false
It worked for me :)
Another Fiddler Option:
A second way to make Fiddler act as an HTTP proxy for NTLM and other protocols is to leave the auto authenticate options/rules defaults in place and go to this setting from the menu bar:
Tools > Telerik Fiddler Options > Connections tab
Click on the Allow remote computers to connect checkbox. You will see a dialog explaining the consequences of enabling this option. Restart Fiddler and update the .npmrc file as shown above. Whenever you need npm to access the registry site just run Fiddler. This setting won't affect the way Fiddler runs for other captures.
Open your .npmrc file in the C:\users\username\ folder using Notepad.
Add the lines below.
Replace DOMAIN, USERNAME, PWD and proxy.servername.com with your correct values (%5C is the URL-encoded backslash between domain and username).
Try to install or get packages now.
If you are trying this from VS 2017, close and reopen the IDE; only then does it work.
proxy=http://DOMAIN%5CUSERNAME:PWD@proxy.servername.com:6050
https-proxy=http://DOMAIN%5CUSERNAME:PWD@proxy.servername.com:6050
http-proxy=http://DOMAIN%5CUSERNAME:PWD@proxy.servername.com:6050
strict-ssl=false
CNTLM worked for me, as suggested by KOL. Thanks, KOL, for that. I just wanted to add that there are some oddities in individual proxies because of which the password may not be accepted when using a simple cntlm -H.
Use cntlm -I -M http://test.com, erase the older config values, copy in the output below, and you should be through.
The output looks like:
---------------------------------------------------
Auth NTLM
PassNT 8EE9B595A89F7D8774C2146FB302CBCF
PassLM 78901DA9889727EDE28EF9F2769485B9
----------------------------------------------------
My OS is Ubuntu. I have some code hosted on github.com, and everything was OK before, but one day, when typing:
git pull
I'm asked to enter my password as usual, and then I get this error:
error: couldn't connect to host while accessing https://ghosert@github.com/ghosert/VimProject.git/info/refs
fatal: HTTP request failed
until I try the sudo prefix:
sudo git pull
It works as before once again. It seems I lost the permission to access https when git needs it. Does anyone have an idea about this?
The error you posted doesn't indicate that the problem was permissions.
error: couldn't connect to host while accessing
https://ghosert@github.com/ghosert/VimProject.git/info/refs fatal: HTTP request failed
"HTTP request failed" sounds like a connectivity problem.
I would simply bet that your internet connection failed when you typed it the first time, and was back up when you typed it again, with sudo, which I doubt had any effect on fixing the problem.
Worse, it has probably messed up your permissions now; refer to sarnold's answer.
I faced the same problem today and here is my analysis and solution.
I had set proxy settings system-wide in my Chrome browser for some purpose, and it seems this created some environment variables which are causing my working shell to believe that there is no connectivity: I killed the proxy server when the job was finished, but the env variables were not removed.
Check if your env has some unnecessary variables set.
env: this command will show all the environment variables set in your shell.
Variables:
http_proxy, https_proxy
Remove them and everything will work.
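A minimal cleanup sketch for the current shell, using the variable names listed above:
unset http_proxy https_proxy
env | grep -i proxy   # should now print nothing proxy-related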