cURL Always Returns 401 With NTLM - linux

I'm working on a library to communicate with Microsoft Exchange using PHP. Everything works fine on my production servers, but I keep getting a 401 Unauthorized on my development machine. I tried using curl from the command line and I get the same results.
Using the following returns "401" on my machine:
curl https://mail.example.com/EWS/Exchange.asmx -w %{http_code} --ntlm -u username:password
The same exact call returns "302" on my production machines, which is what I expect.
My development machine is using curl 7.19.7 and my production machine is using curl 7.18.0.

This is an old question, but in case it can eventually help anybody, I figured I'd post an answer.
There's a bug with NTLM and curl on certain recent versions of Ubuntu (10.04 and up, I believe).
Ubuntu: https://bugs.launchpad.net/ubuntu/+source/curl/+bug/675974
CentOS: https://bugzilla.redhat.com/show_bug.cgi?id=799557
If you're using PHP's curl module on Ubuntu and your libcurl version is affected by this bug, that could explain why your authentication requests are failing.
If you add the verbose flag (-v) to your command, you should see something like this in the response:
gss_init_sec_context() failed: : Credentials cache file '/tmp/krb5cc_1000' not found
If you do see this, you're affected by the bug, and you'll have to either downgrade your libcurl or find another machine.
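For reference, a verbose version of the question's command looks like this (same placeholder host and credentials):
curl -v --ntlm -u username:password -w %{http_code} https://mail.example.com/EWS/Exchange.asmx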
I hope this helps :P

For all CentOS / RHEL 6.x users, please take a look at:
https://bugzilla.redhat.com/show_bug.cgi?id=953864

Related

Can we copy the curl package and run it on Linux without installation?

I have an old version of curl on Linux and I want to upgrade it, but the old version is being used by others, so I can't upgrade it in place. On Windows I can just copy curl onto any drive and run it without installing; can I do the same thing on Linux, i.e. just copy the package and run curl? If so, where do I get the package?
Is there any other way, one by which the old version will not be affected?
This may need some clarification: are you talking about libcurl or curl itself? (I ask because around the release of Buster I experienced some issues between programs needing libcurl3 and others wanting libcurl4 on Debian.) Is this what you mean, or do you just mean an older version of curl itself? In particular, how old is it and what programs require it? You may be able to just update the repos and have everything run off the newest versions.
If you're using a standard dynamically compiled curl, it's tricky, because it will try to load the old libcurl from /usr/lib/x86_64-linux-gnu/libcurl.so.4 or something like that. A statically compiled curl, though, you can just download and put wherever you want; it's standalone and static, after all.
Following the instructions from https://stackoverflow.com/a/56394968/1067003 with the compiler flags "-s -Os" to tell the compiler to optimize for size, here is a statically compiled 64-bit Linux curl version 7.65.0 with HTTPS support via a statically compiled OpenSSL version 1.1.1c, xz-compressed and base64-encoded: https://pastebin.com/HhMYYQAS
Use the following command to download and decompress it:
wget -O- 'https://pastebin.com/raw/HhMYYQAS' | base64 -d | xz -d > curl_static; chmod +x curl_static
(I can't inline the base64 because it's too big for Stack Overflow answers.)
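Once extracted, the binary runs from any directory. A quick sanity check that it works and lists HTTPS among its supported protocols:
./curl_static -V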
You can try out the static curl library:
https://github.com/moparisthebest/static-curl
I tried it on an old system, and it works wonders.
Even better, a build script is provided, so you can build it yourself if using a third-party binary is a concern.

nvm ls-remote command results in "N/A"

I'm trying to install Node with nvm, but whichever version I type, it's not available. When I type nvm ls-remote, I just get "N/A".
I'm able to access the Internet, so I can't figure out what could be going on.
Update with comment from LJHarb, who maintains nvm.sh
LJHarb suggests that a typical problem causing this is that "the SSL certificate authorities installed in your system have gone out of date". Checking this and trying to fix this would be a better first step.
In the case where you believe there is a problem on the nvm.sh side, LJHarb asks that users file a bug on nvm.sh's issue tracker.
Feel free to see the original text in the comments section.
Also, I'd like to point out that the below solutions are intended as workarounds only to be used temporarily if you're really in a bind. Permanently modifying the exported mirror or the nvm.sh script itself is not recommended.
Edit: found an easier fix
You can export the non-HTTPS version of the mirror nvm uses to grab its files:
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
Then nvm works
Pre edit
Had the same problem just now.
Looks like by default it tries to use curl if it's available on your system.
I assume you're on Linux too, so try running curl $NVM_NODEJS_ORG_MIRROR and see if you get the same error I did:
curl: (77) error setting certificate verify locations:
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
Maybe some cert is expired or otherwise misconfigured (or someone's doing something nasty). Until it's fixed, if you don't mind working around the security issue, you can find the nvm.sh file (it should be at ~/.nvm/nvm.sh if you followed the install instructions) and add a -k after the curl on line 17, so it looks like this:
-- nvm.sh --
nvm_download() {
16   if nvm_has "curl"; then
17     curl -k $*
18   elif nvm_has "wget"; then
19     # Emulate curl with wget
...
}
Don't forget to restart your shell, then try nvm ls-remote. Assuming the fix worked, you should be able to use nvm now.
Create a file called
~/.curlrc
In it insert one line
-k
Then try again.
(Warning: this answer disables curl's CA verification; "-k" is shorthand for "--insecure". Don't copy it blindly.)
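For reference, the file can be created in one line (again, only as a temporary workaround):
echo '-k' >> ~/.curlrc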
Most likely this is caused by curl not being able to find the certificates for https URLs (verify with curl $NVM_NODEJS_ORG_MIRROR). Instead of using the http URL as a workaround, it is better to fix curl by pointing it at the appropriate CA bundle (source1, source2). Add the line for your distribution to your .bashrc:
Ubuntu (assuming you have the ca-certificates package installed)
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
RHEL 7
export CURL_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
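After re-sourcing your .bashrc, you can check that curl now accepts the mirror's certificate (assuming NVM_NODEJS_ORG_MIRROR is set as in the answers above):
source ~/.bashrc
curl -I "$NVM_NODEJS_ORG_MIRROR/"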
Changing from
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist/
To
export NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist/
Worked for me :)
It seems the '/' is missing from the end of the URL; that is why you get the 301 Moved Permanently message.
So changing the link in nvm.sh from
http://nodejs.org/dist
to
http://nodejs.org/dist/
makes it work.
If you are using nvm behind a proxy, you need to set the proxy config for curl.
Edit or create the file ~/.curlrc and add this line with your proxy:
echo 'proxy=http://<proxy-user>:<proxy-pass>@<proxy-url>:<proxy-port>' >> ~/.curlrc
If your proxy does not need a user and password, you can use:
echo 'proxy=http://<proxy-url>:<proxy-port>' >> ~/.curlrc
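You can confirm the proxy is being picked up before retrying nvm (nodejs.org/dist is nvm's default mirror):
curl -I https://nodejs.org/dist/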
For others like me who land here after a search:
I had the same issue today on Ubuntu, but the cause turned out to be that the /etc/ssl/certs/ca-certificates.crt file was completely empty.
The solution was to run:
sudo update-ca-certificates
I had this same problem, but none of the other solutions helped. curl -v $NVM_NODEJS_ORG_MIRROR/ showed TLS 1.2 and no problem with certs. When I tried which curl, it turned out that I had an anaconda3/bin directory in my PATH, which has its own version of curl (not sure why they need that). Once I fixed my PATH, nvm ls-remote worked just fine. Hope this helps save someone else some frustration.
I solved my problem by manually upgrading nvm to the latest version
(
cd "$NVM_DIR"
git fetch --tags origin
git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" $(git rev-list --tags --max-count=1)`
) && \. "$NVM_DIR/nvm.sh"
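After the upgrade, a quick check that the new version works and can reach the mirror:
nvm --version
nvm ls-remote | tail -n 5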
I was having this issue lately. Changing to http://nodejs.org/dist/ did not work for me because it redirects to https, and that results in N/A from nvm ls-remote. So what I did was:
sudo update-ca-certificates
Then I edited ~/.nvm/nvm.sh and changed
http://nodejs.org/dist to https://nodejs.org/dist/ (added https and the trailing "/" to avoid redirects), and it worked.
My scenario may be rare, but I just want to add another data point to this thread:
Because of a local setup issue, I didn't want to install curl, and I had explicitly set an alias for curl to warn myself against installing it in the future. That resulted in nvm believing I had curl available and using curl to download. It worked after I removed my alias.
Solution
Check explicitly that your curl or wget is actually usable.
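A quick way to see what the shell would actually invoke (an alias shows up here):
type curl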
In my case the problem was with DNS. Where I work, DNS is set automatically, and when I ran curl -v $NVM_NODEJS_ORG_MIRROR/ it led to Could not resolve host: nodejs.org, and ping nodejs.org gave Temporary failure in name resolution. So I changed /etc/resolv.conf and added
nameserver 8.8.8.8
nameserver 8.8.4.4
and then nvm install --lts started working.
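If you want to confirm name resolution is back before retrying, getent hosts goes through the same resolver curl uses:
getent hosts nodejs.org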
I was running into this problem when using Vagrant 1.7.1 running an Ubuntu 14.04 box under VirtualBox 4.3.30 on Windows 7. I tried the suggestions above and nothing worked for me. I found a post that was related to the curl error I was getting when trying to run: curl $NVM_NODEJS_ORG_MIRROR
The error was: curl: (7) Couldn't connect to server
I was able to follow a suggestion on that post, and once I restarted my Vagrant box with a vagrant reload I was able to run nvm ls-remote, see a list of Node versions, and install. Here is what I did on the Vagrant box:
cd /etc/
sudo nano hosts
and changed
127.0.0.1 localhost
to:
0.0.0.0 localhost
Hope this helps anyone with the same issue. Thanks @Truong Nguyen
For me, this worked:
nvm alias default node
which points "default" at the latest installed Node version (8.11.1 in my case).
This worked for me on my Linux machine:
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
On Ubuntu Server, the interfaces aren't set up with DHCP by default. I forgot about this, and after I installed nvm I rebooted, which lost network connectivity without me realizing it. I know that you had network connectivity, but I am posting this for posterity as something to check. It's a stupidly simple thing that can easily be forgotten/missed.
For nvm-windows use nvm list available
I had the same problem on WSL2. I also had an https_proxy environment variable set to my company's HTTPS proxy server.
When working inside the company VPN, this did not work, since (I believe) WSL2 has a problem using the proxy settings correctly.
Outside the company VPN, unsetting this environment variable fixed the problem.
so (outside the VPN):
unset https_proxy
and then
nvm ls-remote --lts
worked.
I found a workaround that allowed me to do what I wanted, even though nvm list available still isn't working after trying everything on this list.
It might be an old version of curl, but I'm working on a server shared with others and am not allowed to update it until I get approval in a few days.
Ultimately I went to https://nodejs.org/download/release/ and found the newest version of Node I was looking to install, which was 16, located at https://nodejs.org/download/release/latest-v16.x/
Then simply ran:
nvm install v16.16.0
And the install worked fine, even though I couldn't pull available versions via nvm!

Adding Support for SCP and SFTP for Curl on Linux

I've been desperately trying to add SFTP and SCP support to curl on my CentOS box. I found something resembling a solution here:
http://andrewberls.com/blog/post/adding-sftp-support-to-curl
I followed these steps but found that when attempting to get a file via both SCP and SFTP, the connection hangs once the file has been found. I cannot fix this and cannot find an alternative solution.
I have to use Curl for a job at work and therefore cannot use another lib. Has anyone managed to successfully add support for SCP and SFTP on Curl? I have a test server setup and other protocols such as FTP work as expected.
Any help would be greatly appreciated!
Thanks in advance,
Peter
Although curl does support SFTP, support isn't included in the default package.
This website: http://andrewberls.com/blog/post/adding-sftp-support-to-curl provided the details that helped me add the required SFTP support. As the site didn't work 100% for me, I've outlined the steps I took below where they differed.
Manually downloading libssh2 didn't work for me, so I used yum to install the two packages:
yum install libssh2 libssh2-devel
and then followed step two, configuring curl to build against the libraries above.
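The configure step looked roughly like this, run from the curl source directory (a sketch; --with-libssh2 is curl's configure flag for libssh2 support, and the /usr prefix assumes the yum packages above):
./configure --with-libssh2=/usr
make
sudo make install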
The final step was to restart sshd:
service sshd restart
There you have it. Double-check that SFTP is on the list of supported protocols by running
curl -V
When I initially tested, curl complained about key authentication issues, but you can force curl to try any authentication method to connect:
curl --anyauth sftp://user:passwd@127.0.0.1/directory -o Test.txt
This will round-robin the different supported authentication methods and let you use your login credentials instead.
I hope this helps alleviate any other headaches for people trying to achieve the same.

Cygwin Google Cloud SDK Install Error 111

I'm receiving the below error at work (behind a proxy) when running the Google Cloud SDK install script (gcloud.components.update): Unable to fetch /components-2.json
I've viewed this thread
Error while installing Google Cloud SDK in Cygwin : Unable to fetch https://dl.google.com/dl/cloudsdk/release/components-2.json
I've added in the proxy information, and I can download the installer (if I don't set the proxy, I can't access this):
curl /dl/cloudsdk/release/install_google_cloud_sdk.bash | bash
I can run curl https://dl.google.com/dl/cloudsdk/release/components-2.json
I'm running Python 2.7 and I'm on Windows XP. If it's a proxy issue, why can I download the files and access the page but can't run the login script?
I can download a local version of components-2.json, and I can also download all of the files that this file refers to.
Is there a config file I can edit to make it look at the local versions of these files?
I can't seem to find where the address /components-2.json is specified.
Cheers and thanks,
Rohan
PS: sorry I couldn't include more than 2 links, as I don't have the reputation, even though the posting requirements mention showing research.
As an alternative, please try the following:
Download google-cloud-sdk.zip or google-cloud-sdk.tar.gz
Unpack the archive
Run the ./google-cloud-sdk/install.sh script
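On Linux, those three steps look roughly like this (the tarball URL is an assumption based on the release directory mentioned in the question):
curl -O https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
tar -xzf google-cloud-sdk.tar.gz
./google-cloud-sdk/install.sh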
This is described in small print on the Google Cloud SDK page, underneath the text box that suggests running the command you're likely using:
curl https://sdk.cloud.google.com | bash

Can't log in to OpenShift Origin?

I installed OpenShift Origin on a CentOS 6.5 minimal installation on a Windows Azure VPS using oo-installer. I used the same server as both broker and node. Installation completed without any errors. After installing, I tried to log in to the OpenShift console using openshift/openshift and admin/admin as username/password, but it gives an error stating bad password. I tried to connect to the server using rhc, with the same results. Now, where can I change my password? Where are OpenShift's log files? How can I diagnose my OpenShift installation?
The default credentials are 'demo/changeme', according to this guide. They worked for my Origin install.
You can change the password and add users with htpasswd (check out this post). So, for example:
sudo yum install -y httpd-tools
htpasswd -c /etc/openshift/htpasswd demo
(On CentOS, the htpasswd tool is provided by the httpd-tools package, not a package named htpasswd.)
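To add more users later, run htpasswd without -c so the existing file isn't overwritten (anotheruser is a placeholder):
htpasswd /etc/openshift/htpasswd anotheruser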
A lot of the relevant log files were in /var/log/openshift for me.
I found the command oo-diagnostics -v --abortok useful for diagnosing issues.
Hope that helps!
