I'm trying to install Node with nvm, but no matter which version I type, it's not available. When I type nvm ls-remote I just get "N/A".
I'm able to access the Internet, so I can't figure out what could be going on.
Update with comment from LJHarb, who maintains nvm.sh
LJHarb suggests that a typical cause of this problem is that "the SSL certificate authorities installed in your system have gone out of date". Checking for and fixing that is a better first step.
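A quick way to check is to look at the certificate chain the mirror presents. A minimal sketch, assuming nvm's default mirror at nodejs.org:
# Print the validity dates of the certificate nodejs.org serves
openssl s_client -connect nodejs.org:443 -servername nodejs.org </dev/null 2>/dev/null | openssl x509 -noout -dates
# If curl distrusts the chain, this HEAD request prints a certificate error
curl -I https://nodejs.org/dist/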
In the case where you believe there is a problem on the nvm.sh side, LJHarb asks that users file a bug on nvm.sh's issue tracker.
Feel free to see the original text in the comments section.
Also, I'd like to point out that the solutions below are intended only as temporary workarounds if you're really in a bind. Permanently modifying the exported mirror or the nvm.sh script itself is not recommended.
Edit: Found easier fix
You can export the non https version of the mirror it uses to grab the stuff:
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
Then nvm works
Pre edit
Had the same problem just now.
Looks like by default it tries to use curl if it's available on your system.
I assume you're on Linux too, so try running curl $NVM_NODEJS_ORG_MIRROR and see if you get the same error I did:
curl: (77) error setting certificate verify locations:
CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
Maybe some cert is expired or otherwise misconfigured (or someone's doing something nasty). Until it's fixed, if you don't mind working around the security issue, you can find the nvm.sh file (it should be at ~/.nvm/nvm.sh if you followed the install instructions) and add a -k on line 17 after the curl, so it looks like this:
-- nvm.sh --
nvm_download() {
16    if nvm_has "curl"; then
17      curl -k $*
18    elif nvm_has "wget"; then
19      # Emulate curl with wget
   ...
}
Don't forget to restart your shell, then try nvm ls-remote. Assuming the fix worked, you should be able to use nvm now.
Create a file called
~/.curlrc
and insert this single line in it:
-k
Then try again.
(Warning: This answer disables curl's CA verification. "-k" is shorthand for "--insecure". Don't copy it blindly.)
Most likely this is caused by curl not being able to use the certificates for https URLs (verify with curl $NVM_NODEJS_ORG_MIRROR). Instead of using the http URL as a workaround, it is better to fix curl by pointing it at the appropriate CA bundle (source1, source2). Add the line for your distribution to your .bashrc:
Ubuntu (assuming you have the ca-certificates package installed)
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
RHEL 7
export CURL_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
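To verify the fix, a quick sketch (if $NVM_NODEJS_ORG_MIRROR isn't set in your shell, substitute https://nodejs.org/dist/):
curl -I "$NVM_NODEJS_ORG_MIRROR/"   # should print an HTTP status line instead of a certificate error
nvm ls-remote                       # should list versions instead of N/A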
Changing from
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist/
To
export NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist/
Worked for me :)
It seems the '/' was missing from the end of the URL; that is why you get the 301 Moved Permanently message.
So changing the link in nvm.sh from
http://nodejs.org/dist
to
http://nodejs.org/dist/
makes it work.
If you are using nvm behind a proxy, you need to configure the proxy for curl.
Edit or create the file ~/.curlrc and add this line with your proxy:
echo 'proxy=http://<proxy-user>:<proxy-pass>@<proxy-url>:<proxy-port>' >> ~/.curlrc
If your proxy does not need a user and password, you can use:
echo 'proxy=http://<proxy-url>:<proxy-port>' >> ~/.curlrc
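To confirm curl actually picks the proxy up, a small sketch (nodejs.org is nvm's default mirror):
curl -v https://nodejs.org/dist/ 2>&1 | grep -i proxy
# the verbose output should mention your proxy host once .curlrc is read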
For others like me who land here after a search:
I had the same issue today on Ubuntu, but the cause turned out to be that the /etc/ssl/certs/ca-certificates.crt file was completely empty.
The solution was to run:
sudo update-ca-certificates
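You can tell whether you've hit the same problem; a small sketch:
ls -l /etc/ssl/certs/ca-certificates.crt   # a size of 0 means the bundle is empty
curl -I https://nodejs.org/dist/           # should succeed once the bundle is rebuilt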
I had this same problem, but none of the other solutions helped. curl -v $NVM_NODEJS_ORG_MIRROR/ showed TLS 1.2 and no problem with certs. When I ran which curl, it turned out that I had an anaconda3/bin directory in my PATH, which has its own version of curl (not sure why they need that). Once I fixed my PATH, nvm ls-remote worked just fine. Hope this helps save someone else some frustration.
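To spot a shadowed curl like this, a quick sketch:
which -a curl   # lists every curl on your PATH, in resolution order
type curl       # shows which one (or which alias/function) the shell will actually run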
I solved my problem by manually upgrading nvm to the latest version
(
cd "$NVM_DIR"
git fetch --tags origin
git checkout `git describe --abbrev=0 --tags --match "v[0-9]*" $(git rev-list --tags --max-count=1)`
) && \. "$NVM_DIR/nvm.sh"
I was having this issue lately. Changing to http://nodejs.org/dist/ did not work for me, because it redirects to https and that results in N/A from nvm ls-remote. So what I did was:
sudo update-ca-certificates
Then I edited ~/.nvm/nvm.sh and changed
http://nodejs.org/dist to https://nodejs.org/dist/ (added https and the trailing "/" to avoid redirects), and it worked.
My scenario could be rare, but I just want to add another data point to this thread:
Because of a local setup issue, I didn't want to install curl, and I had explicitly set an alias for curl to warn myself against installing it in the future. This resulted in nvm believing I have curl available and trying to use curl to download. It worked after I removed my alias.
Solution
Check explicitly whether your curl or wget is actually usable.
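For example, a small sketch:
type curl          # reveals whether "curl" is an alias, a shell function, or a real binary
command -v curl    # shows what would actually be executed
curl --version     # should print version info if curl is genuinely usable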
In my case the problem was with DNS. Where I work, DNS is set automatically, and when I ran curl -v $NVM_NODEJS_ORG_MIRROR/ it led to Could not resolve host: nodejs.org, while ping nodejs.org failed with Temporary failure in name resolution. So I changed /etc/resolv.conf and added
nameserver 8.8.8.8
nameserver 8.8.4.4
and then nvm install --lts started working.
I was running into this problem when using Vagrant 1.7.1 running an Ubuntu 14.04 box under VirtualBox 4.3.30 on Windows 7. I tried the suggestions above and nothing worked for me. I found a post over here that was related to the curl error I was getting when trying to run: curl $NVM_NODEJS_ORG_MIRROR
The error was: curl: (7) Couldn't connect to server
I was able to follow a suggestion on that post, and once I restarted my Vagrant box with vagrant reload, I was able to run nvm ls-remote, see a list of node versions, and install. Here is what I did on the vagrant box: cd /etc/
sudo nano hosts
changed 127.0.0.1 localhost
to:
0.0.0.0 localhost
Hope this helps anyone with the same issue. Thanks @Truong Nguyen.
For me, this worked:
nvm alias default node, which points "default" at the latest installed Node version (8.11.1 in my case).
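For instance, a sketch (the version number is just an example):
nvm alias default node   # "default" now tracks the newest installed Node
nvm use default
node --version           # e.g. v8.11.1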
This worked for me on my Linux machine:
export NVM_NODEJS_ORG_MIRROR=http://nodejs.org/dist
On Ubuntu Server, the interfaces aren't set up with DHCP by default. I forgot about this, and after I installed nvm I rebooted, which lost network connectivity without my realizing it. I know that you had network connectivity, but I am posting this for posterity as something to check. It's a stupidly simple thing that can easily be forgotten or missed.
For nvm-windows use nvm list available
I had the same problem on WSL2. I also have an https_proxy environment variable set to my company's proxy server.
When working inside the company VPN, this did not work, since (I believe) WSL2 has a problem using the proxy settings correctly.
Outside the company VPN, unsetting this environment variable fixed the problem.
so (outside the VPN):
unset https_proxy
and then
nvm ls-remote --lts
worked.
I found a workaround that allowed me to do what I wanted, even though nvm list available still isn't working after trying everything on this list.
It might be an old version of curl, but I'm working on a server shared with others and am not allowed to update it until I get approval in a few days.
Ultimately I went to https://nodejs.org/download/release/ and found the newest version of node I was looking to install, which was 16, located here: https://nodejs.org/download/release/latest-v16.x/
Then simply ran:
nvm install v16.16.0
And the install worked fine even if I couldn't pull available versions via nvm!
In my script I'd like to be able to go to Chef Inspec and download the latest version. However, the URL they use has the version embedded in it. The version will change, and eventually, if I hard-code the URL, I won't be getting the latest.
How do I use the wget command with wildcards to always get the latest version and never have to check it?
Here is the URL they offer:
https://packages.chef.io/files/stable/inspec/2.3.4/ubuntu/16.04/inspec_2.3.4-1_amd64.deb
I just want it downloaded and autoinstalled, but when the version changes I'm going to fall behind.
UPDATE: This doesn't answer the question exactly, but it works. What I ended up doing was using curl, which gave me the end result I needed:
curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P inspec
You should add their repository to /etc/apt/sources.list. Then you can use apt to update their software.
Chef Software Inc Packages
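A sketch of what that looks like on Ubuntu 16.04; the key URL and the "xenial" codename are assumptions based on Chef's documented packages.chef.io layout, so double-check the linked page for the current instructions:
# Import Chef's package signing key (URL assumed from Chef's docs)
wget -qO - https://packages.chef.io/chef.asc | sudo apt-key add -
# Register the stable repository
echo "deb https://packages.chef.io/repos/apt/stable xenial main" | sudo tee /etc/apt/sources.list.d/chef-stable.list
sudo apt-get update
sudo apt-get install inspec   # apt now tracks the latest packaged release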
Debian has a tool called uscan which can download from URLs using wildcards, but this isn't the right tool for installing packages.
After about 6+ days and numerous rounds of spin-up/destroy, I have FINALLY gotten my Digital Ocean droplet server up and running (i.e. I can view a live page of content at my IP).
At this point I am trying to install Git, and have installed/removed it 3 times so far, as I keep getting 'close' to completion but then run into some error I can't find an answer for. I'm hoping someone can help me figure out what my latest problem is so I can move forward with the actual development of my site rather than spending over a week on the server build.
I have attempted to install version 2.6.2 of git on my server and have had to compile from source (something I am nowhere near familiar with). I 'thought' I had it correct this time, but received the following error when I attempted to set my git user name:
gitconfig --global user.name "MyUserName" (<--- last command I made)
bash: gitconfig: command not found (<-- error i received)
I thought it was an issue with being in the wrong directory to run the command, so I ran which git and received the following output:
/usr/local/git/bin/git
This seems to be a binary (?) file and none of the directories listed in that path allow me to use gitconfig command either.
Any ideas what I have done wrong? Do I need to remove it (again!) and re-compile? I don't desire to be a server admin, but I really had thought (hoped?) spinning up my own LEMP server on CentOS 7 would be simple, since doing so on CentOS 6.* was.
Thanks for your help/advice.
gitconfig isn't a command.
You'd do:
git config --global user.name "MyUserName"
Also, you're really better off installing git via yum rather than compiling from source, unless there's a good reason to compile it yourself.
(Edit: updated answer with tested solution on CentOS 7.)
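A sketch of the yum route on CentOS 7 (note the distro repo may ship an older git than 2.6.2):
sudo yum install -y git
git --version
git config --global user.name "MyUserName"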
I've been desperately trying to add SFTP and SCP support for curl on my CentOS box. I found something resembling a solution here:
http://andrewberls.com/blog/post/adding-sftp-support-to-curl
I followed these steps but found that when attempting to get a file via both SCP and SFTP, the connection hangs once the file has been found. I cannot fix this and cannot find an alternative solution.
I have to use Curl for a job at work and therefore cannot use another lib. Has anyone managed to successfully add support for SCP and SFTP on Curl? I have a test server setup and other protocols such as FTP work as expected.
Any help would be greatly appreciated!
Thanks in advance,
Peter
Although Curl does support SFTP, support isn't automatically included in the default package.
This website: http://andrewberls.com/blog/post/adding-sftp-support-to-curl provided the details which helped me add the required support for SFTP. As the site didn't work 100% for me, I've outlined the different steps taken below.
Manually downloading libssh2 didn't work for me so I used yum to install the two packages:
yum install libssh2 libssh2-devel
and then followed step two, configuring curl to build and install against the above libraries.
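That step looks roughly like this; a sketch of curl's standard autotools build, run inside the curl source tree (--with-libssh2 assumes configure finds the yum-installed headers on its own):
./configure --with-libssh2
make
sudo make install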
The final step was to restart sshd:
service sshd restart
There you have it. Double-check that SFTP is on the list of supported protocols by running
curl -V
When I initially tested, curl complained about key authentication issues, but you can force curl to try any authentication method to connect:
curl --anyauth sftp://user:passwd@127.0.0.1/directory -o Test.txt
This will round-robin the different supported authentication methods and let you use your login credentials instead.
I hope this helps alleviate any other headaches for people trying to achieve the same.
I am working on AWS services. I have an EC2 (CentOS) instance, and I need to configure the SQL*Plus client on this CentOS machine.
The server I want to connect to is in a remote location. The server version is oracle-se (11.2.0.2).
How can I get the client installed on the CentOS machine?
Go to Oracle Linux x86-64 instant clients download page
Download the matching client
oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
oracle-instantclient11.2-sqlplus-11.2.0.2.0.x86_64.rpm
Install
rpm -ivh oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.2.0.x86_64.rpm
Set environment variables in your ~/.bash_profile
ORACLE_HOME=/usr/lib/oracle/11.2/client64
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_HOME
export LD_LIBRARY_PATH
export PATH
Reload your .bash_profile by simply typing source ~/.bash_profile (suggested by jbass), or log out and log back in.
Now you're ready to use SQL*Plus and connect your server. Type in :
sqlplus "username/pass#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.1)(PORT=1521))(CONNECT_DATA=(SID=YOURSID)))"
The solution by @ChamaraKeragala is good, but it is unnecessary to log out and log back in. Instead type:
source ~/.bash_profile
For everyone still getting the following error:
sqlplus command not found
The original post refers to a set of environment variables, the most important of which is ORACLE_HOME. This is the parent directory where the oracle binaries get installed.
Depending on what version of oracle you downloaded you'll have to change the ORACLE_HOME accordingly. For example, the original question's ORACLE_HOME was set to:
ORACLE_HOME=/usr/lib/oracle/11.2/client64
My version of Oracle happens to be 12.1, so my ORACLE_HOME is set to:
ORACLE_HOME=/usr/lib/oracle/12.1/client64
If you are unsure of the version that you downloaded, you can:
cd /usr/lib/oracle after the installation and find the version.
Look at the RPM file name, oracle-instantclient12.1, where "12.1" refers to the version number.
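Or query the RPM database directly; a small sketch:
rpm -qa | grep -i instantclient   # prints the installed package names, version included
ls /usr/lib/oracle                # the version directory (e.g. 12.1) lives here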
There's a good blog post[1] on this subject: setting up the Oracle client on Ubuntu with minimum effort. Following are the main steps for setting up the client.
In my case, I was installing the rpm files using the alien package.
Install alien and related packages
sudo apt-get install alien
Install oracle client packages using alien.
sudo alien -i oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
sudo alien -i oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm
In my opinion these two steps are the easiest way to install the Oracle client RPMs on your Ubuntu system. (I'm not going to go into exporting the Oracle-specific variables, as that's already clearly explained in the answers above.)
Hope it helps someone.
[1] http://pumuduruhunage.blogspot.com/2016/04/setup-oracle-sql-plus-client-on-aws.html
For anyone who is using a proxy, you'd need to add an extra line to your bash profile. At least this is what made it work for me. I'm using cntlm.
export no_proxy=
Install via zip (tried with 12_2)
First of all there is no need to set ORACLE_HOME.
Simply download the .zip files from here, starting with the first one (Basic:), followed by SQL*Plus:, and any additional zips you may need.
Extract them all under /opt/oracle
You will then have a directory: /opt/oracle/instantclient_x_y
On Ubuntu I also had to do:
sudo apt install libaio1
To run:
# This can be also done by adding only the path below in: /etc/ld.so.conf.d/oracle-instantclient.conf
export LD_LIBRARY_PATH=/opt/oracle/instantclient_x_y:$LD_LIBRARY_PATH
# This can be added in ~/.profile or ~/.bashrc
export ORACLE_HOME=/opt/oracle/instantclient_x_y
/opt/oracle/instantclient_x_y/sqlplus user/pass@hostname:1521/sidorservicename
At the bottom of the above link page there are more details.
Somehow, I can't run vagrant or heroku in Cygwin. It works fine when I'm using the default Windows cmd application, but in Cygwin, I get this error for vagrant:
C:/vagrant/vagrant/embedded/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find vagrant (>= 0) amongst [] (Gem::LoadError)
from C:/vagrant/vagrant/embedded/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
from C:/vagrant/vagrant/embedded/lib/ruby/site_ruby/1.9.1/rubygems.rb:1231:in `gem'
from C:/vagrant/vagrant/embedded/gems/bin/vagrant:22:in `<main>'
And for heroku:
C:\Program Files (x86)\ruby-1.9.2\bin\ruby.exe: No such file or directory -- /cygdrive/c/Program Files (x86)/Heroku/bin/heroku (LoadError)
What is this thing about Ruby? I have no idea what I should be doing. Developing on Windows is a real pain; can anyone provide any insight into how I might solve this problem?
Appreciate any help. Thanks!
I ran into the same problem using Rails and Heroku on Windows.
It seems that the Toolbelt is not supported under Cygwin. Moreover, colors are not always rendered the right way (for example, I did not manage to render heroku logs colors, even after using ansicon -i).
I also considered using the CMD Prompt augmented with GOW but that means you have to append ".bat" to every command, and colors are still a problem.
I ended up using the Git Bash shell that is included with the RailsInstaller package.
It recognizes all paths to the relevant files, it has all the shell commands you need, and every color seems to be rendered correctly (e.g. rails logs, cucumber and rspec tests, heroku logs, etc.).
You've probably solved your issue a long time ago but I just wanted to add the steps I went through as I had the same issue on Windows with Cygwin.
Firstly, always try to update your Cygwin installation, especially when you see an error similar to the one you've posted (I had the same error):
/cygdrive/d/Development/Heroku/bin/heroku: line 4: /cygdrive/d/Development/Heroku/ruby-1.9.2/bin/ruby: No such file or directory
So I updated Cygwin and made sure to select all necessary ruby packages/interpreters etc, but this still didn't solve the problem as I kept getting the same error message.
Then I followed the steps outlined in Running the Heroku Command-Line Client Under Cygwin:
(1) Download RubyGems 1.8.24 from
http://rubyforge.org/frs/download.php/76072/rubygems-1.8.24.zip
(2) Then run the following -
$ unzip rubygems-1.8.24.zip
$ cd rubygems-1.8.24/rubygems-1.8.24
$ ruby setup.rb install
$ gem update --system
$ gem install heroku
(3) Open a new shell window and verify the version -
$ heroku version
heroku-gem/2.28.10 (i386-cygwin) ruby/1.8.7
This solved my problem and I can now run heroku commands from the Cygwin shell on Windows.
For me, @Azkuma's answer only got me half the way. What worked for me:
1) Download and extract the RubyGems zip: https://rubygems.org/pages/download
2) Set aliases to gem and heroku
alias gem='C:/ruby/bin/gem'
alias heroku='"C:/Program Files (x86)/Heroku/bin/heroku.bat"'
3) install as above
ruby setup.rb install
gem update --system
gem install heroku
4) login to heroku
heroku login
I found simply setting an alias worked for me.
alias heroku=c:/Program\ Files\ \(x86\)/Heroku/bin/heroku.bat
Then I can just use the heroku command directly with Cygwin.
The only thing I have a problem with is heroku login (and by extension, git push heroku master) whereby I'm prompted to use cmd.exe. For that part, I just open my Git Bash window from within the relevant folder, login and push from there.