I am trying to run npm install in a new Angular app. I am running it behind a corporate firewall, which is obviously the issue.
I am running into errors specifically with the pemrouz/buble package, which is a dependency of a dependency. One of Angular's dependencies is explicitly specifying that buble be downloaded using SSH. Originally, the SSH connection was getting blocked, but I was able to convince the network group to allow the connection.
However, it's still being proxied, which is messing up SSL stuff.
I can git clone the package by specifying that git not use strict SSL, but NPM ignores this setting.
I can also specify that NPM not use strict SSL, which allows HTTPS connections to work properly -- but apparently this also doesn't apply to SSH.
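If the SSH fetch can't be made to work through the proxy at all, one workaround is to rewrite SSH-style GitHub URLs to HTTPS at the git level, so that the npm/git strict-SSL settings actually apply to the fetch. A minimal sketch, assuming the dependency is referenced as ssh://git@github.com/... (check package-lock.json for the exact URL form):

```shell
# Rewrite SSH-style GitHub URLs to HTTPS so the fetch goes over HTTPS,
# where "npm config set strict-ssl false" and git's http.sslVerify apply:
git config --global url."https://github.com/".insteadOf "ssh://git@github.com/"
# Also cover the scp-like syntax (git@github.com:user/repo), added as a
# second value for the same rewrite rule:
git config --global --add url."https://github.com/".insteadOf "git@github.com:"
```

With these rules in place, git transparently fetches over HTTPS wherever an SSH GitHub URL appears, which sidesteps both the SSH proxying and the host-key problem for public repos like pemrouz/buble.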
I am not using Docker or Chocolatey (this seems to be the common source of the issue for other people seeing this error).
I have seen the other questions, on Stack Overflow and elsewhere, where people have had this issue, but the answer is always "you're running an old version of NPM." However, I am running 6.14.4, which appears to be the latest. Besides, I don't see how that could cause a host-key issue anyway.
How can I get NPM to make the SSH connection without using strict SSL? Or otherwise, how can I fix the host key verification issue?
Thanks!
EDIT: I ran ssh git@github.com and it prompted me "do you want to trust this?" and I said yes -- now I get "Permission denied (publickey)" when I run npm install.
This is not a private repo, it's the public repo at https://github.com/pemrouz/buble.
For a college assignment I had to configure GitLab on my virtual machine, which is hosted on Google Compute Engine and is currently running Ubuntu 20.04.
I tried to install GitLab twice, but the install fails (first it got stuck for at least five minutes unpacking gitlab-ce (13.10.2-ce.0), then it failed).
Reconfiguring gets me the same message but without any context; I don't know where the error is, what is causing it, or how to fix it.
I did research this error, but the only thing I found out is that it's probably related to the config file. The only line in my config file that's not commented out is the external URL, and it has a value, so I have no idea what to do.
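To find where the reconfigure actually fails, it can help to rerun it and then look at the logs the Omnibus package writes; a sketch, assuming the standard Omnibus install paths:

```shell
# Re-run the failed configuration step:
sudo gitlab-ctl reconfigure
# Follow the logs of all GitLab components in real time:
sudo gitlab-ctl tail
# Each reconfigure run also leaves a timestamped log file here:
ls /var/log/gitlab/reconfigure/
```

The reconfigure log usually names the exact Chef resource that failed, which narrows the problem down much faster than the generic error message.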
I guess something on the machine is messed up. Try the same on a known-good new machine / fresh install.
To build TensorFlow from source I installed bazelisk as recommended. Then when I call ./configure which calls bazelisk I get the following error:
Downloading https://releases.bazel.build/0.29.1/release/bazel-0.29.1-linux-x86_64...
2021/04/07 13:24:54 could not download Bazel: HTTP GET https://releases.bazel.build/0.29.1/release/bazel-0.29.1-linux-x86_64 failed: Get "https://releases.bazel.build/0.29.1/release/bazel-0.29.1-linux-x86_64": proxyconnect tcp: net/http: TLS handshake timeout
Bazel's download site is blocked where I live, so I pointed my proxy at Tor (via https://127.0.0.1:8118 through Privoxy) to download it, but it still fails. What's the solution?
As said in the docs, bazelisk is just a wrapper that makes sure the version of Bazel you use to build matches the version the project expects.
Bazelisk does not yet have an offline mode, and appears to always execute an http request or two on invocation.
This model doesn't work well with your network restrictions, so you might be better off manually downloading the appropriate release of Bazel (check for a .bazelversion file in the project), following the instructions for a direct binary installation on your platform. If you can use the apt repositories, those are the more recommended route.
With a direct installation of bazel, you may have to do a couple more things manually, but it won’t be doing those http requests to figure out the right version.
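For example, the direct download can be scripted against the GitHub releases (which bazelisk itself can fall back to); a sketch, assuming version 0.29.1, taken from the project's .bazelversion file in the real case:

```shell
# Pick the version TensorFlow expects (see .bazelversion in the source tree):
BAZEL_VERSION=0.29.1
INSTALLER="bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh"
URL="https://github.com/bazelbuild/bazel/releases/download/${BAZEL_VERSION}/${INSTALLER}"
echo "$URL"
# Then fetch it through your working proxy and run the user-mode installer:
#   curl -fLO --proxy http://127.0.0.1:8118 "$URL"
#   chmod +x "$INSTALLER" && ./"$INSTALLER" --user
```

The `--user` flag installs Bazel under your home directory, so no root access is needed.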
After that, you may find that other dependencies of tensorflow are blocked, and you may have to get bazel itself to use your proxy. Following the instructions for an air gapped environment might be useful for building regularly, but you’ll still need to do the first build. This SO answer appears to be a place to start for your proxy.
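As a starting point, Bazel and the tools it spawns generally honor the standard proxy environment variables. A sketch using the Privoxy address from the question; note that Privoxy listens as a plain-HTTP proxy, so the scheme is normally http:// even for HTTPS destinations (the https:// proxy URL may itself be the cause of the TLS handshake timeout):

```shell
# Make the proxy visible to bazel and to the repository fetches it runs:
export HTTP_PROXY=http://127.0.0.1:8118
export HTTPS_PROXY=http://127.0.0.1:8118
# Lower-case variants too, since some tools only read these:
export http_proxy=$HTTP_PROXY
export https_proxy=$HTTPS_PROXY
```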
As a solution, one can force bazelisk to download Bazel from GitHub instead. To do so, set the following environment variable before running ./configure:
export BAZELISK_BASE_URL="https://github.com/bazelbuild/bazel/releases/download"
For testing, I have installed two instances of Ubuntu server 18.04 on VirtualBox. I then installed one with Puppet-server 6.1.0 and one with Puppet-agent 6.1.0, as per the documentation at Puppetlabs for version 6.1. Foreman is not installed.
After registering my agent at the puppetserver and signing the certificate, starting a puppet-run (sudo /opt/puppetlabs/bin/puppet agent --test) fails with the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node puppetagent.fritz.box: Exception while executing '/etc/puppetlabs/puppet/node.rb': Cannot run program "/etc/puppetlabs/puppet/node.rb" (in directory "."): error=2, No such file or directory
I was dumbstruck to find that the script /etc/puppetlabs/puppet/node.rb was indeed missing and was also not included in the packages of puppetserver, puppet-agent or facter (sudo dpkg-query -L ...).
Googling for it, I only found a script of the same name that belonged to Foreman.
The file does also not seem to be present in the puppetserver source-code at github.
Is anyone able to shed some light on this?
Your server configuration seems to be set up to specify use of an external node classifier. This is optional: Puppet does not require an ENC and does not provide one by default. That's part of what makes them "external". If you obtained the result you describe straight out of the box then it probably reflects a packaging flaw that you should report.
In the meantime, you should be able to update the configuration to disable use of an ENC by changing the value of the node_terminus setting to plain. Alternatively, you should be able to just delete both node_terminus and external_nodes from your configuration, because the default for the former is plain.
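Concretely, that would mean a server-side puppet.conf along these lines (path and section name as in a default install; the external_nodes line is the one pointing at the missing script):

```ini
# /etc/puppetlabs/puppet/puppet.conf
[master]
node_terminus = plain
# ...and delete any line like the following, if present:
# external_nodes = /etc/puppetlabs/puppet/node.rb
```

After changing the file, restart the puppetserver service so the new settings are picked up.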
Tagging on to John's answer: your configuration is probably set up to talk to Foreman. If you didn't write it yourself or copy it from somewhere, and you're sure you don't have any Foreman packages installed, then it's definitely a packaging error that you should report.
That said, the Puppet repos are almost always a better answer than distro packages.
I have just installed ArangoDB with
brew install arangodb
The symlink step of the installation failed, since brew no longer enables this, for what seem to be good reasons.
Next, I modify
/usr/local/etc/arangodb3/arangosh.conf
/usr/local/etc/arangodb3/arangod.conf
to point at localhost:####, and yet all the ArangoDB executables still attempt to connect to the default IP address rather than localhost.
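For reference, the setting that controls the address lives in the [server] section of those files; a sketch (the actual port from the question is elided, so #### is left as a placeholder):

```ini
# /usr/local/etc/arangodb3/arangod.conf
[server]
# replace #### with your port; 127.0.0.1 binds to localhost only
endpoint = tcp://127.0.0.1:####
```

The arangosh.conf file uses the same `endpoint` key to decide which server the shell connects to.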
How do I get this change to take effect?
Editing the files alone is not enough; please restart the service as well.
/usr/local/etc/arangodb3/arangod.conf is the correct file, though.
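On macOS with Homebrew, the restart would typically look like this (assuming ArangoDB is managed via brew services):

```shell
# Restart the ArangoDB daemon so it re-reads arangod.conf:
brew services restart arangodb
# Or, if it was never registered as a service in the first place:
brew services start arangodb
```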
I used to install Puppet modules via http://forge.puppetlabs.com without any issues, but all of a sudden I am getting a 301 Moved Permanently error when trying to install any module.
I would run the following command
puppet module install --module_repository http://forge.puppetlabs.com puppetlabs-dism
Now this appears to fail.
I have used both 3.4.2 and 3.8.7 (updated thinking there was an issue with the version).
I also have a similar command for Ubuntu which works fine, but without the module_repository parameter.
The reason for the --module_repository flag is to work around the SSL certificate not being valid.
So the question is: has this functionality been removed, or does anyone know how to get the SSL certificate to be valid?
The URL is in the process of being changed to https://forge.puppet.com, as explained in ticket FORGE-327. Try using the updated URL and see if that works for you.
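With the new URL, the install command from the question would become (same module, just the updated repository):

```shell
puppet module install --module_repository https://forge.puppet.com puppetlabs-dism
```

Using the https:// form also avoids the redirect entirely, since the 301 is what the old http:// address now returns.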
As for the SSL error, see the documentation here and look at the first bullet point. It describes why this is happening and how to resolve it.