Arango: changing config in MacOSX does not change endpoint - arangodb

I have just downloaded Arango with a
brew install arangodb
The symlink component of the installation failed, since brew no longer enables it, for what seem to be good reasons.
Next, I modify
/usr/local/etc/arangodb3/arangosh.conf
/usr/local/etc/arangodb3/arangod.conf
to point at localhost:#### and yet all ArangoDB executables still attempt to connect to the default IP address, and do not connect to localhost.
How do I make this change take effect?

Editing the files alone is not enough; you also need to restart the service.
/usr/local/etc/arangodb3/arangod.conf is the correct file though.
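For reference, the relevant setting in both files is the endpoint option in the [server] section. A minimal sketch, assuming the default port 8529 and a Homebrew-managed service (substitute whatever host:port you actually need):

# /usr/local/etc/arangodb3/arangod.conf -- address the server listens on
[server]
endpoint = tcp://127.0.0.1:8529

# /usr/local/etc/arangodb3/arangosh.conf -- address the shell connects to (same syntax)
[server]
endpoint = tcp://127.0.0.1:8529

# restart so arangod re-reads its configuration
brew services restart arangodb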

Related

Puppet 6.1.0: node.rb missing from installed files?

For testing, I have installed two instances of Ubuntu server 18.04 on VirtualBox. I then installed one with Puppet-server 6.1.0 and one with Puppet-agent 6.1.0, as per the documentation at Puppetlabs for version 6.1. Foreman is not installed.
After registering my agent at the puppetserver and signing the certificate, starting a puppet-run (sudo /opt/puppetlabs/bin/puppet agent --test) fails with the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node puppetagent.fritz.box: Exception while executing '/etc/puppetlabs/puppet/node.rb': Cannot run program "/etc/puppetlabs/puppet/node.rb" (in directory "."): error=2, No such file or directory
I was dumbstruck to find that the script /etc/puppetlabs/puppet/node.rb was indeed missing and was also not included in the packages of puppetserver, puppet-agent or facter (sudo dpkg-query -L ...).
Googling for it, I only found a script of the same name that belonged to Foreman.
The file also does not seem to be present in the puppetserver source code on GitHub.
Is anyone able to shed some light on this?
Your server configuration seems to be set up to specify use of an external node classifier. This is optional: Puppet does not require an ENC and does not provide one by default. That's part of what makes them "external". If you obtained the result you describe straight out of the box then it probably reflects a packaging flaw that you should report.
In the meantime, you should be able to update the configuration to disable use of an ENC by changing the value of the node_terminus setting to plain. Alternatively, you should be able to just delete both node_terminus and external_nodes from your configuration, because the default for the former is plain.
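For illustration, those settings live in the server's puppet.conf. A hedged sketch, assuming the stock path and the [master] section used by Puppet 6 (your file may differ):

# /etc/puppetlabs/puppet/puppet.conf on the server
[master]
node_terminus = plain
# external_nodes = /etc/puppetlabs/puppet/node.rb   <- delete or comment out this line

Then restart the puppetserver service so the change is picked up, e.g. sudo systemctl restart puppetserver.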
Tagging on to John's answer, your server is probably configured to talk to Foreman. If you didn't write that configuration yourself or copy it from somewhere, and you're sure you don't have any Foreman packages installed, then it's definitely a packaging error that you should report.
That said, Puppet's own repos are almost always the right answer rather than distro packages.
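If you go that route, the usual pattern on Ubuntu 18.04 is to install Puppet's release package and pull the server and agent from apt.puppet.com; a sketch, assuming the Puppet 6 release package for bionic is still published under this name:

wget https://apt.puppet.com/puppet6-release-bionic.deb
sudo dpkg -i puppet6-release-bionic.deb
sudo apt-get update
sudo apt-get install puppetserver     # on the server
sudo apt-get install puppet-agent     # on the agent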

No Internet on Custom Image VM for Azure

I launched an Ubuntu 18.04 VM with Azure. I installed a bunch of stuff that I need. Then, I used the dashboard to create a custom image from this machine. After that, I checked that the image was okay by launching some machines with that image. Everything seemed to be working fine.
Today, I launched a new instance with my custom image. Then I tried to install a few things with apt-get install and I get the following error (e.g. for unzip):
sudo: unable to resolve host ABCDEFG: Resource temporarily unavailable
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package unzip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'unzip' has no installation candidate
This same thing happens for any package I try to install. After testing some basic things with my repositories, I checked the internet connection with ping, e.g. ping www.google.com, which is also not working. I launched a vanilla Ubuntu 18.04 instance and I am not having these problems on that machine.
I have also tried sudo reboot, but no luck with that. I did notice that when the system booted it showed the following error, also indicating that something is wrong with the internet:
Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings
Any help is greatly appreciated.
So, after some digging around, I found this answer to something similar: https://askubuntu.com/questions/1045278/ubuntu-server-18-04-temporary-failure-in-name-resolution.
I used the following command and the internet started working again:
sudo ln -s ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
This is a little different from the answer on askubuntu because this is an Azure image. First, I noticed that my image was missing resolv.conf in /etc. Using ls -la /etc/resolv.conf on a different Azure image, I saw that it was a symbolic link to ../run/systemd/resolve/stub-resolv.conf, so I created a link matching this format on my machine and that fixed things.
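To check whether a machine is in this broken state, something like the following should work (assuming a systemd-resolved based Ubuntu 18.04 image):

ls -la /etc/resolv.conf           # should be a symlink into /run/systemd/resolve/
sudo systemctl restart systemd-resolved
ping -c 3 www.google.com          # name resolution should work again once the link exists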
** EDIT **
It's worth noting that when you deprovision the VM to create the custom image, it does say:
WARNING! The waagent service will be stopped.
WARNING! Cached DHCP leases will be deleted.
WARNING! root password will be disabled. You will not be able to login as root.
WARNING! /etc/resolv.conf will be deleted.
WARNING! xxxx account and entire home directory will be deleted.
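For context, those warnings come from the generalize/deprovision step. The usual sequence is roughly the following, with the resource group, VM, and image names as placeholders:

sudo waagent -deprovision+user         # run inside the VM before capturing
az vm deallocate --resource-group myGroup --name myVM
az vm generalize --resource-group myGroup --name myVM
az image create --resource-group myGroup --name myImage --source myVM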

centOS7 netinstall with kickstart fails to get installation source

I'm trying to install CentOS7 using a kickstart file with a VM. I am using a netinstall version of the ISO.
When I try to put the URL in the kickstart file, it will take a long time to check the installation source, and then fail.
I have checked the ISO, installing successfully without kickstart and using this address for the source:
url --url="http://sunsite.rediris.es/mirror/CentOS/7/os/x86_64/"
However, when using the kickstart file, the install fails with the error message below:
Error setting up base repository
It fails even if I manually type the URL in after it errors out.
Does anyone have any ideas? I have reduced my kickstart file to just that one line and it still shows the same behaviour. I don't have this problem with kickstart using the minimal or full install ISOs.
I'm just learning Linux, so I didn't realize you could switch to another virtual console and monitor the install and run commands simultaneously.
After doing so, I realised it wouldn't resolve names.
My DNS server was dead/not responding. I used kickstart to manually assign another DNS server to the interface. This allowed the install to resolve the URL. This would explain why the install worked with the netinstall ISO on its own, as it was using default settings.
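For anyone hitting the same thing, the fix amounts to adding a nameserver to the kickstart network line; a sketch, with 8.8.8.8 only as an example DNS server:

# assign a working DNS server to the install interface
network --bootproto=dhcp --device=link --nameserver=8.8.8.8 --activate
url --url="http://sunsite.rediris.es/mirror/CentOS/7/os/x86_64/"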
Hope this helps someone.

Virtualmin Installation

I installed Virtualmin on a RHEL system and a couple of very strange problems have cropped up.
Firstly, the Apache test page now says "powered by CentOS" instead of RHEL. All the files and filesystems are intact, so I am at a loss as to why it would report another Linux distribution altogether.
Secondly, my sudo access has been overwritten / removed after the installation. It just comes up with a message that XXXX (username) does not have sudo access... etc.
And lastly, trying to access the virtualmin page over the port 10000 is just returning an "unable to connect" error. [Since I am locked out of using sudo, I am at a loss of how to proceed].
Thank you in advance for your help.
The Apache package we ship is a rebuild of the SRPM from CentOS. The default page is simply an HTML file... it is not "reporting" anything, really, except that you haven't set up any websites yet. On CentOS/RHEL, Apache has to be rebuilt in order to support virtual servers in /home when using suexec. So this is expected behavior and no cause for alarm. We used to ship a custom error page instead (with a Virtualmin logo instead of CentOS), but the patch broke a while back and I never got around to fixing it... we might go back to that the next time we roll an Apache update.
Virtualmin did not touch your sudoers file. That problem is unrelated to the Virtualmin installation. (I wrote the install.sh and the virtualmin-base package; I'm 100% certain your sudoers issue is unrelated to Virtualmin). I don't have any guesses about what went wrong there, or how to fix it if you don't have any way to access the machine as root (rebooting into single user mode would be the right thing if you have hardware access or can get access via a KVM from your hosting provider/colo).
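If it helps, a rough sketch of repairing sudo access once you do have a root shell (single user mode or console access); the username below is a placeholder:

visudo                         # edit /etc/sudoers safely; the syntax is checked before saving
# make sure a line like the following exists (or that the %wheel line is enabled):
#   youruser  ALL=(ALL)  ALL
usermod -aG wheel youruser     # on RHEL/CentOS the wheel group is typically allowed sudo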
We would need to see the last few dozen lines of the install log to know what went wrong with the Virtualmin installation, and why Webmin failed to start.

Cloud9 on Raspberry Pi, Unable to save files

I'm trying to get the Cloud9 local server working on my Raspberry Pi (512 MB Model B, running Raspbian).
I followed this installation guide:
http://www.slickstreamer.info/2013/02/how-to-install-cloud-9-on-raspberry-pi.html
After this installation everything appeared to be working properly, I can start the server with the following command:
~/cloud9/bin/cloud9.sh -l 0.0.0.0 -w ~/Documents/www/workspace/
When I start the server, all the files in the workspace are displayed properly and I can view, duplicate, delete, and create files remotely with no problem. But when I edit an existing file and try to save it remotely, a little spinning wheel pops up on the tab of the file I'm saving and it continues to spin endlessly.
When I start the server a warning pops up saying: 'path.existsSync is now called fs.existsSync.' I'm not sure if that is relevant or not.
I found another thread somewhere saying that I should go through cloud9/configs/default.js and replace any instance of localhost with 0.0.0.0. I tried that, but it didn't fix my problem.
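For reference, I made that change with something like the following (paths assume the install guide above); it didn't fix the problem, but for completeness:

cp ~/cloud9/configs/default.js ~/cloud9/configs/default.js.bak
sed -i 's/localhost/0.0.0.0/g' ~/cloud9/configs/default.js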
Does anyone have any suggestions about how to get cloud9 saving files properly?
Thanks in advance for your help.
There have been several complaints about the IDE hanging while saving files on Cloud9 support. At the bottom of that page there is a solution you can try.
I fully removed Cloud9 and Node (following these directions to remove Node: Uninstall Node.JS using Linux command line?), and then did a clean install following these directions: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=63&t=30265. In addition to those commands, I also had to run the following:
sudo npm install formidable
sudo npm install gnu-tools
sudo npm install xmldom
After that I was able to start the Cloud9 server without issue, and now I'm able to save files.
Thanks for trying to help.
