I am getting the following error when I run vagrant up. After googling, I found people saying the latest Puppet version doesn't support that option, but I don't know how to fix the issue.
==> centos7base: Running provisioner: puppet...
==> centos7base: Running Puppet with site.pp...
==> centos7base: Error: Could not parse application options: invalid option: --manifestdir
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Vagrant version: 1.7.4
Vagrantfile:
config.vm.provision "puppet" do |puppet|
  puppet.facter = {
    "vagrant" => "1"
  }
  puppet.manifests_path = ["vm", "/Vagrant/puppet"]
  puppet.hiera_config_path = "puppet/hiera.yaml"
  puppet.module_path = "puppet/modules"
  puppet.manifest_file = "site.pp"
end
You should be using a folder relative to the Vagrantfile's location; Vagrant is smart enough to mount it into the instance all by itself.
This is an example from a working instance on my development machine. It has vagrant/Vagrantfile, plus manifests and modules folders as subdirectories of that same vagrant/ folder (the layout is sketched after the snippet).
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "manifests"
  puppet.module_path = "modules"
  puppet.options = ['--verbose']
  puppet.facter = {
    "fqdn" => "vagrant-test.domain.env"
  }
end
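For reference, the directory layout this assumes is roughly the following; everything lives next to the Vagrantfile so Vagrant can mount it into the guest automatically (site.pp is just the manifest name from the question):
vagrant/
  Vagrantfile
  manifests/
    site.pp
  modules/
    ... (your Puppet modules)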
I had this problem using the ubuntu/xenial64 box; the solution was to install the puppet-common package. I provisioned its installation through a shell provisioner, as shown below:
config.vm.provision "shell", path: "install-puppet.sh"
Content of install-puppet.sh:
#!/bin/bash
# Add the Puppet Labs PC1 repository for Xenial and install puppet-common.
cd /tmp
wget http://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
dpkg -i puppetlabs-release-pc1-xenial.deb
apt-get update
apt-get -y install puppet-common
# Put the Puppet binaries on PATH for future shells and for the rest of
# this provisioning run. Note that $PATH is expanded at provision time
# here; escape it (\$PATH) if you want it expanded at login instead.
echo "PATH=/opt/puppetlabs/bin:$PATH" >> /etc/bash.bashrc
echo "export PATH" >> /etc/bash.bashrc
export PATH=/opt/puppetlabs/bin:$PATH
I installed tflint on my Mac, and when I try to execute --init it throws a 401 error.
Could you tell me if I need to export any environment variables to fetch the Git repo?
tflint --init
Installing `azurerm` plugin...
Failed to install a plugin. An error occurred:
Error: Failed to fetch GitHub releases: GET https://api.github.com/repos/terraform-linters/tflint-ruleset-azurerm/releases/tags/v0.14.0: 401 Bad credentials []
My .tflint.hcl file:
plugin "azurerm" {
enabled = true
version = "0.14.0"
source = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
I searched the tflint documentation but could not find anything.
Thanks,
Santosh
tflint requires the azurerm plugin to be installed. Download the proper azurerm plugin binary from https://github.com/terraform-linters/tflint-ruleset-azurerm/releases/tag/v0.16.0 (check the version that you need), unzip it, and then move it to your user's .tflint.d/plugins directory (create it if it doesn't exist):
mv ~/Downloads/tflint-ruleset-azurerm ~/.tflint.d/plugins/
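Put together, the manual install looks roughly like this (a sketch: the download location and the chmod are assumptions, so adjust them to wherever you saved and unzipped the release archive):
mkdir -p ~/.tflint.d/plugins
cd ~/Downloads
unzip tflint-ruleset-azurerm_darwin_amd64.zip        # produces the tflint-ruleset-azurerm binary
mv tflint-ruleset-azurerm ~/.tflint.d/plugins/
chmod +x ~/.tflint.d/plugins/tflint-ruleset-azurerm  # in case the exec bit was lost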
I was recently trying to use tflint behind a corporate firewall and was getting checksum errors. I was able to resolve it by:
Adding the following to my .zshrc file (try open ~/.zshrc to open the file from the Mac terminal).
setup_local_tflint_plugin() {
  for PLUGIN in "${PLUGINS[@]}"; do
    # Each PLUGINS entry is "<name>|<version>".
    TFLINT_PLUGIN_NAME=${PLUGIN%|*}
    TFLINT_PLUGIN_VERSION=${PLUGIN#*|}
    TFLINT_PLUGIN_DIR=~/.tflint.d/plugins/terraform-linters/tflint-ruleset-${TFLINT_PLUGIN_NAME}/${TFLINT_PLUGIN_VERSION}
    mkdir -p "$TFLINT_PLUGIN_DIR"
    FILE=$TFLINT_PLUGIN_DIR/tflint-ruleset-${TFLINT_PLUGIN_NAME}
    if [ ! -f "$FILE" ]; then
      echo "Downloading version ${TFLINT_PLUGIN_VERSION} of the ${TFLINT_PLUGIN_NAME} plugin."
      curl -L "https://github.com/terraform-linters/tflint-ruleset-${TFLINT_PLUGIN_NAME}/releases/download/v${TFLINT_PLUGIN_VERSION}/tflint-ruleset-${TFLINT_PLUGIN_NAME}_${PLATFORM_ARCHITECTURE}.zip" > "${TFLINT_PLUGIN_DIR}/provider.zip"
      unzip -o "${TFLINT_PLUGIN_DIR}/provider.zip" -d "${TFLINT_PLUGIN_DIR}" && rm "${TFLINT_PLUGIN_DIR}/provider.zip"
    fi
  done
  chmod -R +x ~/.tflint.d/plugins
}
# Valid values for PLATFORM_ARCHITECTURE are:
# 'darwin_amd64', 'darwin_arm64', 'linux_386', 'linux_amd64',
# 'linux_arm', 'linux_arm64', 'windows_386', 'windows_amd64'
PLATFORM_ARCHITECTURE="darwin_amd64"
PLUGINS=("azurerm|0.16.0" "aws|0.16.0")
setup_local_tflint_plugin
Opening up my code editor and navigating to my Terraform scripts.
Creating a .tflint.hcl configuration file in the same folder as my Terraform scripts (like below).
config {
  module              = true
  force               = false
  disabled_by_default = false
  plugin_dir          = "~/.tflint.d/plugins/terraform-linters/tflint-ruleset-azurerm/0.16.0"
}
plugin "azurerm" {
  enabled = true
}
Opening a new terminal window (plugins should start installing).
Running tflint . --config ./.tflint.hcl.
Note: This only works for one plugin at a time (e.g. azurerm, aws, etc.).
To install a new plugin or plugin version, simply add more entries to the PLUGINS array in the .zshrc file. To select the plugin, update the .tflint.hcl file's plugin_dir attribute to point to the right plugin and version.
I am at my wits' end trying to figure this out.
When I execute the following command:
sudo -u icinga '/usr/lib//nagios/plugins/check_db2_health' '--database' 'mydatabase' '--environment' 'DB2DIR=/opt/IBM/db2/V11.1.4fp5a' '--environment' 'DB2INSTANCE=mydatabase' '--environment' 'INSTHOME=/srv/db2/home/mydatabase' '--report' 'short' '--username' 'icinga' '--mode' 'connection-time' '--warning' '50'
The output is as follows:
[DBinstance : mydatabase] Status : CRITICAL - cannot connect to mydatabase. install_driver(DB2) failed: Can't load '/usr/lib/nagios/plugins/PerlLib/lib/perl5/site_perl/5.18.2/x86_64-linux-thread-multi/auto/DBD/DB2/DB2.so' for module DBD::DB2: libdb2.so.1: cannot open shared object file: No such file or directory at /usr/lib/perl5/5.18.2/x86_64-linux-thread-multi/DynaLoader.pm line 190.
at (eval 10) line 3.
Compilation failed in require at (eval 10) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /usr/lib//nagios/plugins/check_db2_health line 2627.
But when I log in as the icinga user using su - icinga
and run
'/usr/lib//nagios/plugins/check_db2_health' '--database' 'mydatabase' '--environment' 'DB2DIR=/opt/IBM/db2/V11.1.4fp5a' '--environment' 'DB2INSTANCE=mydatabase' '--environment' 'INSTHOME=/srv/db2/home/mydatabase' '--report' 'short' '--username' 'icinga' '--mode' 'connection-time' '--warning' '50'
it works fine.
How do I set up environment variables when the sudo -u icinga command is fired?
I am on SUSE Linux.
Essentially, I am trying to set up a global environment variable, just like the environment variables Icinga has, that works across all commands executed on the server without having to use sudo -E etc., because I cannot change the way Icinga calls the plugin.
You need to source the db2profile when you sudo:
sudo -u icinga sh -c '. sqllib/db2profile; /usr/lib//nagios/plugins/check_db2_health ...'
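Spelled out with the paths from the question (a sketch; it assumes db2profile sits under the instance home INSTHOME shown above, so adjust the path if your instance home differs):
# assumption: db2profile lives at $INSTHOME/sqllib/db2profile
sudo -u icinga sh -c '. /srv/db2/home/mydatabase/sqllib/db2profile; \
  /usr/lib//nagios/plugins/check_db2_health --database mydatabase \
    --environment DB2DIR=/opt/IBM/db2/V11.1.4fp5a \
    --environment DB2INSTANCE=mydatabase \
    --environment INSTHOME=/srv/db2/home/mydatabase \
    --report short --username icinga --mode connection-time --warning 50'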
I am trying to create a Payara cluster and I get an error while creating a remote node:
./asadmin create-node-ssh --nodehost 10.198.228.240 --sshkeyfile /root/.ssh/id_rsa --force true --install true computer2
Enter admin user name> admin
Enter admin password for user "admin">
Created installation zip /root/payara5/glassfish/domains/domain1/config/glassfish1664073687432568371.zip
Successfully connected to root@10.198.228.240 using keyfile /root/.ssh/id_rsa
Copying /root/payara5/glassfish/domains/domain1/config/glassfish1664073687432568371.zip (146575218 bytes) to 10.198.228.240:/root/payara5
Installing glassfish1664073687432568371.zip into 10.198.228.240:/root/payara5
jar command failed while installing glassfish on host 10.198.228.240. Command output bash: jar: command not found
Command install-node-ssh failed.
Remote command output: bash: jar: command not found
Command create-node-ssh executed successfully.
Is there a solution for this issue?
jar command failed while installing glassfish on host 10.198.228.240. Command output bash: jar: command not found
The solution is:
1- Add the path of the JDK to /root/.bashrc:
export JAVA_HOME=/opt/java-jdk/jdk1.8.0_201
export PATH="$PATH:$JAVA_HOME/bin"
2- source .bashrc
3- Check which jar executable the shell was trying to execute:
$ which jar
/opt/java-jdk/jdk1.8.0_201/bin/jar
4- Now create a symbolic link to the jar executable in the /usr/bin directory:
# cd /usr/bin/
# ln -s /opt/java-jdk/jdk1.8.0_201/bin/jar
# which jar
/usr/bin/jar
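Since asadmin installs over a non-interactive SSH session, which may not pick up PATH changes made in .bashrc, it can be worth confirming that jar now resolves over plain SSH before retrying (the host is the one from the question):
ssh root@10.198.228.240 'which jar'   # should print /usr/bin/jar once the symlink is in place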
After that, run create-node-ssh from computer1:
root@computer1:~/payara5/bin# ./asadmin create-node-ssh --nodehost computer2 --sshkeyfile /root/.ssh/id_rsa --force true --install true computer2-node
Enter admin user name> admin
Enter admin password for user "admin">
Successfully installed Payara on computer2.
Command create-node-ssh executed successfully.
I have a very simple Vagrantfile and Ansible playbook. I just want to test installing httpd. But every time I run vagrant provision after the VM is up, I get this error:
Rons-MacBook-Pro:development you$ vagrant provision
[default] Running provisioner: ansible...
PLAY [Install and start apache] ***********************************************
GATHERING FACTS ***************************************************************
<10.0.0.111> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-1384111346.71-231091208956411 && echo $HOME/.ansible/tmp/ansible-1384111346.71-231091208956411']
<10.0.0.111> REMOTE_MODULE setup
<10.0.0.111> PUT /var/folders/h7/3b23bqhs5g39w_jntlkz3hpm0000gn/T/tmpQ3Hvaw TO /Users/you/.ansible/tmp/ansible-1384111346.71-231091208956411/setup
<10.0.0.111> EXEC ['/bin/sh', '-c', '/usr/bin/python2.6 /Users/you/.ansible/tmp/ansible-1384111346.71-231091208956411/setup; rm -rf /Users/you/.ansible/tmp/ansible-1384111346.71-231091208956411/ >/dev/null 2>&1']
ok: [10.0.0.111]
TASK: [Update apt cache] ******************************************************
<10.0.0.111> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-1384111347.33-79837739787852 && echo $HOME/.ansible/tmp/ansible-1384111347.33-79837739787852']
<10.0.0.111> REMOTE_MODULE apt upgrade=yes update_cache=yes
<10.0.0.111> PUT /var/folders/h7/3b23bqhs5g39w_jntlkz3hpm0000gn/T/tmpr5r1YH TO /Users/you/.ansible/tmp/ansible-1384111347.33-79837739787852/apt
<10.0.0.111> EXEC ['/bin/sh', '-c', '/usr/bin/python2.6 /Users/you/.ansible/tmp/ansible-1384111347.33-79837739787852/apt; rm -rf /Users/you/.ansible/tmp/ansible-1384111347.33-79837739787852/ >/dev/null 2>&1']
failed: [10.0.0.111] => {"failed": true}
msg: Could not import python modules: apt, apt_pkg. Please install python-apt package.
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/you/playbook.retry
10.0.0.111 : ok=1 changed=0 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
This is my Vagrantfile:
Vagrant.configure("2") do |config|
# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "development-precise64"
# config.vm.host_name = "development.somethingwithcomputers.com"
# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
config.vm.box_url = "http://files.vagrantup.com/precise64.box"
config.vm.network "private_network", ip: "10.0.0.111"
# Share an additional folder to the guest VM. The first argument is
# an identifier, the second is the path on the guest to mount the
# folder, and the third is the path on the host to the actual folder.
# config.vm.share_folder "v-data", "/vagrant_data", "../data"
config.vm.synced_folder "/Users/rontalman/Public/Dropbox/Development/Code/Webdevelopment/htdocs/mgc.com", "/var/www", id: "vagrant-root", :nfs => false
config.vm.usable_port_range = (2200..2250)
config.vm.provider :virtualbox do |virtualbox|
virtualbox.customize ["modifyvm", :id, "--name", "mgc"]
virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
virtualbox.customize ["modifyvm", :id, "--memory", "512"]
virtualbox.customize ["setextradata", :id, "--VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
end
# SSH
config.ssh.username = "vagrant"
config.ssh.shell = "bash -l"
config.ssh.keep_alive = true
config.ssh.forward_agent = false
config.ssh.forward_x11 = false
config.vagrant.host = :detect
# Ansible
config.vm.provision "ansible" do |ansible|
ansible.inventory_path = "provisioning/inventory.yml"
ansible.playbook = "provisioning/playbook.yml"
ansible.verbose = "vvvv"
end
end
And this is my simple playbook.yml:
---
- name: Install and start apache
  hosts: all
  user: root
  tasks:
    - name: Update apt cache
      apt: upgrade=yes update_cache=yes
    - name: Install httpd
      apt: pkg=httpd
    - name: Start httpd
      service: name=httpd state=running
And my inventory.yml:
[vagrant]
# Set at config.vm.network in the VagrantFile
10.0.0.111 ansible_connection=local ansible_ssh_port=2222 ansible_ssh_user=root ansible_ssh_pass=vagrant ansible_python_interpreter=/usr/bin/python2.6
I did install the python-apt package on the virtual machine, but still no dice.
If anybody has any tips, I'd love to hear them.
I see you're using a specific Python interpreter (python2.6), and that you're using an Ubuntu Precise 64 image. I believe the Precise 64 python-apt package is built for Python 2.7 (per apt-cache show python-apt). Assuming you used apt and the default sources to install python-apt, I don't think the apt bindings will be available to the 2.6 interpreter.
The workaround I used is installing apt packages in a separate role, without ansible_python_interpreter explicitly set, then doing the rest in the next role, which does have ansible_python_interpreter set. Hope this helps.
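A quick way to confirm the interpreter mismatch from inside the guest (a sketch; the interpreter paths are taken from the error output above — run these after vagrant ssh):
# If the first command fails with an ImportError while the second succeeds,
# python-apt is only installed for the default Python 2.7 interpreter.
/usr/bin/python2.6 -c "import apt, apt_pkg"
/usr/bin/python -c "import apt, apt_pkg"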
I'm trying to deploy a Node.js app to a VPS running on DigitalOcean, and so far I'm getting... well, not very far. My understanding of *nix is very limited, so please bear with me :)
I can ssh as root into my VPS (Ubuntu 13.04 x32) with my SSH keys without any problems. When I run "$cap deploy:setup" on my local machine I get this result:
* 2013-09-11 12:39:08 executing `deploy:setup'
* executing "mkdir -p /var/www/yable /var/www/yable/releases /var/www/yable/shared /var/www/yable/shared/system /var/www/yable/shared/log /var/www/yable/shared/pids"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
** [out :: 162.243.1.207] env: sh: No such file or directory
command finished in 118ms
failed: "env PATH=/var/www/yable NODE_ENV=production sh -c 'mkdir -p /var/www/yable /var/www/yable/releases /var/www/yable/shared /var/www/yable/shared/system /var/www/yable/shared/log /var/www/yable/shared/pids'" on 162.243.1.207
When I run "$cap deploy:check" I get the following output:
* 2013-09-11 12:40:36 executing `deploy:check'
* executing "test -d /var/www/yable/releases"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 67ms
* executing "test -w /var/www/yable"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 76ms
* executing "test -w /var/www/yable/releases"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 69ms
* executing "which git"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 75ms
The following dependencies failed. Please check them and try again:
--> `/var/www/yable/releases' does not exist. Please run `cap deploy:setup'. (162.243.1.207)
--> You do not have permissions to write to `/var/www/yable'. (162.243.1.207)
--> You do not have permissions to write to `/var/www/yable/releases'. (162.243.1.207)
--> `git' could not be found in the path (162.243.1.207)
Here's my config/deploy.rb file:
set :application, "Yable.com"
set :scm, :git
set :repository, "git@github.com:Yable/yable-node-js.git"
set :user, "root"
set :ssh_options, { :forward_agent => true }
default_run_options[:pty] = true
set :use_sudo, false
set :branch, "master"
role :app, "162.243.1.207"
set :deploy_to, "/var/www/yable"
set :default_environment, {
  'PATH' => "/var/www/yable",
  'NODE_ENV' => 'production'
}
I'm dumbfounded as the directory mentioned (/var/www/yable/releases) does exist and that git has been installed. Any ideas?
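For reference, the same checks cap runs can be reproduced by hand over plain SSH (host and paths as above):
ssh root@162.243.1.207 'which git; test -d /var/www/yable/releases && echo releases exists; test -w /var/www/yable && echo yable writable'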
Thanks,
Francis
I installed Ruby 2.0.0 and Bundler, and that seems to have solved my deployment issues.