Capistrano check fails on DigitalOcean VPS - node.js

I'm trying to deploy a Node.js app to a VPS running on DigitalOcean and so far I'm not getting very far. My understanding of *nix is very limited, so please bear with me :)
I can ssh as root into my VPS (Ubuntu 13.04 x32) with my SSH keys without any problems. When I run "$ cap deploy:setup" on my local machine I get this result:
* 2013-09-11 12:39:08 executing `deploy:setup'
* executing "mkdir -p /var/www/yable /var/www/yable/releases /var/www/yable/shared /var/www/yable/shared/system /var/www/yable/shared/log /var/www/yable/shared/pids"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
** [out :: 162.243.1.207] env: sh: No such file or directory
command finished in 118ms
failed: "env PATH=/var/www/yable NODE_ENV=production sh -c 'mkdir -p /var/www/yable /var/www/yable/releases /var/www/yable/shared /var/www/yable/shared/system /var/www/yable/shared/log /var/www/yable/shared/pids'" on 162.243.1.207
When I run "$ cap deploy:check" I get the following output:
* 2013-09-11 12:40:36 executing `deploy:check'
* executing "test -d /var/www/yable/releases"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 67ms
* executing "test -w /var/www/yable"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 76ms
* executing "test -w /var/www/yable/releases"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 69ms
* executing "which git"
servers: ["162.243.1.207"]
[162.243.1.207] executing command
command finished in 75ms
The following dependencies failed. Please check them and try again:
--> `/var/www/yable/releases' does not exist. Please run `cap deploy:setup'. (162.243.1.207)
--> You do not have permissions to write to `/var/www/yable'. (162.243.1.207)
--> You do not have permissions to write to `/var/www/yable/releases'. (162.243.1.207)
--> `git' could not be found in the path (162.243.1.207)
Here's my config/deploy.rb file:
set :application, "Yable.com"
set :scm, :git
set :repository, "git@github.com:Yable/yable-node-js.git"
set :user, "root"
set :ssh_options, { :forward_agent => true }
default_run_options[:pty] = true
set :use_sudo, false
set :branch, "master"
role :app, "162.243.1.207"
set :deploy_to, "/var/www/yable"
set :default_environment, {
  'PATH' => "/var/www/yable",
  'NODE_ENV' => 'production'
}
I'm dumbfounded, as the directory mentioned (/var/www/yable/releases) does exist and git has been installed. Any ideas?
Thanks,
Francis

I installed ruby 2.0.0 and Bundler and it seems to have solved my deployment issues.
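Reinstalling Ruby may have fixed things as a side effect, but the "env: sh: No such file or directory" line in the setup log points at a likely root cause: default_environment replaces PATH with just /var/www/yable, so command lookup can no longer find /bin/sh or git (appending the standard directories to PATH instead avoids this). The failure mode can be reproduced with plain env (a sketch; /var/www/yable stands in for any directory containing no executables):

```shell
# Replacing PATH entirely makes env unable to find sh, as in the deploy log:
env PATH=/var/www/yable sh -c 'echo ok' || echo "sh not found with stripped PATH"
# Keeping the standard directories on PATH restores command lookup:
env PATH="/var/www/yable:/usr/bin:/bin" sh -c 'echo ok'
```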

Related

Oracle Silent Install Failing in Chef

I'm trying to install Oracle on a RHEL VM in Chef. When I directly log into the VM as the install user ("oracle1") and run the silent install command:
./runInstaller -ignorePrereq -waitforcompletion -silent -responseFile /u01/app/oracle/product/19.0.0/dbhome_1/install/response/db_install.rsp
the installation is successful.
I want to automate this installation by adding it to my existing Chef recipes, which I am currently attempting using the following block:
execute 'install oracle' do
  command './runInstaller -ignorePrereq -waitforcompletion -silent -responseFile /u01/app/oracle/product/19.0.0/dbhome_1/install/response/db_install.rsp'
  cwd '/u01/app/oracle/product/19.0.0/dbhome_1'
  user 'oracle1'
  group 'oinstall'
  # not_if { ::File.exist?("/u01/app/oracle/product/completed.txt") }
end
However, this block fails and results in the following error:
[FATAL] [INS-32042] The Installer has detected that the user (oracle1) is not a member of the central inventory group: oinstall
ACTION: Make sure that the user (oracle1) is member of the central inventory group (oinstall)
But, previously in the recipe, I run the block:
execute 'luseradd' do
  command 'sudo luseradd -g oinstall -d /home/oracle1 -s /bin/bash oracle1'
  not_if { Dir.exist?("/home/oracle1") }
end
which (as far as I am aware) contradicts the error message I get. Also, when I check the groups that oracle1 is part of, oinstall is listed as one of them.
Any help/pointers would be appreciated!
You can modify the execute block like this:
execute 'install oracle' do
  command 'sudo -Eu oracle1 ./runInstaller -ignorePrereq -waitforcompletion -silent -responseFile /u01/app/oracle/product/19.0.0/dbhome_1/install/response/db_install.rsp'
  cwd '/u01/app/oracle/product/19.0.0/dbhome_1'
  # not_if { ::File.exist?("/u01/app/oracle/product/completed.txt") }
end
Additionally, you may need to configure the oracle1 user so it can run commands via sudo without being prompted for a password.
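The fact that sudo -Eu oracle1 works while the resource's user/group attributes do not suggests the original process was not carrying oracle1's supplementary groups: supplementary membership is applied when a session starts, so a process's actual group set can differ from what the user database grants (some Chef versions were known to switch uid/gid without initializing supplementary groups). The two views can be compared (a sketch):

```shell
# Groups the current process actually carries:
id -Gn
# Groups the user database would grant at next login:
id -Gn "$(whoami)"
# For the sudo-based workaround, oracle1 may also need passwordless sudo,
# e.g. a sudoers entry (hypothetical; install with visudo -f /etc/sudoers.d/oracle1):
#   oracle1 ALL=(ALL) NOPASSWD: ALL
```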

Node.js permission denied public key

I'm trying to run a Git command via node over SSH but I keep getting the error:
Permission denied (publickey)
I guess this is because Node does not have access to my SSH keys since it is running in a child process?
How can I get past this?
On Windows, if your credentials are correct, then it should work.
I guess you are running something like this -
require('child_process').spawn('git', ['push', 'origin', 'master']);
It works for me in both cases, ssh and https.
I fixed this running a Bash script from node as follows:
"scripts": {
  "start": "npm-run-all -p server update",
  "server": "dyson rest 7070",
  "update": "sh update.sh"
}
update.sh looks like:
#!/usr/bin/env bash
set -e
set -o pipefail
SSH_KEY=/path/.ssh/id_rsa
function update {
  eval $(ssh-agent -s)
  ssh-add ${SSH_KEY}
  git submodule update --recursive --remote
}
update
The main thing is to start the ssh-agent and add your SSH key to the agent before running your git command.
UPDATE: The above works, but I found a better answer. The reason it was failing is that inside the executing script the HOME environment variable was not pointing to the same HOME as outside the script. I fixed this by putting my SSH keys in both HOME folders. Then the script was able to find the right key and voilà.
PS You can determine the value of the HOME variable in Node by logging process.env.HOME or in a shell script with echo "${HOME}".
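The HOME mismatch described above is easy to check directly, since ssh searches for keys under $HOME/.ssh and a different HOME therefore means a different key search path. A sketch comparing what the parent shell and a child sh report:

```shell
# HOME as seen by the current shell:
echo "parent HOME=${HOME}"
# HOME as seen by a child shell (can differ under some process managers):
sh -c 'echo "child HOME=${HOME}"'
```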

Build step 'Execute shell script on remote host using ssh' marked build as failure, Finished: FAILURE (exit-status 127)

I am trying to use sh or ssh to connect to a Linux box via Jenkins (I am admittedly a noob). Even trying an ls command I get an error. I did have this working before, however. Any help greatly appreciated.
Building in workspace /var/lib/jenkins/jobs/Demo/workspace
executing script:
USER="jenkins" sh '''#!/bin/bash
HOST=10.59.151.121
USER=devuser
PASSWORD=TGMCfpfS
ls
bye
EOF
'''
: No such file or directory
[SSH] exit-status: 127
Build step 'Execute shell script on remote host using ssh' marked build as failure
Finished: FAILURE
For some reason I found that adding commands after the ''' allows them to be executed - even though the same warning appears, it works fine!
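For what it's worth, the bye / EOF lines in the failing script look like the tail of a here-document whose opening <<EOF redirection was lost, which would explain the shell trying (and failing) to run those lines as commands and the exit-status 127. A minimal complete here-document looks like this (a sketch, not the original script):

```shell
# The EOF terminator only works when a matching <<EOF redirection opens it:
sh <<'EOF'
echo "running inside the here-document"
ls >/dev/null && echo "ls succeeded"
EOF
```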

Ansible - how to run Java jar with parameters?

I have a problem with an Ansible playbook. I am trying to run a Java jar as a command. When I run this directly on the virtual machine, it works every time:
java -jar Installer20161018.jar -readImage Linux_x86-64_20161111.zip -installDir /opt/installPath/vf5511/instDir
Important information: the installation HAS to be run as user vf5511, whose home folder is /opt/installPath/vf5511.
But when I write a playbook and run it, it all goes wrong.
This is the playbook:
---
- hosts: webmwc10
  become: yes
  become_user: wm5511
  become_method: sudo
  tasks:
    - name: installing server
      shell: java -jar Installer20161018.jar -readImage Linux_x86-64_20161111.zip -installDir /opt/installPath/vf5511/instDir
When I run the playbook, I get an error:
"rc": 127,
"start": "2017-06-02 09:21:31.931049",
"stderr": "/bin/sh: java: command not found",
"stderr_lines": [
"/bin/sh: java: command not found"
],
"stdout": "",
"stdout_lines": []
Java not found? I don't understand this. Java is installed and working properly!
Can anyone help me with this?
Run the commands below on your target server to rule out Java issues:
which java
java -version
If both succeed, add quotes to your shell command as below and run the playbook again:
shell: "java -jar Installer20161018.jar -readImage Linux_x86-64_20161111.zip -installDir /opt/installPath/vf5511/instDir"
You should use the absolute path of the java binary in place of "java". This problem may occur when using ssh too. For example:
shell: /your_java_address_in_target_server/java -jar Installer20161018.jar -readImage Linux_x86-64_20161111.zip -installDir /opt/installPath/vf5511/instDir
#1. Make sure the "become_user" has access to java.
#2. In the .bash_profile, make sure you are setting the Java home path.
#3. Before calling the java command, source .bash_profile to make sure the JDK path is set.
E.g.: - name: unjar abc.jar
        shell: source ~/.bash_profile; jar xvf abc.jar
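All three answers come down to the same check: Ansible's shell module runs commands through a non-interactive /bin/sh, whose PATH can be much shorter than a login shell's, so java may be installed yet invisible to the task. A sketch for confirming where java actually lives as such a shell sees it:

```shell
# Resolve the absolute java path as a non-interactive shell sees it:
command -v java || echo "java is not on this shell's PATH"
# Compare against the PATH a login shell would have:
echo "PATH=${PATH}"
```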

Vagrant Error: Could not parse application options: invalid option: --manifestdir

I am getting the following error when I run vagrant up. After googling, I found people saying the latest Puppet version doesn't support that option, but I don't know how to fix the issue.
==> centos7base: Running provisioner: puppet...
==> centos7base: Running Puppet with site.pp...
==> centos7base: Error: Could not parse application options: invalid option: --manifestdir
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
Vagrant version: 1.7.4
Vagrantfile:
config.vm.provision "puppet" do |puppet|
  puppet.facter = {
    "vagrant" => "1"
  }
  puppet.manifests_path = ["vm", "/Vagrant/puppet"]
  puppet.hiera_config_path = "puppet/hiera.yaml"
  puppet.module_path = "puppet/modules"
  puppet.manifest_file = "site.pp"
end
You should use a folder relative to the Vagrantfile's location; Vagrant is smart enough to mount it into the instance all by itself.
This is an example from a working setup on my development machine, which has vagrant/Vagrantfile plus manifests and modules folders as subdirectories of the same vagrant/ directory.
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "manifests"
  puppet.module_path = "modules"
  puppet.options = ['--verbose']
  puppet.facter = {
    "fqdn" => "vagrant-test.domain.env"
  }
end
I had this problem using the Ubuntu xenial64 box; the solution was to install the puppet-common package. I provisioned its installation through the shell as below:
config.vm.provision "shell", path: "install-puppet.sh"
File content install-puppet.sh:
#!/bin/bash
cd /tmp
wget http://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
dpkg -i puppetlabs-release-pc1-xenial.deb
apt-get update
apt-get -y install puppet-common
echo 'PATH=/opt/puppetlabs/bin:$PATH' >> /etc/bash.bashrc
echo 'export PATH' >> /etc/bash.bashrc
export PATH=/opt/puppetlabs/bin:$PATH
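A note on the likely root cause: the --manifestdir flag that Vagrant 1.7.x passes to the provisioner was removed in Puppet 4, so this error typically means a Puppet 4+ agent ended up paired with an older Vagrant; installing a Puppet 3-era package as above, or upgrading Vagrant, should both resolve it. A sketch for checking which Puppet the box actually has:

```shell
# Report the provisioner's Puppet version, if any is installed:
puppet --version 2>/dev/null || echo "puppet is not installed on this box"
```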
