I'm trying to run a Git command via node over SSH but I keep getting the error:
Permission denied (publickey)
I guess this is because Node does not have access to my SSH keys since it is running in a child process?
How can I get past this?
On Windows, if your credentials are correct, then it should work.
I guess you are running something like this:
require('child_process').spawn('git', ['push', 'origin', 'master']);
It works for me in both cases, SSH and HTTPS.
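If the same push succeeds in your own terminal, the child process is usually missing your agent or key environment rather than being blocked outright. A quick sanity check (github.com is an assumption; substitute your Git host) is to verify the key from the same shell that launches Node:
# Verify the key works from the shell that will launch Node.
ssh -T git@github.com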
I fixed this by running a Bash script from Node as follows:
"scripts": {
"start": "npm-run-all -p server update",
"server": "dyson rest 7070",
"update": "sh update.sh"
}
update.sh looks like:
#!/usr/bin/env bash
set -e
set -o pipefail

# Path to the SSH key that the git command needs.
SSH_KEY=/path/.ssh/id_rsa

function update {
  # Start an ssh-agent for this script, hand it the key,
  # then run the git command that needs authenticated access.
  eval "$(ssh-agent -s)"
  ssh-add "${SSH_KEY}"
  git submodule update --recursive --remote
}

update
The main thing is to start the ssh-agent and add your SSH key to the agent before running your git command.
UPDATE: The above works, but I found a better answer. The reason it was failing is that inside the executing script the HOME environment variable was not pointing to the same HOME as outside the script. I fixed this by putting my SSH keys in both HOME folders. Then the script was able to find the right key, and voilà.
PS: You can determine the value of the HOME variable in Node by logging process.env.HOME, or in a shell script with echo "${HOME}".
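For example, both of these should print the same directory when the environments match:
# HOME as Node sees it
node -e 'console.log(process.env.HOME)'
# HOME as your shell sees it
echo "${HOME}"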
What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands, for reasons that are beyond my level of understanding with Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I send the submit job to AWS Batch; this service pulls the image from ECR (the Amazon container registry) and spins up a compute environment. The issue comes from when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands that I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
Set the permissions for logging
Run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated in the python3 application
Questions
Why can I not create a directory here called "logging"? Where am I?
Am I running these commands properly, as defined by AWS Batch or Docker?
What am I missing, or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands, you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
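Purely as an illustration, the asker's steps might collapse into a single, properly tokenized command along these lines, assuming the environment variables (APIKEY, APP_USER, etc.) are supplied via the job definition's environment settings rather than on the command line:
command: ["/bin/sh", "-c", "mkdir -p logging && chmod 777 logging && python my_app.py -t $APP_USER -e $APP_ENVIRONMENT -u $APP_USERNAME -p $APP_PASSWORD -i $IN_PATH -o $OUT_PATH -b tmp/"]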
I have the following script:
#!/bin/bash
su newuser
node index.js
This script is an entrypoint for a docker container.
When I run the container, I see that the script gets executed and I switch to newuser. However, index.js does not get called. But as soon as I type "exit" to exit newuser, index.js starts running.
Can someone explain what the problem is here, please?
su newuser will create a new shell. Basically, that command launches an interactive shell and waits for it to exit; only once it exits will the next command in your original bash script execute.
If you want to run node as newuser, use this command instead:
su newuser -c "node index.js"
Probably you want to include the full path to node as well, because launching scripts this way often doesn't bring up the full environment that you might expect (PATH might not be complete compared to running a full shell):
su newuser -c "/path/to/node index.js"
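Applied to the entrypoint script from the question, that might look like the following sketch (the node path is an assumption; adjust it for your image):
#!/bin/bash
# Run node as newuser without leaving an interactive shell waiting;
# exec replaces this shell so signals reach the node process directly.
exec su newuser -c "/usr/local/bin/node index.js"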
If the script is an entrypoint script, you shouldn’t need to set the username at all; a USER directive in the Dockerfile can set the (default) user name or user ID.
For this simple setup I wouldn’t use an entrypoint script at all. I’d put in my Dockerfile
USER newuser
CMD ["node", "index.js"]
In general I’d avoid entrypoint scripts or ENTRYPOINT directives that run fixed commands (and prefer CMD over ENTRYPOINT) because they make it difficult to do the otherwise very-useful-when-things-are-broken
docker run --rm -it myimage sh
I'm following a tutorial on PluralSight regarding vagrant and hubot slack setup.
The only difference is that I'm using hubot-slack.
If I start the hubot by invoking hubot script from terminal - everything works fine - the bot connects and responds to commands.
Unfortunately, when the hubot is started as a service by upstart, I get this logged to /var/log/upstart/myhubot.log: Cannot load adapter slack - Error: Cannot find module 'hubot-slack'
my /bin/hubot file looks like this (this works just fine when executed from cli):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
my .conf file that's executed as a service looks like this (can't find module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debug reveals that chdir does the correct thing and the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire nodejs project and generating it with yeoman from scratch, and also tried installing hubot-slack both globally and locally, but to no avail.
In the case of the .conf file there is no npm install, but in the provision.sh file I am cd-ing (as the vagrant user) to the root directory, doing npm install, and only then restarting the service. I am also making sure to clean up everything between rounds of testing before I do vagrant provision:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question so thought I would update with my solution.
The current hubot generated app is started from the CLI, whilst in the folder where hubot was generated, with the command HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack. This therefore utilises the default bin/hubot script.
Your conf file needs to pick this up, and therefore should run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
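After copying the .conf into place, Upstart needs to re-read its configuration before the job can start; on a standard Upstart setup that would be something like:
sudo cp myhubot.conf /etc/init/myhubot.conf
sudo initctl reload-configuration
sudo service myhubot start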
I've installed an npm package/script in a jail on FreeNAS 9.10 (FreeBSD based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to be auto-starting when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically all I need to do is give the "npm start" in the correct directory on start up. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
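As a sketch, such an /etc/rc.local could be as small as this (the application path and log file are illustrative):
#!/bin/sh
# Quick and dirty: run "npm start" in the app directory at jail boot,
# logging output so failures are visible.
cd /usr/local/myapp && /usr/local/bin/npm start >> /var/log/myapp.log 2>&1 &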
Don't know about npm start, but for node.js I made an RC script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
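Once enabled, the script behaves like any other rc service, so you can manage it with the usual commands (same SERVICENAME placeholder):
service SERVICENAME start
service SERVICENAME status
service SERVICENAME stop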
I am trying to deploy my app to Heroku; however, I rely on some private git repos as modules. I do this for code reuse between projects, e.g. I have a custom logger I use in multiple apps.
"logger":"git+ssh://git#bitbucket.org..............#master"
The problem is Heroku obviously does not have SSH access to this code, and I can't find anything on this problem. Ideally Heroku would have a public key I could just add to the modules.
Basic auth
GitHub has support for basic auth:
"dependencies" : {
"my-module" : "git+https://my_username:my_password#github.com/my_github_account/my_repo.git"
}
As does BitBucket:
"dependencies" : {
"my-module": "git+https://my_username:my_password#bitbucket.org/my_bitbucket_account/my_repo.git"
}
But having plain passwords in your package.json is probably not desired.
Personal access tokens (GitHub)
To make this answer more up-to-date, I would now suggest using a personal access token on GitHub instead of a username/password combo.
You should now use:
"dependencies" : {
"my-module" : "git+https://<username>:<token>#github.com/my_github_account/my_repo.git"
}
For GitHub you can generate a new token here:
https://github.com/settings/tokens
App passwords (Bitbucket)
App passwords are primarily intended as a way to provide compatibility with apps that don't support two-factor authentication, and you can use them for this purpose as well. First, create an app password, then specify your dependency like this:
"dependencies" : {
"my-module": "git+https://<username>:<app-password>#bitbucket.org/my_bitbucket_account/my_repo.git"
}
[Deprecated] API key for teams (Bitbucket)
For BitBucket you can generate an API Key on the Manage Team page and then use this URL:
"dependencies" : {
"my-module" : "git+https://<teamname>:<api-key>#bitbucket.org/team_name/repo_name.git"
}
Update 2016-03-26
The method described no longer works if you are using npm3, since npm3 fetches all modules described in package.json before running the preinstall script. This has been confirmed as a bug.
The official node.js Heroku buildpack now includes heroku-prebuild and heroku-postbuild, which will be run before and after npm install respectively. You should use these scripts instead of preinstall and postinstall in all cases, to support both npm2 and npm3.
In other words, your package.json should resemble:
"scripts": {
"heroku-prebuild": "bash preinstall.sh",
"heroku-postbuild": "bash postinstall.sh"
}
I've come up with an alternative to Michael's answer, retaining the (IMO) favourable requirement of keeping your credentials out of source control, whilst not requiring a custom buildpack. This was borne out of frustration that the buildpack linked by Michael is rather out of date.
The solution is to setup and tear down the SSH environment in npm's preinstall and postinstall scripts, instead of in the buildpack.
Follow these instructions:
Create two scripts in your repo, let's call them preinstall.sh and postinstall.sh.
Make them executable (chmod +x *.sh).
Add the following to preinstall.sh:
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
if [ "$GIT_SSH_KEY" != "" ]; then
echo "Detected SSH key for git. Adding SSH config" >&1
echo "" >&1
# Ensure we have an ssh folder
if [ ! -d ~/.ssh ]; then
mkdir -p ~/.ssh
chmod 700 ~/.ssh
fi
# Load the private key into a file.
echo $GIT_SSH_KEY | base64 --decode > ~/.ssh/deploy_key
# Change the permissions on the file to
# be read-only for this user.
chmod 400 ~/.ssh/deploy_key
# Setup the ssh config file.
echo -e "Host github.com\n"\
" IdentityFile ~/.ssh/deploy_key\n"\
" IdentitiesOnly yes\n"\
" UserKnownHostsFile=/dev/null\n"\
" StrictHostKeyChecking no"\
> ~/.ssh/config
fi
Add the following to postinstall.sh:
#!/bin/bash
if [ "$GIT_SSH_KEY" != "" ]; then
echo "Cleaning up SSH config" >&1
echo "" >&1
# Now that npm has finished running, we shouldn't need the ssh key/config anymore.
# Remove the files that we created.
rm -f ~/.ssh/config
rm -f ~/.ssh/deploy_key
# Clear that sensitive key data from the environment
export GIT_SSH_KEY=0
fi
Add the following to your package.json:
"scripts": {
"preinstall": "bash preinstall.sh",
"postinstall": "bash postinstall.sh"
}
Generate a private/public key pair using ssh-keygen.
Add the public key as a deploy key on Github.
Create a base64 encoded version of your private key, and set it as the Heroku config var GIT_SSH_KEY.
Commit and push your app to Github.
When Heroku builds your app, before npm installs your dependencies, the preinstall.sh script is run. This creates a private key file from the decoded contents of the GIT_SSH_KEY environment variable, and creates an SSH config file to tell SSH to use this file when connecting to github.com. (If you are connecting to Bitbucket instead, then update the Host entry in preinstall.sh to bitbucket.org). npm then installs the modules using this SSH config. After installation, the private key is removed and the config is wiped.
This allows Heroku to pull down your private modules via SSH, while keeping the private key out of the codebase. If your private key becomes compromised, since it is just one half of a deploy key, you can revoke the public key in GitHub and regenerate the keypair.
As an aside, since GitHub deploy keys have read/write permissions, if you are hosting the module in a GitHub organization, you can instead create a read-only team and assign a 'deploy' user to it. The deploy user can then be configured with the public half of the keypair. This adds an extra layer of security to your module.
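For the "base64 encoded version of your private key" step, a command along these lines works (macOS shown; GNU base64 on Linux wraps lines by default, so add -w 0; the key path and app name are illustrative):
heroku config:set GIT_SSH_KEY="$(base64 < ~/.ssh/deploy_key)" --app your-app-name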
It's a REALLY bad idea to have plain text passwords in your git repo; using an access token is better, but you will still want to be super careful.
"my_module": "git+https://ACCESS_TOKEN:x-oauth-basic#github.com/me/my_module.git"
I created a custom nodeJS buildpack that will allow you to specify an SSH key that is registered with ssh-agent and used by npm when dynos are first set up. It seamlessly allows you to specify your module as an ssh url in your package.json, as shown:
"private_module": "git+ssh://git#github.com:me/my_module.git"
To set up your app to use your private key:
Generate a key: ssh-keygen -t rsa -C "your_email@example.com" (Enter no passphrase. The buildpack does not support keys with passphrases)
Add the public key to github: pbcopy < ~/.ssh/id_rsa.pub (in OS X) and paste the results into the github admin
Add the private key to your heroku app's config: cat id_rsa | base64 | pbcopy, then heroku config:set GIT_SSH_KEY=<paste_here> --app your-app-name
Set up your app to use the buildpack as described in the heroku nodeJS buildpack README included in the project. In summary the simplest way is to set a special config value with heroku config:set to the github url of the repository containing the desired buildpack. I'd recommend forking my version and linking to your own github fork, as I'm not promising to not change my buildpack.
My custom buildpack can be found here: https://github.com/thirdiron/heroku-buildpack-nodejs and it works for my system. Comments and pull requests are more than welcome.
Based on the answer from #fiznool I created a buildpack to solve this problem using a custom ssh key stored as an environment variable. As the buildpack is technology agnostic, it can be used to download dependencies using any tool like composer for php, bundler for ruby, npm for javascript, etc: https://github.com/simon0191/custom-ssh-key-buildpack
Add the buildpack to your app:
$ heroku buildpacks:add --index 1 https://github.com/simon0191/custom-ssh-key-buildpack
Generate a new SSH key without a passphrase (let's say you named it deploy_key)
Add the public key to your private repository account. For example:
Github: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
Bitbucket: https://confluence.atlassian.com/bitbucket/add-an-ssh-key-to-an-account-302811853.html
Encode the private key as a base64 string and add it as the CUSTOM_SSH_KEY environment variable of the heroku app.
Make a comma separated list of the hosts for which the ssh key should be used and add it as the CUSTOM_SSH_KEY_HOSTS environment variable of the heroku app.
# MacOS
$ heroku config:set CUSTOM_SSH_KEY=$(base64 --input ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
# Ubuntu
$ heroku config:set CUSTOM_SSH_KEY=$(base64 ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
Deploy your app and enjoy :)
I was able to set up resolving of GitHub private repositories in the Heroku build via personal access tokens.
Generate a GitHub access token here: https://github.com/settings/tokens
Set the access token as a Heroku config var: heroku config:set GITHUB_TOKEN=<paste_here> --app your-app-name, or via the Heroku Dashboard
Add a heroku-prebuild.sh script:
#!/bin/bash
if [ "$GITHUB_TOKEN" != "" ]; then
echo "Detected GITHUB_TOKEN. Setting git config to use the security token" >&1
git config --global url."https://${GITHUB_TOKEN}#github.com/".insteadOf git#github.com:
fi
Add the prebuild script to package.json:
"scripts": {
"heroku-prebuild": "bash heroku-prebuild.sh"
}
For the local environment we can also use git config ..., or we can add the access token to the ~/.netrc file:
machine github.com
login PASTE_GITHUB_USERNAME_HERE
password PASTE_GITHUB_TOKEN_HERE
and installing private GitHub repos should work.
npm install OWNER/REPO --save will appear in package.json as: "REPO": "github:OWNER/REPO"
and resolving private repos in Heroku build should also work.
Optionally, you can set up a postbuild script to unset the GITHUB_TOKEN.
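A minimal sketch of such a heroku-postbuild.sh, mirroring the prebuild script above (wire it up in package.json the same way):
#!/bin/bash
if [ "$GITHUB_TOKEN" != "" ]; then
    # Remove the token-bearing URL rewrite once dependencies are installed.
    git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf
fi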
This answer is good: https://stackoverflow.com/a/29677091/6135922, but I changed the preinstall script a little bit. Hope this will help someone.
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
echo "Preinstall"
if [ "$GIT_SSH_KEY" != "" ]; then
echo "Detected SSH key for git. Adding SSH config" >&1
echo "" >&1
# Ensure we have an ssh folder
if [ ! -d ~/.ssh ]; then
mkdir -p ~/.ssh
chmod 700 ~/.ssh
fi
# Load the private key into a file.
echo $GIT_SSH_KEY | base64 --decode > ~/.ssh/deploy_key
# Change the permissions on the file to
# be read-only for this user.
chmod o-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/deploy_key
# Setup the ssh config file.
echo -e "Host bitbucket.org\n"\
" IdentityFile ~/.ssh/deploy_key\n"\
" HostName bitbucket.org\n" \
" IdentitiesOnly yes\n"\
" UserKnownHostsFile=/dev/null\n"\
" StrictHostKeyChecking no"\
> ~/.ssh/config
echo "eval `ssh-agent -s`"
eval `ssh-agent -s`
echo "ssh-add -l"
ssh-add -l
echo "ssh-add ~/.ssh/deploy_key"
ssh-add ~/.ssh/deploy_key
# uncomment to check that everything works just fine
# ssh -v git#bitbucket.org
fi
You can use a private repository with authentication in package.json, as in the example below:
https://usernamegit:passwordgit@github.com/reponame/web/tarball/branchname
In short, it is not possible. The best solution to this problem I came up with is to use the new git subtrees. At the time of writing they are not in the official git source and so need to be installed manually, but they will be included in v1.7.11. At the moment it is available on homebrew and apt-get. It is then a case of doing
git subtree add -P node_modules/someprivatemodule git@github.......someprivatemodule {master|tag|commit}
This bulks out the repo size, but an update is easy: do the command above with git subtree pull.
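For example, a later update might look like this (the repository URL and module name are illustrative):
git subtree pull -P node_modules/someprivatemodule git@github.com:me/someprivatemodule.git master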
I have done this before with modules from GitHub. npm currently accepts the name of the package or a link to a tar.gz file which contains the package.
For example if you want to use express.js directly from Github (grab the link via the download section) you could do:
"dependencies" : {
"express" : "https://github.com/visionmedia/express/tarball/2.5.9"
}
So you need to find a way to access your repository as a tar.gz file via http(s).