'dzdo: service: command not found' when starting a service via SSH - linux

In my GitLab CI script I'm trying to install and start a Java application remotely. The installation part goes fine, but when I try to start the service using this command:
ssh $DEPLOY_USER@$DEPLOY_HOST "sudo service my-service start"
I get the following error:
dzdo: service: command not found
All the previous commands with sudo rights are executed successfully. What's wrong with this one?

Adding bash to the command solved the issue:
ssh $DEPLOY_USER@$DEPLOY_HOST "sudo bash service my-service start"
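The likely root cause is that dzdo (Centrify's sudo replacement, which is evidently standing in for sudo here) resets PATH for the non-interactive session, so /usr/sbin is not searched and service isn't found. A sketch of two PATH-independent alternatives (the /usr/sbin location is an assumption for a typical Linux layout):

```shell
# Call the binary by absolute path so the lookup does not depend on
# whatever PATH dzdo/sudo sets up (typical location assumed):
ssh "$DEPLOY_USER@$DEPLOY_HOST" 'sudo /usr/sbin/service my-service start'

# Or resolve the path on the remote side before escalating:
ssh "$DEPLOY_USER@$DEPLOY_HOST" 'sudo "$(command -v service || echo /usr/sbin/service)" my-service start'
```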


Docker flag "--gpu" does not work without sudo command

I'm an Ubuntu user, and I use the docker image tensorflow/tensorflow:nightly-gpu.
If I try to run this command
$ docker run -it --rm --gpus all tensorflow/tensorflow:nightly-gpu bash
I get a permission denied error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: open failed: /sys/fs/cgroup/devices/user.slice/devices.allow: permission denied: unknown.
Of course, I can run this command if I am using sudo, but I want to use gpu without sudo.
Is there any good solution? Any leads, please?
Since your problem seems to occur only when running with "--gpus":
Add/update these two sections of /etc/nvidia-container-runtime/config.toml:
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
Source: https://github.com/containers/podman/issues/3659#issuecomment-543912380
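After editing config.toml, a quick way to check that the flag now works without sudo (same image as in the question; nvidia-smi being present in the image is an assumption):

```shell
# Should list the GPUs if the cgroup workaround took effect:
docker run --rm --gpus all tensorflow/tensorflow:nightly-gpu nvidia-smi
```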
If you can't use docker without sudo at all
If you are running in a Linux environment, you need to add your user to the docker group so you won't need to use sudo every time. Below are the steps:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker
Source: https://docs.docker.com/engine/install/linux-postinstall/
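One caveat: the new group only applies to sessions started after the change (newgrp covers the current shell). A quick check that it took effect:

```shell
# Group membership check; `docker` should appear in the list:
id -nG | grep -qw docker && echo "docker group active"

# Then the daemon should be reachable without sudo:
docker run --rm hello-world
```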

using pm2 with ansible

I am trying to start a Node program using pm2 via Ansible. The problem is that the pm2 start command is not idempotent under Ansible: it gives an error when run again.
This is my Ansible play:
- name: start the application
  become_user: ubuntu
  command: pm2 start app.js -i max
  tags:
    - app
Now if I run this the first time it runs properly, but when I run it again I get an error telling me that the script is already running.
What would be the correct way to get around this error and handle pm2 properly via Ansible?
Before starting the script you should delete the previous process, like this:
- name: delete existing pm2 processes if running
  command: "pm2 delete {{ server_id }}"
  ignore_errors: True
  become: yes
  become_user: rw_user

- name: start pm2 process
  command: 'pm2 start -x -i 4 --name "{{ server_id }}" server.js'
  become: yes
  become_user: rw_user
  environment:
    NODE_ENV: "{{ server_env }}"
I would use
pm2 reload app.js -i max
It will allow you to reload configuration ;-)
I ended up on this page looking for a way to avoid starting PM2 multiple times when I rerun my playbook. I also wanted PM2 to reload the server when it was already running and pick up the new code I might have deployed. It turns out that PM2 has such an interface:
- name: Start/reload server
  command: '{{ path_to_deployed_pm2 }} startOrReload pm2.ecosystem.config.js'
The startOrReload command requires a so-called "ecosystem" file to be present. See the documentation for more details: Ecosystem File.
This is a minimal pm2.ecosystem.config.js that is working for me:
module.exports = {
  apps: [{
    script: 'app.js',
    name: "My app"
  }],
};
Here we can use "register" to perform a conditional restart/start. Register the output of the following command:
- shell: pm2 list | grep <app_name> | awk '{print $2}'
  register: APP_STATUS
  become: yes
and then use APP_STATUS.stdout to make the start and restart tasks conditional. This way we don't need a pm2 delete step.
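A sketch of what that register-based approach could look like (the task layout and the my-app name are assumptions, not from the original answer):

```yaml
- name: check whether the app is already known to pm2
  shell: pm2 list | grep my-app | awk '{print $2}'
  register: APP_STATUS
  become: yes
  ignore_errors: True

- name: start the app if it is not running
  command: pm2 start app.js --name my-app
  become: yes
  when: APP_STATUS.stdout == ""

- name: restart the app if it is already running
  command: pm2 restart my-app
  become: yes
  when: APP_STATUS.stdout != ""
```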

Node.js app can't access any env variables when pm2 is started from a script but can when launched from ssh

I am trying to launch a node.js app on a production EC2 server with pm2 process manager.
When I ssh into the instance and run pm2 start app.js, PM2 starts just fine and has access to all environment variables. Everything good.
However, when I run pm2 start app.js from a CodeDeploy hook script called applicationstart.sh, the app fails with an errored status because it is missing all environment variables.
Here is the appspec.yml, where the script is added so it is launched with each deployment and calls pm2 start:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/teller-install
hooks:
  AfterInstall:
    - location: scripts/afterinstall.sh
      timeout: 1000
      runas: root
  ApplicationStart:
    - location: scripts/applicationstart.sh
      timeout: 300
      runas: ubuntu
Here is the applicationstart script:
#!/bin/bash
echo "Running Hook: applicationstart.sh"
cd /home/ubuntu/teller-install/Server/
pm2 start app.js
exit 0
I am logged in as ubuntu when I run the script from ssh, and I set the script in the appspec.yml to run as ubuntu as well.
Why would there be any difference between starting pm2 from terminal and starting it from a launch hook script?
I can provide any additional information if needed. Thanks!
I had to add source /etc/profile before I call pm2 start app.js to load in environment variables.
The script now looks like
#!/bin/bash
echo "Running Hook: applicationstart.sh"
cd /home/ubuntu/teller-install/Server/
source /etc/profile
pm2 start app.js
exit 0

how to automate docker development environment startup for a multi-service web app (on Linux)

I currently have to do the following 9+ steps just to launch my dev stack using Docker on Ubuntu 16.04 before I can start writing code:
open a terminal and cd into service #1's source code directory
docker-compose up service #1 (Python/Django, Redis, and Postgres containers)
docker exec service1 bash; start Django dev server for debugging
repeat for service #2, using terminal tabs to keep things organized
open a terminal and cd into the front-end Angular app source directory
run a webpack dev server with npm
open one or more terminals and cd into the appropriate source code directories to edit
I tried writing a shell script to launch everything with gnome-terminal --tab -e "bash -c docker-compose up", etc, but this gets awkward and will fail when trying to then shell into containers and run things, e.g. gnome-terminal --tab -e "bash -c \"docker-compose exec service1 bash -c rundev.sh \"". I also tried using xdotool, but it can't identify the docker shell terminal tabs for some reason.
Running a SPA with two back-end services and doing local development on each of the three code-bases doesn't seem like a bizarre use-case for Docker app development to me.
Does anyone have any suggestions of tools or an alternative dev environment setup for simplifying things?
You can do it with a bash script, but the proper way would be docker-compose. You need to create two services with their respective commands to run. Here is an example for a Rails app; your docker-compose.yml should look something like this:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
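With the services declared this way, the manual terminal-tab juggling collapses into a couple of commands (the web service name follows the example above and is an assumption):

```shell
# Build and start the whole stack in the background:
docker-compose up -d --build

# Follow one service's logs, or open a shell in it, when needed:
docker-compose logs -f web
docker-compose exec web bash
```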

How can I execute commands inside containers during host provisioning

I am using Vagrant to build a Docker host, and I have a shell script which installs all the required packages for the host. That script also builds and runs containers.
Vagrantfile:
config.vm.provision :shell, :inline => "sudo /vagrant/bootstrap.sh"
Inside that script I run containers like:
docker run -d ...
This works fine, but I have to ssh into the container and run make deploy to install all the stuff.
Is there any way I can run that make deploy from within my bootstrap.sh?
One way is to make it the entry point, but then it would run on every start.
I just want that, when I provision the host, that command runs inside the container and shows me output the way Vagrant does for the host.
Use docker exec, see the docs:
http://docs.docker.com/reference/commandline/exec/
For example:
docker exec -it container_id make deploy
or
docker exec -it container_id bash
and then
make deploy
inside your container
