While executing the Docker command "docker-compose up", getting the error "node-app-1 | /bin/sh: [npm,: not found"

My Dockerfile:
FROM node:alpine
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
COPY . /usr/src/app/
CMD ["npm" ,"start"]
My docker-compose.yml file:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - '8080:8080'
docker-compose output:
$ docker-compose up
Container visits-node-app-1 Created
Container visits-redis-server-1 Created
Attaching to visits-node-app-1, visits-redis-server-1
visits-redis-server-1 | 1:C 19 Oct 2021 13:10:09.712 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
visits-redis-server-1 | 1:C 19 Oct 2021 13:10:09.712 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
visits-redis-server-1 | 1:C 19 Oct 2021 13:10:09.712 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.713 * monotonic clock: POSIX clock_gettime
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.715 * Running mode=standalone, port=6379.
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.715 # Server initialized
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.715 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 * Loading RDB produced by version 6.2.6
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 * RDB age 272 seconds
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 * RDB memory usage when created 0.77 Mb
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 # Done loading RDB, keys loaded: 0, keys expired: 0.
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 * DB loaded from disk: 0.000 seconds
visits-redis-server-1 | 1:M 19 Oct 2021 13:10:09.717 * Ready to accept connections
visits-node-app-1 | /bin/sh: [npm,: not found
visits-node-app-1 exited with code 127
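This error generally means Docker did not parse the CMD line as a JSON array and fell back to the shell form, so /bin/sh was handed the literal text ["npm" ,"start"] and tried to run [npm, as a program. A common cause is that the quotes in the real file are single quotes or typographic "smart" quotes rather than plain ASCII double quotes, even when the snippet looks fine once pasted. A minimal sketch of the Dockerfile with a plain-quoted exec-form CMD:

FROM node:alpine
WORKDIR /usr/src/app/
COPY package.json .
RUN npm install
COPY . /usr/src/app/
# Exec form: a valid JSON array with straight double quotes, run without a shell
CMD ["npm", "start"]

After changing the file, rebuild the image with docker-compose up --build so the new CMD is picked up.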

Related

alpine image error: /bin/sh: can't access tty; job control turned off

I have a Redis pod with this spec:
spec:
  containers:
  - name: master
    image: xyzwy/redis:7.0
    command: ["sh", "-ic"]
    args:
    - redis-server
    - /bin/sh
When I deploy it, I get an error on the first line:
/bin/sh: can't access tty; job control turned off
1:C 09 May 2022 13:31:44.287 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 09 May 2022 13:31:44.287 # Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 09 May 2022 13:31:44.287 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 09 May 2022 13:31:44.288 * monotonic clock: POSIX clock_gettime
1:M 09 May 2022 13:31:44.289 * Running mode=standalone, port=6379.
1:M 09 May 2022 13:31:44.289 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 09 May 2022 13:31:44.290 # Server initialized
1:M 09 May 2022 13:31:44.291 * The AOF directory appendonlydir doesn't exist
1:M 09 May 2022 13:31:44.291 * Ready to accept connections
How can I solve this?
The problem was in:
command: ["sh", "-ic"]
The -i flag cannot be there: the shell is not running interactively inside the container (there is no TTY attached), which is what triggers the warning.
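A minimal sketch of the corrected container spec under that fix (the image and container names follow the question; the args are simplified to just the redis-server command):

spec:
  containers:
  - name: master
    image: xyzwy/redis:7.0
    # No -i: there is no controlling TTY in the container, so an interactive
    # shell would only print "can't access tty; job control turned off"
    command: ["sh", "-c"]
    args:
    - redis-server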

Redis-server command works inside container but not when docker-compose starts

I have been working on a project to dockerize and automate Redis deployments when I stumbled upon a very weird issue with my build. This is my current Dockerfile:
ARG BUILD_VERSION=5
FROM redis:${BUILD_VERSION}
RUN mkdir /var/log/redis
RUN chown -R redis:redis /data /var/log/redis
EXPOSE 6379
WORKDIR /usr/local/etc/redis
CMD ["/usr/local/bin/redis-server", "/etc/redis.conf"]
This is my compose:
version: '2.2'
services:
  redis:
    image: test:red5
    restart: unless-stopped
    ports:
      - "6379:6379"
    user: $UID:$GID
    volumes:
      - /var/lib/redis/:/var/lib/redis/
      - /var/log/redis/:/var/log/redis/
      - /etc/redis.conf:/etc/redis.conf
To be clear, I am mounting the Redis dirs and configs as volumes because that is what is on the server. The UID and GID variables are set in my .env file.
When I docker exec into the container and run /usr/local/bin/redis-server /etc/redis.conf, the Redis server initializes with no problem, but when I run docker-compose up I get this exit code:
docker-compose up
Creating network "redis_default" with the default driver
Creating redis_redis_1 ... done
Attaching to redis_redis_1
redis_redis_1 exited with code 0
First question ever on Stack Overflow :) Assistance is appreciated.
Logs (the bind address issue was already resolved):
I have no name!@b306768fd72f:/usr/local/src$ tail -f /var/log/redis/redis.log
37:C 19 Jan 2022 18:41:09.928 # Configuration loaded
38:M 19 Jan 2022 18:41:09.931 # Could not create server TCP listening socket *:6379: bind: Address already in use
41:C 19 Jan 2022 18:41:40.389 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
41:C 19 Jan 2022 18:41:40.389 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=41, just started
41:C 19 Jan 2022 18:41:40.389 # Configuration loaded
42:M 19 Jan 2022 18:41:40.393 * Running mode=standalone, port=6379.
42:M 19 Jan 2022 18:41:40.393 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
42:M 19 Jan 2022 18:41:40.393 # Server initialized
42:M 19 Jan 2022 18:41:40.394 * DB loaded from disk: 0.001 seconds
42:M 19 Jan 2022 18:41:40.394 * Ready to accept connections
Found the issue in /etc/redis.conf:
daemonize yes
This was preventing Docker from running redis-server properly: with daemonize yes Redis forks into the background and writes a pidfile, so the process Docker started exits immediately (hence the exit code 0). The container is now running.
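In other words, inside a container redis-server has to stay in the foreground so it remains the process Docker is supervising. Assuming the same mounted /etc/redis.conf as in the compose file above, the relevant line becomes:

# /etc/redis.conf -- run in the foreground under Docker
daemonize no

(Alternatively the daemonize directive can be removed entirely, since no is the default.)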

Automatically restart a service after unattended updates under systemd

I've added a service (Seafile, in this case) that I want running at all times to systemd with a service file. It works great, but every time the unattended updates run, the service gets shut down properly but never restarted.
Here's what the service file looks like:
[Unit]
Description=Seafile
Requires=mysql.service
After=mysql.service
[Install]
WantedBy=multi-user.target
[Service]
Type=forking
User=root
Group=root
PermissionsStartOnly=true
ExecStart=/srv/start-seafile
TimeoutSec=600
Restart=on-failure
/srv/start-seafile looks like this:
#!/bin/bash
cd /srv/seafile/XXXX/seafile-server-latest && nohup ./seafile.sh start
Like I said, this works perfectly: systemd can enable/disable/start/stop the service and tell whether it's started/running, so I must be doing something right.
# systemctl start seafile
# systemctl status seafile
* seafile.service - Seafile
Loaded: loaded (/etc/systemd/system/seafile.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2018-01-28 17:53:51 CET; 8s ago
Process: 6569 ExecStart=/srv/start-seafile (code=exited, status=0/SUCCESS)
CGroup: /system.slice/seafile.service
|-6598 /srv/seafile/XXXX/seafile-server-5.1.4/seafile/bin/seafile-controller -c /srv/seafile/XXXX/ccnet -d /volume1/Seafile -F /srv/seafile/XXXX/conf
|-6600 ccnet-server -F /srv/seafile/XXXX/conf -c /srv/seafile/XXXX/ccnet -f /srv/seafile/XXXX/logs/ccnet.log -d -P /srv/seafile/XXXX/pids/ccnet.pid
`-6602 seaf-server -F /srv/seafile/XXXX/conf -c /srv/seafile/XXXX/ccnet -d /volume1/Seafile -l /srv/seafile/XXXX/logs/seafile.log -P /srv/seafile/XXXX/pids/seaf-server.pid
Jan 28 17:53:48 XXXX systemd[1]: Starting Seafile...
Jan 28 17:53:48 XXXX start-seafile[6569]: [01/28/18 17:53:48] ../common/session.c(132): using config file /srv/seafile/XXXX/conf/ccnet.conf
Jan 28 17:53:48 XXXX start-seafile[6569]: Starting seafile server, please wait ...
Jan 28 17:53:51 XXXX start-seafile[6569]: Seafile server started
Jan 28 17:53:51 XXXX start-seafile[6569]: Done.
Jan 28 17:53:51 XXXX systemd[1]: Started Seafile.
However, every time the unattended updates come, this happens:
Jan 23 06:38:08 XXXX systemd[1]: Starting Daily apt activities...
Jan 23 06:40:13 XXXX systemd[1]: Reloading.
Jan 23 06:40:13 XXXX systemd[1]: Stopping Seahub...
Jan 23 06:40:14 XXXX systemd[1]: Stopped Seahub.
Jan 23 06:40:14 XXXX systemd[1]: Stopping Seafile...
Jan 23 06:40:15 XXXX systemd[1]: Stopped Seafile.
Jan 23 06:40:15 XXXX systemd[1]: Stopping MySQL Community Server...
Jan 23 06:40:16 XXXX systemd[1]: Stopped MySQL Community Server.
Jan 23 06:40:17 XXXX systemd[1]: Reloading.
Jan 23 06:40:17 XXXX systemd[1]: Stopped MySQL Community Server.
...
Jan 23 06:40:43 XXXX systemd[1]: Reloading.
Jan 23 06:40:44 XXXX systemd[1]: Stopped MySQL Community Server.
Jan 23 06:40:44 XXXX systemd[1]: Reloading.
Jan 23 06:40:44 XXXX systemd[1]: Reloading.
Jan 23 06:40:44 XXXX systemd[1]: Starting MySQL Community Server...
Jan 23 06:40:45 XXXX systemd[1]: Started MySQL Community Server.
Jan 23 06:40:47 XXXX systemd[1]: Reloading.
Jan 23 06:40:48 XXXX systemd[1]: Stopping MySQL Community Server...
Jan 23 06:40:49 XXXX systemd[1]: Stopped MySQL Community Server.
Jan 23 06:40:49 XXXX systemd[1]: Starting MySQL Community Server...
Jan 23 06:40:50 XXXX systemd[1]: Started MySQL Community Server.
(... continues with unrelated services )
So it realizes that Seafile needs to be stopped before MySQL, and does so, but no longer starts it after it restarts MySQL.
Does anyone have any experience as to what could be causing this, i.e. under which circumstances will systemd services be stopped during an update but not restarted?
Use Restart=always if you want it to run at all times. When the mysql service is updated, this service does a clean stop, and therefore systemd doesn't restart it. You have Restart=on-failure set, which only restarts the service if it stops with a non-zero return code.
Restart=always
RestartSec=10
From the systemd.service man page, on RestartSec:
Configures the time to sleep before restarting a service (as configured with Restart=). Takes a unit-less value in seconds, or a time span value such as "5min 20s". Defaults to 100ms.
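Applied to the unit from the question, the [Service] section would then look like this (a sketch based on the original file, changing only the restart settings):

[Service]
Type=forking
User=root
Group=root
PermissionsStartOnly=true
ExecStart=/srv/start-seafile
TimeoutSec=600
# Restart after any stop, not only on failure; wait 10 seconds between attempts
Restart=always
RestartSec=10

Run systemctl daemon-reload after editing so systemd picks up the changed unit.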

BBB Palette not displayed on Node-Red on Beaglebone Black Wireless

My Node-RED installation does not display the nodes needed for accessing the BeagleBone IOs (node-red-node-beaglebone).
In my opinion, the error is caused because node-red-node-beaglebone loads octalbonescript, which needs serialport, and something there requests serialPort instead of serialport.
I already tried the preinstalled, stable and LTS Node.js versions, and additionally npm@2 and npm@3 for installing the nodes. I also deleted the .node-red folder and installed the node-red-node-beaglebone package into the .node-red/node_modules folder, starting from scratch.
root@beaglebone:/etc# cat debian_version
8.7
root@beaglebone:~# npm -v
2.15.11
root@beaglebone:~# uname -a
Linux beaglebone 4.4.30-ti-r64 #1 SMP Fri Nov 4 21:23:33 UTC 2016 armv7l GNU/Linux
root@beaglebone:~# export AUTO_LOAD_CAPE=0 #optional
root@beaglebone:~# node-red-pi
1487183133432 Board Looking for connected device
15 Feb 19:25:34 - [info]
Welcome to Node-RED
===================
15 Feb 19:25:34 - [info] Node-RED version: v0.16.2
15 Feb 19:25:34 - [info] Node.js version: v6.9.5
15 Feb 19:25:34 - [info] Linux 4.4.30-ti-r64 arm LE
15 Feb 19:25:44 - [info] Loading palette nodes
15 Feb 19:25:57 - [warn] ------------------------------------------------------
15 Feb 19:25:57 - [warn] [bbb] ReferenceError: serialPort is not defined   <-- this seems to be the problem
15 Feb 19:25:57 - [warn] ------------------------------------------------------
15 Feb 19:25:57 - [info] Settings file : /root/.node-red/settings.js
15 Feb 19:25:57 - [info] User directory : /root/.node-red
15 Feb 19:25:57 - [info] Flows file : /root/.node-red/flows_beaglebone.json
15 Feb 19:25:57 - [info] Creating new flow file
15 Feb 19:25:57 - [debug] loaded flow revision: 513fd923d68021b8ee98fcb250470340
15 Feb 19:25:57 - [debug] red/runtime/nodes/credentials.load : no user key present
15 Feb 19:25:57 - [debug] red/runtime/nodes/credentials.load : using default key
15 Feb 19:25:57 - [info] Starting flows
15 Feb 19:25:57 - [info] Started flows
15 Feb 19:25:57 - [info] Server now running at http://127.0.0.1:1880/

Can't start newly installed PostgreSQL on Fedora

I just installed PostgreSQL (9.3) on my Fedora Linux (21), as shown below.
The yum installer returns 'already installed' as this is the second time I've run it, to get a screengrab for you.
systemctl enable postgresql seems to work. At least it returns no errors.
systemctl start postgresql fails. The log says data directory "/ConfigDir" does not exist, so I guess that's the key.
But the data directory does exist. It's /usr/local/pgsql/data, as is the default on a Fedora PostgreSQL installation. It is, though, owned by user postgres.
I guess that the start of the postgres server service fails because it somehow can't see the directory, likely because it's not trying as user postgres.
But I don't know how to check this, nor how to mend it.
[martin@helium ~]$ sudo yum install postgresql-server postgresql-contrib
[sudo] password for martin:
Loaded plugins: langpacks
Package postgresql-server-9.3.9-1.fc21.x86_64 already installed and latest version
Package postgresql-contrib-9.3.9-1.fc21.x86_64 already installed and latest version
Nothing to do
[martin@helium ~]$ sudo systemctl enable postgresql
[martin@helium ~]$ sudo systemctl start postgresql
Job for postgresql.service failed. See "systemctl status postgresql.service" and "journalctl -xe" for details.
[martin@helium ~]$ journalctl -xn
-- Logs begin at Fri 2014-12-26 16:16:25 CET, end at Sun 2016-02-07 14:35:11 CET. --
Feb 07 14:35:06 helium.hvidberg.net sudo[3466]: martin : TTY=pts/0 ; PWD=/home/martin ; USER=root ; COMMAND=/bin/systemctl start postgresql
Feb 07 14:35:06 helium.hvidberg.net sudo[3466]: pam_unix(sudo:session): session opened for user root by martin(uid=0)
Feb 07 14:35:06 helium.hvidberg.net pg_ctl[3476]: FATAL: data directory "/ConfigDir" does not exist
Feb 07 14:35:11 helium.hvidberg.net pg_ctl[3476]: pg_ctl: could not start server
Feb 07 14:35:11 helium.hvidberg.net pg_ctl[3476]: Examine the log output.
Feb 07 14:35:11 helium.hvidberg.net systemd[1]: postgresql.service: control process exited, code=exited status=1
Feb 07 14:35:11 helium.hvidberg.net systemd[1]: Failed to start PostgreSQL database server.
-- Subject: Unit postgresql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit postgresql.service has failed.
--
-- The result is failed.
Feb 07 14:35:11 helium.hvidberg.net systemd[1]: Unit postgresql.service entered failed state.
Feb 07 14:35:11 helium.hvidberg.net systemd[1]: postgresql.service failed.
Feb 07 14:35:11 helium.hvidberg.net sudo[3466]: pam_unix(sudo:session): session closed for user root
[martin@helium ~]$
Changing to root to be able to see the PostgreSQL data directory:
[root@helium data]# pwd
/usr/local/pgsql/data
[root@helium data]# ls -l
total 100
drwx------. 5 postgres postgres 4096 Feb 6 19:00 base
drwx------. 2 postgres postgres 4096 Feb 6 19:03 global
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_clog
-rw-------. 1 postgres postgres 4476 Feb 6 19:00 pg_hba.conf
-rw-------. 1 postgres postgres 1636 Feb 6 19:00 pg_ident.conf
drwx------. 2 postgres postgres 4096 Feb 6 19:03 pg_log
drwx------. 4 postgres postgres 4096 Feb 6 19:00 pg_multixact
drwx------. 2 postgres postgres 4096 Feb 6 19:03 pg_notify
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_serial
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_snapshots
drwx------. 2 postgres postgres 4096 Feb 6 19:05 pg_stat
drwx------. 2 postgres postgres 4096 Feb 6 19:05 pg_stat_tmp
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_subtrans
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_tblspc
drwx------. 2 postgres postgres 4096 Feb 6 19:00 pg_twophase
-rw-------. 1 postgres postgres 4 Feb 6 19:00 PG_VERSION
drwx------. 3 postgres postgres 4096 Feb 6 19:00 pg_xlog
-rw-------. 1 postgres postgres 20733 Feb 7 10:26 postgresql.conf
-rw-------. 1 postgres postgres 47 Feb 6 19:03 postmaster.opts
[root@helium data]#
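The "/ConfigDir" path in the error suggests postgresql.service is not being started with the expected PGDATA. Assuming the packaged unit reads the data directory from a PGDATA environment variable (as Fedora's postgresql.service normally does), one way to check this and, if needed, point it at the directory from the question is a drop-in override. The paths below follow the question and are assumptions about this particular setup:

# Show the unit actually in use and whatever PGDATA it sets
systemctl cat postgresql.service | grep -i PGDATA

# Check that the postgres user can reach the data directory
sudo -u postgres ls /usr/local/pgsql/data

# Point the service at that directory with a drop-in override
sudo mkdir -p /etc/systemd/system/postgresql.service.d
sudo tee /etc/systemd/system/postgresql.service.d/pgdata.conf <<'EOF'
[Service]
Environment=PGDATA=/usr/local/pgsql/data
EOF
sudo systemctl daemon-reload
sudo systemctl start postgresql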
