kubelet failed to find mountpoint for CPU - linux

I'm using kubeadm 1.15.3, docker-ce 18.09 on Debian 10 buster 5.2.9-2, and seeing errors in journalctl -xe | grep kubelet:
server.go:273] failed to run Kubelet: mountpoint for cpu not found
My /sys/fs/cgroup contains:
-r--r--r-- 1 root root 0 Sep 2 18:49 cgroup.controllers
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.depth
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.procs
-r--r--r-- 1 root root 0 Sep 2 18:50 cgroup.stat
-rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.threads
-rw-r--r-- 1 root root 0 Sep 2 18:50 cpu.pressure
-r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.cpus.effective
-r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.mems.effective
drwxr-xr-x 2 root root 0 Sep 2 18:49 init.scope
-rw-r--r-- 1 root root 0 Sep 2 18:50 io.pressure
-rw-r--r-- 1 root root 0 Sep 2 18:50 memory.pressure
drwxr-xr-x 20 root root 0 Sep 2 18:49 system.slice
drwxr-xr-x 2 root root 0 Sep 2 18:49 user.slice
docker.service is running okay and has /etc/docker/daemon.json:
{
  "exec-opts": [
    "native.cgroupdriver=systemd"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
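(For reference, which driver Docker actually ended up using can be checked like this — a generic sketch, not something from the kubeadm docs:)
$ docker info | grep -i 'cgroup driver'
# should print "Cgroup Driver: systemd" if the exec-opt took effect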
The kubeadm docs say if using docker the cgroup driver will be autodetected, but I tried supplying it anyway for good measure - no change.
With mount or cgroupfs-mount:
$ mount -t cgroup -o all cgroup /sys/fs/cgroup
mount: /sys/fs/cgroup: cgroup already mounted on /sys/fs/cgroup/cpuset.
$ cgroupfs-mount
mount: /sys/fs/cgroup/cpu: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/blkio: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/memory: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/pids: cgroup already mounted on /sys/fs/cgroup/cpuset.
Is the problem that it's at cpuset rather than cpu? I tried to create a symlink, but root does not have write permission for /sys/fs/cgroup. (Presumably I can change it, but I took that as enough warning not to meddle.)
How can I let kubelet find my CPU cgroup mount?

I would say that something is very weird with your docker-ce installation, not kubelet. You are looking in the right direction by pointing at the mapping problem.
I have tried 3 different docker versions on instances in both GCP and AWS environments.
What I noticed comparing our results is that you have the wrong folder structure under /sys/fs/cgroup: your host exposes only the unified (cgroup v2) layout, while mine has a separate directory per v1 controller, with many more entries. This is what my results look like:
root@instance-3:~# docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.39 (downgraded from 1.40)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:21:24 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 4c52b90
Built: Wed Jan 9 19:02:44 2019
OS/Arch: linux/amd64
Experimental: false
root@instance-3:~# ls -la /sys/fs/cgroup
total 0
drwxr-xr-x 14 root root 360 Sep 3 11:30 .
drwxr-xr-x 6 root root 0 Sep 3 11:30 ..
dr-xr-xr-x 5 root root 0 Sep 3 11:30 blkio
lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpu -> cpu,cpuacct
dr-xr-xr-x 5 root root 0 Sep 3 11:30 cpu,cpuacct
lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpuacct -> cpu,cpuacct
dr-xr-xr-x 2 root root 0 Sep 3 11:30 cpuset
dr-xr-xr-x 5 root root 0 Sep 3 11:30 devices
dr-xr-xr-x 2 root root 0 Sep 3 11:30 freezer
dr-xr-xr-x 5 root root 0 Sep 3 11:30 memory
lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_cls -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Sep 3 11:30 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_prio -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Sep 3 11:30 perf_event
dr-xr-xr-x 5 root root 0 Sep 3 11:30 pids
dr-xr-xr-x 2 root root 0 Sep 3 11:30 rdma
dr-xr-xr-x 5 root root 0 Sep 3 11:30 systemd
dr-xr-xr-x 5 root root 0 Sep 3 11:30 unified
root@instance-3:~# ls -la /sys/fs/cgroup/unified/
total 0
dr-xr-xr-x 5 root root 0 Sep 3 11:37 .
drwxr-xr-x 14 root root 360 Sep 3 11:30 ..
-r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.controllers
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.depth
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Sep 3 11:30 cgroup.procs
-r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.stat
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.threads
drwxr-xr-x 2 root root 0 Sep 3 11:30 init.scope
drwxr-xr-x 52 root root 0 Sep 3 11:30 system.slice
drwxr-xr-x 3 root root 0 Sep 3 11:30 user.slice
I encourage you to completely reinstall docker from scratch (or recreate the instance and install docker again). That should help.
Let me share with you my docker-ce installation steps:
$ sudo apt update
$ sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
$ sudo apt update
$ apt-cache policy docker-ce
$ sudo apt install docker-ce=5:18.09.1~3-0~debian-buster
I have also seen a workaround in the Kubelet: mountpoint for cpu not found issue answer, but, as you noted, even root doesn't have permission to apply it:
mkdir /sys/fs/cgroup/cpu,cpuacct
mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct
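Worth checking as well: your listing looks like a cgroup-v2-only (unified) mount, which kubelet in that version range cannot use. A sketch of how to verify, and of the commonly suggested systemd boot switch (an assumption on my side, so test it before relying on it):
$ stat -fc %T /sys/fs/cgroup
# "cgroup2fs" means a v2-only host; "tmpfs" means the classic v1 layout shown above
# to boot back into the legacy v1 hierarchy, add a kernel parameter and reboot:
$ sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
$ sudo update-grub && sudo reboot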

Related

How to view logs of container before restart

I have a container which was restarted 14 hours ago. The container has been running for 7 weeks. I want to inspect the container logs during a certain interval. When I run the command below, I see there is no output:
docker container logs pg-connect --until 168h --since 288h
When I run the commands below, I only see logs since the container was restarted:
docker logs pg-connect
Any idea how to retrieve older logs for the container?
More info, if it helps:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f08fb6fb0fb kosta709/alpine-plus:0.0.2 "/connectors-restart…" 7 weeks ago Up 14 hours connectors-monitor
7e919a253a29 debezium/connect:1.2.3.Final "/docker-entrypoint.…" 7 weeks ago Up 14 hours pg-connect
>
>
> docker logs 7e919a253a29 -n 2
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
> docker logs 7e919a253a29 |head
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
at java.base/java.lang.Thread.run(Thread.java:834)
2022-08-24 16:13:06,567 ERROR || WorkerSourceTask{id=session-0} failed to send record to barclays.public.session: [org.apache.kafka.connect.runtime.WorkerSourceTask]
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
>
> ls -lart /var/lib/docker/containers/7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1/
total 90720
drwx------ 2 root root 6 Jul 1 10:39 checkpoints
drwx--x--- 2 root root 6 Jul 1 10:39 mounts
drwx--x--- 4 root root 150 Jul 1 10:40 ..
-rw-r----- 1 root root 10000230 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.9
-rw-r----- 1 root root 10000163 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.8
-rw-r----- 1 root root 10000054 Aug 24 16:16 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.7
-rw-r----- 1 root root 10000147 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.6
-rw-r----- 1 root root 10000123 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.5
-rw-r----- 1 root root 10000019 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.4
-rw-r----- 1 root root 10000159 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.3
-rw-r----- 1 root root 10000045 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.2
-rw-r--r-- 1 root root 199 Aug 25 16:30 hosts
-rw-r--r-- 1 root root 68 Aug 25 16:30 resolv.conf
-rw-r--r-- 1 root root 25 Aug 25 16:30 hostname
-rw------- 1 root root 7205 Aug 25 16:30 config.v2.json
-rw-r--r-- 1 root root 1559 Aug 25 16:30 hostconfig.json
-rw-r----- 1 root root 10000085 Aug 25 16:31 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.1
drwx--x--- 4 root root 4096 Aug 25 16:31 .
-rw-r----- 1 root root 2843232 Aug 26 06:38 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log
As stated by [the official guide][1]:
"The docker logs command batch-retrieves logs present at the time of execution."
To solve this issue you should instrument the container software to log its output to a persistent (rotated if you want) log file.
[1]: https://docs.docker.com/engine/reference/commandline/logs/
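If what you need is the history that is already on disk, the rotated json-file logs from your listing can be read directly — a sketch assuming root access and jq installed, since docker logs apparently isn't returning them in your case:
$ cd /var/lib/docker/containers/7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1
$ sudo jq -r '.log' 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.9 | less
# each line in those files is a JSON object like {"log":"...","stream":"stdout","time":"..."}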

cloud-init-output.log increases its size very quickly on RHEL 8

I use a Linux machine on an AWS EC2 instance with Red Hat Enterprise Linux 8.6, and my cloud-init-output.log is growing very quickly, causing my app logs to stop writing within one to two days even though I have 20GB of storage.
user$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 195M 1.7G 11% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/xvda2 20G 17G 3.5G 83% /
tmpfs 373M 0 373M 0% /run/user/1000
user$ ls -ltr /var/log
total 10499168
drwxr-x---. 2 chrony chrony 6 Jun 15 2021 chrony
drwxr-xr-x. 2 root root 6 Apr 5 18:43 qemu-ga
drwx------. 2 root root 6 Apr 8 04:50 insights-client
drwx------. 2 root root 6 May 3 08:58 private
-rw-rw----. 1 root utmp 0 May 3 08:58 btmp
-rw-------. 1 root root 0 May 3 08:59 maillog
-rw-------. 1 root root 0 May 3 08:59 spooler
drwxr-x---. 2 sssd sssd 73 Jun 3 11:15 sssd
drwx------. 2 root root 23 Jul 12 14:43 audit
drwxr-xr-x. 2 root root 23 Jul 12 14:44 tuned
-rw-r--r--. 1 root root 128263 Jul 12 14:44 cloud-init.log
drwxr-xr-x. 2 root root 43 Jul 12 14:44 rhsm
-rw-r--r--. 1 root root 806 Jul 12 14:44 kdump.log
-rw-r--r--. 1 root root 1017 Jul 12 14:45 choose_repo.log
drwxr-xr-x. 2 root root 67 Jul 12 14:47 amazon
-rw-r--r--. 1 root root 1560 Jul 14 05:04 hawkey.log
-rw-------. 1 root root 26318 Jul 14 06:39 secure
-rw-rw-r--. 1 root utmp 4224 Jul 14 06:58 wtmp
-rw-rw-r--. 1 root utmp 292292 Jul 14 06:58 lastlog
-rw-r--r--. 1 root root 10752 Jul 14 07:00 dnf.rpm.log
-rw-r--r--. 1 root root 48816 Jul 14 07:00 dnf.librepo.log
-rw-r--r--. 1 root root 97219 Jul 14 07:00 dnf.log
-rw-------. 1 root root 12402 Jul 14 07:01 cron
-rw-------. 1 root root 2160833934 Jul 14 07:02 messages
-rw-r-----. 1 root adm 5257112056 Jul 14 07:03 cloud-init-output.log
I changed the logging level from the default DEBUG to ERROR in /etc/cloud/cloud.cfg.d, but it didn't help. /var/log/messages is also filling up fast.
Is this log file even supposed to keep filling up after the EC2 instance is up?
Is there something I can do to stop the size increase?
I also tried to run logrotate manually with logrotate --force /etc/logrotate.d/ but it didn't do much.
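As a stopgap, a size-based logrotate rule scoped to just that file might help — a sketch, on the assumption that cloud-init-output.log has no rotation rule of its own on RHEL 8; copytruncate is used because cloud-init keeps the file open via a redirect:
/var/log/cloud-init-output.log {
    size 100M
    rotate 4
    compress
    missingok
    copytruncate
}
Save it as /etc/logrotate.d/cloud-init-output and test it with:
$ sudo logrotate --force /etc/logrotate.d/cloud-init-output
Note that logrotate --force against the whole /etc/logrotate.d/ directory only helps if some rule in there actually matches the runaway file.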

Amazon-ssm-agent unrecognized service (just installed it via Docker)

I am trying to figure out why I cannot start and stop the amazon-ssm-agent service manually in a Kali Linux Docker image running on an Ubuntu 20.04.1 LTS host. Per their instructions, I have obtained the .deb file and installed it with dpkg -i. Although I can interact with it via amazon-ssm-agent -h, register it just fine, etc., I cannot restart the service, which sometimes fixes the Connection Lost issue after registering.
As you can see below, I am using wget to get the .deb file, and installing it:
➜ ~ wget https://s3.us-east-1.amazonaws.com/amazon-ssm-us-east-1/latest/debian_amd64/amazon-ssm-agent.deb
--2020-12-27 22:21:32-- https://s3.us-east-1.amazonaws.com/amazon-ssm-us-east-1/latest/debian_amd64/amazon-ssm-agent.deb
Resolving s3.us-east-1.amazonaws.com (s3.us-east-1.amazonaws.com)... 52.217.109.126
Connecting to s3.us-east-1.amazonaws.com (s3.us-east-1.amazonaws.com)|52.217.109.126|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 41537900 (40M) [binary/octet-stream]
Saving to: 'amazon-ssm-agent.deb'
amazon-ssm-agent.deb 100%[===================>] 39.61M 105MB/s in 0.4s
2020-12-27 22:21:33 (105 MB/s) - 'amazon-ssm-agent.deb' saved [41537900/41537900]
➜ ~ dpkg -i amazon-ssm-agent.deb
Selecting previously unselected package amazon-ssm-agent.
(Reading database ... 231292 files and directories currently installed.)
Preparing to unpack amazon-ssm-agent.deb ...
Preparing for install
Unpacking amazon-ssm-agent (3.0.431.0-1) ...
Setting up amazon-ssm-agent (3.0.431.0-1) ...
Starting agent
➜ ~ service amazon-ssm-agent status
amazon-ssm-agent: unrecognized service
➜ ~
I also cannot use systemctl because of the following error:
➜ ~ systemctl status amazon-ssm-agent
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
➜ ~
I tried looking in /etc/init.d as well, but no luck:
➜ ~ ls /etc/init.d -l
total 240
-rwxr-xr-x 1 root root 2489 Aug 8 07:47 apache-htcacheclean
-rwxr-xr-x 1 root root 8181 Aug 8 07:47 apache2
-rwxr-xr-x 1 root root 1614 Jul 14 2019 atftpd
-rwxr-xr-x 1 root root 2401 May 26 2020 avahi-daemon
-rwxr-xr-x 1 root root 1175 Apr 17 2020 binfmt-support
-rwxr-xr-x 1 root root 2948 Sep 16 07:49 bluetooth
-rwxr-xr-x 1 root root 1232 Dec 1 01:02 console-setup.sh
-rwxr-xr-x 1 root root 937 Sep 3 22:30 cryptdisks
-rwxr-xr-x 1 root root 896 Sep 3 22:30 cryptdisks-early
-rwxr-xr-x 1 root root 3152 Jul 2 13:19 dbus
-rwxr-xr-x 1 root root 1408 Aug 4 23:00 dns2tcp
-rwxr-xr-x 1 root root 7159 May 23 2020 exim4
-rwxr-xr-x 1 root root 3708 Nov 25 21:07 hwclock.sh
-rwxr-xr-x 1 root root 3615 Sep 5 2019 inetsim
-rwxr-xr-x 1 root root 4113 Sep 26 16:48 iodined
-rwxr-xr-x 1 root root 1479 Oct 9 2016 keyboard-setup.sh
-rwxr-xr-x 1 root root 2044 Apr 18 2020 kmod
-rwxr-xr-x 1 root root 5966 Nov 22 15:42 mariadb
-rwxr-xr-x 1 root root 2882 Jul 26 2019 miredo
-rwxr-xr-x 1 root root 4486 Sep 21 14:45 networking
-rwxr-xr-x 1 root root 5658 Jul 26 12:02 nfs-common
-rwxr-xr-x 1 root root 4579 May 28 2020 nginx
-rwxr-xr-x 1 root root 1934 Jul 7 05:55 nmbd
-rwxr-xr-x 1 root root 1494 Sep 23 11:46 ntp
-rwxr-xr-x 1 root root 9138 Oct 28 18:37 openvpn
-rwxr-xr-x 1 root root 3720 Jun 14 2020 pcscd
-rwxr-xr-x 1 root root 1490 Nov 15 2019 postgresql
-rwxr-xr-x 1 root root 924 May 16 2020 procps
-rwxr-xr-x 1 root root 3699 Jul 22 2017 ptunnel
-rwxr-xr-x 1 root root 3836 Jan 2 2017 redsocks
-rwxr-xr-x 1 root root 1615 Aug 19 2018 rlinetd
-rwxr-xr-x 1 root root 2507 Jul 13 01:22 rpcbind
-rwxr-xr-x 1 root root 4417 Aug 26 20:23 rsync
-rwxr-xr-x 1 root root 2864 Oct 20 19:45 rsyslog
-rwxr-xr-x 1 root root 1661 Jun 5 2013 rwhod
-rwxr-xr-x 1 root root 2259 Jul 7 05:55 samba-ad-dc
-rwxr-xr-x 1 root root 1222 Apr 2 2017 screen-cleanup
-rwxr-xr-x 1 root root 3088 Oct 10 2019 smartmontools
-rwxr-xr-x 1 root root 2061 Jul 7 05:55 smbd
-rwxr-xr-x 1 root root 1175 Sep 24 23:10 snmpd
-rwxr-xr-x 1 root root 4056 Dec 2 10:32 ssh
-rwxr-xr-x 1 root root 4440 Sep 5 2019 sslh
-rwxr-xr-x 1 root root 5730 Sep 13 10:43 stunnel4
-rwxr-xr-x 1 root root 1030 Dec 2 03:10 sudo
-rwxr-xr-x 1 root root 1581 Dec 16 08:36 sysstat
-rwxr-xr-x 1 root root 6871 Dec 3 22:53 udev
-rwxr-xr-x 1 root root 2757 Oct 9 08:13 x11-common
➜ ~
However, you can see that the amazon-ssm-agent binary itself runs just fine (the errors are only about the EC2 metadata lookup):
➜ ~ amazon-ssm-agent
Error occurred fetching the seelog config file path: open /etc/amazon/ssm/seelog.xml: no such file or directory
Initializing new seelog logger
New Seelog Logger Creation Complete
2020-12-27 22:24:08 ERROR error fetching the instanceID, Failed to fetch instance ID. Data from vault is empty. EC2MetadataError: failed to make EC2Metadata request
status code: 404, request id:
caused by: not found
2020-12-27 22:24:08 ERROR error occurred when starting amazon-ssm-agent: error fetching the instanceID, Failed to fetch instance ID. Data from vault is empty. EC2MetadataError: failed to make EC2Metadata request
status code: 404, request id:
caused by: not found
➜ ~
The only reason I need to restart the service is because sometimes I get a "Connection Lost" in the managed instance's ping status after registering. Usually restarting the service seems to do the trick for me.
I'm able to restart the service successfully when just using the host (Ubuntu 20.04) and even when the host is running Kali Linux as well, but not when it's a docker container, which doesn't make any sense to me because everything is functional with the exception of being able to start/stop the service manually.
I was able to get this running by cloning this repository: https://github.com/gdraheim/docker-systemctl-replacement
After cloning, I ran the following:
/root/docker-systemctl-replacement/files/docker/systemctl.py restart amazon-ssm-agent
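If you would rather not pull in a systemctl replacement, a cruder option (my own workaround sketch, not from the SSM docs) is to run the agent directly, since the container has no init system for service/systemctl to talk to:
$ nohup amazon-ssm-agent > /var/log/amazon-ssm-agent.log 2>&1 &
# "restart" it by killing the process and launching it again:
$ pkill amazon-ssm-agent && nohup amazon-ssm-agent > /var/log/amazon-ssm-agent.log 2>&1 &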

AWS CLI is installed but doesn't work, CentOS 8

Hi, I have installed awscli on CentOS 8 through pip3. I can see the files in my /root/.local/bin directory:
-rwxr-xr-x 1 root root 817 Mar 4 18:50 aws
-rwxr-xr-x 1 root root 204 Mar 4 18:49 aws_bash_completer
-rwxr-xr-x 1 root root 1432 Mar 4 18:49 aws.cmd
-rwxr-xr-x 1 root root 1138 Mar 4 18:50 aws_completer
-rwxr-xr-x 1 root root 1807 Mar 4 18:49 aws_zsh_completer.sh
-rwxr-xr-x 1 root root 218 Mar 4 18:49 pyrsa-decrypt
-rwxr-xr-x 1 root root 234 Mar 4 18:49 pyrsa-decrypt-bigfile
-rwxr-xr-x 1 root root 218 Mar 4 18:49 pyrsa-encrypt
-rwxr-xr-x 1 root root 234 Mar 4 18:49 pyrsa-encrypt-bigfile
-rwxr-xr-x 1 root root 216 Mar 4 18:49 pyrsa-keygen
-rwxr-xr-x 1 root root 239 Mar 4 18:49 pyrsa-priv2pub
-rwxr-xr-x 1 root root 212 Mar 4 18:49 pyrsa-sign
-rwxr-xr-x 1 root root 216 Mar 4 18:49 pyrsa-verify
but when I try to establish the credentials with "aws configure" or simply view the version with "aws --version" I get:
bash: aws: command not found
when I run "pip3 uninstall awscli" and before uninstalling it shows me a list of files, some of them I can not find when I look for them like the ones on the site-package folder, ex:
Uninstalling awscli-1.18.14:
/root/.local/bin/aws
/root/.local/bin/aws.cmd
/root/.local/bin/aws_bash_completer
/root/.local/bin/aws_completer
/root/.local/bin/aws_zsh_completer.sh
/root/.local/lib/python3.6/site-packages/awscli-1.18.14.dist-info/DESCRIPTION.rst
/root/.local/lib/python3.6/site-packages/awscli-1.18.14.dist-info/INSTALLER
/root/.local/lib/python3.6/site-packages/awscli-1.18.14.dist-info/METADATA
/root/.local/lib/python3.6/site-packages/awscli-1.18.14.dist-info/RECORD
What should I do?
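A likely explanation (an assumption from the paths above, not something I can verify on your box) is that pip3 installed everything under /root/.local/bin, which is not on root's PATH in a stock CentOS 8 shell. A quick sketch to confirm and fix:
$ echo "$PATH" | tr ':' '\n' | grep '\.local/bin'
# no output means the directory is missing from PATH; add it for this shell:
$ export PATH="$HOME/.local/bin:$PATH"
$ aws --version
# and persist it for future shells:
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc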

`cd` to a folder in bash script not working: No such folder [duplicate]

This question already has answers here:
Why can't I change directories using "cd" in a script?
(33 answers)
'\r': command not found [duplicate]
(3 answers)
Closed 4 years ago.
This is my entire script; it exists simply to avoid having to type the commands again and again:
cd api
rails s -p 3001 -b 0.0.0.0
cd ..
When I am in the directory of the script and run cd api, it works just fine. However, when I run the script via ./start_server, it does not work. Here is the output of ls -al:
mendel#DESKTOP-LIKG5E5:/mnt/c/Projects/chaverim-update$ ls -al
total 8
drwxrwxrwx 0 root root 512 Apr 13 12:32 .
drwxrwxrwx 0 root root 512 Apr 5 12:40 ..
drwxrwxrwx 0 root root 512 Apr 13 12:09 api
-rwxrwxrwx 1 root root 1237 Apr 5 12:40 boxfile.yml
drwxrwxrwx 0 root root 512 Apr 8 16:54 .bundle
drwxrwxrwx 0 root root 512 Apr 8 16:54 client
drwxrwxrwx 0 root root 512 Apr 13 12:09 .git
-rwxrwxrwx 1 root root 11 Apr 5 12:40 .gitignore
-rwxrwxrwx 1 root root 1097 Apr 5 12:40 LICENSE
drwxrwxrwx 0 root root 512 Apr 5 12:40 nginx
-rwxrwxrwx 1 root root 82 Apr 5 12:40 Procfile
-rwxrwxrwx 1 root root 67 Apr 5 12:40 README.md
-rwxrwxrwx 1 root root 188 Apr 5 12:40 run_tests
-rwxrwxrwx 1 root root 28 Apr 5 12:40 start_client
-rwxrwxrwx 1 root root 41 Apr 13 12:52 start_server
drwxrwxrwx 0 root root 512 Apr 8 16:54 vendor
As you can see, there is a folder called api near the top, and the start_server script is set to have execute permissions.
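Given the duplicate links at the top ('\r': command not found) and the fact that the project lives under /mnt/c (Windows files seen from WSL), a plausible culprit is CRLF line endings: bash reads the line as cd api\r and looks for a folder literally named "api\r". A sketch of how to check and strip them, assuming sed is available:
$ cat -A start_server
# a trailing ^M$ on each line means CRLF endings
$ sed -i 's/\r$//' start_server
$ ./start_server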
