AZ CLI / kubectl apply error - the path does not exist - Azure

I'm working through a tutorial on Kubernetes on Azure (here) and everything has worked perfectly until I run kubectl apply to configure the cluster:
bash-4.4# kubectl apply -f azure-vote-all-in-one-redis.yaml
error: the path "azure-vote-all-in-one-redis.yaml" does not exist
I found this question, which would be right on point if I were trying to use a URL for the file.
I've tried:
kubectl apply -f azure-vote-all-in-one-redis.yaml
and
kubectl apply -f /Users/bill/Documents/GitHub/azure-voting-app-redis/azure-vote-all-in-one-redis.yaml
The kubectl command is run from the AZ CLI (I'm using VS Code, with everything configured for Azure and Docker; no problems seeing anything).
If I ls from the AZ CLI I get:
bash-4.4# ls
azure-cli dev home media proc run srv tmp var
bin etc lib mnt root sbin sys usr
bash-4.4#
I've also looked through the docs for the AZ CLI and kubectl, and every indication is that it should simply work. I also tried kubectl from the console, which obviously didn't work...

As far as I know, with most commands like this, when you pass a file as an argument you need to either be in the directory that contains the file or use the file's absolute path.
So you can use two ways to execute the command:
First, go into the directory that contains the file azure-vote-all-in-one-redis.yaml and then execute kubectl apply -f azure-vote-all-in-one-redis.yaml.
Second, use an absolute path. How do you get the path? Go into the directory and execute the command pwd, or search for the file with find / -name azure-vote-all-in-one-redis.yaml (whereis only looks up command binaries, so it will not find an arbitrary YAML file).
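For example (a quick sketch, assuming the repository was cloned to the path shown in the question):
# Option 1: run from the directory that contains the manifest
cd /Users/bill/Documents/GitHub/azure-voting-app-redis
kubectl apply -f azure-vote-all-in-one-redis.yaml
# Option 2: pass an absolute path; pwd prints the current directory
kubectl apply -f "$(pwd)/azure-vote-all-in-one-redis.yaml"
Note that if kubectl runs inside the azure-cli container (the bash-4.4# prompt and the ls output above suggest it does), the file must exist inside that container's filesystem, not just on the host Mac.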
Hope it will help you!


error: stat C:\Users\icnpuser\.kube\config: no such file or directory

I am getting the following error:
error: stat C:\Users\icnpuser\.kube\config: no such file or directory
when I execute the commands below:
kubectl --kubeconfig=C:\\Users\\icnpuser\\.kube\\config apply -f ./Orchestration/dev/deployment.yaml
kubectl --kubeconfig=C:\\Users\\icnpuser\\.kube\\config apply -f ./Orchestration/dev/service.yaml
The above commands are written in a deploy.sh file in the Azure repo.
The error occurs because these commands look for the kubeconfig in the Azure repo and not on my Windows VM.
My Windows VM, where I have configured kubectl, already has the kubeconfig, and I am able to list the contexts through the kubectl command.
PS: My node pool is running on two Linux VMs.
Also, I am triggering this deploy.sh file from the pipeline.
Please help me with how to execute this deployment.yaml and service.yaml to create pods in my AKS cluster.
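A minimal sketch of what deploy.sh could do instead, assuming the pipeline agent runs on the same VM where kubectl is already configured (the KUBECONFIG variable and the $HOME/.kube/config default location are standard kubectl behavior; the exact path on your agent is an assumption):
#!/bin/bash
# Use the kubeconfig that already exists on the agent instead of a
# hard-coded Windows path; kubectl honors the KUBECONFIG variable.
export KUBECONFIG="$HOME/.kube/config"
kubectl apply -f ./Orchestration/dev/deployment.yaml
kubectl apply -f ./Orchestration/dev/service.yaml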

Unable to start TeamCity Build agent on Docker

I'm trying to create a TeamCity build agent on Docker. I pulled the official image and tried to start it with the default configuration using the command below:
docker run -d --name teamcity-agent -e SERVER_URL="http://teamcity-server-instance:80" -v /opt/docker/teamCity/teamcity_agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent
but it exits with code 1 every time I run it.
Can anyone suggest a solution to this?
Thank you in advance.
Did you try changing the mode? sudo chmod 666 /opt/docker
This gives the file owner, the group members, and others read and write permission on that path (note that for a directory you usually also need the execute bit in order to traverse it, e.g. chmod 777).
You can check the permissions you have via: ls -la /opt
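A quick sketch of how that could look for the directory that is actually mounted into the agent (the recursive chmod 777 is an assumption, used here only because the container's user is unknown):
# check who owns the host directory that is mounted into the container
ls -la /opt/docker/teamCity/teamcity_agent
# make it writable for the container user
sudo chmod -R 777 /opt/docker/teamCity/teamcity_agent/conf
# then start the agent again with the same command as above
docker run -d --name teamcity-agent -e SERVER_URL="http://teamcity-server-instance:80" -v /opt/docker/teamCity/teamcity_agent/conf:/data/teamcity_agent/conf jetbrains/teamcity-agent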

Are Azure Container Instances made for running a simple command with simple output?

I am trying to make use of Azure Container Instances and I need some explanation about the service itself.
I want to use ACI to launch a container that runs a command, prints the command's output, and then stops.
Is ACI the right service for that kind of thing?
The Dockerfile looks like this:
FROM alpine
RUN apk add ffmpeg
CMD ffprobe -show_streams -show_format -loglevel warning -v quiet -print_format json /input.video
The docker run command to make it work looks like this:
docker run --name ffprobe-docker -i -v /path/test.ts:/input.video --rm 72e84b2825af
The issue?
I am not able to launch it on Azure the way I can make it work on my machine.
What have I done?
I created a private registry where I uploaded my image.
I ran the az container create command, which created the resource.
Now I don't know what to do next in order to make it work as expected,
because the container is terminated and az container exec --exec-command does not show anything on the terminal once the command has ended.
For ACI, you can create it from your own Docker image in ACR or other registries. You can also run commands in it. But pay attention: you cannot run the docker command in it, because you cannot nest containers inside it. It cannot be a Docker server; it can only be a container.
You can use the CLI command az container exec --exec-command for this.
The command passed as the --exec-command parameter should be a shell command that can run in your Docker image.
I think the biggest advantage of ACI is that it is the fastest and simplest option, without having to manage any virtual machines and without having to adopt a higher-level service.
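For a one-shot command like the ffprobe above, the flow could look roughly like this (the resource group and container names are made up; az container create and az container logs are standard Azure CLI commands; note that the local -v bind mount from docker run has no direct ACI equivalent, so the input file would have to come from something like an Azure file share):
# create the container from the private registry image and let it run once
az container create --resource-group my-rg --name ffprobe-aci --image myregistry.azurecr.io/ffprobe-docker:latest --restart-policy Never
# after the container has run to completion, read the command's stdout
az container logs --resource-group my-rg --name ffprobe-aci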
Hope this will help you. If you have any more questions, please send me a message.

ElasticBeanstalk - Adding ec2-user to another group

I have a cron job that needs to be run under ec2-user on my EC2 instance and it needs to be able to write to the standard log files for my web app. However, the log files are owned by webapp (as per normal).
I've successfully changed the permissions on the log files so that they are accessible by both the owner and the group webapp:webapp. But where I'm running into trouble is when I try to add the ec2-user to the webapp group.
I can do it fine in SSH with sudo usermod -a -G webapp ec2-user but when I try to add this command via EB container-commands, I get an error saying that you must have a tty to run sudo. Running the command without sudo gives me /bin/sh: usermod: command not found.
Does anybody know of any other way to add ec2-user to the webapp group via the Elastic Beanstalk deployment config?
Not sure about the issue with the sudoers file, but generally a cleaner way to add a user to a group (than manually executing a command) is to use the users section of the .ebextensions file. For example, in .ebextensions/something.config:
users:
  ec2-user:
    groups:
      - webapp
You should not use sudo; the deploy script is run by root.
Also, this is a server command, so do it in the commands section instead of the container_commands section.
commands:
  01_set_user_role:
    command: "usermod -a -G webapp ec2-user"
You need to run this command from a container_command before executing any commands with sudo:
echo Defaults:root \!requiretty >> /etc/sudoers
In context (in .ebextensions/yourconf.config)
container_commands:
  001-enableroot:
    # disables the error related to needing a tty for sudo, allows running without a cli
    command: echo Defaults:root \!requiretty >> /etc/sudoers

Mount data volume to Docker with read & write permission

I want to mount a host data volume into Docker. The container should have read and write permission to it; meanwhile, any changes on the data volume should not affect the data on the host.
I can imagine a solution that mounts several data volumes to a single folder, one read-only and another read-write. But only the second '-v' works in my command:
docker run -ti --name build_cent1 -v /codebase/:/code:ro -v /temp:/code:rw centos6:1.0 bash
only this second '-v' works in my command,
That might be because both -v options attempt to mount host folders on the same container destination folder /code.
-v /codebase/:/code:ro
^^^^^
-v /temp:/code:rw
^^^^^
You could mount those host folders in two separate folders within /code.
As in:
-v /codebase/:/code/base:ro -v /temp:/code/temp:rw
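Applied to the original command from the question, that would be:
docker run -ti --name build_cent1 -v /codebase/:/code/base:ro -v /temp:/code/temp:rw centos6:1.0 bash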
Normally in this case I think you ADD the folder to the Docker image, so that any container running it will have it in its (writeable) filesystem, but writes will go to a different layer.
You need to write a Dockerfile in the folder above the one you wish to use, which should look something like this:
FROM my/image
ADD codebase /codebase
Then you build the image using docker build -t some-name <path>. These steps could be added to the build scripts of your app (maybe you will find some plugin to help there). Then you can docker run some-name.
The downside is that there is one copy to do and the image creation, but should you launch many containers they will share the same copy of the layer in read-only and write their own modifications to independent layers above.
Got an answer from nixun on GitHub:
You can simply use overlayfs to fix this:
mount -t overlay overlay \
-olowerdir=/codebase,upperdir=/temp,workdir=/workdir /codebase_new
docker run -ti --name build_cent1 -v /codebase_new:/code:rw centos6:1.0 bash
This solution has good flexibility. Creating an image with the shared folder would also be a solution, but it would not let you update the folder data easily.
This answer is not for Docker users, but it will help anyone who uses Lima to manage their containers.
I was stuck trying to solve this issue with limactl and lima nerdctl. I thought it was worth sharing the fix so that it may help anyone in the community who is using Lima instead of Docker.
By default, Lima mounts volumes as read-only. To make them writable by default, do the following:
Edit the file and set writable: true under the mounts section
$ vim ~/.lima/default/lima.yaml
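The relevant part of lima.yaml then looks roughly like this (a sketch; the exact mount entries depend on your setup):
mounts:
  - location: "~"
    writable: true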
Then restart Lima:
limactl list   # lists all running VMs
limactl stop default   # or the name of your machine
limactl start default  # or the name of your machine
You would still need to specify mount options exactly as with Docker:
lima nerdctl run -ti --name build_cent1 \
-v /codebase/:/code/base:ro \
-v /temp:/code/temp:rw \
centos6:1.0 bash
For more information about Lima, please check the Lima documentation.
