I need to somehow install the service package on Alpine Linux so that my tests run correctly. The tests are written using the testinfra module.
My test works fine on Ubuntu and CentOS but doesn't work on Alpine.
import testinfra
def test_nginx_running_and_enabled(host):
    nginx = host.service('nginx')
    assert nginx.is_running
    assert nginx.is_enabled
I get an error.
apk add --no-cache openrc
rc-service nginx status
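For reference, a minimal sketch of an Alpine test target that would give testinfra something to query; the base tag, the rc-update step, and the container name used in the pytest invocation are assumptions, not part of the original question:
# Hypothetical test-target Dockerfile (names and tags are assumptions)
FROM alpine:3.15
RUN apk add --no-cache openrc nginx
# register nginx with the default runlevel so host.service('nginx').is_enabled can pass
RUN rc-update add nginx default
With such a container running, the test could connect through testinfra's Docker backend, e.g. pytest --hosts="docker://my-alpine-target" (the container name is an assumption).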
This is the link to my project: HERE
I am trying to containerize it, and I have written a Dockerfile for it, shown below:
FROM python:3-alpine3.15
WORKDIR /app
COPY . /app
RUN apk update
RUN apk add python3
RUN apk add python3-tkinter
#RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 3000
CMD python3 ./sortingAlgs.py
It doesn't run; the error in the Docker Desktop terminal is "_tkinter.TclError: no display name and no $DISPLAY environment variable".
After a lot of struggle I found that the matplotlib module can help, and I tried including
import matplotlib
matplotlib.use('Agg')
at the top of my source file ("sortingAlgs.py"), but no luck.
I am unable to make use of requirements.txt since I am new to Docker; the build fails at "pip install -r requirements.txt" with errors like "module not found".
I also only have experience with Debian-based Linux (Ubuntu), so working with Alpine is a problem as well.
I want to host this as a containerized application and push it to Docker Hub.
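For context, a hedged sketch of what a Dockerfile on a Debian-based (slim) image could look like, assuming the script only saves figures through matplotlib's Agg backend instead of opening a Tk window; apart from the port and the file names taken from the question, everything here is an assumption:
# Sketch only: a slim (Debian-based) base avoids the Alpine/tkinter issues
FROM python:3.10-slim
WORKDIR /app
COPY . /app
# requirements.txt is assumed to list matplotlib and the other dependencies
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 3000
CMD ["python", "./sortingAlgs.py"]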
Background
I am using Docker for a school project. Specifically, I pulled an Ubuntu image; here is the system config:
I then logged into the Docker container (Ubuntu) and set up Elasticsearch. When I try to run
./bin/elasticsearch
I get the following error inside the docker container's terminal
/lib64/ld-linux-x86-64.so.2: No such file or directory
I have two main confusions:
What does that error even mean?
How do I solve it?
If you are running this on an M1 MacBook, it's possible that you are running a native ARM image of Ubuntu instead of the emulated x86 image. If the Elasticsearch distribution you are trying to install is built for x86_64, it attempts to link to the x86-64-native ld.so (/lib64/ld-linux-x86-64.so.2), which of course isn't present on other platforms.
Either install the package for the arm platform specifically if they provide one, or - more likely - run docker explicitly as the emulated x86_64 platform:
docker run --platform linux/x86_64 <image>
For docker-compose, add platform: linux/x86_64 according to the docs
services:
  my-app:
    platform: linux/x86_64
No idea what you are running in your container, but for me the reason was simply that a package (Prisma, https://github.com/prisma/prisma/issues/8478#) could not find the OpenSSL libraries, and installing them on the alpine image failed even with openssl installed manually.
It was fixed by switching to the slim image and installing OpenSSL with apt-get update && apt-get -y install openssl. I highly recommend not changing your platform: with my M1 the build time increased by 200s using linux/x86_64.
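In Dockerfile terms, that fix could look roughly like the sketch below; only the switch to a slim base and the openssl install come from the answer above, while the Node version and the remaining build steps are assumptions:
# Sketch: Debian-based slim image with OpenSSL installed (base tag is an assumption)
FROM node:16-slim
RUN apt-get update && apt-get install -y openssl && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]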
Completing @misnomer's answer: I could not even build the image.
If that is the case, just add FROM --platform=linux/x86_64 ..., from this source. For example: FROM --platform=linux/x86_64 python:slim ...
I have started learning Django and Python.
I have developed a service using Visual Studio Code on my Windows machine.
It works as expected on the Windows machine.
Now I want to deploy the same service on an Ubuntu server, but I am failing because there is no 'bin' folder inside the virtual environment.
How can I do it? I know I am missing something basic here.
Could you please help/ point me where I can read about it?
Let's suppose you are deploying your application at path /opt/.
Follow the below steps to deploy:
To get started we need to install some packages, run the following commands:
sudo apt-get update
sudo apt-get install python3-pip python3-dev libmysqlclient-dev ufw virtualenv
The commands above install the basic Python development packages on the Ubuntu server.
Create a virtualenv at a path of your choice (for example /opt/.env) with the following command:
virtualenv .env
Activate the environment: source .env/bin/activate
Install all requirement packages in virtual-env that you require to run your Django application.
Test your service manually by running python manage.py runserver (this verifies that all dependencies are installed).
After that, you can install Gunicorn as the WSGI server and Supervisor as a process monitoring tool for your service. Please refer to: http://rahmonov.me/posts/run-a-django-app-with-nginx-gunicorn-and-supervisor/
Nginx can run on the server as the web server, routing requests to your application's port/socket file.
Those are the high-level steps to deploy the Django application; a condensed command sketch follows below.
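A condensed sketch of the steps above, assuming the project lives at /opt/myproject, the virtualenv at /opt/.env, and Gunicorn serving on port 8000 (project name, paths and port are assumptions):
# install the system packages
sudo apt-get update
sudo apt-get install -y python3-pip python3-dev libmysqlclient-dev ufw virtualenv
# create and activate the virtualenv
virtualenv /opt/.env
source /opt/.env/bin/activate
# install the application's requirements and sanity-check it
pip install -r /opt/myproject/requirements.txt
cd /opt/myproject && python manage.py runserver   # quick manual test only
# serve the app with Gunicorn (module name "myproject" is an assumption)
pip install gunicorn
gunicorn myproject.wsgi:application --bind 127.0.0.1:8000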
I'm making a web service in NodeJs that needs to support a specific xml request. So I'm using libxmljs to parse xml and validate it against an xsd.
On my Windows machine everything works well, so when doing this:
isValid = xml.validate(xsd)
isValid will be set to a boolean and xml will have entries in its validationErrors property. Everything is fine until I run it in a Docker container based on node:10.15.2-alpine.
As long as the validation passes, everything is fine, but when there are validation errors, the entire docker container crashes.
I could not find an answer to this when googling so I will provide the answer myself :-)
Change your Dockerfile to use FROM node:10.15.2-slim instead of FROM node:10.15.2-alpine.
Yes, it uses more space, but the alpine edition is apparently not compatible with some of the prebuilt native libraries that libxmljs uses.
I faced the same problem and was able to resolve it on some of the Alpine distributions by installing python, g++ and make.
apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && apk add --update --no-cache g++ && apk add --update --no-cache make
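In a Dockerfile this workaround sits before the dependency install, roughly as in this sketch; the base tag, the npm steps and the entry point server.js are assumptions:
FROM node:10.15.2-alpine
# build tools so native addons such as libxmljs can be compiled against musl
RUN apk add --update --no-cache python3 g++ make && ln -sf python3 /usr/bin/python
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]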
I am running Jenkins in Docker, using the official image from Docker Hub.
I created a job which runs my own shell script; however, I see that some binaries
are missing in the container, e.g. the file command.
The Docker Hub page mentions that one can install additional binaries via Ubuntu's package manager, but I don't know which package to install to get, for example, the file command working.
Unless Ubuntu did something different from the base Debian environment, file is included in the file package.
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file
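If you build your own image on top of the official one, the install could look roughly like this sketch; the base tag is an assumption, and switching back to the jenkins user matters because the official image does not run as root:
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y file && rm -rf /var/lib/apt/lists/*
USER jenkins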