I would like to run a PHP app in Azure Web Apps. For this I would like to use my own container, because I have some problems with the current default.
Is the code or the Dockerfile public somewhere, so one can use it as a base?
EDIT: I would also like to file a potential bug, but I cannot find an issue tracker.
As Jason correctly pointed out, adding some more info on this topic for additional clarity:
In Azure App Service, you can have different flavors/versions of a web app, as follows:
1. Code + Windows – You select the application stack and deploy your code.
2. Code + Linux (Web App on Linux) – App Service on Linux provides pre-defined application stacks on Linux with support for languages such as .NET, PHP, Node.js and others. These are blessed images, predefined by the platform. Here you just deploy your code.
3. Docker Container + Linux (Web App for Containers) – custom image (the code is already part of the image and is not deployed separately); container images become containers at runtime.
4. Docker Container + Windows (Web App for Windows Containers) – custom image; container images become containers at runtime.
Yes, you can also use a custom Docker image to run your web app on an application stack that is not already defined in Azure.
(For comparison, Azure App Service provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS.)
You can run az webapp list-runtimes --linux to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container, configured via az webapp config container set.
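For example (a sketch; the app name, resource group and image below are placeholders of mine, not values from the thread):
# List the built-in Linux runtimes and versions
az webapp list-runtimes --linux
# Point an existing web app at a custom image
az webapp config container set \
  --name my-php-app \
  --resource-group my-rg \
  --docker-custom-image-name myregistry.azurecr.io/my-php:latest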
Kindly check out this documentation for more details:
https://learn.microsoft.com/azure/app-service/quickstart-custom-container?pivots=container-windows
https://learn.microsoft.com/azure/app-service/quickstart-custom-container?pivots=container-linux
All App Service base images can be found here:
https://github.com/Azure-App-Service/php (in your case)
Similarly, you can find the images for other languages:
https://github.com/Azure-App-Service/python
https://github.com/Azure-App-Service/ruby
https://github.com/Azure-App-Service
If there is any change to the Web App blessed images, you can typically see that change in this repository: https://github.com/Azure/app-service-quickstart-docker-images
And yes, that repository's issue tracker is a good place to file your bug report and feedback/suggestions:
https://github.com/Azure/app-service-quickstart-docker-images/issues
The answer from @AjayKumar-MSFT matches the question perfectly, so I upvoted and accepted it, but I want to share some additional information I found out.
I started to build my own containers based on the work by webdevops. I find it a very good base image, since it has important things like syslog and supervisor preconfigured and is easily extensible. It can be found in the php Docker repo or on Docker Hub. I used (and linked) the Apache variant, but I am considering switching to nginx. The documentation is precise but relatively thin for newcomers like me; still, it works really well.
To have WebSSH, one needs to enable SSH on port 2222 with the static credentials root:Docker!. I added this block to a derived Dockerfile to start SSH using supervisord:
# Enable SSH with the static credentials App Service expects
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
# sshd_config listens on port 2222; ssh.conf tells supervisord to run sshd
COPY docker/sshd_config /etc/ssh/
COPY docker/ssh.conf /opt/docker/etc/supervisor.d/
RUN mkdir -p /var/run/sshd
docker/sshd_config:
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
docker/ssh.conf:
[program:sshd]
command=/usr/sbin/sshd -D
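To sanity-check this locally before deploying (the image tag is a placeholder of mine), build the derived image and connect with the static credentials:
docker build -t my-php-image .
docker run -d -p 2222:2222 my-php-image
# password is "Docker!" as set via SSH_PASSWD above
ssh root@localhost -p 2222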
Next, I put my code in /app.
When the starting point is somewhere else, as it is with some frameworks, you can override an env var, for example:
ENV WEB_DOCUMENT_ROOT=/app/public
I've got a Python package running in a container.
Is it best practice to install it in /opt/myapp within the container?
Should the logs go in /var/opt/myapp?
Should the config files go in /etc/opt/myapp?
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
I notice Google Chrome was installed in /opt/google/chrome on my (host) system, but it didn't place any configs in /etc/opt/...
Is it best practice to install it in /opt/myapp within the container?
I place my apps in my container images in /app, so in the Dockerfile I do WORKDIR /app at the beginning, as in the sketch below.
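A minimal sketch of what I mean, assuming a Python app with a myapp.py entry point (both the base image and the entry point are assumptions of mine):
FROM python:3.11-slim
# all subsequent paths are relative to /app
WORKDIR /app
COPY . /app
CMD ["python", "myapp.py"]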
Should the logs go in /var/opt/myapp?
In the container world, the best practice is that your application logs go to stdout/stderr, not into files inside the container. Containers are ephemeral by design and should be treated that way: when a container is stopped and deleted, all of the data on its filesystem is gone.
In a local Docker development environment you can see the logs with docker logs. You can:
start a container named gettingstarted from the image docker/getting-started:
docker run --name gettingstarted -d -p 80:80 docker/getting-started
redirect the docker logs output to a local file on the Docker client (the machine from which you run the docker commands):
docker logs -f gettingstarted &> gettingstarted.log &
open http://localhost to generate some logs
read the log file in real time with tail, or with any text viewer:
tail -f gettingstarted.log
Should the config files go in /etc/opt/myapp?
Again, you can put the config files anywhere you want; I like to keep them together with my app, so in the /app directory. However, you should not modify the config files once the container is running. Instead, pass the config variables to the container as environment variables at startup with the -e flag. For example, to create a MYVAR variable with the value MYVALUE inside the container, start it this way:
docker run --name gettingstarted -d -p 80:80 -e MYVAR='MYVALUE' docker/getting-started
exec into the container to see the variable:
docker exec -it gettingstarted sh
/ # echo $MYVAR
MYVALUE
From here it is the responsibility of your containerized app to understand these variables and translate them into actual application configuration. Most programming languages can read env vars from code at runtime, but if this is not an option, you can write an entrypoint.sh script that updates the config files with the values supplied through the env vars. A good example of this is the postgresql entrypoint: https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
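A minimal sketch of such an entrypoint, assuming the app reads a config.ini (the file path and variable name are illustrative, not from the postgresql script):
#!/bin/sh
# entrypoint.sh: render the config file from env vars, then start the app
set -e
cat > /app/config.ini <<EOF
[database]
password = ${DATABASE_PASSWORD:-changeme}
EOF
exec "$@"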
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
As you can see, it is not recommended to write logs into the filesystem of the container; rather, have a solution that saves them outside of the container if you need them persisted.
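On a plain Docker host, one such solution could be a bind mount, so the log directory lives outside the container (the image name and paths here are placeholders):
docker run -d --name myapp -v "$(pwd)/logs:/var/log/myapp" myimage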
If you understand and follow this mindset, especially that containers are ephemeral, then it will be much easier for you to transition from local Docker development to production-ready Kubernetes infrastructure.
Docker is Linux, so almost all of your concerns relate to the best operating system in the world: Linux.
Installation folder
This will help you:
Where to install programs on Linux?
Where should I put software I compile myself?
and this: Linux File Hierarchy Structure
As a summary: in Linux you could use almost any folder for your apps, bearing in mind:
Don't use system folders: /bin /usr/bin /boot /proc /lib
Don't use filesystem folders: /media /mnt
Don't use /tmp, because its contents are deleted on each restart
As you researched, you could imitate Chrome and use /opt
You could create your own folder like /acme if there are several developers entering the machine, so you can tell them: "No matter the machine or the application, all the custom content of our company will be in /acme." This also helps if you are security-paranoid, because nobody can simply guess where your application is. Anyway, if an attacker has access to your machine, finding everything is just a matter of time.
You could use fine-grained permissions to keep the chosen folder safe
Log Folder
Similar to the previous paragraph:
You could store your logs in the standard location, e.g. /var/log/acme.log
Or create your own company standard:
/acme/log/api.log
/acme/webs/web1/app.log
Config Folder
This is the key for devops.
In traditional, manual deployments of the past, a few locations were used to store app configuration, like:
/etc
$HOME/.acme/settings.json
But in the modern epoch, and if you are using Docker, you should not manually store your settings inside the container or on the host. The best way to have just one build and deploy it n times (dev, test, staging, uat, prod, etc.) is to use environment variables.
One build, n deploys and env-variable usage are fundamental for devops and cloud applications. Check the famous https://12factor.net/
III. Config: Store config in the environment
V. Build, release, run: Strictly separate build and run stages
This is also good practice in any language. Check Heroku: Configuration and Config Vars
So your Python app should not read or expect a file in the filesystem to load its configuration. Maybe for dev, but not for test and prod.
Your Python app should read its configuration from env variables:
import os
print(os.environ['DATABASE_PASSWORD'])
And then inject these values at runtime:
docker run -it -p 8080:80 -e DATABASE_PASSWORD=changeme my_python_app
And on your local development machine, run this before starting your application, in the same shell:
export DATABASE_PASSWORD=changeme
python myapp.py
Config for a lot of apps
The previous approach is an option for a couple of apps. But if you are driven to microservices and micro-frontends, you will have dozens of apps in several languages. In that case, to centralize the configuration you could use:
Spring Cloud
ZooKeeper
https://www.vaultproject.io/
https://www.doppler.com/
Or the Configurator (I'm the author)
I'm new to both Docker and Azure Functions so it must be a silly question...
You can pull the Azure Functions images (listed on Docker Hub, hosted on mcr.microsoft.com), like:
docker pull mcr.microsoft.com/azure-functions/node:3.0-node12
Now I pulled the image of a specific runtime of Azure Functions, but what can I do with this exactly?
First I thought I could find Azure Functions Core Tools inside the container; then I found the azure-functions-host directory with a bunch of files, but I'm not sure what it is.
docker exec -it "TheContainerMadeOfAzureFunctionsImage" bash
-> FuncExtensionBundles azure-functions-host bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Thank you in advance.
You can install the remote development extension tools for VSCode and the Azure Functions extension.
Create your local folder, then using the remote development tools, open that folder inside a container from the command palette by selecting 'Reopen in Container'.
[screenshot: Reopen in Container]
Then select your definition.
[screenshot: remote development tools]
This actually uses those base images you mentioned.
It will create a hidden .devcontainer directory in your repo where it stores the container information, and it saves you from having to install the Functions Core Tools/npm or anything else on your local machine.
It automatically forwards the required ports for local debugging, and you can push the devcontainer definitions to source control so that others can use your definition with the project.
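For reference, the generated .devcontainer/devcontainer.json looks roughly like this; the exact fields depend on the definition you pick, so treat it as a sketch:
{
  "name": "Azure Functions (Node.js)",
  "image": "mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools",
  "forwardPorts": [7071],
  "extensions": ["ms-azuretools.vscode-azurefunctions"]
}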
Last week I solved it myself: I found the exact image on Docker Hub, ran docker pull mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools, and that's it.
You can find a full list of available tags for each runtime.
In the container you can run both Azure Functions Core Tools and a language runtime (like Node.js or Python), and of course you can create function apps.
With port forwarding, like docker run -it -p 8080:7071 --name container1 mcr.microsoft.com/azure-functions/node:3.0-node12-core-tools bash, you can debug your functions running inside the container (which uses port 7071) from your local machine by sending HTTP requests to localhost:8080. This is somewhat brute force, but I'm happy.
I'm trying out the new web app service for Linux/containers with a custom Docker image I've pushed up to an ACR.
https://azure.microsoft.com/en-gb/blog/webapp-for-containers-overview/
We've got it, a Django app, up and running nicely.
What I need to do now though is to be able to run one off commands in the containers we're making.
Could anyone point me in the right direction on how to accomplish that?
Any help, greatly appreciated.
The current approach for doing that in App Service Linux is to enable SSH to your container -
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-ssh-support
# ------------------------
# SSH Server support
# ------------------------
RUN apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "root:Docker!" | chpasswd
You can then SSH into it through the Kudu console (which uses WebSSH).
I'm pretty sure it's not possible with Azure Web App yet.
Useful links:
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-ssh-support
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image#troubleshooting
I am using this Dockerfile on Azure Linux App Service:
FROM ruby:2.3.3
ENV "GEM_HOME" "/home/gems"
ENV "BUNDLE_PATH" "/home/gems"
EXPOSE 3000
WORKDIR /home/webapp
CMD bundle install && bundle exec rails server puma -b 0.0.0.0 -e production
As you can see, the gems folder is located in the home folder, and the home folder is shared with the host system of the App Service. Now my problem: the App Service log LogFiles/docker/docker_***_out.log indicates that bundle install is called multiple times (probably from different containers). This leads to some gems never being installed successfully.
Is there some setting that runs just one container, so that the gem installations don't interfere with each other? Or am I making wrong assumptions here? Maybe the problem isn't that multiple containers are started?
Is there an easier way to install the gems the first time into the shared folder of the host system?
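A common way to sidestep this entirely is to bake the gems into the image at build time instead of running bundle install in CMD, so every container starts with the gems already installed; a sketch, assuming a standard Gemfile/Gemfile.lock:
FROM ruby:2.3.3
WORKDIR /home/webapp
# install gems once, into the image, at build time
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD bundle exec rails server puma -b 0.0.0.0 -e production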
I have a VPS running Debian 8 with Docker. I want to give my customers some kind of terminal access to their container through the web interface.
What's the best way of implementing this? And does anyone have some kind of example?
Cheers,
Ramon
You can spin up your own web interface easily, since Docker includes a REST-based API. There are also plenty of existing implementations of this out there, including:
Universal Control Plane
UI for Docker
Docker WebUI
And various others if you search Docker Hub.
Because you're also asking for examples: A very easy implementation for a UI is the following:
install the Docker engine (curl -sSL https://get.docker.com/ | sh)
start the Docker daemon (sudo service docker start)
run the ui-for-docker container and map port 9000:
docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker
access server-ip:9000 in your browser.
If you just want to know what is happening in your Docker registry, then you may also want to try this UI for Docker Registry. It is a bit raw now, but it has features that others do not:
It shows the dependency tree (FROM directive) of stored images.
It shows pretty statistics about the number of uploads and image sizes.
It can serve multiple repositories.