I work a lot with DDEV on my PHP projects and love the features DDEV offers.
Since I also work with Django and NodeJS projects, I would like to use them in combination with DDEV. Officially, these are not yet supported in the current version (1.18), but maybe someone has already found a solution?
For a quick-and-dirty answer on Django, I'd like to get you started with a simple and probably inadequate approach, but it shows how easy it is to add something like Django. We'll just use the Django dev server.
Make a directory (I called mine dj), cd into it, and run:
ddev config --auto
Add to the .ddev/config.yaml:
webimage_extra_packages: [python3-django]
hooks:
  post-start:
    - exec: python3 manage.py runserver 0.0.0.0:8000
Add .ddev/docker-compose.django.yaml:
version: "3.6"
services:
web:
expose:
- 8000
environment:
- HTTP_EXPOSE=80:8000
- HTTPS_EXPOSE=443:8000
healthcheck:
test: "true"
ddev start
ddev ssh and create a trivial Django project:
django-admin startproject dj .
Add ALLOWED_HOSTS = ["dj.ddev.site"] to your dj/settings.py.
Exit back out to the host with Ctrl-D or exit, then run ddev start again.
You should be able to access the trivial project at https://dj.ddev.site
Note that as you proceed, you'll probably want to start the Django server another way, or more likely front it with the ddev-webserver nginx server, which would be more natural (as in https://docs.nginx.com/nginx/admin-guide/web-server/app-gateway-uwsgi-django/). But for now, this is a simple demonstration. Happy to help you as you go along.
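If you do experiment with fronting it through nginx later, the proxy part itself is nothing exotic. A minimal sketch (assuming the Django dev server keeps listening on port 8000 inside the web container; where exactly such a snippet belongs in your DDEV version is something to check in the DDEV nginx customization docs):

# Hypothetical nginx location block proxying to the Django dev server on port 8000
location / {
    # pass requests through to runserver inside the web container
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}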
I'm having trouble setting up DB2 on macOS via Docker on my M1-Max MacBook Pro (32 GB RAM). I already had a look at this question, which might be related; however, there is not a lot of information and I cannot say exactly whether it is about the same thing.
I set up following docker-compose.yml:
version: '3.8'
services:
  db2:
    image: ibmcom/db2
    platform: linux/amd64
    container_name: db2-test
    privileged: true
    environment:
      LICENSE: "accept"
      DB2INSTANCE: "db2dude"
      DB2INST1_PASSWORD: "db2pw"
      DBNAME: "RC1DBA"
      BLU: "false"
      ENABLE_ORACLE_COMPATIBILITY: "false"
      UPDATEVAIL: "NO"
      TO_CREATE_SAMPLEDB: "false"
      REPODB: "false"
      IS_OSXFS: "true"
      PERSISTENT_HOME: "true"
      HADR_ENABLED: "false"
      ETCD_ENDPOINT: ""
      ETCD_USERNAME: ""
      ETCD_PASSWORD: ""
    volumes:
      - ~/workspace/docker/db2-error/db2/database:/database
      - ~/workspace/docker/db2-error/db2/db2_data:/db2_data
    ports:
      - 50000:50000
On my Intel MacBook this spins up without any issue; on my M1 MacBook, however, after Task #4 has finished I see the following portion in STDOUT:
DBI1446I The db2icrt command is running.
DBI1070I Program db2icrt completed successfully.
(*) Fixing /etc/services file for DB2 ...
/bin/bash: db2stop: command not found
From what I could figure out, the presence of (*) Fixing /etc/services file for DB2 ... already seems to be wrong (it does not appear in my Intel log and does not sound like everything's fine), and the /bin/bash: db2stop: command not found appears due to line 81 of /var/db2_setup/include/db2_common_functions, which states su - ${DB2INSTANCE?} -c 'db2stop force'.
As far as I understand, su - should run with the PATH of the target user, and in every single .profile or .bashrc in the home directory, ~/sqllib/db2profile is being sourced (via . /database/config/db2dude/sqllib/db2profile).
However, when I call su - db2dude -c 'echo $PATH' as root inside the container (docker exec -it db2-test bash), it prints /usr/local/bin:/bin:/usr/bin, so the PATH is obviously not as expected.
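To illustrate the PATH problem (a sketch only, using the profile path from my setup above), sourcing the profile by hand inside the su call should at least put db2stop back on the PATH:

# Hypothetical manual check inside the container: source the DB2 profile explicitly
docker exec -it db2-test bash
su - db2dude -c '. /database/config/db2dude/sqllib/db2profile && db2stop force'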
Maybe someone can figure out what's happening at this point. I also tried running Docker with the "new Virtualization framework", which did not change anything. I assume Docker's compatibility magic might not be perfect, but I'm hoping to find some kind of workaround, maybe by building an image on top of ibmcom/db2.
I highly appreciate your time and advice. Thanks a lot in advance.
As stated in #mshabou's answer, there is no support yet. One way you can still make it work is by prepending your Docker command with DOCKER_DEFAULT_PLATFORM=linux/amd64 or executing export DOCKER_DEFAULT_PLATFORM=linux/amd64 in your shell before starting the container.
Alternatively, you can also use colima. Install colima as described on their GitHub page and then start it in emulated mode like colima start --arch x86_64. Now you will be able to use your ibmcom/db2 image the way you're used to (albeit with decreased performance).
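For reference, the two approaches look like this on the command line (a sketch; the compose invocation itself is just whatever you normally run):

# Option 1: force amd64 emulation for this shell session
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker-compose up -d

# Option 2: run the whole Docker runtime under emulation via colima
colima start --arch x86_64
docker-compose up -d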
db2 is not supported on the ARM architecture; only these architectures are supported: amd64, ppc64le, s390x.
https://hub.docker.com/r/ibmcom/db2
I am using VS Code's feature to create development containers for my services. Using the following layout, I've defined a single service (for now). I'd like my node project to run automatically and listen for HTTP requests once the container is configured, but I haven't found the best way to do so.
My Project Directory
project-name
  .devcontainer.json
  package.json (etc)
  docker-compose.yaml
Now in my docker-compose.yaml, I've defined the following structure:
version: '3'
services:
  project-name:
    image: node:14
    command: /bin/sh -c "while sleep 1000; do :; done"
    ports:
      - 4001:4001
    restart: always
    volumes:
      - ./:/workspace:cached
Note how I need to have /bin/sh -c "while sleep 1000; do :; done" as the service command; according to the VS Code docs, this is required so that the service doesn't shut down.
Within my .devcontainer.json:
{
  "name": "My Project",
  "dockerComposeFile": [
    "../docker-compose.yaml"
  ],
  "service": "project-name",
  "shutdownAction": "none",
  "postCreateCommand": "npm install",
  "postStartCommand": "npm run dev", // this causes the project to hang while configuring?
  "workspaceFolder": "/workspace/project-name"
}
I've added a postCreateCommand to install dependencies, but I also need to run npm run dev so that my server listens for requests. If I add this command as the postStartCommand, the project does build and run, but it hangs on Configuring Dev Server (with a spinner at the bottom of VS Code), since the script starts my server and never exits. I feel like there should be a better way to trigger the server to run after the container is set up?
See https://code.visualstudio.com/remote/advancedcontainers/start-processes
In other cases, you may want to start up a process and leave it running. This can be accomplished by using nohup and putting the process into the background using &. For example:
"postStartCommand": "nohup bash -c 'your-command-here &'"
I just tried it, and it works for me - it removes the spinning "Configuring Dev Container" that I also saw. However, it does mean the process is running in the background so your logs will not be surfaced to the devcontainer terminal. I got used to watching my ng serve logs in that terminal to know when compilation was done, but now I can't see them. Undecided if I'll switch back to the old way. Having the Configuring Dev Container spinning constantly was annoying but otherwise did not obstruct anything that I could see.
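One way to get the logs back (a sketch with a hypothetical file name) is to redirect the backgrounded command to a file and tail it from a devcontainer terminal:

// .devcontainer.json (sketch): keep the server in the background but capture its output
"postStartCommand": "nohup bash -c 'npm run dev > /workspace/dev-server.log 2>&1 &'"

Then tail -f /workspace/dev-server.log gives you roughly the same view you had before.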
Smacking my head against a desk trying to get this one to work. I'm trying to deploy a simple Flask app on gunicorn through Google App Engine. When visiting the app, I'm getting Error 500, with "gunicorn: error: No application module specified." in the logs.
My layout is roughly:
Main_Directory/
|-MyApp/
| |-__init__.py <- Containing majority of code
| |-templates/
| |-static/
|-app.yaml
|-main.py
|-requirements.txt
main.py simply imports the app from MyApp and instantiates a new instance called "app".
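(To illustrate, that file is roughly along these lines; the exact import depends on what MyApp/__init__.py exposes.)

# main.py (sketch): expose the Flask app created in MyApp/__init__.py
from MyApp import app  # assumes __init__.py defines a module-level Flask instance named "app"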
requirements.txt has all dependencies, including gunicorn.
app.yaml is exactly as follows:
runtime: python37
instance_class: F1
default_expiration: "4d 12h"
entrypoint: gunicorn -b :$PORT -w 1
env_variables:
  BUCKET_NAME: "bucket_name"
handlers:
  - url: /.*
    script: auto
I tried creating another app.yaml within the MyApp folder and adding service: "default", but that didn't help. I'm pretty much out of ideas here. I'm fairly certain the issue is that the main app code is defined in MyApp rather than directly under Main_Directory. I appreciate the easy solution would be to bring everything from the MyApp folder up a level, but I'm hesitant to do this as it will mess with my Git. I thought creating main.py and directly instantiating MyApp/__init__.py would do the trick, but it doesn't quite seem to.
Any and all ideas appreciated.
If you want to use gunicorn as your HTTP server, you need to tell it where your WSGI app is located. This is likely an app variable in your main.py file, so you can use:
entrypoint: gunicorn -b :$PORT -w 1 main:app
As is often the case, after reaching my limit and raising a question, I found an answer...
I removed the entrypoint from app.yaml and removed gunicorn from requirements.txt. On redeploying the app, it now works.
I don't feel like this is a fix so much as a workaround, and I'm still not sure why it didn't work before. There must be better solutions that maintain gunicorn as a requirement; I expect it was the yaml that was wrong.
Another thing that can cause this error is if the gunicorn.conf.py file is missing.
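If that's the case, a minimal sketch of such a file (the values are illustrative, not App Engine requirements) would be:

# gunicorn.conf.py (sketch): bind to the port App Engine provides
import os

bind = f"0.0.0.0:{os.environ.get('PORT', '8080')}"
workers = 1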
I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:
Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0
(My question actually has nothing to do with Docker as such; I suspect the problem exists for any of the new Elastic Beanstalk platforms.)
Previously I could follow Amazon's recipe, meaning put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer true.
Has this changed? I can't find it documented. Anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?
Minimal working example
I've created a minimal working example and deployed on both platforms.
Dockerfile:
FROM ubuntu
COPY daemon-run.sh /daemon-run.sh
RUN chmod +x /daemon-run.sh
EXPOSE 80
ENTRYPOINT ["/daemon-run.sh"]
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Logging": "/var/mydaemon"
}
daemon-run.sh:
#!/bin/bash
echo "Starting daemon" # output to stdout
mkdir -p /var/mydaemon/deeperlogs
while true; do
echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
sleep 5
done
.ebextensions/mydaemon-logfiles.config:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log
If I do "Full Logs" action on the old platform I would get a ZIP with my deeperlogs included
inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.
Investigation
If you look on the disk you'll see that the new Elastic Beanstalk doesn't have a /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
On Amazon Linux 2 the folder is:
/opt/elasticbeanstalk/config/private/logtasks/bundle
The .ebextensions/mydaemon-logfiles.config should be:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/mydaemon-logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log
container_commands:
  append_deeperlogs_to_applogs:
    command: echo -e "\n/var/log/eb-docker/containers/eb-current-app/deeperlogs/*" >> /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
The mydaemon-logfiles.config also adds deeperlogs to the applogs file. Without this, deeperlogs will not be included in the downloaded log zip bundle. This is interesting, because the folder will be in the correct location, i.e., /var/log/eb-docker/containers/eb-current-app/deeperlogs/, but without being explicitly listed in applogs it is skipped when the zip bundle is generated.
I tested it with a single-container Docker environment (3.0.1).
The full log bundle successfully contained deeperlogs with the correct log data.
Hope that this will help. I haven't found any references for this; the AWS documentation does not cover it, as it is mostly based on Amazon Linux 1, not Amazon Linux 2.
Amazon has fixed this problem in the versions of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020.
It has been fixed so that log task customization on AL2-based platforms now works the way it has always worked (i.e. on the previous-generation AL2018 platforms), and you can therefore follow the official documentation to make this happen.
Successfully tested with platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x" then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
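In practice that means a config shaped like the one in the question should work again on a fixed platform version; a sketch (verify the exact path against the current official documentation):

# .ebextensions/mydaemon-logfiles.config (sketch for fixed AL2 platform versions)
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log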
As of 2021/11/05, I had tried the accepted answer and various other examples, including the latest official documentation on using the .ebextensions folder with *.config files, without success.
Most likely I was doing something wrong, but here's what worked for me.
The version I'm using: Docker running on 64bit Amazon Linux 2/3.4.8
Simply add a volume to your docker-compose.yml file to share your application logs with the Elastic Beanstalk log directory.
Example docker-compose.yml:
version: "3.9"
services:
app:
build: .
ports:
- "80:80"
user: root
volumes:
- ./:/var/www/html
# "${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>
- "${EB_LOG_BASE_DIR}/app:/var/www/html/application/logs" # ADD THIS LINE
env_file:
- .env
For more info, here's the documentation I followed.
Hopefully, this helps future readers like myself 👍
I am trying to achieve something incredibly basic, but have been going at this for a couple of evenings now and still haven't found a solid (or any) solution. I have found some similar topics on SO and followed what was on there but to no avail, so I have created a GitHub repo for my specific case.
What I'm trying to do:
Be able to provision the NodeJS app using docker-compose up -d (I plan to add further containers in the future; omitted from this example)
Ensure the code is mapped via volumes so I don't have to re-build every time I make a change to some code locally.
My guess is that the issue I'm encountering has something to do with the mapping of volumes causing some files to be lost or overwritten within the container; for instance, in some of the variations I've tried, the folders are being mapped but individual files are not.
I've created a simple repo to illustrate my issue; just check it out and run docker-compose up -d to see it. The container dies due to:
Error: Cannot find module '/src/app/app.js'
The link to the repo is here: https://github.com/josephmcdermott/nodejs-docker-issue. PRs welcome, and if anybody can solve this for me I'd be eternally grateful.
UPDATE: please see the solution code below, kind thanks to ldg
Dockerfile
FROM node:4.4.7
RUN mkdir -p /src
COPY . /src
WORKDIR /src
RUN npm install
EXPOSE 3000
CMD ["node", "/src/app.js"]
docker-compose.yml
app:
  build: .
  volumes:
    - ./app:/src/app
Folder Structure:
- app
  - * (files I want to sync and regularly update)
- app.js (initial script to call files within app/)
- Dockerfile
- docker-compose.yml
- package.json
In your compose file, the last line (- /src/app/node_modules) is likely mapping over your previous volume. If you mount /src/app then node_modules will get created in that linked volume. So it would look like this:
app:
  build: .
  volumes:
    - ./app:/src/app
If you do want to keep your entire /app directory as a linked volume, you'll need to either do npm install when starting the container (which would ensure it picks up any updates) OR not link the volume and update your Dockerfile to copy the entire /app directory. This is nice because it gives you a self-contained image. I usually Dockerize my Node.js apps this way. You can also run npm test as appropriate to verify the image.
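For the first option, a sketch of what that could look like (the command wrapper here is an assumption, not something from your repo) is to run npm install as part of the container's start command:

# docker-compose.yml (sketch): install dependencies on every start so the
# linked volume always ends up with a populated node_modules
app:
  build: .
  command: sh -c "npm install && node /src/app.js"
  volumes:
    - .:/src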
If you need to create a linked volume for a script file you want to be able to edit (or if your app generates side-effects), you can link just that directory or file via Docker volumes.
Btw, if you want to make sure you don't copy the contents of that directory in the future, add it to .dockerignore (as well as .gitignore).
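For example, assuming the directory in question is node_modules, a single line is enough (the same entry works in .gitignore):

# .dockerignore
node_modules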
Notice the '/' at the end
volumes:
  - ./app:/src/app/
This declaration is not correct
volumes:
  - ./app:/src/app