I'm trying to set up a web app that has persistent storage via a file share on a storage account.
I'm following various guides from the Microsoft docs and I've managed to do most of it; my app has persistent storage. Now I want to map volumes to my storage account.
I saw that there is this variable ${WEBAPP_STORAGE_HOME} that I can use in my docker-compose.
My question is, what is the value of this variable? The docs state:
${WEBAPP_STORAGE_HOME} is an environment variable in App Service that is mapped to persistent storage for your app.
I find this a little vague. Does it automatically know about my path mappings? What if I have multiple path mappings? Should I set the value in the App Settings on the Configuration blade? If so, what do I need to specify, the mount path?
Besides that, I saw that it is used as follows:
version: '3.3'
services:
  wordpress:
    image: mcr.microsoft.com/azuredocs/multicontainerwordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
I'm used to named volumes in docker-compose. I figure there is no need to specify something like that here?
UPDATE
After Jason Pan's answer, I tried to play with the mount a little bit.
I succeeded in having persistent storage on the App Service with the following docker-compose:
# ... lines skipped for brevity
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
    driver: local
But I want to persist the data on a Storage Account. I saw that this is possible via: AppService/Configuration/Path Mappings.
My Docker Compose
# ...
    volumes:
      - MyMountedPath:/var/lib/mysql
In this docker-compose file, I have my app and a MySQL image: mysql:8 to be exact.
I mounted a path as follows:
Name: MyMountedPath; Mounted Path: /usr/local/mysql; Type: Azure Files ....
And I get the following error in the logs:
2021-04-08T11:02:06.790578922Z chown: changing ownership of '/var/lib/mysql/': Operation not permitted
2021-04-08T11:02:12.785079208Z 2021-04-08 11:02:12+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
Since it worked with the first approach, I suspect that there are some issues with the way my Path mapping is defined. This led me to even more questions:
Does my mount path need to exist on the App Service file system? If not, can I define something like /foo/bar?
If I have a path mapping named MyMountedPath, can I specify something like the following in the docker-compose file,
volumes:
  - MyMountedPath/foo:/something
basically navigating within the mounted path?
Do I need to create this path in the file share in my Storage Account, or will the App Service create it when it needs to store something?
Example:
In the App Service properties, I mounted an Azure File Share and gave the name MyExternalStorage
In the docker compose configuration I have to set
volumes:
  - MyExternalStorage:/var/www/html/contao
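For completeness, here is a minimal sketch of how that mount name is referenced from a full compose file; the service name and image are illustrative, only the volume line and the share name MyExternalStorage come from the setup above:
version: '3.3'
services:
  app:
    image: my-registry/my-contao-app:latest   # illustrative image
    volumes:
      # MyExternalStorage is the name given to the Azure Files share
      # under App Service > Configuration > Path mappings
      - MyExternalStorage:/var/www/html/contao
    ports:
      - "8080:80"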
Thanks for TeddyDubois29's answer; I hope it can also help you.
Web App Docker Compose Persistent Storage
I am building a service which creates on-demand Node-RED instances on Kubernetes. This service needs to have custom authentication and some other service-specific data in a JSON file.
Every instance of Node-RED will have a Persistent Volume associated with it, so one way I thought of doing this was to attach the PVC to a pod, copy the files into the PV, and then start the Node-RED deployment over the modified PVC.
I use the following script to accomplish this:
import tarfile
import tempfile
from os import path

from kubernetes.stream import stream

def paste_file_into_pod(self, src_path, dest_path):
    dir_name = path.dirname(src_path)
    bname = path.basename(src_path)
    # tar the file inside the pod and stream the archive back over the exec connection
    exec_command = ['/bin/sh', '-c', 'cd {src}; tar cf - {base}'.format(src=dir_name, base=bname)]
    with tempfile.TemporaryFile() as tar_buffer:
        resp = stream(self.k8_client.connect_get_namespaced_pod_exec,
                      self.kube_methods.component_name, self.kube_methods.namespace,
                      command=exec_command,
                      stderr=True, stdin=True,
                      stdout=True, tty=False,
                      _preload_content=False)
        print(resp)
        # read the tar stream from stdout into a local temporary buffer
        while resp.is_open():
            resp.update(timeout=1)
            if resp.peek_stdout():
                out = resp.read_stdout()
                tar_buffer.write(out.encode('utf-8'))
            if resp.peek_stderr():
                print('STDERR: {0}'.format(resp.read_stderr()))
        resp.close()
        tar_buffer.flush()
        tar_buffer.seek(0)
        # unpack the archive on the local side
        with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
            subdir_and_files = [tarinfo for tarinfo in tar.getmembers()]
            tar.extractall(path=dest_path, members=subdir_and_files)
This seems like a very messy way to do it. Can someone suggest a quick and easy way to start Node-RED in Kubernetes with a custom settings.js and some additional files for config?
The better approach is not to use a PV for flow storage, but to use a storage plugin to save flows in a central database. Several already exist, using DBs like MongoDB.
You can extend the existing Node-RED container to include a modified settings.js in /data that includes the details for the storage and authentication plugins, and use environment variables to set the instance-specific values at start-up.
Examples here: https://www.hardill.me.uk/wordpress/tag/multi-tenant/
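As a rough sketch of that approach (all names are illustrative, and the Secret nodered-secrets is assumed to exist), the modified settings.js can be shipped in a ConfigMap and mounted over /data/settings.js, with instance-specific values injected via environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nodered-settings            # illustrative name
data:
  settings.js: |
    module.exports = {
      // instance-specific values come from environment variables
      credentialSecret: process.env.NR_CREDENTIAL_SECRET,
      // adminAuth / storageModule config for the chosen plugins would go here
    };
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered-instance            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered-instance
  template:
    metadata:
      labels:
        app: nodered-instance
    spec:
      containers:
        - name: nodered
          image: nodered/node-red:latest
          env:
            - name: NR_CREDENTIAL_SECRET
              valueFrom:
                secretKeyRef:
                  name: nodered-secrets   # assumed to exist
                  key: credentialSecret
          volumeMounts:
            - name: settings
              mountPath: /data/settings.js
              subPath: settings.js
      volumes:
        - name: settings
          configMap:
            name: nodered-settings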
It might take a while to explain what I'm trying to do but bear with me please.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing but this was the only way to access the deployment without using ingress on minikube)
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have 2 endpoints exposed from the node.js app (questo-server-deployment)
I'm making the requests using 10.97.189.215 which is the questo-server-service external IP address (as you can see in the first picture)
So I have 2 endpoints:
health - which simply returns 200 OK from the node.js app - and this part is fine confirming the node app is working as expected.
dynamodb - which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print env vars I'm getting the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
so it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I assign in the ConfigMap and then use in the questo-server-deployment (job).
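For reference, the plain Kubernetes YAML equivalent of that wiring (the actual setup is defined in Terraform; the ConfigMap name and image are illustrative, while the namespace and values come from the env dump above) would look roughly like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: questo-server-config          # illustrative name
  namespace: minikube-local-ns
data:
  DB_DOCKER_URL: questo-dynamodb-service
  DB_REGION: local
  DB_TABLE_NAME: Questo
---
apiVersion: batch/v1
kind: Job
metadata:
  name: questo-server-deployment
  namespace: minikube-local-ns
spec:
  template:
    spec:
      containers:
        - name: questo-server
          image: questo-server:local    # illustrative image
          envFrom:
            - configMapRef:
                name: questo-server-config
      restartPolicy: Never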
Also, when I log:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting results which indicate that the app (Node.js) tried to connect to the DB (DynamoDB), but on the wrong port: 443 instead of 8000?
The DB_DOCKER_URL should contain the full address (with port) to the questo-dynamodb-service
What am I doing wrong here?
Edit ----
I've explicitly assigned the port 8000 to the DB_DOCKER_URL as suggested in the answer, but now I'm getting a different error.
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using https?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
...
Otherwise it may default to 443.
Answering my own question in case anyone has an equally brilliant idea of running local DynamoDB in a minikube cluster.
The issue was not only with the port, but also with the protocol, so the final answer to the question is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
As a side note:
Also, when you are running various scripts to create a DynamoDB table in your amazon/dynamodb-local container, make sure you use the same region both when creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
--cli-input-json file://questo_db_definition.json \
--endpoint-url http://questo-dynamodb-service:8000 \
--region local
And the same region when querying the data.
Even though this is just a local copy, where you can type anything you want as the value of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and in fact AWS_REGION as well, the region has to match.
If you query the db with a different region than the one it was created with, you get the Cannot do operations on a non-existent table error.
I deployed the gcp-spark operator on k8s. It's working perfectly fine: I'm able to run Scala and Python jobs with no issues.
But I am unable to create volume mounts on my pods, and unable to use the local fs. It looks like the spark-operator needs to be enabled with webhooks for this to work, going by here.
There is a spark-operator-with-webhooks YAML here, but the name is different from the deployment coming through OperatorHub. I updated the names to the best of my knowledge and tried to apply the deployment, but ran into the issue below.
kubectl apply -f spark-operator-with-webhook.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/spark-operator configured
service/spark-webhook unchanged
The Job "spark-operator-init" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVers......int(nil)}}: field is immutable
Is there an easy way of enabling webhooks on the spark-operator? I want to be able to mount the local fs in the SparkApplication. Please assist.
I purged (deleted) the spark-operator-init Job object and redeployed. The manifest was then applied successfully.
I am building an application using JHipster.
My sample application-prod.yml, as generated by JHipster, looks like this:
spring:
    datasource:
        type: com.zaxxer.hikari.HikariDataSource
        url: jdbc:mysql://localhost:3306/MyModule?useUnicode=true&characterEncoding=utf8&useSSL=false
        name:
        username: hello
        password: hello
        hikari:
            data-source-properties:
                ...
    jpa:
        database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
        database: MYSQL
        show-sql: false
        org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
        ...
When I run the application without Docker, I get a MySQL error if the username/password is incorrect, which is normal.
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
But if I run the same application using the Docker image and provide the DB properties in the docker-compose file, the properties in the application-prod.yml file seem to get ignored. That is, even if the database properties in the application properties file are incorrect but the correct values are provided in the docker-compose file, the application works fine when run via the Docker image and can connect to the database.
The entries in the docker-compose file are given below:
version: '2'
services:
  mymodule-mysql:
    container_name: mymodule-mysql
    image: mysql:5.7.13
    environment:
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=no
      - MYSQL_DATABASE=mymodule
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl
It seems that the environment variables in the docker-compose file are overriding the properties in the application-prod.yml file. Is my understanding correct?
It would be good if someone could explain in detail how this works in JHipster.
Your observation is correct: the values specified via environment variables override the ones specified in the yml file in the jar. This behavior has nothing to do with JHipster; it is pure Spring Boot. Here is a short overview of the order in which properties are overridden (from the Spring docs):
Spring Boot uses a very particular PropertySource order that is designed to allow sensible overriding of values. Properties are considered in the following order:
Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).
@TestPropertySource annotations on your tests.
@SpringBootTest#properties annotation attribute on your tests.
Command line arguments.
Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property)
ServletConfig init parameters.
ServletContext init parameters.
JNDI attributes from java:comp/env.
Java System properties (System.getProperties()).
OS environment variables.
A RandomValuePropertySource that only has properties in random.*.
Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants)
Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants)
Application properties outside of your packaged jar (application.properties and YAML variants).
Application properties packaged inside your jar (application.properties and YAML variants).
@PropertySource annotations on your @Configuration classes.
Default properties (specified using SpringApplication.setDefaultProperties).
The entries in the yml file for the MySQL docker service that you posted here are the credentials for the root user of the MySQL RDBMS which is starting as a docker service. This does not mean that your application will use those credentials. It may be that you have the same credentials in the application-prod.yml file which was added to your war during the packaging phase, and this war was then put into your docker image.
In the app.yml file which is used for starting docker-compose, you should also have some environment variables, e.g.
environment:
  - SPRING_PROFILES_ACTIVE=prod
  - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/myDataBase?useUnicode=true&characterEncoding=utf8&useSSL=false
  - JHIPSTER_SLEEP=10 # gives time for the database to boot before the application
for Spring, which override the values from your application-prod.yml file. It is also important that the mysql container is reachable from your app container (here under the hostname mysql).
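A minimal sketch of such an app.yml, based on the compose file from the question (the app service name, image, and credentials are assumptions; the relaxed binding of SPRING_DATASOURCE_URL to spring.datasource.url is standard Spring Boot):
version: '2'
services:
  mymodule-app:
    image: mymodule                   # illustrative image name
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      # these override spring.datasource.* from the packaged application-prod.yml;
      # "mymodule-mysql" resolves to the MySQL container below
      - SPRING_DATASOURCE_URL=jdbc:mysql://mymodule-mysql:3306/mymodule?useUnicode=true&characterEncoding=utf8&useSSL=false
      - SPRING_DATASOURCE_USERNAME=root   # assumed credentials matching the mysql service
      - SPRING_DATASOURCE_PASSWORD=root
      - JHIPSTER_SLEEP=10
    ports:
      - 8080:8080
  mymodule-mysql:
    container_name: mymodule-mysql
    image: mysql:5.7.13
    environment:
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=no
      - MYSQL_DATABASE=mymodule
    command: mysqld --lower_case_table_names=1 --skip-ssl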
We are developing a Node.js application and we want to launch it in the Amazon cloud.
We have integrated Bamboo with our other Atlassian applications. Bamboo transfers the build files to an Amazon S3 bucket.
The problem is: how can I move the application from S3 to the EC2 instances and start it?
You can find my appspec.yml in the attachments; my build directory contains the following files:
- client | files like index.html etc
- server | files like the server.js and socketio.js
- appspec.yml
- readme
Does anyone have an idea? I hope this contains all the important information you need.
Thank you :D
Attachments
version: 1.0
os: linux
files:
  - source: /
    destination: /
Update
I just realized that your appspec.yml seems to lack a crucial part for the deployment of a Node.js application (and most others for that matter), namely the hooks section. As outlined in AWS CodeDeploy Application Specification Files, the AppSpec file is used to manage each deployment as a series of deployment lifecycle events:
During the deployment steps, the AWS CodeDeploy Agent will look up the current event's name in the AppSpec file's hooks section. [...] If the event is found in the hooks section, the AWS CodeDeploy Agent will retrieve the list of scripts to execute for the current step. [...]
See for example the provided AppSpec file Example (purely for illustration, you'll need to craft a custom one appropriate for your app):
os: linux
files:
  - source: Config/config.txt
    destination: webapps/Config
  - source: source
    destination: /webapps/myApp
hooks:
  BeforeInstall:
    - location: Scripts/UnzipResourceBundle.sh
    - location: Scripts/UnzipDataBundle.sh
  AfterInstall:
    - location: Scripts/RunResourceTests.sh
      timeout: 180
  ApplicationStart:
    - location: Scripts/RunFunctionalTests.sh
      timeout: 3600
  ValidateService:
    - location: Scripts/MonitorService.sh
      timeout: 3600
      runas: codedeployuser
Without such an ApplicationStart command, AWS CodeDeploy does not have any instructions for what to do with your app (remember that CodeDeploy is technology agnostic, and thus needs to be told how to start the app server, for example).
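To make this concrete, here is a rough sketch of what such an AppSpec file could look like for the build layout described in the question (the destination path, script names, and script contents are assumptions, not something CodeDeploy prescribes):
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/myapp
hooks:
  AfterInstall:
    - location: scripts/install_dependencies.sh   # e.g. runs "npm install" inside the server directory
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: scripts/start_server.sh           # e.g. starts server/server.js with node or a process manager
      timeout: 300
      runas: ec2-user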
Initial Answer
Section Overview of a Deployment within What Is AWS CodeDeploy? illustrates the flow of a typical AWS CodeDeploy deployment:
The key aspect regarding your question is step 4:
Finally, the AWS CodeDeploy Agent on each participating instance pulls the revision from the specified Amazon S3 bucket or GitHub repository and starts deploying the contents to that instance, following the instructions in the AppSpec file that's provided. [emphasis mine]
That is, once you have started an AWS CodeDeploy deployment, everything should work automatically - accordingly, something seems to be configured not quite right, with the most common issue being that the deployment group does not actually contain any running instances yet. Have you verified that you can deploy to your EC2 instance from CodeDeploy via the AWS Management Console?
What do you see if you log into the Deployments list of AWS CodeDeploy console?
https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments
(change the region accordingly)
Also, the code will be downloaded to /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/deployment-archive
and the logs are in /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/logs/scripts.log
Make sure that the agent has connectivity and permissions to download the release from the S3 bucket. That means having internet connectivity and/or using a proxy on the instance (setting http_proxy so that the CodeDeploy agent uses it), and attaching an IAM instance profile with permissions to read the S3 bucket.
Check the logs of the codedeploy agent to see if it's connecting successfully or not : /var/log/aws/codedeploy-agent/codedeploy-agent.log
You need to create a deployment in CodeDeploy and then deploy a new revision using the drop-down arrow in CodeDeploy and your S3 bucket URL. However, it needs to be a zip/tar.gz/tar archive.