Database Connection in JHipster Application While Using Docker - jhipster

I am building an application using JHipster.
My sample application-prod.yml, as generated by JHipster, looks like this:
spring:
    datasource:
        type: com.zaxxer.hikari.HikariDataSource
        url: jdbc:mysql://localhost:3306/MyModule?useUnicode=true&characterEncoding=utf8&useSSL=false
        name:
        username: hello
        password: hello
        hikari:
            data-source-properties:
                ...
    jpa:
        database-platform: org.hibernate.dialect.MySQL5InnoDBDialect
        database: MYSQL
        show-sql: false
        properties:
            hibernate.cache.region.factory_class: org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
    ...
When I run the application without Docker, I get a MySQL error if the username/password is incorrect, which is expected:
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
But if I run the same application using the Docker image and provide the DB properties in the Docker Compose file, the properties in the application-prod.yml file seem to get ignored. That is, even if the database properties in the application properties file are incorrect, as long as the correct values are provided in the Docker Compose file, the application runs fine when started from the Docker image and can connect to the database.
The entries in the Docker Compose file are given below:
version: '2'
services:
    mymodule-mysql:
        container_name: mymodule-mysql
        image: mysql:5.7.13
        environment:
            - MYSQL_USER=root
            - MYSQL_ROOT_PASSWORD=root
            - MYSQL_ALLOW_EMPTY_PASSWORD=no
            - MYSQL_DATABASE=mymodule
        ports:
            - 3306:3306
        command: mysqld --lower_case_table_names=1 --skip-ssl
It seems that the environment variables in the Docker Compose file are overriding the properties in the application-prod.yml file. Is my understanding correct?
It would be good if someone could explain in detail how this works in JHipster.

Your observation is correct: the values specified via environment variables override the ones specified in the yml file inside the jar. This behavior has nothing to do with JHipster; it is pure Spring Boot. Here is a short overview of the order in which properties are resolved (from the Spring documentation):
Spring Boot uses a very particular PropertySource order that is designed to allow sensible overriding of values. Properties are considered in the following order:
1. Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).
2. @TestPropertySource annotations on your tests.
3. @SpringBootTest#properties annotation attribute on your tests.
4. Command line arguments.
5. Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property).
6. ServletConfig init parameters.
7. ServletContext init parameters.
8. JNDI attributes from java:comp/env.
9. Java System properties (System.getProperties()).
10. OS environment variables.
11. A RandomValuePropertySource that only has properties in random.*.
12. Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants).
13. Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants).
14. Application properties outside of your packaged jar (application.properties and YAML variants).
15. Application properties packaged inside your jar (application.properties and YAML variants).
16. @PropertySource annotations on your @Configuration classes.
17. Default properties (specified using SpringApplication.setDefaultProperties).
The entries in the yml file for the MySQL Docker service that you posted here are the credentials for the root user of the MySQL RDBMS that is started as a Docker service. This does not mean that your application will use those credentials. It may be that you have the same credentials in the application-prod.yml file, which was added to your war during the packaging phase, and that war was then put into your Docker image.
In the app.yml file which is used for starting docker-compose, you should also have some environment variables, e.g.:
environment:
    - SPRING_PROFILES_ACTIVE=prod
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/myDataBase?useUnicode=true&characterEncoding=utf8&useSSL=false
    - JHIPSTER_SLEEP=10 # gives time for the database to boot before the application
These are picked up by Spring and override your application-prod.yml values. It is also important that the mysql container is reachable from your app container (note the hostname mysql instead of localhost in the JDBC URL).
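To make this concrete, here is a minimal sketch of how the app service in app.yml could look (service and image names such as mymodule-app are placeholders for illustration, not JHipster output; adjust them to your module):

services:
    mymodule-app:
        # hypothetical service/image names
        image: mymodule
        environment:
            - SPRING_PROFILES_ACTIVE=prod
            # relaxed binding: SPRING_DATASOURCE_URL overrides spring.datasource.url
            - SPRING_DATASOURCE_URL=jdbc:mysql://mymodule-mysql:3306/mymodule?useUnicode=true&characterEncoding=utf8&useSSL=false
            # these override spring.datasource.username / spring.datasource.password
            - SPRING_DATASOURCE_USERNAME=root
            - SPRING_DATASOURCE_PASSWORD=root
            - JHIPSTER_SLEEP=10
        ports:
            - 8080:8080

Because OS environment variables sit above the profile-specific yml files in the order listed above, these values win over whatever is packaged inside the jar/war.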

Related

DAST Authentication Issues in GitLab CI/CD on Angular Website

I'm having an issue with the built-in GitLab DAST scanning and authentication in the pipeline.
The application being scanned is an Angular app using the aspnetzero framework.
In GitLab, the CI/CD file uses the DAST UI configuration to set up the job, and in the CI/CD yml file the job spec looks like:
# Include the DAST template
include:
    - template: DAST.gitlab-ci.yml
# Your selected site and scanner profiles:
dast:
    stage: dast
    dast_configuration:
        site_profile: "auth"
        scanner_profile: "default"
The site profile is set up with the proper authentication data, but when running the DAST scanning job I get an error in the logs like:
2022-07-12T22:00:16.000 INF NAVDB Load URL added to crawl graph
2022-07-12T22:00:16.000 INF AUTH Attempting to authenticate
2022-07-12T22:00:16.000 INF AUTH Loading login page LoginURL=https://example.com/account
2022-07-12T22:00:23.000 WRN BROWS response body exceeds allowed size allowed_size_bytes=10000000 request_id=interception-job-4.0 response_size_bytes=11100508 url=https://example.com/main.f3808aecbe8d4efb.js
2022-07-12T22:00:38.000 WRN CONTA request failed, attempting to continue scan error=net::ERR_BLOCKED_BY_RESPONSE index=0 requestID=176.5 url=https://example.com/main.f3808aecbe8d4efb.js
2022-07-12T22:00:39.000 INF AUTH Writing authentication report path=/zap/wrk/gl-dast-debug-auth-report.html
2022-07-12T22:00:39.000 INF AUTH skipping writing of JSON cookie report as there are no cookies to write
2022-07-12T22:00:40.000 FTL MAIN Authentication failed: failed to load login page: expected to find a single element for selector css:#manual_login to follow path to login form, found 0
2022-07-12 22:00:40,059 Browserker completed with exit code 1
2022-07-12 22:00:40,060 BrowserkerError: Failure while running Browserker 1.Exiting scan
[zap_server] ... [ZAP-daemon] INFO org.parosproxy.paros.extension.ExtensionLoader - Initializing Provides the foundation for concrete message types (for example, HTTP, WebSockets) to expose fuzzer implementations.
[zap_server] 13499 [ZAP-daemon] INFO org.parosproxy.paros.extension.ExtensionLoader - Initializing Allows to fuzz HTTP messages.
It seems like the container doing the DAST scanning can't properly load the Angular JavaScript file since it exceeds the allowed response size, so the actual login form never loads. Is there a way to increase the allowed response size so that the login form loads properly?
I've tried various options like setting the stability timeout variables, and even increasing the memory for the ZAP process (DAST_ZAP_CLI_OPTIONS: '-Xmx3072m'), but am still getting the same result: the login form isn't loading, most likely because the JavaScript isn't loading properly.
The fix turned out to be a GitLab DAST CI/CD variable that isn't in any of the current documentation I could find.
In order to view all the options and parameters available, I updated the CI/CD file with the following:
include:
    - template: DAST.gitlab-ci.yml
dast:
    script:
        - /analyze --help
so I could see the options available. From this I was able to find the DAST_BROWSER_MAX_RESPONSE_SIZE_MB variable. Setting that variable fixed my issue.
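For completeness, a sketch of how the final job might look with that variable set (the value 16 is an assumed example; pick anything larger than your biggest bundle, which was ~11 MB here):

include:
    - template: DAST.gitlab-ci.yml
dast:
    stage: dast
    variables:
        # raise the response-size cap (in MB) above the size of main.*.js
        DAST_BROWSER_MAX_RESPONSE_SIZE_MB: "16"
    dast_configuration:
        site_profile: "auth"
        scanner_profile: "default"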

Azure Web App usage of WEBAPP_STORAGE_HOME variable in docker-compose

I'm trying to set up a web app that has persistent storage via a file share to a storage account.
I'm following various guides from the Microsoft docs and I managed to do most of it; my app has persistent storage. Now I want to map volumes to my storage account.
I saw that there is this variable ${WEBAPP_STORAGE_HOME} that I can use in my docker-compose.
My question is, what is the value of this variable? The docs state:
${WEBAPP_STORAGE_HOME} is an environment variable in App Service that is mapped to persistent storage for your app.
I find this a little vague. Does it automatically map my path mappings? What if I have multiple path mappings? Should I set the value in the App Settings in the Configuration blade? If so, what do I need to specify, the mount path?
Besides that, I saw that it is used as follows:
version: '3.3'
services:
    wordpress:
        image: mcr.microsoft.com/azuredocs/multicontainerwordpress
        volumes:
            - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
        ports:
            - "8000:80"
        restart: always
I'm used to named volumes in docker-compose. I figure there is no need to specify something like that here?
UPDATE
After @Jason Pan's answer, I tried to play with the mount a little bit.
I succeeded in getting persistent storage on the App Service with the following docker-compose:
# ... lines skipped for brevity
    volumes:
        - mysql-data:/var/lib/mysql
volumes:
    mysql-data:
        driver: local
But I want to persist the data on a Storage Account. I saw that this is possible via: AppService/Configuration/Path Mappings.
My Docker Compose
# ...
    volumes:
        - MyMountedPath:/var/lib/mysql
In this docker-compose file, I have my app and a MySQL image: mysql:8 to be exact.
I mounted a path as follows:
Name: MyMountedPath; Mounted Path: /usr/local/mysql; Type: Azure Files ....
And I get the following error in the logs:
2021-04-08T11:02:06.790578922Z chown: changing ownership of '/var/lib/mysql/': Operation not permitted
2021-04-08T11:02:12.785079208Z 2021-04-08 11:02:12+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.23-1debian10 started.
Since it worked with the first approach, I suspect that there are some issues with the way my path mapping is defined. This led me to even more questions:
Does my Mount Path need to exist on the App Service file system? If not, can I define something like /foo/bar?
If I have a Mount Path named MyMountedPath, can I specify something like the following in the docker-compose file:
volumes:
    - MyMountedPath/foo:/something
Basically, navigating within the mounted path?
Do I need to create this path in the File Storage in my Storage Account, or will the App Service create it when it needs to store something?
Example:
In the App Service properties, I mounted an Azure File Share and gave it the name MyExternalStorage.
In the docker compose configuration I have to set:
volumes:
    - MyExternalStorage:/var/www/html/contao
Thanks to TeddyDubois29's answer; hope it can also help you.
Web App Docker Compose Persistent Storage
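Putting those pieces together, a minimal sketch (assuming the multi-container WordPress example from above, and that MyExternalStorage is the name given to the Azure Files mount under Configuration > Path mappings):

version: '3.3'
services:
    wordpress:
        image: mcr.microsoft.com/azuredocs/multicontainerwordpress
        volumes:
            # the volume name must match the custom storage mount name
            # defined in App Service > Configuration > Path mappings
            - MyExternalStorage:/var/www/html
        ports:
            - "8000:80"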

AWS ECS error in prisma container - environment variable PRISMA_CONFIG

I'm new to AWS, and I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster: it throws an error about the PRISMA_CONFIG environment variable in the Prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma and MongoDB, and it's working fine.
Now on ECS, I created a task definition, and for the Prisma container I tried to copy the yml config from my local docker-compose.yml file to make it work.
There is a field called "ENVIRONMENT"; I've inputted the value in the environment variables, but it's just not working: it throws the error while the cluster is running, and then the task stops.
The yml spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG,
and the following are the values that I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma@mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma@mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma@mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma@mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma@mongo\n
And the errors:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected that all containers would run without errors, but the actual result is that the container stopped after running for a minute.
Please help with this, or suggest another way to deploy to AWS.
Thank you very much.
I've been looking for a similar solution to load the prisma config without the multiline string.
There are repositories that load the prisma environment variables separately without a prisma config:
Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig sets the following env variables via an env_file. So I'm assuming you can just pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database; you would need to tailor the values to suit your setup. But in theory you should just be able to pass these variables one by one as individual variables in AWS's GUI.
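For local testing, the same approach can be wired up with an env_file in docker-compose (a sketch following the linked repo; the image tag and the .prisma.env file name are taken from that repo, not from your setup):

version: '3'
services:
    prisma:
        image: prismagraphql/prisma:1.34
        ports:
            - "4466:4466"
        # loads the variables listed above instead of a multi-line PRISMA_CONFIG
        env_file:
            - .prisma.env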
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14
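If that is the missing piece, it is a plain single-line variable, so it fits the ECS input box without any of the multiline escaping above (a sketch; the value is a placeholder):

environment:
    # must match the management API secret used when the Prisma service was deployed
    - PRISMA_API_MANAGEMENT_SECRET=my-secret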

Inject a list of maps from application.yml into service

I am trying to inject a list of maps from my application.yml configuration file into my Spring Boot service. Here is the configuration in application.yml:
devoxx:
    cfpApis:
        -
            url: http://cfp.devoxx.be/api/conferences
            youtubeChannelId: UCCBVCTuk6uJrN3iFV_3vurg
        -
            url: http://cfp.devoxx.fr/api/conferences
        -
            url: http://cfp.devoxx.ma/api/conferences
            youtubeChannelId: UC6vfGtsJr5RoBQBcHg24XQw
        -
            url: http://cfp.devoxx.co.uk/api/conferences
        -
            url: http://cfp.devoxx.pl/api/conferences
And here is my property in my service:
@Value("${devoxx.cfpApis}")
List<Map<String,String>> cfpApis
But there must be something wrong because when I try to run my application, I get the following exception:
java.lang.IllegalStateException: Cannot convert value of type [java.lang.String] to required type [java.util.Map]: no matching editors or conversion strategy found
Any idea of what I'm doing wrong?
FYI, I'm trying to migrate a Grails 3 project into a vanilla Spring Boot project and this configuration works in Grails 3, but Grails has its own YAML processors.
Thanks to @Morfic's comment, here is how I ended up solving that problem.
I tagged my service class with the @ConfigurationProperties(prefix="devoxx") annotation. In my service, I now have a property called cfpApis with the following declaration:
List<Map<String,String>> cfpApis
And this works great.
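For reference, a minimal sketch of what such a service could look like (the class name CfpApiService is an assumption; the essential parts are the prefix and a standard getter/setter pair so Spring Boot can bind the list):

import java.util.List;
import java.util.Map;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Service;

@Service
@ConfigurationProperties(prefix = "devoxx")
public class CfpApiService {

    // bound from devoxx.cfpApis in application.yml; each list entry becomes
    // a Map with keys such as "url" and "youtubeChannelId"
    private List<Map<String, String>> cfpApis;

    public List<Map<String, String>> getCfpApis() {
        return cfpApis;
    }

    public void setCfpApis(List<Map<String, String>> cfpApis) {
        this.cfpApis = cfpApis;
    }
}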

AWS CodeDeploy with Bamboo

We develop a Node.js application and we want to launch it in the Amazon cloud.
We integrated Bamboo with our other Atlassian applications. Bamboo transfers the build files to the S3 bucket on Amazon.
The problem is: how can I move and start the application from S3 onto the EC2 instances?
You can find my appspec.yml in the attachments; my build directory contains the following files:
- client | files like index.html etc
- server | files like the server.js and socketio.js
- appspec.yml
- readme
Does anyone have an idea? I hope this contains all the important information you need.
Thank you :D
Attachments
version: 1.0
os: linux
files:
    - source: /
      destination: /
Update
I just realized that your appspec.yml seems to lack a crucial part for the deployment of a Node.js application (and most others for that matter), namely the hooks section. As outlined in AWS CodeDeploy Application Specification Files, the AppSpec file is used to manage each deployment as a series of deployment lifecycle events:
During the deployment steps, the AWS CodeDeploy Agent will look up the current event's name in the AppSpec file's hooks section. [...] If the event is found in the hooks section, the AWS CodeDeploy Agent will retrieve the list of scripts to execute for the current step. [...]
See for example the provided AppSpec file Example (purely for illustration, you'll need to craft a custom one appropriate for your app):
os: linux
files:
    - source: Config/config.txt
      destination: webapps/Config
    - source: source
      destination: /webapps/myApp
hooks:
    BeforeInstall:
        - location: Scripts/UnzipResourceBundle.sh
        - location: Scripts/UnzipDataBundle.sh
    AfterInstall:
        - location: Scripts/RunResourceTests.sh
          timeout: 180
    ApplicationStart:
        - location: Scripts/RunFunctionalTests.sh
          timeout: 3600
    ValidateService:
        - location: Scripts/MonitorService.sh
          timeout: 3600
          runas: codedeployuser
Without such an ApplicationStart command, AWS CodeDeploy does not have any instructions for what to do with your app (remember that CodeDeploy is technology agnostic, and thus needs to be told how to start the app server, for example).
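Applied to your build layout, a hedged sketch of what your appspec.yml could look like (the destination directory and scripts/start_server.sh are placeholders you would have to create yourself, e.g. a shell script that runs node server/server.js, possibly via a process manager):

version: 0.0
os: linux
files:
    - source: /
      destination: /var/www/myapp           # assumed install directory
hooks:
    ApplicationStart:
        - location: scripts/start_server.sh  # hypothetical script that starts server.js
          timeout: 300
          runas: root

Note that the AppSpec file format expects version: 0.0, not 1.0 as in your attachment.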
Initial Answer
The section Overview of a Deployment within What Is AWS CodeDeploy? illustrates the flow of a typical AWS CodeDeploy deployment.
The key aspect regarding your question is step 4:
Finally, the AWS CodeDeploy Agent on each participating instance pulls the revision from the specified Amazon S3 bucket or GitHub repository and starts deploying the contents to that instance, following the instructions in the AppSpec file that's provided. [emphasis mine]
That is, once you have started an AWS CodeDeploy deployment, everything should work automatically - accordingly, something seems to be configured not quite right, with the most common issue being that the deployment group does not actually contain any running instances yet. Have you verified that you can deploy to your EC2 instance from CodeDeploy via the AWS Management Console?
What do you see if you log into the Deployments list of AWS CodeDeploy console?
https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments
(change the region accordingly)
Also the code will be downloaded in /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/deployment-archive
And the logs in /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/logs/scripts.logs
Make sure that the agent has connectivity and permissions to download the release from the S3 bucket. That means having internet connectivity and/or using a proxy in the instance (setting http_proxy so that code_deploy uses it), and setting an IAM profile in the instance with permissions to read the S3 bucket.
Check the logs of the codedeploy agent to see if it's connecting successfully or not : /var/log/aws/codedeploy-agent/codedeploy-agent.log
You need to create a deployment in CodeDeploy and then deploy a new revision using the drop-down arrow in CodeDeploy and your S3 bucket URL. However, the revision needs to be a zip/tar.gz/tar archive.
