I succeeded in deploying my Node.js application, which lives in /code/client.
my /code/client/app.yml looks like:
runtime: nodejs10
resources:
  memory_gb: 4
  cpu: 1
  disk_size_gb: 10
manual_scaling:
  instances: 1
So I can run gcloud app browse to see my page.
Now, I want to upload my /code/api application (it's a Node.js app with Express).
What should I do to accomplish that? I want both applications running at the same time.
Just create a dedicated app.yaml for your backend app inside the /code/api folder, and specify a service name in it.
For example:
runtime: nodejs10
service: api
...
Your backend will be served at https://api-dot-YOUR_APP.appspot.com.
But of course, you can choose another service name.
More information in the official docs.
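Assuming each folder contains its own app.yaml as described above, deploying both services is a sketch along these lines (paths mirror the question's layout):

```shell
# Deploy the frontend (default service) and the backend (api service).
gcloud app deploy /code/client/app.yaml
gcloud app deploy /code/api/app.yaml
```

Each service then gets its own *-dot-* URL, and the default service stays at https://YOUR_APP.appspot.com.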
I have followed these steps:
I installed the Google Cloud Build app on GitHub, linked it to Cloud Build and configured it to use a certain repository (a private one)
I set up a trigger at Cloud Build: Push to any branch
the project has no app instances after deploying (App Engine -> Dashboard)
My cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--project=project-name', '--version=$SHORT_SHA']
If I try to run the trigger manually: I get this error in Google Cloud:
unable to get credentials for cloud build robot
I have also tried to set IAM roles based on this article, but @cloudbuild.gserviceaccount.com doesn't seem to be a valid "member" (perhaps I need two projects, one for running and one for building the app?)
How do I fill the gaps / fix the errors mentioned?
It seems the error means Cloud Build cannot find a credential with the required permissions. In step #4 of the article you are following, don't add the Cloud Build service account manually. Check whether the Cloud Build API is enabled in your project; if it is disabled, enable it. That automatically creates the Cloud Build service account, which looks like this:
[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com
Once the service account is created, go to the Cloud Build > Settings page and enable the required roles for your application.
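A sketch of the equivalent gcloud commands (the project ID, PROJECT_NUMBER, and the exact role are placeholders; for an App Engine deploy the App Engine Admin role is a typical choice):

```shell
# Enable the Cloud Build API (this creates the service account automatically).
gcloud services enable cloudbuild.googleapis.com --project=project-name

# Grant the Cloud Build service account permission to deploy to App Engine.
gcloud projects add-iam-policy-binding project-name \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"
```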
I am not sure if I misunderstand how this should work.
I have a Docker Compose file (below) that defines environment variables for a single "service"/image:
services:
  webui:
    image: ${DOCKER_REGISTRY-}webui
    build:
      context: .
      dockerfile: src/WebUI/Dockerfile
    environment:
      UseInMemoryDatabase: false
      ASPNETCORE_ENVIRONMENT: Production
      ASPNETCORE_URLS: https://+:443;http://+:80
      ConnectionStrings__DefaultConnection: "********************************"
    ports:
      - "5000:5000"
      - "5001:5001"
    restart: always
When I open Kudu for the web app and look at the Environment tab, NONE of the environment variables defined above are present.
I manually set them by going to Azure app -> Configuration -> Application settings and adding a record for each of the env variables above.
I can restart the app and now see the variables listed under Azure app -> Advanced Tools -> Kudu -> Environment -> AppSettings.
I also had to add the connection string to the separate connection strings section of the Azure portal.
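For reference, the manual portal step above can also be scripted; a sketch with the Azure CLI (the resource group and app name are placeholders, not from the question):

```shell
# Set the same app settings from the compose file on the App Service.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-webapp \
  --settings ASPNETCORE_ENVIRONMENT=Production UseInMemoryDatabase=false
```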
QUESTION:
Am I understanding this correctly? Should the app service "see" my environment variables in my docker-compose file and add them to the app settings for the running app service? Or is it normal to have to define each of those variables a second time in the configuration of the azure app service?
thanks in advance
Not correct. The environment variables in the app settings are only accessible when the Web App is in the running state. Currently, setting custom variables in the docker-compose file is not supported. As far as I know, only a handful of variables set by Azure can be used in the docker-compose file, such as WEBAPP_STORAGE_HOME; custom variables under the image option are not supported.
You're setting your environment variables correctly in your compose file. App Service will inject them in your container so no need to duplicate them as App Settings.
What's strange is that if you look at the docker run command in the startup logs, they are not passed as parameters, and they are not listed on the Environment page in Kudu either. That threw me off, but I did a test using the Kuard image and was able to see my env vars.
I have two different applications in AWS, deployed by two serverless config files.
In the first one, I need to read the data from the DynamoDB of the second one.
serverless.yml #1:
service:
  name: stack1
app: app1
org: globalorg
serverless.yml #2:
service:
  name: stack2
app: app2
org: globalorg
If I put the two services in the same app, I can access the second one's table with a line like this in iamRoleStatements:
Resource:
  - ${output::${param:env}::stack2.TableArn}
But if they are not in the same app, I get a "Service not found" error when I try to deploy.
How can I do this cross-application communication?
Thanks
You will need to provide the actual ARN of the table; now that this stack is not part of your app, you cannot reference its components. Try something like this:
custom:
  table:
    tableArn: "<insert-ARN-of-resource-here>"

iamRoleStatements:
  Resource: ${self:custom.table.tableArn}
To get started, I created a Google App Engine app where I deploy both the API and the frontend on my custom domain (which we will refer to as mysite.ms). The API is written in Node.js with Express and the frontend is a React application. This is the app.yml file that I use for the deploy:
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: .5
  memory_gb: 0.5
  disk_size_gb: 10
handlers:
- url: /
  static_files: www/build/index.html
  upload: www/build/index.html
- url: /
  static_dir: www/build
Now, what I want is to separate the elements: deploy only the React application on the mysite.ms domain, and the API on a subdomain sub.mysite.ms. Since the domain was registered on Freenom, to create the subdomain I added a new DNS record of type CNAME with name sub.mysite.ms targeting the original domain mysite.ms.
Is it possible to create these separate deployments using only Google App Engine and a single app.yml file, or do you need some other tool and separate files?
How do you advise me to proceed? Since I can't find anything clear online, could you give me some tips to solve these problems?
UPDATE
I have read the documentation that you provided and I have some doubts about it. First of all, how can I create different services? I created this (most probably wrong) dispatch.yaml:
dispatch:
- url: "mysite.ms/*"
  service: default
- url: "sub.mysite.ms/*"
  service: api
but when I deploy with this command gcloud app deploy dispatch.yaml, I get an error because it can't find the module 'api'.
In the previous version, my server.js had this code to handle the React app:
app.use(express.static(path.resolve(__dirname, 'www', 'build')));
app.get('*', (req, res) => { res.sendFile(path.resolve(__dirname, 'www', 'build', 'index.html')); });
Should I keep these two lines of code even if I split the frontend and the api on different domain?
Should I add sub.mysite.ms to the custom domains area in the Google App Engine section?
Should I keep the app.yml file even though I have the dispatch.yaml?
For now, it is not possible to deploy more than one service using the same yaml file. Let's suppose you want to deploy two services: api and frontend, and that you want the frontend service to be the default one, so that everybody who accesses mysite.ms sees the frontend service.
Let's say you have the app.yaml file as follows for the frontend service:
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
As you can notice, there is no service property in your app.yaml. In the app.yaml file reference doc you will see the following:
service: service_name
Required if creating a service. Optional for the default service. Each service and each version must have a name. A name can contain numbers, letters, and hyphens. In the flexible environment, the combined length of service and version cannot be longer than 48 characters and cannot start or end with a hyphen. Choose a unique name for each service and each version. Don't reuse names between services and versions.
Because there is no service property, the deployment goes to the default service. Now let's say you have another yaml file, in particular api.yaml, to deploy the api service. Here's an example:
runtime: nodejs
env: flex
service: api
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
You will see that I've added the service property; when you deploy using gcloud app deploy api.yaml, the deploy will create the api service.
Finally, after creating the services you will be able to deploy the dispatch.yaml file you've created.
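Putting it together, the deploy order described above can be sketched as follows (file names are the ones assumed in this answer):

```shell
# Deploy the default (frontend) service first, then the api service,
# and finally the routing rules.
gcloud app deploy app.yaml
gcloud app deploy api.yaml
gcloud app deploy dispatch.yaml
```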
Some advices for you:
It is a good practice to assign the app.yaml file to the default service. For the other services, you may want to name the files according to the service they deploy, i.e. api.yaml, backend.yaml, api_external.yaml, etc.
You can deploy all services at once using gcloud app deploy path/to/service1 path/to/service2 path/to/service3 ..., or you can do it individually for easier debugging in case there are issues.
Since you're using the Flex environment, the handlers property is not supported. If you add it, it is ignored.
Check the right documents for the environment you use. app.yaml, dispatch.yaml, general docs.
I'm trying to deploy the simple NodeJS hello-world functions the Serverless Framework provides to my Azure free-tier account from an Azure DevOps Build Pipeline using the Service Principal credentials I created when making the deployment from my desktop originally. I've used several of the Build Agents and Tasks combinations, including Windows and Ubuntu Agents as well as Bash, Command Line, Azure Powershell, and Azure CLI tasks with the DevOps provided link to the Service Principal credentials. I've made sure to add them as Pipeline variables so that they are included in the tasks' environmental variables and I've confirmed that they are there when the tasks run. I also make sure that the Azure CLI is installed and logged into with the subscription set. No matter what settings/permissions I tweak or new configurations I try, when the task runs successfully to the point where the serverless framework attempts the deployment it always tries to get me to use a browser to authenticate my account. This obviously defeats the purpose of a CI/CD pipeline and even if I do use a browser to authenticate, the process just hangs there.
The sample code and deployment works on my desktop, so I know the credentials work. I believe I've emulated each step I take on my desktop in the Build Pipeline, yet while my desktop deploys without browser authentication the build always requests it. Does anyone have experience in this manner and know what step/configuration I'm missing?
To look at the sample code and process look here or run these steps:
serverless create -t azure-nodejs -p testApp
cd .\testApp\
Change Node Runtime and Region in serverless.yml (nodejs12.x not supported & no free tier in West US)
serverless deploy
Here's the link I used to get this working on my desktop: link
Edit: Here is the default serverless.yml created by the steps above:
service: azure-serverless-helloworld

provider:
  name: azure
  region: East US
  runtime: nodejs8.10
  environment:
    VARIABLE_FOO: 'foo'

plugins:
  - serverless-azure-functions

package:
  exclude:
    - local.settings.json
    - .vscode/**
    - index.html

functions:
  hello:
    handler: src/handlers/hello.sayHello
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
  goodbye:
    handler: src/handlers/goodbye.sayGoodbye
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
You can try the steps below: run sls package in a command line task to create a deployment package, and then use the Azure Function App task to deploy it to Azure.
1. Install the specific Node.js version using the Node.js tool installer task.
2. Install Serverless using an npm task with a custom command.
3. Use an npm task to run the install command to install dependencies.
4. Use a command line task to run sls package to create the deployment package.
5. Use the Azure Function App deploy task to deploy the deployment package.
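The steps above can be sketched as an azure-pipelines.yml fragment (task inputs, the service connection name, and the function app name are assumptions, not taken from the question):

```yaml
steps:
  - task: NodeTool@0                     # step 1: install Node.js
    inputs:
      versionSpec: '10.x'
  - script: npm install -g serverless    # step 2: install Serverless
  - script: npm install                  # step 3: install dependencies
  - script: sls package                  # step 4: create the deployment package
  - task: AzureFunctionApp@1             # step 5: deploy the package
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder name
      appName: 'my-function-app'                   # placeholder name
      package: '$(System.DefaultWorkingDirectory)/.serverless/*.zip'
```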
Right now the Serverless Framework thinks you're trying to deploy your application using the Serverless Dashboard (which does not yet support Azure).
I'm not sure, because you haven't posted your serverless.yml file, but I think you'll need to remove the app and org attributes from your serverless.yml configuration file. Then it will stop asking you to log in.
Using the serverless framework to deploy a function through DevOps gave me the same issue.
The problem is that the sls deploy command will build, package and deploy the code, but will ask you for credentials each time you run the pipeline.
I solved this by running the serverless package command in the build task; after that I deployed the zip it generated with a normal web app deploy task.