I have two different applications in AWS, deployed with two Serverless Framework config files.
In the first one, I need to read data from a DynamoDB table of the second one.
serverless.yml #1:
service:
  name: stack1
app: app1
org: globalorg
serverless.yml #2:
service:
  name: stack2
app: app2
org: globalorg
If I put the two services in the same app, I can access the second one with a line like this in iamRoleStatements:
Resource:
  - ${output::${param:env}::stack2.TableArn}
But if they are not in the same app, I get a "Service not found" error when I try to deploy.
How can I do this cross-application communication?
Thanks
You will need to provide the actual ARN of the table: now that the stack is not part of your app, you cannot reference its components. Try something like this:
custom:
  table:
    tableArn: "<insert-ARN-of-resource-here>"

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
      Resource: ${self:custom.table.tableArn}
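If you prefer not to hard-code the ARN, and stack2 exposes the table ARN as a CloudFormation stack output, the Framework's ${cf:} variable can resolve outputs from any stack in the same account and region at deploy time. A sketch, assuming the deployed stack is named stack2-dev and exports an output called TableArn (adjust both to your real names):

```yaml
# Sketch — "stack2-dev" and "TableArn" are assumed names; use the
# actual CloudFormation stack name and output key of your second app.
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
      Resource: ${cf:stack2-dev.TableArn}
```

This keeps the ARN out of the config file, at the cost of a deploy-time dependency on stack2 already being deployed.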
I hope somebody can help me out here.
I have a basic configuration in Azure which consists of a web app and a database.
The web app is able to connect to the database using managed identity, and everything here works just fine, but I wanted to try the same configuration using AKS.
I deployed AKS and enabled managed identity, then deployed a pod into the cluster as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dockerimage
          ports:
            - containerPort: 80
          env:
            - name: "ConnectionStrings__MyDbConnection"
              value: "Server=server-url; Authentication=Active Directory Managed Identity; Database=database-name"
            - name: "ASPNETCORE_ENVIRONMENT"
              value: "Development"
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: Always
The deployment went through smoothly and everything works just fine. But here is my problem, and I cannot figure out the best solution.
The env block is in plain text; I would like to protect those environment variables by storing them in a Key Vault.
I have been looking around in different forums and documentation, and the options are starting to confuse me. Is there a good way to achieve security in this scenario?
In my web app, under Configuration, I have managed identity enabled, and using it I can access the secrets in a Key Vault and retrieve them. Can I do the same using AKS?
Thank you so much for any help you can provide.
And if my question is not 100% clear, please just let me know.
Upgrade an existing AKS cluster with Azure Key Vault Provider for Secrets Store CSI Driver support
Use a user-assigned managed identity to access KV
Set an environment variable to reference Kubernetes secrets
You will need to do some reading, but the process is straightforward.
The KV secrets will be stored in Kubernetes secrets, which you can reference in the pod's environment variables.
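As a sketch of the steps above (all names, the identity client ID, the vault name, and the tenant ID are placeholders you must replace), a SecretProviderClass from the Azure Key Vault provider for the Secrets Store CSI Driver can sync a Key Vault secret into a Kubernetes Secret:

```yaml
# Sketch — replace every <...> placeholder with your real values.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-kv-sync
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "<identity-client-id>"
    keyvaultName: "<your-keyvault-name>"
    tenantId: "<your-tenant-id>"
    objects: |
      array:
        - |
          objectName: db-connection-string
          objectType: secret
  # Sync the mounted secret into a regular Kubernetes Secret.
  # Note: the sync only happens while a pod mounts the CSI volume.
  secretObjects:
    - secretName: webapp-secrets
      type: Opaque
      data:
        - objectName: db-connection-string
          key: MyDbConnection
```

The deployment's env block can then reference the synced secret instead of plain text:

```yaml
env:
  - name: "ConnectionStrings__MyDbConnection"
    valueFrom:
      secretKeyRef:
        name: webapp-secrets
        key: MyDbConnection
```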
You can try to replace the environment key-values, as you did, with Azure App Configuration. Using Azure App Config, you can add "ConnectionStrings__MyDbConnection" as a 'Key Vault reference' pointing to your KV secret. Then use the DefaultAzureCredential or ManagedIdentityCredential class to set up credentials for authenticating to the App Config and Key Vault resources.
var builder = WebApplication.CreateBuilder(args);

var usermanaged_client_id = "";
var credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions { ManagedIdentityClientId = usermanaged_client_id });

// Add services to the container.
builder.Configuration.AddAzureAppConfiguration(opt =>
{
    opt.Connect(new Uri("https://your-app-config.azconfig.io"), credential)
       .ConfigureKeyVault(kv =>
       {
           kv.SetCredential(credential);
       });
});
Make sure that you grant the user-assigned managed identity access to the Key Vault.
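Once the configuration above is wired up, the Key Vault-backed value can be read like any other configuration key. A sketch, using the key name from the question:

```csharp
var app = builder.Build();

// The key name matches the one added in App Configuration as a
// Key Vault reference; its value is resolved from the vault.
string? connectionString = app.Configuration["ConnectionStrings__MyDbConnection"];
```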
Is RabbitMQ for Azure Functions not supported in the Serverless Framework?
There is a documentation corresponding to aws: https://www.serverless.com/framework/docs/providers/aws/events/rabbitmq
but I didn't find anything for Azure.
When I tried something like:
service: my-app

provider:
  name: azure
  location: West US
  runtime: nodejs14

plugins:
  - serverless-azure-functions
  - serverless-webpack

functions:
  currentTime:
    handler: main.endpoint
    events:
      - rabbitMQ: SMS
        name: myQueueItem
        connectionStringSetting: rabbitMQConnection
then I got the error: "Binding rabbitMQTrigger not supported".
Unfortunately, the rabbitMQ event is only available for the aws provider; you cannot use it with Azure.
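If you manage the function app outside the Serverless Framework, the native Azure Functions RabbitMQ extension does provide a rabbitMQTrigger binding. A sketch of a function.json reusing the names from the question (this assumes the RabbitMQ extension is installed in the function app):

```json
{
  "bindings": [
    {
      "name": "myQueueItem",
      "type": "rabbitMQTrigger",
      "direction": "in",
      "queueName": "SMS",
      "connectionStringSetting": "rabbitMQConnection"
    }
  ]
}
```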
We use serverless to deploy a graphql handler function as an Azure Function and access it via APIM.
We need to use our own custom domain (pointed via CNAME at the Azure APIM domain); we can set this up manually via the Azure Portal by uploading the certificate and specifying its password.
However, if we execute "sls deploy", that custom domain setting gets removed, so we need to either retain it somehow or specify it via serverless.yml, but I cannot find any information on how to do this.
Current serverless.yml config:
service: my-service-${env:STAGE, 'develop'}

configValidationMode: off

provider:
  name: azure
  runtime: nodejs12
  region: north-europe
  resourceGroup: My-Service-Group
  subscriptionId: MySubscriptionId
  stage: ${env:STAGE, 'develop'}
  apim: true

plugins:
  - serverless-azure-functions

functions:
  graphql:
    handler: lib/azure.handler
    events:
      - http: true
        methods:
          - GET
          - POST
        authLevel: anonymous # can also be `function` or `admin`
        route: graphql
      - http: true
        direction: out
        name: "$return"
        route: graphql
Any guidance in this would be much appreciated.
To set up the certificate, select the TLS/SSL settings option in the Azure portal; from there you can create an App Service Managed Certificate.
To achieve this, add the custom domain with these steps:
Map the domain to the application
Buy a wildcard certificate
Create the DNS rule
Thanks to CodeProject, where all of this is clearly documented.
Here is a sample serverless.yml apim section:
# serverless.yml
apim:
  apis:
    - name: v1
      subscriptionRequired: false # if true must provide an api key
      displayName: v1
      description: V1 sample app APIs
      protocols:
        - https
      path: v1
      tags:
        - tag1
        - tag2
      authorization: none
  cors:
    allowCredentials: false
    allowedOrigins:
      - "*"
    allowedMethods:
      - GET
      - POST
      - PUT
      - DELETE
      - PATCH
    allowedHeaders:
      - "*"
    exposeHeaders:
      - "*"
Then run "sls deploy".
See the Serverless Framework and Azure deployment documentation for more details.
To get started, I created a Google App Engine app where I deploy both the API and the frontend on my custom domain (which we will refer to as mysite.ms). The API is written in Node.js with Express and the frontend is a React application. This is the app.yml file that I use for the deploy:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1
resources:
  cpu: .5
  memory_gb: 0.5
  disk_size_gb: 10

handlers:
  - url: /
    static_files: www/build/index.html
    upload: www/build/index.html
  - url: /
    static_dir: www/build
Now, what I want is to separate the elements: deploy only the React application on the mysite.ms domain, and the API on a subdomain sub.mysite.ms. Since the domain was registered with Freenom, to create a subdomain I added a new DNS record of type CNAME with value sub.mysite.ms targeting the original domain mysite.ms.
Is it possible to create these separate deployments using only Google App Engine and a single app.yml file, or do I need to use some other tool and separate the files?
How do you advise me to proceed? Since I can't find anything clear online, could you give me some tips to solve these problems?
UPDATE
I have read the documentation that you provided and I have some doubts about it. First of all, how can I create the different services? I created this (most probably wrong) dispatch.yaml:
dispatch:
  - url: "mysite.ms/*"
    service: default
  - url: "sub.mysite.ms/*"
    service: api
but when I deploy it with the command gcloud app deploy dispatch.yaml, I get an error because it can't find the module 'api'.
In the previous version, my server.js had this code to handle the React app:
app.use(express.static(path.resolve(__dirname, 'www', 'build')));
app.get('*', (req, res) => { res.sendFile(path.resolve(__dirname, 'www', 'build', 'index.html')); });
Should I keep these two lines of code even if I split the frontend and the API across different domains?
Should I add sub.mysite.ms to the custom domains section in Google App Engine?
Should I keep the file app.yml even if I have the dispatch.yaml?
For now, it is not possible to deploy more than one service using the same YAML file. Suppose you want to deploy two services: api and frontend, and you want the frontend service to be the default one, so that everybody accessing mysite.ms sees the frontend service.
Let's say you have the app.yaml file as follows for the frontend service:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
As you can notice, there is no service property in this app.yaml. In the app.yaml file reference doc you will see the following:
service: service_name
Required if creating a service. Optional for the default service. Each service and each version must have a name. A name can contain numbers, letters, and hyphens. In the flexible environment, the combined length of service and version cannot be longer than 48 characters and cannot start or end with a hyphen. Choose a unique name for each service and each version. Don't reuse names between services and versions.
Because there is no service property, the deployment goes to the default service. Now let's say you have another YAML file, in particular api.yaml, to deploy the api service. Here's an example:
runtime: nodejs
env: flex
service: api

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
You will see that I've added the service property. When you deploy using gcloud app deploy api.yaml, the deployment will create the api service.
Finally, after creating the services you will be able to deploy the dispatch.yaml file you've created.
Some advice for you:
It's good practice to assign the app.yaml file to the default service. For the other services, you may want to name each file after the service it deploys, i.e. api.yaml, backend.yaml, api_external.yaml, etc.
You can deploy several services at once using gcloud app deploy path/to/service1 path/to/service2 path/to/service3 ... or you can do it individually, which makes debugging easier in case there are issues.
Since you're using the flexible environment, the handlers property is not supported. If you add it, it is ignored.
Check the right documents for the environment you use. app.yaml, dispatch.yaml, general docs.
I succeeded in deploying my Node.js application, which is in /code/client.
My /code/client/app.yml looks like:
runtime: nodejs10

resources:
  memory_gb: 4
  cpu: 1
  disk_size_gb: 10
manual_scaling:
  instances: 1
So I can run gcloud app browse to see my page.
Now, I want to upload my /code/api application (it's a Node.js with Express app).
What should I do to accomplish that? I want both applications running at the same time.
Just create a dedicated app.yaml for your backend app inside the /code/api folder, and specify a service name in it.
For example:
runtime: nodejs10
service: api
...
Your backend will be served at https://api-dot-YOUR_APP.appspot.com.
But of course, you can choose another service name.
More information in the official docs.
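If you would rather serve the API under a path on your main domain instead of the api-dot-… URL, a dispatch.yaml can route requests to the api service. A sketch, assuming you want everything under /api/ handled by the backend (adjust the pattern to your routes):

```yaml
# dispatch.yaml — routes matching URLs to the "api" service;
# all other traffic still goes to the default service.
dispatch:
  - url: "*/api/*"
    service: api
```

Deploy it with gcloud app deploy dispatch.yaml after both services exist.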