Is RabbitMQ for Azure Functions not supported in the Serverless Framework?
There is documentation for AWS (https://www.serverless.com/framework/docs/providers/aws/events/rabbitmq), but I didn't find anything for Azure.
When I tried something like:
service: my-app

provider:
  name: azure
  location: West US
  runtime: nodejs14

plugins:
  - serverless-azure-functions
  - serverless-webpack

functions:
  currentTime:
    handler: main.endpoint
    events:
      - rabbitMQ: SMS
        name: myQueueItem
        connectionStringSetting: rabbitMQConnection
then I got the error: "Binding rabbitMQTrigger not supported".
Unfortunately, the rabbitmq event is only available for the aws provider; you cannot use it with Azure.
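If you need a queue-triggered function on Azure, one workaround is to switch to a trigger that the serverless-azure-functions plugin does support, such as Service Bus. Below is a minimal sketch, assuming a Service Bus queue named SMS and an app setting serviceBusConnection holding its connection string; verify the exact binding shape against your plugin version:

functions:
  currentTime:
    handler: main.endpoint
    events:
      - serviceBus:
          name: myQueueItem # binding name exposed to the handler
          queueName: SMS # hypothetical Service Bus queue standing in for the RabbitMQ one
          connection: serviceBusConnection # app setting with the connection string

This keeps the function shape similar, but messages would have to flow through Service Bus instead of RabbitMQ.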
We use serverless to deploy a GraphQL handler function as an Azure Function and access it via APIM.
We need to use our own custom domain (pointed via CNAME to the Azure APIM domain). We can set this up manually via the Azure Portal by uploading the certificate and specifying its password.
However, if we execute "sls deploy", that custom domain setting gets removed, so we'd need to either retain it somehow or specify it via serverless.yml, but I cannot find any information on how to do this.
Current serverless.yml config:
service: my-service-${env:STAGE, 'develop'}

configValidationMode: off

provider:
  name: azure
  runtime: nodejs12
  region: north-europe
  resourceGroup: My-Service-Group
  subscriptionId: MySubscriptionId
  stage: ${env:STAGE, 'develop'}
  apim: true

plugins:
  - serverless-azure-functions

functions:
  graphql:
    handler: lib/azure.handler
    events:
      - http: true
        methods:
          - GET
          - POST
        authLevel: anonymous # can also be `function` or `admin`
        route: graphql
      - http: true
        direction: out
        name: "$return"
        route: graphql
Any guidance on this would be much appreciated.
For setting up the certificate, we need to select the TLS/SSL settings option in the Azure portal; there we can create an App Service Managed Certificate.
To achieve this, we need to add the custom domain with the following steps:
Map the domain to the application.
Buy a wildcard certificate.
Create the DNS rule.
Thanks to CodeProject, where all of this info is clearly drafted.
Check the sample serverless.yml below for the apim section:
# serverless.yml
apim:
  apis:
    - name: v1
      subscriptionRequired: false # if true must provide an api key
      displayName: v1
      description: V1 sample app APIs
      protocols:
        - https
      path: v1
      tags:
        - tag1
        - tag2
      authorization: none
  cors:
    allowCredentials: false
    allowedOrigins:
      - "*"
    allowedMethods:
      - GET
      - POST
      - PUT
      - DELETE
      - PATCH
    allowedHeaders:
      - "*"
    exposeHeaders:
      - "*"
Then run "sls deploy".
Check the Serverless Framework and Azure deployment documentation.
I have two different applications in AWS, deployed by two serverless config files.
In the first one, I need to read data from a DynamoDB table that belongs to the second one.
serverless.yml n°1:

service:
  name: stack1
app: app1
org: globalorg

serverless.yml n°2:

service:
  name: stack2
app: app2
org: globalorg
If I put the two services in the same app, I can access the second one with a line like this in iamRoleStatements:
Resource:
  - ${output::${param:env}::stack2.TableArn}
But if they are not in the same app, I get a "Service not found" error when I try to deploy.
How can I do this cross-application communication?
Thanks
You will need to provide the actual ARN of the table. Now that this stack is not part of your app, you cannot reference its components. Try something like this:
custom:
  table:
    tableArn: "<insert-ARN-of-resource-here>"

provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:Query
      Resource: ${self:custom.table.tableArn}
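If you'd rather not paste the ARN by hand, another option is a classic CloudFormation cross-stack reference via the Serverless Framework's cf variable source. A rough sketch, assuming stack2's table has the hypothetical logical ID MyTable and its deployed CloudFormation stack is named stack2-dev:

# in stack2's serverless.yml: export the table ARN as a stack output
resources:
  Outputs:
    TableArn:
      Value:
        Fn::GetAtt: [MyTable, Arn] # MyTable is a hypothetical logical ID

# in stack1's serverless.yml: read the output back at deploy time
provider:
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:GetItem
        - dynamodb:Query
      Resource: ${cf:stack2-dev.TableArn}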
I'm using the Python 3.8 SDK for Azure Service Bus (azure-servicebus v0.50.3). I use the following code to send a message to a topic ...
import json

# in azure-servicebus 0.50.x the legacy HTTP-based client lives in control_client
from azure.servicebus.control_client import ServiceBusService, Message

service = ServiceBusService(service_namespace,
                            shared_access_key_name=key_name,
                            shared_access_key_value=key_value)
msg = Message(json.dumps({'type': 'my_message'}))
service.send_topic_message(topic_name, msg)
How do I create a Docker image that runs the service bus with a topic or two already created? I found this image
version: '3.7'
services:
  azure_sb:
    container_name: azure_sb
    image: microsoft/azure-storage-emulator
    tty: true
    restart: always
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
but I'm unclear how to connect to it using the code I have or if the above is even a valid service bus image.
Azure Service Bus does not provide a Docker image. The image you are using (microsoft/azure-storage-emulator) is for the Azure Storage system, which can provide similar queuing capabilities with Azure Storage Queues. For more details, check out How to use Azure Queue storage from Python.
If you need to use Azure Service Bus locally, check out the GitHub issue: Local Development story?. TL;DR: use AMQP libraries and connect to another AMQP provider locally, and swap in Service Bus for production.
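As a concrete starting point for that approach, here is a minimal docker-compose sketch running RabbitMQ as the local AMQP broker (my own choice of broker, not a Service Bus emulator; your code would use an AMQP client library instead of the Service Bus SDK, and any topics/exchanges would need to be declared separately):

version: '3.7'
services:
  local_broker:
    container_name: local_broker
    image: rabbitmq:3-management # any AMQP broker will do; RabbitMQ is a common choice
    restart: always
    ports:
      - "5672:5672" # AMQP
      - "15672:15672" # management UI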
To get started: I created a Google App Engine app where I deploy both the API and the frontend on my custom domain (which we will refer to as mysite.ms). The API is written in Node.js with Express and the frontend is a React application. This is the app.yml file that I use for the deploy:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1
resources:
  cpu: .5
  memory_gb: 0.5
  disk_size_gb: 10

handlers:
  - url: /
    static_files: www/build/index.html
    upload: www/build/index.html
  - url: /
    static_dir: www/build
Now, what I want is to separate the two elements: deploy only the React application on the mysite.ms domain, and the API on a subdomain sub.mysite.ms. Since the domain was registered through Freenom, to create the subdomain I added a new DNS record of type CNAME with value sub.mysite.ms targeting the original domain mysite.ms.
Is it possible to create these separate deployments using only Google App Engine and a single app.yml file, or do I need to use some other tool and separate files?
How do you advise me to proceed? Since I can't find anything clear online, could you give me some tips to solve this?
UPDATE
I have read the documentation you provided and I have some doubts about it. First of all, how can I create the different services? I created this (most probably wrong) dispatch.yaml:
dispatch:
  - url: "mysite.ms/*"
    service: default
  - url: "sub.mysite.ms/*"
    service: api
but when I deploy with this command gcloud app deploy dispatch.yaml, I get an error because it can't find the module 'api'.
In the previous version, in my server.js I have this code to handle the React app:
app.use(express.static(path.resolve(__dirname, 'www', 'build')));

app.get('*', (req, res) => {
  res.sendFile(path.resolve(__dirname, 'www', 'build', 'index.html'));
});
Should I keep these two lines of code even if I split the frontend and the API across different domains?
Should I add sub.mysite.ms to the custom domains area in the Google App Engine settings?
Should I keep the app.yml file even if I have the dispatch.yaml?
For now, it is not possible to deploy more than one service using the same yaml file. Let's suppose you want to deploy two services: api and frontend. Let's say you want the frontend service to be the default one, so that everybody who accesses mysite.ms will see the frontend service.
Let's say you have the app.yaml file as follows for the frontend service:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
As you can notice, there is no service property in this app.yaml. In the app.yaml file reference doc you will see the following:
service: service_name
Required if creating a service. Optional for the default service. Each service and each version must have a name. A name can contain numbers, letters, and hyphens. In the flexible environment, the combined length of service and version cannot be longer than 48 characters and cannot start or end with a hyphen. Choose a unique name for each service and each version. Don't reuse names between services and versions.
Because the service property is absent, the deployment will go to the default service. Now let's say you have another yaml file, in particular api.yaml, to deploy the api service. Here's an example:
runtime: nodejs
env: flex
service: api

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
You will see that I've added the service property; when you deploy using gcloud app deploy api.yaml, the deploy will create the api service.
Finally, after creating the services you will be able to deploy the dispatch.yaml file you've created.
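Putting it all together, the full deployment would then be three commands (using the file names assumed above):

gcloud app deploy app.yaml      # frontend, deployed as the default service
gcloud app deploy api.yaml      # creates/updates the api service
gcloud app deploy dispatch.yaml # routes mysite.ms and sub.mysite.ms to the services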
Some advice for you:
It is good practice to assign the app.yaml file to the default service. For the other services, name the files after the service they deploy, i.e. api.yaml, backend.yaml, api_external.yaml, etc.
You can deploy several services at once using gcloud app deploy path/to/service1 path/to/service2 path/to/service3 ..., or deploy them individually for easier debugging in case there are issues.
Since you're using the Flex environment, the handlers property is not supported. If you add it, it is ignored.
Check the right documents for the environment you use: app.yaml, dispatch.yaml, general docs.
I've been trying to invoke a GCP function (--runtime nodejs8 --trigger-http) from Cloud Scheduler, both located within the same project. I can only make it work if I grant unauthenticated access by adding the allUsers member to the function's permissions with the Cloud Functions Invoker role. However, when I only use the scheduler's service account as the Cloud Functions Invoker, I get a PERMISSION_DENIED error.
I created a hello world example to show in detail how my setup looks.
I set up a service account:
gcloud iam service-accounts create scheduler --display-name="Task Schedule Runner"
Setting the role:
svc_policy.json:
{
  "bindings": [
    {
      "members": [
        "serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com"
      ],
      "role": "roles/cloudscheduler.serviceAgent"
    }
  ]
}
gcloud iam service-accounts set-iam-policy scheduler@mwsdata-1544225920485.iam.gserviceaccount.com svc_policy.json -q
Deploying the Cloud Function:
gcloud functions deploy helloworld --runtime nodejs8 --trigger-http --entry-point=helloWorld
Adding the service account as a member to the function:
gcloud functions add-iam-policy-binding helloworld --member serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com --role roles/cloudfunctions.invoker
Creating the scheduler job:
gcloud beta scheduler jobs create http test-job --schedule "5 * * * *" --http-method=GET --uri=https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld --oidc-service-account-email=scheduler@mwsdata-1544225920485.iam.gserviceaccount.com --oidc-token-audience=https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
Log: PERMISSION DENIED
{
  httpRequest: {
  }
  insertId: "1ny5xuxf69w0ck"
  jsonPayload: {
    @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    jobName: "projects/mwsdata-1544225920485/locations/europe-west1/jobs/test-job"
    status: "PERMISSION_DENIED"
    targetType: "HTTP"
    url: "https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld"
  }
  logName: "projects/mwsdata-1544225920485/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-02-04T22:05:05.248707989Z"
  resource: {
    labels: {
      job_id: "test-job"
      location: "europe-west1"
      project_id: "mwsdata-1544225920485"
    }
    type: "cloud_scheduler_job"
  }
  severity: "ERROR"
  timestamp: "2020-02-04T22:05:05.248707989Z"
}
Update
Here are the corresponding settings.
Scheduler Service Account
gcloud iam service-accounts get-iam-policy scheduler@mwsdata-1544225920485.iam.gserviceaccount.com

bindings:
- members:
  - serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
  role: roles/cloudscheduler.serviceAgent
etag: BwWdxuiGNv4=
version: 1
IAM Policy of the function:
gcloud functions get-iam-policy helloworld

bindings:
- members:
  - serviceAccount:scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
  role: roles/cloudfunctions.invoker
etag: BwWdxyDGOAY=
version: 1
Function Description
gcloud functions describe helloworld

availableMemoryMb: 256
entryPoint: helloWorld
httpsTrigger:
  url: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
ingressSettings: ALLOW_ALL
labels:
  deployment-tool: cli-gcloud
name: projects/mwsdata-1544225920485/locations/us-central1/functions/helloworld
runtime: nodejs8
serviceAccountEmail: mwsdata-1544225920485@appspot.gserviceaccount.com
sourceUploadUrl: https://storage.googleapis.com/gcf-upload-us-central1-671641e6-3f1b-41a1-9ac1-558224a1638a/b4a0e407-69b9-4f3d-a00d-7543ac33e013.zip?GoogleAccessId=service-617967399269@gcf-admin-robot.iam.gserviceaccount.com&Expires=1580854835&Signature=S605ODVtOpnU4LIoRT2MnU4OQN3PqhpR0u2CjgcpRcZZUXstQ5kC%2F1rT6Lv2SusvUpBrCcU34Og2hK1QZ3dOPluzhq9cXEvg5MX1MMDyC5Y%2F7KGTibnV4ztFwrVMlZNTj5N%2FzTQn8a65T%2FwPBNUJWK0KrIUue3GemOQZ4l4fCf9v4a9h6MMjetLPCTLQ1BkyFUHrVnO312YDjSC3Ck7Le8OiXb7a%2BwXjTDtbawR20NZWfgCCVvL6iM9mDZSaVAYDzZ6l07eXHXPZfrEGgkn7vXN2ovMF%2BNGvwHvTx7pmur1yQaLM4vRRprjsnErU%2F3p4JO3tlbbFEf%2B69Wd9dyIKVA%3D%3D
status: ACTIVE
timeout: 60s
updateTime: '2020-02-04T21:51:15Z'
versionId: '1'
Scheduler Job Description
gcloud scheduler jobs describe test-job

attemptDeadline: 180s
httpTarget:
  headers:
    User-Agent: Google-Cloud-Scheduler
  httpMethod: GET
  oidcToken:
    audience: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
    serviceAccountEmail: scheduler@mwsdata-1544225920485.iam.gserviceaccount.com
  uri: https://us-central1-mwsdata-1544225920485.cloudfunctions.net/helloworld
lastAttemptTime: '2020-02-05T09:05:00.054111Z'
name: projects/mwsdata-1544225920485/locations/europe-west1/jobs/test-job
retryConfig:
  maxBackoffDuration: 3600s
  maxDoublings: 16
  maxRetryDuration: 0s
  minBackoffDuration: 5s
schedule: 5 * * * *
scheduleTime: '2020-02-05T10:05:00.085854Z'
state: ENABLED
status:
  code: 7
timeZone: Etc/UTC
userUpdateTime: '2020-02-04T22:02:31Z'
Here are the steps I followed to make Cloud Scheduler trigger an HTTP-triggered Cloud Function that doesn't allow unauthenticated invocations:
Create a service account, which will have the following form: [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com.
Add the service account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com as a project member and grant it the following roles: Cloud Functions Invoker and Cloud Scheduler Admin.
Deploy an HTTP-triggered Cloud Function that doesn't allow public (unauthenticated) access (if you are using the UI, simply uncheck the Allow unauthenticated invocations checkbox), set it to use the recently created service account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com in the Service account field (click More and look for the Service account field; by default it is set to the App Engine default service account), and take note of the Cloud Function's URL.
Create a Cloud Scheduler job with authentication by issuing the following command from Cloud Shell: gcloud scheduler jobs create http [JOB-NAME] --schedule="* * * * *" --uri=[CLOUD-FUNCTIONS-URL] --oidc-service-account-email=[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
In your specific case you are leaving your Cloud Function on the default App Engine service account. Change it to the service account you created, as specified in the previous steps.
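From the CLI, and reusing the placeholders above, the fix would look roughly like this:

# allow the scheduler's service account to invoke the function
gcloud functions add-iam-policy-binding helloworld \
  --member serviceAccount:[SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com \
  --role roles/cloudfunctions.invoker

# redeploy so the function runs as that service account instead of the App Engine default
gcloud functions deploy helloworld --runtime nodejs8 --trigger-http \
  --service-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com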
@Marko I went through the same issue; it seems that re-enabling (disabling and enabling) the Cloud Scheduler API did the fix. This is why creating a new project makes sense: you probably got a scheduler service account by doing so. So if your project doesn't have the scheduler service account created by Google, this trick will give you one. And although you don't need to assign this specific service account to any of your tasks, it must be available. You can see my work here: How to invoke Cloud Function from Cloud Scheduler with Authentication
I had a similar issue.
In our case, we had enabled the Cloud Scheduler API quite a long time ago.
According to the docs, if you enabled Cloud Scheduler API before March 19, 2019, you need to manually add the Cloud Scheduler Service Agent role to your Cloud Scheduler service account.
So we had to create a new service account that looks like this: service-[project-number]@gcp-sa-cloudscheduler.iam.gserviceaccount.com
Hope this will help anybody else.
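If the service agent account exists but is missing the role, granting it manually should look something like this (the placeholders are mine):

gcloud projects add-iam-policy-binding [PROJECT-ID] \
  --member serviceAccount:service-[PROJECT-NUMBER]@gcp-sa-cloudscheduler.iam.gserviceaccount.com \
  --role roles/cloudscheduler.serviceAgent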
This tutorial helped me invoke a function from Cloud Scheduler, but there was a problem because the scheduler job had been created before the service account; in the end, deleting the scheduler job and creating it again fixed it.
Google Cloud Scheduler - Calling Cloud Function
As per a recent update on GCP, a new function needs a manual permission update for authentication.
We need to add the Cloud Functions Invoker role to the allUsers member.
Please refer to
https://cloud.google.com/functions/docs/securing/managing-access-iam#allowing_unauthenticated_function_invocation
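For reference, this is the single IAM binding that allows unauthenticated invocation from the CLI (the function name is a placeholder):

gcloud functions add-iam-policy-binding [FUNCTION-NAME] \
  --member="allUsers" \
  --role="roles/cloudfunctions.invoker"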