Azure App service settings not applied with docker - node.js

I'm new to Azure deployment and DevOps.
This time I'm doing a small project: a NestJS API on Azure App Service, using a customized Docker image. Everything deployed fine with the right database connection credentials supplied as environment variables. However, the problem started when I tried to replace those variables with the following in
config.service.ts
const PORT = process.env.APPSETTING_PORT;
const MODE = process.env.APPSETTING_MODE;
const POSTGRES_HOST = process.env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT: any = process.env.APPSETTING_POSTGRES_PORT;
const POSTGRES_USER = process.env.APPSETTING_POSTGRES_USER;
const POSTGRES_PASSWORD = process.env.APPSETTING_POSTGRES_PASSWORD;
const POSTGRES_DATABASE = process.env.APPSETTING_POSTGRES_DATABASE;
After creating the app in Azure, I assign the values with this command in the Azure CLI:
az webapp config appsettings set --resource-group <my_rs_group> --name <my_app_name> --settings WEBSITES_PORT=80 APPSETTING_POSTGRES_HOST=<my_host_name> APPSETTING_POSTGRES_PORT=5432 APPSETTING_POSTGRES_DATABASE=postgres APPSETTING_POSTGRES_USER=<my_user_name> APPSETTING_POSTGRES_PASSWORD=<my_host_pw> APPSETTING_PORT=80 APPSETTING_MODE=PROD APPSETTING_RUN_MIGRATION=true
I've been stuck on this for two days and have gone through many similar threads on the site, but I couldn't resolve it: the Docker container runtime log always shows that these environment variables fail to be applied.

I'm answering my own question since I found the thread that resolves this issue. For some reason the problem doesn't seem well known, so I'm linking the thread here:
404 Error when trying to fetch json file from public folder in deployed create-react-app
It's vexing that Microsoft doesn't make it obvious in their documentation that, for a Node.js environment, you need to destructure the environment object with const { env } = process; and read the settings through it (for example PORT = env.APPSETTING_PORT;) for the server to pick up the app settings properly at Docker runtime.
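For reference, here is a minimal sketch of what config.service.ts would look like after that change. The variable names are taken from the question; this is just an illustration of the destructuring approach described in the linked thread, not a verified fix.
// config.service.ts (sketch): read app settings via a destructured env object
const { env } = process;

const PORT = env.APPSETTING_PORT;
const MODE = env.APPSETTING_MODE;
const POSTGRES_HOST = env.APPSETTING_POSTGRES_HOST;
const POSTGRES_PORT: any = env.APPSETTING_POSTGRES_PORT;
const POSTGRES_USER = env.APPSETTING_POSTGRES_USER;
const POSTGRES_PASSWORD = env.APPSETTING_POSTGRES_PASSWORD;
const POSTGRES_DATABASE = env.APPSETTING_POSTGRES_DATABASE;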
I hope whoever runs into this issue won't waste time on it.

Related

How to connect to a Routerlicious local server?

I have followed this guide https://github.com/microsoft/FluidFramework/tree/main/server/routerlicious and successfully set up a Routerlicious server and the gateway https://github.com/microsoft/FluidFramework/tree/main/server/gateway, but I am having trouble connecting the client to it.
Here is the client code config:
...
const hostUrl = "http://localhost:3000";
const ordererUrl = "http://localhost:3000";
const storageUrl = "http://localhost:3000";
const tenantId = "unused";
const tenantKey = "unused";
const serviceRouter = new RouterliciousService({
    orderer: ordererUrl,
    storage: storageUrl,
    tenantId: tenantId,
    key: tenantKey,
});
const container = await getContainer(
    serviceRouter,
    documented,
    ContainerFactory,
    createNew
);
...
The error it gives me is:
Buffer is not defined
I guess it is because of the tenantId and tenantKey. How can I solve this?
It sounds like you already got Routerlicious started; if so, jump to the last header. To recap...
Routerlicious
Cloning the Fluid Framework repository
Installing and starting Docker
Running npm run start:docker from the Fluid Framework repo root.
Routerlicious is our name for the reference Fluid service implementation.
To get Routerlicious started, npm run start:docker uses the pre-built docker containers on Docker Hub. These are prepackaged for you in a docker-compose file in the repository.
Gateway
Gateway is a reference implementation of a Fluid Framework host. Gateway lets you run Fluid containers on a website. More specifically, it's a Docker container that runs a web service. That web service delivers web pages with a Fluid Loader on them.
While that's a good option for running any Fluid container on one site, it is probably easier to use your strategy. That is... create your own website that connects directly to the service and loads your Fluid container.
Connecting to Routerlicious
To connect to your local Routerlicious instance, you need to know the orderer url, storage url, tenant id, and tenant key. By default, we provide a test local key in the config. The test tenant id is "local" and the test key is "43cfc3fbf04a97c0921fd23ff10f9e4b".
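Applied to the snippet from the question, the connection config would look roughly like this. It is only a sketch: the imports and surrounding code stay the same as in the question's elided setup, and the URLs are left as the question had them (adjust them to wherever your local orderer and storage endpoints are actually exposed); only the tenant id and key change to the local test values above.
const hostUrl = "http://localhost:3000";     // from the question; adjust to your local setup
const ordererUrl = "http://localhost:3000";  // local orderer endpoint (adjust as needed)
const storageUrl = "http://localhost:3000";  // local storage endpoint (adjust as needed)
const tenantId = "local";                               // test tenant id from the answer above
const tenantKey = "43cfc3fbf04a97c0921fd23ff10f9e4b";   // test tenant key from the answer above

const serviceRouter = new RouterliciousService({
    orderer: ordererUrl,
    storage: storageUrl,
    tenantId: tenantId,
    key: tenantKey,
});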

Authenticating to Google Cloud Firestore from GKE with Workload Identity

I'm trying to write a simple backend that will access my Google Cloud Firestore; the backend lives in Google Kubernetes Engine. Locally I'm using the following code to authenticate to Firestore, as detailed in the Google documentation.
if (process.env.NODE_ENV !== 'production') {
    const result = require('dotenv').config()
    // Additional error handling here
}
This pulls in the GOOGLE_APPLICATION_CREDENTIALS environment variable and points it at my google-application-credentials.json, which I got from creating a service account with the "Cloud Datastore User" role.
So, locally, my code runs fine. I can reach my Firestore and do everything I need to. However, the problem arises once I deploy to GKE.
I followed this Google documentation to set up Workload Identity for my cluster. I've created a deployment and verified that the pods are all using the correct IAM service account by running:
kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE sh
> gcloud auth list
I was under the impression from the documentation that authentication would be handled for my service as long as the above held true. I'm really not sure why, but my Firestore() instance behaves as if it does not have the necessary credentials to access Firestore.
In case it helps, below is my declaration and use of the instance:
const firestore = new Firestore()

const server = new ApolloServer({
    schema: schema,
    dataSources: () => {
        return {
            userDatasource: new UserDatasource(firestore)
        }
    }
})
UPDATE:
In a bout of desperation I decided to tear everything down and rebuild it. Following every step again, I appear to have either encountered a bug or (more likely) done something mildly wrong the first time. I'm now able to connect to my backend service. However, I'm now getting a different error: upon sending any request (I'm using GraphQL, but in essence it's any REST call) I get back a 404.
Inspecting the logs yields the following:
'Getting metadata from plugin failed with error: Could not refresh access token: A Not Found error was returned while attempting to retrieve an accesstoken for the Compute Engine built-in service account. This may be because the Compute Engine instance does not have any permission scopes specified: Could not refresh access token: Unsuccessful response status code. Request failed with status code 404'
A cursory search for this issue doesn't seem to return anything related to what I'm trying to accomplish, and so I'm back to square one.
I think your initial assumption was correct! Workload Identity is not functioning properly if you still have to specify scopes. In the Workload Identity article you linked, scopes are not used.
I've been struggling with the same issue and have identified three ways to get authenticated credentials in the pod.
1. Workload Identity (basically the Workload Identity article above with some deployment details added)
This method is preferred because it allows each pod deployment in a cluster to be granted only the permissions it needs.
Create cluster (note: no scopes or service account defined)
gcloud beta container clusters create {cluster-name} \
--release-channel regular \
--identity-namespace {projectID}.svc.id.goog
Then create the k8sServiceAccount, assign roles, and annotate.
gcloud container clusters get-credentials {cluster-name}
kubectl create serviceaccount --namespace default {k8sServiceAccount}
gcloud iam service-accounts add-iam-policy-binding \
--member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
--role roles/iam.workloadIdentityUser \
{googleServiceAccount}
kubectl annotate serviceaccount \
--namespace default \
{k8sServiceAccount} \
iam.gke.io/gcp-service-account={googleServiceAccount}
Then I create my deployment, and set the k8sServiceAccount.
(Setting the service account was the part that I was missing)
kubectl create deployment {deployment-name} --image={containerImageURL}
kubectl set serviceaccount deployment {deployment-name} {k8sServiceAccount}
Then expose the deployment with a target port of 8080:
kubectl expose deployment {deployment-name} --name={service-name} --type=LoadBalancer --port 80 --target-port 8080
The googleServiceAccount needs to have the appropriate IAM roles assigned (see below); a sketch of what the application-side code looks like after this setup is at the end of this answer.
2. Cluster Service Account
This method is not preferred, because all VMs and pods in the cluster will have permissions based on the defined service account.
Create cluster with assigned service account
gcloud beta container clusters create [cluster-name] \
--release-channel regular \
--service-account {googleServiceAccount}
The googleServiceAccount needs to have the appropriate IAM roles assigned (see below).
Then deploy and expose as above, but without setting the k8sServiceAccount
3. Scopes
This method is not preferred, because all VMs and pods in the cluster will have permissions based on the scopes defined.
Create the cluster with assigned scopes (Firestore only requires "cloud-platform"; Realtime Database also requires "userinfo.email")
gcloud beta container clusters create {cluster-name} \
--release-channel regular \
--scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email
Then deploy and expose as above, but without setting the k8sServiceAccount
The first two methods require a Google Service Account with the appropriate IAM roles assigned. Here are the roles I assigned to get a few Firebase products working:
FireStore: Cloud Datastore User (Datastore)
Realtime Database: Firebase Realtime Database Admin (Firebase Products)
Storage: Storage Object Admin (Cloud Storage)
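For completeness, here is a rough Node.js sketch of what the application side looks like once method 1 is in place. It mirrors the two snippets from the question: locally the dotenv fallback still loads GOOGLE_APPLICATION_CREDENTIALS, while in the cluster no key file is shipped and the Firestore client resolves Application Default Credentials on its own. Treat it as an illustration, not a verified implementation.
const { Firestore } = require('@google-cloud/firestore');

if (process.env.NODE_ENV !== 'production') {
    // Local development only: load GOOGLE_APPLICATION_CREDENTIALS from .env,
    // exactly as in the question's snippet.
    require('dotenv').config();
}

// In GKE with Workload Identity no key file is mounted; the client library
// resolves Application Default Credentials via the metadata server.
const firestore = new Firestore();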
Going to close this question.
Just in case anyone stumbles onto it here's what fixed it for me.
1.) I re-followed the steps in the Google documentation linked above; this fixed the issue of my pods not launching.
2.) As for my update, I re-created my cluster and gave it the Cloud Datastore permission. I had assumed that those permissions were separate from what Workload Identity needed to function. I was wrong.
I hope this helps someone.

How to deploy pgadmin4 docker image on azure web app?

I am unable to run the Docker image dpage/pgadmin4 (available on Docker Hub) on an Azure web app (Linux).
I have installed Docker on my Linux machine and was able to run that Docker image locally. Then I created the web app in Azure with the options given below:
OS: Linux
Publish: Docker Image
App service plan: Linux app service
After creating web app, I added two env variables in App Settings section:
PGADMIN_DEFAULT_EMAIL : user@domain.com
PGADMIN_DEFAULT_PASSWORD : SuperSecret
Finally the login screen is visible, but when I enter the above credentials it doesn't work and keeps redirecting back to the login page.
Update: if login works properly, the screen appears as shown below.
(screenshot: pgAdmin initial screen)
After several retries I once got a message (CSRF token invalid) displayed in the top-right corner of the login screen.
For CSRF to work properly there must be some server-side state, so I activated "ARR affinity" under "General Settings" in the Azure "Configuration" blade.
I also noticed in the examples in the documentation the two environment variables PGADMIN_CONFIG_CONSOLE_LOG_LEVEL (set to '10' in the example) and PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION (set to 'True' in the example).
After enabling ARR affinity and setting PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION to False, login started to work. I have no idea what PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION actually does, so please take that with caution.
If that's not working for you, setting PGADMIN_CONFIG_CONSOLE_LOG_LEVEL to 10 and enabling console debug logging may give you a clue about what's happening.
For your issue, I did a test and found that it's really a strange thing. When I deploy the Docker image dpage/pgadmin4 to the Azure Web App for Containers service through the Azure CLI and set the app settings, there is no problem logging in with the user and password. But when I deploy it through the Azure portal, I hit the same thing as you.
I'm not sure what the reason is, but the workaround is to set the environment variables PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD through the Azure CLI like below:
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PGADMIN_DEFAULT_EMAIL="user@domain.com" PGADMIN_DEFAULT_PASSWORD="SuperSecret"
If you really want to know the reason, you can send feedback to Microsoft. Maybe it's a bug or some special setting.
Update
Here is a screenshot of the test on my side: (screenshot omitted)

How to use Datadog agent in Azure App Service?

I'm running web apps as Docker containers in Azure App Service. I'd like to add Datadog agent to each container to, e.g., read the log files in the background and post them to Datadog log management. This is what I have tried:
1) Installing the Datadog agent as an extension as described in this post. This option does not seem to be available for App Service apps, only on VMs.
2) Using multi-container apps as described in this post. However, we have not found a simple way to integrate this with Azure DevOps release pipelines. I guess it might be possible to create a custom deployment task wrapping Azure CLI commands?
3) Including the Datadog agent in our Dockerfiles by following how Datadog Dockerfiles are built. The process seems quite complicated and adds lots of extra dependencies to our Dockerfile. We'd also prefer not to inherit our Dockerfiles from the Datadog Dockerfile with FROM datadog/agent.
I'd assume this must be a pretty standard problem for Azure+Datadog users. Any ideas what's the cleanest option?
I doubt the Datadog agent will ever work on an App Service web app, as you do not have access to the running host; it was designed for VMs.
Have you tried this: https://www.datadoghq.com/blog/azure-monitoring-enhancements/ ? They say they support App Services.
I have written an App Service extension for sending Datadog APM metrics with .NET Core and provided instructions for how to set it up here: https://github.com/payscale/datadog-app-service-extension
Let me know if you have any questions or if this doesn't apply to your situation.
Logs from App Services can also be sent to Blob storage and forwarded from there via an Azure Function. Unlike traces and custom metrics from App Services, this does not require a VM running the agent. Docs and code for the Function are available here:
https://github.com/DataDog/datadog-serverless-functions/tree/master/azure/blobs_logs_monitoring
If you want to use Datadog for logging from an Azure Function or App Service, you can use Serilog and the Datadog sink to ship the log entries:
services
    .AddLogging(loggingBuilder =>
        loggingBuilder.AddSerilog(
            new LoggerConfiguration()
                .WriteTo.DatadogLogs(
                    apiKey: "REPLACE - DataDog API Key",
                    host: Environment.MachineName,
                    source: "REPLACE - Log-Source",
                    service: GetServiceName(),
                    configuration: new DatadogConfiguration(),
                    logLevel: LogEventLevel.Information
                )
                .CreateLogger())
    );
Full source code and required NuGet packages are here:
To respond to your comment on wanting custom metrics: this is still possible without the agent being in the same location. After installing the Datadog NuGet package called StatsdClient, you can configure it to send the custom metrics to an agent located elsewhere. A C# example is below, and a rough Node.js equivalent follows it:
using StatsdClient;

var dogstatsdConfig = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // Optional if the DD_AGENT_HOST environment variable is set
    StatsdPort = 8125,              // Optional; if not present, takes the DD_DOGSTATSD_PORT environment variable value, else defaults to 8125
    Prefix = "myTestApp",           // Optional; by default no prefix is prepended
    ConstantTags = new string[1] { "myTag:myTestAppje" } // Optional
};

StatsdClient.DogStatsd.Configure(dogstatsdConfig);
StatsdClient.DogStatsd.Increment("fakeVisitorCountByTwo", 2); // The custom metric itself
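If your App Service containers happen to be Node.js based instead of .NET, a roughly equivalent sketch using the community hot-shots DogStatsD client could look like the following. The host and port values are assumptions; they just need to point at wherever a Datadog agent is actually reachable, which is the part App Service does not provide for you.
const StatsD = require('hot-shots');

// Point the client at a Datadog agent running elsewhere.
const dogstatsd = new StatsD({
    host: process.env.DD_AGENT_HOST || '127.0.0.1', // assumed agent host
    port: 8125,                                     // default DogStatsD port
    prefix: 'myTestApp.',
    globalTags: ['myTag:myTestAppje'],
});

dogstatsd.increment('fakeVisitorCountByTwo', 2); // the custom metric itself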

Using Azure Container Registry creating new Azure Container Instance from C#

I have created an Azure Container Registry and uploaded a custom image to the registry with no problem; everything works as intended. I have also tried creating a container instance from the image using the Azure portal, and there were no problems there either. However, when I want to automate things using C# with the Microsoft Azure Management Container Instance Fluent API, I run into problems, and even though I feel like I have been all over the Internet and the settings looking for hidden obstructions, I haven't been able to find much help.
My code is as follows:
var azureCredentials = new AzureCredentials(
    new ServicePrincipalLoginInformation
    {
        ClientId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        ClientSecret = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    },
    "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    AzureEnvironment.AzureGlobalCloud);

var azure = Azure
    .Configure()
    .Authenticate(azureCredentials)
    .WithSubscription("xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx");
IContainerGroup containerGroup = azure.ContainerGroups.Define("mytestgroup")
    .WithRegion(Region.EuropeWest)
    .WithExistingResourceGroup("mytest-rg")
    .WithLinux()
    .WithPrivateImageRegistry("mytestreg.azurecr.io", "mytestreg", "xxxxxxxxxxxxxx")
    .WithoutVolume()
    .DefineContainerInstance("mytestgroup")
        .WithImage("mytestimage/latest")
        .WithExternalTcpPort(5555)
        .WithCpuCoreCount(.5)
        .WithMemorySizeInGB(.5)
        .Attach()
    .Create();
The above code keeps giving me the exception:
Microsoft.Rest.Azure.CloudException: 'The image 'mytestimage/latest' in container group 'mytestgroup' is not accessible. Please check the image and registry credential.'
I have tried a couple of things:
Testing the credentials with docker login - no problem.
Pulling the image with docker pull mytestreg.azurecr.io/mytestimage - no problem.
Swapping WithPrivateImageRegistry with WithPublicImageRegistryOnly and just using debian in WithImage - works as intended - no problem.
Leaving the latest tag out of the image name - still doesn't work.
I have no idea why the credentials for the private registry won't work - I have been copy/pasting directly from the Azure portal to avoid typos, tried typing them in manually, etc.
Using Fiddler to inspect the traffic doesn't reveal anything interesting, other than that the above exception message is returned directly from the Azure Management API.
What is the obvious thing that I am missing?
The other answer here (i.e. using the full Azure registry server name):
.WithImage("mytestreg.azurecr.io/mytestimage:latest")
seems to be part of the solution, but even with that change I was still seeing this error. Looking through other examples on the web (https://github.com/Azure-Samples/aci-dotnet-create-container-groups-using-private-registry/blob/master/Program.cs) that contained what I needed, I changed my Azure authentication from:
azure = Azure.Authenticate(authFilePath).WithDefaultSubscription();
to:
AzureCredentials credentials = SdkContext.AzureCredentialsFactory.FromFile(authFilePath);

azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.Basic)
    .Authenticate(credentials)
    .WithDefaultSubscription();
and with THAT change, things are now working correctly.
I had the same problem for the last couple of weeks, and I've finally found a solution. You should add your Azure registry server name in front of the image name. So, following your example, change:
.WithImage("mytestimage/latest")
To:
.WithImage("mytestreg.azurecr.io/mytestimage:latest")
At least that did it for me, I hope it helps someone else.
