Bluemix container cannot add user-defined service and Watson Dialog service together

I have created a user-defined service (Compose MongoDB) and a Watson Dialog service. I want to bind both services to my app deployed in an IBM Container, but I am not able to.
I tried the following, and none of it works:
Using BIND_TO, I can only bind one service (e.g. comma-separated in the UI - BIND_TO: MongoBridge, WatsonBridge).
Creating a CF bridge app with both services bound doesn't work either (e.g. in the UI - BIND_TO: MongoWatsonBridge, where MongoWatsonBridge has the user-defined Mongo service - just a URL - and the Watson Dialog service bound).
Combining BIND_TO for the user-defined service with --env CCS_BIND_SRV for the Watson service doesn't work either (e.g. BIND_TO: MongoWatsonBridge plus --env CCS_BIND_SRV=Watson-Dialog-Service).
And of course, the user-defined service doesn't work with --env CCS_BIND_SRV=MongoBridge - Bluemix throws an error.
I can bind each service individually, though (BIND_TO for MongoBridge, --env CCS_BIND_SRV=Watson-Dialog-Service for the Watson service).
Please let me know whether this is supported, whether it is a bug (i.e. it is supposed to work but doesn't), or whether there is another way to bind both services.

The user-defined service does not support service key generation, so it cannot be bound using the "CCS_BIND_SRV" parameter. The only way to bind both of these services to the container is through a CF bridge app. Create a CF bridge app and bind both services (user-defined and Watson) to it, then bind that app to the container using the "CCS_BIND_APP=" environment variable on the command line.
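For reference, a rough sketch of that flow with the cf CLI (the bridge app, user-provided service, container, and image names below are placeholders based on your question; any minimal app can act as the bridge):

cf push MongoWatsonBridge -p ./placeholder-app --no-start
cf bind-service MongoWatsonBridge my-mongo-ups
cf bind-service MongoWatsonBridge Watson-Dialog-Service
cf ic run --name my-container --env CCS_BIND_APP=MongoWatsonBridge registry.ng.bluemix.net/<namespace>/<image>

The container should then receive the bridge app's VCAP_SERVICES (with both sets of credentials) as an environment variable.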


Allow firewall rules for a GCP instance (port:8080) from CLI

Is there any way to allow the ports from the CLI?
I have an instance in GCP on which I have installed a service that runs on port 8080 by default. I know there is an option to change the firewall rules to allow ports from the GCP dashboard, but I'm wondering if there is any way to allow the required ports from the CLI.
In my case I'm using Git Bash rather than the native GCP Cloud Console.
I have seen the documentation for allowing ports from the command line (GCP firewall-rules from CLI), but it throws an ERROR since I'm using Git Bash.
Here is the error log:
[mygcp@foo ~]$ gcloud compute firewall-rules create FooService --allow=tcp:8080 --description="Allow incoming traffic on TCP port 8080" --direction=INGRESS
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- Request had insufficient authentication scopes.
[mygcp@foo ~]$ gcloud compute firewall-rules list
ERROR: (gcloud.compute.firewall-rules.list) Some requests did not succeed:
- Request had insufficient authentication scopes.
Is there any option to allow required ports directly from the Git Bash CLI?
By default, Compute Engine uses the default service account plus access scopes to handle permissions.
The default scopes limit API access even if the default Compute Engine service account has the Editor role (which is, by the way, far too broad a role - never use it!).
To solve your issue, there are two solutions:
Use a custom service account on your Compute Engine instance.
Add the required scopes to your current Compute Engine instance, keeping the default Compute Engine service account attached to it.
In both cases, you must stop the VM to update this security configuration; a sketch of both options follows.
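A minimal sketch of both options with gcloud (the instance name, zone, and service account are placeholders; run these from a session authenticated as your own user, e.g. Cloud Shell):

gcloud compute instances stop my-instance --zone=us-central1-a
# Option 1: attach a custom service account (grant it the roles it needs first)
gcloud compute instances set-service-account my-instance --zone=us-central1-a \
  --service-account=firewall-admin@my-project.iam.gserviceaccount.com --scopes=cloud-platform
# Option 2: keep the default service account but widen its scopes
gcloud compute instances set-service-account my-instance --zone=us-central1-a --scopes=compute-rw
gcloud compute instances start my-instance --zone=us-central1-a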

How to run a Node app (NextJS) on gcloud from github?

I have followed these steps:
I installed the Google Cloud Build app on GitHub, linked it to Cloud Build and configured it to use a certain (private) repository
I set up a trigger at Cloud Build: Push to any branch
The project has no app instances after deploying (App Engine -> Dashboard)
My cloudbuild.yaml looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', '--project=project-name', '--version=$SHORT_SHA']
If I try to run the trigger manually, I get this error in Google Cloud:
unable to get credentials for cloud build robot
I have also tried to set IAM roles based on this article, but using @cloudbuild.gserviceaccount.com doesn't seem to be a valid "member" (perhaps I need two projects, one for running and one for building the app?)
How do I fill the gaps / fix the errors mentioned?
It seems the error message indicates that Cloud Build is looking for credentials with the required permissions. In step #4 of the article you are following, don't add the Cloud Build service account manually. Check whether the Cloud Build API is enabled in your project; if it is disabled, enable it. That automatically creates the Cloud Build service account, which looks like this:
[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com
Once the service account is created, go to the Cloud Build > Settings page and enable the required roles for your application.
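If you prefer the CLI, a sketch of the same steps with gcloud (the project name is reused from your cloudbuild.yaml; the role shown is one common choice for App Engine deployments, adjust it to whatever the article requires):

gcloud services enable cloudbuild.googleapis.com --project=project-name
gcloud projects add-iam-policy-binding project-name \
  --member=serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com \
  --role=roles/appengine.appAdmin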

Azure web app service for Linux containers not picking up environment variables from docker-compose file

I am not sure if I misunderstand how this should work.
I have a docker-compose file (below) that defines environment variables for a single "service"/image:
services:
  webui:
    image: ${DOCKER_REGISTRY-}webui
    build:
      context: .
      dockerfile: src/WebUI/Dockerfile
    environment:
      UseInMemoryDatabase: false
      ASPNETCORE_ENVIRONMENT: Production
      ASPNETCORE_URLS: https://+:443;http://+:80
      ConnectionStrings__DefaultConnection: "********************************"
    ports:
      - "5000:5000"
      - "5001:5001"
    restart: always
When I open Kudu for the web app and look at the environment tab, NONE of the environment variables defined above are present.
I manually set them by going to the Azure app -> Configuration -> Application settings and adding a record for each of the env variables above.
After restarting the app, I can now see the variables listed under the Azure app -> Advanced Tools -> Kudu -> Environment -> AppSettings.
I also had to add the connection string to the separate connection strings section of the Azure app portal.
QUESTION:
Am I understanding this correctly? Should the app service "see" my environment variables in my docker-compose file and add them to the app settings for the running app service? Or is it normal to have to define each of those variables a second time in the configuration of the azure app service?
thanks in advance
Not correct. The environment variables in the App Settings are only accessible when the Web App is in the running state. Currently, setting custom variables in the docker-compose file is not supported. As far as I know, there are only a few variables set by Azure that can be used in the docker-compose file, such as WEBAPP_STORAGE_HOME, and even those are not supported for the image option.
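If you would rather not click through the portal, the same settings can be pushed from the CLI; a sketch with az (the resource group and app names are placeholders, the keys come from your compose file, and the SQLAzure type is an assumption about your database):

az webapp config appsettings set --resource-group my-rg --name my-webui-app \
  --settings UseInMemoryDatabase=false ASPNETCORE_ENVIRONMENT=Production "ASPNETCORE_URLS=https://+:443;http://+:80"
az webapp config connection-string set --resource-group my-rg --name my-webui-app \
  --connection-string-type SQLAzure --settings DefaultConnection="<your connection string>"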
You're setting your environment variables correctly in your compose file. App Service will inject them into your container, so there is no need to duplicate them as App Settings.
What's strange is that if you look at the docker run command in the startup logs, they are not passed as parameters, and they are also not listed on the Environment page in Kudu. That threw me off, but I ran a test using the kuard image and was able to see my env vars.

Azure Webapps for Containers connection string in environment variables

My app running in a docker container on Azure Webapps for Containers tries to access a connection string through an environment variable. I've added it to the Application Settings in the Azure UI but I can't access it through my code, specifically my ASP.NET Core application is returning null.
I know that the logs won't show it being added as a -e connstring=myconnstring argument in the docker run command, but it should nevertheless be present in the container.
It turns out (as seen under Advanced Tools -> Environment in the Kudu service in Azure) that the connection string environment variable names were being prefixed with SQLAZURECONNSTR_.
I know it is a convention to have this kind of prefix on environment variables when reading them with the .NET Core environment variable configuration provider, as described here, but quite why Azure adds these prefixes automatically, apparently without documenting this behaviour anywhere, is unclear to me.
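For what it's worth, the .NET Core configuration providers account for this: the environment variables provider maps SQLAZURECONNSTR_<name> to the ConnectionStrings:<name> key, so the value can be read without the prefix. A minimal sketch, assuming the connection string was named "DefaultConnection" in the portal (substitute whatever name you used):

// Inside Startup, with IConfiguration injected by the host (requires Microsoft.Extensions.Configuration).
// SQLAZURECONNSTR_DefaultConnection surfaces here as ConnectionStrings:DefaultConnection.
var connectionString = Configuration.GetConnectionString("DefaultConnection");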

How to use Datadog agent in Azure App Service?

I'm running web apps as Docker containers in Azure App Service. I'd like to add Datadog agent to each container to, e.g., read the log files in the background and post them to Datadog log management. This is what I have tried:
1) Installing the Datadog agent as an extension, as described in this post. This option does not seem to be available for App Service apps, only for VMs.
2) Using multi-container apps as described in this post. However, we have not found a simple way to integrate this with Azure DevOps release pipelines. I guess it might be possible to create a custom deployment task wrapping Azure CLI commands?
3) Including the Datadog agent in our Dockerfiles by following how the Datadog Dockerfiles are built. The process seems quite complicated and adds lots of extra dependencies to our Dockerfile. We'd also rather not base our Dockerfiles on the Datadog Dockerfile with FROM datadog/agent.
I'd assume this must be a pretty standard problem for Azure+Datadog users. Any ideas what's the cleanest option?
I doubt the Datadog agent will ever work on an App Service web app, as you do not have access to the running host; it was designed for VMs.
Have you tried this: https://www.datadoghq.com/blog/azure-monitoring-enhancements/ ? They say they support App Services.
I have written an App Service extension for sending Datadog APM metrics with .NET Core and provided instructions for how to set it up here: https://github.com/payscale/datadog-app-service-extension
Let me know if you have any questions or if this doesn't apply to your situation.
Logs from App Services can also be sent to Blob storage and forwarded from there via an Azure Function. Unlike traces and custom metrics from App Services, this does not require a VM running the agent. Docs and code for the Function are available here:
https://github.com/DataDog/datadog-serverless-functions/tree/master/azure/blobs_logs_monitoring
If you want to use Datadog for logging from an Azure Function or App Service, you can use Serilog with the Datadog sink to ship the log events:
services
    .AddLogging(loggingBuilder =>
        loggingBuilder.AddSerilog(
            new LoggerConfiguration()
                .WriteTo.DatadogLogs(
                    apiKey: "REPLACE - DataDog API Key",
                    host: Environment.MachineName,
                    source: "REPLACE - Log-Source",
                    service: GetServiceName(),
                    configuration: new DatadogConfiguration(),
                    logLevel: LogEventLevel.Information
                )
                .CreateLogger())
    );
Full source code and required NuGet packages are here:
To respond to your comment on wanting custom metrics: this is still possible without the agent at that same location. After installing Datadog's StatsdClient NuGet package, you can configure it to send custom metrics to an agent located elsewhere. Example below:
using StatsdClient;

var dogstatsdConfig = new StatsdConfig
{
    StatsdServerName = "127.0.0.1", // Optional if the DD_AGENT_HOST environment variable is set
    StatsdPort = 8125,              // Optional; if not set, the DD_DOGSTATSD_PORT environment variable is used, default is 8125
    Prefix = "myTestApp",           // Optional; by default no prefix is prepended
    ConstantTags = new string[1] { "myTag:myTestAppje" } // Optional
};
StatsdClient.DogStatsd.Configure(dogstatsdConfig);
StatsdClient.DogStatsd.Increment("fakeVisitorCountByTwo", 2); // The custom metric itself
