How can I run the Dapr subscriber quickstart in a prod environment? - dapr

All quickstart examples using the .NET SDK show 'dotnet run' as the way to run ASP.NET Core applications. See https://docs.dapr.io/getting-started/quickstarts/pubsub-quickstart/#tabs-1-net
dapr run --app-id order-processor --components-path ../../../components --app-port 7001 -- dotnet run
I figured that running dotnet publish and then replacing "dotnet run" with a call to the exe or "dotnet order-processor.dll" would work as well, but it does not. The application starts and can be accessed directly, but not through Dapr.
What would be the way to run the subscriber quickstart in a prod environment (not using dotnet run)?

Hmm, my mistake. dotnet order-processor.dll does work, but you have to adapt the port to the default. After changing 7001 to 5000 it works as expected. This does the trick:
dapr run --app-id order-processor --components-path ../../../components --app-port 5000 -- dotnet .\bin\Debug\net6.0\publish\order-processor.dll
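Alternatively, instead of relying on the ASP.NET Core default port 5000, the published app's port can be pinned so it always matches --app-port. This is only a rough sketch, not taken from the quickstart docs; it assumes the app uses the default host builder, which honours the --urls switch:
# Publish a Release build and tell Kestrel explicitly which port to listen on,
# so dapr's --app-port and the app's actual port always agree.
dotnet publish -c Release -o ./publish
dapr run --app-id order-processor --components-path ../../../components --app-port 7001 \
  -- dotnet ./publish/order-processor.dll --urls http://localhost:7001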

Related

Cloud Run partially sets environment variables via GitHub Actions

I am playing with GitHub Actions and Cloud Run to automate my tasks.
I have set up a repository on GitHub and prepared two workflows.
One for DEV and the other for... let's call it the PROD environment.
The workflow deploys and runs the container with variables that are hardcoded:
- name: Deploy to Cloud Run
  run: |-
    gcloud run deploy ${{ env.SERVICE }} \
      --region ${{ env.REGION }} \
      --image gcr.io/${{ env.PROJECT_ID }}/${{ env.SERVICE }}:${{ github.sha }} \
      [...]
      --set-env-vars "X=testvar1" \
      --set-env-vars "Y=testvar2" \
      --set-env-vars "Z=testvar3" \
The problem I am facing is that the DEV environment works fine.
Whenever I trigger the action for DEV it ends successfully, the Cloud Run service is green, and I can request my app on GCP.
When I deploy literally the same thing to the PROD environment, this step above fails.
When I debug this further and go to the Variables & Secrets tab of the Cloud Run instance for the failed service, these env variables are missing. When I add them manually via the GCP console and redeploy, the service works fine.
This should be done automatically, as it is on the DEV environment.
What is more, when I trigger the GitHub action again for PROD, it replaces the Docker image, the env variables I set manually are gone, and I have to set them again via the console.
No additional security configs are in place. This is just a simple Express app made in NodeJS.
Everything is literally the same when it comes to the GitHub workflows (YAML files); the Dockerfile is also the same, and so is the GCP project.
What might be the cause of that?
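One thing that may be worth comparing between the two workflows, purely as a hedged aside: gcloud also accepts all variables as a single comma-separated list in one --set-env-vars flag, which avoids any question of how repeated flags combine. A sketch, using plain shell variables in place of the workflow's ${{ env.* }} expressions:
# Equivalent deploy with the env vars passed as one comma-separated list.
# SERVICE, REGION, PROJECT_ID and GITHUB_SHA stand in for the workflow expressions above.
gcloud run deploy "$SERVICE" \
  --region "$REGION" \
  --image "gcr.io/$PROJECT_ID/$SERVICE:$GITHUB_SHA" \
  --set-env-vars "X=testvar1,Y=testvar2,Z=testvar3"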

Calls to external APIs randomly slow after moving integration tests to a Docker container

I'm currently trying to learn Docker, and as an exercise I moved all the integration tests that we run in Azure DevOps into a Docker container. It doesn't do anything fancy; it simply contains the DLLs for my tests. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:3.1.408-focal
WORKDIR /app
COPY ./Automation .
# run tests on docker run
ENTRYPOINT ["dotnet", "test", "Automation.dll", "--logger", "trx", "--results-directory", "/var/temp", "--filter"]
And then I run it in Azure DevOps with a task like this:
steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    repository: 'nameOfRepo'
    command: run
    arguments: '--name automation --env TZ=Europe/Stockholm repoUrl:latest "TestCategory=Tickets" --verbosity n'
  continueOnError: true
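Because the image's ENTRYPOINT ends with --filter, the run arguments are appended after it, so the task ends up executing roughly the following (registry and repository names are placeholders from the snippet above):
# Rough local equivalent of the Docker@2 run step. Inside the container this becomes:
# dotnet test Automation.dll --logger trx --results-directory /var/temp --filter TestCategory=Tickets --verbosity n
docker run --name automation --env TZ=Europe/Stockholm \
  nameOfRegistry/nameOfRepo:latest "TestCategory=Tickets" --verbosity n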
The tests call REST APIs that we have deployed in Azure, and it works fine most of the time. But it randomly gets spikes where calls that normally take 100-300 ms take 5+ seconds. The problem is that I don't see these issues if I run the tests normally, outside Docker, with a simple VSTest@2 task (on the same self-hosted agent).
So a test run that may take 20 min with VSTest@2 will take 30+ min inside Docker with Docker@2.
So my question is: what could cause this problem? It would be easier to debug if all external API calls were slow, but it seems to happen fairly randomly, yet often enough to make the test run considerably longer.
I found the problem: the performance issue was caused by RestSharp. I changed to HttpClient and now it runs a lot faster inside Docker.
So if others have a similar issue, it can be good to check which HTTP client library you use.
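For anyone who wants to rule out the container's networking before blaming the HTTP client library, one rough check is to time raw requests from inside the same image with curl (the endpoint URL below is a placeholder, and curl may need to be installed in the image first):
# Time 20 raw requests from inside the container; if these stay flat while the test run
# still shows 5+ second spikes, the HTTP client library is the likelier culprit.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{time_total}\n' https://example.azurewebsites.net/api/health
done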

Azure Storage Emulator does not work with Azure Functions docker run

I am trying to run my timer-triggered Azure Function sample using Docker. It works fine in Visual Studio in both Debug and Release mode. But when I use Docker:
docker run -d -p 8080:80 testdockercors
The application starts and prints the message below, but my timer-triggered Azure Function is not running.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down..
But the timer trigger works fine when running from Visual Studio. Please find my Dockerfile below:
# NOTE: the base runtime stage referenced by "FROM base AS final" below is missing from the
# question; a typical Azure Functions v3 base stage is assumed here.
FROM mcr.microsoft.com/azure-functions/dotnet:3.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Testdockercors/Testdockercors.csproj", "Testdockercors/"]
RUN dotnet restore "Testdockercors/Testdockercors.csproj"
COPY . .
WORKDIR "/src/Testdockercors"
RUN dotnet build "Testdockercors.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Testdockercors.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobsStorage="UseDevelopmentStorage=true"
When I change the AzureWebJobsStorage environment value to an existing Azure Storage connection string, it works in Docker too. But I want to use the storage emulator that runs locally in Docker, not one available in Azure.
I fixed the issue by changing the value of AzureWebJobsStorage as below; appending DevelopmentStorageProxyUri=http://host.docker.internal to the connection string did the trick.
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
You can read more at https://www.maneu.net/blog/use-local-storage-emulator-remote-container/
Note that this will not work in a Linux VM.
I can confirm that using:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
works for the Python runtime (in particular mcr.microsoft.com/azure-functions/python:3.0-python3.7) as well.
In effect this resolves to the Azurite endpoints running on the host where the Docker runtime is running (in my case macOS). Note that this does not imply that it will work if you run Azurite in a container.
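If you would rather not bake the connection string into the image, the same value can also be passed at run time; a small sketch reusing the image name and ports from the question:
# Override AzureWebJobsStorage when starting the container instead of in the Dockerfile.
# host.docker.internal points the emulator/Azurite traffic at the machine running Docker
# (Docker Desktop; as noted above, this will not work on a plain Linux VM).
docker run -d -p 8080:80 \
  -e AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal" \
  testdockercors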

Unable to connect to container in GitLab CI in my free account

I have a free account on GitLab.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper around EventStore.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml that restores, compiles, runs tests and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the EventStore container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit@127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account this works fine: there is a container listening on 127.0.0.1 on port 1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with how Docker is made available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure EventStore in the CI pipeline of my free personal account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided any details but it seems you're using the Docker executor. In that case, services are not available on localhost but only accessible as service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using the appsettings file with the configuration that points to localhost, you can override the setting with an environment variable in the CI file. Just remember to add environment variables as a configuration source. The snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in something like 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. Just noticed that your setup works with one account and doesn't work with another. My guess would be that you either use different executors (Docker doesn't work and Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).
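As a quick way to tell which situation you are in, a couple of throwaway lines can be added to the job's script to check whether the service alias from the extract above ("eventstore") resolves and whether its TCP port is reachable from inside the job container. This is only a hedged debugging aid, assuming a Debian-based job image with bash and coreutils:
# Does the service alias resolve, and is the TCP port open, from inside the job?
- getent hosts eventstore || echo "alias does not resolve (localhost-style setup?)"
- timeout 5 bash -c 'cat < /dev/null > /dev/tcp/eventstore/1113' && echo "eventstore:1113 reachable"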

How to publish a .NET Core Docker application from Windows to a Linux machine?

I created a .NET Core application with Linux Docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can use the following command to run it (a console application):
docker run MyApp
I have another Linux machine with Docker installed. How do I publish the .NET Core application to the Linux machine? I need to publish and run the dockerized application on the Linux machine.
The Linux machine has the following Docker packages installed:
$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64 17.05.0.ce-1.el7.centos @dockerrepo
docker-engine-selinux.noarch 17.05.0.ce-1.el7.centos @dockerrepo
There are many ways to do this; just search for any CI/CD tool.
The easiest way to do it is manually: connect to your Linux server, do a git pull of the code, and then run the same commands that you run locally.
Another option is to push your Docker image to a container registry, then do a pull on your Docker server and you are ready to go.
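A rough sketch of that registry round-trip, with placeholder registry and tag names (a docker login against the registry may be needed first):
# on the Windows dev machine
docker tag MyApp myregistry.example.com/myapp:1.0
docker push myregistry.example.com/myapp:1.0
# on the Linux machine
docker pull myregistry.example.com/myapp:1.0
docker run myregistry.example.com/myapp:1.0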
Edit:
You should really take a look at some CI service. For example, in our environment we use GitLab: when we push to master, there is a .gitlab-ci.yml that builds the project and then pushes the image:
image: docker:latest
services:
  - docker:dind

stages:
  - build

api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN ${IMAGE_NAME}
    - docker push ${IMAGE_NAME}
With this, we only need to do a pull on our server to get the latest version.
It's worth noting that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool like GitLab, Jenkins, CircleCI, AWS CodePipeline...). If you are just starting to learn, I would recommend starting manually and then integrating a CI tool.
Edit 2
About the Visual Studio tooling: I would not recommend using it for anything other than local development, since it only works on Windows and only in Visual Studio (Rider has added integration only very recently). So, to deploy to a Linux environment we use our own Docker and Docker Compose files; they are based on the defaults anyway and look something like this:
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]
This Dockerfile copies all the related projects (I hate the copying part, but it is the same as what Microsoft does in their default file), then builds and publishes the app. On the other side we have a docker-compose file to add some services (these files must be in the solution folder to access all the related projects):
version: '3.4'

services:
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis

  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll
With these parts in place, all that is left is to commit, then do a pull on the server and run docker-compose up to start our app (you could do it from the Dockerfile directly, but it is easier and more manageable with Docker Compose); a short sketch of the server-side commands follows.
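A minimal sketch of that server-side step, assuming the repository (with the compose file) has already been cloned on the server:
# on the Linux server
git pull
docker-compose up -d --build   # rebuild the images and (re)start the services in the background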
Edit 3
To make the deployment on the server we use two tools.
First, GitLab CI runs after the commit is done.
It performs the build specified in the Dockerfile and pushes the image to our GitLab container registry; it would be the same with the container registry of Amazon, Google, Azure, etc.
Then it makes a POST request to the production server, which runs a special tool on a separate port.
The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner).
The script receives the request, checks the login, and if it is valid, it simply pulls from our GitLab container registry and runs docker-compose up.
Notes
The tool is not perfect; we are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from some CI integration and do the deploys directly. No matter which solution you choose, I recommend you start looking at how Kubernetes can help you. Sadly it is one more layer to learn, but it is very promising: you will be able to publish to almost any cloud or bare metal painlessly, with fallbacks, scaling and other stuff.
Also
If you do not want to or cannot use a container registry (I strongly recommend this way), you can use the same tool; in the .sh that it executes, just do a git pull and then a docker build or docker-compose.
The simplest scenario would be to create a script yourself where you ssh to the server, upload the files as a zip, and then run it on the server (a rough sketch follows). Remember, Ubuntu is in the Microsoft Store and could run this script, but the other solutions are more independent and scalable, so make your choice!
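That simplest scenario, as a hedged sketch with placeholder host names, paths and archive name:
# from the Windows machine (e.g. in WSL/Ubuntu): copy the zipped sources to the server,
# then build and run the container there over ssh.
scp app.zip user@linux-server:/opt/myapp/
ssh user@linux-server 'cd /opt/myapp && unzip -o app.zip && docker build -t myapp . && docker run -d --name myapp myapp'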
