Is it possible to initialize a .NET Core non-PCF-deployed application with values from Config Server? - asp.net-core-2.0

I have a .NET Core app hosted on PCF, and I also have Config Server installed.
I want to run this application locally with IIS Express and load the same config values it will have when deployed to PCF, and I do not want to deploy it to PCF Dev because I want to debug it.
Is it possible? The only workaround I have is to fetch all the variables into user-managed secrets, but that is awful.

Steeltoe and the SCS Client look at the VCAP_SERVICES environment variable to load the configuration they use to talk to Config Server. On PCF, this environment variable is automatically populated with information based on the services that you bind to your app.
I do not know of any tool to manage/bind services locally, but you can always set environment variables manually. If you run cf env <app> for an app that is bound to your Config Server, it will list the contents of the VCAP_SERVICES environment variable. Copy that output and paste it into an environment variable on your local machine. Fire up your app, and Steeltoe or the SCS Client should pick that information up automatically.
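For example, on macOS or Linux you could export an illustrative binding before starting the app. The shape below is only a sketch; the real JSON and the real values must come from your own cf env output:
export VCAP_SERVICES='{
  "p-config-server": [{
    "name": "my-config-server",
    "credentials": {
      "uri": "https://config-server.example.com",
      "client_id": "example-client-id",
      "client_secret": "example-client-secret",
      "access_token_uri": "https://login.example.com/oauth/token"
    }
  }]
}'
For IIS Express on Windows, you can put the same value under environmentVariables in launchSettings.json, or set it as a user-level environment variable.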
Hope that helps!

If you don't want to connect to the exact same config server, you can run the config server locally with Java or Docker and point it at the same back-end. The Steeltoe docs include instructions for running the config server with Maven and the Music Store sample includes cmd and sh scripts that show running a config server via Docker, though they may be slightly out of date. The most recent way I've run the docker command is something like this:
docker run --rm -ti -p 8888:8888 -v $PWD/config-repo:/config --name steeltoe-config steeltoeoss/configserver --spring.profiles.active=native
Run it from a directory that contains a folder named config-repo with the relevant config files inside.
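For example (the file name and keys here are illustrative), you could seed the repo and then verify the server is answering; a Spring Cloud Config Server serves config at /{application}/{profile}:
mkdir -p config-repo
cat > config-repo/myapp.yml <<'EOF'
# served to an app whose application name is "myapp"
example:
  message: Hello from the local Config Server
EOF
# after starting the server with the docker command above:
curl http://localhost:8888/myapp/Production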

Related

Where to place app config/logs in container

I've got a python package running in a container.
Is it best practice to install it in /opt/myapp within the container?
Should the logs go in /var/opt/myapp?
Should the config files go in /etc/opt/myapp?
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
I notice Google Chrome was installed in /opt/google/chrome on my (host) system, but it didn't place any configs in /etc/opt/...
Is it best practice to install it in /opt/myapp within the container?
I place my apps in my container images in /app, so in the Dockerfile I do
WORKDIR /app at the beginning
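As a minimal sketch (the base image, dependency install, and entry point are illustrative assumptions, not from the question):
# Dockerfile
FROM python:3.12-slim
# everything below, and the running app, works relative to /app
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir .
CMD ["python", "-m", "myapp"]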
Should the logs go in /var/opt/myapp?
In the container world the best practice is that your application logs go to stdout and stderr, not into files inside the container. Containers are ephemeral by design and should be treated that way: when a container is stopped and deleted, all of the data on its filesystem is gone.
On a local Docker development environment you can see the logs with docker logs, and you can:
start a container named gettingstarted from the image docker/getting-started:
docker run --name gettingstarted -d -p 80:80 docker/getting-started
redirect docker logs output to a local file on the docker client (your machine from where you run the docker commands):
docker logs -f gettingstarted &> gettingstarted.log &
open http://localhost to generate some logs
follow the log file in real time with tail, or open it with any text viewer:
tail -f gettingstarted.log
Should the config files go in /etc/opt/myapp?
Again, you can put the config files anywhere you want. I like to keep them together with my app, so in the /app directory, but you should not modify the config files once the container is running. Instead, pass the config values to the container as environment variables at startup with the -e flag. For example, to create a MYVAR variable with the value MYVALUE inside the container, start it this way:
docker run --name gettingstarted -d -p 80:80 -e MYVAR='MYVALUE' docker/getting-started
exec into the container to see the variable:
docker exec -it gettingstarted sh
/ # echo $MYVAR
MYVALUE
From here it is the responsibility of your containerized app to understand these variables and translate them into actual application configuration. Most programming languages can read environment variables from inside the code at runtime, but if that is not an option you can write an entrypoint.sh script that updates the config files with the values supplied through the env vars. A good example of this is the postgresql entrypoint: https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
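A minimal, hypothetical sketch of such a script (the variable name, template path, and placeholder marker are all assumptions):
#!/bin/sh
# entrypoint.sh -- render config from environment variables, then hand off to the app
: "${MYVAR:?MYVAR must be set}"   # fail fast if the variable is missing
# replace the @MYVAR@ placeholder in a config template with the runtime value
sed "s|@MYVAR@|${MYVAR}|g" /app/config.template > /app/config.conf
exec "$@"   # run the container's CMD as PID 1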
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
As you can see, it is not recommended to write logs into the filesystem of the container; if you need them persisted, you would rather have a solution that saves them outside of the container.
If you understand and follow this mindset, especially that containers are ephemeral, then it will be much easier for you to transition from local Docker development to production-ready Kubernetes infrastructure.
Docker is Linux, so almost all of your concerns are related to the best operating system in the world: Linux
Installation folder
This will help you:
Where to install programs on Linux?
Where should I put software I compile myself?
and this: Linux File Hierarchy Structure
As a summary, in Linux you could use any folder for your apps, bearing in mind:
Don't use system folders: /bin /usr/bin /boot /proc /lib
Don't use filesystem mount folders: /media /mnt
Don't use the /tmp folder, because its content is deleted on each restart
As you found in your research, you could imitate Chrome and use /opt
You could create your own folder, like /acme, if several developers log into the machine, so you can tell them: "No matter the machine or the application, all the custom content of our company will be in /acme". This also helps if you are security-paranoid, because nobody will be able to guess where your application is. Anyway, if the devil has access to your machine, finding everything is just a matter of time.
You could use fine-grained permissions to keep the chosen folder safe
Log Folder
Similar to the previous paragraph:
You could store your logs in the standard location: /var/log/acme.log
Or create your own company standard
/acme/log/api.log
/acme/webs/web1/app.log
Config Folder
This is the key for devops.
In traditional, manual deployments of the past, some folders were used to store app configurations, like:
/etc
$HOME/.acme/settings.json
But in the modern era, and if you are using Docker, you should not manually store your settings inside the container or on the host. The best way to have just one build and deploy it n times (dev, test, staging, uat, prod, etc.) is to use environment variables.
One build, n deploys, and the use of env variables are fundamental for devops and cloud applications. Check the famous https://12factor.net/
III. Config: Store config in the environment
V. Build, release, run: Strictly separate build and run stages
This is also good practice in any language. Check Heroku: Configuration and Config Vars
So your Python app should not read or expect a file on the filesystem to load its configuration. Maybe for dev, but not for test and prod.
Your Python app should read its configuration from env variables:
import os
print(os.environ['DATABASE_PASSWORD'])
And then inject these values at runtime:
docker run -it -p 8080:80 -e DATABASE_PASSWORD=changeme my_python_app
And on your local development machine, in the same shell and before running your application:
export DATABASE_PASSWORD=changeme
python myapp.py
Config for a lot of apps
The previous approach is an option for a couple of apps. But if you move toward microservices and microfrontends, you will have dozens of apps in several languages. In that case, to centralize the configuration you could use:
Spring Cloud Config
ZooKeeper
https://www.vaultproject.io/
https://www.doppler.com/
Or the Configurator (I'm the author)

Azure App Service - How to use environment variables in docker container?

I'm using an Azure App Service to run my docker image. Running my docker container requires using a couple of environment variables. Locally, when I run my container I just do something like:
docker run -e CREDENTIAL -e USERNAME myapp
However, in the Azure App Service, after defining CREDENTIAL and USERNAME as Application Settings, I'm unsure how to pass these to the container. I see from the logs that on startup Azure passes some of its own environment variables, but if I add a startup command with my environment variables, it tacks them onto the end of the command generated by Azure, creating an invalid command. How can I pass mine to the container?
As I understand it, you want to set environment variables in that Docker container with the -e option.
You don't need a startup command for that. Pass these variables as Application Settings:
Application Settings are exposed as environment variables for access by your application at runtime.
Documentation
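For example, with the Azure CLI (the resource group and app names are placeholders):
az webapp config appsettings set --resource-group myResourceGroup --name myAppService --settings CREDENTIAL=<value> USERNAME=<value>
The container then sees CREDENTIAL and USERNAME as ordinary environment variables, with no -e flag needed.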

Connect Node.js API to virtual machine on Microsoft Azure / Azure CLI

Currently, I'm working on a project which is hosted on Microsoft Azure as a resource. The project runs on a virtual machine and is operated using commands on the Azure CLI.
Now I've been asked to create a web app for it using Node.js and React.js. I'm totally lost on how to connect the Node.js API to the virtual machine. Is there any way to trigger those Azure CLI commands through a Node.js app? Any help would be appreciated!
EDIT:
Managed to solve the issue. I used the npm package 'ssh-exec', which lets you execute commands on a virtual machine remotely after connecting with an IP address, username, and password. Very simple to use.
Link to package - https://www.npmjs.com/package/ssh-exec
There are a few steps to connect a Node.js API to a VM.
Firstly, we need to clone the project that we will be deploying to the Azure VM. This project is a basic Node.js API with a single endpoint for returning an array of todo objects. Go to the location where you want to store the project and clone it:
git clone --single-branch --branch base-project https://github.com/coderonfleek/node-azure-vm.git
Once the project has been cloned, go to the root of the project and install dependencies:
cd node-azure-vm
npm install
Run the application using the npm run dev command. The application will start up at the address http://localhost:5000. When the application is up and running, enter http://localhost:5000/todos in your browser to see the list of todos.
Now, go to the package.json file of your project and add these scripts in the scripts sections:
"scripts": {
.....,
"stop": "pm2 kill",
"start": "pm2 start server.js"
}
The start and stop scripts use the pm2 process manager to start and stop the Node.js application on the VM. pm2 itself will be installed globally on the VM when the VM is set up.
At the root of the project, run the rm -rf .git command to remove any .git history. Then push the project to GitHub. Make sure that this is the GitHub account connected to your CircleCI account.
Then set up a virtual machine on Azure to run Node.js.
Next, create a new VM on Azure and set its environment up for hosting the Node.js application. These are the steps:
Create a new VM instance
Install nginx
Configure nginx to act as a proxy server. Route all traffic on port 80 of your VM to the running instance of the Node.js application on port 5000
Install Node.js on the VM and clone the app from the GitHub repo into a folder in the VM
Install pm2 globally
Do not be intimidated by the complexity of these steps! You can complete all five with one command. At the root of your project, create a new file named cloud-init-github.txt; it is a cloud-init file. Cloud-init is an industry-standard method for cloud instance initialization.
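For orientation, such a file can look roughly like the sketch below; this is an untested outline with illustrative package versions and paths, and the guide linked below contains the complete, working version:
#cloud-config
package_update: true
packages:
  - nginx
runcmd:
  # install Node.js (version and install method are illustrative)
  - curl -fsSL https://deb.nodesource.com/setup_18.x | bash -
  - apt-get install -y nodejs
  # install pm2 globally, then clone the app (target directory is illustrative)
  - npm install -g pm2
  - git clone https://github.com/coderonfleek/node-azure-vm.git /home/azureuser/app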
See the guide linked below for the complete cloud-init file and the remaining details:
https://circleci.com/blog/cd-azure-vm/

Local IP and external IP setup for React

I am connecting to Node from React. Every time I am away from my home server, I need to change the local IP config to the external IP config. Is there any way I can set two IP addresses (one for the local IP address and another one for the external IP address)?
I believe you mean that when you are outside, you need to change the IP in your React app to access your Node server. If I am right, then you can move forward.
You can make use of environment variables via process.env.
In your package.json, under scripts, add something like this:
"office": "IP=\"THE_IP\" npm start". Then, from your app, you can access the IP value like this: process.env.IP.
Now, when you run your app from outside, you would run this command: npm run office.
If you are using Windows: set "IP=abcdef" && npm start.
Hope it makes sense.
For a Node.js application -
You can add an ecosystem.config.js file to your project and define the development and production URLs and other variables there. Then you can start your server via pm2, as sketched below.
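A minimal sketch of such a file (the app name, script, and URLs are illustrative; env is the default block and env_production is the one pm2 switches to with --env production):
// ecosystem.config.js
module.exports = {
  apps: [{
    name: "api",
    script: "./server.js",
    // default environment, used by plain `pm2 start ecosystem.config.js`
    env: {
      NODE_ENV: "development",
      API_URL: "http://localhost:5000"
    },
    // selected with `pm2 start ecosystem.config.js --env production`
    env_production: {
      NODE_ENV: "production",
      API_URL: "https://api.example.com"
    }
  }]
};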
For development you can use the command -
pm2 start ecosystem.config.js
For production you can use the command -
pm2 start ecosystem.config.js --env production
For a detailed explanation you can follow this link: https://pm2.keymetrics.io/docs/usage/environment/
And for a React application
You can simply make a file named .env and define your URL and other environment variables there (with Create React App, the variable names must start with REACT_APP_).

Connecting app to AWS EC2 instance

I'm pretty new to DevOps and I'm trying to set up my Node.js app on an AWS server instance. Steps I've taken:
Set up Elastic IP
Launched EC2 instance with Ubuntu server
Connected IP to instance
Allowed incoming connections on port 3000
SSH'd into the server with a .pem file
Now I'm at the point where I need to get my files uploaded to the server. I've used FileZilla (and like it) in the past to upload files but the initial part was already set up. When I set up the site on FileZilla there is no /var/www folder on the remote site.
Don't know how to connect these dots.
Also not sure what I need to run once I successfully upload the files. I imagine npm install when I'm ssh'd into the server? Most of the tutorials out there only go through the basic instance setup.
Thanks!
You don't need to have /var/www. Also, it's better to use version control and a remote repository like GitHub, then SSH to your EC2 instance and clone your repository there.
Then cd into your repo, run npm install, and then start your app.
And check that it works.
Once you connect to the EC2 instance, clone your code there. It is not mandatory for it to live in /var/www/html, but it's best practice to keep it there. Once you clone, run npm install in your project's home directory so all the required packages get installed. Then, to run your Node application in production, you have to run it as a service with pm2, supervisor, forever, passenger, etc. You can use any of these, configured appropriately to run your application on the desired port. With pm2, you can follow this guide: install pm2. Then you can run it with the following command for your environment; for example, to run my application on port 5555 for production:
$ PORT=5555 pm2 start app.js --name API --env production -f
Check the status using pm2 list. Now your application is running on http://server-ip:5555/. But you won't want to type the port number every time, so you should configure a web server in front of your application, like apache or nginx, which will forward all requests to the port your application is running on. You can find the best guide on their home pages. Then your application will be available at http://server-ip/. You can follow this for a single configuration of multiple node apps; a minimal nginx sketch follows below.
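A minimal nginx server block for this setup might look like the sketch below (the port matches the pm2 example above; the file path is illustrative):
# /etc/nginx/sites-available/default -- reverse proxy to the Node app
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:5555;  # the port your app listens on
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}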
Hope this helps.
