PowerDNS with BDR and without BDR - powerdns

Deploying PowerDNS with BDR and without BDR: what is the difference?
What is the role of bootstrapping in Postgres? When deploying Postgres from scratch, bootstrapping is performed, and bootstrapping also happens while deploying PowerDNS with BDR.
Could you help me with a link to understand PowerDNS?
Ansible is being used to deploy PowerDNS with Postgres. I was able to download Ansible and create a virtual environment.
In the inventory, there is a list of hosts with BDR and without BDR.
While running the command "make report", it shows an error saying sshpass needs to be installed. I am unable to install sshpass on my Mac.
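For what it's worth, sshpass is only required when Ansible connects with SSH password authentication. Two common workarounds, both assumptions about your setup rather than part of the deployment docs: install sshpass from a third-party Homebrew tap, or switch the inventory to key-based SSH so sshpass is not needed at all.
# Option 1: sshpass is not in the core Homebrew formulae, so use a third-party tap
brew install esolitos/ipa/sshpass
# Option 2: avoid sshpass by setting up key-based auth (user@target-host is a placeholder)
ssh-copy-id user@target-host
# then drop ansible_ssh_pass / --ask-pass from the inventory and command line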

Related

Multiple Linux Grafana Integration

I started with Grafana to monitor on-premises Linux servers. I am using the Cloud Portal. On the Grafana dashboard, I installed the Linux Server Integration using this tutorial -> https://grafana.com/docs/grafana-cloud/quickstart/agent_linuxnode/.
I used the command line on one server to set up the agent:
sudo ARCH=amd64 GCLOUD_STACK_ID="XXXXX" GCLOUD_API_KEY="xxxxx" GCLOUD_API_URL="https://integrations-api-eu-west.grafana.net" /bin/sh -c "$(curl -fsSL https://raw.githubusercontent.com/grafana/agent/release/production/grafanacloud-install.sh)"
sudo systemctl restart grafana-agent.service
It works perfectly with one server. However, when I added a new remote Linux server with the same command line, it replaced the previous server in the dashboard and I cannot select the other server. I feel I should not use the same command line, but I cannot find which parameters I should modify.
Did someone face the same issue and find a solution?
Thank you in advance,
B.
PS: Ideally I would make it work using docker containers on each Linux Server, communicating to the Cloud Portal
Assuming sudo systemctl restart grafana-agent.service restarts a specific agent, with its execution command defined in /etc/systemd/system/grafana-agent.service:
if you want to have another grafana-agent, you need an additional service file, for example grafana-agent-2.service with a different configuration.
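A rough sketch of that approach (the paths below are the defaults created by the Grafana Cloud install script; verify them on your system, and adapt the second config's contents as needed):
# copy the config and the unit file for a second agent
sudo cp /etc/grafana-agent.yaml /etc/grafana-agent-2.yaml
sudo cp /etc/systemd/system/grafana-agent.service /etc/systemd/system/grafana-agent-2.service
# edit grafana-agent-2.yaml as needed, and point ExecStart in grafana-agent-2.service
# at the new config file, then reload systemd and start the second unit
sudo systemctl daemon-reload
sudo systemctl enable --now grafana-agent-2.service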

Connect Node.js API to virtual machine on Microsoft Azure / Azure CLI

Currently, I'm working on a project which is hosted on Microsoft Azure as a resource. The project runs on a virtual machine and is operated using commands on the Azure CLI.
Now I've been asked to create a web app for it using Node.js and React.js. I'm totally lost on how to connect the Node.js API to the virtual machine. Is there any way to trigger those Azure CLI commands through a Node.js app? Any help would be appreciated!
EDIT:
Managed to solve the issue. Used this npm package 'ssh-exec' which lets you execute commands on a virtual machine remotely after connecting using IP Address, username, password. Very simple to use.
Link to package - https://www.npmjs.com/package/ssh-exec
There are a few steps to connect a Node.js API to a VM.
Firstly, we need to clone the project that we will be deploying to the Azure VM. This project is a basic Node.js API with a single endpoint for returning an array of todo objects. Go to the location where you want to store the project and clone it:
git clone --single-branch --branch base-project https://github.com/coderonfleek/node-azure-vm.git
Once the project has been cloned, go to the root of the project and install dependencies:
cd node-azure-vm
npm install
Run the application using the npm run dev command. The application will start up at the address http://localhost:5000. When the application is up and running, enter http://localhost:5000/todos in your browser to see the list of todos.
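You can also check the endpoint from the terminal instead of the browser:
curl http://localhost:5000/todos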
Now, go to the package.json file of your project and add these scripts in the scripts sections:
"scripts": {
.....,
"stop": "pm2 kill",
"start": "pm2 start server.js"
}
The start and stop scripts will use the pm2 process manager to start and stop the Node.js application on the VM. The pm2 script will be installed globally on the VM when it has been set up.
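For reference, once pm2 is installed globally these scripts can be exercised like this (npm start and npm stop simply run the "start" and "stop" entries above):
npm install -g pm2   # done on the VM during setup
npm start            # runs "pm2 start server.js"
npm stop             # runs "pm2 kill"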
At the root of the project, run the rm -rf .git command to remove any .git history. Then push the project to GitHub. Make sure that this is the GitHub account connected to your CircleCI account.
Then, set up a virtual machine on Azure to run Node.js.
Next, create a new VM on Azure and set its environment up for hosting the Node.js application. These are the steps:
Create a new VM instance
Install nginx
Configure nginx to act as a proxy server. Route all traffic on port 80 of your VM to the running instance of the Node.js application on port 5000
Install Node.js on the VM and clone the app from the GitHub repo into a folder in the VM
Install pm2 globally
Do not be intimidated by the complexity of these steps! You can complete all five with one command. At the root of your project, create a new file named cloud-init-github.txt; it is a cloud-init file. Cloud-init is an industry-standard method for cloud instance initialization.
(cloud-init code omitted; refer to the link below for complete details)
https://circleci.com/blog/cd-azure-vm/
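As a rough illustration only (the linked article contains the actual cloud-init-github.txt), a user-data shell script along the following lines would cover the five steps; the Ubuntu packages, the NodeSource setup script, the nginx site path, and the <your-account> repository URL are all assumptions:
#!/bin/bash
# assumes an Ubuntu-based VM image
apt-get update -y
apt-get install -y nginx git
# install Node.js and pm2 globally
curl -fsSL https://deb.nodesource.com/setup_16.x | bash -
apt-get install -y nodejs
npm install -g pm2
# route traffic on port 80 to the Node.js app on port 5000
cat > /etc/nginx/sites-available/default <<'EOF'
server {
    listen 80 default_server;
    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}
EOF
systemctl restart nginx
# clone the app from your GitHub repo and start it with pm2
git clone https://github.com/<your-account>/node-azure-vm.git /opt/node-azure-vm
cd /opt/node-azure-vm && npm install && npm start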

Issue on stopping and starting AWS EC2 WordPress (CentOS 6.9) VM

I have configured WordPress in a LAMP setup on a CentOS 6.9 server.
It's an AWS EC2 instance. When I stop the VM and start it again, the database connectivity of WordPress is lost. How do I resolve this?
I tested by rebooting from inside the VM; at that time everything works fine, but not when I stop and start from the AWS dashboard. Any solution?
I think the issue might be with your MySQL service: if you have not specified that it should run at server start, it will not run after a stop/start.
Also make sure your connection string is localhost and that an Elastic (static) IP is assigned to the instance.
If that is the case, you can resolve it by configuring the service to run every time the instance starts, with this command:
sudo chkconfig mysqld on
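To confirm whether that was the problem (assuming the service is named mysqld, as it is by default on CentOS 6):
sudo service mysqld status      # is MySQL running right now?
sudo chkconfig --list mysqld    # runlevels 2-5 should show "on" after the fix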

recommended way to install mongodb on elastic beanstalk

I have already taken a look at How to install mongodb in Elastic Beanstalk? dated 2014, which no longer works, as well as https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/#manually-deploy-mongodb-on-ec2
I have set up a new elastic beanstalk environment running on node.js with 1 ec2 micro instance '64bit Amazon Linux 2016.03 v2.1.0 running Node.js'
I have already tried using ssh to connect into my instance and install the mongodb packages using yum command:
$ sudo yum install -y mongodb-org-server mongodb-org-shell mongodb-org-tools
and received this call back:
Loaded plugins: priorities, update-motd, upgrade-helper
No package mongodb-org-server available.
No package mongodb-org-shell available.
No package mongodb-org-tools available.
Error: Nothing to do
When I first SSH'd into my instance, I received this warning:
This EC2 instance is managed by AWS Elastic Beanstalk. Changes made via SSH
WILL BE LOST if the instance is replaced by auto-scaling. For more information
on customizing your Elastic Beanstalk environment, see our documentation here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Currently my environment is set up as a single instance environment, to save on costs. However, in the future I will upgrade to an auto-scaling environment.
Because of this, I am asking: is it recommended to make any changes via SSH in EC2, or should I only be using the EB CLI?
I have both the EC2 and EB CLIs installed locally; however, I have never used the EB CLI before. If I should be using EB, does anyone have a recommended way to install MongoDB?
In case anyone is looking for an answer, here is the advice I received from aws business support.
All code deployed to Elastic Beanstalk needs to be "stateless", i.e. never make changes directly to a running Beanstalk instance using SSH or FTP, as this will cause inconsistencies and/or data loss!
- Elastic Beanstalk is not designed for applications that are not stateless.
The environment is designed to scale up and down depending on your network / CPU load and to build new instances from a base AMI. If an instance or its underlying hardware has issues, Elastic Beanstalk will terminate the running instance and replace it with a new one. Hence, no code modifications should be applied "directly" to an existing instance, as new instances will not be aware of these direct changes. ALL changes / code need to be uploaded either through the Elastic Beanstalk console or the CLI tools and then pushed to all the running instances.
More information on Elastic Beanstalk design concepts can be read at the following link
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.design.html
Suggested Solution:
With the above in mind, if using MongoDB to store application data, our recommendation would be to decouple the MongoDB environment from your Node.js application.
I.e. create a MongoDB server outside of Elastic Beanstalk, for example launching MongoDB directly on an EC2 instance, and have your Elastic Beanstalk Node.js application connect to the MongoDB server using connection settings in your app.
-Creating MongoDB
Below are some example links that may be of use in your scenario for creating a MongoDB server.
Deploy MongoDB on EC2,
https://docs.mongodb.org/ecosystem/platforms/amazon-ec2/
MongoDB node client
https://docs.mongodb.org/getting-started/node/client/
MongoDB on the AWS Cloud quick start guide
http://docs.aws.amazon.com/quickstart/latest/mongodb/architecture.html
-Adding environment variables to Elastic Beanstalk to reference your MongoDB server
Once you have created your MongoDB Server you can pass the needed connection settings to your Elastic Beanstalk environment using environment variables.
Example using an .ebextensions .config file, to which you can add the Mongo URL / ports / users etc.:
option_settings:
  - option_name: MONGO_DB_URL
    value: "Your MongoDB EC2 internal IP address"
Information on how to use environment properties and read them from within your application can be seen below.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.container.html#create_deploy_nodejs_custom_container-envprop
And information on using .ebextensions .config files can be found at the following link:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
Alternatively, you can also set environment variables using the CLI or via the AWS Console.
How to set environment variables with the eb CLI is described at the link below:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
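For example, with the MONGO_DB_URL name from the .ebextensions snippet above and a placeholder value:
eb setenv MONGO_DB_URL=10.0.0.12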
Using AWS Console
To set system properties (AWS Management Console)
Open the Elastic Beanstalk console.
Navigate to the management console for your environment.
Choose Configuration.
In the Software Configuration section, choose Edit.
Under Environment Properties, create your name / values ...
Accessing Environment Configuration Settings
Inside the Node.js environment running in AWS Elastic Beanstalk, you can access the environment variables using process.env.ENV_VARIABLE similar to the following example.
process.env.MONGO_DB_URL
process.env.PARAM2
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.container.html#create_deploy_nodejs_custom_container-envprop
Summary:
In summary, I would recommend the following steps to integrate MongoDB with Elastic Beanstalk environments.
Step 1) Create a MongoDB Server outside of Elastic Beanstalk
Step 2) Create your Node.js application in Elastic Beanstalk that connects to your MongoDB server
3 options:
1) SSH into an eb instance and install the mongo CLI manually:
sudo yum-config-manager --add-repo https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.0/x86_64/
sudo yum install --nogpgcheck -y mongodb-org-shell
The disadvantage is that if EB scales down its number of instances and the instance you are currently on gets terminated, you get kicked out of the SSH session:
The system is going down for halt NOW!
Connection to 1.2.3.4 closed by remote host.
Connection to 1.2.3.4 closed.
ERROR: CommandError - An error occurred while running: ssh.
You then need to start all over again: connect to instance, install mongo CLI...
2) Pre-install mongo CLI on instances by using a .config file:
container_commands:
  01-mongocli:
    command: "sudo yum-config-manager --add-repo https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/4.0/x86_64/; sudo yum install --nogpgcheck -y mongodb-org-shell"
    ignoreErrors: true   # use this to ensure instance deployment even if the mongo CLI installation fails
Again, if the instance gets terminated by the auto-scaler you'd have to connect again, but you don't have to install the mongo CLI manually.
3) Create a separate instance that hosts your mongo CLI, as described in #amyloula's answer. If your MongoDB is within a VPC, you need to create that separate instance within the VPC as well. You will then need to create a gateway to access the instance publicly, as you cannot connect directly to an instance in a VPC.

deploy wso2esb in docker container with kubernetes

Can someone help with how to deploy wso2esb in a Docker container with Kubernetes?
Currently I'm running only one node/master on a local machine with Ubuntu Server 14.04 LTS.
If I run this:
sudo docker run --name esb isim/wso2esb
it instantly triggers the service inside the container,
but if I run this:
kubectl run esb1 --image=isim/wso2esb
the container just runs, without triggering the service inside the container.
By the way, I'm using isim/wso2esb from Docker Hub.
Hope someone can help me.
From the comments above, it looks like you were connecting to the wrong IP address, which you discovered by running kubectl logs esb1.
In general, you can follow the Kubernetes Debugging FAQ when you see an issue like this to see if it is a common problem that has already been documented.
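For reference, a few generic commands that help diagnose this kind of issue (the resource name follows the question; the 9443 management-console port is an assumption about the wso2esb image, and if kubectl run created a deployment the actual pod name will carry a suffix, so take it from kubectl get pods):
kubectl logs esb1                       # see which address/ports the ESB bound to
kubectl get pods -o wide                # shows the pod IP inside the cluster
kubectl describe pod esb1               # events, image and exposed ports
kubectl port-forward esb1 9443:9443     # reach the pod locally without exposing it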
