Azure Service Fabric application - hostname and host IP address

From an Azure Service Fabric application, how can I get the hostname and host IP address of the node that is serving the current request?

These environment variables are made available by Service Fabric:
Fabric_NodeIPOrFQDN - the IP or FQDN of the node, as specified in the cluster manifest file (e.g. localhost or 10.0.0.1)
Fabric_NodeName - the name of the node running the process (e.g. _Node_0)
Assuming you're using C#, you can read an environment variable with Environment.GetEnvironmentVariable.

Other than using environment variables, you can use the StatelessServiceContext class. It has a NodeContext property containing several interesting properties. In your service you can get the FQDN/IP address like this:
var address = Context.NodeContext.IPAddressOrFQDN;
As far as I know, the node name isn't tied to a machine name; it's a logical, user-defined name. I'd say Environment.MachineName or Context.NodeContext.IPAddressOrFQDN is the most accurate.
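The same environment variables are visible to any script running on the node. A minimal shell sketch, assuming it may also run outside a cluster (the fallback values are placeholders, not Service Fabric defaults):

```shell
# Read the Service Fabric node environment variables, falling back to
# placeholder values when they are unset (e.g. outside a cluster).
node_ip="${Fabric_NodeIPOrFQDN:-localhost}"
node_name="${Fabric_NodeName:-unknown-node}"
echo "node=${node_name} ip=${node_ip}"
```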

Related

What is the best way to write a VM's external IP to a .env file for a Node.js project running on GCP Compute Engine (Managed Instance Group)?

Background Info
I have a Node.js app running on a Managed Instance Group of VMs on GCP's Compute Engine.
New VMs are generated from a template with a startup script. The script does the usual stuff: installs Node.js, curl, git clones the app code, etc.
This application is set to auto-scale, which is why I need configuration to happen programmatically - namely setting the host and port in the .env file for the Node.js project.
How I have tried to solve the problem
The only way I can think of to do this programmatically in my startup.sh script is by running the command: gcloud compute instances list
This returns something like this
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
VM-1 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-2 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-3 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
Then from that, I would want to pull out the EXTERNAL_IP of the current machine the script is running on, and somehow write it to an environment variable / write it to the .env file for my project.
The .env file only has this information in it:
{
  "config" : {
    "host" : "VM_EXTERNAL_IP",
    "port" : 3019
  }
}
This method, I think, would require some sort of regex to grab the correct IP out of the command's output, store it in an environment variable, and then write it to the .env file.
This seems like unnecessary work, as I'm surely not the first person to want to do something like this. Is there a standard way of doing this? I'm not a Node.js expert by any means, and even less of a GCP expert. Perhaps there is a mechanism on GCP to deal with this - some metadata API that can easily grab the IP to use in code? Maybe on the Node.js side there is a better way of configuring the host? Any recommendations are appreciated.
There are many methods to determine the external IP address. Note that your instance does not actually have the external public IP address assigned to it. Google implements a 1-to-1 NAT that maps the public IP address to the instance's private IP address.
The CLI supports the command line option --format json. You can use tools that parse json such as jq.
gcloud compute instances list --format json | jq -r ".[].networkInterfaces[0].accessConfigs[0].natIP"
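To see what that filter does without calling gcloud, here it is applied to a captured sample of the JSON output, trimmed to the fields the filter reads (the instance name and address are made up):

```shell
# Sample shaped like `gcloud compute instances list --format json`
# output; only the fields read by the jq filter are included.
sample='[{"name":"VM-1","networkInterfaces":[{"accessConfigs":[{"natIP":"34.68.1.10"}]}]}]'
external_ip=$(printf '%s' "$sample" | jq -r '.[].networkInterfaces[0].accessConfigs[0].natIP')
echo "$external_ip"
```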
Get your public IP address from an external service such as https://checkip.amazonaws.com/ (the address it reports may or may not be the one assigned to your instance):
curl https://checkip.amazonaws.com/
Use the CLI with options to get just the external address:
gcloud compute instances describe [INSTANCE_NAME] --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
Read the metadata server from inside the instance:
curl http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip -H "Metadata-Flavor: Google"
Once you have determined the method to get your external IP address, you can use tools such as awk to replace the value in your file.
https://www.grymoire.com/Unix/Awk.html
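For example, a sed substitution over the file shown in the question (sed instead of awk; the IP value and the temp file are stand-ins for the real ones):

```shell
# Write a copy of the .env file containing the VM_EXTERNAL_IP
# placeholder, then substitute the discovered address in place.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
{
  "config" : {
    "host" : "VM_EXTERNAL_IP",
    "port" : 3019
  }
}
EOF
external_ip="34.68.1.10"   # hypothetical value from one of the methods above
# GNU sed; on BSD/macOS use: sed -i '' ...
sed -i "s/VM_EXTERNAL_IP/${external_ip}/" "$envfile"
grep '"host"' "$envfile"
```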

How does the Fabric CLI get the IP of a peer/orderer in the BYFN example?

Could anybody tell me how the CLI knows the IPs of the other peers and orderers just from the Host entries in configtx.yaml?
When is the DNS information generated?
Can anybody also tell me some more about the configuration below: "CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock"?
When you run a Fabric example, it always refers to default credentials or an already configured Fabric configuration.
For example, if you use the basic Fabric example, you will run [your directory]/fabric-dev-servers/startFabric.sh.
This file refers to already configured information. One piece of it is the connection profile. If you look at the createPeerAdmin.sh file, you can find DevServer_connection.json. This file contains the connection information for the Fabric network.
As you are using byfn.sh, you can add the host IP addresses using "extra_hosts" in the docker-compose.yaml file.
If there is no such definition, it will use localhost by default.
https://medium.com/1950labs/setup-hyperledger-fabric-in-multiple-physical-machines-d8f3710ed9b4
Like this:
extra_hosts:
- "peer0.org1.example.com:192.168.1.10"
- "ca.org1.example.com:192.168.1.15"
- "peer0.org2.example.com:192.168.1.20"
- "ca.org2.example.com:192.168.1.25"

Point Router To Node.js Server

I am trying to build a local test environment where my local devices will point to a different environment than production. The easiest way for me to do this is to point the devices to a server that will map all requests meant for the production endpoint to the staging endpoint.
How can I point my router to a Node.js instance, and use the Node.js instance as the DNS server?
It sounds like you're basically wanting to set up a (temporary?) alias for a host name on your local network so that all devices on your network use that alias. For example, today I might want to go to http://application.example.com and access the development version; tomorrow I will want to go to the same address and access the testing version.
There are a couple of different ways to do this:
Add a proxy - this will take HTTP requests for one host and route them to a different host. You could do this with a virtual machine, a Docker container, or directly on the development/testing machine. All you need to do is point your application domain at the proxy and configure the proxy to send the requests to the server you want.
Configure your router to serve the test environment IP address - some routers permit you to add host names to the DNS configuration. This would allow you to simply switch the IP address for the test and development environments as needed, while keeping the same host name.
Add a DNS server to your local network - this is basically the same as the item above, except that it gives you much more control (and is more difficult to configure).
All of these will take some work to set up and will depend very much on your server and network setup.
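As a sketch of the proxy option, here is an nginx server block (the hostnames are placeholders, not from the question) that accepts requests for the production name and forwards them to staging:

```nginx
server {
    listen 80;
    server_name application.example.com;   # the name your devices request
    location / {
        # forward everything to the staging backend (hypothetical host)
        proxy_pass http://staging.internal.example.com;
    }
}
```

Point the DNS name (or each device's hosts file) at the machine running the proxy; switching environments then only means changing proxy_pass and reloading nginx.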

Riak "Node is not reachable"

I am using the Riak 2.1.4 series on Amazon. I'm totally new to it and have a couple of questions:
I deployed an instance of Riak. It's deployed in an EC2 instance?
Do we really need the app.config and vm.args files for Riak configuration? I think if the nodename is available in riak.conf that's enough, isn't it?
I see the IP address of the instance is different from the one configured in riak.conf; is that fine? For example, the instance name is ec2-35-160-XXX-XX.us-west-2.compute.amazonaws.com and riak.conf has riak@172.31.XX.XX.
The only changes in riak.conf are:
ring_size = 64
erlang.distribution.port_range.minimum = 6000
erlang.distribution.port_range.maximum = 7999
transfer_limit = 2
search = on
This configuration exists on each instance. Am I missing something here? How can I set this up for a five-node cluster?
I deployed an instance of Riak. It's deployed in an EC2 instance?
Not sure what you are asking here
Do we really need the app.config and vm.args files for Riak configuration? I think if the nodename is available in riak.conf that's enough, isn't it?
The 'app.config' and 'vm.args' files are the old way to configure Riak. The 'riak.conf' and 'advanced.config' files are the new way. The old way is still accepted, probably to support legacy installations, but I would expect support for it to be dropped in a future release. See http://docs.basho.com/riak/kv/2.1.4/configuring/basic/
I see the IP address of the instance is different from the one configured in riak.conf; is that fine? For example, the instance name is ec2-35-160-XXX-XX.us-west-2.compute.amazonaws.com and riak.conf has riak@172.31.XX.XX.
In general, if you want Erlang nodes to communicate they must be able to locate each other using the node name. The node name uses the name@domain pattern. All other nodes must be able to resolve the domain part to an IP address that is valid for the machine the node is running on, and the node itself will register the name part with the local Erlang Port Mapper Daemon (EPMD).
So whether or not riak@172.31.x.x is a valid node name will depend on your cluster's other nodes' ability to reach that private address.
Most riak-admin commands spawn a second maintenance node locally, which then uses remote procedure calls to talk to the running Riak instance. So if that 172.31.x.x IP address is not actually assigned to the local machine, those riak-admin commands will fail to find a node to talk to.
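For reference, a riak.conf sketch (the private IP is hypothetical) in which the host part of the node name is an address the other nodes can reach:

```
## riak.conf (excerpt) - values are illustrative
nodename = riak@172.31.0.10
listener.http.internal = 0.0.0.0:8098
listener.protobuf.internal = 0.0.0.0:8087
```

For the five-node cluster, the usual pattern is to run riak-admin cluster join riak@<first-node-ip> on each of the other four nodes, then riak-admin cluster plan and riak-admin cluster commit once.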

How to access an OpenStack VM instance from outside the subnet?

I have set up a cloud test bed using OpenStack. I used the 3-node architecture.
The IP assigned to each node is as given below
Compute Node : 192.168.9.19/24
Network Node : 192.168.9.10/24
Controller Node : 192.168.9.2/24
The link of the created instance is like this:
http://controller:6080/vnc_auto.html?token=2af0b9d8-0f83-42b9-ba64-e784227c119b&title=hadoop14%28f53c0d89-9f08-4900-8f95-abfbcfae8165%29
At first this instance was accessible only when I substituted controller:6080 with 192.168.9.2:6080. I solved this by setting up a local DNS server and resolving 192.168.9.2 to controller.local. Now instead of substituting the IP, it works when I substitute controller.local.
Is there any other way to do it? Also, how can I access this instance from another subnet other than 192.168.9.0/24, without specifying the IP?
If I understood your question correctly, yes, there is another way; you don't need to set up a DNS server!
On the machine that you would like to access the link, perform the operations below:
Open /etc/hosts file with a text editor.
Add this entry: 192.168.9.2 controller
Save the file, and that's it.
I suggest you do this on all your nodes so that you can use these hostnames in your OpenStack configuration files instead of their IPs. This would also save you tons of modifications if you ever have to change the subnet IPs.
So for example your /etc/hosts files on your nodes should look like these:
#controller
192.168.9.2 controller
#network
192.168.9.10 network
#compute
192.168.9.19 compute
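With several nodes, the edit can be scripted. A sketch that appends an entry only when the name is missing (it writes to a temp copy here rather than the real /etc/hosts, so it is safe to try):

```shell
# Append "IP name" pairs to a hosts file, skipping names already present.
hosts_file=$(mktemp)             # use /etc/hosts (with sudo) on a real node
add_host() {
  grep -qw "$2" "$hosts_file" || printf '%s %s\n' "$1" "$2" >> "$hosts_file"
}
add_host 192.168.9.2  controller
add_host 192.168.9.10 network
add_host 192.168.9.19 compute
add_host 192.168.9.2  controller   # duplicate call: nothing is added
cat "$hosts_file"
```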
