What is the best way to write a VM's external IP to a .env file for a Node.js project running on GCP Compute Engine (Managed Instance Group) - node.js

Background Info
I have a Node.js app running on a Managed Instance Group of VMs on GCP's Compute Engine.
New VMs are generated from a template with a startup script. The script does the usual stuff: installs Node.js and curl, git clones the app code, etc.
This application is set to auto-scale, which is why I need configuration to happen programmatically - namely setting host and port in the .env file for the Node.js project.
How I have tried to solve the problem
The only way I can think of doing this programmatically in my startup.sh script is by running the command: gcloud compute instances list
This returns something like this
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
VM-1 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-2 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-3 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
Then from that, I would want to pull out the EXTERNAL_IP of the current machine the script is running on, and somehow write it to an environment variable / write it to the .env file for my project.
The .env file only has this information in it:
{
  "config": {
    "host": "VM_EXTERNAL_IP",
    "port": 3019
  }
}
This method, I think, would require some sort of regex to grab the correct IP out of the command's output, store it in an environment variable, and then write it to the .env file.
This seems like unnecessary work, as I am surely not the first person to want to do something like this. Is there a standard way of doing this? I'm not a Node.js expert by any means, and even less of a GCP expert. Perhaps there is a mechanism on GCP to deal with this - some metadata API that can easily grab the IP to use in code? Maybe on the Node.js side there is a better way of configuring the host? Any recommendations are appreciated.

There are many methods to determine the external IP address. Note that your instance does not actually hold its external public IP address; Google implements a 1-to-1 NAT that maps the public IP address to the instance's private IP address.
The gcloud CLI supports the command-line option --format json. You can use tools that parse JSON, such as jq:
gcloud compute instances list --format json | jq -r ".[].networkInterfaces[0].accessConfigs[0].natIP"
Get your public IP address from an external source; the address you see may or may not be the same as your instance's:
https://checkip.amazonaws.com/.
curl https://checkip.amazonaws.com/
Use the CLI with format options to get just the external address:
gcloud compute instances describe [INSTANCE_NAME] --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
Read the metadata server from inside the instance:
curl http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip -H "Metadata-Flavor: Google"
Once you have determined the method to get your external IP address, you can use tools such as awk to replace the value in your file.
https://www.grymoire.com/Unix/Awk.html
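For example, here is a minimal startup-script sketch that combines the metadata-server approach with sed (a close cousin of awk for this kind of substitution). The .env path /opt/app/.env is an assumption; point it at your project's file:
#!/bin/bash
# Ask the metadata server for this VM's external IP (requires an access config / external IP).
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
# Swap the placeholder in the project's .env file for the real address.
sed -i "s/VM_EXTERNAL_IP/${EXTERNAL_IP}/" /opt/app/.env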

Related

getting hostname of remote computers on the local network not setup in /etc/hosts

I have a new learning: I was trying to get a hostname using Python's socket module.
So from my MacBook I ran the code below:
socket.gethostbyaddr("192.168.1.111")
and I get ('rock64', [], ['192.168.1.111']). Then I tried the IP address of a computer that is no longer on the network but used to be:
socket.gethostbyaddr("192.168.1.189")
and it returned ('mint', [], ['192.168.1.189']). Then I realised it's coming from the /etc/hosts file.
Now, in that hosts file I also have this entry:
/etc/hosts
172.217.25.3 google.com.hk
But if I try to get the host for a WAN IP address, I get different results than expected!
socket.gethostbyaddr("172.217.25.3")
That returns ('hkg07s24-in-f3.1e100.net', ['3.25.217.172.in-addr.arpa'], ['172.217.25.3']).
So I am now wondering where, in the latter case of the WAN IP address, the hostname is coming from, and why in the case of local computer IPs I am getting the hostname from the configured /etc/hosts file?
How can we get the hostnames of computers on the local network without socket.gethostbyaddr having to look into the /etc/hosts file, or by some other means?
This is an opinion-based answer to the question "how to build a registry of network devices on your local network?"
The best way to build a registry of devices on your local network is to set up ntopng on your gateway. It uses DPI (Deep Packet Inspection) techniques to collect information about hosts.
ntopng has a nice user interface and displays host names (when possible).
You can assign aliases for specific hosts which do not leak host names via any protocol.
For some reason the ntopng developers did not include the alias in the JSON response for the request http://YOUR-SERVER:3000/lua/host_get_json.lua?ifid=2&host=IP-OF-DEVICE .
You can add it manually by adding the lines require "mac_utils" and hj["alias"]=getDeviceName(hj["mac_address"]) to the file /usr/share/ntopng/scripts/lua/host_get_json.lua
You can use the REST API to interrogate ntopng and use the provided information to build any script you need.
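For example, a minimal sketch of interrogating that endpoint with curl and jq; the credentials below are placeholders, and depending on your ntopng configuration you may need cookie-based authentication instead of basic auth:
# Fetch one device's details from ntopng and pretty-print the JSON (admin:admin is an assumption).
curl -s -u admin:admin "http://YOUR-SERVER:3000/lua/host_get_json.lua?ifid=2&host=IP-OF-DEVICE" | jq .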

How to get the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" with App2
and have access to its file system (as is available on the command line when you run ls...) via code (Node.js). Is that possible?
I've found this lib which provides the ability to connect over SSH via code, but I'm not sure what I should put in host, port, etc.
In the connect I provided the password, which should be retrieved via code.
EDIT
const { Client } = require('ssh2'); // assuming the ssh2 package, whose Client exposes this connect() signature
const conn = new Client();
conn.on('ready', () => {
  // connected; interact with the remote file system here, then close
  conn.end();
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password), I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it, I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use CF networking functionality or volume services, and I know that the container is ephemeral...
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an auth code from UAA. If you run CF_TRACE=true cf ssh-code you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
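A rough command-line sketch of that flow (sshpass is just one convenient way to feed the one-time code; the app name APP2 is hypothetical and the domain is taken from the question):
APP_GUID=$(cf app APP2 --guid)   # GUID of the target app
SSH_CODE=$(cf ssh-code)          # one-time auth code from UAA; it expires quickly, so use it immediately
sshpass -p "$SSH_CODE" ssh -p 2222 "cf:${APP_GUID}/0@ssh.cf.mydomain.com"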
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can inject an S3 resource as a CUPS service and create a service instance and bind to both apps. That way both will read / write to the same S3 endpoint.
Quick Google search for Bluemix S3 Resource shows - https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
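A rough sketch of that binding flow with the cf CLI; the service name, app names, and credential keys below are all hypothetical and depend on the object storage you provision:
# Create a user-provided service (CUPS) holding the object-storage credentials.
cf create-user-provided-service shared-s3 -p '{"bucket":"my-bucket","access_key_id":"AKIA...","secret_access_key":"REDACTED"}'
# Bind the same service instance to both apps, then restage so VCAP_SERVICES is refreshed.
cf bind-service App1 shared-s3
cf bind-service App2 shared-s3
cf restage App1
cf restage App2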
Ver 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

How to access an OpenStack VM instance from outside the subnet?

I have setup a cloud test bed using OpenStack. I used the 3 node architecture.
The IP assigned to each node is as given below
Compute Node : 192.168.9.19/24
Network Node : 192.168.9.10/24
Controller Node : 192.168.9.2/24
The link of instance created is like this :
http://controller:6080/vnc_auto.html?token=2af0b9d8-0f83-42b9-ba64-e784227c119b&title=hadoop14%28f53c0d89-9f08-4900-8f95-abfbcfae8165%29
At first this instance was accessible only when I substituted controller:6080 with 192.168.9.2:6080. I solved this by setting up a local DNS server and resolving 192.168.9.2 to controller.local. Now, instead of substituting the IP, it works when I substitute controller.local.
Is there any other way to do it? Also, how can I access this instance from a subnet other than 192.168.9.0/24, without specifying the IP?
If I understood your question correctly, then yes, there is another way; you don't need to set up a DNS server!
On the machine that you would like to access the link, perform the operations below:
Open /etc/hosts file with a text editor.
Add this entry: 192.168.9.2 controller
Save the file, and that's it.
I suggest you do this on all your nodes so that you can use these hostnames in your OpenStack configuration files instead of their IPs. This would also save you from tons of modifications if you have to change the subnet IPs.
So for example your /etc/hosts files on your nodes should look like these:
#controller
192.168.9.2 controller
#network
192.168.9.10 network
#compute
192.168.9.19 compute

AWS get active Public IPs of AutoScaling Group

I'm using node.js and AWS with autoscaling. A javascript SDK solution is preferable but at this point I'll take anything.
I'm hoping this is super easy to do and that I'm just an idiot, but how does one go about getting the public IP addresses of instances that are undergoing the scaling event?
I'm trying to keep a list of active public IPs within a specific application tier so I can circumvent ELB for websocket connections, but I can't figure out how to programmatically get the public IP addresses of the instances that have just been added/removed.
For me, with the Sensu client config, I bake a basic Sensu client config into the base AMI for my instances with the placeholders "this_hostname", "this_ip", and "this_role". Then I add some simple seds in my CloudFormation user_data script that curl the AWS metadata endpoint for the public IP as the instance boots. Each CloudFormation script sets/exports APP_TYPE (downcased) in the same user_data script prior to my sed lines, so I reuse that as the role for Sensu:
"sed -i \"s/this_hostname/$(curl http://169.254.169.254/latest/meta-data/public-ipv4)/\" /etc/sensu/conf.d/client.json\n",
"sed -i \"s/this_ip/$(hostname -i)/\" /etc/sensu/conf.d/client.json\n",
"sed -i \"s/this_role/${APP_TYPE,,}/\" /etc/sensu/conf.d/client.json\n",
You can also use the internal IP for both, or the external IP for both hostname and IP; you can see examples of both above...
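If you also want to enumerate the group's current public IPs from outside the instances (e.g. from a management box), here is a sketch with the AWS CLI; the group name my-asg is hypothetical:
# Instance IDs currently registered in the Auto Scaling group.
IDS=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text)
# Their public IPs (instances without a public IP simply produce no value).
aws ec2 describe-instances --instance-ids $IDS \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text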
For shutdown, I use a simple /etc/rc0.d/S01Instance_Termination script, symbolically linked from /etc/init.d/instance_termination, which runs a similar curl to remove itself from the host on instance shutdown:
http://pastebin.com/6He1mQTH

RabbitMQ Cluster on EC2: Hostname Issues

I want to set up a 3-node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented so if we lose a server it can be replaced by another new server automagically. We can set the cluster up manually easily using the default hostname (ip-xx-xx-xx-xx) so that the broker id is rabbit@ip-xx-xx-xx-xx. This is because the hostname is resolvable over the network.
The problem is: this hostname will change if we lose/reboot a server, invalidating the cluster. We haven't had luck in setting a custom static hostname because they are not resolvable by other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config. E.g., rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even be the ec2 private IP addresses. If all of the boxes are in the same region it'll be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign the elastic IP to its replacement.
Of course, if you have a small number of clients, you could just add entries into the /etc/hosts file on each box and update as needed.
@Chrskly gave good answers that are the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of other servers are mainly what I hear.
We could not get Elastic IPs to work without the aid of DNS or hostname aliases, because the internal IP/DNS on Amazon still rotate, and the public IP/DNS names that stay static cannot be used as the hostname for Rabbit unless aliased properly.
Hosts file manipulation via a script is also an option. This needs to be accompanied by a script that can identify the DNS names of the other servers at launch, so it doesn't save much work in terms of making the config more "solid state".
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster it with any other available machines using the default internal DNS assigned at launch. If we lose a machine, a new one will come up, prepare Rabbit, and look up the DNS names of machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python. However, this could easily be done with something like Chef/Puppet.
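For reference, the cluster-join and housekeeping steps such a bootstrap script performs typically reduce to a few rabbitmqctl calls; a sketch, where the peer and dead node names are whatever your script discovers:
# Join this node to a discovered cluster member (node name is illustrative).
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@ip-10-0-0-12
rabbitmqctl start_app
# Housekeeping: remove a node that no longer exists.
rabbitmqctl forget_cluster_node rabbit@ip-10-0-0-99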
Update: Detail from Docs
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
