Hi, we are trying to use Node.js to return IP address WHOIS information before we send the requesting IP address to the rest of our app. That part is easy.
However, the part that is not easy is selecting only the Organization part of the WHOIS information.
For example, here is a whois query and what it returns:
whois 137.184.236.168
% IANA WHOIS server
% for more information on IANA, visit http://www.iana.org
% This query returned 1 object
refer: whois.arin.net
inetnum: 137.0.0.0 - 137.255.255.255
organisation: Administered by ARIN
status: LEGACY
whois: whois.arin.net
changed: 1993-05
source: IANA
# whois.arin.net
NetRange: 137.184.0.0 - 137.184.255.255
CIDR: 137.184.0.0/16
NetName: DIGITALOCEAN-137-184-0-0
NetHandle: NET-137-184-0-0-1
Parent: NET137 (NET-137-0-0-0-0)
NetType: Direct Allocation
OriginAS: AS14061
Organization: DigitalOcean, LLC (DO-13)
RegDate: 2019-11-13
Updated: 2020-04-03
Comment: Routing and Peering Policy can be found at https://www.as14061.net
Comment:
Comment: Please submit abuse reports at https://www.digitalocean.com/company/contact/#abuse
Ref: https://rdap.arin.net/registry/ip/137.184.0.0
OrgName: DigitalOcean, LLC
OrgId: DO-13
Address: 101 Ave of the Americas
Address: FL2
City: New York
StateProv: NY
PostalCode: 10013
Country: US
RegDate: 2012-05-14
Updated: 2022-05-19
Ref: https://rdap.arin.net/registry/entity/DO-13
OrgAbuseHandle: ABUSE5232-ARIN
OrgAbuseName: Abuse, DigitalOcean
OrgAbusePhone: +1-347-875-6044
OrgAbuseEmail: abuse@digitalocean.com
OrgAbuseRef: https://rdap.arin.net/registry/entity/ABUSE5232-ARIN
OrgTechHandle: NOC32014-ARIN
OrgTechName: Network Operations Center
OrgTechPhone: +1-347-875-6044
OrgTechEmail: noc@digitalocean.com
OrgTechRef: https://rdap.arin.net/registry/entity/NOC32014-ARIN
OrgNOCHandle: NOC32014-ARIN
OrgNOCName: Network Operations Center
OrgNOCPhone: +1-347-875-6044
OrgNOCEmail: noc@digitalocean.com
OrgNOCRef: https://rdap.arin.net/registry/entity/NOC32014-ARIN
The only thing we are interested in is the line Organization: DigitalOcean, LLC (DO-13),
as we want to drop all IP addresses from this hosting provider.
We have noticed that we are successful at stopping Google and AWS by using the host command, but DigitalOcean does not work this way, so we need to do it via whois.
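For reference, the host-style check we currently use looks roughly like the sketch below (using Node's built-in dns module as the equivalent of the host command; the hostname suffixes are just the ones we match on, and DigitalOcean rarely shows up this way because droplet PTR records are usually set by the customer):

const dns = require("dns").promises;

// Reverse-lookup the client IP and check the PTR hostname suffix
async function isKnownCloudHost(ip) {
  try {
    const hostnames = await dns.reverse(ip);
    return hostnames.some((h) => /(\.googleusercontent\.com|\.amazonaws\.com)$/i.test(h));
  } catch (err) {
    return false; // no PTR record, so this check tells us nothing
  }
}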
I know in NodeJS I would request the information
exec("whois "+ip, (error, stdout, stderr) => {
console.log(stdout);
}
You could use a regular expression:
const organizationPattern = /^organization:\s*(.+)$/im;
const match = organizationPattern.exec(stdout);
const organization = match ? match[1] : 'unknown';
console.log(organization);
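Putting the two together, a minimal sketch (the function name lookupOrganization is just illustrative; it assumes the whois CLI is installed on the server and that ip has already been validated, since it is interpolated into a shell command):

const { exec } = require("child_process");

function lookupOrganization(ip) {
  return new Promise((resolve, reject) => {
    exec("whois " + ip, (error, stdout) => {
      if (error) return reject(error);
      const match = /^organization:\s*(.+)$/im.exec(stdout);
      resolve(match ? match[1].trim() : "unknown");
    });
  });
}

// Example: drop the request when the organization looks like DigitalOcean
lookupOrganization("137.184.236.168").then((org) => {
  if (/digitalocean/i.test(org)) {
    console.log("blocking:", org);
  } else {
    console.log("allowing:", org);
  }
});

Note that other registries may label this field differently (for example org-name or descr in RIPE output), so you may need an extra pattern for IP ranges that ARIN does not manage.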
The user input / queryText being sent to Dialogflow is not the expected, original user query.
[screenshot: simulator query manipulation]
I enabled "Log interactions to Google Cloud" in my Dialogflow project's settings. What I'm seeing is multiple "assistant_action" resources before the actual request that goes to DF. In the example above, this is what I see:
[screenshot: GCP logs]
With the first debug resource showing post data with:
"inputs":[{"rawInputs":[{"inputType":"UNSPECIFIED_INPUT_TYPE","query":"how long has it been on the market"}]
And
resource: {
type: "assistant_action"
labels: {
project_id: "<MY-PROJECT-ID>"
version_id: ""
action_id: ""
}
},
timestamp: "2021-03-05T18:41:44.142202856Z"
severity: "DEBUG"
labels: {
channel: "production"
querystream: "GOOGLE_USER"
source: "AOG_REQUEST_RESPONSE"
}
The subsequent requests are the same but with modified input queries ("how long has it been on the market" -> "how long has something been on the market" -> "how long has us FDA been on the market"), the last one being the actual user query sent, the channel being preview and the action_id "actions.intent.TEXT".
resource: {
type: "assistant_action"
labels: {
project_id: "<MY-PROJECT-ID>"
version_id: ""
action_id: "actions.intent.TEXT"
}
},
timestamp: "2021-03-05T18:41:45.942019959Z"
severity: "DEBUG"
labels: {
channel: "preview"
querystream: "GOOGLE_USER"
source: "AOG_REQUEST_RESPONSE"
}
I should note that I am testing current drafts of an AoG project and have no releases, let alone a production release. I have a denied beta because of branding issues, which I address with separate AoG/DF projects for PROD. I do not have any intents enabled for slot filling or any required entity parameters. This is just one example, but I have been noticing many occurrences of this issue.
What is happening here? Why is the original user input being manipulated? What are all these interactions we are seeing before the expected request/response cycle?
After having contacted someone at Google Cloud, I was informed this was something that had been raised by others and that AoG devs were looking into it.
As of a Mar 24 2021 release, I can no longer replicate this Entity Resolution issue.
I'm setting up my load balancer in GCP with 2 nodes (Apache httpd), with the domain lblb.tonegroup.net.
Currently my load balancer is working fine and traffic is switching over between the 2 nodes, but how do I configure it to redirect http://lblb.tonegroup.net to https://lblb.tonegroup.net?
Is it possible to configure this at the load balancer level, or do I need to configure it at the Apache level? I have a Google-managed SSL cert installed, FYI.
Redirection from HTTP to HTTPS is now possible with the load balancer's Traffic Management.
Below is an example of how to set it up in the documentation:
https://cloud.google.com/load-balancing/docs/https/setting-up-traffic-management#console
Basically, you will create two of each: forwarding rules, target proxies, and URL maps.
2 URL maps
In the 1st URL map you just set a redirection. The redirect settings are shown below; no backend service needs to be defined here:
httpsRedirect: true
redirectResponseCode: FOUND
In the 2nd URL map you define your backend services.
2 forwarding rules
The 1st forwarding rule serves HTTP requests, so basically port 80.
The 2nd forwarding rule serves HTTPS requests, so port 443.
2 target proxies
The 1st target proxy is a targetHttpProxy; this is where the 1st forwarding rule forwards to, and it is mapped to the 1st URL map.
The 2nd target proxy is a targetHttpsProxy; this is where the 2nd forwarding rule forwards to, and it is mapped to the 2nd URL map.
========================================================================
Below is a Cloud Deployment Manager example with Managed Certificates and Storage Buckets as the backend
storagebuckets-template.jinja
resources:
- name: {{ properties["bucketExample"] }}
  type: storage.v1.bucket
  properties:
    storageClass: REGIONAL
    location: asia-east2
    cors:
      - origin: ["*"]
        method: [GET]
        responseHeader: [Content-Type]
        maxAgeSeconds: 3600
    defaultObjectAcl:
      - bucket: {{ properties["bucketExample"] }}
        entity: allUsers
        role: READER
    website:
      mainPageSuffix: index.html
backendbuckets-template.jinja
resources:
- name: {{ properties["bucketExample"] }}-backend
  type: compute.beta.backendBucket
  properties:
    bucketName: $(ref.{{ properties["bucketExample"] }}.name)
    enableCdn: true
ipaddresses-template.jinja
resources:
- name: lb-ipaddress
  type: compute.v1.globalAddress
sslcertificates-template.jinja
resources:
- name: example
  type: compute.v1.sslCertificate
  properties:
    type: MANAGED
    managed:
      domains:
        - example1.com
        - example2.com
        - example3.com
loadbalancer-template.jinja
resources:
- name: centralized-lb-http
  type: compute.v1.urlMap
  properties:
    defaultUrlRedirect:
      httpsRedirect: true
      redirectResponseCode: FOUND
- name: centralized-lb-https
  type: compute.v1.urlMap
  properties:
    defaultService: {{ properties["bucketExample"] }}
    pathMatchers:
      - name: example
        defaultService: {{ properties["bucketExample"] }}
        pathRules:
          - service: {{ properties["bucketExample"] }}
            paths:
              - /*
    hostRules:
      - hosts:
          - example1.com
        pathMatcher: example
      - hosts:
          - example2.com
        pathMatcher: example
      - hosts:
          - example3.com
        pathMatcher: example
httpproxies-template.jinja
resources:
- name: lb-http-proxy
  type: compute.v1.targetHttpProxy
  properties:
    urlMap: $(ref.centralized-lb-http.selfLink)
- name: lb-https-proxy
  type: compute.v1.targetHttpsProxy
  properties:
    urlMap: $(ref.centralized-lb-https.selfLink)
    sslCertificates: [$(ref.example.selfLink)]
- name: lb-http-forwardingrule
  type: compute.v1.globalForwardingRule
  properties:
    target: $(ref.lb-http-proxy.selfLink)
    IPAddress: $(ref.lb-ipaddress.address)
    IPProtocol: TCP
    portRange: 80-80
- name: lb-https-forwardingrule
  type: compute.v1.globalForwardingRule
  properties:
    target: $(ref.lb-https-proxy.selfLink)
    IPAddress: $(ref.lb-ipaddress.address)
    IPProtocol: TCP
    portRange: 443-443
templates-bundle.yaml
imports:
- path: backendbuckets-template.jinja
- path: httpproxies-template.jinja
- path: ipaddresses-template.jinja
- path: loadbalancer-template.jinja
- path: storagebuckets-template.jinja
- path: sslcertificates-template.jinja
resources:
- name: storagebuckets
  type: storagebuckets-template.jinja
  properties:
    bucketExample: example-sb
- name: backendbuckets
  type: backendbuckets-template.jinja
  properties:
    bucketExample: example-sb
- name: loadbalancer
  type: loadbalancer-template.jinja
  properties:
    bucketExample: $(ref.example-sb-backend.selfLink)
- name: ipaddresses
  type: ipaddresses-template.jinja
- name: httpproxies
  type: httpproxies-template.jinja
- name: sslcertificates
  type: sslcertificates-template.jinja
$ gcloud deployment-manager deployments create infrastructure --config=templates-bundle.yaml > output
command output
NAME                       TYPE                               STATE      ERRORS  INTENT
centralized-lb-http        compute.v1.urlMap                  COMPLETED  []
centralized-lb-https       compute.v1.urlMap                  COMPLETED  []
example                    compute.v1.sslCertificate          COMPLETED  []
example-sb                 storage.v1.bucket                  COMPLETED  []
example-sb-backend         compute.beta.backendBucket         COMPLETED  []
lb-http-forwardingrule     compute.v1.globalForwardingRule    COMPLETED  []
lb-http-proxy              compute.v1.targetHttpProxy         COMPLETED  []
lb-https-forwardingrule    compute.v1.globalForwardingRule    COMPLETED  []
lb-https-proxy             compute.v1.targetHttpsProxy        COMPLETED  []
lb-ipaddress               compute.v1.globalAddress           COMPLETED  []
It is not possible to do that directly on the GCP load balancer.
One possibility is to perform the redirection on your backend service. The GCP load balancer adds an x-forwarded-proto header to requests, which is equal to http or https. You can add a condition based on this header to perform a redirection.
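For example, a minimal Express sketch of that check (Express and the 301 status code are my own choices here; any backend framework can read the same header):

const express = require("express");
const app = express();

// Redirect requests that reached the backend over plain HTTP behind the load balancer
app.use((req, res, next) => {
  if (req.headers["x-forwarded-proto"] === "http") {
    return res.redirect(301, "https://" + req.headers.host + req.originalUrl);
  }
  next();
});

app.get("/", (req, res) => res.send("hello over HTTPS"));
app.listen(8080);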
I believe the previous answer provided by Alexandre is correct; currently, it's not possible to redirect all HTTP traffic to HTTPS when using the HTTP(S) load balancer. I found a feature request already submitted for this; you can access it and add your comment using this link.
You also mentioned that you are using a Google-managed SSL certificate, but the only workaround I found is to redirect at the server level. In that scenario, you would have to use a self-managed SSL certificate.
To redirect HTTP URLs to HTTPS, do the following on the Apache server:
<VirtualHost *:80>
    ServerName www.example.com
    Redirect "/" "https://www.example.com/"
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    # ... SSL configuration goes here
</VirtualHost>
You would have to configure an Apache server configuration file. Refer to the apache.org documentation on Simple Redirection for more details.
Maybe it's too late, but I had the same problem and here is my solution:
Configure two frontends on the GCP load balancer (HTTP and HTTPS).
Set port 80 (HTTP protocol) for communication to the backend service and the final VMs.
On the backend service, add the Google variable {tls_version} as an X-SSL-Protocol custom header.
On the final servers, perform a redirection based on the X-SSL-Protocol value:
if it is empty (no HTTPS), redirect (301); otherwise do nothing.
You can check the header value on your web server or from an intermediate load balancer VM instance. In my case, with HAProxy:
frontend fe_http
    bind *:80
    mode http
    # check if the custom header value is empty
    acl is_http req.hdr(X-SSL-Protocol) -m len 0
    # perform the redirection only if no value is found in the custom header
    redirect scheme https code 301 if is_http
    # when the redirect is performed, subsequent instructions are not reached
    default_backend bk_http1
If you use Terraform (highly recommended for GCP configuration), here's a sample config. This code creates two IP addresses (v4 and v6), which you would use in your HTTPS forwarding rules as well.
// HTTP -> HTTPS redirector
resource "google_compute_url_map" "http-to-https" {
name = "my-http-to-https"
default_url_redirect {
https_redirect = true
strip_query = false
redirect_response_code = "PERMANENT_REDIRECT"
}
}
resource "google_compute_target_http_proxy" "proxy" {
name = "my-http-proxy"
url_map = google_compute_url_map.http-to-https.self_link
}
resource "google_compute_global_forwarding_rule" "http-v4" {
name = "my-fwrule-http-v4"
target = google_compute_target_http_proxy.proxy.self_link
ip_address = google_compute_global_address.IPv4.address
port_range = "80"
}
resource "google_compute_global_forwarding_rule" "http-v6" {
name = "my-fwrule-http-v6"
target = google_compute_target_http_proxy.proxy.self_link
ip_address = google_compute_global_address.IPv6.address
port_range = "80"
}
resource "google_compute_global_address" "IPv4" {
name = "my-ip-v4-address"
}
resource "google_compute_global_address" "IPv6" {
name = "my-ip-v6-address"
ip_version = "IPV6"
}
At a high level, to redirect HTTP traffic to HTTPS, you must do the following:
Create HTTPS LB1 (called here web-map-https).
Create HTTP LB2 (no backend) (called here web-map-http) with the same IP address used in LB1 and a redirect configured in the URL map.
Please check:
https://cloud.google.com/load-balancing/docs/https/setting-up-http-https-redirect
Perhaps I'm late to the game but I use the following:
[ingress.yaml]:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-external-ip
    networking.gke.io/managed-certificates: my-google-managed-certs
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: redirect-frontend-config
spec:
  defaultBackend:
    service:
      name: online-service
      port:
        number: 80
[redirect-frontend-config.yaml]
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: redirect-frontend-config
spec:
  redirectToHttps:
    enabled: true
I'm using the default "301 Moved Permanently", but if you'd like to use something else, just add a row under redirectToHttps containing
responseCodeName: <CHOSEN REDIRECT RESPONSE CODE>
MOVED_PERMANENTLY_DEFAULT to return a 301 redirect response code (default).
FOUND to return a 302 redirect response code.
SEE_OTHER to return a 303 redirect response code.
TEMPORARY_REDIRECT to return a 307 redirect response code.
PERMANENT_REDIRECT to return a 308 redirect response code.
Further reading at
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
I want to monitor and get information regarding the different instances in an Azure Virtual Machine Scale Set (VMSS).
I used the following command (Python):
vmss = compute_client.virtual_machine_scale_sets.list(resource_group, scale_set_name)
But I am not able to get the result I am expecting.
Any suggestions on what to do?
You can use the following code to get the IP and power state.
# Assumes `credentials` and `subscription_id` are already defined (see the next snippet).
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.resource import ResourceManagementClient

compute_client = ComputeManagementClient(credentials, subscription_id)
vmss = compute_client.virtual_machine_scale_set_vms.list(resource_group_name="", vmss="")
for item in vmss:
    print("name: ", item.name)
    # Look up the NIC attached to this instance to read its IP configuration
    ni_reference = item.network_profile.network_interfaces[0].id
    resource_client = ResourceManagementClient(credentials, subscription_id)
    nic = resource_client.resources.get_by_id(
        ni_reference,
        api_version='2017-12-01')
    ip_reference = nic.properties['ipConfigurations'][0]['properties']
    print("ip info: ", ip_reference)
    # The instance view contains the power state
    instance_view = compute_client.virtual_machine_scale_set_vms.get_instance_view(
        resource_group_name="", vmss="", instance_id=item.instance_id)
    print(instance_view.statuses[1].code)
result:
name: yangtestvmss_1
ip info: {'provisioningState': 'Succeeded', 'privateIPAddress': '10.0.0.5', 'privateIPAllocationMethod': 'Dynamic', 'subnet': {'id': '/subscriptions/e5b0fcfa-e859-43f3-8d84-5e5fe29f4c68/resourceGroups/yangtestvmss/providers/Microsoft.Network/virtualNetworks/yangtestvmssVnet/subnets/default'}, 'primary': True, 'privateIPAddressVersion': 'IPv4', 'isInUseWithService': False}
PowerState/running
name: yangtestvmss_3
ip info: {'provisioningState': 'Succeeded', 'privateIPAddress': '10.0.0.7', 'privateIPAllocationMethod': 'Dynamic', 'subnet': {'id': '/subscriptions/e5b0fcfa-e859-43f3-8d84-5e5fe29f4c68/resourceGroups/yangtestvmss/providers/Microsoft.Network/virtualNetworks/yangtestvmssVnet/subnets/default'}, 'primary': True, 'privateIPAddressVersion': 'IPv4', 'isInUseWithService': False}
PowerState/running
If you just want to get the VM information, use the following code.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient

subscription_id = 'subscription Id'
credentials = ServicePrincipalCredentials(client_id=CLIENT, secret=KEY, tenant=TENANT_ID)
client = ComputeManagementClient(credentials, subscription_id)

vmss = client.virtual_machine_scale_set_vms.list("resourcegroup Name", "VMSS name")
for item in vmss:
    print("id:", item.id)
    print("name:", item.name)
[screenshot: test result]
There is a cool tool that a guy from Microsoft built for monitoring VMSS;
see this link: VMSS Dashboard.
The tool helps you see the status of the VMs in the scale set: you can see the update domain and fault domain grouping of the VMs, and it lets you start or deallocate a VM. The code is from more than two years ago.
I'm experiencing difficulties accessing external resources from nodes (RHEL) configured in a VM Scale Set.
To sketch the environment I'm describing with Azure Resource Manager templates, I'm looking to create:
1 common virtualNetwork
1 frontend VM (running RHEL, which works as intended)
1 cluster (VMSS) running 2 nodes (RHEL)
Nodes are spawned in the same private subnet as the frontend VM
1 load balancer that should work as a NAT gateway (but it's not working this way)
The load balancer has an external IP, an inboundNatPool (which works), and a backendAddressPool (in which nodes are successfully registered)
The Network Security Group manages access to ports (set to allow all outbound connections)
As a footnote, I'm comfortable writing AWS CloudFormation files in YAML, so I'm handling Azure Resource Manager templates in a similar way, for the sake of readability and the added ability to put comments in my templates.
An example of my VMSS config (short snippet):
... # (the yaml template is first converted to json and then deployed using the azure cli)
# Cluster
# -------
# Scale Set
# ---------
# | VM Scale Set can not connect to external sources
# |
- type: Microsoft.Compute/virtualMachineScaleSets
  name: '[variables(''vmssName'')]'
  location: '[resourceGroup().location]'
  apiVersion: '2017-12-01'
  dependsOn:
    - '[variables(''vnetName'')]'
    - '[variables(''loadBalancerName'')]'
    - '[variables(''networkSecurityGroupName'')]'
  sku:
    capacity: '[variables(''instanceCount'')]' # Amount of nodes to be spawned
    name: Standard_A2_v2
    tier: Standard
  # zones: # If zone is specified, no sku can be chosen
  #   - '1'
  properties:
    overprovision: 'true'
    upgradePolicy:
      mode: Manual
    virtualMachineProfile:
      networkProfile:
        networkInterfaceConfigurations:
          - name: '[variables(''vmssNicName'')]'
            properties:
              ipConfigurations:
                - name: '[variables(''ipConfigName'')]'
                  properties:
                    loadBalancerBackendAddressPools:
                      - id: '[variables(''lbBackendAddressPoolsId'')]'
                    loadBalancerInboundNatPools:
                      - id: '[variables(''lbInboundNatPoolsId'')]'
                    subnet:
                      id: '[variables(''subnetId'')]'
              primary: true
              networkSecurityGroup:
                id: '[variables(''networkSecurityGroupId'')]'
      osProfile:
        computerNamePrefix: '[variables(''vmssName'')]'
        adminUsername: '[parameters(''sshUserName'')]'
        # adminPassword: '[parameters(''adminPassword'')]'
        linuxConfiguration:
          disablePasswordAuthentication: True
          ssh:
            publicKeys:
              - keyData: '[parameters(''sshPublicKey'')]'
                path: '[concat(''/home/'',parameters(''sshUserName''),''/.ssh/authorized_keys'')]'
      storageProfile:
        imageReference: '[variables(''clusterImageReference'')]'
        osDisk:
          caching: ReadWrite
          createOption: FromImage
...
The Network Security Group referenced from the template above is:
# NetworkSecurityGroup
# --------------------
- type: Microsoft.Network/networkSecurityGroups
  name: '[variables(''networkSecurityGroupName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  properties:
    securityRules:
      - name: remoteConnection
        properties:
          priority: 101
          access: Allow
          direction: Inbound
          protocol: Tcp
          description: Allow SSH traffic
          sourceAddressPrefix: '*'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '22'
      - name: allow_outbound_connections
        properties:
          description: This rule allows outbound connections
          priority: 200
          access: Allow
          direction: Outbound
          protocol: '*'
          sourceAddressPrefix: 'VirtualNetwork'
          sourcePortRange: '*'
          destinationAddressPrefix: '*'
          destinationPortRange: '*'
And the load balancer, where I assume the error is, is described as:
# Loadbalancer as NatGateway
# --------------------------
- type: Microsoft.Network/loadBalancers
  name: '[variables(''loadBalancerName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  sku:
    name: Standard
  dependsOn:
    - '[variables(''natIPAddressName'')]'
  properties:
    backendAddressPools:
      - name: '[variables(''lbBackendPoolName'')]'
    frontendIPConfigurations:
      - name: LoadBalancerFrontEnd
        properties:
          publicIPAddress:
            id: '[variables(''natIPAddressId'')]'
    inboundNatPools:
      - name: '[variables(''lbNatPoolName'')]'
        properties:
          backendPort: '22'
          frontendIPConfiguration:
            id: '[variables(''frontEndIPConfigID'')]'
          frontendPortRangeStart: '50000'
          frontendPortRangeEnd: '50099'
          protocol: tcp
I keep reading articles about configuring SNAT with port masquerading, but I'm missing relevant examples of such a setup.
Any help is greatly appreciated.
It took a lot of searching, but the Azure article about Azure Load Balancer outbound connections (Scenario #2) states that a load-balancing rule (and a complementary health probe) is necessary for SNAT to function.
The new code for the load balancer became:
...
- type: Microsoft.Network/loadBalancers
  name: '[variables(''loadBalancerName'')]'
  apiVersion: '2017-10-01'
  location: '[resourceGroup().location]'
  sku:
    name: Standard
  dependsOn:
    - '[variables(''natIPAddressName'')]'
  properties:
    backendAddressPools:
      - name: '[variables(''lbBackendPoolName'')]'
    frontendIPConfigurations:
      - name: LoadBalancerFrontEnd
        properties:
          publicIPAddress:
            id: '[variables(''natIPAddressId'')]'
    probes: # Needed for loadBalancingRule to work
      - name: '[variables(''lbProbeName'')]'
        properties:
          protocol: Tcp
          port: 22
          intervalInSeconds: 5
          numberOfProbes: 2
    loadBalancingRules: # Needed for SNAT to work
      - name: '[concat(variables(''loadBalancerName''),''NatRule'')]'
        properties:
          disableOutboundSnat: false
          frontendIPConfiguration:
            id: '[variables(''frontEndIPConfigID'')]'
          backendAddressPool:
            id: '[variables(''lbBackendAddressPoolsId'')]'
          probe:
            id: '[variables(''lbProbeId'')]'
          protocol: tcp
          frontendPort: 80
          backendPort: 80
...
I need to get the DNS name (ILPIP DNS name) in a Node.JS application (an IoT gateway) that is running on a VM in Azure.
Background details:
I need this so the application can tell the web frontend where to open the socket.io connection when the web-based client wishes to communicate with the IoT gateway.
I have been looking in Microsoft's Azure modules for Node.js, but I haven't found anything that gives the ILPIP (assigned DNS name).
You can try the RoleEnvironment.getCurrentRoleInstance() function in the Azure SDK for Node.js; run the following code snippet in a classic VM:
var azure = require('azure');

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
  if (!error && instance['endpoints']) {
    // You can get information about "endpoint1", such as its address and port, via:
    console.log(instance);
  } else {
    console.log(error);
  }
});
You may get role instance info similar to the following:
{ id: 'WorkerRole1_IN_0',
  roleName: 'WorkerRole1',
  faultDomain: '0',
  updateDomain: '0',
  endpoints:
   { 'Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp':
      { name: 'Microsoft.WindowsAzure.Plugins.RemoteAccess.Rdp',
        address: '100.104.92.19',
        port: '3389',
        publicPort: '0',
        protocol: 'tcp',
        roleInstanceId: 'WorkerRole1_IN_0' },
     'Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput':
      { name: 'Microsoft.WindowsAzure.Plugins.RemoteForwarder.RdpInput',
        address: '100.104.92.19',
        port: '20000',
        publicPort: '3389',
        protocol: 'tcp',
        roleInstanceId: 'WorkerRole1_IN_0' } } }