Deployed Small Footprint Tanzu Application Service (TAS) in Azure without domains. Can I access the CC API and Apps Manager with the IP?

I was able to deploy BOSH and Small Footprint Tanzu Application Service (TAS) in Azure without using domains. All VMs are running. Can I access the CC API and Apps Manager with the IP address instead of api.SYSTEMDOMAIN?

The short answer is no. You really, really want to have DNS set up properly.
Here's the long answer that is more nuanced.
All requests to your foundation go through the Gorouter. Gorouter will take the incoming request, look at the Host header and use that to determine where to send the request. This happens the same for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation is going to set the Host header based on the DNS entry you type into your browser's address bar. The cf CLI is going to do the same thing.
There are some ways to work around this:
If you are strictly using a client like curl, you can set the Host header to arbitrary values. That way, you could set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
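As a rough sketch, assuming your Gorouter load balancer sits at 203.0.113.10 and your system domain is sys.example.com (both made-up values, not from your foundation), hitting the API root would look something like this:
curl -k -H "Host: api.sys.example.com" https://203.0.113.10/
The -k skips certificate validation, which you will almost certainly need since the cert won't match a bare IP.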
You can manually set entries in your /etc/hosts (or similar on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters.
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
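For illustration, assuming a load balancer IP of 203.0.113.10 and placeholder system/apps domains (swap in whatever you configured), the /etc/hosts entries might look like:
203.0.113.10 api.sys.example.com
203.0.113.10 uaa.sys.example.com
203.0.113.10 login.sys.example.com
203.0.113.10 my-super-cool-app.apps.example.com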
Where this won't work is on systems where you can't edit /etc/hosts, like customers or external users of software running on your foundation, or if you're trying to deploy apps on your foundation that talk to each other using routes on CF (because you can't edit /etc/hosts in the container). For example, if you have app-a.apps_domain and app-b.apps_domain and app-a needs to talk to app-b, that won't work because you have no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you are able to use container-to-container networking and the apps.internal domain, though. The resolution for that domain is provided by BOSH DNS. You have to be aware of this difference when deploying your apps: map routes on the apps.internal domain and set a network policy to allow traffic to flow between the two, as sketched below.
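A minimal sketch with the hypothetical app-a/app-b from above (the exact flags vary a bit between cf CLI versions, and the port is just an example):
cf map-route app-b apps.internal --hostname app-b
cf add-network-policy app-a app-b --protocol tcp --port 8080
app-a would then reach app-b at http://app-b.apps.internal:8080 from inside its container.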
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.

The easiest way to achieve a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was something enterprise IT was still oblivious to.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS for any IP address. Say your LAN IP address is 10.0.0.1. Using xip.io,
10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1
...and so on. You can use these domains to access virtual hosts on your development web server from devices on your local network, like iPads, iPhones, and other computers. No configuration required!
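In a TAS/CF context, that means the load balancer IP can become part of the domain. As a hypothetical example, if your Gorouter load balancer were at 10.0.0.1, you could set the system domain to 10.0.0.1.xip.io and then target the API like this:
cf api https://api.10.0.0.1.xip.io --skip-ssl-validation
Every host under that wildcard (api, login, uaa, your app routes) resolves back to the same IP with no DNS changes on your side.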

Related

SSL certs for a single IP - two ports, same URL website

We have a project that is to go live very soon, and we ran into this issue when dealing with developers. These are two JD Edwards (ERP) websites hosted on a single IBM WebSphere web server, currently using one FQDN with different port assignments for DEV and TEST users. The websites are:
DEV
https://jdeweb01dev.corporate.company.com:100/jde/owhtml/
TEST
https://jdeweb01dev.corporate.company.com:101/jde/owhtml/
There is only one IP configured for the above server FQDN, but we will eventually give the sites common names like JdeDev.company.com and JdeTest.company.com or something similar.
We want to implement SSL certs for our Test/Dev environments, but how would we implement this in IIS or IBM WebSphere, as well as at the DNS level, since the only difference between the URLs is the port number and they lead to different websites? I'm open to suggestions on how we can improve the design as well, or how to make the current design work.
Another important thing to consider: the two websites will be accessed across two different domain forests that have a transitive trust. This is a JD Edwards project.
Appreciate any help on this!
To configure an HTTPS binding for an IIS site, configure a certificate in the IIS site bindings module.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-an-iis-hosted-wcf-service-with-ssl
Also, this could be accomplished with the netsh http command:
netsh http add sslcert ipport=0.0.0.0:8000
certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6
appid={00112233-4455-6677-8899-AABBCCDDEEFF}
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-a-port-with-an-ssl-certificate
After you have set up the FQDNs in your DNS entries, you can fill in the Host name field of the binding so the service is accessed with the server's fully qualified domain name.
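As a rough sketch on the IIS side (the site names JDE-DEV/JDE-TEST are placeholders; the host names and ports are the ones you mentioned), the bindings could be added with appcmd like this:
appcmd set site /site.name:"JDE-DEV" /+bindings.[protocol='https',bindingInformation='*:100:jdedev.company.com']
appcmd set site /site.name:"JDE-TEST" /+bindings.[protocol='https',bindingInformation='*:101:jdetest.company.com']
You would still assign a certificate to each binding; with SNI (IIS 8+) the two host names can share the same IP and even the same port without conflicting.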
Feel free to let me know if there is anything I can help with.
WebSphere supports multiple virtual hosts, each with its own alias(es), which can be a combination of DNS name and port. The built-in default_host will typically have an alias for the server/node name and the * wildcard for all ports. You then assign a specific virtual host to an application when you deploy it.

Does traffic to weird subdomains on my LAN indicate a security issue?

I am a user of OpenDNS, and I am noticing network traffic to weird subdomains on my local area network. Suppose the "Local Domain Name" setting on my router is named "mynetwork". I am seeing many requests to domains like:
lb._dns-sd._udp.mynetwork
db._dns-sd._udp.mynetwork
b._dns-sd._udp.mynetwork
tvovhvumfcuvo.mynetwork
pqwakwyids.mynetwork
vbqulcywazgwao.mynetwork
wjyuspdzzbac.mynetwork
etc.
If this is not normal traffic, how should I discern where my problem lies? Should I install something like Little Snitch on my Macs, for example?
You may want to check out this answer from menandmice, where they say:
These are queries generated by 'Multicast/Unicast DNS Service Discovery' or 'Zeroconf', which is a service of Apple 'Bonjour/Rendezvous' or Unix services like 'Avahi'. DNS queries coming from port 5353 are DNS queries from a Zeroconf service.
The DNS Service Discovery enabled clients are looking for pointers to services running in their network block 192.0.2.0/24.
This is harmless. If there is no PTR record for the requested owner names, it only means that unicast Zeroconf is not configured.
"unicast Zeroconf is not configured" might not be your exact problem, but overall it's nothing to worry about.

Point Router To Node.js Server

I am trying to build a local test environment where my local devices will point to a different environment than production. The easiest way for me to do this is to point the devices at a server that maps all routes for the production endpoint to the staging endpoint.
How can I point my router to a Node.js instance, and use the Node.js instance as the DNS server?
It sounds like you're basically wanting to set up a (temporary?) alias for a host name on your local network so that all devices on your network use that alias. For example, today I might want to go to http://application.example.com and access the development version; tomorrow I will want to go to the same address and access the testing version.
There are a couple of different ways to do this:
Add a proxy - this will take HTTP requests for one host and route them to a different host. You could do this with a virtual machine, a Docker container, or directly on the development/testing machine. All you need to do is point your application domain at the proxy and configure the proxy to send the requests to the server you want.
Configure your router to serve the test environment IP address - some routers permit you to add host names to the DNS configuration. This would allow you to simply switch the IP address for the test and development environments as needed, while keeping the same host name.
Add a DNS server to your local network - this is basically the same as the item above, except that it gives you much more control (and is more difficult to configure).
All of these will take some work to set up and will depend very much on your server and network setup.
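For the router/local DNS options, a lightweight way to sketch this (the host name reuses the example above; the IP is made up) is a dnsmasq entry that answers for the alias and points it wherever you like:
# /etc/dnsmasq.conf - answer for the alias and all its subdomains
address=/application.example.com/192.168.1.50
Changing that IP and restarting dnsmasq switches every device on the network from one environment to the other without touching the devices themselves.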

How to map unique dns names to service fabric port

I have a local service fabric cluster which has 6-7 custom http endpoints exposed. I use fiddler to redirect these to my service like so:
127.0.0.1:44300 identity.mycompany.com
127.0.0.1:44310 docs.mycompany.com
127.0.0.1:44320 comms.mycompany.com
etc..
I've never deployed a cluster in Azure before, so there are some intricacies that I'm not familiar with and can't find any documentation on. I've tried multiple times to deploy and tinker with the load balancers/public IPs with no luck.
I know DNS CNAMEs can't specify ports, so I guess I have to have a separate public IP for each hostname I want to use and then somehow internally map that to the port. So I end up with something like this:
identity.mycompany.com => azure public ip => internal redirect / map => myservicefabrichostname.azure.whatever:44300
My questions are:
1) Is this the right way to go about it, or is there some fundamental method that I'm missing?
2) Do I have to specify all these endpoints (44300, 44310, 44320, ...) when creating the cluster (it appears to set up a load of load balancer rules/probes), or will this be unnecessary if I have multiple public IPs? I'm unsure if this is for internal or external access.
thanks
EDIT:
Looks like the Azure portal is broken :( I've been on the phone with Microsoft support, and it seems the portal is not displaying the backend pools in the load balancer correctly, so you can't set up any new NAT rules.
Might be able to write a PowerShell script to get around this, though.
EDIT 2:
Looks like Microsoft has fixed the bug in the portal, happy times.
Instead of using multiple IP addresses, you can use a reverse proxy like HAProxy, IIS (with rewriting), the built-in reverse proxy, or something you build yourself or reuse. The upside is that it allows for flexibility in adding and removing underlying services.
All traffic would come in on one endpoint and then be routed in the right direction (to your services running on various ports inside the cluster). Do make sure your reverse proxy is highly available.
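As a rough sketch of host-based routing with HAProxy (the backend IP 10.0.0.4, the certificate path, and the assumption that the services themselves speak TLS are all placeholders; the host names and ports come from your Fiddler setup):
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend https_in
    # one public IP/endpoint terminates TLS for all host names
    bind *:443 ssl crt /etc/haproxy/certs/mycompany.pem
    use_backend be_identity if { hdr(host) -i identity.mycompany.com }
    use_backend be_docs     if { hdr(host) -i docs.mycompany.com }
    use_backend be_comms    if { hdr(host) -i comms.mycompany.com }

backend be_identity
    server sf1 10.0.0.4:44300 ssl verify none
backend be_docs
    server sf1 10.0.0.4:44310 ssl verify none
backend be_comms
    server sf1 10.0.0.4:44320 ssl verify none
With this pattern, only port 443 needs a load balancer rule on the public IP; the per-service ports stay internal to the cluster.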

Single domain on multiple servers

I have a domain with multiple active users and several applications hosted on it.
Domain: www.domain.com and running on server IP: XXX.XXX.XXX.1
I want to run www.domain.com/business on server IP: XXX.XXX.XXX.2
and similarly to run www.domain.com/hosting on server IP: XXX.XXX.XXX.3
It is very similar to the Google scenario:
www.google.com runs on XXX.XXX.173.1 - XXX.XXX.185.1
www.google.com/+dinesh runs on XXX.XXX.186.1 - XXX.XXX.187.1
I have seen a lot of articles about managing DNS and virtual entries, but I have been unable to find a correct answer.
Another way to do this is to make the host portions slightly different, i.e.:
business.domain.com/business
hosting.domain.com/hosting
You would then use these links where you are currently putting www.domain.com/business and www.domain.com/hosting. It's then a simple matter to have those different hostnames point at different addresses.
In general, it's not possible to have URLs with the same host point to different IP addresses on the basis of the stuff after the hostname. I cannot seem to verify your Google example (from where I'm looking, they both go to the same set of addresses). If you have more information on how you determined those addresses, please post it and maybe something else can be suggested.
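For example, the DNS records for the subdomain approach might look like the following zone fragment (the 192.0.2.x addresses are placeholders standing in for your XXX.XXX.XXX.1-3 servers):
; fragment of the domain.com zone
www       IN  A  192.0.2.1
business  IN  A  192.0.2.2
hosting   IN  A  192.0.2.3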
You can manage it through a load balancer rather than running it on different servers.
Use a reverse proxy in front of the application servers.
Consider using nginx or Apache httpd.
These can be configured to route (technically, proxy) requests to the desired app servers by inspecting the context path in the URL.
If you choose to use nginx, see this post on how to configure nginx for such a use case.
See the nginx configuration page for additional details.
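As an illustrative sketch (the upstream 192.0.2.x addresses are placeholders for your XXX.XXX.XXX.1-3 servers), the nginx server block could look like:
server {
    listen 80;
    server_name www.domain.com;

    # Send /business to the second server
    location /business/ {
        proxy_set_header Host $host;
        proxy_pass http://192.0.2.2;
    }

    # Send /hosting to the third server
    location /hosting/ {
        proxy_set_header Host $host;
        proxy_pass http://192.0.2.3;
    }

    # Everything else stays on the main server
    location / {
        proxy_set_header Host $host;
        proxy_pass http://192.0.2.1;
    }
}
DNS only ever points www.domain.com at the proxy; the path-based split happens entirely inside the proxy configuration.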
