Single domain on multiple servers - DNS

I have a domain with multiple active users and several applications hosted on it.
Domain: www.domain.com, running on server IP: XXX.XXX.XXX.1
I want to run www.domain.com/business on server IP: XXX.XXX.XXX.2
and similarly to run www.domain.com/hosting on server IP: XXX.XXX.XXX.3
It is very similar to the Google scenario:
www.google.com runs on XXX.XXX.173.1 - XXX.XXX.185.1
www.google.com/+dinesh on XXX.XXX.186.1 -XXX.XXX.187.1
I have seen a lot of articles about managing DNS and virtual-host entries, but I have been unable to find the correct answer.

Another way to do this is to make the host portions slightly different, e.g.:
business.domain.com/business
hosting.domain.com/hosting
You would then use these links where you are currently putting www.domain.com/business and www.domain.com/hosting. It's then a simple matter to have those different hostnames point at different addresses.
In general, it's not possible to have URLs with the same host point to different IP addresses based on what comes after the hostname: DNS only ever sees the hostname, never the path. I cannot seem to verify your Google example (from where I'm looking, both go to the same set of addresses). If you have more information on how you determined those addresses, please post it and maybe something else can be suggested.
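For the split-hostname approach, the DNS side is just separate address records. A minimal BIND-style sketch, reusing the placeholder addresses from the question (substitute your real IPs):

    ; zone file fragment for domain.com
    www       IN  A  XXX.XXX.XXX.1
    business  IN  A  XXX.XXX.XXX.2
    hosting   IN  A  XXX.XXX.XXX.3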

You can manage it through a load balancer rather than running it on different servers.

Please use a reverse proxy in front of the application servers.
Consider using nginx or Apache HTTPD.
These can be configured to route (technically, proxy) requests to the desired app servers by inspecting the context path in the URL; the nginx documentation covers this use case in detail.
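A minimal nginx sketch of that idea, using the placeholder addresses from the question (replace the XXX placeholders with real addresses before loading, and adapt the paths):

    # Path-based routing on one hostname.
    server {
        listen 80;
        server_name www.domain.com;

        location /business/ {
            proxy_pass http://XXX.XXX.XXX.2;   # business app server
            proxy_set_header Host $host;
        }

        location /hosting/ {
            proxy_pass http://XXX.XXX.XXX.3;   # hosting app server
            proxy_set_header Host $host;
        }

        location / {
            proxy_pass http://XXX.XXX.XXX.1;   # everything else
            proxy_set_header Host $host;
        }
    }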


Deployed Small Footprint Tanzu Application Service (TAS) in Azure without domains. Can I access the CC API and Apps Manager with the IP?

I could deploy BOSH and Small Footprint Tanzu Application Service (TAS) in Azure without using domains. All VMs are running. Can I access the CC API and Apps Manager with the IP address instead of api.SYSTEM_DOMAIN?
The short answer is no. You really, really want to have DNS set up properly.
Here's the long answer that is more nuanced.
All requests to your foundation go through the Gorouter. Gorouter will take the incoming request, look at the Host header and use that to determine where to send the request. This happens the same for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation is going to set the Host header based on the DNS entry you type into your browser's address bar. The cf CLI is going to do the same thing.
There are some ways to work around this:
If you are strictly using a client like curl, you can set the Host header to an arbitrary value. That way, you could set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
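For example (the IP is a placeholder for your foundation's load balancer; -k is needed because the certificate won't match the bare IP):

    curl -k -H "Host: api.system_domain" https://203.0.113.10/v2/info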
You can manually set entries in your /etc/hosts (or similar on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters.
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
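A sketch of what those /etc/hosts entries might look like (the IP is a placeholder for your load balancer; the names follow the placeholder domains used above):

    203.0.113.10  api.system_domain uaa.system_domain login.system_domain
    203.0.113.10  my-super-cool-app.apps_domain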
Where this won't work is anywhere you can't edit /etc/hosts: for customers or external users of software running on your foundation, or for apps deployed on your foundation that talk to each other using CF routes (because you can't edit /etc/hosts inside the container). For example, if you have app-a.apps_domain and app-b.apps_domain and app-a needs to talk to app-b, that won't work because there is no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you can use container-to-container networking and the apps.internal domain, though. Resolution for that domain is provided by BOSH DNS. You do have to account for this difference when deploying your apps: map routes on the apps.internal domain, and set a network policy to allow traffic to flow between the two apps.
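A sketch with the cf CLI, using the hypothetical app names from above (exact flags vary a bit between CLI versions):

    # Give app-b a route on the internal domain, then allow app-a to reach it.
    cf map-route app-b apps.internal --hostname app-b
    cf add-network-policy app-a app-b --protocol tcp --port 8080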
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.
The easiest way to get a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was still something enterprise IT was oblivious to.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS
for any IP address. Say your LAN IP address is 10.0.0.1.
Using xip.io,
10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1
...and so on. You can use these domains to access virtual
hosts on your development web server from devices on your
local network, like iPads, iPhones, and other computers.
No configuration required!
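Resolution is plain DNS, so you can verify it with dig. Note that xip.io itself has since been retired, but nip.io provides the same wildcard behavior:

    dig +short app.10.0.0.1.nip.io
    # prints: 10.0.0.1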

SSL certs for a single IP - two ports, same URL website

We have a project that is going live very soon, and we ran into this issue when dealing with developers. These are two JD Edwards (ERP) websites hosted on a single IBM WebSphere web server, currently using one FQDN with different port assignments for DEV and TEST users. The websites are:
DEV
https://jdeweb01dev.corporate.company.com:100/jde/owhtml/
TEST
https://jdeweb01dev.corporate.company.com:101/jde/owhtml/
There is only one IP configured for the above server FQDN, but we will eventually assign common names like JdeDev.company.com and JdeTest.company.com.
We want to implement SSL certs for our Test/Dev environments, but how would we implement this in IIS or IBM WebSphere, as well as at the DNS level, since the only difference between the URLs is the port number and each leads to a different website? I'm open to suggestions on how we can improve the design, or on how to make the current design work.
Another important thing to consider: the two websites will be accessed from two different domain forests that have a transitive trust. This is a JD Edwards project.
Appreciate any help on this!
To configure an HTTPS binding in IIS, assign a certificate to the site in the Site Bindings dialog.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-an-iis-hosted-wcf-service-with-ssl
Alternatively, this can be accomplished with the netsh http command:
netsh http add sslcert ipport=0.0.0.0:8000 certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6 appid={00112233-4455-6677-8899-AABBCCDDEEFF}
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-a-port-with-an-ssl-certificate
After you have set up the FQDN in DNS, you can fill in the Hostname field of the binding so that the service is accessed via the server's fully qualified domain name.
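For the two ports in question, a sketch assuming one certificate (e.g. a SAN or wildcard cert) will cover both future common names; the thumbprint is a placeholder for your own, and the appid reuses the documentation's example GUID:

    netsh http add sslcert ipport=0.0.0.0:100 certhash=<your-cert-thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}
    netsh http add sslcert ipport=0.0.0.0:101 certhash=<your-cert-thumbprint> appid={00112233-4455-6677-8899-AABBCCDDEEFF}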
Feel free to let me know if there is anything I can help with.
WebSphere supports multiple virtual hosts, each with its own alias(es), which can be a combination of DNS name and port. The built-in default_host will typically have an alias for the server/node name and the * wildcard for all ports. You then assign a specific virtual host to an application when you deploy it.
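In this scenario, the virtual-host aliases might end up looking like this (the virtual host names are hypothetical; the hostnames are the common names proposed in the question):

    jde_dev_host:   alias JdeDev.company.com : 100
    jde_test_host:  alias JdeTest.company.com : 101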

How to get haproxy to use a specific cluster computer via the URI

I have successfully set up haproxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of a cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2, because in one specific case only that server is throwing an error on a web page.
Does anyone know how to do that without building a new cluster with a URI filter and only one computer in it? I am hoping to use the cluster as-is, but add something to the URI that tells haproxy which server in the cluster to use.
Thanks!
Have you thought about using a different port for this? You could define a new listen section on a different port, since, as I understand it, you are free to modify your URL however you like.
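A sketch of that suggestion (the section name, port, and address are hypothetical stand-ins for one of web1-web3):

    # Extra listen section that always hits web2, for testing only.
    listen web2_direct
        bind :8082
        server web2 10.0.0.12:80 check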
Basically, haproxy cannot do what I was hoping: there is no way to add a parameter to the URL to tell haproxy which host in the cluster to use, short of defining a separate backend per server.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level.
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
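If you do the restriction at the haproxy level instead, it could look like this, building on the sketch above (the subnet is a placeholder for your internal network):

    # Only allow the direct-to-web2 listener from inside our own network.
    listen web2_direct
        bind :8082
        acl internal src 192.168.0.0/16
        tcp-request connection reject if !internal
        server web2 10.0.0.12:80 check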
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with a session cookie that is too large, because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, the page simply drops the session and reloads.
This is working well for our testing purposes.

Point subdomain to port on server

I'd like to point a subdomain to an IP address plus a port number, but I have no idea how to do so, and Google isn't being very helpful. Any suggestions?
Came across this question while looking through my posting history, so I figured I may as well answer it now that I have more knowledge of the subject. What I was looking for were SRV records, which can be used to point a subdomain to a service on a specific port. However, the client and its protocol must explicitly support SRV lookups for this to work properly.
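A sketch of an SRV record; the format is _service._proto.name TTL IN SRV priority weight port target, and the names and port here are hypothetical (Minecraft is one protocol that does SRV lookups):

    _minecraft._tcp.mc.example.com. 3600 IN SRV 0 5 25565 play.example.com.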
If the service will only be accessed through a web browser, it may be more practical to add a VirtualHost containing a reverse proxy to Apache. This is useful for services such as CI servers or Tomcat instances.
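A sketch of such a VirtualHost, assuming a hypothetical subdomain and a Tomcat instance listening on port 8080 (requires mod_proxy and mod_proxy_http):

    <VirtualHost *:80>
        ServerName ci.example.com
        # Forward everything to the local Tomcat instance.
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>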

Set up a virtual hosts file to host the source code from a remote server

I would really appreciate your support with the inquiry below.
Current Situation:
I have a web app (containing a document-upload module) on a Linux Apache server "A" that can only be reached over HTTP from the intranet.
Required:
Another Linux Apache server "B" is required to host the same web app while maintaining the source code on server "A" only. Server "B" can be reached over HTTP from both the internet and the intranet.
Blocking points:
Under the current circumstances we are unable to host the website on server "B" directly (which would seem like the logical solution).
Question:
Is it possible to set up the virtual hosts in httpd.conf for such a requirement?
Research:
Most of my findings were posts about deploying a load-sharing/load-balancing solution (not my objective) or about setting up a two-way synchronization process between "A" and "B" (a last-resort solution).
Googled strings:
share website between two servers, host website on two servers, virtual host to another server, run single website on multiple servers setup, virtual host for website on another server, host a website on two different servers, setup two linux servers to host the same website
Server Details:
Server A:
Server IP: 192.168.xxx.xxx (accessible through the intranet only)
Hosts the website source code
Apache server
OS: RHEL5
Server B:
Accessible through the intranet and internet
Apache server
OS: Same as A (RHEL5)
Summing up what you've probably found yourself by now: unfortunately, there are two things that are called proxying. The one you are interested in is a reverse proxy, in which B takes requests and forwards them to A; the client never sees that A even exists. There are a few security concerns, depending on what angle of security you look at:
server A only ever sees requests from B, not the original client, so any IP-based restrictions you want should be configured on server B.
The usually mentioned security concern is that a (forward) proxy will ask arbitrary servers for things on behalf of the client, so it masks the client's identity. I don't think you need to worry about this as long as you put ProxyRequests Off to disable forward proxying.
Server A might accidentally reveal its IP, which you might not be comfortable with. When B passes the answer it received from A back to the client, it will not look at the payload. So if you return HTML documents, they had better contain only relative paths. I think this might be the problem you are having: if your code still contains references to 192.168.x.y, those won't work for the external client. If you are changing paths (i.e. you have something like ProxyPass /somepath http://internal-server/otherpath), things become even more complicated, so try to avoid that. (In general, your backend application needs to know its publicly visible URIs; how to do this depends on the application.)
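A minimal sketch of the reverse-proxy virtual host on server "B" (the public ServerName is hypothetical, the backend IP is a placeholder for server "A"'s intranet address, and mod_proxy plus mod_proxy_http must be enabled):

    <VirtualHost *:80>
        ServerName app.example.com      # public name of server B
        ProxyRequests Off               # keep forward proxying disabled
        ProxyPass        / http://192.168.0.10/
        ProxyPassReverse / http://192.168.0.10/
    </VirtualHost>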
