I am working on a feature like the one provided by github.io, which gives each user a custom domain for their collection of articles. I've tested this by editing the hosts file, and now I want to run a more realistic simulation locally. How can I set this up on my LAN? I've read some articles about dnsmasq, but I'm still not very clear on it.
What I do is this:
When user A has a collection, he can access it at https://ourserver-example.com/collection-of-a
Then he can add his personal domain like my-collection.io
After he configures it in our system, he can access his collection at https://my-collection.io
Now I want to figure out how to test this my-collection.io domain locally. Do I need to build a local DNS server and add a CNAME record associating my-collection.io with ourserver-example.com?
You can just add an entry in your hosts file like this:
192.168.10.10 my-collection.io
Where the IP address is the address of the server/machine where the application is running. If it's running on your local machine, point the name at the loopback address instead:
127.0.0.1 my-collection.io
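If you want every device on your LAN to resolve the name, not just one machine, a minimal dnsmasq sketch is a single line in /etc/dnsmasq.conf (assuming 192.168.10.10 is your server's LAN IP):
address=/my-collection.io/192.168.10.10
The address= directive answers queries for the domain and all of its subdomains with that IP. Then point each device's DNS server setting (or your router's DHCP-advertised DNS) at the machine running dnsmasq.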
I could deploy BOSH and Small Footprint Tanzu Application Service (TAS) in Azure without using the domains. All VMs are running. Can I access the CC API and Apps Manager with the IP address instead of api.SYSTEM_DOMAIN?
The short answer is no. You really, really want to have DNS set up properly.
Here's the long answer that is more nuanced.
All requests to your foundation go through the Gorouter. Gorouter will take the incoming request, look at the Host header and use that to determine where to send the request. This happens the same for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation is going to set the Host header based on the DNS entry you type into your browser's address bar. The cf CLI is going to do the same thing.
There are some ways to work around this:
If you are strictly using a client like curl, you can set the Host header to an arbitrary value. That way, you can set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
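For example, a hypothetical request, assuming 10.0.0.100 is the load balancer IP in front of your Gorouters and api.system_domain is your real system domain:
curl -k -H "Host: api.system_domain" https://10.0.0.100/v2/info
curl's --resolve flag achieves the same thing while keeping the URL (and TLS name check) intact:
curl --resolve api.system_domain:443:10.0.0.100 https://api.system_domain/v2/info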
You can manually set entries in your /etc/hosts file (or the equivalent on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain, and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters (see the example entries below).
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
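A hypothetical set of entries, again assuming 10.0.0.100 is the load balancer IP:
10.0.0.100 api.system_domain uaa.system_domain login.system_domain
10.0.0.100 my-super-cool-app.apps_domain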
Where this won't work is on systems where you can't edit /etc/hosts: customers or external users of software running on your foundation, or apps deployed on your foundation that talk to each other using routes on CF (because you can't edit /etc/hosts in the container). For example, if you have app-a.apps_domain and app-b.apps_domain, and app-a needs to talk to app-b, that won't work because there is no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you are able to use container-to-container networking and the apps.internal domain, though. Resolution for that domain is provided by BOSH DNS. You have to be aware of this difference when deploying your apps: map routes on the apps.internal domain, and set a network policy to allow traffic to flow between the two apps.
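A sketch of those two steps with the cf CLI (v7 syntax; the app names are hypothetical):
cf map-route app-b apps.internal --hostname app-b
cf add-network-policy app-a app-b --protocol tcp --port 8080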
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.
The easiest way to achieve a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was something enterprise IT was still oblivious to.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS
for any IP address. Say your LAN IP address is 10.0.0.1.
Using xip.io,
10.0.0.1.xip.io resolves to 10.0.0.1
www.10.0.0.1.xip.io resolves to 10.0.0.1
mysite.10.0.0.1.xip.io resolves to 10.0.0.1
foo.bar.10.0.0.1.xip.io resolves to 10.0.0.1
...and so on. You can use these domains to access virtual
hosts on your development web server from devices on your
local network, like iPads, iPhones, and other computers.
No configuration required!
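You can check the resolution with dig; for example:
dig +short mysite.10.0.0.1.xip.io
10.0.0.1
If xip.io itself is ever unavailable, nip.io and sslip.io are drop-in alternatives that use the same naming scheme.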
I have a network setup containing two machines.
On one machine I have a site hosted with IIS.
I have added an entry in the HOSTS file pointing my local IP to this domain:
10.42.12.105 www.mysite.com
Then I configured Windows to accept incoming calls on TCP port 80. In Windows Firewall with Advanced Security, I went to Inbound Rules -> Action -> New Rule, selected "Predefined", then selected the last item, World Wide Web Services (HTTP), and allowed the connection. I also allowed port 80 explicitly.
I can access the site at www.mysite.com with no problem on the same machine.
What I would like to do is be able to view this site from my other machine on the same network.
Can anyone see where I'm going wrong?
A HOSTS file is a way to tell one machine to map a web address to an IP address, like an alias. It only works on the machine containing the HOSTS file. For example, I could add a line in my hosts file mapping your URL www.mysite.com to 127.0.0.1; my browser would then think your site is on my PC.
So, if you want to set up this alias/name-mapping for multiple machines, you will need to add a host entry on each machine (so they all have the mapping), or add the mapping to your local DNS (on your domain controller or router).
To check your firewall rules and IIS config, try having your test PC go to the IP address instead of the alias (from HOSTS).
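For example, from the other machine, browse straight to the IP:
http://10.42.12.105/
If that works, the firewall and IIS bindings are fine, and adding the same mapping to that machine's HOSTS file will make the name work there too:
10.42.12.105 www.mysite.com
One caveat: if the IIS binding is tied to the host name www.mysite.com, a bare-IP request may not reach the right site; a quick check is curl -H "Host: www.mysite.com" http://10.42.12.105/ from the test machine.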
I am trying to build a local test environment where my local devices point to a different environment than production. The easiest way for me to do this is to point each device at a server that maps all requests bound for the production endpoint to the staging endpoint instead.
How can I point my router to a Node.js instance, and use the Node.js instance as the DNS server?
It sounds like you're basically wanting to set up a (temporary?) alias for a host name on your local network so that all devices on your network use that alias. For example, today I might want to go to http://application.example.com and access the development version; tomorrow I will want to go to the same address and access the testing version.
There are a couple of different ways to do this:
Add a proxy - this will take HTTP requests for one host and route them to a different host. You could do this with a virtual machine, a Docker container, or directly on the development/testing machine. All you need to do is point your application domain at the proxy and configure the proxy to send the requests to the server you want (see the sketch after this list).
Configure your router to serve the test environment IP address - some routers permit you to add host names to the DNS configuration. This would allow you to simply switch the IP address for the test and development environments as needed, while keeping the same host name.
Add a DNS server to your local network - this is basically the same as the item above, except that it gives you much more control (and is more difficult to configure).
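For the proxy option, a minimal nginx sketch (this goes inside the http block of nginx.conf or a site file; the upstream address 192.168.1.60:8080 is a hypothetical staging server):
server {
    listen 80;
    server_name application.example.com;
    location / {
        # forward everything to the staging box, preserving the original Host header
        proxy_pass http://192.168.1.60:8080;
        proxy_set_header Host $host;
    }
}
Swapping the proxy_pass target is then all it takes to flip between development and staging.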
All of these will take some work to set up and will depend very much on your server and network setup.
My Oracle 11.2 database schema has a scheduled job that queries a webpage on my website every few minutes. The database and web servers are two physical Linux machines that sit next to each other, with local IP addresses 192.168.0.11 (database) and 192.168.0.12 (web server). An RJ-45 cross-connect cable directly links the two servers on the same subnet.
If I enter the web address http://xxx.xxx.xxx.xxx/path/to/webpage where xxx.xxx.xxx.xxx is the external IP address, things work fine. Things also work well if I replace xxx.xxx.xxx.xxx with www.mydomain.com.
However, I'm thinking it should be much more efficient if I could rewrite xxx.xxx.xxx.xxx as 192.168.0.12, since that would keep the request on the same subnet instead of having it go out to the internet and come back (thus saving time and resources).
req := UTL_HTTP.BEGIN_REQUEST('http://192.168.0.12/path/to/webpage');
When I try that, I get a 404 error, which makes me think it didn't get to the right webpage.
Can I keep the query on the same subnet by modifying the hosts file or some other way?
My current hosts file already contains an alias for the email server, that is:
192.168.0.12 mail.mydomain.com
If I also include the web address such as
192.168.0.12 mail.mydomain.com www.mydomain.com
would that keep the database on the same subnet when accessing the website? Or will it still leave the subnet to get there? Also, will it confuse things now that I've got two aliases (e.g. one for the database to send emails and one for the database to access webpages)?
I am not sure I would add "192.168.0.12 mail.mydomain.com www.mydomain.com" if that is not the proper IP for the host. That might only make things more confusing.
Assuming that you can ping 192.168.0.12 from the DB server, make sure that your web server is listening on the 192.168.0.12 address as well. It could be bound only to the external IP address, in which case requests arriving on the 192.168.0.12 interface will fail or be answered by a different (default) site, which can show up as an HTTP 404.
On Apache, the httpd.conf file would have
Listen xxx.xxx.xxx.xxx:80
which would make it listen on the external IP only.
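To accept requests on the internal address too, you could either listen on all interfaces:
Listen 80
or add a second directive for the internal address:
Listen 192.168.0.12:80
After restarting Apache, a quick test from the DB server would be:
curl -H "Host: www.mydomain.com" http://192.168.0.12/path/to/webpage
The Host header matters if the site is a name-based virtual host; without it, Apache may serve a default site and return a 404, which would also explain the error from UTL_HTTP. Alternatively, keep the hosts-file alias from the question (192.168.0.12 www.mydomain.com) and request http://www.mydomain.com/path/to/webpage from the database; the request then stays on the subnet and carries the right Host header automatically.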
Please note that if the purpose of your HTTP requests is to test the web server's availability, you may be better off leaving things as they are. The external test is much more comprehensive than a local one could ever be.
I have a PC at my home that I typically access using Remote Desktop. I would like to be able to use a domain name to access this computer, and be able to use the same domain name regardless of if I am at home (on the same network as that PC) or on the road.
I know that I need to use Dynamic DNS in order to keep my IP address up to date. I have that working now.
I also have port forwarding configured on my router to send traffic on the ports I'm using to that PC's local IP address.
I am able to successfully get to my computer from the outside world using mypc.mysite.com (example url).
However, when I am at home, the mypc.mysite.com domain name needs to resolve to the local IP address, and instead it is getting my "outgoing ip".
I know I can get around this by modifying the hosts file on my PC, but I want it to work on other devices like my tablet. I also don't want to have to switch my hosts file every time I boot depending on where I am.
Does anyone have a suggestion?
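A common approach, assuming your router or some always-on LAN machine can run dnsmasq: answer DNS queries for the name locally so LAN clients get the private address, while outside clients keep resolving the public Dynamic DNS record. With a hypothetical LAN IP of 192.168.1.50 for the PC, the dnsmasq entry would be:
address=/mypc.mysite.com/192.168.1.50
Then hand out the dnsmasq machine as the DNS server via your router's DHCP settings so tablets and other devices pick it up automatically. Some routers expose the same idea directly as a "local DNS" or "host mapping" setting, and some support NAT hairpinning (NAT loopback), which makes the public name work from inside the network without any DNS overrides.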