Windows Active Directory domain setup remotely through Univention using Samba4 - DNS

I have a slight problem; here's a bit of the back story. Recently I've been testing Univention, a Linux distribution whose goal is to be able to replace Microsoft Active Directory.
I tested it locally and, after a few minor issues, everything went reasonably well. I then decided to test it remotely, as the company wants to allow remote users to access it, so I used myhyve.com to host it. It is now set up successfully and works reasonably well.
However, my main problem is DNS-based: when trying to connect to the domain, the only way Windows will recognise it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does make everything work, it is not ideal, and DNS lookups on the internet take considerably longer. I was wondering whether anyone has done something similar, run into this problem before, and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain, I am able to remove the DNS server address and keep things running with just a couple of amendments to the HOSTS file, but this still leads to occasional issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be greatly received.
Michael

For the HOSTS entries, the most likely issue is that there are several service (SRV) records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file, but you will definitely have authentication issues if they are missing. To see the records your domain is using, run the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
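If you want to spot-check a few of them by hand, the typical Active Directory SRV records can be queried directly with dig. This is just a rough sketch; mydomain.intranet stands in for your actual domain name, and <UCS-server-IP> is the address of your UCS host, so the query shows what that server actually serves:
dig @<UCS-server-IP> +short -t SRV _ldap._tcp.mydomain.intranet
dig @<UCS-server-IP> +short -t SRV _kerberos._tcp.mydomain.intranet
dig @<UCS-server-IP> +short -t SRV _kerberos._udp.mydomain.intranet
dig @<UCS-server-IP> +short -t SRV _ldap._tcp.dc._msdcs.mydomain.intranet
If any of these come back empty from the DNS server a Windows client is using, the client will have trouble locating the domain controller.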
For the slow resolution of DNS records, there are several points where you could start looking. My first test would be whether or not you are using a forwarder for external DNS requests, and whether that forwarder has decent speed. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2 or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations such as round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I'll use Google's IPv4 resolver for the example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check, try querying the forwarder directly with the following command:
dig univention.com @$(ucr get dns/forwarder1)
If that takes long, there is nothing the UCS server can do; you'll simply have to choose a different forwarder with the ucr command above.
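One way to compare candidate forwarders is the "Query time" line dig prints for each answer. A quick sketch; 8.8.8.8 is the Google resolver from the example above, and 1.1.1.1 is simply another public resolver used here for comparison:
dig univention.com @8.8.8.8 | grep 'Query time'
dig univention.com @1.1.1.1 | grep 'Query time'
Whichever consistently shows the lower query time is the better candidate for dns/forwarder1.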
If neither of the above helps, the next step would be to check whether there are error messages for the named daemon in the syslog. Normally these appear when software has been removed manually or the firewall configuration has been changed.
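On a UCS system that logs to the default location, something like the following should surface recent messages from named (assuming /var/log/syslog; adjust the path if your logging is configured differently):
grep named /var/log/syslog | tail -n 50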
Kevin
Sponsored post, as I work for Univention North America, Inc.

Related

NodeJS TLS/TCP server in need of an external firewall

Problem:
I have an AWS EC2 instance running FreeBSD. In there, I'm running a NodeJS TLS/TCP server. I'd like to create a set of rules (in my NodeJS application) to be able to individually block IP addresses programmatically based on a few logical conditions.
I'd like to run an external (not on the same machine/instance) firewall or load-balancer, that I can control from NodeJS programmatically, such that when certain conditions are given, I can block a specific remote-address(IP) before it reaches the NodeJS instance.
Things I've tried:
I initially looked into nginx as an option, running it on a second instance and placing my NodeJS server behind it, but after skimming through the NGINX Cookbook: Advanced Recipes for High-Performance Load Balancing, I learned that only NGINX Plus (the paid version) allows remote/API control & customization. While I believe that paying $3500/license is not too much (considering all of NGINX Plus' features), I simply cannot afford it at this point in time; in addition, the only features I'd be using (at this point) would be the remote API control and IP address blocking.
My second thought was to go with the AWS/ELB (elastic load balancer) by integrating AWS' SDK into my project. That sounded feasible; unfortunately, after reading a few forum threads and part of their documentation (unless I'm mistaken), it seems the two features I need are not available on the AWS/ELB. AWS seems to offer an entirely different service called WAF that I honestly don't understand very well (both as a service and from a feature standpoint).
I have also (briefly) looked into CloudFlare, as it was recommended in one of the posts here on Stack Overflow, though I can't really tell if their firewall would allow this level of (remote) control.
Question:
What are my options? What would you recommend I do?
I think nginx provides this kind of functionality; please refer to the link.
If you want to block an IP in front of your Node TCP server, you can just edit the nginx config file and deny that IP address.
Frankly speaking, if I were you I would use AWS WAF, but if you don't want to use it, you can simply do it in Node.js.
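If you do go the WAF route, programmatic blocking essentially means keeping an IP set up to date and referencing it from a block rule. A rough sketch with the AWS CLI and WAFv2; the IP set name and id are placeholders for resources you would have created beforehand:
# fetch the current contents plus the lock token required for updates
aws wafv2 get-ip-set --name blocked-ips --scope REGIONAL --id <ip-set-id>
# update-ip-set replaces the whole address list, so include existing entries as well
aws wafv2 update-ip-set --name blocked-ips --scope REGIONAL --id <ip-set-id> \
    --addresses 203.0.113.7/32 --lock-token <lock-token-from-get-call>
The same calls are available in the AWS SDK, which is what you would use from the Node application itself.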
In Node.js you could keep a global variable, say an array or a Set, holding all blocked IP addresses, and on each connection check whether the connecting host's IP is in it. The problem is that when the machine or application restarts, you lose all information about the blocked IPs. As a solution you can set up Redis (a key-value database, though it supports other data types too) and store the blocked IPs there. Since Redis keeps its data in RAM, all interaction with the DB is practically instant, and because Redis also backs up to disk, it reloads from that backup after a machine or Node restart and continues working in RAM with the old data.
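The Redis side of this is only a handful of commands. A rough sketch with redis-cli, where blocked_ips is just an example key name:
redis-cli SADD blocked_ips 203.0.113.7       # add an address to the blocklist
redis-cli SISMEMBER blocked_ips 203.0.113.7  # returns 1 if blocked, 0 otherwise
redis-cli SMEMBERS blocked_ips               # list everything, e.g. to rebuild the in-memory copy on startup
Your Node application would issue the same commands through any Redis client library and keep its in-memory copy in sync.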

maintaining DNS resolution for connecting clients when using fresh app instances with fresh IPs

I have an Ansible playbook for spinning up and building a brand new GNU/Linux box and installing vsftpd.
I have a client who needs to send a nightly file over SFTP. I have instructed them to send it to ftp.example.com.
I need to be able to very quickly run the playbook against any infrastructure provider (such as DigitalOcean, AWS, Rackspace etc.) and, without any change on the client end, still receive the nightly file upload even if (as will be the case) the IP of the server has changed. So, one night the server may be on a DigitalOcean box in New York, the next on an AWS box in Ireland.
Now, obviously I could use a DNS name-server provider with a good API to code against and reset the A record as a stage of the playbook run. However, this will likely mean that until the client's DNS cache is flushed they will still be resolving ftp.example.com to the previous server.
So, how can I guarantee that this will work without any interaction on the part of the client?
Many thanks
In terms of DNS, you can't. Even if you set a low TTL (time to live/expiry) on the DNS records, many DNS services cache short TTLs for up to 72 hours.
My recommendation is not to have an infrastructure that requires constant changes of IP.
What may be a better solution is to use a distributed service like BitTorrentSync.
You could also host your own BIND server and ask your client to point their DNS at it, avoiding any third-party DNS.
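If you go that route, the last step of the playbook can push the new address with a dynamic update. A minimal sketch, assuming the zone example.com allows updates signed with a TSIG key stored in ftp-update.key (the key file and ns1.example.com are placeholders):
nsupdate -k ftp-update.key <<EOF
server ns1.example.com
zone example.com
update delete ftp.example.com. A
update add ftp.example.com. 60 A 203.0.113.45
send
EOF
With a 60-second TTL the window in which the client still resolves the old address is short, although, as noted above, some resolvers ignore very low TTLs.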
The real solution would be to stick with keeping a persistent instance. If money is an issue, you can stop the instance when it is not needed to save on compute in Amazon AWS.

Wide-area service discovery via Bonjour / Avahi

I'm looking into wide-area service discovery, and Bonjour / Avahi seem to be really good.
However, I'm a bit confused about how all this works.
So:
I have a bunch of services running in a cloud.
I have clients which can be located anywhere in the world.
I want the clients to automatically discover the services in the cloud.
I need the clients to be absolutely zero-conf, so they know no IPs, no ports, nothing.
If I understand it correctly, this can be done using the above-mentioned DNS-SD libraries. I have full access to a DNS server, so I suppose the services can register themselves on startup using these libraries, and the data can then spread through DNS servers worldwide.
The clients can obtain the advertised info by querying the DNS records of my domain using the Bonjour / Avahi machinery, right?
All I need to do is link the client with the Bonjour / Avahi libraries and tell it which domain it should use (query).
Is this correct?
Am I missing something here or is it how this works?
Thanks in advance!
Avahi does not currently support publishing to a wide-area server, though it can browse wide-area records. So if you can dynamically update a DNS server somewhere with the appropriate records, Avahi would be able to see them.
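The records involved follow the DNS-SD conventions: PTR records for browsing, plus an SRV and TXT record per service instance. A rough sketch of how a client could inspect what a zone advertises, assuming the services are published under example.com with a service type of _myservice._tcp and an instance called MyInstance (all placeholders):
dig +short -t PTR _services._dns-sd._udp.example.com      # which service types exist
dig +short -t PTR _myservice._tcp.example.com             # which instances of that type
dig +short -t SRV MyInstance._myservice._tcp.example.com  # host and port of one instance
dig +short -t TXT MyInstance._myservice._tcp.example.com  # extra metadata, if any
These are exactly the records your cloud services would need to create on startup, for example via nsupdate, for the clients to find them.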
You do, however, potentially have more problems to solve here, including port mapping/NAT traversal, which Avahi does not address at all.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPSes and their relatively more complicated setup. I have a VPS set up, and once or twice a day the site fails to load for about 10 minutes. When it comes back online it's fine after that. Upon logging on to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere in between? On Daily.co.uk, there are two nameservers set: ns0.etc*** and ns1.etc***. I did a tracert in the Windows command prompt; it traced all the way down to the server, with two timeouts along the way.
I also tried a check on http://dnscheck.pingdom.com/ while the site was slow, and it came back fine except for this: "Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems."
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic for the issue you are experiencing is a dig query.
On a Windows system, dig is not available out of the box, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:\> dig <your domain here> +trace
This will show you how DNS resolution happens from your location to the requested end point. Chances are, the error you received is correct. Most DNS setups assign several name servers to a domain registration so that queries can round-robin across the delegated name servers in the event that one becomes unresponsive.
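To see how many name servers your zone actually delegates to (replace example.com with your own domain), a quick check is:
dig +short -t NS example.com
If only a single record comes back, that lines up with the Pingdom warning about having just one IPv4 name server.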
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.

bind ip to subdomain

I have a Linux client which uses PPPoE to connect to the internet, and every time this client comes online I want to bind its IP address to a subdomain.
DynDNS is not an option due to their TTL.
It looks like I have to set up my own nameserver on my root server to accomplish this, because I cannot create the keys needed to run an nsupdate against a provider's nameserver... am I correct?
If so, is there a good how-to for setting up a BIND server for this specific task?
I haven't ever maintained PPPoE, but if it uses DHCP to provide the IP address to the client, you could push updates from the DHCP server to the DNS server.
Instructions on how to do this on Debian are here: http://www.debian-administration.org/article/Configuring_Dynamic_DNS__DHCP_on_Debian_Stable
No doubt you can adapt these to other distros too. You can find the same software at least on Fedora and Ubuntu; the difference is only in how you install the required packages.
One possibility is to set their machine to register with somebody like DynDNS. They have all the software you need to automatically notify them when the machine comes online or goes offline. This will give it a domain name of something like whatever.homelinux.org (the exact name does not really matter). You then put static CNAME entries in your DNS to point your nice domain names, e.g. southern.company.com, at whatever.homelinux.org.
When they come online the domain will start to resolve, and stop when they go offline, since DynDNS uses a low TTL for this very reason. You can use a large TTL in your own zone file since the CNAMEs will not change.
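Once the CNAME is in place you can verify the chain with a quick lookup (names as in the example above; the final address is whatever DynDNS currently holds for the host):
dig +short southern.company.com
# whatever.homelinux.org.
# 203.0.113.99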
Well, don't you just need to create an A record for this IP on your DNS server?
If your domain is 'google.com' and you wanted your host to be called 'server1', create an A record for 'server1' and point it at your machine's IP.
Unless I am misunderstanding what you are asking for help with.
