Vulnerability Scans Against Newly Launched EC2 - security

I launched an EC2 instance a few days ago; it's launched from an ECS service. It's only being used by me and a couple of others for testing purposes. It hosts an API that an iOS app connects to. Almost immediately, and continuing since, I started seeing vulnerability scans against it in the logs, similar to the one below.
2020-07-14T08:27:37.031+01:00
[ INFO ] ERROR From: XXX.XXX.XXX.XXX:XXXXX, Description: GET / HTTP/1.1
Host: X.X.XXX.XX
User-Agent: Mozilla/5.0 zgrab/0.x
Accept: */*
Accept-Encoding: gzip
The scans are against the IP rather than the DNS name, and a series of scans runs every few hours. This is the first time I've run an EC2 instance for any period of time; I've always used Heroku before now, which either hid these things from me or never encountered them. Is this just someone scanning entire IP address ranges and finding my service, or do I have a leak somewhere that's alerting them to the launch of the service?
Thanks in advance.

The internet is being scanned by crawlers and scanners all the time. It's hard to tell whether the purpose is malicious or not. The one scanning you is the zgrab tool, which you can find here: https://zmap.io/.
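Since the instance is only being used by a couple of people for testing, one common mitigation is to restrict its security group to known source addresses instead of 0.0.0.0/0, so the scanners never reach the API. A minimal sketch using the AWS CLI (the group ID, port and CIDR below are placeholders, not taken from your setup):
# Allow API traffic only from a known address range (values are examples)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 203.0.113.0/24
# Remove any open-to-the-world rule for the same port
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 0.0.0.0/0
The scans will still arrive at the address, but the connections are dropped at the security group before they reach your service.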

Related

Is there a difference between ab testing on localhost and hostname?

I test my website using ab as ab -n 10000 -c 1000 http://example.com/path and I get about 160 requests/second. But when I test it as ab -n 10000 -c 1000 http://localhost/path the result is totally different: about 1500 requests/second.
Why?
Normally you should not run the load generator (ab or any other tool) on the same host where the application under test lives, as load testing tools are themselves very resource-intensive and you may run into a situation where the application under test and the load generator are competing for the same CPU, RAM, network, disk, swap, etc.
So I would recommend running ab from another host on your intranet; this way you will get cleaner results without the aforementioned mutual interference. Remember to monitor baseline OS health metrics using vmstat, iostat, top, sar, etc. on both the application-under-test and load-generator sides - it will give you a clearer picture of what's going on and what the impact of the generated load is.
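For example, a simple way to keep an eye on those baseline metrics while the test runs is to leave a sampling loop going on each host (the 5-second interval is an arbitrary choice):
# Sample CPU, memory, swap and context switches every 5 seconds while ab runs
vmstat 5
# In another terminal: per-device disk utilisation, also every 5 seconds
iostat -x 5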
You may also want to try out a more advanced tool, as ab has quite limited load testing capabilities. Check out the Open Source Load Testing Tools: Which One Should You Use? article for more information on the most prominent free and open source load testing solutions (all the listed tools are cross-platform, so you will be able to run them on Linux).
From what I understand, you are testing the same website in 2 different configurations:
http://example.com/path, which is testing the remote website from your local computer,
http://localhost/path, which tests a local copy of the website on your local machine, or the website tested directly on the machine where it is hosted.
Testing your remote website involves the network connection between your computer and the remote server. When testing locally, all traffic goes through the loopback network interface, which is probably several orders of magnitude faster than your DSL internet connection.
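A quick way to see that difference is to compare round-trip times to the two targets (example.com stands in for your real hostname; the exact numbers will vary):
ping -c 5 example.com   # typically tens of milliseconds over a DSL line
ping -c 5 localhost     # typically well under a millisecond over loopback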

Maintaining DNS resolution for connecting clients when using fresh app instances with fresh IPs

I have an Ansible playbook for spinning up and building a brand new GNU/Linux box and installing vsftpd.
I have a client who needs to send a nightly file over SFTP. I have instructed them to send it to ftp.example.com.
I need to be able to very quickly run the playbook against any infrastructure provider (such as DigitalOcean, AWS, Rackspace, etc.) and, without any change on the client end, still receive the nightly file upload even if (as will be the case) the IP of the server has changed. So, one night the server may be on a DigitalOcean box in New York, the next on an AWS box in Ireland.
Now, obviously I could use a DNS name-server provider who has a good API to code against and reset the A-record as a stage of the playbook run. However, this will likely mean that until the client's DNS cache is flushed they will still be seeing ftp.example.com as the previous server.
So, how can I guarantee that this will work without any interaction on the part of the client?
Many thanks
In terms of DNS, you can't. Even if you set a low TTL (time to live/expire) on the DNS records, many DNS services cache short TTLs for up to 72 hours.
My recommendation is not to have an infrastructure that requires constant changes in IP.
What may be a better solution is to use a distributed service like BitTorrentSync.
You could also host your own BIND server and ask your client to point his DNS at your BIND server, avoiding any third party DNS.
The real solution would be to stick with keeping a persistent instance. If money is an issue, you can suspend your instance in Amazon AWS, saving on compute.
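For reference, if you do go down the DNS-update route mentioned in the question, resetting the A record can be scripted as a stage of the playbook run. A minimal sketch using the AWS CLI against Route 53 (this assumes the zone is hosted in Route 53; the hosted zone ID, TTL and IP are placeholders):
# Upsert the A record for ftp.example.com with a short TTL (values are examples)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"ftp.example.com","Type":"A","TTL":60,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'
As noted above, though, a short TTL is only a hint: intermediate resolvers may keep serving the old address for hours.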

Windows Active Directory Domain setup remotely through univention using samba4

I have a slight problem; first, a bit of back story. Recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it, and it's now been set up successfully and works reasonably well.
However, my main problem is DNS-based: when trying to connect to the domain, the only way Windows will recognize it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone had any ideas, or has done something similar, encountered this problem before and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain I am able to remove the DNS server address and just use a couple of amendments to the HOSTS file to keep it running, but this still leads to issues connecting to the domain controller sometimes and is not ideal. Any ideas and suggestions would be greatly received.
Michael
For the HOSTS entries, the most likely issue is that there are several service records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
For the slow resolution of DNS records there are several points where you could start looking. My first test would be whether or not you are using a forwarder for external DNS requests, and whether the forwarder responds reasonably quickly. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2 or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's public IPv4 resolver for the example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes a long time, then there is nothing the UCS server can do; you'll simply have to choose a different forwarder via the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages for the named daemon in the syslog file. Normally these appear when you have tried to manually remove software or if the firewall configuration got changed.
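For example, on a UCS/Debian-style system the recent named messages can be pulled out of syslog like this (log path as on a default install):
# Show the most recent BIND (named) messages from syslog
grep named /var/log/syslog | tail -n 20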
Kevin
Sponsored post, as I work for Univention North America, Inc.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPSes and the relatively more complicated setup. I have a VPS set up, and once or twice a day the site fails to load for about 10 minutes. Then when it comes back online it's fine after that. Upon logging on to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere between them? On Daily.co.uk, there are two nameservers set: ns0.etc*** and ns1.etc***. I did a tracert in the Windows command prompt; this traced down to the server, with two timeouts.
I also tried a check on http://dnscheck.pingdom.com/ while the site was slow and this came back fine, except for this: "Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems."
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic check for the issue you are experiencing is a DNS lookup with dig.
On your Windows system this check is not available by default, but dig can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:\> dig <your domain> +trace
This will show you how DNS resolution happens from your location to the requested endpoint. Chances are, the warning you received is correct: most DNS setups assign several name servers to your domain registration so that the delegated name servers can be round-robined in the event that one becomes unresponsive.
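You can also confirm how many name servers the zone currently delegates to (example.com stands in for your own domain):
dig +short NS example.com   # each line of output is one delegated name server; fewer than two matches the Pingdom warning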
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.

Random DNS Client Issue with BIND9/Windows Server 2003 DNS

Within our office, we have a local server running DNS for internal "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, only the hosts configured with those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple of weeks without issue on one machine, then suddenly stop working, or it'll happen on another 15 times per day. It's completely random across all workstations.
When troubleshooting, I have opened a command prompt and issued various nslookup commands for some of these hosts, and they resolve. However, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers, email clients, etc.
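That claim is roughly right: nslookup queries the DNS server directly, while most applications go through the Windows resolver and its cache, so the two can disagree. A quick way to compare the two paths is something like the following (the host name is just an example):
ping -n 1 fileserver.internal   # resolves via the Windows DNS Client cache, like ordinary applications
nslookup fileserver.internal    # queries the DNS server directly, bypassing that cache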
The only solution thus far is manually restarting the Windows DNS Client service on each workstation when this happens. Issuing the ipconfig /flushdns command multiple times helps every now and then, but is not successful often enough to even attempt before restarting the DNS Client.
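For reference, the DNS Client restart can be done from an elevated command prompt like this (equivalent to restarting the service in services.msc):
net stop dnscache    # stop the Windows DNS Client service
net start dnscache   # start it again
ipconfig /flushdns   # clear the resolver cache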
I have tried two different DNS servers; BIND9, and Windows Server 2003 R2 DNS, and the behavior is the same.
We have a single Netgear JGS524 switch all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.
In this particular situation, it appears that Windows will randomly start using a secondary name server instead of the primary, even if the primary is available.
My solution: remove the secondary. This is not a great solution, as it will obviously kill name resolution entirely if this single name server goes down, but given that this network is small and name resolution isn't mission-critical (read: it can go down for an hour), this solution is acceptable.
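On the workstations, dropping back to a single statically configured DNS server can be done like this (the adapter name and server IP are placeholders for your own):
netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.1.10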
