I have a large list of tor servers, maybe 2,000+ servers that I would like to ban from registering accounts on my site. Is it viable to block the entire list I have in an .htaccess file or will this cause the server to slow down in the same way having thousands of hosts in iptables will?
Is there a better alternative? I already have a CAPTCHA, but bots aren't the problem; the problem is users using Tor to circumvent bans.
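One commonly suggested middle ground is ipset: a single iptables rule matches a hashed set, so lookup cost stays flat no matter how long the list gets. A minimal sketch, assuming the list is one IP per line in tor-exits.txt (a hypothetical filename):

ipset create tor_exits hash:ip
while read -r ip; do ipset add tor_exits "$ip"; done < tor-exits.txt
# One rule matches the whole set; per-packet lookup cost stays flat.
iptables -A INPUT -m set --match-set tor_exits src -j DROP

Note this blocks those IPs from the whole server, not just account registration.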
I currently use Apache2 to host multiple websites for some friends, and as I've been experimenting with Node.js, I was wondering whether it's possible to host these sites with Node.js instead.
I wanted to have a folder structure like the following:
App
--> server
--> websites
    --> site1 (example1.com)
    --> site2 (example2.com)
With more people asking me to host their sites, I need to be able to create a new site quickly, without a restart of the server affecting the other hosted sites.
Currently I use a bash script to create the folder structure for Apache, add a new virtual host to the Apache conf, and finally reload Apache.
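Roughly along these lines (the paths and vhost template are illustrative, not my exact script):

#!/bin/bash
# Sketch: create a docroot, write a minimal vhost, reload Apache.
set -e
DOMAIN="$1"
DOCROOT="/var/www/$DOMAIN/public_html"
mkdir -p "$DOCROOT"
cat > "/etc/apache2/sites-available/$DOMAIN.conf" <<EOF
<VirtualHost *:80>
    ServerName $DOMAIN
    DocumentRoot $DOCROOT
</VirtualHost>
EOF
a2ensite "$DOMAIN.conf"
systemctl reload apache2   # reload, not restart: existing sites stay up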
So my main question is: should I even be looking at Node.js for this, or stick with Apache?
Any opinions, examples or tutorials would be great.
You're probably better off staying with Apache (or possibly switching to nginx). In fact, best practice for production Node.js servers is typically to run them behind Apache/nginx through a reverse proxy. A few reasons for that:
You have to run Node.js as root to give it access to ports 80/443 (generally a bad idea).
You'd be very hard-pressed (and in for a lot of trial and error) to match the security, performance, and stability of Apache/nginx.
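A minimal sketch of that reverse-proxy setup in nginx, assuming a Node.js app listening as an unprivileged user on port 3000 (the port is illustrative):

server {
    listen 80;
    server_name example1.com;
    location / {
        # nginx owns the privileged port; node stays unprivileged
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Each hosted site then becomes one more server block pointing at its own app port, and reloading nginx doesn't touch the running Node processes.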
For context: I'm a student and I have to do a project with some other people in my class. My role is to prepare a web server that each of them can use and access from anywhere. I plan to host everything on a dedicated server I already have, to avoid additional cost, and to give each person a subdomain mapped to a VirtualHost. They will be able to send files to the server over SFTP (OpenSSH); each person will get an account, chrooted to their virtual host's directory.
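The chroot part would look roughly like this in /etc/ssh/sshd_config ("sftpusers" is a placeholder group name I'd assign the accounts to):

Match Group sftpusers
    ChrootDirectory /var/www/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

(sshd requires the chroot target to be owned by root and not writable by the group or others, so uploads would go into a subdirectory of it.)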
My main problem: will this be secure? I mean, if one of the users sets an easy password or does something risky, could someone access the other users' virtual hosts, or even the host machine itself? I have already thought about .htaccess files, and they will be deactivated. Is there another way to get out of an Apache virtual host?
Things to note: they will have Apache, PHP, and access to a MySQL (or maybe MariaDB, undecided for now) database, so they may upload some old, insecure code. Some of these users are not well versed in cybersecurity.
The server runs Ubuntu 16.04 LTS.
Thanks for the advice.
If you limit their access to only their own home directory, that's a good start.
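For example (the username is illustrative):

chown alice:alice /home/alice
chmod 700 /home/alice   # no other account can read or enter it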
Another good layer of security would be 2FA; check out Duo Mobile, which you can hook into SSH logins. (More details would help here, e.g. what options do they have to log in to the server?)
If, as you mentioned, the users are not very educated in cybersecurity, it will be difficult for them to escape the virtual host they have access to.
That said, I'd need more details, such as whether each virtual host gets a separate database or they all talk to a central one. Also, as a paranoid measure, consider where the server is hosted. There are lots of variables that can't be pinned down from what you described, but it is best to keep the server on its own network, with nothing critical in the same subnet, just in case.
These days I am facing a weird problem with my WordPress websites on a Linux shared host.
I had six WordPress websites on my Linux shared hosting service. After two of them were moved to a new server, I tried to clean up some of the mess by deleting cached, temp, and log files from various folders such as .trash, .cache, tmp, and so on (I don't remember exactly which files I deleted from which folders).
Since this cleanup, I can still see the main page of each website, but every /wp-admin is out of reach, even on a freshly installed WordPress.
When I try to access wp-admin I get the following error,
and after that I can't see the main pages for a few hours! It seems that cPanel is blocking my IP for a while, because domain.com/cpanel also stops working after I try domain.com/wp-admin.
Unfortunately, the hosting provider, Mesrahosting, has terrible support and is not replying to my tickets or WhatsApp messages.
Any ideas on how to solve this problem would be appreciated.
First of all, the tmp folder should exist in your cPanel account. If you deleted it, you have to recreate it at /home/cpaneluser/tmp, and it needs cpaneluser:cpaneluser ownership and 755 permissions. That is the first step.

Since you are unable to access cPanel, most probably your IP got blocked by the server's firewall or by cPHulk. In most cases the block is just temporary. If you keep trying, then depending on the firewall configuration and the cPHulk settings (it comes with WHM), your blocking time might increase gradually. Try accessing cPanel from another IP address (use a proxy or VPN service) and see if that works. When you are able to access cPanel, be sure to recreate the tmp folder and then try again. If your IP address is permanently blocked, your only chance to unblock it is to ask your host's tech support team.
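If the account has shell access, that first step boils down to the following, where cpaneluser stands for your actual account name:

mkdir -p /home/cpaneluser/tmp   # created as the account user, so ownership is already correct
chmod 755 /home/cpaneluser/tmp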
So if I were you, I would start by trying to access cPanel from another IP address.
Right now we have a site that is being flooded, and it is very slow. Normally it works well, but someone decided to flood it today. We have other sites on the server and they are not directly affected; we have limited the amount of CPU that the flooded site may use.
We really need a solution to prevent the flooding of sites in cases like this.
I have read this page:
http://www.acunetix.com/blog/articles/8-tips-secure-iis-installation/
It talks about temporarily denying IP addresses, but it does not say for how long the IPs are denied access.
In addition, I would rather have the IPs auto-blocked in IIS, or even better in the Windows firewall. Is this possible?
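From what I can tell, the feature that page refers to is the Dynamic IP Restrictions module that ships with IIS 8.x; its web.config section looks roughly like this (the thresholds are illustrative, not recommendations):

<system.webServer>
  <security>
    <dynamicIpSecurity denyAction="Forbidden">
      <denyByConcurrentRequests enabled="true" maxConcurrentRequests="20" />
      <denyByRequestRate enabled="true" maxRequests="30" requestIntervalInMilliseconds="1000" />
    </dynamicIpSecurity>
  </security>
</system.webServer>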
Can someone help so the site can start behaving normally again? :-)
We are using IIS version 8.5 :-)
I have a slight problem; first, a bit of back story. Recently I've been trying out Univention, a Linux distribution whose goal is to be able to replace Microsoft Active Directory.
I tested it locally and, after a few minor issues, all went reasonably well. I then decided to test it remotely, as the company wants to allow remote users access, so I used myhyve.com to host it; it has now been set up successfully and works reasonably well.
However, my main problem is DNS-based: the only way Windows will recognize the domain is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this makes everything work, it is not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone has done something similar, run into this problem before, and knows a workaround. I want to avoid setting up a VPN if possible.

After initially registering the computer on the domain, I am able to remove the DNS server address and keep things running with just a couple of amendments to the HOSTS file, but this still sometimes leads to issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be gratefully received.
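The amendments are plain name-to-address mappings along these lines, with the IP and names purely illustrative:

10.0.0.5    ucs-master.example.intranet    ucs-master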
Michael
For the HOSTS entries, the most likely issue is that there are several service (SRV) records a computer in the domain needs. These cannot be provided via the HOSTS file, which only maps hostnames to addresses, and you'll definitely have authentication issues if they are missing. To see the records your domain is using, run the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
For the slow resolution of DNS records, there are several points where you could start looking. My first test would be whether you are using a forwarder for external DNS requests, and whether that forwarder has decent speed. To check whether you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2, or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's public DNS (IPv4) in this example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check, query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes long, there is nothing the UCS server can do; you'll simply have to set a different forwarder with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages from the named daemon in the syslog file. These typically appear when software was removed manually or the firewall configuration was changed.
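A quick way to look, assuming the default Debian-style log location on UCS:

grep named /var/log/syslog | tail -n 50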
Kevin
Sponsored post, as I work for Univention North America, Inc.