A few questions in one. I'm a mobile developer, and as a pet project I've set up a small server (on a Raspberry Pi) that I use as my backend development server.
I think I have set up the server fairly securely and have avoided the common blunders.
The problem is that when it comes to security I'm completely neurotic, not because I have something to hide, but because I don't want to be a victim of my own naivety / stupidity.
Currently I check my Apache2 logs daily to find out what traffic (bar my own) has hit the server. Every day there seem to be four or five hits from random IPs looking for directories that don't exist. Am I correct in assuming there are servers that randomly trawl through IPs searching for known weaknesses in server software?
My main question is: is there a way for me to log every hit to the server in an SQL database? That way I could see whether somebody is really trying to get in by querying the number of hits from that IP, without trawling through the logs manually.
Secondly, does anybody have any more obscure security tips / things I should do on a daily basis?
Thanks for your time!
Edit: Also, are there any good automated penetration-testing tools out there that can tell me whether I have any vulnerabilities?
Am I correct in assuming there are servers that randomly trawl through IPs searching for known weaknesses in server software?
Yes.
My main question is, is there a way for me to log every hit to the server in an SQL database?
You could use mod_log_sql: http://www.outoforder.cc/projects/apache/mod_log_sql/
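If you would rather not add a module to Apache on the Pi, another option is to import the access log you already have into SQLite and query it there. The following is a minimal sketch, assuming the stock log path /var/log/apache2/access.log and the default combined log format; the database file name is made up, so adjust both for your setup.

```python
import re
import sqlite3

# Assumed paths -- adjust for your own setup.
LOG_PATH = "/var/log/apache2/access.log"
DB_PATH = "hits.db"

# Minimal pattern for Apache's "combined" log format:
# ip - - [timestamp] "request" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+'
)

conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS hits ("
    "ip TEXT, ts TEXT, request TEXT, status INTEGER)"
)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.match(line)
        if match:
            conn.execute(
                "INSERT INTO hits (ip, ts, request, status) VALUES (?, ?, ?, ?)",
                (match["ip"], match["ts"], match["request"], int(match["status"])),
            )
conn.commit()

# The "who is hammering me?" question then becomes a one-line query.
for ip, count in conn.execute(
    "SELECT ip, COUNT(*) AS n FROM hits GROUP BY ip ORDER BY n DESC LIMIT 10"
):
    print(ip, count)
```

Run something like this from cron once a day (it is a one-shot import, so rotate or deduplicate as needed); mod_log_sql does the equivalent in real time from inside Apache if you prefer that route.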
Does anybody have any more obscure security tips / things I should do on a daily basis?
You could set up a firewall, use port knocking, expose services only locally and connect via VPN, ...
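For port knocking specifically, the server side is usually handled by a tool such as knockd, which watches for a secret sequence of connection attempts and only then opens the SSH port. The client side is trivial; here is a toy knock client in Python, with an invented host name and an invented knock sequence purely for illustration:

```python
import socket

HOST = "myserver.example.net"        # hypothetical server
KNOCK_SEQUENCE = [7000, 8000, 9000]  # must match the sequence configured server-side

# Each connection attempt is a "knock"; it is expected to fail or time out,
# since the ports are closed -- the firewall only has to see the packets arrive.
for port in KNOCK_SEQUENCE:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)
    try:
        sock.connect((HOST, port))
    except OSError:
        pass
    finally:
        sock.close()

print("Knock sequence sent; the real service port should open briefly for your IP.")
```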
I am thinking of using another "less" important server to store the files that our clients want to upload, and handling the data validation, copying, insertion, etc. at that end.
I would display the whole upload form on our website through an iframe, using HTML, PHP and SQL to build it.
Now I would like to ask your opinion on whether this is a good or bad idea.
As far as I can tell, the pros and cons are:
**Pros:**

- The other server is "less" valuable, meaning that if something malicious were uploaded there it would not be the end of the world
- Since the other server has fewer events/users/functions/data, it would lessen the load on our main website server
- If the less important server goes down, the rest of the functionality on the main server would keep working
- A firewall prevents outside traffic (at least to a certain point)
- Users need to be logged in through the main website
**Cons:**

- It does not have any CMS or plugins, so it might be more vulnerable
- It might attract more malicious traffic
- It makes the upkeep of the main website that much more complicated for future developers
Generally I'm not fond of the idea of letting users upload files, but it is not up to me.
Thanks for your input. I'm looking forward to hearing your opinions.
Servers have file quotas and bandwidth allocations defined for them.
If you move your "less" used files to another server, it will help improve your main server's performance.
There also won't be as many maintenance headaches with the main server if all files are uploaded to the other one.
Conclusion: it is a good idea.
Well, I guess most importantly, you will need a single sign-on (SSO) solution in place between the two web applications. I assume you don't want user A to be able to read or delete user B's files.
SSO between two servers is a lot more complicated than for a single web application, unless this site is only deployed on an intranet with an Active Directory domain controller, in which case you can use Kerberos.
I'm not sure it's worth it just for the advantages you name.
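One lightweight alternative to a full SSO setup is to have the main site, where the user is already logged in, issue a short-lived signed token that the upload server verifies before accepting anything. Below is a minimal sketch of the idea in Python; the shared secret, function names and five-minute lifetime are all illustrative, and the same scheme is easy to reproduce in PHP with hash_hmac on both ends.

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"replace-with-a-long-random-secret"  # known to both servers
TOKEN_LIFETIME = 300  # seconds; illustrative value

def issue_token(user_id):
    """Called by the main site once the user has logged in."""
    expires = str(int(time.time()) + TOKEN_LIFETIME)
    payload = f"{user_id}|{expires}"
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token):
    """Called by the upload server; returns the user id, or None if invalid."""
    try:
        user_id, expires, sig = token.split("|")
    except ValueError:
        return None
    payload = f"{user_id}|{expires}"
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    if time.time() > int(expires):
        return None
    return user_id

# The main site embeds the token in the iframe URL; the upload server checks it
# before associating the uploaded file with that user.
token = issue_token("user-42")
print(verify_token(token))  # "user-42" while the token is still valid
```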
Something I have been curious about for quite some time now.
How exactly do you distribute your web traffic across various servers? And how do you know when to distribute to another server?
For sites like Facebook, there is one point of entry via the domain www.facebook.com, so if server A is running at 90% of its capacity, how does it know to switch to server X, or to use a server closer to your location? How exactly is this achieved?
And when building a website that will have heavy traffic, how do you deal with this? Is this something you consider as a developer?
The more information you can provide, the better.
Thanks.
You probably want to look into load balancing.
If you have specific questions beyond that, they're probably more suitable for Server Fault.
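To make the idea concrete, here is a toy round-robin reverse proxy in Python: one public entry point that hands each request to the next backend in a list. Real deployments use dedicated software or hardware (nginx, HAProxy, cloud load balancers, DNS-based geo-routing) rather than anything like this; the ports and backend addresses below are invented for the sketch.

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented backend addresses; in practice these would be separate machines.
BACKENDS = ["http://127.0.0.1:8001", "http://127.0.0.1:8002"]
next_backend = itertools.cycle(BACKENDS)

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(next_backend)  # pick the next server in rotation
        with urllib.request.urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Every client hits this one address; the proxy spreads the work behind it.
    HTTPServer(("0.0.0.0", 8080), RoundRobinProxy).serve_forever()
```

Health checks (dropping a backend that stops responding) and session affinity are the first things a real load balancer adds on top of this.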
I'm developing a chat application (in VB.Net). It will be a "secure" chat program. All traffic will be encrypted (I also need to find the best approach for this, but that's not the question for now).
Currently the program works. I have a server application and a client application. However, I want to set up the application so that it doesn't need a central server to work.
What approach can I take to decentralize the network?
I think I need to develop the clients in such a way that they also act as servers.
How would a client know which server it needs to connect to, and what happens if a server is down? How would the clients / servers know what other nodes there are in the network without having a central server?
Ideally I don't want the clients to know the IP addresses of the different nodes; however, I don't think this is possible without having a central server.
As stated the application will be written in VB.Net, but I think the language doesn't really matter at this point.
Just want to know the different approaches I can follow.
Look, for example, at the paper on the Kademlia protocol (you can find it here). If you just want a quick overview, look at the Wikipedia page http://en.wikipedia.org/wiki/Kademlia. The Kademlia protocol defines a way of doing node lookups in a network in a decentralized way. It has been successfully applied in the eMule software, so it is tested and known to work.
Applying it to your chat software should cause no serious problems.
You need some known IP address for clients to initially get into a network. Once a client is part of a network, things can be more decentralized, but that first step needs something.
There are basically only two options: either the user provides one (the address of an existing node of the network - essentially how BitTorrent trackers work), or you hard-code in a gateway node (which is effectively a central server).
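To illustrate that bootstrap step, here is a toy sketch in Python: the client contacts the one node it knows about, asks it for the peers it has seen, and from then on can talk to those peers directly without any central coordination. The host name, port and JSON message format are all invented for the example; a real DHT such as Kademlia replaces the flat peer list with a routing table.

```python
import json
import socket

# The one address every client ships with (hypothetical bootstrap node).
BOOTSTRAP = ("bootstrap.example.net", 9000)

def fetch_peers(bootstrap_addr):
    """Ask a known node for the peers it knows about (toy newline-delimited JSON)."""
    with socket.create_connection(bootstrap_addr, timeout=5) as sock:
        sock.sendall(json.dumps({"cmd": "get_peers"}).encode() + b"\n")
        reply = sock.makefile().readline()
    return json.loads(reply)["peers"]  # e.g. [["203.0.113.7", 9000], ...]

def join_network():
    peers = fetch_peers(BOOTSTRAP)
    # From here on the client gossips with these peers directly and learns
    # about further nodes from them -- no central server is involved anymore.
    return peers
```

Each node would run the same kind of listener and answer get_peers requests from newcomers, so any node can serve as a bootstrap point once its address is known.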
You could also look at the uChat program. It's a program from the uTorrent creators, built with serverless chat in mind.
The idea is to connect to a swarm from a magnet link and use it to send and receive messages. As in Amber's answer, you need an access point, be it a server, a known swarm, a manually entered IP, etc.
Here is the uChat announcement: http://blog.bittorrent.com/2011/06/30/uchat-we-just-need-each-other/
I want to check a particular website from various locations. For example, I see the site example.com from the US and it works fine. A colleague in Europe says he cannot see the site (he gets a DNS error).
Is there any way I can check that for myself instead of asking him every time?
This is a bit of self promotion, but I built a tool to do just this that you might find useful, called GeoPeeker.
It remotely accesses a site from servers spread around the world, renders the page with webkit and sends back an image. It will also report the IP address and DNS information of the site as it appears from that location.
There are no ads, and it's very streamlined to serve this one purpose. It's still in development, and feedback is welcome. Here's hoping somebody besides myself finds it useful!
Sometimes a website doesn't work on my PC and I want to know whether it's the website or a problem local to me (e.g. my ISP, my router, etc.).
The simplest way to check a website while avoiding your local network resources (and thus any problems caused by them) is to use a web proxy such as Proxy.org.
Well, DNS should be the same worldwide, shouldn't it? Of course it can take up to a day or so until a new DNS record has propagated around the world. So either something is wrong on your colleague's end, or the DNS record still needs some time to propagate...
I usually use online DNS lookup tools for that, e.g. http://network-tools.com/
It can check your HTTP headers as well. The only thing better would be a proxy located in Europe.
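If you want to see what different resolvers return without leaving your desk, a small script can query several public DNS servers directly. Here is a sketch using the third-party dnspython package (version 2.x, which provides resolve(); older releases use query() instead). The resolver IPs are just well-known public ones; swap in your colleague's ISP resolver if you know its address.

```python
# pip install dnspython
import dns.resolver

RESOLVERS = {
    "Google": "8.8.8.8",
    "OpenDNS": "208.67.222.222",
}

def lookup(hostname):
    """Ask each public resolver for the A records of hostname."""
    for name, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answers = resolver.resolve(hostname, "A")
            result = ", ".join(rr.address for rr in answers)
        except Exception as exc:  # NXDOMAIN, timeout, ...
            result = f"error: {exc}"
        print(f"{name:10s} -> {result}")

lookup("example.com")
```

This only shows how the record looks from a few big public resolvers, so it mainly catches propagation and NXDOMAIN problems; it does not reproduce whatever your colleague's local resolver is doing.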
Besides using multiple proxies or proxy networks, you might want to try PlanetLab (and there are probably other similar institutions around).
The social solution would be to post on some board asking for volunteers to proxy your requests. (They only have to allow a single destination in their proxy config, so the danger of being abused as an open relay is relatively low.) You should prepare credentials that assure your partners that the destination really is your own machine.
DNS info is cached in many places. If you have a server in Europe, you may want to try proxying through it.
It depends on whether the location is detected via different DNS resolution from different places, or via the IP address you are browsing from.
If it's by DNS, you could just modify your hosts file to point at the server used in Europe. Get your friend to ping the address to see if it's different from the one yours resolves to.
To browse from a different IP address:
You can rent a VPS and use PuTTY / SSH to act as a proxy. I use this from time to time to browse from the US via a VPS I rent there.
Having an account on a remote host may or may not be enough. Sadly, my DreamHost account, even though I have SSH access, does not allow proxying.
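The usual way to do this with plain OpenSSH is a dynamic port forward: ssh -D turns the session into a local SOCKS proxy, and anything pointed at it exits from the VPS's IP address. Below is a small Python sketch that starts such a tunnel and fetches a page through it; the VPS host name is hypothetical, and it assumes the requests library is installed with SOCKS support (pip install "requests[socks]"). You can just as well run the ssh command by hand and point your browser at localhost:1080 as a SOCKS proxy instead.

```python
import subprocess
import time

import requests  # pip install "requests[socks]"

VPS = "user@vps.example.net"  # hypothetical European VPS

# Equivalent to running "ssh -D 1080 -N user@vps.example.net" in a terminal:
# a local SOCKS proxy on port 1080 that tunnels everything through the VPS.
tunnel = subprocess.Popen(["ssh", "-D", "1080", "-N", VPS])
time.sleep(3)  # crude wait for the tunnel to come up

proxies = {
    "http": "socks5h://127.0.0.1:1080",   # socks5h: resolve DNS at the far end too
    "https": "socks5h://127.0.0.1:1080",
}
resp = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(resp.status_code, resp.headers.get("Server"))

tunnel.terminate()
```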
The only thing that springs to mind for this is to use a proxy server based in Europe. Either have your colleague set one up [if possible] or find a free proxy. A quick Google search came up with http://www.anonymousinet.com/ as the top result.
Is there a detailed guide which explains how to host a website on your own server on Linux?
I currently have it hosted with one of the commercial web hosts.
Also, the domain is registered with a different vendor.
Thanks
This guide is probably more info than you really asked for, but web server information is in there. It's Gentoo-specific, but you can apply the same information, with minor translations, to any other distro.
I would look into installing Apache.
99% of Linux distributions will have a package for it.
On Ubuntu you can run:
sudo apt-get install apache2
Are you considering hosting a web page locally for the internet? Or is this just for development, etc.?
If it's for an internet server, you will need a stable internet connection with a good upstream.
You may also need a static IP address so you can set up DNS to point to the right place.
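Once the A record is in place, a quick sanity check is to compare what your domain resolves to with the public IP your connection actually has. Here is a small sketch in Python; example.com and the icanhazip.com lookup service are stand-ins, so substitute your own domain and whichever "what is my IP" service you trust.

```python
import socket
import urllib.request

DOMAIN = "example.com"  # put your own domain here

# Your public IP as seen from outside (icanhazip is one of several such services).
public_ip = urllib.request.urlopen("https://icanhazip.com").read().decode().strip()

# What the domain's A record currently resolves to.
dns_ip = socket.gethostbyname(DOMAIN)

print("public IP:", public_ip)
print(DOMAIN, "resolves to:", dns_ip)
print("match" if public_ip == dns_ip else "DNS does not point at this connection (yet)")
```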
While I don't have a URL to a good tutorial in English, I would just warn you that this is not something you should take lightly. Administering a server involves getting your hands dirty in Linux, and dealing with security can be pretty complex depending on your knowledge and requirements.
So if you know nothing about it, you should be very careful, and if the website you host is of any commercial importance, you are probably better off hiring a server admin.
Just to point out: if this is a personal (home) server, as opposed to one in a corporate environment, then it's better not to bother hosting it yourself - you won't necessarily have the bandwidth, and your ISP may not allow it.
As mentioned above, you will also need a static IP address, and you'll need to set up DNS records to point to the correct location, which your domain vendor may or may not help you with.
I think it depends on how familiar you are with Linux. Certainly, many people do this for hobbyist websites.
There are many aspects involved - you should begin with something simple, like getting Apache running and visible to the outside world.