I've installed Git on Ubuntu 11.04 and cloned a private repository from GitHub. Whenever I push to or pull from the repository, it takes about 30-60 seconds, even if there are no changes in the repository. When using the same repository on Windows 7, push/pull requests take only a few seconds. I can't figure out what is wrong.
I've run ssh -v git@github.com and it hangs right after this line:
debug1: SSH2_MSG_SERVICE_ACCEPT received
That line takes 30-60 seconds to complete, and then the remaining lines finish within a second. Here is the full output of ssh -vvv git@github.com: http://pastebin.com/LdY0EifW
I've already tried changing "GSSAPIAuthentication" to "no" and "UseDNS" to "no" in /etc/ssh/ssh_config. That didn't make any difference.
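For reference, the lines I changed in /etc/ssh/ssh_config look like this:
GSSAPIAuthentication no
UseDNS no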
Any ideas?
It seems like the system might be running out of entropy.
SSH, like any other cryptographic application, needs some truly random numbers to be able to provide security. The Linux kernel normally gathers some randomness (entropy) from the precise timing of various events and makes it available via /dev/random, which ssh reads when it needs to create session keys. On desktops there is usually enough entropy gathered, but if some other application also needs it, you might be running low, and then reading /dev/random will take a long time because it waits for enough entropy to be collected.
=> Please verify, by running strace ssh git@github.com, whether it is actually waiting in a read from /dev/random. If it is, you have this problem.
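A quick way to check both things (writing the trace to a file so the interactive session stays usable):
cat /proc/sys/kernel/random/entropy_avail   # a value near 0 means the entropy pool is starved
strace -e trace=open,read -o ssh-trace.log ssh git@github.com
grep -n random ssh-trace.log                # look for a long pause around a read of /dev/random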
If it's a server hosting any potentially sensitive data, you should probably equip it with a hardware random number generator (e.g. an "entropy key"). You can also try changing the random number generator settings to less secure ones (I believe there are some options that can be set via /proc), but only if the server does not host customer data or any sensitive company data.
Edit: it looks more like a network problem somewhere.
I had a similar problem, although it affected all internet traffic, not just Git push/pull. According to the solutions that worked for many people with the same problem, the cause is either something with IPv6 or something with the network driver:
So try these:
Disable IPv6 (see the sketch after this list). Note: only do this if you know you won't use IPv6, and remember the change so you can re-enable it later if you need it.*
Blacklist your driver. Follow this question for more instructions.
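A rough sketch of both options; the module name r8169 is only an example, so blacklist whatever driver your card actually uses, and keep a note of these changes so you can undo them:
# 1) disable IPv6 (takes effect immediately; add the same keys to /etc/sysctl.conf to persist)
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
# 2) blacklist the problematic network driver
echo "blacklist r8169" | sudo tee -a /etc/modprobe.d/blacklist.conf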
For me, the second case was the solution.
* I'm not sure what they shipped with Ubuntu 11.04, but there appears to be something wrong with DNS lookups when the network is IPv4-only but IPv6 is enabled, which makes them take a long time.
2.5 months ago, I was running a website on a Linux server to do a user study on 3 variations of a tool. All 3 variations ran on the same website. While I was conducting my user study, the website (i.e., the process hosting the website) crashed. In my sleep-deprived state, I unfortunately did not record when the crash happened. However, I now need to know a) when the crash happened, and b) for how long the website was down until I brought it back up. I only have a rough timeframe for when the crash happened and how long it was down, but I need to pinpoint this information as precisely as possible to do some time-on-task analyses with my user study data.
The server runs Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-165-generic x86_64) and has been minimally set up to run our website. As such, it is unlikely that any utilities aside from those that came with the OS have been installed. Similarly, no additional setup has likely been done. For example, I tried looking at the history of commands used, in hopes that HISTTIMEFORMAT had previously been set so that I could see timestamps. This ended up not being the case; while I can now see timestamps for commands, setting HISTTIMEFORMAT is not retroactive, meaning I can't get accurate timestamps for the commands I ran 2.5 months ago. That all being said, if you have an idea that you think might work, I'm willing to try it (as long as it doesn't break our server)!
It is also worth mentioning that I currently do not know whether it's possible to get a remote desktop or anything of the like; I've just been SSHing in and using the terminal to interact with the server.
I've been bouncing ideas off friends and colleagues, and we all feel that there must be SOMETHING we could use to pinpoint when the server went down (e.g., network activity logs showing spikes around the time the user study began as well as when the website was revived, a log of previous/no-longer-running processes, etc.). Unfortunately, none of us knows enough about Linux logs or commands to really dig deep into this very specific issue.
In summary:
I need a timestamp for either when the website crashed or when it was revived. It would be nice to have both (or to otherwise determine how long the website was down), but this is not completely necessary
I'm guessing only a "native" Linux command will be useful, since nothing new/special has been installed on our server. Otherwise, any additional command/tool/utility will have to work retroactively.
It may or may not be possible to get a remote desktop working with the server (e.g., to use some tool that has a GUI you interact with to help get some information)
My colleagues and I have that sense of "there must be SOMETHING we could use" among the various logs or system information, such as network activity, process start times, etc., but none of us knows enough about Linux to dig deep without some help
Any ideas for what I can try to help figure out at least when the website crashed (if not also for how long it was down)?
A friend of mine pointed me to the journalctl command, which keeps timestamped system logs independently of HISTTIMEFORMAT; for me, they went as far back as October 7. They contained enough information to determine both when my Node.js server initially went down and when I revived it.
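For anyone digging through something similar, the kinds of queries that helped look roughly like this (the unit name myapp.service and the dates are placeholders for your actual service and timeframe):
journalctl --list-boots                                # check for unexpected reboots in the window
journalctl --since "2019-10-07" --until "2019-10-10"   # all system logs for the rough timeframe
journalctl -u myapp.service --since "2019-10-07"       # if the site ran as a systemd unit
journalctl -k | grep -i "killed process"               # OOM-killer entries often explain a crash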
I have a pretty strange problem with collectd. I'm not new to collectd; I used it for a long time on CentOS-based boxes, but now we have Ubuntu 12.04 LTS boxes, and I have a really strange issue.
So, I'm using version 5.2 on Ubuntu 12.04 LTS. Two boxes residing on Rackspace (maybe important, but I'm not sure). The network plugin is configured using two local IPs, without any firewall in between and without any security (just to try to set up a simple client-server scenario).
On both machines collectd writes to its configured folders as it should, but on the server machine it doesn't write the data received from the client.
I troubleshot with tcpdump, and I can clearly see UDP traffic with collectd data, including the hostname and plugin names from my client machine, arriving at the server, but it is never flushed to the appropriate folder (as configured in collectd). I'm also running everything as root, to rule out permission problems.
Does anyone have any idea or similar experience with this? Or maybe an idea of what I could do to troubleshoot it, besides crawling the internet (I think I've clicked every sensible link Google gave me in the last two days) and checking the network layer (which looks fine)?
And just a small note: exactly the same thing happened with the official 4.10.2 version from Ubuntu's repo. After trying to troubleshoot that for hours, I moved on to upgrading to version 5.
I'd suggest trying out the quite generic troubleshooting procedure based on the csv and logfile plugins, as described in this answer. As everything seems to be fine locally, follow this procedure on the server, activating only the network plugin (in addition to logfile, csv and possibly rrdtool).
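A minimal server-side sketch of that setup, assuming collectd 5.x on Ubuntu with default paths (the listen address 10.0.0.1 is a placeholder for your server's local IP):
cat >> /etc/collectd/collectd.conf <<'EOF'
LoadPlugin logfile
LoadPlugin csv
LoadPlugin network
<Plugin logfile>
  LogLevel debug
  File "/var/log/collectd.log"
  Timestamp true
</Plugin>
<Plugin csv>
  DataDir "/var/lib/collectd/csv"
</Plugin>
<Plugin network>
  Listen "10.0.0.1" "25826"
</Plugin>
EOF
service collectd restart
tail -f /var/log/collectd.log   # watch for decode or cache errors while the client is sending
It may also be worth checking that the client and server clocks are in sync, since collectd silently drops incoming values whose timestamps it considers stale.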
So, after finding no way to fix this, I upgraded Ubuntu to 12.04.2 LTS (3.2.0-24-virtual) and it just started working fine, without any further intervention.
We have a problem with our Qt-based production server for our business application. As the total number of SSL connections increases over time, some clients do not manage to connect at all.
QSslSocket::waitForEncrypted() starts to fail with no QSslError, regardless of what timeout is set. There are more than ~100 active connections when this problem starts to kick in.
So there are ~170 connections, twice as many threads, and "lsof" shows a little more than 1000 open files (we had to increase the file "ulimit" for that...).
It does not look like a client problem, since the IPs that fail and reconnect change over time (some "leap in" successfully, but then others don't).
As mentioned, this happens on Ubuntu Server (Zentyal 10.04 and "vanilla" 9.10), but does NOT happen on Ubuntu Desktop 9.10.
Everything runs inside VMware ESX 4.1; the systems were tested with the same resources attached. System load stays below 1.0. The daemon runs with root permissions.
It looks like something to do with "server"/"desktop" kernel or other configuration differences, but I can't tell what exactly would make the SSL handshake fail... only in "server editions"...
We are using Qt 4.5.3 compiled by ourselves.
EDIT: it turns out it's the same on any Linux I tried. It feels like some kind of per-process socket limit, which is about 1016 minus other_opened_files. I'll try to create a new question about that.
EDIT 2: It's a select() and FD_SETSIZE limit problem...
The problem is that Qt uses select(), which is limited by the FD_SETSIZE macro to a maximum number of selected sockets/files. I had to change the FD_SETSIZE value inside /usr/include/bits/typesizes.h before compiling libQtNetwork and libQtCore.
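For anyone hitting the same wall, a rough way to see the limits involved (the binary name myserver is a placeholder):
grep "__FD_SETSIZE" /usr/include/bits/typesizes.h   # glibc default is 1024
ulimit -n                                           # per-process open-file limit
ls /proc/$(pidof myserver)/fd | wc -l               # descriptors the daemon currently holds
Raising FD_SETSIZE only moves the ceiling; a select()-free event dispatcher (e.g. one based on poll or epoll) is the more durable fix.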
We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
3 simple steps
Back up configurations / source files from the HTTP server and the SQL tables
Reinstall operating system
Follow standard hardening steps on fresh OS
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file, which is usually in /etc/ssh/:
Disable Root Logins
PermitRootLogin no
Change the SSH port from the default port 22
Port 9222
Manually specify which accounts can log in
AllowUsers Andrew Jane Doe
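After editing /etc/ssh/sshd_config, something like this validates and applies the changes (keep your current session open until you've confirmed you can still log in):
sudo sshd -t                 # syntax-check the configuration
sudo service ssh restart     # the service is named "ssh" on Ubuntu; some distros use "sshd"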
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
The best thing you could do would be to reinstall and make sure that when you bring files over from the old system to the new one, it is just data, and not executables that could be nasty. If that is too much, then change all the passwords, watch the logs for a few weeks, and play with iptables to block the former vendor. Also, given that there could be a rootkit at the kernel level, it's probably a good idea to swap the kernel out as well, and to watch traffic coming out of the box for anything that might be heading to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
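As a sketch of the iptables part, something like the following blocks the former vendor's addresses in both directions (203.0.113.0/24 is a placeholder documentation range, not their real network):
iptables -A INPUT -s 203.0.113.0/24 -j DROP    # drop anything coming from them
iptables -A OUTPUT -d 203.0.113.0/24 -j DROP   # and anything on the box trying to reach them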
Side note: this isn't really programming-related, so it probably shouldn't be on this site.
I have Apache running on a public-facing Debian server, and am a bit worried about the security of the installation. This is a machine that hosts several free-time hobby projects, so none of us who use the machine really have the time to constantly watch for upstream patches, stay aware of security issues, etc. But I would like to keep the bad guys out, or if they get in, keep them in a sandbox.
So what's the best, easy-to-set-up, easy-to-maintain solution here? Is it easy to set up a user-mode Linux sandbox on Debian? Or maybe a chroot jail? I'd like to have easy access to files inside the sandbox from the outside. This is one of those times when it becomes very clear to me that I'm a programmer, not a sysadmin. Any help would be much appreciated!
Chroot jails can be really insecure when you are running a complete sandbox environment. Attackers have complete access to kernel functionality and for example may mount drives to access the "host" system.
I would suggest that you use linux-vserver. You can think of linux-vserver as an improved chroot jail with a complete Debian installation inside. It is really fast since it runs within a single kernel, and all code is executed natively.
I personally use linux-vserver to separate all my services, and the performance differences are barely noticeable.
Have a look at the linux-vserver wiki for installation instructions.
regards, Dennis
I second what xardias says, but recommend OpenVZ instead.
It's similar to Linux-Vserver, so you might want to compare those two when going this route.
I've set up a web server with an HTTP proxy (nginx) in front, which delegates traffic to different OpenVZ containers (based on hostname or requested path). Inside each container you can set up Apache or any other web server (e.g. nginx, lighttpd, ...).
This way you don't have one Apache for everything, but can create a container for any subset of services (e.g. one per project).
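A rough sketch of the hostname-based delegation on the proxy host (the container IP 192.168.0.101 and the hostname are placeholders for whatever your containers actually use):
cat > /etc/nginx/sites-available/projecta <<'EOF'
server {
    listen 80;
    server_name projecta.example.com;
    location / {
        proxy_pass http://192.168.0.101;        # the OpenVZ container running this project's web server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
ln -s /etc/nginx/sites-available/projecta /etc/nginx/sites-enabled/
nginx -t && service nginx reload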
OpenVZ containers can quite easily be updated all together, for example: for i in $(vzlist -H -o ctid); do vzctl exec $i apt-get upgrade; done
The files of the different containers are stored on the hardware node, so you can quite easily access them by SFTPing into the hardware node.
Apart from that, you could add a public IP address to some of your containers, install SSH there, and then access those containers directly.
I've even heard of SSH proxies, so the extra public IP address might be unnecessary even in that case.
You could always set it up inside a virtual machine and keep an image of it, so you can re-roll it if need be. That way the server is abstracted from your actual computer, and any viruses and the like are contained inside the virtual machine. As I said before, if you keep an image as a backup, you can restore to your previous state quite easily.
To make sure it is said: chroot jails are rarely a good idea. Despite the intention, they are very easy to break out of; in fact, I have seen it done by users accidentally!
No offense, but if you don't have time to watch for security patches, and stay aware of security issues, you should be concerned, no matter what your setup. On the other hand, the mere fact that you're thinking about these issues sets you apart from the other 99.9% of owners of such machines. You're on the right path!
I find it astonishing that nobody has mentioned mod_chroot and suEXEC, which are the basic things you should start with and, most likely, the only things you need.
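On Debian that boils down to something like the following; the package and module names are from memory, so double-check them against your release (mod_chroot in particular shipped as a separate package on older Debian versions):
apt-get install apache2-suexec libapache2-mod-chroot
a2enmod suexec
a2enmod mod_chroot
service apache2 restart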
You should use SELinux. I don't know how well it's supported on Debian; if it's not, just install CentOS 5.2 with SELinux enabled in a VM. It shouldn't be too much work, and it's much, much safer than any amateur chrooting, which is not as safe as most people believe.
SELinux has a reputation for being difficult to administer, but if you're just running a web server, that shouldn't be an issue. You might just have to set a few booleans with "setsebool" to let httpd connect to the DB, but that's about it.
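For example, the database boolean mentioned above looks something like this on a CentOS 5-era policy (check getsebool -a | grep httpd for the exact names on your system):
setsebool -P httpd_can_network_connect_db 1   # allow httpd to make outbound connections to a database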
While all of the above are good suggestions, I also suggest adding iptables rules to disallow unexpected outgoing network connections. Since the first thing most automated web exploits do is download the rest of their payload, preventing the network connection can slow the attacker down.
Some rules similar to these can be used (Beware, your webserver may need access to other protocols):
iptables --append OUTPUT -m owner --uid-owner apache -m state --state ESTABLISHED,RELATED --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --protocol udp --destination-port 53 --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --jump REJECT
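Note that on Debian the Apache worker user is usually www-data rather than apache, so adjust --uid-owner accordingly. These rules also don't survive a reboot on their own; one simple way to persist them:
iptables-save > /etc/iptables.rules        # dump the current ruleset
iptables-restore < /etc/iptables.rules     # restore it, e.g. from /etc/rc.local or an if-up script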
If you're using Debian, debootstrap is your friend, coupled with QEMU, Xen, OpenVZ, lguest or a plethora of others.
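For reference, a minimal debootstrap run looks roughly like this (the suite, target directory and mirror are placeholders; use whatever release and path you actually want):
debootstrap --arch amd64 stable /srv/chroot/stable http://ftp.debian.org/debian
chroot /srv/chroot/stable /bin/bash   # then continue the setup inside the chroot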
Make a virtual machine. Try something like VMware or QEMU.
What problem are you really trying to solve? If you care about what's on that server, you need to prevent intruders from getting into it. If you care about what intruders would do with your server, you need to restrict the capabilities of the server itself.
Neither of these problems can be solved with virtualization without severely crippling the server itself. I think the real answer to your problem is this:
Run an OS that provides you with an easy mechanism for OS updates.
Use the vendor-supplied software.
Back up everything often.