Over the past few days I have been doing extensive testing of Subversion with different clients, operating systems, and client and server versions, and have noticed very strange behaviour with Windows clients connecting to Linux servers: they hit the server with excessive CPU usage on the sshd process, while Linux clients do not exhibit this behaviour.
A sample test setup is as follows:
Server: Ubuntu 16.04.3 LTS, OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016, Subversion 1.9.3 (and 1.9.7).
Client: TortoiseSVN 1.9.7.
When checking out large repositories, the sshd process on the Linux server runs at 100% CPU usage. This in effect slows down performance and ultimately the speed at which the checkout runs. Linux clients connecting to the same server do not cause this load.
This happens even when compression is turned off and when the encryption ciphers are changed, and with different versions of Subversion; the behaviour is identical. I'm not sure who to address this issue to, as it happens not only with TortoiseSVN but with SlikSVN as well. Any direction would be appreciated.
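For reference, compression and the cipher can be controlled through the svn+ssh tunnel definition in ~/.subversion/config (a sketch; aes128-ctr is just an example cipher, and any cipher supported by both ends can be substituted):
[tunnels]
ssh = $SVN_SSH ssh -q -o Compression=no -c aes128-ctr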
If you're just looking for a way for a controlled set of users to access your SVN servers, an easy workaround for any Windows 10 users is to have them use SVN from WSL (Windows Subsystem for Linux). In fact, I would consider testing that route to isolate the client from the network stack, etc.
It's worth noting that SVN's default settings may convert line endings, and the SVN server may be converting every line of every file to the Windows default line endings.
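A quick way to check whether line-ending conversion is in play (a sketch, assuming a working copy in the current directory) is to list the svn:eol-style property recursively:
svn propget svn:eol-style -R .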
Likely better answers out there, but those are my initial thoughts.
I have a client running an application of ours on a Raspberry Pi in PA. I want to replicate their system, as closely as possible, to a spare Pi I have here at the office. Mostly I want to do this so that I can know, with near-100% certainty, that new software I put on their system will run without problems. The link between here and there is not bad, and I've read somewhere about using rcp to do a backup like this, but how long that would take is anyone's guess.
The system is a Raspberry Pi2 running "Raspbian", or Debian Wheezy.
This is their current uname -a output:
Linux pa0036 3.18.11-v7+ #781 SMP PREEMPT Tue Apr 21 18:07:59 BST 2015 armv7l GNU/Linux
I guess a secondary question is: can I update their machine without having to reboot? I can reboot that system but keep in mind that I'm in TN and they are in PA, so a hardwired connection is not possible.
If there is a secondary mechanism, where I can build an inventory of their system and then force that installation onto my spare Pi, that is an option too. I'm open to any suggestions, really.
You could install exactly the same Raspbian on both sides. If you keep them updated you should have identical systems. This is what I do. A reboot (on either side) should only be needed if you do an apt-get dist-upgrade and it actually results in the kernel being upgraded. Of course, any running processes will continue to run the old versions of the software until restarted.
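If you want to go the inventory route mentioned in the question, a minimal sketch using dpkg (the hostname pa0036 is taken from the question's uname output and may differ):
ssh pi@pa0036 'dpkg --get-selections' > selections.txt
sudo dpkg --set-selections < selections.txt
sudo apt-get -y dselect-upgrade
This installs the same set of packages locally; it does not copy configuration files, which is where rsync comes in.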
For any non-system files and directories I use rsync. It is efficient, flexible (lots of options) and works nicely over ssh.
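For example, a sketch of mirroring an application directory over ssh (hostname and paths are illustrative):
rsync -avz --delete -e ssh pi@pa0036:/home/pi/app/ /home/pi/app/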
There are two remote machines: one running Red Hat Linux, the other Solaris. Each has a file (let's say /var/log/events.log) that is rotated daily, can be anywhere from 0 bytes to 400 MB in size, and is updated constantly. A third machine running Windows XP should monitor updates to that file; currently this is done with an SSH session opened in PuTTY, running tail -f /var/log/events.log.
There are some restrictions on how it should be done:
I can't use anything except SSH and SCP for remote access.
The solution must NOT require installing, storing or having running permanently anything on the remote servers; it should operate with single connection attempts.
It should have minimal impact on network load, close to that of a remotely executed tail -f.
I've looked up how diffing is usually done and found out about rsync. Unfortunately, the Solaris server doesn't have it installed, and on the Red Hat server I don't have permission to launch it.
Any ideas?
The question is rather old, but it seems that you want to use rsync.
According to Wikipedia:
rsync is a file synchronization and file transfer program for Unix-like systems that minimizes network data transfer by using a form of delta encoding called the rsync algorithm.
So you can synchronize the files by their diff, thus saving a lot of bandwidth and time.
The official rsync page is here.
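For a log file that only grows between runs, something like the following keeps each transfer small (a sketch; --append assumes the file is only ever appended to until it is rotated, and the hostname is illustrative):
rsync -az --append -e ssh user@redhat-host:/var/log/events.log events.log
Run periodically, this transfers roughly only the newly appended data, similar in spirit to a remote tail.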
Does somebody know if the Windows and Linux versions of Git are compatible?
I need to know because I need to share the disk where my local repositories are between Linux and Windows, which runs in VirtualBox on the Linux PC.
I develop on Linux, but I need to use Git on Windows when I work remotely (because of a VPN issue). Another option would be to always use Git from Windows, but I prefer not to start VirtualBox.
Has somebody done this? I suppose it could be a bit risky, or would it be OK to rely on the versions being 100% compatible? I would not like my repository to get corrupted...
Cheers, Henning
Yes, as far as the repository (database) format is concerned, they are compatible. Moreover, such compatibility is required even for "non-native" implementations like the JGit and libgit2 libraries.
I can only see two possible problems:
From time to time Git might change behaviour in a way that is not compatible with some of its past versions (but very rarely, and with bold warnings in the release notes long before the change is made).
So find out what versions you're running (git --version) in both worlds, and if there's a major difference between them (in the X number of a 1.X.Y version), consider reading the changelog of the one with the greater X for possible gotchas.
Potential filesystem issues: these days you can mount an NTFS volume read/write on a recent Linux kernel, and the same goes for Ext2 (but not Ext3, and certainly not Ext4) on Windows, but you might, in theory, accidentally hit some problem with these drivers; of course, they haven't received as much love as their native variants have.
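If corruption is the worry, you can check a repository's integrity from either OS at any time; the check is part of stock Git:
git fsck --full
It verifies the connectivity and validity of the objects in the database and will report anything broken.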
< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple of weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command-prompt needs. This is especially true when I am using Ruby and git. I used msysGit for a while, but I just didn't like that it wasn't extensible like Linux. So, I installed Cygwin and played around with that for a couple of weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This became especially evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced similar issues before, and in my experience the best solution is simply two workstations :).
Apart from that, having Linux running in a virtual environment is way better.
First of all, you will have full Linux capabilities (except 3D acceleration, but you probably don't need that).
You will have the ability to create snapshots and revert to them when things go wrong!
You can start multiple environments using templates, which is very convenient.
The only downside I can think of is the performance impact on the host machine.
If it's a normal workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
1: Cygwin is good for quick hacks and for being able to access host-OS resources (you can run IE from a bash script, for example). For something tightly integrated and some "real" work, go with a VM. It will emulate everything and separate development from the real machine, and this may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (local or as a Windows share; it actually depends). Then you can use that "entry point" to symlink stuff into the VM and do what you need while the real files stay on the real (host) machine.
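For example, a sketch inside a Linux guest with a shared folder named projects (the name is illustrative; with Guest Additions installed, shares are auto-mounted under /media/sf_<name>):
sudo usermod -aG vboxsf $USER        # grant your user access to the auto-mounted share
ln -s /media/sf_projects ~/projects  # symlink it into your home directory
You'll need to log out and back in after the usermod for the group change to take effect.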
SSH into a Linux box. This is what everyone does. Why isn't this the answer?
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use them at the same time. I've never used it, but here:
http://www.colinux.org/
What I think now gets the pros of both options is using Docker: it gives you Cygwin's simplicity and a VM's functionality, with better performance.
Linux in a virtual machine will give you the experience you want more than Cygwin or any mock shell, as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just a command-line version.
At work I have a PC with 8 GB of RAM, and I run 64-bit Ubuntu as the main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key here is more RAM if you want to be able to run VMs.
If you're going to be working with Ruby then get an Ubuntu virtual machine up and running :) I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side of that.
As for virtual machine creation, I prefer VMware Workstation; however, there are free alternatives such as VirtualBox and VMware Server.
I'm using a Linux VM within a Windows 7 environment, as this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7, so it is really great for full local testing before committing or tagging the tested version to the production servers.
As you mentioned, this takes some time to get properly set up and fully configured, so if you only need it for little tricks or tasks, you may as well keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. It's OK for basic usage, but not for taking full advantage of a full Linux environment.
Your choice strongly depends on the purpose of the final server setup. A VM will do the job whatever your need is, but its setup cost is higher, so that time investment needs to be used often enough to pay off.
In the spirit of being helpful, this is a problem I had and solved, so I will answer the question here.
Problem
I have:
An application that has to be installed on Red Hat or SuSE Enterprise.
It has huge system requirements and requires OpenGL.
It is part of a suite of tools that need to operate together on one machine.
This application is used for a time-intensive task in terms of man-hours.
I don't want to sit in the server room working on this application.
So, the question came up... how do I run this application from a remote windows machine?
I'll outline my solution. Feel free to comment on alternatives. This solution should work for simpler environments as well. My case is somewhat extreme.
Solution
I installed two pieces of software:
PuTTY
Xming-mesa (the Mesa part is important)
PuTTY configuration
Connection->Seconds Between Keepalives: 30
Connection->Enable TCP Keepalives: Yes
Connection->SSH->X11->Enable X11 forwarding: Yes
Connection->SSH->X11->X display location: localhost:0:0
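If you prefer a command line over the GUI configuration, PuTTY's companion tool plink can do the same forwarding (a sketch; the hostname is illustrative):
plink -ssh -X user@linuxbox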
Launching
Run Xming, which will simply start a process and put an icon in your system tray.
Launch PuTTY, pointing at your Linux box, with the above configuration.
Run the program.
Hopefully, success!
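To sanity-check the forwarding before launching the real application, a quick sketch (run inside the PuTTY session; glxinfo ships with the Mesa utilities and may need to be installed):
echo $DISPLAY                        # should print something like localhost:10.0
glxinfo | grep -i "opengl renderer"  # shows which GL implementation is being used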
If you want the OpenGL rendering to be performed on your local machine, using a Windows X server such as Xming is a good solution. However, if you want rendering to be done on the remote end, with just images sent to the local machine, you want a specialized VNC system that can handle remote OpenGL rendering, such as VirtualGL.
You could also use VNC (a cross-platform remote desktop).
X is more efficient, since it only sends draw commands rather than pixels, but if you are using OpenGL it is likely that most of the data is a rendered image anyway.
Another big advantage of VNC is that you can start the program locally on the server and then connect to it with VNC, drop the connection, reconnect from another machine, etc., without disturbing the main running program.
For OpenGL, running an X server is definitely the better solution. Just make sure the application is developed to be networked: it should NOT use immediate mode for rendering, and textures should RARELY be transferred.
Why is an X server the better solution in this case (as opposed to VNC)? Because you get acceleration on the workstation, while a VNC solution is usually not even accelerated on the remote machine. So as long as the data is buffered on the X server (using vertex arrays, vertex buffer objects, texture objects, etc.) you should get much higher speed than with VNC, especially with complex scenes, since VNC has to analyze, transfer and decode them as pixels.
If you need server GLX version 1.2, the free version of Xming (the 2007 Mesa release) works fine. But if your application needs version 1.4 (for example, Qt 5 does), the X server from Cygwin is free and can run it. Use these commands:
[On server]
sudo vi /etc/ssh/sshd_config
Add:
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no
AllowTcpForwarding yes
TCPKeepAlive yes
ClientAliveInterval 30
ClientAliveCountMax 10000
vi ~/.bashrc
Add:
export DISPLAY=ip_from_remote:0
Now restart the ssh server.
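For example, on a system using systemd (the service name varies by distro):
sudo systemctl restart sshd    # on Debian/Ubuntu the service is called ssh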
[On client side]
Install Cygwin64 (with the X11 packages); after that, run this command:
d:\cygwin64\bin\run.exe --quote /usr/bin/bash.exe -l -c "cd; /usr/bin/xinit /etc/X11/xinit/startxwinrc -- /usr/bin/XWin :0 -ac -multiwindow -listen tcp"
Now run the ssh client:
d:\cygwin64\bin\mintty.exe -i /Cygwin-Terminal.ico -e /usr/bin/ssh -Y user_name@ip_from_server