Syncing files (code) on a local machine with a remote server - Linux

Almost all the code I write runs on a high-performance server. Lately I've just been coding remotely via a NoMachine NX (VNC-like) remote desktop, and the latency when scrolling through code and typing has become unbearable.
Does anyone have suggestions for setting up a local folder that syncs with the high-performance server every time I save a file? Something similar to Dropbox, but that I can apply to any folder.
It needs to work with Mac OS X and Ubuntu Linux.

syncs with the high-performance server every time I save a file
You don't want this. What you should do instead is use a DVCS on both sides, and then push the changes to the server once they've been tested. A makefile can help with the actual push action (among other things).
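A hedged sketch of that workflow, assuming Git as the DVCS (the remote name hpc, the server host name and the repository path are made up for illustration):

# one-time setup: point a remote at a repository on the server
git remote add hpc user@hpc-server:repos/project.git

# day to day: commit, run the tests, and push only code that passed
git commit -am "describe the change"
make test
git push hpc master

A makefile target (e.g. deploy) could simply wrap the test-and-push steps so the server only ever receives tested code.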

Mount the remote drive on your local machine.
If your working machine is OS X and you don't feel confident with the terminal, then http://panic.com/transmit/ might be a good solution.
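If you are comfortable with the terminal, sshfs is a common way to do that mount on both OS X (with OSXFUSE installed) and Ubuntu. A rough sketch, with the host name and paths invented for illustration:

# Ubuntu: sudo apt-get install sshfs   /   OS X: install OSXFUSE + sshfs first
mkdir -p ~/hpc-code
sshfs user@hpc-server:/home/user/code ~/hpc-code
# edit files in ~/hpc-code with your local editor, then unmount when done:
fusermount -u ~/hpc-code    # Linux
umount ~/hpc-code           # OS X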

Related

What is the best practice to code when the project is on a Guest OS (Virtualbox)?

I have a project whose files live on a guest OS (Red Hat Enterprise Linux) in VirtualBox; my host OS is Mac OS. I used to code directly in RHEL with the Atom editor. But my boss told me that it's inefficient to code in a guest OS, which makes sense because Mac OS or Windows is more responsive than Linux, so I changed my workflow:
1. Copy the whole project from RHEL to a shared folder between Mac OS and RHEL using rsync
2. Code with Atom on Mac OS
3. Copy the project in the shared folder back to the original project on RHEL with rsync
I'm using Atom (not vim in RHEL) because it can edit the whole project in one window, which is convenient in my situation. But there is a problem: after copying the project back in Step 3, git status shows that everything has changed even though I only edited a few files. That is a little annoying.
Is there any better way to code in such environment? any advice is appreciated.
BretzL's suggestion to use shared folders is a good one, but I think it's important to address the underlying issue: your boss's assumption that coding is inefficient or slow just because you're working on a VM is simply not true.
It sounds like your new workflow, which was adopted as a result of that advice, is making development harder for you than it was on the VM. The shared folders will help with that, but if the VM is configured with access to enough cores and memory, its performance for most tasks will be fine, and there may not be any problem with developing on the VM directly. I do a significant amount of development on a VM and haven't had any issues. You may see slower builds on the VM if you're building whole kernels or other large projects, but if that's not the case, it should be fine.
If you didn't have any performance or productivity problems before forcing yourself to work outside of the VM, then... it wasn't a problem.
(I also have an issue with the assumption that Linux is always less responsive than Windows or Mac OS, but that's a debate for a different day.)
VirtualBox supports shared folders, so you don't need to rsync back and forth. Just mount the shared folder where your application server on the RHEL guest expects the code.
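As a rough illustration (the share name and mount point are hypothetical, and the VirtualBox Guest Additions must be installed in the RHEL guest):

# inside the RHEL guest, mount the VirtualBox shared folder named "project"
sudo mkdir -p /var/www/project
sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) project /var/www/project
# or add it to /etc/fstab so it mounts at boot:
# project  /var/www/project  vboxsf  uid=1000,gid=1000  0  0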
I also recommend you take a look at https://www.vagrantup.com/ for managing developer VMs.

Vagrant, shared folder: take advantage of inotify over NFS

Our Symfony2 webapp uses the Assetic watcher in development mode to re-compile assets on the go.
The webapp runs in a Docker container which runs in a Vagrant VM (Ubuntu 12.04 Precise).
The host is OSX 10.9 Mavericks and it shares the code folder with the VM through a NFS (v3) share and the code is mounted in the container via a host/guest volume in Docker.
Since inotify does not seem to be able to detect file modifications over NFSv3, the watcher works in polling mode, which can be very slow (~1/2 minutes to detect the modification).
I've read that NFSv4 is inotify compliant, but I have not found any good resources on that.
Is there a way to make NFS and inotify work together?
Unfortunately, inotify cannot work over NFS. inotify works by hooking itself into the VFS (virtual filesystem) layer in the kernel. Whenever a modification happens, inotify knows about it, because the modification happens on the same machine, and therefore in the same kernel, which is what makes the whole thing possible.
With NFS, modifications happen on the server, and notifications are expected on the clients. But NFS doesn't notify the clients when a change is made; otherwise it wouldn't scale. NFS was designed (and is operated) to serve thousands of clients from a single server. Imagine if you made a tiny change and the server had to push it to all clients!
Of course, you could say "hey, there should be a subscription mechanism in the NFS protocol, so that clients can tell the server that they want to know about changes happening in a specific location". Well, NFS was designed 30 years ago, so forgive them for not including this subscription/notification system :-)
I'm not familiar with Assetic, but maybe you could use a custom script to watch for changes manually and re-compile the assets each time you detect one. Just walk through the directory containing the asset sources, keep track of the mtime of each file in an associative array, and each time you see a new file (or a new mtime), recompile. Boom!
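A minimal sketch of such a watcher in shell, using find -newer against a marker file instead of an explicit mtime table (the asset directory and the rebuild command are placeholders to adapt to your project):

#!/bin/sh
# rebuild whenever any file under web/assets is newer than the last build marker
touch /tmp/.assets-built
while true; do
  if [ -n "$(find web/assets -type f -newer /tmp/.assets-built -print -quit)" ]; then
    php app/console assetic:dump      # replace with your actual rebuild command
    touch /tmp/.assets-built
  fi
  sleep 2
done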
See also this other SO question about inotify and NFS.
Here is a plugin which aims to solve this: https://github.com/mhallin/vagrant-notify-forwarder
Just install it and reload your boxes to have inotify notifications forwarded to your guest machines:
vagrant plugin install vagrant-notify-forwarder
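Then reload the machine so the forwarder starts relaying events:

vagrant reload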
You might be interested in a tool called Guard: it listens for file changes on the host OS, and the guest then pulls in and applies those updates. This worked for me, and now my assets are updated almost instantaneously.
https://serverfault.com/questions/453826/vagrant-shared-folder-and-file-change-events

How to transfer a live data stream from a linux headless system to a windows machine?

I've recently been working with the NAO. We're trying to connect the NavChip to it and do some experiments related to robot navigation. The NAO runs a modified 2.6 Linux kernel on its Geode system. I've managed to make my NavChip work on it (I needed to compile the Linux cp210x kernel module, etc.). I can therefore run a C program that came with the NavChip and collect data from it. However, the data can only be logged on the local file system. I'd like to stream this data over the network to a Windows machine, since all the processing is MATLAB based. Would anyone have any suggestions on how I can send this data from the NAO to a Windows machine?
The NAO's system is pretty limited. It has ssh, and some common utilities like cat etc., but nothing advanced.
I'm not sure I understand the problem properly, but I think you've answered your own question: you mentioned that ssh is installed, so why not just scp the file? Use an ssh client on the Windows box to connect remotely and download the relevant log file.
If you really do need to push the file from the remote host to the local machine (rather than connect to the remote host and download it locally), then netcat should work; see here: http://www.g-loaded.eu/2006/11/06/netcat-a-couple-of-useful-examples/
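A rough sketch of the netcat approach (the port, file names and the logger program name are invented for illustration; on Windows, ncat from the Nmap package or nc under Cygwin will do):

# on the Windows machine: listen and write whatever arrives to a file
ncat -l 9000 > navchip_data.log

# on the NAO: pipe the logger's output straight over the network...
./navchip_logger | nc windows-host 9000
# ...or send an already-written log file
nc windows-host 9000 < /home/nao/navchip.log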
Otherwise, just write your own socket program in C and pipe the data across (it should be pretty trivial).

Is it safe to use a virtual server as dev environment, symlinking to files on the host?

I used to use MAMP (or just a local Apache/PHP/MySQL stack) to work on web projects. I've since graduated to a live Ubuntu server which is much closer to the production environments for the sites I work on.
Now I'm trying to take this a step further to optimize my workflow. My goal is to have a Linux server running in VirtualBox that automounts a local folder share (from the host) and uses a symlink to gain access to the files (i.e. client:/var/www/dev is a symlink to host:/Users/charlie/dev/).
I don't want to keep my files stored on the virtual server if it can be avoided. I prefer having direct local access to the files and not having to wait out buffering issues between the host and the client. For example, if I have several files that live on the client open in my IDE and I close my laptop, there's a hiccup as soon as I open it again: my IDE has open projects that reference folders and files on a network share that isn't available yet. In the few seconds it takes the virtual machine to wake up, OS X has already reported that the share can't be found and was disconnected, the IDE chokes up, and so on.
So what am I asking? Well, is this safe / are there obvious pitfalls I'm not seeing / better ways to do this?
Edit: For anyone that stumbles upon this post, the final setup is a Linux virtual machine running in VirtualBox on a Mac with NFS and a symlink from my Apache web root to my mount.
I used NFS Manager (http://www.bresink.com/osx/NFSManager.html) to setup the NFS Server on my host computer with user mapping to my primary account. This ensures that when my VM mounts the NFS share it can do whatever it needs (reading, writing, modifying). Then I added this line to /etc/fstab on my VM to automount the share on boot: "123.456.89.1:/Users/charlie/nfs_share /mnt/nfs_share nfs" (where 123 is my host IP on the virtual NAT).
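Spelled out as commands on the VM (the host IP is as above; the project directory inside the share and the web root path are examples):

# /etc/fstab entry so the NFS share mounts at boot
# 123.456.89.1:/Users/charlie/nfs_share  /mnt/nfs_share  nfs  defaults  0  0
sudo mkdir -p /mnt/nfs_share
sudo mount /mnt/nfs_share
# point the web root at the project inside the share
sudo ln -s /mnt/nfs_share/mysite /var/www/dev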
The result is a killer development environment where I can use Finder, Aptana (or whatever your editor of choice is), Photoshop, etc. to work on files locally and simultaneously test them in my "real" Apache/Lighttpd/MySQL/PHP environment!
I am using the exact same setup to access my documents folder between my Ubuntu host and the Windows guest, and likewise on my iMac. The only issue when editing on the two platforms is the CR/LF line endings, but that won't be a problem in your setup.

How do you upload/edit files on your web development server?

For a long time now I have been using a local XAMPP installation on my OS X machine for all my web development. Because updating/maintaining XAMPP is such a pain, I set up an Ubuntu server for my web development.
I would like to know what you think is the best/easiest way to connect to your main development server to edit the files. What protocol do you use (smb, webdav, ftp, ldap, etc.)? Also, do you leave the files on your own machine and let the server read them from your hard drive (e.g. via an SMB share), or do you keep the files on the server?
I would go with SMB as your means of file transfer. How you do it is up to you; it depends on how often your files are accessed, how often they are updated, and so on. If you plan on updating the files often (i.e. you are in a rapid development phase), then you can link them as you described. If updates are infrequent and the number of requests is high, upload the files to the server. That reduces the stress on your LAN as the files are requested: in the linked setup the route would be modem -- SMB server -- SMB share -- SMB server -- modem, whereas this way it is modem -- SMB server -- modem.
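For reference, mounting such a share on the Ubuntu dev server looks roughly like this (the workstation name, share name, user and mount point are placeholders):

# on the Ubuntu server, mount the SMB/CIFS share exported by the workstation
sudo apt-get install cifs-utils
sudo mkdir -p /mnt/devshare
sudo mount -t cifs //workstation/webdev /mnt/devshare -o username=me,uid=$(id -u)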
I use an Ubuntu virtual machine running the web server, git and vim, so I back up everything: my Vim configuration and the server config. For me that is the fastest way to recover from a crash, for example.
Also, you can use vim over ssh with
vim scp://myuser@server.com//home/myuser/file
A simpler example is viewing a page's source with editor syntax highlighting and indentation:
vim http://domain.com
You can save ssh credentials too
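One way to do that, so the scp:// editing above doesn't prompt for a password each time, is key-based authentication. A quick sketch (the host alias is hypothetical):

# generate a key once and copy the public half to the server
ssh-keygen -t ed25519
ssh-copy-id myuser@server.com
# optionally give the host a short alias in ~/.ssh/config:
#   Host dev
#     HostName server.com
#     User myuser
# after which: vim scp://dev//home/myuser/file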
I normally use Aptana (an Eclipse derivative) over ssh/sftp to edit the files directly on my server.
If you need to transfer files, I suggest using something like FileZilla, which will let you connect over ftp or ssh/sftp.
I used to map an SMB share of my LAMP server and edit the PHP files directly with Dreamweaver. It worked really well.
