I have the following boxes:
a) A Windows box with Eclipse CDT,
b) A Linux box, accessible for me only via SSH.
Both the compiler and the hardware required to build and run my project are only on machine B.
I'd like to work "transparently" from a Windows box on that project using Eclipse CDT and be able to build, run and debug the project remotely from within the IDE.
How do I set things up so that:
The building will work? Any simpler solutions than writing a local makefile which would rsync the project and then call a remote makefile to initiate the actual build? Does Eclipse managed build have a feature for that?
The debugging will work?
Preferably - the Eclipse CDT code indexing will work? Do I have to copy all required header files from machine B to machine A and add them to include path manually?
Try the Remote System Explorer (RSE). It's a set of plug-ins to do exactly what you want.
RSE may already be included in your current Eclipse installation. To check in Eclipse Indigo go to Window > Open Perspective > Other... and choose Remote System Explorer from the Open Perspective dialog to open the RSE perspective.
To create an SSH remote project from the RSE perspective in Eclipse:
Define a new connection and choose SSH Only from the Select Remote System Type screen in the New Connection dialog.
Enter the connection information then choose Finish.
Connect to the new host. (Assumes SSH keys are already setup.)
Once connected, drill down into the host's Sftp Files, choose a folder and select Create Remote Project from the item's context menu. (Wait as the remote project is created.)
If done correctly, there should now be a new remote project accessible from the Project Explorer and other perspectives within Eclipse. With the SSH connection set up correctly, passwords can be made an optional part of the normal SSH authentication process. A remote project with Eclipse via SSH is now created.
The very simplest way would be to run Eclipse CDT on the Linux Box and use either X11-Forwarding or remote desktop software such as VNC.
This, of course, is only possible if Eclipse is present on the Linux box and your network connection to the box is sufficiently fast.
The advantage is that, due to everything being local, you won't have synchronization issues, and you don't get any awkward cross-platform issues.
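A minimal sketch of the X11-forwarding variant, assuming Eclipse is already installed on the Linux box and an X server (e.g. VcXsrv or Xming) is running on the Windows side; the user, host, and install path are placeholders:
ssh -X user@linuxbox            # forward X11 to the local X server
/opt/eclipse/eclipse &          # then, in the remote shell, start Eclipse (the path is an assumption)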
If there is no Eclipse on the box, you could think about sharing your Linux working directory via SMB (or SSHFS) and accessing it from your Windows machine, but that would require quite some setup.
Both would be better than having two copies, especially when it's cross-platform.
I'm in the same spot myself (or was). FWIW, I ended up checking out to a Samba share on the Linux host and editing that share locally on the Windows machine with Notepad++, then I compiled on the Linux box via PuTTY. (We weren't allowed to update the ten-year-old versions of the editors on the Linux host and it didn't have Java, so I gave up on X11 forwarding.)
Now... I run modern Linux in a VM on my Windows host, add all the tools I want (e.g. CDT) to the VM and then I checkout and build in a chroot jail that closely resembles the RTE.
It's a clunky solution but I thought I'd throw it in to the mix.
My solution is similar to the SAMBA one except using sshfs. Mount my remote server with sshfs, open my makefile project on the remote machine. Go from there.
It seems I can run a GUI frontend to mercurial this way as well.
Building my remote code is as simple as: ssh address remote_make_command
I am looking for a decent way to debug though. Possibly via gdbserver?
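For the debugging question above, a hedged sketch of the usual gdbserver workflow; the host name, port, and program name are placeholders, and it assumes gdbserver is installed on the Linux box and a remote-aware gdb (or Eclipse CDT's remote debug launch) is available on your side:
# on the Linux box (via ssh): run the program under gdbserver on a free port
gdbserver :2345 ./my_program
# locally: connect gdb (or CDT's GDB remote launcher) to that port
gdb ./my_program
(gdb) target remote linuxhost:2345
(gdb) continue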
I tried ssh -X but it was unbearably slow.
I also tried RSE, but it didn't even support building the project with a Makefile (I'm being told that this has changed since I posted my answer, but I haven't tried that out)
I read that NX is faster than X11 forwarding, but I couldn't get it to work.
Finally, I found out that my server supports X2Go (the link has install instructions if yours does not). Now I only had to:
download and unpack Eclipse on the server,
install X2Go on my local machine (sudo apt-get install x2goclient on Ubuntu),
configure the connection (host, auto-login with ssh key, choose to run Eclipse).
Everything is just as if I was working on a local machine, including building, debugging, and code indexing. And there are no noticeable lags.
I had the same problem 2 years ago and I solved it in the following way:
1) I build my projects with makefiles, not managed by Eclipse
2) I use a SAMBA connection to edit the files inside Eclipse
3) Building the project:
Eclipse calls a "local" make with a makefile which opens an SSH connection to the Linux host. On the SSH command line you can pass parameters which are executed on the Linux host. For that parameter I use a makeit.sh shell script, which calls the "real" make on the Linux host. The different build targets can also be passed as parameters from the local makefile --> makeit.sh --> makefile on the Linux host.
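A minimal sketch of what such a local makefile could look like; the host name, directory, target names, and the makeit.sh script name are placeholders matching the description above, not the author's actual files:
# Local makefile called by Eclipse; it just forwards the requested target over SSH.
REMOTE     = user@linuxhost
REMOTE_DIR = /home/user/project

.PHONY: all debug clean
all debug clean:
	ssh $(REMOTE) "cd $(REMOTE_DIR) && ./makeit.sh $@"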
The way I solved that one was:
For windows:
Export the 'workspace' directory from the Linux machine using samba.
Mount it locally in windows.
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
For Linux:
Mount the 'workspace' directory using sshfs
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
In both cases you can either build and run through Eclipse, or build on the remote machine via ssh.
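For the Linux case, a minimal sshfs sketch; the user, host, and paths are placeholders:
mkdir -p ~/remote_workspace
sshfs user@linuxhost:/home/user/workspace ~/remote_workspace   # mount the remote workspace
# ... point Eclipse's workspace at ~/remote_workspace and work as usual ...
fusermount -u ~/remote_workspace                               # unmount when done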
For this case you can use Eclipse PTP (Parallel Tools Platform), https://eclipse.org/ptp/, for source browsing and building.
You can use this plugin to debug your application:
http://marketplace.eclipse.org/content/direct-remote-c-debugging
How to edit in Eclipse locally, but use a git-based script I wrote (sync_git_repo_from_pc1_to_pc2.sh) to synchronize and build remotely
The script I wrote to do this is sync_git_repo_from_pc1_to_pc2.sh.
Readme: README_git-sync_repo_from_pc1_to_pc2.md
Update: see also this alternative/competitor, GitSync (https://github.com/jachin/GitSync), and the related question "How to use Sublime over SSH".
This answer currently only applies to using two Linux computers [or maybe works on Mac too? -- untested on Mac] (syncing from one to the other), because I wrote this synchronization script in bash. It is simply a wrapper around git, however, so feel free to take it and convert it into a cross-platform Python solution or something if you wish.
This doesn't directly answer the OP's question, but it is so close I guarantee it will answer the question of many other people who land on this page (mine included, actually, as I came here first before writing my own solution), so I'm posting it here anyway.
I want to:
develop code using a powerful IDE like Eclipse on a light-weight Linux computer, then
build that code via ssh on a different, more powerful Linux computer (from the command-line, NOT from inside Eclipse)
Let's call the first computer where I write the code "PC1" (Personal Computer 1), and the 2nd computer where I build the code "PC2". I need a tool to easily synchronize from PC1 to PC2. I tried rsync, but it was insanely slow for large repos and took tons of bandwidth and data.
So, how do I do it? What workflow should I use? If you have this question too, here's the workflow that I decided upon. I wrote a bash script to automate the process by using git to automatically push changes from PC1 to PC2 via a remote repository, such as github. So far it works very well and I'm very pleased with it. It is far far far faster than rsync, more trustworthy in my opinion because each PC maintains a functional git repo, and uses far less bandwidth to do the whole sync, so it's easily doable over a cell phone hot spot without using tons of your data.
Setup:
Install the script on PC1 (this solution assumes ~/bin is in your $PATH):
git clone https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.git
cd eRCaGuy_dotfiles/useful_scripts
mkdir -p ~/bin
ln -s "${PWD}/sync_git_repo_from_pc1_to_pc2.sh" ~/bin/sync_git_repo_from_pc1_to_pc2
cd ..
cp -i .sync_git_repo ~/.sync_git_repo
Now edit the "~/.sync_git_repo" file you just copied above, and update its parameters to fit your case. Here are the parameters it contains:
# The git repo root directory on PC2 where you are syncing your files TO; this dir must *already exist*
# and you must have *already `git clone`d* a copy of your git repo into it!
# - Do NOT use variables such as `$HOME`. Be explicit instead. This is because the variable expansion will
# happen on the local machine when what we need is the variable expansion from the remote machine. Being
# explicit instead just avoids this problem.
PC2_GIT_REPO_TARGET_DIR="/home/gabriel/dev/eRCaGuy_dotfiles" # explicitly type this out; don't use variables
PC2_SSH_USERNAME="my_username" # explicitly type this out; don't use variables
PC2_SSH_HOST="my_hostname" # explicitly type this out; don't use variables
Git clone your repo you want to sync on both PC1 and PC2.
Ensure your ssh keys are all set up to be able to push and pull to the remote repo from both PC1 and PC2. Here are some helpful links:
https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh
https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Ensure your ssh keys are all set up to ssh from PC1 to PC2.
Now cd into any directory within the git repo on PC1, and run:
sync_git_repo_from_pc1_to_pc2
That's it! About 30 seconds later everything will be magically synced from PC1 to PC2, and it will be printing output the whole time to tell you what it's doing and where it's doing it on your disk and on which computer. It's safe too, because it doesn't overwrite or delete anything that is uncommitted. It backs it up first instead! Read more below for how that works.
Here's the process this script uses (ie: what it's actually doing)
From PC1: It checks to see if any uncommitted changes are on PC1. If so, it commits them to a temporary commit on the current branch. It then force pushes them to a remote SYNC branch. Then it uncommits the temporary commit it just made on the local branch, and puts the local git repo back to exactly how it was by staging any files that were previously staged at the time you called the script. Next, it rsyncs a copy of the script over to PC2, and does an ssh call to tell PC2 to run the script with a special option to just do the PC2 stuff.
Here's what PC2 does: it cds into the repo, and checks to see if any local uncommitted changes exist. If so, it creates a new backup branch forked off of the current branch (sample name: my_branch_SYNC_BAK_20200220-0028hrs-15sec <-- notice that's YYYYMMDD-HHMMhrs--SSsec), and commits any uncommitted changes to that branch with a commit message such as DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE). Now, it checks out the SYNC branch, pulling it from the remote repository if it is not already on the local machine. Then, it fetches the latest changes on the remote repository, and does a hard reset to force the local SYNC repository to match the remote SYNC repository. You might call this a "hard pull". It is safe, however, because we already backed up any uncommitted changes we had locally on PC2, so nothing is lost!
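A hedged sketch of the underlying git operations as I understand them; the branch name, backup-branch naming, commit messages, and exact ordering may differ from what the real script does:
# PC1 side (roughly): snapshot uncommitted work, force-push it to a SYNC branch, undo the snapshot
git add -A && git commit -m "temp SYNC commit"
git push --force origin HEAD:SYNC
git reset --soft HEAD~1          # uncommit, keeping the changes in the working tree/index

# PC2 side (roughly): back up local uncommitted work, then hard-reset to the SYNC branch ("hard pull")
git checkout -b my_branch_SYNC_BAK_$(date +%Y%m%d-%H%Mhrs-%Ssec)
git commit -am "DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE)"
git checkout SYNC
git fetch origin
git reset --hard origin/SYNC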
That's it! You now have produced a perfect copy from PC1 to PC2 without even having to ensure clean working directories, as the script handled all of the automatic committing and stuff for you! It is fast and works very well on huge repositories. Now you have an easy mechanism to use any IDE of your choice on one machine while building or testing on another machine, easily, over a wifi hot spot from your cell phone if needed, even if the repository is dozens of gigabytes and you are time and resource-constrained.
Resources:
The whole project: https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles
See tons more links and references in the source code itself within this project.
How to do a "hard pull", as I call it: How do I force "git pull" to overwrite local files?
Related:
git repository sync between computers, when moving around?
I have found lots of questions about copying the content of a file in Vim from one file to another, and of course there are various ways to do it (1-2). I'm working on a remote machine and I'm going to copy a large amount of data from a file in Vim to my laptop. None of the methods I have found so far works for copying from a remote machine. I have to use scp for anything that I need from the remote machine. Do you have any idea?
I can't use "*100yy to copy the content, and the remote machine does not have any graphical editor. – Abolfazl
Naturally, if the editor has no notion of a GUI or a window system, it cannot use the clipboard, let alone that of your networked laptop. One option is to use the copy function of the local terminal program where you enter ssh, but of course that is practically limited by the terminal size.
If your laptop runs Windows, I recommend using WinSCP instead of ssh and the remote vim - you can configure WinSCP to edit the remote file with your local gvim, with all its features.
In your terminal:
$ man ssh
/-X   (i.e. search the man page for the -X option, which enables X11 forwarding)
I'm confused about setting up a Jenkins slave on a Mac. Google seems to have a great answer for the Java Web Start option (https://blog.codecentric.de/en/2012/01/continuous-integration-for-ios-projects-with-jenkins-ci/), however can someone clarify the steps for setting up a Jenkins slave on a Mac with the SSH start option?
Currently the Jenkins master is on CentOS. As I understand it, to make a slave on a Mac you should:
1. Go to the Mac and create a new full-fledged sudo user for Jenkins, with a home folder where the agent itself will reside.
2. Set up the node as a usual Linux node in the Jenkins web interface, with login|pass for this user.
3. Restrict your Mac build to this node.
However I'm not sure if the first step is right - do I need to set up the jenkins user by hand with elevated privileges, the ability to log onto the machine, etc.? Perhaps it's possible to create a "hidden" user - if so, can someone help or point to a good manual for this? I'm new to the Mac terminal, so I'm not sure if the steps are all the same as on Linux or different.
Thank you.
Just finished setting up my Mac mini slave for ssh access. Lots of old tutorials and ones with unnecessary information. I had to reboot my mini to start over again and this time it worked.
To put it quickly (this is all through the terminal/command line; no Ubuntu, nothing else):
Create ssh private and public keys with ssh-keygen. In my case the keys were given to me with -C "name" but no passphrase, and with file names of id_rsa and id_rsa.pub. Keep the private (non-.pub) key to be used by the Jenkins credential later. For testing purposes, so you can verify the ssh connection works without having to relaunch the Jenkins agent, the private key should be kept in the /Users/<username>/.ssh directory on the local test host, readable by and owned by that user, if that's how you're testing it.
mkdir .ssh in the remote Mac mini slave's /Users/jenkins/ directory
on the Mac mini, make sure the owner of the .ssh directory and any subdirectories or files is jenkins and NOT root (sudo chown ...).
make sure the permissions of the .ssh directory and any subdirectories or files are readable and writeable (if you haven't set ownership properly, you will be required to use sudo in order to change permissions; if you are using sudo to set permissions, you haven't properly set ownership to the jenkins user)
allow remote login in the Mac mini's System Preferences -> Sharing -> check Remote Login and allow administrators, and give it a static IP -> Network -> TCP/IP -> DHCP with manual address or completely manual
on the test host/local machine (non Mac mini), from the terminal/command line, run ssh jenkins@static.ip.address.of.MacMini to make sure you can ssh into the remote Mac mini with password authentication. You may get a request to okay the new host (at the remote Mac mini's IP address).
then log out, and on the local machine use ssh-copy-id -i to copy the contents of id_rsa.pub (whether it's in .ssh or wherever) to the remote machine's authorized_keys (see the command sketch after these steps).
this will automatically generate the authorized_keys file in the .ssh directory
make sure the authorized_keys file also has the proper permissions
in Jenkins, manage nodes. Create a new node. Add a credential and make it "SSH username with private key". Username: jenkins. Private key: enter directly. The string should be copied from the local test host's private key (pbcopy < ~/.ssh/id_rsa), including the "==== begin and end private key ====" parts, and then save.
Then, in the new node configuration: no need for a toolkit. Remote root directory: /Users/jenkins. Host: the Mac mini's static IP address. Host Key Verification Strategy: Manually Trusted Key Verification Strategy. Check "Require manual verification of initial connection".
upon the first connection attempt, if you don't have a JDK set up and running, then do so. I downloaded the Java 8 SE Development Kit, and once I confirmed it was installed on the Mac mini with javac -version and java -version, I launched the agent again and it authenticated with no problem.
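A minimal sketch of the key-related commands from the steps above; the user name, host address, and key file names are the ones assumed in this answer and may differ on your setup:
# on the local test host: generate a key pair (no passphrase in this setup)
ssh-keygen -t rsa -C "name" -f ~/.ssh/id_rsa

# verify password-based ssh works first
ssh jenkins@static.ip.address.of.MacMini

# install the public key on the Mac mini (this creates/updates ~/.ssh/authorized_keys remotely)
ssh-copy-id -i ~/.ssh/id_rsa.pub jenkins@static.ip.address.of.MacMini

# copy the private key (including the BEGIN/END lines) into the Jenkins credential
pbcopy < ~/.ssh/id_rsa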
My mistakes from reading old tutorials were:
trying to remove the need for passwords in /etc/ssh/sshd_config. This was completely unnecessary
Also, I may have not paid attention to the owner and/or screwed up permissions of .ssh, .ssh/authorized_keys & .ssh/id_rsa in remote and my local machine as well.
Initially, I deleted the "===== Begin private key" and "===== End private key" lines when I manually entered the private key while creating the credential in Jenkins. Those should be included. The id_rsa file itself should be left as is.
You do need a user on the Mac which the Jenkins master will use to ssh in. But this is exactly the same as setting up a Linux slave.
Whether the user needs elevated privileges depends on what you want Jenkins to do with the account.
You also need to log into Mac from the console using an admin user and turn on remote login in the Sharing panel of System Preferences. In the same panel you can restrict the remote login to specific users or allow all users to log in.
If I were you, I would create a normal user for Jenkins using the Users and Groups panel in System Preferences. Creating a hidden user using command-line tools is possible, but it is a bit involved. If you really want to go there, you can check how the postinstall script in the Jenkins Mac installer creates a user named jenkins:
https://github.com/jenkinsci/packaging/blob/master/osx/scripts/postinstall-launchd-jenkins
On a machine I don't have physical access to, I left gedit open with a text file I forgot to save. I can ssh to that machine. The OS running there is Ubuntu.
Is there any way I can save that file remotely?
Look at xdotool, which can programmatically activate/move/resize windows, and simulate keyboard input and mouse activity.
xdotool search --classname gedit key ctrl+s
This will search for all of gedit's windows and press ctrl+s in them.
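One note, as an assumption about a typical setup rather than part of the original answer: when running this over ssh, xdotool has to talk to the X display the gedit session is actually on (usually :0), so you may need something like:
DISPLAY=:0 xdotool search --classname gedit key ctrl+s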
Yup! I can think of two straightforward ways.
If you have SSH access and root privileges, you can tunnel in and install and configure a remote desktop viewer (or use the default VNC, vinagre). You can then connect to your desktop, find the gedit window and literally press "save."
You can use X forwarding over SSH to forward the gedit window to your local machine, where you can also just press "save." Note that you'll have to change the X display of the gedit instance so it's forwarded. This may be slightly tricky, but you can give it a try. You'll find many guides to X forwarding with a simple search.
There's definitely a hackish way to take the contents gedit is holding in memory and write them to file, but I think using a VNC client is a much, much simpler option.
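A hedged sketch of the VNC option, assuming you can install x11vnc on the remote Ubuntu machine and that the gedit window is on display :0; the user and host names are placeholders:
# on the remote machine (over ssh): attach a VNC server to the existing desktop, listening only locally
x11vnc -display :0 -localhost -once
# on your machine: tunnel the VNC port, then point a VNC viewer at localhost:5900
ssh -L 5900:localhost:5900 user@remotehost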
My father died recently and I've inherited his Mac. I'd love to put it to use in my own life, but I don't want to wipe its brains out just so I can reconfigure it to use in my network, etc. His old files are historically important to me—I trust you can understand my desire to keep them.
I can log in as I had an account on the machine before he passed, but that's about it.
Sincere condolences.
Try this: Mac OS X - Resetting a Forgotten Account Password
The link shows a walk-through of starting up from the Mac OS X installation DVD and using its Reset Password functionality to reset the administrator account's password.
There's further information here: Mac OS X: Changing or resetting an account password
If you don't want to make any changes to the mac, a little known feature called Target Disk Mode might make it easier.
If an Open Firmware password has not been set, you can try entering single-user mode by pressing apple-s during boot. Then you should be able to do anything you want, since you will be root.
See this apple support document.
I recommend using this capability to make all your father's files readable by your normal user account, then backup all his files and do a fresh reinstall.
If your account is an administrator account, you can open up Terminal.app and type
sudo passwd root
You will be prompted to enter a new root password.
If you don't have administrator access, you can reset the root password using your OS X installation CD. Instructions are here.
EDIT: Node's link is better.
Boot the computer with a BSD or Linux CD and mount the filesystem.
If the machine is running Leopard or higher, you can actually force any Mac to create a new administrator account by simply deleting one file. Boot into single-user mode (hold Command + S when booting up). Once in single-user mode, mount the file system, remove the marker file, and restart the machine:
mount -uw /
rm /var/db/.AppleSetupDone
This works by removing the .AppleSetupDone marker, so it tricks the Mac into replaying the setup assistant on boot.
Otherwise, you can do something very similar on older Macs by again booting into single-user mode, except this time inputting the following (line by line):
mount -uw /
cd /private/var/db/netinfo
mv local.nidb local.old
rm ../.AppleSetupDone
exit
If you can access the files in question, then the first thing to do would be to back them up...
FWIW
Assuming your account has admin privileges, boot the machine and enter your password; then use the Finder to navigate to his user folder (/Users/name-he-used-on-the-mac); his subfolders will have a little red lock symbol.
You can either copy/paste them in the Finder (you will be prompted for your password), or
you can open Terminal and ditto them:
sudo ditto <his files> <directory where you want the copies>
at which time you will be prompted for your password.