Jenkins loses all data when rebooting PC - Linux

I am currently learning Jenkins and how to utilize continuous integration. I am having an issue where all of my data/config files are reset after rebooting my PC. Has anyone had similar issues or am I missing something?

Honestly, I haven't had this problem, which is somewhat strange, because as far as I know Jenkins stores all its configuration in config.xml files under the Jenkins home directory. Not sure if it'll help you, but if your data doesn't appear in Jenkins after a restart, go to "Manage Jenkins" and there you'll find "Reload Configuration from Disk".
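The same reload can also be triggered from the command line via the Jenkins CLI - a minimal sketch, assuming Jenkins is on the default port and you've downloaded jenkins-cli.jar from your own server:
java -jar jenkins-cli.jar -s http://localhost:8080/ reload-configuration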
What I didn't understand from your question:
After a restart of Jenkins and the PC, do your data and settings not appear in the Jenkins GUI, or are they also missing from the config.xml files on disk? For example, if you create a user and a job, are those settings missing after a restart only from the GUI, or also from config.xml and the jobs directory?
How do you run Jenkins? Do you start Jenkins from Eclipse, or do you have it installed on your PC?
Does Jenkins have permission to create/edit files in the installation directory? Be aware that after installation, Jenkins creates a default user named "jenkins" and will try to edit files and create directories on your PC as that user.
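A quick way to check is to compare the owner of the Jenkins home directory with the user the Jenkins process runs as - a rough sketch, assuming the common Linux default location of /var/lib/jenkins:
# Who owns the Jenkins home and its config?
ls -ld /var/lib/jenkins
ls -l /var/lib/jenkins/config.xml
# Which user is the Jenkins process running as?
ps -ef | grep -i jenkins
# If ownership is wrong, hand the directory back to the jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins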

Related

Spring Tool Suite 3.8.2 - Installation on Ubuntu

I managed to install STS 3.8.2 on Ubuntu 16.04 - with a lot of hacky experimentation. I have it working, but I am not happy with my solution.
Here is what I had to do:
Extracted the tar file into /opt/sts-bundle.
If you put it anywhere else, like /opt/sts, the TC server fails to start from STS.
With the files in /opt/sts-bundle, TC Server still fails to start from STS - permission errors. To get it to work you need to futz around with the permissions of the pivotal-tc-server subdirectories; essentially you need to open them up to your group (the same one running STS) (security hole?).
A local install in your own ~/sts-bundle fails with "files not found" while attempting to back up all the conf files. It still looks in /opt/sts-bundle for all these config files (just to copy them to /backup). You can change the top directory of the server in the STS server properties, but it still looks in /opt/sts-bundle. It seems hard-coded - I don't know where. So you have to create all the config files in the conf directory of the tree rooted at /opt/sts-bundle ("touch" works - creating empty files). TC Server then still fails to start, with a "failed to clean" error - and no clue from the detailed message about which files are being "cleaned".
I tried creating a non-privileged user "tcserver", per the suggestion in the Pivotal TC Server docs. I installed to /opt/sts-bundle while logged in as tcserver (with sudo privileges). That fails when I am using STS as a regular developer who is not "tcserver". I could not figure out how to tell TC Server to run under a different user than the one that started STS.
The solution I have working, and am not happy with, starts by extracting the tar.gz file into /opt/sts-bundle, as it wants, and then changing the owner and group of sts-bundle to my own id and group (the same ones used in the STS UI). I am not happy with that: it seems wrong to put things in /opt that are owned by a single developer.
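Concretely, the ownership change amounts to something like this (assuming your login and primary group are both "dev" - substitute your own):
# Hand the whole bundle over to the single developer running STS
sudo chown -R dev:dev /opt/sts-bundle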
I am new to Linux, and I still have some Windows habits that need to be unlearned.
The question is: how do I get the clean solution (installing using a "tcserver" user in the global /opt directory) to work for developers who are not "tcserver"? How should the tcserver user be related to the developers (same group?).
Am I making this problem harder than it should be? What am I missing?
I'm not sure this is what you want, but I don't install the STS bundles in some kind of shared directory as a special user at all. I just install it in my user.home dir, as myself, and launch it from there.
It is very unsophisticated. I just download the tar.gz file, unpack it in my home dir and then launch it from a trivial bash script which looks something like this:
#!/bin/bash
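# Launch the STS executable from wherever the bundle was unpacked;
# the wildcard tolerates the version-numbered directory name.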
/home/kdvolder/Applications/sts-bundle/sts-*/STS
That script is on my PATH. So I can just type 'STS' in a terminal and STS will start.
I don't have to do anything else and it works.
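(For completeness, putting such a script on your PATH might look like this, assuming you save it as ~/bin/STS - adjust to taste:)
chmod +x ~/bin/STS
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc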
If you are trying to somehow install this so that several different users can run a shared installation then this isn't a good setup. But I think for your own personal laptop or desktop which only you are using, this simple setup is perfectly fine.
For a shared-user environment, unfortunately, I don't know how to help you. It could be complicated to sort out all the permission issues etc., because Eclipse is a complicated beast w.r.t. installation of plugins and the like.

How to migrate Jenkins job from windows local machine to Linux server?

I installed Jenkins on my local machine (Windows) and created a new job with it, which works perfectly. Now I have installed Jenkins on a dedicated Linux server. How do I migrate the job from Windows (my local machine) to the newly installed Jenkins on the Linux server?
The safest solution is to use the Job Import plugin.
Install this plugin on the Linux server, and then import the job from the Windows Jenkins URL :)
You can also check your job configs into SCM, with a smart .gitignore (or the equivalent for whatever your choice of SCM is), treating %JENKINS_HOME% as a checked-in, versioned directory.
Job configs are OS-independent, though the job itself might contain OS-specific scripts (if you use a shell script instead of a Maven pom file / Ant build.xml).
Then you can just check out your job repo into the new Linux host's $JENKINS_HOME directory and start up Jenkins. All your jobs should be found and added to your Linux Jenkins (without the need for a plugin).
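A rough sketch of that flow with Git - the repo URL, branch name, and /var/lib/jenkins location are assumptions to adapt:
# On the Windows box (e.g. in Git Bash), version just the job configs:
cd "$JENKINS_HOME"
git init
git add config.xml jobs/*/config.xml
git commit -m "version Jenkins job configs"
git remote add origin https://example.com/your/jenkins-config.git
git push -u origin master

# On the Linux server, pull the configs into the new Jenkins home:
cd /var/lib/jenkins
sudo -u jenkins git init
sudo -u jenkins git remote add origin https://example.com/your/jenkins-config.git
sudo -u jenkins git fetch origin
sudo -u jenkins git checkout -f master
sudo systemctl restart jenkins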
Generally speaking... the fewer plugins, the more stable your Jenkins install will be.

Qt creator cannot upload files onto the remote device

I have been using Qt Creator to develop a Qt application for my remote generic Linux device. When I press the 'Run' button, the program is deployed to the target directory on the remote device and runs automatically. Everything was fine until recently: I changed a few lines of code, without changing any settings of the project, and after that I'm no longer able to upload the program onto the remote device. In the .pro file:
TARGET = Test
target.files = Test
target.path = /home/root
INSTALLS += target
The compile output shows:
mkdir: cannot create directory '/home/root': permission denied
Failed to upload file...
Deploy step failed.
Error while building/deploying project Test
When executing step 'Upload files via SFTP'
This is confusing, because I'm not creating the directory, just deploying the program into it; that's what I did before and it worked alright.
I suspected that maybe I needed to update SFTP to a newer version, but given that I can still manually upload files to the remote device via SFTP without any problems, I guess this is not the reason.
Has anyone here encountered this issue before? Any suggestions and comments are appreciated, and thanks in advance.
Check that /home/root has access rights for your user, using the command ls -l.
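For example (the host and login are placeholders for your device's own):
ssh root@192.168.1.10 'ls -ld /home/root'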
I just found out the problem has nothing to do with SSH or access rights.
It is because I have added more than one generic Linux device, and the kit I selected for the project pointed at the wrong device.

Need to create dfs.domain.socket.path manually in Hadoop-2.0.0 to use Impala?

I am following the instructions to configure a hadoop-2.0.0 cluster for installing Impala. In hdfs-site.xml, I add two properties, "dfs.client.read.shortcircuit" and "dfs.domain.socket.path" (/var/lib/hadoop-hdfs/dn_socket).
But when I start the Hadoop cluster with start-dfs.sh, it fails to start the datanodes. The datanode log says "failed to stat a path component: '/var/lib/hadoop-hdfs'". So I create /var/lib/hadoop-hdfs manually and start the cluster again. It fails again, and the log says it's a permission problem with that directory. OK, fine. I change the owner of hadoop-hdfs from root to ubuntu (ubuntu is the machine's username). Now it finally works normally.
I am just confused. Am I doing this the right way? Do we really need to create /var/lib/hadoop-hdfs ourselves and change the permissions or owner of that directory? Or did I miss some configuration setting?
I was running into similar problems using Cloudera Manager. It was an issue of trying to run in 'single-user mode' instead of using root. I think you are doing something similar with the user ubuntu. Is this a clean install, or are you upgrading / did you have a failed install last time?
I'm guessing you sudo-ed somewhere you should have run something as 'ubuntu'.
If you can make it work by manually setting permissions, go for it. I have a feeling there are lots of other files owned by root that should be owned by ubuntu lurking about in your system.
Anecdotally, if there is no critical data on the server, I have found it is easier to very thoroughly remove any and all files from the old install and then reinstall fresh.
I was facing a similar issue with starting the datanodes. Then I came across this link, https://github.com/cloudera/Impala/wiki/Build-prerequisites, which states that we need to create /var/lib/hadoop-hdfs manually and set the appropriate permissions. This fixed my problem as well.
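For reference, the manual step amounts to something like this (the owner depends on your setup - the hdfs service user on a typical cluster, or plain ubuntu in the single-user install described above):
sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown hdfs:hdfs /var/lib/hadoop-hdfs   # or ubuntu:ubuntu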
Make certain the directory /var/lib/hadoop-hdfs is present and its permissions are OK.

Jenkins to SCP a few folders and files from SVN to linux server?

I use Jenkins to do an automatic weekly deployment to a Tomcat server, and that is fairly simple to do using curl with the Tomcat manager. Since I am only uploading a .war file, it's very straightforward.
But when it comes to a backend console application, does anyone have any idea how to use Jenkins to upload an entire set of folders and files onto a Linux box? The project I have is built via Ant and has all the folders inside SVN.
A couple of things come to mind.
Probably the most straightforward thing to do is use the ant scp task to push the directory / directories up to the server. You'll need the jsch jar on your Ant classpath to make it work, but that's not too bad to deal with. See the Ant docs for the scp task here. If you want to keep your main build script clean, just make another build script that Jenkins can run named 'deploy.xml' or similar. This has the added benefit that you can use it from places other than Jenkins.
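A Jenkins shell step driving that might look like this sketch - the deploy.xml file, target name, and jsch path are assumptions; Ant's -lib flag puts the jar on its classpath:
# Run the separate deploy script with the jsch jar on Ant's classpath
ant -f deploy.xml -lib /usr/share/java/jsch.jar deploy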
Another idea is to check them out directly on the server from SVN. Again, Ant can probably help you with this if you use the sshexec task and run the svn checkout inside of that. SSHExec docs here
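The remote command that sshexec runs would amount to something like this (the host, repo URL, and target path are placeholders):
ssh deployer@yourserver 'svn checkout https://svn.example.com/repo/trunk /opt/yourapp'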
Finally, Jenkins has a "Publish Over SSH" plugin you might try out. I've not used it personally, but it looks promising! Right over here!
