Where and how are Hudson jobs and slave information stored?
I accidentally canceled a Hudson upgrade today. It wouldn't permit me to continue the upgrade; only to downgrade to the previous version and then upgrade again. After I downgraded, the two jobs I had created in the recent past were gone from the dashboard along with the slave node I created for one of those jobs, and the job I had recently deleted showed up in the dashboard. After the upgrade, the jobs and nodes are in that same state.
What happened? Can I restore my recent jobs and nodes, and how would I do that? Please keep in mind that while I know C/C++ well, web services are out of my area and I don't really know what a jar or a war is... I just followed online directions to install and set up Hudson and it worked. I wish to avoid simply re-creating those jobs; setting up one of them was less than trivial.
More info: Looking in the configuration, the home directory is incorrect; it thinks HOME is /root/ instead of /home/hudson. How did it change, and how do I change it back?
The previous version of Hudson is 1.379. It's currently running 1.381. I'm running it on RHEL 5.
When I look in the .hudson/jobs directory, both of the recent jobs are there, and the previously deleted job is not there. These job directories are missing their "workspace" directories.
As you've noticed, job configuration is stored in HUDSON_HOME/jobs/[name]/config.xml.
Slave configuration is stored in the main Hudson config file, HUDSON_HOME/config.xml.
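If you want to confirm what's on disk, a quick check like this works (adjust HUDSON_HOME to wherever yours lives, e.g. /home/hudson/.hudson):
# main config, including slave/node definitions
ls $HUDSON_HOME/config.xml
# one config.xml per job
ls $HUDSON_HOME/jobs/*/config.xml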
I'm not sure why Hudson didn't pick up the jobs when you restarted after the upgrade. Checking the Hudson log might provide a clue, usually /var/log/hudson/hudson.log.
If your jobs' config.xml files are present, Hudson might be able to reread them if you reload your configuration (Manage Hudson -> Reload Configuration From Disk). If Hudson still doesn't recognize them (and the config file is present), your best bet is probably to recreate the jobs manually, grabbing whatever you can from the config file (keeping in mind that XML escapes are applied to text fields like the build commands).
I got a helpful clue when I revisited the "manage Hudson" page and saw a message that I had data in an old and unreadable format. That suggested Hudson was running a .war that was different from the one used more recently. So I searched the disk for any "hudson.war" files and found two: one from a couple of weeks ago and one from some months ago. The newer one was in the place I expected to find one; the older one was elsewhere. I renamed the older one. I also have a start-hudson.sh script, added 'export HUDSON_HOME=/home/hudson' to it, and used it to restart Hudson. Lo and behold, my new jobs were back and working.
I would have thought that simply setting the HUDSON_HOME variable would have done it, but I did that first and restarted Hudson, and no joy. It was only after I renamed the older .war AND set the environment variable that I found the fix. My guess is that the older .war file had root set as HUDSON_HOME and that somehow that .war was being run, even though the version shown on the page was the current version. I don't understand it, but I'm happy to be back in business.
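For anyone hitting the same thing, the recovery amounted to something like this (the stale-war path and the java launch line are illustrative of my setup, not a prescription):
# find stray copies of the war; I found two and renamed the stale one
find / -name 'hudson.war' 2>/dev/null
mv /path/to/stale/hudson.war /path/to/stale/hudson.war.old
# in start-hudson.sh, before the launch line:
export HUDSON_HOME=/home/hudson
java -jar /home/hudson/hudson.war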
I am working on learning Hybris. I have successfully installed Hybris, and there are lots and lots of blogs out there that talk about getting the core Hybris install with your own custom modules to make changes to, such as this one:
http://javainsimpleway.com/hybris-b2b-installation/
In the blog above, the gentleman creates a mystore. The question I have is this: once you have this all set up and you have made changes to the mystore modules, how do you get those changes onto a new developer's machine (or a production machine)?
What I have tried, which does not work is this:
zipped up the bin/custom/mystore, config/local.properties, and localextensions.xml
followed his steps 1 thru 4
unzipped the files on the new machine
jumped down to step 12 where he does an ant clean all initialize
One difference between his process and mine is that I am adding some addons. It is my impression that all those changes happen within custom/mystore, but to be safe, between my steps 3 & 4 I reran ant addoninstall for all four addons.
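For reference, the reinstalls I ran take this shape (a sketch; "myaddon" and "mystorefront" are placeholders, and the property key after addonStorefront. depends on your storefront's template extension):
ant addoninstall -Daddonnames="myaddon" -DaddonStorefront.yacceleratorstorefront="mystorefront"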
With the process I have documented, SmartEdit was not working, and I found SAP's documentation about running ant npminstall, because Hybris does not include the npm-related 3rd-party JavaScript libraries. Things blow up when I go to run ant npminstall.
I really feel like I am trying to reinvent the wheel here. I would imagine that what I am trying to do is very common on any Hybris team, but I cannot find documentation on how to do it. Does anyone know of a blog out there that talks about how to migrate the source from one machine to another?
there are lots and lots of blogs out there that talk about getting the core Hybris install with your own custom modules to make changes to
Although they may be helpful, I would suggest you stick to the official Hybris documentation (e.g. https://help.sap.com/viewer/4c33bf189ab9409e84e589295c36d96e/1905/en-US/8acc8a5a86691014a20781b3f738213e.html), which is quite rich.
once you have this all set up and you have made changes to the mystore modules, how do you get those changes onto a new developer's machine (or a production machine)?
For production deployment, please go through https://wiki.hybris.com/display/hybrisALF/Ant+Production+for+Continuous+Integration
However, for simply copying the things from one machine to another machine, whatever artefacts you have already copied to the target machine (after you have installed Hybris on the target machine), are correct. If you are working in a team, you typically set up an SCM (e.g. git, SVN etc.) code repository and then it becomes easier.
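A minimal sketch of that, assuming git and the same artefacts you already zipped up:
cd $HYBRIS_HOME   # the directory containing bin/ and config/
git init
git add bin/custom/mystore config/local.properties config/localextensions.xml
git commit -m "track custom extension and config"
# on the target machine: install Hybris, pull this repository over it, then
ant clean all initialize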
It is my impression that all those changes happen within custom/mystore
This is a wrong impression. When you run addoninstall, it creates/updates the project.properties file in the addon, not in your custom/mystore. So, if the addon is part of the code repository (which is typically not the case unless it is a custom addon), anyone pulling your code onto their machine will automatically get the addon's project.properties and will therefore not need to run addoninstall on their machine; otherwise, they need to run addoninstall themselves. A workaround is to copy the content of the addon's project.properties into local.properties (and thus get the changes onto the target machine when local.properties is copied over).
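In that case the workaround amounts to something like this (the addon path is illustrative; it depends on where the addon lives under bin/):
cat bin/ext-addon/myaddon/project.properties >> config/local.properties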
Things blow up when I go to run ant npminstall.
Make sure to run ant npminstall as an admin user. Please check https://answers.sap.com/questions/12771768/smart-edit-unable-to-find-local-grunt.html for another option.
I have a Pyramid application project. I store it in git and pull the branch to the server when I need an update. Until now I was working on Koding, but lately I decided to check out Azure and its developer benefits.
After I created an Ubuntu server virtual machine (which is actually what runs under Koding), I downloaded my project using git pull, but forgot to change the branch to the one I'm currently working on. When I did check it out, the server still showed me the old page (as if I hadn't checked out the other branch). I looked over SFTP, and the files show that they have been updated.
Why am I still seeing the old page?
Now I know the reason why! (At least I think so; please correct me if I'm wrong.)
I noticed that there was a .pyc file for every .py file; those are "compiled" (a bit of a simplification?) Python files, as I understand it. It seemed to me that they were not recompiled on app launch, but rather compiled by setup.py; the edit dates suggest that.
So the reason I didn't see my code changes was that the server was using the old "compiled" files instead of the source files! Is that normal/expected behaviour? I don't know. There are many other questions now, but the main question was answered, so I'll mark this as the answer until someone gives a better one.
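If anyone else hits this: clearing the stale bytecode and restarting the server forces Python to pick up the sources again (run from the project root):
find . -name '*.pyc' -delete
# then restart the app server so the modules are re-imported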
I want to be able to work across multiple workstations in sync, jumping from one to the other without having to worry about committing.
I have Windows desktops at home and at work and a Mac OS X laptop. At the moment, I point my project at a cloud directory and have the local install of Android Studio pointing to a Gradle offline cache in another cloud directory. This keeps failing: it tells me that the path to Gradle is invalid, which I understand, because Gradle is referenced at different locations on different machines (given the differing file systems on Mac OS X and Windows 7).
Edit: When I try to open the project, it brings up the "Import Project from Gradle" screen, which offers the option to select "Use local gradle distribution" and choose the Gradle home directory. I pointed it to the cache directory, and it tells me:
Cannot Save Settings
Gradle location is incorrect.
Location:C:/Users/Username/.gradle
All my research (including these answers here and here) suggests that VCS is the way to go. However, I don't see this as a solution to my problem. I'm not looking for version control; I'm looking to transition seamlessly across workstations. Of course I will still use a version control system for saving a working version of my code, or sharing it with other developers, but there has to be a better way when I simply want to keep all workstations synced.
I come from web development, where I synchronise a local environment on AMPPS across multiple computers without any issue. That means I can transition between my personal desktop, laptop, and work desktop instantly. It frustrates me to have to remember to commit every time I move around. If I have to do this 20 times a day, and it takes about a minute each time, that's 20 minutes that could have been spent writing a couple of functions. And what if I forget to commit? Then I get to work, or home, and the day is wasted, because I won't actually have the current, up-to-date code...
So the question remains: is there a way to instantly synchronise Android Studio projects? How do I keep my entire code base (i.e. Gradle) in sync?
OK, thanks to the comments above, which pointed me in the right direction.
Android Studio creates some local files that are specific to the machine you are on. Following this principle, to sync the "source" files (the files specific to your application only), you must ignore all these local files. This is similar to what you would store on GitHub. I followed the answer to this question to apply the ignore rules.
Having ignored all the "local files", when I create a new project, the source files are synchronised across all my workstations. To establish a local version, I need to "import" the project first. Once it has been imported, "local files" will be created for that particular machine. From then on, I can "open" the project locally.
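For illustration, the machine-specific entries I ignore look roughly like this (a typical Android Studio list; your project may need more or fewer):
.gradle/
.idea/
build/
local.properties
*.iml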
To summarise:
Set your sync to ignore files as per .gitignore or refer to this question.
Create a project on one of your workstations and save it in the cloud.
When you are ready to work on the project for the first time on another workstation, "import" the project.
Once the project has been imported, all local files should have been created.
From then on, use the "open" option to continue working on the project.
I hope this helps somebody else, saving hours on googling.
What I'm trying to do:
I want to deploy files to a .NET-based website. Any time the dlls change, Windows recycles the web app. When I rsync files over, the app can recycle several times because of the delay, instead of the preferred single time. This takes the site out of commission for a longer period.
How I tried to solve it:
I attempted to remedy this by using --delay-updates, which is supposed to stage all of the file changes in temporary files before putting them in place. This appeared to be exactly what I wanted; however, the --delay-updates argument does not appear to behave as advertised. There is no discernible difference in the output (with -vv), and the end behavior is identical (the app recycles multiple times rather than once).
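For reference, the command I was testing looked something like this (the source and destination here are placeholders):
rsync -rtvv --delay-updates build/ webserver::site/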
I don't want to run Cygwin on all of the production machines for stability reasons, otherwise I could rsync to a local staging directory, and then perform a local rsync, which would be fast enough to be "atomic".
I'm running Cygwin 1.7.17, with rsync 3.0.9.
I came across atomic-rsync (http://www.opensource.apple.com/source/rsync/rsync-40/rsync/support/atomic-rsync), which accomplishes this by rsyncing to a staging directory, renaming the existing directory, and then renaming the staging directory. Sadly this does not work in a Windows setting, because you cannot rename folders containing running dll files (permission denied).
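The trick it uses is roughly the following (a sketch; the rename is the step Windows refuses while dlls in the live folder are loaded):
rsync -a --delete build/ site-staging/
mv site site-old && mv site-staging site && rm -rf site-old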
You can remove folders containing running binaries, but that results in recycling the app on every deployment rather than just when the dlls are updated, which is worse.
Does anyone know how to either
Verify that --delay-updates is actually working
Accomplish my goal of updating all the files atomically (or rather, very very quickly)?
Thanks for the help.
This is pretty ancient, but I eventually discovered that --delay-updates was actually working as intended. The app only appeared to be recycling multiple times due to other factors.
CruiseControl.NET service needs to be restarted to pick up changes in the projects configuration files.
I find this very annoying, not sure if it's a bug or it's the way it works.
Is there any way to overcome this issue in people's experience?
If your projects are split out into a different file from ccnet.config, then you need to restart the service unless you touch the actual ccnet.config.
We use ENTITY with SYSTEM file reference in ccnet.config for our projects, so we're in the same boat. I'm happy to pay the price for easier project maintenance, as it's easy to script a restart:
net stop CCService
net start CCService
IISRESET
If you wanted to completely automate this, and had your projects under source control, then you could trigger an update and restart whenever your project files are touched.
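For anyone unfamiliar with the ENTITY setup mentioned above, it looks roughly like this (file names are examples):
<!DOCTYPE cruisecontrol [
    <!ENTITY project1 SYSTEM "project1.xml">
]>
<cruisecontrol>
    &project1;
</cruisecontrol>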
There was a bug in CC.NET prior to 1.4.4: if you were using a pre-processor include, the configuration was not reloaded when an included ccnet.config file was modified.
That was a bug that I reported and it is fixed in CC.Net 1.4.4 and greater.
Also, keep in mind that if a build is running when the configuration changes, the change will not take effect until that build has finished and the project is idle.
How are you updating your config files? By hand? Mine always recognizes changes and adjusts. Is your config file in source control, and designed to be pulled down to replace the file? For me, that requires a kick. The way I ended up fixing it was to have my project pull the new config down to a separate folder. Then I call ccnet.exe -validate on it to make sure it is well-formed, and copy it over on top of the current config file. CC.NET recognizes the changes and loads the new config.
Exceptions: if CC.NET is currently running a project, it will not recognize the changes until that project has completed.
If your ccnet.config has errors, it will never recognize the changes and will keep running the old version it has stored in memory. (However, when CC.NET does restart, it will try to parse the error-filled config and choke.)
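The "kick" can itself be scripted; roughly like this, following the workflow above (paths are illustrative, and check the switches against your CC.NET version):
rem validate the freshly pulled config before swapping it in
ccnet.exe -validate -config:C:\staging\ccnet.config
if errorlevel 1 exit /b 1
copy /Y C:\staging\ccnet.config "C:\Program Files\CruiseControl.NET\server\ccnet.config"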
Hope this helps!!
Do you mean you are using linked files, that is, the ccnet.config file has links to the independent project files?
If so, then they are not picked up; it's mentioned in the documentation that it doesn't watch the sub-files.
Internally we have modified our CruiseControl.net so that our ccnet.config is optionally a directory - and we can drop shortcuts to our project config files into that directory. We put watches on the directory, the files or shortcuts in the directory and all of the targets of the shortcuts. That means we have our project config files in ClearCase and just drop a shortcut into the ccnet.config directory.
I've just spent half a day or so moving from 1.2 to 1.4.2, dropping our changes into the new version for our internal use. We don't own our code, our client does, and so it has to stay internal :(
I have never experienced this. Whenever I change the configuration files, the CruiseControl.NET service seems to automatically re-read them.
I'm using Version 1.3 of CC.NET.
Update:
In the service's config file (ccservice.exe.config), there is a setting to enable/disable watching the ccnet.config file for changes:
<add key="WatchConfigFile" value="true"/>
Make sure this is set to true.