Problem performing svn export using Hudson running as a service - Linux

I have a script that performs a build as well as an svn export. When I run Hudson manually as the root user, the build and the svn export both work without a problem.
If I start Hudson as a service (via chkconfig), Hudson itself runs fine. The SVN checkout (call this URL1) also works, since that credential is stored in Hudson's config. However, when my script tries to perform an svn export against a different URL (call it URL2), it always fails with "Password for 'root': Authentication realm". In other words, my build server cannot provide the credential needed to log in to svn. This is what I don't understand, because the svn credential is cached in my root account, and I have no problem performing svn update/svn info against URL2 from the shell, or when I start Hudson manually (not as a service).
My guess right now is that when an app runs as a service, it does not load some or all of the stored user configuration. Any idea how I can force the service to load my svn credentials? Any other solution/insight is also welcome.
Btw, my build server is running Red Hat 5.6
Thanks!!!

I can't give you a detailed answer, but one of the differences between an interactive shell and a process running as a service is that the former starts up by reading ~/.bash_profile, while the latter reads ~/.bashrc (if it reads anything at all).
Try to compare the two!

Comparing the output of env in both contexts (within hudson and within your shell) should help you troubleshoot this.
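One way to make that comparison (the file names here are arbitrary): add a temporary "Execute shell" build step in Hudson that dumps its environment, then diff it against the environment of your interactive root shell:

# in a temporary Hudson "Execute shell" build step
env | sort > /tmp/hudson-env.txt

# in the interactive root shell on the same machine
env | sort > /tmp/shell-env.txt
diff /tmp/shell-env.txt /tmp/hudson-env.txt

Differences in HOME, USER, and PATH are the usual suspects when something works interactively but not under the service.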

By default, SVN stores user credentials in the corresponding user's home directory (under ~/.subversion). When the server is restarted, the service does not load your profile (in my case, /etc/profile), as Tobu pointed out. So to solve this problem, we simply need to set the HOME variable to the correct location. Modify the service script for your app to include the following line:
HOME=/<<user home folder location>>
This solved my problem.
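For reference, a minimal sketch of what that change might look like in the Hudson init script (the /root path and the comment are my additions, not from the original answer; adjust to the account the service actually runs as):

# excerpt from the Hudson service script
HOME=/root        # svn looks for cached credentials under $HOME/.subversion/auth
export HOME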

What does "terminal prompts disabled" mean?

I have been using GitLab for some years.
After an update of my MacBook, one application fails on deploy with Deployer:
fatal: could not read Username for 'http://mygitlab.org:22': terminal prompts disabled
I use the same GitLab server for all projects; the other projects are working fine.
I compared the git config files and found no differences between the applications.
I tried to set/change the username. No success.
I created a new repo on GitLab and cloned it into my PhpStorm. No success.
Does anyone have an idea where I should look?
Thanks in advance!
Check the URL of that repository. Port 22 is the default port used by SSH, so seeing it in an HTTP URL is strange, and it would trigger a prompt for the username.
This differs from a git@mygitlab.org: URL (or ssh://git@mygitlab.org:22/...), which should not need any prompt, provided the right SSH key is used (and either has no passphrase or has its passphrase cached in an ssh-agent).
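If the remote really should be using SSH, a quick, hedged way to check and fix it (the group/project path below is a placeholder):

git remote -v
git remote set-url origin git@mygitlab.org:group/project.git

After that, git authenticates with your SSH key instead of prompting for a username.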

Bamboo 5.5.0 - How to delete a remote agent's capability via the bamboo-capabilities.properties file?

I am currently trying to automate the process of Bamboo remote agent installation and uninstallation. I have run into a problem with adding and removing capabilities.
What I am trying to automate:
(The following is what I do on the Bamboo server via the GUI; I want to do it from the remote agent machine via a bash script.)
1. I install the remote agent on a VM and start it up, then go to the Bamboo interface and click on the newly created agent's name.
2. I add a custom capability type; for the key I put 'buildserver' and for the value I put the name of the agent.
3. I add an 'Executable' capability of type 'Command' with executable label 'cygwin' and path 'C:\cygwin64\bin\bash'.
4. I navigate to the git executable and remove it by clicking 'delete'. <--- (the problem step)
What I've done:
I have looked here and found a way to automate steps 1-3 using the following "bamboo-capabilities.properties" file:
buildserver="AGENTNAME"
system.builder.command.cygwin="C:\cygwin64\bin\bash"
However, I am stuck on how I would remove the git capability (step 4). I've tried appending something like this to the file:
system.git.executable=""
but it does not seem to do anything. Does anyone know how I would do this? There seems to be very little documentation about this online.
Thanks very much.
I never found a way to get around this, but I did find a workaround. I later learned that the point of removing git in my situation was to allow a shared capability, also called git, to take precedence. My workaround was to set the non-shared capability to the value of the shared capability. I am not 100% sure this does the same thing, and I am not yet in a position to test it, but since a capability seems to be just a key-value pair I don't see why it wouldn't. I will update if anything breaks.
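In bamboo-capabilities.properties terms, the workaround boils down to giving the agent-level key the same value the shared git capability has, for example (the path below is a placeholder; use whatever the shared capability actually points at, and mirror the quoting style shown in the question):

system.git.executable="C:\git\bin\git.exe"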

IIS executable not executing

I have been looking at an issue for a week straight, have been unable to figure it out, and am desperate for a fix.
On a client site we have two environments: UAT and PROD. UAT works perfectly (please keep this in mind). We are now trying to deploy the solution to PROD, but certain parts of it are not working.
We have developed an ASP.NET application that we provide to clients to allow them to invoke SSIS packages (there are a couple of drop-downs they select first, then they click a button named "Invoke").
When the user clicks the Invoke button, a batch file named InvokeSSIS.bat is called that assembles a command line call to dtexec with the appropriate parameters.
I'm having a problem with a particular package that is responsible for calling an executable which generates a spreadsheet that I will be importing into my system.
The executable is on a mapped H:\ drive.
I have modified the InvokeSSIS.bat batch file to capture the command it generates. If I execute that command from the command line, it works perfectly. When invoked from the web app, the package executes, but the task responsible for calling the executable does not: the entire package takes only 1 second to complete, whereas it should take about a minute.
The executable DOES have a GUI, but it is NOT interactive: when you call it with specific parameters, it automatically runs in batch mode and executes a macro that generates the desired spreadsheet.
I know this is ok because it works on the UAT server AND it works from the command line!
I have checked the permissions on the executable (by right-clicking it and selecting Properties) and have granted Full Control on it to the same user specified on the Identity tab of the application pool I am using.
Can someone please help me? As I said I am dying over here!
Please let me know if you have any ideas or what other info you need.
Environment (both UAT and PROD)
OS: Windows Server 2003
IIS 6
asp.net 2.0
SQL Server 2008
Thanks!
Steve
You can't use a mapped drive with IIS.
You must use the \\servername syntax to reach files on other systems.
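As a rough illustration (the server, share, and executable names below are placeholders, not taken from the question), the drive letter only exists in the session that created it, so the path has to be spelled out in UNC form wherever the executable is referenced:

REM mapped-drive form - only resolves in a session where H: has been mapped
H:\tools\GenerateSpreadsheet.exe
REM UNC form - reachable from the IIS worker process too, provided the app pool identity has rights on the share
\\fileserver\tools\GenerateSpreadsheet.exe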
I agree with user544284 that this is at least in part a mapping issue. I'll ignore for a minute the complete insanity of having a web application call a batch file to start an executable that's on a remote network drive through a drive letter mapping.
Most likely the UAT box has something set up that maps that drive letter for you which Prod is missing.
The only other possibility is that a security violation is occurring. Running .exe files from a network drive is generally frowned upon. Do the two environments have the exact same version of Windows? Are they configured the same with regard to UAC? Any differences here are going to be important.
Which brings up an interesting thought. I wonder if someone logged in to the UAT server using the same account credentials the app pool is using and added the IP address of the machine where the exe lives to the list of "Local Intranet" sites... or if they installed SSIS on the UAT server itself.
The fact that YOU can log in to the server and run it from the command line means nothing. You have to find out whether the drive letter is mapped at all for the user the web app is running under, whether that user has the required security rights, and whether the local OS will allow it regardless.
Okay, I can't ignore it: harebrained is the nicest adjective I can come up with for this "architecture". Do yourself a favor and go back to the drawing board on this one. It has the word "brittle" written all over it, as you have already found. Instead of building out a batch file to call dtexec, just do it directly, either by something like this or this.

Setting up a local SVN repository with encrypted passwords with TortoiseSVN

I am planning on using a local repository, using only TortoiseSVN's "create repository here" feature.
The repo is created and I can read and write to it just fine. The problem is that I can't get authentication to work. I thought I wanted Windows authentication, but I actually want the simple text-file-based authentication, so that I can force whoever is currently at the machine (several people may share the same Windows account, and I want to differentiate between them) to provide their name and password. I haven't found any information on how to do this without svnserve running.
So far, I have modified svnserve.conf like this:
anon-access = read
auth-access = write
password-db = passwd
realm = LocalOnly
I didn't mess with the [sasl] section.
I also modified passwd:
[users]
harry = teH0wLIpW0gyQ
I am trying to use encrypted passwords created with a simple Perl script. However, regardless of what I do with the repo (including writing to it), I am never prompted for a password.
I tried clearing TortoiseSVN's authentication cache, since I do also connect to a remote repo, but this made no difference.
Has anyone tried this and succeeded? Or is it not possible without svnserve?
Not possible without svnserve - it takes care of the challenge/response.
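If running svnserve is acceptable, a minimal sketch looks like this (the repository root and name are placeholders); note that svnserve.conf and passwd are only consulted when the repository is accessed over svn://, not over file://:

svnserve -d -r C:\Repositories
svn checkout svn://localhost/MyRepo MyRepo-wc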
Try Subversion Edge. You can edit the file you are mentioning using the GUI provided by the tool. It uses its own HTTP server (not svnserve or IIS).
Unfortunately, your best bet with a local repository is to use your file system permissions. A simple and free option for a server (that's easy to manage) would be VisualSVN Server. You can hang it off of a workstation or drop it on a public web server somewhere. I now have mine set up behind a reverse proxy with IIS7, so it's integrated with the rest of my web site.

What's a good way to deal with permissions with automated deployments across domains?

I have been working to automate some deployment processes using just NAnt for the time being. Once the NAnt scripts are fairly stable and proven, I'll look to incorporate CruiseControl.NET or a similar product.
With that being said, I ran into a snag today.
I have a NAnt script that copies files from a network share used as a staging area to the destination (very simple to start out with). In my snag today, I was attempting to copy the files to another file share that is on a separate domain. The two domains do not have any trust between them.
The user running the NAnt script had accessed both locations via Windows Explorer first to ensure that he had an authenticated session with both domains. When he ran the script, he of course got an access-denied error, since NAnt.exe was running under the credentials of the other domain. This was an oversight on my part.
Does anyone have any recommendations on how to address this issue without touching AD?
There isn't any way that I can think of without touching AD that wouldn't be a hack. The right way to do it is to set up a service account in the domain where the NAnt job is running that has rights in the other domain, and then have the NAnt job run under that service account. The only hacky way I can think of is to find a way to have the user with rights in both domains kick off the NAnt job, and I'm not even positive that would work.
It's rather hacky, but you could have a NAnt task plug in the credentials for the foreign share, either by P/Invoking WNetUseConnection or by issuing a command like
net use \\host\share password /user:user@domain
before copying the files across.
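For completeness, a hedged sketch of what that NAnt task might look like (the target name, host, share, and account are all placeholders):

<target name="map.remote.share">
  <!-- authenticate against the foreign domain before any copy tasks run -->
  <exec program="net">
    <arg value="use" />
    <arg value="\\host\share" />
    <arg value="password" />
    <arg value="/user:user@domain" />
  </exec>
</target>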
