I am trying to come up with a way (if it's even possible) to make a custom tool for Perforce that will take 3 sets of different but specific changelists and submit them at a custom time and day of the week, like 15:00:00 on a Sunday.
I have been looking around on the Perforce site and here for documentation, or even an already-created tool I could take a look at, but I cannot seem to find anything.
Any ideas would be great!
Thanks.
I ended up making a batch file to run the commands.
It took a while but I managed to make it run with all the arguments it needed.
Here is a snippet of what I ended up doing.
REM tool location, server address, workspace, and arguments (read from a JSON file)
REM using "toolpath" rather than "path" avoids overwriting the system PATH variable
SET toolpath=directory
SET port=portaddress
SET wrk=workspace
SET arg=arguments from a json file

echo Setting up Environment
start "" config.bat

echo Running Changelists through arg
REM one call per changelist; 000000 stands for the real changelist number
start "" %toolpath% %arg% 000000 %wrk% %port%
start "" %toolpath% %arg% 000000 %wrk% %port%
There were quite a few good resources I found while putting this together, which may be useful to others who are new to batch.
This was the most helpful: Batch Help
The Perforce site also helped a lot: Perforce
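For the scheduling part of the original question (a fixed time on a fixed day of the week), a batch file like the one above can be registered with the Windows Task Scheduler. A minimal sketch, assuming the script is saved as C:\p4\submit_changelists.bat (the path and task name are placeholders):

REM create a scheduled task that runs the submit script every Sunday at 15:00
schtasks /create /tn "P4WeeklySubmit" /tr "C:\p4\submit_changelists.bat" /sc WEEKLY /d SUN /st 15:00

Inside such a script, each pending changelist could also be submitted directly with the command-line client, e.g. p4 -p %port% -c %wrk% submit -c 000000, if the custom tool isn't needed for anything else.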
So, I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the system goes through its normal network cycle inspecting all the devices in the local network (not very important for the question). However, while I'm waiting I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid this kind of in-place progress display is not supported in Azure Pipelines. The pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ...")
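Or, slightly more elaborate, emit a fresh line every few seconds so the captured log still shows the script is alive. A rough sketch (queue_is_done() is a hypothetical stand-in for whatever completion check your script really does):

import sys
import time

waited = 0
while not queue_is_done():  # hypothetical placeholder for your real completion check
    time.sleep(5)
    waited += 5
    # each update is a separate line, so it renders fine in the captured pipeline log
    print("processing queue, timing varies depending on priority... (%ds elapsed)" % waited)
    sys.stdout.flush()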
You can report this problem to the Microsoft development team; perhaps they will find a way to support it in a future sprint.
I have seen this once but never actually used it myself. It can be done in both bash and PowerShell; I'm not sure whether it works from inside a Python script, so you might have to call bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you are currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then show up again in that next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a great description here: Logging commands
What you want to do is something like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The VSO is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=myVar;]value
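Back to the progress display and the question of whether this works from inside a Python script: the agent parses these logging commands from anything written to stdout, so printing the same line from Python should work as well. A small sketch under that assumption (the step count and the sleep are placeholders for the real polling loop):

import sys
import time

total = 10  # placeholder for however many polling iterations the wait takes
for i in range(total + 1):
    time.sleep(1)  # stand-in for the real waiting/polling work
    percent = int(i * 100 / total)
    # the logging command is picked up from plain stdout output
    print("##vso[task.setprogress value=%d;]Waiting for processing queue" % percent)
    sys.stdout.flush()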
What would be the best way to conditionally run an init.d script on Linux based on hostname? I'm working with New Relic and some of the servers simply don't need it installed, but they're all otherwise basic copies of one another. This is Ubuntu.
I've tried (and failed) to put in a host conditional, but for the life of me I can't get it working. I threw exits at the top of the file as well as in the start function, but it seems to fire up every time. Without knowing completely how those scripts are fired, I'm a little confused about how to alter it so it doesn't fire if the server name isn't something like production, etc.
Any guidance would be super helpful.
Put this at the top of the script you would like to disable:
if [ "$(hostname)" != "goodhost" ]
then
    exit
fi
replacing "goodhost" with the actual name of the host where the script is supposed to run.
Does that solve the problem?
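If several hosts (or a naming pattern such as production*) should keep running the service, a case statement at the top of the init script is a common variant. A sketch with placeholder host patterns:

case "$(hostname)" in
    production*|goodhost)
        # allowed hosts fall through and the rest of the script runs
        ;;
    *)
        # every other host bails out before the start/stop logic
        exit 0
        ;;
esac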
For one of my projects, I have a challenge where I need to take all the reports generated in a certain path, and I want this to be an automated process on Linux. I know how to get the names of the files which have been updated in the past 120 minutes, but not the files directly. My requirements are as follows:
1. Take the files that have been updated in the past 120 minutes from the path
/source/folder/which/contains/files
2. Do some business logic on these generated files (which I can take care of)
3. Move these files to
/destination/folder/where/files/should/go
I know how to achieve #2 and #3 but am not sure about #1. Can someone help me with how I can achieve this?
Thanks in Advance.
Write a shell script. Sample below. I haven't provided the commands to get the actual list of file names as you said you know how to do that.
#!/bin/sh
files=<my file list>
for file in $files; do
    cp "$file" <destination_directory>
done
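For #1, find can select the files themselves by modification time with -mmin. A sketch using the paths from the question (process_file is a placeholder for the business logic in #2):

#!/bin/sh
src=/source/folder/which/contains/files
dst=/destination/folder/where/files/should/go

# -mmin -120 matches files modified less than 120 minutes ago
find "$src" -type f -mmin -120 | while read -r file; do
    process_file "$file"   # placeholder for step #2 (business logic)
    mv "$file" "$dst"/     # step #3: move the processed file
done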
I am using the following command to sync the B vob's files from the A vob:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine, but it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were left checked out, I could do an unco (uncheckout) later. But with the command above, everything is checked in, so I can't recover my branch that way.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in, and he suggested using a label to recover afterwards.
In my case, the branch where I ran clearfsimport is branched from a label; let's call it LABEL_01. So I guess I can use that label for recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to the label LABEL_01? I want to do it in my bash script, so the simpler the command, the better.
Thanks.
After having a look at the man page for clearfsimport, no, it isn't possible to prevent the checkins.
I would set a label before the clearfsimport, and modify the config spec so that the new versions are created in a branch (similar to this config spec).
That way, "recovering" the initial branch would be easy: none of the new versions would have been created in it.
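For illustration, such a config spec could look roughly like this (import_branch is a placeholder branch type and LABEL_01 is the label from the comment above; an untested sketch, to be adapted to your vob layout):

element * CHECKEDOUT
# versions created by clearfsimport end up on a separate branch...
element * .../import_branch/LATEST
# ...which is auto-created from the labelled baseline
element * LABEL_01 -mkbranch import_branch
element * /main/LATEST -mkbranch import_branch

Recovering /vobs/B/xxx then simply means using a view whose config spec does not select import_branch.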
I searched a lot but I didn't find a solution to my problem.
I use CruiseControl.NET (1.4.4). My project (in ccnet.config) checks out a repository from a CVS server into a local working copy and launches some executables (MSBuild, NUnit...).
I use a trigger (Interval or Schedule Trigger) that launches my project regularly. But even when my project has not been modified, it always launches all the subsequent tasks, and I would like to avoid that. So I want to launch my project only when a commit has been detected.
Is there any solution for this, please?
Thanks
Olivier
Your trigger needs to specify IfModificationExists:
<intervalTrigger
name = "dave"
seconds = "30"
buildCondition = "IfModificationExists" />
Although buildCondition="IfModificationExists" is the default anyway, so as long as it's not set to ForceBuild you should be fine.
EDIT:
The URL Trigger might be of some use to you. You can set your SVN server to modify a page on commit, and CC.NET checks that page to see whether it has changed, thus not fetching all the files.
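For reference, such a trigger could look roughly like this (the URL is a placeholder for the page your commit hook would touch; check the trigger documentation for your CC.NET version to confirm the attribute names):

<triggers>
  <urlTrigger url="http://buildserver/lastcommit.html" seconds="30" buildCondition="IfModificationExists" />
</triggers>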
I start my project as below, which ensures that the tasks get executed only if there are modifications.
Hope this helps,
Anders, Denmark
Edited: here is the config excerpt:
<project name="SpilMerePool" queue="Q2" queuePriority="1">
  <sourcecontrol type="svn">
    <trunkUrl>https://ajf-ser1.ajf.local:8443/svn/SpilMerePool/trunk</trunkUrl>
    <workingDirectory>c:\from_vc\SpilMerePool</workingDirectory>
    <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
    <username>username</username>
    <password>password</password>
  </sourcecontrol>
Just use IntervalTrigger, like this:
<triggers>
<intervalTrigger />
</triggers>
You can also add a modificationDelaySeconds, to wait a number of seconds after the last commit before starting the build.
<modificationDelaySeconds>30</modificationDelaySeconds>
Thank you Anders Juul and Andy for your quick answers.
By using the intervalTrigger with the "IfModificationExists" build condition, the project must still be loaded each time (which is logical ^^). But my project is about 450 MB, so it's a little slow.
So my last question is: can we execute the build and the subsequent tasks only when a commit has been detected, without CruiseControl loading all the files?
I use TortoiseCVS (version 1.10.10). Maybe we can force the CruiseControl project to be launched after a commit?