I have 2 branches in my depot.
//depot/project/mainline/...
//depot/project/staging/...
I am using an in-house tool that manages the build of the project, and I would like to create a build step that automatically promotes all the files from mainline into staging. I have been trying to write it using the P4.NET API, following this example. I am able to run PowerShell commands from the build tool, so my plan was to write a C# console application, compile it using the tool, and then execute it as a build step. Unfortunately I am getting nowhere with that example. I am able to create a client, create a branch spec, and even sync the files down, but for the life of me I can't figure out how to submit the integration.
I feel like I am trying to over-engineer a solution, though. This is something that should be easy to do. I am attaching my broken code below. If it doesn't make sense, it's because I am using trial and error to figure stuff out and haven't made a final pass through it yet. That said, if I don't need to use the P4 API, all the better. The only requirement is that no user input be required to run the commands. If there is a merge conflict, I want to automatically accept the source.
Thanks
string uri = "server";
string user = "user";
string pass = null;
string ws_client = "Project-Temp-" + Guid.NewGuid().ToString();

// Connect to the server with a throwaway client workspace.
Server server = new Server(new ServerAddress(uri));
Repository rep = new Repository(server);
Connection con = rep.Connection;
con.UserName = user;
con.Client = new Client();
con.Client.Name = ws_client;
con.Client.ViewMap = new ViewMap();
con.Connect(null);
Credential cred = con.Login(pass, null, null);

// Recreate the client and map the whole project into it, then sync.
rep.DeleteClient(con.Client, null);
rep.CreateClient(con.Client);
con.Client.ViewMap.Clear();
con.Client.ViewMap.Add("//depot/project/...", String.Format("//{0}/...", con.Client.Name), MapType.Include);
rep.UpdateClient(con.Client);
var files = con.Client.SyncFiles(new SyncFilesCmdOptions(SyncFilesCmdFlags.None, -1));

// Branch spec mapping mainline to staging.
ViewMap vm = new ViewMap();
vm.Add(new MapEntry(MapType.Include, new ClientPath("//depot/project/mainline/..."), new ClientPath("//depot/project/staging/...")));
string msg = "Mainline to Staging";
BranchSpec bs = new BranchSpec("Project-Temp", user, DateTime.Now, DateTime.Now, msg, true, vm, null, null);

// Integrate using the branch spec, then try to resolve.
int change = -1;
IntegrateFilesCmdOptions BranchOptions = new IntegrateFilesCmdOptions(IntegrateFilesCmdFlags.None, change, -1, "Project-Temp", null, null);
rep.CreateBranchSpec(bs);
rep.UpdateClient(con.Client);
var integrated = con.Client.IntegrateFiles(BranchOptions);
con.Client.ResolveFiles(files, new ResolveCmdOptions(ResolveFilesCmdFlags.AutomaticYoursMode, change));
// This is where I'm stuck: nothing here submits the integrated files.
rep.DeleteClient(con.Client, null);
From the command line this is:
p4 integrate //depot/project/mainline/... //depot/project/staging/...
p4 resolve -am
p4 resolve -at
p4 resolve -ay
p4 submit -d "Integrate."
The "resolve -am" automerges all files without conflicts.
The "resolve -at" accepts the source of all remaining files.
On the very off chance there are source files that can't be accepted (e.g. the source revisions have been obliterated, or the source and target actions are incompatible), the "resolve -ay" ignores them.
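If you don't need the P4.NET API at all, the simplest route is a small console app that shells out to those exact commands. Below is a minimal sketch, under the assumptions that p4.exe is on the PATH and that P4PORT, P4USER, and P4CLIENT already point at a workspace mapping both branches; it only logs exit codes rather than failing hard, since a resolve pass with nothing left to do reports "no file(s) to resolve".
using System;
using System.Diagnostics;

class PromoteMainlineToStaging
{
    static void Main()
    {
        // The exact command sequence from above, run non-interactively.
        RunP4("integrate //depot/project/mainline/... //depot/project/staging/...");
        RunP4("resolve -am");   // automerge everything without conflicts
        RunP4("resolve -at");   // accept the source for all remaining files
        RunP4("resolve -ay");   // ignore anything that still can't take the source
        RunP4("submit -d \"Integrate.\"");
    }

    static void RunP4(string args)
    {
        // Run a single p4 command, echoing its output for the build log.
        var psi = new ProcessStartInfo("p4", args)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };
        using (var p = Process.Start(psi))
        {
            Console.WriteLine(p.StandardOutput.ReadToEnd());
            Console.Error.WriteLine(p.StandardError.ReadToEnd());
            p.WaitForExit();
            Console.WriteLine("p4 {0} exited with code {1}", args, p.ExitCode);
        }
    }
}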
Goal: Whenever a pull request is approved and the reviewer completes (merges) it in Azure Repos, I want to determine the list of files in that merge request. Based on the file extensions, I have to trigger the respective build pipeline.
Issue: As of now this doesn't seem possible from the Azure DevOps portal, so I need REST APIs to do it. Please point me to the API(s) that help me determine the list of changed files in a merge request.
Analysis: I have also seen Get Merge Requests, but I'm not sure where to get the mergeOperationId in the pipeline.
You can get the list of changed files in a merge request from the UI page.
If you want to get the list of changed files through the REST API, you need to use the method provided by Shamrai Aleksander.
First, use the Pull Request Commits REST API to get the pull request's commits:
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/pullRequests/{pullRequestId}/commits?api-version=5.1
Then, following the Commits - Get Changes REST API, read the path property of each change to get the changed files, building the full list in a loop:
GET https://dev.azure.com/{organization}/{project}/_apis/git/repositories/{repositoryId}/commits/{commitId}/changes?api-version=5.1
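If you'd rather call these two endpoints directly, here is a rough C# sketch chaining them together. It assumes PAT-based basic authentication, and the organization, project, repository, and pull request ID values are hypothetical placeholders:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PullRequestChangedFiles
{
    // Hypothetical placeholders; substitute your own values.
    const string Org = "myorg", Project = "myproject", Repo = "myrepo", Pat = "<personal-access-token>";
    const int PullRequestId = 42;

    static async Task Main()
    {
        using var client = new HttpClient();
        // PAT auth: basic auth with an empty user name and the token as the password.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + Pat)));

        string repoUrl = $"https://dev.azure.com/{Org}/{Project}/_apis/git/repositories/{Repo}";

        // Step 1: list the commits of the pull request.
        using var commits = JsonDocument.Parse(await client.GetStringAsync(
            $"{repoUrl}/pullRequests/{PullRequestId}/commits?api-version=5.1"));

        // Step 2: for each commit, list its changes and print the changed paths.
        foreach (var commit in commits.RootElement.GetProperty("value").EnumerateArray())
        {
            string commitId = commit.GetProperty("commitId").GetString();
            using var changes = JsonDocument.Parse(await client.GetStringAsync(
                $"{repoUrl}/commits/{commitId}/changes?api-version=5.1"));
            foreach (var change in changes.RootElement.GetProperty("changes").EnumerateArray())
                if (change.GetProperty("item").TryGetProperty("path", out var path))
                    Console.WriteLine(path.GetString());
        }
    }
}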
As a workaround, you can find completed PRs and check their commits. Here is an example using the GitHttpClient from the .NET client libraries for Azure DevOps Services.
private static void ViewPullRequests(string TeamProjectName, string GitRepo, bool ClosedPRs = false, string TargetRef = "")
{
    if (ClosedPRs && TargetRef == "")
    {
        Console.WriteLine("Define a target branch reference");
        return;
    }
    var pullRequests = ClosedPRs
        ? GitClient.GetPullRequestsAsync(TeamProjectName, GitRepo, new GitPullRequestSearchCriteria { Status = PullRequestStatus.Completed, TargetRefName = TargetRef }, top: 10).Result
        : GitClient.GetPullRequestsAsync(TeamProjectName, GitRepo, null).Result;
    foreach (var pullRequest in pullRequests)
    {
        var commits = GitClient.GetPullRequestCommitsAsync(TeamProjectName, GitRepo, pullRequest.PullRequestId).Result;
        foreach (var commit in commits)
        {
            Console.WriteLine("{0} {1}", commit.CommitId.Substring(0, 8), commit.Comment);
            GitCommitChanges changes = GitClient.GetChangesAsync(TeamProjectName, commit.CommitId, GitRepo).Result;
            foreach (var change in changes.Changes)
                Console.WriteLine("{0}: {1}", change.ChangeType, change.Item.Path);
        }
    }
}
REST API methods:
Pull Requests - Get Pull Requests
Pull Request Commits - Get Pull Request Commits
Commits - Get Changes
One issue with the solution provided by Shamrai Aleksander is that the set of changes that you would collect by running GetChangesAsync() in a loop would include changes that have been merged into your pull request from merge commits. As a result, the set of changed files would be much larger than the true set of files changed within the pull request you are analyzing.
You can retrieve the set of files that have been changed only by the selected pull request by using the GetCommitDiffsAsync() method in the .NET client library (this maps to the Get Diff REST API). What's unique about this method/API is that if diffCommonCommit is true, it will perform a diff from the "merge base" of the target branch. This avoids the problem of extra files showing up due to merge commits in your pull request.
You will need the commit IDs for the latest commits on both your pull request branch and the branch your pull request is targeting. Here is an example of a method that detects whether a file with a certain path has been changed in a pull request.
public static async Task<bool> DoesPullRequestChangelistContainPath(
string repositoryId,
GitPullRequest pullRequest,
string path)
{
// Get commit ID for production branch
var productionBranch = await GitClient.GetBranchAsync(
Constants.ProjectName,
repositoryId,
Constants.ProductionBranchName);
var productionBranchCommitId = productionBranch.Commit.CommitId;
// Get commit ID for PR branch (SourceRefName looks like "refs/heads/my-branch";
// Constants.AdoBranchPrefix is assumed to be that "refs/heads/" prefix)
var pullRequestSourceBranchNameWithoutPrefix = pullRequest.SourceRefName.Replace(Constants.AdoBranchPrefix, string.Empty);
var pullRequestBranch = await GitClient.GetBranchAsync(
Constants.ProjectName,
repositoryId,
pullRequestSourceBranchNameWithoutPrefix);
var pullRequestBranchCommitId = pullRequestBranch.Commit.CommitId;
// Get the diff between the merge base of the two branches and the latest commit of the pull request branch.
// This gets us the set of changes between the "squashed" commits of the pull request and the production branch.
var response = await GitClient.GetCommitDiffsAsync(
repositoryId,
diffCommonCommit: true,
baseVersionDescriptor: new GitBaseVersionDescriptor() { Version = productionBranchCommitId, VersionType = GitVersionType.Commit},
targetVersionDescriptor: new GitTargetVersionDescriptor { Version = pullRequestBranchCommitId, VersionType = GitVersionType.Commit });
return response.Changes.Any(change => change?.Item?.Path?.Contains(path) == true);
}
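A hypothetical call site, assuming you already hold a GitPullRequest instance and a repository ID:
bool touchesPath = await DoesPullRequestChangelistContainPath(repositoryId, pullRequest, "/src/");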
I'm trying to use the ExchangeService.AutodiscoverUrl() method, but it's not working. It doesn't seem to be getting a URL, resulting in the error "Cannot read property 'AbsoluteUri' of undefined" from ExchangeCredentials.GetUriWithoutSuffix.
Here is my code ('c' is just a JSON object):
service = new EwsJS.ExchangeService(EwsJS.ExchangeVersion.Exchange2016);
service.Credentials = new EwsJS.ExchangeCredentials(c.UserName, c.Password);
service.AutodiscoverUrl("email@domain.com", RedirectCallback);
// I'm forcing the accepted redirect here.
function RedirectCallback(url) {
return true;
}
Autodiscover in ews-javascript-api needed a major re-write to work properly.
It has now been re-written, and the latest dev build is out with the @next tag.
You can use it by installing npm i ews-javascript-api@next; once a stable build is out, you can install the regular build.
var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
service.Credentials = new WebCredentials(user, pass);
// Autodiscover, validating any redirection URL before following it
service.AutodiscoverUrl(user, RedirectionUrlValidationCallback);
console.log(service.Url);
// Accept only redirections to secure (https) URLs
function RedirectionUrlValidationCallback(redirectionUrl) {
    return redirectionUrl.toLowerCase().indexOf("https://") === 0;
}
I've been trying to fetch files from the Integrity server with the Java MKS API, using the command "si createsandbox". It connected to the server and made the directories fine, but it wouldn't fetch any files, even with the "populate" option. So I tried another command, "si resync", and it didn't get me anything either.
Here is my code, simplified:
public void createsandbox(CmdRunner cmdRunner) throws APIException {
    String fullDir = "U:/workspace/test/Admin_Project";
    String projectName = "/Admin_Project/project.pj";
    Command command = new Command(Command.SI, "createsandbox");
    command.addOption(new Option("project", projectName));
    command.addOption(new Option("Y"));
    command.addOption(new Option("cwd", fullDir));
    Response response = cmdRunner.execute(command);
}

public void resync(CmdRunner cmdRunner) throws APIException {
    String sandbox = "U:/workspace/test/Admin_Project/project.pj";
    Command command = new Command(Command.SI, "resync");
    command.addOption(new Option("sandbox", sandbox));
    command.addOption(new Option("Y"));
    Response response = cmdRunner.execute(command);
}
I tried them from the command line, and they worked fine. These are the commands I used:
si createsandbox --project "/Admin_Project/project.pj" --cwd "U:/workspace/test/Admin_Project"
si resync --sandbox "U:/workspace/test/Admin_Project/project.pj" --Y
Any thoughts or comments would be very helpful. Thanks in advance.
Figured it out.
I added one more option:
command.addOption(new Option("R"));
which means "recurse into subsandbox creation".
We have a job hosted in an Azure Website; the job reads entries from a topic subscription. Everything works fine when we have only one instance hosting the website. Once we scale out to more than one instance, we observe that a message is processed as many times as there are instances. Each instance points to the same subscription. From what we have read, once an item is read it should not be available to any other process. The duplicate processing happens inside the same instance, meaning that if we have two instances, the item is processed twice in one of the instances; it is not split between them.
What can possibly be wrong in the way we are doing things?
This is how we configure the connection to the queue; if the subscription does not exist, it is created:
var serviceBusConfig = new ServiceBusConfiguration
{
ConnectionString = transactionsBusConnectionString
};
config.UseServiceBus(serviceBusConfig);
var allRule1 = new RuleDescription
{
Name = "All",
Filter = new TrueFilter()
};
SetupSubscription(transactionsBusConnectionString,"topic1", "subscription1", allRule1);
private static void SetupSubscription(string busConnectionString, string topicNameKey, string subscriptionNameKey, RuleDescription newRule)
{
var namespaceManager =
NamespaceManager.CreateFromConnectionString(busConnectionString);
var topicName = ConfigurationManager.AppSettings[topicNameKey];
var subscriptionName = ConfigurationManager.AppSettings[subscriptionNameKey];
if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
{
namespaceManager.CreateSubscription(topicName, subscriptionName);
}
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(busConnectionString, topicName, subscriptionName);
var rules = namespaceManager.GetRules(topicName, subscriptionName);
foreach (var rule in rules)
{
subscriptionClient.RemoveRule(rule.Name);
}
subscriptionClient.AddRule(newRule);
rules = namespaceManager.GetRules(topicName, subscriptionName);
rules.ToString();
}
Here is an example of the code that processes the topic item:
public void SendInAppNotification(
[ServiceBusTrigger("%eventsTopicName%", "%SubsInAppNotifications%"), ServiceBusAccount("OutputServiceBus")] Notification message)
{
this.valueCalculator.AddInAppNotification(message);
}
This method is inside a static Functions class; I'm using the Azure WebJobs SDK.
Whenever the Azure Website is scaled to more than one instance, all the instances share the same configuration.
It sounds like you're creating a new subscription each time a new instance runs, rather than hooking into an existing one. Topics are designed to allow multiple subscribers to attach in that way as well; usually, though, each subscriber has a different purpose, so they each see a copy of the message.
I can't verify this from your code snippet, but that's my guess. Are the config files identical? You should add some trace output to see if your processes are calling CreateSubscription() each time they run; a sketch of what that could look like is below.
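For example, a minimal sketch of that trace output, wrapped around the existing SubscriptionExists/CreateSubscription calls (using System.Diagnostics, with the variable names from the question):
// Inside SetupSubscription: trace whether this instance found or created the subscription.
if (!namespaceManager.SubscriptionExists(topicName, subscriptionName))
{
    Trace.TraceWarning("Subscription '{0}' on topic '{1}' not found; creating it from instance {2}.",
        subscriptionName, topicName, Environment.MachineName);
    namespaceManager.CreateSubscription(topicName, subscriptionName);
}
else
{
    Trace.TraceInformation("Subscription '{0}' on topic '{1}' already exists; attaching to it.",
        subscriptionName, topicName);
}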
I think I can access the message ID. I'm using the Azure WebJobs SDK, but I think I can find a way to get it. Let me check and I will let you know.
Is there anywhere in the service runtime that would tell me if I'm currently running on 'Staging' or 'Production'? Manually modifying the config to and from production seems a bit cumbersome.
You should really not change your configuration based on whether you're in Prod or Staging. The Staging area is not designed to be a "QA" environment, only a holding area before production is deployed.
When you upload a new deployment, the deployment slot you upload your package to is destroyed and is down for 10-15 minutes while the upload and the VM start-up happen. If you upload straight into Production, that's 15 minutes of production downtime. Thus, the Staging area was invented: you upload to Staging, test the stuff, click the "Swap" button, and your Staging environment magically becomes Production (a virtual IP swap).
Thus, your Staging should really be 100% the same as your Production.
What I think you're looking for is a QA/testing environment. You should open up a new service for a Testing environment with its own Prod/Staging. In this case, you will want to maintain multiple configuration-file sets, one set per deployment environment (Production, Testing, etc.).
There are many ways to manage the configuration hell that occurs, especially with Azure, which has its own *.cscfg files on top of the .config files. The way I prefer to do it with an Azure project is as follows:
Set up a small Config project and create folders there that match deployment types. Inside each folder, set up the *.config and *.cscfg files that match a particular deployment environment: Debug, Test, Release... These are also set up in Visual Studio as build target types. A small xcopy command (sketched below) runs during every compile of the Config project and copies all the files from the build target's folder into the root folder of the Config project.
Every other project in the solution then links to the .config or .cscfg files from the root folder of the Config project.
Voila, my configs magically adapt to every build configuration automatically. I also use .config transformations to manage debugging information for Release vs. non-Release build targets.
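As an illustration, that per-build copy can be a post-build event on the Config project (a hypothetical sketch; adjust the paths to your own folder layout):
xcopy /Y /S "$(ProjectDir)$(ConfigurationName)\*.config" "$(ProjectDir)"
xcopy /Y /S "$(ProjectDir)$(ConfigurationName)\*.cscfg" "$(ProjectDir)"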
If you've read all this and still want to get at the Production vs. Staging status at runtime, then:
Get deploymentId from RoleEnvironment.DeploymentId
Then use the Management API with a proper X509 certificate to get at the Azure structure of your service and call the GetDeployments method (it's a REST API, but there is an abstraction library).
Hope this helps
Edit: as requested, a blog post about the setup of configuration strings and switching between environments: http://blog.paraleap.com/blog/post/Managing-environments-in-a-distributed-Azure-or-other-cloud-based-NET-solution
Sometimes I wish people would just answer the question... not explain ethics or best practices.
Microsoft has posted a code sample doing exactly this here: https://code.msdn.microsoft.com/windowsazure/CSAzureDeploymentSlot-1ce0e3b5
protected void Page_Load(object sender, EventArgs e)
{
// Your basic deployment information for the Azure application.
string deploymentId = RoleEnvironment.DeploymentId;
string subscriptionID = "<Your subscription ID>";
string thumbprint = "<Your certificate thumbprint>";
string hostedServiceName = "<Your hosted service name>";
string productionString = string.Format(
"https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
subscriptionID, hostedServiceName, "Production");
Uri requestUri = new Uri(productionString);
// Add client certificate.
X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.OpenExistingOnly);
X509Certificate2Collection collection = store.Certificates.Find(
X509FindType.FindByThumbprint, thumbprint, false);
store.Close();
if (collection.Count != 0)
{
X509Certificate2 certificate = collection[0];
HttpWebRequest httpRequest = (HttpWebRequest)HttpWebRequest.Create(requestUri);
httpRequest.ClientCertificates.Add(certificate);
httpRequest.Headers.Add("x-ms-version", "2011-10-01");
httpRequest.KeepAlive = false;
HttpWebResponse httpResponse = httpRequest.GetResponse() as HttpWebResponse;
// Get response stream from Management API.
Stream stream = httpResponse.GetResponseStream();
string result = string.Empty;
using (StreamReader reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
}
if (result == null || result.Trim() == string.Empty)
{
return;
}
XDocument document = XDocument.Parse(result);
string serverID = string.Empty;
var list = from item
in document.Descendants(XName.Get("PrivateID",
"http://schemas.microsoft.com/windowsazure"))
select item;
serverID = list.First().Value;
Response.Write("Check Production: ");
Response.Write("DeploymentID : " + deploymentId
+ " ServerID :" + serverID);
if (deploymentId.Equals(serverID))
lbStatus.Text = "Production";
else
{
// If the application not in Production slot, try to check Staging slot.
string stagingString = string.Format(
"https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}",
subscriptionID, hostedServiceName, "Staging");
Uri stagingUri = new Uri(stagingString);
httpRequest = (HttpWebRequest)HttpWebRequest.Create(stagingUri);
httpRequest.ClientCertificates.Add(certificate);
httpRequest.Headers.Add("x-ms-version", "2011-10-01");
httpRequest.KeepAlive = false;
httpResponse = httpRequest.GetResponse() as HttpWebResponse;
stream = httpResponse.GetResponseStream();
result = string.Empty;
using (StreamReader reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
}
if (result == null || result.Trim() == string.Empty)
{
return;
}
document = XDocument.Parse(result);
serverID = string.Empty;
list = from item
in document.Descendants(XName.Get("PrivateID",
"http://schemas.microsoft.com/windowsazure"))
select item;
serverID = list.First().Value;
Response.Write(" Check Staging:");
Response.Write(" DeploymentID : " + deploymentId
+ " ServerID :" + serverID);
if (deploymentId.Equals(serverID))
{
lbStatus.Text = "Staging";
}
else
{
lbStatus.Text = "Do not find this id";
}
}
httpResponse.Close();
stream.Close();
}
}
Staging is a temporary deployment slot used mainly for no-downtime upgrades and the ability to roll back an upgrade.
It is advised not to couple your system (either in code or in config) with such Azure specifics.
Using the Windows Azure Management Libraries, and thanks to @GuaravMantri's answer to another question, you can do it like this:
using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;
using Microsoft.Azure;
using Microsoft.WindowsAzure.Management.Compute;
using Microsoft.WindowsAzure.Management.Compute.Models;
namespace Configuration
{
public class DeploymentSlotTypeHelper
{
static string subscriptionId = "<subscription-id>";
static string managementCertContents = "<Base64 Encoded Management Certificate String from Publish Setting File>";// copy-paste it
static string cloudServiceName = "<your cloud service name>"; // lowercase
static string ns = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";
public DeploymentSlot GetSlotType()
{
var managementCertificate = new X509Certificate2(Convert.FromBase64String(managementCertContents));
var credentials = new CertificateCloudCredentials(subscriptionId, managementCertificate);
var computeManagementClient = new ComputeManagementClient(credentials);
var response = computeManagementClient.HostedServices.GetDetailed(cloudServiceName);
return response.Deployments.FirstOrDefault(d => d.DeploymentSlot == DeploymentSlot.Production) == null ? DeploymentSlot.Staging : DeploymentSlot.Production;
}
}
}
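A hypothetical usage, once the placeholder constants above are filled in:
var helper = new DeploymentSlotTypeHelper();
Console.WriteLine("Current slot: " + helper.GetSlotType()); // DeploymentSlot.Staging or DeploymentSlot.Production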
An easy way to solve this problem is to set a key on your instances that identifies which environment each one is running in.
1) Set it on your production slot:
Go to Settings >> Application settings >> App settings and create a key named SLOT_NAME with the value "production". IMPORTANT: check Slot setting.
2) Set it on your staging slot:
Go to Settings >> Application settings >> App settings and create a key named SLOT_NAME with the value "staging". IMPORTANT: check Slot setting.
Then access the variable from your application to identify which environment the application is running in. In Java you can access it like this:
String slotName = System.getenv("APPSETTING_SLOT_NAME");
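In C#, assuming the same APPSETTING_ prefix that App Service applies when surfacing app settings as environment variables, the equivalent would be:
string slotName = Environment.GetEnvironmentVariable("APPSETTING_SLOT_NAME");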
Here are four points to consider:
VIP swap only makes sense when your service faces the outside world, i.e. when it exposes an API and reacts to requests.
If all your service does is pull messages from a queue and process them, then your service is proactive and VIP swap is not a good solution for you.
If your service is both reactive and proactive, you may want to reconsider your design. Perhaps split the service into two different services.
Eric's suggestion of modifying the .cscfg files pre- and post-VIP swap is good if the proactive part of your service can take a short downtime (because you first configure both Staging and Production not to pull messages, then perform the VIP swap, and then update Production's configuration to start pulling messages again).