My current TFS server will be retired in the next few months. I am using the TFS API to create a parallel TFS instance on a new server from the existing one. I have folders and solutions that have been renamed. I am iterating over items and, based on their change type (Add, Edit, Delete, SourceRename, etc.), checking them in to the destination TFS.
I am not able to get the old filename for a file, which I need in order to use PendRename, when the item being iterated has the change type Delete | SourceRename or Rename.
I tried the solution mentioned here:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/f9c7e7b4-b05f-4d3e-b8ea-cfbd316ef737/how-to-get-previous-path-of-renamedmoved-of-file-using-tfs-api?forum=tfsgeneral
But my changeset has a lot of changes, so identifying a particular file seems difficult.
Is there something other than the changeset that interrelates the two items (the deleted one and the renamed one)? There must be a unique identifier that associates the two items so that they can appear together in TFS history.
I read this somewhere and modified it for my purposes so that I can get the parent branch for a given server item:
// Build the branch history for the given server item (no recursion).
ItemSpec itemSpec = new ItemSpec(serverItem, RecursionType.None);
BranchHistoryTreeItem[][] results = sourceTFSHelper.VCS.GetBranchHistory(
    new ItemSpec[] { itemSpec }, VersionSpec.Latest);
BranchHistoryTreeItem[] thisBranchResults = results[0];

foreach (BranchHistoryTreeItem treeItem in thisBranchResults)
{
    BranchRelative requestedItem = FindRequestedItem(treeItem);
    if (requestedItem != null)
    {
        // BranchFromItem is the item this branch was created from.
        return (requestedItem.BranchFromItem == null)
            ? null
            : requestedItem.BranchFromItem.ServerItem;
    }
}
The TFS API's support could definitely be improved here. To solve this, you have to look for the associated Delete change instance, whose original path is stored in the MergeSources container.
Below is some sample code that will hopefully inspire you.
// Request the changes for the changeset, including merge source
// information (the final "true" argument).
var changeInstances = VersionControlSvr.GetChangesForChangeset(
    %YourChangesetId%,
    false,          // includeDownloadInfo
    Int32.MaxValue, // pageSize
    null,           // lastItem
    null,           // itemPaths
    true);          // includeMergeSourceInfo

foreach (Change changeInstance in changeInstances)
{
    Console.WriteLine(changeInstance);
    if (changeInstance.MergeSources != null && changeInstance.MergeSources.Count > 0)
    {
        var source = changeInstance.MergeSources.FirstOrDefault();
        Console.WriteLine($"{changeInstance.Item.ServerItem}");
        Console.WriteLine("was renamed from:");
        Console.WriteLine($"{source.ServerItem}");
    }
}
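Once a change's MergeSources entry reveals the old path, replaying the rename against the destination might look roughly like the sketch below, placed inside the MergeSources branch of the loop above. This is only a sketch under assumptions: "workspace" is presumed to be a destination Workspace whose mappings cover both paths.
// Map both server paths into the destination workspace and pend the rename.
string oldLocalPath = workspace.GetLocalItemForServerItem(source.ServerItem);
string newLocalPath = workspace.GetLocalItemForServerItem(changeInstance.Item.ServerItem);
workspace.PendRename(oldLocalPath, newLocalPath);
// Later, after all changes have been pended:
workspace.CheckIn(workspace.GetPendingChanges(), "Replayed renames from the source TFS");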
With soft delete turned on, I add a single record on the client and push; delete the added record and push; then attempt to add a new record (and push) with the same primary key as the initial record. I get an exception. It would appear that EntityDomainManager just attempts a new insert without checking whether the record should be 'updated' instead of inserted.
However, if I turn off soft delete in the domain manager constructor, everything works fine.
We are using incremental sync, so as I understand it soft delete is required to make that work; otherwise the mobile client and the server could end up with different pictures of what is correct.
What is the recommended approach? A custom EntityDomainManager (or other DomainManager)? If so, it would be useful to have more clarity on the interactions between the table controller and the domain manager.
I have constructed this custom domain manager, which seems to work, but I would appreciate any guidance or suggestions.
public class CustomEntityDomainManager<TData> : EntityDomainManager<TData> where TData : class, ITableData
{
    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services)
        : base(context, request, services)
    {
    }

    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services, bool enableSoftDelete)
        : base(context, request, services, enableSoftDelete)
    {
    }

    public async override Task<TData> InsertAsync(TData data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }

        // If soft delete is enabled and the data has been provided with an id...
        if (EnableSoftDelete && data.Id != null)
        {
            // ...look to see whether the record exists and is soft-deleted;
            // if so, remove it for real before attempting the insert.
            // Remember the old value of IncludeDeleted, since the lookup
            // needs to include deleted records.
            var oldIncludeDeleted = IncludeDeleted;
            try
            {
                IncludeDeleted = true;
                var existingData = await this.Lookup(data.Id).Queryable.FirstOrDefaultAsync();

                // If the record exists and is soft-deleted, truly delete it.
                if (existingData != null && existingData.Deleted)
                {
                    this.Context.Set<TData>().Remove(existingData);
                }
            }
            finally
            {
                IncludeDeleted = oldIncludeDeleted;
            }
        }

        if (data.Id == null)
        {
            data.Id = Guid.NewGuid().ToString("N");
        }

        return await base.InsertAsync(data);
    }
}
This behavior is by design: we require that you do an explicit undelete before doing the update.
The solution you've presented is fine. You can also move the code to your table controller, assuming you only need this behavior in one table. If you need it in multiple tables, then the custom domain manager is the best approach.
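For reference, the table-controller variant might look roughly like the sketch below. This is a minimal sketch, not a confirmed implementation; the TodoItem entity, the MobileServiceContext class, and the route name are assumptions standing in for your own types.
public class TodoItemController : TableController<TodoItem>
{
    private MobileServiceContext context; // hypothetical EF context

    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<TodoItem>(context, Request, Services, enableSoftDelete: true);
    }

    public async Task<IHttpActionResult> PostTodoItem(TodoItem item)
    {
        // If a soft-deleted row with this id already exists, hard-delete it
        // first so the subsequent insert does not collide on the primary key.
        if (item.Id != null)
        {
            var existing = context.Set<TodoItem>().Find(item.Id);
            if (existing != null && existing.Deleted)
            {
                context.Set<TodoItem>().Remove(existing);
                await context.SaveChangesAsync();
            }
        }
        TodoItem current = await InsertAsync(item);
        return CreatedAtRoute("Tables", new { id = current.Id }, current);
    }
}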
I have a simple IBackgroundTask implementation that performs a query and then either performs an insert or one or more updates depending on whether a specific item exists or not. However, the updates are not persisted, and I don't understand why. New items are created just as expected.
The content item I'm updating has a CommonPart and I've tried authenticating as a valid user. I've also tried flushing the content manager at the end of the Sweep method. What am I missing?
This is my Sweep, slightly edited for brevity:
public void Sweep()
{
    // Authenticate as the site's super user
    var superUser = _membershipService.GetUser(_orchardServices.WorkContext.CurrentSite.SuperUser);
    _authenticationService.SetAuthenticatedUserForRequest(superUser);

    // Create a dummy "Person" content item
    var item = _contentManager.New("Person");
    var person = item.As<PersonPart>();
    if (person == null)
    {
        return;
    }
    person.ExternalId = Random.Next(1, 10).ToString();
    person.FirstName = GenerateFirstName();
    person.LastName = GenerateLastName();

    // Check if the person already exists
    var matchingPersons = _contentManager
        .Query<PersonPart, PersonRecord>(VersionOptions.AllVersions)
        .Where(record => record.ExternalId == person.ExternalId)
        .List().ToArray();

    if (!matchingPersons.Any())
    {
        // Insert the new person and quit
        _contentManager.Create(item, VersionOptions.Draft);
        return;
    }

    // There is at least one matching person; update them
    foreach (var updatedPerson in matchingPersons)
    {
        updatedPerson.FirstName = person.FirstName;
        updatedPerson.LastName = person.LastName;
    }
    _contentManager.Flush();
}
Try adding _contentManager.Publish(updatedPerson.ContentItem). If you do not want to publish but just want to save, you don't need to do anything more: changes in Orchard are saved automatically unless the ambient transaction is aborted. The call to Flush is not necessary at all. This is the case both during a regular request and in a background task.
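In the Sweep method above, that would mean publishing inside the update loop; a minimal sketch, assuming PersonPart exposes the standard ContentItem property from ContentPart:
foreach (var updatedPerson in matchingPersons)
{
    updatedPerson.FirstName = person.FirstName;
    updatedPerson.LastName = person.LastName;
    // Publish so the changes become the published version of the item.
    _contentManager.Publish(updatedPerson.ContentItem);
}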
I am running a SharePoint CAML query where I want to check that a field on an item is equal to one of many values. I build this query dynamically and may wish to check against many hundreds of values.
I found that when performing a query with 780 Or elements, I got an error related to server memory. Obviously this varies across environments, but I am looking for some guidelines suggesting a maximum query length at which I should cap it.
Thanks!
How about using ContentIterator?
http://community.zevenseas.com/Blogs/Robin/Lists/Posts/Post.aspx?ID=122
It supports recursion for walking a tree of items and acting on them in some way.
This code does a "publish all" on the style library files whose "FeatureId" properties match a particular value:
SPList styleLibrary = rootWeb.Lists.TryGetList("Style Library");
SPFolder folder = styleLibrary.RootFolder;

ContentIterator ci = new ContentIterator();
ci.ProcessFilesInFolder(
    styleLibrary,
    folder,
    true,
    new ContentIterator.FileProcessor((SPFile f) =>
    {
        // Check the FeatureId property the file's been "stamped" with
        if (f.Properties.ContainsKey("FeatureId"))
        {
            if (String.Equals(f.Properties["FeatureId"] as string, featureId, StringComparison.InvariantCultureIgnoreCase))
            {
                if (f.Level == SPFileLevel.Checkout)
                    f.CheckIn(String.Empty, SPCheckinType.MajorCheckIn);
                if (f.Level == SPFileLevel.Draft)
                    f.Publish("");
            }
        }
    }),
    new ContentIterator.FileProcessorErrorCallout((SPFile f, Exception ex) =>
    {
        // Define the action to take if an error occurs
        return false;
    }));
You could get all folders via SPList.Folders, iterate over them, and filter by whatever criteria you need...
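A minimal sketch of that approach; the filter condition here is just a placeholder:
SPList styleLibrary = rootWeb.Lists.TryGetList("Style Library");
foreach (SPListItem folderItem in styleLibrary.Folders)
{
    SPFolder folder = folderItem.Folder;
    // Filter on whatever you need: name, properties, URL...
    if (folder.Name.StartsWith("2010", StringComparison.OrdinalIgnoreCase))
    {
        Console.WriteLine(folder.ServerRelativeUrl);
    }
}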
From code, I've automatically created a lot of similar sites (SPWeb) in my site collection from a site template (in SharePoint Foundation). Every site has a home page on which I've added the "What's New" web part (found under "Social Collaboration").
Even though the web part has several "target lists" (I'd have called them "source lists") added to it on the template site, this connection is lost on the sites created from the template. So I need to programmatically find all these web parts and add the target lists to them. Looping over the web parts is not an issue; I've done that before. But I can't seem to find a word on the net about how to go about modifying this particular web part. All I have to go on is IntelliSense.
I've found out that it resides in the Microsoft.SharePoint.Applications.GroupBoard.WebPartPages namespace, but in the lists provided on MSDN this is one of very few namespaces that doesn't have a link to reference documentation.
Does anyone have any experience modifying this web part from code? If not, how would you go about finding out? I can't seem to figure out a method for this.
Here is how I did it; it worked really well. I had a feature that created several list instances and provisioned the What's New web part. In the feature receiver, I looped through all of the list instances, indexed the Modified field, and then added each list to the web part:
private void ConfigureLists(SPWeb web, SPFeatureReceiverProperties properties)
{
    List<Guid> ids = new List<Guid>();
    SPElementDefinitionCollection elements =
        properties.Feature.Definition.GetElementDefinitions(new CultureInfo((int)web.Language, false));
    foreach (SPElementDefinition element in elements)
    {
        if ("ListInstance" == element.ElementType)
        {
            XmlNode node = element.XmlDefinition;
            SPList list = web.Lists[node.Attributes["Title"].Value];
            SPField field = list.Fields[SPBuiltInFieldId.Modified];
            if (!field.Indexed)
            {
                field.Indexed = true;
                field.Update();
            }
            ids.Add(list.ID);
        }
    }

    string targetConfig = string.Empty;
    foreach (Guid id in ids)
    {
        targetConfig += string.Format("'{0}',''\n", id);
    }

    SPFile file = web.GetFile("Pages/default.aspx");
    file.CheckOut();
    using (SPLimitedWebPartManager manager = file.GetLimitedWebPartManager(PersonalizationScope.Shared))
    {
        WhatsNewWebPart webpart = null;
        foreach (System.Web.UI.WebControls.WebParts.WebPart eachWebPart in manager.WebParts)
        {
            webpart = eachWebPart as WhatsNewWebPart;
            if (null != webpart)
            {
                break;
            }
        }
        if (null != webpart)
        {
            webpart.TargetConfig = targetConfig;
            manager.SaveChanges(webpart);
        }
    }
    file.CheckIn("ConfigureWebParts");
    file.Publish("ConfigureWebParts");
    file.Approve("ConfigureWebParts");
}
If you are unsure about the property, export the web part from the browser, then open the .webpart/.dwp file with a text editor. Somewhere in the XML will be a reference to the source list.
*.webpart files are usually easier to modify: just set the property.
*.dwp files are harder because you sometimes have to get the property (e.g. ViewXML), load it into an XmlDocument, replace the value, and write the XML document's string value back to ViewXML.
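For the *.dwp case, the round trip might look roughly like this sketch, assuming a ListViewWebPart obtained from an SPLimitedWebPartManager; the property used here (ListViewXml) illustrates the pattern rather than the What's New web part specifically:
XmlDocument doc = new XmlDocument();
doc.LoadXml(listViewWebPart.ListViewXml);
// ...modify the document: swap list GUIDs, view fields, and so on...
listViewWebPart.ListViewXml = doc.OuterXml;
manager.SaveChanges(listViewWebPart);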
I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?
As you've found, you can't just add another item that has the same row key and partition key, so you will need to run a query to check whether the item already exists. In situations like this, I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating; ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could instead just update the existing one (saving yourself one round trip to storage). Your code might look something like this:
var existsQuery = from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
                  where e.PartitionKey == objectToUpsert.PartitionKey
                     && e.RowKey == objectToUpsert.RowKey
                  select e;

MyEntity existingObject = existsQuery.FirstOrDefault();

if (existingObject == null)
{
    tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
    existingObject.Property1 = objectToUpsert.Property1;
    existingObject.Property2 = objectToUpsert.Property2;
    tableServiceContext.UpdateObject(existingObject);
}

tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While correct at the time of writing, with the September 2011 update Microsoft updated the Azure Table API to include two upsert commands, Insert Or Replace Entity and Insert Or Merge Entity.
In order to operate on an existing object NOT managed by the TableContext, with either DeleteObject or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the TableContext, instead of calling AddObject, which instructs the TableContext to attempt an insert.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
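A minimal sketch of that pattern; the entity type and key values are placeholders, and the "*" ETag makes the update unconditional:
var entity = new MyEntity { PartitionKey = "pk", RowKey = "rk", Property1 = "new value" };
// Attach without querying first; "*" means overwrite regardless of ETag.
tableServiceContext.AttachTo(TableName, entity, "*");
tableServiceContext.UpdateObject(entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);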
In my case removing it first (as a separate call) was not allowed, so I do it like this. This results in one transaction to the server, which first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
               where e.PartitionKey == item.PartitionKey
                  && e.RowKey == item.RowKey
               select e;

_ServiceContext.IgnoreResourceNotFoundException = true;
var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
    _ServiceContext.DeleteObject(existingObject);
}

_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();
_ServiceContext.IgnoreResourceNotFoundException = false;
Insert Or Merge and Insert Or Replace were added to the API in September 2011. Here is an example using Storage API 2.0, which is easier to understand than the way it is done in the 1.7 API and earlier.
public void InsertOrReplace(ITableEntity entity)
{
    retryPolicy.ExecuteAction(
        () =>
        {
            try
            {
                TableOperation operation = TableOperation.InsertOrReplace(entity);
                cloudTable.Execute(operation);
            }
            catch (StorageException e)
            {
                string message = "InsertOrReplace entity failed.";
                if (e.RequestInformation.HttpStatusCode == 404)
                {
                    message += " Make sure the table is created.";
                }
                // do something with message
            }
        });
}
The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So, in fact, you need to read first and then decide whether to insert or update.
You may use the UpsertEntity and UpsertEntityAsync methods of the TableClient class in the official Microsoft Azure.Data.Tables package.
A fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs:
public void UpsertTableEntity(WeatherInputModel model)
{
    TableEntity entity = new TableEntity();
    entity.PartitionKey = model.StationName;
    entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";

    // The other values are added like items to a dictionary
    entity["Temperature"] = model.Temperature;
    entity["Humidity"] = model.Humidity;
    entity["Barometer"] = model.Barometer;
    entity["WindDirection"] = model.WindDirection;
    entity["WindSpeed"] = model.WindSpeed;
    entity["Precipitation"] = model.Precipitation;

    _tableClient.UpsertEntity(entity);
}
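For completeness, a minimal sketch of constructing the _tableClient used above; the connection string and table name are placeholders:
var _tableClient = new TableClient("UseDevelopmentStorage=true", "WeatherData");
_tableClient.CreateIfNotExists(); // ensure the table exists before upserting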