How do I find out who deleted (removed) an Iteration from Rally?

I am using the Excel Add-in to look at revision history. I would like to locate the information (User, Date, Time, etc.) when an Iteration in Rally was "removed". I am able to obtain this but get too many rows.
I want to know when an Iteration is removed from a project.
I don't want to know when the Iteration value is changed on a User Story.
Right now I am getting rows relating to both situations.
Any hints?
Regards,
Jim

Unfortunately there is no single place to look for this, nor a single Web Services query you can run to provide this information. Your best bet is to find a Work Product (User Story, Defect, etc.) that was scheduled into the Iteration and examine its Revision History. There will be an entry similar to:
ITERATION removed [Iteration 3] 2013-May-15 12:19:43 PM America/Denver Mark W
That entry includes the Date/Time and the User who performed the delete.
This likely precludes Excel as the tool of choice for this query, as it would require you to query and parse many, many Revision History entries across your User Stories of interest.
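If you do export the revision entries and need to separate the two cases, the wording itself distinguishes them: a deletion reads "ITERATION removed [...]", while rescheduling a story produces a change entry. A minimal Python sketch of that filtering, assuming the revision rows have already been dumped to a CSV (the column names Description, Date, and User are assumptions about your export):

```python
import csv
import re

# A removal entry reads "ITERATION removed [Iteration 3] ...", whereas a
# reschedule on a story shows up as a change entry; filtering on the prefix
# separates the two.
REMOVED = re.compile(r"^ITERATION removed \[(?P<iteration>[^\]]+)\]")

with open("revisions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: Description, Date, User
        match = REMOVED.match(row["Description"])
        if match:
            # Keep only true removals, not "ITERATION changed ..." entries
            print(match.group("iteration"), row["Date"], row["User"])
```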

Related

Excel - how to store several words into one combined substring

I am working with a document, where each row contains a description for a specific incident (fire incidents, where firefighters turn up and thereafter write a report).
The incidents/reports are written by several different people, so the language varies a lot, which makes it difficult to search for one specific context using a single word: =ISNUMBER(SEARCH(substring;text))
Even when the word appears in the text, the context is often unrelated to what I am trying to analyse.
I want to make my word search more flexible by being able to "put" or "store" several different words/phrases into my "substring", so I can get closer to the specific context I wish to analyse.
This way I can cover more data that is in fact related, but described differently in the individual incident reports.
I have tried to search for a solution myself, but am unsure how to phrase this specific inquiry.
So far I have only been able to use the formula above, which is insufficient when trying to comb through 2,000 rows.
I hope that someone is able to help me!
Thank you
An example:
Store the following phrases: "stopped fire", "killed fire", "fire was put out" under one alias: "Killed fire".
So that when I use "Killed fire", all of the above wordings are included in my search.
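In other words, the logic I am after is roughly this (a Python sketch just to illustrate the intent; the phrases and sample texts are from my example):

```python
# Map one alias to the group of phrases it should cover
PHRASE_GROUPS = {
    "killed fire": ["stopped fire", "killed fire", "fire was put out"],
}

def matches(text: str, alias: str) -> bool:
    """True if any phrase in the alias's group occurs in the text."""
    text = text.lower()
    return any(phrase in text for phrase in PHRASE_GROUPS[alias])

# Example: flag incident descriptions that describe extinguishing a fire
reports = [
    "Crew arrived and the fire was put out within minutes.",
    "Smoke detector fault, no fire found.",
]
for report in reports:
    print(matches(report, "killed fire"), report)  # True, then False
```

In Excel itself the same idea would presumably be an OR over several ISNUMBER(SEARCH(...)) tests with the phrase group in a named range, something like =SUMPRODUCT(--ISNUMBER(SEARCH(PhraseRange;A2)))>0, though I have not verified that against my sheet.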

Filtering SharePoint List by Another SharePoint List

I posted this question on Stack Exchange here: (https://sharepoint.stackexchange.com/questions/249418/filtering-sharepoint-list-by-another-sharepoint-list), but just realized I should have posted it to Stack Overflow instead. Hope it's not bad form to cross-post (I'll add a link to this post in the other post).
I've been searching the forums and doing research online with no luck- apologies if this has been answered before.
I have a list with several thousand items in it. I often receive bulk update requests where I need to update several hundred of these items at a time (let's say for this example that we're using a field called "Case ID").
Here's what I've tried:
Searching cases individually, or up to three at a time in datasheet view; this is not time effective
Exporting the list and manually manipulating the data in Excel, then pasting in (and writing over) the data in the column that needs to be updated; this approach is not user friendly, is not necessarily time effective, and has potential side effects (causing errors for users currently modifying items that I am changing in bulk)
Lastly- I know I can create custom views that isolate this data; the problem is that the lists of cases I need to modify generally do not have enough commonalities to isolate them using the view filter logic
So- my guess is that I need two lists, likely connected with a web part. The first list would exist solely for the purpose of querying the second list. I would enter the Case IDs I wanted to filter by in the first list, and the second list would filter to show only the Case IDs in the first list. All items would be deleted from the first list between queries.
I'm not married to this approach- it's just my best guess. I'm open to creative and alternative approaches, but the final process needs to be user friendly (business partners will be using it).
Does anyone know how I can accomplish this? I've tried to get something implemented several times over the past few years and have never been successful; posting here is my last resort before I throw in the towel.
I have SP 2013, and have SharePoint Designer; please let me know if I need to add any other information.
Thanks in advance for the support,
Chad
I'd suggest creating a JSOM application to do all the updates. It can query only the items that need updating and then update them item by item.
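If JSOM isn't a hard requirement, the same pattern (query only the items in the bulk list, then update each one) can be sketched against the SharePoint 2013 REST API. A rough Python illustration; the site URL, list name, field name CaseID, and NTLM credentials are all assumptions:

```python
import requests
from requests_ntlm import HttpNtlmAuth  # on-prem SP 2013 with NTLM (assumption)

SITE = "https://sharepoint.example.com/sites/cases"   # hypothetical site URL
auth = HttpNtlmAuth("DOMAIN\\user", "password")
headers = {"Accept": "application/json;odata=verbose"}

def get_digest():
    # A form digest is required for write operations
    r = requests.post(f"{SITE}/_api/contextinfo", auth=auth, headers=headers)
    return r.json()["d"]["GetContextWebInformation"]["FormDigestValue"]

def update_case(case_id, new_values):
    # 1. Find the item(s) by Case ID (field name is illustrative)
    r = requests.get(
        f"{SITE}/_api/web/lists/getbytitle('Cases')/items"
        f"?$filter=CaseID eq '{case_id}'",
        auth=auth, headers=headers)
    for item in r.json()["d"]["results"]:
        # 2. MERGE the new field values into the item
        requests.post(
            item["__metadata"]["uri"],
            auth=auth,
            headers={**headers,
                     "Content-Type": "application/json;odata=verbose",
                     "X-RequestDigest": get_digest(),
                     "X-HTTP-Method": "MERGE",
                     "IF-MATCH": "*"},
            json={"__metadata": {"type": item["__metadata"]["type"]},
                  **new_values})

# Bulk request: the Case IDs pasted in from the update request
for cid in ["C-1001", "C-1002"]:
    update_case(cid, {"Status": "Closed"})
```

This keeps the whole bulk update in one script the business partners never have to touch; they would only supply the list of Case IDs.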

Data and structure cleansing of Excel sheets

I have over 6,000 Excel sheets. While all the sheets describe the same thing, they are independently formatted. They all have between 9 and 13 columns, but the columns are out of order, the column names are independently misspelled, and the sheets may or may not have a second, or even third, header row.
I am currently trying, in Python, to read cells in a left-down-right-up motion to locate the same data, but there are simply too many differences in structure names, column ordering, and data definitions to pin them down one at a time. Is there a tool that I can use to read these documents and conform them to a single format via a rapid mapping function?
Thanks much.
Wow, it's the Ultimate Data Horror Story.
I want to ask how you ever let it get this way... but I actually don't want to know; I'm already going to have nightmares about this.
It's like that Hoarding show on TV, but with data.
No, I'm afraid that if you can't even identify a pattern then there's no magic function that will be able to either.
But that doesn't mean it's a lost cause. It's just going to need some human interaction, and there are ways to minimize the pain.
What you need is a custom interface that will load the documents one by one, and will walk a human through clicking each relevant column or area, and then automatically load the next document.
There would also need to be buttons for sorting out things like obvious garbage sheets (blanks?), "unknowns" (that get put in a folder for advanced research later), and other "unpredictables" that may come up during the process.
Also, perhaps once you get into it, you'll notice a pattern you're not thinking of, like maybe "the person who handled the files from 2002 to 2004 set them up this way", or, "when Budget is misspelled, it's always either Bugdet or Budteg".
In this scope, little patterns like that can make a big difference.
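Those misspelling patterns are exactly where a bit of code can cut the clicking down: fuzzy-match each sheet's headers against your canonical column list and only send the sheets that fail to match to the human. A minimal Python sketch; the canonical names and file path are illustrative:

```python
import difflib
import pandas as pd  # assumes the sheets are readable with pandas

CANONICAL = ["Date", "Location", "Budget", "Owner", "Status"]  # your 9-13 columns

def map_headers(columns):
    """Fuzzy-map a sheet's headers onto the canonical names.

    Returns a rename dict for the headers we could match; anything
    unmatched is left for human review.
    """
    mapping = {}
    for col in columns:
        hit = difflib.get_close_matches(str(col).strip(), CANONICAL,
                                        n=1, cutoff=0.75)
        if hit:
            mapping[col] = hit[0]   # e.g. "Bugdet" -> "Budget"
    return mapping

df = pd.read_excel("sheet_0001.xlsx")          # illustrative path
renames = map_headers(df.columns)
if len(renames) == len(df.columns):            # every header matched
    df = df.rename(columns=renames)
    df = df[[c for c in CANONICAL if c in df.columns]]  # conform the order
else:
    print("needs human review:", set(df.columns) - set(renames))
```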
Depending on your coding skills, you may or may not need outside assistance with this. I assume this is not data that can just get thrown out, or you wouldn't be asking...
If each document took an average of 20 seconds to process, that would be about 33 hours in total. An hour a day and it's done in a month. Or someone full-time, and it's done in a week.
Do you have a budget you can throw at this? Data archaeology is an actual thing! Hell, I'll do it for you for the right price... (wouldn't break the bank, depending on how urgent it is, of course!)
Either way, this ain't going to be fun for "someone"...

Dynamics CRM 2011 Import Data Duplication Rules

I have a requirement in which I need to import data from Excel (CSV) into Dynamics CRM regularly.
Instead of using simple Data Duplication Rules, I need to implement a point system to determine whether a record is considered a duplicate or not.
For example, these are the rules for a particular import:
First Name, exact match, 10 pts
Last Name, exact match, 15 pts
Email, exact match, 20 pts
Mobile Phone, exact match, 5 pts
And the threshold value: 19 pts.
Now, if a record has First Name and Last Name matching an old record in the entity, the score is 25 pts, which is higher than the threshold (19 pts), so the record is considered a duplicate.
If, for example, a record only has the same First Name and Mobile Phone, the score is 15 pts, which is lower than the threshold, and it is therefore considered a non-duplicate.
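In pseudo-code, the scoring I have in mind looks roughly like this (a sketch only; the field names are illustrative):

```python
# Field weights and threshold from the rules above
WEIGHTS = {"first_name": 10, "last_name": 15, "email": 20, "mobile_phone": 5}
THRESHOLD = 19

def duplicate_score(new, existing):
    """Sum the weights of the fields that match exactly (case-insensitive)."""
    return sum(
        pts for field, pts in WEIGHTS.items()
        if new.get(field)
        and str(new[field]).strip().lower()
            == str(existing.get(field, "")).strip().lower()
    )

def is_duplicate(new, existing):
    # First + Last Name match -> 25 pts > 19 -> duplicate
    # First Name + Mobile match -> 15 pts < 19 -> not a duplicate
    return duplicate_score(new, existing) > THRESHOLD
```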
What is the best approach to achieve this requirement? Is it possible to utilize the default Import Data functionality in MS CRM? Is there any 3rd-party add-on that answers my requirement above?
Thank you for all the help.
Updated
Hi Konrad, thank you for your suggestions, let me elaborate here:
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Nice one, but I don't think it is workable in my case; the data will be coming in regularly from the client in moderate numbers (hundreds to thousands), and typically the client won't check the data for duplicates.
Workflow. Run a process removing any instance calculated as a duplicate.
Workflow is a good idea; however, since it is processed asynchronously, my concern is that in some cases a user may already have made updates/changes to the inserted data before the workflow finishes, creating data inconsistency or at the very least a confusing user experience.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
I like this approach. So I just import as usual (for example, into the contact entity), but I already have a plugin in place that gets triggered every time a record is created; the plugin checks whether the record is duplicate-ish or not and takes the necessary action.
I haven't fiddled much with duplicate detection, but looking at your criteria you might be able to make rules that match them; pretty much three rules would cover your cases: a full-name match, a last name plus mobile phone match, and an email match.
If you want to do the points system I haven't seen any out of the box components that solve this, however CRM Extensions have a product called Import Manager that might have that kind of duplicate detection. They claim to have customized duplicate checking. Might be worth asking them about this.
Otherwise it's custom coding that will solve this problem.
I can think of the following approaches to the task (depending on the number of records, repetitiveness of the import, automation requirements, etc.); they may all be good in some way. Would you care to elaborate on the current conditions?
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
Workflow. Run a process removing any instance calculated as a duplicate.
You also need to consider the implications of such elimination of data. There's a mathematical issue. Suppose that the uniqueness radius (i.e. the threshold in this 1-D case) is 3. Consider the following set of numbers (it's listed twice, just in a different order).
1 3 5 7 -> 1 _ 5 _
3 1 5 7 -> _ 3 _ 7
Are you sure that's the intended result? Under some circumstances, you can even end up with sets of records of different sizes (only depending on the order). I'm a bit curious on why and how the setup came up.
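The point is easy to verify with a quick sketch of such a greedy elimination (illustrative only):

```python
def dedupe(values, radius=3):
    """Greedy 1-D dedup: keep a value only if it is at least
    `radius` away from every value already kept."""
    kept = []
    for v in values:
        if all(abs(v - k) >= radius for k in kept):
            kept.append(v)
    return kept

print(dedupe([1, 3, 5, 7]))  # -> [1, 5]
print(dedupe([3, 1, 5, 7]))  # -> [3, 7]: same set, different survivors
```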
Personally, I'd go with the plugin, if the above is OK by you. If you need to make sure that some of the unique-ish elements never get omitted, you'd probably be best off applying a test algorithm to a backup of the data. However, that may defeat its purpose.
In fact, it sounds so interesting that I might create the solution for you (just to show it can be done) and blog about it. What's the deadline?

How does the "mark as read" system on webforums work?

I've wondered about this for some time now. I'm wondering how web forums implement the option to highlight something you haven't read, i.e. how the forum knows.
Since most web forums have a function to show you all posts since your last visit, they must save the last time you visited one of their pages in your user data in the database.
But that doesn't explain how individual topics are still highlighted after you've read just one.
A many-to-many table connecting a user to a topic/post, with flags for read/favorite etc.
Many web forums store a huge list of the last time you looked at each topic you've looked at.
This gets out of hand quickly, but there are mitigations. See Determining unread items in a forum
Keeping track of which posts a visitor has read is of course not that big a deal, since the number of posts a visitor has read is highly likely to be much smaller than the number not read. So, if you know which posts a visitor has read, you also know which posts they didn't read. To make this less computationally intensive, you'd normally only do this over a certain period of time, say the last two weeks; everything before that time is considered read.
Usually, this list of "unread" items only shows changes that have been made since the last time you logged out.
Use the user's last activity date/time to mark items as "unread" (any activity in a topic after that time marks it "unread"). Then store, in a session variable, a list of topic IDs the user has viewed since the last login. Combining these two would give you a relatively accurate list of unread topics.
Of course this data would be lost on log-out or session expiry and the cycle would start again, without spending an unnecessary number of SQL queries.
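Sketched out, the combination looks like this (names are illustrative; the viewed-ID set would live in the user's session):

```python
from datetime import datetime

def unread_topics(topics, last_activity, viewed_this_session):
    """A topic is unread if it changed after the user's last activity
    and has not been opened during the current session."""
    return [
        t for t in topics
        if t["last_post_at"] > last_activity
        and t["id"] not in viewed_this_session
    ]

topics = [
    {"id": 1, "last_post_at": datetime(2024, 5, 2, 9, 0)},
    {"id": 2, "last_post_at": datetime(2024, 5, 2, 11, 0)},
    {"id": 3, "last_post_at": datetime(2024, 5, 2, 12, 0)},
]
last_activity = datetime(2024, 5, 2, 10, 0)   # from the user row
viewed = {3}                                  # stored in the session
print(unread_topics(topics, last_activity, viewed))  # topic 2 only
```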
On the custom forum I used to work with, we used a combination of your last visit time (updated every time you viewed another page - usually cookied), and a "mark read" button on each topic that added a date/time value to a SQL table containing your UserID, the TopicID and the Date/Time.
Thus to view new topics we would look at your last visit date and anything created after that point in time was a new topic.
Once you entered a topic you had clicked "mark read" on, it would show only the initial post plus any replies with a date/time added after you clicked the mark-read button. If you have fewer viewers and performance to spare, you could basically set it up to add an entry to the table for every topic the user clicks on, at the moment they click on it.
Another option you have, and I have actually seen this done before in a vBulletin installation, is to store a comma separated list of viewed topic ids client-side in a cookie.
Server-side, the only thing stored was the time of the user's previous visit. The forum system used this in conjunction with the information in the user's cookie to show 'as read' for any topic where either
Last modified date (ie last post) older than the user's previous visit
Topic ID found in the user's cookie as a topic the user has visited this session.
I'm not saying it's a good idea, but I thought I'd mention it as an alternative - the obvious way to do it has already been stated in other answers, ie store it server-side as a relation table (many to many table).
I guess it does have the advantage of putting less of the burden of keeping that information on the server.
The downsides are that it ties it to the session, so once a new session is started everything that occurred before the last session is considered 'already read'. Another downside is that a cookie can only hold so much information, and a user may view hundreds of topics in a session, so it approaches the storage limit of the cookie.
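A sketch of the cookie variant's decision logic (cookie format and names are illustrative):

```python
def is_read(topic, previous_visit, cookie_value):
    """A topic counts as read if its last post predates the user's
    previous visit, or its ID is in the session cookie."""
    viewed_ids = set(cookie_value.split(",")) if cookie_value else set()
    return (topic["last_post_at"] <= previous_visit
            or str(topic["id"]) in viewed_ids)

def mark_viewed(topic_id, cookie_value):
    """Append a topic ID to the comma-separated cookie value."""
    ids = set(cookie_value.split(",")) if cookie_value else set()
    ids.add(str(topic_id))
    new_value = ",".join(sorted(ids))
    # A cookie holds roughly 4 KB; past that, stop adding. This hard cap
    # is exactly the storage limit mentioned above.
    return new_value if len(new_value) <= 4000 else cookie_value
```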
One more approach:
Make sure your stylesheet shows a clear difference between visited and non-visited links, taking advantage of the fact that browsers remember visited pages persistently.
For this to work, however, you'd need to have consistent URLs for topics, and most forum systems don't tend to do this. Another downside to this is that users may clear their history, or use more than one browser. This therefore puts this measure into the 'not highly reliable category'; you would probably just do this to augment whatever other measure you are using to track viewed topics.
