In my project we have three branches where code resides.
I'll call them 1st, 2nd, and 3rd. 1st is the starting branch for any code; changes are merged from 1st to 2nd, and from 2nd to 3rd.
Queries
1. How do I get the difference between two changelists, given that:
a. both changelists belong to the same branch?
b. both changelists belong to different branches, let's say 1st and 2nd?
2. For a given changelist, is there any way to know whether it was ever backed out or rolled back?
1a) p4 diff2 //depot/branch/...@CHANGE1 //depot/branch/...@CHANGE2
1b) p4 diff2 //depot/branch1/...@CHANGE1 //depot/branch2/...@CHANGE2
(Note that changelists are addressed with @; the # syntax is for file revision numbers.)
2) If you used the p4 undo command:
p4 filelog //depot/branch/...@=CHANGE (look for "undone by" in the history)
If you used P4V or some other method: no.
I have huge_database.csv like this:
name,phone,email,check_result,favourite_fruit
sam,64654664,sam#example.com,,
sam2,64654664,sam2#example.com,,
sam3,64654664,sam3#example.com,,
[...]
===============================================
then I have 3 email lists:
good_emails.txt
bad_emails.txt
likes_banana.txt
the contents of which are:
good_emails.txt:
sam#example.com
sam3#example.com
bad_emails.txt:
sam2#example.com
likes_banana.txt:
sam#example.com
sam2#example.com
===============================================
I want to do some grep, so that at the end the output will be like this:
sam,64654664,sam#example.com,y,banana
sam2,64654664,sam2#example.com,n,banana
sam3,64654664,sam3#example.com,y,
I don't mind doing this in several manual steps or even with a clumsy procedure such as copy-pasting between multiple files. What matters to me is reliability and, most importantly, the ability to process very LARGE csv files with more than 1M lines.
Note also that the lists I will "grep" against will usually affect at most 20% of the total csv rows, so the remaining 80% must stay intact and, if possible, not even be displaced from their current order.
I would also like to note that I will be using a software called EmEditor rather than spreadsheet software like Excel, because of its speed and because Excel simply cannot process large csv files.
How can this be done?
Will appreciate any help.
Thanks.
Googling, trial and error, and grabbing my head in frustration.
Filter all good emails with Filter. Open Advanced Filter. Next to the Add button is Add Linked File. Add the good_emails.txt file, set to the email column, and click Filter. Now only records with good emails are shown.
Select column 4 and type y. Now do the same for bad emails and change the column to n. Follow the same steps and change the last column values to the correct string.
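If you ever want a scripted alternative to the EmEditor steps (this is my addition, not part of the original answer), the same annotation can be sketched with awk: the small email lists are held in memory, the big CSV is streamed through once, and rows that match nothing are printed unchanged in their original order, so 1M+ lines are no problem. The sample data below is recreated from the question (the # in the addresses is kept verbatim):

```shell
# Recreate the sample files from the question (tiny stand-ins for the real data).
cat > huge_database.csv <<'EOF'
name,phone,email,check_result,favourite_fruit
sam,64654664,sam#example.com,,
sam2,64654664,sam2#example.com,,
sam3,64654664,sam3#example.com,,
EOF
printf '%s\n' 'sam#example.com' 'sam3#example.com' > good_emails.txt
printf '%s\n' 'sam2#example.com' > bad_emails.txt
printf '%s\n' 'sam#example.com' 'sam2#example.com' > likes_banana.txt

# Load the three lists into lookup tables, then stream the CSV once.
# Only matching rows are rewritten; everything else passes through untouched.
awk -F, -v OFS=, '
  FILENAME == "good_emails.txt"  { good[$1] = 1; next }
  FILENAME == "bad_emails.txt"   { bad[$1] = 1; next }
  FILENAME == "likes_banana.txt" { banana[$1] = 1; next }
  FNR == 1 { print; next }                      # CSV header row
  {
    if ($3 in good)     $4 = "y"
    else if ($3 in bad) $4 = "n"
    if ($3 in banana)   $5 = "banana"
    print
  }
' good_emails.txt bad_emails.txt likes_banana.txt huge_database.csv
```

This assumes the fields themselves contain no embedded commas; fully quoted CSV would need a real CSV parser instead of `-F,`.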
Each week I manually pull a very large dataset. In the dataset there are two columns: A, "Name", and B, "Description". Sometimes column A is blank, so I go with the first word in column B. I have created a third column, C, "Correction", with a formula that accomplishes this. Since this does not always produce the correct name, I have to manually correct many of the entries.
Each week I have to manually pull the entire data set again. There is no way to pull only new entries, and there is no guarantee that the order of the entries is the same, as new entries sometimes slot into the middle of the set. So each time, I lose all of the manual corrections that I made.
Is there a method to keep track of and automatically reapply these corrections? I have a vague plan for a rat’s nest of IFs and VLookups using the previous week’s data but I wonder if there is a better way.
Appreciate any help you could lend!
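Outside Excel, the "remember and reapply corrections" idea can at least be sketched like this (file names and the Name,Description,Correction column layout are my assumptions, not from the question). The corrections are keyed on the Description text, so row order and new entries slotting into the middle don't matter:

```shell
# previous.csv: last week's export, with manual corrections in column 3.
cat > previous.csv <<'EOF'
Bob,Bob's invoice,Bob
,Acme Widgets order,Acme Corp
EOF
# new.csv: this week's fresh pull (no Correction column yet, new row in the middle).
cat > new.csv <<'EOF'
Bob,Bob's invoice
,New entry here
,Acme Widgets order
EOF

# Remember old corrections by Description, then rebuild column C:
# reuse the saved correction if this Description was seen before,
# otherwise fall back to Name, or to the first word of Description.
awk -F, -v OFS=, '
  NR == FNR { fix[$2] = $3; next }
  {
    if ($2 in fix)      c = fix[$2]
    else if ($1 != "")  c = $1
    else { split($2, w, " "); c = w[1] }
    print $1, $2, c
  }
' previous.csv new.csv
```

The same lookup-by-Description logic is exactly what a VLOOKUP/XLOOKUP against last week's sheet would do; the script just shows the mechanics.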
Hello, each month we receive a series of monthly returns from different accounts, which go into a designated folder based on the account name. Each return file has the new month's returns appended to all the previous monthly returns. I am running a VLOOKUP in my workbook based on the specific return I am looking for. Is it possible to change the source of the VLOOKUP so that it takes its data from the most recently added file in the folder? That way it will contain all the most recent return data along with all the previous returns.
Thanks
There are many ways to do that. The first step should be to connect to the designated folder. You should then see something like this:
Option 1: If the file contains the month
If your file contains the month you can use it to extract this information. Following the example above you could:
extract the first 7 characters and parse them to a date
sort the date column in descending order, so the latest file is on top
use Keep Rows to get rid of the rest of the files
with the last file remaining, expand the content
Option 2: Use file properties
When you connect to a folder you can see the field "Date created". Use this the same way as explained in option 1.
Option 3: Remove duplicates
If for whatever reason the two options above are not possible, depending on your data you can:
append all the files, which will lead to duplicates
filter the duplicates out
This third option might not work if two records that look identical (all columns in the row have the same values) can legitimately appear in your dataset.
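For reference, the "most recent file" choice in options 1 and 2 boils down to a simple sort; here it is as a shell sketch with invented file names (in Power Query you would sort on the Name or "Date created" column instead):

```shell
dir=$(mktemp -d)                       # stand-in for the designated folder
printf 'jan data\n' > "$dir/2023-01 returns.csv"
printf 'feb data\n' > "$dir/2023-02 returns.csv"
# Give the files distinct timestamps so option 2 has something to work with.
touch -t 202301010000 "$dir/2023-01 returns.csv"
touch -t 202302010000 "$dir/2023-02 returns.csv"

# Option 1: the month is in the name, so a descending name sort finds the latest.
by_name=$(ls "$dir" | sort -r | head -n 1)

# Option 2: use the file timestamp instead (ls -t lists newest first).
by_date=$(ls -t "$dir" | head -n 1)

echo "$by_name"   # 2023-02 returns.csv
echo "$by_date"   # 2023-02 returns.csv
```

Option 1 only works if the name prefix sorts chronologically (e.g. YYYY-MM); option 2 works for any names but trusts the filesystem timestamps.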
I am attempting to create a spreadsheet with a lot of data captured in it. The two requirements I have to meet are 1) group jobs/parts with the same PROJECT #, and 2) sort by JOB START DATE. I thought PivotTables were the best way to do so, but I keep running into a brick wall: I'm either unable to group by Project # (most likely because they are a mixture of numbers and text, which cannot be changed), or unable to sort by Job Start Date.
I've tried moving Project # and Job Start Date from Rows to Values, as well as changing the order they're displayed in (Job Start Date before Project # and vice versa).
If grouped and sorted correctly, the records should show the grouped PROJECT # with the earliest start date first, then the next group with the next start date, etc.
An example would be:
>2074, 68506, BUC10626, 3/4/19
>>2074, 68568, AUC15393, 3/4/19
>>2074, 68570, AUC14509, 5/30/19
>2552, 69920, 99163786, 4/1/19
>>2552, 71066, H695359, 6/5/19
>1166, 71527, 5450926, 5/16/19
>2497, 71138, 2436-923, 6/11/19
>>2497, 73445, H646427, 7/24/19
>2704, 72682, AUC11771, 6/24/19
Pivot tables build a hierarchy. If you have a cascade of Project > Job > Part > Date, then you can only sort by date within the container one level up, i.e. within each Part.
If there is more than one part in the hierarchy, the dates are sorted only inside each part, e.g.
ProjectA
JobA
Part A
January 'these rows are
February 'sorted by
March 'date
Part B
August
September
Part C
March
April
If you want the projects sorted by date, then you need to have the date column before the project column.
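If the PivotTable keeps fighting you, the target ordering from the question can also be produced mechanically: compute each project's earliest date, order the groups by that, then order rows by date within each group. A sketch with the example rows from the question (awk/sort are my choice; the project,job,part,start-date column layout is assumed from the example):

```shell
# Sample rows from the question, deliberately out of order;
# assumed column layout: project,job,part,start date (M/D/YY).
cat > jobs.csv <<'EOF'
2074,68506,BUC10626,3/4/19
2552,69920,99163786,4/1/19
2074,68568,AUC15393,3/4/19
1166,71527,5450926,5/16/19
2074,68570,AUC14509,5/30/19
2552,71066,H695359,6/5/19
2497,71138,2436-923,6/11/19
2704,72682,AUC11771,6/24/19
2497,73445,H646427,7/24/19
EOF

# Pass 1: find each project's earliest date as a sortable YYMMDD key.
# Pass 2: prefix every row with "group-key,row-key", then sort by group
# key, project, and date, and strip the helper prefixes again.
awk -F, '
  function dkey(d,  p) { split(d, p, "/"); return sprintf("%02d%02d%02d", p[3], p[1], p[2]) }
  NR == FNR { k = dkey($4); if (!($1 in min) || k < min[$1]) min[$1] = k; next }
  { print min[$1] "," dkey($4) "," $0 }
' jobs.csv jobs.csv | sort -t, -k1,1 -k3,3 -k2,2 | cut -d, -f3-
```

This reproduces the grouped, date-ordered listing shown above (without the > indentation markers).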
How can we import the differences between two repos into an hg repo?
I mean, say we have repos A, A2, and B. I would like to import into repo B (same file structure as A) the differences between A2 and A (A2 is just A with some changes).
I guess we should generate a diff between both directories and use hg import, but how should the diff be generated?
Is there a better way to do this?
If B is exactly equal to A, you can (in B) just pull from A2.
If B also differs from A, you can:
pull from A into A2 (A2 may get an additional head as a result)
save the diff between the two heads in A2 to a file
import the result of the previous operation into B
And, BTW, you can streamline your current exotic workflow.
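The steps above can be demonstrated end to end with throwaway repos (the repo names A, A2, B are from the question; the paths and file contents here are made up for illustration; substitute your real paths and the actual head revisions):

```shell
tmp=$(mktemp -d) && cd "$tmp"
hg init A
cd A
echo base > f.txt
hg add f.txt
hg commit -m base -u demo
cd ..
hg clone A B                # B starts with A's file structure
hg clone A A2
cd A2
echo change >> f.txt
hg commit -m change -u demo # A2 is now "A with some changes"

hg pull ../A || true        # no-op here (A2 was cloned from A); in general
                            # this brings A's latest changes into A2
hg diff -r 0 -r tip > ../a_to_a2.diff   # the A -> A2 difference
cd ../B
hg import --no-commit ../a_to_a2.diff   # apply it to B's working copy
```

With `--no-commit` the patch lands only in B's working directory, so you can review and commit it yourself; in real repos you would diff the two head revisions rather than hard-coded 0 and tip.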