How to create a fresh Shelve Changelist in Perforce

I made some changes in a few files (let's say F1, F2, F3) in workspace A. I shelved these changes (shelf ID #1, no issues so far) and unshelved them into a new workspace B (no issues). After unshelving in B, I made a few more changes in files F2, F3, F4 and F5 in workspace B. Now I want to move the changes from workspace B to a new workspace C, and here I am facing a problem.
When I tried shelving from workspace B, the shelved list contained only files F4 and F5 (shelf ID #2), instead of all modified files (F1, F2, F3, F4 and F5). When I run p4 opened ..., it lists all 5 modified files; however, the shelved changelist takes only the files that were modified exclusively in workspace B.
I tried unshelving both shelves (IDs 1 and 2). However, I did not get the changes made in workspace B to files F2 and F3.
How can I move all my changes (all 5 files) from workspace B to workspace C?
More specifically, how can I create a fresh shelf from workspace B which contains all the changes, not just the changes made after unshelving?
Hope somebody can help me with this quickly.

To take all the files you currently have open in workspace B, and make a fresh shelf with all those changes, do:
p4 reopen -c default //...
p4 shelve
The first command takes all your opened files, and associates them with the default changelist.
The second command takes all files currently opened in the default changelist, and makes a new shelf with those changes.
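Once the new shelf exists, you can switch to workspace C and run p4 unshelve -s <shelf changelist number> there to pick up all five files (assuming workspace C maps the same depot paths).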

Related

Script task to download specific folders from a SharePoint active site using Excel

I wanted to see if it is at all possible to write a script task to download specific folders from a SharePoint active site in an automated fashion. The way we currently do this task is with an Excel sheet that contains "PMs", the six-digit numbers we manually copy and paste into the search to find the folder within the active site's documents. From there we highlight all the contents within that folder and download them. When the file is done downloading, we manually rename it to the corresponding six-digit number from the Excel sheet that we originally used to find the file in the directory. Once the file is renamed, we move it from the Downloads folder to a folder on the desktop named "pms" for future reference. We then go back to SharePoint and highlight and delete the folder and the files within it. From there we do the whole process over again, going down the list of PM numbers in the Excel sheet, downloading and deleting. The goal is to back up the files individually from SharePoint and then delete them from the site. My first question is: is this even possible to automate? If so, what would be the best way to accomplish it?
For example, here is the step-by-step process:
1- Copy the file number (aka PM number) from the Excel file.
2- Go to the SharePoint admin portal and open the specific URL of the portal; in this case we are currently working to remove files from our Fielding portal.
3- Paste the file number you got from the Excel document into the search bar, then highlight all associated files/documents.
4- It will download as something like "One_drivefile_date". We then take the original file number that we used to locate these files and rename the newly downloaded folder to the PM number (file number), so that once we archive these documents we can find them in the future via the file number.
5- Once you have downloaded and renamed the file, go back to SharePoint and delete all associated files. This is being done to remove old files for archiving and to create space within SharePoint.
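This should be at least partly automatable. Below is a rough Python sketch of the bookkeeping half only: it reads the PM numbers from the Excel sheet with openpyxl and renames/moves the newest download. The workbook name, the assumption that PM numbers sit in column A with no header row, the "One_drive*" download pattern, and the folder paths are all placeholders to adjust; the actual SharePoint search/download/delete steps would still need something like the Office365-REST-Python-Client library or browser automation, and are not shown here.
# Bookkeeping sketch: rename/move each download to its PM number.
from pathlib import Path
import shutil
from openpyxl import load_workbook

downloads = Path.home() / "Downloads"        # where the browser saves files (assumed)
archive = Path.home() / "Desktop" / "pms"    # destination folder from the question
archive.mkdir(parents=True, exist_ok=True)

wb = load_workbook("pm_list.xlsx")           # placeholder workbook holding the PM numbers
ws = wb.active
pm_numbers = [str(row[0]) for row in ws.iter_rows(values_only=True) if row[0]]

for pm in pm_numbers:
    # The SharePoint search, download and delete for this PM would happen here (not shown).
    candidates = sorted(downloads.glob("One_drive*"), key=lambda p: p.stat().st_mtime)
    if not candidates:
        print(f"No download found for PM {pm}")
        continue
    newest = candidates[-1]
    shutil.move(str(newest), str(archive / f"{pm}{newest.suffix}"))
    print(f"Archived {newest.name} as {pm}{newest.suffix}")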

Automatically get data from a specific workbook among other workbooks with similar names

I have a folder which contains workbooks, and every day a new file is uploaded to this folder.
I need to find a way where (another workbook I have) TES_Workbook automatically gathers data from the newest file in the folder.
For example TES_Workbook should get data from AKS_20210930 and not the other files, because of the current date.
All the file names start with AKS_ followed by the date: AKS_20210901, AKS_20210902, AKS_20210903, etc.
Can anyone guide me on the question above?
You can easily do this with Power Query.
Data tab in the ribbon > Get Data > From File > From Folder
Navigate to your folder, click Open, click Transform Data
Filter the Extension column to the extension of your daily files (e.g. .xlsx, or .csv if they are text exports)
Filter the Date Created column to Is Latest
When you only have the latest file click the double arrows in the Content column
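If you would rather script this outside Power Query, a small Python sketch (using pandas; the folder path and the assumption that the files are .xlsx are placeholders) can pick the newest file by the date in its name:
# Load the newest AKS_YYYYMMDD workbook from the folder.
from pathlib import Path
import pandas as pd

folder = Path(r"C:\path\to\folder")        # placeholder for the upload folder
files = sorted(folder.glob("AKS_*.xlsx"))  # AKS_YYYYMMDD names sort chronologically
if files:
    newest = files[-1]
    data = pd.read_excel(newest)           # needs openpyxl installed for .xlsx
    print(f"Loaded {newest.name} with {len(data)} rows")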

Updating external links and file locations in Excel

I kind of have two questions in one.
I am creating a main budget file, budget files for specific districts, as well as report files connected to these district budgets.
The budgets are created for each district, so the main file is basically just for a complete overview of budget and sales. There are about 30 districts, so quite a lot of files to connect.
-The report files gather data from a workbook in which sales are added, as well as from the budget file for that specific district.
-The main budget adds the combined data from the different budget files and also gathers data from the sales workbook.
-The district budget files gather data from the sales workbook as a basis for the budget.
So first of all:
When the files are opened, I am prompted to activate content or update content. This results in value errors unless I go to Data > Links > Open Files. For the main budget that means opening all 30 budget files to update everything.
Is there a way to automate this process so that the files update without having to open all connected files?
Second:
I am creating this on my laptop and I am not sure how this is going to be set up later. When I move the files, the connections will probably break, since I am using formulas like SUMIFS etc. Is there a way to make the files find the connected files without having to change every formula in every cell each time a file is moved?
Thank you for your time!
You could write VBA code to open all the files you need at once, then update and save your main sheet. If you aren't familiar with VBA, this would be a simple and straightforward intro project.
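If you prefer scripting this outside VBA, a rough sketch of the same idea in Python, driving Excel through COM with pywin32 (the file paths are placeholders, and this assumes Excel is installed on the machine), could look like this:
# Open the linked workbooks so Excel can resolve the links, then refresh and save the main file.
import win32com.client

linked_files = [r"C:\Budgets\District01.xlsx", r"C:\Budgets\Sales.xlsx"]  # placeholder paths
main_file = r"C:\Budgets\MainBudget.xlsx"                                 # placeholder path

excel = win32com.client.Dispatch("Excel.Application")
excel.DisplayAlerts = False                   # suppress the update/confirmation prompts
open_books = [excel.Workbooks.Open(f) for f in linked_files]
main_wb = excel.Workbooks.Open(main_file, 3)  # second argument: UpdateLinks=3, update external references
excel.CalculateFull()                         # recalculate so the SUMIFS formulas pick up the open sources
main_wb.Close(SaveChanges=True)
for wb in open_books:
    wb.Close(SaveChanges=False)
excel.Quit()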
Also as far as moving sheets around, you are asking about Relative vs. Absolute paths. I would suggest reading more about that here: Relative vs. Absolute paths

Save versions of Excel file on Git to reconcile differences manually later

I will spend one month updating Excel files. These files are in a language other than English. I thought I could use Git to manage what I want to do.
The situation (the initial commit)
I have an Excel file that is written in the other language.
I have to perform some work and fill the Excel file with data from that work.
My plan
After an initial commit, create a branch called toEnglish. Then translate some text on the Excel files to English so that I feel more comfortable. Once I do this I will commit.
Then, the one-month work will start and I will fill the data in the Excel file. I will commit periodically.
After the month finishes, I will commit, and so I will have the data filled in an Excel sheet where some labels are in English.
However, the output of that one month of work has to have those labels in the original language.
So I have the original branch with the original-language labels but no data,
and the toEnglish branch with the data but English labels.
The question
I cannot simply do a fast-forward merge of the branches, since that would eliminate the original-language labels. How can I merge so as to produce conflicts (the labels in the two different languages) that I can resolve one by one, so that the final merge has both the data and the labels in the original language?
There is an even bigger problem with versioning Excel files in Git, which is that Excel files (xls and xlsx) are binary. Git doesn't generally handle binary files very well. Each commit you make on an Excel file will likely record the entire file as the diff. In addition, comparing Excel files from two different commits/branches won't give you much insight.
One workaround which comes to mind would be to version plain text CSV versions of your Excel worksheets. Such CSV files would likely version well with Git. Of course, if the worksheets have lots of rich content on top of the data, then this option might not work as well.
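A minimal sketch of that workaround in Python with openpyxl (the workbook name is a placeholder; data_only=True exports the cached values of formulas rather than the formulas themselves):
# Dump every worksheet of the workbook to its own CSV for Git-friendly diffs.
import csv
from openpyxl import load_workbook

wb = load_workbook("workbook.xlsx", data_only=True)   # placeholder file name
for ws in wb.worksheets:
    with open(f"{ws.title}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in ws.iter_rows(values_only=True):
            writer.writerow(row)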
There is an open-source Git extension that makes Excel workbook files diff- and mergeable: https://github.com/ZoomerAnalytics/git-xltrail (disclaimer, I'm one of the authors)
It installs a custom differ and merger for xls* types and configures Git accordingly so that it behaves the same way as if it were a text file.
For docs and a short video, have a look at https://www.xltrail.com/client
Excel is a bit useless in Git: whether the file is the binary format (xlsb) or the XML-based one (xlsx) does not matter, Git will just copy the whole file and leave it as it is. Thus, it is a bit of a challenge to set up working source control for VBA developers; in general it is accepted that it does not exist and cannot be done (this is what I usually hear), but there are some workarounds, e.g. if you follow MVC and do not put any business logic in the worksheets.
What you can do is simply save the worksheets to CSV and proceed as if working with normal plain text. At the end, even some "manual" merge of formulas is possible, based on the different worksheets (this is the bonus Excel gives).

Delete some columns, re-arrange remaining columns and move processed files for multiple .csv files using SSIS 2008 R2

I googled for some tips on how to crack this, but did not get any helpful hits.
Now I wonder whether I can achieve this in SSIS or not.
There are multiple .csv files in a folder. What I am trying to achieve is to:
Open each .csv file (I would use a parameter, as the file names change)
Delete some columns
Re-arrange the remaining columns in a particular order
Save the .csv file (without the Excel confirmation message box)
Close the .csv file
Move the processed file to another folder
and restart the entire process until all the .csv files in the folder are processed.
Initially I thought I could use the For Each Loop Container and an Execute Process Task to achieve this. However, I was not able to find any resource on how to achieve the above objective.
Example:
Header of every Source .csv file:
CODE | NAME | Value 1 | Value 2 | Value 3 | DATE | QTY | PRICE | VALUE_ADD | ZONE
I need to delete columns: NAME | VALUE_ADD | ZONE from each file and re-arrange the columns in the below order.
Desired column order:
CODE | DATE | Value 1 | Value 2 | Value 3 | PRICE | QTY
I know this is possible within SSIS, but I am not able to figure it out. Thanks for your help in advance.
Easily done using the following four steps:
Use a "Flat file Connection" to open your CSV.
Use a "Flat file Source" component to read your CSV.
Use a "Derived column" component to rearrange your columns.
Use a "Flat file Destination" component to save your CSV.
Et voilà!
After a lot of experimenting, I managed to get the desired result. In the end, it seemed so simple.
My main motive for creating this package was that I had a lot of .csv files that needed the laborious task of opening each file and running a macro that eliminated a couple of columns and rearranged the remaining columns in the desired format. Then I had to manually save each of the files after clicking through the Excel confirmation boxes. That was becoming too much; I wanted a one-click approach.
Here is a detailed description of what I did. Hope it helps people who are trying to get data from multiple .csv files as a source, keep only the desired columns in the order they need, and finally save the desired output as .csv files to a new destination.
In brief, all I had to use was:
a For Each Loop Container
a Data Flow Task within it.
And within the Data Flow Task:
a Flat File Source
a Flat File Destination
2 Flat File Connection Managers - One each for Source and Destination.
Also, I had to use 3 variables, all of String data type with Project scope, which I named CurrFileName, DestFilePath, and FolderPath.
Detailed Steps:
Set default values to the variables:
CurrFileName: Just provide the name of one of the .csv files (test.csv) for temporary purposes.
FolderPath: Provide the path where your source .csv files are located (C:\SSIS\Data\Input)
DestFilePath: Provide the Destination path where you want to save the processed files (C:\SSIS\Data\Input\Output)
Step 1: Drag a For Each Loop Container to the Control Flow area.
Step 2: In collection, select the enumerator as 'Foreach File Enumerator'.
Step 3: Under Enumerator Configuration, under Folder: provide the folder path where the .csv files are located (In my case, C:\SSIS\Data\Input) and in Files:, provide the extension (in our case: *.csv)
Step 4: Under Retrieve file name, select 'Name and extension' radio button.
Step 5: Then go to the Variable Mappings section and select the variable (in my case: User::CurrFileName).
Step 6: Create the source connection (let's call it SrcConnection): right-click in the Connection Managers area, add a new Flat File Connection Manager, and select one of the .csv files (for temporary purposes). Go to the Advanced tab and provide the correct data types for the columns you wish to keep. Click OK to exit.
Step 7: Then go to the Properties of this newly created source Flat File Connection and click the small box next to the Expressions field to open the Property Expressions Editor. Under Property, select ConnectionString, and in the Expression box enter: @[User::FolderPath] + "\\" + @[User::CurrFileName]. Click OK to exit.
Step 8: In Windows Explorer, create a new folder inside your Source folder (in our case: C:\SSIS\Data\Input\Output)
Step 9: Create the destination connection (let's call it DestConnection): right-click in the Connection Managers area, add a new Flat File Connection Manager, and select one of the .csv files (for temporary purposes). Go to the Advanced tab and provide the correct data types for the columns you wish to keep. Click OK to exit.
Step 10: Then go to the Properties of this newly created destination Flat File Connection and click the small box next to the Expressions field to open the Property Expressions Editor. Under Property, select ConnectionString, and in the Expression box enter: @[User::DestFilePath] + "\\" + @[User::CurrFileName]. Click OK to exit.
Step 11: Drag the Data Flow Task to the Foreach Loop Container.
Step 12: In the Data Flow Task, drag a Flat File Source and in the Flat file connection manager: select the source connection (in this case: SrcConnection). In Columns, de-select all the columns and select only the columns that you require (in the order that you require) and click OK to exit.
Step 13: Drag a Flat File Destination to the Data Flow Task and in the Flat File Connection manager: select the destination connection (in this case: DestConnection). Then, go to the Mappings section and verify if the mappings are as per desired output. Click OK to exit.
Step 14: That's it. Execute the package; it should run without any trouble.
Hope this helped :-)
It isn't clear why you want to use SSIS to do this: your task seems to be to manipulate text files outside the database, and it's usually much easier to do this in a small script or program written in a language with good CSV parsing support (Perl, Python, PowerShell, whatever). If this should be part of a larger package then you can simply call the script using an Execute Process task. SSIS is a great tool, but I find it quite awkward for a task like this.
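For example, a minimal Python version of the whole loop, using pandas, the folder paths from the walkthrough above, and the column names from the question, might look like this:
# For every CSV in the input folder: keep and reorder the wanted columns,
# overwrite the file, then move it to the processed-files folder.
from pathlib import Path
import shutil
import pandas as pd

input_dir = Path(r"C:\SSIS\Data\Input")
output_dir = Path(r"C:\SSIS\Data\Input\Output")
output_dir.mkdir(parents=True, exist_ok=True)

wanted = ["CODE", "DATE", "Value 1", "Value 2", "Value 3", "PRICE", "QTY"]

for csv_file in input_dir.glob("*.csv"):
    df = pd.read_csv(csv_file)
    df[wanted].to_csv(csv_file, index=False)                     # drops NAME, VALUE_ADD, ZONE and reorders
    shutil.move(str(csv_file), str(output_dir / csv_file.name))  # move the processed file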
