When using GitHub to store an Excel file, the file seems to revert to an old version every time - excel

I am trying to store an Excel file in GitHub. The file is a master list of "off limits entities" that my team must use and that each of us updates daily with items the full team needs to exclude from their data analyses. Here is our current workflow:
Pull most recent version of "off_limits_entities.xlsx" list before starting work
Edit "off_limits_entities.xlsx" locally (generally, will add 10 - 50 entries in various columns and sheets)
Push "off_limits_entities.xlsx" to github
The problem is that when I do this and one of my teammates then does the same, the next time we pull, both of our files are missing all of the recent additions (even though all of our pushes and pulls seemed to be successful). As a test, I created a smaller .xlsx file (one sheet) and uploaded it to GitHub, then had one of my teammates pull, edit, and push it; when I pulled the same file, it WAS updated appropriately with the new columns and cells. I also considered whether our "off limits entities" file was too large, but it is only 223 KB and the GitHub limit is 100 MB.
Does anyone know why GitHub might be losing/not saving/erasing new entries in the file?

Related

How to prevent users from saving changes to the master Excel file?

I have a fair understanding of basic to intermediate VBA coding. Here's the predicament I am having at work: I am responsible for maintaining a master Excel file that consists of 35 tabs plus macros, event procedures, etc. This file is used by another team (more than 10 people) as the primary tool for carrying out their daily tasks. As the author, I always keep an original copy of this file as a backup for any contingency, and I put a copy of the file in the team folder for the team to use.
However, it sometimes happens that some of the team members open this file in the team folder, make changes (they are told not to) as normal practice, and accidentally save those changes without realising it. That potentially creates an issue for the next (well-behaved) user, who makes a copy of this file, saves it to their own folder, and continues to work with it (good practice), but does not realise there is data left in the workbook from the previous user. This kind of incident could have disastrous consequences if it goes unnoticed.
I am trying to think of a way, or a series of procedures, to resolve this issue; I just do not know where to begin. I was thinking of using the SheetChange or Open event (e.g. upon detecting any change, save the workbook as a new file in a different location). With that I ran into another issue: how do I ensure this event will not interfere with the other events that already exist in the workbook, and in the subsequently saved copy of the workbook?
Any suggestions on structuring the code to accommodate this situation?
Many thanks in advance.
#VBA #event #savechange
I would keep the master copy well hidden from them.
Then, consider putting passwords on sheets they MUST not change.
Or, consider sub-master files for the detail that each team can change and then your master file can link to those sub-files to get the latest data.
I had a project to manage that had 6 team members. I gave them each their own file and linked to their data. I also password-protected the functions so they could not change or delete them.
Save the file as an Excel Macro-Enabled Template (.xltm) file.
This way, on double-clicking the file (as you would to open any other file), it creates a new file and will not automatically overwrite the old file when saving.
Instead of taking copies of the file, your users simply have to 'open' the file then later save as whatever they need to.

Sending a recently created SharePoint file as an attachment with Power Automate

After some months I can say I am getting the hang of Microsoft Flow; however, I could use some help with the following issue:
In a flow for reporting purposes, a temporary file (.xlsx) is created in a SharePoint folder by means of a template. This temporary file is then filled with rows and info from other sources. So far so good.
I use the body of this newly created and furnished file as an attachment for an e-mail to the chief. However, the attachment comes out identical to the (empty) template file, without the rows and furnishing.
Adding a delay of two minutes before attaching and sending the mail solved it for relatively small reports, but this is not ideal as I want it to work regardless of file size. Furthermore, I do not understand why it would send an empty (old) version of the temporary file in the first place, as all the furnishing operations should have executed before copying and attaching (the flow is entirely in series).
Sorry for the long story. Does anyone have a more elegant solution than using a Delay node?

Get specific Version from Source control

I have 34 Word templates in TFS and I'm using VS2012.
Only 32 have been modified and saved under a change set.
I wanted to just extract those modified by that change set.
I made sure my mapped folder was empty before I started.
I used Advanced/Get Specific and then did a get using the changeset number
However, all 34 templates were downloaded into my folder.
The changeset get seems to get all files modified before and up to the change set I requested.
In my case I can pick out the 2 files and remove them. But if I had hundreds of files spread over a dozen folders it would be a nightmare.
Is there a way to get only those files modified by a specific changeset?
"Get Specific" means getting all the files as how they were at the time when ChangeSet was created. It doesn't mean getting only changed files.
Since you are using VS 2012, you could use Team Foundation Power Tools' tfpt GetCS command:
The GetCS tool retrieves all items listed in a changeset for a given changeset version.
This is useful when a co-worker checks in a change that you need to have in your workspace, but you cannot upgrade your entire workspace to the latest version. Use the GetCS tool to get just the items affected by your co-worker's changeset. You can do this without inspecting the changeset to manually list the changed files when using a GetCS command.
There is no graphical user interface for the GetCS tool. To invoke GetCS, type the following command. The parameter changesetnum specifies the changeset number.
tfpt GetCS /changeset:changesetnum
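For example, with an illustrative changeset number (not one from the question):
tfpt GetCS /changeset:4711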

The tree structure of epics and features in TFS has been lost after publishing from the Excel add-in

I use Excel for reporting from TFS, but one day after publishing, the structure of epics and features was destroyed and now all user stories belong to an incorrect feature, resulting in a mess. I think the problem was that I sorted the list before I published it.
I have tried to publish everything again from a correct Excel file backup, but this did not work; the parents are not corrected.
Any help will be much appreciated.
There is no easy way to fix this. When a tree structure is sorted in Excel, all work items will be re-linked to their new parent based on their indentation.
To undo, you have a few options:
Restore the Team Foundation Server Project Collection from backup. This will revert EVERYTHING (sources, builds, tests, work items etc) to the point in time of the backup.
Restore the Team Foundation Server from backup to a new instance. Extract the original parent/child tree structure in Excel, push the correct structure to the production TFS server.
With the original Excel file, you could try copying the data from it into the current (messed up) Excel file and publishing that again.
Use a PowerShell- or C#-based automation to iterate through each work item's history (revisions), find the revision that was created at the time you made the update, and relink the item to the correct parent based on the historical data (a rough sketch follows at the end of this answer). See:
https://oshamrai.wordpress.com/2018/09/11/vsts-rest-api-2-get-work-items/
https://oshamrai.wordpress.com/2018/12/21/azure-devops-services-rest-api-7-add-and-edit-work-item-links/
You can use the Team Foundation Server Client Object Model to access the Work Item data stores.
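As a rough Python sketch of that automation route (an assumption-laden illustration, not the exact method from the links above: it uses the Azure DevOps / TFS REST API, the organisation URL, project name, personal access token, and the child-to-parent mapping recovered from the Excel backup are all placeholders, and instead of walking every revision it assumes you already know the correct parents):
import requests

ORG_URL = "https://dev.azure.com/your-org"   # placeholder collection/organisation URL
PROJECT = "YourProject"                       # placeholder project name
AUTH = ("", "your-personal-access-token")     # placeholder PAT
API = "api-version=6.0"

# Child work item id -> correct parent id, e.g. recovered from the Excel backup.
correct_parents = {101: 42, 102: 42, 103: 57}

for child_id, parent_id in correct_parents.items():
    url = f"{ORG_URL}/{PROJECT}/_apis/wit/workitems/{child_id}?$expand=relations&{API}"
    item = requests.get(url, auth=AUTH).json()
    # Build a JSON Patch document: drop the wrong parent link, add the correct one.
    patch = []
    for index, rel in enumerate(item.get("relations", [])):
        if rel["rel"] == "System.LinkTypes.Hierarchy-Reverse":  # existing parent link
            patch.append({"op": "remove", "path": f"/relations/{index}"})
    patch.append({
        "op": "add",
        "path": "/relations/-",
        "value": {"rel": "System.LinkTypes.Hierarchy-Reverse",
                  "url": f"{ORG_URL}/_apis/wit/workItems/{parent_id}"},
    })
    requests.patch(
        f"{ORG_URL}/{PROJECT}/_apis/wit/workitems/{child_id}?{API}",
        json=patch,
        headers={"Content-Type": "application/json-patch+json"},
        auth=AUTH,
    )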

Updating lines in Excel when changes to S3 occur

In a project I am running an AWS SageMaker Jupyter notebook instance that heavily interacts with files (gathering, converting, computing, interacting), and after every step the files are moved from folder to folder to prepare for the next interaction. I was wondering if there is any way to set up some form of chart (like Excel) that creates/updates a row when a file enters a folder. The chart's end goal is to be used as some form of tracker, to see what stage all the different files are in.
Examples of what the desired chart should look like are below:
Chart Style 1
Chart Style 2
One way in which you could achieve this is by pushing the data programmatically to a central Google Spreadsheet and using the data from there to create charts.
You could use the gspread library in Python to push the data after certain steps in your code run successfully. We use this extensively to push data to Google Sheets every day.
You would need to check the API rate limits enforced by Google Cloud, which can be increased as per your requirements.
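A minimal sketch of that approach (the spreadsheet name, worksheet, service-account credentials file, and the append_stage helper are illustrative assumptions, not part of the original answer):
import datetime
import gspread

# Authenticate with a Google service account that has edit access to the sheet.
gc = gspread.service_account(filename="service_account.json")
worksheet = gc.open("File tracker").sheet1

def append_stage(file_name, stage):
    """Append one tracker row after a processing step completes."""
    worksheet.append_row([file_name, stage, datetime.datetime.utcnow().isoformat()])

# Call after each step in the notebook, for example:
append_stage("report_2021_03.xlsx", "converted")
Each call costs one write request, so batching rows or logging only meaningful stage transitions helps stay within the rate limits mentioned above.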
