I have a location in OneDrive for Business where an .xls file is replaced daily via a flow automation. The data structure and columns stay the same. What I want is to create an Excel Online workbook that gets its data from that daily-replaced .xls file. I tried once, but as soon as the source file was replaced and I clicked Refresh All under Data, the operation ended in an error. Any ideas?
You can use Power Query in that scenario. Depending on the exact circumstances, you could
Get data from Folder
Filter the folder to show only files that contain '.xls' in the file name
If after that you still have more than one file, sort them by date modified and keep only the newest one.
Then process that one remaining file.
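For illustration, here is a minimal sketch of those steps in M, assuming the OneDrive folder is synced locally and the workbook contains a sheet named "Sheet1" (the path and sheet name are placeholders, not from the question):

let
    // Folder that receives the daily file -- the path is a placeholder, adjust to your setup
    files = Folder.Files("C:\Users\me\OneDrive - Company\DailyDrop"),
    // Keep only files with ".xls" in the name
    xlsOnly = Table.SelectRows(files, each Text.Contains([Name], ".xls")),
    // Newest file first, then take just that one
    sorted = Table.Sort(xlsOnly, {{"Date modified", Order.Descending}}),
    newest = sorted{0}[Content],
    // Open the workbook and pull the sheet you need (sheet name assumed)
    data = Excel.Workbook(newest, null, true){[Item="Sheet1", Kind="Sheet"]}[Data],
    promoted = Table.PromoteHeaders(data)
in
    promoted

Because the query targets the folder rather than a specific file, a refresh keeps working after the file is swapped out.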
Related
I have a Power BI file in a local folder.
My inputs are Excel files.
When I receive a new input with new data, in the form of a new Excel file, I only care about that new data; I do not care about any data from the previous file.
Currently, when I receive a new excel input:
I rename the new Excel file, giving it the same name as the file it will replace.
I delete the old Excel file from the folder where it is stored.
I put the new Excel file, now carrying the old one's name, into that same folder.
To provide a concrete example, here is my folder with two files and my Power BI file:
If I get a new file corresponding to "Rabbit employed by Batman", I delete that existing Excel file from the folder.
I rename my new Excel input, calling it "Rabbit employed by Batman".
I place it in the folder alongside the Power BI file.
I feel that this might not be very clever, and I wonder if there is a better way to proceed.
In Power Query, use Excel from folder, sort by date, and choose the latest one. If more than one file remains, add an Index column, ranking over the file names sorted descending, and use the row with index 1 as your binary source.
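A rough sketch of that index step (the folder path is a placeholder, and the column names are those produced by Folder.Files):

let
    files = Folder.Files("C:\Data\Inputs"),
    // Newest file first; the name sort breaks ties
    sorted = Table.Sort(files, {{"Date modified", Order.Descending}, {"Name", Order.Descending}}),
    // Rank the rows so that 1 marks the file to load
    indexed = Table.AddIndexColumn(sorted, "Rank", 1, 1),
    // Keep rank 1 and use its binary content as the source
    source = Table.SelectRows(indexed, each [Rank] = 1){0}[Content],
    workbook = Excel.Workbook(source, null, true)
in
    workbook

This removes the need to rename or delete files by hand; you simply drop the new input into the folder and refresh.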
I recently modernized all my Excel files and started using the magic of Power Query and Power Pivot.
Background: I have 2 files:
- The first one is a "master" with all sales and production logs; everything works inside that Excel file, with Power Query pulling from tables stored in that same file.
- The second one holds a mostly different set of data, about continuous improvement, but I'd like to start linking it with the master file, e.g. with charts that compare efficiency to production.
As it is now, I am linking by entering direct references to cells/ranges in the master file (e.g. [Master.xlsm]!$A1:B2). However, with every new version of the Master file I have to update the links, and that won't scale if I add more documents in the future.
Options:
- Is it possible to store all the queries or data from the Master files in a separate file in the same folder and "call" for it when needed either in my Sales/Production master file or the Manufacturing file? That could be a database or connection file that has the queries to the data stored in the master file.
- If not, what is the best way to connect my Manufacturing file to my Master file without entering the filename explicitly?
My fear is that as soon as the Master file name changes (date, version), I will have to go into the queries and fix all the links again. Additionally, I want to make this future-proof early on, as I plan to gather large amounts of data and start more measurements.
Thanks for your help!
Once you have a data model built, you can create a connection to it from other Excel files. If you are looking for a visible way to control the source path of the connected file, you can add a named range to the Excel file that is connecting to the data model, and in the named range, enter the file path. In Power Query, add a new query that returns your named range (the file path), and swap out the static file path in your queries with the new named range query.
Here is a sample M code that gets the contents of a named range. This query is named "folderPath_filesToBeAudited".
let
    // Read the value stored in the named range "folderPath_filesToBeAudited" in the current workbook
    Source = Excel.CurrentWorkbook(){[Name="folderPath_filesToBeAudited"]}[Content]{0}[Column1]
in
    Source
Here is an example of M code showing how to use the new query to reference the file path.
Folder.Files(folderPath_filesToBeAudited)
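To tie the pieces together for the Master/Manufacturing scenario, a query in the connecting workbook could look roughly like the sketch below. The "Master" name filter and the "Sales" sheet are assumptions made for illustration; only the named-range query comes from the steps above.

let
    // Folder path read from the named-range query defined above
    folderPath = folderPath_filesToBeAudited,
    files = Folder.Files(folderPath),
    // Keep the most recent workbook whose name starts with "Master" (naming assumed)
    masters = Table.SelectRows(files, each Text.StartsWith([Name], "Master")),
    sorted = Table.Sort(masters, {{"Date modified", Order.Descending}}),
    newest = sorted{0}[Content],
    // Open that workbook and pull the sheet you need (sheet name assumed)
    workbook = Excel.Workbook(newest, null, true),
    salesData = workbook{[Item="Sales", Kind="Sheet"]}[Data]
in
    salesData

With this arrangement, a renamed or versioned Master file only requires updating the named range (or nothing at all, if the new version still lands in the same folder).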
Here is a step-by-step article.
https://accessanalytic.com.au/powerquery_namedcells_parameters/
I have a big spreadsheet (Excel file A) which is updated every month. I also created a parametric search in another Excel file (file B) which can pull data from file A. That way, once I send my parametric-search file B to my colleagues, they can always pull fresh data without file B being updated (I only need to update file A monthly to keep the data fresh).
I tried to connect the data using Microsoft Query / web data. However, I noticed that if I use web data, the source link changes every time I update file A, so the file B connection stops working.
(I uploaded file A to JIRA as an attachment. I tried uploading it to SharePoint, but Excel did not recognize the Excel file on SharePoint as an Excel file; it recognized it as an HTML file. So I gave up on SharePoint.)
Is there a better way to achieve what I have described above?
Thanks,
Jennifer.
Since you are using SharePoint, choose From File > From SharePoint folder and input the root URL (e.g. https://companyname.sharepoint.com/sites/workspacename/).
Once you've logged in, you should see a dialog box listing the files in that folder.
Click on Edit to open the query editor.
You likely only want one particular file in there, so click Binary in the row that corresponds to file A, which you should have already uploaded to that space. This will import the Excel file.
Click to expand the Table in the row that corresponds to the table you want to import. This should be the table you keep up to date, and it is what gets loaded in.
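A hand-written query along these lines would do the same thing; the file and table names below are placeholders, and only the site URL comes from the step above:

let
    // Root of the SharePoint site from the answer above
    Source = SharePoint.Files("https://companyname.sharepoint.com/sites/workspacename/"),
    // Keep just file A -- the name is a placeholder
    fileA = Table.SelectRows(Source, each [Name] = "FileA.xlsx"){0}[Content],
    // Open the workbook and pick the table that is kept up to date (table name assumed)
    workbook = Excel.Workbook(fileA, null, true),
    dataTable = workbook{[Item="Table1", Kind="Table"]}[Data]
in
    dataTable

Because the query points at the SharePoint path rather than a temporary download link, replacing file A with a new upload of the same name keeps the connection in file B working.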
I would like my pivot table in Excel to automatically update, on opening or in the background, to the newest edition of data stored as a CSV in a folder.
The CSV files have the same columns and follow the naming convention csvFile_ddmmyy, where the date is substituted. They are produced every day. I would like Excel to update the pivot table's source data to the newest date's data.
Preferably this would happen automatically, but I could also type the date into a certain cell and have a macro take that date and put it into the connection string.
If you can propose any solution to this problem, I'd greatly appreciate it.
Make a copy of the latest CSV and use the same file name every time. Point the data connection to that one file that never changes its file name.
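If you would rather avoid the manual copy, a from-folder query along the lines of the other answers here can pick up the newest file itself. A sketch, with the folder path as a placeholder and the csvFile_ddmmyy naming taken from the question:

let
    // Folder that receives the daily CSVs -- the path is a placeholder
    files = Folder.Files("C:\Data\DailyCsv"),
    csvs = Table.SelectRows(files, each Text.StartsWith([Name], "csvFile_") and [Extension] = ".csv"),
    // Newest file first, then keep just that one
    sorted = Table.Sort(csvs, {{"Date modified", Order.Descending}}),
    newest = sorted{0}[Content],
    // Parse the CSV; adjust the delimiter and encoding to match your files
    parsed = Csv.Document(newest, [Delimiter = ",", Encoding = 65001]),
    promoted = Table.PromoteHeaders(parsed)
in
    promoted

A pivot table built on this query's output can then be refreshed on open without any date typed into a cell.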
What I would like to do is have a single Excel file that pulls in data from every Excel file in a given directory. Specifically, if I have time sheet Excel files from multiple people working on multiple different job numbers, I would like that data populated in a single file covering everyone's time. The directory where the files are stored is updated weekly, so I would want the "master" Excel file to reflect the weekly changes automatically...hopefully. Is there an easy way to do this that I could teach someone else?
Import every file into a database table using a stored procedure and export a single Excel file. You can schedule this as a job. Use OPENROWSET and xp_cmdshell. What technology are you using?
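If staying inside Excel is preferable to a database, the Power Query from-folder pattern used in the answers above also covers this case. A minimal sketch, assuming each timesheet workbook has a sheet named "Timesheet" and all files share the same columns (both assumptions):

let
    // Folder containing everyone's timesheet workbooks -- the path is a placeholder
    files = Folder.Files("C:\Timesheets"),
    workbooks = Table.SelectRows(files, each Text.EndsWith([Name], ".xlsx")),
    // Pull the "Timesheet" sheet out of each workbook, using the first row as headers
    sheets = Table.AddColumn(workbooks, "Data",
        each Excel.Workbook([Content], true){[Item="Timesheet", Kind="Sheet"]}[Data]),
    // Append every person's rows into one table for the master file
    combined = Table.Combine(sheets[Data])
in
    combined

Refreshing the master file after the weekly drop then picks up whatever workbooks are in the directory, which is also straightforward to hand over to someone else.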