After some months I can say I am getting the hang of Microsoft Flow; however, I could use some help with the following issue:
In a flow for reporting purposes, a temporary file (.xlsx) is created in a SharePoint folder from a template. This temporary file is then filled with rows and info from other sources. So far so good.
I then use the body of this newly created and filled file as an attachment for an e-mail to the chief. However, the attachment comes out identical to the (empty) template file, without any of the added rows and data.
Adding a delay of two minutes before attaching and sending the mail solved it for relatively small reports, but this is not ideal, as I want the flow to work regardless of file size. Furthermore, I do not understand why it would send an empty (old) version of the temporary file in the first place, since all the fill operations should have executed before copying and attaching (the flow runs entirely in series).
Sorry for the long story. Does anyone have a more elegant solution than using a Delay action?
Related
I have two XLS files. Both are server-generated: one is from our bank (an account statement) and the other is an internal ledger report from our company's web portal. I am trying to read both files within my Retool app.
The first one is read successfully using the following piece of code (ignore the splices; I am just getting rid of some useless rows).
As you can see, I am getting the correct data from this file (visible in the table).
Now if I try the second file, I get this:
As you can see, the parsed output contains some strange values. I do not know what to do here. I would really appreciate it if someone could point me in the right direction.
This sounds so simple in my head, but Power Automate doesn't like it.
I have a library with a lookup column. I have created a Flow which takes the filename of the document and puts that name into the "Title" column. Then I can use a lookup column on the Title column to find all the files in the library.
I've used "When a file is created or modified", yet this flow runs constantly. No files are being updated or modified at 1 am, yet it still runs over and over. I've had an automated email telling me to fix this before it is disabled.
All I want is for the damn flow to run ONLY when a file is updated or uploaded, just as the trigger's name suggests.
It would seem I need to add trigger conditions, but all the guides I found were about checking whether a specific person has modified the file.
This used to be so simple with workflows: they would only run when something was modified or uploaded.
I am trying to build a flow based on the Power Automate template "Create Planner task and add attachments to SharePoint on new email arrival".
This template works fine in that it saves all the mail attachments to my SharePoint, but it only shows the link to the last attachment in the task.
I have worked around that by adding a string variable and appending all the SharePoint paths to it.
With my flow, everything runs smoothly, but the stored files are about 10–20% bigger than the originals and turn out to be corrupted.
The only difference I can spot in the saving of the file is as follows:
The template section uses "Get attachment" and the corresponding "body('Get attachment')", while in my version I can only select "Get attachment (V2)" and the corresponding "body('Get attachment (V2)')".
There is an option in V2 to allow or disallow chunking, but it has no effect on my file size.
The other difference is that I have my flow create a separate folder based on the task ID, since there were errors if an attachment with the same name arrived a second time. But I have tried my flow without the added folders, and there is no difference in file size.
The original files:
and the corrupted files:
It makes no difference whether I use the SharePoint link the flow adds to my new Planner task or open the files directly within SharePoint; the result is an error.
Can anyone guess why my flow seems to store something extra within the file, thus corrupting it? I can provide the other parts of the flow in more detail too. Here is an overview of my custom flow:
I actually found the answer after rewriting the flow from scratch:
Using the old template had me looking for the wrong information when adding the attachment content to SharePoint. I had always searched for "body", which was used in the template and gave me this
But when searching for "attachment", the dynamic content actually showed me the right pieces. I am not sure whether I missed it before or whether modifying a template hid them somehow. With the rewrite from scratch I found this:
So, to make a long story short: use the "Content Bytes" output of the "Get attachment (V2)" action and everything works fine.
TL;DR: A general file with macros: one to create a duplicate (a 'Project'), one to send an automated mail to request stuff. The last one opens the general file instead of using the duplicate in which I clicked said macro.
In my work I create loads of 'Projects' from one general Excel sheet. These projects have specific information stored in them. At the moment I can make a duplicate from the general file and store it with names, dates, info, etc.
The problem is that I wrote a macro in the general file which uses that specific info and sends automated emails to request some type of file based on it. Now when I do this (run the macro from inside the duplicated file), it opens the general file and uses the leftover info that is still there at that moment.
My guess is that the modules are somehow linked to a certain .xls file, but I'm not sure, nor experienced enough, to figure it out. I hope some of you can help me out.
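A hedged guess at what is going on: a button in the duplicate that is still assigned to a macro stored in the general file forces Excel to open the general file, and any ThisWorkbook references in that macro then point at the general file rather than the duplicate. One reliable fix is to make sure the macro lives inside each Project file (copy the module when you create the duplicate, and re-assign the button), so that ThisWorkbook refers to the duplicate. A minimal sketch of that idea, using hypothetical sheet, cell, and macro names rather than your actual code:

```vba
' Hypothetical sketch - place this module INSIDE each duplicated Project file
' (or copy it in when the duplicate is created), not only in the general file.
Sub SendRequestMail()
    ' ThisWorkbook is always the workbook that contains this code module.
    ' When the module lives in the Project file, this reads the Project's
    ' own data instead of leftover values in the general file.
    Dim wb As Workbook
    Set wb = ThisWorkbook

    ' "ProjectInfo" and "B2" are placeholders for wherever the request info is stored.
    Dim recipient As String
    recipient = wb.Worksheets("ProjectInfo").Range("B2").Value

    MsgBox "Would send a request mail for: " & recipient
End Sub
```

If the code has to stay only in the general file, the macro would instead need to be pointed explicitly at the duplicate (for example by passing the Project workbook in as a parameter) rather than relying on whichever workbook happens to be open.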
Log data from a test is uploaded to a web service, and the processed CSV is downloaded back into Excel for viewing in charts. At the moment, this is done via copy and paste for short CSV files and the Data > From Text feature for larger CSV files. Unfortunately, this takes a bunch of time for every test, and I need to make the process very simple for someone else to update the Excel spreadsheet.
The Excel spreadsheet contains 5 raw-data pages which are used to store the CSV from the server. I have no issues selecting Data > From Text, entering the website URL, and completing the import wizard. This process can be repeated (just like the copy and paste) for all 5 pages to import the data.
This process only allows me to put in one filename, so I am using the same URL for the data, and having PHP return the CSV of the latest (or a specifically configured) test whenever the website is accessed. I've verified that this process is working correctly.
Unfortunately, when I do 'Refresh All', it prompts for a filename unless I go to Data > Connections > Properties, and uncheck 'Prompt for file name on refresh'.
However, even when I do that, I'm getting mixed results. Sometimes only one of the pages will update. (Seems to be the last one I set up.) Sometimes none of them do. I need a solution which updates all 5 pages based on the current CSV from the server without having to set up the connections again every time. Ideally I'd like to just hide these raw data sheets so we can have an Excel file that's just the final charts.
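One possible workaround for the flaky Refresh All behaviour is a small macro that walks every text/web query in the workbook and refreshes it in the foreground, so all 5 raw-data pages are forced to re-pull the current CSV in one step. This is only a sketch, and it assumes the imports were created as classic QueryTables via Data > From Text (sheet names and how you wire it to a button are up to you):

```vba
' Refresh every Data > From Text / web query in this workbook, waiting for
' each one to finish so all raw-data sheets are updated before continuing.
Sub RefreshAllRawData()
    Dim ws As Worksheet
    Dim qt As QueryTable

    For Each ws In ThisWorkbook.Worksheets
        For Each qt In ws.QueryTables
            qt.BackgroundQuery = False        ' run synchronously
            qt.RefreshOnFileOpen = False      ' avoid prompts when the file opens
            qt.Refresh BackgroundQuery:=False ' pull the current CSV from the URL
        Next qt
    Next ws
End Sub
```

Assigning something like this to a button (or calling it from Workbook_Open) would let the other person update all 5 sheets with one click, and the raw-data sheets could then stay hidden behind the chart sheets.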
Surely this is a common function and I am doing something wrong, yet all the guides I try on the Internet don't seem to work. For example, this one:
http://www.kimgentes.com/worshiptech-web-tools-page/2010/8/18/web-connecting-csv-files-as-external-data-to-excel-spreadshe.html
Seems like they only set up one connection. I can get one working to refresh, but not more than one.
I have seen this happen and finally figured it out. There are actually 3 things that can happen to give this result, and a separate solution for each:
First, Excel uses the IE 11 web object when it retrieves data from the web. This means it will be "sticky" to sessions established using IE 11 to access the data. Most websites these days are run on cloud servers, which generate sessions on whichever server takes the load. This normally has no impact on users in a web browser, since they log in and enter their credentials interactively. But when a program accesses a website and must go through a specific web browser, it is bound by the properties of that browser and how it works. I ran into this a lot: I could generate and download my CSV files from the website in Chrome, but then trying to use Excel to import the same files wouldn't work (it would say they weren't there). The solution, at least for now, is to use IE 11: log in to the website, generate the CSV files, and test that they can be downloaded. Then run the web import in Excel, and it should pick up the same sticky session to get the CSV files.
Second, password entry is a different thing, but also has to do with the stickiness of the data. For some reason Excel will not cache your credential responses for logging into a website without you entering them 3 times. This experience may change for you, but I found that I must enter a new credential set (for a new web import of a CSV) 3 times before it becomes permanently cached by Excel. After this, I don't have the problem.
Third, most complex Excel workbooks that require web import may also require that you import local data you downloaded from a website, import data from a website into a sheet, or run more complex objects like macros. All of these need proper permissions. You may need to set your Trust Center settings to allow your Excel program to work this way on your computer. That is part of MS Office. You can add and update trusted locations as per the MS info here:
https://support.microsoft.com/en-us/office/add-remove-or-change-a-trusted-location-7ee1cdc2-483e-4cbb-bcb3-4e7c67147fb4