How to control execution order for unrelated Alteryx IO tasks? - alteryx

I have 3 completely unrelated Excel files. Each needs to be uploaded to a separate database table. Unrelated files, unrelated tables. So I have 3 completely independent Input --> Output structures.
Once all these Input --> Output routines complete, then I have other code I need to execute.
The problem is I want to guarantee my "other" code doesn't start until ALL 3 Excel files get uploaded. How can I BlockUntilComplete for all these 3 Excel files?

Something like the below might work: input the file paths; use multiple Block Until Done tools; filter to the filename you want to work with, use a Dynamic Input to grab it, then do your upload or whatever; then later on, continue to the rest of the workflow. (See picture.)

Related

Using modules from a specific excel file in files generated from said specific one

Tl;dr: I have a general file with macros: one creates a duplicate (a "Project"), another sends automated mail to request stuff. The last one opens the general file rather than using the duplicate in which I clicked the macro.
In my work I create loads of "Projects" from one general Excel sheet. These projects have specific information stored in them. At the moment I can make a duplicate from the general file and store it with names, dates, info, etc.
The problem is, I wrote a macro in the general file which uses the specified info and sends automated emails to request some type of file based on that info. Now when I use the macro inside the duplicated file, it opens the general file and uses the leftover info which is still there at that moment.
My guess is that the modules are somehow linked to a certain .xls file, but I'm not sure nor experienced enough to figure it out. I hope some of you can help me out.

Extract multiple Cognos report definitions

In COGNOS is there a way to get the definitions (filters, selected fields) from a number of reports in a folder?
I've inherited around 500 reports defined in a folder, and they all need to be checked and fixed as they have business errors (not technical errors). If it were possible to get all their definitions in a single extract, it would save an enormous amount of time compared with clicking through each report one by one to get that information.
In ACCESS this can be done with VBA (for query definitions), but I'm not sure if there is a scripting language that can be used with COGNOS to achieve a similar result.
It sounds like you may want to "validate" each of these 500 reports (effectively equivalent to pressing the "validate" button on each individual report if it were open in the authoring studio).
Validation will ensure that a report specification XML is still syntactically correct, references a package which is still present in the content store, references only query items from that package which still exist, generates valid SQL against the underlying datasource, etc.
If that's what you're looking for, an easy way to do batch validation for all 500 reports would be to use MotioPI (it's a free admin tool for Cognos). Here's a short article which walks you through the process:
http://info.motio.com/Blog/bid/70357/Batch-Validation-of-Cognos-Reports
If you're wanting to retrieve the actual report specification (XML) for each of these 500 objects, then you'd need to write a program which utilizes the Cognos SDK to retrieve the specification XML from each of the 500 report objects. After that, you'd need to add logic which examines each of these 500 XML documents, looking for whatever it is you're looking for.
We solved this by exporting the XML of the reports using a SQL query on the content store.
The output is processed with a Python script to convert XML to table layout in CSV format.
This CSV file can easily be imported into Excel.
You might want to process the report XML directly in a SQL query with the xmltable function. In our situation this turned out to be a heavy process we didn't want to burden the content store database with. For a small set of reports it works fine, though.
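As a rough sketch of that post-processing step, assuming one exported specification XML per file: real Cognos report specs are namespaced and version-specific, so element names like "filterExpression" and "dataItem" below are placeholders to adapt to your schema.

import csv
import xml.etree.ElementTree as ET
from pathlib import Path

def spec_to_rows(xml_path):
    # Walk the whole spec and pull out filter and field definitions.
    root = ET.parse(xml_path).getroot()
    for elem in root.iter():
        tag = elem.tag.split('}')[-1]  # strip any XML namespace
        if tag in ('filterExpression', 'dataItem'):
            yield [xml_path.stem, tag, (elem.text or '').strip()]

# One CSV row per filter/field, across every exported spec.
with open('report_definitions.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['report', 'element', 'definition'])
    for path in Path('exported_specs').glob('*.xml'):
        writer.writerows(spec_to_rows(path))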

Pentaho, multiple outputs for multiple inputs

I have been trying to figure out how to set Pentaho to write DIFFERENT files for each input of the job.
My transformation will soon be able to fetch a varying number of .txt files from an FTP. The way my transformation is right now, whatever the number of files it gets from the folder (FTP or local), it generates one big XLS output. The information on the output side is all correct; it matches the data I want to extract with precision. But to keep those files organized, I need Pentaho to create a single output file from a single input.
If files (//PentahoIn0001.txt, //PentahoIn0002.txt, //PentahoIn0003.txt) are processed, I want (//PentahoOut0001.xls, //PentahoOut0002.xls, //PentahoOut0003.xls) to be created; the way it is right now, it's only creating a single file with the data of all three inputs.
So far I have tried several approaches with no result, including posts from here and elsewhere containing helper transformations and jobs to do it, and it simply doesn't work.
Save the output filename in each row, and make sure the rows are sorted on the filename. Then call the Transformation Executor with a new transformation that saves the data. Make sure to enable row grouping on the filename field, and also pass the filename as a parameter to the new transformation.
In the child transformation start with Get rows from result and save the result to the file using the passed filename parameter.

Input data received from an email into a CSV/Excel/LibreOffice Calc file

Having a bit of trouble with a script I am trying to create. Basically, I would like it to send out a reminder email asking for the hours I worked that day; I send a reply, and the script reads the email for the date, start time, and end time, then inputs this data into a CSV/Excel/LibreOffice Calc file, with a new line for each date. I have managed to sort out the email sending and reading part, and putting the data into a variable for the next subroutine (the Excel bit) to read. But I am not sure how to go about this part. I have seen many suggestions of using Text::CSV and other modules, but I'm not certain how to use them. Also, how would I go about making the script append to the end of the document instead of just overwriting it?
Thanks in advance guys
CSV is very easy to read and parse, and Text::CSV is very easy to use too. What specific problems are you having with Text::CSV, and what have you tried?
If you want true Excel format, it looks like you'd need to read the existing contents with something like Spreadsheet::XLSX, write them back out using something like Excel::Writer::XLSX, and append your data as you write the original data back out.
CSV is simpler, though, if you can live with it. You could open the file in append mode and just write to it. To do so, build up your data into columns and "combine" them ($csv->combine(@columns)), then create a string out of that ($csv->string()) that you can append to your original file.
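The answer above describes Perl's Text::CSV; purely to illustrate the same append-mode idea, here is a minimal sketch using Python's csv module (the filename and fields are hypothetical):

import csv
from datetime import date

# Hypothetical values parsed out of the reply email.
row = [date.today().isoformat(), '09:00', '17:30']

# Append mode ('a') adds a new line on each run instead of
# overwriting the file; newline='' lets csv control line endings.
with open('hours.csv', 'a', newline='') as f:
    csv.writer(f).writerow(row)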

"Lights Out" Automated Scheduled Batch Creation of Excel Workbook?

Anybody have a good approach to automate the batch creation of custom-formatted Excel workbooks at a regularly scheduled time or after an event occurs (e.g., file created, table loaded)?
To make things even more interesting, let's further assume that the source data is in a relational database or Unix file, and the results need to be e-mailed or transmitted to a Unix web server. Thanks!
In fact this is not one question but a series of questions. I can see the following distinct questions inside it (assuming that you are going to use a scripting language like Perl or Python or something similar):
The task must be performed
At regular time intervals: use cron
After a predefined event: not much to say here, it depends on what exactly you want.
The data has to be retrieved from:
the database: use your language's bindings for the specific database you are using (example: Python's bindings for sqlite3)
a file (what is a Unix file, anyway?): depending on the format of the file, you can get away with using sed/awk, or write a parser in your scripting (or otherwise :P) language of choice.
The data has to be massaged into an Excel workbook. Well, I am not so sure what you mean by "custom formatted", but the easiest way to create an Excel-readable file is, indeed, just dumping the data to .csv. You can go further and produce a "pseudo-xls" by using the following template and saving the resulting file as .xls (it actually works):
<table>
  <tr>
    <td>field0</td>
    <td>field1</td>
    ..
    <td>fieldX</td>
  </tr>
  ... ad inf
</table>
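And a tiny sketch of filling that template from rows of data (the rows and filename here are made up):

# Dump hypothetical rows into the HTML-table template above and
# save the result with an .xls extension so Excel opens it.
rows = [['field0', 'field1'], ['value0', 'value1']]
with open('report.xls', 'w') as f:
    f.write('<table>\n')
    for row in rows:
        cells = ''.join(f'<td>{cell}</td>' for cell in row)
        f.write(f'<tr>{cells}</tr>\n')
    f.write('</table>\n')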
The resulting file has to be:
emailed: use the mail command, which usually points to the default mailer on your system (exim, sendmail, postfix)
"transmitted" to a web server, - I am assuming here, that this means "transfered to another machine, so that it can be made accessible via http(s)". In that case you can use ftp, sftp or rsync (my favourite).
Sorry for being extremely non-specific, but it is not easy to deduce what exactly you are trying to achieve from your question.
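To make the pieces concrete anyway, here is a minimal end-to-end sketch under some stated assumptions: the source is a sqlite3 database, plain CSV is an acceptable "workbook", and a local SMTP daemon accepts mail. Every table, path, and address below is hypothetical.

import csv
import io
import smtplib
import sqlite3
from email.message import EmailMessage

# 1. Retrieve the data (hypothetical database and table).
conn = sqlite3.connect('/data/reports.db')
rows = conn.execute('SELECT region, total FROM sales').fetchall()
conn.close()

# 2. Dump the query result to CSV in memory.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['region', 'total'])
writer.writerows(rows)

# 3. Mail the CSV as an attachment via the local mailer.
msg = EmailMessage()
msg['Subject'] = 'Nightly sales export'
msg['From'] = 'batch@example.com'
msg['To'] = 'reports@example.com'
msg.set_content('Attached: the nightly sales export.')
msg.add_attachment(buf.getvalue().encode(), maintype='text',
                   subtype='csv', filename='sales.csv')
with smtplib.SMTP('localhost') as s:
    s.send_message(msg)

A crontab entry such as 0 6 * * * /usr/bin/python3 /opt/batch/export.py (hypothetical path) would then cover the "regular time intervals" part.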
Excel can read .csv (comma-separated values) or .tsv (tab-separated values) files. It's "trivial" to dump your output into a CSV (just make sure you escape any commas or tabs in your input), and Excel can then read that.
If you want to produce a true .xls file, you'll have to find a library in your language of choice that implements handling of .xls files. For instance, if you're using Python, there's an entire mailing list devoted to talking about doing this.
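If Python is your choice, one well-known library for writing legacy .xls files is xlwt (my assumption, since the original answer names no package). A minimal sketch of a "custom-formatted" workbook:

import xlwt  # third-party package for writing legacy .xls files

book = xlwt.Workbook()
sheet = book.add_sheet('Data')

# "Custom formatting" in xlwt is done through styles.
bold = xlwt.easyxf('font: bold on')
sheet.write(0, 0, 'field0', bold)
sheet.write(0, 1, 'field1', bold)
sheet.write(1, 0, 'value0')
sheet.write(1, 1, 42)
book.save('output.xls')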
