How to output to the source data file? - Alteryx

I am brand new to Alteryx and am building a workflow that will be reused with several different Excel reports. Each report has a different format (different column headers, etc).
Before running the workflow, I change the Data Input and update the fields in the Select Tool.
At the end of the workflow, I need to output the results to a new sheet within the original Excel workbook.
I know that the Input Tool has the "Output File Name as Field" option, but I cannot figure out how to use that within the Output Tool.
Is there a better way to do this? Right now, I am having to select the new file in the Input Tool and the Output Tool on each run; if I forget to change the output, it will overwrite the sheet in the wrong file.

You can choose a field to determine the file that will be output.
In the Output Data tool, check "Take File/Table Name from Field" and select "Change Entire File Path". You then choose which field contains the output file name. Does that help with your problem?
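If it helps to see the pattern, here is a rough Python sketch of the same idea outside Alteryx: the output path comes from a field carried in the data rather than being hard-coded in the tool. The column name full_path, the sheet name Results, and the use of pandas/openpyxl are illustrative assumptions, not part of the Alteryx configuration.

```python
import pandas as pd

def write_back(results: pd.DataFrame, path_field: str = "full_path") -> None:
    # 'full_path' stands in for the column the Input tool's
    # "Output File Name as Field" option would produce.
    for workbook_path, rows in results.groupby(path_field):
        # mode="a" appends a new sheet to the existing workbook instead of
        # overwriting it (requires openpyxl and assumes the sheet name is new).
        with pd.ExcelWriter(workbook_path, engine="openpyxl", mode="a") as writer:
            rows.drop(columns=[path_field]).to_excel(
                writer, sheet_name="Results", index=False
            )
```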

Related

Can I run a script in Excel that returns dbms_output instead of a query

I have a stored procedure that returns formatted, delimited text using dbms_output.put_line statements. Currently, we run the script in Toad and manually paste the output into Excel, but I was hoping I could cut out a step and get the output directly into Excel. I created a connection and set the properties to run the SP: that works fine (more or less -- the next step would have been to figure out how to supply a parameter). However, since no query is being returned, Excel doesn't recognize that there's anything to be done. Is there any way to do this automagically? Thanks.
ETA: I was just trying to figure out if I could build a cursor by inserting the GET_LINE output into it and return that, but that didn't look like it was going to work out.
If you are using Toad, the most recent versions (10+) allow you to save your output as an Excel file. Earlier versions also allow this, but with different commands.
In the output section at the bottom, right-click on any part of the results:
select "Export Dataset"
select your choice of export format (Excel file)
select a file path and file name
choose whatever options you need, such as saving the SQL on a separate worksheet
press the button in the lower right corner
Even if the output is a comma-delimited CSV, you can then have Excel convert it into a real XLS or XLSX format.
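If you would rather cut Toad out entirely, one programmatic route is to call the procedure yourself and drain the DBMS_OUTPUT buffer. This is only a sketch: the connection string, the procedure name my_report_proc, its parameter, and the comma delimiter are all placeholders for whatever your stored procedure actually uses.

```python
import csv
import cx_Oracle

conn = cx_Oracle.connect("user", "password", "host/service")  # placeholder credentials
cur = conn.cursor()

# Turn on DBMS_OUTPUT buffering for this session (NULL = unlimited buffer).
cur.callproc("dbms_output.enable", (None,))

# Run the stored procedure that writes the delimited report lines.
cur.callproc("my_report_proc", ["some_parameter"])  # hypothetical procedure and argument

# Drain the buffer one line at a time via dbms_output.get_line.
line_var = cur.var(cx_Oracle.STRING)
status_var = cur.var(cx_Oracle.NUMBER)
lines = []
while True:
    cur.callproc("dbms_output.get_line", (line_var, status_var))
    if status_var.getvalue() != 0:  # non-zero status means the buffer is empty
        break
    lines.append(line_var.getvalue() or "")

# Write the already-delimited text to a CSV that Excel can open directly
# (assuming the procedure emits comma-delimited lines).
with open("report.csv", "w", newline="") as f:
    csv.writer(f).writerows(row.split(",") for row in lines)
```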

How to prevent Excel from truncating numbers in a CSV file?

The first few lines of my CSV file look like this (when viewed from Notepad++):
Trace,Original Serial Number,New Serial number
0000073800000000097612345678901234567890,0054,0001
When I open this file in Excel, it truncates the serial numbers and the trace number. I have tried changing the format to Text, but that still doesn't work, as Excel only sees the value up to the 6:
7.38000000000976E+34
If I change it to Number:
73800000000097600000000000000000000.00
What can I do? I only have 60 lines, so if I have to start over and somehow recopy the text into Excel I will, but I'm afraid saving it will change the format once again.
You shouldn't need to start over or alter the existing CSV. The fastest way might be to use Excel's Text Import Wizard. In the Data tab, under Get External Data, click From Text and select your CSV file.
The wizard that appears will let you tell Excel the data type of each "column" and you can tell it to use text for your barcode.
Excel is trying to "help" you by formatting the input values. To avoid this, do not double-click the file to open it. Instead, open the Data tab and, in the Get External Data section, click on From Text.
Then tell the Import Wizard that the fields are Text.
One solution that may work for you, depending on the environment in which you consume the CSV, is to add a nonnumeric character (e.g. a "_") to the beginning and end of the values. This will force Excel to recognize them as text. You can then remove the "_"s in your downstream environment (SQL, Databricks, etc.), or even keep them if they don't interfere with your reporting. A sketch of this follows below.
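A quick sketch of that marker approach, assuming the long values are in the first column and an underscore is an acceptable marker (the file names and the column index are placeholders):

```python
import csv

# Wrap the long numeric strings in a nonnumeric marker ("_") so Excel
# cannot reinterpret them as numbers when the file is opened directly.
with open("serials.csv", newline="") as src, \
        open("serials_protected.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))      # copy the header row unchanged
    for row in reader:
        row[0] = f"_{row[0]}_"         # protect the Trace column (column 0 assumed)
        writer.writerow(row)
```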

How to order input files on Excel step in Pentaho

I'm using an Excel input step in a transformation; I need to process a lot of Excel files in a directory. The problem is that Kettle is processing them in an arbitrary order, so the result is not always what I was hoping for. Is there some way to specify the order for processing the files? I need Spoon to process them by date, starting from the oldest to the newest. Thank you.
Late reply, but maybe still helpful.
You could first use a "Get File Names" step to get the list of the files in the directory. Then you use "Sort Rows" and sort by "lastmodifiedtime" (I don't think there is a "filecreatedtime" available, so that is a risk). Then you write the result to a log. Afterwards you read this log and process the files one by one.
I don't know if there's a reliable way to make PDI process the files in a particular order at the job level.
But what you can do is go to the 'Additional output fields' tab in the Excel input step and specify a field name for the file name (either 'Full filename field' or 'Short filename field'). This will cause your file name to be added as a column in the output of the Excel input step with the name you specify. Then simply flow this through a Sort rows step and sort by that column.
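Outside of PDI, the equivalent of "Get File Names" plus "Sort Rows" on lastmodifiedtime is just a sort on the files' modification times. A minimal Python sketch of the same idea, assuming the workbooks sit in a single directory (the path is a placeholder):

```python
from pathlib import Path

# Collect the Excel files and sort them by last-modified time, oldest first,
# mirroring the "Get File Names" + "Sort Rows" approach described above.
excel_dir = Path("/data/excel_inbox")   # placeholder directory
files = sorted(excel_dir.glob("*.xls*"), key=lambda p: p.stat().st_mtime)

for path in files:
    print(path.name)                     # process each workbook in date order
```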

Configure and link Excel to a delimited file for repeated use

I am dumping data into a tab-delimited file that I would like to view and analyze in Excel. But the file contents change frequently, and I do not want to go through the import steps every time, i.e. define delimiters, column names, etc. Is there a way to save the link metadata in an Excel file so that you can skip the definition steps upon subsequent openings, i.e. so that it knows that the first row contains column names, that it is tab delimited, etc.?
Thanks
Yes, you can. Go through the Get External Data route. Once you set it up, all you have to do next is "Refresh Data". No macro needed.

SharePoint list version history export to Excel

Good day!
I need to export the version history log data for a list item to Excel.
That solution I unfortunately cannot use, because I'm from Russia and the solution only supports the Latin alphabet.
So I need to learn how to extract the data from the version history for a single list item.
Please help. How is this done?
While I haven't found a clean way of doing this, I have a partial workaround.
Use the Export to Excel option in the List section of the ribbon (make sure the list view you export includes a modified column - thanks T6J2E5).
Save the owssvr.iqy file and open with notepad
Copy just the URL from the file and paste it back into your browser, adding "&IncludeVersions=TRUE"
Save the XML file and open in Excel (or your favorite XML viewer), selecting the "As an XML table" open option.
You'll have to delete the first few columns and rows as they contain the schema data but other than that you should have all the version history (I suggest you add the Version column to the view). You can sort by the Modified column to get a chronological change log of the entire list.
FYI "IncludeVersions=TRUE" should be before the List ID for anyone else that needs this.
spurl/_vti_bin/owssvr.dll?XMLDATA=1&IncludeVersions=TRUE&List={ListID}&View={VIEWID}&RowLimit=0&RootFolder=name
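If you want to script the download instead of saving the .iqy file by hand, here is a rough sketch. The site URL, the GUIDs, and the use of NTLM credentials via requests_ntlm are all assumptions about your environment:

```python
import requests
from requests_ntlm import HttpNtlmAuth  # assumption: on-prem SharePoint with NTLM auth

SITE = "https://sharepoint.example.com/sites/mysite"   # placeholder site URL
LIST_ID = "{LIST-GUID}"                                 # placeholder GUIDs
VIEW_ID = "{VIEW-GUID}"

# Same owssvr.dll query as above, with IncludeVersions=TRUE placed before the List ID.
url = (
    f"{SITE}/_vti_bin/owssvr.dll"
    f"?XMLDATA=1&IncludeVersions=TRUE&List={LIST_ID}&View={VIEW_ID}&RowLimit=0"
)

resp = requests.get(url, auth=HttpNtlmAuth("DOMAIN\\user", "password"))
resp.raise_for_status()

# Save the raw XML; open it in Excel with the "As an XML table" option.
with open("version_history.xml", "w", encoding="utf-8") as f:
    f.write(resp.text)
```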
I am facing an error after doing the same, saying that a semicolon is missing. How do I resolve it?
