I have two KTA processes. The first collects email attachments and then uses a .NET activity to flatten them with a command-line utility.
The utility is called via a .vbs script from the KTA .NET activity.
The file is output to a new directory. The second process is supposed to pick it up via the file import connector, but the connector won't detect the file.
If I copy the file there manually, it is collected.
Any other method, such as a Windows Task Scheduler copy, a VB copy, or a KTA process copy, results in the import connector not picking up the file.
I have tried both trigger-based and time-based detection of the modified file.
Any suggestions as to what the problem could be?
Thanks
Shane
I need help with ADF (not DevOps) orchestration. Below is the process flow, with the ADF activities denoted by numbers:
SAP tables ---(1)---> Raw Zone ---(2)---> Prepared Zone ---(3)---> Trusted Zone ---(4)---> sFTP
1. Kafka ingestion (run by ADF)
2. Databricks jar (run by ADF)
3. Databricks jar (run by ADF)
4. ADF Copy activity
The following tasks need to be done:
After files are generated in the Trusted Zone, a synchronous process should copy the files to the sFTP location.
To copy the files to sFTP, it should get all .ctl files (trigger/control files) and compare them with what has been flagged as processed in the JOB_CONTROL table, then copy only the new files that were not processed/copied before.
The copy program should poll for .ctl files and perform the following steps:
a. Copy the .csv file with the same name as the .ctl file.
b. Copy the .ctl file.
c. Insert/update a record in JOB_CONTROL by file type, marking the file as processed successfully. If it is successful, the file will not be considered in the next run.
d. In the event of an error, mark the record with the corresponding status flag so that the same file is considered again in the next run.
Please help me to achieve this.
Regards,
SK
This is my understanding of the ask: you are logging the copied files in a table, and the intent is to initiate the copy of the files which failed.
I think you can use a Lookup activity to read the file(s) which failed and then pass the result to a ForEach (FE) loop. Inside the FE loop you can add a Copy activity (you will have to parameterize the dataset).
HTH
I have an Excel sheet with employee data. My task is to store the data from the Excel file in a MongoDB database > employee collection (one row from the Excel sheet per MongoDB document). I'm doing all this in a React application. I thought of using mongoimport. Since I need the data in CSV or JSON format, I converted my Excel file to CSV using the SheetJS npm package and created a blob file of type CSV. Then, using the command below, I was able to import that CSV file into my MongoDB database.
mongoimport --db demo --collection employees --type csv --headerline --file /path/to/myfile.csv
But I did this from the command line, giving a path on my local disk. Now I'm trying to implement this within my React app. Initially I proceeded with this idea: as soon as I upload an Excel file, I convert it to a CSV file and call a POST API with that CSV file in the body. Upon receiving that request, I call the mongoimport command in my Node.js backend/server so that the data from the CSV file is stored in my MongoDB collection. But I can't find any way to use the mongoimport command programmatically. How can I call mongoimport in my Node.js server code? I couldn't find any documentation regarding it.
If that is not the right way to do this, please suggest an entirely different way of achieving the task.
In layman's terms: I want to import data from an Excel file into a MongoDB database using a React.js/Node.js app.
How are you?
First of all, mongoimport also lets you import TSV files (the same command as for CSV, but with --type tsv), which is often friendlier to use with Excel.
Regarding mongoimport, I regret to report that it cannot be used by any means other than the command line.
What you can do from Node.js is execute commands in the same way they would be executed in a terminal. For this you can use the child_process module and its exec (or execFile) function.
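A minimal sketch of that approach, assuming an Express endpoint and that the uploaded spreadsheet has already been converted to CSV and saved to disk; the route name, file path, and database/collection names below are placeholders, not something from your post:

const { execFile } = require('child_process');
const express = require('express');

const app = express();

// Hypothetical route; assumes the uploaded file has already been written
// to disk as CSV (for example by an upload middleware).
app.post('/import-employees', (req, res) => {
  const csvPath = '/tmp/employees.csv'; // placeholder path

  // Run mongoimport exactly as on the command line, but without a shell.
  execFile(
    'mongoimport',
    ['--db', 'demo', '--collection', 'employees',
     '--type', 'csv', '--headerline', '--file', csvPath],
    (error, stdout, stderr) => {
      if (error) {
        return res.status(500).send(stderr || error.message);
      }
      res.send('Import finished\n' + stdout);
    }
  );
});

app.listen(3000);

execFile takes the arguments as an array, which avoids shell-quoting problems with file paths; child_process.exec also works if you prefer to build the whole command as a single string. Either way, the mongoimport binary has to be installed on the server and available on its PATH.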
I hope that helps even a little.
Regards!
I have an Excel workbook with a hotkey that launches a batch file, which launches a Node script, which updates a CSV file. Technical details on that are further below.
The workbook uses the CSV file as a data source. I can manually update the workbook with the data from the CSV file by going to Data > Refresh All > Refresh All.
Is there any way to trigger an update in the workbook once there is new data in the CSV file, or when the batch file finishes? Conceptually, I'm asking how an external event can trigger something in Excel.
Here are the finer details of the process:
When a hotkey is pressed in the Excel workbook, it launches the Windows console ("cmd.exe") and passes the location of a batch file to be run and the value of the selected cell. The reason the batch file is run this way is probably not relevant to this question, but I know it will be asked, so I'll explain: the batch file is to be located in the same directory as the workbook, which is not to be a hard-coded location. The problem is that launching a batch file/cmd.exe directly defaults to a working directory of C:\users\name\documents. So, to launch the batch file in the same directory as the workbook, the path of the workbook is passed along to cmd.exe with CD [path], which is then concatenated inline with another command that launches the batch file with the value of the selected cell as an argument, like so: CD [path] & batch.bat cellValue
Still with me?
The batch file then launches a Node script, again with the selected cell value as an argument.
The Node script pulls data from the web and dumps it into a CSV file.
At this point, the workbook still has outdated data and needs to be refreshed. How can this be automatic?
I could just start a static timer in VBA after the batch file is launched, which then runs ActiveWorkbook.RefreshAll, but if the batch file takes too long, there will be issues.
I found a solution, although it may not be the most efficient way.
Right now, after Excel launches the batch file, I have it set to loop and repeatedly check the date modified of the CSV file via FileDateTime("filename.csv").
At first, this looping was a concern because Excel would be checking the CSV's modified date hundreds or thousands of times a second, and I worried this would cause resource issues. I could add a one-second delay with the Sleep or Wait functions, but those cause Excel to hang; it would be frozen until the CSV file was updated, if ever, and the user would have to use CTRL+BREAK in an emergency.
I was able to use a solution that just loops and performs DoEvents while checking, until a certain amount of time has passed. This way, Excel is still functional during the wait (a rough sketch follows below). More info on that here: https://www.myonlinetraininghub.com/pausing-or-delaying-vba-using-wait-sleep-or-a-loop
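For reference, a minimal VBA sketch of that pattern, assuming the CSV sits next to the workbook; the file name and the 30-second timeout are made up for illustration:

Sub WaitForCsvThenRefresh()
    Dim csvFile As String
    Dim originalStamp As Date
    Dim deadline As Double

    ' Hypothetical file name; assumed to be in the same folder as the workbook.
    csvFile = ThisWorkbook.Path & "\data.csv"
    originalStamp = FileDateTime(csvFile)

    ' Poll for up to 30 seconds without freezing Excel.
    deadline = Timer + 30
    Do While Timer < deadline
        DoEvents
        If FileDateTime(csvFile) > originalStamp Then
            ActiveWorkbook.RefreshAll
            Exit Sub
        End If
    Loop
End Sub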
I am doing PCA on CIFAR-10 images on the IBM Watson Studio free version, so I uploaded the Python file for downloading CIFAR-10 to the studio (screenshot below).
But when I try to import cache, the following error is shown (screenshot below).
After spending some time on Google I found a solution, but I can't understand it.
Link: https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html
The solution is as follows:
Click the Add Data icon, and then browse to the script file or drag it into your notebook sidebar.
Click in an empty code cell in your notebook and then click the Insert to code link below the file. Take the returned string, and write to a file in the file system that comes with the runtime session.
To import the classes to access the methods in a script in your notebook, use the following command:
For Python:
from <python file name> import <class name>
I can't understand this part:
"and write to a file in the file system that comes with the runtime session."
Where can I find the file that comes with the runtime session? Where is this file system located?
Can anyone please help me with the details of where to find that file?
You get the import error because the script that you are trying to import is not available in your Python runtime's local filesystem. The files (cache.py, cifar10.py, etc.) that you uploaded are stored in the object storage bucket associated with the Watson Studio project. To use those files, you need to make them available to the Python runtime, for example by downloading the script to the runtime's local filesystem.
UPDATE: In the meantime there is an option to directly insert the StreamingBody objects. This will also include all the required credentials. If you use the insert StreamingBody object option, you can skip ahead to the part of this answer about writing it to a file in the local runtime filesystem.
Otherwise, you can use the code snippet below to read the script into a StreamingBody object:
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3

def __iter__(self): return 0

os_client = ibm_boto3.client(service_name='s3',
    ibm_api_key_id='<IBM_API_KEY_ID>',
    ibm_auth_endpoint="<IBM_AUTH_ENDPOINT>",
    config=Config(signature_version='oauth'),
    endpoint_url='<ENDPOINT>')

# Your data file was loaded into a botocore.response.StreamingBody object.
# Please read the documentation of ibm_boto3 and pandas to learn more about the possibilities to load the data.
# ibm_boto3 documentation: https://ibm.github.io/ibm-cos-sdk-python/
# pandas documentation: http://pandas.pydata.org/
streaming_body_1 = os_client.get_object(Bucket='<BUCKET>', Key='cifar.py')['Body']

# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(streaming_body_1, "__iter__"): streaming_body_1.__iter__ = types.MethodType(__iter__, streaming_body_1)
And then write it to a file in the local runtime filesystem.
f = open('cifar.py', 'wb')
f.write(streaming_body_1.read())
This opens a file with write access and calls the write method to write to the file. You should then be able to simply import the script.
import cifar
Note: You can get the credentials like IBM_API_KEY_ID for the file by clicking on the Insert credentials option on the drop-down menu for your file.
The instructions that the OP found are missing one crucial line of code. I followed them and was able to import modules, but I wasn't able to use any functions or classes in those modules. This was fixed by closing the file after writing. This part in the instructions:
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
should instead be (at least this works in my case):
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()
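As a side note, an equivalent way to get the same guarantee is a with block, which closes the file automatically (same placeholder file name and streaming_body_1 object as above):

with open('<myScript>.py', 'wb') as f:
    f.write(streaming_body_1.read())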
Hopefully this helps someone.
I have a .qvw file with a SQL query:
Data:
LOAD source, color, date;
select source, color, date
from Mytable;
STORE Data into [..\QV_Data\Data.qvd] (qvd);
Then I export the data to Excel and save it.
I need something to do that automatically instead of me.
I need to run the query every day and automatically send the data to Excel, but keep the old data in Excel and append the new values.
Can QlikView do that?
For that you would need to create some crazy macro that runs after a reload task via an on-open trigger. If you schedule a Windows task that executes a .bat file calling qlikview.exe with the file path as a parameter and the /r flag for reload, you can probably accomplish this... there is a lot of code from similar projects to be found on Google.
I suggest adding this to the load script instead:

STORE Data into [..\QV_Data\Data.csv] (txt);

and then open that file in Excel.
If you need to append data, you could concatenate the new data onto the previous data, something like:
Data:
LOAD * FROM [..\QV_Data\Data.csv] (txt);

//add latest data
Concatenate(Data)
LOAD source, color, date FROM ...

STORE Data into [..\QV_Data\Data.csv] (txt);
I assume you have the Desktop version, so you don't have access to the QlikView Management Console (if you do, that is obviously the best way).
So, without the Console, you should create a text file with this command: "C:\Program Files\QlikView\Qv.exe" /r "\\thePathToYourFile\test.qvw". Save this file with a .cmd extension. After that, you can schedule this command file with the Windows Task Scheduler.