store .db file into application jar file - java-me

In my J2ME application, the RecordStore is named "UserAns", and the entered data is stored in the '0000003-User-Ans.db' file.
I can fetch the data from '0000003-User-Ans.db' and display it on the emulator.
But when I run the application on a device, it can't display any data.
So, how can I add the '0000003-User-Ans.db' file to the application JAR file?

RecordStore internals are device-dependent, so a .db file that works on the emulator will not work on real devices.
However, you could store your initial data in a different format, say CSV, and have an import procedure run the very first time the application is executed (your RecordStore will be empty at that point).
Update after comments
Let's say you only have one "table" to initiate: Product, with columns id, name, and price, and a Product Java class with corresponding attributes. A CSV sample would be:
1, Pen, 1
2, Clip, 0.05
3, Eraser, 0.5
Have this content stored in a file inside the jar, for example products.csv. Open this file and parse each line to create a Product instance, convert each instance to a byte array, and create a record entry for each of them, as in the sketch below.
The next time the application is started, the RecordStore will not be empty, and a Product instance can be created from each record found.
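A minimal sketch of that first-run import, assuming a /products.csv resource inside the jar and a record store named "Products" (both names are illustrative):

import java.io.*;
import javax.microedition.rms.*;

public class ProductImporter {

    public void importIfEmpty() throws IOException, RecordStoreException {
        RecordStore rs = RecordStore.openRecordStore("Products", true);
        try {
            if (rs.getNumRecords() > 0) {
                return; // data was already imported on an earlier run
            }
            InputStreamReader in = new InputStreamReader(
                    getClass().getResourceAsStream("/products.csv"));
            StringBuffer line = new StringBuffer();
            int c;
            while ((c = in.read()) != -1) {
                if (c == '\n') {
                    String row = line.toString().trim();
                    if (row.length() > 0) {
                        addRecord(rs, row);
                    }
                    line.setLength(0);
                } else {
                    line.append((char) c);
                }
            }
            String last = line.toString().trim();
            if (last.length() > 0) { // last line without a trailing newline
                addRecord(rs, last);
            }
        } finally {
            rs.closeRecordStore();
        }
    }

    // Parses one "id, name, price" line and stores it as a single record.
    private void addRecord(RecordStore rs, String csvLine)
            throws IOException, RecordStoreException {
        int first = csvLine.indexOf(',');
        int second = csvLine.lastIndexOf(',');
        int id = Integer.parseInt(csvLine.substring(0, first).trim());
        String name = csvLine.substring(first + 1, second).trim();
        String price = csvLine.substring(second + 1).trim();

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        dos.writeInt(id);    // read the fields back in the same order
        dos.writeUTF(name);
        dos.writeUTF(price); // price kept as a string for simplicity
        byte[] data = bos.toByteArray();
        rs.addRecord(data, 0, data.length);
    }
}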

Related

Data Factory Data Flow sink file name

I have a data flow that merges multiple pipe-delimited files into one file and stores it in an Azure Blob container. I'm using a file pattern for the output file name: concat('myFile' + toString(currentDate('PST')), '.txt').
How can I grab the file name that's generated after the data flow has completed? I have other activities to log the file name into a database, but I'm not able to figure out how to get the file name.
I tried #{activity('Data flow1').output.filePattern} but it didn't help.
Thank you
You can use the Get Metadata activity to get the file name that is generated after the data flow.
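For example, assuming the sink writes into a known folder, you could point a Get Metadata activity at a dataset for that folder with the childItems field selected, and then reference the returned file names from later activities (the activity name here is illustrative):

@activity('Get Metadata1').output.childItems[0].name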

BigQuery howto load data from local file as content

I have a requirement wherein I will receive file content which I need to load into BigQuery tables. The standard API shows how to load data from a local file, but I don't see any variant of the load method which accepts the file content as a string rather than a file path. Any idea how I can achieve this?
As we can see in the source code and the official documentation, the load function loads data only from a local file or a Cloud Storage file. The allowed formats are:
AVRO,
CSV,
JSON,
ORC,
PARQUET
The load job is created and will run your data load asynchronously. If you would like instantaneous access to your data, insert it using the table insert function, where you provide the rows to insert into the table:
// Assumes a table handle from the Node.js client library
// (dataset and table names are illustrative):
// const {BigQuery} = require('@google-cloud/bigquery');
// const table = new BigQuery().dataset('myDataset').table('myTable');

// Insert a single row
table.insert({
  INSTNM: 'Motion Picture Institute of Michigan',
  CITY: 'Troy',
  STABBR: 'MI'
}, insertHandler);
If you want to load e.g. a CSV file, you first need to save the data to a CSV file in Node.js manually. Then load it as a single-column CSV using the load() method; that will load the whole string as a single column.
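A sketch of that workaround, assuming content holds the received string and table is a Table handle as above (the temporary file name is illustrative):

const fs = require('fs');
const os = require('os');
const path = require('path');

// write the received string to a temporary file...
const tmpFile = path.join(os.tmpdir(), 'content.csv');
fs.writeFileSync(tmpFile, content);

// ...and pass that path to load(), which starts an asynchronous load job
table.load(tmpFile, {sourceFormat: 'CSV'}, (err, apiResponse) => {
  if (err) throw err;
  console.log('Load job completed');
});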
Additionally, what I can recommend is to use Dataflow templates, e.g. Cloud Storage Text to BigQuery, which reads text files stored in Cloud Storage, transforms them using a JavaScript user-defined function (UDF), and outputs the result to BigQuery. But the data to load needs to be stored in Cloud Storage.

Read data from .txt file or .cs file and update the file contents in sql database using python

How do I write an automation script for the example below?
The tables already exist; I only need to read the data from the .txt file and update the table columns in sequence.
ex: file1.txt
a,b,c,b,e
DB:
TABLE NAME: SATYA
COLUMN NAME: 1|2|3|4|5
UPDATED DB should be:
COLUMN NAME: 1|2|3|4|5
UPDATED VALUES: a b c b e
You put the robotframework tag, so I guess that you want to do it in Python but call the function from Robot.
Why would you want to develop this function in Python and lose all of Robot's abstraction?
DatabaseLibrary was made to work with databases; already from the documentation it is possible to understand how to make a keyword to update the db.
Link to the library, with examples, below:
https://franz-see.github.io/Robotframework-Database-Library/api/0.5/DatabaseLibrary.html
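For example, a minimal sketch using that library's documented keywords (the connection parameters and SQL are illustrative):

*** Settings ***
Library    DatabaseLibrary

*** Test Cases ***
Update Satya Table
    Connect To Database    pymysql    mydb    user    pass    localhost    3306
    Execute Sql String    UPDATE SATYA SET `1`='a', `2`='b', `3`='c', `4`='b', `5`='e'
    Disconnect From Database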

Stream Analytics job reference data join creating duplicates

I am using Stream Analytics to join streaming data (via IoT Hub) and reference data (via blob storage). The reference data blob file is generated every minute with the latest data and is in the format "filename-{date} {time}.csv". The reference blob file data is used as parameters to an Azure Machine Learning function in the SA job. The output of the Stream Analytics job (into Azure SQL or Power BI) seems to be generating multiple rows instead of one for the Azure Machine Learning function's output, one for each set of parameter values from the previous blob files. My understanding is that it should only use the latest blob file's content, but it looks like it is using all the blob files and generating multiple rows from the AML output. Here is the query I am using:
SELECT
    AMLFunction(Ref.Input1, Ref.Input2), *
FROM IoTInput Stream
LEFT JOIN RefBlobInput Ref ON Stream.DeviceId = Ref.[DeviceID]
Please can you advise whether the query or the file path needs changing to avoid duplicating records? Thanks
For only the latest file to take effect, you need to store your files in a particular folder structure.
If you noticed, whenever you select a reference data file as an input, the input dialog asks you for a folder structure along with a date and time format.
Stream Analytics always looks for the reference file in the latest {date}/{time} folder, i.e. you need to store your file like:
2018-01-25/07-30/filename.json (YYYY-MM-DD/HH-mm/filename.json)
NOTE: The time folder needs to be unique for each minute, and likewise the date folder needs to be unique for each date. Whenever you create a new file, create it under a new time-stamp folder under the current date folder.
You can use any date/time format that the stream input supports.

Qlikview - append data to Excel

I have a qvw file with a SQL query:
Data:
LOAD source, color, date;
SQL SELECT source, color, date
FROM Mytable;
STORE Data into [..\QV_Data\Data.qvd] (qvd);
Then I export the data to Excel and save it.
I need something to do that automatically instead of me.
I need to run the query every day and automatically send the data to Excel, but keep the old data in Excel and append the new values.
Can QlikView do that?
For that you would need to create some crazy macro that runs after a reload task, in an on-open trigger. If you schedule a Windows task that executes a bat file with the path to qlikview.exe, the file path as a parameter and the /r switch for reload, you can probably accomplish this... there is a lot of code from similar projects to be found on Google.
I suggest adding this to the load script instead:
STORE Data into [..\QV_Data\Data.csv] (txt);
and then opening that file in Excel.
If you need to append data, you could concatenate the new data onto the previous data, something like:
Data:
LOAD * FROM [..\QV_Data\Data.csv] (txt);

// add latest data
Concatenate (Data)
LOAD source, color, date FROM ...;

STORE Data into [..\QV_Data\Data.csv] (txt);
I assume you have the desktop version, so you don't have access to the QlikView Management Console (if you do, that is obviously the best way).
So, without the Console, you should create a text file with this command: "C:\Program Files\QlikView\Qv.exe" /r "\\thePathToYourFile\test.qvw". Save this file with a .cmd file extension. After that you can schedule this command file with the Windows Task Scheduler.
