DB2 ANTLR4 grammar file

For my project I need a DB2 grammar file for ANTLR4, i.e. Lexer.g4 and Parser.g4 files. I searched on Google but could not find one. If it is not available, please suggest which grammar file is suitable for the DB2 database.


How to reference the most current Physical Sequential (PS) file in JCL

I want to create a job that takes the latest available file as its input file.
The file name format is as follows: FILE1.TEST.TYYMMDD
Is there any way to identify the latest file, based on the date present in the file name, via JCL?
P.S. GDG versions are not created in the existing process; only a PS file is created.
Thank you
No.
You indicate that GDGs are not created in the existing process. GDGs would be the best way to accomplish your goal. Absent GDGs, you must write code.
You could accomplish your goal by writing (C, clist, COBOL, PL/I, Rexx) code using the LMDINIT and LMDLIST ISPF services. Then you would execute your code by running ISPF in batch. Many mainframe shops have a cataloged procedure to execute ISPF in batch.
I agree with cschneid that there is no platform way to handle this with plain PS data sets. However, I want to point out that GDGs are the platform way of managing PS files for access in relative form.
Your comment:
GDG versions are not created in existing process. Only PS file is created.
That statement didn't make sense to me. GDGs are not a file type like physical sequential (PS) or partitioned (PO); they are a convention that allows relative references to files created over time, which sounds like what you want. I've only seen GDGs used for PS files.
Putting the date in the file name can have its uses, but to z/OS it is only part of the file name, not metadata that the system operates on (unlike the G0000V00 generation numbers in GDGs).

MapDB file types

I have a problem with MapDB version 1.0.6. When I create a database, I end up with two files with the same name but with different file types.
One is, for example, IRTree with file type FILE, and the other is IRTree with file type .p.
Whenever I try to read my database providing the filename IRTree, I end up with an exception:
a NullPointerException from the call DBMaker.newFileDB(new File(filename)).readOnly().make(); or an IOException: storage header is invalid.
Can anyone explain to me what's going on?
MapDB uses two files. The .p file is used to store data. Always open the file without the extension; otherwise MapDB will try to open the wrong file.
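For example, with MapDB 1.0.x the store created as IRTree plus IRTree.p should open roughly as in the sketch below; the collection name "entries" is only an assumed example.

import java.io.File;
import java.util.Map;

import org.mapdb.DB;
import org.mapdb.DBMaker;

public class OpenIrTree {
    public static void main(String[] args) {
        // Pass the base name only; MapDB locates both "IRTree" and "IRTree.p" itself.
        DB db = DBMaker.newFileDB(new File("IRTree"))
                       .readOnly()
                       .make();
        Map<Object, Object> entries = db.getTreeMap("entries"); // assumed collection name
        System.out.println("records: " + entries.size());
        db.close();
    }
}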

Stanford CoreNLP model sentiment.ser.gz missing?

I am new to Stanford CoreNLP and trying to use it. I was able to run the sentiment analysis pipeline and the CoreNLP software. However, when I try to execute the Evaluate tool, it asks for the model sentiment.ser.gz.
java edu.stanford.nlp.sentiment.Evaluate edu/stanford/nlp/models/sentiment/sentiment.ser.gz test.txt
I could not find the model in the software that I downloaded from the Stanford site, or anywhere else on the internet.
Can someone please advise whether we can create our own model, or where I can find it on the internet?
Appreciate your help.
The file stanford-corenlp-full-2014-01-04.zip contains another file called stanford-corenlp-3.3.1-models.jar. The latter file is a ZIP archive that contains the model file you are looking for.
CoreNLP is able to load the model file from the classpath if you add stanford-corenlp-3.3.1-models.jar to your Java classpath, so you do not have to extract anything.
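As a quick way to confirm that the model really is picked up from the models jar, a pipeline using the sentiment annotator can be built as in the sketch below. The class name and the test sentence are invented for illustration; it assumes stanford-corenlp-3.3.1-models.jar is on the classpath.

import java.util.Properties;

import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SentimentSmokeTest {
    public static void main(String[] args) {
        // The "sentiment" annotator loads
        // edu/stanford/nlp/models/sentiment/sentiment.ser.gz from the classpath,
        // i.e. from stanford-corenlp-3.3.1-models.jar.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,parse,sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = pipeline.process("This movie was surprisingly good.");
        System.out.println("Annotated " + doc.toString().length() + " characters.");
    }
}

Compile and run it from the directory into which you extracted CoreNLP, for example with java -cp "*:." SentimentSmokeTest.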
It also appears the documentation on running the Evaluate tool is slightly outdated.
The correct call goes like this (tested with CoreNLP 3.3.1 and the test data downloaded from the sentiment homepage):
java -cp "*" edu.stanford.nlp.sentiment.Evaluate -model edu/stanford/nlp/models/sentiment/sentiment.ser.gz -treebank test.txt
The -cp "*" adds all JAR files in the current directory to the classpath. Thus, the command above must be executed in the directory to which you extracted CoreNLP, otherwise it will not work.
If you do not add the "-model" and "-treebank" flags to the call, you'll get an error message like this:
Unknown argument test.txt
If you do not supply a treebank and a model, you get another error message:
Exception in thread "main" java.lang.NullPointerException
at java.io.File.<init>(File.java:277)

Node.js: rename files incrementally

I have been using the Node.js file system module to perform various file-related operations. I need to check whether a file name already exists in a directory and, if it does, add a suffix to the end of the new file's name, the way Windows handles duplicate file names.
If TestFile.txt already exists and another file with the same name comes in during processing, the new file should be renamed to TestFile (1).txt, and the next file with the same name should be renamed to TestFile (2).txt.
What would be the best way to achieve this? Do I have to keep a temporary array of all file names and traverse it for each file? This is a multi-threaded environment and there could be 50,000+ documents coming in for processing.
Thanks a ton.
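A rough sketch of the approach described above, probing for a free name and appending a numeric suffix until the create succeeds, could look like the following. It is written in Java purely as an illustration of the logic; in Node.js the same idea maps onto opening the file with the exclusive 'wx' flag so that the existence check and the creation happen in one atomic step, which also helps with the multi-threading concern. The helper name nextAvailableFile is made up.

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class IncrementalRename {

    // Tries "TestFile.txt", then "TestFile (1).txt", "TestFile (2).txt", ...
    // Files.createFile is atomic, so concurrent writers cannot grab the same name.
    static Path nextAvailableFile(Path dir, String name) throws IOException {
        int dot = name.lastIndexOf('.');
        String base = dot >= 0 ? name.substring(0, dot) : name;
        String ext  = dot >= 0 ? name.substring(dot) : "";
        for (int i = 0; ; i++) {
            String candidate = i == 0 ? name : base + " (" + i + ")" + ext;
            try {
                return Files.createFile(dir.resolve(candidate));
            } catch (FileAlreadyExistsException e) {
                // Name already taken; try the next suffix.
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path created = nextAvailableFile(Paths.get("."), "TestFile.txt");
        System.out.println("Created " + created);
    }
}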

Proper way to differentiate pst and dbx files in bash shell

I want to identify the file format of the input file given to my shell script, whether it is a .pst or a .dbx file. I checked "How to check the extension of a filename in a bash script?". That question deals with txt files, and two methods are given there:
check if the extension is txt
check if the MIME type is application/text, etc.
I tried file -ib <filename> on a .pst and a .dbx file, and it showed application/octet-stream for both. However, if I just run file <filename>, then I get this for the dbx file:
file1.dbx: Microsoft Outlook Express DBX File Message database
and this for the pst file:
file2.pst: Microsoft Outlook binary email folder (Outlook >=2003)
So, my questions are:
Is it better to use MIME type detection every time, when the output can be anything and we need a proper check?
How can a MIME type check be applied in this case, when both files return "application/octet-stream"?
Update
I didn't want to do extension-based detection because it seems we just can't be sure, on a Unix system, that a .dbx file truly is a dbx file. file <filename> returns a line containing the correct information about the file (e.g. "Microsoft Outlook Express DBX File Message database"), which means the file command is able to identify the file type properly. So why does it not return the correct information for the file -ib <filename> command?
Would parsing the string output of file <filename> be fine? Is it advisable, given that I only need to identify a narrow set of data storage files from the Outlook family (MS Outlook Express, MS Office Outlook 2003, 2007, 2010, etc.)? A small text identifier like application/dbx that could be compared would be all I need.
The file command relies on having a file type detection database which includes rules for the file types that you expect to encounter. It may not be possible to recognize these file types if the file content doesn't have a unique code near the beginning of the file.
Note that the -i option to emit mime types actually uses a separate "magic" numbers file to recognize file types rather than translating long descriptions to file types. It is quite possible for these two databases to be out of sync. If your application really needs to recognize these two file types I suggest that you look at the Linux source code for "file" to see how they recognize them and then code this recognition algorithm right into your app.
If you want to do the equivalent of DOS file type detection, then strip the extension off the filename (everything after the last period) and look up that string in your own table where you define the types that you need.
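To illustrate the "code the recognition right into your app" route, the sketch below reads the first four bytes of the file and compares them with the signatures commonly documented for these formats: PST files start with the ASCII bytes "!BDN" and Outlook Express DBX message databases with CF AD 12 FE. Treat those signatures as assumptions to verify against the magic database on your own system; the method name detectOutlookType and the returned labels are made up.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;

public class OutlookFileType {

    // Signatures as commonly documented; verify against your system's magic database.
    private static final byte[] PST_MAGIC = {0x21, 0x42, 0x44, 0x4E};                       // "!BDN"
    private static final byte[] DBX_MAGIC = {(byte) 0xCF, (byte) 0xAD, 0x12, (byte) 0xFE};

    static String detectOutlookType(Path file) throws IOException {
        byte[] head = new byte[4];
        try (InputStream in = Files.newInputStream(file)) {
            if (in.read(head) < 4) return "unknown";
        }
        if (Arrays.equals(head, PST_MAGIC)) return "application/x-pst"; // made-up label
        if (Arrays.equals(head, DBX_MAGIC)) return "application/x-dbx"; // made-up label
        return "unknown";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(detectOutlookType(Paths.get(args[0])));
    }
}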
