How can we add a prefix to file fields in a CLLE program?

We use the PREFIX keyword in an RPGLE program to add a prefix to file fields, like below.
FTESTPF   IF   E           K DISK    Prefix(#)
Is there any alternative to the PREFIX keyword in CLLE?

CL is not intended as a database language, so it doesn't support as many features as RPG or other business languages. There shouldn't be a need for a "prefix" in CL.
However, the OPNID() parameter of DCLF provides a basic "prefix" capability.
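For example, a minimal ILE CL sketch (assuming a physical file TESTPF with a character field CUSTNO; when OPNID is specified, each field variable is declared as &openid_fieldname):
PGM
DCLF       FILE(TESTPF) OPNID(T1)
DCL        VAR(&MSG) TYPE(*CHAR) LEN(32)
RCVF       OPNID(T1)
/* With OPNID(T1), the field CUSTNO is addressed as &T1_CUSTNO */
CHGVAR     VAR(&MSG) VALUE(&T1_CUSTNO)
ENDPGM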

Related

How do you extend a library-created variable in onvif python?

onvif python will create base variables from WSDL but not the optional elements. How do I add the optional variables to the existing definition?
as in a = create(sometype)
This defines the elements a.b and a.c.
I need to add elements a.c.d, a.c.e.g and a.c.e.h.
The short answer: It depends on what the existing variable is.
The longer answer: since the existing variable is defined by a third-party library with little to no visibility, run the code under a debugger that will tell you what the existing variable is, e.g., list, dict, etc. From that information, look in the Python documentation if you are not familiar with that type of variable.
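For example, a quick inspection pass (a sketch; create and sometype are the placeholders from the question's pseudocode):
a = create(sometype)

print(type(a))      # the concrete class the library returned
print(type(a.c))    # the concrete type of the sub-element you want to extend
print(vars(a.c))    # its instance attributes, if it is a plain object

# or stop here and poke at it interactively:
import pdb; pdb.set_trace()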

How to use jOOQ Java Generator includes and excludes

The jOOQ Java code generation tool uses regular expressions defined in the includes and excludes elements to control what is generated. I can't find an explanation of the schema structure that these expressions are run against.
I want the ability to exclude specific databases on the server, as well as tables, either by prefix or by name.
Simple examples:
Given a SQL server with two DBs 'A' and 'B', how do I instruct jOOQ to only generate for tables in DB 'A'?
How do I instruct jOOQ to only generate for tables starting with the prefix "qtbl"?
It would be great if there were some example use cases available showing some simple common configurations.
The jOOQ manual section about includes and excludes, as well as a few other sections that explain the code generator's usage of regular expressions to match identifiers, establishes that the code generator will always try to:
Match fully qualified identifiers
Match unqualified identifiers
Or, if you're using jOOQ 3.12+ and did not turn off <regexMatchesPartialQualification/>:
Match partially qualified identifiers (see #7947)
For example:
<excludes>
  (?i: # Using a case-insensitive regex for the example
    database_prefix.*?\. # Match a catalog prefix prior to the qualifying "."
    .*?\.                # You don't seem to care about schema names, so match them all
    table_prefix.*?      # Match a table prefix at the end of the identifier
  )
</excludes>
In addition to the above, if you want to exclude specific databases ("catalogs") from being generated without pattern matching, you'll get even better results by specifying <inputCatalog>A</inputCatalog>. See also the manual's section about schema mapping.
Benefits include much faster code generation, because only that catalog will be searched for objects to generate, prior to excluding them again using regular expressions. So, your configuration could be:
<!-- Include only database A -->
<inputCatalog>A</inputCatalog>
<!-- Include only tables with this (unqualified) prefix -->
<includes>qtbl.*</includes>

Is there a configuration file format that supports colons in the identifiers and multiline values?

It would be super convenient for my users if my application's configuration file could support identifiers that have colons in the name, and multi-line values, i.e.:
base:sub:sub-sub13 =
    complicated multi * worded string #1
    multi word strin6 #234
And then I read it with my Python program like this...
config = ConfigObject('new-config')
print(config['base:sub:sub-sub13'])
Is there a widely used configuration format that supports this kind of format, and has a Python library already in existence?
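One possibility (a sketch, not the only option): Python's standard-library configparser handles both requirements if ':' is removed from its key/value delimiters (it is a delimiter by default) and the file gains the section header that configparser requires. Assuming a file new-config like this:
[main]
base:sub:sub-sub13 =
    complicated multi * worded string #1
    multi word strin6 #234
it can be read like so:
import configparser

config = configparser.ConfigParser(
    delimiters=('=',),        # drop ':' as a delimiter so it stays legal in keys
    comment_prefixes=(';',),  # drop '#' as a comment prefix so it stays legal in values
)
config.read('new-config')
print(config['main']['base:sub:sub-sub13'].strip())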

U-SQL Error - Change the identifier to use at least one lower case letter

I am fairly new to U-SQL and trying to run a U-SQL script in Azure Data Lake Analytics to process a Parquet file using the Parquet extractor functionality. I am getting the error below and can't find a way around it.
Error - Change the identifier to use at least one lower case letter. If that is not possible, then escape that identifier (for example: '[ACTIVITY]'), or embed it in a CSHARP() block (e.g CSHARP(ACTIVITY)).
Unfortunately, all the fields generated in the Parquet file are capitalized and I don't want to escape these identifiers. I tried wrapping the identifier in a CSHARP block, and that fails as well (E_CSC_USER_RESERVEDKEYWORDASIDENTIFIER: Reserved keyword CSHARP is used as an identifier.) Is there any way I can extract the Parquet file? Thanks for your help!
Code Snippet:
SET @@FeaturePreviews = "EnableParquetUdos:on";
@var1 =
    EXTRACT ACTIVITY string,
            AUTHOR_NAME string,
            AFFLIATION string
    FROM "adl://xxx.azuredatalakestore.net/Abstracts/FY2018_028"
    USING Extractors.Parquet();
@var2 =
    SELECT *
    FROM @var1
    ORDER BY ACTIVITY ASC
    FETCH 5 ROWS;
OUTPUT @var2
TO "adl://xxx.azuredatalakestore.net/Results/AbstractsResults.csv"
USING Outputters.Csv();
Based on your description, you are trying to say:
EXTRACT ALLCAPSNAME int FROM "/data.parquet" USING Extractors.Parquet();
In U-SQL, we reserve all caps identifiers so we can add new keywords in the future without invalidating old scripts.
To work around it, you just have to quote the name (escape it) like in any other SQL dialect:
EXTRACT [ALLCAPSNAME] int FROM "/data.parquet" USING Extractors.Parquet();
Note that this is not changing the name of the field. It is just the syntactic way to address the field.
Also note that in most SQL communities it is considered a best practice to always quote identifiers, to avoid reserved-keyword clashes.
If all fields in the Parquet file are all caps, you will have to quote them all. In a future update you will be able to say EXTRACT * FROM … for Parquet (and ORC) files, but you will still need to quote the columns when you refer to them explicitly.
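Applied to the snippet in the question, that means quoting each capitalized identifier wherever it is referenced (the field names themselves stay exactly as they appear in the Parquet file):
SET @@FeaturePreviews = "EnableParquetUdos:on";
@var1 =
    EXTRACT [ACTIVITY] string,
            [AUTHOR_NAME] string,
            [AFFLIATION] string
    FROM "adl://xxx.azuredatalakestore.net/Abstracts/FY2018_028"
    USING Extractors.Parquet();
@var2 =
    SELECT *
    FROM @var1
    ORDER BY [ACTIVITY] ASC
    FETCH 5 ROWS;
OUTPUT @var2
TO "adl://xxx.azuredatalakestore.net/Results/AbstractsResults.csv"
USING Outputters.Csv();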

Use Alex macros from another file

Is there any way to have an Alex macro defined in one source file and used in other source files? In my case, I have definitions for $LowerCaseLetter and $UpperCaseLetter (these are all letters except e and O, since they have special roles in my code). How can I refer to these macros from other .x files?
Disproving that something exists is always harder than finding something that does exist, but I think the info below does show that Alex can only get macro definitions from the .x file it is reading (other than predefined stuff like $white), and not via includes from other files.
You can get the source code for Alex by doing the following:
> cabal unpack alex
> cd alex-3.1.3
In src/Main.hs, predefined macros are first set in variables called initSetEnv (the charset macros $white, $printable, and "."), and initREEnv (regexp macros; there are none). This gets passed into runP, in src/ParseMonad.hs, which holds the current parsing state, including all defined macros. The initial state is set using the values passed in, but macros can be added using a function called newSMac (or newRMac for regular-expression macros).
Since this seems to be the only way that macros can be set, it is then only a matter of some grep bookkeeping to verify that the only way macros can be added is through an actual macro definition in the source .x file. Unsurprisingly, Alex recursively uses its own .x/.y files for .x source-file parsing (src/parser.y, src/Scan.x). It is a couple of levels of indirection away, but you can verify that the only way newSMac can be called is through the src/Scan.x macro
#smac = \$ #id | \$ \{ #id \}
<0> #smac #ws? \= { smacdef }
Other than some obvious predefined stuff, I don't believe reuse in lexers is all that typical anyway, because at the token level things are usually pretty simple (often simple tokens like SPACE, WORD, NUMBER, and a few operators, symbols, and parens are all that are needed). The complexity comes at the parsing stage, although for technical reasons parser includes aren't that common either (see scannerless parsing for a newer technology that does allow reuse through nesting, like JavaScript embedded in HTML; the tools for scannerless parsing are still pretty primitive, though).
