I copied the SqlBase.g4 file from the Trino project to generate a SQL parser. However, it seems the parser doesn't support lowercase keywords like 'select'. Why?
You need to wrap the input in a CaseInsensitiveStream when setting up the lexer. See https://github.com/trinodb/trino/blob/master/core/trino-parser/src/main/java/io/trino/sql/parser/SqlParser.java#L111
SqlBaseLexer lexer = new SqlBaseLexer(new CaseInsensitiveStream(CharStreams.fromString(sql)));
CommonTokenStream tokenStream = new CommonTokenStream(lexer);
SqlBaseParser parser = new SqlBaseParser(tokenStream);
There's also additional setup you need to do for proper error handling and reporting, so take a look at the SqlParser.java class for how it's done.
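For reference, here's a minimal sketch of that extra setup with an error listener attached (the IllegalArgumentException and the singleStatement entry rule here are illustrative; Trino uses its own ParsingException and helper classes):
SqlBaseLexer lexer = new SqlBaseLexer(new CaseInsensitiveStream(CharStreams.fromString(sql)));
CommonTokenStream tokenStream = new CommonTokenStream(lexer);
SqlBaseParser parser = new SqlBaseParser(tokenStream);
// Replace ANTLR's default console listeners so syntax errors surface as exceptions.
BaseErrorListener errorListener = new BaseErrorListener() {
    @Override
    public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
            int line, int charPositionInLine, String message, RecognitionException e) {
        throw new IllegalArgumentException(
                "syntax error at " + line + ":" + charPositionInLine + ": " + message);
    }
};
lexer.removeErrorListeners();
lexer.addErrorListener(errorListener);
parser.removeErrorListeners();
parser.addErrorListener(errorListener);
ParserRuleContext tree = parser.singleStatement();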
Getting "error occurred during batching: ORA-00933 SQL command not properly ended.
I'm trying to update/insert a byte array into an Oracle BLOB column using jOOQ, as follows:
Map<Field<Object>, Object> fieldValueMap = new HashMap<>();
fieldValueMap.put(field("BLOB_COLUMN"), "test".getBytes());
Query query = dslContext.update(table(tablename)).set(fieldValueMap).where(condition);
The query formed for the BLOB column is as follows:
Update tablename set BLOB_COLUMN = X'74657374' where condition.
Please help with the above issue.
X'74657374' is the default rendering of byte[] inline literals for unknown dialects, as well as for a few known dialects, including e.g. H2, HSQLDB, MariaDB, MySQL, and SQLite. If you had used the SQLDialect.ORACLE dialect, you'd have gotten something like hextoraw('74657374') as generated SQL, which wouldn't produce the error you've seen.
But you probably don't want to get inline literals anyway. This probably happened because you either:
Used static statements
Explicitly used inline literals
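If you switch to bind values (the default for non-static statements), the bytes are sent as a JDBC parameter instead of being rendered into the SQL string. A minimal sketch, assuming a plain JDBC connection and static imports from org.jooq.impl.DSL:
// Sketch only: 'connection' and 'condition' are assumed to exist elsewhere.
// SQLDialect.ORACLE makes jOOQ render Oracle-specific SQL; bind values
// keep the bytes out of the generated SQL string entirely.
DSLContext ctx = DSL.using(connection, SQLDialect.ORACLE);
ctx.update(table(name("TABLENAME")))
   .set(field(name("BLOB_COLUMN"), byte[].class), "test".getBytes())
   .where(condition)
   .execute();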
Using ANTLR 4.9.2 for C++.
Depending on the first tokens, I might need to insert some tokens before parsing. My approach (simplified):
antlr4::ANTLRInputStream antlrIs(properlyEscaped);
Lexer lexer(&antlrIs);
antlr4::CommonTokenStream tokens(&lexer);
antlr4::TokenStreamRewriter tokenStreamRewriter(&tokens);
if (tokens.LA(1) != Lexer::MY_SPECIAL_TOKEN)
{
    tokenStreamRewriter.insertBefore(tokens.LT(1), string("begin"));
}
Parser parser(&tokens);
Parser::FileContext* fileContext = parser.file();
Stepping through with the debugger, I see that the token is actually inserted. But the new token I insert seems to be ignored by parser.file().
How can I insert tokens so that parser.file() uses them?
TokenStreamRewriter just builds up a set of instructions for how the input stream should be changed. It doesn’t actually change the token stream itself.
Once you have executed all of your modification calls, you'll need to call .getText() (or .getText(String programName)) to get a String that has all of your changes incorporated. Then you can use that as the input to your Lexer to get a token stream containing your modifications.
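A minimal sketch of that round trip, shown in Java for brevity (the C++ runtime mirrors the same API); MyLexer, MyParser, and MY_SPECIAL_TOKEN are placeholder names:
CommonTokenStream tokens = new CommonTokenStream(new MyLexer(CharStreams.fromString(source)));
TokenStreamRewriter rewriter = new TokenStreamRewriter(tokens);
if (tokens.LA(1) != MyLexer.MY_SPECIAL_TOKEN) {
    rewriter.insertBefore(tokens.LT(1), "begin ");
}
// fill() buffers the whole stream so getText() covers every token, then
// getText() renders the original text with the rewrite instructions applied.
tokens.fill();
String rewritten = rewriter.getText();
// Re-lex and re-parse the rewritten text; this token stream actually
// contains the inserted token.
MyParser parser = new MyParser(new CommonTokenStream(new MyLexer(CharStreams.fromString(rewritten))));
MyParser.FileContext file = parser.file();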
I was looking for a way to upload a text file containing a dictionary of synonyms to Azure Search. The nearest I could find was:
https://azure.microsoft.com/en-in/blog/azure-search-synonyms-public-preview/
https://learn.microsoft.com/en-us/azure/search/search-synonyms
I know it is not a good idea to compare products from different companies, but if there is a way to upload a dictionary of synonyms to Azure Search like there is in Elasticsearch, it would be of great help and might save a lot of time and rework.
Please help me understand how to upload a synonym dictionary to Azure Search.
The latest .NET SDK for Azure Cognitive Search has this capability. From this sample:
// Create a new SearchIndexClient
Uri endpoint = new Uri(Environment.GetEnvironmentVariable("SEARCH_ENDPOINT"));
AzureKeyCredential credential = new AzureKeyCredential(
    Environment.GetEnvironmentVariable("SEARCH_API_KEY"));
SearchIndexClient indexClient = new SearchIndexClient(endpoint, credential);
// Create a synonym map from a file containing country names and abbreviations
// using the Solr format with entry on a new line using \n, for example:
// United States of America,US,USA\n
string synonymMapName = "countries";
string synonymMapPath = "countries.txt";
SynonymMap synonyms;
using (StreamReader file = File.OpenText(synonymMapPath))
{
    synonyms = new SynonymMap(synonymMapName, file);
}
await indexClient.CreateSynonymMapAsync(synonyms);
The SDKs for Java, Python, and JavaScript also support creating synonym maps. The Java SDK accepts a string rather than a file stream, so you'd have to read the file contents yourself. Unfortunately, the Python and JavaScript SDKs seem to require a list of strings (one for each line of the file), which is something we should improve. I'm following up with the Azure SDK team to make these improvements.
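For the Java SDK, that boils down to reading the Solr-format file into a string yourself. A minimal sketch, assuming the azure-search-documents library (names mirror the C# sample above):
// Sketch only: SynonymMap's Java constructor takes the rules as a string.
SearchIndexClient indexClient = new SearchIndexClientBuilder()
        .endpoint(System.getenv("SEARCH_ENDPOINT"))
        .credential(new AzureKeyCredential(System.getenv("SEARCH_API_KEY")))
        .buildClient();
String synonyms = Files.readString(Paths.get("countries.txt"));
indexClient.createSynonymMap(new SynonymMap("countries", synonyms));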
When adding a Rule to a SubscriptionClient, I get a syntax error if I don't remove all of the whitespace. None of the numerous examples I've read have to do this. Any ideas why?
// This works (whitespace stripped from expression)
var rd = new RuleDescription("ZonesRule", new SqlFilter("Zone='All'"));
subscriptionClient.AddRule(rd);
// This does not work (normal whitespace in expression)
var rd = new RuleDescription("ZonesRule", new SqlFilter("Zone = 'All'"));
subscriptionClient.AddRule(rd);
Microsoft.ServiceBus.Messaging.FilterException: 'There was an error
parsing the SQL expression. [Token line=1, column=4, Token in error=
, Additional details= Unrecognized character. ' ']
TrackingId:4087836f-321c-45d7-b217-cb7fae75ee67_G11_B27...'
As forester123 mentioned, that syntax has no problem at all. I also tested it on my side, and it works correctly. The SqlFilter syntax is documented in the official Azure documentation.
If possible, please try the latest 4.1.3 version of WindowsAzure.ServiceBus.
I am using Lucene.Net 2.9.4 (cannot upgrade at the moment). I am also making use of Highlighter.Net from the Lucene.Net contrib package. I can get it working fine when I am searching on one index; my code looks like:
QueryScorer fragmentScorer = new QueryScorer(query.Rewrite(searcher.GetIndexReader()));
Highlighter highlighter = new Highlighter(this.HighlightFormatter, fragmentScorer);
Lucene.Net.Analysis.TokenStream tokenStream = this.HighlightAnalyzer.TokenStream(highlightField, new System.IO.StringReader(value));
return highlighter.GetBestFragments(tokenStream, value, this.MaxNumHighlights, this.Separator);
The issue is that when my searcher object is a MultiSearcher, I do not have the GetIndexReader method. With MultiSearcher you are using more than one reader under the hood, so it kind of makes sense that you do not have GetIndexReader.
Is it even possible to highlight with a MultiSearcher? If not, is there a way to do this?
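One workaround worth sketching (untested, and an assumption on my part rather than a confirmed answer): since a MultiSearcher wraps several readers, you can rewrite the query yourself against a MultiReader built over the same sub-readers, then highlight as before. In Java Lucene 2.9 terms, which Lucene.Net 2.9.4 mirrors:
// Untested sketch: readerA and readerB are placeholders for the readers
// the MultiSearcher was built from.
IndexReader multiReader = new MultiReader(new IndexReader[] { readerA, readerB });
QueryScorer fragmentScorer = new QueryScorer(query.rewrite(multiReader));
Highlighter highlighter = new Highlighter(highlightFormatter, fragmentScorer);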