I'm attempting to apply row-level security in QuickSight. I have an S3-based rules dataset with the username and the column I want to filter on. The dataset looks good in QuickSight, I can create an analysis on it, and rls_rater_action_username maps to a column in my dataset. However, no matter what I do with this file I get the error "An unexpected error occurred. If this problem continues, contact your administrator. Error code: DatasetRulesUnexpectedError".
csv file contents:
username,rls_rater_action_username
Dave, test
The error message is pretty useless, and I have no idea what the issue is. Anyone have any guesses?
Usernames are case sensitive; write the column header exactly as Amazon suggests, with an uppercase UserName.
Make sure the data types in both datasets (the main dataset and the rules dataset) are strings, so that the filter will apply.
Column order also matters in your S3 rules dataset.
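Following those suggestions, a corrected rules file might look like this (a minimal sketch; the UserName header is taken from the QuickSight documentation, and the values are just the ones from your file):

UserName,rls_rater_action_username
Dave,test

Note there is no space after the comma: in your file, "Dave, test" makes the second value " test" with a leading space, which may not match your column's actual values.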
I am doing a full extract from a table ABC. In the copy activity, I have given the query
select * from ABC
but I am facing an issue for a few rows (they contain special characters: Japanese and Korean).
Error code 2200
Failure type User configuration issue
Details Failure happened on 'Source' side. ErrorCode=DB2DriverRunFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error thrown from driver. Sql code: '-343',Source=Microsoft.DataTransfer.ClientLibrary.Db2Connector,''Type=Microsoft.HostIntegration.DrdaClient.DrdaException,Message=HISMPCB0001 In BasePrimitiveConverter an exception has occurred. Exception description: Output buffer is smaller than required size 12 SQLSTATE=HY000 SQLCODE=-343,Source=Microsoft.HostIntegration.Drda.Requester,'
The character which is causing the issue is '轎ᆃ '
The error description states that a BasePrimitiveConverter exception occurred and that the output buffer is smaller than the required size. So, please try converting the column to an acceptable type like GRAPHIC in DB2. Refer to the following link to understand more:
https://bytes.com/topic/db2/answers/488983-storing-some-japanese-data
Referring to these links, I understand that this error might be due to the data type of the source column or the encoding used. Try the different encoding options available in your source dataset. Here is a similar problem with a different source, likewise unable to retrieve special characters:
https://learn.microsoft.com/en-us/answers/questions/467456/failure-happened-source-side-in-copy-activity-for.html
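As a side note, if altering the DB2 table isn't an option, you could also try casting in the copy activity's source query instead. A minimal sketch, assuming the offending column is called NOTES (ID and NOTES are hypothetical names, since the real schema isn't shown):

-- Cast the double-byte text to VARGRAPHIC so the driver sizes its output buffer for it;
-- list your real columns in place of ID and NOTES.
select ID, CAST(NOTES AS VARGRAPHIC(255)) AS NOTES
from ABC

VARGRAPHIC is DB2's variable-length double-byte string type, which is why the linked thread suggests it for Japanese data.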
I am trying to "beautify" the data I receive from some windows logs on Graylog. My idea is to change the windows log ID from a number to the actual definition for that ID. For example: I receive a log with ID 4625, I want to show in my widget "An account failed to log on".
To do that, I am using a pipeline and a lookup table, which reads the IDs and the respective definitions in natural language from a .csv that I've uploaded on the server.
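For reference, the CSV is shaped roughly like this (a hypothetical excerpt; Graylog's CSV file adapter lets you configure which columns hold the key and the value, so the headers are assumptions):

"key","value"
"4624","An account was successfully logged on."
"4625","An account failed to log on."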
This is the rule I wrote for my pipeline, which doesn't seem to work:
rule "eventid_windows_rule"
when
has_field("winlogbeat_winlog_event_id")
then
let winlogbeat_winlog_italiano = lookup("winlogbeat_winlog_event_id", to_string($message.winlogbeat_winlog_event_id));
set_field("winlogbeat_winlog_italiano", winlogbeat_winlog_italiano);
end
I think my problem is specifically in this rule, because Graylog allows you to test lookup tables, and if I manually enter an ID, the lookup table finds the corresponding description.
I solved the issue myself. The mistake was passing the field name as the first argument of lookup(); the first argument must be the name of the lookup table. This is the correct code for the rule:
rule "eventid_windows_rule"
when
has_field("winlogbeat_winlog_event_id")
then
let winlogbeat_winlog_italiano = lookup("eventid_widget_windows_lookup", $message.winlogbeat_winlog_event_id);
set_field("winlogbeat_winlog_italiano", winlogbeat_winlog_italiano);
end
This rule checks whether the log has the field "winlogbeat_winlog_event_id"; if it does, it looks up that field's numeric value in the lookup table backed by the .csv I created, and puts the natural-language description into the new field "winlogbeat_winlog_italiano".
I have a data item expression that I want to use as a category in a crosstab.
This gives me the errors "QE-DEF-0459 CCLException" and "QE-DEF-0261 QFWP", although I have followed the syntax properly. Any ideas what is causing this? It seems to be related to the [BIRTHDATE] column inside the when clauses.
The error message goes like this: qe-def-0260 parsing error before or near position: 40 in: "case when (_years_between(current_date,"
The source database is Oracle.
Usually there are messages appended after the error number. The message text should help you solve your problem, so reading it would be helpful for you, and including that text when you ask for assistance, rather than just quoting an error number, could be helpful to others.
I'm not familiar with any form of the case function in Cognos where a query item is required after the case keyword. Also, case requires an end operator.
Rewrite your expression to be something like this, where I've removed [BIRTHDATE] from after the case and added the end:
case
when (_years_between(current_date, [BIRTHDATE])>=0 and _years_between(current_date, [BIRTHDATE])<=49) then '0-49'
when (_years_between(current_date, [BIRTHDATE])>=50 and _years_between(current_date, [BIRTHDATE])<=100) then '50-100'
else 'null'
end
Trying out the queue system for a better user upload experience with Laravel-Excel.
.env was changed from 'sync' to 'database' and the migrations were run. All the necessary use statements are in place, yet the error above persists.
The exact error happens here:
Illuminate\Queue\Queue.php:97
$payload = json_encode($this->createPayloadArray($job, $queue, $data));

if (JSON_ERROR_NONE !== json_last_error()) {
    throw new InvalidPayloadException(
        'Unable to JSON encode payload. Error code: '.json_last_error()
    );
}
If I drop ShouldQueue, the file imports perfectly in-session (it's a large file, so there's a long wait for the user).
I've read many Stack Overflow, GitHub, etc. comments on this, but I don't have the technical skills to deep-dive and fix my particular situation (most of them speak of UTF-8, but I don't know if that's an issue here; I changed the Excel save format to UTF-8 and it didn't fix it).
P.S. While running the migration, I got this error:
SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `jobs` add index `jobs_queue_index`(`queue`))
I bypassed it by dropping the 'add index' part, so my jobs table is not indexed on queue, but I don't think this is the cause.
One thing you can do when looking into json_encode() errors is to use the json_last_error_msg() function, which will give you a more readable error message.
In your case you're getting a '5' back, which is the JSON_ERROR_UTF8 error code. The message for this is slightly more informative:
'Malformed UTF-8 characters, possibly incorrectly encoded'
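As a minimal standalone sketch of that check (plain PHP, outside Laravel, with a deliberately invalid byte sequence to trigger error code 5):

<?php
// 0xB1 is a UTF-8 continuation byte with no lead byte, so this string is invalid UTF-8.
$data = ['name' => "\xB1\x31"];

$payload = json_encode($data);
if (json_last_error() !== JSON_ERROR_NONE) {
    // Prints: Malformed UTF-8 characters, possibly incorrectly encoded
    echo json_last_error_msg(), PHP_EOL;
}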
So we know it's encountering non-UTF-8 characters, even though you're saving the file specifically with UTF-8 encoding. At first glance you might think you need to convert the encoding yourself in code (like this answer), but in this case, I don't think that'll help. For Laravel-Excel, this seems to be a limitation of trying to queue-read .xls files - from the Laravel-Excel docs:
You currently cannot queue xls imports. PhpSpreadsheet's Xls reader contains some non-utf8 characters, which makes it impossible to queue.
In this case you might be stuck with the slow, non-queueable option, or you may need to convert your spreadsheet into a queueable format, e.g. .csv.
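If you do go the .csv route, a queueable import under Laravel-Excel 3.x looks roughly like this (a sketch, not your code: UsersImport, the User model, and the column positions are all hypothetical):

use App\User;
use Illuminate\Contracts\Queue\ShouldQueue;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithChunkReading;

class UsersImport implements ToModel, WithChunkReading, ShouldQueue
{
    // Map one CSV row to a model; assumes name in column 0 and email in column 1.
    public function model(array $row)
    {
        return new User(['name' => $row[0], 'email' => $row[1]]);
    }

    // With chunk reading, each chunk of rows is dispatched as its own queued job.
    public function chunkSize(): int
    {
        return 1000;
    }
}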
The key length error when running the migration is unrelated. It has been around for a while and is a side-effect of using an older version of MySQL/MariaDB. Check out this answer and the Laravel documentation on index lengths; you need to add this to your AppServiceProvider::boot() method:
Schema::defaultStringLength(191);
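In context, it's a one-line addition (a minimal sketch of app/Providers/AppServiceProvider.php):

use Illuminate\Support\Facades\Schema;

public function boot()
{
    // 191 chars x 4 bytes (utf8mb4) = 764 bytes, which fits under the 767-byte index limit.
    Schema::defaultStringLength(191);
}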
I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads in the data from the CSV files, this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and numerical comparisons, including greater than and less than; however, I suspect the default indexing method, rather than ElasticSearch, is being used in these instances.
Due to the lack of errors when I run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code; from your description everything looks fine. Can you try the following script? (Just paste it into your Gremlin REPL.)
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel