I am trying to use remote data for tom-select (https://tom-select.js.org/examples/optgroups/). I am at a loss how to configure option groups with remote data. I have the select loading with remote data like this:
"optgroup": "1 Materials | 1.2 Gravel",
"value": 65,
"label": "1.2.1 Tanks"
From the docs I got the impression that you set optgroupField: 'optgroup' and the option groups would be set automatically. Do I need to add the optgroups array to my JSON data? I can't seem to find any examples of remote data with option groups anywhere.
tom-select shares much of its code with Selectize.js, so I am cross-tagging this as well.
I found a solution in the selectize world:
https://github.com/selectize/selectize.js/issues/151#issuecomment-111056161
I added a group id:
"optgroup_id": 13,
"optgroup": "1 Materials | 1.1 Pipe, valves & fittings",
"value": 5,
"label": "1.1.1 Line Pipe"
I reset the group field to optgroupField: 'optgroup_id', then added this after the JSON callback in load:
json.items.forEach((item) => {
    // register each option's group so optgroupField: 'optgroup_id' can match it
    this.addOptionGroup(item['optgroup_id'], { label: item['optgroup'] });
});
this.refreshItems();
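Putting it together, here is a minimal sketch of the whole configuration (the endpoint URL is a placeholder; the field names match my JSON above):

new TomSelect('#select-item', {
    valueField: 'value',
    labelField: 'label',
    optgroupField: 'optgroup_id',
    load: function (query, callback) {
        // hypothetical remote endpoint returning { "items": [ ... ] } as in my data above
        fetch('/api/items?q=' + encodeURIComponent(query))
            .then((response) => response.json())
            .then((json) => {
                // groups must be registered before the options that reference them
                json.items.forEach((item) => {
                    this.addOptionGroup(item['optgroup_id'], { label: item['optgroup'] });
                });
                callback(json.items);
            })
            .catch(() => callback());
    }
});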
I am also playing around with adding a second optgroups JSON array containing just the groups, to avoid cycling through all the options.
I hope there is a better answer - will leave this open for that. Hoping this helps someone else.
I'm running into this error when trying to run this logic app to add an IP to be blocked:

ActionFailed. An action failed. No dependent actions succeeded.
I'm not sure where to start. The input looks ok. Help? Thanks in advance!
p.s. - sorry, it won't allow me to post the pics due to not having enough points.
I tried changing some parts of the body, but I'm not sure what to change, really.
According to Microsoft's documentation for the Submit or Update Indicator API, the request body should be as follows:
{
    "indicatorValue": "220e7d15b011d7fac48f2bd61114db1022197f7f",
    "indicatorType": "FileSha1",
    "title": "test",
    "application": "demo-test",
    "expirationTime": "2020-12-12T00:00:00Z",
    "action": "AlertAndBlock",
    "severity": "Informational",
    "description": "test",
    "recommendedActions": "nothing",
    "rbacGroupNames": ["group1", "group2"]
}
The error you get is too generic to pinpoint the exact cause.
You are not passing in recommendedActions and rbacGroupNames; they may not be required, but you may want to pass those properties even when no value is included.
I would also validate calling this API with manual values (even the exact values from the documentation), and if that works, use process of elimination to figure out which property is giving you trouble.
For example, application might not accept a value containing a space, or combining the two values for description should be done outside of the HTTP call using a Compose action and then passed in as a single value from its output.
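As a quick manual test outside the Logic App, something like this could reproduce the call (a sketch; the endpoint URL and the token acquisition are assumptions based on the linked documentation):

const body = {
    indicatorValue: "220e7d15b011d7fac48f2bd61114db1022197f7f",
    indicatorType: "FileSha1",
    title: "test",
    application: "demo-test",
    expirationTime: "2020-12-12T00:00:00Z",
    action: "AlertAndBlock",
    severity: "Informational",
    description: "test",
    recommendedActions: "nothing",
    rbacGroupNames: ["group1", "group2"]
};

fetch("https://api.securitycenter.windows.com/api/indicators", { // assumed endpoint
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + token // token acquired separately via Azure AD
    },
    body: JSON.stringify(body)
})
    .then((response) => response.json())
    .then(console.log)
    .catch(console.error);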
In Azure Cosmos DB (SQL API) I've created a container whose "partition key" is set to /part_key and I am now trying to create and edit data in Data Explorer.
I created an item that looks like this:
{
    "id": "test_id",
    "value": "val000",
    "magicNumber": 32,
    "part_key": "asdf"
}
I am now trying to create an item that looks like this:
{
    "id": "frank",
    "value": "val001",
    "magicNumber": 33,
    "part_key": "asdf"
}
Based on the documentation I believe that each item within a partition key needs a distinct id, which to me implies that multiple items can in fact share a partition key, which makes a lot of sense.
However, I get an error when I try to save this second item:
{"code":409,"body":{"code":"Conflict","message":"Entity with the specified id already exists in the system...
I see that if I change the value of part_key to something else (say asdf2), then I can save this new item.
Either my expectations about this functionality are wrong, or else I'm doing this wrong somehow. What is wrong here?
Your understanding is correct. This happens if you try to insert a new document with an id equal to the id of an existing document (within the same partition key). That is not allowed, so the operation fails.
Before you insert the modified copy, you need to assign a new id to it. I tested the scenario and it works fine. Maybe try to create a new document and check.
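To double-check the behaviour outside Data Explorer, here is a small sketch with the JavaScript SDK (@azure/cosmos); the endpoint, key, and database/container names are placeholders:

const { CosmosClient } = require("@azure/cosmos");

const client = new CosmosClient({ endpoint: "https://myaccount.documents.azure.com", key: "<key>" });
const container = client.database("mydb").container("mycontainer");

async function run() {
    // Same part_key, different ids: both creates succeed.
    await container.items.create({ id: "test_id", value: "val000", magicNumber: 32, part_key: "asdf" });
    await container.items.create({ id: "frank", value: "val001", magicNumber: 33, part_key: "asdf" });

    // Same id within the same partition key: this one returns 409 Conflict.
    await container.items.create({ id: "frank", value: "val002", magicNumber: 34, part_key: "asdf" });
}

run().catch(console.error);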
Hello all!
I am trying to use the Aggregate filter plugin of Logstash v7.7 to correlate and combine data from two different CSV file inputs which represent API data calls. The idea is to produce a record showing a combined picture. As you can expect the data may or may not arrive in the right sequence.
Here is an example:
/data/incoming/source_1/*.csv
StartTime, AckTime, Operation, RefData1, RefData2, OpSpecificData1
231313232,44343545,Register,ref-data-1a,ref-data-2a,op-specific-data-1
979898999,75758383,Register,ref-data-1b,ref-data-2b,op-specific-data-2
354656466,98554321,Cancel,ref-data-1c,ref-data-2c,op-specific-data-2
/data/incoming/source_2/*.csv
FinishTime,Operation,RefData1, RefData2, FinishSpecificData
67657657575,Cancel,ref-data-1c,ref-data-2c,FinishSpecific-Data-1
68445590877,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
55443444313,Register,ref-data-1a,ref-data-2a,FinishSpecific-Data-2
I have a single pipeline that is receiving both these CSVs and I am able to process and write them as individual records to a single index. However, the idea is to combine records from the two sources into one record, each representing a superset of Operation-related information.
Unfortunately, despite several attempts I have been unable to figure out how to achieve this via Aggregate filter plugin. My primary question is whether this is a suitable use of the specific plugin? And if so, any suggestions would be welcome!
At the moment, I have this
input {
    file {
        path => ['/data/incoming/source_1/*.csv']
        tags => ["source1"]
    }
    file {
        path => ['/data/incoming/source_2/*.csv']
        tags => ["source2"]
    }
}
filter {
    # use the tags to do some source 1 and 2 related massaging, calculations, etc
    aggregate {
        task_id => "%{Operation}_%{RefData1}_%{RefData2}"
        code => "
            map['source_files'] ||= []
            map['source_files'] << { 'source_file' => event.get('path') }
        "
        push_map_as_event_on_timeout => true
        timeout => 600 # assuming this is the most far apart they will arrive
    }
    ...
}
output {
    elasticsearch { ... }
}
And other such variations. However, I keep getting individual records being written to the Index and am unable to get one combined. Yet again, as you can see from the data set there's no guarantee of the sequencing of records - so I am wondering if the filter is the right tool for the job, to begin with? :-\
Or is it just me not being able to use it right! ;-)
In either case, any inputs/comments/suggestions are welcome. Thanks!
PS: This message is being cross-posted over from Elastic forums. I am providing a link there just in case some answers pop up there too.
The answer is to use Elasticsearch in upsert mode. Please see the specifics here.
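For illustration, an upsert-style elasticsearch output would look something like this (a sketch; the host and index name are placeholders):

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "operations"
        # use the correlation key as the document id so records from both
        # sources update the same document instead of creating two
        document_id => "%{Operation}_%{RefData1}_%{RefData2}"
        action => "update"
        doc_as_upsert => true
    }
}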
First, I recommend making sure the information reaches you in order, so that the filter can handle it better. Second, you could set these options in your pipeline.yml: pipeline.workers: 1 and pipeline.ordered: true, thus guaranteeing the order of processing.
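For example (a sketch, assuming a single pipeline entry in pipelines.yml):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.workers: 1
  pipeline.ordered: true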
I have a lot of files each containing a set of json objects like this:
{ "Id": "1", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "2", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "3", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
etc.
Each file contains information about a specific type of session. In this case they are sessions from a Web App, but they could also be sessions of a Desktop App. In that case the value for Origin is "DesktopClient" instead of "WebClient".
For analysis purposes say I am only interested in DesktopClient sessions.
All files representing a session are stored in Azure Blob Storage like this:
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef77.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef78.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef79.json
Is it possible to skip files whose first line already makes it clear that they are not DesktopClient session files, as in my example? I think it would save a lot of query resources if files that I know do not contain the right session type can be skipped, since they can be quite big.
At the moment my query reads the data like this:
@RawExtract =
    EXTRACT [RawString] string
    FROM @"wasb://plancare-events-blobs@centrallogging/2017/07/20/{*}.json"
    USING Extractors.Text(delimiter:'\b', quoting : false);

@ParsedJSONLines =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
    FROM @RawExtract;
...
Or should I create my own version of Extractors.Text, and if so, how should I do that?
To answer some questions that popped up in the comments to the question first:
At this point we do not provide access to the Blob Store meta data. That means that you need to express any meta data either as part of the data in the file or as part of the file name (or path).
Depending on the cost of extraction and the sizes of the files, you can either extract all the rows and then filter out the rows whose beginning does not fit your criteria. That will extract all rows from all files, but does not need a custom extractor.
Alternatively, write a custom extractor that checks for only the files that are appropriate (that may be useful if the first solution does not give you the performance you need and you can determine the conditions efficiently inside the extractor). Several example extractors can be found at http://usql.io in the example directory (including an example JSON extractor).
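For the first approach, here is a sketch that filters on the raw string before parsing the JSON, building on the query from the question (the WHERE predicate assumes the Origin field appears verbatim in the raw line):

@RawExtract =
    EXTRACT [RawString] string
    FROM @"wasb://plancare-events-blobs@centrallogging/2017/07/20/{*}.json"
    USING Extractors.Text(delimiter:'\b', quoting : false);

// Keep only rows mentioning the DesktopClient origin; rows are C# strings,
// so String.Contains can be used directly in the WHERE clause.
@Filtered =
    SELECT [RawString]
    FROM @RawExtract
    WHERE [RawString].Contains("\"Origin\": \"DesktopClient\"");

@ParsedJSONLines =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
    FROM @Filtered;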
Let me stress that I am not a programmer but I like messing around with things. I've been using #ifttt and #nest for years and recently started using #smartthings to do cool things in my house.
I wanted to power off devices such as my lights and water heater when leaving my house. Rather than having this depend on one device such as a phone or keyfob, I wanted to use the nest "auto-away" feature.
Auto-away doesn't appear to be exposed to #ifttt or #smartthings. I've asked #nestsupport and they told me to come here :-o.
Does anyone from the nest developer team know when developers and other products will be able to consume this from the nest device? It's a real shame that after several years this isn't exposed yet. Not only that, but it could be an additional selling point to integrate and turn on/off items in your house.
Thanks!
I'm not from the Nest developer team, but I've played around with the Nest API in the past, and use it to plot my energy usage.
The 'auto away' information is already accessible in the API, and looks to be used in a number of IFTTT recipes:
https://ifttt.com/recipes/search?q=auto+away&ac=false
Within the (JSON) data received back from the API, the 'auto away' status is accessible via:
shared->{serial_number}->auto_away
This is set as a boolean (0 or 1).
If you like messing around with code and know any PHP, then this PHP class for the Nest API is very useful for grabbing all the information:
https://github.com/gboudreau/nest-api
Auto-Away is, and always has been, readable: https://developer.nest.com/documentation/cloud/api-overview#away
There are a few ways you could go about doing this, but if you're writing up a SmartApp just for your own uses, I'd suggest piggybacking off of one of the existing device types for the Nest on SmartThings. As a quick example, I'll use the one that I use:
https://github.com/bmmiller/device-type.nest/blob/master/nest.devicetype.groovy
After line 96, this is to expose the status to any SmartApp you may write:
attribute "temperatureUnit", "string"
attribute "humiditySetpoint", "number"
attribute "autoAwayStatus", "number" // New Line
Now, you'll want to take care of getting the data in the existing poll() method, currently starting at line 459.
After line 480, to update the attribute:
sendEvent(name: 'humidity', value: humidity)
sendEvent(name: 'humiditySetpoint', value: humiditySetpoint, unit: Humidity)
sendEvent(name: 'thermostatFanMode', value: fanMode)
sendEvent(name: 'thermostatMode', value: temperatureType)
sendEvent(name: 'autoAwayStatus', value: data.shared.auto_away) // New Line
This will expose a numerical value for the auto_away status.
-1 = Auto Away Not Enabled
0 = Auto Away Off
1 = Auto Away On
Then, in the SmartApp you write, include an input of type thermostat like this:
section("Choose thermostat... ") {
    input "thermostat", "capability.thermostat"
}
You will be able to access the Auto Away status by referring to thermostat.autoAwayStatus from anywhere in your code, where you can do something like:
if (thermostat.autoAwayStatus == 1) {
    // Turn off everything
}
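To react to changes rather than checking the value inline, here is a minimal SmartApp sketch using an event subscription (the switches input is an assumption for illustration; the attribute name matches the one added above):

preferences {
    section("Choose thermostat... ") {
        input "thermostat", "capability.thermostat"
    }
    section("Turn these off when Auto Away kicks in...") {
        input "switches", "capability.switch", multiple: true
    }
}

def installed() {
    subscribe(thermostat, "autoAwayStatus", autoAwayHandler)
}

def updated() {
    unsubscribe()
    subscribe(thermostat, "autoAwayStatus", autoAwayHandler)
}

def autoAwayHandler(evt) {
    // attribute events arrive as strings, so compare against "1"
    if (evt.value == "1") {
        switches.off()
    }
}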