Azure Media Services encoding failed with error 0x80131040

I have some Azure Media Services encoding jobs failing with the error below.
The error comes from the Azure backend after the job has been submitted successfully.
Task 'encode', Error : ErrorProcessingTask : Unexpected error when
setting up the Windows Azure Media Encoder task workflow:Could not
load file or assembly 'Microsoft.WindowsAzure.MediaServices.Platform,
Version=2.10.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or
one of its dependencies. The located assembly's manifest definition
does not match the assembly reference. (Exception from HRESULT:
0x80131040)
Any idea what is happening here?

I found out that, for some reason, the old encoding preset names such as H264 Adaptive Bitrate MP4 Set 720p and the old processor name Windows Azure Media Encoder no longer work. Maybe it is temporary, maybe not.
I fixed the issue by using the processor name Media Encoder Standard and a job configuration string like the following:
{
  "Version": 1.0,
  "Codecs": [
    {
      "KeyFrameInterval": "00:00:02",
      "StretchMode": "AutoSize",
      "H264Layers": [
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 3400,
          "MaxBitrate": 3400,
          "BufferWindow": "00:00:05",
          "Width": 1280,
          "Height": 720,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        },
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 2250,
          "MaxBitrate": 2250,
          "BufferWindow": "00:00:05",
          "Width": 960,
          "Height": 540,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        },
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 1500,
          "MaxBitrate": 1500,
          "BufferWindow": "00:00:05",
          "Width": 960,
          "Height": 540,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        },
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 1000,
          "MaxBitrate": 1000,
          "BufferWindow": "00:00:05",
          "Width": 640,
          "Height": 360,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        },
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 650,
          "MaxBitrate": 650,
          "BufferWindow": "00:00:05",
          "Width": 640,
          "Height": 360,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        },
        {
          "Profile": "Auto",
          "Level": "auto",
          "Bitrate": 400,
          "MaxBitrate": 400,
          "BufferWindow": "00:00:05",
          "Width": 320,
          "Height": 180,
          "BFrames": 3,
          "ReferenceFrames": 3,
          "AdaptiveBFrame": true,
          "Type": "H264Layer",
          "FrameRate": "0/1"
        }
      ],
      "Type": "H264Video"
    },
    {
      "Profile": "AACLC",
      "Channels": 2,
      "SamplingRate": 48000,
      "Bitrate": 128,
      "Type": "AACAudio"
    }
  ],
  "Outputs": [
    {
      "FileName": "{Basename}_{Width}x{Height}_{VideoBitrate}.mp4",
      "Format": {
        "Type": "MP4Format"
      }
    }
  ]
}
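For completeness, here is a minimal Python sketch of preparing that configuration string (only the standard json module is used; the preset.json file name is an assumption, and the actual job-creation call is left out because it depends on which Media Services SDK you use):

import json

# Load the Media Encoder Standard preset shown above (hypothetical file name).
with open("preset.json") as f:
    preset = json.load(f)  # fails loudly here if the JSON is malformed

# Collapse the preset into the single-line configuration string that is
# passed to the encode task when the job is created.
configuration = json.dumps(preset, separators=(",", ":"))
print(configuration[:80] + "...")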

Related

I have a JSON formatted output as below; how do I access and validate the outputs of each node with assert statements?
{
  "Type": "Page",
  "X": 0,
  "Y": 0,
  "Width": 696,
  "Height": 888,
  "Children": [
    {
      "Type": "Column",
      "X": 0,
      "Y": 0,
      "Width": 696,
      "Height": 888,
      "Children": [
        {
          "Type": "Paragraph",
          "X": 209,
          "Y": 290,
          "Width": 248,
          "Height": 24,
          "Children": [
            {
              "Type": "Line",
              "X": 209,
              "Y": 290,
              "Width": 248,
              "Height": 24,
              "Children": [
                {"Type": "Word", "X": 209, "Y": 290, "Width": 49, "Height": 24, "Children": [], "Content": "Core"},
                {"Type": "Word", "X": 263, "Y": 290, "Width": 106, "Height": 24, "Children": [], "Content": "Enterprise"},
                {"Type": "Word", "X": 375, "Y": 290, "Width": 82, "Height": 24, "Children": [], "Content": "Installer"}
              ],
              "Content": null
            }
          ],
          "Content": null
        },
        {
          "Type": "Paragraph",
          "X": 580,
          "Y": 803,
          "Width": 79,
          "Height": 13,
          "Children": [
            {
              "Type": "Line",
              "X": 580,
              "Y": 803,
              "Width": 79,
              "Height": 13,
              "Children": [
                {"Type": "Word", "X": 580, "Y": 803, "Width": 46, "Height": 13, "Children": [], "Content": "Version"},
                {"Type": "Word", "X": 629, "Y": 803, "Width": 12, "Height": 13, "Children": [], "Content": "8."},
                {"Type": "Word", "X": 640, "Y": 803, "Width": 12, "Height": 13, "Children": [], "Content": "0."},
                {"Type": "Word", "X": 651, "Y": 803, "Width": 8, "Height": 13, "Children": [], "Content": "0"}
              ],
              "Content": null
            }
          ],
          "Content": null
        }
      ],
      "Content": null
    }
  ],
  "Content": null
}
Looking for solutions
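No answer was posted for this one, but here is a minimal Python sketch of one approach, assuming the JSON above is saved as page.json (a hypothetical file name); the specific assertions are illustrative:

import json

def walk(node):
    """Yield every node in the tree, depth-first."""
    yield node
    for child in node.get("Children", []):
        yield from walk(child)

with open("page.json") as f:  # hypothetical file holding the JSON above
    page = json.load(f)

for node in walk(page):
    # Every node carries a type and a non-negative bounding box.
    assert node["Type"] in {"Page", "Column", "Paragraph", "Line", "Word"}
    assert node["Width"] >= 0 and node["Height"] >= 0
    # Only leaf Word nodes carry text content; all others have Content null.
    if node["Type"] == "Word":
        assert node["Content"], "Word node must have non-empty Content"
    else:
        assert node["Content"] is None

# The same walk can collect values for comparison against expected output.
words = [n["Content"] for n in walk(page) if n["Type"] == "Word"]
assert words[:3] == ["Core", "Enterprise", "Installer"]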

timeUnit does not work after a flatten and fold transformation

Is it possible to use timeUnit after a flatten and fold transformation?
In the example below it doesn't work.
If I remove the timeUnit from the x axis it plots, but without the good things that come with timeUnit.
Thanks.
Here is example code that can be executed in the Vega editor:
https://vega.github.io/editor/#/edited
{
  "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
  "description": "Sales in a Year.",
  "width": 500,
  "height": 200,
  "data": {
    "values": [
      {
        "timestamp": ["2019-01-01", "2019-02-01", "2019-03-01", "2019-04-01", "2019-05-01", "2019-06-01",
                      "2019-07-01", "2019-08-01", "2019-09-01", "2019-10-01", "2019-11-01", "2019-12-01"],
        "cars": [55, 43, 91, 81, 53, 19, 87, 52, 52, 44, 52, 52],
        "bikes": [12, 6, 2, 0, 0, 0, 0, 0, 0, 3, 9, 15]
      }
    ]
  },
  "transform": [
    {"flatten": ["timestamp", "cars", "bikes"]},
    {"fold": ["cars", "bikes"]}
  ],
  "mark": {"type": "bar", "tooltip": true, "cornerRadiusEnd": 4},
  "encoding": {
    "x": {
      "field": "timestamp",
      "timeUnit": "month",
      "type": "ordinal",
      "title": "",
      "axis": {"labelAngle": 0}
    },
    "y": {"field": "value", "type": "quantitative", "title": "Soiling Loss"},
    "color": {"field": "key", "type": "nominal"}
  }
}
For convenience, strings in input data with a simple temporal encoding are automatically parsed as dates, but such parsing is not applied to data that is the result of a transformation.
In this case, you can do the parsing manually with a calculate transform (view in editor):
{
  "$schema": "https://vega.github.io/schema/vega-lite/v4.json",
  "description": "Sales in a Year.",
  "width": 500,
  "height": 200,
  "data": {
    "values": [
      {
        "timestamp": [
          "2019-01-01", "2019-02-01", "2019-03-01", "2019-04-01",
          "2019-05-01", "2019-06-01", "2019-07-01", "2019-08-01",
          "2019-09-01", "2019-10-01", "2019-11-01", "2019-12-01"
        ],
        "cars": [55, 43, 91, 81, 53, 19, 87, 52, 52, 44, 52, 52],
        "bikes": [12, 6, 2, 0, 0, 0, 0, 0, 0, 3, 9, 15]
      }
    ]
  },
  "transform": [
    {"flatten": ["timestamp", "cars", "bikes"]},
    {"fold": ["cars", "bikes"]},
    {"calculate": "toDate(datum.timestamp)", "as": "timestamp"}
  ],
  "mark": {"type": "bar", "tooltip": true, "cornerRadiusEnd": 4},
  "encoding": {
    "x": {
      "field": "timestamp",
      "timeUnit": "month",
      "type": "ordinal",
      "title": "",
      "axis": {"labelAngle": 0}
    },
    "y": {"field": "value", "type": "quantitative", "title": "Soiling Loss"},
    "color": {"field": "key", "type": "nominal"}
  }
}
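For what it's worth, if you generate this spec from Python, the same fix can be expressed with Altair; this is a sketch under the assumption of Altair 4 (which targets Vega-Lite v4), emitting essentially the corrected spec above:

import altair as alt

# Same column-oriented record as in the spec above.
data = alt.Data(values=[{
    "timestamp": ["2019-01-01", "2019-02-01", "2019-03-01", "2019-04-01",
                  "2019-05-01", "2019-06-01", "2019-07-01", "2019-08-01",
                  "2019-09-01", "2019-10-01", "2019-11-01", "2019-12-01"],
    "cars": [55, 43, 91, 81, 53, 19, 87, 52, 52, 44, 52, 52],
    "bikes": [12, 6, 2, 0, 0, 0, 0, 0, 0, 3, 9, 15],
}])

chart = (
    alt.Chart(data, width=500, height=200)
    .transform_flatten(["timestamp", "cars", "bikes"])
    .transform_fold(["cars", "bikes"])
    # Re-parse the flattened strings as dates, as in the answer above.
    .transform_calculate(timestamp="toDate(datum.timestamp)")
    .mark_bar(tooltip=True, cornerRadiusEnd=4)
    .encode(
        x=alt.X("month(timestamp):O", title="", axis=alt.Axis(labelAngle=0)),
        y=alt.Y("value:Q", title="Soiling Loss"),
        color="key:N",
    )
)

chart.to_json() should show the same flatten, fold, and calculate transforms in the same order.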

Python: requests doesn't get the whole content of a URL

Here is the URL that I am looking at:
url = https://i.instagram.com/api/v1/users/8538441802/info/
If I open this URL in the address bar of a web browser, I see this content:
{"user": {"pk": 8538441802, "username": "bobby_ww4", "full_name": "\ud83c\udf78 B O B B Y \ud83c\udf78", "is_private": false, "profile_pic_url": "https://scontent-sjc3-1.cdninstagram.com/vp/8a743eb7285d9cec268248e7fe6b018e/5D0470C8/t51.2885-19/s150x150/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com", "profile_pic_id": "1963771869542832625_8538441802", "is_verified": false, "has_anonymous_profile_picture": false, "media_count": 42, "follower_count": 2132, "following_count": 2881, "following_tag_count": 0, "biography": "Cake murder 9may\nLUNATIC \ud83d\udc40\nFashionholic \ud83d\udc54\n Attrangi \nUncontrolled rage \ud83d\ude44\nKDM LOVER \u2620\nHold the sea in my embrance", "external_url": "", "total_igtv_videos": 0, "total_ar_effects": 0, "reel_auto_archive": "on", "usertags_count": 44, "is_favorite": false, "is_interest_account": false, "hd_profile_pic_versions": [{"width": 320, "height": 320, "url": "https://scontent-sjc3-1.cdninstagram.com/vp/e6dc6596c0fed678fe6e520e020a1cbc/5D1C6238/t51.2885-19/s320x320/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com"}, {"width": 640, "height": 640, "url": "https://scontent-sjc3-1.cdninstagram.com/vp/4dd3319b1972445ce27eed7c827afbc4/5D033F57/t51.2885-19/s640x640/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com"}], "hd_profile_pic_url_info": {"url": "https://scontent-sjc3-1.cdninstagram.com/vp/30ff98f96900aee2a5604b7d833e0e85/5D130A32/t51.2885-19/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com", "width": 1080, "height": 1080}, "mutual_followers_count": 0, "has_highlight_reels": true, "can_be_reported_as_fraud": false, "direct_messaging": "UNKNOWN", "fb_page_call_to_action_id": "", "address_street": "", "business_contact_method": "CALL", "category": "Fashion Model", "city_id": 106313309406070, "city_name": "Ludhiana, Punjab, India", "contact_phone_number": "+919914934996", "is_call_to_action_enabled": false, "latitude": 30.9, "longitude": 75.85, "public_email": "bobbyvaid1137#gmail.com", "public_phone_country_code": "91", "public_phone_number": "9914934996", "zip": "", "instagram_location_id": "", "is_business": true, "account_type": 2, "can_hide_category": false, "can_hide_public_contacts": false, "should_show_category": true, "should_show_public_contacts": true, "include_direct_blacklist_status": true, "is_potential_business": false, "is_bestie": false, "has_unseen_besties_media": false, "show_account_transparency_details": false, "auto_expand_chaining": false, "highlight_reshare_disabled": false}, "status": "ok"}
NOTE: you must be logged in to Instagram to see the above content.
Here is the line where I use requests to read this page:
page = requests.get(url, headers={"User-Agent": "Mozilla"})
If I look at page.text, this is what I see:
{"user": {"pk": 8538441802, "username": "bobby_ww4", "full_name": "\ud83c\udf78 B O B B Y \ud83c\udf78", "is_private": false, "profile_pic_url": "https://scontent-sjc3-1.cdninstagram.com/vp/8a743eb7285d9cec268248e7fe6b018e/5D0470C8/t51.2885-19/s150x150/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com", "profile_pic_id": "1963771869542832625_8538441802", "is_verified": false, "has_anonymous_profile_picture": false, "media_count": 42, "follower_count": 2132, "following_count": 2881, "following_tag_count": 0, "biography": "Cake murder 9may\nLUNATIC \ud83d\udc40\nFashionholic \ud83d\udc54\n Attrangi \nUncontrolled rage \ud83d\ude44\nKDM LOVER \u2620\nHold the sea in my embrance", "external_url": "", "total_igtv_videos": 0, "total_ar_effects": 0, "reel_auto_archive": "on", "usertags_count": 44, "is_interest_account": false, "hd_profile_pic_versions": [{"width": 320, "height": 320, "url": "https://scontent-sjc3-1.cdninstagram.com/vp/e6dc6596c0fed678fe6e520e020a1cbc/5D1C6238/t51.2885-19/s320x320/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com"}, {"width": 640, "height": 640, "url": "https://scontent-sjc3-1.cdninstagram.com/vp/4dd3319b1972445ce27eed7c827afbc4/5D033F57/t51.2885-19/s640x640/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com"}], "hd_profile_pic_url_info": {"url": "https://scontent-sjc3-1.cdninstagram.com/vp/30ff98f96900aee2a5604b7d833e0e85/5D130A32/t51.2885-19/50103965_250281542552654_8842589758333911040_n.jpg?_nc_ht=scontent-sjc3-1.cdninstagram.com", "width": 1080, "height": 1080}, "has_highlight_reels": true, "can_be_reported_as_fraud": false, "is_potential_business": false, "auto_expand_chaining": false, "highlight_reshare_disabled": false}, "status": "ok"}
As can be seen, the web browser shows some information, such as contact_phone_number, that does not appear in page.text.
Why is that, and how can I use requests or any other Python function to read exactly what I can see in a web browser?
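No answer is recorded here, but the note above ("you must be logged in") points at the likely cause: the browser sends your Instagram session cookies and requests does not. A hedged sketch of passing a session cookie copied from the browser's dev tools (the cookie name sessionid is an assumption about Instagram's auth and may change):

import requests

url = "https://i.instagram.com/api/v1/users/8538441802/info/"

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla"})
# Copy the value from your logged-in browser's cookies (placeholder value).
session.cookies.set("sessionid", "PASTE_YOUR_SESSION_COOKIE_HERE")

page = session.get(url)
# With a valid session, the response should include the logged-in-only fields.
print(page.json().get("user", {}).get("contact_phone_number"))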

fabric on nodeJS, No images rendering at all

The issue is on Ubuntu 14.04:
NodeJS: 0.10.32
Canvas: 1.3.6
Fabric: 1.6.0-rc.1
Example JSON:
{
  "objects": [{
    "id": 0,
    "name": "1452525510_death_star.svg",
    "type": "image",
    "originX": "left",
    "originY": "top",
    "left": 78,
    "top": 21,
    "width": 512,
    "height": 512,
    "fill": "rgb(0,0,0)",
    "stroke": null,
    "strokeWidth": 1,
    "strokeDashArray": null,
    "strokeLineCap": "butt",
    "strokeLineJoin": "miter",
    "strokeMiterLimit": 10,
    "scaleX": 0.46,
    "scaleY": 0.46,
    "angle": 0,
    "flipX": false,
    "flipY": false,
    "opacity": 1,
    "shadow": null,
    "visible": true,
    "clipTo": null,
    "backgroundColor": "",
    "fillRule": "nonzero",
    "globalCompositeOperation": "source-over",
    "transformMatrix": null,
    "_controlsVisibility": {
      "tl": false, "tr": true, "br": true, "bl": false,
      "ml": true, "mt": false, "mr": false, "mb": true, "mtr": true
    },
    "src": "http://somedomain.com/media/patterns/users/1fb158157a882d6a4c983ddc401101d1.svg",
    "filters": [{"type": "Tint", "color": "#c485c4", "opacity": 1}],
    "crossOrigin": "",
    "alignX": "none",
    "alignY": "none",
    "meetOrSlice": "meet"
  }, {
    "id": 1,
    "name": "Baby inside",
    "type": "image",
    "originX": "left",
    "originY": "top",
    "left": 102,
    "top": 290,
    "width": 470,
    "height": 427,
    "fill": "rgb(0,0,0)",
    "stroke": null,
    "strokeWidth": 1,
    "strokeDashArray": null,
    "strokeLineCap": "butt",
    "strokeLineJoin": "miter",
    "strokeMiterLimit": 10,
    "scaleX": 0.5,
    "scaleY": 0.5,
    "angle": 0,
    "flipX": false,
    "flipY": false,
    "opacity": 1,
    "shadow": null,
    "visible": true,
    "clipTo": null,
    "backgroundColor": "",
    "fillRule": "nonzero",
    "globalCompositeOperation": "source-over",
    "transformMatrix": null,
    "_controlsVisibility": {
      "tl": false, "tr": true, "br": true, "bl": false,
      "ml": true, "mt": false, "mr": false, "mb": true, "mtr": true
    },
    "src": "http://somedomain.com/media/patterns/12.png",
    "filters": [{"type": "Tint", "color": "#FFFFFF", "opacity": 1}],
    "crossOrigin": "",
    "alignX": "none",
    "alignY": "none",
    "meetOrSlice": "meet"
  }],
  "background": "#b0b0b0",
  "backgroundImage": {
    "id": 0,
    "name": "",
    "type": "image",
    "originX": "left",
    "originY": "top",
    "left": 0,
    "top": 0,
    "width": 470,
    "height": 574,
    "fill": "rgb(0,0,0)",
    "stroke": null,
    "strokeWidth": 1,
    "strokeDashArray": null,
    "strokeLineCap": "butt",
    "strokeLineJoin": "miter",
    "strokeMiterLimit": 10,
    "scaleX": 1,
    "scaleY": 1,
    "angle": 0,
    "flipX": false,
    "flipY": false,
    "opacity": 1,
    "shadow": null,
    "visible": true,
    "clipTo": null,
    "backgroundColor": "",
    "fillRule": "nonzero",
    "globalCompositeOperation": "source-over",
    "transformMatrix": null,
    "_controlsVisibility": null,
    "src": "http://somedomain.com/media/products/121_37_2.jpg",
    "filters": [],
    "crossOrigin": "",
    "alignX": "none",
    "alignY": "none",
    "meetOrSlice": "meet"
  }
}
Note that this JSON, exported with toJSON(), has some custom fields: [name, id].
This is from my Node script:
function savetoFile() {
    var jsonData = JSONfromAbove;
    var out = fs.createWriteStream(filepath);
    canvas = fabric.createCanvasForNode(470, 574);
    canvas.loadFromJSON(jsonData, function () {
        CanvasZoom(parseInt(zoom), function () {
            console.log('after zoom');
            console.log(canvas.getObjects());
            var stream = canvas.createPNGStream();
            stream.on('data', function (chunk) {
                out.write(chunk);
            });
            stream.on('end', function () {
                out.end();
            });
        });
    });
}

function CanvasZoom(z, callback) {
    width = canvas.width;
    height = canvas.height;
    canvas.setWidth(width * z);
    canvas.setHeight(height * z);
    canvas.setZoom(z);
    canvas.renderAll(); // renderAll.bind(canvas) on its own never invokes renderAll
    callback();
}
Facts:
No matter what type of object I add ('image', 'path', 'path-group'), none of them render at all, except text and maybe (I have not tested it) paths that do not come from URLs.
In the JSON above there is a background image; it is not rendering either.
There are no errors at all. However:
The same identical script works fine on OS X, BUT:
When I try to add a "large" SVG file it gives me: "image given has not completed loading".
It works fine with a HUGE number of normal PNGs.
The time to "render" the final PNG is proportional to the number of objects and their image sizes, which suggests the images are loading to some extent.
I have installed all the dependent libs.
I tried adding a single object like the following, ending with the same problem:
fabric.Image.fromURL('http://somedomain.com/media/patterns/12.png', function(oImg, e) {...});
My bet is that node-canvas is somehow failing with URLs.
I have spent almost 2 days trying to fix this devilish problem ];>
The issue was that www.somedomain.com was protected with htpasswd, so fabric simply could not download the files, yet it did not throw any errors. After switching from 1.6.0-rc1 to 1.5.x, an error did occur: "Segmentation Fault".
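Given that diagnosis, a quick hedged sketch (Python with requests, using the image URLs from the JSON above) to check whether image sources are silently failing behind HTTP auth before blaming fabric or node-canvas:

import requests

# Image sources from the fabric JSON above.
srcs = [
    "http://somedomain.com/media/patterns/users/1fb158157a882d6a4c983ddc401101d1.svg",
    "http://somedomain.com/media/patterns/12.png",
    "http://somedomain.com/media/products/121_37_2.jpg",
]

for src in srcs:
    r = requests.get(src)
    # A 401 here means htpasswd/basic auth is blocking the download,
    # which fabric/node-canvas swallowed without raising an error.
    print(src, r.status_code)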

Kibana: searching for a specific phrase, returns without results, while another search returns the phrase

It looks like a simple use case, but for some reason I just can't figure out how to do this or google a clear example.
Let's say I have a message stored in Logstash:
message:
"info: 2015-11-28 22:02:19,232:common:INFO:ENV: Production User:None:Username:None:LOG: publishing to bus "
If I search in Kibana (version 4) for the phrase "publishing to bus", I get a set of results.
But if I search for "None:LOG: publishing to bus", I get "No results found", even though this phrase obviously exists and is returned by the previous search.
So my question is basically: what is going on? What is the correct way to search for a possibly long phrase, and why does the second example fail?
EDIT:
The stored JSON.
{
  "_index": "logz-ngdxrkmolklnvngumaitximbohqwbocg-151206_v1",
  "_type": "django_logger",
  "_id": "AVF2DPxZZst_8_8_m-se",
  "_score": null,
  "_source": {
    "log": " publishing to bus {'user_id': 8866, 'event_id': 'aibRBPcLxcAzsEVRtFZVU5', 'timestamp': 1449384441, 'quotes': {}, 'rates': {u'EURUSD': Decimal('1.061025'), u'GBPUSD': Decimal('1.494125'), u'EURGBP': Decimal('0.710150')}, 'event': 'AccountInstrumentsUpdated', 'minute': 1449384420}",
    "logger": "common",
    "log_level": "INFO",
    "message": "2015-12-06 06:47:21,298:common:INFO:ENV: Production User:None:Username:None:LOG: publishing to bus {'user_id': 8866, 'event_id': 'aibRBPcLxcAzsEVRtFZVU5', 'timestamp': 1449384441, 'quotes': {}, 'rates': {u'EURUSD': Decimal('1.061025'), u'GBPUSD': Decimal('1.494125'), u'EURGBP': Decimal('0.710150')}, 'event': 'AccountInstrumentsUpdated', 'minute': 1449384420}",
    "type": "django_logger",
    "tags": ["celery"],
    "path": "//path/to/logs/out.log",
    "environment": "Staging",
    "#timestamp": "2015-12-06T06:47:21.298+00:00",
    "user_id": "None",
    "host": "path.to.host",
    "timestamp": "2015-12-06 06:47:21,298",
    "username": "None"
  },
  "fields": {
    "#timestamp": [1449384441298]
  },
  "highlight": {
    "message": [
      "2015-12-06 06:47:21,298:common:INFO:ENV: Staging User:None:Username:None:LOG: #kibana-highlighted-field#publishing#/kibana-highlighted-field# #kibana-highlighted-field#to#/kibana-highlighted-field# #kibana-highlighted-field#bus#/kibana-highlighted-field# {'user_id': **, 'event_id': 'aibRBPcLxcAzsEVRtFZVU5', 'timestamp': 1449384441, 'quotes': {}, 'rates': {u'EURUSD': Decimal('1.061025'), u'GBPUSD': Decimal('1.494125'), u'EURGBP': Decimal('0.710150')}, 'event': 'AccountInstrumentsUpdated', 'minute': 1449384420}"
    ]
  },
  "sort": [1449384441298]
}
According to Elasticsearch, the standard analyzer is used by default. The standard analyzer tokenizes the message field as follows:
"2015-12-06 06:47:21,298:common:INFO:ENV: Production User:None:Username:None:LOG: publishing to bus {'user_id': 8866, 'event_id': 'aibRBPcLxcAzsEVRtFZVU5', 'timestamp': 1449384441, 'quotes': {}, 'rates': {u'EURUSD': Decimal('1.061025'), u'GBPUSD': Decimal('1.494125'), u'EURGBP': Decimal('0.710150')}, 'event': 'AccountInstrumentsUpdated', 'minute': 1449384420}"
{
  "tokens": [
    {"token": "2015", "start_offset": 0, "end_offset": 4, "type": "<NUM>", "position": 0},
    {"token": "12", "start_offset": 5, "end_offset": 7, "type": "<NUM>", "position": 1},
    {"token": "06", "start_offset": 8, "end_offset": 10, "type": "<NUM>", "position": 2},
    {"token": "06", "start_offset": 11, "end_offset": 13, "type": "<NUM>", "position": 3},
    {"token": "47", "start_offset": 14, "end_offset": 16, "type": "<NUM>", "position": 4},
    {"token": "21,298", "start_offset": 17, "end_offset": 23, "type": "<NUM>", "position": 5},
    {"token": "common:info:env", "start_offset": 24, "end_offset": 39, "type": "<ALPHANUM>", "position": 6},
    {"token": "production", "start_offset": 41, "end_offset": 51, "type": "<ALPHANUM>", "position": 7},
    {"token": "user:none:username:none:log", "start_offset": 52, "end_offset": 79, "type": "<ALPHANUM>", "position": 8},
    {"token": "publishing", "start_offset": 81, "end_offset": 91, "type": "<ALPHANUM>", "position": 9},
    {"token": "to", "start_offset": 92, "end_offset": 94, "type": "<ALPHANUM>", "position": 10},
    {"token": "bus", "start_offset": 95, "end_offset": 98, "type": "<ALPHANUM>", "position": 11},
    {"token": "user_id", "start_offset": 100, "end_offset": 107, "type": "<ALPHANUM>", "position": 12},
    {"token": "8866", "start_offset": 109, "end_offset": 113, "type": "<NUM>", "position": 13},
    {"token": "event_id", "start_offset": 115, "end_offset": 123, "type": "<ALPHANUM>", "position": 14},
    {"token": "aibrbpclxcazsevrtfzvu5", "start_offset": 125, "end_offset": 147, "type": "<ALPHANUM>", "position": 15},
    {"token": "timestamp", "start_offset": 149, "end_offset": 158, "type": "<ALPHANUM>", "position": 16},
    {"token": "1449384441", "start_offset": 160, "end_offset": 170, "type": "<NUM>", "position": 17},
    {"token": "quotes", "start_offset": 172, "end_offset": 178, "type": "<ALPHANUM>", "position": 18},
    {"token": "rates", "start_offset": 184, "end_offset": 189, "type": "<ALPHANUM>", "position": 19},
    {"token": "ueurusd", "start_offset": 192, "end_offset": 199, "type": "<ALPHANUM>", "position": 20},
    {"token": "decimal", "start_offset": 201, "end_offset": 208, "type": "<ALPHANUM>", "position": 21},
    {"token": "1.061025", "start_offset": 209, "end_offset": 217, "type": "<NUM>", "position": 22},
    {"token": "ugbpusd", "start_offset": 220, "end_offset": 227, "type": "<ALPHANUM>", "position": 23},
    {"token": "decimal", "start_offset": 229, "end_offset": 236, "type": "<ALPHANUM>", "position": 24},
    {"token": "1.494125", "start_offset": 237, "end_offset": 245, "type": "<NUM>", "position": 25},
    {"token": "ueurgbp", "start_offset": 248, "end_offset": 255, "type": "<ALPHANUM>", "position": 26},
    {"token": "decimal", "start_offset": 257, "end_offset": 264, "type": "<ALPHANUM>", "position": 27},
    {"token": "0.710150", "start_offset": 265, "end_offset": 273, "type": "<NUM>", "position": 28},
    {"token": "event", "start_offset": 277, "end_offset": 282, "type": "<ALPHANUM>", "position": 29},
    {"token": "accountinstrumentsupdated", "start_offset": 284, "end_offset": 309, "type": "<ALPHANUM>", "position": 30},
    {"token": "minute", "start_offset": 311, "end_offset": 317, "type": "<ALPHANUM>", "position": 31},
    {"token": "1449384420", "start_offset": 319, "end_offset": 329, "type": "<NUM>", "position": 32}
  ]
}
The phrase "Production User:None:Username:None:LOG: publishing to bus " is tokenized as:
{"token": "production", "start_offset": 41, "end_offset": 51, "type": "<ALPHANUM>", "position": 7},
{"token": "user:none:username:none:log", "start_offset": 52, "end_offset": 79, "type": "<ALPHANUM>", "position": 8},
{"token": "publishing", "start_offset": 81, "end_offset": 91, "type": "<ALPHANUM>", "position": 9},
{"token": "to", "start_offset": 92, "end_offset": 94, "type": "<ALPHANUM>", "position": 10},
{"token": "bus", "start_offset": 95, "end_offset": 98, "type": "<ALPHANUM>", "position": 11}
So if you search for "publishing to bus", Elasticsearch matches the publishing, to, and bus tokens above and returns the document.
If you search for "None:LOG: publishing to bus", "None:LOG:" does not fully match the indexed token user:none:username:none:log, so the document is not returned.
You can try "User:None:Username:None:LOG: publishing to bus" to get the result.
There are also some problems in Kibana with special characters such as :, | and -. When the analyzer encounters those characters it splits the text into separate tokens rather than keeping the whole value together, which is why it is easy to find "publishing to bus", "None", or "log" on their own. The solution is to indicate that the field should not be analyzed.
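The token list above can be reproduced with the _analyze API; here is a hedged sketch using the official elasticsearch Python client (client and server versions vary, so the exact analyze call signature may differ):

from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a reachable local node

message = ("2015-12-06 06:47:21,298:common:INFO:ENV: Production "
           "User:None:Username:None:LOG: publishing to bus")

# Ask the standard analyzer how it tokenizes the message; compare the
# resulting tokens with any phrase you intend to search for in Kibana.
result = es.indices.analyze(body={"analyzer": "standard", "text": message})
for tok in result["tokens"]:
    print(tok["position"], tok["token"])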
