Inherit and modify a kanban view using the web kanban one2many module - Odoo 13

I am trying to use the Web Kanban One2many module from the Odoo app store with Odoo 13.
I find the documentation quite sparse for this module, but others seem to have it working.
I installed the module, and added the following to my module:
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
    <record model="ir.ui.view" id="co7_crm_kanban_inherit">
        <field name="name">crm.lead.kanban.inherit</field>
        <field name="model">crm.lead</field>
        <field name="inherit_id" ref="crm.crm_case_kanban_view_leads" />
        <field name="arch" type="xml">
            <kanban>
                <field name="order_ids"/>
                <templates>
                    <t t-name="kanban-box">
                        <div>
                            <div>SOs:</div>
                            <t t-foreach="order_ids.raw_value" t-as="record">
                                <t t-esc="record.name"/>
                            </t>
                        </div>
                    </t>
                </templates>
            </kanban>
        </field>
    </record>
</odoo>
I have confirmed the view is loaded correctly, and that it is inheriting the correct parent view. The kanban view I am attempting to modify is the crm.lead kanban view.
Unfortunately, nothing happens... none of the order_ids are displayed, and even the <div>SOs:</div> doesn't show.
I am confused as to how the templating engine is supposed to know where I want my <div> to go. I was expecting to have to use <xpath> to place my content where I wanted, but this is how everyone seems to be doing it on SO.
Basically, I just want a list of associated sales orders to appear above the "tags" on each kanban "card". What am I doing wrong?
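For reference, the conventional way to position content in an inherited view is indeed an <xpath> element rather than repeating the whole arch. Below is a minimal sketch, assuming the parent kanban-box template contains a field node for tag_ids (check the actual arch of crm.crm_case_kanban_view_leads before relying on this expr):
<record model="ir.ui.view" id="co7_crm_kanban_inherit">
    <field name="name">crm.lead.kanban.inherit</field>
    <field name="model">crm.lead</field>
    <field name="inherit_id" ref="crm.crm_case_kanban_view_leads" />
    <field name="arch" type="xml">
        <xpath expr="//t[@t-name='kanban-box']//field[@name='tag_ids']" position="before">
            <div>SOs:</div>
        </xpath>
    </field>
</record>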
More info:
The web one2many module is working: in the Chrome debugging tools I can see the HTTP request and the correct response:
{"jsonrpc": "2.0", "id": 795331297, "result": [[{"id": 21, "name": "S00020", "origin": "Modern Open Space", "client_order_ref": false, "reference": false, "state": "draft", "date_order": "2022-01-18 12:34:31", "validity_date": false, "is_expired": false, "require_signature": true, "require_payment": true, "remaining_validity_days": 0, "create_date": "2022-01-18 17:34:38", "user_id": [2, "Mitchell Admin"], "partner_id": [10, "Deco Addict"], "partner_invoice_id": [10, "Deco Addict"], "partner_shipping_id": [10, "Deco Addict"], "pricelist_id": [1, "Public Pricelist (USD)"], "currency_id": [2, "USD"], "analytic_account_id": false, "order_line": [], "invoice_count": 0, "invoice_ids": [], "invoice_status": "no", "note": "", "amount_untaxed": 0.0, "amount_by_group": [], "amount_tax": 0.0, "amount_total": 0.0, "currency_rate": 1.0, "payment_term_id": [4, "30 Days"], "fiscal_position_id": false, "company_id": [1, "YourCompany"], "team_id": [1, "Europe"], "signature": false, "signed_by": false, "signed_on": false, "commitment_date": false, "expected_date": false, "amount_undiscounted": 0.0, "type_name": "Quotation", "transaction_ids": [], "authorized_transaction_ids": [], "sale_order_template_id": false, "sale_order_option_ids": [], "tag_ids": [4], "opportunity_id": [25, "Modern Open Space"], "campaign_id": [3, "Email Campaign - Services"], "source_id": [1, "Search engine"], "medium_id": false, "activity_ids": [], "activity_state": false, "activity_user_id": false, "activity_type_id": false, "activity_date_deadline": false, "activity_summary": false, "activity_exception_decoration": false, "activity_exception_icon": false, "message_is_follower": true, "message_follower_ids": [263], "message_partner_ids": [3], "message_channel_ids": [], "message_ids": [305], "message_unread": false, "message_unread_counter": 0, "message_needaction": false, "message_needaction_counter": 0, "message_has_error": false, "message_has_error_counter": 0, "message_attachment_count": 0, "message_main_attachment_id": false, "website_message_ids": [], "message_has_sms_error": false, "access_url": "/my/orders/21", "access_token": false, "access_warning": "", "display_name": "S00020", "create_uid": [2, "Mitchell Admin"], "write_uid": [2, "Mitchell Admin"], "write_date": "2022-01-18 17:34:38", "__last_update": "2022-01-18 17:34:38"}]]}

Related

Imgur seems to be letting me search private images, have they screwed up?

According to the Imgur website:
When you upload a post to Imgur, you have two post privacy options: Hidden and Public.
A hidden post means that your post is not shared with the general Imgur community and can only be accessed via the URL. Hidden posts cannot be searched,...
Using only my client id (no OAuth), I hit the gallery search endpoint using the tag '#wow' (i.e. GET https://api.imgur.com/3/gallery/t/wow/). I get the same results as when visiting this publicly accessible page: https://imgur.com/t/wow.
Viewing the metadata for the images I get from my API request, almost all of them (56 / 60) have a privacy value of "hidden".
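For reference, this is roughly how I produced that count (a minimal sketch; YOUR_CLIENT_ID is a placeholder, and the response shape follows the excerpt below):
import requests
from collections import Counter

# Anonymous access: just the registered client id, no OAuth.
resp = requests.get(
    "https://api.imgur.com/3/gallery/t/wow",
    headers={"Authorization": "Client-ID YOUR_CLIENT_ID"},
)
items = resp.json()["data"]["items"]

# Tally the privacy values of the returned gallery items.
print(Counter(item.get("privacy") for item in items))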
This could be concerning, though I wondered if an Imgur expert could explain it before I try contacting Imgur about it.
Portion of the API json response:
{"data":
{"name": "wow",
"display_name": "wow",
"followers": 29719,
"total_items": 17077,
"following": false,
"is_whitelisted": true,
"background_hash": "QL9pTeJ",
"thumbnail_hash": nil,
"accent": "159559",
"background_is_animated": false,
"thumbnail_is_animated": false,
"is_promoted": false,
"description": "",
"logo_hash": nil,
"logo_destination_url": nil,
"description_annotations": {},
"items":
[{"id": "yWPI5rf",
"title": "Lake basement.",
"description": nil,
"datetime": 1656844974,
"cover": "IrUShgg",
"cover_width": 720,
"cover_height": 960,
"account_url": "DacianFalx",
"account_id": 64529075,
"privacy": "hidden", // 56 / 60 items have the same privacy value
"layout": "blog",
"views": 50557,
"link": "https://imgur.com/a/yWPI5rf",
"ups": 481,
"downs": 6,
"points": 475,
"score": 499,
"is_album": true,
"vote": nil,
"favorite": false,
"nsfw": false,
"section": "",
"comment_count": 116,
"favorite_count": 40,
"topic": nil,
"topic_id": nil,
"images_count": 1,
"in_gallery": true,
"is_ad": false,
"tags":
[{"name": "wow",
"display_name": "wow",
"followers": 29719,
"total_items": 17076,
"following": false,
"is_whitelisted": false,
"background_hash": "QL9pTeJ",
...omitted...

Azure Search Fails to Return the Expected Result When No searchFields or Multiple searchFields Are Defined

I have a fairly basic Azure Search index with several fields of searchable string data, for example [abridged]...
"fields": [
{
"name": "Field1",
"type": "Edm.String",
"facetable": false,
"filterable": true,
"key": true,
"retrievable": true,
"searchable": true,
"sortable": false,
"analyzer": null,
"indexAnalyzer": null,
"searchAnalyzer": null,
"synonymMaps": [],
"fields": []
},
{
"name": "Field2",
"type": "Edm.String",
"facetable": false,
"filterable": true,
"retrievable": true,
"searchable": true,
"sortable": false,
"analyzer": "en.microsoft",
"indexAnalyzer": null,
"searchAnalyzer": null,
"synonymMaps": [],
"fields": []
}
]
Field1 is loaded with alphanumeric id data and Field2 is loaded with English language string data, specifically the name/title of the record. searchMode=all is also being used to ensure the accuracy of the results.
Let's say one of the records indexed has the following Field2 data: BA (Hons) in Business, Organisational Behaviour and Coaching. Putting that into the en.microsoft analyzer, this is the result we get out:
"tokens": [
{
"token": "ba",
"startOffset": 0,
"endOffset": 2,
"position": 0
},
{
"token": "hon",
"startOffset": 4,
"endOffset": 8,
"position": 1
},
{
"token": "hons",
"startOffset": 4,
"endOffset": 8,
"position": 1
},
{
"token": "business",
"startOffset": 13,
"endOffset": 21,
"position": 3
},
{
"token": "organizational",
"startOffset": 23,
"endOffset": 37,
"position": 4
},
{
"token": "organisational",
"startOffset": 23,
"endOffset": 37,
"position": 4
},
{
"token": "behavior",
"startOffset": 38,
"endOffset": 47,
"position": 5
},
{
"token": "behaviour",
"startOffset": 38,
"endOffset": 47,
"position": 5
},
{
"token": "coach",
"startOffset": 52,
"endOffset": 60,
"position": 7
},
{
"token": "coaching",
"startOffset": 52,
"endOffset": 60,
"position": 7
}
]
As you can see, the tokens returned are what you'd expect for such a string. However, when it comes to using that same indexed string value as a search term (sadly a valid use case in this instance), the results returned are not as expected unless you explicitly use searchFields=Field2.
Query 1 (Returns 0 results):
?searchMode=all&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
Query 2 (Returns 0 results):
?searchMode=all&searchFields=Field1,Field2&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
Query 3 (Returns 1 result as expected):
?searchMode=all&searchFields=Field2&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
So why does this only return the expected result with searchFields=Field2, and not with no searchFields defined or with searchFields=Field1,Field2? I would not expect a non-match on Field1 to exclude a result that clearly matches on Field2.
Furthermore, removing the "in" and "and" within the search term seems to correct the issue and return the expected result. For example:
Query 4 (Returns 1 result as expected):
?searchMode=all&search=BA%20(Hons)%20Business%2C%20Organisational%20Behaviour%20Coaching
(This is almost like one analyzer is tokenizing the indexed data and a completely different analyzer is tokenizing the search term, although that theory doesn't make any sense when taking into consideration Query 3, as that provides a positive match using the exact same indexed data/search term.)
Is anybody able to shed some light as to what's going on here as I'm completely out of ideas and I can't find anything more in the documentation?
NB. Please bear in mind that I'm looking to understand why Azure Search is behaving in this way and not necessarily wanting a work around.
The reason you don't get any hits is due to how stopwords are handled when you use searchMode=all. The standard analyzer does not remove stopwords, while the Lucene and Microsoft analyzers for English do. I verified this by creating an index with your property definitions and sample data. If you use the standard analyzer, stopwords are not removed and you get a match even when using searchMode=all. To get a match when using either the Lucene or Microsoft analyzers with the simple query mode, you would have to use a phrase search.
When you test the en.microsoft analyzer in your example, you only see what the first stage of the analyzer does: it splits your query into tokens. In your case, two of the tokens are also stopwords in English (in, and). Stopword removal is part of lexical analysis, which happens later, in stage 2, as explained in the article Anatomy of a search request. Furthermore, lexical analysis is only applied to "query types that require complete terms", like searchMode=all. See Exceptions to lexical analysis for more examples.
There is a previous post here about this that explains in more detail. See Queries with stopwords and searchMode=all return no results
I know you did not ask for workarounds, but to better understand what is going on it is useful to list some possible ones (a sketch of the third follows below):
1. For the English analyzers, use a phrase search by wrapping the query in quotes: search="BA (Hons) in Business, Organisational Behaviour and Coaching"&searchMode=all
2. Use the standard analyzer, which works the way you expect: search=BA (Hons) in Business, Organisational Behaviour and Coaching&searchMode=all
3. Disable lexical analysis by defining a custom analyzer.
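As a sketch of that third option (the service name, index name, api-version and analyzer name are placeholders/assumptions, not tested against your index): a custom analyzer that tokenizes like the standard analyzer but carries no stopword filter, which you would then assign to Field2. Analyzer changes require the index to be taken offline briefly, hence allowIndexDowntime=true.
import requests

index_definition = {
    "name": "my-index",
    "fields": [
        # ...existing field definitions, with Field2 pointed at the custom analyzer:
        # {"name": "Field2", "type": "Edm.String", "searchable": True,
        #  "analyzer": "en_no_stopwords", ...}
    ],
    "analyzers": [
        {
            "name": "en_no_stopwords",
            "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
            "tokenizer": "standard_v2",
            "tokenFilters": ["lowercase"]  # deliberately no stopword filter
        }
    ]
}

resp = requests.put(
    "https://YOUR-SERVICE.search.windows.net/indexes/my-index"
    "?api-version=2020-06-30&allowIndexDowntime=true",
    headers={"api-key": "YOUR_ADMIN_KEY", "Content-Type": "application/json"},
    json=index_definition,
)
resp.raise_for_status()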

Not able to crawl JSON from a script tag using XPath in Python

I have encountered JSON data in a script tag, and I want to extract the JSON data from that script tag. Is there any possible way to do it?
I have tried a few tweaks, but I couldn't work out how to handle the script tag.
html code:
<div id="staticid" class="a-section a-spacing-none a-padding-none">
<script type="a-state" data-a-state="{"key":"turbo-checkout-page-state"}">{"turboWeblab":"RCX_CHECKOUT_TURBO_DESKTOP_NONPRIME_87784","strings":{"TURBO_CHECKOUT_HEADER":"Buy now: 1byone Fake TV Simulator Anti-Burglar and Theft Deterrent with LED Light","TURBO_LOADING_TEXT":"Loading your order summary"},"inputs":{"a":"B017SJR6JS","quantity":"1","requestId":"P2JG384YYYBDM166PD5N","customItemPrice":"","oid":"dwC1O6h7HNFmAorkhv9i8nvDzUpdCtjNPCatSnP1kq1INA1KtQHHN%2F233KfCXVMuFL%2BF5rUWX5RBDz19uhFQqPVIanAuuf10V2zoV61qaytpGMPXObsZ8mHCnUFkWVEEcC7GM102R3Wk%2FB1j5q2%2FcVWrlbfu8S7n","sessionId":"260-4039899-0659318","addressId":"add-new"},"configurations":{"isSignInEnabled":true,"initiateSelector":"#buy-now-button","prefetchEnabled":true},"buttonID":"buy-now","eligibility":{"prime":false,"canOneClick":false,"preOrder":false,"stockOnHand":70,"isEligible":false,"primeShipping":true,"customerDefaults":false,"canBuyNow":true},"turboWeblabTreatment":"T1","timeout":"5000"}</script>
</div>
Python code:
parser.xpath("//div[#id='staticid']/script")
This returns an empty list.
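One thing that stands out: # is CSS selector syntax, while XPath tests attributes with @. A minimal sketch with lxml and json (html_source is assumed to hold the fetched page shown above):
import json
from lxml import html

parser = html.fromstring(html_source)

# '@id' selects the id attribute; '#id' is not valid XPath.
scripts = parser.xpath("//div[@id='staticid']/script/text()")
if scripts:
    data = json.loads(scripts[0])
    print(data["inputs"]["a"])  # "B017SJR6JS"
If the list is still empty with @id, the script tag may be injected by JavaScript and simply absent from the raw HTML you fetched.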
Output expected:
{
  "turboWeblab": "RCX_CHECKOUT_TURBO_DESKTOP_NONPRIME_87784",
  "strings": {
    "TURBO_CHECKOUT_HEADER": "Buy now: 1byone Fake TV Simulator Anti-Burglar and Theft Deterrent with LED Light",
    "TURBO_LOADING_TEXT": "Loading your order summary"
  },
  "inputs": {
    "a": "B017SJR6JS",
    "quantity": "1",
    "requestId": "P2JG384YYYBDM166PD5N",
    "customItemPrice": "",
    "oid": "dwC1O6h7HNFmAorkhv9i8nvDzUpdCtjNPCatSnP1kq1INA1KtQHHN%2F233KfCXVMuFL%2BF5rUWX5RBDz19uhFQqPVIanAuuf10V2zoV61qaytpGMPXObsZ8mHCnUFkWVEEcC7GM102R3Wk%2FB1j5q2%2FcVWrlbfu8S7n",
    "sessionId": "260-4039899-0659318",
    "addressId": "add-new"
  },
  "configurations": {
    "isSignInEnabled": true,
    "initiateSelector": "#buy-now-button",
    "prefetchEnabled": true
  },
  "buttonID": "buy-now",
  "eligibility": {
    "prime": false,
    "canOneClick": false,
    "preOrder": false,
    "stockOnHand": 70,
    "isEligible": false,
    "primeShipping": true,
    "customerDefaults": false,
    "canBuyNow": true
  },
  "turboWeblabTreatment": "T1",
  "timeout": "5000"
}

Can see the terminal output but cannot save it to a file using a shell redirect

The content is on stdout.
I'm trying to get the result of this package using Node.js. I have been trying to use spawn and exec, and to log the child_process object to debug it, but I cannot see the value on stdout, even though the stderr data is fine.
When I redirect the terminal output I'm able to capture stderr, but stdout is just empty if I send it to a file, even though it does show up in the terminal.
Then I tried using just the tool by itself to check the result, and now I think it's a problem with the tool, not the Node.js code.
EDIT: Adding terminal content in text
Macbooks-MacBook-Pro:query macos$ lola --formula="EF DEADLOCK" input.lola --quiet --json
{"analysis": {"formula": {"parsed": "EF (DEADLOCK)", "parsed_size": 13, "type": "deadlock"}, "result": true, "stats": {"edges": 3, "states": 4}}, "call": {"architecture": 64, "assertions": false, "build_system": "x86_64-apple-darwin17.7.0", "error": null, "hostname": "Macbooks-MacBook-Pro.local", "optimizations": true, "package_version": "2.0", "parameters": ["--formula=EF DEADLOCK", "input.lola", "--quiet", "--json"], "signal": null, "svn_version": "Unversioned directory"}, "files": {"net": {"filename": "input.lola"}}, "limits": {"markings": null, "time": null}, "net": {"conflict_sets": 6, "filename": "input.lola", "places": 8, "places_significant": 6, "transitions": 7}, "store": {"bucketing": 16, "encoder": "bit-perfect", "threads": 1, "type": "prefix"}}
Macbooks-MacBook-Pro:query macos$ lola --formula="EF DEADLOCK" input.lola --quiet --json 2> aaa.txt
{"analysis": {"formula": {"parsed": "EF (DEADLOCK)", "parsed_size": 13, "type": "deadlock"}, "result": true, "stats": {"edges": 3, "states": 4}}, "call": {"architecture": 64, "assertions": false, "build_system": "x86_64-apple-darwin17.7.0", "error": null, "hostname": "Macbooks-MacBook-Pro.local", "optimizations": true, "package_version": "2.0", "parameters": ["--formula=EF DEADLOCK", "input.lola", "--quiet", "--json"], "signal": null, "svn_version": "Unversioned directory"}, "files": {"net": {"filename": "input.lola"}}, "limits": {"markings": null, "time": null}, "net": {"conflict_sets": 6, "filename": "input.lola", "places": 8, "places_significant": 6, "transitions": 7}, "store": {"bucketing": 16, "encoder": "bit-perfect", "threads": 1, "type": "prefix"}}
Macbooks-MacBook-Pro:query macos$
It sounds to me like the issue isn't really Node.js related. I did some quick googling on this LoLA tool, and it looks like it might have some custom stdout/stderr handling, so it is possible for it to behave differently when used interactively in a terminal than when you redirect stdout or stderr.
One possible fix would be to use the --json option with a temporary filename and have your Node.js code read the result from the temporary file it creates. (If you can't figure out the stdout/stderr issue.)
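If LoLA accepts a filename for --json (the --json=FILE form below is an assumption; check lola --help for the exact syntax), the Node.js side could look roughly like this:
const { execFile } = require('child_process');
const fs = require('fs');

// Assumed flag syntax: write the JSON report to result.json instead of the console.
execFile(
  'lola',
  ['--formula=EF DEADLOCK', 'input.lola', '--quiet', '--json=result.json'],
  (err, stdout, stderr) => {
    if (err) return console.error(stderr || err);
    const report = JSON.parse(fs.readFileSync('result.json', 'utf8'));
    console.log(report.analysis.result); // true for the sample net above
  }
);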

Fetching host availability to external webpage in Nagios

Is there any possible way to fetch the live availability of a host/hostgroup from the Nagios monitoring tool (where the hosts/hostgroups are already configured) so that it can be redirected/captured to an external webpage?
Are there any exposed APIs to do that? I couldn't find a way.
Nagios is on a Linux host.
Any help or info is appreciated.
EDIT1:
I have a hostgroup, say for example 'All_prod'. In this hostgroup I will have around 20 Linux hosts, and for all the hosts there are some metrics/checks defined (for example availability, CPU load, free memory, etc.). Here I want a report of only the availability metric for all the hosts (for example: if in 24 hours the availability is down for 10 minutes, it should report that it was down for 10 minutes in 24 hours, or just give me any related info which I can evaluate).
It would be great if there are any APIs to fetch that information which will return the data as JSON/XML.
You can use the Nagios JSON API; there is a query builder at http://NAGIOSURL/jsonquery.html.
But, to answer your specific question, the query for hosts would look like this:
http://NAGIOSURL/cgi-bin/statusjson.cgi?query=host&hostname=localhost
Which will output something similar to the following:
{
  "format_version": 0,
  "result": {
    "query_time": 1497384499000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "host",
    "query_status": "released",
    "program_start": 1497368240000,
    "last_data_update": 1497384489000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "host": {
      "name": "localhost",
      "plugin_output": "egsdda",
      "long_plugin_output": "",
      "perf_data": "",
      "status": 8,
      "last_update": 1497384489000,
      "has_been_checked": true,
      "should_be_scheduled": false,
      "current_attempt": 10,
      "max_attempts": 10,
      "last_check": 1496158536000,
      "next_check": 0,
      "check_options": 0,
      "check_type": 1,
      "last_state_change": 1496158536000,
      "last_hard_state_change": 1496158536000,
      "last_hard_state": 1,
      "last_time_up": 1496158009000,
      "last_time_down": 1496158536000,
      "last_time_unreachable": 1480459504000,
      "state_type": 1,
      "last_notification": 1496158536000,
      "next_notification": 1496165736000,
      "no_more_notifications": false,
      "notifications_enabled": true,
      "problem_has_been_acknowledged": false,
      "acknowledgement_type": 0,
      "current_notification_number": 2,
      "accept_passive_checks": true,
      "event_handler_enabled": true,
      "checks_enabled": false,
      "flap_detection_enabled": true,
      "is_flapping": false,
      "percent_state_change": 0,
      "latency": 0.49,
      "execution_time": 0,
      "scheduled_downtime_depth": 0,
      "process_performance_data": true,
      "obsess": true
    }
  }
}
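To pull that into an external page or script, you can call the same CGI over HTTP; here is a minimal Python sketch (the URL and Basic-auth credentials are placeholders):
import requests

resp = requests.get(
    "http://NAGIOSURL/cgi-bin/statusjson.cgi",
    params={"query": "host", "hostname": "localhost"},
    auth=("nagiosadmin", "PASSWORD"),  # the same credentials as the web UI
)
host = resp.json()["data"]["host"]
print(host["status"], host["plugin_output"])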
And for hostgroups:
http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=hostlist&hostgroup=linux-servers
Which will output something similar to the following:
{
  "format_version": 0,
  "result": {
    "query_time": 1497384613000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "hostlist",
    "query_status": "released",
    "program_start": 1497368240000,
    "last_data_update": 1497384609000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "selectors": {
      "hostgroup": "linux-servers"
    },
    "hostlist": {
      "localhost": 8
    }
  }
}
Hope this helps!
EDIT 1 (To correspond with the question's EDIT 1):
What you're asking for isn't built in by default. You can use the above methods to grab the data for each host (but it sounds like you want it for each service), so again we will use the JSON API found at http://YOURNAGIOSURL/jsonquery.html to grab service data:
http://YOURNAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=service&hostname=localhost&servicedescription=Current+Load
We'll get the following output (something similar, anyway):
{
  "format_version": 0,
  "result": {
    "query_time": 1497875258000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "service",
    "query_status": "released",
    "program_start": 1497800686000,
    "last_data_update": 1497875255000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "service": {
      "host_name": "localhost",
      "description": "Current Load",
      "plugin_output": "OK - load average: 0.00, 0.00, 0.00",
      "long_plugin_output": "",
      "perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
      "max_attempts": 4,
      "current_attempt": 1,
      "status": 2,
      "last_update": 1497875255000,
      "has_been_checked": true,
      "should_be_scheduled": true,
      "last_check": 1497875014000,
      "check_options": 0,
      "check_type": 0,
      "checks_enabled": true,
      "last_state_change": 1497019191000,
      "last_hard_state_change": 1497019191000,
      "last_hard_state": 0,
      "last_time_ok": 1497875014000,
      "last_time_warning": 1497019191000,
      "last_time_unknown": 0,
      "last_time_critical": 1497018891000,
      "state_type": 1,
      "last_notification": 0,
      "next_notification": 0,
      "next_check": 1497875314000,
      "no_more_notifications": false,
      "notifications_enabled": true,
      "problem_has_been_acknowledged": false,
      "acknowledgement_type": 0,
      "current_notification_number": 0,
      "accept_passive_checks": true,
      "event_handler_enabled": true,
      "flap_detection_enabled": true,
      "is_flapping": false,
      "percent_state_change": 0,
      "latency": 0,
      "execution_time": 0,
      "scheduled_downtime_depth": 0,
      "process_performance_data": true,
      "obsess": true
    }
  }
}
The most important line for what you're trying to do (as far as I understand it) is the perf_data line:
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
This is the data you'd use to generate whatever custom metrics report you're trying to generate.
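Extracting those values is mostly string splitting; here is a rough sketch, assuming the standard plugin perfdata layout label=value;warn;crit;min;max:
def parse_perf_data(perf_data):
    """Split a Nagios perf_data string into {label: value} pairs."""
    metrics = {}
    for chunk in perf_data.split():
        label, _, rest = chunk.partition("=")
        value = rest.split(";")[0]                  # the value precedes warn/crit/min/max
        metrics[label] = float(value.rstrip("%s"))  # strip a trailing % or s unit if present
    return metrics

print(parse_perf_data("load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0;"))
# {'load1': 0.0, 'load5': 0.0}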
Keep in mind this is something that is sort of built in to Nagios XI (though not in an exportable format like you're requesting), but the metrics component does allow you to easily drill down and look at some metric-specific data.
Hope this helps!
