Kendo Grid misbehaving in certain situations with Durandal/RequireJS

BACKGROUND:
I have an existing site which makes use of the following technologies:
ASP.NET MVC5,
KnockoutJS,
Kendo UI 2014.1.318
Web API 2
OData v3
There are many Kendo Grids on my site, all working perfectly fine. Until now, that is... when I started integrating Durandal.
PROBLEM:
95% of the grids are perfectly fine, but there are 2 of them which get their data from an OData v3 action (a POST action). For example:
[EnableQuery(AllowedQueryOptions = AllowedQueryOptions.All)]
[HttpPost]
public IQueryable<ServiceInfoResult> GetServices(ODataActionParameters parameters)
{
}
Yes, it's unusual, but for reasons I won't go into, I have data coming from an OData (POST) Action. The grids work fine usually, I just have to make sure to set the following:
schema: {
    data: function (data) {
        return data.value;
    },
    total: function (data) {
        //return data["odata.count"]; // this is the one normally used for other grids, but not here...
        // instead, I need to use the following and do paging locally, which is fine, since there's a VERY small number of records, so there's no issue.
        return data.value.length;
    },
    //etc
}
Anyway, now that I am using Durandal/RequireJS, something weird is happening: on first load everything looks perfectly fine, but when I click on a page (2, 3, 4, etc.), the grid shows ALL of the records, even though the footer still says "showing 11-20 of 238 items" and still shows the page numbers.
Again, it was working fine before. Does anyone have any idea why this might be happening and what I can do about it?
UPDATE
I just discovered something. With all my grids, I am using a property on the viewModel to specify the gridPageSize. Basically, I am doing this:
var ViewModel = function () {
    var self = this;
    self.gridPageSize = 10;
    //etc
    self.attached = function () {
        //etc
        self.gridPageSize = $("#GridPageSize").val(); // hidden field used to get the page size that was set in the admin area
        //etc
    };
    //etc
};
and in the grid configuration, I have:
pageSize: self.gridPageSize,
serverPaging: false,
serverFiltering: false,
serverSorting: false,
//etc
This works perfectly fine with all the grids that use server-side paging. However, this grid is using client-side paging. What I did now was simply the following:
pageSize: 10,
and now it works as expected. This is a good workaround, but not perfect, as I cannot dynamically set the page size. Any ideas as to what's happening here?

You can dynamically change the pageSize of your grid. All you need to do is call the pageSize() method of the grid's dataSource and it should work as expected.
$('#grid').data('kendoGrid').dataSource.pageSize(100);
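If you still want to drive it from the hidden field, one option (a sketch based on the question's #GridPageSize field and the $('#grid') selector used above) is to read the value in the attached handler and apply it once the grid exists; note that jQuery's .val() returns a string, so parse it before handing it to Kendo:
self.attached = function () {
    // sketch: val() returns a string, so parse it; fall back to 10 if the field is missing
    var pageSize = parseInt($("#GridPageSize").val(), 10) || 10;
    $("#grid").data("kendoGrid").dataSource.pageSize(pageSize);
};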

This is no longer an issue, because OData 5.7 now returns @odata.count for actions/functions returning collections of complex types. Previously, I turned off server-side paging and filtering and just got all the data on the client, which I didn't like, but I had no choice. Now I can use server-side paging and don't need to care about this weird problem anymore. More info on the OData fix here: https://github.com/OData/WebApi/issues/484#issuecomment-153929767
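For reference, once the count comes back this way, the dataSource from the question can switch back to server-side paging along these lines (a sketch; the exact annotation name depends on the OData payload version, e.g. odata.count for v3 vs @odata.count for v4):
schema: {
    data: function (data) {
        return data.value;
    },
    total: function (data) {
        return data["@odata.count"]; // returned for action/function results as of OData Web API 5.7
    }
},
serverPaging: true,
serverFiltering: true,
serverSorting: true,
//etc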

Related

Adaptive Card QuickView Navigation not firing

Ok, so I'm just getting started with adaptive cards and downloaded the PnP ACE project (GitHub link) to use as a jumping-off point. Started the project up and ran gulp serve... everything seemed great... until I tried to click either the View Items or Add Item buttons on the CardView. Both of these buttons fire QuickViews, but when you click them nothing happens. Tried 2 different browsers, no errors registered in the console; it just acts like there isn't an action tied to the buttons.
So, thinking "well, maybe something got screwed up in a commit", I started a brand new project using Yeoman.
Got the project set up, building, and served it up; EXACT SAME PROBLEM!! The button in the default ACE project template didn't work either. I can't figure out what gives.
Here is the function for the button that the template created:
public get cardButtons(): [ICardButton] | [ICardButton, ICardButton] | undefined {
  return [
    {
      title: strings.QuickViewButton,
      action: {
        type: 'QuickView',
        parameters: {
          view: QUICK_VIEW_REGISTRY_ID
        }
      }
    }
  ];
}
That looks just like the PnP example (and every other example I've seen online). Even the quickViewNavigator is populated the same. Here is the one from the template project (class definitions removed to save space):
const CARD_VIEW_REGISTRY_ID: string = 'JasonAdaptiveTest_CARD_VIEW';
export const QUICK_VIEW_REGISTRY_ID: string = 'JasonAdaptiveTest_QUICK_VIEW';

public onInit(): Promise<void> {
  this.state = { };

  this.cardNavigator.register(CARD_VIEW_REGISTRY_ID, () => new CardView());
  this.quickViewNavigator.register(QUICK_VIEW_REGISTRY_ID, () => new QuickView());

  return Promise.resolve();
}
So what gives? Why do these not work? Is there some NPM package that may be missing?
Well, I found the line buried in the tutorials that explains my problem. A single note, on a page that isn't necessarily about quick views:
Note
ACE interaction is disabled while in Edit mode. The Workbench or Page must be in Preview or Read mode to interact with the ACE.

Why is my dynamic Gatsby page not working

I'm trying to create dynamic pages based on a database that grows by the minute. Therefore it isn't an option to use createPage and build several times a day.
I'm using onCreatePage here to create pages, which works fine for my first route, but when I try to make an English route it somehow doesn't work.
gatsby-node.js:
exports.onCreatePage = async ({ page, actions: { createPage } }) => {
  if (page.path.match(/^\/listing/)) {
    page.matchPath = '/listing/:id'
    createPage(page)
  }

  if (page.path.match(/^\/en\/listing/)) {
    page.matchPath = '/en/listing/:id'
    createPage(page)
  }
}
What I'm trying to achieve here is getting 2 dynamic routes like:
localhost:8000/listing/123 (this one works)
localhost:8000/en/listing/123 (this one doesn't work)
My pages folder looks like this:
pages
---listing.tsx
---en/
------listing.tsx
Can anyone see what I'm doing wrong here?
--
P.S. I want to use SSR (available since Gatsby v4) by using the getServerData() in the templates for these pages. Will that work together with pages created dynamically with onCreatePage or is there a better approach?
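For context, the getServerData export I have in mind for the listing template would look roughly like this (just a sketch; the fetch URL is purely illustrative):
// src/pages/listing.tsx (sketch only)
export async function getServerData(context) {
  // context exposes query, params, headers etc. for the incoming request
  const res = await fetch('https://example.com/api/listings') // illustrative URL
  return {
    props: {
      listings: await res.json(),
    },
  }
}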
Based on what we've discussed in the comment section: the fact that the /en/ path is never created, and hence never enters the following condition:
if (page.path.match(/^\/en\/listing/)) {
  page.matchPath = '/en/listing/:id'
  createPage(page)
}
points me to think that the issue is in your createPages API rather than in onCreatePage, which means that your English page is not even created.
Keep in mind that the onCreatePage API is a callback invoked when a page is created, so it's triggered after createPages.
If you add a console.log(page.path) you shouldn't see the English page in the console output, so try debugging how you are creating the /en/ route, because onCreatePage itself doesn't seem to have any problem.
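For instance, the logging mentioned above can be dropped straight into the existing hook (same code as in the question, with one console.log added):
exports.onCreatePage = async ({ page, actions: { createPage } }) => {
  // Log every page Gatsby hands to this hook; if /en/listing never shows up here,
  // the English page was never created in the first place.
  console.log('onCreatePage saw:', page.path)

  if (page.path.match(/^\/listing/)) {
    page.matchPath = '/listing/:id'
    createPage(page)
  }

  if (page.path.match(/^\/en\/listing/)) {
    page.matchPath = '/en/listing/:id'
    createPage(page)
  }
}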

How to add a custom dimension to request telemetry in a Node.js/TypeScript Azure Function?

Goal
A request comes in and is handled by the Azure Functions runtime. By default it creates a Request entry, and a bunch of Trace entries, in Application Insights. I want to add a custom dimension to that top-level request item (on a per-request basis) so I can use it for filtering/analysis later.
(Screenshots: an Application Insights query for requests, and the resulting list of requests including a custom dimensions column.)
The Azure Functions runtime adds a few custom dimensions already. I want to add a few of my own.
Approach
The most promising approach I've found is shown below (taken from https://github.com/microsoft/ApplicationInsights-node.js/issues/392):
appInsights.defaultClient.addTelemetryProcessor((envelope, context) => {
    var data = envelope.data.baseData;
    data.properties['mykey'] = 'myvalue';
    return true;
});
However, I find that this processor is only called for requests that I initiate within my function. For example, if I make an HTTP request to another service, then details of that request will be passed through the processor and I can add custom properties to it. But the main incoming request does not seem to pass through here, so I can't add my custom property.
I also tried this
defaultClient.commonProperties['anotherCustomProp'] = 'bespokeProp2'
Same problem. The custom property doesn't arrive in Application Insights. I've played with many variations on this, and it appears that the logging done by azure-functions is walled off from anything I can do within my code.
The best workaround I have right now is to call trackRequest manually. This is okay, except I end up with each request logged twice in Application Insights: once by the framework and once by me. And both need to have the same operation_id, otherwise I can't find the associated trace/error items, so I'm having to extract the operationId in a slightly hacky way. This may be fine; my knowledge of Application Insights is pretty naive at this point.
// I have to import the specific functions, because "import ai from 'applicationinsights'" returns null.
import { setup, defaultClient } from 'applicationinsights'
import { AzureFunction, Context, HttpRequest } from '@azure/functions'

// Call this because otherwise defaultClient is null.
// Some examples call start(); I've tried with and without it.
// I think start() is useful when you're adding application-insights to a project fresh,
// whereas the azure-functions run-time must be doing this already.
setup()

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    // Extract the operation id from the traceparent as per the W3C standard: https://www.w3.org/TR/trace-context/.
    const operationId = context.traceContext.traceparent.split('-')[1]
    var operationIdOverride = { 'ai.operation.id': operationId }

    // Create my own trackRequest entry
    defaultClient.trackRequest({
        name: 'my func name',
        url: context.req.url.split('?')[0],
        duration: 123,
        resultCode: 200,
        success: true,
        tagOverrides: operationIdOverride,
        properties: {
            customProp: 'bespokeProp'
        }
    })
}
The Dream
Our C# cousins seem to have an array of options, like Activity.Current.tags and the ability to add a TelemetryInitializer. It looks like what I'm trying to do is supported, I'm just not finding the right combination of commands! Is there something similar for JavaScript/TypeScript/Node.js, where I can just add a tag on a per-request basis? Something along the lines of context.traceContext.attributes['myprop'] = 'myValue'.
Alternative
Alternatively, instrumenting my code using my own TelemetryClient (rather than the defaultClient) with trackRequest, trackTrace, trackError, etc. is not a very big job and should work well - that would be more explicit. Should I just do that? Is there a way to disable the Azure Functions tracking, or perhaps I just leave that running side-by-side?
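For completeness, the explicit alternative described above would look roughly like this (a sketch only, not tested; it assumes the instrumentation key / connection string is available from the Function App's environment settings, and the message and property values are illustrative):
import { TelemetryClient } from 'applicationinsights'
import { AzureFunction, Context, HttpRequest } from '@azure/functions'

// A dedicated client, separate from defaultClient; with no argument it falls back
// to the instrumentation key / connection string configured in the environment.
const telemetry = new TelemetryClient()

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    // Reuse the operation id from the incoming traceparent so traces correlate with the framework's request entry.
    const operationId = context.traceContext.traceparent.split('-')[1]

    telemetry.trackTrace({
        message: 'handling request',                // illustrative
        tagOverrides: { 'ai.operation.id': operationId },
        properties: { customProp: 'bespokeProp' }   // the per-request custom dimension
    })
}

export default httpTrigger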

List&Label Webreporting: Export/Printing does not work on IIS Server

Hi and thank you in advance for any help,
I am trying to post a JSON object to an ASP.NET MVC server via jQuery/Ajax. The controller method is supposed to take the JSON input and use it as a DataProvider for List & Label 22. The report should then be generated and offered to the user as a PDF file for download.
Since I want the structure of the JSON object to be generic, I don't want to create a specific model in ASP.NET for this request, but rather pass the JSON object over as a string (I know that I might run into some size restrictions, but I will worry about that later :) ).
Here is my POST request:
<script>
    function getReport() {
        // dummy data
        var data = { JsonVariable1: 1, JsonVariable2: "JsonVariable2" };
        var dataSource = JSON.stringify(data);
        $.ajax({
            type: "POST",
            dataType: "text",
            url: "@Url.Action("/JsonTest")",
            data: "aDataSource=" + dataSource,
            async: false,
            success: function (result) {
                alert('Success!');
            }
        });
    }
</script>
And this is the Controller method:
[HttpPost]
public ActionResult JsonTest(string aDataSource) {
    combit.ListLabel22.ListLabel vLL = new combit.ListLabel22.ListLabel();
    JsonDataProvider vJsonProvider = new JsonDataProvider(aDataSource);
    vLL.FileRepository = GetCurrentRepository();
    vLL.AutoProjectFile = mReportRepositoryId;
    vLL.DataSource = vJsonProvider;
    vLL.ExportOptions.Add(LlExportOption.ExportTarget, "PDF");
    vLL.Print(); // This causes the problem on the published server
    return Json("Success");
}
Locally, i.e. in my Visual Studio (2015) dev environment, this works fine. However, when I publish the code to my IIS server, the POST request doesn't terminate. I have found this line
vLL.Print();
to be the problem. If I comment out this line, the request terminates as expected. This line generates the report and exports it to a PDF which will in turn be offered to the user as a download.
I'm using IIS 8.5 and .NET Framework 4.5 on a machine running Windows Server 2012 R2. A printer driver is installed and the regular List & Label functionality is working (e.g. starting the web designer, previewing reports via HTML, etc.).
Does anyone have any idea what I am missing here? I am not a web developer, and I may also have forgotten to adjust some configurations on my IIS Server.
Thanks!
Just calling the Print method is not enough, as you need to tell List & Label where to generate the report. I'd rather use the Export method (as it wires up a number of convenient things like muting the file dialogs) in this way:
string reportResult = "Report." + Guid.NewGuid() + ".pdf";
string outputFile = Server.MapPath("~/exports/") + reportResult;
ExportConfiguration exportConfiguration = new ExportConfiguration(LlExportTarget.Pdf, outputFile, mReportRepositoryId);
vLL.Export(exportConfiguration);
You should then find your PDF in the "exports" path of your web application on the server. To troubleshoot issues like this you can use the provided debugging tool Debwin4. It shows you all calls to the API and hints on missing options or input.

drupal_add_js() only adds the JS when no error message (D6)

In my custom form (in a custom module) drupal_add_js() only adds the JS when there is no error message.
My code goes like this:
function ntcf_redo_order_form(&$form_state = array()) {
  global $base_path, $user;
  $my_dir = drupal_get_path('module', 'ntcf_redo');
  drupal_add_js("$my_dir/order.js", 'module', 'header', FALSE, TRUE, FALSE);
  $form = array();
  ...
  return $form;
}
If the validation function used form_set_error() to display an error message and highlight the offending field, the message is displayed and the field highlighted, but the drupal_add_js() call does nothing. Without a pending error message to display, all is well.
EDIT: this problem does not occur with drupal_set_message(), only with form_set_error().
I tried adding the 3 later parameters to the drupal_add_js() call to tell it not to optimize the file (i.e. not combine it with other JS files). There is no mention of the file order.js in the HTML, and it makes no difference whether I use the last 4 parameters ('header', FALSE, TRUE, FALSE) or not.
In Admin/Performance, I turned off Optimize JavaScript Files and pretty much all caching, which also made no difference.
Extra Details:
I'm not sure if this makes a difference, but it wouldn't surprise me, so here goes:
What I'm doing here is a multi-part "wizard" form that allows the user to proceed forward and go back. Also, many of the pages use AJAX, so I need to do all the "required" field validation in the _submit function instead of letting Drupal do it automatically (since that makes a mess of AJAX). So, if there's a "required" field that's missing, the _submit() function sets an error message, and the form generation function generates the same form again (with the additional decoration resulting from the error message).
Also: this is off-topic, but it might help someone using Google: when doing a multi-page form that allows going backward, you MUST assign a weight to every item on the form, or else the fields tend to "wander" when you go backwards.
Any ideas?
I had the same problem; this is a workaround I found (for Drupal 7, may work in 6):
1. In your form setup (or hook_form_alter), do this:
$form['#post_render'][] = 'yourfunction';
2. Define:
function yourfunction($content, $element) {
  $my_dir = drupal_get_path('module', 'ntcf_redo');
  drupal_add_js("$my_dir/order.js", 'module', 'header', FALSE, TRUE, FALSE);
  return $content;
}
I think this works (while your approach does not) because hook_form_alter (and/or hook_form) do NOT get called again for a prepared/cached form, so the initial form load WILL load the javascript, but subsequent posts will NOT.
HTH
Mike's answer ($form['#post_render'][] = 'yourfunction';) will work, though it's not the optimal way and will cause issues with drupal_add_js.
The best way to do this is by adding your javascript via the form api '#attached'.
Instead of using drupal_add_js or a new callback on the '#post_render':
$form['#attached']['js'] = array(
  drupal_get_path('module', 'module_name') . '/file/path/filename.js',
);
You may pass in a 'css' array as well. Being an array, you can pass in as many files as you want.
(This is for Drupal 7; other versions may be different.)
