We are comparing these two reports, as I mentioned, and noticed that they do not match the Inventory Account. Is it possible that they are calculated differently from each other? For example, the Historical Inventory Balance always seems to be about $120,000.00 less than the other report. We have compared multiple reports multiple times and keep coming up with the same result. We have compared inventory items within the two reports and they seem to match, but something is still off. I have checked how the report is designed in the report designer and cannot come up with any solutions. Has anyone else had this problem, have any ideas, or are we completely missing something?
Is it possible that they are calculated differently from each other?
You're not missing anything. Since the results differ, it is logical to conclude that the reports are indeed calculated differently.
The reports could be pulling data from the same source but applying different filtering. If that's the case, the fastest way to debug this issue is to dump and compare the SQL requests executed by the reports.
You can do so by opening the Trace window from the report page.
Launch the report, then refresh the Trace window; the full SQL query will be available in the trace details. You can then execute it in Microsoft SQL Server Management Studio and compare the results. I suggest removing filters one by one until you find the one which yields different results. You can also dump the SQL queries of most Acumatica web pages by using the Request Profiler screen (SM205070).
TL;DR of the question:
Is it possible to use Graph to query a SharePoint list, which contains lookups that would need to be fetched from a different SharePoint list?
The "old" SharePoint API can do that in one request.
Follow up question as a result of my attempts to work around that limitation:
Why does Graph not allow me to ask for multiple list entries by ID?
This makes literally no sense to me.
Background for the question:
I've been given the task of moving a small SharePoint app from the normal SharePoint API over to the Graph API, so its features could be expanded to incorporate Exchange too. I've never worked with either prior to this, so I didn't really have any idea what I was getting into.
And while I did succeed in finding equivalent Graph queries for everything that was needed so far, I'm also starting to doubt that Graph is seriously intended to be used for SharePoint access.
Lists are the best example. The SharePoint API offers resolving LookupId values when requesting multiple items.
Graph doesn't even offer that when requesting an item directly, let alone multiple.
To make things worse, after writing my own lookup routine that picks out the lookup columns, and manually telling it where to find the values for them, I discovered that Graph won't even let me request multiple items by ID...
At first I tried to chain id eq '<id>' filters, because even $batch requests are limited to 20 individual requests, capping the number of items I could look up at 20 per round trip.
But filtering on id is apparently unintended.
https://graph.microsoft.com/v1.0/sites/{site}/lists/{list}/items?$filter=id+eq+'67'
results in "General exception while processing", which I've never even seen as a response until this.
I then tried the in keyword:
https://graph.microsoft.com/v1.0/sites/{site}/lists/{list}/items?$filter=id+in+('67')
which results in "Invalid request".
After that I thought I could be smart and add a calculated column which copies the item's id, then index on that, but guess what: you can't set an index on that column in the first place, AND it refuses filtering on it on top of that.
It doesn't even offer the header workaround for querying unindexed columns, nope. It outright complains that the field isn't usable.
With all this, I feel like I will have to settle for a hybrid approach, unless I'm seriously missing something here.
I thought that having to write my own LookupId resolver was bad, but being unable to even optimize the requests to return all matching items from a list in one request, and instead having to request every single item individually, because filtering by id is forbidden and the ONLY access by id is singular, just gives me the feeling that Graph was never meant to be used for SharePoint lists at all.
Is there an actual question here?
Microsoft has been recommending Graph for working with SharePoint Online for a while now. Though it is true there are shortcomings at present, Microsoft is constantly investing in improving Graph whereas older methods like CSOM are deprecated and are no longer updated.
If you have a sufficient reason to filter by id instead of using GET .../items/{id} as many times as needed, you may try filtering by fields/ID, but don't put $ before the filter parameter in that case - it won't work - so try
GET https://graph.microsoft.com/v1.0/sites/{site}/lists/{list}/items?filter=fields/ID eq 67
while setting the header Prefer: HonorNonIndexedQueriesWarningMayFailRandomly
Also, getting multiple items from large lists may cause issues, so for that case I'd add &top=5000
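For illustration, here is how that request could look from code. This is a minimal sketch using Java's built-in HttpClient; the site ID, list ID, and token values are placeholders you would have to supply yourself:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListItemLookup {
    public static void main(String[] args) throws Exception {
        String siteId = "your-site-id";   // placeholder: your site ID
        String listId = "your-list-id";   // placeholder: your list ID
        String token  = "your-token";     // placeholder: an OAuth access token

        // filter (no $) on fields/ID, plus the Prefer header for non-indexed
        // columns. Spaces in the query string must be percent-encoded.
        String url = "https://graph.microsoft.com/v1.0/sites/" + siteId
                   + "/lists/" + listId
                   + "/items?filter=fields/ID%20eq%2067&top=5000";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + token)
                .header("Prefer", "HonorNonIndexedQueriesWarningMayFailRandomly")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON with the matching list items
    }
}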
The issue we are having is trying to map/relate the fields to different tables from the results of a saved search created on the Records Browser Item (http://www.netsuite.com/help/helpcen...cord/item.html).
We have a retail inventory management system with many modules, so the attempt to relate our columns to NetSuite's has been going on for a while without any conclusion.
The approach we are trying is to run SuiteScript in the debugger and view the dataset. We were successful with searches that return a relatively small volume of data. But since the limit is 10,000 rows, we are stuck with the search on Item, which returns about 1 million records. The search returns this volume of data when we add all the search columns, and the process of adding/removing individual columns is laborious - even with just one column it returns more than 10,000 rows. So it becomes impossible to fetch the data and complete the mapping process.
So I would like to know: is there any way we can see just the schema and its relationships for a saved search?
Thanks.
In SuiteScript 1.0, this can be achieved by a scheduled script that creates multiple CSV files from a saved search (SuiteAnswers article 36206). You'll have to get around the search limit (SuiteAnswers article 33496) AND the governance limit (SuiteAnswers article 23406). If you make the file Available Without Login, you should be able to retrieve the CSV with an HTTP GET request without credentials. However, that will make the data potentially viewable by anyone who knows the URL--a security concern that you will have to consider.
In SuiteScript 2.0, this can probably be achieved with a Map/Reduce script (SuiteAnswers article 43795). This may be a better way to optimize the script, but I have not tested it myself in SuiteScript 2.0.
I have a general requirement in my current project to make an existing XPages application faster. One thing we looked at was how to speed up some slower type-ahead fields, and one solution, which seems to be fast, is implementing it using FTSearch rather than the DbColumn we originally had. I want to get advice on whether this is an OK approach, or whether there are suggestions for doing what we need in a different way.
Background:
While there are a number of factors affecting the speed (like network latency, server OS, available server memory, etc.), we are using 8.5.3 and have optimized the application in general as far as we can, making use of the IBM Toolkit to find problem areas, and also using the features IBM added to help with this in 8.5.3 (e.g. Partial Execution, the optimized JS and CSS option, etc.). Unfortunately we are stuck with the server running on a 32-bit Windows OS with 3.5 GB RAM for another few months.
Among the slowest elements to respond are certain type-aheads which reference a large number of documents. The worst one averages around 5 or 6 seconds before the suggested list appears for a type-ahead enabled field.
It uses SSJS to call a Java class to perform a DbColumn call (using Ferry Kranenburg's XPages snippet) to get a unique list from a view, then back in SSJS it loops through the array to check if each entry contains the search key value, and if found it adds a highlight (bold) HTML tag around the search text in the word, then returns the formatted list back to the browser.
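In essence, the highlight step works like this (a simplified sketch in plain Java; the list and key would come from the DbColumn call and the browser):

import java.util.ArrayList;
import java.util.List;

public class TypeAheadHighlight {
    // Keep only values containing the typed key (case-insensitive) and wrap
    // the matched part in a bold tag for the suggestion list.
    public static List<String> highlightMatches(List<String> values, String key) {
        List<String> result = new ArrayList<String>();
        String lowerKey = key.toLowerCase();
        for (String value : values) {
            int pos = value.toLowerCase().indexOf(lowerKey);
            if (pos >= 0) {
                result.add(value.substring(0, pos)
                        + "<b>" + value.substring(pos, pos + key.length()) + "</b>"
                        + value.substring(pos + key.length()));
            }
        }
        return result;
    }
}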
I added a print statement to output the elapsed time it takes to run the code, and on average today on our dev server it is around 3250 ms.
I tried a few things to see how we could make this process faster:
1. Added a Java class to do all processing (so not using SSJS). This only saved an average of 100ms.
2. Using a view-scoped managed bean, I loaded the unique lookup list into memory when the page is loaded. This produces a really fast type-ahead response (16ms), but I suspect this is a very bad way to do this with a large data set - and could really impact the server in general if multiple users were accessing the application. I tried to find information on what would be considered a large object, but couldn't find any guidance or recommendation on how much is too much to store in memory (I searched JSF and XPages sites). Does anyone have any suggestions on this?
3. Still in a Java class - instead of performing a DbLookup to get the 'list' of all values to search through, I have the code run an FT search to get the doc collection, then loop over each doc to extract the field value I want and add it to a SortedSet (which automatically disallows duplicates), then loop over the sorted set to insert the bold tags around the search term, and return that to the browser (see the sketch below). This takes on average 100ms - which is great and barely noticeable. Are there any drawbacks to this approach - or reasons I should not do it this way?
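Stripped down, approach 3 looks roughly like this (the field name and query form are illustrative, not our exact code; error handling omitted):

import java.util.TreeSet;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class TypeAheadLookup {
    // Run an FT search, then collect unique field values from the results.
    public static TreeSet<String> uniqueMatches(Database db, String key)
            throws NotesException {
        TreeSet<String> unique = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
        // 0 = no limit on the number of documents returned
        DocumentCollection col = db.FTSearch("[CompanyName] CONTAINS \"" + key + "\"", 0);
        Document doc = col.getFirstDocument();
        while (doc != null) {
            unique.add(doc.getItemValueString("CompanyName")); // illustrative field name
            Document next = col.getNextDocument(doc);
            doc.recycle();
            doc = next;
        }
        return unique; // then wrap the key in <b> tags as in the earlier sketch
    }
}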
Thanks for any feedback or advice on this.
Pam.
Update Aug 14, 2013: I tried another approach (inspired by the IBM/Tony McGuckin Insights application on OpenNTF), as the Company Search type-ahead in it uses managed beans and is fast across a lot of data.
4. Although the Insights application deals with data split across multiple databases, the principle for the type-ahead is similar. I couldn't use a view with getAllEntriesByKey, though, as I needed to search for a string within the text too, not just at the start of the entry. I tried creating a ViewEntryCollection based on a view FTSearch, but as we have a lot of duplicate names in the column, this didn't give the unique list I wanted. I then tried using a NotesViewNavigator on a categorized view and looping through that. This produced the unique list I needed, but it turned out to be slower than any of the other methods above. (I did implement the ViewNavigator performance tips.)
From my standpoint, performance may be affected by any of the many layers every Domino application (not only XPages) consists of.
From the top: browser (DOM, JS, CSS, HTML...), network (latencies, DNS, SSO...), application layer (efficient algorithms, caches), database/API (amount of data, indexes, reader names...), and OS/hardware (disks, memory...).
Regarding the things you tested:
1. That is interesting, but could be expected: SSJS is cached and may use a lower-level API (NAPI) to get data.
2. For your environment (32-bit/3.5 GB RAM) I DO NOT recommend caching big lists, especially if you apply it as a pattern across many fields/forms/applications. A cache in a WeakHashMap could be more stable, though.
3. Use of FT search is perfectly fine, unless you need data that updates frequently. The FT index needs some time and resources to update.
My suggestion: go for FT if it solves your problem. But definitely put FT performance through some heavy load testing on your server first.
I have recently been tackling a similar problem. Here are some additional points to consider:
Are there many duplicate keywords in the view? Consider making a categorized view for @DbColumn.
FTSearching a view is often slower than FTSearching a database, I believe. See Andre Guirard's article. Consider using db.FTSearch() and refining your FT query to include the view's selection formula, if possible.
The FT index can be updated programmatically with db.updateFTIndex(). If keywords are added rarely but need to be instantly available, you can perform the index update in the keyword document's QuerySave event (or similar; see the sketch after the list below). We used this approach when the keywords were stored in a different (much smaller) database, and the update was very fast.
The memory consumption can be checked this way:
Install XPages Toolbox from OpenNTF.
Open your application.
Create a JVM memory dump (Session dumps - Generate Heap Dump).
Install Eclipse Memory Analyzer Tool.
Install IBM Diagnostic Tool Framework into Memory Analyzer.
Load your memory dump into MAT. You will see every Java object and its size.
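To illustrate the index-update point above: the QuerySave handler can be as small as this (a sketch; error handling and session plumbing omitted):

import lotus.domino.Database;
import lotus.domino.NotesException;

public class KeywordIndexRefresh {
    // After saving a keyword document, refresh the database's full-text
    // index so the new keyword becomes searchable right away.
    public static void refreshIndex(Database keywordDb) throws NotesException {
        keywordDb.updateFTIndex(false); // false = update an existing index (true would create one)
    }
}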
In the end, I believe that there is no single general answer to your question. You need to test different approaches to find the fastest solution in your environment.
One problem with FT search is this error:
The full text index for this database is in use
Based on my experience this will occur for a while (maybe a few seconds) when the indexer task starts to index the database. If your users are not very demanding they can just try again and it will probably work.
But in many cases you want to minimize the errors users get, and you will have to handle this error gracefully. I've built my own FTSearch method which waits a bit and tries again until the error is no longer received. This shows up as slowness to the user instead of an error.
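The retry wrapper can be as simple as this (a sketch of the idea rather than my exact code; the delay and retry limit are arbitrary choices):

import lotus.domino.Database;
import lotus.domino.DocumentCollection;
import lotus.domino.NotesException;

public class SafeSearch {
    // Retry the FT search a few times while the index is busy, trading a
    // short delay for an error the user would otherwise see.
    public static DocumentCollection ftSearchWithRetry(Database db, String query, int maxDocs)
            throws NotesException, InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                return db.FTSearch(query, maxDocs);
            } catch (NotesException e) {
                // typically "The full text index for this database is in use"
                if (++attempts >= 5) throw e;
                Thread.sleep(500); // wait a bit before trying again
            }
        }
    }
}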
I have a requirement to retrieve data from SharePoint (I guess it is 2010, but I will check with the admin if relevant) and generate an Excel report/chart. Say we have a bug tracking system in SharePoint. Currently I can create a view and see some statistics, but I need to plot a graph showing historically (every week) how the number of bugs changed. For example:
get the number of bugs filed in a specific week
do some grouping based on type/severity
based on classification get number of bugs solved that week etc.
If I can get the numbers based on a date range, I can use Excel to plot the graph.
After some reading, the SharePoint object model comes closest to what I used to work with (an Oracle DB). I understand it may be entirely different from a traditional DB and its querying.
Please help me with:
What is the best method to approach this?
Is there a good book/resource?
Thanks a lot,
bsr
The easiest approach would be to link to the SharePoint lists using Access 2007 or 2010 and then export the data to Excel for further processing. Of course, you could also write a program that uses a CAML query to access the data. Your requirement sounds straightforward; unless you need to automate the reporting process, the simplest approach would be to access the lists via an Access database.
You could also create a web service via REST that pulls the data directly into Excel.
SharePoint has its own query language, CAML, and in theory that could be used to retrieve the list you seek.
And you should be prepared for "some" trial and error.
Tools I used:
http://www.u2u.be/res/tools/camlquerybuilder.aspx
http://spud.codeplex.com/
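For example, a date-range filter like "bugs created during one specific week" could look roughly like this in CAML, shown here as it would be embedded in code (Created is the standard SharePoint column; adjust field names and dates to your list):

public class WeeklyBugQuery {
    // A CAML date-range filter: items created during one specific week.
    public static final String CAML =
        "<Where><And>"
      + "<Geq><FieldRef Name='Created'/><Value Type='DateTime'>2011-01-03T00:00:00Z</Value></Geq>"
      + "<Lt><FieldRef Name='Created'/><Value Type='DateTime'>2011-01-10T00:00:00Z</Value></Lt>"
      + "</And></Where>";
}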
What I understand from this question is that you need to get the SharePoint data into an Excel file, and this from within the SharePoint site? So it looks to me like you could just create a simple SharePoint web part that consists of one button, "generate excel file". When the user clicks the button, you would just query your SPList object (SharePoint object model) and get all the necessary data from the list (SPListItems).
This is the way that I would take. Mind you, this is of course custom SharePoint development (.NET C#). There are lots of books and blogs that describe how to create your own web part in SharePoint.
I am building a tool that searches people based on a number of attributes. The values for these attributes are scattered across several systems.
As an example, dateOfBirth is stored in a SQL Server database as part of system ABC. That person's sales region assignment is stored in some horrible legacy database. Other attributes are stored in a system only accessible over an XML web service.
To make matters worse, the legacy database and the web service can be really slow.
What strategies and tips should I consider for implementing a search across all these systems?
Note: Although I posted an answer, I'm not confident it's a great answer. I don't intend to accept my own answer unless no one else gives better insight.
You could consider using an indexing mechanism to retrieve and locally index the data across all the systems, and then perform your searches against the index. Searches would be an awful lot faster and more reliable.
Of course, this just shifts the problem from one part of your system to another - now your indexing mechanism has to handle failures and heterogeneous systems, but that may be an easier problem to solve.
Another factor is how often the data changes. If you have to query data in real-time that goes stale very quickly, then indexing may not be practical.
If you can get away with a restrictive search, start by returning a list based on the search criteria corresponding to the fastest data source. Then join up those records with the other systems and remove records which don't match the search criteria.
If you have to implement OR logic, this approach is not going to work.
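Schematically, that looks like this (every name here is hypothetical, just to show the shape of the approach):

import java.util.ArrayList;
import java.util.List;

public class RestrictiveSearch {

    // A stand-in for each real system (SQL Server, legacy DB, XML web service).
    interface Source {
        List<String> search(String criteria);              // returns matching person IDs
        boolean matches(String personId, String criteria); // checks one person
    }

    // Query the fastest source first, then drop candidates that fail the
    // criteria held in the slower systems. Only works for AND semantics.
    public static List<String> search(String criteria,
            Source fastest, Source legacyDb, Source webService) {
        List<String> candidates = new ArrayList<String>(fastest.search(criteria));
        candidates.removeIf(id -> !legacyDb.matches(id, criteria));
        candidates.removeIf(id -> !webService.matches(id, criteria));
        return candidates;
    }
}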
While not an actual answer, this might at least get you partway to a workable solution. We had a similar situation at a previous employer - lots of data sources, different ways of accessing those data sources, different access permissions, military/government/civilian sources, etc. We used Mule, which is built around the Enterprise Service Bus concept, to connect these data sources to our application. My details are a bit sketchy, as I wasn't the actual implementor, just an integrator, but what we did was define a channel in Mule. Then you write a simple integration piece to go between the channel and the data source, and between the application and the channel. The integration piece does the work of making the actual query and formatting the results, so we had a generic SQL integration piece for accessing a database, and for things like web services we had some base classes that implemented common functionality, so the actual customization of the integration pieces was a lot less work than it sounds like. The application could then query the channel, which would handle accessing the various data sources, transforming the results into normalized XML, and returning them to the application.
This had a lot of advantages for our situation. We could include new data sources for existing queries by simply connecting them to the channel - the application didn't have to know or care what data sources were there, as it only looked at the data from the channel. Since data can be pushed or pulled from the channel, we could have a data source update the application when, for example, it was updated.
It took a while to get it configured and working, but once we got it going, we were pretty successful with it. In our demo setup, we ended up with 4 or 5 applications acting as both producers and consumers of data, and connecting to maybe 10 data sources.
Have you thought of moving the data into a separate structure?
For example, Lucene stores the data to be searched in a schema-less inverted index. You could have a separate program that retrieves data from all your different sources and puts it in a Lucene index. Your search would work against this index, and the search results could contain a unique identifier and the system it came from.
http://lucene.apache.org/java/docs/
(There are implementations in other languages as well)
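A minimal indexing sketch in Java (Lucene's API details vary by version; the field names and values here are made up):

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class PersonIndexer {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("people-index"));
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // One Lucene document per person, tagged with its source system
        // so search results can point back to the original record.
        Document doc = new Document();
        doc.add(new StringField("personId", "12345", Field.Store.YES));
        doc.add(new StringField("sourceSystem", "ABC", Field.Store.YES));
        doc.add(new TextField("name", "Jane Example", Field.Store.YES));
        doc.add(new StringField("dateOfBirth", "1980-02-14", Field.Store.YES));
        writer.addDocument(doc);

        writer.close();
    }
}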
Have you taken a look at YQL? It may not be the perfect solution, but it might give you a starting point to work from.
Well, for starters I'd parallelize the queries to the different systems. That way you minimize the total query time.
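For example (a sketch only; the query methods are placeholder stubs standing in for your three systems):

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelSearch {
    // Fire all three queries at once; total latency is then bounded by the
    // slowest system rather than the sum of all of them.
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        CompletableFuture<List<String>> sql =
                CompletableFuture.supplyAsync(() -> querySqlServer("criteria"), pool);
        CompletableFuture<List<String>> legacy =
                CompletableFuture.supplyAsync(() -> queryLegacyDb("criteria"), pool);
        CompletableFuture<List<String>> ws =
                CompletableFuture.supplyAsync(() -> queryXmlService("criteria"), pool);

        CompletableFuture.allOf(sql, legacy, ws).join();
        // merge sql.get(), legacy.get(), ws.get() into one result set here
        pool.shutdown();
    }

    // Placeholder stubs standing in for the real system calls.
    static List<String> querySqlServer(String c)  { return List.of(); }
    static List<String> queryLegacyDb(String c)   { return List.of(); }
    static List<String> queryXmlService(String c) { return List.of(); }
}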
You might also want to think about caching and aggregating the search attributes for subsequent queries in order to speed things up.
You have the option of creating an aggregation service or middleware that aggregates all the different systems, so that you can provide a single interface for querying. If you do that, this is where I'd apply the previously mentioned caching and parallelization optimizations.
However, with all of that, you will need to weigh the development time/deployment time/long-term benefits of the effort against migrating the old legacy database to a faster, more modern one. You haven't said how tied into other systems those databases are, so it may not be a very viable option in the short term.
EDIT: in response to data going out of date: you can consider caching if you don't need the data to always match the database in real time. Also, if some data doesn't change very often (e.g. dates of birth), then you should cache it. If you employ caching, you could make your system configurable as to which tables/columns to include or exclude from the cache, and you could give each table/column a configurable cache timeout with an overall default.
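As a sketch, those per-column timeouts could be as simple as a configuration map (the names and durations here are invented):

import java.time.Duration;
import java.util.Map;

public class CachePolicy {
    public static final Duration DEFAULT_TTL = Duration.ofMinutes(10);

    // Stable attributes get long lifetimes; volatile ones expire quickly.
    public static final Map<String, Duration> TTL_BY_COLUMN = Map.of(
        "dateOfBirth", Duration.ofDays(365),   // effectively never changes
        "salesRegion", Duration.ofHours(24),   // changes occasionally
        "openTickets", Duration.ofMinutes(1)); // near real time

    public static Duration ttlFor(String column) {
        return TTL_BY_COLUMN.getOrDefault(column, DEFAULT_TTL);
    }
}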
Use Pentaho/Kettle to copy all of the data fields that you can search on and display into a local MySQL database.
http://www.pentaho.com/products/data_integration/
Create a batch script to run nightly and update your local copy. Maybe even every hour. Then, write your query against your local MySQL database and display the results.