I've converted a CQM model to DQM without errors, but when I open some of the reports (2 out of 10), they give me an error.
The error: "XQE-PLN-0131 The SQL can not be generated because a circular reference is found in the sub-queries"
I'm not able to figure out why a subquery would end up with a circular reference once the model is converted to DQM.
I have tried running each query to see where the error occurs, but I still haven't been able to find it.
Troubleshooting SQL errors when converting from a CQM to a DQM dispatcher is a messy business.
If the report is small, recreate it straight from the new model, testing along the way.
If the report is big, locating the error can be very time-consuming.
I advise the following:
First, in your Framework Manager models, make sure that your database-layer Query Subjects from Data Sources are not named exactly the same as your table names. I simply added an underscore '_' to the end of their names.
This may fix your circular reference issue.
Invariably, you are going to have some reports that still fail. I suggest you follow the trend and set up a secondary dispatcher. Let your primary dispatcher be DQM, but set up a CQM dispatcher as well. I did this on the same server without any problems.
While the DQM server will have all services activated, the CQM dispatcher should only have the following set to True in Cognos Configuration -> Environment -> IBM Cognos Services:
Agent Service
Batch Report Service
Delivery Service
Dispatcher Service
Job Service
Monitor Service
Query Service
Report Data Service
Report Service
Relational Metadata Service
All remaining services should be set to 'False'.
In the environment settings, make sure your External Dispatcher URI, Internal Dispatcher URI, and Dispatcher URI for external applications have a different port number than your DQM install. For me, DQM uses the default 9300 and CQM uses 9301.
Your CQM dispatcher can point to the same Notification Store and HTAS Store as your DQM.
On your Cognos Gateway, do not add your CQM dispatcher to the list of Dispatcher URIs for Gateway. That would cause requests to go to your CQM dispatcher any time the DQM dispatcher is busy, which would cause many requests to fail. Requests will instead be routed to the CQM dispatcher using routing rules.
In Cognos Configuration, under "Dispatchers and Services", give your DQM and CQM dispatchers each a distinct 'Server Group' name.
Edit the properties on your packages, check 'Advanced Routing', then edit the routing sets. Here you can define new routing sets. You can publish your packages twice, once as DQM packages and again as CQM packages (with a distinct name, e.g. add _CQM to the name). For your DQM packages, add a 'Type Routing Sets' value of 'DQM Reports'. For your CQM packages, add a 'Type Routing Sets' value of 'CQM Reports'.
Next, go back to Cognos Administration.
From "Dispatchers and Services" -> "Specify Routing Rules" (icon at the top)
Set a package routing rule that sends DQM Reports to the DQM dispatcher's server group. Set a routing rule that sends CQM Reports to the CQM dispatcher's server group. Then, at the bottom, set a catch-all routing rule that sends any package to the DQM dispatcher's server group.
Now you can go into your reports and point them at the DQM package. If they work, keep them there. If they do not, point them at the CQM package and add them to a list.
Now all of your reports are working, and converting reports to DQM is no longer holding up your migration project. Next, go through your list of CQM reports and recreate them one at a time against a DQM package.
If you do this, and make sure all new reports are DQM, eventually you will be done with the CQM packages. When that glorious day comes, you can simply turn off the CQM dispatcher service, and you will be 100% DQM, 64-bit goodness.
I think I've broken my project's custom metrics.
Earlier yesterday, I was playing around with the cloud monitoring api, and I created a metric descriptor and added some time series data to it using the latest python3 cloud monitoring library create_time_series call. Satisfied with the results, I deleted the descriptor using the library, which threw an error as I had incorrectly passed in the descriptor's name. I called it again with the correct name, and it succeeded, but now every call to create_time_series on this project fails with an HTTP 500. The error message included simply says to "Try again in a few seconds," which I have, to no avail.
I have verified that I can create time series data on other projects of mine, and it works as expected. The API Explorer available in Google's API documentation for metrics also gets an HTTP 500 back on calls to this project, but works fine on others. cURLing the requests yields the same results.
My suspicion is that I erroneously deleted the custom.googleapis.com endpoint in its entirety, and that is why I am unable to create new metric descriptors/time series data. Is there a way to view the state of this endpoint, or recreate it?
It is impossible to delete the data stored in your Google Cloud project, but deleting the metric descriptor renders the data inaccessible. Also, according to the data retention policy, this data is deleted when it expires.
To delete your custom metric descriptor, call the metricDescriptors.delete method. You can follow the steps in this guide.
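For reference, a minimal sketch of that delete call using the python3 client library the question mentions (google-cloud-monitoring); the project ID and metric type below are made-up placeholders, not values from the question:

# Sketch only: delete a custom metric descriptor with the Python client.
# "my-project" and the metric type below are placeholders.
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
descriptor_name = (
    "projects/my-project/metricDescriptors/"
    "custom.googleapis.com/my_app/queue_depth"
)
client.delete_metric_descriptor(name=descriptor_name)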
You are calling CreateMetricDescriptor every time you call CreateTimeSeries. Some or all of these calls specify no metric labels, and they are therefore overwriting the metric descriptor with one that has no labels. The calls to CreateTimeSeries, on the other hand, do specify metric labels, causing the labels to be auto-added back to the descriptor.
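If that is what is happening, one way to avoid the overwrite is to create the descriptor once, with its labels declared up front, and keep CreateMetricDescriptor out of the write path entirely. A rough sketch with the Python client, where the metric type and label are made-up placeholders:

# Sketch only: declare the descriptor (including its labels) a single time.
from google.api import label_pb2, metric_pb2
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project

descriptor = metric_pb2.MetricDescriptor()
descriptor.type = "custom.googleapis.com/my_app/queue_depth"  # made-up metric type
descriptor.metric_kind = metric_pb2.MetricDescriptor.MetricKind.GAUGE
descriptor.value_type = metric_pb2.MetricDescriptor.ValueType.DOUBLE
descriptor.description = "Example custom metric with labels declared up front."

label = label_pb2.LabelDescriptor()
label.key = "environment"  # made-up label key
label.value_type = label_pb2.LabelDescriptor.ValueType.STRING
descriptor.labels.append(label)

# Run this once (e.g. at deploy time), not on every CreateTimeSeries call.
client.create_metric_descriptor(name=project_name, metric_descriptor=descriptor)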
Custom metric names typically begin with custom.googleapis.com/, which distinguishes them from the built-in metrics.
When you create a custom metric, you define a string identifier that represents the metric type. This string must be unique among the custom metrics in your Google Cloud project and it must use a prefix that marks the metric as a user-defined metric. For Monitoring, the allowable prefixes are custom.googleapis.com/ and external.googleapis.com/prometheus. The prefix is followed by a name that describes what you are collecting. For details on the recommended way to name a custom metric, see Naming conventions.
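As a concrete (made-up) example of writing a point against a metric named with that prefix, using the Python client; the metric type, label, and resource below are placeholders, and the label key should match whatever the descriptor declares:

# Sketch only: write one point to a custom metric.
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder project

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/my_app/queue_depth"  # made-up metric type
series.metric.labels["environment"] = "staging"  # should match the descriptor's labels
series.resource.type = "global"
series.resource.labels["project_id"] = "my-project"

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 42.0}})
series.points = [point]

client.create_time_series(name=project_name, time_series=[series])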
I have a node.js application and I am using Application Insights to collect telemetry on our users. We are using the applicationinsights npm package.
Our users' privacy is very important to us, and we want to collect no more data about them than we need. To this end, we do not want to collect location data (country, state/province, and client IP). However, I can't see how we can avoid sending that data to Azure. Is it possible to avoid sending it?
I'm guessing that the location data comes directly from the HTTP request, so it might be that I need to change something in the npm package to remove the location headers from the request, but this does not appear to be exposed to the application.
Any ideas on how to fix this?
As Matt mentioned, you can change the data being sent to App Insights; this is certainly necessary in cases where the default logging contains information you don't want sent to our servers for any reason. The only thing I would adjust in his suggestion is that it is recommended to use TelemetryInitializers instead of TelemetryProcessors for this kind of modification. Any part of the data model can be adjusted or removed from an initializer. This is also particularly useful if there is anything in the request data that you would consider PII. You can see the non-custom data model here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/export-data-model
All location data in App Insights is based on the IP address. The IP address is sent to App Insights, and from there it is processed using the GeoLite2 database. Once the conversion happens, we discard the IP address so that we never keep IP addresses permanently. That's why, when you query your logs, the IP address field will always contain 0.0.0.0 for all records.
If you look at Live Metrics in Application Insights for a web service, it shows the number of servers currently active (this is dynamic; it goes up and down depending on load).
We have some periodic major site issues, which we think could occur when Azure scales up and adds a new instance, but we can't find any way of recording/tracking/graphing/querying this.
The number of servers is shown in Live Metrics. Right now I can see we have 5.
They are also shown in Performance -> Roles, but this only shows the number of servers (aka roles) right now; I can't see any history, unfortunately.
Any ideas how to see if/when a new instance was created and/or destroyed?
Actually, it's difficult to find out from history when a new instance was created or destroyed, since an App Service plan does not support diagnostic settings.
The closest workaround is to query the requests logs; from those you may have a chance to figure it out. The query looks like the one below (to write it, go to the Azure portal -> your Application Insights resource -> Logs):
requests
| project timestamp, cloud_RoleName, cloud_RoleInstance
| order by timestamp desc
I'm new to the Security Command Center (SCC) and Data Loss Prevention (DLP). I'm trying to create a job in DLP to check if there is any PII in a BigQuery table, but I'm getting the following error upon creation:
Request violates constraint constraints/gcp.resourceLocations on the project resource.
Learn more https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations.
On the organization level, there's an organization policy (inherited in my project) allowing resources to be created in Europe only due to GDPR. I suspect that maybe DLP runs in some other region (US maybe) and that's why I'm getting this error.
I can't seem to be able to choose where the job runs in the options while creating it, and I can't find anything about this in the documentation. Any idea why I'm getting this error and how to fix it?
The answer is copied from here.
Your organization policy is not allowing DLP to run the job; at the moment the DLP API is blocked by the 'constraints/gcp.resourceLocations' constraint and there is no workaround. However, there is a feature request to add the possibility of setting a specific location rather than using 'global', which is what is in fact causing this issue.
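For context, the call that trips the constraint is the plain job-creation request against the project, which DLP treats as a 'global' resource; a minimal sketch with the Python DLP client, where the project, dataset, table, and infoType are placeholders:

# Sketch only: create a DLP inspection job over a BigQuery table.
# If the gcp.resourceLocations policy only allows EU locations, this call
# is rejected because the job is created against the project as a whole
# ('global'), with no option to pick a region at job-creation time.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder project

inspect_job = {
    "storage_config": {
        "big_query_options": {
            "table_reference": {
                "project_id": "my-project",
                "dataset_id": "my_dataset",
                "table_id": "my_table",
            }
        }
    },
    "inspect_config": {
        "info_types": [{"name": "EMAIL_ADDRESS"}],
    },
}

response = dlp.create_dlp_job(request={"parent": parent, "inspect_job": inspect_job})
print(response.name)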
I am having an issue using an OData SharePoint List Source with a dynamically changing connection string (inside the OData Connection Manager). The OData Source inside of my Data Flow Task fails to validate with the error message, “Cannot acquire a managed connection from the run-time connection manager” when executing the DFT from a parent package.
I have done some extensive Googling, and combed the forums relentlessly; however, I have not found anything that seems to offer a solution to this problem. Any help figuring out a solution would be greatly appreciated!
Here is the general flow of the main SSIS package:
1. Truncate staging table
2. Get all Site Collection URLs and their GUIDs from SQL Table
3. Execute Package Task for each site collection (foreach ADO loop container)
3.1. Extract data from UserInformationList (OData source)
3.2. Add a column for the GUID of this site collection
3.3. Load the data into staging table
etc.
Screenshots (not shown): Main Package; Child Package Control Flow; E-L UserInformationList DFT; Package Output with Error Message.
When testing the entire solution, everything (tasks, parameters, variables, etc.) behaves properly until step 3.1 (see above), when the OData Source fails during validation. The only aspects of the source and connection manager that change are the URL and ConnectionString for the connection manager; the specific SharePoint list that I access on each site never changes. When the solution enters the child package, the URL and ConnectionString for the connection manager are properly set prior to entering the DFT.
When testing the child package via the Execute Package Task, using hard-coded parameter values, the child package fails to validate.
When testing just the child package, there are no errors and the list information is stored in the database, as expected. However, with individual testing, the OData Connection Manager uses the default value of the package parameters.
Things I have tried so far:
Setting DelayValidation to True
Changing the debugging runtime from 64-bit to 32-bit (and back again)
Using a collection to specify the list (in the OData Source Editor)
Using a resource path to specify the list (in the OData Source Editor)
Running the child package as a Farm Admin
Running the solution as a Farm Admin
Other information:
SharePoint 2013
Data Tools for Visual Studio 2012
Microsoft’s OData Source for SQL Server 2012
I think you don't have access to the source SharePoint, or you are not passing the right credentials; that's why you are getting this error. Please use a valid connection and test your connections.
I had the same issue while reading the URL for the OData source from a database. In my case, I was passing an old value for the URL, which had changed on the SharePoint side; that is, the DB had the URL value http://sharepointsite/News but the actual site had been modified by a user to http://sharepointsite/NewsUpdated.
So, check the URL value you are passing if you are still having this issue.
I had the same issue, and it looks like at the moment the foreach loop container starts, you need to provide a valid value for the URL variable. It will be overwritten one way or the other, but when I went with "0" or null I got the same error as you.