I am trying to find out more information about using the REST API to create a schedule for schema loading. Right now I have to reload particular schemas via my data server connections manually (click on each schema and select Load Metadata), and I would like to automate this process.
Any pointers will be much appreciated.
Thank you
If the metadata of your data warehouse is in such flux that you need to reload it frequently enough to want to automate the process, then you need to understand that your data warehouse is in no way ready for use.
So, the question becomes: why would you want to frequently reload the metadata of a data source schema? I'm guessing that you are refreshing the data in your database and, because your query cache has not expired, you are not seeing the new data.
So the answer is, you probably don't want to do what you think you need to do unless you can convince me otherwise.
Also, if you enter some obvious search terms you will find the Cognos Analytics REST API documentation without too much difficulty.
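If you do go down that road despite the caveats above, the automation itself is just a scheduled HTTP call. Below is a very rough sketch; the endpoint path and header name are placeholders of mine, not the documented Cognos Analytics routes, so check the official REST API reference before relying on anything like it.

```typescript
// Rough sketch only: the route and header below are placeholders, NOT the
// documented Cognos Analytics REST API. Look up the real session and
// metadata-related resources in the official API reference.
async function reloadSchemaMetadata(
  baseUrl: string,
  sessionKey: string,
  schemaId: string,
): Promise<void> {
  // Hypothetical endpoint; replace with the documented one.
  const response = await fetch(
    `${baseUrl}/api/v1/placeholder/schemas/${encodeURIComponent(schemaId)}/metadata`,
    {
      method: 'PUT',
      headers: {
        'IBM-BA-Authorization': sessionKey, // placeholder header name
        'Content-Type': 'application/json',
      },
    },
  );
  if (!response.ok) {
    throw new Error(`Metadata reload failed for ${schemaId}: ${response.status}`);
  }
}

// A scheduler (cron, a CI job, etc.) could then call reloadSchemaMetadata()
// for each schema instead of clicking "Load Metadata" by hand.
```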
Is there any SuiteScript API to pull the Application Performance Monitor metrics from the NetSuite database, or any other way to get this data? We need to store the data for future reference to optimize transaction records. Can anyone please help? Thanks!
It's not public, but yes, you can make requests to the underlying Suitelets that power the NetSuite APM. To see this, go to one of the APM modules, open your browser's developer tools and choose the Network tab. Once this is open, refresh the data and inspect the request that was made to fetch the data used to populate these metrics.
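For what it's worth, once you've identified that request, a scheduled SuiteScript could hit the same URL on a timer. The sketch below is only an illustration: the Suitelet path and parameters are placeholders you'd copy from the Network tab, the endpoint is undocumented, and it may change between releases.

```typescript
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
// Sketch only: the Suitelet path and query parameters are placeholders.
// Copy the real ones from the request you captured in the Network tab;
// these internal endpoints are undocumented and may change at any time.
import * as https from 'N/https';
import * as url from 'N/url';
import * as log from 'N/log';

export function execute(): void {
  // Build an absolute URL to the internal APM Suitelet you observed
  // (script/deploy values here are hypothetical).
  const domain = url.resolveDomain({ hostType: url.HostType.APPLICATION });
  const apmUrl =
    'https://' + domain +
    '/app/site/hosting/scriptlet.nl?script=...&deploy=...'; // placeholders

  // Server-side calls may need extra authentication handling depending on
  // how the Suitelet is deployed; inspect the captured request headers.
  const response = https.get({ url: apmUrl });

  // The body is whatever JSON the APM UI consumes; log it, then parse and
  // store the metrics you care about (custom record, file, etc.).
  log.audit({ title: 'APM metrics payload', details: response.body });
}
```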
Good luck!
Microsoft recently published the Microsoft Search API (beta), which makes it possible to index external systems by creating an MS Graph search custom connector.
I created such a connector, and it has been successful so far. I also pushed a few items to the index, and in the MS admin center I created a result type and a vertical. Now I'm able to find the external items in the SharePoint Online modern search center, in a dedicated tab belonging to the search vertical created before. So far so good.
But now I wonder:
How can I make sure that the external data is continuously pushed to the MS Search index? (How can this be implemented? Is there a tutorial or a sample project? What is the underlying architecture?)
Is there a concept of Full / Incremental / Continuous Crawls for a Search Custom Connector at all? If so, how can I "hook" into a crawl in order to push updates for changed data to the index?
Or do I have to implement it all on my own? And if so, what would be a suitable approach?
Thank you for trying out the connector APIs. I am glad to hear that you are able to get items into the index and see the results.
Regarding your questions: the logic for determining when to push items, and your crawl strategy, is something that you need to implement on your own. There is no one best strategy per se; it will depend on your data source and the type of access you have to that data. For example, do you get notifications every time the data changes? If not, how do you determine what data has changed? If none of that is possible, you might need to do a periodic full recrawl, but then you will need to consider the size of your data set for ingestion.
We will look into ways to reduce the amount of code you have to write in the future, but right now, this is something you have to implement on your own.
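To make the "push" part concrete, here is a minimal, unofficial sketch of updating one changed item; the connection name, schema properties, ACL value and token acquisition are all placeholders, and at the time of this question the API lives under the beta endpoint.

```typescript
// Minimal sketch: push (create or update) a single item in a Graph
// connector index. The connection id, schema property names, ACL value
// and token handling are placeholders, not part of an official sample.
interface ExternalItem {
  acl: { type: string; value: string; accessType: string }[];
  properties: Record<string, unknown>;
  content: { value: string; type: 'text' | 'html' };
}

async function pushItem(
  accessToken: string,
  connectionId: string,
  itemId: string,
  item: ExternalItem,
): Promise<void> {
  // Graph connectors API (beta at the time of this question).
  const url =
    `https://graph.microsoft.com/beta/external/connections/${connectionId}/items/${itemId}`;

  const response = await fetch(url, {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(item),
  });

  if (!response.ok) {
    throw new Error(`Failed to push item ${itemId}: ${response.status}`);
  }
}

// Example usage for one changed item (values are made up):
// await pushItem(token, 'mycrawlerconnection', 'item-42', {
//   acl: [{ type: 'everyone', value: 'everyone', accessType: 'grant' }], // value depends on ACL type
//   properties: { title: 'Changed document', url: 'https://example.com/doc/42' },
//   content: { value: 'Updated body text', type: 'text' },
// });
```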
-James
I recently implemented incremental crawling for Graph connectors using Azure Functions. I created a timer-triggered function that fetches the items updated in the data source since the time of the last function run and then updates the search index with those items.
I also wrote a blog post about this approach, using a SharePoint list as the data source. The entire source code can be found at https://github.com/aakashbhardwaj619/function-search-connector-crawler. Hope it is useful.
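The overall shape of such a timer-triggered function is roughly the following. This is a sketch, not the code from that repository: the data-source query, the push helper and the way the last-run timestamp is persisted are all simplified assumptions.

```typescript
// Sketch of an incremental crawl in an Azure Function (Node.js v3
// programming model: function.json supplies a timer binding, e.g. with a
// CRON schedule like "0 */15 * * * *").
import { AzureFunction, Context } from '@azure/functions';

interface SourceItem {
  id: string;
  [key: string]: unknown;
}

// In a real function, persist this somewhere durable (blob/table storage);
// an in-memory value is only good enough for a sketch.
let lastCrawlTime = new Date(0);

// Placeholder: query your data source (SharePoint list, database, API)
// for items whose "last modified" value is greater than `since`.
async function fetchItemsChangedSince(since: Date): Promise<SourceItem[]> {
  return [];
}

// Placeholders for auth and for the externalItem PUT shown earlier.
async function getToken(): Promise<string> {
  return 'access-token';
}
async function pushItem(token: string, item: SourceItem): Promise<void> {
  // PUT .../external/connections/{connectionId}/items/{item.id}
}

const incrementalCrawl: AzureFunction = async function (context: Context): Promise<void> {
  const now = new Date();
  const changedItems = await fetchItemsChangedSince(lastCrawlTime);

  const token = await getToken();
  for (const item of changedItems) {
    await pushItem(token, item);
  }

  lastCrawlTime = now;
  context.log(`Incremental crawl pushed ${changedItems.length} item(s).`);
};

export default incrementalCrawl;
```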
I wonder if it is possible to add a listener to Cassandra that reports the table and the primary key of changed entries? It would be great to have such a mechanism.
Checking the Cassandra documentation, I only find the option of adding StateListener(s) to the Cluster instance.
Does anyone know how to do this without hacking Cassandra's data store, or without wrapping the driver and doing something on my own?
Check out this JIRA ticket for a planned feature --
https://issues.apache.org/jira/browse/CASSANDRA-8844
If you like it, vote for it :)
CDC
"In databases, change data capture (CDC) is a set of software design
patterns used to determine (and track) the data that has changed so
that action can be taken using the changed data. Also, Change data
capture (CDC) is an approach to data integration that is based on the
identification, capture and delivery of the changes made to enterprise
data sources."
-Wikipedia
As Cassandra is increasingly being used as the Source of Record (SoR) for mission-critical data in large enterprises, it is increasingly being called upon to act as the central hub of traffic and data flow to other systems. In order to try to address the general need, we propose implementing a simple data logging mechanism to enable per-table CDC patterns.
If clients need to know about changes, the world has mostly gone to the message broker model: a middleman which connects producers and consumers of arbitrary data. Kafka, RabbitMQ, and NATS are worth reading about, and there is an older DZone article on the topic. In your case, the client writing to the database would also send out a change message. What's nice about this model is that you can then pull whatever you need from the database.
Kafka is interesting because it can also store data. In some cases, you might be able to dispose of the database altogether.
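A bare-bones sketch of the write-then-publish pattern with the cassandra-driver and kafkajs packages might look like this (keyspace, table and topic names are made up):

```typescript
// Sketch: the client writes to Cassandra and then publishes a change
// message describing what changed. Names below are illustrative only.
import { Client } from 'cassandra-driver';
import { Kafka } from 'kafkajs';

const cassandra = new Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'shop',
});

const kafka = new Kafka({ clientId: 'order-service', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function saveOrder(orderId: string, status: string): Promise<void> {
  // 1. Write to Cassandra as usual.
  await cassandra.execute(
    'UPDATE orders SET status = ? WHERE order_id = ?',
    [status, orderId],
    { prepare: true },
  );

  // 2. Publish what changed: table + primary key (+ any payload you like).
  await producer.send({
    topic: 'orders-changed',
    messages: [
      { key: orderId, value: JSON.stringify({ table: 'orders', order_id: orderId, status }) },
    ],
  });
}

async function main(): Promise<void> {
  await cassandra.connect();
  await producer.connect();
  await saveOrder('42', 'SHIPPED');
  await producer.disconnect();
  await cassandra.shutdown();
}

main().catch(console.error);
```

Consumers of the orders-changed topic then get exactly what was asked for here: the table and the primary key of the row that changed, and they can read the current row from Cassandra whenever they need the full data.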
Are you looking for something like triggers?
https://github.com/apache/cassandra/tree/trunk/examples/triggers
A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database. The trigger is mostly used for maintaining the integrity of the information on the database. For example, when a new record (representing a new worker) is added to the employees table, new records should also be created in the tables of the taxes, vacations and salaries.
So here's my deal.
I'm using Node with the Express framework. The website I'm working on grabs scraped data and stores it for each user of the site. That data can then be displayed on the user's page whenever they want to access it, so the data will be scraped, put in a database or other storage (whatever I decide the best way to do it is), and then pulled back out for the user.
I'm trying to figure out what the best database setup would be. There will potentially be large amounts of data per user, especially over long periods of time. I've read some stuff about using Redis to cache some data, like the user login info and other basic stuff, and then using MongoDB for the big data. But I don't know; I'm new to database stuff, so I am open to some new teachings and some ideas from the masters.
What would you guys suggest I do? I want it to be fast and able to handle multiple queries at the same time, but really, I have no idea what I'm talking about, so please help me.
What would you guys suggest I do?
This really depends on the nature of your data, how you model your domain, and how you want to persist it. I would first try to figure out the basic model and, based on that, choose the most suitable database system. Don't jump to conclusions about caching with Redis when you don't even know whether you will need it in the first place.
The suggestion might also depend on how much time you want to spend on the database layer of your application. Some database systems provide more functionality than others, depending on their concepts. If you are a beginner, choose a single mainstream solution that is well documented and has an established community, like MongoDB or MySQL, that will cover all your needs from the beginning, so that you won't end up managing a multitude of systems.
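For example, a very small sketch of the "one mainstream database" route with the official mongodb driver could look like this; the collection layout and field names are just one possible way to model scraped data per user:

```typescript
// Sketch: store and retrieve scraped data per user with the official
// mongodb Node.js driver. Database, collection and field names are made up.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('scraper');
const scrapes = db.collection('scrapes');

// Store one scrape result for a user.
async function saveScrape(userId: string, payload: unknown): Promise<void> {
  await scrapes.insertOne({ userId, payload, scrapedAt: new Date() });
}

// Fetch the most recent scrapes for a user's page.
async function getRecentScrapes(userId: string, limit = 50) {
  return scrapes
    .find({ userId })
    .sort({ scrapedAt: -1 })
    .limit(limit)
    .toArray();
}

async function main(): Promise<void> {
  await client.connect();
  // An index on userId + scrapedAt keeps per-user lookups fast as the data grows.
  await scrapes.createIndex({ userId: 1, scrapedAt: -1 });

  await saveScrape('user-123', { price: 19.99, source: 'example.com' });
  console.log(await getRecentScrapes('user-123'));

  await client.close();
}

main().catch(console.error);
```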
I want to log some information about my visitors. Is it better to use the IIS-generated log or to create my own in a SQL Server 2008 database?
I know I should probably provide more information about my specific scenario, but I'd just like, in general, the pros and cons of either proposal.
You can add additional information to the IIS logs from ASP.NET using HttpResponse.AppendToLog. Additionally, you could use the Advanced Logging module to create your own logs with custom filters and custom data, including data from performance counters and more.
It all depends on what information you want to analyse.
If you're doing aggregations and rollups then you'd want to pull this data into a database for analysis. Pulling your data into a database will give you access to indexes and better querying tools.
If you're doing infrequent, one-off, simple queries, then LogParser might be sufficient for your needs. However, you'll be constantly scanning unindexed flat files looking for data, which is I/O-intensive.
But as you say, without knowing more about your specific scenario it's hard to say what would be best.