I have a Node.js app that should perform some queries on my ontology. The ontology was created using Protégé and there is already some data in it.
The problem is that I do not know where or how to set up a triple store and expose it as a SPARQL endpoint.
Since I could not find anything recent by googling, I suppose I am completely lost.
Any suggestions?
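For example, if the ontology ends up loaded into something like Apache Jena Fuseki (just one possible triple store, not necessarily the right one here), the Node side only needs to speak the SPARQL 1.1 Protocol over HTTP. A minimal sketch, assuming a local Fuseki dataset named `ontology` (the endpoint URL is an assumption):

```typescript
// Minimal sketch: query a SPARQL endpoint from Node.js via the SPARQL 1.1 Protocol.
// The URL assumes a local Apache Jena Fuseki dataset named "ontology" -- adjust it
// to wherever the triple store actually runs.
const ENDPOINT = "http://localhost:3030/ontology/sparql"; // assumed URL

async function runSelect(query: string): Promise<any> {
  const res = await fetch(`${ENDPOINT}?query=${encodeURIComponent(query)}`, {
    headers: { Accept: "application/sparql-results+json" },
  });
  if (!res.ok) throw new Error(`SPARQL request failed: ${res.status}`);
  return res.json(); // standard SPARQL JSON results format
}

// Example usage: list a few triples from the loaded ontology.
runSelect("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
  .then((results) => console.log(results.results.bindings))
  .catch(console.error);
```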
Related
I have a web server built specifically for one app. Some of the services get very complex in terms of calling a lot of functions across multiple services, each of which queries multiple models. Would it be a good idea to save the documents queried during each request in AsyncLocalStorage to reduce the time spent querying?
So I would check whether the document is present in ALS; if it is, use it, otherwise fetch it and save it there before using it.
I tried to find references on this but didn't find anything, which led me to think that maybe it is not such a good idea after all. But querying the same document again and again across different services doesn't make sense either.
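To make the idea concrete, here is a minimal sketch of the pattern I have in mind with Express (`queryUserFromDb` is just a hypothetical stand-in for the real model call):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import express from "express";

// One Map per request acts as the per-request document cache.
const requestCache = new AsyncLocalStorage<Map<string, unknown>>();

const app = express();

// Middleware: every request runs inside its own ALS context.
app.use((_req, _res, next) => {
  requestCache.run(new Map(), next);
});

// Hypothetical model query -- stands in for whatever the services call.
async function queryUserFromDb(id: string): Promise<unknown> {
  return { id, name: "..." };
}

// Services call this instead of hitting the model directly:
// the first caller in the request pays for the query, the rest reuse it.
async function getUser(id: string): Promise<unknown> {
  const cache = requestCache.getStore();
  if (cache && cache.has(id)) return cache.get(id);
  const doc = await queryUserFromDb(id);
  cache?.set(id, doc);
  return doc;
}
```

The obvious caveat is that anything cached this way can go stale within the request if another service mutates the document mid-request.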
I want to migrate a UniData database, which is multivalue, to SQL using .NET code. Is this possible? One possibility is SSIS, but that would consume a lot of time because we would have to run an ETL process for every table in the DB. So I was looking for .NET code with which I can connect to the UniData DB and migrate the data to SQL.
You're probably getting downvoted because this is an awfully general question, and it's not particularly a programming question, but rather a big project.
One piece of advice is to flip things around and extract the information from the UniData side, "exploding" out the multivalues into flat tables that your ETL process can consume. The challenge there (apart from writing UniBasic code) is identifying which multivalued fields are associated with each other. Unless you have very good documentation, that can be tough to do.
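To illustrate just the "exploding" step (not the UniData connectivity itself), here is a rough sketch in TypeScript, assuming the associated multivalued fields have already been read out of the record as value-mark-delimited strings; the field names are made up for the example:

```typescript
// UniData value mark (ASCII 253) separates multiple values within one field.
const VALUE_MARK = String.fromCharCode(253);

// A record with associated multivalued fields, e.g. one order with N line items.
interface MultivalueRecord {
  orderId: string;   // single-valued
  itemCodes: string; // multivalued: "A]B]C" (] = value mark)
  quantities: string; // multivalued, associated with itemCodes
}

interface FlatRow {
  orderId: string;
  itemCode: string;
  quantity: string;
}

// "Explode" the associated multivalues into one flat row per value,
// which is the shape a relational ETL target expects.
function explode(rec: MultivalueRecord): FlatRow[] {
  const codes = rec.itemCodes.split(VALUE_MARK);
  const qtys = rec.quantities.split(VALUE_MARK);
  return codes.map((code, i) => ({
    orderId: rec.orderId,
    itemCode: code,
    quantity: qtys[i] ?? "", // associated fields should line up position by position
  }));
}
```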
I'm currently involved in an app project, and I'm in charge of setting up the backend.
What I'm used to is a MySQL database + PHP for cleaning and managing the data sent to and from the front end, which I have much more experience with. However, because of certain preferences of my bosses, on this project I've found myself looking at IBM's Bluemix and Cloudant. Cloudant is a NoSQL database (like CouchDB), and my experience with NoSQL is severely lacking. All I've managed to do so far is create a few JSON documents and some basic views.
What I need to figure out is how to perform CRUD (create, read, update, delete) actions on a NoSQL database, or at least what that would look like.
In addition, I need to know whether there are ways to implement security measures (security and anti-hacking functions) on a NoSQL database without an external layer, or whether I will need to route the data through some sort of PHP function first, to clean it, before sending it to the Cloudant server where my database sits.
Let me know if my attempt to explain my problem is lacking in clarity. I'll try my best to state it a different way, if need be.
Generally speaking, there is no ANSI-style standard for NoSQL databases. In other words, NoSQL databases are not as standardized as SQL databases. Standards are only starting to appear; you can think of it as a technology still in the making.
What you have in general is an API with methods such as put_record or delete_record, or a REST interface that is logically equivalent. Also, in general you CRUD the whole record, not parts of the record.
Take a look at the reference: Cloudant - Reading and Writing
That said, in your case I would recommend abstracting away from the specific NoSQL implementation you end up using if you care about avoiding vendor lock-in. So I would suggest wrapping the CRUD calls in PHP functions that can later be replaced if you want to change the NoSQL database flavor.
This approach has the additional advantage of providing an abstraction layer where you can implement your own security. Some important NoSQL databases have no concept of multi-tenancy, or have only recently implemented it. Again, it is a technology in the making.
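As a rough sketch of what such a wrapper can look like (shown here in TypeScript rather than PHP just for brevity; the account, database name, and credentials are placeholders), using Cloudant's CouchDB-style HTTP document API:

```typescript
// Thin CRUD wrapper over Cloudant's CouchDB-style HTTP document API.
// BASE_URL and the credentials are placeholders for your own account.
const BASE_URL = "https://ACCOUNT.cloudant.com/mydatabase";
const AUTH = "Basic " + Buffer.from("username:password").toString("base64");

async function createDoc(id: string, body: object): Promise<any> {
  const res = await fetch(`${BASE_URL}/${id}`, {
    method: "PUT",
    headers: { Authorization: AUTH, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json(); // { ok, id, rev } on success
}

async function readDoc(id: string): Promise<any> {
  const res = await fetch(`${BASE_URL}/${id}`, { headers: { Authorization: AUTH } });
  return res.json();
}

async function updateDoc(id: string, rev: string, body: object): Promise<any> {
  // Updates replace the whole document and must carry the current revision.
  return createDoc(id, { ...body, _rev: rev });
}

async function deleteDoc(id: string, rev: string): Promise<any> {
  const res = await fetch(`${BASE_URL}/${id}?rev=${rev}`, {
    method: "DELETE",
    headers: { Authorization: AUTH },
  });
  return res.json();
}
```

Input validation and any extra security checks can then live inside these functions, independent of the database behind them.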
When your mindset is a relational one, you tend to think of the database as something that will help you guarantee data consistency as much as possible. But NoSQL databases are not like that. Think of them as a simple repository of documents (in a JSON or XML structure, for instance), without cross-references.
Then the obvious question is: why would anyone want such a thing? One possible answer is that NoSQL databases can hold aggregates of consolidated data, which you can retrieve to save the time of reprocessing or re-retrieving data unnecessarily.
As for security, most (if not all) NoSQL databases have some pretty good authentication mechanisms.
So here's my deal.
I'm using Node with the Express framework. The website I'm working on grabs scraped data and stores it for each user of the site. That data can then be displayed on the user's page whenever they want to access it, so the data will be scraped, put into a database or some other storage (whatever I decide is the best way to do it), and then pulled back out for the user.
I'm trying to figure out what the best database setup would be. There will potentially be large amounts of data per user, especially over long periods of time. I've read about using Redis to cache some data, like user login info and other basics, and then using MongoDB for the big data. But I don't know; I'm new to databases, so I'm open to some new teachings and ideas from the masters.
What would you guys suggest I do? I want it to be fast and able to handle multiple queries at the same time, but really, I have no idea what I'm talking about, so please help me.
What would you guys suggest I do?
This really depends on the nature of your data, how you model your domain, and how you want to persist it. I would first try to figure out the basic model and, based on that, choose the most suitable database system. Don't jump to quick conclusions about caching with Redis when you don't even know whether you will need it in the first place.
The suggestion also depends on how much time you want to spend on the database layer of your application. Some database systems provide more functionality than others, depending on their concepts. If you are a beginner, choose a single mainstream solution that is well documented and has an established community, like MongoDB or MySQL, that will cover all your needs from the beginning, so that you won't end up managing a multitude of systems.
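If you do end up with MongoDB, a minimal sketch of what the per-user scraped-data model from the question might look like with the official Node.js driver (the connection string, database, and collection names are assumptions):

```typescript
import { MongoClient } from "mongodb";

// Connection string, database and collection names are placeholders.
const client = new MongoClient("mongodb://localhost:27017");
const items = client.db("scraper").collection("scraped_items");

// Call once at startup.
async function init(): Promise<void> {
  await client.connect();
  // An index on userId keeps per-user lookups fast as the data grows.
  await items.createIndex({ userId: 1, scrapedAt: -1 });
}

// One document per scraped item, keyed by the user it belongs to.
async function saveScrapedItem(userId: string, payload: object): Promise<void> {
  await items.insertOne({ userId, payload, scrapedAt: new Date() });
}

// Pull a user's items back out, newest first, for display on their page.
async function getItemsForUser(userId: string, limit = 50) {
  return items.find({ userId }).sort({ scrapedAt: -1 }).limit(limit).toArray();
}
```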
I developed an app that connects to a SQL Server 2005 database, so my DAL objects were generated using information from that DB.
It will also be possible to connect to Oracle and MySQL databases, all with the same table structures (aside from the normal differences in field types, such as varbinary(max) in SQL Server and BLOB in Oracle, and so on). For this purpose, I have already defined multiple connection strings and multiple SubSonic providers for the different DBs the app will run on.
My question is: if I generated my objects using a SQL Server database, should the generated objects work transparently with the other DBs, or do I need to generate a different DAL for each database engine I use? Should I be aware of any bugs I may encounter while performing these operations?
Thanks in advance for any advice on this issue.
I'm using SubSonic 2.2 by the way....
From what I've been able to test so far, I can't see an easy way to achieve what I'm trying to do.
The ideal situation for me would have been to generate the SubSonic objects using SQL Server, for example, and then be able to switch dynamically to MySQL just by creating the correct provider for it at runtime along with its connection string. I got to the point where my app would correctly switch from SQL Server to a MySQL DB, but the app then fails because SubSonic internally generates queries of the form
SELECT * FROM dbo.MyTable
which MySQL obviously doesn't support (there is no dbo schema). I also noticed queries that enclose table names in brackets ([]), so it seems there are a number of factors that limit using one provider across multiple DB engines.
I guess my only other option is to sort it out with multiple generated providers, although I must admit it does not make me comfortable knowing that I'll have N copies of basically the same classes throughout my project.
I would really love to hear from anyone else if they've had similar experiences. I'll be sure to post my results once I get everything sorted out and working for my project.
Has any of this changed in 3.0? This would definitely be a worthy reason for me to upgrade if life is any easier on this matter...