Java ME Triple Store

I'm currently coding a Java ME program that has an internal OWL reasoning engine (HermiT & Pellet) and receives ontology data (sensor data) from a backend server. Sometimes this data consists of both raw sensor data and results already inferred by a reasoner on the backend server. Reasoning will only be performed on the mobile device in case of network failure.
At the moment I'm lacking a good method of storing the backend data for further processing.
I've already looked into triple stores, but I was wondering: are there any good ones for Java ME applications?
Grtz
Neo

You're approaching this the wrong way. Do the reasoning on the server and send the results to your application.
Reasoning is computationally difficult. Trying to do it on a mobile device will either be a terrible user experience because it's slow, or just won't work on anything but toy data.
There are quite good RDF databases that perform reasoning, and if you really need DL reasoning, there are a number of dedicated OWL reasoners that it would not be hard to put a SPARQL endpoint in front of so you can query them remotely. Pick the one that best suits your needs and go with it; do the reasoning in the backend and get the results via the SPARQL protocol (HTTP).
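For illustration, the SPARQL protocol really is just HTTP, so even a thin client can consume reasoned results. A minimal sketch (TypeScript here rather than Java ME; the endpoint URL and predicate are placeholders):

```typescript
// Query a remote SPARQL endpoint over plain HTTP (SPARQL 1.1 Protocol).
// ENDPOINT and the predicate IRI below are hypothetical placeholders.
const ENDPOINT = "http://backend.example.com/sparql";

async function select(query: string): Promise<unknown> {
  const url = `${ENDPOINT}?query=${encodeURIComponent(query)}`;
  const res = await fetch(url, {
    headers: { Accept: "application/sparql-results+json" },
  });
  if (!res.ok) throw new Error(`SPARQL endpoint returned ${res.status}`);
  return res.json();
}

// Example: fetch already-reasoned sensor readings from the backend.
select(`
  SELECT ?sensor ?value WHERE {
    ?sensor <http://example.com/ns#hasReading> ?value .
  } LIMIT 10
`).then((bindings) => console.log(JSON.stringify(bindings, null, 2)));
```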

Related

If I'm integrating two separate APIs, is a MERN stack appropriate?

I have two separate cloud-based APIs that I am working on integrating. Neither piece of software talks directly to the other, so I am creating something in the middle to get them to communicate. I have had trouble finding examples or documentation on how exactly to do this; does anyone know of any resources that could help me out?
My plan going in was to use a MERN stack, running on a local server, to make GET and POST requests to both APIs, use some mapping and logic to transpose the data into the correct format, and send it to the other software. I do not have a client per se (other than myself) on my end, so I will really be skipping the React part of MERN; at least, that is what I'm thinking. I'll be using Mongo to keep track of both sets of data for redundancy. I also considered using a LAMP stack but felt that MERN would be faster at handling the data, and Mongo is more flexible in handling different data formats. If there is another process or technology that could help me that I'm not thinking of, I would be grateful to hear about it.
Has anyone encountered something like this before? Thank you.
As with most architecture questions, there's no completely right or wrong answer here. You could certainly design a well-built system for this purpose with either stack, even more so given that your front-end framework is not an important consideration. Instead, ask yourself questions like these:
Which stack do you have more experience with, and is this an appropriate time to learn a new set of technologies, or is it important to do the best work you're capable of right now (how important is time, cost, or quality in this case)?
Another generalization I'll stick my neck out for is a data-first approach: what sort of data are you dealing with from each cloud integration, and what kind of data do you need to support and/or create in order to make your system work? Mongo, being a NoSQL persistence layer, will allow you to change your data model and handle more varied data more quickly and easily than a SQL solution will. This is a double-edged sword, however, as the lack of validation and of a strongly-constrained (typed) data model will make your application harder to work with and debug as it grows. In short: how big might this application grow?
If you have a handy and familiar way to manage the three different data models you're dealing with (cloud service 1, cloud service 2, and your app) via MySQL, then that's a compelling reason to use it. However, if your style is to start dumping data into your database and you're comfortable with a more iterative approach (which may require more, albeit shorter, rounds of refactoring), then Mongo with MERN may be the preferable choice.
Finally, will others ever be working on this application? If so, which language would you prefer to collaborate in: PHP or JavaScript?
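Whichever stack you pick, the "something in the middle" boils down to a small translation service. A minimal sketch in TypeScript with Express (the two service URLs and field names are made up for illustration; Node 18+ assumed for global fetch):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical endpoints for the two cloud services.
const SERVICE_A = "https://api.service-a.example.com";
const SERVICE_B = "https://api.service-b.example.com";

// Pull records from service A, transpose them, and push to service B.
app.post("/sync", async (_req, res) => {
  const records = await (await fetch(`${SERVICE_A}/records`)).json();

  // Mapping/"transpose" logic: rename fields into service B's shape.
  const payload = records.map((r: any) => ({
    externalId: r.id,
    title: r.name,
    updatedAt: r.modified,
  }));

  const result = await fetch(`${SERVICE_B}/items`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  res.status(result.status).json({ synced: payload.length });
});

app.listen(3000, () => console.log("integration service on :3000"));
```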

Separate application service for command / query in CQRS implementation in Domain Driven Design?

When implementing CQRS with Domain Driven Design, we separate our command interface from our query interface.
My understanding is that at the domain level this reduces complexity significantly (especially when using event sourcing), since in the domain model your read model will be different from your write model. So that looks like separate domain services for the read and write sides of your bounded context.
At the application level, do we need a separate application service for the read and write separations of our domain?
I've been playing devil's advocate on the matter. My thought is that it could be overkill, requiring clients to know the difference. But then I think about how a consuming web service might use it. Generally, it will issue GET requests for reading and POSTs for writing, which means it already knows the difference.
I see the benefits being cleaner application services.
The real value is having a properly separated read model and domain model. They do fundamentally different things, and often have very different shapes. It's entirely possible for the read model to contain an amalgam of data from differing domain objects, for example.
When you think about how they are used and the way they function within an application, you can start to appreciate the need for the separation. The classic example here is to consider the number of writes compared to the reads in a typical application: the reads massively outnumber the writes. By maintaining the distinction you can optimise each side for its respective role.
Another aspect to bear in mind is that a 'post' will constitute a command, not a viewmodel (which may contain a read model). If you are using a CQRS approach you need to adapt the way you do queries and posts. In fact, you can achieve a much more descriptive language, rather than simply reflecting a view model back and forth to a server.
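To make the separation concrete, here's a minimal sketch of distinct command and query application services (TypeScript; all names and shapes are illustrative, not from any particular framework):

```typescript
// Write side: commands express intent and return nothing (or an ack).
interface PlaceOrderCommand {
  customerId: string;
  lines: { productId: string; quantity: number }[];
}

class OrderCommandService {
  async placeOrder(cmd: PlaceOrderCommand): Promise<void> {
    // load the aggregate, enforce invariants, persist events...
  }
}

// Read side: queries return denormalized view models, possibly an
// amalgam of data from several domain objects.
interface OrderSummaryView {
  orderId: string;
  customerName: string; // drawn from the Customer aggregate
  totalAmount: number;  // computed when the write happened
}

class OrderQueryService {
  async getOrderSummary(orderId: string): Promise<OrderSummaryView | null> {
    // read straight from a view store optimized for this query...
    return null;
  }
}
```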
If you're interested, I have a blog post which outlines a high-level overview of a typical CQRS architecture. You can find it here: A step by step overview of a typical CQRS application. I hope you find it useful.
A final point: we are in the process of adding new functionality and have found the separation to be very helpful. Changes to one side don't impact the other in the way they otherwise might.

What is the best method to fetch social media data?

Hey, I'm new to big data. I am making a system which will fetch data from social media and process the results; for this I am using Apache Spark.
Following is the flow of my model:
1. The user saves the desired keywords using a webpage made in PHP.
2. With those keywords I fetch data from social media.
3. I process the data (e.g. sentiment and views) and then provide it to the end user.
Now my confusion is how I should fetch the data from social media:
- Apache Kafka,
- Apache Flume,
- or by directly calling the API, e.g. with Twitter4J.
I would have to learn to implement all three data-fetching techniques, and if I use the direct API I can skip the whole Hadoop part. It would be great if you could suggest which one is better.
I am doing all of the above on a local machine. I have completed the UI part; now I am in the phase where I have to fetch data.
Thanks.
I guess I will make this a suggestion.
You may not want to fetch data from any source using a distributed system, unless you plan to DDoS their production server. If your cluster is set up behind one router, the whole cluster may be blacklisted because all nodes together consistently hit the access rate limit that adds up at your router, depending on how powerful the server is. Twitter's servers don't care about 100 threads, to be honest (provided you know what you are doing), but any startup will probably come after you right away.
If you have a workstation with 4 cores, keeping it up catching streaming data should suffice for the initial stage of academic research. Or, if you really want tons of data, you can perhaps do Hadoop streaming with your fetcher script as the mapper and no reducer: quick and easy. If you are a superstar in Java or Scala, run a fetching thread on each vcore of Spark's executors.
Now, Twitter has a REST API, which means you can fetch data in pretty much any programming language. Of course, it may sometimes be easier to use existing client libraries; assuming they are well maintained, they are almost always more robust. But I get lazy all the time. For example, I sometimes just want a sample data point, so I pipe curl into jq to check what I want to check.
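A direct REST fetch needs only a few lines. Here's a sketch in TypeScript against the v1.1 search endpoint (the bearer token is a placeholder you'd obtain via app-only auth; Twitter4J is the rough Java equivalent the question mentions):

```typescript
// Hit Twitter's REST search endpoint directly, no Kafka/Flume needed.
// TWITTER_BEARER_TOKEN is a hypothetical environment variable.
const BEARER_TOKEN = process.env.TWITTER_BEARER_TOKEN;

async function searchTweets(keyword: string) {
  const url =
    "https://api.twitter.com/1.1/search/tweets.json?q=" +
    encodeURIComponent(keyword);
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${BEARER_TOKEN}` },
  });
  if (res.status === 429) throw new Error("rate limited, back off");
  const body = await res.json();
  return body.statuses; // array of tweet objects
}

searchTweets("big data").then((tweets) =>
  tweets.forEach((t: any) => console.log(t.user.screen_name, t.text))
);
```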
Yes, learn about jq too; it will save you tons of time. And be a gentleman who doesn't DDoS people.

Send requests directly to CouchDB from a NodeJS/Angular application?

I'm currently building a new web application with user registration, profiles, image upload, and so on. I used the MEAN stack (MongoDB, ExpressJS, Angular, NodeJS) for previous projects and now want to try out CouchDB.
CouchDB delivers a REST API for free. I could shift all the logic to the client and make sure the input is valid via CouchDB's validation functions. That way I could make requests from the client directly to the database and would not have to code annoying things like CRUD operations in my ExpressJS controllers. Authentication, validation, and simple CRUD operations: it's all there, for free.
Is there a reason not to do so? Otherwise I would have to pass each request to my server and from there pass it on to CouchDB, which pretty much eradicates all the nice benefits over MongoDB.
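For reference, here's a minimal sketch of the kind of validation function I mean; it lives in a design document, and the field names are made up:

```typescript
// A CouchDB validate_doc_update function is stored as a string in a
// design document and runs on every write to the database.
const designDoc = {
  _id: "_design/profiles",
  validate_doc_update: `
    function (newDoc, oldDoc, userCtx, secObj) {
      if (!newDoc.username) {
        throw({ forbidden: "username is required" });
      }
      // only the owner (or an admin) may update an existing profile
      if (oldDoc && oldDoc.owner !== userCtx.name &&
          userCtx.roles.indexOf("_admin") === -1) {
        throw({ unauthorized: "not your document" });
      }
    }
  `.trim(),
};
// PUT this document to http://localhost:5984/<db>/_design/profiles
// to enable the validation.
```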
greetings,
Michel
I think your proposal is at least theoretically sound, and you might want to go ahead and do it, perhaps forwarding requests from the browser to CouchDB with a reverse proxy like nginx or node-http-proxy. I believe there are products on the market espousing this "no application server" architecture, such as parse.com, which provides some social proof that the idea is at least interesting and worth exploring.
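For what it's worth, the node-http-proxy variant is only a few lines; a minimal sketch (ports and error handling are illustrative):

```typescript
// Forward browser requests straight through to CouchDB's REST API
// with node-http-proxy.
import http from "http";
import httpProxy from "http-proxy";

const proxy = httpProxy.createProxyServer({
  target: "http://127.0.0.1:5984", // local CouchDB
});

http
  .createServer((req, res) => {
    proxy.web(req, res, {}, (err) => {
      res.writeHead(502);
      res.end("couchdb unreachable: " + err.message);
    });
  })
  .listen(8080, () => console.log("proxy on :8080"));
```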
However, I think you will at some point discover that there is such a thing as an application server, and that people use them and write code for them in nearly every application for good reason. Debugging problems in your CouchDB data validation code is probably going to be cumbersome at best. Compare that to the amazing debugging features you have for node.js code with node-inspector and the Chrome developer tools debugger.
CouchDB is also probably not going to provide realistically granular authorization capabilities. This means your application will eventually be exposed to malicious users simply doing a PUT with the right document ID and gaining access to data they are not authorized to see or change.
Very few applications are simple enough that UI + DB can handle all of the data transitions and operations that are needed. You could in theory code some of this logic in the browser, but having the Internet between your compound query logic and your database is going to add so much latency to your app that it makes some features impossible, especially if you have to do a query, get some results, and then do a secondary query based on each of those results. That is sometimes feasible between a server-side application and its CouchDB, but doing it across the Internet will suffer from the latency.

For an embedded client-server architecture, is node.js the best option?

We have an embedded box. The CPU is medium speed (600 MHz) and RAM is between 512 MB and 2 GB, depending on configuration. The system consists of a data layer that processes incoming data from the hardware, which needs to be displayed both remotely and on an HDMI output.
Since the remote aspect is as important as the local display, we have architected a client-server solution. The server just needs to respond to requests for data; the data comes from the internals of another process (via IPC) and is returned formatted for the client.
For the server we are thinking of using node.js. The data is specced to be formatted as JSON messages, so using JavaScript and JSON is simple.
However, are there any better/other server options to consider?
The main requirement is that the server can be extended to interact with the data process and can process and respond to requests.
We could write it ourselves, but feel that there must be usable tech to leverage already.
Thanks.
I am assuming that you need output as a webpage only.
It depends.
If your team knows Java/servlets well, you can use a simple Jetty/servlet/JSP combination.
But if you have a team that is good with JavaScript/node.js, go with it. Although I am not sure what stability requirements you have: node.js is quite stable, but it hasn't reached 1.0 yet.
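If you do go the node.js route, the core of such a server is tiny. A minimal sketch (the IPC read is stubbed out; in a real build it might read from a Unix socket or message queue):

```typescript
// A small HTTP server that answers data requests with JSON.
import http from "http";

function readFromDataProcess(): Promise<object> {
  // stand-in for the IPC interaction with the data layer
  return Promise.resolve({ temperature: 21.4, timestamp: Date.now() });
}

http
  .createServer(async (req, res) => {
    if (req.url === "/data") {
      const data = await readFromDataProcess();
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(data));
    } else {
      res.writeHead(404);
      res.end();
    }
  })
  .listen(8080, () => console.log("data server on :8080"));
```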
