How to read multiple RSS XML files in J2ME - java-me

I have already developed an RSS feed reader application in J2ME (Java ME) using KXmlParser.
It works for one RSS XML file: when I read that one XML file, my code runs and a list screen is displayed.
Now I want to apply the same code to 5 RSS XML files, i.e. I need to read the XML files one by one.
How is this possible in J2ME? Any help?

If you can parse one XML at a time:
Create a Vector to act as the XML queue.
Have a single Thread consume the XML feeds from the queue, reusing the same KXmlParser instance.
If you need to parse all the XMLs at the same time:
Create a thread and a KXmlParser instance for each XML.
The first approach is slower, but it runs with less memory and is simpler to implement; a sketch of it follows.
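A minimal sketch of the first approach, assuming kXML 2 (org.kxml2.io.KXmlParser) and MIDP's HttpConnection are available; the class name, the queue handling and the title-only parsing below are illustrative, not taken from the original code:

```java
import java.io.InputStreamReader;
import java.util.Vector;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;
import org.kxml2.io.KXmlParser;
import org.xmlpull.v1.XmlPullParser;

// Worker that parses queued RSS feed URLs one by one,
// reusing a single KXmlParser instance for every feed.
public class FeedQueue implements Runnable {
    private final Vector queue = new Vector();        // pending feed URLs
    private final KXmlParser parser = new KXmlParser();

    public synchronized void enqueue(String url) {
        queue.addElement(url);
        notify();                                     // wake the worker thread
    }

    public void run() {
        for (;;) {
            String url;
            synchronized (this) {
                while (queue.isEmpty()) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
                url = (String) queue.elementAt(0);
                queue.removeElementAt(0);
            }
            parseFeed(url);                           // parse this feed, then take the next one
        }
    }

    private void parseFeed(String url) {
        HttpConnection conn = null;
        try {
            conn = (HttpConnection) Connector.open(url);
            parser.setInput(new InputStreamReader(conn.openInputStream()));
            int event = parser.getEventType();
            while (event != XmlPullParser.END_DOCUMENT) {
                if (event == XmlPullParser.START_TAG && "title".equals(parser.getName())) {
                    String title = parser.nextText();
                    // TODO: append the title to the LCDUI List screen
                }
                event = parser.next();
            }
        } catch (Exception e) {
            // report the failure for this feed and continue with the next one
        } finally {
            if (conn != null) {
                try { conn.close(); } catch (Exception ignored) { }
            }
        }
    }
}
```

Start the worker once with new Thread(feedQueue).start(), call enqueue() for each of the five RSS URLs, and refresh the List screen after each feed finishes; the feeds are then parsed one by one with a single parser instance.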

Related

Limitless Multi Threading with JavaFX to update GUI

Using readUTF() on a DataInputStream, we are required to use a single thread that continuously checks for null and iterates over our input stream. From what I have found online, JavaFX does not seem to provide any way of implementing concurrency or multithreading while also allowing dynamic updating of nodes. Is there something I am overlooking? So far, this dynamic updating of nodes from a stream has been impossible. The task I am trying to perform is to take an input stream and populate a Pane with the text received.
I have tried Platform.runLater and Task, and I have also looked into Executor, but I don't believe it is what I am looking for.
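JavaFX does allow background threads; the rule is only that scene-graph nodes must be touched on the FX Application Thread, which is what Platform.runLater is for. A minimal sketch under that assumption, with the stream and the Pane treated as placeholders, where a Task blocks on readUTF() in the background and hands each received string to the FX thread for display:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import javafx.application.Platform;
import javafx.concurrent.Task;
import javafx.scene.layout.Pane;
import javafx.scene.text.Text;

// Reads strings from the stream on a background thread and appends them to the
// Pane on the FX Application Thread via Platform.runLater.
public final class StreamReaderTask extends Task<Void> {
    private final DataInputStream in;   // placeholder: your incoming stream
    private final Pane target;          // placeholder: the Pane to populate (a VBox stacks nicely)

    public StreamReaderTask(DataInputStream in, Pane target) {
        this.in = in;
        this.target = target;
    }

    @Override
    protected Void call() throws Exception {
        try {
            while (!isCancelled()) {
                final String line = in.readUTF();   // blocks, but only on this background thread
                Platform.runLater(() ->             // scene-graph changes must happen on the FX thread
                        target.getChildren().add(new Text(line)));
            }
        } catch (EOFException end) {
            // the other side closed the stream; finish normally
        }
        return null;
    }
}

// Usage sketch:
//   Thread t = new Thread(new StreamReaderTask(stream, pane), "stream-reader");
//   t.setDaemon(true);   // don't keep the application alive after the GUI closes
//   t.start();
```

A VBox (which is a Pane) is a convenient target here because it stacks the Text nodes automatically; a TextArea appended to inside runLater works just as well.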

node-red use several sources to build http post request

I am new to Node-RED and I am confused by the "message payload flow system".
I want to send a POST request that contains, among other params, files in the request payload. These files should be in an array called "files".
I read my files from my file system, and this works fine, but in the function node, how do I build my POST payload?
So far I have this:
The problem is that the payload contains both files and I can't find a way to get them separately. How can I retrieve both my files, separately, into the BUILD-POST-REQ function?
The core Join node can be used to combine the output of parallel input streams. It has a number of modes that control how many input messages it will collect together.
These include a count of messages.
You can also choose how it combines the input messages: either as an array, or as an object that uses msg.topic as the key for each incoming msg.payload.
OK, I found a solution, but I don't know if it is a best practice. Feel free to correct me!
The idea is that after each file is read, I store it in a new property of the msg object so that I can access it later in the flow.

FetchXml optimization

I am looking for ways to optimize bulk download of CRM 2011 data. Here are the two main scenarios:
a) Full synchronization: download of all data - first all accounts, then all contacts, and so on.
b) Incremental synchronization: download of all entities modified since a given date.
We use a multithreaded downloader with 3 threads. Each thread performs a FetchXml query for one entity type, which is downloaded page by page. Parsed objects are stored in the downloader cache, and the downloader moves on to the next page. Another thread pulls the downloaded data from the cache and processes it. This organization increases the download speed by more than 2x.
The problems I see:
a) The FetchXml protocol is very inefficient; it contains lots of unneeded data. Example: FormattedValues take 10-15% of the bandwidth (my data shows ~15% of the source XML stream, or ~10% of the zipped stream), although all we do with them is a) parse the XML and b) throw them away. (Note that the parsing is not negligible either - the iOS/Android Mono parsers are surprisingly slow.)
b) In the case of incremental synchronization, most of the FetchXml requests return zero items. Here it would be highly desirable to combine several FetchXml requests into one (AFAIK that is impossible), or perhaps to use another trick, such as asking for the counts of modified objects; I have not yet investigated what is possible.
Does anybody have any advice on how to optimize FetchXml traffic?
Your fastest method would be to use SQL Server directly for something like this (unless you are using CRM Online).
To make the incremental sync faster, your best bet is to use the aggregate functionality that FetchXML provides, which is both extremely quick and far less verbose.
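For example, a count-only aggregate query (a sketch; the entity, alias and date filter are placeholders) returns a single row with the number of records modified since the last sync instead of the records themselves:

```xml
<fetch aggregate="true" mapping="logical">
  <entity name="account">
    <!-- one row with a count, instead of the full record set -->
    <attribute name="accountid" alias="modifiedcount" aggregate="count" />
    <filter type="and">
      <condition attribute="modifiedon" operator="on-or-after" value="2013-01-01" />
    </filter>
  </entity>
</fetch>
```

If the count comes back as zero, the full FetchXml download for that entity can be skipped during the incremental sync.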
Why parse on iOS/Android Mono at all? If you are sharing this data with a large number of devices, you would be better off having a central caching server that sends the data back to the devices in a zipped JSON format (or possibly BSON). The caching server would request the changes from CRM, process them, and then send the incremental changes back to the clients in whatever format you choose. This would be considerably faster on the clients and use far less bandwidth.
I'm not sure of a way to further optimize FetchXML. I would question why you're not using the OData endpoint and REST, especially if you're primarily concerned with the amount of data being sent over the wire.
I have talked to some brilliant CRM MVPs, and I know they have used REST to migrate data to CRM. I'm not sure if they did it because it was faster, but I assumed that was why.
I do know that it will minimize the amount of data being sent to the client, since XML is extremely bloated.
Have a look at ExecuteMultipleRequest, which allows you to perform multiple requests/queries at once: http://msdn.microsoft.com/en-us/library/microsoft.xrm.sdk.messages.executemultiplerequest.aspx

Single Writer and Multiple Readers for Ole storage file

I have an OLE storage file which needs to be written by one writer and read by multiple readers. When I try to open a stream with CFile::shareDenyWrite, the stream does not open; it returns false. The stream opens if I use shareExclusive, but then I have to open the storage file with exclusive sharing.
Is there any way to open OLE storage files with one writer and multiple readers?
Yes, but you should be using StgOpenStorageEx, not CFile, to work with OLE storage files. Read the remarks on STGM_DIRECT_SWMR (single writer, multiple readers) for more information:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa380342%28v=vs.85%29.aspx
You'll need to make sure that you've initialized COM before using COM-based functions (just add AfxOleInit() in your InitInstance and the framework handles the start-up and shut-down).

C++/CLI efficient multithreaded circular buffer

I have four threads in a C++/CLI GUI I'm developing:
A collector that gathers raw data
The GUI itself
A background processing thread which takes chunks of raw data and produces useful information
A controller that joins the other three threads together
I've got the raw data collector working and posting results to the controller, but the next step is to store all of those results so that the GUI and background processor have access to them.
New raw data is fed in one result at a time at regular (frequent) intervals. The GUI will access each new item as it arrives (the controller announces new data and the GUI then accesses the shared buffer). The data processor will periodically read a chunk of the buffer (a second's worth, for example) and produce a new result. So effectively, there is one producer and two consumers which need access.
I've hunted around, but none of the CLI-supplied stuff sounds all that useful, so I'm considering rolling my own: a shared circular buffer which allows write locks for the collector and read locks for the GUI and data processor. This would allow multiple threads to read the data as long as those sections of the buffer are not being written to.
So my question is: Are there any simple solutions in the .net libraries which could achieve this? Am I mad for considering rolling my own? Is there a better way of doing this?
Is it possible to rephrase the problem so that:
The Collector collects a new data point ...
... which it passes to the Controller.
The Controller fires a GUI "NewDataPointEvent" ...
... and stores the data point in an array.
If the array is full (or otherwise ready for processing), the Controller sends the array to the Processor ...
... and starts a new array.
If the values passed between threads are not modified after they are shared, this might save you from needing the custom thread-safe collection class, and reduce the amount of locking required.
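A rough sketch of that hand-off pattern, shown in Java for brevity (the same structure applies in C++/CLI with List&lt;T&gt;, arrays and events); the Controller class, the batch size and the callback types are illustrative, not part of the original design:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Consumer;

// Controller for the flow described above: the collector thread pushes points in,
// the GUI is notified of each point, and the processor receives complete,
// read-only batches, so no shared circular buffer is needed.
public final class Controller {
    private final int batchSize;
    private final Consumer<Double> guiListener;       // e.g. fires the "NewDataPointEvent"
    private final Consumer<List<Double>> processor;   // receives one finished batch at a time
    private List<Double> current = new ArrayList<>();

    public Controller(int batchSize,
                      Consumer<Double> guiListener,
                      Consumer<List<Double>> processor) {
        this.batchSize = batchSize;
        this.guiListener = guiListener;
        this.processor = processor;
    }

    // Called by the collector thread for every new raw data point.
    public void onDataPoint(double value) {
        guiListener.accept(value);                    // GUI sees the point as it arrives
        List<Double> ready = null;
        synchronized (this) {
            current.add(value);
            if (current.size() >= batchSize) {
                ready = current;                      // hand over the full batch ...
                current = new ArrayList<>();          // ... and start a new one
            }
        }
        if (ready != null) {
            // The batch is never modified again, so the processor can use it freely.
            processor.accept(Collections.unmodifiableList(ready));
        }
    }
}
```

Because each completed batch is never modified again, the processor can read it on its own thread without further locking; in practice the processor callback would simply enqueue the batch for the background processing thread, and the GUI callback would marshal the point to the UI thread.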
