I need to create an image gallery. The images are stored on a remote server, and the BlackBerry client needs to download them and render them in the UI (a gallery view).
I have used the "UniversalImageDownloader" library on Android, but now I am looking for a similar freeware/open-source library that will serve my purpose on BlackBerry. Can anyone point me to such a resource?
I need to look into the following things:
• Async image download
• Gallery view
• Image caching
Edit-1
From my earlier experience, I understand that BlackBerry restricts the runtime to a maximum of about 250 threads (give or take 5), and each application is limited to 17 threads. So I must look into thread pooling and thread safety for my requirements.
I don't know of any library for lazy loading on BB. You could try to port that library to BlackBerry, or DIY. Let's see how you could achieve this:
You can code a consumer thread, which will download one image at a time (on BlackBerry, you won't get much performance improvement from downloading in parallel). This consumer could take URLs from a stack. The UI (screen, list) submits a request to the consumer each time it needs an image; the request is just passing the resource URL to the consumer, which puts it at the top of the stack. In the meantime, the GUI should display a default image or a loading message. There are plenty of good books and manuals on designing a producer-consumer scheme in Java in a thread-safe manner, but that goes beyond the scope of this answer.
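A minimal sketch of such a consumer in Java ME style (ImageConsumer and ImageListener are illustrative names of my own, not a RIM API; on a real BlackBerry you would also append a transport suffix such as ";deviceside=true" to the URL and push UI updates through UiApplication.invokeLater):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.util.Vector;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

/** Callback the UI implements to receive downloaded images (illustrative). */
interface ImageListener {
    void imageReady(String url, byte[] data);
}

/** Single consumer thread; the UI pushes URLs and the newest request is served first. */
public class ImageConsumer extends Thread {
    private final Vector stack = new Vector(); // LIFO queue of pending URLs
    private final ImageListener listener;

    public ImageConsumer(ImageListener listener) { this.listener = listener; }

    /** Called from the UI thread: put the URL on top of the stack and wake the consumer. */
    public void request(String url) {
        synchronized (stack) {
            stack.addElement(url);
            stack.notify();
        }
    }

    public void run() {
        for (;;) {
            String url;
            synchronized (stack) {
                while (stack.isEmpty()) {
                    try { stack.wait(); } catch (InterruptedException e) { return; }
                }
                url = (String) stack.lastElement();      // newest first
                stack.removeElementAt(stack.size() - 1);
            }
            try {
                listener.imageReady(url, download(url));
            } catch (Exception e) {
                // notify the UI here so it can keep showing the default image
            }
        }
    }

    private byte[] download(String url) throws Exception {
        HttpConnection conn = (HttpConnection) Connector.open(url);
        try {
            InputStream in = conn.openInputStream();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
            in.close();
            return out.toByteArray();
        } finally {
            conn.close();
        }
    }
}
```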
Starting in OS 5.0, you have the PictureScrollField class, which allows you to display a row of scrolling images and can be customized to some extent. There's a sample demo app in the samples folder of the SDK, I think.
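A rough sketch of how PictureScrollField is typically wired up (based on the OS 5.0 API as I remember it; double-check the exact signatures against your SDK's javadocs):

```java
import net.rim.device.api.system.Bitmap;
import net.rim.device.api.ui.container.MainScreen;
import net.rim.device.api.ui.extension.component.PictureScrollField;

public class GalleryScreen extends MainScreen {
    public GalleryScreen() {
        // Placeholder thumbnail; in the real app the consumer/cache supplies these.
        Bitmap placeholder = Bitmap.getBitmapResource("loading.png");

        PictureScrollField.ScrollEntry[] entries = new PictureScrollField.ScrollEntry[] {
            new PictureScrollField.ScrollEntry(placeholder, "Image 1", ""),
            new PictureScrollField.ScrollEntry(placeholder, "Image 2", ""),
            new PictureScrollField.ScrollEntry(placeholder, "Image 3", "")
        };

        // Width and height of each entry in pixels.
        PictureScrollField psf = new PictureScrollField(150, 100);
        psf.setData(entries, 0); // entries plus the initially selected index
        psf.setHighlightStyle(PictureScrollField.HighlightStyle.ILLUMINATE);
        add(psf);
    }
}
```

When the consumer finishes a download, you would decode the bytes into a Bitmap and replace the placeholder entry.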
If multiple requests for the same image are likely to be made during program execution, caching is an interesting enhancement. You could just keep the images in RAM in the consumer stack, or even save them to a folder on the SD card. The consumer will then look in the cache first, and only if the image isn't there will it initiate a remote download.
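A minimal two-level cache sketch, assuming an illustrative ImageCache class of my own (RAM Hashtable first, then a folder on the SD card via the JSR-75 FileConnection API; the cache directory shown is an assumption and must already exist, and file-system permissions are required):

```java
import java.io.DataInputStream;
import java.io.OutputStream;
import java.util.Hashtable;
import javax.microedition.io.Connector;
import javax.microedition.io.file.FileConnection;

/** Hypothetical two-level image cache: RAM first, SD card second. */
public class ImageCache {
    private final Hashtable ram = new Hashtable(); // url -> byte[]
    private static final String DIR = "file:///SDCard/BlackBerry/pictures/cache/";

    /** Returns cached bytes, or null if the consumer should download the image. */
    public byte[] get(String url) {
        byte[] data = (byte[]) ram.get(url);
        if (data != null) return data;          // RAM hit
        data = readFromCard(fileNameFor(url));
        if (data != null) ram.put(url, data);   // promote card hit to RAM
        return data;
    }

    public void put(String url, byte[] data) {
        ram.put(url, data);
        writeToCard(fileNameFor(url), data);    // best-effort persistence
    }

    private String fileNameFor(String url) {
        // Simplistic flat, file-system-safe name derived from the URL.
        return DIR + Integer.toHexString(url.hashCode()) + ".img";
    }

    private byte[] readFromCard(String path) {
        try {
            FileConnection fc = (FileConnection) Connector.open(path, Connector.READ);
            try {
                if (!fc.exists()) return null;
                byte[] data = new byte[(int) fc.fileSize()];
                DataInputStream in = fc.openDataInputStream();
                in.readFully(data);
                in.close();
                return data;
            } finally { fc.close(); }
        } catch (Exception e) { return null; }
    }

    private void writeToCard(String path, byte[] data) {
        try {
            FileConnection fc = (FileConnection) Connector.open(path, Connector.READ_WRITE);
            try {
                if (!fc.exists()) fc.create();
                OutputStream out = fc.openOutputStream();
                out.write(data);
                out.close();
            } finally { fc.close(); }
        } catch (Exception e) { /* ignore: cache writes are best-effort */ }
    }
}
```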
I wrote an Electron app for a device from AliExpress (a simple spectrum analyzer), and I feed the data to the app through some DLLs etc. That is not that important, but just so you know what it is.
What is important is that the data goes to the app as JSON and the app renders it using LightningChart JS.
My question is: how much CPU should a real-time app like this take? Mine sits at around 19% on a Ryzen 5 2600, which seems like a lot to me, and most of the CPU is taken by the event handler in the renderer process alone; even if I remove the rendering part on the chart component and just transfer the data from my forked process to the renderer through the main process, the usage is high. I am new to JavaScript, so I tried taking a 30-second performance snapshot in DevTools, but I am not sure whether it helped me or not.
[Summary of performance recording]
Is there a better solution for transferring large amounts of data from a forked process to the renderer without compromising security by turning nodeIntegration on? The data is a Buffer of 65k bytes, which I convert to JSON in the forked process and pass through all the IPC layers to the renderer. I need the forked process so the app is not blocked if there is a connection issue between the device and the app.
My employer is at a crossroads now:
We've got an offer to create an app for a large multinational company interested in monitoring a large fleet of vehicles simultaneously on a map. I'm talking about 5000 at a time. We tried to do that in our current web-based app and it chokes due to the quantity of objects, despite our efforts to optimize the code. My question is: can we gain a performance boost if we convert our web-based app into a desktop one via Node.js modules like node-webkit or atom-shell? Does a desktop app have better access to system resources? The web page freezes beyond help and even shows me a message asking to mercy-kill it because processing is taking too long, yet in Task Manager it only uses about 18% of the CPU and 2 GB of RAM out of 16 GB.
No, that won't help. Your code still runs in a WebKit browser.
The trick is to not show all 5000 objects at a time.
Showing 5000 pins on a map is not useful to the user anyway; group markers that are close together (https://developers.google.com/maps/articles/toomanymarkers?hl=en). As the user zooms in, you can then show a more and more detailed view.
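The grouping idea is independent of the mapping library (in a Google Maps web app you would typically reach for the MarkerClusterer utility); here is a minimal grid-bucketing sketch in Java, just to show the technique:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Buckets points into grid cells; draw one cluster marker per non-empty cell. */
public class GridCluster {
    /** points are {lat, lng} pairs; cellSizeDeg shrinks as the user zooms in. */
    public static Map<String, List<double[]>> cluster(List<double[]> points,
                                                      double cellSizeDeg) {
        Map<String, List<double[]>> cells = new HashMap<>();
        for (double[] p : points) {
            long row = (long) Math.floor(p[0] / cellSizeDeg);
            long col = (long) Math.floor(p[1] / cellSizeDeg);
            String key = row + ":" + col;
            cells.computeIfAbsent(key, k -> new ArrayList<>()).add(p);
        }
        return cells;
    }
}
```

With 5000 vehicles this reduces the number of rendered objects to at most the number of visible cells, and re-clustering on each zoom change gives the progressively more detailed view.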
Team,
I'm developing an iOS application.
My requirement is to query a specific news service (a REST API) at a regular time interval. I want to query the service twice a day and update my SQLite DB, even when the application is in the background. My UI will be updated with data fetched from the SQLite DB while the application is in the foreground.
My questions are:
1. Is it possible to run an NSTimer in the background continuously? If yes, is there a maximum time limit for the timer to run in the background (say, 10 or 60 minutes)?
2. Is it possible to send a request to download a file using NSURLConnection and save the file to the Documents directory while the application is in the background?
Your suggestions will be very helpful for my project design.
Thanks in advance.
What you are aiming for cannot be achieved on iOS:
Arbitrary apps cannot run in the background for an arbitrary amount of time.
You can try to mitigate some of this by using local notifications instead of NSTimer to schedule your updating. This will, however, only buy you a very limited amount of time to do your networking.
The question you should ask yourself at this point probably is:
If you are only updating twice a day, how bad can it be to initiate the download when your app becomes active?
Answering my own question, so that it will be helpful for others.
Ques 1: Is it possible to run an NSTimer in the background continuously?
Ans: NSTimer will not run while the application is in the background, so there is no maximum allowed timer value in the background. If the application enters the background while there is an ongoing task, -[UIApplication beginBackgroundTaskWithExpirationHandler:] can be used to finish that task. The maximum time the OS allows with this handler is 10 minutes.
Ques 2: Is it possible to send a request to download a file using NSURLConnection and save the file to the Documents directory when the application is in the background?
Ans: The information below is from the Apple documentation; detailed info can be found here.
In iOS, only specific app types are allowed to run in the background:
• Apps that play audible content to the user while in the background, such as a music player app
• Apps that keep users informed of their location at all times, such as a navigation app
• Apps that support Voice over Internet Protocol (VoIP)
• Newsstand apps that need to download and process new content
• Apps that receive regular updates from external accessories
Info about running a background process using a VoIP-type application can be found here.
I am experiencing a really curious problem with an HttpHandler, and I am hoping somebody here might be able to shed light on it. Many thanks in advance for reading this.
We have created a HttpHandler that sits in the pipeline of an IIS website that serves images, videos, and other assets. The HttpHandler is very lightweight. Its sole purpose is to check if the media asset requested exists on disk and, if it does not, to re-write the URL for the asset to a location where the asset does exist. The handler has been created in this way to allow us to migrate our media assets into a new folder structure. We also plan to use the handler (which I will refer to as the URLRewriter from here on) for SEO on image and video URLs.
As mentioned, the URLRewriter class is very lightweight. We have run memory profiling over it and determined that it only consumes about 12 B of memory while running. However, when we put the handler into the IIS pipeline we see some strange behaviour that ultimately results in a large amount of memory consumption and, invariably, in the W3WP worker process recycling. The behaviour we see is this:
When a request comes in for an image on http://www.ourimageserver.com/media/a/b/c/d/image1xxl.jpg (not an actual URL) we notice that W3WP.exe creates, and hangs on to, a handle for every single folder in the path to the image:
• /media
• /media/a
• /media/a/b
• /media/a/b/c
• /media/a/b/c/d
This is a big problem because we have hundreds of thousands of media assets stored in a very wide and very deep folder structure. The number of handles created by IIS/W3WP grows rapidly when the URLRewriter is deployed to our production environment, and the memory consumption of W3WP goes up correspondingly. Within less than an hour of running (during a relatively quiet period in terms of traffic), the number of handles held by W3WP exceeded 22,000 and the process died. We also noticed that kernel memory usage had increased on the servers where the URLRewriter was deployed.
Careful inspection of W3WP's behaviour using Process Explorer and Process Monitor (both with and without a VS debugger attached) has revealed that the handles are created before the URLRewriter is called. In fact, the handles are created before the BeginRequest event is fired. When the URLRewriter is removed from the pipeline, none of these handles are created. Now, a really curious thing is that the handles appear to be created as a result of a NotifyChangeDirectory operation carried out by W3WP. Why would W3WP request to be notified of changes to these directories? And how can we prevent it from doing so? Surely this is not default/normal behaviour?
If you have any ideas as to what might be causing this problem I would be most grateful for your input. The behaviour is the same on IIS6 and IIS7.
Even with a poor network connection?
Specifically, I've written code that launches a separate thread (from the UI) which attempts to upload a file via HTTP POST. I've found, however, that if the connection is bad, the processor gets stuck on OutputStream.close() or HttpConnection.getHeaderField() or any read/write that forces data over the network. This not only causes the thread to get stuck, but it steals the entire processor, so even the user interface becomes unresponsive.
I've tried lowering the priority of the thread, to no avail.
My theory is that there is no easy way of avoiding this behavior, which is why all the J2ME tutorials instruct developers to create a 'sending data over the network…' screen instead of just sending everything in a background thread. If someone can prove me wrong, that would be fantastic.
Thanks!
One important aspect is that you need a generic UI or screen that can be displayed when the background network call fails. It is pretty much a must in any mobile app, J2ME or otherwise.
As Honza said, it depends on the design; there are so many things that can be done, like pre-fetching data on app startup, pre-fetching data based on the screen that is loaded (i.e., the navigation path), or having a default data set built into the app, etc.
Another thing you can try is a built-in timer mechanism that retries the data download after a certain amount of time, aborting after, say, 5 tries or 1-2 minutes and then displaying a generic screen or error message.
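A minimal sketch of such a retry mechanism using MIDP's java.util.Timer (tryDownload() and showErrorScreen() are placeholders for your own networking and UI code):

```java
import java.util.Timer;
import java.util.TimerTask;

/** Retries a download a fixed number of times before giving up (illustrative). */
public class RetryingDownloader {
    private static final int MAX_TRIES = 5;
    private static final long RETRY_DELAY_MS = 15000; // 15 s between attempts

    private final Timer timer = new Timer();
    private int tries = 0;

    public void start() {
        timer.schedule(new TimerTask() {
            public void run() {
                tries++;
                if (tryDownload()) {
                    timer.cancel();        // success: stop retrying
                } else if (tries >= MAX_TRIES) {
                    timer.cancel();        // give up after 5 tries
                    showErrorScreen();
                }
            }
        }, 0, RETRY_DELAY_MS);
    }

    private boolean tryDownload() { /* your networking code */ return false; }
    private void showErrorScreen() { /* your generic error UI */ }
}
```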
Certain handsets in J2ME allow detection of airplane mode; if possible, you can detect that and promptly display an appropriate screen.
Also, one design that has worked for me is synchronizing the UI and networking threads so that they don't lock each other up (take this bit of advice with a heavy dose of salt, as I have had quite a few interesting bugs on some Samsung and Sanyo handsets because of this).
All in all, no single good answer for you, just different strategies.
It pretty much depends on how you write the code and where you run it. On CLDC the concept of threading is pretty limited, and if any thread is doing some long-lasting operation, other threads might be (and usually are) blocked by it as well. You should take that into account when designing your application.
You can divide your file data into chunks and then upload them with multiple retries on failure. This depends on your application strategy. If your priority is to upload bulk data without failure, you need to assemble the chunks on the server to rebuild your data. This has the overhead of making multiple connections, but the chance is high that your data will get uploaded. If you are not uploading files concurrently, this will work with ease.
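A sketch of that chunked-upload loop in J2ME style (the X-Chunk-* headers are an assumption of mine; your server must accept them and reassemble the chunks):

```java
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

/** Uploads data in fixed-size chunks, retrying each chunk on failure. */
public class ChunkedUploader {
    private static final int CHUNK_SIZE = 8192;
    private static final int MAX_RETRIES = 3;

    /** Returns true if every chunk was eventually acknowledged by the server. */
    public boolean upload(String url, byte[] data) {
        int total = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        for (int i = 0; i < total; i++) {
            int off = i * CHUNK_SIZE;
            int len = Math.min(CHUNK_SIZE, data.length - off);
            boolean sent = false;
            for (int attempt = 0; attempt < MAX_RETRIES && !sent; attempt++) {
                sent = postChunk(url, data, off, len, i, total);
            }
            if (!sent) return false; // give up; the caller can resume from chunk i
        }
        return true;
    }

    private boolean postChunk(String url, byte[] data, int off, int len,
                              int index, int total) {
        HttpConnection conn = null;
        try {
            conn = (HttpConnection) Connector.open(url);
            conn.setRequestMethod(HttpConnection.POST);
            // Hypothetical headers the server uses to reassemble the file.
            conn.setRequestProperty("X-Chunk-Index", Integer.toString(index));
            conn.setRequestProperty("X-Chunk-Total", Integer.toString(total));
            OutputStream out = conn.openOutputStream();
            out.write(data, off, len);
            out.close();
            return conn.getResponseCode() == HttpConnection.HTTP_OK;
        } catch (Exception e) {
            return false;
        } finally {
            try { if (conn != null) conn.close(); } catch (Exception e) {}
        }
    }
}
```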