I am working on a website that allows my clients to upload large amounts of high-resolution image files to my system. I have tried a couple of uploaders, but they don't seem to be as reliable as I was expecting, especially on Windows.
I have tried the following:
Uploadify 2 and 3
YUI Uploader (Yahoo Uploader)
The above uploaders all rely on Adobe Flash.
One of the most noticeable problems common to these uploaders is that the progress bar does not seem to work on Windows. It gets stuck at 0% or 100% until the upload actually finishes. The files seem to upload fine, but the progress bar is completely off. This isn't a problem on a Mac, though. I don't believe the problem is in my code, because even the uploaders' demo sites suffer from the same issue.
On top of that, I have heard clients complain about the uploaders "self-aborting" after, say, 33 files or so. They just don't seem to be that reliable.
My questions are:
Is there any way around this? At the very least, to improve the visual side (make the progress bar work under Windows)?
Are there any other uploaders/methods that allow my clients to deliver large amounts of files to my server reliably?
I'm running out of ideas and my clients are getting frustrated.
I am developing a cloud server that serves static files for personal use with Express and Node.js. While developing, I added some script files until I noticed that the web server suddenly started to load extremely slowly on reloads. I used the Chrome dev tools and noticed extreme loading times, like 6 seconds for a 265-byte script! (See picture.)
What I tried:
moving the app.use(express.static(...)) to the very top
clearing cache and application storage as well as restarting the computer several times
serving just a very simple HTML file with no external scripts or stylesheets, which of course reduced the loading time severely, but localhost (265 B) still took 2.03 seconds
I am really confused about this, because it happened out of nowhere, from one moment to the next, and I've never experienced this issue while developing.
Well, a very stupid mistake: as you can see in the screenshot, throttling is set to Slow 3G. It seems I changed it from No throttling by mistake. So check your dev tools settings!
Before you down-vote this question, please note that I have already searched Google and asked on Apple Developer Forums but got no solution.
I am making an app that uses Core Data with iCloud. Everything is set up fine, and the app is saving Core Data records to the persistent store in the ubiquity container and fetching them just fine.
My problem is that to test whether syncing is working between two devices (on the same iCloud ID), I depend on NSPersistentStoreDidImportUbiquitousContentChangesNotification being fired so that my app (in the foreground) can update the table view.
Now it takes a random amount of time for this to happen. Sometimes it takes a few seconds, and at other times even 45 minutes is not enough! I have checked my broadband speed several times and everything is fine there.
I have a simple NSLog statement in the notification handler that prints to the console when the notification is fired, and then proceeds to update the UI.
With this randomly large wait time before changes are imported, I am not able to test my app at all!
Does anyone know what can be done here?
I have already checked out related threads:
More iCloud Core Data synching woes
App not syncing Core Data changes with iCloud
PS: I also have 15 GB of free space in my iCloud account.
Unfortunately, testing Core Data + iCloud can be difficult, precisely because iCloud transfers data asynchronously and you have little influence over when that transfer will take place.
If you are working with small changes, it is usually just 10-20 seconds, sometimes faster. But larger changes may be delayed so the system can batch-upload them. It is also possible that if you constantly hit iCloud with new changes (which is common in testing), it will throttle back the transfers.
There isn't much you can really do about it. Try to keep your test data small where possible, and don't forget the Xcode debug menu items to force iCloud to sync up in the simulator.
This aspect of iCloud file sync is driving a lot of developers to use CloudKit, where at least you have a synchronous line of communication, removing some of the uncertainty. But that means either writing a lot of custom code for CloudKit or moving to a non-Apple sync solution.
I've recently started programming in C on my Raspberry Pi. I have downloaded libspotify (I have the correct version), and have managed it pretty well.
Just recently (~2 hours ago, around 18:00 on 30/12/2013), libspotify started to return SP_ERROR_OTHER_PERMANENT when checking for a search error in the search_complete_cb callback.
Before the error started occurring, I had built and started the program quite a few times (and thus logged in many times within a short period), and to test my search feature I used the same query every time. Then, without making any changes to my program, suddenly no results were returned after calling sp_search_create.
I am worried that the developer account has somehow been suspended, either for repeatedly logging in or because it seemed suspicious to the Spotify crew that I searched for the same query all the time. I don't really know what is causing the problem. No emails or warnings have been sent to the address connected to the account. The problem has lasted a while now, so it doesn't look like it's going away on its own.
Additional details
log_message tells me there is a ChannelError(4, 0, search). I have also seen ChannelError(5, 0, search), but only once.
I can still play music from the official Spotify desktop client for Windows.
I have an earlier version of the program (from before I rewrote it for a bit more structure) that works. The same API key and the same credentials are used in both programs, so that excludes a ban. The rewrite does log in, but no results are returned from searching. In the old version, I get a lot of results, all working. I have rebooted the Raspberry Pi several times, but that doesn't seem to help.
If you need any code or other information, I'll be happy to share. Just point out what's needed, because the code is split over a lot of files.
Well, if your old one is working, then the problem will be in your rewrite. Don't pay too much attention to the error messages; they're pretty much par for the course and can be triggered by something as benign as a cache miss. Unless you're actually getting an error callback somewhere, the log messages are meaningless.
As for your problem, I can't really make any guesses without seeing your code. One thing to check, and the most common cause of a permanent error, is whether you're actually logged in. The login process is asynchronous, and any functionality that requires you to be logged in (searching is one of them) will fail if it's called before login has completed.
When I save something in an administration page in Drupal, for example when I save on
http://drupal62/admin/build/modules
it takes a very long time. It says,
Executed 2980 queries in 51606.38 milliseconds. Queries taking longer than 5 ms and queries executed more than once, are highlighted. Page execution time was 52547.06 ms.
I know that this question is vague. I don't think it is a MySQL problem. Maybe you have seen it before.
I've had exactly the same problem as you, and although it's not exactly a MySQL problem, it's related to Drupal 6 database optimisation. Are you running on localhost? If so, could you give some more information about your OS and whether you're using XAMPP, WAMP, etc.?
I am currently running Drupal 6 on Windows 7 and WAMP Server with none of the lag you're experiencing. If it's the same issue, I've got it documented, so I can let you know the config changes.
You're right, it's too vague to be fully answered here without more information.
It will of course depend on your specific configuration: which modules you use, how many of them, your PHP memory limit, and possible errors in your database.
A common debugging method is simply to disable most, if not all, of the modules and re-enable them one by one until you can single out what goes wrong. You can also start by clearing all caches; again, it depends on the modules you're using.
Even with a poor network connection?
Specifically, I've written code which launches a separate thread (from the UI) that attempts to upload a file via HTTP POST. I've found, however, that if the connection is bad, the processor gets stuck on OutputStream.close() or HttpConnection.getHeaderField(), or any read/write that forces data over the network. This not only causes that thread to get stuck, it steals the entire processor, so even the user interface becomes unresponsive.
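The upload thread looks roughly like this (a simplified sketch; the URL, class name, and payload handling are placeholders rather than my actual code):

    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    // Simplified sketch of the background upload thread described above.
    // The URL and payload are placeholders; error handling is minimal.
    public class UploadThread extends Thread {
        private final byte[] fileData;

        public UploadThread(byte[] fileData) {
            this.fileData = fileData;
        }

        public void run() {
            HttpConnection conn = null;
            OutputStream out = null;
            try {
                conn = (HttpConnection) Connector.open("http://example.com/upload");
                conn.setRequestMethod(HttpConnection.POST);
                out = conn.openOutputStream();
                out.write(fileData);
                // On a poor connection, the calls that actually push data over
                // the network (close(), getResponseCode(), getHeaderField(), ...)
                // block here for a very long time.
                int status = conn.getResponseCode();
                System.out.println("Upload finished, HTTP status " + status);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try { if (out != null) out.close(); } catch (Exception ignored) {}
                try { if (conn != null) conn.close(); } catch (Exception ignored) {}
            }
        }
    }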
I've tried lowering the priority of the thread, to no avail.
My theory is that there is no easy way of avoiding this behavior, which is why all the J2ME tutorials instruct developers to create a ‘sending data over the network…’ screen instead of just sending everything in a background thread. If someone can prove me wrong, that would be fantastic.
Thanks!
One important aspect is that you need a generic UI or screen that can be displayed when the background network call fails. It is pretty much a must in any mobile app, J2ME or otherwise.
As Honza said, it depends on the design; there are many things that can be done, like pre-fetching data at app startup, pre-fetching data based on the screen that is loaded (i.e. the navigation path), or having a default data set built into the app, etc.
Another thing you can try is a built-in timer mechanism that retries the data download after a certain amount of time, aborting after, say, 5 tries or 1-2 minutes and displaying a generic screen or error message.
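For instance, a rough sketch of that kind of retry timer, using java.util.Timer (available in MIDP); the class names, interval, and limits are only illustrative:

    import java.util.Timer;
    import java.util.TimerTask;

    // Rough sketch: retry a download a limited number of times, then give up
    // and show a generic error screen. Names and limits are illustrative only.
    public class RetryingDownload extends TimerTask {
        private static final int MAX_TRIES = 5;
        private int tries = 0;
        private final Timer timer = new Timer();

        public void start() {
            // first attempt immediately, then retry every 15 seconds
            timer.schedule(this, 0, 15000);
        }

        public void run() {
            tries++;
            if (downloadData()) {          // your own networking call
                timer.cancel();            // success: stop retrying
            } else if (tries >= MAX_TRIES) {
                timer.cancel();            // give up after 5 tries
                showErrorScreen();         // generic "network unavailable" screen
            }
        }

        private boolean downloadData() { /* ... do the HTTP request ... */ return false; }
        private void showErrorScreen() { /* ... switch to an error screen ... */ }
    }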
Certain J2ME handsets allow detection of airplane mode; if possible, you can detect that and promptly display an appropriate screen.
Also, one design that has worked for me is synchronizing the UI and networking threads so that they don't lock each other up (take this bit of advice with a heavy dose of salt, as I have had quite a few interesting bugs on some Samsung and Sanyo handsets because of this).
All in all, no single good answer for you, just different strategies.
It pretty much depends on how you write the code and where you run it. On CLDC the concept of threading is pretty limited, and if any thread is doing some long-lasting operation, other threads might be (and usually are) blocked by it as well. You should take that into account when designing your application.
You can divide your file data into chunks and then upload each chunk, with multiple retries on failure. This depends on your application strategy. If your priority is to upload bulk data without failure, you need to reassemble the chunks on the server to rebuild your data. This has the overhead of making more connections, but the chance that your data gets uploaded is much higher. If you are not uploading files concurrently, this will work with ease.
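A rough sketch of that idea (the upload URL, chunk size, and header names here are only assumptions; the server side would reassemble the chunks in index order):

    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    // Illustrative sketch of chunked upload with retries. The endpoint,
    // chunk size, and header names are assumptions; the server has to
    // reassemble the chunks in order using the index/count headers.
    public class ChunkedUploader {
        private static final int CHUNK_SIZE = 32 * 1024;  // 32 KB per request
        private static final int MAX_RETRIES = 3;

        public boolean upload(byte[] data, String url) {
            int chunkCount = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
            for (int i = 0; i < chunkCount; i++) {
                int offset = i * CHUNK_SIZE;
                int length = Math.min(CHUNK_SIZE, data.length - offset);
                boolean sent = false;
                for (int attempt = 0; attempt < MAX_RETRIES && !sent; attempt++) {
                    sent = sendChunk(url, data, offset, length, i, chunkCount);
                }
                if (!sent) {
                    return false;  // give up: this chunk failed MAX_RETRIES times
                }
            }
            return true;
        }

        private boolean sendChunk(String url, byte[] data, int offset, int length,
                                  int index, int count) {
            HttpConnection conn = null;
            OutputStream out = null;
            try {
                conn = (HttpConnection) Connector.open(url);
                conn.setRequestMethod(HttpConnection.POST);
                conn.setRequestProperty("X-Chunk-Index", Integer.toString(index));
                conn.setRequestProperty("X-Chunk-Count", Integer.toString(count));
                out = conn.openOutputStream();
                out.write(data, offset, length);
                return conn.getResponseCode() == HttpConnection.HTTP_OK;
            } catch (Exception e) {
                return false;
            } finally {
                try { if (out != null) out.close(); } catch (Exception ignored) {}
                try { if (conn != null) conn.close(); } catch (Exception ignored) {}
            }
        }
    }

Failed chunks only cost you that one request, so a flaky connection no longer forces the whole file to start over.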