I have created an application that stores a lot of data (around 100 MB) in RMS, and the application processes this data.
Now, when I uninstall the application from a Nokia E72, sometimes it removes cleanly, but many times it can't be removed.
When that happens I have to go to Control Panel --> Installed Applications --> Uninstall Application.
Even from there I am not able to uninstall it.
Sometimes the phone hangs while uninstalling the application.
If I have less data in RMS, the app can be removed easily.
What could the problem be, and how can I solve it?
Basically, RMS is designed for small amounts of data. If you want to store a huge amount of data, you can store it on the memory card or push it to a server-side database over GPRS. See the existing discussion of the same issue in this forum.
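For the "push it to a server over GPRS" option, the usual Java ME pattern is an HTTP POST via HttpConnection. A minimal sketch follows; the URL, endpoint and payload are placeholders of mine, not anything from the original question:

    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.HttpConnection;

    // Minimal sketch: POST a chunk of data to a hypothetical server endpoint.
    public class UploadHelper {
        public static int uploadChunk(byte[] data) throws java.io.IOException {
            HttpConnection conn = null;
            OutputStream out = null;
            try {
                conn = (HttpConnection) Connector.open("http://example.com/upload"); // placeholder URL
                conn.setRequestMethod(HttpConnection.POST);
                conn.setRequestProperty("Content-Type", "application/octet-stream");
                out = conn.openOutputStream();
                out.write(data);               // data travels over GPRS to the server
                return conn.getResponseCode(); // 200 means the server accepted it
            } finally {
                if (out != null) out.close();
                if (conn != null) conn.close();
            }
        }
    }

Uploading in small chunks rather than 100 MB at once is advisable, since GPRS is slow and mobile connections drop often.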
A better option is the file system (JSR-75). But signing is required for accessing the file system, and that costs money.
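If the MIDlet is signed and JSR-75 is available, writing the bulk data to the memory card looks roughly like the sketch below. The root "E:/" and the file name are assumptions of mine; real root names should come from FileSystemRegistry.listRoots(), and on an unsigned MIDlet each access prompts the user:

    import java.io.OutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.file.FileConnection;

    // Minimal JSR-75 sketch: write a byte[] to a file on the memory card.
    public class FileStoreHelper {
        public static void saveToCard(byte[] data) throws java.io.IOException {
            // "E:" is a common memory-card root on Nokia S60 phones, but that is
            // an assumption here; enumerate FileSystemRegistry.listRoots() instead.
            FileConnection fc = (FileConnection) Connector.open(
                    "file:///E:/appdata.bin", Connector.READ_WRITE);
            try {
                if (!fc.exists()) {
                    fc.create();               // create the file on first run
                }
                OutputStream out = fc.openOutputStream();
                try {
                    out.write(data);           // bulk data goes to the card, not RMS
                } finally {
                    out.close();
                }
            } finally {
                fc.close();
            }
        }
    }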
As a temporary solution, I am uninstalling the application in the following way:
first I remove all RMS data from within the application itself;
then, once the application has no RMS data left, I can remove it via
Options --> Remove.
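The cleanup step in that workaround is just the standard RMS delete calls; a minimal sketch (the class name is mine) that can be wired to a "Reset data" command in the MIDlet:

    import javax.microedition.rms.RecordStore;
    import javax.microedition.rms.RecordStoreException;

    // Deletes every record store owned by this MIDlet suite, so the
    // uninstaller has no large RMS files left to clean up.
    public class RmsCleaner {
        public static void deleteAllRecordStores() {
            String[] names = RecordStore.listRecordStores(); // null if none exist
            if (names == null) {
                return;
            }
            for (int i = 0; i < names.length; i++) {
                try {
                    RecordStore.deleteRecordStore(names[i]);
                } catch (RecordStoreException e) {
                    // e.g. the store is still open; close all open stores first
                }
            }
        }
    }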
If this is covered in another thread I'm happy to read it, but I've been keeping Google hot without a result.
I have an IIS site that is getting hammered by some search-engine exploit creating files, basically trying to direct traffic to videos on YouTube by getting them indexed. I've disabled all FTP accounts to start, and then went through confirming everything is updated, including things like PHP. I'm having a bear of a time figuring out where these files are coming from. I have started resetting the passwords on the anonymous user credentials for each website, but I don't think this is a password exploit.
The info I have:
They all seem to be created by NETWORK SERVICE, but I can't tell which site they are coming in through.
I'm using up-to-date and active AV and active malware scanning.
All Windows updates are in place.
I have attempted to use Process Monitor, and the only thing I can see is that the file creation comes from NETWORK SERVICE.
The only thing I've thought of at this point is an exploit somewhere, but without any viruses, missing updates or malware I'm at a loss as to where to go with this.
I have tried to put auditing in place, but this server has a pretty immense load overall, so there are a number of completely safe temp files created and deleted constantly. This is compounded by the files being created at random times. I may keep a clean deck for a day or two and then get hit for a few hours. I did try auditing anyway and stopped at 15 GB of security events without a hit.
I am considering containerizing into credentialed app pools, but that would be a pretty significant time and resource overhead for what is essentially a guess.
Any ideas or suggestions? I'm nearing the point of visiting a fortune teller down the road to see if they have any idea. :D
My employer is at a crossroads now:
We've got an offer to create an app for a large multinational company interested in monitoring a large fleet of vehicles simultaneously on a map. I'm talking about 5000 at a time. We tried to do that in our current web-based app and it chokes due to the quantity of objects, despite our efforts to optimize the code. My question is: can we gain a performance boost if we convert our web-based app into a desktop one via Node.js modules like node-webkit or atom-shell? Does a desktop app have better access to system resources? The web page freezes beyond help and even asks me to mercy-kill it because processing is taking too long, but in Task Manager it only uses about 18% of the CPU and 2 GB of RAM out of 16 GB.
No, that won't help. Your code still runs in a WebKit browser.
The trick is not to show all 5000 objects at a time.
Showing 5000 pins on a map is not useful to the user anyway; group markers that are close together (https://developers.google.com/maps/articles/toomanymarkers?hl=en),
and as the user zooms in you can then show a more and more detailed view.
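The clustering idea itself is framework-agnostic: bucket positions into grid cells at the current zoom level and draw one marker per cell. On the web the MarkerClusterer library linked above does this for you; the sketch below (in Java, purely to illustrate the algorithm, with names of my own choosing) shows the core bucketing step:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Buckets vehicle positions into grid cells so the map only has to draw
    // one marker (with a count) per cell instead of 5000 individual pins.
    public class GridClusterer {

        /** positions are {latitude, longitude} pairs; cellSizeDeg shrinks as the user zooms in. */
        public static Map<String, List<double[]>> cluster(List<double[]> positions, double cellSizeDeg) {
            Map<String, List<double[]>> cells = new HashMap<String, List<double[]>>();
            for (int i = 0; i < positions.size(); i++) {
                double[] p = positions.get(i);
                long row = (long) Math.floor(p[0] / cellSizeDeg);
                long col = (long) Math.floor(p[1] / cellSizeDeg);
                String key = row + ":" + col;
                List<double[]> bucket = cells.get(key);
                if (bucket == null) {
                    bucket = new ArrayList<double[]>();
                    cells.put(key, bucket);
                }
                bucket.add(p);
            }
            return cells; // render one marker per cell, labelled with the bucket size
        }
    }

At a city-level zoom this collapses thousands of pins into a few dozen markers, which is well within what the page (or a desktop shell) can redraw smoothly.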
Before you down-vote this question, please note that I have already searched Google and asked on Apple Developer Forums but got no solution.
I am making an app that uses Core Data with iCloud. Everything is set up fine, and the app is saving Core Data records to the persistent store in the ubiquity container and fetching them just fine.
My problem is that, to test whether syncing works between two devices (on the same iCloud ID), I depend on NSPersistentStoreDidImportUbiquitousContentChangesNotification being fired so that my app (in the foreground) can update the table view.
Now, it takes a random amount of time for this to happen. Sometimes it takes a few seconds, and at times even 45 minutes is not enough! I have checked my broadband speed several times and everything is fine there.
I have a simple NSLog statement in the notification handler that prints to the console when the notification is fired, and then proceeds to update the UI.
With this randomly large wait time before changes are imported, I am not able to test my app at all!
Does anyone know what can be done here?
Already checked out related threads...
More iCloud Core Data synching woes
App not syncing Core Data changes with iCloud
PS: I also have 15 GB free space in my iCloud Account.
Unfortunately, testing with Core Data + iCloud can be difficult, precisely because iCloud is an asynchronous data transfer, and you have little influence over when that transfer will take place.
If you are working with small changes, it is usually just 10-20 seconds, sometimes faster. But larger changes may be held back so the system can batch-upload them. It is also possible that if you constantly hit iCloud with new changes (which is common in testing), it can throttle back the transfers.
There isn't much you can really do about it. Try to keep your test data small where possible, and don't forget the Xcode debug menu items to force iCloud to sync up in the simulator.
This aspect of iCloud file sync is driving a lot of developers to use CloudKit, where at least you have a synchronous line of communication, removing some of the uncertainty. But that means either setting up CloudKit, which takes a lot of custom code, or moving to a non-Apple sync solution.
How can I make sure that any iCloud Core Data transaction logs are automatically downloaded to a device, even if the app is not active?
Currently it seems new transaction logs are only downloaded when the app becomes active. Is there any way I can get a device to download them as soon as they become available?
You don't. On iOS, transaction data is only downloaded on demand. You create that demand in your app by adding the persistent store while using iCloud configuration options. If the app isn't running, there's no demand, and if there's no demand then there's no download. If you need this, literally the only option is to file a bug with Apple and hope that they change this in some future version of iOS. I wouldn't bet on them doing it, though: it would almost certainly be rejected due to some combination of the effect on battery, monthly data transfer quotas, or device storage space.
I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time (more than 15 minutes, sometimes failing due to timeouts) to create a subsite (even with the built-in site templates), while on the development server it only takes 1 to 2 minutes.
Both servers have the same configuration: 8 CPU cores and 8 GB of RAM. Each uses a separate database server with the same configuration. The content DB size is around 100 GB, and there are more than a hundred subsites.
What could be the reason the production server takes so much time? Is there any configuration or anything else I need to take care of?
Update:
So today I had the chance to check the environment with my clients, but site creation was fast this time, even though they said they hadn't changed any configuration on the server.
I also used the chance to examine the database. The disk fragmentation was quite high at 49%, so I suggested that they run a defrag. I also asked for the database file growth to be increased to 100 MB, up from the default 1 MB.
So my suspicion is that some processes were previously putting a heavy load on the server, and that's why it took so much time.
Update 2:
Yesterday my client reported that site creation was slow again, so I went to check it. When I examined the DB, I found that instead of the reported 100 GB, the content DB is only around 30 GB, so it's still far below the recommended limit.
One thing that got my attention is that the site collection recycle bin was holding almost 5 million items, and whenever I tried to browse it, it took a long time to open and the whole site collection became inaccessible.
Since the web application setting is at the default (30 days before cleanup, and 50% size for the second-stage recycle bin), is this normal, or is it also a potential problem?
Actually, there is another web application using the same database server with a 100 GB content DB, and it's always fast, while the one with 30 GB is slow. Both have the same setup, just different data.
What should I check next?
Yes, it's normal out of the box if you haven't turned the second-stage recycle bin off or set a site quota. If a site quota has not been set, then the growth of the second-stage recycle bin is not limited...
The second-stage recycle bin is by default limited to 50% of the site quota; in other words, if you have a site quota of 100 GB, then you would have a second-stage recycle bin of up to 50 GB. If a site quota has not been set, there are no growth limitations...
I second everything Nat has said and emphasize splitting the content database. There are instructions on how to do this, provided you have multiple site collections and not a single massive one.
Also check that your SharePoint databases are in good shape. Have you tried DBCC CHECKDB? Do you have SQL Server maintenance plans configured to reindex and reduce fragmentation? Read the resources on TechNet (particularly the database maintenance article) for details.
Finally, see if there is anything more you can do to isolate SQL Server as the problem. Are there any other applications with databases on the same SQL Server, and are they having problems? Does any performance monitoring on the SQL Server or SharePoint servers show bottlenecks?
Back up the production database, restore it to dev, and attach it to your dev SharePoint server.
Try to create a site. If it takes forever there too, you can assume there is a problem with the prod database; if it is fast, the problem is more likely in the production environment itself.
Despite that, at 100 GB you are running up against the limit for a content database and should be planning to put content into more than one; you will know why when you try to back up the database. Searching should also be starting to take a good long time by now.
So, long term, you are going to have to plan on splitting your websites out into different content databases.
--Responses--
Yeah, database size is really all about whether SQL Server can handle it. 100 GB is just the "any more than this and it starts to be a pain" rule of thumb. Full search crawls will also start to take a while.
Given that you do not have access to the production database and that creating a sub-site is primarily a database operation, there is nothing you can really do to figure out what the issue is.
You could try creating a subsite while doing a trace of the Dev database and look at the tables those commands reference to see if there is a smoking gun, but without production access you are really hampered.
Does the production system serve pages and documents at a reasonable speed?
See if you can start getting some stats from the database during site creation and find out what work is being done. SQL Server has some great tools for that now.