Should I wait before publishing App with Core Data and iCloud until it is more stable?

(as at 1st June 2012)
A very simple question: I would really appreciate input from others (I think we all could) who are either wondering the same thing or have decided to go ahead with it (if so, any shared user experiences would be great).
I have two main issues which seem common on Stack Overflow and on various blogs, so I know I'm not the only person experiencing them:
1. Occasionally my sync gets corrupted for one reason or another, and it either fails on certain data or fails to sync altogether.
2. Pre-populating data is essential for my app, and there's no reliable way of doing it (short of asking the user, which isn't dependable).
I've seen many good workarounds for both of these issues, but I can't help feeling that my users will still run into problems.
I'd be really interested to learn of any experiences, particularly from those who have decided to publish using iCloud and Core Data.
iCloud has been around a while now, but even on iOS 5.1.1 it doesn't seem stable enough; surely a more reliable version can't be too far away.

It is hard to answer this one without "making statements based on opinion", but it's worth giving partially backed-up opinions on, to hopefully save others some time and frustration.
My opinion is that it's just not ready. I unfortunately have an app in the wild at the moment built around Core Data + iCloud, and even though we danced around a number of iCloud bugs to finally get it stable enough to pass reasonably thorough testing, some users are still running into corrupted sync states. I've spent a healthy portion of my recent life trying to make it work, and am currently, reluctantly, reimplementing the app without Core Data. I really hope it stabilizes soon, as I genuinely like most of it.


iCloud + Core Data today (10th July 2015)

Some years ago Apple released iCloud sync for Core Data apps, and I released an app for iPad / iPhone / Mac with a shared model using the new Apple mechanism.
Things have not gone as expected. The sync mechanism sometimes doesn't work. For example, over the last few months the sync has rarely completed successfully across my 3 devices. Uploading objects generally works fine, but the download process for new or deleted objects normally crashes.
Apple released some time ago a way to force a device to re-download ALL the objects of the model (NSPersistentStoreRebuildFromUbiquitousContentOption), which normally works, but it's not an acceptable solution.
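For reference, the rebuild is requested through the options dictionary when the persistent store is added. A minimal sketch in current Swift (the content name and store URL are placeholders, and this NSPersistentStoreCoordinator + ubiquity API is the one Apple has since deprecated):

    import CoreData

    // Sketch: force a full re-download of the ubiquitous (iCloud) store.
    func addRebuiltUbiquitousStore(to coordinator: NSPersistentStoreCoordinator,
                                   at storeURL: URL) throws {
        let options: [AnyHashable: Any] = [
            // "MyAppStore" is a placeholder ubiquitous content name.
            NSPersistentStoreUbiquitousContentNameKey: "MyAppStore",
            // Discard the local store and rebuild it from the iCloud logs.
            NSPersistentStoreRebuildFromUbiquitousContentOption: true,
            NSMigratePersistentStoresAutomaticallyOption: true,
            NSInferMappingModelAutomaticallyOption: true
        ]
        try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
    }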
My questions: has anyone managed to get iCloud + Core Data working well? What about running iCloud + Core Data under iOS 9 + El Capitan; any experience?
I'm also evaluating migrating to the new CloudKit API, but I don't like the idea of managing object uploads myself when the device is offline. Does the new mechanism of push notifications indicating model changes work reliably?
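For context, the push mechanism I mean boils down to a CloudKit subscription; a rough sketch in current Swift, where the "Note" record type and the subscription ID are made-up placeholders:

    import CloudKit

    // Ask CloudKit to push a silent notification whenever a "Note"
    // record (hypothetical type) is created, updated or deleted.
    let subscription = CKQuerySubscription(
        recordType: "Note",
        predicate: NSPredicate(value: true),    // match every record
        subscriptionID: "note-changes",
        options: [.firesOnRecordCreation, .firesOnRecordUpdate, .firesOnRecordDeletion])

    let info = CKSubscription.NotificationInfo()
    info.shouldSendContentAvailable = true      // silent push, no user-facing alert
    subscription.notificationInfo = info

    CKContainer.default().privateCloudDatabase.save(subscription) { _, error in
        // If the device is offline here, retrying is still your job.
        if let error = error { print("Subscription failed: \(error)") }
    }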
Thanks
This is a question I've researched deeply over the last few months, I'm afraid without finding a definitive answer.
Here's what I can tell you from my experience:
If you, like me, don't want to start over with CloudKit (which works reliably, but requires you to manually handle much of the syncing work and the conversion of CKRecords to managed objects), give Ensembles.io a chance: it's working very well for me. The layer it puts between Core Data and iCloud really does work in my case, where Core Data + iCloud didn't.
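To give an idea of the manual conversion work I mean, here is a hypothetical sketch; the "Note" entity, its attributes and the recordName matching key are all made up for illustration:

    import CloudKit
    import CoreData

    // Sketch of the CKRecord -> NSManagedObject plumbing CloudKit leaves to you.
    func apply(_ record: CKRecord, in context: NSManagedObjectContext) throws {
        let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
        request.predicate = NSPredicate(format: "recordName == %@",
                                        record.recordID.recordName)
        // Update the existing object if this record was seen before,
        // otherwise insert a fresh one.
        let note = try context.fetch(request).first
            ?? NSEntityDescription.insertNewObject(forEntityName: "Note", into: context)
        note.setValue(record.recordID.recordName, forKey: "recordName")
        note.setValue(record["title"] as? String, forKey: "title")
        note.setValue(record["body"] as? String, forKey: "body")
        try context.save()
    }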
I'm using the 1.0 version, which is open source and supports iCloud as one of its possible backends (version 2.0, which is paid, supports even more). In a few days I got reliable sync with automatic de-duplication (you have to provide a uniqueIdentifier property for this to work, but I already had something in place).
The only issue I haven't figured out yet: sometimes (1 case out of 10, I'd say) an object doesn't sync right away when edited or deleted, but it always arrives on the other device when another object is added, edited or deleted. Nothing got lost, and everything was handled "automatically" for me when these delays occurred; still, I'd prefer that everything always synced right away.
Ensembles also has good logging for debugging, something you'll appreciate coming from vanilla Core Data + iCloud. My setup boils down to something like the sketch below.
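This is a sketch from memory of the Ensembles 1.x API, so treat the exact names as approximate; the container identifier, ensemble name, URLs and the uniqueIdentifier key are placeholders:

    import CoreData
    import Ensembles   // Ensembles 1.x, the open source version

    final class SyncManager: NSObject, CDEPersistentStoreEnsembleDelegate {
        let ensemble: CDEPersistentStoreEnsemble

        init(storeURL: URL, modelURL: URL) {
            let cloud = CDEICloudFileSystem(
                ubiquityContainerIdentifier: "iCloud.com.example.myapp")
            ensemble = CDEPersistentStoreEnsemble(
                ensembleIdentifier: "MainStore",
                persistentStoreURL: storeURL,
                managedObjectModelURL: modelURL,
                cloudFileSystem: cloud)
            super.init()
            ensemble.delegate = self
            // "Leeching" attaches the ensemble to the store; after that you
            // just merge whenever you save or the app becomes active.
            ensemble.leechPersistentStore { error in
                guard error == nil else { return }
                self.ensemble.merge { _ in /* refresh the UI here */ }
            }
        }

        // The de-duplication hook: give Ensembles a stable global identifier
        // per object (the uniqueIdentifier property mentioned above).
        func persistentStoreEnsemble(_ ensemble: CDEPersistentStoreEnsemble,
                                     globalIdentifiersForManagedObjects objects: [NSManagedObject]) -> [Any] {
            return objects.map { $0.value(forKey: "uniqueIdentifier")! }
        }
    }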
If you want to give it a try, you should take a look at these resources:
Ensembles.io company website
Ensembles on Github
A presentation/introduction from the creator of Ensembles
This post, and others, on a blog I found after I had implemented Ensembles in my Swift project; I could have used this information if I had found it earlier. They're most useful if you're writing Swift; if you're using Objective-C, the official Ensembles book is the way to go
If you are absolutely sure that you don't want layers / third-party code between Core Data and iCloud (I thought so myself, but I changed my mind when I realized I had lost three months of my life and gotten nothing in return), these are the Core Data + iCloud implementations I found online that looked most promising:
Sample Library Style Core Data Apps with iCloud Integration; it looks a bit complex to me, but I've read many good things about it
Tim Roadley's book and sample code
I haven't tried these last two solutions myself, because my last plan of attack was to try Ensembles and, if it didn't work for me, go with those approaches. Since Ensembles has been very good to me, I didn't need to try them, but again, they looked solid.
One last thought that bothers me: in the 2015 WWDC sessions there's no mention of Core Data + iCloud. That, to me, spells doom for the syncing solution we're choosing.
Hope this helps.

Has anyone used OpenAM/OpenDJ/OpenIDM suite without using ForgeRock's Support plans?

We are looking to implement an open source identity management system and have identified ForgeRock's stack as the best technology to implement.
The high cost of ForgeRock support and its per-user pricing model, however, is a potential roadblock. Our current user base is ~45K, but we expect to ramp up to 1M in the next 2 years.
So we're looking into scenarios where we proceed without FR Support. The lack of FR Maintenance releases would seem to put a damper on that, so we're curious if others have gone that route.
1. What has been your experience?
2. What kind of projects have you done this for? Size, etc.
3. In the absence of FR's maintenance releases, have you been able to easily create your own patches?
4. What are some potential pitfalls?
If there are blogs or other communities that deal with this topic, please point me in their general direction.
Thanks.
As a community user I used OpenAM (and OpenSSO before it) and OpenDJ for the past 6 years or so, but it was a very small deployment (10k users and only 1 server instance of each product).
1) In the early stages we did have reliability issues with OpenAM, which we mostly resolved by restarting the server instances. That clearly wasn't the preferred approach, but we didn't spend much development effort on actually trying to resolve the issues (and lacked the necessary knowledge for the investigation back then). After spending some real effort on learning the product, it turned out that most of our issues were either self-inflicted (badly written customizations or misconfigurations) or something that had recently been resolved in the OpenAM project and was relatively simple to backport to our version.
Of course, the experience itself largely depends on how often you want to make configuration changes in the deployment. Since we weren't changing a lot of things over the years, OpenAM just worked nicely for long intervals without requiring any kind of maintenance.
3) Since we didn't really run into new issues (the config barely changed), there weren't too many surprises after a while. The security patches were mostly simple to backport and didn't cause too much trouble. (It did help that after 1.5 years I became a ForgeRock employee and actively worked on OpenAM issues, though. :))
4) I think running without a subscription has its risks, but they mostly come down to the following:
are you planning to roll out new features based on OpenAM functionality during those 2 years (i.e. are you planning to make constant changes to the deployment)?
do you have good developers to work on these features? Working with OpenAM, for example, can quite easily require you to look at the source code to figure out how things work, though the quality of the documentation has improved a lot over the years. Regardless, backporting fixes is going to become more and more difficult over time, as the releases will differ a lot more (the development team is getting bigger for each of the projects), and even then you can't just assume that every issue you run into has, by definition, already been resolved in trunk. The need to resolve some issues on your own is a cost/risk you need to take into account.
what kind of SLA do you want for your deployment? Is your business going bankrupt after a 1-minute outage? Is it acceptable to just restart your service frequently (in case you run into some weird issue)?
do you really need support for all 3 products? My background, for example, would allow me to work easily without OpenAM support, but I would be in the deep end if something went wrong with my provisioning system.
And a generic remark:
User growth of 20x within two years sounds a bit unrealistic, or very hopeful at least. Maybe what you should look for is a 1-year subscription for a more reasonable target number, and then a renewal once you have a better understanding of customer growth in your business?

Is subsonic dead?

The company I work for uses a SubSonic DAL for pretty much everything we do.
I recently noticed that the domain was released.
So is subsonic dead?
The domain does seem dead. Their GitHub repository is still online, though, with the most recent activity from a month ago. The author, Rob, has a blog, but he doesn't seem to have addressed the lapsed domain there, though he does acknowledge that SubSonic hasn't seen much attention from him.
For everyone's info: the domain is back up (the renewal probably got lost in the flood of Rob's inbox).
I put a lot of time in Jan-Feb 2012 into fixing up all the outstanding bugs in SubSonic 2, including patching, fixing or removing a lot of the failing unit tests (there were quite a few), actioning outstanding pull requests, and generally giving it a good tune-up.
I had also added an MS Access provider and an auto enum generator to it.
But SS2 is very much 'mature' now, and I can't see much happening to it apart from bug fixes. That said, now that the help resources are back up, it's still a great and complete package. If there's not much active patching, that may be because it doesn't need any.
SS3 takes the project in a very different direction. I'm happy using it, but as with all these things, I mainly concentrate on what I can do WITH it rather than on the tool itself. There is a pretty big backlog of outstanding work on SS3, and I'm not up to speed with (or even that keen on) LINQ, so I'm not sure how well I'd go trying to fix anything related to it.
And Rob has moved on to Massive, his .NET 4 dynamic tool.
UPDATE: The actual official SubSonic site is at: subsonic.wekeroad.com
Subsonicproject.com is a secondary site, previously registered by contributor Eric Kemp, which has since fallen to domain squatters and is largely broken. I nearly fell over when Rob told me this.
YES!
Rest in peace, my old friend :(

Recommendations for automatically logging unexpected errors/stack traces to bug tracker

We have been looking at automatically logging all unexpected client errors to our bug tracker. For reference, our application is written in Java/GWT/Guice/Hibernate/Jetty, and our bug tracker is the hosted version of FogBugz, which can create bugs programmatically or via email.
The biggest problem I see with doing this is that stack traces thrown in a loop can overload the bug tracker by creating thousands of cases. Does anybody have a suggested way of handling automatic bug creation like this?
If you're using FogBugz BugzScout (also see the up-to-date docs here), it has the ability to simply increase the occurrence count of the same problem, instead of creating a new case for the same exception again and again.
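The mechanics, as I understand them: you POST to your FogBugz server's scoutSubmit endpoint, and cases are grouped by the Description field, so keeping that string stable collapses repeats into one case with a bumped occurrence count. A rough sketch (in Swift just to show the shape, your Java version would look similar; the host, account, project and area values are placeholders, and the field names are from my reading of the BugzScout docs, so double-check them):

    import Foundation

    // Keep Description stable across repeats of the same crash: FogBugz
    // uses it as the grouping key, so repeats bump an occurrence counter
    // instead of filing thousands of new cases.
    func reportToBugzScout(error: Error, stackTrace: String) {
        // Stable fingerprint: type + top stack frame, not the full trace.
        let fingerprint = "\(type(of: error)) at \(stackTrace.split(separator: "\n").first ?? "?")"
        var request = URLRequest(url: URL(string: "https://example.fogbugz.com/scoutSubmit.asp")!)
        request.httpMethod = "POST"
        let fields = [
            "ScoutUserName": "BugScoutUser",   // placeholder FogBugz account
            "ScoutProject": "MyApp",           // placeholder project
            "ScoutArea": "Crashes",            // placeholder area
            "Description": fingerprint,        // the de-duplication key
            "Extra": stackTrace                // full trace goes in the case body
        ]
        request.httpBody = fields
            .map { "\($0.key)=\($0.value.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? "")" }
            .joined(separator: "&")
            .data(using: .utf8)
        URLSession.shared.dataTask(with: request).resume()
    }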
Are you sure that you want to do that?
It obviously depends on your application, but even if you carefully handle the cases that could generate lots of bug reports (because of the loops), this approach could still end up flooding the bug tracker.
How about this?
Code your app so that every time an exception is thrown, you gather info about the client (IP, login, app version, etc.) and send that, plus the stack trace (or the whole exception object's .ToString()), by email to yourself (or the dev team). Something like the sketch below.
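A sketch of the gathering step (Swift here just to illustrate the shape; ClientInfo and the sendEmail hook are hypothetical stand-ins for whatever your app actually knows and however you actually send mail):

    import Foundation

    // Hypothetical client context collected at the moment of the error.
    struct ClientInfo {
        let ip: String
        let login: String
        let appVersion: String
    }

    func report(_ error: Error, client: ClientInfo, sendEmail: (String, String) -> Void) {
        let body = """
        App version: \(client.appVersion)
        User: \(client.login) (\(client.ip))
        Error: \(error)
        Stack:
        \(Thread.callStackSymbols.joined(separator: "\n"))
        """
        // A predictable subject line lets the mail filter route these
        // reports straight into their own folder.
        sendEmail("[MyApp error] \(type(of: error))", body)
    }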
Then, in your email client, have a filter that sorts the incoming mail and drops it into a dedicated folder for you to look at later.
You may end up with tons of emails about maybe one or more issues, but that doesn't really matter, because you enter the issues into the bug tracker yourself and can easily delete that pile of mail.
That's what I did for my app (which is a client-server desktop app). It plays out well in this case.
Hope that helped!
JIRA supports automated issue creation using so-called services: documentation.
Does anybody have a suggested way to handle automatic bug creation...?
Well, I have. Don't do that.
What are you going to gain from that? Saved tester effort? In my experience, whatever effort was saved that way was lost several times over in overhead transferred to the developers, who had to analyze and maintain the automatically created tickets anyway. Not to mention the overall frustration this caused.
The least counterproductive approach I can imagine would be something like establishing a dedicated bug category, or a separate issue tracker instance, such that only testers can see and use it.
In that "sandbox", auto-created bugs could be assigned to testers who would later pass analyzed and aggregated bug reports to developers.
And even in that case, I'd recommend paying close attention to what the users (the testers) say about the system. If they start complaining about it, consider switching to a manual way of doing things instead.

Upgrading and Security Implementation (Access 2000-2003 and up)

I've been working on a few small-scale Access projects that have turned large-scale rather quickly. The original designer implemented next to zero security: everyone can just walk in with a simple Shift+Enter, a security hole big enough for nuclear submarines to dive through, and it has always driven me bonkers.
With that said, users are currently on Office 2000, migrating slowly to 2003. I have taken this opportunity to convince the higher-ups to implement said security using the built-in Access tools.
Next I get to go through hundreds of functions and forms, adding Option Explicit to define all the data types, restricting the compiled output to an MDE, and cleaning up memory that was never released for some reason. There are some sensitive connection strings sitting in the code in plain sight that need to be compiled away to reduce the risk factor.
My questions involve both the upgrade to 2003+ and the built-in security. And yes, this is what I'm stuck with using, unless I really want to redo everything in Visual FoxPro; but building a Porsche out of rocks is not my idea of a good time.
1. When moving to Office 2007, are there any major holes that I should be working around ahead of time? Within the next year and a half the whole business is supposedly upgrading to it, and I've only heard horror stories about changed/obsolete functions.
2. Are there any major bugs that can/will happen because of the use of the workgroup file and permissions? Any tricks I should know ahead of time in case something crazy happens and locks everyone out?
3. In the sandbox, I have not implemented the Encryption feature. Pros/cons, risks?
4. Any other good tips? I realize the broadness of this question, and I have a few good books on hand here (Professional Access 2000 Programming, Access Developers 2002, Developing Solutions with Office 2000 Components and VBA), but obviously these predate current Access and Jet technology. If anything, a good book recommendation would be a booster for me; anything to give me a head start. Right now I really need to devour this security issue; it's beyond out of hand, considering the sensitivity of the information at hand.
Thanks for reading my dreaded wall of text o.O
User-level security does not exist for Access 2007 files (http://office.microsoft.com/en-us/access/HA101662271033.aspx). If the data is very sensitive, you may wish to consider a different back-end.
If the data is truly that sensitive, it shouldn't be stored in an Access database file: anyone can copy the entire MDB/ACCDB data file and take it home to analyze at their leisure. Instead, the data should be upsized to a server database engine such as SQL Server.
Keep the current Access queries, forms and reports, but get the data into a format that isn't so easy to steal.
Then think about limiting users' views, logging the queries they run, and so on.
I would wait until A2010 is out before making any determination about upgrades beyond A2003. A2003 is fine for now, it seems to me. I certainly wouldn't want to wade into targeting development at A2007 with A2010 coming out so soon and bringing so many really great new features (table-level data macros, and really useful additions to SharePoint integration that make a lot of huge things possible, to name just two). My plan is to skip A2007 with clients (though I have it installed and am playing with it now, so that I'll be better prepared when 2010 comes out).
One thing that doesn't often get mentioned about A2007 is that the Office FileSearch object was removed in Office 2007. If your app uses it, you can use my File Search class module as a replacement. I've had it in production use since June (when I created it), but only recently released it more widely, and I'm currently troubleshooting some issues that seem to be related to file names with odd characters.
