Is Core Data what I'm looking for?

I'm very new to iOS development in general; all of my experience is in PHP and MySQL. After doing a bit of research on the best way to save the player's game in my app, I'm thinking Core Data is the way to go. Alternatively, the game could save to my own server.
Think in terms of playing an RPG: as soon as you get a new weapon, the game needs to remember that, so that when you log back in you still have that weapon.
Am I on the right track in thinking Core Data is the way to go for this? Also, ideas on implementation would be great! (Does it save to the database on every button click right after a new item is found? Does it just run a save routine every 30 seconds?)
Thanks guys! Sorry for such a basic noob question; it's hard to find documentation on this.

I think NSUserDefaults would probably be the easiest way to go.
// Store the list of weapons under a known key.
NSArray *weapons = @[@"Machine Gun", @"Hand Gun"];
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
[defaults setObject:weapons forKey:@"weapons"];
And retrieve the data like this:
// Read the array back (returns nil if nothing has been saved yet).
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
_weapons = [defaults objectForKey:@"weapons"];
The only thing to bear in mind is that the objects can only be NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. For NSArray and NSDictionary objects, their contents must be property-list objects.

Actually, if you are relatively new to iOS and the data storage needs aren't very intense, I might steer you away from Core Data. (About 80% of the apps I've shipped use Core Data for object graph persistence, but even Apple's documentation on Core Data mentions that it's not an entry-level technology on the platform.)
Without knowing the complexity and intensity of your data persistence needs, it's hard to be definitive, but just serializing/deserializing your object graph to and from disk might make the most sense. Your model objects will need to adopt the NSCoding protocol.
I'm not sure you would need to save your object graph to disk with every newly found item, or every n seconds. You'll want to archive and save at key points in the application life cycle, e.g. when the application goes to the background. But if you're designing a game, perhaps you'll want discrete save points, or to let the user designate a save point. That's something that depends less on how you persist the object graph than on how you choose to design it in the first place.
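To make the save/load pattern concrete, here is a minimal sketch in Python, purely to illustrate the flow; on iOS you'd archive your NSCoding-conformant objects with NSKeyedArchiver instead, and the file name and state fields here are made up:

import json
import os

SAVE_PATH = "savegame.json"  # hypothetical save location

def save_game(state):
    # Serialize the whole game state to disk at a discrete save point.
    with open(SAVE_PATH, "w") as f:
        json.dump(state, f)

def load_game():
    # Restore the game state, or start a new game if no save exists.
    if not os.path.exists(SAVE_PATH):
        return {"weapons": [], "level": 1}
    with open(SAVE_PATH) as f:
        return json.load(f)

# Save at a key life-cycle point (e.g. when the app is backgrounded),
# not on every item pickup or on a fixed timer.
state = load_game()
state["weapons"].append("Machine Gun")
save_game(state)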

Related

How do I go about building this application?

So I have this application laid out, but I am unsure about some aspects of how to go about it.
It's a website that supplies a raw ketogenic diet for dogs with cancer. There are other aspects of the website that I have figured out, like where to hold state and such.
Where I'm a little confused about how to proceed is the calculations for the ketogenic page's shopping cart. The user can choose one fat source from the sources provided, one protein source, and one green-vegetable source. I want to make this as balanced and complete as possible for the user's dog, and obviously one protein, one veg, and one fat source alone is not balanced and complete. I also need to factor in the number of calories needed per weight, for which I will have an input where the user can enter their dog's weight. I have done some of the math to make it complete and balanced on paper, factoring in amino acids, vitamins and minerals, and the omega 6:3 ratio.
What I'm confused about is where I am going to hold all this data. It's a lot of data, and it's based on many factors such as weight, activity, and the keto ratio (1:1 or 2:1, depending on what the user selects).
I obviously need a backend and need to build an API, but how would I return the complete and balanced diet to the user when so many other factors play in? Where would I store this other data? In objects? Variables? And then put it on the backend? I would greatly appreciate any help. Thanks in advance.
How are you? First of all, congrats on the idea; I think it's great.
For starters, I think it's important for you to lay down what technologies you'll be using.
1- Choose the technologies you are going to use. Will you use the MERN stack: React for the front end, MongoDB for the database, and Node.js for the back end with Express.js as a framework?
2- Will you be the one providing all the data for the dogs' diets? If not, find some website with an API for that data.
3- I am not any kind of expert on JS or React.js; I'm just following the logic here. But does that data change, or is it always the same? I would advise you to use objects, so you can "play" with them and pass them around between classes (I would advise you to create a React class where you store all of those objects; make sure you export that class).
4- Just lay everything out and START BUILDING!!! It sounds hella overwhelming to start such a big project, but once you get going some i's will be dotted, and you will get a better understanding of what you are actually going to build.
5- A few tips: if you use an API you won't need to store the data; otherwise, store it in your own database (or a JSON file, for example; there are great YouTube tutorials on how to do so).
Keeping this in mind, you asked where you'll be storing all that data. If you have an API, you won't store anything; if you don't, JSON is your best shot. It's really easy and intuitive to use and easy to read inside a React component (see the sketch below).
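For instance, here is a rough sketch (in Python, just to keep it language-neutral before you translate it into your JSON/objects) of how the nutrition data and the weight-based calorie math could be structured. The food values and activity factors are placeholders, not real nutritional data:

# Placeholder nutrition data, keyed the same way the user picks:
# one fat, one protein, one green vegetable.
FOODS = {
    "fats":     {"salmon oil":  {"kcal_per_g": 9.0}},
    "proteins": {"ground beef": {"kcal_per_g": 2.5}},
    "greens":   {"broccoli":    {"kcal_per_g": 0.3}},
}

def daily_calories(weight_kg, activity_factor=1.6):
    # Resting energy requirement (RER = 70 * kg^0.75) scaled by an
    # activity factor; a commonly cited veterinary formula, but the
    # exact factors should be confirmed with a vet.
    rer = 70 * (weight_kg ** 0.75)
    return rer * activity_factor

def grams_needed(category, food, kcal_share):
    # How many grams of one chosen food cover its share of the calories.
    return kcal_share / FOODS[category][food]["kcal_per_g"]

kcal = daily_calories(20)  # e.g. a 20 kg dog
print(round(grams_needed("proteins", "ground beef", kcal * 0.4)))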
I hope this gave you some extra clarification on what you'll need to do in order to start.
PS: This answer is purely based on my limited React (and web dev) experience. I am no expert; let me know if there's anything else I can help you with.

Automating Raw Export Data Cleansing for Client Onboarding - Format is Always Different

So, a bit of a general question. I work as a data analyst for a startup. My primary task is taking a client's existing customer data and cleansing/normalizing it to fit into our platform as part of our onboarding process. A member of our team exports the data from the system the client is transitioning from or, if they kept track of it in house, we receive the Excel log they used to track it. It is always in a different format and requires extensive cleansing (averaging 1 min/record). We take what is usually one large table (.xlsx format) and, after cleansing, split it into four .csv files, which we load as four tables on our platform.
I feel I have optimized the process quite well in terms of the process steps and the cleansing with Excel functions (IF, CONCAT, Text to Columns, etc.). I have beginner-to-intermediate skills in VBA and SQL and have just scratched the surface of R; what is frustrating is that I know there is potential to automate this process, but I just don't know where to start. If anyone has experience with something like this, any code, a link to an article or another thread, or just some general direction would be much appreciated. Please ask for clarification where you feel it is needed. Thanks.
This will be really hard to do in Excel. If you have the time, you can try out Optimus, a data-cleansing library written in Python and PySpark (you don't need to know Spark). Here is the webpage: https://hioptimus.com.
You can create data pipelines with it, and I recommend that you do: try to generalize your processes, and ask the client for a more structured way of passing the data. A generic pipeline might look something like the sketch below.
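For illustration, here is a minimal pipeline in plain pandas rather than Optimus itself; the file name, column names, and table splits are hypothetical, since the real mappings would differ per client:

import pandas as pd

# Hypothetical mapping from a client's column names to our platform's.
COLUMN_MAP = {"Cust Name": "customer_name", "Ph#": "phone", "Addr": "address"}

def clean(path):
    df = pd.read_excel(path)            # the one large .xlsx export
    df = df.rename(columns=COLUMN_MAP)
    df["customer_name"] = df["customer_name"].str.strip().str.title()
    return df.drop_duplicates()

def split_and_save(df):
    # Split the cleansed table into the four tables the platform loads.
    tables = {
        "customers": ["customer_name", "phone"],
        "addresses": ["customer_name", "address"],
        # ...two more table definitions...
    }
    for name, cols in tables.items():
        df[cols].to_csv(name + ".csv", index=False)

split_and_save(clean("client_export.xlsx"))

Once each client quirk is captured as a rename rule or a cleaning function, a new onboarding becomes a matter of updating the mapping rather than re-doing the cleansing by hand.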
The good thing is that you don't need Big Data to run Optimus, but if you have it some day, the same code will work.
Check out the documentation for more:
http://optimus-ironmussa.readthedocs.io/en/latest/
Let me know if you have any questions!

How to make Sequence Diagram for Update Inventory

I'm preparing the sequence diagram for a project. I made the following sequence diagram for a retailer updating his inventory.
It's confusing to me because this is the first time I've used this technique on a real project. I have used the database as an object here, and I don't know whether that's right or wrong. Another thing I need to clarify: by "updating" I mean both editing an existing item and adding a new item to the inventory. Is it wrong to do it that way, or should the two be drawn separately?
The following image is part of the updating process; would anyone take a look and correct me if I made any mistakes? (UpdateUI = user interface.) Thanks in advance.
It does not look right. There are a couple of issues:
Your database will likely never issue any messages.
Actions inside a DB are usually not exposed; from the outside you normally only invoke CRUD operations on it.
You mix synchronous and asynchronous messages (likely unintentionally). Filled arrowheads are synchronous; open ones are asynchronous.
Main Page is likely the V in MVC and UpdateUI the C. So the controller will act on a click from the user and interact with the DB.
So, just from my gut, here is a more reasonable sketch:

OLAP cube powering Excel Pivot. What's a better solution?

I'm looking to build a dynamic data environment for non-technical marketers.
I want to provide large sets of data in Excel pivot-table form so even marketers without analytics/technical backgrounds can access relevant performance information. I'm trying to avoid non-Excel front ends, since I don't want users to have to constantly export data whenever they need to manipulate it.
My first thought was to just throw together an OLAP cube populated with pre-aggregated data, but I got pushback from the IT team as OLAP is "obsolete." I don't disagree with them - there are definitely faster data processing architectures out there.
So my question is this: are there any other ways to structure the data so that marketers can access it easily but still manipulate it to some degree in Excel? I'm working with probably 50-100m rows of data and need the ability to scale dimensionality.
These are just my thoughts.
Really the question could be thrown back at your IT team. Your first thought was to throw together an OLAP cube. IT didn't like this. If they're so achingly hip that they consider OLAP "obsolete", what do they suggest as a better, more up-to-date alternative?
Or, to put it a different way - what is the substance of their objection to an OLAP solution? (I'm assuming there is one beyond "MS gave us an awesome presentation of PowerPivot/Azure tabular, with really great free snacks and coffee").
Your requirements are pretty clear:
Easy access for non-technical people
Structured data so that they don't have to interpret the raw data
Access through Excel
Scalability
I'll be paying close attention to any other answers to your question, because I'm always interested in finding out that I don't know something; but personally I haven't come across a better solution to these requirements than OLAP.
What makes me suspicious of the "post-OLAP" sentiment is related to point (2) in the list above. Non-technical users tend to think of the cube data they consume as being somehow effortlessly produced, by some kind of magic. That in itself is an indicator of success, demonstrating just how easy it is for users to get what they want from a well-designed OLAP system.
But this effortlessness is an illusion: to structure the raw data into this form takes design effort, and the resulting structure incorporates design decisions and assertions: that is how it can be easy to use, because the hard stuff has been encapsulated in the cube design.
I have a definite Han Solo-like bad feeling about "post-OLAP": that it amounts to pandering to this illusion of effortless transformation of data into a usable form, and propagates further illusions.
Under OLAP, users get their wonderful magic usable data structure, and the hard work is done out of sight by developers like you or me. Perhaps we get something wrong so that they can't see data exactly as they'd like to - but at least the users can then talk to us and ask for what they do want.
My impression of the "post-OLAP" sales pitch is that it tries to dispense with the design work. We don't need those pesky expensive developers, we don't need to make specific design decisions (which necessarily enable some functionality while precluding some other functionality), we don't need cube-processing time-lags. We can somehow deliver this:
Input any data you like. Don't worry if it's completely unstructured or full of dirt!
Any scale
Immediate access to analytics without ETL/processing delays
Somehow, the output is usable, structured data. Structured by... no one in particular. The user can structure it as they like, but somehow this will be easy.
Call me cynical, but this sounds like magical thinking to me.

Why does PhotoCamera need a VideoBrush?

When using the PhotoCamera, one must create an instance of the PhotoCamera as well as a VideoBrush, and then assign that PhotoCamera instance as the source of the VideoBrush instance before the camera can be initialized. For example:
// Create the camera and subscribe to its Initialized event.
PhotoCamera camera = new PhotoCamera();
camera.Initialized += CameraInitialized;

// The camera won't finish initializing until it is set as the
// source of a VideoBrush.
VideoBrush brush = new VideoBrush();
brush.SetSource(camera);
The VideoBrush is clearly useful in scenarios where the developer wishes to create a viewfinder for the camera video stream by associating the VideoBrush instance with the brush of a visual object like a Canvas.Background or Rectangle.Fill. However, when that is not the case, requiring the developer to still go through the motions of creating a VideoBrush seems somewhat random at first glance.
So, two questions. First, why does the PhotoCamera always need to be associated with a VideoBrush?
Second, what is the performance impact associated with attaching the PhotoCamera to a VideoBrush? Specifically, how are calls to GetPreviewBuffer(Argb|Y|YCbCr) affected by the associated VideoBrush?
Thanks!
PS. Hopefully this doesn't come off as pointed in any way; I'd just like to have a better understanding of why this requirement exists, and how it impacts performance.
PPS. The improvements in the WP7 SDK for Mango are amazing; I'm looking forward to seeing what people come up with now that access to the sensors has been opened up.
In Mango you simply have two options. Either do as you suggested above and use a frame in your app (a video frame) to take pictures, essentially grabbing a single frame from the video brush.
Or you can use the old NoDo method of the PhotoChooser task, which will launch the framework's camera app separately and return an image.
Obviously there are pros and cons to both methods, so just choose the one that suits you.
