iOS update core data file from a remote source - core-data

I have an iOS 7 app with a static (not editable by the user) Core Data file. Once in a while this file has to be updated, like updating a recipe in a cookbook or a phone number in an address book.
The app itself (the binary file) remains unchanged.
I have never done something like this. I am looking for a starting point (a guide or google search terms) for learning rather than concrete solutions.
Which frameworks should I use for this?
Do I need my own webserver?
How can I ensure that the correct datafile is received by the app?
Similarly, how can I ensure that only my app is requesting the data?
How should I handle security? How do I know it's me sending the update?
Do I have to send a notification to the app telling it to download data from somewhere? Should I hardcode a server address in the app from which to download the data?

I'm using a simple web site for this. I have a text file with a version number (or date) on the web site. My code simply downloads that file and compares the version or date with the version field in my database. If it is newer, it activates the download button.
The download button downloads a zip file (it doesn't have to be zip), extracts it and replaces the existing Core Data files with the new ones. It has been working without any problems so far (since 2012).
Although the recommended way is to delete the data in the Core Data store record by record and update just the data from a remote source, I'm using my approach of simply replacing the Core Data store's sqlite file and related files. Here's the code snippet:
// Removing the existing Core Data files
NSString *finalPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/NTTimeTable.sqlite"];
[[NSFileManager defaultManager] removeItemAtPath:finalPath error:nil];

finalPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/NTTimeTable.momd/NTTimeTable.mom"];
[[NSFileManager defaultManager] removeItemAtPath:finalPath error:nil];

finalPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/NTTimeTable.momd/NTTimeTable.omo"];
[[NSFileManager defaultManager] removeItemAtPath:finalPath error:nil];

finalPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/NTTimeTable.momd/VersionInfo.plist"];
[[NSFileManager defaultManager] removeItemAtPath:finalPath error:nil];

// Unzipping the downloaded archive into Documents, then removing the temp file
finalPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"];
[SSZipArchive unzipFileAtPath:tempFilePath toDestination:finalPath];
[[NSFileManager defaultManager] removeItemAtPath:tempFilePath error:nil];
Let me know if you need more info. Please note that I've moved my Core Data files to the Documents directory because I didn't have permission to replace them in their original location.
Q/A
As I showed above, I'm just using a web server and leaving an update file on my website.
I think it's better to have your own web server. You can get a cheap one on the internet. You could use free ones, but you have to think about speed, bandwidth, availability, etc.
You could use a checksum to verify that the file is intact. To ensure the file is correct, you could do checks like reading the file size, contents, etc. In my case I'm not doing anything, as I've hardcoded the file URL. It depends on how sensitive your data is; mine is pretty much public.
In order to ensure that only your app requests the data, you could use some simple authentication. Maybe having the app make a POST request with a username and/or password/hash, and having the web server check them, will do. I would use a small PHP file on the server, as most of the cheap web hosting providers support PHP, and hard-code the username and password/hash in that PHP file. If you are really serious, you might want to look at authentication mechanisms such as asymmetric cryptography.
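The answer above suggests a small PHP check; just to illustrate the shape of that shared-secret scheme, here is a rough Node/TypeScript sketch of the same idea (the endpoint name, secret hash, and file path are made up for the example):

// update-endpoint.ts - hypothetical sketch: only hand out the data file to clients that know a shared secret
import express from "express";
import crypto from "crypto";

const app = express();
app.use(express.json());

// Hard-coded SHA-256 hash of the shared secret, mirroring "hard-code the hash in the server file"
const EXPECTED_HASH = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"; // sha256("test")

app.post("/update", (req, res) => {
  const supplied = String(req.body.secret ?? "");
  const suppliedHash = crypto.createHash("sha256").update(supplied).digest("hex");
  if (suppliedHash !== EXPECTED_HASH) {
    return res.status(401).send("Not authorized");
  }
  // Client knows the secret (in the weak, shared-secret sense discussed above); send the archive
  res.download("/var/data/NTTimeTable.zip"); // assumed location of the update archive
});

app.listen(8080);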
This is the hard part. Even if you hard-code the download URL in the app, people can still set up a fake web server to send the files/data by changing DNS or doing all sorts of things. At the end of the day, you will only be making things harder to break into, not 100% secure.

Related

How to download a file from a website by using a Logic App?

I'm trying to download an Excel file from a web site (specifically DataCamp) in order to use its data in an automated process, but before getting the file it is necessary to sign in on the page. I was thinking that this would be possible with a JSON query on the HTTP action, but to be honest I don't know where to start (I'm new to Azure).
The process that I need to emulate to get the file would be as follows (I know this could be possible with an API or RPA, but I don't have either available for now):
Could you give me some advice (how to get the desired result, or at least where to do research)? Is this even possible?
If you don't have other options (e.g. your source being on an SFTP server, etc.), then using an HTTP action should work; pass the body to your next action (e.g. you might want to persist it in a blob if the content is binary).
If your content is "readable" (e.g. JSON or CSV) and you want to load it for processing, you need to ensure, for large files, that you read it in chunks so that it is loaded completely before processing.
Detailed explanation at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#download-content-in-chunks
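For reference, a rough sketch of how the chunked-transfer setting appears in the workflow's code view (the action name and URI are illustrative; check the linked doc for the authoritative syntax):

"HTTP_Get_file": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://example.com/file.xlsx"
  },
  "runtimeConfiguration": {
    "contentTransfer": {
      "transferMode": "Chunked"
    }
  }
}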

How to display PDF files which were indexed by Solr in an Angular app with a Node Express API

I want my Angular app to show a list of PDF files which were previously indexed by a Solr server. The PDF file should then open in my app (pdf-viewer is installed and working with external PDF files like this one). Since Angular can't access/display local files, I thought I might use the Node API which I'm currently using in my Angular app to get some data from a DB to also get the list of PDF files.
I just don't know how...
The indexed files have three fields (fileName, fileDir, fileAbsolutePath) which I can get by using the solr query (https://myServerAdress/solr/CoreName/select?fl=fileDir%2C%20fileName%2C%20fileAbsolutePath&q=*%3A*) in case it's relevant.
I don't need an exact tutorial on how to do this. A rough approach would be sufficient and very helpful!
Screenshot and notes of the actual goal
You have to map the indexed file name to a path that your Node application has access to, then make a request to your Node application that returns the file, either directly or through something like X-Sendfile.
Exactly how you do this will depend on the framework you're using in node.
Solr does not have any method to retrieve the actual raw file content for serving, so the absolute path you've indexed (or the file name, if you have a static directory) will have to be used. Be careful not to serve files or documents outside of your intended path.
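A minimal Express sketch of that mapping, assuming the Angular app passes the fileName field returned by Solr and the PDFs live under one known base directory (all names here are illustrative):

// pdf-api.ts - hypothetical sketch: serve an indexed PDF by file name from a fixed base directory
import express from "express";
import path from "path";

const app = express();
const PDF_BASE_DIR = path.resolve("/srv/pdfs"); // assumed location of the indexed files

app.get("/pdf/:fileName", (req, res) => {
  // Resolve against the base directory and reject anything that escapes it (path traversal guard)
  const requested = path.resolve(PDF_BASE_DIR, req.params.fileName);
  if (!requested.startsWith(PDF_BASE_DIR + path.sep)) {
    return res.status(400).send("Invalid file name");
  }
  // sendFile streams the PDF with a Content-Type derived from the extension,
  // so the Angular pdf-viewer can load it from http://<api-host>/pdf/<fileName>
  res.sendFile(requested, (err) => {
    if (err && !res.headersSent) res.status(404).end();
  });
});

app.listen(3000);

The Angular side can then bind that URL to the viewer the same way it already does for external PDFs.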

Export report to Excel

I want to export a table to an Excel file. I need to export a report.
ORA_EXCEL.new_document;
ORA_EXCEL.add_sheet('Sheet name');
ORA_EXCEL.query_to_sheet('select * from mytable');
ORA_EXCEL.save_to_blob(myblob);
I saved my table to blob.
How do I export/respond to the user (client)?
I need something that is simple to allow a user to be able to download an Excel file to their own computer. I tried doing this procedure in an Oracle workflow:
ORA_EXCEL.save_to_file('EXPORT_DIR', 'example.xlsx');
But this did not help, because it saves the file to a directory on the server, and I need the user to receive the file on their own computer.
The way I have handled similar issues in the past was to work with the systems people to mount a directory from either a web server or a file server on the database server.
Then create a directory object so that the procedure can save to a location that is accessible to the user.
If the files are not sensitive and there is a limited number of users, then a file server makes sense, as it is then just a matter of giving the users access to the file share.
If the files are sensitive, or there is a large number of users or unknown users, we then used the web server and sent an email with a link to the user, enabling them to download their file. Naturally, there needs to be security built into this to stop people being able to download other users' files.
We didn't just email the files as an attachment because...
1) Emails with attachments tend to get blocked
2) We always advise not to open attachments on emails. (Yes I know we advise not to click on links as well but nothing is perfect)
Who or what is invoking the production of the document?
If it's done by an application which the user is working in, this application can fetch the BLOB, store it e.g. in a TEMP directory, and call
System.Diagnostics.Process.Start("..."); to open it with the associated application (see Open file with associated application).
If it's a website, it could stream the BLOB back with the Excel MIME type (see Setting mime type for excel document).
You could also store it in an Oracle DIRECTORY, but that has to be on the server and would need to be a network share to be accessible to clients (which is rarely accepted in a production environment!).
If mail isn't the solution, then maybe FTP can be a way to store the files on a common share. See the UTL_TCP package; with it an FTP transfer can be implemented (a bit hard to code, but there are solutions to be found on the web), and I guess professional tools that generate Office documents out of an Oracle DB and distribute them do it like this.

For a web app that allows simple image uploads, how should I store the images? Confused about file system vs. CDN

Every search result says something about storing the images in the file system but store the paths in the database, but I'm not sure exactly what "file system" means. Would that mean you have something like:
/public (assets)
    /js
    /css
    /img
/app (frontend)
/server (backend)
and you'd upload directly to that /public/img directory?
I remember trying something like that in the past with a Node.js app hosted on Heroku, and it wouldn't let me. I had to set up Amazon S3 and upload the images THERE, which leads to my confusion.
Is using something like Amazon S3 the usual practice or do people upload directly to the /img directory (assuming this is the "file system"?) and it just happened to be the case that Heroku doesn't allow this but other hosts do?
I'd characterize the pattern as "store the data in a blob storage service, store a pointer in your database". The uploaded file is the "blob" - once it has left the user's computer and filesystem, is it really a file anymore? :) On the server, a file system can store that "blob". S3 can store that blob. In the first case, you are storing a path. In the second case, you are storing the URL to the S3 object. A database could even store that blob (not at all recommended, though...)
In any case, the question to ask is: "what happens when I need two app servers to support my traffic?". Wherever that blob goes, both app servers need access to it.
In a data center under your control, there are many ways to share a filesystem across servers - network attached storage (NFS- or SMB-mounted volumes), or storage area networks (iSCSI, Fibre Channel). With more limited network/hardware configuration options in cloud-based Infrastructure/Platform-as-a-Service providers, the de facto standard is S3 because it is inexpensive, reliable, easy to use, and can completely offload serving the file from your servers.
For Heroku, though, you don't have much control over the file system. And, know that the file system for each of your dynos is "ephemeral" - it goes away when the dyno restarts. Which will happen when your app goes idle, or every 24 hours, whichever comes first. So that forces the choice a little.
Final point - S3 comes with the ancillary benefit of taking the burden of serving the blob off of your servers. You can also store files directly to S3 from the browser, without routing it through your app (see https://devcenter.heroku.com/articles/s3-upload-node). The benefit in both cases is that those downloads/uploads can take up lots of your application's precious time for stuff that's pretty rote.
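To make the "blob in S3, pointer in the database" pattern concrete, here is a rough Node/TypeScript sketch using the AWS SDK (bucket name, region, and function name are assumptions for the example):

// store-upload.ts - hypothetical sketch: put the uploaded bytes in S3 and return the pointer to store in the DB
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { randomUUID } from "crypto";

const s3 = new S3Client({ region: "us-east-1" }); // assumed region
const BUCKET = "my-app-uploads";                  // assumed bucket name

export async function storeUpload(fileBuffer: Buffer, contentType: string): Promise<string> {
  const key = `uploads/${randomUUID()}`;
  await s3.send(new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    Body: fileBuffer,
    ContentType: contentType,
  }));
  // The returned URL (or just the key) is what goes into your database row
  return `https://${BUCKET}.s3.amazonaws.com/${key}`;
}

In the two-app-server scenario above, both servers can call this and later serve the same URL.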
Uploading directly to a host file system is generally not a best practice. This is one reason services like S3 are so popular.
If you're using the host file system and ever need more than one instance of a server, the file systems will grow out of sync. Imagine one user uploads 'foo.jpg' to server A (A/app/uploads) and another uploads 'bar.jpg' to server B (B/app/uploads). When either of these images is later requested, the request has a 50% chance of failing, depending on whether the load balancer routes the request to server A or server B.
There are several ancillary benefits to avoiding the host filesystem. For instance, you can set the filesystem serving your app to read-only for increased security. Files are a form of state, and stateless web servers allow you to do things like blow away one instance and deploy another instance to take over its work.
You might find this of help:
https://codeforgeek.com/2014/11/file-uploads-using-node-js/
I used multer in my Node.js server file to handle uploading from the front end. Basically I had an HTML form that would submit the image to the server file, where it would be handled by multer. This led to it being saved in the file system (to answer your question concretely: yes, this was to something like the /img directory right in your project's file structure). My application is running on Heroku, and this feature works there as well. However, I would not recommend using the file system to store your images like this (I doubt you will have enough space for a large amount of images/files) - using AWS storage or a DB would be better.
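For completeness, a minimal sketch of the multer setup described in that answer (the destination directory and field name are assumptions):

// upload-server.ts - hypothetical sketch of handling the form upload with multer
import express from "express";
import multer from "multer";

const app = express();
// Uploaded files land directly in the project's public image directory (the "file system" approach)
const upload = multer({ dest: "public/img" });

app.post("/upload", upload.single("image"), (req, res) => {
  // req.file.path is the path you would store in the database alongside the record
  res.json({ path: req.file?.path });
});

app.listen(3000);

On Heroku that path lives on the ephemeral dyno filesystem mentioned earlier, which is why S3 or a database is the safer long-term choice.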

Uploading and requesting images from Meteor app

I want to upload images from the client to the server. The client must see a list of all images he or she has and see the image itself (a thumbnail or something like that).
I saw people using two methods (generically speaking):
1- Upload image and save the binaries to MongoDB
2- Upload an image and move it to a folder, save the path somewhere (the classic method, and the one I implemented so far)
What are the pros and cons of each method, and how can I retrieve the data and show it in a template in each case (getting the path and writing it to the src attribute of an img tag, versus sending the binaries)?
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg) that I uploaded and the server moved to a known folder, my router (iron router) doesn't know how to handle the request.
1- Upload image and save the binaries to MongoDB
Either you limit the file size to 16 MB and use only basic MongoDB, or you use GridFS and can store anything (no size limit). There are several pros and cons to this method, but IMHO it is much better than storing on the file system:
Files don't touch your file system, they are piped to you database
You get back all the benefits of mongo and you can scale up without worries
Files are chunked and you can only send a specific byte range (useful for streaming or download resuming)
Files are accessed like any other mongo document, so you can use the allow/deny function, pub/sub, etc.
2- Upload an image and move it to a folder, save the path somewhere
(the classic method, and the one I implemented so far)
In this case, either you store everything in your public folder and make everything publicly accessible using the file names + paths, or you use a dedicated asset delivery system, such as an nginx server. Either way, you will be using something less secure and maintainable than the first option.
This being said, have a look at the file collection package. It is much simpler than collection-fs and will offer you everything you are looking for out of the box (including a file API, GridFS storage, resumable uploads, and many other things).
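To illustrate the byte-range point from the list above outside of Meteor, here is a rough sketch using the plain Node mongodb driver's GridFSBucket (database, bucket, and route names are made up; the Meteor packages mentioned wrap this kind of thing for you):

// gridfs-range.ts - hypothetical sketch: stream only a byte range of a GridFS-stored image
import express from "express";
import { MongoClient, GridFSBucket } from "mongodb";

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const bucket = new GridFSBucket(client.db("meteor"), { bucketName: "images" });

  const app = express();
  app.get("/images/:name", (req, res) => {
    // Send only the first 64 KB, e.g. enough for a quick preview
    bucket
      .openDownloadStreamByName(req.params.name, { start: 0, end: 64 * 1024 })
      .on("error", () => res.status(404).end())
      .pipe(res);
  });
  app.listen(3000);
}

main();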
Problems found so far: when I request foo.jpg (localhost:3000/uploads/foo.jpg) that I uploaded and the server moved to a known folder, my router (iron router) doesn't know how to handle the request.
Do you know that this path leads to the public/uploads/foo.jpg directory under your project's root folder? If you put the file there, you should be able to request it.
