I have a JSON file. The size is around 20MB.
Project 1.
So, I created a project and imported the data from that JSON file into Core Data. Core Data creates the following SQLite files: entireDataList.sqlite, entireDataList.sqlite-shm and entireDataList.sqlite-wal.
It took about 10 minutes to copy the entire JSON content into Core Data, and it is a one-time file creation.
Now, in Project 2:
I copied all three files into the main bundle. I had to recreate the .xcdatamodeld, because if I copy it directly from Project 1 the app does not find the .momd file.
// URL of the seed store that ships inside the app bundle
NSURL *storeURL = [[NSBundle mainBundle] URLForResource:@"entireDataList" withExtension:@"sqlite"];
store = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedModel]];
NSError *error = nil;
[store addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error];
So, is getting storeURL from [NSBundle mainBundle] a reliable approach for where the entireDataList.sqlite, entireDataList.sqlite-shm and entireDataList.sqlite-wal files are stored?
Generally they live in the Documents directory.
Generally speaking you should copy the files from the bundle into the documents folder on first launch, then open them from there. This approach will then allow you to edit the contents of the files and save the updates.
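A minimal sketch of that first-launch copy, assuming the three seed files keep the entireDataList names from Project 1:

NSFileManager *fm = [NSFileManager defaultManager];
NSURL *documentsURL = [[fm URLsForDirectory:NSDocumentDirectory
                                  inDomains:NSUserDomainMask] lastObject];

// Copy the main store plus its -wal/-shm companions once, on first launch.
for (NSString *name in @[@"entireDataList.sqlite",
                         @"entireDataList.sqlite-wal",
                         @"entireDataList.sqlite-shm"]) {
    NSURL *target = [documentsURL URLByAppendingPathComponent:name];
    if (![fm fileExistsAtPath:target.path]) {
        NSURL *source = [[NSBundle mainBundle] URLForResource:name withExtension:nil];
        NSError *copyError = nil;
        [fm copyItemAtURL:source toURL:target error:&copyError];
    }
}

// Open the store from Documents instead of the read-only bundle.
NSURL *storeURL = [documentsURL URLByAppendingPathComponent:@"entireDataList.sqlite"];

Copying the -wal and -shm companions matters because recent writes can still live in the write-ahead log rather than in the main .sqlite file.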
We have recently made changes to how we connect to ADLS from Databricks, which have removed the mount points that were previously established within the environment. We are using Databricks to find points in polygons, as laid out in the Databricks blog here: https://databricks.com/blog/2019/12/05/processing-geospatial-data-at-scale-with-databricks.html
Previously, a chunk of code read in a GeoJSON file from ADLS into the notebook and then projected it to the cluster(s):
nights = gpd.read_file("/dbfs/mnt/X/X/GeoSpatial/Hex_Nights_400Buffer.geojson")
a_nights = sc.broadcast(nights)
However, the new changes that have been made have removed the mount point and we are now reading files in using the string:
"wasbs://Z#Y.blob.core.windows.net/X/Personnel/*.csv"
This works fine for CSV and Parquet files, but will not load a GeoJSON! When we try this, we get an error saying "File not found". We have checked and the file is still within ADLS.
We then tried to copy the file temporarily to "dbfs" which was the only way we had managed to read files previously, as follows:
dbutils.fs.cp("wasbs://Z#Y.blob.core.windows.net/X/GeoSpatial/Nights_new.geojson", "/dbfs/tmp/temp_nights")
nights = gpd.read_file(filename="/dbfs/tmp/temp_nights")
dbutils.fs.rm("/dbfs/tmp/temp_nights")
a_nights = sc.broadcast(nights)
This works fine on the first use within the code, but then a second GeoJSON run immediately after (which we tried to write to temp_days) fails at the gpd.read_file stage, saying file not found! We have checked with dbutils.fs.ls() and can see the file in the temp location.
So some questions for you kind folks:
Why were we previously having to use "/dbfs/" when reading in GeoJSON but not csv files, pre-changes to our environment?
What is the correct way to read in GeoJSON files into databricks without a mount point set?
Why does our process fail upon trying to read the second created temp GeoJSON file?
Thanks in advance for any assistance - very new to Databricks...!
Pandas uses the local file API for accessing files, and you accessed files on DBFS via /dbfs, which exposes that local file API. In your specific case, the problem is that even though you used dbutils.fs.cp, you didn't specify that you wanted to copy the file locally, so by default it was copied onto DBFS at the path /dbfs/tmp/temp_nights (actually dbfs:/dbfs/tmp/temp_nights), and as a result the local file API doesn't see it - you would need to use /dbfs/dbfs/tmp/temp_nights instead, or copy the file into /tmp/temp_nights.
But the better way is to copy the file locally - you just need to specify that the destination is local, which is done with the file:// prefix, like this:
dbutils.fs.cp("wasbs://Z#Y.blob.core.windows.net/...Nights_new.geojson",
"file:///tmp/temp_nights")
and then read file from /tmp/temp_nights:
nights = gpd.read_file(filename="/tmp/temp_nights")
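The same pattern should also cover the second layer that was failing; the file name below is a placeholder:

# Copy the second GeoJSON to the driver's local disk, then read and broadcast it as before.
dbutils.fs.cp("wasbs://Z@Y.blob.core.windows.net/X/GeoSpatial/Days_new.geojson",  # placeholder name
              "file:///tmp/temp_days")
days = gpd.read_file(filename="/tmp/temp_days")
a_days = sc.broadcast(days)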
I use SuiteScript to execute Saved Searches and save CSV files to the File Cabinet. However, the saved files are limited to 10MB, or the script fails. Is there any way to work around the 10MB limit? I'm able to upload a file over 10MB in size through the UI, and the ability to do so using SuiteScript would be very useful.
Thanks for any insight.
Like @bknights answered, you can use the N/task module to have NetSuite create a CSV for you.
// 'task' is the N/task module, loaded via define(['N/task'], ...)
var searchTask = task.create({
    taskType: task.TaskType.SEARCH
});
searchTask.savedSearchId = 51;
searchTask.filePath = 'ExportFolder/export.csv';
var searchTaskId = searchTask.submit();
If for whatever reason you need more control over the output, you can create files larger than 10MB using N/file#File.appendLine() to set the contents of the file line by line.
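A rough sketch of that appendLine approach, assuming a SuiteScript 2.x scheduled script; the folder id, saved search id and result column below are placeholders:

/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/file', 'N/search'], function (file, search) {
    function execute(context) {
        // Create the CSV with only the header; appendLine streams the rest,
        // so the contents never have to fit in a single 10MB string.
        var csvFile = file.create({
            name: 'export.csv',
            fileType: file.Type.CSV,
            folder: 123,                      // placeholder folder internal id
            contents: 'internalid,name\n'
        });

        search.load({ id: 'customsearch_my_export' })   // placeholder search id
            .run()
            .each(function (result) {
                csvFile.appendLine({
                    value: result.id + ',' + result.getValue({ name: 'entityid' })  // placeholder column
                });
                return true;                  // keep iterating
            });

        csvFile.save();                       // returns the internal id of the finished file
    }
    return { execute: execute };
});

Note that run().each() stops after 4,000 rows, so for very large exports the paged search APIs (or the N/task route above) are the safer option.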
Use the SS2.0 N/task module to schedule the search and have the results published to a file id or path.
I am trying to find the specification of a ReqIF archive file (reqifz). Such an archive is used to transfer multiple ReqIF documents and "external" objects, if I understand correctly. Can anyone point me to how such an archive is structured?
Need to fetch data from a plist file.
Need to display the data in a UITableView in Xamarin.iOS.
It's fairly trivial to read from a plist.
Make sure the build action of the plist file is "Bundle resource".
Then you can read the contents of the plist, if it's a dictionary:
var dict = NSDictionary.FromFile (path);
Or it's an array:
NSArray array = NSArray.FromFile(path);
From the NSArray you can populate a UITableView.
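A minimal sketch of that wiring, assuming the plist is an array of strings and is bundled as Data.plist (both the file name and the string-only contents are assumptions):

using System.Collections.Generic;
using Foundation;
using UIKit;

public class PlistTableSource : UITableViewSource
{
    readonly List<string> items;
    const string CellId = "PlistCell";

    public PlistTableSource(List<string> items) { this.items = items; }

    public override nint RowsInSection(UITableView tableView, nint section)
    {
        return items.Count;
    }

    public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
    {
        // Reuse a cell when possible, otherwise create a plain one.
        var cell = tableView.DequeueReusableCell(CellId)
                   ?? new UITableViewCell(UITableViewCellStyle.Default, CellId);
        cell.TextLabel.Text = items[(int)indexPath.Row];
        return cell;
    }
}

// In the view controller, e.g. in ViewDidLoad:
// var path = NSBundle.MainBundle.PathForResource("Data", "plist"); // assumed file name
// var array = NSArray.FromFile(path);
// var items = new List<string>();
// for (nuint i = 0; i < array.Count; i++)
//     items.Add(array.GetItem<NSString>(i).ToString());
// tableView.Source = new PlistTableSource(items);
// tableView.ReloadData();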
I'd advise you to check out the tutorial on Xamarin's website for populating a UITableView; it's very clear and concise.
[1] http://developer.xamarin.com/guides/ios/user_interface/tables/part_2_-_populating_a_table_with_data/
Edited to read from the plist as a dictionary, per @jstedfast's advice in the comments.
I have a web part that lets users upload files to a document library.
A problem appears if a user uploads a file whose name is identical to a file that already exists in the library.
In this case I want the new file to be added as a new version of the existing file, using the object model, not the web services.
How can I achieve that?
Thanks
OK, here's the answer:
// Byte array holding the contents of the newly uploaded file
byte[] contents = File.ReadAllBytes(path);

// 'file' is the SPFile that already exists in the library with the same name
file.CheckOut();
file.SaveBinary(contents);      // overwrite the contents
file.CheckIn("New version");    // checking in creates the new version
The trick is in the check-out/check-in.
Thanks