How to create a zip file in CMIS?

I'm pretty new to CMIS and I'm having a little trouble with the zip topic. I need to create a zip file in the Document Service: after a loop in which I create 12 files, I need to add them to the zip.
The 12 files are created successfully. I need to create the zip before the loop and move the 12 files into it, but I don't know how to solve this.
In other attempts I managed to create the zip (though it wasn't possible to open it), but I couldn't move the 12 files into it. Please help.
SOLVED!
I've managed to solve this. I created a temporary zip, added each file to it as it was created, and once all 12 files were in the zip I uploaded it to CMIS :)
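The temp-zip approach described above can be sketched in Python for illustration (the file names are hypothetical, and the final CMIS upload is only noted in a comment, since the exact call depends on your CMIS client library):

```python
import zipfile
from pathlib import Path

def build_zip(file_paths, zip_path):
    """Pack the given files into a single zip archive and return its path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in file_paths:
            # Store each file under its bare name, without directory prefixes.
            zf.write(path, arcname=Path(path).name)
    return zip_path

# Example: after the loop has produced twelve files, pack them...
# archive = build_zip(generated_files, "batch.zip")
# ...then upload `archive` as a single document via your CMIS client.
```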

Visit the Alfresco documentation at the following link; it should help.
http://docs.alfresco.com/5.1/pra/1/tasks/opencmis-ext-workbench.html

Related

Retrieve contents of a ZIP file on SharePoint without downloading it

I have written a bit of automated code that checks a SharePoint site and looks for a ZIP file (let's call it doc.zip). If doc.zip is found, it downloads it and then checks for a file (say target.docx). doc.zip is about 300MB, so I want to download it only when necessary.
What I would like to know is that given SharePoint has some ZIP search capability, is it possible to write code using CSOM (c#) to find doc.zip, and then run some code to retrieve the contents of doc.zip without downloading it.
Just to reiterate, I am comfortable with searching for files in a folder on SP, downloading the file, and unpacking zip entries. What I need is to retrieve a ZIP file's contents on SP without downloading it.
E.g. is there a SP command:
cxt.Load(SomeZipFileQuery);
cxt.ExecuteQuery();
Thanks in advance.
This capability is not available. I do like the idea, though: being able to "parse" zip files on the server side and then download only the relevant bits would be ideal. Perhaps raise this on UserVoice to see if others would also find it useful: https://sharepoint.uservoice.com
Ok, I have proven yet again that stubbornness will prevail.
I have figured out that if I use the /_api/search?query='myfile.zip' web REST API to search for my file, this search will also match ZIP files that contain the file I need. And it works perfectly.
Of course there is the added pain of parsing an XML response, but it works very nicely for my use case.
At least if someone is looking for this solution, here it is. I won't bore anyone with code, as /_api/search has probably been done to death already in other threads.
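For anyone who does want a starting point, the search request described above can be sketched in Python (the site URL and auth header are placeholders, and the endpoint form follows the answer above; in C#/CSOM you would issue the equivalent web request and parse the returned XML):

```python
from urllib.request import Request, urlopen

def build_search_url(site_url, filename):
    """Build the search URL in the form used in the answer above."""
    return f"{site_url}/_api/search?query='{filename}'"

def search_for_file(site_url, filename, auth_header):
    """Run the search and return the raw Atom/XML response body.

    auth_header is a placeholder (e.g. a Bearer token); how you authenticate
    depends on your tenant. Parse the result with xml.etree.ElementTree.
    """
    req = Request(
        build_search_url(site_url, filename),
        headers={"Accept": "application/xml", "Authorization": auth_header},
    )
    return urlopen(req).read()
```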

Delete folder by name from google drive using gdrive

I have read the documentation for gdrive here, but I couldn't find a way to do what I want. I want to write a bash script to automatically upload a specific folder from my hard drive. The problem is that when I upload it several times, instead of replacing the old folder with the new one, it creates a new folder with the same name.
I could only find the following partial solutions to my problem:
Use update to replace files. The problem with this partial solution is that new files inside the folder do not get uploaded automatically, and I would have to change the bash script every time a new file is produced in the folder I want to upload.
Erase the folder by its id from Google Drive and then upload the folder again. The problem here is that whenever I do this, the id of the uploaded folder changes, so I couldn't find a way to script the work.
I am looking for any method that solves my problem. But the precise questions that could help me are:
Is there a way to delete a folder from google drive (using gdrive) by its name instead of by its id?
Is there a way to get the id of a folder by its name? I guess not, since there can be several folders with the same name (but different ids) uploaded. Or am I missing something?
Is there a way to do a recursive update to renew all files that are already inside the folder uploaded on google drive and in addition upload those that are not yet uploaded?
In case it is relevant, I am using Linux Mint 18.1.
Is there a way to delete a folder from google drive (using gdrive) by its name instead of by its id?
Nope. As your next question observes, there can be multiple such folders.
Is there a way to get the id of a folder by its name? I guess not, since there can be several folders with the same name (but different ids) uploaded. Or am I missing something?
You can get the ids (plural) of all folders with a given name.
gdrive list -q "name = 'My folder name' and mimeType='application/vnd.google-apps.folder' and trashed=false"
Is there a way to do a recursive update to renew all files that are already inside the folder uploaded on google drive and in addition upload those that are not yet uploaded?
Yes, but obviously not with a single command. You'll need to write a short script using gdrive list and parse (awk works well) the output.
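As one possible shape for such a script, here is a Python sketch that wraps the `gdrive list` query shown above and extracts the ids from its tabular output (the function names are hypothetical, and it assumes the `gdrive` binary is on your PATH and prints a header row followed by one row per item with the id in the first column):

```python
import subprocess

def parse_gdrive_list(output):
    """Extract the id column from `gdrive list` tabular output.

    Assumes a header row followed by one row per item, id first.
    """
    lines = output.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]

def folder_ids_by_name(name):
    """Return the ids of all non-trashed Drive folders with the given name."""
    query = (
        f"name = '{name}' and "
        "mimeType = 'application/vnd.google-apps.folder' and trashed = false"
    )
    result = subprocess.run(
        ["gdrive", "list", "-q", query],
        capture_output=True, text=True, check=True,
    )
    return parse_gdrive_list(result.stdout)
```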

creating multiple zip files based on size

I have 150GB of jpg's in around 30 folders. I am trying to import them into the media library of a CMS. The CMS will accept a bulk import of images in a zip file but there is a limit of 500MB on the size of the zip (and it won't accept multi-volume zips).
I need to go into each folder and zip the images into a small number of ~500MB zip files. I am using WinRAR but it doesn't seem to have the facility to do what I want.
Is there another product that will do what I want?
Thanks
David
It is possible with WinRAR as well. Please see this guide: Create Multi-part Archives to Split Large Files for Emailing, Writing to CD [How To]
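Since the CMS here rejects multi-volume archives, an alternative is to produce a series of independent ~500MB zips with a short script. A Python sketch (the function names are hypothetical; it treats the sum of source file sizes as an estimate of the archive size, which is reasonable because JPEGs barely compress, and it stores rather than deflates them for the same reason):

```python
import os
import zipfile

def write_zip(paths, zip_name):
    """Store the given files (uncompressed) in a new zip archive."""
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_STORED) as zf:
        for p in paths:
            zf.write(p, arcname=os.path.basename(p))
    return zip_name

def zip_in_batches(file_paths, out_prefix, limit_bytes=500 * 1024 * 1024):
    """Group files into consecutive zips, each roughly under limit_bytes.

    Keep a safety margin below the CMS's 500MB cap, since zip headers
    add a little overhead on top of the raw file sizes.
    """
    archives, batch, batch_size = [], [], 0
    for path in file_paths:
        size = os.path.getsize(path)
        if batch and batch_size + size > limit_bytes:
            archives.append(write_zip(batch, f"{out_prefix}{len(archives) + 1}.zip"))
            batch, batch_size = [], 0
        batch.append(path)
        batch_size += size
    if batch:
        archives.append(write_zip(batch, f"{out_prefix}{len(archives) + 1}.zip"))
    return archives
```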

How to transfer files from one zip to another without decompressing

The title says it all, I'm trying to transfer a list of files from one zip to another without the need to de-compress then re-compress the files.
Any suggestions?
Yes, as danielu13 said, just unzip to a temp folder and copy. Also, you may want to include more specifics in your question (code samples, directory structures, etc.).
PS @danielu13: why not post your comment as an answer? I'm new here, so there could be a good reason.
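For what it's worth, Python's `zipfile` can move entries between archives without writing a temp folder to disk, though note the caveat in the comment: the entries are still decompressed and recompressed internally, since the zip format stores each entry with its own compression stream.

```python
import zipfile

def copy_entries(src_zip, dst_zip, names):
    """Copy the named entries from src_zip into dst_zip (appending).

    Note: zipfile transparently decompresses on read and recompresses on
    write, so this avoids temp files on disk but not the codec work itself.
    """
    with zipfile.ZipFile(src_zip) as src, \
         zipfile.ZipFile(dst_zip, "a", zipfile.ZIP_DEFLATED) as dst:
        for name in names:
            dst.writestr(name, src.read(name))
```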

How to pull data or files from a website using Spoon/Kettle

We need to pull data from a website using Pentaho Kettle; if anyone has some pointers, please let me know.
The files are in zip format, available at a link on the web.
Simple. Create a job that downloads the file from the website.
Then create a transformation, called from the job, which loads the zipped files (you can use the Text file input step to read zipped text files as they are) and writes them to your db.
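Outside Kettle, the same two steps (download the archive, then read its text entries for loading) can be sketched in Python for illustration; the URL is a placeholder, and the database-load step is left to the reader:

```python
import io
import zipfile
from urllib.request import urlopen

def download_zip(url):
    """Fetch a zip file from the web into memory (the 'job' step)."""
    return io.BytesIO(urlopen(url).read())

def read_text_entries(zip_bytes, encoding="utf-8"):
    """Yield (name, text) for each entry in the archive (the 'transform'
    step), ready to be written to a database."""
    with zipfile.ZipFile(zip_bytes) as zf:
        for name in zf.namelist():
            yield name, zf.read(name).decode(encoding)

# Example (hypothetical URL):
# for name, text in read_text_entries(download_zip("http://example.com/data.zip")):
#     load_into_db(name, text)   # your db-load logic here
```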
