The title says it all: I'm trying to transfer a list of files from one zip archive to another without having to decompress and then re-compress the files.
Any suggestions?
Yes, as danielu13 said, just unzip to a temp folder and copy. Also, you may want to include more specifics in your question (code samples, directory structures, etc.).
P.S. @danielu13, why not post your comment as an answer? I'm new here, so there could be a good reason.
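For what it's worth, here is a minimal sketch of that temp-folder approach, assuming the standard zip/unzip command-line tools and purely hypothetical archive names and entry list (note that the entries do get re-compressed when they are added to the destination archive):
SRC=source.zip                       # hypothetical source archive
DST=target.zip                       # hypothetical destination archive
FILES="docs/readme.txt imgs/logo.png"  # hypothetical entries to move over

tmp=$(mktemp -d)
# Extract only the wanted entries into a temp folder, keeping their paths...
unzip -q "$SRC" $FILES -d "$tmp"
# ...then add them to the destination archive from inside that folder.
(cd "$tmp" && zip -qr "$OLDPWD/$DST" .)
rm -rf "$tmp"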
Some time ago I worked on a team that developed a number of educational software packages, and they are now being reviewed for bugs and updates.
During this process, I noticed that the "imgs" folder has accumulated too many files. Probably one of the developers decided to put all the images used by every application into that folder. However, because there are so many applications, it would be too painful to check all of them manually (and some of the images are part of the layout, almost invisible).
Is there a way to write a shell script in Linux to check if the files in a given folder are being used by a set of HTML and JS files in another folder?
Go to the images folder and try this
for name in *; do grep -ril "$name" /path/to/soft/* || echo "$name not used"; done
I'm not sure I understood your question correctly, but maybe this will help you:
ls -1 your_source_path | while read file
do
    grep -wnr "$file" your_destination_path ||
        echo "no match for file $file"
    # you can add any extra action here
done
In your_source_path you put the directory whose file names will be listed, and your_destination_path is the directory that should be searched.
It is not possible to check the generic case, since HTML and JavaScript are too dynamic (e.g., the JavaScript code could build the image file name on the fly). Likewise, images can be specified in CSS style sheets, inline styles, etc.
You will want to review the HTML/JS files and see if it is possible to identify the tags that are actually used to specify images. Hopefully, this will reduce the number of tags and attribute names that need to be extracted.
As an alternative, if you have access to the server's access log, you can find out which images have been requested over time and focus the search on images not referenced in the log file.
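For example, here is a rough sketch of that log-based check, assuming a combined-format access log at a hypothetical path and all the images living in a single imgs directory:
log=/var/log/apache2/access.log      # hypothetical log location
imgdir=/path/to/imgs                 # hypothetical images folder

# Collect the base names of every image that was requested at least once.
grep -oE '/imgs/[^ ?"]+' "$log" | xargs -rn1 basename | sort -u > /tmp/accessed.txt

# Report images that are on disk but never appear in the log.
for f in "$imgdir"/*; do
    name=$(basename "$f")
    grep -qxF "$name" /tmp/accessed.txt || echo "$name never requested"
done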
In SharePoint Online, when my flow moves a file (PDF, ZIP, ...) named "U000" into a folder where a file with the same name already exists, it renames the file to "U0001".
How can I customize this so the file is renamed to something like "U000-Rev.1" or "U000_copy(1)" instead of "U0001"?
I know this is the default SharePoint behavior and there is no option for the renaming format, but maybe I can change or add code in the "definition.json" file from the exported flow ZIP (or somewhere else).
(I'm not a software developer so any answer/idea is welcomed.)
Thank you!
Add an if statement to check the file name you just uploaded. If it ends with "(1)", rename the file. This is probably the least convoluted, quickest approach, but it's not 100% robust.
You can add more logic or change the approach to make it fully robust, but in my opinion you can look into that after you've got something working. Baby steps.
I have written a bit of automated code that checks a SharePoint site and looks for a ZIP file (let's call it doc.zip). If doc.zip is found, it downloads it and then checks for a file (say target.docx). doc.zip is about 300 MB, so I want to download it only when necessary.
What I would like to know is: given that SharePoint has some ZIP search capability, is it possible to write code using CSOM (C#) to find doc.zip and then retrieve the contents of doc.zip without downloading it?
Just to reiterate, I am comfortable with searching for files in a folder on SP, downloading the file, and unpacking zip entries. What I need is to retrieve a ZIP file's contents on SP without downloading it.
E.g., is there an SP command like:
cxt.Load(SomeZipFileQuery);
cxt.ExecuteQuery();
Thanks in advance.
This capability is not available. I do like the idea. Having the ability to "parse" zip files on the server side and then download only the relevant bits would be ideal. Perhaps raise this on UserVoice to see if others also find it useful: https://sharepoint.uservoice.com
Ok, I have proven yet again that stubbornness will prevail.
I have figured out that if I use the /_api/search?query='myfile.zip' REST API to search for my file, the search will also match ZIP files that contain the file I need. And it works perfectly.
Of course there is the added pain of parsing an XML response, but it works very nicely for my code.
At least if someone is looking for this solution, here it is. I won't bore anyone with code, as /_api/search has probably been done to death already in other threads.
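For anyone who does want a bare starting point, the request boils down to something like the sketch below. The site URL and token are placeholders, and the endpoint is written here in the /_api/search/query?querytext= form the search REST service is usually documented with, so verify it against your environment; asking for JSON via the Accept header also sidesteps the XML parsing mentioned above.
SITE="https://contoso.sharepoint.com"   # placeholder tenant/site URL
TOKEN="..."                             # an access token obtained elsewhere

# Query the search service; results can include ZIP archives whose
# contents match the searched file name, as described above.
curl -s \
     -H "Authorization: Bearer $TOKEN" \
     -H "Accept: application/json;odata=verbose" \
     "$SITE/_api/search/query?querytext='myfile.zip'"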
I have read the documentation for gdrive here, but I couldn't find a way to do what I want. I want to write a bash script to automatically upload a specific folder from my hard drive. The problem is that when I upload it several times, instead of replacing the old folder with the new one, it creates a new folder with the same name.
I could only find the following partial solutions to my problem:
Use update to replace files. The problem with this partial solution is that new files inside the folder would not get uploaded automatically, and I would have to change the bash script every time a new file is produced in the folder that I want to upload.
Erase the folder from Google Drive by its id and then upload the folder again. The problem here is that whenever I do this, the id of the uploaded folder changes, so I couldn't find a way to write a script to do the work.
I am looking for any method that solves my problem. But the precise questions that could help me are:
Is there a way to delete a folder from google drive (using gdrive) by its name instead of by its id?
Is there a way to get the id of a folder by its name? I guess not, since there can be several folders with the same name (but different ids) uploaded. Or am I missing something?
Is there a way to do a recursive update to renew all files that are already inside the folder uploaded on google drive and in addition upload those that are not yet uploaded?
In case it is relevant, I am using Linux Mint 18.1.
Is there a way to delete a folder from google drive (using gdrive) by its name instead of by its id?
Nope. As your next question observes, there can be multiple such folders.
Is there a way to get the id of a folder by its name? I guess not, since there can be several folders with the same name (but different ids) uploaded. Or am I missing something?
You can get the ids (plural) of all folders with a given name.
gdrive list -q "name = 'My folder name' and mimeType='application/vnd.google-apps.folder' and trashed=false"
Is there a way to do a recursive update to renew all files that are already inside the folder uploaded on google drive and in addition upload those that are not yet uploaded?
Yes, but obviously not with a single command. You'll need to write a short script using gdrive list and parse the output (awk works well).
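As a rough sketch of that idea (assuming the gdrive 2.x commands list, update, and upload -p, a remote folder whose id you already know, and file names without spaces):
FOLDER_ID="0Byourfolderid"          # hypothetical id of the target Drive folder
LOCAL_DIR="/home/me/backup"         # hypothetical local folder to mirror

for path in "$LOCAL_DIR"/*; do
    name=$(basename "$path")
    # Look for a remote file with the same name inside the target folder.
    id=$(gdrive list --no-header \
           -q "name = '$name' and '$FOLDER_ID' in parents and trashed = false" \
         | awk 'NR==1 {print $1}')
    if [ -n "$id" ]; then
        gdrive update "$id" "$path"            # refresh the existing remote copy
    else
        gdrive upload -p "$FOLDER_ID" "$path"  # upload files not seen before
    fi
done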
I'm wondering if this is possible and the best way to accomplish it if it is.
Scenario: We have multiple sites that create a "dated subdirectory" each day at a certain time. The dated subs contain information for that day of business.
I need to pull a single DBF file out of the dated sub each day and either export the data to an ever-expanding Excel file that contains the information from that single DBF file for EACH day, so it looks like:
Day 1's information
.
.
.
Day 2's information
.
.
.
Day 3's information
OR
Add a copy of the DBF file from each dated sub to a ZIP file, done daily.
The name of the DBF file never changes, and can't be deleted.
I'm thinking it could be done with a forfiles command, but I am curious whether it could be done more efficiently. The script that searches, pulls, and zips would be run as a nightly task.
As an add-on, could it be pushed to a Google Drive for safe storage?
Sorry if this is rambling. This is something I'd love to try to do, but not sure where to start exactly.
- Dated sub created nightly; the single file from that directory needs to be pulled or read and transferred either to an Excel file or copied to a ZIP, with a way to separate each file, maybe a directory with the date as its name?
Also, if possible, it needs to start with a particular date, like 6/1/2014, and go no further back.
Thanks in advance for any help.
Can you merge data from a DBF file to an Excel file? Not really with pure batch, but you can use JScript or VB Script. You'll need the MS ACE OLEDB 12.0 driver. Then you can use a connection string for DBF and another for XLSX. (If you're using XLS or CSV, you could get by with the MS Jet driver, running the WOW64 version of cscript.) Once connected, just use SQL queries. SELECT * FROM dbffile, and as you're looping through the recordset, INSERT INTO xlsxfile.
Can you append a DBF file to a zip file? Probably. I'm guessing 7za.exe a will append to the archive if the archive already exists. Try it and see. Or were you wanting to script the zip functionality without 3rd party software?
Can it be copied to a Google Drive? Well, yeah, the Google Drive software watches and mirrors a folder on your hard drive. So chances are, copying the file to %userprofile%\Google Drive\ will do what you want without any conscious effort.
Try posting another question. But rather than rambling again, find one specific problem where you're getting stuck, and explain what you've tried without success.