I started working with the libzip library today, but I do not understand the principle of how libzip works.
My focus is on zipping a directory, with all the files and dirs within it, into a zip file.
Therefore, I started with zip_open(), then I read the directory contents and added all the dirs to the archive with zip_dir_add(). After that, I closed the zip file with zip_close(). Everything was fine. The next step was to add all the files to the archive with zip_file_add(). But it doesn't work: the last step, closing the file, fails.
OK, I had forgotten to create a zip_source to get this done. I added a zip_source_file() call on the line before to obtain this source. But it still doesn't work.
What is wrong in my thinking? Do I have to fopen() and fclose() the file on the filesystem also?
And what is the difference between zip_source_file() and zip_source_filep()?
Do I have to fopen() and fclose() the file on the filesystem also?
No, you can just use zip_source_file().
From your comments I think you have the right general idea, but there is probably some detail that is making it fail. Make sure you perform all the error checking the documentation suggests after each libzip call so you can get more information about what is causing it to fail.
You could also compare your code with https://gist.github.com/clalancette/bb5069a09c609e2d33c9858fcc6e170e
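If it helps, here is a rough sketch of the flow you describe: create an archive, add one directory entry, and add one file from disk. The archive name and the paths are just placeholders for whatever your directory walk produces, and note that zip_source_file() opens the file for you, so no fopen()/fclose() is needed.

#include <stdio.h>
#include <zip.h>

int main(void)
{
    int errcode = 0;
    zip_t *archive = zip_open("example.zip", ZIP_CREATE | ZIP_TRUNCATE, &errcode);
    if (archive == NULL) {
        zip_error_t error;
        zip_error_init_with_code(&error, errcode);
        fprintf(stderr, "zip_open failed: %s\n", zip_error_strerror(&error));
        zip_error_fini(&error);
        return 1;
    }

    /* Add a directory entry inside the archive. */
    if (zip_dir_add(archive, "subdir", ZIP_FL_ENC_UTF_8) < 0)
        fprintf(stderr, "zip_dir_add failed: %s\n", zip_strerror(archive));

    /* Create a source from a file on disk; libzip reads it itself. */
    zip_source_t *src = zip_source_file(archive, "subdir/data.txt", 0, -1);
    if (src == NULL ||
        zip_file_add(archive, "subdir/data.txt", src, ZIP_FL_ENC_UTF_8) < 0) {
        fprintf(stderr, "adding file failed: %s\n", zip_strerror(archive));
        zip_source_free(src);   /* free the source only if it was NOT added */
    }

    /* zip_close() is what actually writes the archive and reads the sources. */
    if (zip_close(archive) < 0) {
        fprintf(stderr, "zip_close failed: %s\n", zip_strerror(archive));
        return 1;
    }
    return 0;
}

If zip_close() still fails, the zip_strerror() message at that point is usually the detail you are missing.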
Related
Is there a way to open a directory and create it if it doesn't exist, atomically?
My use case is simple: I watch a directory containing every public key that is allowed to connect to my server, but if the directory doesn't exist I want to create it. Unfortunately, for now I only see a two-step solution: create the path with create_dir_all() and then open it with read_dir(). But this creates a possible situation where the directory is deleted between the two calls (very unlucky in my Docker container... but anyway!).
I didn't find any way to do that on Linux, and I'm quite surprised, because that is a very common operation for files.
I found a related question, Create a directory and return a dirfd with `open`, but it focuses on file descriptors and so is more specific. The answer seems to say this doesn't prevent the race condition; I don't really understand the context. My only concern is to avoid creating the directory, then trying to open it, and having that fail. It's more for convenience and robustness of the code.
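For reference, the two-step version I have today looks roughly like this (just a sketch; open_or_create_dir is my own helper name, and the retry only papers over the race rather than removing it):

use std::fs::{self, ReadDir};
use std::io;
use std::path::Path;

// Create the directory if it is missing, then open it for reading.
// One retry covers the unlikely case where it is deleted in between.
fn open_or_create_dir(path: &Path) -> io::Result<ReadDir> {
    for _ in 0..2 {
        fs::create_dir_all(path)?;
        match fs::read_dir(path) {
            Ok(entries) => return Ok(entries),
            Err(e) if e.kind() == io::ErrorKind::NotFound => continue,
            Err(e) => return Err(e),
        }
    }
    Err(io::Error::new(io::ErrorKind::NotFound, "directory kept disappearing"))
}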
As I review these file system flags, am I correct in concluding that there is no flag you can pass to fs.promises.writeFile that will automatically create all the missing directories leading up to a filename? If I'm wrong, which flag does this?
I don't like solutions that check for the existence of the folders first before attempting writeFile, because after the folders are created that check happens every time you write to a file in that folder.
In my program, after the folders are created once, they should always be there, so it seems more efficient to only create the folders if there is an exception. However, I'm hoping there is a flag that avoids all this micro-management.
If a flag for auto-creating the folders doesn't exist for writeFile, then I'd like to attempt writeFile first, and then (only if there is an exception) create the folders recursively.
fs.promises.writeFile() does not automatically create the directory structure for you. That must exist first.
If you want to automatically create the path because you received an error indicative of a path problem, you can use fs.promises.mkdir() and pass the recursive flag.
And you could, of course, create your own wrapper function that calls fs.promises.writeFile(), and if it gets whatever error you get when the path doesn't exist (you'd have to test to see exactly what that error is), calls fs.promises.mkdir() and then repeats the fs.promises.writeFile(). It could all be wrapped in your own utility function.
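For example, a minimal sketch of such a wrapper, assuming the error for a missing path surfaces as ENOENT (verify that in your own testing) and using writeFileEnsuringDir as an illustrative name:

const fs = require('fs');
const path = require('path');

// Try the write first; only create the directory tree if the write fails
// because the path does not exist, then retry once.
async function writeFileEnsuringDir(file, data, options) {
    try {
        return await fs.promises.writeFile(file, data, options);
    } catch (err) {
        if (err.code !== 'ENOENT') throw err;      // some other failure
        await fs.promises.mkdir(path.dirname(file), { recursive: true });
        return fs.promises.writeFile(file, data, options);
    }
}

After the first successful write, the directory exists and the extra mkdir() work never runs again.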
Hi everyone,
When I deploy my package to a Linux environment, I get this error:
.../Linux-2.6c2.5-i686/Ncurses/Ncurses-15766.0-0/lib/libncurses.so.5 is encountered a second time at /apollo/_env/FBAMerchantAutoRemovalDaemon-swit1na.1755067.237551097.1107633519/perl/lib/perl5.8-dist/File/Find.pm line 542.
Though I have read the Perl script, I have no idea what is wrong. I suspect my environment is tainted. Does anyone have an idea what is wrong and how I can debug this problem? Thanks a lot in advance!
Zhe
From perldoc File::Find
follow
Causes symbolic links to be followed. Since directory trees with symbolic links (followed) may contain files more than once and may even have cycles, a hash has to be built up with an entry for each file. This might be expensive both in space and time for a large directory tree. See "follow_fast" and "follow_skip" below. If either follow or follow_fast is in effect:
It is guaranteed that an lstat has been called before the user's wanted() function is called. This enables fast file checks involving _. Note that this guarantee no longer holds if follow or follow_fast are not set.
There is a variable $File::Find::fullname which holds the absolute pathname of the file with all symbolic links resolved. If the link is a dangling symbolic link, then fullname will be set to undef.
So, if, for the purposes of your application, it is OK to follow symlinks, invoke find with the follow option set:
find({ wanted => \&process, follow => 1 }, $dir);
Or, consider if one of the other follow_skip behaviors is more appropriate for your application:
follow_skip
follow_skip==1, which is the default, causes all files which are neither directories nor symbolic links to be ignored if they are about to be processed a second time. If a directory or a symbolic link are about to be processed a second time, File::Find dies.
follow_skip==0 causes File::Find to die if any file is about to be processed a second time.
follow_skip==2 causes File::Find to ignore any duplicate files and directories but to proceed normally otherwise.
It may be that follow_skip => 2 is more appropriate for your application. Only you can make that decision.
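If you go that route, the call would look roughly like this (process and $dir stand in for your own callback and start directory):

use File::Find;

# Follow symlinks, but skip files and directories that turn up a second time
# instead of dying.
find({ wanted => \&process, follow => 1, follow_skip => 2 }, $dir);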
Julia contains a number of methods for making temporary files and directories.
I'm making fairly heavy use of them (and /dev/shm) to interface with libraries that really want to work with actual files (JLD/HDF5, and OpenStack Swift).
I had been assuming they would be deleted when the finalisers on the references to their names were called.
But then after exiting julia it seemed like they were all still there.
Will Linux delete them?
If the app didn't clean up after itself, the OS will delete the files eventually. It depends on system settings when temp files are deleted; for example, it can happen on boot, or nightly (via a cron job), or in some other way.
See this answer, for example: How is the /tmp directory cleaned up?
What you are likely looking for, given your surprise that they were not removed when they went out of scope, are the do-block versions of mktemp.
They are in the very documentation you linked.
mktemp(f::Function[, parent=tempdir()])
Apply the function f to the result of mktemp(parent) and remove the temporary file upon completion.
mktempdir(f::Function[, parent=tempdir()])
Apply the function f to the result of mktempdir(parent) and remove the temporary directory upon completion.
Which you can use like:
mktempdir("/dev/shm") do tdir
    fname = joinpath(tdir, name)
    # Do some things with your new temp filename `fname` in your tempdir `tdir`
end
# The directory referenced by `tdir`, and `fname`, have now been deleted.
I'm using CC.net against a Source Safe database, and have a problem: someone deleted some files from the database, but the deleted files weren't removed from the code directory. I didn't see a config switch or anything that I could set for it to clear the code directory prior to building.
Am I missing something?
As Alex says, there is a CleanCopy flag in the source control block. However, my situation was a little different: I use Subversion, and I found the CleanCopy flag was NOT doing what it says on the box.
To solve the problem I added a task which runs a batch file that clears out the build's working copy prior to checkout. It is a bit slower (about 1 min for a code base of 400 MB) but guarantees no old code.
Kindness,
Dan
All you need to do is set CleanCopy to true in your source control block. The documentation is very clear on this. The above answer is the wrong way.
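For reference, a rough sketch of what that looks like for a VSS source control block (the project path and working directory are placeholders, and you should double-check the exact element names against the CC.NET documentation for your block type):

<sourcecontrol type="vss">
  <!-- placeholders: point these at your own database project and build folder -->
  <project>$/MyProject</project>
  <workingDirectory>C:\Builds\MyProject</workingDirectory>
  <!-- wipe the working directory before getting the source -->
  <cleanCopy>true</cleanCopy>
</sourcecontrol>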