How do I copy a file retaining the original permissions? - linux

I would like to copy a file using pure Go, emulating the behavior of cp -p.
My copy function currently looks like:
// copyFile creates a copy of the file located at `src` at `dst`.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	_, err = io.Copy(out, in)
	if err != nil {
		out.Close()
		return err
	}
	return out.Close()
}
which will create dst owned by whoever is running the process. Instead, I would like to keep the owner and permissions of src, i.e. what I get from:
// copyFile creates a copy of the file located at `src` at `dst`.
func copyFile(src, dst string) error {
	cmd := exec.Command("cp", "-p", src, dst)
	return cmd.Run()
}
but without having to call through to system commands (for portability). Everything I tried ended up calling through to something else. Is this possible to do in Go?

It's definitely possible in Go but not in a system-independent manner (because different OS kernels have different ideas about what "permissions" are, and how they are implemented).
Also consider that the identity used by the process copying the file might have insufficient permissions to set permissions on the target file (for instance, on Linux, a non-root user cannot change the owning group of a file to a group the user is not a member of, and it obviously cannot set the file owner to anyone other than themselves; in other words, mere mortals cannot transfer ownership, only share files within their own "circles" defined by group membership).
Basically, to do what you're after, you have to Stat the source file and then os.Chmod (and os.Chown, if needed) the destination file after it is created.
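For example, here is a minimal sketch of that approach. The owner/group part type-asserts the Stat result to syscall.Stat_t, so it is Unix-specific, and the Chown call will normally fail unless the process runs as root:
// copyFile copies src to dst, then applies src's permission bits and
// (on Unix) owner/group to dst. A sketch of the Stat/Chmod/Chown
// approach described above, not a complete cp -p replacement.
func copyFile(src, dst string) error {
	fi, err := os.Stat(src)
	if err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err = io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err = out.Close(); err != nil {
		return err
	}
	// Copy the permission bits.
	if err = os.Chmod(dst, fi.Mode().Perm()); err != nil {
		return err
	}
	// Copy owner and group where the platform exposes them (Unix only).
	if st, ok := fi.Sys().(*syscall.Stat_t); ok {
		return os.Chown(dst, int(st.Uid), int(st.Gid))
	}
	return nil
}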
Also please note that Linux-native filesystems support POSIX ACLs on files, and each file might or might not have them.
Whether you include these in what you define as "permissions" is an open question.

Related

How to copy a file from one directory to another using "os/exec" package in GO

If I am in directory A running the Go code, and I need to copy a file from directory B to directory C, how do I do it? I tried adding cmd.Dir = "B", which lets me copy files within directory "B", but when I try the full path for directory "C" it throws the error "exit status 1".
basic code sample
Currently in directory A with location "/var/A"
cmd := exec.Command("cp", "/var/C/c.txt", "/var/B/")
err := cmd.Run()
"os/exec" is the Go package used to run external programs, which would include Linux utilities.
// The command name is the first arg, subsequent args are the
// command arguments.
cmd := exec.Command("tr", "a-z", "A-Z")
// Provide an io.Reader to use as standard input (optional)
cmd.Stdin = strings.NewReader("some input")
// And a writer for standard output (also optional)
var out bytes.Buffer
cmd.Stdout = &out
// Run the command and wait for it to finish (there are other
// methods that allow you to launch without waiting).
err := cmd.Run()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("in all caps: %q\n", out.String())
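Applied to the command from the question, here is a short sketch that passes absolute paths for both source and destination (so cmd.Dir is not needed) and captures cp's own error output, which is usually more informative than the bare "exit status 1":
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Both paths are absolute, so the current working directory
	// of the Go process does not matter.
	cmd := exec.Command("cp", "/var/C/c.txt", "/var/B/")
	// CombinedOutput runs the command and returns stdout+stderr,
	// so cp's actual error message is visible on failure.
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("cp failed: %v: %s", err, out)
	}
	fmt.Println("copy succeeded")
}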

perl untar single file

So I'm running into an issue with my code here and I'm not sure what exactly I'm doing wrong. I pass it the two arguments, it searches for the file, but it always goes to "does not exist".
I pass this to the script:
perl restore.cgi users_old_52715.tar.gz Ace_Walker
It's not finding the file; it exists, I assure you.
#!/usr/bin/perl
use Archive::Tar;
my $tarPath = $ARGV[0];
my $playerfile = $ARGV[1].".ini";
my $tar = Archive::Tar->new($tarPath);
if ($tar->contains_file($playerfile)) {
    $tar->read($tarPath);
    $tar->extract_file($playerfile, './' );
    print "Successfully restored $playerfile to production environment\n";
    exit 0;
} else {
    print $playerfile." does not exist in this archive!\n";
    exit 0;
}
Just writing Scott Hunter's comment as an answer:
Try using an absolute path instead of a relative one.
if( $tar->extract_file($playerfile, './'.$playerfile )){
    print "Successfully restored $playerfile to production environment\n";
}
exit 0;
man Archive::Tar :
$tar->extract_file( $file, [$extract_path] )
Write an entry, whose name is equivalent to the file name provided to disk. Optionally takes a second parameter, which is the full native path (including filename) the entry will be written to.

Changing how nodejs require() fetches files

I'm looking to monkey-patch require() to replace its file loading with my own function. I imagine that internally require(module_id) does something like:
Convert module_id into a file path
Load the file path as a string
Compile the string into a module object and set up the various globals correctly
I'm looking to replace step 2 without reimplementing steps 1 + 3. Looking at the public API, there's require() which does 1 - 3, and require.resolve() which does 1. Is there a way to isolate step 2 from step 3?
I've looked at the source of require mocking tools such as mockery -- all they seem to be doing is replacing require() with a function that intercepts certain calls and returns a user-supplied object, and passes on other calls to the native require() function.
For context, I'm trying to write a function require_at_commit(module_id, git_commit_id), which loads a module and any of that module's requires as they were at the given commit.
I want this function because I want to be able to write certain functions that a) rely on various parts of my codebase, and b) are guaranteed to not change as I evolve my codebase. I want to "freeze" my code at various points in time, so thought this might be an easy way of avoiding having to package 20 copies of my codebase (an alternative would be to have "my_code_v1": "git://..." in my package.json, but I feel like that would be bloated and slow with 20 versions).
Update:
So the source code for module loading is here: https://github.com/joyent/node/blob/master/lib/module.js. Specifically, to do something like this you would need to reimplement Module._load, which is pretty straightforward. However, there's a bigger obstacle, which is that step 1, converting module_id into a file path, is actually harder than I thought, because resolveFilename needs to be able to call fs.exists() to know where to terminate its search... so I can't just substitute out individual files, I have to substitute entire directories, which means that it's probably easier just to export the entire git revision to a directory and point require() at that directory, as opposed to overriding require().
Update 2:
Ended up using a different approach altogether... see answer I added below
You can use the require.extensions mechanism. This is how the coffee-script coffee command can load .coffee files without ever writing .js files to disk.
Here's how it works:
https://github.com/jashkenas/coffee-script/blob/1.6.2/lib/coffee-script/coffee-script.js#L20
loadFile = function(module, filename) {
  var raw, stripped;
  raw = fs.readFileSync(filename, 'utf8');
  stripped = raw.charCodeAt(0) === 0xFEFF ? raw.substring(1) : raw;
  return module._compile(compile(stripped, {
    filename: filename,
    literate: helpers.isLiterate(filename)
  }), filename);
};

if (require.extensions) {
  _ref = ['.coffee', '.litcoffee', '.md', '.coffee.md'];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    ext = _ref[_i];
    require.extensions[ext] = loadFile;
  }
}
Basically, assuming your modules have a set of well-known extensions, you should be able to use this pattern of a function that takes the module and filename, does whatever loading/transforming you need, and then returns an object that is the module.
This may or may not be sufficient to do what you are asking, but honestly from your question it sounds like you are off in the weeds somewhere far from the rest of the programming world (don't take that harshly, it's just my initial reaction).
So rather than mess with the node require() module, what I ended up doing is archiving the given commit I need to a folder. My code looks something like this:
# commit_id is the commit we want
# (note that if we don't need the whole repository,
# we can pass "commit_id path_to_folder_we_need")
#
# path is the path to the file you want to require starting from the repository root
# (ie 'lib/module.coffee')
#
# cb is called with (err, loaded_module)
#
require_at_commit = (commit_id, path, cb) ->
  dir = 'old_versions' # make sure this is in .gitignore!
  dir += '/' + commit_id
  do_require = -> cb null, require dir + '/' + path
  if not fs.existsSync(dir)
    fs.mkdirSync(dir)
    cmd = 'git archive ' + commit_id + ' | tar -x -C ' + dir
    child_process.exec cmd, (error) ->
      if error
        cb error
      else
        do_require()
  else
    do_require()

Pattern match data from file in Lua

I've been tasked with creating a new server modification for Crysis Wars. I have run into a particular issue: it cannot read the old ban file (this is required in order to keep the server consistent). The Lua code itself does not seem to have any errors, but it's just not getting any of the data.
Looking at the code I'm using for this below, can you find anything wrong with it?
This is the code I'm using for this:
function rX.CheckBanlist(player)
    local Root = System.GetCVar("sys_root");
    local File = ""..Root.."System/Bansystem/Raptor.xml";
    local FileHnd = io.open(File, "r");
    for line in FileHnd:lines() do
        if (not string.find(line, "User:Read")) then
            System.Log("[rX] File Read Error: System/Raptor/Banfile.xml, The contents are unexpected.");
            return false;
        end
        local Msg, Date, Reason, Type, Domain = string.match(line, "User:Read( '(.*)', { Date='(.*)'; Reason='(.*)'; Typ='(.*)'; Info='(.*)'; } );");
        local rldomain = g_gameRules.game:GetDomain(player.id);
        if (Domain == rldomain) then
            return true;
        else
            return false;
        end
    end
end
Also, this is what a line of the actual file looks like, but I can't get the " (double quote) to work properly in the Lua pattern. Could this be the issue?
User:Read( "Banned", { Date="31.03.2011"; Reason="WEBSTREAM"; Typ="Inetnum"; Info="COMPUTER.SED.gg"; } );
You may prefer using Lua's [[ ]] long strings when you want to include quotes inside quotes, etc.
Also, you'd have to escape the ( and ) while matching:
local Msg, Date, Reason, Type, Domain = line:match([[User:Read%( "(.-)", { Date="(.+)"; Reason="(.+)"; Typ="(.+)"; Info="(.+)"; } %);]])
And the results will be as expected: http://codepad.org/gN8kSL6H

Which objects are finalized in Go by default and what are some of the pitfalls of it?

The function runtime.SetFinalizer(x, f interface{}) sets the finalizer associated with x to f.
What kind of objects are finalized by default?
What are some of the unintended pitfalls caused by having those objects finalized by default?
The following objects are finalized by default:
os.File: The file is automatically closed when the object is garbage collected.
os.Process: Finalization will release any resources associated with the process. On Unix, this is a no-operation. On Windows, it closes the handle associated with the process.
On Windows, it appears that package net can automatically close a network connection.
The Go standard library does not set a finalizer on kinds of objects other than those mentioned above.
There seems to be only one potential issue that may cause problems in actual programs: When an os.File is finalized, it will make a call to the OS to close the file descriptor. In case the os.File has been created by calling function os.NewFile(fd int, name string) *File and the file descriptor is also used by another (different) os.File, then garbage collecting either one of the file objects will render the other file object unusable. For example:
package main

import (
	"fmt"
	"os"
	"runtime"
)

func open() {
	os.NewFile(1, "stdout")
}

func main() {
	open()
	// Force finalization of unreachable objects
	_ = make([]byte, 1e7)
	runtime.GC()
	_, err := fmt.Println("some text") // Print something via os.Stdout
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not print the text")
	}
}
prints:
could not print the text
Just jump into os.NewFile's source code:
// NewFile returns a new File with the given file descriptor and name.
func NewFile(fd uintptr, name string) *File {
	fdi := int(fd)
	if fdi < 0 {
		return nil
	}
	f := &File{&file{fd: fdi, name: name}}
	runtime.SetFinalizer(f.file, (*file).close) // <<<<<<<<<<<<<<
	return f
}
When Go runs the GC, it will run any finalizers bound to that object.
When you open a new file, the Go library binds a finalizer to the returned object for you.
When you are not sure what the GC will do to an object, jump to the source code and check whether the library has set a finalizer on it.
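You can use the same mechanism on your own types. A small sketch (the resource type and its id field here are made up purely for illustration):
package main

import (
	"fmt"
	"runtime"
)

// resource is a hypothetical wrapper around some handle.
type resource struct{ id int }

func newResource(id int) *resource {
	r := &resource{id: id}
	// Ask the runtime to run a cleanup function once r becomes
	// unreachable; a safety net, not a replacement for Close.
	runtime.SetFinalizer(r, func(r *resource) {
		fmt.Println("finalizing resource", r.id)
	})
	return r
}

func (r *resource) Close() {
	// After an explicit release, drop the finalizer again.
	runtime.SetFinalizer(r, nil)
	fmt.Println("closed resource", r.id)
}

func main() {
	newResource(1)         // left to the finalizer
	newResource(2).Close() // released explicitly
	runtime.GC()           // finalizers may run now, but it is not guaranteed
}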
"What kind of objects are finalized by default?"
Nothing in Go is IMO finalized by default.
"What are some of the unintended pitfalls caused by having those objects finalized by default?"
As per above: none.
