I am trying to combine files with cat from Rust. Below is some example code that fails with the error shown after it.
use std::process::Command as cmd;
cmd::new("/bin/cat")
.arg("1.txt 2.txt > 3.txt")
.spawn()
.expect("Failure");
/bin/cat: '1.txt 2.txt > 3.txt': No such file or directory
I also tried adding it as multiple arguments with
cmd::new("/bin/cat")
.args(["1.txt 2.txt", ">", "3.txt"])
.spawn()
.expect("Failure");
which errors out with
/bin/cat: '1.txt 2.txt': No such file or directory
/bin/cat: '>': No such file or directory
/bin/cat: 3.txt: No such file or directory
I've tried with cmd::new("/bin/sh") but that doesn't work either.
Here are two examples that take the suggestions in the comments into account.
fn main() {
// redirect output from rust
std::process::Command::new("/bin/cat")
.args(["1.txt", "2.txt"])
.stdout(std::fs::File::create("3.txt").unwrap())
.spawn()
.expect("spawn failure")
.wait()
.expect("wait failure");
//
// rely on the shell for redirection
std::process::Command::new("/bin/sh")
.args(["-c", "cat 1.txt 2.txt >3_bis.txt"])
.spawn()
.expect("spawn failure")
.wait()
.expect("wait failure");
}
A simpler and more robust solution is to use std::fs instead of cat:
use std::fs;
let file1 = fs::read_to_string("1.txt").expect("1.txt could not be opened");
let file2 = fs::read_to_string("2.txt").expect("2.txt could not be opened");
fs::write("3.txt", file1 + "\n" + &file2).expect("3.txt could not be written");
Note that this doesn't depend on cat: it works on every platform and operating system, and it lets you handle each error properly.
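One caveat: read_to_string pulls each file fully into memory. If the files may be large, a streaming variant with std::io::copy avoids that. A minimal sketch (the helper name concat_files is mine, not from the original answer):

```rust
use std::fs::File;
use std::io;

// Stream each input file into the output in chunks, so nothing is
// held fully in memory. Returns the total number of bytes copied.
fn concat_files(inputs: &[&str], output: &str) -> io::Result<u64> {
    let mut out = File::create(output)?;
    let mut total = 0;
    for name in inputs {
        let mut input = File::open(name)?;
        total += io::copy(&mut input, &mut out)?;
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    // Sample inputs so the sketch runs standalone.
    std::fs::write("1.txt", "first\n")?;
    std::fs::write("2.txt", "second\n")?;
    concat_files(&["1.txt", "2.txt"], "3.txt")?;
    Ok(())
}
```

Unlike the read_to_string version, this does not insert a newline between files; write one into the output inside the loop if you want that behavior.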
use std::process::Command;
fn main() {
let output = Command::new("/bin/bash")
.args(&["-c", "docker","build", "-t", "postgres:latest", "-", "<>", "dockers/PostgreSql"])
.output()
.expect("failed to execute process");
println!("{:?}", output);
}
The above code runs fine but prints the output only after the docker script has completely run. I want to see the command's output in my Linux terminal as it happens.
I tried all the combinations given in the documentation and read it many times, but I don't understand how to redirect stdout to my terminal window.
According to the documentation stdout has default behavior depending on how you launch your subprocess:
Defaults to inherit when used with spawn or status, and defaults to piped when used with output.
So stdout is piped when you call output(). What does piped mean? It means the child process's output is captured and handed back to the parent process (our Rust program in this case). std::process::Command is kind enough to collect this for us as bytes in output.stdout, which we can convert to a string:
use std::process::{Command, Stdio};
let output = Command::new("echo")
.arg("Hello, world!")
.stdout(Stdio::piped())
.output()
.expect("Failed to execute command");
assert_eq!(String::from_utf8_lossy(&output.stdout), "Hello, world!\n");
// Nothing echoed to console
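Conversely, per the quoted default, status() leaves stdout as inherit, so the same echo does reach the console (this assumes an echo binary is on the PATH, as in the example above):

```rust
use std::process::Command;

fn main() {
    // status() defaults stdout to inherit: the child's output goes
    // straight to the parent's console instead of being captured.
    let status = Command::new("echo")
        .arg("Hello, world!")
        .status()
        .expect("Failed to execute command");
    assert!(status.success());
    // "Hello, world!" was printed by the child, not by us.
}
```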
Now that we understand where stdout is currently going, if you want the output streamed to the console, launch the process with spawn():
use std::process::Command;
fn main() {
let output = Command::new("/bin/bash")
.args(&["-c", "echo hello world"])
.spawn()
.expect("failed to execute process");
println!("{:?}", output);
}
Notice also that in this latter example I pass the full echo hello world command as one string. This is because bash -c takes a single command string, splits it by whitespace itself, and runs it. If you were in your console executing a docker command through a bash shell you would say:
bash -c "docker run ..."
The quotes above tell the terminal to keep the third arg together and not split it by space. The equivalent in our Rust array is to pass the full command as a single string (assuming you wish to call it through bash -c, of course).
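If you also want your program to see each line as it arrives, rather than just letting the child inherit the console, a middle ground is to pipe stdout and forward it line by line. A sketch, using /bin/sh and a toy command in place of the docker invocation:

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() {
    let mut child = Command::new("/bin/sh")
        .args(["-c", "echo one; echo two"])
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn");

    // take() the handle so we can still wait() on `child` afterwards.
    let stdout = child.stdout.take().expect("no stdout handle");
    for line in BufReader::new(stdout).lines() {
        // Each line is handled as soon as the child emits it.
        println!("got: {}", line.expect("read error"));
    }
    child.wait().expect("wait failure");
}
```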
I had luck with the other answer, but I had to modify it a little: I needed to add the wait method:
use std::{io, process::Command};
fn main() -> io::Result<()> {
let mut o = Command::new("rustc").arg("-V").spawn()?;
o.wait()?;
Ok(())
}
otherwise the parent program may end before the child does.
https://doc.rust-lang.org/std/process/struct.Child.html#method.wait
I want a function for getting directory entries on Linux. I use ioutil.ReadDir and usually it is fast.
But if I want to read some mounted virtual file system on /run/user/1000/gvfs/, this function becomes slow. If the directory has many file entries I need to wait a long time.
I can use the ls command in a terminal and the result is the same.
When I tried ls -U -a -p -1 I got line-by-line output immediately.
I tried running this in Go with exec.Command, but it didn't work asynchronously: Go waits for the full program output. What did I do wrong?
m.cmd = exec.Command("ls", "-U", "-a", "-p", "-1")
// for example some "slow" directory:
m.cmd.Dir = "/run/user/1000/gvfs/dav:host=webdav.yandex.ru,ssl=true,user=...../"
reader, _ := m.cmd.StdoutPipe()
bufReader := bufio.NewReader(reader)
go func() {
m.cmd.Start()
for {
line, _, err := bufReader.ReadLine()
if err != nil {
break
}
linestr := string(line)
if linestr != "./" && linestr != "../" {
fmt.Println(linestr)
}
}
}()
I need line by line printing immediately in Go.
Try ls -U -a -p -1 | cat to see if you get line-by-line output.
Go doesn't control ls; ls does line-by-line writing if ls chooses to do so, and ls chooses not to do that when its output is a pipe. You could allocate a pty pair and use that, but that's the wrong way to do this.
ioutil.ReadDir first reads the entire directory (by calling Readdir(-1)), then sorts the file names. If you use os.Open to open the directory, then call the Readdir or Readdirnames function with a small (but not negative) number, you should get something more to your liking.
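A sketch of that suggestion, reading names in small batches so entries can be printed as they are read (the batch size 64 is an arbitrary choice):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// printDirStreaming prints entry names as they are read, in directory
// order, without reading the whole listing up front.
func printDirStreaming(path string) error {
	d, err := os.Open(path)
	if err != nil {
		return err
	}
	defer d.Close()

	for {
		// A small positive count makes Readdirnames return batches
		// instead of the entire directory at once.
		names, err := d.Readdirnames(64)
		for _, name := range names {
			fmt.Println(name)
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	if err := printDirStreaming("."); err != nil {
		panic(err)
	}
}
```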
I created a small script in Perl and I am really new to this. The script is supposed to look at a given argument and create a directory tree in it. That part works. The second part (the nested if statement) does not: when no argument is given, the script asks you to input a directory of your choice. I believe the nested if statement is failing because of the $file input, but I'm not entirely sure what's wrong. This is probably something really simple, but I have not been able to find the solution. Thank you in advance for the help and tips.
#! /usr/bin/perl
if ($#ARGV == -1)
{
print "Please enter default directory:";
my $file=<STDIN>;
if (-d $file)
{
chdir $file;
system("mkdir Data");
system("mkdir Data/Image");
system("mkdir Data/Cache");
print "Structure Created";
}
else
{
print "Directory does not exist";
}
}
else
{
chdir $ARGV[0];
system("mkdir Data");
system("mkdir Data/Image");
system("mkdir Data/Cache");
print ("Structure Created");
}
print ("\n");
The test -d $file is failing because what is entered via STDIN ends with a newline, after the string that specifies the directory name. You need chomp($file);
However, there are a few more points I would like to bring up.
Most importantly, there is repeated code in both branches. You really do not want to do that. It can, and does, cause trouble later. Instead, decide on the directory name, and then make it.
Second, there is no reason to go out to the system in order to make a directory. It is far better to do it in Perl, and there are good modules for this.
use strict;
use warnings;
use File::Path qw(make_path);
use Cwd qw(getcwd);
my $dir;
if (not @ARGV) {
print "Please enter default directory: ";
$dir = <STDIN>;
chomp $dir;
}
else {
$dir = $ARGV[0];
}
die "No directory $dir" if not -d $dir;
my $orig_cwd = getcwd();
chdir $dir or die "Can't chdir to $dir: $!";
my @dirs = map { "Data/$_" } qw(Image Cache);
my @dirs_made = make_path( @dirs, { verbose => 1 } );
print "Created directories:\n";
print "$_\n" for @dirs_made;
I build the directory list using map so as to avoid repeating the Data/... strings, and for later flexibility. You can of course just type the names in, but that tends to invite silly mistakes.
I used File::Path to make the directories. It builds the whole path, like mkdir -p, and has a few other useful options that you can pass in { }, including error handling. There are other modules as well, for example Path::Tiny with its mkpath (and a lot of other goodies).
Note that if you chdir you probably want to record the current working directory first (Cwd::getcwd returns it), and you want to check chdir for errors. But you don't have to chdir if there are no other reasons for it. Just include the $dir name in the map:
# No chdir needed here
my @dirs = map { "$dir/Data/$_" } qw(Image Cache);
I have the following perl script that is intended to accept command line arguments that will archive all of a users data files into a zip file and then delete the original data. The script does alright, but when run again with a different user as the argument, it overwrites the previous data in the userData.zip file. I have searched and not been able to find how to perform this task. It should continue to accept users as an argument and append their folders to the userData.zip file.
Any help is appreciated.
use 5.14.2;
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES :CONSTANTS );
use File::Path;
my ($DATAFILEIN, $DATAFILEOUT);
my ($new,$zip);
use constant COLUMNS => 6;
sub main {
verifyArguments();
setDataFileIn();
zipFiles();
deleteUserFiles();
#setDataFileOut();
#printData();
#writeData();
}
sub verifyArguments {
if (!(@ARGV) || !(-e $ARGV[0])) {
die "\n\nYou must specify correct file name upon command invocation.\n\n";
}
}
sub setDataFileIn {
$DATAFILEIN = $ARGV[0];
}
sub zipFiles {
print "\nBacking up ".$DATAFILEIN."\n";
sleep 1;
$zip = Archive::Zip->new();
opendir (DIR, $DATAFILEIN) or die $!;
while (my $file = readdir(DIR)) {
# Use -f to look for a file
next unless (-f $DATAFILEIN."\\".$file);
$zip->addFile($DATAFILEIN."\\".$file, );
print "Added $file to zip\n";
}
closedir(DIR);
my $fileName = $DATAFILEIN;
unless ( $zip->writeToFileNamed('userData.zip') == AZ_OK ) {
die 'write error';
}
print "Successfully backed up $fileName to userData.zip\n";
}
sub deleteUserFiles{
rmtree($DATAFILEIN);
}
main();
Have you read this portion of the Archive::Zip FAQ?
Can't Read/modify/write same Zip file
Q: Why can't I open a Zip file, add a member, and write it back? I get
an error message when I try.
A: Because Archive::Zip doesn't (and can't, generally) read file
contents into memory, the original Zip file is required to stay around
until the writing of the new file is completed.
The best way to do this is to write the Zip to a temporary file and
then rename the temporary file to have the old name (possibly after
deleting the old one).
Archive::Zip v1.02 added the archive methods overwrite() and
overwriteAs() to do this simply and carefully.
See examples/updateZip.pl for an example of this technique.
I don't see $zip->overwrite() in your code.
The best place to find information on CPAN modules is http://metacpan.org. In this case, the Archive::Zip page. That page has a documentation link to Archive::Zip::FAQ. You can read it there, or you can probably just type perldoc Archive::Zip::FAQ on your system where you have the module installed.
The examples are part of the downloaded package. If you used the cpan command to install Archive::Zip, then the examples would be in the build location. By default, that would be ~/.cpan/build/Archive-Zip-*/examples.
I have a Perl script to which I give a path to a directory as input.
The directory has XML files inside it.
In my code I iterate through all the XML files and create the absolute path for each one. The code is working fine.
#!/usr/bin/perl
use File::Spec;
$num_args = $#ARGV + 1;
if ($num_args != 1) {
print "\nUsage: $0 <input directory>\n";
exit;
}
my $dirPath = $ARGV[0];
opendir(DIR, $dirPath);
my @docs = grep(/\.xml$/,readdir(DIR));
foreach my $file (@docs)
{
my $abs_path = join("",$dir,$file);
print "absolute path is $abs_path";
}
The question I have here is:
joining $dirPath and $file with no separator means that $dirPath must end in a "/". So is there any way, or a built-in function in Perl, that takes care of this condition and replaces the join call?
All I want is not to worry about the separator "/". Whether the script is called with the path "/test/dir_to_process" or "/test/dir_to_process/", I should get the correct absolute path to all the XML files present without worrying about the separator.
Let me know if anyone has any suggestions.
Please take heed of the advice you are given. It is ridiculous to keep asking questions when comments and answers to previous posts are being ignored.
You must always use strict and use warnings at the top of every Perl program you write, and declare every variable using my. It isn't hard to do, and you will be reprimanded if you post code that doesn't have these measures in place.
You use the File::Spec module in your program but never make use of it. It is often easier to use File::Spec::Functions instead, which exports the methods provided by File::Spec so that there is no need to use the object-oriented call style.
catfile will correctly join a file (or directory) name to a path, doing the right thing if path separators are incorrect. This rewrite of your program works fine.
#!/usr/bin/perl
use strict;
use warnings;
use File::Spec::Functions 'catfile';
if (@ARGV != 1) {
print "\nUsage: $0 <input directory>\n";
exit;
}
my ($dir_path) = @ARGV;
my $xml_pattern = catfile($dir_path, '*.xml');
while ( my $xml_file = glob($xml_pattern) ) {
print "Absolute path is $xml_file\n";
}
The answer is in the documentation for File::Spec, e.g., catfile:
$path = File::Spec->catfile( #directories, $filename );
or catpath:
$full_path = File::Spec->catpath( $volume, $directory, $file );
This will add the trailing slash if not there:
$dirPath =~ s!/*$!/!;