Windows 11 here.
Running ffmpeg transcodes from Cygwin.
If I run them directly from the command prompt, I get 100% CPU utilization, but if I put the same command in a bash script I get only ~50% CPU utilization.
So I chained a multi-line command, which works, but I need to transcode hundreds of clips and that approach is cumbersome.
I also tried Perl and got the same slowdown.
I tried forcing the number of threads ffmpeg should use, but it did not make a difference.
Any suggestions?
Turns out it was a path conflict: I have two versions of ffmpeg installed, and depending on how it was called, a different version would run.
e.g.
perl -e "`ffmpeg`"
will run the updated version, whereas
perl -e "foreach(0 .. 5){`ffmpeg`}"
would run the outdated version.
Running ffmpeg straight in the console used the new version; running it from a bash script ran the outdated one.
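For anyone hitting the same problem, a quick way to see which binary each context resolves is sketched below; run it both in the console and from inside a script and compare the paths (the exact locations will of course differ per system):
type -a ffmpeg       # bash: list every ffmpeg on the PATH, in lookup order
which -a ffmpeg      # same idea via which
command -v ffmpeg    # the one a script would actually invoke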
I am trying to make a shell script to automatically download, compile, and build a program I've made. You can see the full script on my blog. My program requires NodeJS 11 or newer, so I am trying to make my script output an appropriate error message in case a suitable version is not installed. Here is what I've tried:
node_version=$(node -v) # This does not seem to work in Git Bash on Windows.
# "node -v" outputs version in the format "v18.12.1"
node_version=${node_version:1} # Remove 'v' at the beginning
node_version=${node_version%\.*} # Remove the trailing ".*" (the patch version).
node_version=${node_version%\.*} # Remove the trailing ".*" again (the minor version), leaving only the major version.
node_version=$(($node_version)) # Convert the NodeJS version number from a string to an integer.
if [ $node_version -lt 11 ]
then
echo "NodeJS version is lower than 11 (it is $node_version), you will probably run into trouble!"
fi
However, when I try to run it in Git Bash on Windows 10, with NodeJS 18 installed and in PATH, here is what I get:
stdout is not a tty
NodeJS version is lower than 11 (it is 0), you will probably run into trouble!
What is going on here?
The problem here is a mismatch between how a program with Unix/POSIX expectations assumes a terminal behaves and how terminals actually behave on Windows. This is a known problem, and there is a longer discussion of the issue on the Node.js repository (although it is not unique to Node.js).
To compensate, there is a tool, winpty, which provides the missing behavior (or alternatively node-pty). You normally do not need to install it explicitly; it should be included in the normal Git Bash installation.
For this reason, node is normally set up as an alias for winpty node. That, however, causes problems for stdout/stdin redirects, so a common practice is to invoke node as node.exe (e.g. node.exe --help | wc -l) when you want to bypass winpty.
For your case you probably want to change your script to
if ...is-running-on-windows... # Check https://stackoverflow.com/q/38086185/23118
then
# I expect one of these to work
NODE_BINARY=node.exe
NODE_BINARY="winpty node"
else
NODE_BINARY=node
fi
node_version=$($NODE_BINARY -v)
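For completeness, here is a minimal sketch of one common way to do the Windows check, assuming Git Bash and Cygwin set $OSTYPE to msys/cygwin (the linked question covers alternatives):
case "$OSTYPE" in
  msys*|cygwin*) NODE_BINARY=node.exe ;;  # Git Bash (msys) or Cygwin on Windows
  *)             NODE_BINARY=node ;;      # everything else
esac
node_version=$("$NODE_BINARY" -v)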
I know there are many Linux experts here; I wish to get a little help with the at command in Ubuntu.
I have been troubled by the at command in Ubuntu (18.04 and 20.04) for quite a while, but I don't know where I made a mistake. I've tried at on three of my Ubuntu systems and it doesn't work on any of them. at is a very handy and nice job scheduler, and I really want to get it to work so that I do not have to manually launch programs late at night on a shared Ubuntu server. I read many tutorials on the at command; here is a very good one.
at now + 1 minutes -f ~/myscript.sh looks really great and can save me lots of energy. When myscript.sh is extremely simple, at now + 1 minutes -f ~/myscript.sh runs smoothly and I get what I expect. Here is everything I have in myscript.sh:
echo $(date) > ~/Desktop/time.txt
Beyond that, however, it never worked for me. For example, when I change myscript.sh to
echo $(date) > ~/Desktop/time.txt
pycharm.sh
Basically, what myscript.sh does is note down the time and open the PyCharm IDE. I can run sh myscript.sh without at and it works very well. However, when I run at now + 1 minutes -f ~/myscript.sh, the time is noted down but PyCharm is never opened (I can see the process in htop whenever PyCharm is open). Also, at now + 1 minutes -f ~/script.sh does not work with any of my other shell scripts.
Could you please help me understand where I went wrong and how to make it work? Thank you very much.
PyCharm and other GUI programs need a lot of information from your environment. The atd daemon which runs jobs for at does not have access to this environment. You will need to specify it directly.
I recommend running printenv redirected to a file in an at job. Then compare that to printenv running from a terminal in your GUI session. Find the differences and see if you can set them up the same way at the beginning of your at script.
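A sketch of that comparison (the file paths are just examples). For X11 GUI programs such as PyCharm, the key variables are typically DISPLAY and XAUTHORITY:
echo "printenv > /tmp/at_env.txt" | at now + 1 minutes   # environment as atd sees it
printenv > /tmp/gui_env.txt                              # environment in your GUI terminal
diff /tmp/at_env.txt /tmp/gui_env.txt
# Then, at the top of myscript.sh (values are examples -- adjust to what the diff shows):
export DISPLAY=:0
export XAUTHORITY="$HOME/.Xauthority"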
I have Ubuntu 16.04 and I've installed Perl 5.8.7 and 5.18.2, both with threads activated, and 5.18.1 without threads.
The purpose was to use a version of Perl with threads instead of forks, as I have multiple scripts already written with threads, and forks is not a proper multi-threading model (it just forks processes).
The first problem I get is when installing modules via the cpanm -fi [name_of_module] command. As a matter of fact, the command doesn't return anything at all, just 'Perl'. The same thing happens with whatever module I try to install for use in my scripts.
I think this problem is linked to the fact that I'm able to use 'threads' only when I run the scripts without sudo (e.g. perl [name_of_the_script]); when I run one with 'sudo perl [name_of_the_script]', it says 'the current version of Perl doesn't support threads'!
That's quite strange.
The perlbrew environment variables are set up correctly, and when I type 'which perl', the system points to the new-version directory as expected.
Don't know how to proceed.
I see that you posted several questions in one paragraph. I'll try to answer the ones that I can.
cpanm -fi XXX does not "return anything at all"
I'm not sure I get this part. If XXX failed to be installed, there should probably be some error messages on the screen. The fact that perlbrew list-modules prints nothing but Perl implies that probably nothing is installed.
It could be that your cpanm executable is implicitly installing stuff for the system perl instead. You could verify this by checking the first line of the cpanm executable (e.g. head "$(which cpanm)"). If it is not #!/usr/bin/env perl, it is probably the wrong one. You want the one installed by: perlbrew install-cpanm
sudo perl
I wonder if your PATH is actually correctly set -- running perl -V after perlbrew use 5.8.7 will show you enough version information to tell whether perlbrew itself is working properly.
You probably also need sudo -E perl instead: sudo resets env vars unless you ask it not to (the -E option), and PERL5LIB is probably needed.
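A sketch of both checks (the version number is illustrative; note that even with -E, the sudoers secure_path setting can still reset PATH, which is why the full path to perl is passed below):
perlbrew use 5.18.2
perl -V | head -n 3                                   # confirm the perlbrew perl is active
sudo -E "$(which perl)" -Mthreads -e 'print "ok\n"'   # full path sidesteps secure_path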
This is a follow-up to this question.
Specs
My system is a dedicated server running Ubuntu Desktop, Release 12.04 (precise) 64-bit, 3.14.32-xxxx-std-ipv6-64. Neither the release nor the kernel can be upgraded, but I can install any package.
Problem
The problem described in the question above seems to be solved; however, the solution doesn't work for me. I've installed the latest lftp and parallel packages and they seem to work fine by themselves.
Running lftp works fine.
Running ./job.sh ftp.microsoft.com works fine, but I needed to chmod +x the script first.
Running sed 's/|.*$//' end_unique.txt | xargs parallel -j20 ./job.sh ::: does not work and produces bash errors in the form of /bin/bash: <server>: command not found.
To simplify things, I cleaned the input file end_unique.txt, now it has the following format for each line:
<server>
Each line ends in a CRLF, because it is imported from a Windows server.
Edit 1:
This is the job.sh script:
#!/bin/sh
server="$1"
lftp -e "find .; exit" "$server" >"$server-files.txt"
Edit 2:
I took the file and ran it through fromdos. Now it should be in standard Unix format, one server per line. Keep in mind that the server entries in the file can vary in format:
ftp.server.com
www.server.com
server.com
123.456.789.190
etc. All of those servers are ftp servers, accessible by ftp://<serverfromfile>/.
With :::, parallel expects the list of arguments it needs to complete the commands it's going to run to appear on the command line, as in
parallel -j20 ./job.sh ::: server1 server2 server3
Without ::: it reads the arguments from stdin, which serves us better in this case. You can simply say
parallel -j20 ./job.sh < end_unique.txt
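If the input file were still in Windows (CRLF) format, the carriage returns could be stripped in the same pipeline; a sketch (fromdos, as in Edit 2, works just as well):
tr -d '\r' < end_unique.txt | parallel -j20 ./job.sh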
Addendum: Things that can go wrong
Make certain of two things:
That you are using GNU parallel and not another version (such as the one from moreutils), because (as far as I'm aware) only the GNU version supports reading an argument list from stdin, and
That GNU parallel is not configured to disable the GNU extensions. It turned out, after a lengthy discussion in the comments, that they are disabled by default on Ubuntu 12.04, so it is not inconceivable that this sort of thing might be found elsewhere (particularly downstream from Ubuntu). Such a configuration can hide in any of the following (a quick check is sketched after this list):
The environment variable $PARALLEL,
/etc/parallel/config, or
~/.parallel/config
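A quick way to inspect all three places (a sketch; --tollef is the switch Ubuntu 12.04 shipped to disable the GNU behavior, and your configuration may differ):
parallel --version | head -n 1                            # GNU parallel identifies itself here
echo "PARALLEL=$PARALLEL"                                 # per-session switches live here
cat /etc/parallel/config ~/.parallel/config 2>/dev/null   # look for e.g. --tollef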
If the GNU version of parallel is not available to you, and if your argument list is not too long for the shell and none of the arguments in it contain whitespace, the equivalent with the moreutils parallel is
parallel -j20 job.sh -- $(cat end_unique.txt)
This did not work for OP because the file contained more servers than the shell was willing to put into a command line, but it might work for others with similar problems.
A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the commands manually. In general, is there some sort of way I can "dry run" a script to have it dump out all the commands it would execute, without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated and any command invoked - whether built in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (env. var. PS4 allows tweaking the output format).
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be some benefit in seeing the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
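A quick demonstration against the script in question (a sketch; uninstall.sh stands in for the vendor's script, and the prefix of the -x output is governed by PS4):
bash -n uninstall.sh                        # syntax check only; nothing is executed
bash -vn uninstall.sh                       # as above, plus echo the source as it is read
PS4='+ ${LINENO}: ' bash -x uninstall.sh    # trace expanded commands -- but this EXECUTES them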
You could try running the script under Kornshell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that will be executed.
You can also use set -n for the same effect. Kornshell and BASH are fairly compatible with each other. If it's a pure Bourne shell script, both Kornshell and BASH will execute it pretty much the same.
You can also run ksh -u, which will cause unset shell variables to make the script fail. However, that wouldn't have caught the cat of a nonexistent file here. In that case, the shell variable was set. It was set to null.
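To illustrate the difference (a sketch, shown with bash but the same applies to ksh; -u flags only unset variables, while the ${var:?word} expansion also rejects empty ones):
bash -uc 'echo rm -rf "$SWROOT/bin"'                      # fails: SWROOT: unbound variable
bash -uc 'SWROOT=""; echo rm -rf "$SWROOT/bin"'           # prints "rm -rf /bin" -- not caught
bash -uc 'SWROOT=""; echo rm -rf "${SWROOT:?empty}/bin"'  # fails: SWROOT: empty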
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's the best you can probably do.