I am encountering a really weird issue when trying to "cd" into a specific directory (e.g. directory_A) along a path. Whenever I try to "cd" into it, my Linux terminal hangs for at least an hour. Even when the "cd" eventually succeeds, the shell is completely frozen and I cannot run any further commands in it.
Additionally, while killing the "cd" mid-execution with Ctrl-C does end the call, it then becomes impossible to run any other command in that shell (e.g. "ls" or "cd" into directory_B makes the terminal hang again). This happens despite the fact that cd-ing into directory_B directly (without first trying to enter directory_A) causes no issues whatsoever. It appears that merely attempting to enter directory_A somehow breaks the shell.
What's more, running "ls" on directory_A from its parent directory causes no issues. I can see all the files, and even open them (e.g. with "vim directory_A/foo.txt"), but "cd"-ing into it causes massive problems.
I'm not sure if I'm just searching with the wrong keywords, but I haven't been able to find similar issues - though I acknowledge I am far from an expert with these things.
Has anyone seen such an issue before? Or do you know where I might look for answers?
I'd be happy to provide any other information as well - thanks very much for any help/advice you may have!
A) Run alias | grep cd to see if cd is aliased, or type cd to check whether it has been redefined as a function.
B) start a new shell without startup file: bash --noprofile --norc
C) use a different shell: sh, or whatever
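For example, a quick diagnostic session might look like this (the output lines are illustrative; yours will differ):

$ alias | grep cd            # any alias named cd shows up here
$ type cd
cd is a shell builtin
$ declare -f cd              # if type reported a function, this prints its body
$ bash --noprofile --norc    # clean shell; if cd works here, suspect a startup file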
When I run a script in the terminal that compiles various things by running other scripts, it reports an error telling me that the file XXX doesn't exist. But it existed before I ran the script, so I would like to find out what script/program/process deleted the file, in order to fix the bug.
I looked at this but it didn't help me: I am looking for the program at the origin of the deletion, not the user. Additionally, auditctl and iwatch don't exist on the machine I am using.
Please note that I am not interested in what files are deleted (I perfectly know which).
Also, I can recreate XXX to run the script again.
How can I simply find out what causes the deletion of XXX?
Two possibilities come to mind for debugging a shell script:
Use strace; the system call you're looking for is most probably unlink(). From the surrounding calls you should be able to deduce the context the deletion is called from.
Switch on command echoing with set -x or the shebang #!/bin/bash -x (for Bash) and grep for rm and perhaps unlink. However, the deletion may occur in some binary that's called somewhere; in that case you won't find anything this way.
Redirect the output to a file to make searching easier; a sketch of both approaches follows.
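A minimal sketch of both, assuming the top-level script is called build.sh (that name is made up):

# 1. Trace the script and all of its children, logging only unlink-family syscalls:
strace -f -e trace=unlink,unlinkat -o trace.log ./build.sh
grep XXX trace.log               # shows which process deleted the file

# 2. Echo every expanded command and search the resulting log:
bash -x ./build.sh 2>xtrace.log
grep -nE 'rm |unlink' xtrace.log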
I would like to configure my bash in a way so that I react on the event that the user enters a command. The moment they press Enter I would like my bash to run a script I installed first (analog to any PROMPT_COMMAND which is run each time a prompt is given out). This script should be able to
see what was entered,
maybe change it,
maybe even make the shell ignore it (i. e. make it not execute the line),
decide on whether the text shall be inserted in the history or not,
and maybe similar things.
I have not found a proper way to do this. My current implementations are all flawed: they use things like DEBUG traps to intervene before a command is executed, or (HISTTIMEFORMAT='%s '; history 1) to ask the history, after the command has finished, when it was started, etc. (but that is only hindsight, which is not really what I want).
I'd expect something like a COMMAND_INTERCEPTION variable which would work similar to PROMPT_COMMAND but I'm not able to find anything like it.
I also considered using command-line completion to achieve my goal, but wasn't able to find anything there about reacting to the submission of a finished command; maybe I just didn't find it.
Any help appreciated :)
You can use the DEBUG trap and the extdebug feature, and peek into BASH_COMMAND from the trap handler to see the running command. (Though as noted in comments, the debug trap is sprung on every simple command, not every command line. Also subshells elude it.)
The debug handler can prevent the command from running, but it can't change it directly. You could, of course, run any command from inside the trap handler, possibly using BASH_COMMAND and eval to build a modified version, and then tell the shell to ignore the original command.
This would prevent running anything starting with ls:
$ preventls() { case "$BASH_COMMAND" in ls*) echo "no!"; return 1 ;; esac; }
$ shopt -s extdebug
$ trap preventls DEBUG
$ ls -l
no!
Use trap - DEBUG to remove the trap. Tested on Bash 4.3.30.
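Building on that, here is a hedged sketch of the eval idea: intercept anything starting with ls and run ls -la in its place (rewritels is a name made up here, and a pattern this crude also matches commands like lsblk, so treat it strictly as a toy):

rewritels() {
  case "$BASH_COMMAND" in
    ls*)
      eval "${BASH_COMMAND/#ls/ls -la}"   # run a modified command instead
      return 1 ;;                         # non-zero: bash skips the original
  esac
}
shopt -s extdebug
trap rewritels DEBUG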
A predecessor of mine installed a crappy piece of software on an old machine (running Linux) which I've inherited. Said crappy piece of software installed flotsam all over the place, and also is sufficiently bloated that I want it off ASAP -- it no longer has any functional purpose since we've moved on to better software.
Vendor provided an uninstall script. Not trusting the crappy piece of software, I opened the uninstall script in an editor (a 200+ line Bash monster), and it starts off something like this:
SWROOT=`cat /etc/vendor/path.conf`
...
rm -rf $SWROOT/bin
...
It turns out that /etc/vendor/path.conf is missing. Don't know why, don't know how, but it is. If I had run this lovely little script, it would have deleted the /bin folder, which would have had rather amusing implications. Of course this script required root to run!
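Here's a quick demonstration of what would have happened (the echo merely prints the command rm would have received, without running it):

$ SWROOT=`cat /etc/vendor/path.conf`
cat: /etc/vendor/path.conf: No such file or directory
$ echo rm -rf $SWROOT/bin
rm -rf /bin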
I've dealt with this issue by just manually running all the uninstall commands (guh) where sensible. This kind of sucked because I had to interpolate all the variable expansions by hand. In general, is there some way I can "dry run" a script, having it dump out all the commands it would execute without actually executing them?
bash does not offer dry-run functionality (and neither do ksh, zsh, or any other shell I know).
It seems to me that offering such a feature in a shell would be next to impossible: state changes would have to be simulated, and any command invoked - whether built-in or external - would have to be aware of these simulations.
The closest thing that bash, ksh, and zsh offer is the ability to syntax-check a script without executing it, via option -n:
bash -n someScript # syntax-check a script, without executing it.
If there are no syntax errors, there will be no output, and the exit code will be 0.
If there are syntax errors, analysis will stop at the first error, an error message including the line number is written to stderr, and the exit code will be:
2 in bash
3 in ksh
1 in zsh
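For example, with a deliberately broken script (the file name is made up, and the exact wording and line numbers of the messages vary between versions):

$ cat bad.sh
echo "oops
$ bash -n bad.sh
bad.sh: line 2: unexpected EOF while looking for matching `"'
bad.sh: line 3: syntax error: unexpected end of file
$ echo $?
2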
Separately, bash, ksh, and zsh offer debugging options:
-v to print each raw source code line[1] to stderr before it is executed.
-x to print each expanded simple command to stderr before it is executed (the PS4 variable allows tweaking the output format; see the sketch after the footnote below).
Combining -n with -v and/or -x offers little benefit:
With -n specified, -x has no effect at all, because nothing is being executed.
With -n specified, -v will effectively simply print the source code.
If there is a syntax error, there may be benefit in seeing the source code printed up to the point where the error occurs; keep in mind, though, that the error message produced by -n always includes the offending line number.
[1] Typically, it is individual lines that are printed, but the true unit is however many lines a given command - which may be a compound command such as while or a command list (such as a pipeline) - spans.
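As an illustration of tweaking PS4 (a sketch only; note that -x still executes the script, so never use it on something you don't trust):

$ PS4='+ line ${LINENO}: ' bash -x someScript 2>trace.log
# each traced command in trace.log is now prefixed with its line number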
You could try running the script under KornShell. When you execute a script with ksh -D, it reads the commands and checks them for syntax, but doesn't execute them. Combine that with set -xv, and you'll print out the commands that would be executed.
You can also use set -n for the same effect. KornShell and Bash are fairly compatible with each other; if it's a pure Bourne shell script, both will execute it pretty much the same.
You can also run ksh -u, which makes the script fail whenever it expands an unset shell variable. However, that wouldn't have caught the cat of a nonexistent file here: the variable did get set - it was set to null.
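A quick demonstration of that distinction (the variable names are made up and the message wording varies; set -u trips on the unset variable but not on the null one):

$ ksh -uc 'echo "$UNSET_VAR"'
ksh: UNSET_VAR: parameter not set
$ ksh -uc 'SET_VAR=$(cat /nonexistent 2>/dev/null); echo "root is: $SET_VAR/bin"'
root is: /bin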
Of course, you could run the script under a restricted shell too, but that's probably not going to uninstall the package.
That's probably the best you can do.
I'd be glad if someone could fix the title to be more appropriate, since I'm pretty new to the terminal.
I have an issue with the terminal. Once I execute a command, if it goes to the next line, I can't close it or get back. I assume it starts the executable or asks for more parameters, showing a > prompt.
For example:
//Windows Machine
vagrant up
//Vagrant Instance Unix Machine
$ git
>
>
>
> ... it goes on like this, I can't close > so I can't execute other commands
The only fix I've found is restarting the terminal (which means I need to restart the Vagrant instance).
It happens on some commands only - not all, so I don't know what makes a difference.
For example, when I execute composer, I get information about Composer and the terminal returns to its main state. However, if I execute things like php, git, or mysql, the > symbol appears and I can't return from there.
So, two basic questions;
What causes this?
How can I terminate the current command and get back to the main state?
Any help would be greatly appreciated.
Ps. I use both windows terminal and unix terminal and this issue happens on both.
Normally you'll see a > prompt if you've entered a command that's syntactically incomplete, for example if there's an unterminated string literal:
$ echo 'hello
> '
hello
$
It means that the shell is waiting for you to type the rest of the command, or at least enough of it to make for something that's not a syntax error.
In this example, the default prompt, $PS1, is '$ ', and the secondary prompt, $PS2, is '> '. Read the documentation for your shell (probably bash) for more information.
You can cancel the current command and get back to your primary prompt for a new command by typing Control-C.
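Two other common triggers, with Control-C used to escape the second one (the ^C marks the keypress):

$ echo hello \
> world
hello world
$ grep foo |
> ^C
$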
This is all about the behavior of your shell; it has nothing to do with your terminal (almost certainly a terminal emulator), which merely provides a GUI for your shell to run in.
I am installing a virtualenv and want to understand what's going on.
$ curl -O https://raw.github.com/pypa/virtualenv/master/virtualenv.py
- I understand curl fine
$ python virtualenv.py my_new_env
- Understand this, too
$ . my_new_env/bin/activate
- Here's where I get lost. What is the period doing here?
(my_new_env)$ pip install ...
- What does it mean to have the parentheses here? Does this usage tell me I'm in a particular folder?
The dot is a command that means to read and execute the contents of the given script in the current shell (normally, running a shell script runs it in a new process). Evaluating the script in the current shell means it can change the current shell's environment variables, so the behavior of subsequent commands is affected.
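A small demonstration of the difference (setvar.sh is a made-up one-line script):

$ echo 'MY_VAR=hello' > setvar.sh
$ bash setvar.sh          # runs in a child process; the variable dies with it
$ echo "$MY_VAR"

$ . setvar.sh             # runs in the current shell; the variable persists
$ echo "$MY_VAR"
hello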
I don't know for sure about the parentheses, but I don't think they're meant to be syntax you type. As they come before the '$' prompt, perhaps that's literally what you'll get as your new prompt after running the activate script, to show you that your environment has been changed?
The dot is essentially an "execute" command: it runs the commands in my_new_env/bin/activate as though they were typed at your prompt.
The parentheses shown in the prompt (at least in the tutorial instructions) then indicate that you're typing commands in your new virtual environment, and not in your original (real) environment.
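So a typical session looks something like this (the paths are illustrative; deactivate is a function the activate script defines to undo its changes):

$ . my_new_env/bin/activate
(my_new_env)$ which python          # now resolves inside the virtualenv
/home/user/my_new_env/bin/python
(my_new_env)$ deactivate            # restores PATH and the old prompt
$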