Interactive quiz in Bash (Multiple Q's) - linux

I'm teaching an introductory Linux course and have abandoned paper-based multiple-choice quizzes in favor of interactive quizzes written in Bash. My quiz script is functional but quick-and-dirty, and now I'm in the improvement phase, looking for suggestions.
First off, I'm not looking to automate the grading, which certainly simplifies things.
Currently, I have a different script file for each quiz, and the questions are hard-coded. That's obviously terrible, so I created a .txt file holding the questions, delimited by lines with "question 01" etc. I can loop through and use sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/p", but this prints the delimiter lines. I can pipe through sed "/^q/d" or head -n-1|tail -n+2 to get rid of them, but is there a better way?
Second issue: for questions where the answer is an actual command, I'm printing a [user]$ prompt, but for short-answer questions I'm using a >. In my text file, the last line of each question is the prompt to use. Initially I thought I could store the question in a variable and |tail -1 it to get the prompt, but of course command substitution strips the trailing newline. I want the cursor to immediately follow the prompt, so I either need to pass the prompt to read -p or strip the final newline from the output. (Or create some marker in the file to differentiate between the $ and > prompts.) One thought I had was to store each question in a separate file, with no newline at the end, and just cat it to display it. That might be a pain to maintain, but it would solve both problems. Thoughts?
Now to how I'm actually running the quiz. This is a Fedora 20 box. I tried copying bash and setuid-ing the copy to me so that it could read the quiz script the students can't normally read, but I couldn't get that to work. After some trial and error, I ended up copying touch and setuid-ing it to me, then using that to create each answer file in a "submit" directory with a default ACL so that new files get o=w: students can append to their answer file (with echo >> in the quiz) but can't read it back or access the directory. The only major loophole I see is that they can delete their file by name and restart the quiz with no record of having done so. Since I'm not doing any automatic grading, I'm not terribly concerned about students reading the script file, although if I store the questions separately, I suppose I could make a setuid copy of cat to read files they can't otherwise access.
Also, I realize that Bash is not the best choice for this, and learning the required simple input/output for Python or something better would not take much effort. Perhaps that's my next step.

1) You could use
sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }"
Here // repeats the last attempted pattern, which is the opening pattern in the first line of the range and the closing pattern for the rest.
...by the way, if you really want to do this with sed, you'd better be damn sure that $i is a number, or you'll run into code injection problems.
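For instance, a minimal sketch of pulling out one question, with a numeric check to guard against the injection issue just mentioned (questions.txt and the "question NN" delimiter lines are taken from your description):

#!/bin/bash
# Print question $1 from questions.txt, assuming delimiter lines
# like "question 01", "question 02", and so on.
i=$1
case $i in
    ''|*[!0-9]*) echo "usage: $0 <question-number>" >&2; exit 1 ;;
esac
# Note: a leading-zero argument like 08 would trip octal arithmetic
# in $((i+1)); this is a sketch, not production code.
sed -n "/^quest.*$i\$/,/^quest.*$((i+1))\$/ { //!p }" questions.txt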
2) You can store multiline command output in a variable without problems; you just have to quote the variable ever after to avoid word splitting and glob expansion on it. For example,
QUESTION=$(sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt)
echo -n "$QUESTION" # <-- the double quotes are important here.
The -n option to echo tells echo to not append a newline at the end, which should take care of your prompt problem.
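Putting that together with read, a sketch (the answers path is just illustrative):

# Display the question body; its last line is the prompt, so with the
# trailing newline suppressed the cursor sits right after it.
QUESTION=$(sed -n "/^quest.*$i\$/,/^quest.*$((i+1))\$/ { //!p }" questions.txt)
echo -n "$QUESTION"
read -r ANSWER
echo "$i: $ANSWER" >> "$HOME/submit/answers.txt"   # illustrative path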
3) Yes, well, hackery breeds more hackery. If you want to lock this down, the first order of business would be to not give students a shell on the test machine. You could put your script behind inetd and have the students fill it out with telnet or something, I suppose, but... really, why Bash? If it were me, I'd knock something together with a web server and one of the several gazillion PHP web quiz frameworks. Although I also have to wonder why it's a problem if students can see the questions and the answers they gave. It's not like all students use the same account and can see each other's answers, is it? (Is it?) Don't store an answer key on the same machine and you shouldn't have a problem.

Related

Redirect to a file named psfile

Sorry for what is probably a dumb question, but I'm rather confused about what my professor is asking and about how to test whether I got the right answer. I have searched for similar questions, but nothing seems to relate exactly. Here's the question:
create a new file called program2.scr.
Change the permissions on this new file to add the execute bit for user, group, and owner.
Edit your new file using Gedit, VIM or VI and change it so it performs the following actions:
· Using a line after #!/bin/bash, take the output of the ps -e command and pipe it to the sort command.
· After it has been piped to the sort command, redirect that output to a file called psfile.
So far my code is this:
#!/bin/bash
ps -e | sort > psfile
Is this all I need to do? Is it right? How do I test for the right output?
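One rough way to check, assuming the script sits in your current directory: add the execute bits, run it, and inspect psfile. Note that ps output changes between runs, so the file won't exactly match a later manual run of the pipeline.

chmod u+x,g+x,o+x program2.scr   # the execute bits the assignment asks for
./program2.scr                   # creates psfile in the current directory
head psfile                      # should show the start of a sorted process list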

Add comments next to files in Linux

I'm interested in simply adding a comment next to my files in Linux (Ubuntu). An example would be:
info user ... my_data.csv Raw data which was sent to me.
info user ... my_data_cleaned.csv Raw data with duplicates filtered.
info user ... my_data_top10.csv Cleaned data with only top 10 values selected for each ID.
So sort of the way you can comment commits in Git. I don't particularly care about searching on these tags, filtering them, etc.; just seeing them when I list files in a directory. Bonus if the comments/tags follow the document around as I copy or move it.
Most filesystem types support extended attributes where you could store comments.
So for example to create a comment on "foo.file":
xattr -w user.comment "This is a comment" foo.file
The attributes can be copied/moved with the file; just be aware that many utilities require special options to copy the extended attributes.
Then to list files with comments, use a script or program that grabs the extended attribute. Here is a simple example to use as a starting point; it just lists the files in the current directory:
#!/bin/sh
# List the files in the current directory, appending the value of the
# user.comment extended attribute when one is set.
ls -1 | while read -r FILE; do
    comment=$(xattr -p user.comment "$FILE" 2>/dev/null)
    if [ -n "$comment" ]; then
        echo "$FILE Comment: $comment"
    else
        echo "$FILE"
    fi
done
The xattr command is really slow and poorly written (it doesn't even return a useful exit status), so I suggest something else if possible: use setfattr and getfattr in a more complete script than the one above, or maybe a custom ls command that is aware of the user.comment attribute.
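For reference, the equivalent operations with the attr tools look roughly like this:

setfattr -n user.comment -v "This is a comment" foo.file   # write the comment
getfattr -n user.comment --only-values foo.file            # read it back
cp --preserve=xattr foo.file bar.file                      # GNU cp needs this (or -a) to keep xattrs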
This is a moderately serious challenge. Basically, you want to add attributes to files, keep the attributes when the file is copied or moved, and then modify ls to display the values of these attributes.
So, here's how I would attack the problem.
1) Store the information in a SQLite database. You can probably get away with one table, containing the complete path to the file and your comment. I'd name the database something like ~/.dirinfo/dirinfo.db, and I'd store it in a subfolder because you may find later on that you need other information there. It'd be nice to use inodes rather than pathnames, but inode numbers change too frequently (many editors save by writing a new file and renaming it). Still, you might store both the inode and the pathname, retrieve by pathname only if retrieval by inode fails, and in that case update the stored inode.
2) Write a Bash script to create/read/update/delete the comment for a given file.
3) Write another Bash function or script that works with ls. I wouldn't call it "ls", though, because you don't want to mess with all the command-line options that ls supports. Your script will always call ls -1, possibly with some sort options such as -t and/or -r, loop through the output, and display each file name along with the comment you look up using the script from 2). You may also want to add the file size, but that's up to you.
4) Write functions to replace mv and cp (and ln??). These would be wrapper functions that update the information in your table and then call the regular Unix versions of the commands, passing along any arguments received (i.e. "$@"). If you're really paranoid, you'd also do it for things like scp, which can be used (inefficiently) to copy files locally. Still, it's unlikely you'll catch all the possibilities. What if someone else, who doesn't have your wrapper functions, does a mv on your file? What if some script moves the file by calling /bin/mv? You can't easily get around these kinds of issues.
Or if you really wanted to get adventurous, you'd write some C/C++ code to do this. It'd be faster, and honestly not all that much more challenging, provided you understand fork() and exec(). SQLite is itself a C library, so the C API is there; you'd have to tangle with it, but since you only have one database and one table, that shouldn't be too challenging.
You could do it in Perl, too, but I'm not sure it would be that much easier in Perl than in Bash. Your actual code isn't that complex, and you're not likely to be doing any crazy regex work or string manipulation; there are just lots of small pieces to fit together.
Doing all of this is much more work than should be expected for a person answering a question here, but I've given you the overall design. Implementing it should be relatively easy if you follow the design above and can live with the constraints.
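To make the design concrete, here is a minimal sketch of steps 2) through 4), assuming the sqlite3 command-line tool is installed. The function names are illustrative, and it glosses over SQL-escaping of quotes in paths and comments:

#!/bin/bash
# Sketch of the SQLite-backed comment store described above.
DB="$HOME/.dirinfo/dirinfo.db"
mkdir -p "$(dirname "$DB")"
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS comments (path TEXT PRIMARY KEY, comment TEXT);'

setcomment() {   # step 2): setcomment FILE "comment text"
    sqlite3 "$DB" "INSERT OR REPLACE INTO comments VALUES ('$(readlink -f "$1")', '$2');"
}

lsc() {          # step 3): list the current directory with comments
    local f
    for f in *; do
        printf '%s\t%s\n' "$f" \
            "$(sqlite3 "$DB" "SELECT comment FROM comments WHERE path = '$(readlink -f "$f")';")"
    done
}

mv() {           # step 4): handles only the two-argument, non-directory case
    command mv "$@" &&
    sqlite3 "$DB" "UPDATE comments SET path = '$(readlink -f "$2")' WHERE path = '$(readlink -f "$1")';"
}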

Go to beginning of output command created - shortcut

I can scroll through bash output using Shift+PgUp/PgDown.
But let's say some command printed a lot of text; then I have to page up several times to get to the beginning of that command's output.
Can I do this with a shortcut? Something that simply lets me jump between previous commands (not the history!) and see their output.
You could try piping the output into less:
someCommand | less
less will allow you to search and scroll through the output text pretty easily.
Once in less you can just type % to jump back to the top (essentially, jump to 0% of the file); g does the same. There are also a bunch of extra commands described in the less man page.
Another option is to use screen and use backward search (beware: read the Overview first, especially the part about the C-a prefix) to e.g. search for some specific characters in your prompt (like your username).
The scroll-back buffer is a feature of the terminal, not the shell: it is up to the specific terminal (xterm, rxvt, the text console, etc.) to handle it. The functionality you request would require the terminal to identify the individual program runs in order to know where to scroll to. Scanning the text is not technically hard per se, but because prompts and command displays differ with user settings, it is hard to make it work well in general; some communication between the shell and the terminal could make it better.
There are some fancy terminal programs that do things like this, for example showing syntax help while you write commands, but for your case I agree with the previous answer that piping commands into less is a good way to isolate the output. It may feel a bit cumbersome at first, since you have to think of it up front instead of just scrolling back afterwards, but if you get to know the shell better and learn to use the command history (Ctrl-R and friends) it will probably work fine. More on that here, for example:
http://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/

Shell script - input redirection when prompted by the shell script multiple times

We have a shell script that expects multiple user inputs to be entered when prompted, e.g. at first it may ask for the operation to be performed; when that answer is given, it may ask for a username, then a password, and so on. We want to automate this task by providing the inputs using file redirection, i.e.
script < input.
The input file will have all the answers to the different questions the script may ask. However, it is not working: the shell script reads only the first line of the input file. What do I need to change, or use, to make this work?
What you can use is the program expect. You create a script for it that tells it when to give what input to some command it executes. This way you can automate exactly the kind of thing you're struggling with.
More info on Google and here:
http://www.linuxjournal.com/article/3065
man page: http://linux.die.net/man/1/expect
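A sketch of what such an expect script could look like; the prompt strings and answers here are made up and would need to match whatever your script actually prints:

#!/usr/bin/expect -f
# Drive an interactive script, answering each prompt in turn.
spawn ./script
expect "operation:"    ;# hypothetical prompt text
send "backup\r"
expect "username:"
send "alice\r"
expect "password:"
send "secret\r"
expect eof             ;# let the script run to completion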
You say 'it only reads the first line of input.'
So you have to kill the script?
Is there any output? (error messages especially)?
Are you redirecting STDERR to /dev/null or elsewhere? If so, remove that.
Here is the highest-probability helper: modify the top-level script and add set -vx as the 2nd line. Then you'll be able to see what was processed and where it stopped, and you can start forming theories about why it is not consuming the rest of the input.
Any chance the input file was created in a Windows environment, and CR/LF line endings are messing up the expected input?
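If so, stripping the carriage returns is a quick fix (dos2unix does the same job, if it is installed):

tr -d '\r' < input > input.fixed && mv input.fixed input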
I hope this helps.
Thanks all for commenting and answering. I tried expect and that did not work, so I'll mention what worked for us. Our workflow was:
1. At the Linux prompt, type the command; in our case it was connect().
2. Once that command is given, the script asks for the command's parameters, like port number, server, etc., which we had to provide manually.
3. Then we are presented with a prompt for the next input.
We were able to provide the first command, connect(), at the prompt using file redirection, but passing the parameters was the issue. The solution we found was to provide the parameters inside the parentheses of connect() itself, i.e. our input file for redirection contains a single connect(...) call with the parameters included. This worked for us.

How do I view a log file generated by screen (screenlog.0)

So I just found out I can create log files of everything I do in screen (C-a H). Sounds like a nice way to keep track of potential goofs in a particular screen session. However, when I went to try it out the logfile is reported as being a binary file (and can't be viewed like a regular text as such). So am I missing something? A quick man page looksee and searching Google (and SO) turns up nothing about this.
So my question is: How do I generate plain text log files in screen?
Assuming the answer is "What a noob... how about you try making them? RTFM." my question becomes: How do I use less to view screen logfiles I've created (since less screenlog.0 does not work on a binary file)?
EDIT: So cat works fine but less complains that the file is binary... why?
SOLUTION: as jcomeau_ictx helpfully pointed out, you can view these logfiles fine with cat or more, but with less you must add the -r flag: less -r screenlog.0
I just found a screenlog.0 on the net; it is plain text, with some escape sequences. Just 'cat' the file, you should be able to view it just fine.
[after more checking]
Control-A H is what generates the screenlog on my system. And though 'cat' works, you'll miss a lot of data. Use 'more' instead of 'less' to interpolate the escape codes.
I found neither less nor more nor cat to be an ideal solution for viewing screenlog files. All of them "replay" some of the control characters, so that e.g. screen deletions as produced by "clear" (I don't remember the corresponding control character) are shown, hiding what had been cleared.
What I know works great is to use "view" or "vi": they just show the control characters in escaped notation. Probably any other text editor works too (not tested).
Starting screen with -L logs the session to a file; use tail -f logfilename to monitor that file as it grows.
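Pulling the thread together (the -r flag for less is the one from the accepted solution above):

screen -L               # start a session with logging on (writes screenlog.0 by default)
tail -f screenlog.0     # in another terminal: watch the log grow
less -r screenlog.0     # afterwards: view it with escape sequences rendered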
