What does -I or "unshift a path" mean in expresso? - node.js

I'm not sure I understand what "unshifting a path" with -I means in expresso. Does it mean that if I run expresso with the switch like so
expresso -I myCode test/*
then where I would normally write require statements in my tests in my test folder, such as
models = require "../myCode/models"
I can instead call require like this?
models = require "models"
That was my understanding, but it doesn't seem to work; it gives me a "cannot find module" error.

Overview
Unshift basically just means "add an item to the beginning of a sequence of items".
So in this context, unshifting a path just means to add a path to the beginning of the sequence of paths.
The following cheatsheet explains a little more using arrays as an example.
Quick Cheatsheet:
The terms shift/unshift and push/pop can be a bit confusing, at least to folks who may not be familiar with programming in C.
If you are not familiar with the lingo, here is a quick translation of alternate terms, which may be easier to remember:
* array_unshift() - (aka Prepend ;; InsertBefore ;; InsertAtBegin)
* array_shift() - (aka UnPrepend ;; RemoveBefore ;; RemoveFromBegin)
* array_push() - (aka Append ;; InsertAfter ;; InsertAtEnd)
* array_pop() - (aka UnAppend ;; RemoveAfter ;; RemoveFromEnd)
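In bash terms (a small illustrative sketch using a plain array), the same four operations look like this:

```shell
arr=(b c)

arr=(a "${arr[@]}")              # unshift / prepend: add to the beginning
arr+=(d)                         # push / append: add to the end
echo "${arr[@]}"                 # a b c d

arr=("${arr[@]:1}")              # shift: remove from the beginning
arr=("${arr[@]:0:${#arr[@]}-1}") # pop: remove from the end
echo "${arr[@]}"                 # b c
```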

Related

Getting Common Lisp Process ID on Linux

I am wondering if there is a way to get Linux's PID (Process ID) from Common Lisp's REPL. That is, I would like to know the ID of the SBCL or Allegro process from the REPL of the process itself.
There's nothing in the Common Lisp specification that provides this; process IDs are too implementation-dependent.
In SBCL, the SB-POSIX package provides Lisp interfaces to most POSIX system calls, so you would use (sb-posix:getpid).
In Allegro CL, operating system interface functions are in the EXCL.OSI package, so you would use (excl.osi:getpid).
There is a (basically) portable way to do this. CL provides for reading files, and one can observe that the pid of the current process is in /proc/self/status (also, /proc/self is a symlink to the process's pid directory, but I don't think there's a portable readlink).
Specifically /proc/self/status is a text file and contains a line that looks like:
Pid: 439
So you could parse the file to extract that.
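For comparison, the same parse is a one-liner from a shell (a hypothetical sketch, Linux only). Note that /proc/self names whichever process opens the file, so this prints awk's own PID, not the shell's:

```shell
# Print the "Pid:" field of /proc/self/status -- i.e. the PID of the
# process reading the file, which here is awk itself (Linux only).
awk '/^Pid:/ { print $2 }' /proc/self/status
```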
But then, once you have the pid, there isn't much you can do with it without system calls or /proc weirdness.
Final Solution (in large part due to @Dan Robertson and @coredump - thank you guys!)
Actually, I realize in retrospect that @Dan Robertson gave the full answer. This answer is just the implementation of what he said, so give him the points!
(ql:quickload "CL-PPCRE") ;; for regex parsing

(defun get-this-pid ()
  "Return PID of this current lisp process."
  (with-open-file (in #P"/proc/self/status")
    (loop for line = (read-line in nil)
          while line
          when (ppcre:scan "^Pid" line)
            do (return (car (ppcre:all-matches-as-strings "\\d+" line))))))
;; to get the current process id, call:
(get-this-pid)
;; returns for me at the moment, using SBCL: "12646"
;; this is correct, as closing all other SBCL processes
;; and running "pidof sbcl" in the shell showed.
As @Dan Robertson pointed out, the file /proc/self/status shows the program that opens it its own "Pid" line (every program sees it differently). Thank you Dan, since this solves the problem of really finding the PID of the program (pidof sbcl in the shell would give several numbers if several Lisp programs are running independently on the machine).
Calling external programs becomes unnecessary if we open this file from within CL, as @coredump pointed out.
PID numbers of other programs
;; Thanks to @coredump, who suggested using
;; `ppcre:split :whitespace-char-class` to capture arbitrary numbers
;; in the answer string - I added a test with integer-string-p to drop
;; non-numeric values after the split.
(ql:quickload "CL-PPCRE")
(ql:quickload "external-program") ;; needed for external-program:run

(defun integer-string-p (string)
  "Does STRING consist only of '1234567890' characters?"
  (reduce (lambda (x y) (and x y))
          (mapcar (lambda (c) (member c (coerce "1234567890" 'list)))
                  (coerce string 'list))))

(defun extract-integers-from-string (s)
  "Return the integer words of S."
  (let ((l (ppcre:split :whitespace-char-class s)))
    (remove-if-not #'integer-string-p l)))

(defun pid-numbers (program-name)
  "Return the PID numbers of a program on the current machine."
  (let ((pid-line (with-output-to-string (out)
                    (external-program:run "pidof" (list program-name)
                                          :output out))))
    (extract-integers-from-string pid-line)))
;; call it
(pid-numbers "sbcl")
(pid-numbers "firefox")
;; * (pid-numbers "sbcl")
;; ("16636" "12346")
;; * (pid-numbers "firefox")
;; ("24931" "19388" "19122" "10800" "10745") ; yeah I have many open :D

reading a log file with different sequences using shell script

I'm a starter with shell scripting, and I wanted to ask you a question about reading data from a log file. The file is really long and includes a few steps of calculation convergence.
step 1
...
converged
final energy : 1000000
step 2
...
converged
final energy : 10000
...
structure optimized
final energy: 100000
What I need to do is first find whether the structure was finally optimized; if so, read the final energy and some other data. In Mathematica I could find the position of "structure optimized" and search from there. Is the same thing possible in shell?
I'm a starter, so please list all the commands I need to use.
This might look something like the following:
optimized=0
while IFS= read -r line; do
  case $line in
    "structure optimized")
      optimized=1
      continue
      ;;
    "final energy")
      [ "$optimized" = 1 ] || continue
      echo "Found final energy after structure optimized"
      ;;
  esac
done <input.log
If, when you wrote final energy in your sample file, you really meant something like:
final energy: 10000
...then you might change the relevant clause to:
    "final energy: "*)
      [ "$optimized" = 1 ] || continue
      final_energy=${line#*:}
      echo "Found optimized final energy: $final_energy"
      ;;
...but without a detailed and precise specification, how's anyone to know exactly what you mean?
An awk version:
awk '
  $0 == "structure optimized" { i = 1 }
  /final energy/ && i == 1    { print "Found optimized " $0; i = 0 }
' filename

How to create a bash variable like $RANDOM

I'm interested in something: every time I echo $RANDOM, the value shown is different. I guess RANDOM is special (when I read it, it may call a function, set a variable flag, and return the random number). I want to create a variable like this; how can I do it? Every answer will be helpful.
The special behavior of $RANDOM is a built-in feature of bash. There is no mechanism for defining your own special variables.
You can write a function that prints a different value each time it's called, and then invoke it as $(func). For example:
now() {
  date +%s
}
echo $(now)
Or you can set $PROMPT_COMMAND to a command that updates a specified variable. It runs just before printing each prompt.
i=0
PROMPT_COMMAND='((i++))'
This doesn't work in a script (since no prompt is printed), and it imposes an overhead whether you refer to the variable or not.
If you are writing bash scripts, there is a $RANDOM variable already built into bash.
This post explains the random variable $RANDOM:
http://tldp.org/LDP/abs/html/randomvar.html
It generates a number from 0 - 32767.
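To fold $RANDOM into a smaller range, the usual idiom is the modulo operator (a small sketch; note the result is slightly biased whenever the range size does not divide 32768 evenly):

```shell
# roll_die prints a pseudo-random integer from 1 to 6.
# $RANDOM yields 0..32767; % 6 folds that into 0..5, and + 1 shifts to 1..6.
roll_die() {
  echo $(( RANDOM % 6 + 1 ))
}

roll_die
```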
If you want to do different things depending on the value, note that a case pattern such as [1-10000] is a character class, not a numeric range, so use arithmetic comparisons instead:
n=$RANDOM
if (( n <= 10000 )); then
  Message="All is quiet."
elif (( n <= 20000 )); then
  Message="Start thinking about cleaning out some stuff. There's a partition that is $space % full."
else
  Message="Better hurry with that new disk... One partition is $space % full."
fi
I stumbled on this question a while ago and wasn't satisfied by the accepted answer: the asker wanted to create a variable just like $RANDOM (a variable with a dynamic value), so I wondered whether we could do it without modifying bash itself.
Variables like $RANDOM are defined internally by bash using the dynamic_value field of the struct variable. If we don't want to patch bash to add our custom "dynamic values" variables, we still have few other alternatives.
An obscure feature of bash is loadable builtins (shell builtins loaded at runtime), providing a convenient way to dynamically load new symbols via the enable function:
$ enable --help|grep '\-f'
enable: enable [-a] [-dnps] [-f filename] [name ...]
-f Load builtin NAME from shared object FILENAME
-d Remove a builtin loaded with -f
We now have to write a loadable builtin providing the functions (written in C) that we want to use as the dynamic_value for our variables, then set the dynamic_value field of our variables to a pointer to the chosen functions.
The production-ready way of doing this is to use another loadable builtin crafted on purpose to do the heavy lifting, but one may abuse gdb, if the ptrace call is available, to do the same.
I've made a little demo using gdb, answering "How to create a bash variable like $RANDOM?":
$ ./bashful RANDOM dynamic head -c 8 /dev/urandom > /dev/null
$ echo $RANDOM
L-{Sgf

Subtle software security bugs in webapps

I'm doing research on the capabilities of static analysis, and at the moment I'm in the process of gathering code snippets which contain subtle vulnerabilities.
By that I mean not the obvious XSS and SQLI, but more subtle ones like below:
$url = htmlspecialchars($_GET["url"]);
echo "<a href=$url>Click here to continue</a>";
$url = htmlspecialchars($_GET["url"]);
echo "<a href='$url'>Click here to continue</a>";
$filename = $_GET["filename"];
$safeFile = str_replace("../", "", $filename);
include("home/test/traversal/" . $safeFile . ".php");
Obviously, the first two are XSS and the last one is arbitrary file inclusion. Can you provide me with more such examples? Preferably in PHP, Java, C# or VB, but if you have examples in other languages, that's also fine.
By the way, this is not a game of bypassing the analyzer with nifty tricks, but a global analysis of what is and is not detected by different analyzers. So code obscured on purpose to fool the analyser is not what I'm looking for.
Another example is
$query = mysql_real_escape_string($_GET["id"]);
mysql_query("SELECT * FROM prods WHERE id=" . $query);
or
$safeVal = htmlspecialchars($_GET['val']);
echo "<a href='#' $safeVal>Click here</a>";
Cases in which escaping or other measures are used, but where there is still a vulnerability:
Bug #49687 utf8_decode xml_utf8_decode vuln
Request #47694 escapeshellcmd() considered harmful?
Request #39863 file_exists() silently truncates after a null byte

Unit testing for shell scripts

Pretty much every product I've worked on over the years has involved some level of shell scripts (or batch files, PowerShell etc. on Windows). Even though we wrote the bulk of the code in Java or C++, there always seemed to be some integration or install tasks that were better done with a shell script.
The shell scripts thus become part of the shipped code and therefore need to be tested just like the compiled code. Does anyone have experience with some of the shell script unit test frameworks that are out there, such as shunit2? I'm mainly interested in Linux shell scripts for now; I'd like to know how well the test harnesses duplicate the functionality and ease of use of other xUnit frameworks, and how easy it is to integrate with continuous build systems such as CruiseControl or Hudson.
UPDATE 2019-03-01: My preference is bats now. I have used it for a few years on small projects. I like the clean, concise syntax. I have not integrated it with CI/CD frameworks, but its exit status does reflect the overall success/failure of the suite, which is better than shunit2 as described below.
PREVIOUS ANSWER:
I'm using shunit2 for shell scripts related to a Java/Ruby web application in a Linux environment. It's been easy to use, and not a big departure from other xUnit frameworks.
I have not tried integrating with CruiseControl or Hudson/Jenkins, but in implementing continuous integration via other means I've encountered these issues:
Exit status: When a test suite fails, shunit2 does not use a nonzero exit status to communicate the failure. So you either have to parse the shunit2 output to determine pass/fail of a suite, or change shunit2 to behave as some continuous integration frameworks expect, communicating pass/fail via exit status.
XML logs: shunit2 does not produce a JUnit-style XML log of results.
Wondering why nobody mentioned BATS. It's up-to-date and TAP-compliant.
Example:
#!/usr/bin/env bats

@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}
Run:
$ bats addition.bats
✓ addition using bc
1 test, 0 failures
Roundup by @blake-mizerany sounds great, and I should make use of it in the future, but here is my "poor man's" approach to creating unit tests:
Separate everything testable as a function.
Move the functions into an external file, say functions.sh, and source it into the script. You can use source `dirname $0`/functions.sh for this purpose.
At the end of functions.sh, embed your test cases in the below if condition:
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
fi
Your tests are literal calls to the functions, followed by simple checks of exit codes and variable values. I like to add a simple utility function like the one below to make them easy to write:
function assertEquals()
{
  msg=$1; shift
  expected=$1; shift
  actual=$1; shift
  if [ "$expected" != "$actual" ]; then
    echo "$msg EXPECTED=$expected ACTUAL=$actual"
    exit 2
  fi
}
Finally, run functions.sh directly to execute the tests.
Here is a sample to show the approach:
#!/bin/bash
function adder()
{
  return $(($1+$2))
}

(
  [[ "${BASH_SOURCE[0]}" == "${0}" ]] || exit 0

  function assertEquals()
  {
    msg=$1; shift
    expected=$1; shift
    actual=$1; shift
    /bin/echo -n "$msg: "
    if [ "$expected" != "$actual" ]; then
      echo "FAILED: EXPECTED=$expected ACTUAL=$actual"
    else
      echo PASSED
    fi
  }

  adder 2 3
  assertEquals "adding two numbers" 5 $?
)
I recently released a new testing framework called shellspec.
https://shellspec.info/
https://github.com/ko1nksm/shellspec
shellspec is a BDD-style testing framework.
It works on POSIX-compatible shells, including bash, dash, ksh, busybox, etc.
Of course, the exit status reflects the result of running the specs,
and it has a TAP-compliant formatter.
The specfile is close to natural language and easy to read,
and it is also shell-script-compatible syntax.
#shellcheck shell=sh

Describe 'sample'
  Describe 'calc()'
    calc() { echo "$(($*))"; }

    It 'calculates the formula'
      When call calc 1 + 2
      The output should equal 3
    End
  End
End
In addition to roundup and shunit2 my overview of shell unit testing tools also included assert.sh and shelltestrunner.
I mostly agree with the roundup author's critique of shunit2 (some of it subjective), so I excluded shunit2 after looking at the documentation and examples, although it did look familiar, as I have some experience with JUnit.
In my opinion shelltestrunner is the most original of the tools I've looked at since it uses simple declarative syntax for test case definition. As usual, any level of abstraction gives some convenience at the cost of some flexibility. Even though, the simplicity is attractive I found the tool too limiting for the case I had, mainly because of the lack of a way to define setup/tearDown actions (for example, manipulate input files before a test, remove state files after a test, etc.).
At first I was a little confused that assert.sh only allows asserting either the output or the exit status, while I needed both, so I started out writing a couple of test cases using roundup. But I soon found roundup's set -e mode inconvenient, as a non-zero exit status is expected in some cases as a means of communicating the result in addition to stdout, and that makes the test case fail in said mode. One of the samples shows the solution:
status=$(set +e ; rup roundup-5 >/dev/null ; echo $?)
But what if I need both the non-zero exit status and the output? I could, of course, set +e before invocation and set -e after or set +e for the whole test case. But that's against the roundup's principle "Everything is an Assertion". So it felt like I'm starting to work against the tool.
By then I had realized that assert.sh's "drawback" of only allowing an assertion on either the exit status or the output is actually a non-issue, as I can just pass in a compound test expression like this:
output=$($tested_script_with_args)
status=$?
expected_output="the expectation"
assert_raises "test \"$output\" = \"$expected_output\" -a $status -eq 2"
As my needs were really basic (run a suite of tests, display that all went fine or what failed), I liked the simplicity of assert.sh, so that's what I chose.
You should try out the assert.sh lib; it's very handy and easy to use:
local expected actual
expected="Hello"
actual="World!"
assert_eq "$expected" "$actual" "not equivalent!"
# => x Hello == World :: not equivalent!
I have recently encountered a very thorough review of existing Bash unit testing frameworks - https://github.com/dodie/testing-in-bash
Shellspec has been so far the best, however it still depends on what you would like to achieve.
