Segmentation fault while using bash script to generate mobility file - linux

I am using a bash script to generate mobility files (setdest) in ns2 for various seeds, but I am running into this troublesome segmentation fault. Any help would be appreciated. The setdest.cc has been modified, so it's not the standard ns2 file.
I will walk you through the problem.
The following shell script produces the segmentation fault:
#! /bin/sh
setdest="/root/ns-allinone-2.1b9a/ns-2.1b9a/indep-utils/cmu-scen-gen/setdest/setdest_mesh_seed_mod"
let nn="70"    #Number of nodes in the simulation
let time="900" #Simulation time
let x="1000"   #Horizontal dimensions
let y="1000"   #Vertical dimensions
for speed in 5
do
    for pause in 10
    do
        for seed in 1 5
        do
            echo -e "\n"
            echo Seed = $seed Speed = $speed Pause Time = $pause
            chmod 700 $setdest
            $setdest -n $nn -p $pause -s $speed -t $time -x $x -y $y -l 1 -m 50 > scen-mesh-n$nn-seed$seed-p$pause-s$speed-t$time-x$x-y$y
        done
    done
done
The error is:
scengen_mesh: line 21: 14144 Segmentation fault $setdest -n $nn -p $pause -s $speed -t $time -x $x -y $y -l 1 -m 50 >scen-mesh-n$nn-seed$seed-p$pause-s$speed-t$time-x$x-y$y
Line 21 is the last line of the shell script (done).
The strange thing is that if I run the same setdest command in the terminal, there is no problem, like
$setdest -n 70 -p 10 -s 5 -t 900 -x 1000 -y 1000 -l 1 -m 50
I have worked out where exactly the problem is: it's with the -l argument. If I remove that argument in the shell script, there is no problem. Now I will walk you through the modified setdest.cc where this argument comes from.
This modified setdest file uses a text file initpos to read the XY coordinates of static nodes for a wireless mesh topology. The relevant lines of code are:
FILE *fp_loc;
int locinit;

fp_loc = fopen("initpos", "r");

while ((ch = getopt(argc, argv, "r:m:l:n:p:s:t:x:y:i:o")) != EOF) {
    switch (ch) {
    case 'l':
        locinit = atoi(optarg);
        break;
    default:
        usage(argv);
        exit(1);
    }
}

if (locinit)
    fscanf(fp_loc, "%lf %lf", &position.X, &position.Y);
if (position.X == -1 && position.Y == -1) {
    position.X = uniform() * MAXX;
    position.Y = uniform() * MAXY;
}
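For context, the fscanf above implies that initpos simply holds one whitespace-separated X Y coordinate pair per static node, and a pair of -1 -1 falls through to the random placement shown above. The values below are made up purely for illustration and are not my actual topology:
100.0 200.0
500.0 200.0
900.0 200.0
-1 -1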
What I don't get is:
In the shell script:
- option -l with a value of 0 gives no error,
- but with any other value (I mostly used 1) it gives this segmentation fault.
In the terminal:
- no segmentation fault with any value, 0 or 1.
It surely has something to do with the shell script; I am amazed at what is going wrong and where!
Your help will be highly appreciated.
Cheers

Related

Check duplicate disk LABEL before mounting it to the system in bash script

Is there a way to check for a duplicate disk LABEL before mounting it to the system?
I need to make sure that if the user has two external drives and both have the same label, the script shows a warning and asks the user to remove the duplicated disk.
My code is in the early stages:
if mountpoint -q "${JOB_MOUNT_DIR}"; then
    echo " ${JOB_MOUNT_HD_LABEL} já está montado e está pronto para uso"
else
    echo "O dispositivo ""${JOB_MOUNT_HD_LABEL}"" não está montado no diretório ""${JOB_MOUNT_DIR}"""
    echo "Deseja montar o diretório?"
    echo -n "Qual sua opção? [s/n]: "
    read -r "opcao"
    if [ "$opcao" == "s" ]; then
        mkdir -p "${JOB_MOUNT_DIR}/${JOB_MOUNT_HD_LABEL}"
        mount -L "${JOB_MOUNT_HD_LABEL}" "${JOB_MOUNT_DIR}/${JOB_MOUNT_HD_LABEL}"
        exit 0
    else
        echo "Disco não irá ser montado"
        exit 0
    fi
fi
exit 0
Some parts are in pt-BR; I think that will not be a problem.
First it checks whether the disk is already mounted; if not, it asks whether to mount it. The problem is then knowing which of the two disks with the same LABEL should be mounted.
You can use:
lsblk -o name,label
NAME LABEL
mmcblk0
└─mmcblk0p1 eMMC
Or you can use:
blkid
/dev/nvme0n1p1: UUID="36D7-B890" TYPE="vfat" PARTUUID="8614534f-01"
/dev/nvme0n1p5: UUID="65885781-bd9b-4c62-afb0-4a82a0e5759e" TYPE="ext4" PARTUUID="8614534f-05"
/dev/mmcblk0p1: LABEL="eMMC" UUID="79ff33b4-2add-4f2f-844e-d7d242c18578" TYPE="ext4" PARTUUID="d4b36674-ab5f-4f15-bb83-313cce242fe4"
You may need to prefix the above commands with sudo.
If the above commands get your labels correctly, you can pipe the output roughly as follows to list duplicates:
lsblk -o label | sort | uniq -d
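Building on that, a minimal, untested sketch that warns and aborts when the requested label appears on more than one device could look like this (it reuses the JOB_MOUNT_HD_LABEL variable from your script):
# count how many block devices carry exactly this label (sketch, not tested on your setup)
count=$(lsblk -no label | grep -Fxc "${JOB_MOUNT_HD_LABEL}")
if [ "$count" -gt 1 ]; then
    echo "Warning: $count disks share the label ${JOB_MOUNT_HD_LABEL}; remove the duplicated disk before mounting."
    exit 1
fi
You would run this check before the mount -L call, since mount -L cannot know which of the identically labelled disks you mean.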

Weka command line attributes arguments

On the command line, I'm able to get this rolling with no problem:
java weka.Run weka.classifiers.timeseries.WekaForecaster -W
"weka.classifiers.functions.MultilayerPerceptron -L 0.01 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H 20 " -t "C:\MyFile.arff" -F DirectionNumeric -L 1 -M 3 -prime 3 -horizon 6 -holdout 100 -G TradeDay -dayofweek -weekend -future
But once I try to put the skip list, I start to get errors saying that it's missing a date that is not in the skip list even though the date is in fact on it:
java weka.Run weka.classifiers.timeseries.WekaForecaster -W "weka.classifiers.functions.MultilayerPerceptron -L 0.01 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H 20 " -t "C:\MyFile.arff" -F DirectionNumeric -L 1 -M 3 -prime 3 -horizon 6 -holdout 100 -G TradeDay -dayofweek -weekend -future -skip ""2014-06-07#yyyy-MM-dd, 2014-06-12"
Does anybody know how to get this working? Weka is low on documentation as far as I know.
Thanks in advance!
Forget it, I got it: the problem was that the 's' must be a capital letter:
-Skip
instead of
-skip.
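So the full command that works is roughly the same as above, with the capitalized flag (and the stray extra quote before the date list removed):
java weka.Run weka.classifiers.timeseries.WekaForecaster -W "weka.classifiers.functions.MultilayerPerceptron -L 0.01 -M 0.2 -N 500 -V 0 -S 0 -E 20 -H 20 " -t "C:\MyFile.arff" -F DirectionNumeric -L 1 -M 3 -prime 3 -horizon 6 -holdout 100 -G TradeDay -dayofweek -weekend -future -Skip "2014-06-07#yyyy-MM-dd, 2014-06-12"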

Slurm job expansion using the C API

I sent this question a few months ago to the slurm-dev list, but it is still unsolved.
The problem is: after trying to change the job size as described in the FAQ, I wanted to do it programmatically using the API.
Everything seems to work fine up to the step of updating the environment variables.
When I launch the application this is what I get:
$ salloc -N1 mpiexec -n 1 ./jobExpansion
salloc: Granted job allocation 559
srun: error: Only allocated 1 nodes asked for 4
In the squeue output I can see that the allocation has changed, but srun cannot see the changes.
I continued debugging and if I executed:
$ salloc -N1
$ export SLURM_NODELIST=n04,n06,n00,n01
$ export SLURM_NNODES=4
$ mpiexec -n 1 ./jobExpansion
It worked.
So, I don't want to overwhelm you with the complete code, but just in case you can help me, here are the parts that do the resizing:
slurm_init_job_desc_msg(&job);
job.user_id = getuid();
job.min_nodes = hostsToExpand;
job.dependency = (char *) malloc(sizeof (char)*20);
sprintf(job.dependency, (char *) "expand:%s", pID);
//$ salloc -N4 --dependency=expand:$SLURM_JOBID
slurm_alloc_msg_ptr = slurm_allocate_resources_blocking(&job, 0, NULL);
//$ scontrol update jobid=$SLURM_JOBID NumNodes=0
slurm_init_job_desc_msg(&job_update);
job_update.job_id = slurm_alloc_msg_ptr->job_id;
job_update.min_nodes = 0;
slurm_update_job(&job_update);
//exit
slurm_kill_job(slurm_alloc_msg_ptr->job_id, 9, 0);
//$ scontrol update jobid=$SLURM_JOBID NumNodes=ALL
slurm_init_job_desc_msg(&job_update);
job_update.job_id = procID;
job_update.min_nodes = INFINITE;
slurm_update_job(&job_update);
Everything points to the environment variables, but I am not sure how to update them properly.
Thank you.
EDITED
If somebody would like to test what I've said, here is the repository:
git clone https://siserte#bitbucket.org/siserte/slurm-job-expansion-test.git

Bash script not producing desired result

I am running a cron-ed bash script to extract cache hits and bytes served per IP address. The script (ProxyUsage.bash) has two parts:
(uniqueIP.awk) find unique IPs and create a bash script to add up the hits and bytes
run the hits and bytes per IP
ProxyUsage.bash
#!/usr/bin/env bash
sudo gawk -f /home/maxg/scripts/uniqueIP.awk /var/log/squid3/access.log.1 > /home/maxg/scripts/pxyUsage.bash
source /home/maxg/scripts/pxyUsage.bash
uniqueIP.awk
{
arrIPs[$3]++;
}
END {
for (n in arrIPs) {
m++; # count arrIPs elements
#print "Array elements: " m;
arrAddr[i++] = n; # fill arrAddr with IPs
#print i " " n;
}
asort(arrAddr); # sort the array values
for (i = 1; i <= m; i++) { # write one command line per IP address
#printf("#!/usr/bin/env bash\n");
printf("sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=%s /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt\n", arrAddr[i])
}
}
pxyUsage.bash
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.13 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.14 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
sudo gawk -f /home/maxg/scripts/proxyUsage.awk -v v_Var=192.168.1.22 /var/log/squid3/access.log.1 >> /home/maxg/scripts/pxyUsage.txt
The ProxyUsage.bash script runs as scheduled and creates the pxyUsage.bash script.
However, the pxyUsage.txt file is not amended with the latest values when the script runs.
So far I run pxyUsage.bash every day myself, as I cannot figure out why the result is not written to the file.
Both bash scripts are set as executable. The file permissions are below:
-rwxr-xr-x 1 maxg maxg 169 Mar 14 08:40 ProxySummary.bash
-rw-r--r-- 1 maxg maxg 910 Mar 15 17:15 proxyUsage.awk
-rwxrwxrwx 1 maxg maxg 399 Mar 17 06:10 pxyUsage.bash
-rw-rw-rw- 1 maxg maxg 2922 Mar 17 07:32 pxyUsage.txt
-rw-r--r-- 1 maxg maxg 781 Mar 16 07:35 uniqueIP.awk
Any hints appreciated. Thanks.
The sudo(8) command requires a pseudo-tty and you do not have one allocated under cron(8); you do have one allocated when logged in the usual way.
Instead of mucking about with sudo(8), just run the script as the correct user.
If you cannot do that, then in the root crontab, do something like this:
su - username /path/to/mycommand arg1 arg2...
This will work because root can use su(1) without needing a password.
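For example (the schedule here is hypothetical, using su -c, and the path is taken from the scripts above), the root crontab entry could look roughly like:
# hypothetical schedule: run the proxy usage script daily at 07:00 as user maxg
0 7 * * * su - maxg -c '/home/maxg/scripts/ProxyUsage.bash'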

time command output on an already running process

I have a process that spawns some other processes.
I want to use the time command on a specific process and get the same output as the time command.
Is that possible, and how?
I want to use the time command on a specific process and get the same output as the time command.
Probably it is enough just to use pidstat to get user and sys time:
$ pidstat -p 30122 1 4
Linux 2.6.32-431.el6.x86_64 (hostname) 05/15/2014 _x86_64_ (8 CPU)
04:42:28 PM PID %usr %system %guest %CPU CPU Command
04:42:29 PM 30122 706.00 16.00 0.00 722.00 3 has_serverd
04:42:30 PM 30122 714.00 12.00 0.00 726.00 3 has_serverd
04:42:31 PM 30122 714.00 14.00 0.00 728.00 3 has_serverd
04:42:32 PM 30122 708.00 16.00 0.00 724.00 3 has_serverd
Average: 30122 710.50 14.50 0.00 725.00 - has_serverd
If not, then according to strace, time uses the wait4 system call (http://linux.die.net/man/2/wait4) to get information about a process from the kernel. The same information is returned by getrusage, but you cannot call it for an arbitrary process, according to its documentation (http://linux.die.net/man/2/getrusage).
So I do not know of any command that will give the same output. However, it is feasible to create a bash script that takes the PID of the specific process and outputs something like what time outputs.
This script does these steps:
1) Get the number of clock ticks per second:
getconf CLK_TCK
I assume it is 100, so one tick equals 10 milliseconds.
2) Then, in a loop, repeat the same sequence of commands while the directory /proc/YOUR-PID exists:
while [ -e "/proc/YOUR-PID" ]
do
    read USER_TIME SYS_TIME START_TIME <<< $(cat /proc/YOUR-PID/stat | awk '{print $14, $15, $22;}')
    sleep 0.1
done
Some explanation, according to man proc:
user time: ($14) - utime - Amount of time that this process has been scheduled in user mode, measured in clock ticks
sys time: ($15) - stime - Amount of time that this process has been scheduled in kernel mode, measured in clock ticks
starttime ($22) - The time in jiffies the process started after system boot.
3) When the process has finished, get the finish time:
read FINISH_TIME <<< $(cat '/proc/self/stat' | awk '{print $22;}')
And then output:
real time = ($FINISH_TIME - $START_TIME) * 10, in milliseconds
user time = $USER_TIME * 10, in milliseconds
sys time = $SYS_TIME * 10, in milliseconds
(with CLK_TCK = 100, one tick is 10 ms, so multiplying ticks by 10 gives milliseconds)
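For instance (numbers chosen purely for illustration), a process that accumulated 978 user ticks and 3 system ticks would be reported as roughly 9780 ms (9.78 s) of user time and 30 ms of system time.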
I think it should give roughly the same result as time. One possible problem I see is if the process exists for a very short period of time.
This is my implementation of time:
#!/bin/bash
# Uses herestrings
print_res_jeffies()
{
    # $2 is a duration in milliseconds
    let "TIME_M=$2/60000"
    let "TIME_S=($2-$TIME_M*60000)/1000"
    let "TIME_MS=$2-$TIME_M*60000-$TIME_S*1000"
    printf "%s\t%dm%d.%03dms\n" $1 $TIME_M $TIME_S $TIME_MS
}
print_res_ticks()
{
    # $2 is a duration in clock ticks (10 ms per tick at CLK_TCK = 100)
    let "TIME_M=$2/6000"
    let "TIME_S=($2-$TIME_M*6000)/100"
    let "TIME_MS=($2-$TIME_M*6000-$TIME_S*100)*10"
    printf "%s\t%dm%d.%03dms\n" $1 $TIME_M $TIME_S $TIME_MS
}
if [ $(getconf CLK_TCK) != 100 ]; then
    exit 1;
fi
if [ $# != 1 ]; then
    exit 1;
fi
PROC_DIR="/proc/"$1
if [ ! -e $PROC_DIR ]; then
    exit 1
fi
USER_TIME=0
SYS_TIME=0
START_TIME=0
# Sample utime ($14), stime ($15) and starttime ($22) until the process exits
while [ -e $PROC_DIR ]; do
    read TMP_USER_TIME TMP_SYS_TIME TMP_START_TIME <<< $(cat $PROC_DIR/stat | awk '{print $14, $15, $22;}')
    if [ -e $PROC_DIR ]; then
        USER_TIME=$TMP_USER_TIME
        SYS_TIME=$TMP_SYS_TIME
        START_TIME=$TMP_START_TIME
        sleep 0.1
    else
        break
    fi
done
# Our own start time ($22 of /proc/self/stat) approximates the moment the target finished
read FINISH_TIME <<< $(cat '/proc/self/stat' | awk '{print $22;}')
let "REAL_TIME=($FINISH_TIME - $START_TIME)*10"
print_res_jeffies 'real' $REAL_TIME
print_res_ticks 'user' $USER_TIME
print_res_ticks 'sys' $SYS_TIME
And this is an example that compares my implementation with the real time command:
>time ./sys_intensive > /dev/null
Alarm clock
real 0m10.004s
user 0m9.883s
sys 0m0.034s
In another terminal window I run my_time.sh and give it the PID:
>./my_time.sh `pidof sys_intensive`
real 0m10.010ms
user 0m9.780ms
sys 0m0.030ms
