Parameters in TCL using ns2 - linux

How can I send these values
24.215729
24.815729
25.055134
27.123499
27.159186
28.843474
28.877798
28.877798
to a Tcl script as input arguments?
As you know, we can't use a pipe, because Tcl doesn't accept the values that way!
What can I do to store these numbers in the Tcl script? (The count of the numbers is variable and can be 0 to N; in this example there are 8.)

This is pretty easy to do in bash: dump the list of values into a file and then run:
tclsh myscript.tcl $(< datafilename)
And then the values are accessible in the script with the argument variables:
puts $argc; # This is a count of all values
puts $argv; # This is a list containing all the arguments
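A minimal sketch of such a myscript.tcl (it just echoes the values back, to show the plumbing):
#!/usr/bin/tclsh
# $argc holds the number of arguments, $argv the list of values
puts "Got $argc values"
foreach v $argv {
    puts "value: $v"
}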

You can read data piped to stdin with commands like
set data [gets stdin]
(note that gets reads a single line), or from temporary files, if you prefer. For example, the first part of the following program (an example from wiki.tcl.tk) reads some data from a file, and the second part then reads data from stdin. To test it, put the code into a file (e.g. reading.tcl), make it executable, create a small file somefile, and execute it via e.g.
./reading.tcl < somefile
#!/usr/bin/tclsh
# Slurp up a data file
set fsize [file size "somefile"]
set fp [open "somefile" r]
set data [read $fp $fsize]
close $fp
puts "Here is file contents:"
puts $data
puts "\nHere is from stdin:"
set momo [read stdin $fsize]
puts $momo

A technique I use when coding is to put data in my scripts as a literal:
set values {
24.215729
24.815729
25.055134
27.123499
27.159186
28.843474
28.877798
28.877798
}
Now I can just feed them into a command one at a time with foreach, or send them as a single argument:
# One argument
TheCommand $values
# Iterating
foreach v $values {
    TheCommand $v
}
Once you've got your code working with a literal, switching it to pull the data from a file is pretty simple. You just replace the literal with code to read a file:
set f [open "the/data.txt"]
set values [read $f]
close $f
You can also pull the data from stdin:
set values [read stdin]
If there are a lot of values (more than, say, 10–20MB) then you might be better off processing the data one line at a time. Here's how to do that, reading from stdin:
while {[gets stdin v] >= 0} {
    TheCommand $v
}

Related

How can I make the lines variable in a file? [duplicate]

I'm trying to read from a file that has multiple lines, each with 3 values I want to assign to variables and work with.
I figured out how to simply display each of them on the terminal, but I can't figure out how to actually assign them to variables.
while read i
do
    for j in $i
    do
        echo $j
    done
done < ./test.txt
test.txt:
1 2 3
a b c
So I want to read the line in the outer loop, then assign the 3 variables and then work with them, before going to the next line.
I'm guessing I have to read the values of the lines without an inner loop, but I can't figure it out right now.
Hope someone can point me in the right direction.
I think all you're looking for is to read multiple variables per line: the read command can assign words to variables by itself.
while read -r first second third; do
    do_stuff_with "$first"
    do_stuff_with "$second"
    do_stuff_with "$third"
done < ./test.txt
The below assumes that your desired result is the set of assignments a=1, b=2, and c=3, taking the values from the first line and the keys from the second.
The easy way to do this is to read your keys and values into two separate arrays. Then you can iterate only once, referring to the items at each position within those arrays.
#!/usr/bin/env bash
case $BASH_VERSION in
    ''|[123].*) echo "ERROR: This script requires bash 4.0 or newer" >&2; exit 1;;
esac
input_file=${1:-test.txt}
# create an associative array in which to store your variables read from a file
declare -A vars=( )
{
    read -r -a vals                # read first line into array "vals"
    read -r -a keys                # read second line into array "keys"
    for idx in "${!keys[@]}"; do   # iterate over array indexes (starting at 0)
        key=${keys[$idx]}          # extract key at that index
        val=${vals[$idx]}          # extract value at that index
        vars[$key]=$val            # assign the value to the key in the associative array
    done
} < "$input_file"
# print for debugging
declare -p vars >&2
echo "Value of variable a is ${vars[a]}"
See:
BashFAQ #6 - How can I use variable variables (indirect variables, pointers, references) or associative arrays?
The bash-hackers page on the read builtin, documenting use of -a to read words into an array.

How to read a file into a variable and print it using that variable with exact format in shell

I am trying to read the id_rsa file into a variable var (set var=`cat id_rsa`) in tcsh, to provide input to a program. But when I echo the variable (echo "$var") the newlines are gone; the whole file content is on one line. So how do I correctly store and print the variable?
Don't use tcsh for this task: getting the output of a command into a variable verbatim is unnecessarily difficult in it.
Some workarounds, if you have to use tcsh, are:
Use redirection:
% yourtool < id_rsa
Store the variable as base-16 (or something else) encoded stuff, so that it doesn't contain any newline characters that will get mangled by tcsh:
% set hex_contents = `<id_rsa xxd -l 16 -p`
Use a tempfile:
% set tempfile = `mktemp`
% program > $tempfile
... later
% <$tempfile other-program
I asked a similar question almost a year ago: https://unix.stackexchange.com/questions/284220/tcsh-preserve-newlines-in-command-substitution
In case you're curious, this is how you get the verbatim contents (credit: Stéphane Chazelas).
set temp = "`(some command; echo .) | paste -d . - /dev/null`"
set var = ""
set nl = '\
'
foreach i ($temp:q)
    set var = $var:q$i:r:q$nl:q
end
set var = $var:r:q

Split a huge file in LINUX into multiple small files (each less than 100MB) splitting at a specific line with pattern match

I have the below source file (~10GB) and I need to split it into several small files (<100MB each), each of which should have the same header record. The tricky part is that I can't just split the file at any random line by using some split command: records belonging to an agent must not be split across multiple files. For simplicity I am only showing 3 agents here (there are thousands of them in the real file).
Input.csv
Src,AgentNum,PhoneNum
DWH,Agent_1234,phone1
NULL,NULL,phone2
NULL,NULL,phone3
DWH,Agent_5678,phone1
NULL,NULL,phone2
NULL,NULL,phone3
DWH,Agent_9999,phone1
NULL,NULL,phone2
NULL,NULL,phone3
Output1.csv
Src,AgentNum,PhoneNum
DWH,Agent_1234,phone1
NULL,NULL,phone2
NULL,NULL,phone3
Output2.csv
Src,AgentNum,PhoneNum
DWH,Agent_5678,phone1
NULL,NULL,phone2
NULL,NULL,phone3
DWH,Agent_9999,phone1
NULL,NULL,phone2
NULL,NULL,phone3
#!/bin/bash
# Calculate the file size in bytes
FileSizeBytes=`du -b "$FileName" | cut -f1`
# Check the file size
if [[ $FileSizeBytes -gt 100000000 ]]
then
    echo "Filesize is greater than 100MB"
    NoOfLines=`wc -l < "$FileName"`
    AvgLineSize=$((FileSizeBytes / NoOfLines))
    LineCountInEachFile=$((100000000 / AvgLineSize))
    # Section for splitting the files
else
    echo "Filesize is already less than 100MB. No splitting needed"
    exit 0
fi
I am new to UNIX but am trying this bash script on my own and am kind of stuck at splitting the files. I am not expecting somebody to give me a full script; I am looking for any simple approach/recommendation, possibly using other simple alternatives like sed or such. Many thanks in advance!
Here is a rough idea of how to do it in Perl. Please modify the regular expression if it doesn't exactly match your actual data. I have only tested it on your dummy data.
#!/usr/bin/perl -w
my $l=<>; chomp($l); my $header=$l;   # the first line is the header
my $agent=""; my $fh;
while ($l=<>) {
    chomp($l);
    if ($l=~m/^\s*[^,]+,(Agent_\d+),[^,]+/) {   # a new agent's records begin
        $agent="$1";
        open($fh,">","${agent}.txt") or die "$!";
        print $fh $header."\n";                 # repeat the header in each file
    }
    print $fh $l."\n";
}
Use it as follows:
./perlscript.pl < inputfile.txt
If you don't have perl (check for perl at /usr/bin/perl or some other such location), I will try to write an awk script. Let me know if you find problems running the above script.
In response to your updated request (you only want to split the file, with each output file less than 100MB, no agent's records split across two files, and the header printed in each output file), here is a rough idea of how you can accomplish that. It doesn't make an exact cut (because you would need to calculate sizes before you write). If you set $maxfilesize to a value like 95*1024*1024 or 99*1024*1024, that should keep each file under 100MB (for example, if the maximum size of an agent's records is less than 5MB, then set $maxfilesize to 95*1024*1024).
#!/usr/bin/perl -w
# Max file size, approximately in bytes
#
# For 99MB make it 99*1024*1024
#
my $maxfilesize=95*1024*1024;
#my $maxfilesize=400;
my $l=<>; chomp($l); my $header=$l;
my $fh;
my $filecounter=0;
my $filename="";
my $filesize=1000000000000; # big dummy size for first iteration
while ($l=<>) {
    chomp($l);
    if ($l=~m/^\s*[^,]+,Agent_\d+,[^,]+/) {
        if ($filesize>$maxfilesize) {
            print "FileSize: $filesize\n";
            $filecounter++; $filename=sprintf("outfile_%05d",$filecounter);
            print "Opening New File: $filename\n";
            open($fh,">","${filename}.txt") or die "$!";
            print $fh $header."\n";
            $filesize=length($header);
        }
    }
    print $fh $l."\n";
    $filesize+=length($l);
    print "FileSize: $filesize\n";
}
If you want more precise cuts than this, I will update it to buffer the data before printing.
Step 1. Save the header.
Step 2. Create a variable "content" to temporarily hold what the program has read.
Step 3. Start reading the next lines; in Python (a sketch: write_chunk and max_size are hypothetical names for your output routine and your predefined size limit):
if line.startswith("DWH"):
    # a new agent block starts here, so it is safe to cut
    if content != "" and len(content) >= max_size:
        write_chunk(header + content)  # hypothetical: emit one output file
        content = ""                   # reinitiate content
content += line                        # keep adding lines to the current chunk

tcl command to search for a particular word inside a txt file in linux

I need to write a Tcl script that will process the lines of a text file. The file looks like
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 0x24
10.77.33.247 10.77.33.241 0x22
10.77.33.247 10.77.33.241 0x21
I need to be able to iterate through the file and, for each line that contains rtp, store the value that comes after it (e.g., 0x26 in the sample above) in a variable, to use in other parts of the script.
Here's a (rather low-level) Tcl way to do it.
set ch [open myfile.txt]
set data [chan read $ch]
chan close $ch
set lines [split [string trim $data] \n]
set res {}
foreach line $lines {
    if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
        lappend res [lindex $line 3]
    }
}
If you replace "myfile.txt" with the name of your data file and run this code, you get the words you were after collected in the variable res.
Explanation
It's usually best to use standard (builtin or tcllib) commands, such as fileutil::foreachLine in glenn jackman's answer. If one wants to do it step by step, however, Tcl still makes it very easy.
The first step is to get the contents of the file into memory. There is a standard command for that too: fileutil::cat, but the following sequence will do:
set ch [open myfile.txt]
set data [chan read $ch]
chan close $ch
(This is more or less equivalent to set data [fileutil::cat myfile.txt].)
The next step is to split the text into lines. It's always a good idea to trim off whitespace at both ends of the text; otherwise loose newlines can create empty elements that disturb processing.
set lines [split [string trim $data] \n]
In some cases, we might have to split the lines into lists of fields, but from the example it seems that the lines are already usable as lists (lines that only have whitespace, alphanumerics, and well-behaved punctuation such as dots usually are).
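If a line ever isn't list-clean, a small sketch of an explicit split, picking out the runs of non-whitespace with regexp, could be:
set fields [regexp -all -inline {\S+} $line]
# e.g. {10.77.33.247 10.77.33.241 rtp 0x26} for the first sample line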
We need a test for matching lines. There are several alternatives that fit the example data you provided, including
string match *rtp* $line ;# match every line that has "rtp" somewhere
[llength $line] > 3 ;# match every line that has more than three columns
[lindex $line 2] eq {rtp} ;# match every line where the third element is "rtp"
We also need a way to extract the data we want. If the word after "rtp" is always in the last column, [lindex $line end] will do the job. If the word is always in the fourth column, but there may be further columns, [lindex $line 3] is better.
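Trying both extractions on the first sample line:
set line {10.77.33.247 10.77.33.241 rtp 0x26}
puts [lindex $line end]   ;# -> 0x26
puts [lindex $line 3]     ;# -> 0x26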
Grabbing a couple of these alternatives, the procedure to get a list of words as specified can be written
set res {}
foreach line $lines {
    if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
        lappend res [lindex $line 3]
    }
}
(In pseudo-code: get an empty list (res); test every line (using a combination of two of the tests above), extract the sought-after word from every matching line and add it to the res list.)
or, using lmap (Tcl 8.6+)
set res [lmap line $lines {
    if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
        lindex $line 3
    } else {
        continue
    }
}]
All the words that came after a "rtp" word should now be in res. If you just wanted the last match, it's [lindex $res end].
Documentation: chan, continue, foreach, if, lappend, lindex, llength, lmap, open, set, split, string
Supposing your file is foo.txt:
grep "word" foo.txt
For example,
grep "0x26" foo.txt
will show you all the lines with 0x26 in them.
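The same search can be run from inside Tcl with exec; a sketch (note that grep exits nonzero when nothing matches, which exec reports as a Tcl error, hence the catch):
if {[catch {exec grep "0x26" foo.txt} matches]} {
    set matches ""   ;# no matching lines
}
puts $matches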
tcllib has lots of goodness in it:
% package require fileutil
1.14.5
% fileutil::foreachLine line "file" {
    if {[string match {*rtp*} $line]} {
        lappend values [lindex [split $line] end]
    }
}
% puts $values
0x26 0x26 0x26
The below code works for getting the text from the file which is in quotes (" "):
proc aifWebcamInitVideo {} {
    variable devicePath "c:/testfile.txt"
    # ffmpeg command to list the device names connected to the system
    exec ffmpeg -list_devices true -f dshow -i dummy >& $devicePath &
    after 4000 ;# wait 4 seconds so that the device can be selected
    set files [glob $aif::LogRootDir/*] ;# look for all the files
    foreach file $files {
        set fileName $devicePath
        if {[string match $fileName $file]} { ;# compare the capture file with the files present in the directory
            set file [open $devicePath] ;# open the file
            set file_device [read $file]
            set data [split $file_device "\n"] ;# split the file contents into lines
            foreach line $data {
                if {[regexp {"([^""]*)"} $line -> substring]} { ;# look between the quotes to retrieve the device connected to the system
                    set result $substring
                    lappend cameraList $result ;# build the list of devices
                    set camera [lindex $cameraList 0]
                }
            }
            close $file
            break
        }
    }
    # values passed to the ffmpeg command
    variable TableCamera $camera
    puts "Device selected for video capture is : $TableCamera" ;# the first device from the list
}

How to replace a string of different length through file handling in tcl

I want to replace SVT-ATL with SVT in all the lines of a file, without disturbing the other text.
I am using the below code:
set fileDest3 "$dirName/$filename"
set fpr [open $fileDest3 r+]
set line [gets $fpr]
regsub -all "SVT-ATL" $line "SVT" line
puts $fpr "$line"
Because you're changing the length of the lines, you must rewrite the whole file. (Well, you could theoretically leave untouched everything before the first change, but that's a whole bunch more work.) The simplest way is to read it all in, use string map to perform the change (in the simplest case; regsub if things are trickier), and then write it all back out (after a chan seek to the beginning, of course). As you're shortening the content, you'll need to finish with a chan truncate.
set fileDest3 "$dirName/$filename"
set fptr [open $fileDest3 r+]
set newContents [string map {"SVT-ATL" "SVT"} [read $fptr]]
chan seek $fptr 0
puts -nonewline $fptr $newContents
chan truncate $fptr
close $fptr
The puts has a -nonewline so you don't get an extra terminating newline; the one that was there originally will still be present (as we're reading it all in, not just line-by-line).
package require fileutil

proc cmd data {
    string map {SVT-ATL SVT} $data
}

if {[catch {fileutil::updateInPlace [file join $dir $filename] cmd}]} {
    error "failed to change file"
}
The Tcllib fileutil::updateInPlace command takes care of the low-level details of opening, reading, applying a given command to the content, truncating, writing, and closing files that you want updated. You simply provide a command like cmd here and enjoy the odds ever being in your favor.
Documentation: catch, error, if, package, proc, string
The fileutil package is documented here: fileutil
set timestamp [clock format [clock seconds] -format {%Y%m%d%H%M%S}]
set filename "yourfilenamehere.txt"
set temp $filename.tmp.$timestamp
set backup $filename.bak.$timestamp
set in [open $filename r]
set out [open $temp w]
# line-by-line, read the original file
while {[gets $in line] != -1} {
    # modify $line by replacing 'SVT-ATL' with 'SVT'
    regsub -all "SVT-ATL" $line "SVT" line
    # then write the modified line to the tmp file
    puts $out $line
}
close $in
close $out
# This is to rename the current file to the backup file
file rename -force $filename $backup
# This is to rename the tmp file to the original file
file rename -force $temp $filename
Reference: Glenn Jackman & Donal Fellows
Update:
If you don't want to create a new file then, as Jerry pointed out, we can at least read the whole file content at once, apply our string replacement, and then write it back to the file.
# Reading the file content
set fd [ open "yourfilename" r ]
set data [ read $fd ]
close $fd
# Replacing the string now...
regsub -all "SVT-ATL" $data "SVT" data
# Opening file with 'w' mode which will truncate the file
set fd [ open "yourfilename" w ]
puts $fd $data
close $fd
I would consider
exec sed -i {s/SVT-ATL/SVT/g} "$dirName/$filename"
