Let's say I open a file, then parse it into lines. Then I use a loop:
foreach line $lines {}
e.g., if the file contained the following string:
XYDATA, NAME1
I want to put ACC_ after the XYDATA to get ACC_NAME1
and if the file contains more than one string with XYDATA, put VEL_, DSP_, Prs_, and so on.
Using the textutil::split package from tcllib, and the ability of foreach to iterate over multiple lists simultaneously:
package require textutil::split
set line {XYDATA, foo, bar, baz, qux}
set prefixes {ACC_ VEL_ DSP_ Prs_}
set fields [textutil::split::splitx $line {, }]
set new [list]
if {[lindex $fields 0] eq "XYDATA"} {
    lappend new [lindex $fields 0]
    foreach prefix $prefixes field [lrange $fields 1 end] {
        lappend new $prefix$field
    }
}
puts [join $new ", "]
XYDATA, ACC_foo, VEL_bar, DSP_baz, Prs_qux
Alternatively, use a single regsub call that generates some code:
set code [regsub -all {(, )([^,]+)} $line {\1[lindex $prefixes [incr counter]]\2}]
set counter -1
puts [subst $code]
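On the example line above, evaluating the generated code with subst should print the same result as before:
XYDATA, ACC_foo, VEL_bar, DSP_baz, Prs_qux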
Well, I tried to find my answer online but I didn't, and I really need help.
I have a text file (file.txt) that contains:
C:/Users/00_file/toto.odb,
dis,455,
stre,54,
stra,25,
C:/Users/00_file/tota.odb,
And a Tcl script that allows me to read the values of each line:
set Infile [open "C:/Users/00_file/file.txt" r]
set filelines [split $Infile ","]
set Namepath [lindex $filelines 1 0] ;# *doesn't work*
set dis [lindex $filelines 2 0] ;# *works fine*
...
The problem is that when I want the complete line 1 of the text file from my Tcl script, some information is missing and extra characters disappear.
How can I get the complete string (line 1 of my text file)?
Thanks a lot !
You open the file for reading but you don't actually read from it. $Infile is just (basically) a pointer to a file descriptor, not the contents of the file:
% set fh [open file.txt r]
% puts $fh
file3
The idiomatic way to read from a file is line by line:
set fh [open "C:/Users/00_file/file.txt" r]
set data [list]
while {[gets $fh line] != -1} {
    lappend data [split $line ,]
}
close $fh
Or, read the whole file and split it on newlines
set fh [open "C:/Users/00_file/file.txt" r]
set data [lmap line [split [read -nonewline $fh] \n] {split $line ,}]
close $fh
Then access the data
set Namepath [lindex $data 0 0] ;# first line, first field
set dis [lindex $data 1 1] ;# second line, second field
Tcl code will be as follows:
set file [open c:/filename.txt]
set file_device [read $file]
set data [split $file_device "\n"]
for {set count 0} {$count < 2} {incr count} {
    # For every iteration one line is printed.
    puts [lindex $data $count]
}
# The open command opens the file at the given path.
# The read command reads the whole open file into a string.
# Splitting on "\n" breaks the contents into individual lines.
close $file
This will take the lines one after another.
I need to write a Tcl script that will process the lines of a text file. The file looks like:
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 rtp 0x26
10.77.33.247 10.77.33.241 0x24
10.77.33.247 10.77.33.241 0x22
10.77.33.247 10.77.33.241 0x21
I need to be able to iterate through the file, and for each line that contains rtp, store the value that comes after it (e.g., 0x26 in the sample above) in a variable to use in other parts of the script.
Here's a (rather low-level) Tcl way to do it.
set ch [open myfile.txt]
set data [chan read $ch]
chan close $ch
set lines [split [string trim $data] \n]
set res {}
foreach line $lines {
    if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
        lappend res [lindex $line 3]
    }
}
If you replace "myfile.txt" with the name of your data file and run this code, you get the words you were after collected in the variable res.
Explanation
It's usually best to use standard (builtin or tcllib) commands, such as fileutil::foreachLine in glenn jackman's answer. If one wants to do it step by step, however, Tcl still makes it very easy.
The first step is to get the contents of the file into memory. There is a standard command for that too: fileutil::cat, but the following sequence will do:
set ch [open myfile.txt]
set data [chan read $ch]
chan close $ch
(This is more or less equivalent to set data [fileutil::cat myfile.txt].)
Next step is to split the text into lines. It's always a good idea to trim off whitespace at both ends of the text, otherwise loose newlines can create empty elements that disturb processing.
set lines [split [string trim $data] \n]
In some cases, we might have to split the lines into lists of fields, but from the example it seems that the lines are already usable as lists (lines that only have whitespace, alphanumerics, and well-behaved punctuation such as dots usually are).
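If the lines did need explicit splitting into fields, a minimal sketch (assuming hypothetical comma-separated input rather than the whitespace-separated sample above) could look like this:
set rows {}
foreach line $lines {
    # Split each line on commas to get an explicit list of fields.
    lappend rows [split $line ,]
}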
We need a test for matching lines. There are several alternatives that fit the example data you provided, including
[string match *rtp* $line]    ;# match every line that has "rtp" somewhere
[llength $line] > 3           ;# match every line that has more than three columns
[lindex $line 2] eq {rtp}     ;# match every line where the third element is "rtp"
We also need a way to extract the data we want. If the word after "rtp" is always in the last column, [lindex $line end] will do the job. If the word is always in the fourth column, but there may be further columns, [lindex $line 3] is better.
Grabbing a couple of these alternatives, the procedure to get a list of words as specified can be written
set res {}
foreach line $lines {
if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
lappend res [lindex $line 3]
}
}
(In pseudo-code: get an empty list (res); test every line (using a combination of two of the tests above), extract the sought-after word from every matching line and add it to the res list.)
or, using lmap (Tcl 8.6+)
set res [lmap line $lines {
    if {[llength $line] > 3 && [lindex $line 2] eq {rtp}} {
        lindex $line 3
    } else {
        continue
    }
}]
All the words that came after a "rtp" word should now be in res. If you just wanted the last match, it's [lindex $res end].
Documentation: chan, continue, foreach, if, lappend, lindex, llength, lmap, open, set, split, string
Supposing your file is foo.txt:
grep "word" foo.txt
For example,
grep "0x26" foo.txt
will show you all the lines with 0x26 in them.
tcllib has lots of goodness in it:
% package require fileutil
1.14.5
% fileutil::foreachLine line "file" {
    if {[string match {*rtp*} $line]} {
        lappend values [lindex [split $line] end]
    }
}
% puts $values
0x26 0x26 0x26
The code below works for getting the text from the file which is in quotes (" "):
proc aifWebcamInitVideo {} {
    variable devicePath "c:/testfile.txt"
    # ffmpeg command to list the devices connected to the system
    exec ffmpeg -list_devices true -f dshow -i dummy >& $devicePath &
    after 4000 ;# wait 4 seconds so that the device can be selected
    set files [glob $aif::LogRootDir/*] ;# look for all the files
    foreach file $files {
        set fileName $devicePath
        if {[string match $fileName $file]} { ;# compare the freshly captured file against the files present in the directory
            set file [open $devicePath] ;# open the file
            set file_device [read $file]
            set data [split $file_device "\n"] ;# divide the file contents into lines
            foreach line $data {
                if {[regexp {"([^"]*)"} $line -> substring]} { ;# look for quotes to retrieve the device connected to the system
                    set result $substring
                    lappend cameraList $result ;# build the list of devices
                    set camera [lindex $cameraList 0]
                }
            }
            close $file
            break
        }
    }
    # values passed to the ffmpeg command
    variable TableCamera $camera
    puts "Device selected for Video capture is : $TableCamera" ;# the first device from the list
}
Hi, I am using Tcl to write output to an xls file.
However, I am succeeding in writing the output to the xls file in one column, but what I want is to split it and write to two different columns at the same time.
My code, which writes to one column only, is working fine:
set fh [open $e w]
while {[llength $c]} {
    set name [lindex $c 0]
    set c [concat [glob -nocomplain -directory [lindex $c 0] -type d *] [lrange $c 1 end]]
    set filesofDirectory [glob -nocomplain -directory $name -type f *]
    if {[llength $filesofDirectory] > 0 && $d == "fftc"} {
        set x "number of files in $name is [llength $filesofDirectory]"
        puts $fh [join $x]
    }
}
close $fh
However, when I modified the same code to get that output:
set fh [open $e w]
while {[llength $c]} {
    set name [lindex $c 0]
    set c [concat [glob -nocomplain -directory [lindex $c 0] -type d *] [lrange $c 1 end]]
    set filesofDirectory [glob -nocomplain -directory $name -type f *]
    if {[llength $filesofDirectory] > 0 && $d == "fftc"} {
        set x "number of files in $name"
        set y [llength $filesofDirectory]
        puts $fh [join $x "," $y]
    }
}
close $fh
Please suggest a workaround.
To dump a directory breakdown into a CSV file that can be used in Excel, this code ought to work:
package require csv
set c .
set d fftc
set e foo.csv
proc glob2csv {c d fh} {
    foreach name $c {
        if {[file isdirectory $name]} {
            set n [llength [glob -nocomplain -directory $name -type f *]]
            if {$n > 0 && $d eq "fftc"} {
                chan puts $fh [csv::join [list "number of files in $name is" $n]]
            }
            glob2csv [glob -nocomplain -directory $name -type d *] $d $fh
        }
    }
}
try {
    open $e w
} on ok fh {
    glob2csv $c $d $fh
} finally {
    catch {chan close $fh}
}
I'm making a lot of uncomfortable assumptions here since I don't really know what your code is about. You might want to use the optional arguments to csv::join to tweak the format of the CSV file. In my locale, for instance, I need to set the separator character to tab (\t) to avoid having Excel treat every line as a single string.
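For instance, a minimal sketch of that tweak, passing a tab as the sepChar argument to csv::join (everything else is as in the proc above):
# Join the two fields with a tab separator instead of the default comma,
# so Excel splits the columns correctly in locales that use "," as the decimal sign.
chan puts $fh [csv::join [list "number of files in $name is" $n] \t]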
Documentation for the Tcllib CSV module
Documentation: catch, chan, file, foreach, glob, if, list, llength, open, package, proc, set, try
I need to work on a text file with 3 columns of values like this:
10 650 8456
1 3264 64643
...
Now I have the following problems:
1) I don't know how to count the length of each number (example: 10 -> 2 digits; 650 -> 3 digits; 64643 -> 5 digits).
2) Once the first point is resolved, I need to create an output txt file with a proper data format like this:
|--01--||--02--||--03--|
For each column there are 8 spaces available for writing the numbers; if a number, for example, has 4 digits like 8456, I want to fill the other 4 spaces (8 - 4) remaining and then, at the 9th space, write the number of the second column, and so on.
Here is an example of the desired output:
|--01--||--02--||--03--|
10 650 8456
1 3264 64643
This is a piece of my code, but I don't know how to count the digits or how to write the other numbers after the first one.
set FileOutput [open $Output w]
set FileInput [open $filename r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
puts $FileOutput " [lindex $inputList 3] [lindex $inputList 4] [lindex $inputList 5]"
But this way I always keep the same text format, with fixed spaces between the numbers; instead, I would like to insert the spaces dynamically.
EDIT: I get the wrong output this way:
set formatStr {%-8d}
puts $FileOutput "[format $formatStr [lindex $num 3]]"
It prints out the format "-8d" and not the number.
EDIT 2: Problem with the output when binding to a button.
The problem I mentioned before was due to the push of a button. I don't know why the output is correct when I run your script directly, but if I put all those actions inside a button binding it gives me the wrong output, like this:
button .bCreate -text "CREATE OUTPUT" -width 30 -height 5 -activebackground green -font " -12"
bind .bCreateGas <1> {
set Output "output.txt"
set filename "input.txt"
set FileOutput [open $Output w]
set FileInput [open $filename r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
set CtriaFind [lsearch -all -inline $inputList CTRIA3*]
foreach line $CtriaFind {
# Extracting all the numbers in a line
set numbers [ regexp -inline -all {\d+} $line ]
set num3 [lindex $numbers 3]
set num4 [lindex $numbers 4]
# Printing each numbers into the file
puts -nonewline $FileOutput " [ format "%-8d" $num3] [ format "%-8d" $num4]"
puts $FileOutput "";
}
}
A part of the input.txt file is this one:
GRID 48588 -.366712-3.443-2.3697197
GRID 48606 -.366683-.0373640.374481
GRID 48607 -.366536-3.888-2.3767999
GRID 48608 -.366735-3.589-2.3721335
$$
$$ SPOINT Data
$$
CTRIA3 101268 0 9793 4098 9938
CTRIA3 101353 0 3986 9928 3803
CTRIA3 101363 0 4010 12337 3932
I want to print only:
9793 4098
3986 9928
4010 12337
You need to make use of the format command to format the display and regexp to retrieve the numbers from each line.
set Output "output.txt"
set filename "input.txt"
set FileOutput [open $Output w]
set FileInput [open $filename r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
#puts $inputList
foreach line $inputList {
    # Extract all the numbers in a line
    set numbers [regexp -inline -all {\d+} $line]
    # Print each number into the file
    foreach num $numbers {
        puts -nonewline $FileOutput "[format "%-8d" $num]"
    }
    puts $FileOutput "" ;# This is just for the newline character
}
close $FileInput
close $FileOutput
The - used in the format command specifies that the converted argument should be left-justified in its field. The number 8 specifies the width of each field.
Reference : format
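For example, a quick sketch of what the left-justified, width-8 fields look like (the | characters are only there to make the padding visible):
puts "[format "%-8d" 8456]|[format "%-8d" 10]|"
# prints: 8456    |10      |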
Update 1 :
There can be many ways. We have the whole list of numbers of a particular line in the list numbers. Afterwards we iterate through that list with foreach. Here, instead of looping over all the elements, you can take only the 2nd element using [lindex $numbers 1].
Or, since we know that the elements are separated by spaces, we can treat the line directly as a list and extract the second element from it; a small sketch of that follows. It all depends on your requirement.
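A minimal sketch of that second approach, treating a whitespace-separated line (taken from the question's sample data) directly as a list:
set line "CTRIA3  101353  0  3986  9928  3803"
# Treat the whitespace-separated line as a Tcl list and pull out single fields.
set second [lindex $line 1]   ;# 101353
set fourth [lindex $line 3]   ;# 3986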
Is there an inbuilt command to do this, or has anyone had any luck with a script that does it?
I am looking to get counts of how many records (as defined by a specific EOL such as "^%!") had how many occurrences of a specific character, sorted descending by the number of occurrences.
For example, with this sample file:
jdk,|ljn^%!dk,|sn,|fgc^%!
ydfsvuyx^%!67ds5,|bvujhy,|s6d75
djh,|sudh^%!nhjf,|^%!fdiu^%!
Suggested input: delimiter, EOL, and filename as arguments.
bash/perl some_script_name ",|" "^%!" samplefile
Desired output:
occs count
3 1
2 1
1 2
0 2
This is because the 1st record had one delimiter, 2nd record had 2, 3rd record had 0, 4th record had 3, 5th record had 1, 6th record had 0.
Bonus points if you can make the delimiter and EOL arguments accept hex input (e.g. 2C7C) or normal character input (e.g. ,|).
Script:
#!/usr/bin/perl
use strict;
$/ = $ARGV[1];
open my $fh, '<', $ARGV[2] or die $!;
my @records = <$fh> and close $fh;
my $delim = $ARGV[0];
my %counts;
$counts{(split /\Q$delim\E/, $_) - 1}++ for @records;
delete $counts{-1};
print "$_\t$counts{$_}\n" for (reverse sort keys %counts);
Test:
perl script.pl ',|' '^%!' samplefile
Output:
3 1
2 1
1 2
0 2
This is what perl lives for:
#!perl -w
use 5.12.0;
my ($delim, $eol, $file) = @ARGV;
open my $fh, "<$file" or die "error opening $file $!";
$/ = $eol; # input record separator
my %counts;
while (<$fh>) {
    my $matches = () = $_ =~ /(\Q$delim\E)/g; # "goatse" operator
    $counts{$matches}++;
}
say "occs\tcount";
foreach my $num (reverse sort keys %counts) {
say "$num\t$counts{$num}";
}
(if you haven't got 5.12, remove the "use 5.12" line and replace the say with print)
A solution in awk:
BEGIN {
    RS = "\\^%!"
    FS = ",\\|"
    max_occ = 0
}
{
    if (match($0, "^ *$")) { # This is here to deal with the final separator.
        next
    }
    if (NF - 1 > max_occ) {
        max_occ = NF - 1
    }
    count[NF - 1] = count[NF - 1] + 1
}
END {
    printf("occs count\n")
    for (i = 0; i <= max_occ; i++) {
        printf("%s %s\n", i, count[i])
    }
}
Well, there's one more empty record at the end of the file, which has 0. So, here's a script to do what you wanted. Adding headers and otherwise tweaking the printf output is left as an exercise for you. :)
Basically, read the whole file in, split it into records, and for each record, use a /g regex to count the sub-delimiters. Since /g returns an array of all matches, use @{[]} to make an arrayref then deref that in scalar context to get a count. There has to be a more elegant solution to that particular part of the problem, but whatever; it's perl line noise. ;)
user@host[/home/user]
$ ./test.pl ',|' '^%!' test.in
3 1
2 1
1 2
0 3
user@host[/home/user]
$ cat test.in
jdk,|ljn^%!dk,|sn,|fgc^%!
ydfsvuyx^%!67ds5,|bvujhy,|s6d75
djh,|sudh^%!nhjf,|^%!fdiu^%!
user@host[/home/user]
$ cat test.pl
#!/usr/bin/perl
my ($subdelim, $delim, $in) = @ARGV;
$delim = quotemeta $delim;
$subdelim = quotemeta $subdelim;
my %counts;
open(F, $in) or die qq{Failed opening $in: $!\n};
foreach ( split(/$delim/, join(q{}, <F>)) ) {
    $counts{ scalar(@{[ m/.*?($subdelim)/g ]}) }++;
}
printf( qq{%i% 4i\n}, $_, $counts{$_} ) foreach (sort {$b<=>$a} keys %counts);
And here's a modified version which only keeps fields that contain at least one non-space character. That removes the last field, but also has the consequence of removing any other empty fields. It also uses $/ and \Q\E to reduce a couple of explicit function calls (thanks, Alex). And, like the previous one, it works with strict + warnings.
#!/usr/bin/perl
my ($subdelim, $delim, $in) = @ARGV;
local $/ = $delim;
my %counts;
open(F, $in) or die qq{Failed opening $in: $!\n};
foreach ( grep(/\S/, <F>) ) {
    $counts{ scalar(@{[ m/.*?(\Q$subdelim\E)/g ]}) }++;
}
printf( qq{%i% 4i\n}, $_, $counts{$_} ) foreach (sort {$b<=>$a} keys %counts);
If you really only want to remove the last record unconditionally, I'm partial to using pop:
#!/usr/bin/perl
my ($subdelim, $delim, $in) = @ARGV;
local $/ = $delim;
my %counts;
open(F, $in) or die qq{Failed opening $in: $!\n};
my @lines = <F>;
pop @lines;
$counts{ scalar(@{[ m/.*?(\Q$subdelim\E)/g ]}) }++ foreach (@lines);
printf( qq{%i% 4i\n}, $_, $counts{$_} ) foreach (sort {$b<=>$a} keys %counts);