Hi, I am using Tcl to write output to an .xls file.
However, I am only succeeding in writing the output to the file in one column; what I want is to split the output and write it to two different columns at the same time.
My code, which writes to one column only, works fine:
set fh [open $e w]
while {[llength $c]} {
    set name [lindex $c 0]
    set c [concat [glob -nocomplain -directory [lindex $c 0] -type d *] [lrange $c 1 end]]
    set filesofDirectory [glob -nocomplain -directory $name -type f *]
    if {[llength $filesofDirectory] > 0 && $d == "fftc"} {
        set x "number of files in $name is [llength $filesofDirectory]"
        puts $fh [join $x]
    }
}
close $fh
However, when I modified the same code to produce the two-column output:
set fh [open $e w]
while {[llength $c]} {
    set name [lindex $c 0]
    set c [concat [glob -nocomplain -directory [lindex $c 0] -type d *] [lrange $c 1 end]]
    set filesofDirectory [glob -nocomplain -directory $name -type f *]
    if {[llength $filesofDirectory] > 0 && $d == "fftc"} {
        set x "number of files in $name"
        set y [llength $filesofDirectory]
        puts $fh [join $x "," $y]
    }
}
close $fh
Please suggest a workaround.
To dump a directory breakdown into a CSV file that can be used in Excel, this code ought to work:
package require csv
set c .
set d fftc
set e foo.csv
proc glob2csv {c d fh} {
    foreach name $c {
        if {[file isdirectory $name]} {
            set n [llength [glob -nocomplain -directory $name -type f *]]
            if {$n > 0 && $d eq "fftc"} {
                chan puts $fh [csv::join [list "number of files in $name is" $n]]
            }
            glob2csv [glob -nocomplain -directory $name -type d *] $d $fh
        }
    }
}
try {
    open $e w
} on ok fh {
    glob2csv $c $d $fh
} finally {
    catch {chan close $fh}
}
I'm making a lot of uncomfortable assumptions here since I don't really know what your code is about. You might want to use the optional arguments to csv::join to tweak the format of the CSV file. In my locale, for instance, I need to set the separator character to tab (\t) to avoid having Excel treat every line as a single string.
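For example, to get tab-separated output instead of commas, you could pass the separator as csv::join's optional second argument (a small sketch reusing the $fh, $name and $n from the proc above):
chan puts $fh [csv::join [list "number of files in $name is" $n] \t]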
Documentation for the Tcllib CSV module
Documentation: catch, chan, file, foreach, glob, if, list, llength, open, package, proc, set, try
Well, I tried to find my answer online but I couldn't, and I really need help.
I have a text file (file.txt) that contains:
C:/Users/00_file/toto.odb,
dis,455,
stre,54,
stra,25,
C:/Users/00_file/tota.odb,
And a Tcl script that is supposed to read the values on each line:
set Infile [open "C:/Users/00_file/file.txt" r]
set filelines [split $Infile ","]
set Namepath [lindex $filelines 1 0] ;# doesn't work
set dis [lindex $filelines 2 0] ;# works fine
...
The problem is that when I try to get the complete line 1 of the text file with my Tcl script, some information is missing and extra characters disappear.
How can I get the complete string (line 1 of my text file)?
Thanks a lot!
You open the file for reading but you don't actually read from it. $Infile is just (basically) a pointer to a file descriptor, not the contents of the file:
% set fh [open file.txt r]
% puts $fh
file3
The idiomatic way to read from a file: line-by-line
set fh [open "C:/Users/00_file/file.txt" r]
set data [list]
while {[gets $fh line] != -1} {
    lappend data [split $line ,]
}
close $fh
Or, read the whole file and split it on newlines
set fh [open "C:/Users/00_file/file.txt" r]
set data [lmap line [split [read -nonewline $fh] \n] {split $line ,}]
close $fh
Then access the data
set Namepath [lindex $data 0 0] ;# first line, first field
set dis [lindex $data 1 1] ;# second line, second field
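If what you actually need is the complete first line as one string, you can take just its first field or join the fields back together (a small sketch based on the data list built above):
set Namepath [lindex $data 0 0]         ;# just the path from line 1
set wholeLine [join [lindex $data 0] ,] ;# the entire first line, commas restored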
The Tcl code would be as follows:
set file [open c:/filename.txt]
set file_device [read $file]
set data [split $file_device "\n"]
for {set count 0} {$count < 2} {incr count} {
    # one line is printed on each iteration
    puts [lindex $data $count]
    # split on \n is used to break the contents into individual lines
    # the open command opens the file at the given path
    # the read command reads the whole open file
}
close $file
This will take the lines one after another.
I have 2 text files. file1 contains IDs:
0 ABCD
3 ABDF
4 ACGFR
6 ABCD
7 GFHTRSFS
And file2:
ID001 AB ACGFR DF FD GF TYFJ ANH
ID002 DFR AG ABDF HGT MNJ POI YUI
ID003 DGT JHY ABCD YTRE NHYT PPOOI IUYNB
ID004 GFHTRSFS MJU UHY IUJ POL KUH KOOL
If the second column of file1 matches any entry in file2, then the first column of that file2 row should be the answer for it.
The output should look like this:
0 ID003
3 ID002
4 ID001
6 ID003
7 ID004
(The 2nd column of file1 (ABCD) matches the 3rd row of file2, which has ID003, so ID003 should be the answer for it.)
I have tried examples from other posts too, but somehow they do not fit this case.
Any help would be appreciated.
Kind Regards
When trying to match up records from one file with records in another, the idea is to use a hash (also known as an associative array, a set of key-value pairs, or a dictionary) to store the relationship between the first column and the rest of the columns. In effect, create the following relationships:
file1:  ABCD -> 0
        ABDF -> 3
        ACGFR -> 4
        ABCD -> 6
        GFHTRSFS -> 7
file2:  AB -> ID001
        ACGFR -> ID001
        DF -> ID001
        ...
        ANH -> ID001
        DFR -> ID002
        AG -> ID002
        ...
        KUH -> ID004
        KOOL -> ID004
The actual matching up of records between the files amounts to determining whether both hashes, here for file1 and file2, have a key defined for each file1 record. Here we can see that ACGFR is a key in both, therefore we can match up 4 and ID001, and so on for the rest of the keys.
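For comparison, the same matching idea can be sketched in Tcl with a dict (a rough illustration only; it assumes the files are literally named file1 and file2 and that fields are whitespace-separated as in the samples):
set byWord [dict create]
set f [open file2 r]
while {[gets $f line] >= 0} {
    set fields [regexp -inline -all {\S+} $line]
    foreach word [lrange $fields 1 end] {
        dict set byWord $word [lindex $fields 0]   ;# e.g. ACGFR -> ID001
    }
}
close $f
set f [open file1 r]
while {[gets $f line] >= 0} {
    lassign [regexp -inline -all {\S+} $line] num word
    if {[dict exists $byWord $word]} {
        puts "$num [dict get $byWord $word]"       ;# e.g. 4 ID001
    }
}
close $f
The Perl solution below follows the same shape, just with two hashes instead of one dict.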
In perl, we can create a hash by assigning pairs of values:
my %hash = ( foo => 1, bar => 2 );
A hash can also be created using references:
my $hash_ref = { foo => 1, bar => 2 };
Keys can be found using the keys function, and individual values can be extracted:
my $val1 = $hash{ foo }; # regular hash
my $val2 = $hash_ref->{ foo }; # hash reference
Whether a particular key is a member of a hash can be tested using the exists function.
With that background out of the way, here is one way to do this in perl:
matchup_files.pl
#!/usr/bin/env perl
use warnings;
use strict;
my $usage = "usage: $0 file1 file2\n";
my ($file1, $file2) = @ARGV;
for my $file ($file1, $file2) {
die $usage unless defined $file && -f $file; # -f checks whether $file is an actual file
}
# Create mappings col2 -> col1
# col3 -> col1
# col4 -> col1
my $h1 = inverted_hash_file_on_first_column( $file1 );
my $h2 = hash_file_on_first_column( $file2 );
# Try to find matching pairs
my $matches = {};
for my $h1_key ( keys %$h1 ) {
my $h1_val = $h1->{$h1_key};
if ( exists $h2->{ $h1_val } ) {
# We have a match!
my $num = $h1_key;
my $id = $h2->{ $h1_val };
$matches->{ $num } = $id;
}
}
# Print them out in numerical order
for my $num ( sort { $a <=> $b } keys %$matches ) {
my $id = $matches->{$num};
print join(" ", $num, $id) . "\n";
}
exit 0; # Success
sub inverted_hash_file_on_first_column {
my ($file) = @_;
return _hash_file($file, 1);
}
sub hash_file_on_first_column {
my ($file) = @_;
return _hash_file($file, 0);
}
sub _hash_file {
my ($file, $inverted) = @_;
my $fhash = {};
open my $fh, "<", $file or die "Unable to open $file : $!";
while ( my $line = <$fh> ) {
my @fields = split /\s+/, $line; # Split line on whitespace
my $key = shift @fields; # First column
for my $field ( @fields ) {
if ( $inverted ) {
die "Duplicated field '$field'" if exists $fhash->{ $key };
$fhash->{ $key } = $field;
} else {
die "Duplicated field '$field'" if exists $fhash->{ $field };
$fhash->{ $field } = $key;
}
}
}
return $fhash;
}
output
matchup_files.pl input1 input2
0 ID003
3 ID002
4 ID001
6 ID003
7 ID004
Let's say I open a file, then parse it into lines. Then I use a loop:
foreach line $lines {}
e.g., if the file contained the following string:
XYDATA, NAME1
I want to put ACC_ after the XYDATA to get ACC_NAME1,
and if the file contains more than one string after XYDATA, put VEL_, DSP_, Prs_ and so on.
Using the textutil::split package from tcllib, and the ability of foreach to iterate over multiple lists simultaneously
package require textutil::split
set line {XYDATA, foo, bar, baz, qux}
set prefixes {ACC_ VEL_ DSP_ Prs_}
set fields [textutil::split::splitx $line {, }]
set new [list]
if {[lindex $fields 0] eq "XYDATA"} {
    lappend new [lindex $fields 0]
    foreach prefix $prefixes field [lrange $fields 1 end] {
        lappend new $prefix$field
    }
}
puts [join $new ", "]
XYDATA, ACC_foo, VEL_bar, DSP_baz, Prs_qux
Alternatively, use a single regsub call that generates some code, then evaluate it with subst:
set code [regsub -all {(, )([^,]+)} $line {\1[lindex $prefixes [incr counter]]\2}]
set counter -1
puts [subst $code]
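This prints the same result as the foreach version above:
XYDATA, ACC_foo, VEL_bar, DSP_baz, Prs_qux
(Before subst runs, $code holds a bracketed [lindex $prefixes [incr counter]] call in front of each field after XYDATA; starting counter at -1 makes those incr calls yield 0, 1, 2, ... as subst evaluates them from left to right.)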
I need to work on a text file with 3 columns of value like this:
10 650 8456
1 3264 64643
...
Now I have the following problems:
1) I don't know how to count the length of each number (example: 10 -> 2 digits; 650 -> 3 digits; 64643 -> 5 digits).
2) Once the first point is resolved, I need to create an output txt file with a proper data format like this:
|--01--||--02--||--03--|
For each column there are 8 spaces available for writing the numbers; if a number has, for example, 4 digits like 8456, I want to count the other 4 remaining spaces (8 - 4) and then, at the 9th space, write the second column's number, and so on.
Here is an example of the desired output:
|--01--||--02--||--03--|
10 650 8456
1 3264 64643
This is a piece of my code, but I don't know how to count the digits and how to write the other numbers after the first one.
set FileOutput [open $Output w]
set FileInput [open $filename r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
puts $FileOutputGas " [lindex $inputList 3] [lindex $inputList 4] [lindex $inputList 5]"
but this way I always keep the same text format with fixed spaces between the numbers; instead, I would like to insert the spaces dynamically.
EDIT: I get the wrong output this way:
set formatStr {%-8d}
puts $FileOutputGas "[format $formatStr [lindex $num 3]]"
It prints out the format string "-8d" and not the number.
EDIT 2: Problem with the output when binding to a button.
The problem I mentioned before was due to the push of a button. I don't know why the output is correct when I run your script directly, but if I put all of that action inside a button binding it gives me the wrong output, like this:
button .bCreate -text "CREATE OUTPUT" -width 30 -height 5 -activebackground green -font " -12"
bind .bCreateGas <1> {
    set Output "output.txt"
    set filename "input.txt"
    set FileOutput [open $Output w]
    set FileInput [open $filename r]
    set filecontent [read $FileInput]
    set inputList [split $filecontent "\n"]
    set CtriaFind [lsearch -all -inline $inputList CTRIA3*]
    foreach line $CtriaFind {
        # Extracting all the numbers in a line
        set numbers [regexp -inline -all {\d+} $line]
        set num3 [lindex $numbers 3]
        set num4 [lindex $numbers 4]
        # Printing each number into the file
        puts -nonewline $FileOutput " [format "%-8d" $num3] [format "%-8d" $num4]"
        puts $FileOutput ""
    }
}
A part of the input.txt file is this:
GRID 48588 -.366712-3.443-2.3697197
GRID 48606 -.366683-.0373640.374481
GRID 48607 -.366536-3.888-2.3767999
GRID 48608 -.366735-3.589-2.3721335
$$
$$ SPOINT Data
$$
CTRIA3 101268 0 9793 4098 9938
CTRIA3 101353 0 3986 9928 3803
CTRIA3 101363 0 4010 12337 3932
I want to print only
9793 4098
3986 9928
4010 12337
You need to make use of the format command to format the display and regexp to retrieve the numbers in each line.
set Output "output.txt"
set filename "input.txt"
set FileOutput [open $Output w]
set FileInput [open $filename r]
set filecontent [read $FileInput]
set inputList [split $filecontent "\n"]
#puts $inputList
foreach line $inputList {
    # Extracting all the numbers in a line
    set numbers [regexp -inline -all {\d+} $line]
    # Printing each number into the file
    foreach num $numbers {
        puts -nonewline $FileOutput "[format "%-8d" $num]"
    }
    puts $FileOutput "" ;# this is just for the newline character
}
close $FileInput
close $FileOutput
The - used in the format command specifies that the converted argument should be left-justified in its field. The number 8 specifies the width of each field.
Reference : format
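As for counting the digits of a number (the first part of the question): Tcl treats numbers as strings, so string length gives the count directly, for example:
puts [string length 8456]   ;# -> 4
In practice you rarely need the count yourself, since format takes care of the padding.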
Update 1:
There can be many ways. We have the whole list of numbers for a particular line in the list numbers, and afterwards we iterate through that list with foreach. Instead of looping over all the elements, you can take only the 2nd element using [lindex $numbers 1].
Or, since we know that the elements are separated by spaces, we can instead assign the line directly to one list and extract the second element from it. It all depends on your requirements.
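For instance, a sketch of that second way for one of the CTRIA3 lines, assuming whitespace-separated fields as in the sample input:
lassign [regexp -inline -all {\S+} $line] keyword id flag n1 n2
puts "$n1 $n2"   ;# -> 9793 4098 for the first CTRIA3 line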
Using any combination of Linux tools (without going into any full-featured programming language), how can I sort this list
A,C 1
C,B 2
B,A 3
into
A,B 3
A,C 1
B,C 2
Not applying for any beauty contest, this seems to come close:
#!/bin/bash
while read one two; do
    one=`echo $one | sed -e 's/,/\n/g' | sort | sed -e '
1 {h; d}
$! {H; d}
H; g; s/\n/,/g;
'`
    echo $one $two
done | sort
Change the internal field separator, then compare the first two letters with ">":
(
IFS=" ,";
while read a b n; do
if [ "$a" \> "$b" ]; then
echo "$b,$a $n";
else
echo "$a,$b $n";
fi;
done;
) <<EOF | sort
A,C 1
C,B 2
B,A 3
EOF
In case somebody is interested: I was not really satisfied with any of the suggestions, probably because I hoped for a few-lines solution, and such a thing doesn't exist as far as I know.
Anyway, I wrote a utility called ljoin (for "left join", as in databases) which does exactly what I was asking for (of course :D)
#!/usr/bin/perl
=head1 NAME
ljoin.pl - Utility to left join files by specified key column(s)
=head1 SYNOPSIS
ljoin.pl [OPTIONS] <INFILE1>..<INFILEN> <OUTFILE>
To successfully join rows one must supply at least one input file and exactly one output file. Input files can be real file names or a pattern, like [ABC].txt or *.in etc.
=head1 DESCRIPTION
This utility merges multiple files into one using the specified column(s) as a key
=head2 OPTIONS
=item --field-separator=<separator>, -fs <separator>
Specifies what string should be used to separate columns in the plain-text file. The default value for this option is the tab character.
=item --no-sort-fields, -no-sf
Do not sort columns when creating a key for merging files
=item --complex-key-separator=<separator>, -ks <separator>
Specifies what string should be used to separate multiple values in a multikey column. For example, "A B" in one file can be presented as "B A" in another, meaning that this application should understand that it is the same key. The default value for this option is the space character.
=item --no-sort-complex-keys, -no-sk
Do not sort complex column values when creating a key for merging files
=item --include-primary-field, -i
Specifies whether the key which is used to find matching lines in multiple files should be included in the output file. The first column in the output file will be the key in any case, but in the case of a complex column the value of the first column will be sorted. The default value for this option is false.
=item --primary-field-index=<index>, -f <index>
Specifies index of the column which should be used for matching lines. You can use multiple instances of this option to specify a multi-column key made of more than one column like this "-f 0 -f 1"
=item --help, -?
Get help and documentation
=cut
use strict;
use warnings;
use Getopt::Long;
use Pod::Usage;
my $fieldSeparator = "\t";
my $complexKeySeparator = " ";
my $includePrimaryField = 0;
my $containsTitles = 0;
my $sortFields = 1;
my $sortComplexKeys = 1;
my @primaryFieldIndexes;
GetOptions(
"field-separator|fs=s" => \$fieldSeparator,
"sort-fields|sf!" => \$sortFields,
"complex-key-separator|ks=s" => \$complexKeySeparator,
"sort-complex-keys|sk!" => \$sortComplexKeys,
"contains-titles|t!" => \$containsTitles,
"include-primary-field|i!" => \$includePrimaryField,
"primary-field-index|f=i#" => \#primaryFieldIndexes,
"help|?!" => sub { pod2usage(0) }
) or pod2usage(2);
pod2usage(0) if $#ARGV < 1;
push @primaryFieldIndexes, 0 if $#primaryFieldIndexes < 0;
my %primaryFieldIndexesHash;
for(my $i = 0; $i <= $#primaryFieldIndexes; $i++)
{
$primaryFieldIndexesHash{$primaryFieldIndexes[$i]} = 1; # key by the actual field index
}
print "fieldSeparator = $fieldSeparator\n";
print "complexKeySeparator = $complexKeySeparator \n";
print "includePrimaryField = $includePrimaryField\n";
print "containsTitles = $containsTitles\n";
print "primaryFieldIndexes = #primaryFieldIndexes\n";
print "sortFields = $sortFields\n";
print "sortComplexKeys = $sortComplexKeys\n";
my $fieldsCount = 0;
my %keys_hash = ();
my %files = ();
my %titles = ();
# Read columns into a memory
foreach my $argnum (0 .. ($#ARGV - 1))
{
# Find files with specified pattern
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
open INPUT_FILE, $inputPath or die $!;
my %lines;
my $lineNumber = -1;
while (my $line = <INPUT_FILE>)
{
next if $containsTitles && $lineNumber == 0;
# Don't use chomp line. It doesn't handle unix input files on windows and vice versa
$line =~ s/[\r\n]+$//g;
# Skip lines that don't have columns
next if $line !~ m/($fieldSeparator)/;
# Split fields and count them (store maximum number of columns in files for later use)
my @fields = split($fieldSeparator, $line);
$fieldsCount = $#fields+1 if $#fields+1 > $fieldsCount;
# Sort complex key
my @multipleKey;
for(my $i = 0; $i <= $#primaryFieldIndexes; $i++)
{
my @complexKey = split ($complexKeySeparator, $fields[$primaryFieldIndexes[$i]]);
@complexKey = sort(@complexKey) if $sortComplexKeys; # honour --no-sort-complex-keys
push @multipleKey, join($complexKeySeparator, @complexKey)
}
# sort multiple keys and create key string
@multipleKey = sort(@multipleKey) if $sortFields;
my $fullKey = join $fieldSeparator, @multipleKey;
$lines{$fullKey} = \@fields;
$keys_hash{$fullKey} = 1;
}
close INPUT_FILE;
$files{$inputPath} = \%lines;
}
}
# Open output file
my $outputPath = $ARGV[$#ARGV];
open OUTPUT_FILE, ">" . $outputPath or die $!;
my @keys = sort keys(%keys_hash);
# Leave blank places for key columns
for(my $pf = 0; $pf <= $#primaryFieldIndexes; $pf++)
{
print OUTPUT_FILE $fieldSeparator;
}
# Print column headers
foreach my $argnum (0 .. ($#ARGV - 1))
{
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
print OUTPUT_FILE $inputPath;
for(my $f = 0; $f < $fieldsCount - $#primaryFieldIndexes - 1; $f++)
{
print OUTPUT_FILE $fieldSeparator;
}
}
}
# Print merged columns
print OUTPUT_FILE "\n";
foreach my $key ( @keys )
{
print OUTPUT_FILE $key;
foreach my $argnum (0 .. ($#ARGV - 1))
{
my $filePattern = $ARGV[$argnum];
my @matchedFiles = < $filePattern >;
foreach my $inputPath (@matchedFiles)
{
my $lines = $files{$inputPath};
for(my $i = 0; $i < $fieldsCount; $i++)
{
next if exists $primaryFieldIndexesHash{$i} && !$includePrimaryField;
print OUTPUT_FILE $fieldSeparator;
print OUTPUT_FILE $lines->{$key}->[$i] if exists $lines->{$key}->[$i];
}
}
}
print OUTPUT_FILE "\n";
}
close OUTPUT_FILE;
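A typical invocation, based on the SYNOPSIS above (the file names here are just placeholders), would be something like:
perl ljoin.pl file1.txt file2.txt merged.txt
which joins the two inputs on their first column (the default -f 0) and writes the merged rows to merged.txt.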