PowerShell: write name of file containing string

I have the following two lines in my input file.
Sample input file (file name: file.txt):
String1 value 'string2'
..
..
..
Call string1
Desired output:
file.txt (i.e. the name of the file)
Basically I want the names of files that contain these two lines:
1) string1 value 'string2'
2) call string1
1) and 2) above are two different lines and there could be many lines in between.
P.S. a) I am searching for 'string2'; 'string1' could be any 8 characters, and I do not know 'string1'.
b) 'string2' will always be in single quotes (').
c) 'string2' will always be preceded by the word 'value'.
Thanks

Try this...
gc input.txt |?{$_ -match '.*value ''(.*)'''}|%{$matches[1]}

The script below goes through each file in the current directory and looks for files containing the two lines. Not sure if this meets your situation, but it is not difficult to customize it to your environment.
dir *.* | foreach {if (get-content $_| Select-String -pattern "^[A-Za-z]{8} value 'string2'","Call string1") {"$_"}}
The output (in my lab) is:
C:\Documents\ManualScripts\test.txt
test.txt is the file I created for testing this script. Its content is as follows:
abcdefgh value 'string2'
hello world
I love church
I love Jesus
call string1
Alibaba, sesame open the door!
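Note that Select-String with a list of patterns treats them as alternatives, so the if test above succeeds as soon as either line is found. A hedged variant that only prints a file name when both lines are present, and that captures the unknown 'string1' instead of matching it literally, could look like this (a sketch, assuming the 8 characters are word characters):
Get-ChildItem -File | ForEach-Object {
    $lines = Get-Content $_.FullName
    # first requirement: a "<8 chars> value 'string2'" line; capture the 8-character token
    $valueLine = $lines | Select-String -Pattern "^(\w{8}) value 'string2'" | Select-Object -First 1
    if ($valueLine) {
        $token = $valueLine.Matches[0].Groups[1].Value
        # second requirement: a "call <same 8 chars>" line somewhere else in the file
        if ($lines -match "(?i)^call $([regex]::Escape($token))") { $_.Name }
    }
}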
To @dazedandconfused: I believe your script returns "String1", which is part of the second line, instead of the file name he/she requested. Besides, your script doesn't reflect another one of his/her needs: "'string1' could be any 8 characters". Forgive me if I am wrong.

Related

How do I concatenate each line of 2 variables in bash?

I have 2 variables, NUMS and TITLES.
NUMS contains the string
1
2
3
TITLES contains the string
A
B
C
How do I get output that looks like:
1 A
2 B
3 C
paste -d' ' <(echo "$NUMS") <(echo "$TITLES")
Having multi-line strings in variables suggests that you are probably doing something wrong. But you can try
paste -d ' ' <(echo "$nums") - <<<"$titles"
The basic syntax of paste is to read two or more file names; you can use a command substitution to replace a file anywhere, and you can use a here string or other redirection to receive one of the "files" on standard input (where the file name is then conventionally replaced with the pseudo-file -).
The default column separator from paste is a tab; you can replace it with a space or some other character with the -d option.
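For example, with the sample values from the question (using lower-case names, per the note below), this prints exactly the desired output:
nums=$'1\n2\n3'
titles=$'A\nB\nC'
paste -d ' ' <(echo "$nums") - <<<"$titles"
# 1 A
# 2 B
# 3 C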
You should avoid upper case for your private variables; see also Correct Bash and shell script variable capitalization
Bash variables can contain even very long strings, but this is often clumsy and inefficient compared to reading straight from a file or pipeline.
Convert them to arrays, like this:
NUMS=($NUMS)
TITLES=($TITLES)
Then loop over the indices of either array, let's say NUMS, like this:
for i in ${!NUMS[*]}; {
    # and echo the desired output
    echo "${NUMS[$i]} ${TITLES[$i]}"
}
Awk alternative:
awk 'FNR==NR { map[FNR]=$0; next } { print map[FNR]" "$0 }' <(echo "$NUMS") <(echo "$TITLES")
For the first file/variable (FNR==NR), set up an array called map with the record number within the file (FNR) as the index and the line as the value. Then for the second file, print the corresponding entry in the array followed by the current line, separated by a space.

Log Parsing via Powershell - print all array elements after nth element

I'm parsing a log file that is space delimited for the first 7 elements and then a log message or sentence follows. I know just enough to get around in PS, and I'm learning more each day, so I'm not sure this is the best way to do this and apologies if I'm not leveraging a more efficient means that would be second nature to you. I'm using -split(' ')[n] to extract each field of the log file line by line. I'm able to extract the first parts fine as they are space-delimited, but I'm not sure how to get the rest of the elements up to the end of the line.
$logFile=Get-Content $logFilePath
$dateStamp=$logfile -split(' ')[0]
$timeStamp=$logfile -split(' ')[1]
$requestID=$logfile -split(' ')[3]
$binaryID=$logfile -split(' ')[4]
$logID=$logfile -split(' ')[5]
$action=$logfile -split(' ')[6]
$logMessage=$logfile -split(' ')[?]
This is not a CSV that I can import. I'm more familiar with string manipulation in bash so I am able to successfully replace spaces in the first 7 elements, and the end, with "," :
#!/bin/bash
inputFile="/cygdrive/c/Temp/logfile.log"
outputFile="/cygdrive/c/Temp/test_log.csv"
echo "\"DATE\",\"TIME\",\"HYPEN\",\"REQUESTID\",\"BINARY\",\"PROC_NUMBER\",\"MESSAGE\"" > $outputFile
while read -a line
do
    arrLength=${#line[@]}
    echo \"${line[0]}\",\"${line[1]}\",\"${line[2]}\",\"${line[3]}\",\"${line[4]}\",\"${line[5]}\",\"${line[@]:6:$arrLength}\"
done < $inputFile >> $outputFile
Can you help with either printing the array elements from position n to the end, or replacing the spaces appropriately in PS so that I have a CSV I can import? I'm just trying to avoid the two-step process of converting it in bash and then importing it in PS, but I'm still researching. I did find this post: Parsing Text file and placing contents into an Array Powershell
for importing the file assuming it's space-delimited; that works for the first 7 elements, but I'm not sure about everything after that.
Of course I welcome any other PS solutions such as one of those [something]::SOMETHING things I've seen by googling that might do all this much more seamlessly.
You can specify the maximum number of substrings into which the string is split, like this:
$splittedRow = $logfile.split(' ',8)
$dateStamp=$splittedRow[0]
$timeStamp=$splittedRow[1]
$requestID=$splittedRow[3]
$binaryID=$splittedRow[4]
$logID=$splittedRow[5]
$action=$splittedRow[6]
$logMessage=$splittedRow[7]
As an addition to Viktor Be's answer:
$data = "111 22222 333 4444444 5 6 77 888888 9999999 0" #this is the content of file below for testing purposes
#$data = get-content -path C:\temp\mytest.txt
foreach ($line in $data){
    $splitted = $line.split(' ',8)
    $line_output = ""
    for ($i = 0; $i -lt 7; $i++){
        $line_output += "$($splitted[$i]);"
    }
    $line_output += $splitted[7]
    $line_output | out-file "C:\temp\MyCsvThatPowershellCanRead.csv" -append
}
You should be able to iterate over each line in the logfile and get the information you need the way you are doing. However, it's also easy to grab the message field, which could contain any number of spaces, with a regular expression.
The following regex should work for you. Assuming $line is the current line you are on:
$line -match '(?<=(\S+\s+){6}).*'
$logMessage = $matches[0]
The way this expression works is that it looks for .* (which means any character, 0 or more times) that comes after 6 occurrences of non-whitespace characters followed by whitespace characters. The .* in this expression should match your log message.
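As a hedged sketch of how this idea could be applied to the whole file and exported as a CSV that Import-Csv can read (the output path and property names are illustrative, not from the question; the lookbehind count is bumped to 7 here because the question's own mapping puts the message after seven metadata fields, so use whichever count matches the real layout):
Get-Content $logFilePath | ForEach-Object {
    if ($_ -match '(?<=(\S+\s+){7}).*') {   # message = everything after the first 7 fields
        $fields = $_ -split ' '
        [pscustomobject]@{
            Date      = $fields[0]
            Time      = $fields[1]
            RequestID = $fields[3]
            BinaryID  = $fields[4]
            LogID     = $fields[5]
            Action    = $fields[6]
            Message   = $Matches[0]
        }
    }
} | Export-Csv 'C:\temp\parsed_log.csv' -NoTypeInformation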

How do I add a new column with a specific word to a file in linux?

I have a file with one column containing 2059 ID numbers.
I want to add a second column with the word 'pop1' for all the 2059 ID numbers.
The second column will just mean that the ID number belongs to population 1.
How can I do this in Linux using awk or sed?
The file currently has one column which looks like this
45958
480585
308494
I want it to look like:
45958 pop1
480585 pop1
308494 pop1
Maybe not the most elegant solution, and it doesn't use sed or awk, but I would do this:
while read -r line; do echo "$line pop1" >> newfile; done < test
This command appends to the file 'newfile', so be sure that it's empty or doesn't exist before executing the command.
Here is the resource I used, on reading a file line by line : https://www.cyberciti.biz/faq/unix-howto-read-line-by-line-from-file/
A Perl solution.
$ perl -lpi -e '$_ .= " pop1"' your-file-name
Command line options:
-l : remove newline from input and replace it on output
-p : put each line of input into $_ and print $_ at the end of each iteration
-i : in-place editing (overwrite the input file)
-e : run this code for each line of the input
The code ($_ .= " pop1") just appends your string to the input record.
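For completeness, since the question explicitly asks about awk or sed, hedged one-liners with either tool would be (assuming the input file is called ids.txt; that name is just a placeholder):
awk '{print $0, "pop1"}' ids.txt > newfile   # print each line followed by " pop1"
sed 's/$/ pop1/' ids.txt > newfile           # append " pop1" at the end ($) of every line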

Select wav files from a folder whose partial names are in a text file

I have 500 wave files in a folder ABC which are named like
F1001
F1002
F1003
...
F1100
F2001
F2002
...
F2100
M3001
M3002
...
M3100
M4001
M4002
...
M4100
all with extension .wav.
Also I have a text file which contains 3 digit numbers like
001
003
098
034 .... (200 in total).
I want to select wave files from the folder ABC whose names end with these 3 digits.
Expecting MATLAB or bash script solutions.
I read this:
Copy or move files to another directory based on partial names in a text file. But I don't know how to adapt it to my case.
For MATLAB:
1) Get all the file names in the folder using the function dir (or rdir).
2) Using a for loop, go through every filename and add the last 3 digits of each filename to an array (array A). You will need str2num() here.
3) Parse all the 3-digit numbers from the text file into an array (array B).
4) Using the function ismember(B, A), find which elements of B are contained in A.
5) Load the corresponding filenames (a sketch of these steps is shown below).
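A minimal MATLAB sketch of these five steps (assuming the 3-digit numbers sit one per line in a file called nameslist and matching files are copied to a folder called selected; both names are placeholders):
wavefiles = dir('*.wav');                    % 1) all wav files in the current folder
A = zeros(1, numel(wavefiles));
for k = 1:numel(wavefiles)
    name = wavefiles(k).name;                % e.g. 'F1001.wav'
    A(k) = str2num(name(end-6:end-4));       % 2) the last 3 digits before '.wav'
end
B = str2num(fileread('nameslist'));          % 3) every number from the text file
keep = ismember(A, B);                       % 4) which file names end in a listed number
for k = find(keep)
    copyfile(wavefiles(k).name, 'selected'); % 5) copy (or load) the matching files
end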
find . -name "*.wav" | grep -f <(awk '{print $0 ".wav"}' file)
grep -f will use the patterns stored in file, one per line, and look for them in your find result. But you want the three numbers to be at the end, so in the above command the awk statement provides a modified pattern list with ".wav" appended to each line. So for the line 001, "0001.wav" will match but a file like 0010.wav will not.
see: process substitution syntax and grep --help
function wavelist()
    wavefiles = dir('*.wav'); % load the list of wave files
    myfolder = '/home/adspr/Desktop/exps_sree/waves/selectedfiles'; % output folder to store files
    for i = 1:numel(wavefiles) % for each wave file
        filename = wavefiles(i).name;
        [~,name,~] = fileparts(filename); % name of the file without extension
        a = name(end-2:end); % the last 3 digits of the file name
        fileID = fopen('nameslist','r');
        while ~feof(fileID)
            b = fgetl(fileID); % get each line of the list file
            if strcmp(a,b) % compare
                movefile(filename, myfolder); % move to the output folder
            end
        end
        fclose(fileID);
    end
end
I don't think there is a simple answer; that's why I asked here. Anyway, my problem is solved, and that's why I posted this as an answer.
Thank you all.

Linux command to split each line in a file based on a character and write only the specified columns to another file

Suppose the input file file.txt is
abc/def/ghi/jkl/mno
pqr/st/u/vwxy/z
bla/123/45678/9
How can I split the lines based on the character '/' and write the specified columns (here the second and fourth) to another file, so that the file looks like:
def jkl
st vwxy
123 9
You can use perl, for example:
cat file.txt | perl -ne 'chomp(@cols = split("/", $_)); print "@cols[1, 3]\n";' > output
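An awk alternative is a bit shorter (a sketch; -F'/' sets the field separator and $2/$4 pick the second and fourth columns):
awk -F'/' '{print $2, $4}' file.txt > output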
