Unable to trim the last line of a Unix file - linux
I am trying to create a Unix file with some text in it using the command below:

ssh.sendLine("echo '"+"{1:F01ZYIBGB20AXXX0000000000}{2:O5481710NDEASES0XXXX12345678901511041511180930N}\n{4:\n:16R:GENL\n:20C::SEME//"+$TradeRef+"\n:23G:INST\n:16R:LINK\n:20C::RELA//"+$TradeRef+"\n:16S:LINK\n:16R:STAT\n:25D::MTCH//MACH\n:16S:STAT\n:16S:GENL\n:16R:SETTRAN\n:35B:ISIN DE0005933931\niShares Core DAX UCITS ETF DE\n:36B::SETT//UNIT/10,\n:97A::SAFE//8696\n:22F::SETR//TRAD\n:98A::SETT//20151118\n:98A::TRAD//20151118\n:16S:SETTRAN\n-}'"+">M548File.txt");

This command creates the file M548File.txt. When I cat it, this is what I get:
{1:F01ZYIBGB20AXXX0000000000}{2:O5481710NDEASES0XXXX12345678901511041511180930N}
{4:
:16R:GENL
:20C::SEME//11111111111111111111
:23G:INST
:16R:LINK
:20C::RELA//11111111111111111111
:16S:LINK
:16R:STAT
:25D::MTCH//MACH
:16S:STAT
:16S:GENL
:16R:SETTRAN
:35B:ISIN DE0005933931
iShares Core DAX UCITS ETF DE
:36B::SETT//UNIT/10,
:97A::SAFE//8696
:22F::SETR//TRAD
:98A::SETT//20151118
:98A::TRAD//20151118
:16S:SETTRAN
-}
However, when I open the same file in Notepad, I get one extra line at the end, which is basically an empty line, making a total of 23 lines compared to 22 in cat.
I tried sed commands, but they are just not working.
Any idea how to overcome this and get 22 lines in Notepad (same as cat)?
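The extra line in Notepad comes from the trailing newline that echo always appends: cat does not render a final newline as an extra line, but Notepad does. A minimal sketch of one fix, stripping the last byte after writing the file (this assumes GNU coreutils for truncate, and uses a short two-line stand-in built with printf instead of the full SWIFT message):

```shell
# Write a two-line stand-in file; like echo, this ends with a newline.
printf 'line1\nline2\n' > M548File.txt
# Drop the final newline byte so Notepad no longer shows an empty last line.
truncate -s -1 M548File.txt
wc -c < M548File.txt   # 11 bytes: "line1\nline2" with no trailing newline
```

Alternatively, writing with printf '%s' (no trailing \n) avoids the problem in the first place, though note that many Unix tools expect text files to end with a newline.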
SED style Multi address in Python?
I have an app that parses multiple Cisco show tech files. These files contain the output of multiple router commands in a structured way. Let me show you a snippet of a show tech output:

```
`show clock`
20:20:50.771 UTC Wed Sep 07 2022
Time source is NTP

`show callhome`
callhome disabled

Callhome Information:
<SNIPET>

`show module`
Mod Ports Module-Type                           Model                 Status
--- ----- ------------------------------------- --------------------- ---------
1   52    16x10G + 32x10/25G + 4x100G Module    N9K-X96136YC-R        ok
2   52    16x10G + 32x10/25G + 4x100G Module    N9K-X96136YC-R        ok
3   52    16x10G + 32x10/25G + 4x100G Module    N9K-X96136YC-R        ok
4   52    16x10G + 32x10/25G + 4x100G Module    N9K-X96136YC-R        ok
21  0     Fabric Module                         N9K-C9504-FM-R        ok
22  0     Fabric Module                         N9K-C9504-FM-R        ok
23  0     Fabric Module                         N9K-C9504-FM-R        ok
<SNIPET>
```

My app currently uses both sed and Python scripts to parse these files. I use sed to look for a specific command's output in the show tech file; once I find it, I stop sed. This way I don't need to read the whole file (these can get to be very big files). This is a snippet of my sed script:

```shell
sed -E -n '/`show running-config`|`show running`|`show running config`/{
p
:loop
n
p
/`show/q
b loop
}' $1/$file
```

As you can see, I am using a multi-address range in sed. My question specifically is: how can I achieve something similar in Python? I have tried multiple combinations of the DOTALL and MULTILINE flags, but I can't get the result I'm expecting. For example, I can get a match for the command I'm looking for, but the Python regex won't stop until the end of the file after the first match. I am looking for something like this:

```shell
sed -n '/`show clock`/,/`show/p'
```

I would like the regex match to stop parsing the file and print the results immediately after seeing `show again. Hope that makes sense; thank you all for reading and for your help.
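For reference, the range form in the last sed line above can be demonstrated on a tiny made-up sample (my own input, not from the post). The end address is only tested from the line after the start match, so the `show clock` header itself is printed and the range closes on the next `show line:

```shell
cat > showtech.txt <<'EOF'
`show clock`
20:20:50.771 UTC Wed Sep 07 2022
`show callhome`
callhome disabled
EOF
# Prints the three lines from `show clock` through the next `show header.
sed -n '/`show clock`/,/`show/p' showtech.txt
```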
You can use nested loops.

```python
import re

def process_file(filename):
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                print(line)
                # Continuing iteration over f picks up where the outer loop left off.
                for line1 in f:
                    print(line1)
                    if re.search(r'`show', line1):
                        return
```

The inner for loop will start from the next line after the one processed by the outer loop. You can also do it with a single loop using a flag variable.

```python
import re

def process_file(filename):
    in_show = False
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                in_show = True
                print(line)
                continue  # skip the end-check on the header line itself
            if in_show:
                print(line)
                if re.search(r'`show', line):
                    return
```

(The original snippet was missing a colon after `if in_show`, tested `line1` instead of `line`, and would have returned on the opening header line; those are fixed above.)
How do I fix USER FATAL MESSAGE 740?
This error is generated by Nastran when I try to run a BDF/DAT file of mine.

```
*** USER FATAL MESSAGE 740 (RDASGN)
UNIT NUMBER 5 HAS ALREADY BEEN ASSIGNED TO THE LOGICAL NAME INPUT
USER ACTION: CHANGE THE UNIT NUMBER ON THE ASSIGN STATEMENT AND IF THE UNIT IS
USED FOR PARAM,POST,<0 THEN SPECIFY PARAM,OUNIT2 WITH THE NEW UNIT NUMBER.
AVOID USING THE FOLLOWING UNIT NUMBERS THAT ARE ASSIGNED TO SPECIAL FILES IN
MSC.NASTRAN: 1 THRU 12, 14 THRU 22, 40, 50, 51, 91, 92.
SEE THE MSC.NASTRAN INSTALLATIONS/OPERATIONS GUIDE SECTION ON MAKING FILE
ASSIGNMENTS OR MSC.NASTRAN QUICK REFERENCE GUIDE ON ASSIGN PHYSICAL FILE
FOR REFERENCE.
```

Below is the head of my BDF file:

```
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
SOL 200
CEND
ECHO = NONE
DESOBJ(MIN) = 35
set 30=1008,1007,1015,1016
DESMOD=SUB1_PLATE
SUBCASE 1
$! Subcase name : DefaultLoadCase
$LBCSET SUBCASE1 DefaultLbcSet
ANALYSIS = STATICS
SPC = 1
LOAD = 6
DESSUB = 99
DISPLACEMENT(SORT1,PLOT,REAL)=ALL
STRESS(SORT1,PLOT,VONMISES,CORNER)=ALL
BEGIN BULK
param,xyunit,52
[...]
ENDDATA
```
Below is the solution.

Correct:

```
assign userfile='SUB1_PLAT.csv', status=UNKNOWN, form=formatted, unit=52
```

I shortened the name of the CSV file to SUB1_PLAT.csv. This reduced the length of the line to 72 characters.

Incorrect:

```
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
```

The file management section is limited to 72 characters per line, spaces included. The incorrect line stretches to 73 characters. The Nastran reader ignores the 73rd character onward, so instead of reading "unit=52" it reads "unit=5", which triggers the error.

```
|<--------------------- 72 Characters -------------------------------->||<- ignored
assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52
```

References: MSC Nastran Reference Guide:

"The records of the first four sections are input in free-field format and only columns 1 through 72 are used for data. Any information in columns 73 through 80 may appear in the printed echo, but will not be used by the program. If the last character in a record is a comma, then the record is continued to the next record."
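As a quick sanity check (my own sketch, not part of the original answer), an awk one-liner can flag File Management Section lines that exceed the 72-character limit before Nastran silently truncates them:

```shell
# One-line sample: the 'Incorrect' assign statement, which is 73 characters.
printf "%s\n" "assign userfile='SUB1_PLATE.csv', status=UNKNOWN, form=formatted, unit=52" > head.bdf
# Report line number and length for every line longer than 72 characters.
awk 'length($0) > 72 {print FNR": "length($0)" chars"}' head.bdf
```

Running this on the sample prints `1: 73 chars`; pointing it at your full .bdf/.dat file would list every over-long line at once.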
Entering text in a file at specific locations by identifying whether a number is integer or real in Linux
I have an input like below:

```
46742 1 48276 48343 48199 48198
46744 1 48343 48344 48200 48199
46746 1 48344 48332 48201 48200
48283 3.58077402e+01 -2.97697746e+00 1.50878647e+02
48282 3.67231688e+01 -2.97771595e+00 1.50419488e+02
48285 3.58558188e+01 -1.98122787e+00 1.50894850e+02
```

Each segment where the 2nd entry is an integer like 1 runs for thousands of lines, and then starts the segment where the 2nd entry is a real number like 3.58077402e+01. Before anything begins I have to input text like *Revolved, so the desired output is:

```
*Revolved
*Gripped
*Crippled
46742 1 48276 48343 48199 48198
46744 1 48343 48344 48200 48199
46746 1 48344 48332 48201 48200
*Cracked
*Crippled
48283 3.58077402e+01 -2.97697746e+00 1.50878647e+02
48282 3.67231688e+01 -2.97771595e+00 1.50419488e+02
48285 3.58558188e+01 -1.98122787e+00 1.50894850e+02
```

So I need to enter specific text at those locations. It is worth mentioning that the file is space delimited (not tab delimited) and that the text starting with * has to be at the very left of the line without leading spaces. The format of the rest of the file should be kept too. Any suggestions with sed or awk would be highly appreciated! The text at the beginning could be entered directly, since that is the start of the file; the problematic part is the second bunch of lines, i.e. detecting that the second entry has turned into a real number.
An awk with fixed strings:

```shell
awk 'BEGIN{print "*Revolved\n*Gripped\n*Crippled"}
     match($2,/\+/) && !pr {print "*Cracked\n*Crippled"; pr=1}
     1' yourfile
```

`match($2,/\+/) && !pr`: when a + character is found in field $2 (a real number in scientific notation) and the pr flag is not yet set, print the *Cracked/*Crippled header once and set the flag. The trailing 1 prints every input line unchanged. (The original used the string constant `"\+"`, where `\+` is an undefined escape that many awks warn about; the regex constant `/\+/` is safer.)
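A variation (my own sketch, not from the original answer): matching the whole field against a scientific-notation pattern is a bit more robust than looking for a bare +, since it would also catch values with negative exponents such as 1.5e-02:

```shell
# Two-line sample of the input: one integer segment line, one real segment line.
cat > input.txt <<'EOF'
46742 1 48276 48343 48199 48198
48283 3.58077402e+01 -2.97697746e+00 1.50878647e+02
EOF
# $2 ~ /.../ tests the full second field: optional sign, mantissa, exponent.
awk 'BEGIN{print "*Revolved\n*Gripped\n*Crippled"}
     $2 ~ /^-?[0-9.]+[eE][+-]?[0-9]+$/ && !pr {print "*Cracked\n*Crippled"; pr=1}
     1' input.txt
```

On this sample the headers are printed first, then the integer line, then *Cracked/*Crippled just before the first real-number line.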
Why is read.clipboard() not working in R
Why is read.clipboard() not working on my system?

```
> library(psych)
> read.table(text=read.clipboard(), sep="\t", header=T, stringsAsFactors=F, strip.white=T)
Error in textConnection(text, encoding = "UTF-8") : invalid 'text' argument
In addition: Warning message:
In read.table(file("clipboard"), header = TRUE, ...) :
  incomplete final line found by readTableHeader on 'clipboard'
> read.table(text=readClipboard(), sep="\t", header=T, stringsAsFactors=F, strip.white=T)
Error in textConnection(text, encoding = "UTF-8") : could not find function "readClipboard"
```

The version information:

```
> packageVersion('psych')
[1] '1.4.8.11'
> R.version
               _
platform       i486-pc-linux-gnu
arch           i486
os             linux-gnu
system         i486, linux-gnu
status
major          3
minor          1.1
year           2014
month          07
day            10
svn rev        66115
language       R
version.string R version 3.1.1 (2014-07-10)
nickname       Sock it to Me
>
```

EDIT: As suggested by @RichardScriven, I used the following:

```
read.table('clipboard', sep="\t", header=T)
```

If I copy some cells in a spreadsheet and try the above command, it does not work. This is the error:

```
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : clipboard cannot be opened or contains no text
```

But if I paste first into a text editor, and copy the data again from there, then the above command works well. How can I directly use the data after copying from a spreadsheet? The following command shows the same problem: it works when the data is copied from a text editor but not when copied from a spreadsheet, and it produces the same error:

```
> read.clipboard(sep="\t", header=T)
Error in open.connection(file, "rt") : cannot open the connection
In addition: Warning message:
In open.connection(file, "rt") :
  clipboard cannot be opened or contains no text
```
```
read.clipboard(sep="\t", header=T)
```

The above code should work. Also please note that you need to copy (from Excel, LibreOffice, etc.), switch to your R session, and run the command straight away. If you perform any other copy or paste operations in between, this might not work. Hope this helps.
Add a number to each line of a file in bash
I have some files in Linux with lines like:

```
2013/08/16,name1,,5000,8761,09:00,09:30
2013/08/16,name1,,5000,9763,10:00,10:30
2013/08/16,name1,,5000,8866,11:00,11:30
2013/08/16,name1,,5000,5768,12:00,12:30
2013/08/16,name1,,5000,11764,13:00,13:30
2013/08/16,name2,,5000,2765,14:00,14:30
2013/08/16,name2,,5000,4765,15:00,15:30
2013/08/16,name2,,5000,6765,16:00,16:30
2013/08/16,name2,,5000,12765,17:00,17:30
2013/08/16,name2,,5000,25665,18:00,18:30
2013/08/16,name2,,5000,45765,09:00,10:30
2013/08/17,name1,,5000,33765,10:00,11:30
2013/08/17,name1,,5000,1765,11:00,12:30
2013/08/17,name1,,5000,34765,12:00,13:30
2013/08/17,name1,,5000,12765,13:00,14:30
2013/08/17,name2,,5000,1765,14:00,15:30
2013/08/17,name2,,5000,3765,15:00,16:30
2013/08/17,name2,,5000,7765,16:00,17:30
```

My column separator is "," and in the third column (currently empty: ,,) I need the entry number within the same day. For example, date 2013/08/16 has 11 lines and date 2013/08/17 has 7 lines, so I need to add the numbers like this:

```
2013/08/16,name1,1,5000,8761,09:00,09:30
2013/08/16,name1,2,5000,9763,10:00,10:30
2013/08/16,name1,3,5000,8866,11:00,11:30
2013/08/16,name1,4,5000,5768,12:00,12:30
2013/08/16,name1,5,5000,11764,13:00,13:30
2013/08/16,name2,6,5000,2765,14:00,14:30
2013/08/16,name2,7,5000,4765,15:00,15:30
2013/08/16,name2,8,5000,6765,16:00,16:30
2013/08/16,name2,9,5000,12765,17:00,17:30
2013/08/16,name2,10,5000,25665,18:00,18:30
2013/08/16,name2,11,5000,45765,09:00,10:30
2013/08/17,name1,1,5000,33765,10:00,11:30
2013/08/17,name1,2,5000,1765,11:00,12:30
2013/08/17,name1,3,5000,34765,12:00,13:30
2013/08/17,name1,4,5000,12765,13:00,14:30
2013/08/17,name2,5,5000,1765,14:00,15:30
2013/08/17,name2,6,5000,3765,15:00,16:30
2013/08/17,name2,7,5000,7765,16:00,17:30
```

I need to do it in bash. How can I do it?
This one's good too:

```shell
awk -F, 'sub(/,,/, ","++a[$1]",")1' file
```

The sub() replaces the first ,, on each line with a comma, a counter keyed on field 1 (the date, incremented per occurrence), and another comma; the trailing 1 is an always-true condition whose default action prints the modified line.

Output:

```
2013/08/16,name1,1,5000,8761,09:00,09:30
2013/08/16,name1,2,5000,9763,10:00,10:30
2013/08/16,name1,3,5000,8866,11:00,11:30
2013/08/16,name1,4,5000,5768,12:00,12:30
2013/08/16,name1,5,5000,11764,13:00,13:30
2013/08/16,name2,6,5000,2765,14:00,14:30
2013/08/16,name2,7,5000,4765,15:00,15:30
2013/08/16,name2,8,5000,6765,16:00,16:30
2013/08/16,name2,9,5000,12765,17:00,17:30
2013/08/16,name2,10,5000,25665,18:00,18:30
2013/08/16,name2,11,5000,45765,09:00,10:30
2013/08/17,name1,1,5000,33765,10:00,11:30
2013/08/17,name1,2,5000,1765,11:00,12:30
2013/08/17,name1,3,5000,34765,12:00,13:30
2013/08/17,name1,4,5000,12765,13:00,14:30
2013/08/17,name2,5,5000,1765,14:00,15:30
2013/08/17,name2,6,5000,3765,15:00,16:30
2013/08/17,name2,7,5000,7765,16:00,17:30
```
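Since the question asks for bash specifically, here is a plain-shell sketch of the same idea (my own variant, not from the answer above; it assumes the file is already grouped by date, as the sample is, and it will be much slower than awk on files with thousands of lines):

```shell
# Build a small sample of the input (truncated from the question).
cat > file <<'EOF'
2013/08/16,name1,,5000,8761,09:00,09:30
2013/08/16,name1,,5000,9763,10:00,10:30
2013/08/17,name1,,5000,33765,10:00,11:30
EOF

# Reset the counter whenever the date (first field) changes, then
# rebuild each line with the counter as the third field.
n=0; prev=
while IFS=, read -r date name _empty rest; do
    [ "$date" = "$prev" ] || n=0
    prev=$date
    n=$((n+1))
    printf '%s,%s,%s,%s\n' "$date" "$name" "$n" "$rest"
done < file > file.numbered

cat file.numbered
```

The `_empty` variable swallows the empty third field, and `rest` keeps all the remaining comma-separated fields intact because read assigns the leftover text to the last variable.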