Python3 - Remove last (n) lines from file and append new data [duplicate] - linux

This question already has answers here:
Delete final line in file with python
(10 answers)
Closed 3 years ago.
I have a little Python script that I'm working on that will add the local weather conditions and some network information to my .bashrc file. Everything works as expected except for two bugs: first, instead of removing the old data and appending the new, it just appends the new data like this:
('echo [Local weather]:', 66.9, 'F', 'with', 'overcast clouds')
('echo [Your public IP is]:', 'x.x.x.x'('echo [Local weather]:', 66.9, 'F', 'with', 'overcast clouds')
('echo [Your public IP is]:', 'x.x.x.x')
and second, I need to drop the formatting from the printed text (parentheses, commas, etc.) so that the string appears as:
echo [Local weather]: 66.9F with overcast clouds
echo [Public IP]: x.x.x.x
Here is the file operations portion of my script:
with open('HOME/.bashrc', 'a') as f:
    w = "echo [Local weather]:", wx_t,"F", "with", wx_c
    i = "echo [Your public IP is]:", ip
    out = [str(w), str(i)]
    f.write('\n'.join(out)[0:-3])
So I thought that f.write('\n'.join(out)[0:-3]) would remove the last 3 lines of the file, but apparently it drops the last 3 characters of the string. What do I need to change to achieve what I am attempting? Should I be using f.writelines() instead of f.write()?
The expected result will eventually look like this:
Welcome to [hostname] You are logged in as user [some_user]
[Local time]: Mon Mar 20 08:28:32 CDT 2017.
[Local weather]: 66.56 F with clear sky
[Local IP]: 192.168.x.x [Public IP]: x.x.x.x
Thanks in advance, and if this is a duplicate question I apologize; I feel I've done my due diligence in searching for a solution but have been unsuccessful.
UPDATE:
So I fixed the formatting by changing
w = "echo [Local weather]:", wx_t,"F", "with", wx_c
to
w = "echo [Local weather]: " + str(wx_t) + " F, with " + wx_c
and using f.writelines() instead of f.write(). I also removed f.close(), as suggested by DeepSpace.

The first bug is caused by using 'a' instead of 'w' in the first line. This causes Python to append to the file rather than overwrite it.
The second bug is caused by joining all your data into a string and taking [0:-3] of the joined string, rather than [0:-3] of the list out.
Your code, fixed, should look like this:
with open('HOME/.bashrc', 'w') as f:
    w = "echo [Local weather]:", wx_t,"F", "with", wx_c
    i = "echo [Your public IP is]:", ip
    out = [str(w), str(i)]
    f.write('\n'.join(out[0:-3]))
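If the goal is the behaviour the title asks for (drop the previously written lines, then append the new ones), here is a minimal sketch; the weather and IP values are made-up stand-ins for the script's real variables, and it assumes the generated lines always start with the same echo prefixes:
import os

# Made-up stand-ins for the script's real wx_t, wx_c and ip variables.
wx_t, wx_c, ip = 66.9, "overcast clouds", "x.x.x.x"

new_lines = [
    "echo [Local weather]: " + str(wx_t) + " F with " + wx_c + "\n",
    "echo [Your public IP is]: " + ip + "\n",
]

bashrc = os.path.expanduser("~/.bashrc")

# Drop any previously generated lines instead of blindly trimming a fixed
# count, so the first run does not remove unrelated .bashrc lines.
markers = ("echo [Local weather]:", "echo [Your public IP is]:")
with open(bashrc) as f:
    kept = [line for line in f if not line.startswith(markers)]

with open(bashrc, "w") as f:
    f.writelines(kept + new_lines)
Opening with 'w' only after reading the old contents is what lets the script replace its own lines while leaving the rest of .bashrc untouched.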

Related

From SSH not decoded from bytes to ASCII?

Good afternoon.
I get the example below from SSH:
b"rxmop:moty=rxotg;\x1b[61C\r\nRADIO X-CEIVER ADMINISTRATION\x1b[50C\r\nMANAGED OBJECT DATA\x1b[60C\r\n\x1b[79C\r\nMO\x1b[9;19HRSITE\x1b[9;55HCOMB FHOP MODEL\x1b[8C\r\nRXOTG-58\x1b[10;19H54045_1800\x1b[10;55HHYB"
I process it with ssh.recv(99999).decode('ascii'), but some characters are not decoded, for example:
\x1b[61C
\x1b[50C
\x1b[9;55H
\x1b[9;19H
The question below explains that these are ANSI escape codes that appear because I use invoke_shell. Everything worked previously, until it was moved to another server.
Is there a simple way to get rid of junk values that come when you SSH using Python's Paramiko library and fetch output from CLI of a remote machine?
When I write to the file, I also get:
rxmop:moty=rxotg;[61C
RADIO X-CEIVER ADMINISTRATION[50C
MANAGED OBJECT DATA[60C
[79C
MO[9;19HRSITE[9;55HCOMB FHOP MODEL[8C
RXOTG-58[10;19H54045_1800[10;55HHYB
If you use PuTTY everything is clear and beautiful.
I can't get away from invoke_shell because the connection is being thrown from one server to another.
Sample code below:
# coding:ascii
import time

import paramiko

# host, user and secret are defined elsewhere in the original script
port = 22
data = ""

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=host, username=user, password=secret, port=port, timeout=10)

ssh = client.invoke_shell()
ssh.send("rxmop:moty=rxotg;\n")

while data.find("<") == -1:
    time.sleep(0.1)
    data += ssh.recv(99999).decode('ascii')

ssh.close()
client.close()

f = open('text.txt', 'w')
f.write(data)
f.close()
The normal output is below:
MO RSITE COMB FHOP MODEL
RXOTG-58 54045_1800 HYB BB G12
SWVERREPL SWVERDLD SWVERACT TMODE
B1314R081D TDM
CONFMD CONFACT TRACO ABISALLOC CLUSTERID SCGR
NODEL 4 POOL FLEXIBLE
DAMRCR CLTGINST CCCHCMD SWVERCHG
NORMAL UNLOCKED
PTA JBSDL PAL JBPTA
TGFID SIGDEL BSSWANTED PACKALG
H'0001-19B3 NORMAL
What can you recommend in order to return normal output, so that all characters are processed?
Regular expressions do not help, since the record structure gets shifted and the code then selects characters from specific positions.
P.S. Trying ssh.invoke_shell(term='xterm') did not work.
There is an answer here:
How can I remove the ANSI escape sequences from a string in python
There are other ways...
https://unix.stackexchange.com/questions/14684/removing-control-chars-including-console-codes-colours-from-script-output
Essentially, you are 'screen-scraping' input, and you need to strip the ANSI codes. So, grab the input, and then strip the codes.
import re
... (your ssh connection here)
data = ""
while data.find("<") == -1:
    time.sleep(0.1)
    chunk = ssh.recv(99999)
    data += chunk.decode('ascii')   # decode the bytes before concatenating
... (your ssh connection cleanup here)
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
data = ansi_escape.sub('', data)
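As a quick sanity check of that pattern, here is a small self-contained sketch run against a shortened copy of the bytes shown in the question:
import re

# Shortened copy of the raw bytes from the question.
raw = (b"rxmop:moty=rxotg;\x1b[61C\r\nRADIO X-CEIVER ADMINISTRATION\x1b[50C\r\n"
       b"MANAGED OBJECT DATA\x1b[60C\r\n")

ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

clean = ansi_escape.sub('', raw.decode('ascii'))
print(clean)
# rxmop:moty=rxotg;
# RADIO X-CEIVER ADMINISTRATION
# MANAGED OBJECT DATA
# (only the escape sequences are removed; the \r\n line endings stay)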

Edit multiple text files in directory [duplicate]

This question already has answers here:
Editing/Replacing content in multiple files in Unix AIX without opening it
(2 answers)
Closed 4 years ago.
I'm working with bash in an Ubuntu terminal. I want to make the same text edit to all the files in my directory that have the same extension. My directory contains several versions of 227 numerically numbered data files. So for example I have:
tmp0001.ctl
tmp0001.out
tmp0001.trees
tmp0001.txt
tmp0002.ctl
tmp0002.out
tmp0002.trees
tmp0002.txt
And so on.
The files I am interested in editing are those with the extension ".ctl". At present, .ctl files look like this (though the numbers vary from tmp0001 through 227 of course):
seqfile = tmp0001.txt
treefile = tmp0001.trees
outfile = tmp0001.out
noisy = 3
seqtype = 2
model = 0
aaRatefile =
Small_Diff = 0.1e-6
getSE = 2
method = 1
I want to edit the .ctl files so that they read:
seqfile = tmp0001.txt
treefile = tmp0001.trees
outfile = tmp0001.out
noisy = 3
seqtype = 2
model = 2
aaRatefile = lg.dat
fix_alpha = 0
alpha = .5
ncatG = 4
Small_Diff = 0.1e-6
getSE = 2
method = 1
I'm not sure how to do this, though. I would guess I should use an editor like nano or sed, but I'm not sure how to automate it. I thought of something along the lines of:
for file in *.ctl
do
nano
???
Hope this isn't too convoluted! Essentially I want to change two lines already in there (model and aaRatefile), and add 2 more lines (fix_alpha = 0 and alpha = .5) in each .ctl file.
Thanks!
You can use sed to perform the task:
sed -e 's/model = 0/model = 2/; s/aaRatefile = /aaRatefile = lg.dat/' \
    -e '/Small_Diff/ i fix_alpha = 0\nalpha = .5\nncatG = 4' \
    -i~ *.ctl
s/pattern/replacement/ replaces "pattern" by "replacement"
i text inserts the text
if a command is preceded by /pattern/, the command is run only when the current line matches the pattern, i.e. in this case, the lines are inserted before the Small_Diff line.
-i~ tells sed to replace the files "in place", leaving a backup with the ~ appended to the name (so there will be backups named tmp0001.ctl~ etc.)
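If you would rather script the same change without sed, here is a rough Python sketch, under the assumption that the .ctl lines look exactly as shown in the question (note it does not keep backups the way sed's -i~ does):
import glob

# Lines to insert before the Small_Diff line, matching the sed 'i' command above.
new_lines = ["fix_alpha = 0\n", "alpha = .5\n", "ncatG = 4\n"]

for path in glob.glob("*.ctl"):
    edited = []
    with open(path) as f:
        for line in f:
            if line.lstrip().startswith("Small_Diff"):
                edited.extend(new_lines)
            line = line.replace("model = 0", "model = 2")
            line = line.replace("aaRatefile =", "aaRatefile = lg.dat")
            edited.append(line)
    with open(path, "w") as f:
        f.writelines(edited)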

split() on one character OR another

Python 3.6.0
I have a program that parses output from Cisco switches and routers.
I get to a point in the program where I am returning output from the 'sh ip int brief'
command.
I place it in a list so I can split on the '>' character and extract the hostname.
It works perfectly. Pertinent code snippet:
ssh_channel.send("show ip int brief | exc down" + "\n")
# ssh_channel.send("show ip int brief" + "\n")
time.sleep(0.6)
outp = ssh_channel.recv(5000)
mystring = outp.decode("utf-8")
ipbrieflist = mystring.splitlines()
hostnamelist = ipbrieflist[1].split('>')
hostname = hostnamelist[0]
If the router is in 'enable' mode the command prompt has a '#' character after the hostname.
If I change my program to split on the '#' character:
hostnamelist = ipbrieflist[1].split('#')
it still works perfectly.
I need for the program to handle if the output has the '>' character OR the '#' character in 'ipbrieflist'.
I have found several valid references for how to handle this. Ex:
import re
text = 'The quick brown\nfox jumps*over the lazy dog.'
print(re.split('; |, |\*|\n',text))
The above code works perfectly.
However, when I modify my code as follows:
hostnamelist = ipbrieflist[1].split('> |#')
It does not work. By 'does not work' I mean it does not split on either character. No splitting at all.
The following debug is from PyCharm:
ipbrieflist = mystring.splitlines() ipbrieflist={list}: ['terminal length 0', 'rtr-1841>show ip int brief | exc down', 'Interface IP-Address OK? Method Status Protocol', 'FastEthernet0/1 192.168.1.204 YES NVRAM up up ', 'Loopback0 172.17.0.1 YES NVRAM up up ', '', 'rtr-1841>']
hostnamelist = ipbrieflist[1].split('> |#') hostnamelist={list}: ['rtr-1841>show ip int brief | exc down']
hostname = {str}'rtr-1841>show ip int brief | exc down'
As you can see, the hostname variable still has 'show ip int brief | exc down' appended to it.
I get the same exact behavior if the hostname is followed by the '#' character.
What am I doing wrong?
Thanks.
Instead of this:
ipbrieflist[1].split('> |#')
You want this:
re.split('>|#', ipbrieflist[1])
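The difference is that str.split() treats its argument as a literal separator string, so '> |#' never matches anything, while re.split() interprets it as a regular expression. A quick check with made-up prompt lines standing in for ipbrieflist[1]:
import re

# Hypothetical prompt lines, one in user mode ('>') and one in enable mode ('#').
for line in ['rtr-1841>show ip int brief | exc down',
             'rtr-1841#show ip int brief | exc down']:
    hostname = re.split('>|#', line)[0]
    print(hostname)
# Both iterations print: rtr-1841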

Why doesn't this String→List→Map conversion work in Groovy

I have input data of the form
abc 12d
uy 76d
ce 12a
with the lines being separated by \n and the values by \t.
The data comes from a shell command:
brlist = 'mycommand'.execute().text
Then I want to get this into a map:
brmap = brlist.split("\n").collectEntries {
    tkns = it.tokenize("\t")
    [ (tkns[0]): tkns[1] ]
}
I also tried
brmap = brlist.split("\n").collectEntries {
    it.tokenize("\t").with { [ (it[0]): it[1] ] }
}
Both ways gave the same result, which is a map with a single entry:
brmap.toString()
# prints "[abc:12d]"
Why does only the first line of the input data end up being in the map?
Your code works, which means the input String brlist isn't what you say it is...
Are you sure that's what you have? Try printing brlist, and then it inside collectEntries
As an aside, this does the same thing as your code:
brlist.split('\n')*.split('\t')*.toList().collectEntries()
Or you could try this (in case it's spaces, not tabs; this will handle both):
brlist.split('\n')*.split(/\s+/)*.toList().collectEntries()
This code works
// I use 4 spaces as tab.
def text = 'sh abc.sh'.execute().text.replaceAll(" " * 4, "\t")
brmap = text.split("\n").collectEntries {
    tkns = it.tokenize("\t")
    [(tkns[0]) : tkns[1]]
}
assert [abc: "12d", uy: "76d", ce: "12a"] == brmap
abc.sh
#!/bin/sh
echo "abc    12d"
echo "uy    76d"
echo "ce    12a"
Also, I think your Groovy code is correct; maybe your mycommand has some problem.
OK, thanks for the hints. It turns out to be a bug in Jenkins: https://issues.jenkins-ci.org/browse/JENKINS-26481.
And it has been mentioned here before: Groovy .each only iterates one time

Puppet iteration string/array

Can you think of a way to solve this problem in Puppet?
I have a custom fact which generates a string of IP addresses depending on the domain it is run on; it can resolve to have 1 to n addresses:
"10.1.29.1"
"10.1.29.1,10.1.29.5"
"10.1.29.1,10.1.29.5,10.1.29.7"
etc
I want to add these to the hosts file with generated server names of the form servernameX, for example:
10.1.29.1 myservername1
10.1.29.5 myservername2
10.1.29.7 myservername3
So how can you do this, given that Puppet doesn't have an array iterator like "for each"?
Sadly, even if you go ahead and use a custom "define" to iterate over an array after splitting your custom fact on a comma, the result will not be what you expect and is not even close to a "for each" loop -- aside from probably causing you a headache.
Said that, I am not sure if this is what you want to achieve, but have a look at this approach:
$fact = '1.1.1.1,2.2.2.2,3.3.3.3'
$servers = split($::fact, ',')
$count = size($servers)
$names = bracket_expansion("host[01-${count}].address")

file { '/tmp/test.txt':
  content => inline_template('<%= @servers.each_with_index.map {|v,i| "#{v}\t\t#{@names[i]}\n" } %>'),
  ensure  => present
}
What we have there are two custom functions, size() and bracket_expansion(), whose results we then use inside a hack that leverages the inline_template() function to render the content of the file, accessing two arrays in parallel -- one with the IP addresses from your fact and one with the host names that should follow.
The result is as follows:
matti@acrux ~ $ cat | puppet apply
$fact = '1.1.1.1,2.2.2.2,3.3.3.3'
$servers = split($::fact, ',')
$count = size($servers)
$names = bracket_expansion("host[01-${count}].address")

file { '/tmp/test.txt':
  content => inline_template('<%= @servers.each_with_index.map {|v,i| "#{v}\t\t#{@names[i]}\n" } %>'),
  ensure  => present
}
notice: /Stage[main]//File[/tmp/test.txt]/ensure: created
notice: Finished catalog run in 0.07 seconds
matti@acrux ~ $ cat /tmp/test.txt
1.1.1.1 host01.address
2.2.2.2 host02.address
3.3.3.3 host03.address
matti@acrux ~ $
Both size() and bracket_expansion() functions can be found here:
https://github.com/kwilczynski/puppet-functions/tree/master/lib/puppet/parser/functions/
I hope this helps a little :-)
