Expand multiple python variables in one line inside a jupyter-lab notebook - python-3.x

I have searched many different sources and cannot determine why the IPython kernel behaves this way in a Jupyter notebook, so I'm posting here to figure out why.
I'm using notebooks to document command-line analysis, and Jupyter notebooks are a very useful format for running commands and saving the results to a PDF.
I want to be able to use multiple variable expansions in one line of a shell magic command in IPython.
Right now I can accomplish this using multiple %env magic commands like so:
#commented code below for context of commands run in a previous cell:
#ssl_logs are some logs of ssl headers
#ssl_log = f"SOMEDIR/ssl.log"
#!cat {ssl_log} | jq -r '.ja3' | sort | uniq > uniq_ja3.txt
#j3_ua_db = "ja3fingerprint_ua.json" #HTTP user-agent strings associated with each JA3 hash
#j3_hash_db = "ja3fingerprint_hash.json" #before/after hashing
#code I'm running in the specific cell with issues
for j in ja3s:
    %env j {j}
    %env j3_ua_db {j3_ua_db}
    !echo $j $j3_ua_db
    !grep "$j" "$j3_ua_db" | jq -cr "{ ja3fp, useragent }"
    break
Its output is below:
env: j=021d3c3f14b88d57a9ce2d946cabe87f
env: j3_ua_db=ja3fingerprint_ua.json
021d3c3f14b88d57a9ce2d946cabe87f ja3fingerprint_ua.json
{"ja3fp":"021d3c3f14b88d57a9ce2d946cabe87f","useragent":"curl"}
{"ja3fp":"021d3c3f14b88d57a9ce2d946cabe87f","useragent":"curl/7.29.0"}
.....
However, what I want to accomplish is to expand the j and j3_ua_db variables in the grep command and pipe the result into jq. When I run the code below, the second variable does not expand, and I think it's because IPython won't expand multiple Python variables on the same line.
for j in ja3s:
    print(j)
    print(j3_ua_db)
    !echo {j} {j3_ua_db}
    !grep {j} {j3_ua_db} | jq -cr "{ ja3fp, useragent }"
    break
OUTPUT:
021d3c3f14b88d57a9ce2d946cabe87f
ja3fingerprint_ua.json
021d3c3f14b88d57a9ce2d946cabe87f ja3fingerprint_ua.json
grep: {j3_ua_db}: No such file or directory
To be clear, these are all example outputs from simulation data. No internal data is being published here.
TLDR:
I understand I can just do this in a shell like so:
for ja3fp in $( cat uniq_ja3.txt ); do
    grep "$ja3fp" "$j3_ua_db" | jq -cr '{ ja3fp, useragent }'
done
But I want the same variable-expansion functionality in IPython in a Jupyter notebook for ease of use, rather than needing to create a %env line for every variable I want to use across multiple lines.
Does anyone know how I can expand multiple IPython variables in one line of a "!" shell command?
Is there some syntax foo I am missing?
EDIT: my versions are as follows:
$ python3 --version
Python 3.8.5
$ jupyter-lab --version
3.3.3
$ ipython --version
8.2.0
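One plausible explanation and fix, offered here as a sketch since it isn't confirmed in this thread: IPython expands both $var and {expr} inside ! commands, so it likely tries to evaluate jq's literal "{ ja3fp, useragent }" as a Python expression; when that fails, the whole line is left unexpanded (which would also explain why the %env version works: there, the shell itself expands $j and $j3_ua_db, and IPython expansion never needs to succeed). IPython's documented escape for a literal brace is doubling it, so something like this should expand both variables on one line:
for j in ja3s:
    # {j} and {j3_ua_db} expand as Python variables; {{ }} passes literal
    # braces through to jq (brace-doubling is IPython's escape for { and })
    !grep "{j}" "{j3_ua_db}" | jq -cr "{{ ja3fp, useragent }}"
    break
If this diagnosis is right, the same brace-doubling should also make the "$j"-style expansion work without %env.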

Related

Bash script throws command error when populating variable [duplicate]

I have the following two lines in a bash script
iperf_options=" -O 10 -V -i 10 --get-server-output -P " $streams
$iperf_options=$iperf_options $proto
and
$streams = 2
$proto = -u
But when I run this I get the following error:
./bandwidth: line 116: -O: command not found
I am simply trying to build a string and then append it to a variable, so why does it throw the error on the -O?
I have looked around the web but I just seem to find stuff about spaces around the "=".
Any help gratefully received. Thank you.
Code block to show the error:
proto=-u
streams=2
iperf_options=" -O 10 -V -i 10 --get-server-output -P " $streams
$iperf_options=$iperf_options $proto
Running this gives this output:
./test
./test: line 3: 2: command not found
./test: line 4: =: command not found
There are two main mistakes here, in a variety of combinations.
Use $ to get the value of a variable, never when setting the variable (or changing its properties):
$var=value # Bad
var=value # Good
var=$othervar # Also good
Spaces are critical delimiters in shell syntax; adding (or removing) them can change the meaning of a command in unexpected ways:
var = value      # Runs `var` as a command, passing "=" and "value" as arguments
var=val1 val2    # Runs `val2` as a command, with var=val1 set in its environment
var="val1 val2"  # Sets `var` to `val1 val2`
So, in this command:
iperf_options=" -O 10 -V -i 10 --get-server-output -P " $streams
The space between iperf_options="..." and $streams means that it'll expand $streams and try to run it as a command (with iperf_options set in its environment). You want something like:
iperf_options=" -O 10 -V -i 10 --get-server-output -P $streams"
Here, since $streams is part of the double-quoted string, it'll be expanded (variables expand inside double quotes, but not inside single quotes), and its value will be included in the value assigned to iperf_options.
There's actually a third mistake (or at least a dubious scripting practice): building lists of options as simple string variables. This works in simple cases, but fails when things get complex. If you're using a shell that supports arrays (e.g. bash, ksh, zsh, but not dash), it's better to use those instead: store each option/argument as a separate array element, then expand the array with "${arrayname[@]}" to get all of the elements out intact (yes, all those quotes, braces, and brackets are actually needed).
proto="-u" # If this'll always have exactly one value, plain string is ok
streams=2 # Same here
iperf_options=(-O 10 -V -i 10 --get-server-output -P "$streams")
iperf_options=("${iperf_options[@]}" "$proto")
# ...
iperf "${iperf_options[@]}"
Finally, I recommend shellcheck.net to sanity-check your scripts for common mistakes. A warning, though: it won't catch all errors, since it doesn't know your intent. For instance, if it sees var=val1 val2 it'll assume you meant to run val2 as a command and won't flag it as a mistake.

How to execute svn command along with grep on windows?

Trying to execute an svn command on a Windows machine and capture its output.
Code:
import subprocess
cmd = "svn log -l1 https://repo/path/trunk | grep ^r | awk '{print \$3}'"
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
Getting error as "'grep' is not recognized as an internal or external command,
operable program or batch file."
I do understand that grep is not a Windows utility.
Is this limited to running on Linux?
Can the same be executed on Windows?
Is my code right?
For Windows, your command will look something like the following:
svn log -l1 https://repo/path/trunk | find "string_to_find"
You need to use the find utility on Windows to get the same effect as grep. For example:
svn --version | find "ra"
* ra_svn : Module for accessing a repository using the svn network protocol.
* ra_local : Module for accessing a repository on local disk.
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
Use svn log --search FOO instead of grep-ing the command's output.
grep and awk are certainly available for Windows as well, but there is really no need to install them -- the code is easy to replace with native Python.
import subprocess

p = subprocess.run(["svn", "log", "-l1", "https://repo/path/trunk"],
                   capture_output=True, text=True)
for line in p.stdout.splitlines():
    # grep ^r
    if line.startswith('r'):
        # awk '{ print $3 }'
        print(line.split()[2])
Because we don't need a pipeline, and just run a single static command, we can avoid shell=True.
Because we don't want to do the necessary plumbing (which you forgot anyway) for Popen(), we prefer subprocess.run(). With capture_output=True we conveniently get its output in the resulting object's stdout attribute; because we expect text output, we pass text=True (in older Python versions you might need to switch to the old, slightly misleading synonym universal_newlines=True).
I guess the intent is to search for the committer in each revision's output, but this will incorrectly grab the third token on any line which starts with an r (so if you have a commit message like "refactored to use Python native code", the code will extract use from that). A better approach altogether is to request machine-readable output from svn and parse that (but it's unfortunately rather clunky XML, so there's another not entirely trivial rabbit hole for you). Perhaps, as a middle ground, implement a more specific pattern for finding those lines: maybe look for a specific number of fields, and static strings where you know to expect them.
if line.startswith('r'):
    fields = line.split()
    # A revision header like the sample below splits into 14 whitespace-separated fields
    if len(fields) == 14 and fields[1] == '|' and fields[3] == '|':
        print(fields[2])
You could also craft a regular expression to look for a date stamp in the third |-separated field, and the log-message line count in the fourth.
For the record, a complete commit message from Subversion looks like
------------------------------------------------------------------------
r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines
refactored to use native Python instead of grep + awk
(which is a useless use of grep anyway; see http://www.iki.fi/era/unix/award.html#grep)
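For concreteness, here is a hedged sketch of that regular-expression middle ground. The pattern is an assumption fitted to the sample header above (Subversion's exact formatting may vary), and the repository URL is the placeholder from the question:
import re
import subprocess

# Matches a revision header like:
#   r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines
HEADER = re.compile(
    r'^r\d+ \| (?P<committer>[^|]+) \| '
    r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} [+-]\d{4} \([^)]+\) \| \d+ lines?$')

p = subprocess.run(["svn", "log", "-l1", "https://repo/path/trunk"],
                   capture_output=True, text=True)
for line in p.stdout.splitlines():
    m = HEADER.match(line)
    if m:
        # The committer is the second |-separated field
        print(m.group('committer').strip())
A regex like this is much less likely than startswith('r') to be fooled by a commit message that happens to begin with "r".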

How to run a python file and scan the generated ips by nmap

I have written a Python script to search through Shodan; my script returns a file containing an IP list, with a single IP on each line.
Here is my code:
import shodan

SHODAN_API = "YOUR_SHODAN_API"
api = shodan.Shodan(SHODAN_API)
try:
    # Search using Shodan
    results = api.search('EXAMPLE')
    # Show the results
    print 'Results found: %s' % results['total']
    for result in results['matches']:
        print '%s' % result['ip_str']
        ''' The following lines could be uncommented for more information.
        Don't uncomment them if you are using scanning methods with the results. '''
        #print result['data']
        #print ''
except shodan.APIError, e:
    print 'Error: %s' % e
I was wondering if there is any way to automate the task of running my code and then scanning the IP list with an external script, or something that works on OS X and Linux?
You can simply use a bash script like the following one:
#!/bin/bash
python ShodanSearch.py >> IPResult.txt
cat IPResult.txt | while read line
do
    sudo nmap -n -Pn -sV -p 80,8080 -oG - $line >> NResult.txt
done
As an alternative to the solution above, you could execute nmap from within your Python script, either with the os module to run shell commands or (the now-preferred method) with the subprocess module. I haven't personally used the latter, but it can definitely do what you want.
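A minimal sketch of that subprocess route, with the script name, ports, and output file carried over as assumptions from the bash example above (not a tested recipe):
import subprocess

# Run the Shodan search script and collect its output
# (assumes it prints one IP per line, as in the bash version)
ips = subprocess.run(["python", "ShodanSearch.py"],
                     capture_output=True, text=True).stdout.splitlines()

with open("NResult.txt", "w") as out:
    for ip in ips:
        # Same nmap flags as the bash loop above; one scan per IP
        scan = subprocess.run(
            ["nmap", "-n", "-Pn", "-sV", "-p", "80,8080", "-oG", "-", ip],
            capture_output=True, text=True)
        out.write(scan.stdout)
Passing the command as a list avoids shell=True, and collecting the IPs directly from stdout skips the intermediate IPResult.txt file.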

How do I parse output from result of CURL command?

I have a Jenkins console output that looks like this:
Started by remote host 10.16.17.13
Building remotely on ep9infrajen201 (ep9) in workspace d:\Jenkins\workspace\Tools\Provision
[AWS-NetProvision] $ powershell.exe -NonInteractive -ExecutionPolicy ByPass "& 'C:\Users\user\AppData\Local\Temp\jenkins12345.ps1'"
Request network range: 10.1.0.0/13
{
    "networks":  [
                     "10.1.0.0/24"
                 ]
}
Finished: SUCCESS
I get this from a curl command that I run to check JENKINS_JOB_URL/lastBuild/consoleText.
My question is, for the sake of some other automation I am doing, how do I get just "10.1.0.0/24" so I can assign it to a shell variable using Linux tools?
Thank you
Since you listed jq among the tags of your duplicate question, I'll assume you have jq installed. You have to clean up your output to get JSON first, then get to the part of the JSON you need. awk does the former, jq the latter.
.... | awk '/^{$/{p=1}{if(p){print}}/^}$/{p=0}' | jq -r .networks[0]
The AWK script looks for { on its own on a line to turn on a flag p; prints the current line if the flag is set; and switches off the flag when it encounters } all by itself.
EDIT: Since this output was generated on a DOS machine, it has DOS line endings (\r\n). To convert those before awk, additionally pipe through dos2unix.
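For readers who would rather do this step in Python than in an awk/jq pipeline, here is a hedged sketch using the same brace-matching heuristic as the awk script (the file name consoleText is a hypothetical place you saved the curl output):
import json

# Console text as fetched with curl and saved locally
with open("consoleText") as f:
    # Normalize DOS line endings, as the dos2unix note above suggests
    lines = f.read().replace("\r\n", "\n").splitlines()

# Same heuristic as the awk script: keep everything between a lone "{" and a lone "}"
start = lines.index("{")
end = lines.index("}", start)
data = json.loads("\n".join(lines[start:end + 1]))
print(data["networks"][0])  # -> 10.1.0.0/24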

Internal Variable PIPESTATUS

I am new to Linux and bash scripting, and I have a query about the internal variable PIPESTATUS, which is an array that stores the exit status of each individual command in a pipe.
On command line:
$ find /home | /bin/pax -dwx ustar | /bin/gzip -c > myfile.tar.gz
$ echo ${PIPESTATUS[*]}
0 0 0
This works fine on the command line, but when I put this code in a bash script it shows only one exit status. My default shell on the command line is bash.
Can somebody please help me understand why this behaviour changes, and what I should do to get this to work in a script?
#!/bin/bash
cmdfile=/var/tmp/cmd$$
backfile=/var/tmp/backup$$
find_fun() {
    find /home
}
cmd1="find_fun | /bin/pax -dwx ustar"
cmd2="/bin/gzip -c"
eval "$cmd1 | $cmd2 > $backfile.tar.gz " 2>/dev/null
echo -e " find ${PIPESTATUS[0]} \npax ${PIPESTATUS[1]} \ncompress ${PIPESTATUS[2]}" > $cmdfile
The problem you are having with your script is that you aren't running the same code as you ran on the command line. You are running different code. Namely the script has the addition of eval. If you were to wrap your command line test in eval you would see that it fails in a similar manner.
The reason the eval version fails (only gives you one value in PIPESTATUS) is because you aren't executing a pipeline anymore. You are executing eval on a string that contains a pipeline. This is similar to executing /bin/bash -c 'some | pipe | line'. The thing actually being run by the current shell is a single command so it has a single exit code.
You have two choices here:
Get rid of eval (which you should do anyway, as eval is generally something to avoid) and stop using a string for a command (see Bash FAQ 050 for more on why doing this is a bad idea).
Move the echo "${PIPESTATUS[@]}" into the eval and then capture (and split/parse) the resulting output. (This is clearly a worse solution in just about every way.)
Instead of ${PIPESTATUS[0]}, use ${PIPESTATUS[@]}.
As with any array in bash, PIPESTATUS[0] contains the first command's exit status. If you want to get all of them, you have to use ${PIPESTATUS[@]}, which returns all the contents of the array.
I'm not sure why it worked for you when you tried it in the command line. I tested it and I didn't get the same result as you.
