Been trying to wrap my head around comparing a list of lists of dictionaries against other dictionaries in the same list of lists in Python 3.7.1 (sorry if this is unclear...I'll try to spell it out with code).
I'm essentially trying to write some Python that would compare what's installed on a dynamic number of servers provided by the user. A simplified dataset looks like this:
[
    [
        {'server': 'serverA', 'software': 'hadoop', 'version': '1.0'},
        {'server': 'serverA', 'software': 'python', 'version': '3.6'},
        {'server': 'serverA', 'software': 'pip', 'version': '18.0'}
    ],
    [
        {'server': 'serverB', 'software': 'python', 'version': '3.5'},
        {'server': 'serverB', 'software': 'pip', 'version': '18.0'}
    ],
    [
        {'server': 'testServerA', 'software': 'hadoop', 'version': '1.0'},
        {'server': 'testServerA', 'software': 'pip', 'version': '18.0'}
    ],
    [
        {'server': 'testServerB', 'software': 'hadoop', 'version': '1.0'},
        {'server': 'testServerB', 'software': 'python', 'version': '3.6'},
        {'server': 'testServerB', 'software': 'pip', 'version': '18.0'},
        {'server': 'testServerB', 'software': 'ruby', 'version': '2.5'}
    ]
]
I am essentially trying to determine which servers have software installed that the others do not, or are on different versions from one another. The goal is to easily identify what needs to be updated or installed on each server to make them all equal. In this example, the results would be:
serverA has hadoop 1.0 but serverB does not have Hadoop installed
serverA has python 3.6 but serverB has python 3.5
testServerA is missing python.
testServerB has ruby but the others do not (another way to put it: serverA, serverB, and testServerA are missing ruby).
The dataset above is essentially a print of this Python code (the server names are currently hard-coded for testing but would later be provided by choices from a UI):
import requests  # requests is assumed here for the HTTP call, since .json() is used below

servers = ['serverA', 'serverB', 'testServerA', 'testServerB']
installedSoftware = []
for server in servers:
    installedSoftware.append(requests.get('localhost/installed_software/?server=' + server).json())
print(installedSoftware)
I've tried doing things like print(set(installedSoftware[0]) - set(installedSoftware[1])) but I get an "unhashable type: 'dict'" error.
I've also tried looping through the lists one by one to find the differences, but I feel like there should be a way to do this with sets that I'm just not getting.
Any advice on how to accomplish this? I feel like I'm making this more complicated than it has to be, but I'm not very experienced with Python, so I may be making a rookie mistake here.
Thanks for any help or advice that you can give!
Given your use-case, perhaps you have access to a list of all the required software and their current latest versions.
from collections import OrderedDict

expectedSw = OrderedDict([('hadoop', '1.0'), ('python', '3.6'), ('pip', '18.0'),
                          ('ruby', '2.5')])
currentInstallation = []  # Your data
for server in currentInstallation:
    for program in expectedSw.keys():
        if not any(sw.get('software', None) == program and
                   sw.get('version', None) == expectedSw[program] for sw in server):
            print('{} not installed or outdated on {}'.format(program, server[0]['server']))
If you don't have access to the list of programs and their latest versions, you could derive it from the installed software version data.
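For example, here is a minimal sketch of that idea; it assumes the list-of-lists structure from the question and treats the highest version number seen on any server as the "latest" one. Only the first two servers are shown to keep it short:

def version_key(v):
    # Turn a version string like '18.0' into (18, 0) so versions compare numerically.
    return tuple(int(part) for part in v.split('.'))

currentInstallation = [  # a subset of the data from the question
    [{'server': 'serverA', 'software': 'hadoop', 'version': '1.0'},
     {'server': 'serverA', 'software': 'python', 'version': '3.6'},
     {'server': 'serverA', 'software': 'pip', 'version': '18.0'}],
    [{'server': 'serverB', 'software': 'python', 'version': '3.5'},
     {'server': 'serverB', 'software': 'pip', 'version': '18.0'}]
]

# Derive the expected software set: the highest version seen anywhere for each program.
expectedSw = {}
for server in currentInstallation:
    for sw in server:
        name, version = sw['software'], sw['version']
        if name not in expectedSw or version_key(version) > version_key(expectedSw[name]):
            expectedSw[name] = version

# Report what each server is missing or has outdated.
for server in currentInstallation:
    serverName = server[0]['server']
    installed = {sw['software']: sw['version'] for sw in server}
    for program, version in expectedSw.items():
        if program not in installed:
            print('{} is missing {}'.format(serverName, program))
        elif installed[program] != version:
            print('{} has {} {} but {} is expected'.format(serverName, program,
                                                           installed[program], version))

With just these two servers this prints that serverB is missing hadoop and that serverB has python 3.5 but 3.6 is expected; running it on the full dataset gives the complete comparison.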
Side-note: Puppet is quite handy for managing software installed on several machines
What I'm going to do is write a script in Python that takes an Excel file as input, reads the interface numbers and descriptions of a switch written in there, and then SSHes to a Cisco switch and changes the descriptions to the values from the Excel file.
Could anybody give me a hint?
Try checking the netmiko module. I was able to do something close to what you require using netmiko, but now I use the Ansible ios_command module, which is a lot easier for a non-programmer network engineer.
Start with Paramiko or Netmiko; Netmiko is the slightly better option. I would also rethink the actual project: instead of thinking about one switch, think about all of them and see if there is something universal you need to do on all of your switches instead of just one.
For this project you could do the following (a rough Netmiko sketch of these steps is shown after the list).
1. Save the data in a CSV file.
2. Open the CSV file.
3. Create a dictionary and save each interface name as a key and its description as the value.
4. Create a list where you can save all your keys --> l = d.keys()
5. SSH to the switch via Paramiko/Netmiko.
6. Run a loop over the list l; on each iteration send the commands below:
interface l[i]
description d[l[i]]
This will translate to something like:
interface eth1/1
description d['eth1/1'] (d['eth1/1'] will be the value/description of whatever you get from the CSV)
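Putting those steps together, here is a rough Python/Netmiko sketch. The CSV column names, device address, and credentials are assumptions, so adjust them to your environment:

import csv
from netmiko import ConnectHandler

# Steps 1-3: read the CSV (assumed columns: "interface" and "description") into a dict d.
d = {}
with open('interfaces.csv', newline='') as f:
    for row in csv.DictReader(f):
        d[row['interface']] = row['description']

# Step 4: list of interface names (the keys).
l = list(d.keys())

# Step 5: SSH to the switch (device details here are placeholders).
conn = ConnectHandler(device_type='cisco_ios', host='192.0.2.10',
                      username='admin', password='secret')

# Step 6: loop over the list and send the interface/description commands.
commands = []
for interface in l:
    commands.append('interface ' + interface)
    commands.append('description ' + d[interface])
conn.send_config_set(commands)
conn.disconnect()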
If you really want to learn Python then this is a good start; however, if you are on a time crunch, Ansible is the easier option.
I am trying to perform a basic merge operation to add nonexistent nodes and relationships to my graph by going through a CSV file row by row. I'm using py2neo v4, and because there is basically no documentation or examples of how to use py2neo, I can't figure out how to actually get it done. This isn't my real code (the real code is very complicated, to handle many different cases) but its structure is basically like this:
import py2neo as pn

graph = pn.Graph("bolt://localhost:###/", user="neo4j", password="py2neoSux")
matcher = pn.NodeMatcher(graph)
tx = graph.begin()

if matcher.match("Prefecture", name="foo").first() is None:
    previousNode = pn.Node("Type1", name="fo0", yc=1)
else:
    previousNode = matcher.match("Prefecture", name="foo").first()
thisNode = pn.Node("Type2", name="bar", yc=1)

tx.merge(previousNode)
tx.merge(thisNode)
theLink = pn.Relationship(thisNode, "PARTOF", previousNode)
tx.merge(theLink)
tx.commit()
Currently this throws the error
ValueError: Primary label and primary key are required for MERGE operation
the first time it needs to merge a node that it hasn't found (i.e., when creating a node). So then I change the line to:
tx.merge(thisNode, primary_label=list(thisNode.labels)[0], primary_key="name")
This gives me the error IndexError: list index out of range from somewhere deep in the py2neo source code (....site-packages\py2neo\internal\operations.py", line 168, in merge_subgraph at node = nodes[i]). I tried to figure out what was going wrong there, but I couldn't decipher where the nodes list comes from through its various connections to other commands.
So, it currently matches and creates a few nodes without problems, but at some point it will match until it needs to create, and then it fails in trying to create that node (even though it is using the same code and doing the same thing under the same circumstances in a loop). It made it through all 20 rows in my sample once, but it usually stops on rows 3-5.
I thought it had something to do with the transactions (see comments), but I get the same problem when I merge directly on the graph. Maybe it has to do with the py2neo merge function finding more identities for nodes than nodes. Maybe there is something wrong with how I specified my primary label and/or key.
Because this error and code are opaque I have no idea how to move forward.
Anybody have any advice or instructions on merging nodes with py2neo?
Of course I'd like to know how to fix my current problem, but more generally I'd like to learn how to use this package. Examples, instructions, real documentation?
I was having a similar problem and just got done ripping my hair out trying to figure out what was wrong! So, what I learned was that, at least in my case (and maybe yours too, since we got similar error messages and were doing similar things), the problem was that I was trying to create a Node whose __primarykey__ pointed to a different field name than the other nodes used.
PSEUDO EXAMPLE:
# in some for loop or complex code
node = Node("Example", name="Test", something="else")
node.__primarykey__ = "name"
<code merging or otherwise creating the node>
# later on in the loop you might have done something like this because the field was null
node = Node("Example", something="new")
node.__primarykey__ = "something"
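For contrast, here is a minimal sketch of the pattern that avoids the problem: every node of a label carries the same primary key field, and the primary label and key are passed to merge explicitly (the same merge signature you tried in your question). The labels, property names, and connection details below are placeholders:

from py2neo import Graph, Node, Relationship

# Connection details are placeholders; adjust them to your setup.
graph = Graph("bolt://localhost:7687", user="neo4j", password="password")
tx = graph.begin()

# Both nodes use "name" as the field that merge will match on.
parent = Node("Example", name="parent", something="else")
child = Node("Example", name="child", something="new")

tx.merge(parent, primary_label="Example", primary_key="name")
tx.merge(child, primary_label="Example", primary_key="name")
tx.merge(Relationship(child, "PARTOF", parent),
         primary_label="Example", primary_key="name")
tx.commit()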
I hope this helps and was clear; I'm still recovering from wrapping my head around it myself. If it's not clear, let me know and I'll revise.
Good luck.
I'm almost completely new to Linux programming and Bash scripts. I built an amateur radio AllStar node.
I'm trying to create a script that looks at a certain variable and, based on that info, decides whether it should connect or not. I can use the command asterisk -rx "rpt showvars 47168". This returns a list of variables and their current values. I can store the whole list in a variable that I define; in my test script I just called it MYVAR, but I can't seem to get the value of only one of the variables that's listed.
I talked to someone who knows a lot about Linux programming, and she suggested that I try CONNECTED="${MYVAR[3]}", but when I do this, CONNECTED just seems to become a blank variable.
What really frustrates me is that I have written programs in other programming languages, and I've been told Bash scripts are easy to learn, yet I can't seem to get this.
So any help would be great.
How did you assign your variable?
It seems to me that you want to work with an array, then:
#!/bin/bash
myvar=( $( asterisk -rx "rpt showvars 47168" ) )
echo ${myvar[3]}   # this is your fourth element
echo ${#myvar[@]}  # this is the total number of elements in your array
Be careful: indexing in an array starts at 0.
Actual question
How can I query the default location of a personal package library/libraries as described in the R Installation and Administration manual, even after environment variables like R_LIBS_USER or .libPaths() etc. might already have been changed by the user?
I'd just like to understand how exactly R determines the default settings in a platform-independent way.
Naively, I was hoping for something equivalent to R.home("library"), e.g. R.user("library")
Due diligence
I checked this post and the answers sort of contain the information/paths I'd like to retrieve. Unfortunately, I only really know my way around on Windows, not on OS X or Linux, so I'm not sure if/how much of this is correct in a generic sense (home directory, separation of user vs. system-wide stuff, etc.):
OS X
/Library/Frameworks/R.framework/Resources/library
Linux
/usr/local/lib/R/site-library
/usr/lib/R/site-library
/usr/lib/R/library
I also looked into the manual, but that only gave me a basic idea of how R handles these sorts of things (maybe I just looked in the wrong corner; any pointers greatly appreciated).
Background
I sometimes create a temporary, fresh package library for the purpose of having a "sandbox" for systematic testing (e.g. when planning to upgrade certain package dependencies).
When I'm done, I'd like to delete that library again while making absolutely sure that I don't accidentally delete one of the standard libraries (personal library/libraries and system-wide library).
I'm starting to put together a little package called libr for these purposes. Function deleteLibrary contains my current approach (lines 76 ff.):
## Personal libs //
r_vsn <- paste(R.version$major, gsub("\\..*", "", R.version$minor), sep = ".")
if (.Platform$pkgType == "win.binary") {
  lib_p <- file.path(Sys.getenv("HOME"), "R/library", r_vsn)
} else if (.Platform$pkgType == "mac.binary") {
  lib_p <- file.path(Sys.getenv("HOME"), "lib/R", r_vsn)
  ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
  ## --> Hopefully results in something like: '/Users/{username}/lib/R/{version}'
} else if (.Platform$pkgType == "source" && .Platform$OS.type == "unix") {
  lib_p <- file.path(Sys.getenv("HOME"),
                     c(
                       "local/lib/R/site-library",
                       "lib/R/site-library",
                       "lib/R/library"
                     ), r_vsn)
  ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
  ## --> Hopefully results in something like:
  ## '/usr/local/lib/R/site-library/{version}'
  ## '/usr/lib/R/site-library/{version}'
  ## '/usr/lib/R/library/{version}'
} else {
  stop("Don't know what to do for this OS type")
}
I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads in the data from the CSV files; this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater than or less than; however, I suspect the default indexing method is being used instead of ElasticSearch in these instances.
Given the lack of errors when I try to run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel