reading the running config file from a network device - cisco

Is there any way to read the running configuration file from a network device (Cisco IOS / Juniper JUNOS) in a properly structured format, for example as an XML file?
Basically I need to get all the attributes and their values from a config file. I am using "expect" to read the config file, so I would have to write a parser to get the attributes out of it.
I was wondering if there is already an implementation of this that I can re-use.
Is there any SDK that can be used to parse the config file, or even better, directly interact with the device and get the data in a standard format?
Kindly guide.
Thanks
Sunil

For Juniper in configuration mode:
show | display xml
For Cisco IOS I've never done this myself, but you can try to use ODMSpec:
http://www.cisco.com/en/US/docs/ios-xml/ios/xmlpi/command/xmlpi-cr-book.pdf
http://www.cisco.com/en/US/docs/net_mgmt/enhanced_device_interface/2.2/developer/guide/progodm.html
I'm not sure that it works with running-config.

On IOS devices, it is:
show run | format
This gives the result in XML format.
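Once the XML output from either command has been captured (for example via your expect script) and saved to a file, any standard XML parser can pull out the attributes and their values. A minimal Java sketch, assuming the dump was saved as config.xml; the file name and the flat element walk are illustrative, not tied to any particular device schema:
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class ConfigDump {
    public static void main(String[] args) throws Exception {
        // Parse the saved XML dump of the running configuration.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("config.xml"));
        // Walk every element and print its attributes and text values.
        NodeList nodes = doc.getElementsByTagName("*");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element e = (Element) nodes.item(i);
            NamedNodeMap attrs = e.getAttributes();
            for (int j = 0; j < attrs.getLength(); j++) {
                Node a = attrs.item(j);
                System.out.println(e.getTagName() + " @" + a.getNodeName() + " = " + a.getNodeValue());
            }
            if (e.getChildNodes().getLength() == 1 && e.getFirstChild().getNodeType() == Node.TEXT_NODE) {
                System.out.println(e.getTagName() + " = " + e.getTextContent().trim());
            }
        }
    }
}
You would still have to map the element names onto whatever schema your device emits, but the tokenising is handled for you.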

Related

collada2gltf converter can't produce *.json file

I am reading a book, Programming 3D Applications with HTML5 and WebGL, which covers the Vizi framework.
All the examples load a *.json file instead of a *.gltf file. Why?
When I load a *.gltf file, it doesn't load anything, and the collada2gltf converter only produces *.gltf, *.bin, *.glsl files and so on.
What should I do?
.gltf is a JSON file. Try opening it with a text editor and see for yourself. The .bin and .glsl files are just additional resources, linked from the .gltf file; they are geometry buffers and shaders respectively. So to make it work, you should make sure that all the files produced by the converter are also available to the web browser you are running your code in.
Also, you can try adding the -e CLI flag to collada2gltf and it will embed all the resources into the resulting .gltf file.
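If you want to check this without a text editor, printing the start of the converter's output is enough to see the JSON structure, including the references to the .bin and .glsl files. A minimal Java sketch, assuming the converter produced model.gltf (the file name is just an example):
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InspectGltf {
    public static void main(String[] args) throws Exception {
        // A .gltf file is plain JSON text; print the beginning to see its structure.
        String text = new String(Files.readAllBytes(Paths.get("model.gltf")), StandardCharsets.UTF_8);
        System.out.println(text.substring(0, Math.min(400, text.length())));
    }
}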

Configure Logstash to wait before parsing a file

I wonder if you can configure logstash in the following way:
Background Info:
Every day I get an XML file pushed to my server, which should be parsed.
To indicate a complete file transfer, I afterwards get an empty .ctl file (a custom file) transferred to the same folder.
Both files follow the naming schema 'feedback_{year}{yearday}_UTC{hoursminutesseconds}_51.{extension}' (e.g. feedback_16002_UTC235953_51.xml), so they have the same file name, but one is the .xml file and the other is the .ctl file.
Question:
Is there a way to configure Logstash to hold off parsing the XML file until the corresponding .ctl file is present?
EDIT:
Is there maybe a way to achieve that with Filebeat?
EDIT2:
It would also be enough to be able to configure Logstash to wait x minutes before starting to process a new file, if that is easier.
Thanks for any help in advance
Your problem is that you don't want to start the parser before the file transfer has completed. So, why not push the data to a second file (file-complete.xml) once you find your flag file (empty.ctl)?
Here is the possible logic as a small shell script, run from crontab:
#!/bin/sh
# Only hand the data over once the flag file signals a finished transfer.
if [ -f empty.ctl ]; then
    # Truncate file-complete.xml and refill it with the finished transfer.
    cat file.xml > file-complete.xml
    rm empty.ctl
fi
This way, you only need to parse the data from file-complete.xml, which I think is simpler to debug and configure.
Hope it helps,

Apache Pig: Load a file that shows fine using hadoop fs -text

I have files that are named part-r-000[0-9][0-9] and that contain tab separated fields. I can view them using hadoop fs -text part-r-00000 but can't get them loaded using pig.
What I've tried:
x = load 'part-r-00000';
dump x;
x = load 'part-r-00000' using TextLoader();
dump x;
but that only gives me garbage. How can I view the file using pig?
What might be of relevance is that my HDFS is still using CDH-2 at the moment.
Furthermore, if I download the file locally and run file part-r-00000, it says part-r-00000: data; I don't know how to unzip it locally.
According to HDFS Documentation, hadoop fs -text <file> can be used on "zip and TextRecordInputStream" data, so your data may be in one of these formats.
If the file was compressed, normally Hadoop would add the extension when outputting to HDFS, but if this was missing, you could try testing by unzipping/ungzipping/unbzip2ing/etc locally. It appears Pig should do this decompressing automatically, but may require the file extension be present (e.g. part-r-00000.zip) -- more info.
I'm not too sure about the TextRecordInputStream; it sounds like it would just be Pig's default method, but I could be wrong. I didn't see any mention of LOADing this data via Pig when I did a quick Google.
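One quick way to narrow it down is to look at the first bytes of the copy you downloaded: a Hadoop SequenceFile always starts with the magic header SEQ followed by a version byte. A minimal Java check, assuming the local file is named part-r-00000:
import java.io.FileInputStream;
import java.nio.charset.StandardCharsets;

public class CheckSeq {
    public static void main(String[] args) throws Exception {
        byte[] magic = new byte[3];
        try (FileInputStream in = new FileInputStream("part-r-00000")) {
            in.read(magic);
        }
        // SequenceFiles begin with the ASCII bytes 'S', 'E', 'Q'.
        System.out.println(new String(magic, StandardCharsets.US_ASCII).equals("SEQ")
                ? "Looks like a SequenceFile"
                : "Not a SequenceFile header");
    }
}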
Update:
Since you've discovered it is a sequence file, here's how you can load it using PiggyBank:
-- using Cloudera directory structure:
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
--REGISTER /home/hadoop/lib/pig/piggybank.jar;
DEFINE SequenceFileLoader org.apache.pig.piggybank.storage.SequenceFileLoader();
-- Sample job: grab counts of tweets by day
-- (not sure if Pig likes the {00..99} syntax, but worth a shot)
A = LOAD 'mydir/part-r-000{00..99}'
    USING SequenceFileLoader AS (key:long, val:long, etc.);
If you want to manipulate (read/write) sequence files with Pig, then you can also give Twitter's Elephant-Bird a try.
You can find examples there of how to read and write them.
If you use custom Writables in your sequence file, then you can implement a custom converter by extending AbstractWritableConverter.
Note that Elephant-Bird needs Thrift installed on your machine.
Before building it, make sure that it is using the same Thrift version you have installed, and provide the correct path to the Thrift executable in its pom.xml:
<plugin>
  <groupId>org.apache.thrift.tools</groupId>
  <artifactId>maven-thrift-plugin</artifactId>
  <version>0.1.10</version>
  <configuration>
    <thriftExecutable>/path_to_thrift/thrift</thriftExecutable>
  </configuration>
</plugin>

Wrong text encoding when parsing json data

I am curling a website and writing the response to a .json file; this file is input to my Java code, which parses it using a JSON library, and the necessary data is written back to a CSV file that I later use to store it in a database.
As you know, data coming from a website can be in different formats, so I make sure that I read and write in UTF-8; still, I get wrong output.
For example, Østerriksk becomes �sterriksk.
I am doing all this on Linux. I think there is some encoding problem, because this same code runs fine on Windows but not on Unix/Linux.
I am quite sure my Java code is correct, but I am not able to find out what I'm doing wrong.
You're reading the data as ISO 8859-1 but the file is actually UTF-8. I think there's an argument (or setting) to the file reader that should solve that.
Also: curl isn't going to care about the encodings. It's really something in your Java code that's wrong.
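For example, instead of relying on the platform default charset (which is what e.g. FileReader does, and which differs between your Windows and Linux machines), you can pass the charset explicitly when wrapping the stream; the file name here is just a placeholder:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadJsonUtf8 {
    public static void main(String[] args) throws Exception {
        // Declare the charset explicitly so the platform default is never used.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new FileInputStream("data.json"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
The same applies on the output side: write the CSV through an OutputStreamWriter constructed with StandardCharsets.UTF_8 rather than a plain FileWriter.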
What IDE are you using? This can happen, for example, if you are using the Eclipse IDE and have not set your default encoding to UTF-8 in the properties.

Splunk rewrites xml input incorrectly

I have a number of applications that I want to log to Splunk. I will be sending the data in an XML format via a UDP listener. The data that is being sent looks like:
<log4j:event logger="ASP.global_asax" level="INFO" timestamp="1303830487907" thread="15">
  <log4j:message>New session started</log4j:message>
  <log4j:properties>
    <log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)" />
    <log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON" />
  </log4j:properties>
</log4j:event>
However when it is processed by Splunk it appears like:
Apr 26 16:18:09 127.0.0.1 <log4j:message>New session started</log4j:message><log4j:properties><log4j:data name="log4japp" value="4ef113dd-9-129483040292873753(4644)"/><log4j:data name="log4jmachinename" value="W7-SUN-JSTANTON"/></log4j:properties></log4j:event>
Basically it looks like Splunk has overwritten the opening node with the date/time at which it received the event, and as a result the log level data is lost. The applications sending the events use NLog with a log4j-style target (with a Log4JXmlEventLayout layout). I have configured the sourcetype as log4jxml (a custom name), but I think I need to tell Splunk not to do something with the date/time field in the props.conf file (I'm just not too sure what that something is).
I am also using the Windows version of Splunk, so the file paths are slightly different from those in the online manuals.
Any help would be most welcome.
It turns out I was doing 2 things wrong (maybe more, but I have not found those yet).
In the inputs.conf file I needed to add the following to my input definition:
no_priority_stripping = true
no_appending_timestamp = true
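For reference, the complete stanza ends up looking something like this (the UDP port is an assumption; the sourcetype is the custom one from the question):
[udp://514]
sourcetype = log4jxml
no_priority_stripping = true
no_appending_timestamp = true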
The second thing I was doing wrong was to put these files in
C:\Program Files\Splunk\etc\system\local\
when they SHOULD have been put in
C:\Program Files\Splunk\etc\apps\search\local\
I hope that this helps somebody else out
