I've got a Grafana Docker image running with Graphite/Carbon. Sending data via the CLI works, for example:
echo "local.random.diceroll $(((RANDOM%6)+1)) `date +%s`" | nc localhost 2003;
The following Python 2 code also works:
import socket

sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(message)
sock.close()
Here message is a string containing key, value, and timestamp, and this works; the data can be found. So the Grafana Docker image is accepting data.
I wanted to get this working in Python 3, but there the sendall function requires bytes as its parameter. The changed code is:
import socket

sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(str.encode(message))
sock.close()
Now the data isn't inserted and I can't figure out why. I tried this from a remote machine (same network) and on the local server. I also tried several packages (graphiti, graphiteudp), but they all seem to fail to insert the data, and they don't show any error message either.
The simple example on the graphiteudp GitHub page doesn't work either.
Got an idea what I'm doing wrong?
You can add \n to the message you send; Carbon's plaintext protocol is line-based, so a metric line without a trailing newline is never parsed. I have tried it with Python 3, and that works.
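For reference, here is a minimal Python 3 sketch of that fix, reusing the dice-roll metric from the question (host and port are the same placeholders as above):

import random
import socket
import time

CARBON_SERVER = 'localhost'
CARBON_PORT = 2003

# Carbon's plaintext format is one "metric value timestamp" per line;
# the trailing newline is what the Python 3 attempt was missing.
message = 'local.random.diceroll %d %d\n' % (random.randint(1, 6), int(time.time()))

sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(message.encode())
sock.close()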
I have been working through an online class on Python with Coursera (this is not homework) and have been having a problem with urllib.request.urlopen for some URLs. For the URL hardcoded into the code below, the call urllib.request.urlopen(serviceurl, context=ctx).read().decode() times out. If another URL is used, say http://www.woot.com, data is returned.
I have tried this on two separate Ubuntu machines at my location, both running 18.04 (with Python 3.6.7, which is the default, and with 3.7.3 via Anaconda).
I have even reinstalled Ubuntu, with the same results.
Strangely, if I include a timeout parameter (for example, urllib.request.urlopen(serviceurl, timeout=1, context=ctx).read().decode()), data is returned.
Also, this program runs successfully (regardless of URL) with no timeout parameter on a MacBook Air running 3.6.4.
import urllib.request
import ssl

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

address = input('Enter Location: ')
if len(address) < 1:
    serviceurl = 'http://py4e-data.dr-chuck.net/comments_42.xml?'
else:
    serviceurl = address

s = urllib.request.urlopen(serviceurl, context=ctx).read().decode()
print(s)
I seem to be the only one having this issue and it has me stumped. I am just beginning to get familiar with Python (C, C#, and Java are more familiar to me). Any ideas would be appreciated.
Answered (I think) my own question. It looks like the website does not like IPv6 sockets. I was able to trace the hang back to socket.py: the first address used in create_connection is an IPv6 address and port, which does not return anything. Adding a timeout caused the code to select the next address and port from the list, which was IPv4, and that worked. For the time being I have disabled IPv6 in Ubuntu 18.04 to force IPv4 use.
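If you'd rather not disable IPv6 system-wide, one possible workaround (my own sketch, not part of the original diagnosis) is to monkey-patch socket.getaddrinfo so urllib only ever sees IPv4 addresses:

import socket
import urllib.request

# Wrap getaddrinfo so that only IPv4 (AF_INET) results are returned.
_orig_getaddrinfo = socket.getaddrinfo

def _getaddrinfo_ipv4(host, port, family=0, type=0, proto=0, flags=0):
    return _orig_getaddrinfo(host, port, socket.AF_INET, type, proto, flags)

socket.getaddrinfo = _getaddrinfo_ipv4

url = 'http://py4e-data.dr-chuck.net/comments_42.xml'
print(urllib.request.urlopen(url).read().decode())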
I'm trying to read a PNG file and output the NumPy matrix of the image in the terminal, using the imread function of OpenCV on the server, like this:
import os

import cv2
from flask import Flask

application = Flask(__name__)

@application.route('/readImage', methods=['POST'])
def handleHTTPPostRequest():
    imagePath = f'{os.getcwd()}/input.png'
    print('image path is', imagePath)
    print(cv2.__version__)
    im = cv2.imread(imagePath, cv2.IMREAD_COLOR)
    print(im)
    return 'success'
This gives the expected output on my local machine (Ubuntu 18.04) no matter how many times I execute it. I moved this to Elastic Beanstalk (CentOS) with the necessary setup. The request runs fine (gives proper logs along with 'success') the very first time I make a POST call.
But when I make the POST call a second time, it only outputs the first two logs (image path and cv2 version) and is stuck there for a while. After some time, it shows this error:
End of script output before headers: application.py
I have added one more line just before cv2.imread, just to make sure that the file exists:
print('does the file exist', os.path.isfile(imagePath))
This returns True every time. I have restarted the server multiple times; it looks like it only works the very first time, and cv2.imread() is stuck after the first POST call. What am I missing?
When you print from a request handler, Flask tries to do something sensible, but print really isn't what you want to be doing, as it risks throwing the HTTP request/response bookkeeping off.
A fully-supported way of getting diagnostic info out of a handler is to use the logging module. It will require a small bit of configuration. See http://flask.pocoo.org/docs/1.0/logging/
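As a rough sketch of what that could look like (the handler below is a stand-in, not the code from the question; see the linked docs for proper configuration):

import logging
from flask import Flask

application = Flask(__name__)
logging.basicConfig(level=logging.INFO)

@application.route('/readImage', methods=['POST'])
def handle_read_image():
    # application.logger cooperates with the WSGI server, unlike bare print()
    application.logger.info('handling readImage request')
    return 'success'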
To anyone facing this issue, I have found a solution. Add this to your .ebextensions config file:
container_commands:
  AddGlobalWSGIGroupAccess:
    command: "if ! grep -q 'WSGIApplicationGroup %{GLOBAL}' ../wsgi.conf ; then echo 'WSGIApplicationGroup %{GLOBAL}' >> ../wsgi.conf; fi;"
Saikiran's final solution worked for me. I was getting this issue when I tried calling methods from the opencv-python library. I'm running Ubuntu 18.04 locally and it works fine there. However, as in Saikiran's original post, when deployed to Elastic Beanstalk the first request works and then the second one does not. For my EB environment, I'm using a Python 3.6-based Amazon Linux server.
I have installed the ELK stack (Elasticsearch with Kibana) to start using Logstash, but I get the following issue: no default index is set.
[Screenshot: Kibana "no default index set" page]
The page asks me a question, "Do you have indices matching the pattern?", but I don't see a way to answer it and move forward! It's my first time installing this. Any ideas?
I've successfully got the services installed and running using this tutorial: Install ELK Stack.
Update #1
I have entered http://localhost:9200/_cat/indices into my browser and it displays the following:
yellow open .kibana qypsy4K-Qt-jm4_wll9PCQ 1 1 1 0 3.6kb 3.6kb
Update #2
After downloading curl and attempting to import data, I received the following curl messages:
[Screenshot: curl messages from the data import]
Update #3
I've downloaded data from www.kaggle.com and executed the following command to import it, but the command prompt just sits there; I've included a screenshot of the console window below.
You need to enter your index name in the input box in place of logstash-*. You can see your index name by going to localhost:9200/_cat/indices.
Once you put in your index name, it will automatically gather the fields from your index and prompt you for a time field, which you can set or ignore.
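If you prefer checking from code instead of the browser, here is a tiny Python sketch (assuming Elasticsearch is on the default localhost:9200):

import urllib.request

# Lists all indices; your index name should appear in this output.
print(urllib.request.urlopen('http://localhost:9200/_cat/indices').read().decode())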
I'm looking at trying to read pcap files from various CTF events.
Ideally, I would like something that can break the information down the way Wireshark does, but just being able to read the timestamp and return the packet as a bytestring of some kind would be welcome.
The problem is that there is little or no Python 3 support in all the commonly cited libraries: dpkt, pylibpcap, pcapy, etc.
Does anyone know of a pcap library that works with Python 3?
To my knowledge, there are at least two packages that seem to work with Python 3: pure-pcapfile and dpkt:
pure-pcapfile is easy to install in Python 3 using pip. It's very easy to use, but still limited to decoding Ethernet and IP data; the rest is left to you. It works right out of the box, though.
dpkt doesn't work right out of the box and needs some manipulation first. The maintainers are porting it to Python 3 and plan to have a Python 2 and 3 compatible release for version 2.0. Unfortunately, it's not there yet. However, it is far more complete than pure-pcapfile and can decode many protocols; if your packet embeds several layers of protocols, it will decode them automatically for you. The only problem is that you need to make a few corrections here and there to make it work (at the time of writing).
pure-pcapfile
The only one that I found working for Python 3 so far is pcapfile. You can find it at https://pypi.python.org/pypi/pypcapfile/ or install it with pip3 install pypcapfile.
It offers just basic functionality, but it works very well for me and has been updated quite recently (at the time of writing):
from pcapfile import savefile

file = open('mypcapfile.pcap', 'rb')
pcapfile = savefile.load_savefile(file, verbose=True)
If everything goes well, you should see something like this:
[+] attempting to load mypcapfile.pcap
[+] found valid header
[+] loaded 1234 packets
[+] finished loading savefile.
A few remarks now. I'm using Python 3.4.3. A bare import pcapfile will not import everything (I'm still a beginner with Python), only the basic information and functions of the package, so import the submodules you need explicitly, as above. Next, you have to open your file in binary read mode by passing 'rb' as the mode to the open() function; the documentation doesn't say this explicitly.
The rest is as in the documentation:
packet = pcapfile.packets[12]
to access packet number 12 (that is, the 13th packet, the first one being at index 0). And you have basic functionality like
packet.timestamp
to get a timestamp or
packet.raw()
to get raw data.
The documentation mentions functions to do packet decoding of some standard formats like Ethernet and IP.
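Putting those pieces together, here is a small sketch of my own that iterates over a capture and prints each packet's timestamp and raw length (the file name is a placeholder):

from pcapfile import savefile

# Open in binary mode ('rb'), as noted above.
with open('mypcapfile.pcap', 'rb') as f:
    capfile = savefile.load_savefile(f, verbose=True)
    for pkt in capfile.packets:
        # timestamp and raw() are the basic accessors shown above
        print(pkt.timestamp, len(pkt.raw()))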
dpkt
dpkt is not available for Python 3 through pip, so you need to do the following, assuming you have access to a command line. The code is available at https://github.com/kbandla/dpkt.git, and you must download it first:
git clone https://github.com/kbandla/dpkt.git
cd dpkt
git checkout --track origin/migrate_py3
git pull
These four commands do the following:
clone (download) the code from its Git repository on GitHub
go into the newly created directory named dpkt
switch to the branch named migrate_py3, which contains the Python 3 code. As you can see from the name of this branch, it's still experimental; so far it works for me.
(just in case) pull the code again
Then copy the directory named dpkt into your project, or wherever Python 3 can find it.
Later on, in Python 3 here is what you have to do to get started:
import dpkt
file = open('mypcapfile.pcap','rb')
will open your file. Don't forget the 'rb' binary mode in Python 3 (the same as with pure-pcapfile).
pcap = dpkt.pcap.Reader(file)
will read and decode your file.
for ts, buf in pcap:
    eth = dpkt.ethernet.Ethernet(buf)
    print(eth)
will, for example, decode the Ethernet packets and print them. Then read the documentation on how to use dpkt. If your packets contain IP or TCP layers, dpkt.ethernet.Ethernet(buf) will decode those as well. Also note that in the for loop we have access to the timestamp of each packet via ts.
You may want to iterate in a less constrained form, and the following will help:
(ts, buf) = next(pcap)
eth = dpkt.ethernet.Ethernet(buf)
where the first line gets the next (timestamp, buffer) tuple from the pcap file; once there is nothing left to read, the iteration stops.
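Tying those pieces together, here is a short sketch of my own (under the same assumptions as above; the file name is a placeholder) that walks a capture and prints a readable timestamp for each frame:

import datetime
import dpkt

with open('mypcapfile.pcap', 'rb') as f:
    pcap = dpkt.pcap.Reader(f)
    for ts, buf in pcap:
        eth = dpkt.ethernet.Ethernet(buf)
        # ts is a float Unix timestamp; convert it for display
        print(datetime.datetime.utcfromtimestamp(ts), repr(eth)[:80])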
How can I manage to periodically read the output of a script while it is running?
In the case of youtube-dl, it prints download information (progress/speed/ETA) about the video being downloaded to the terminal.
With the following code I am able to capture the total result of the script's output (on Linux) to a temporary file:
tmpFile = io.open("/tmp/My_Temp.tmp", "w+")
f = io.popen("youtube-dl http://www.youtube.com/watch?v=UIqwUx_0gJI", 'r')
tmpFile:write(f:read("*all")) -- blocks until youtube-dl exits
Instead of waiting for the script to complete and writing all the data at the end, I would like to be able to capture "snapshots" of the latest information that youtube-dl has sent to the terminal.
My overall goal is to capture the download information in order to design a progress bar using Iup.
If there are more intelligent ways of capturing download information I will be happy to take advice as well.
Regardless, if it is possible to use io.popen(), os.execute(), or other tools in such a way, I would still like to know how to capture the real-time console output.
This works fine both on Windows and Linux. Lines are displayed in real-time.
local pipe = io.popen'ping google.com'
for line in pipe:lines() do
    print(line)
end
pipe:close()
UPD:
If the previous code didn't work, try the following (as dualed suggested):
local pipe = io.popen'youtube-dl with parameters'
repeat
    local c = pipe:read(1)
    if c then
        -- Do something with the char received
        io.write(c) io.flush()
    end
until not c
pipe:close()