Plot a continuous graph of Number of Snort alerts against time - python-3.x

I have Snort logging DDoS alerts to a file; I use syslog-ng to parse the logs and push them, in JSON format, into Redis (I wanted to set Redis up as a buffer; I use the 'setex' command with an expiry of 70 seconds).
The whole thing doesn't seem to be working well; any ideas to make it easier are welcome.
I wrote a simple Python script to listen for Redis keyspace notifications (notify-keyspace-events 'KA') and count the number of Snort alerts per second. I tried creating two other threads: one to retrieve the JSON-formatted alerts and a second to count the alerts. A third is supposed to plot a graph using matplotlib.pyplot.
#import time
from redis import StrictRedis as sr
import os
import json
import matplotlib.pyplot as plt
import threading as th
import time

redis = sr(host='localhost', port=6379, decode_responses=True)
#file = open('/home/lucidvis/vis_app_py/log.json','w+')

# This function is still being worked on
def do_plot():
    print('do_plot loop running')
    while True:
        if accumulated_data:
            x_values = [int(x['time_count']) for x in accumulated_data]
            y_values = [y['date'] for y in accumulated_data]
            plt.title('Attack Alerts per time period')
            plt.xlabel('Time', fontsize=14)
            plt.ylabel('Snort Alerts/sec')
            plt.tick_params(axis='both', labelsize=14)
            plt.plot(y_values, x_values, linewidth=5)
            plt.show()
        time.sleep(0.01)
def accumulator():
    # First, check the current json data and see if its 'sec' value is the same
    # as that of the last entry in the accumulated data list.
    # If it is the same, increase time_count by one; otherwise append the entry
    # and start over with the new data.
    pointer_data = {}
    print('accumulator loop running')
    while True:
        # pointer_data is the json data for the current sec, used for comparison
        # new_data is the latest json-formatted alert received
        # received_from_redis is a list declared in the main block
        if received_from_redis:
            new_data = received_from_redis.pop(0)
            if not pointer_data:
                pointer_data = new_data.copy()
                pointer_data.setdefault('time_count', 1)
                print(">>", type(pointer_data), " >> ", pointer_data)
            if pointer_data and pointer_data['sec'] == new_data["sec"]:
                pointer_data['time_count'] += 1
            elif pointer_data:
                accumulated_data.append(pointer_data)
                pointer_data = new_data.copy()
                pointer_data.setdefault('time_count', 1)
        else:
            time.sleep(0.01)
# The main function creates the pubsub object and receives messages based on events.
# The two functions above run in separate threads so all three appear to run concurrently.
def main():
    p = redis.pubsub()
    p.psubscribe('__keyspace@0__:*')
    print('Starting message loop')
    while True:
        try:
            time.sleep(2)
            message = p.get_message()
            # Obtain the key from the event redis emits, if the event is a set event.
            # The event emitted by redis is a dict; the key we want is the value
            # under 'channel', which is in '__keyspace@0__:<key>' form, so take
            # the last field of the list returned by split.
            if message and message['data'] == 'set':
                key = message['channel'].split('__:')[-1]
                data_redis = json.loads(redis.get(str(key)))
                received_from_redis.append(data_redis)
        except Exception as e:
            print(e)
            continue
if __name__ == "__main__":
    accumulated_data = []
    received_from_redis = []
    # Threads can only be started once, so start them outside any loop;
    # main() itself loops forever receiving messages.
    thread_accumulator = th.Thread(target=accumulator, name='accumulator')
    do_plot_thread = th.Thread(target=do_plot, name='do_plot')
    thread_accumulator.start()
    do_plot_thread.start()
    main()
    thread_accumulator.join()
    do_plot_thread.join()
I don't currently get errors per se; I just can't tell whether the threads are created or are working well. I need ideas to make things work better.
A sample of an alert, formatted in JSON and obtained from Redis, is below:
{"victim_port":"","victim":"192.168.204.130","protocol":"ICMP","msg":"Ping_Flood_Attack_Detected","key":"1000","date":"06/01-09:26:13","attacker_port":"","attacker":"192.168.30.129","sec":"13"}

I'm not sure I understand your scenario exactly, but if you want to count events that are essentially log messages, you can probably do that within syslog-ng itself: either in a Python destination (since you are already working in Python), or maybe even without additional programming, using the grouping-by parser.
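To illustrate the first option, a rough, untested sketch of a python() destination, assuming a syslog-ng build with Python support (the class would be referenced from syslog-ng.conf, e.g. destination d_py { python(class("alert_counter.AlertCounter")); };):

import json

class AlertCounter(object):
    def init(self, options):
        # called once when syslog-ng starts the destination
        self.per_second = {}
        return True

    def send(self, msg):
        # msg is dict-like; values may arrive as bytes depending on version
        raw = msg['MESSAGE']
        alert = json.loads(raw if isinstance(raw, str) else raw.decode())
        self.per_second[alert['sec']] = self.per_second.get(alert['sec'], 0) + 1
        return True  # True tells syslog-ng the message was delivered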

Related

SCPI Test Instrument Driven with Python Socket Returns Data Inconsistently

I'm trying to get data from a temperature chamber consistently. My code:
import socket
import time

class TempChamber:
    def __init__(self, name):
        self.name = name

    def creat_tcp_connection(self):
        sock_tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock_tcp.connect(('192.168.17.141', 5025))
        return sock_tcp

    def sendCommand(self, command):
        sock = self.creat_tcp_connection()
        print(command)
        sock.send(command)
        time.sleep(2)  # Needed to have commands fully send
        packet = sock.recv(1024)
        print("Received ", str(packet))
        sock.close()

    def id(self):
        self.sendCommand(b':*IDN?')
If I run the test code below in quick succession (five times), I generally get back 3 of the 5 responses. Sometimes I get the full 5 back, sometimes four.
from TempChamber import TempChamber

t = TempChamber("Test")
for x in range(5):
    print(x)
    t.id()
What I tried:
(1) Introduced a time.sleep after the command is sent and before the data is received, expecting consistent data; even 10 seconds produces inconsistent results.
(2) I tried the top-ranked response from this Stack Overflow link, but my DMM does not respond to the packed messages. Are there any alternatives to this method?
(3) I notice now this may be something to do with my temperature chamber (control product number: Watlow F4T), not exactly the code. The same code retrieves data from my digital multimeter, and I get 5 responses every time I run the dummy code. Considering I only need to send/receive data from the temp chamber periodically, and not in rapid succession like the DMM, this may be a moot point.
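For what it's worth, two common SCPI-over-socket pitfalls fit these symptoms: opening a fresh connection per command, and relying on sleep instead of reading until the response terminator. A sketch of the alternative, assuming the instrument terminates responses with a newline (class and method names here are hypothetical):

import socket

class TempChamberKeepAlive:
    """Hypothetical variant: one persistent connection, read to terminator."""

    def __init__(self, host='192.168.17.141', port=5025):
        self.sock = socket.create_connection((host, port), timeout=5)

    def query(self, command):
        self.sock.sendall(command + b'\n')  # most SCPI devices expect a newline
        buf = b''
        while not buf.endswith(b'\n'):      # read until the response terminator
            chunk = self.sock.recv(1024)
            if not chunk:                   # connection closed by instrument
                break
            buf += chunk
        return buf.strip()

    def idn(self):
        return self.query(b'*IDN?')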

Running 2 infinite loops simultaneously switching control - Python3

I am building an image processing application that reads the current screen and does some processing. The code skeleton is as given below:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            # Switch to perform_another_processing, as the next iteration is not
            # needed for the next 3 minutes. If this processing takes more than
            # 3 minutes, complete the current iteration, then switch.
            time.sleep(180)

    def perform_another_processing(self, inp):
        while True:
            map.perform_some_other_image_processing_from_screen(inp)
            # Switch to perform_image_processing, as the next iteration is not
            # needed for the next 3 minutes. If this processing takes more than
            # 3 minutes, pause it and switch to the other function.
            time.sleep(180)

mp = myprocess()
mp.perform_image_processing(my_image)  # identify items from the screen capture
mp.perform_another_processing(2)       # calculate values based on screen capture
As of now, I am able to run only one of the functions at a time.
My questions:
How can I run both of them simultaneously (as two separate threads/processes?), given that both functions may need to access/switch the same screen at the same time?
One option I can think of is both functions setting a common variable (to 1/0) and passing control of execution to each other before going to sleep. Is that possible? How do I implement it?
Any help here will let me add multiple other similar functionalities to my application.
Thanks
Bishnu
Note:
For anyone who could not visualize what I wanted to achieve, here is my code that works fine. This bot code checks the screens to shield (to protect from attacks) or gather (resources in the kingdom):
def shield_and_gather_loop(self):
    current_shield_sts = self.renew_shield()
    num_of_gatherers = self.num_of_troops_gathering()
    gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
    shield_sleep = True if current_shield_sts == 'SHIELDED' else False
    shield_timeout = time.time() + 60 * 3 if shield_sleep == True else None
    while True:
        while shield_sleep == False:  # and current_shield_sts != 'SHIELDED':
            self.reach_home_screen_from_anywhere()
            current_shield_sts = self.renew_shield()
            print("Current Shield Status # ", datetime.datetime.now(), " :", current_shield_sts)
            shield_sleep = True
            #self.go_to_sleep("Shield Check ", hours=0, minutes=3, seconds=0)
            shield_timeout = time.time() + 60 * 3  # 3 minutes from now
        while num_of_gatherers < self.Max_Army_Size and gather_sleep == False:
            if time.time() < shield_timeout:
                self.device.remove_processed_files()
                self.reach_kd_screen_from_anywhere()
                self.found_rss_tile = 0
                self.find_rss_tile_for_gather(random.choice(self.gather_items))
            num_of_gatherers = self.num_of_troops_gathering()
            print("Currently gathering: ", str(num_of_gatherers))
        if gather_sleep == True and shield_sleep == True:
            print("Both gather and shield are working fine. Going to sleep for 2 minutes...")
            time.sleep(120)
            # Wake up from sleep and recheck shield and gather status
            current_shield_sts = self.renew_shield()
            num_of_gatherers = self.num_of_troops_gathering()
            gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
            shield_sleep = True if current_shield_sts == 'SHIELDED' else False
Seems like the obvious thing to do is this, as it satisfies your criterion of having only one of the two actions running at a time:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            map.perform_some_other_image_processing_from_screen(inp)
            time.sleep(180)

mp = myprocess()
mp.perform_image_processing(my_image, 2)
If you need something more elaborate (such as perform_some_other_image_processing_from_screen() being able to interrupt a call to perform_some_image_processing_from_screen() before it has returned), that might require threads. But it would also require your perform_*() functions to serialize access to the data they are manipulating, which would make them slower and more complicated; and based on the comments at the bottom of your example, it doesn't sound like the second function can do any useful work until the first function has completed its identification task anyway.
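For illustration only, a minimal sketch (with hypothetical worker names) of what that serialization could look like with a shared lock, so only one function touches the screen at a time:

import threading
import time

screen_lock = threading.Lock()

def image_worker(image_name):
    while True:
        with screen_lock:  # blocks while the other worker is using the screen
            mip.perform_some_image_processing_from_screen(image_name)
        time.sleep(180)

def app_worker(inp):
    while True:
        with screen_lock:
            map.perform_some_other_image_processing_from_screen(inp)
        time.sleep(180)

threading.Thread(target=image_worker, args=(my_image,), daemon=True).start()
threading.Thread(target=app_worker, args=(2,), daemon=True).start()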
As a general principle, it's probably worthwhile to move the while True: [...] sleep(180) loop out of the function and into the top-level code, so that the caller can control the behavior directly. Calling a function that never returns doesn't leave the caller much room for customization.
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        mip.perform_some_image_processing_from_screen(image_name)
        map.perform_some_other_image_processing_from_screen(inp)

mp = myprocess()
while True:
    mp.perform_image_processing(my_image, 2)
    time.sleep(180)

How to get the processed results from dramatiq python?

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

broker = RedisBroker(host="127.0.0.1", port=6379)
broker.declare_queue("default")
dramatiq.set_broker(broker)
# backend = RedisBackend()
# broker.add_middleware(Results(backend=backend))

@dramatiq.actor()
def print_words(text):
    print('This is ' + text)

print_words('sync')
a = print_words.send('async')
a.get_results()
I was checking alternatives to Celery and found Dramatiq. I'm just getting started with Dramatiq and I'm unable to retrieve results. I even tried setting the backend and 'save_results' to True, but I always get AttributeError: 'Message' object has no attribute 'get_results'.
Any idea how to get the result?
You were on the right track with adding a result backend. The way to instruct an actor to store results is store_results=True, not save_results, and the method to retrieve results is get_result(), not get_results().
When you call get_result() with block=False, you have to wait until the worker has stored the result, for example:
while True:
    try:
        res = a.get_result(backend=backend)
        break
    except dramatiq.results.errors.ResultMissing:
        # do something like retry N times
        time.sleep(1)
print(res)
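Putting the answer's corrections together, a minimal end-to-end sketch might look like this; it assumes a Redis server on 127.0.0.1:6379 and a separate dramatiq worker process running against this module:

import dramatiq
from dramatiq.brokers.redis import RedisBroker
from dramatiq.results import Results
from dramatiq.results.backends import RedisBackend

backend = RedisBackend()
broker = RedisBroker(host="127.0.0.1", port=6379)
broker.add_middleware(Results(backend=backend))  # enable the results middleware
dramatiq.set_broker(broker)

@dramatiq.actor(store_results=True)  # store_results, not save_results
def print_words(text):
    print('This is ' + text)
    return text

if __name__ == '__main__':
    a = print_words.send('async')
    # get_result (singular); block=True polls until the worker stores the result
    print(a.get_result(backend=backend, block=True))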

Dask: Submit continuously, work on all submitted data

Having 500 continuously growing DataFrames, I would like to submit operations on the data (independent for each DataFrame) to Dask. My main question is: can Dask hold the continuously submitted data, so that I can submit a function on all of the submitted data, not just the newly submitted?
Let me explain with an example.
Creating a dask_server.py:
from dask.distributed import Client, LocalCluster

HOST = '127.0.0.1'
SCHEDULER_PORT = 8711
DASHBOARD_PORT = ':8710'

def run_cluster():
    cluster = LocalCluster(dashboard_address=DASHBOARD_PORT, scheduler_port=SCHEDULER_PORT, n_workers=8)
    print("DASK Cluster Dashboard = http://%s%s/status" % (HOST, DASHBOARD_PORT))
    client = Client(cluster)
    print(client)
    print("Press Enter to quit ...")
    input()

if __name__ == '__main__':
    run_cluster()
Now I can connect from my_stream.py and start to submit and gather data:
import threading
import time
import pandas as pd
from dask.distributed import Client

DASK_CLIENT_IP = '127.0.0.1'
DASK_CLIENT_PORT = 8711  # the scheduler port from dask_server.py
dask_con_string = 'tcp://%s:%s' % (DASK_CLIENT_IP, DASK_CLIENT_PORT)
dask_client = Client(dask_con_string)

def my_dask_function(lines):
    return lines['a'].mean() + lines['b'].mean()

def async_stream_redis_to_d(max_chunk_size=1000):
    while 1:
        # This is a redis queue, but it could be any queue/file-stream/syslog or whatever
        lines = queue_IN.get(block=True, max_chunk_size=max_chunk_size)
        futures = []
        df = pd.DataFrame(data=lines, columns=['a', 'b', 'c'])
        futures.append(dask_client.submit(my_dask_function, df))
        result = dask_client.gather(futures)
        print(result)
        time.sleep(0.1)

if __name__ == '__main__':
    max_chunk_size = 1000
    thread_stream_data_from_redis = threading.Thread(target=async_stream_redis_to_d, args=[max_chunk_size])
    #thread_stream_data_from_redis.setDaemon(True)
    thread_stream_data_from_redis.start()
    # Lets go
This works as expected, and it is really quick!
But next I would like to actually append the lines before the computation takes place, and I wonder whether this is possible. So in the example here, I would like to calculate the mean over all lines that have ever been submitted, not only the most recently submitted ones.
Questions / approaches:
Is this cumulative calculation possible?
Bad alternative 1: cache all lines locally and submit all of the data to the cluster every time a new row arrives. The overhead grows with every row (quadratic in total). I tried it; it works, but it is slow!
Golden option: Python program 1 pushes the data. Then it would be possible to connect another client (from another Python program) to that accumulated data, moving the analysis logic away from the inserting logic. I think published datasets are the way to go, but are they applicable for such high-speed appends?
Maybe related: Distributed Variables, Actors, Workers.
Assigning a list of futures to a published dataset seems ideal to me. This is relatively cheap (everything is metadata), and you'll be up to date to within a few milliseconds:
client.datasets["x"] = list_of_futures

def worker_function(...):
    futures = get_client().datasets["x"]
    data = get_client().gather(futures)
    ... work with data
As you mention, there are other systems like PubSub or Actors. From what you say, though, I suspect that futures + published datasets are the simpler and more pragmatic option.
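To connect this back to the "golden option": a second client in a separate process could pick up everything published so far and compute over all of it. A minimal sketch, assuming the producer publishes its futures under the hypothetical dataset name 'streamed':

import pandas as pd
from dask.distributed import Client

client = Client('tcp://127.0.0.1:8711')   # the scheduler from dask_server.py

futures = client.datasets['streamed']      # futures published by the producer so far
chunks = client.gather(futures)            # fetch all DataFrame chunks
all_lines = pd.concat(chunks)
print(all_lines['a'].mean() + all_lines['b'].mean())  # cumulative, over everything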

Python while True preventing previous code execution

I'm building a Telegram bot using telepot (Python), and a weird thing happened: since the bot is supposed to keep running, I've put a
while 1:
    time.sleep(.3)
at the end of my bot, right after I define how to handle the MessageLoop.
The bot has some print() calls to show that it is setting up (or better, that it's setting up the message handler) and that it is reading the chat, waiting for any input.
The problem is that if I run the code without the
while 1:
    time.sleep(.3)
it prints the messages and stops the execution (not having a loop to wait for new input), but if I add the while 1: (...), the code stops before being able to print anything.
Here's the code:
"""Launch the bot."""
import json
import telepot
from telepot.loop import MessageLoop
import time
from base import splitter
from base import handler
messageHandler = handler.Handler()
with open('triggers.json') as f:
triggers = json.load(f)
with open('config.json') as f:
config = json.load(f)
botKey = config['BOT_KEY']
# define the bot and the botname
bot = telepot.Bot(botKey)
botName = bot.getMe()['username']
# split commands in arrays ordered by priority
configSplitter = splitter.Splitter()
triggers = configSplitter.splitByPriority(triggers)
# define the handler
messageHandler.setBotname(botName)
messageHandler.setMessages(triggers)
# handle the messageLoop
print("setting up the bot...")
MessageLoop(bot, messageHandler.handle).run_as_thread()
print("Multibot is listening!")
# loop to catch all the messages
while 1:
time.sleep(.3)
Python version: 3.6 32-bit
The solution was to add sys.stdout.flush() after the various print() calls to restore the functionality of print(). To make the code work again, I also had to replace the telepot call
MessageLoop(bot, messageHandler.handle).run_as_thread()
which wasn't working properly, with:
bot.message_loop(messageHandler.handle)
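For completeness, a sketch of the tail of the script with both fixes applied; note that on Python 3.3+, print(..., flush=True) is equivalent to flushing stdout manually:

import sys

print("setting up the bot...")
sys.stdout.flush()                           # force buffered output out now
bot.message_loop(messageHandler.handle)      # replaces MessageLoop(...).run_as_thread()
print("Multibot is listening!", flush=True)  # Python 3.3+ shorthand

# keep the main thread alive so the handler thread keeps receiving updates
while 1:
    time.sleep(.3)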
