I have installed the ELK stack (Elasticsearch and Kibana) to start using Logstash, but I get the following "no default index set" issue:
[Screenshot: Kibana "no default index set" screen]
The page asks me a question "Do you have indices matching the pattern?" but I don't see a way to answer it and move forward! It's my first time installing this. Any ideas?
I've successfully got the services installed and running using this tutorial: Install ELK Stack.
Update #1
I've entered http://localhost:9200/_cat/indices into my browser and it displays the following:
yellow open .kibana qypsy4K-Qt-jm4_wll9PCQ 1 1 1 0 3.6kb 3.6kb
Update #2
After downloading curl and attempting to import data, I received the following curl messages:
[Screenshot: curl messages from the data import]
Update #3
I've downloaded data from www.kaggle.com and executed the following command to import it, but the command prompt just sits there; I've included a screenshot of the console window below.
You need to enter your index name in the input box in place of logstash-*. You can find your index name by going to localhost:9200/_cat/indices.
Once you enter your index name, it will automatically gather the fields from your index and prompt you for a time field, which you can set or ignore.
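If it's easier to check from a script than from the browser, here's a minimal Python sketch (assuming Elasticsearch is listening on localhost:9200, as above):
import urllib.request

# List all indices; the ?v flag adds a header row
with urllib.request.urlopen("http://localhost:9200/_cat/indices?v") as resp:
    print(resp.read().decode())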
I'm looking at control identifiers to create an automation script and noticed that the first time I ran print_control_identifiers(), DataItems showed up. However, any time after that, running the same print_control_identifiers() command showed all the controls except the DataItems.
Code:
from pywinauto import Application

app = Application(backend="uia").connect(best_match="ARMOR CRITICAL")
app.window(control_type="Window").child_window(control_type="Document").print_control_identifiers()
[Screenshot: output, first pass (DataItems present)]
[Screenshot: output, second pass (DataItems missing)]
I've been using the uia backend every time.
I'm using Python 3.7.4 and pywinauto 0.6.8
I have a quick question about pandas_profiling. Basically, I'm trying to use pandas profiling, but instead of showing the output it says something like this:
<pandas_profiling.ProfileReport at 0x23c02ed77b8>
Where am I making the mistake? Or does it have anything to do with IPython? I'm using IPython in Anaconda.
Try this:
import pandas_profiling  # assumes df is an existing DataFrame

pfr = pandas_profiling.ProfileReport(df)
pfr.to_notebook_iframe()
pandas_profiling creates an object that then needs to be displayed or output. One standard way of doing so is to save it as an HTML file:
profile.to_file(outputfile="sample_file_name.html")
("profile" being the variable you used to save the profile itself)
It doesn't have to do with IPython specifically. The difference is that because you're going line by line (instead of running a full block of code, including the reporting step), it's showing you the object itself. The code above should let you see the report once you open it up.
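As a minimal end-to-end sketch (the DataFrame here is made up for illustration; the outputfile argument matches the call above):
import pandas as pd
import pandas_profiling

# Hypothetical example data; replace with your own DataFrame
df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

profile = pandas_profiling.ProfileReport(df)
profile.to_file(outputfile="report.html")  # then open report.html in a browser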
I have installed the Wazuh agent and manager, and I set the IP address of the manager in ossec.conf. I have also configured the JSON log type and path in agent.conf:
/var/log/wildfly/app/app.json
but the JSON logs are not detected in the Wazuh manager's alerts.json / alerts.log.
Any help would be appreciated.
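For reference, the relevant agent.conf entry looks roughly like this (a sketch of my setup; the surrounding configuration may differ):
<localfile>
  <log_format>json</log_format>
  <location>/var/log/wildfly/app/app.json</location>
</localfile>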
You seem to want the output line by line in the data array, and then to split each array element (i.e. each line) into columns. To get this, replace
outp=stdout.read()
data=[outp]
with
data = stdout.readlines()
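In context, a minimal sketch (assuming this is paramiko's exec_command output, as the stdin/stdout naming suggests; the command and the whitespace split are illustrative):
# client is assumed to be a connected paramiko.SSHClient
stdin, stdout, stderr = client.exec_command("df -h")
data = stdout.readlines()               # one list element per output line
rows = [line.split() for line in data]  # split each line into columns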
I was simply trying to generate a summary that would show the run_metadata as follows:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
summary = sess.run([x, y], options=run_options, run_metadata=run_metadata)
train_writer.add_run_metadata(run_metadata, 'step%d' % step)  # pass the collected metadata, not a path
train_writer.add_summary(summary, step)
I made sure the path to the logs folder exists; this is confirmed by the fact that the summary file is generated, but no metadata is present. To be honest, I am not sure a metadata file is actually generated, but when I open TensorBoard the graph looks fine and the session runs dropdown menu is populated. When I select any of the runs, it shows a progress bar "Parsing metadata.pbtxt" that stops and hangs right halfway through.
This prevents me from gathering any additional info about my graph. Am I missing something? A similar issue happened when trying to run this tutorial locally (MNIST summary tutorial). I feel like I am missing something simple. Does anyone have an idea what could cause this issue? Why would my TensorBoard hang when trying to load a session run's data?
I can't believe I made it work right after posting the question, but here it is. I noticed that this line:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
was giving me an error so I removed the params and turned it into
run_options = tf.RunOptions()
without realizing that this is what caused the metadata not to be parsed. Once I researched the error message:
Couldn't open CUDA library cupti64_90.dll
I looked into this GitHub thread and moved the file (cupti64_90.dll) into the bin folder. After that, I ran my code again with the trace_level param, got no errors, and the metadata was successfully parsed.
I've got a Grafana docker image running with Graphite/Carbon. Sending data using the CLI works, for example:
echo "local.random.diceroll $(((RANDOM%6)+1)) `date +%s`" | nc localhost 2003;
The following Python 2 code also works:
import socket

sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(message)
sock.close()
message is a string containing "key value timestamp", and this works; the data can be found afterwards. So the Grafana docker image is accepting data.
I wanted to get this working in Python 3, but the sendall function requires bytes as a parameter. The code change is:
sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(str.encode(message))
sock.close()
Now the data isn't inserted, and I can't figure out why. I tried this on a remote machine (same network) and on the local server. I also tried several packages (graphiti, graphiteudp), but they all seem to fail to insert the data, and they don't show any error message either.
The simple example on the graphiteudp GitHub page doesn't work either.
Any idea what I'm doing wrong?
You can add a \n to the end of the message you send. I have tried it with Python 3, and that works.
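A minimal Python 3 sketch of the whole thing (CARBON_SERVER and CARBON_PORT are placeholders for your Graphite host and plaintext port, 2003 by default):
import socket
import time

CARBON_SERVER = "localhost"
CARBON_PORT = 2003

# Graphite's plaintext protocol: "key value timestamp", newline-terminated
message = "local.random.diceroll 4 %d\n" % int(time.time())

sock = socket.socket()
sock.connect((CARBON_SERVER, CARBON_PORT))
sock.sendall(message.encode())  # bytes are required in Python 3
sock.close()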