I have installed the Wazuh agent and manager, and I set the IP address of the manager in ossec.conf. I also configured the log type json and the path in agent.conf:
/var/log/wildfly/app/app.json
but the JSON logs are not detected in the Wazuh manager's alerts.json / alerts.log.
Any help would be appreciated.
You seem to want the output line by line in the data array and then split each array element (i.e. line) into columns. To get this, replace
outp=stdout.read()
data=[outp]
with
data = stdout.readlines()
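For illustration, here is a minimal sketch of that pattern, assuming a paramiko-style exec_command and whitespace-separated columns (the host, credentials, and command below are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example-host", username="user")   # placeholder host/user

stdin, stdout, stderr = client.exec_command("df -h")   # placeholder command

data = stdout.readlines()                 # one string per output line
rows = [line.split() for line in data]    # split each line into columns

for row in rows:
    print(row)

client.close()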
I created a logic app to export some data to a *.csv file.
Data which will be exported contains german umlauts.
I read all the needed values into variables which are then concatenated and added to an array.
Finally I get an array of semicolon separated strings with the values in it.
This result will then be added to an email as file attachment:
All the values are handled correctly in the Logic App and are correct in the *.csv file, but as soon as I open the CSV with Excel, the umlauts are no longer shown correctly.
Is there a way to create explicitly a file with the correct encoding within the logic app and add the file to the email instead of the ExportString?
Or can I somehow encode the content of the ExportString-Variable?
Any hints?
I have reproduced this in my environment and followed the steps below to get the correct output in the CSV file:
My input is:
I sent the data into a CSV table as below and then created a file in a file share as below:
Then when I opened my file share and downloaded the content from there, I got a different output, just as you did:
Then I opened Azure Storage Explorer and downloaded it as below:
When I open the downloaded file in Notepad:
I get the correct output, so try doing it this way.
And when I save it as hello.csv and keep UTF-8 with BOM, like below:
Then I get the correct output in the CSV as well:
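The key point is the BOM. As a minimal sketch outside the Logic App (the file name and sample rows are made up), writing the CSV with the "utf-8-sig" encoding in Python prepends the BOM so Excel shows the umlauts correctly:

import csv

rows = [
    ["Name", "Stadt"],
    ["Müller", "Köln"],        # sample values with umlauts
    ["Jäger", "Düsseldorf"],
]

# "utf-8-sig" writes a UTF-8 BOM, which Excel uses to detect the encoding
with open("hello.csv", "w", encoding="utf-8-sig", newline="") as f:
    writer = csv.writer(f, delimiter=";")   # semicolon-separated, as in the question
    writer.writerows(rows)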
I'm trying to connect to a Cassandra database through Python using the Cassandra driver, and it went successfully without any problem. When I try to fetch values from Cassandra, the output is formatted like Row(values).
Python version: 3.6
Package: cassandra
from cassandra.cluster import Cluster

cluster = Cluster()                    # connects to localhost by default
session = cluster.connect('employee')  # use the 'employee' keyspace
k = session.execute("select count(*) from users")
print(k[0])                            # prints the first (and only) row
Output:
Row(count=11)
Expected:
11
From documentation:
By default, each row in the result set will be a named tuple. Each row will have a matching attribute for each column defined in the schema, such as name, age, and so on. You can also treat them as normal tuples by unpacking them or accessing fields by position.
So you can access your data by name as k[0].count, or by position as k[0][0].
Please read Getting started document from driver's documentation - it will answer most of your questions.
The Cassandra driver returns every row using something called a row factory, which by default produces named tuples.
In your case, you should access the output as k[0].count.
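As a minimal sketch of both access styles, reusing the keyspace and table from the question:

from cassandra.cluster import Cluster

cluster = Cluster()
session = cluster.connect('employee')

row = session.execute("select count(*) from users").one()   # the single result row

print(row.count)   # access by column name -> 11
print(row[0])      # access by position    -> 11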
I have installed the ELK stack, with Elasticsearch and Kibana, to start using Logstash, but I get the following issue: no default index is set.
Kibana no default index set screenshot
The page asks me the question "Do you have indices matching the pattern?" but I don't see a way to answer it and move forward! It's my first time installing this. Any ideas?
I've successfully got the services installed and running using this tutorial: install ELK Stack
Update #1
I have entered http://localhost:9200/_cat/indices into my browser and it displays the following:
yellow open .kibana qypsy4K-Qt-jm4_wll9PCQ 1 1 1 0 3.6kb 3.6kb
Update #2
After downloading curl and attempting to import data I received the following curl messages.
data import using curl messages
Update #3
I've downloaded data from www.kaggle.com and executed the following command to import the data, but the command prompt just sits there; I've included a screenshot of the console window below.
You need to enter your index name in the input box in place of logstash-*. You will see your index name by going to localhost:9200/_cat/indices.
Once you enter your index name, Kibana will automatically gather the fields from your index and prompt you for a time field, which you can set or ignore.
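If you prefer to check the available index names from code rather than the browser, a minimal sketch (assuming Elasticsearch is reachable on localhost:9200) is:

import requests

resp = requests.get("http://localhost:9200/_cat/indices?format=json")
resp.raise_for_status()

for index in resp.json():
    print(index["index"])    # e.g. ".kibana" from the output above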
As is known, Apache Spark saves files in parts, i.e. foo.csv/part-r-00000..
I save files on Swift object storage. Now I want to get the files using the OpenStack Swift API, but when I do curl on foo.csv I get a zero-size file.
How do I download the contents of the file?
You can take any REST client and list the content of the object store. Don't do curl on 'foo.csv', since it's a zero-size object. You need to list the container with the prefix 'foo.csv'; this will return all the parts.
Alternatively, you can use Apache Spark and read foo.csv (Spark will automatically list and return all the parts).
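As a minimal sketch with python-swiftclient (the auth URL, credentials, and container name below are placeholders), listing by prefix and stitching the parts back together looks like this:

import swiftclient

conn = swiftclient.Connection(
    authurl="https://swift.example.com/auth/v1.0",   # placeholder auth endpoint
    user="account:user",                             # placeholder credentials
    key="secret",
)

container = "my-container"   # placeholder container holding foo.csv

# list only the part files under the foo.csv "directory"
headers, objects = conn.get_container(container, prefix="foo.csv/part-")

with open("foo.csv", "wb") as out:
    for name in sorted(obj["name"] for obj in objects):
        obj_headers, body = conn.get_object(container, name)
        out.write(body)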
I'm following this tutorial http://azure.microsoft.com/en-us/documentation/articles/hdinsight-use-hive/ but have become stuck when changing the source of the query to use a file.
It all works happily when using New-AzureHDInsightHiveJobDefinition -Query $queryString, but when I try New-AzureHDInsightHiveJobDefinition -File "/example.hql", with example.hql stored in the "root" of the blob container, I get ExitCode 40000 and the following in standard error:
Logging initialized using configuration in file:/C:/apps/dist/hive-0.11.0.1.3.7.1-01293/conf/hive-log4j.properties
FAILED: ParseException line 1:0 character 'ï' not supported here
line 1:1 character '»' not supported here
line 1:2 character '¿' not supported here
Even when I deliberately misspell the hql filename, the above error is still generated along with the expected file-not-found error, so it's not the content of the hql that's causing the error.
I have not been able to find hive-log4j.properties in the blob store to see if it's corrupt. I have torn down the HDInsight cluster, deleted the associated blob store, and started again, but ended up with the same result.
Would really appreciate some help!
I am able to induce a similar error by putting a UTF-8 or Unicode encoded .hql file into blob storage and attempting to run it. Try saving your example.hql file as 'ANSI' in Notepad (Open, then Save As; the encoding option is at the bottom of the dialog) and then copy it to blob storage and try again.
If the file is not found by Start-AzureHDInsightJob, that cmdlet errors out and does not return a new AzureHDInsightJob object. If you had a previous instance of the result saved, the subsequent Wait-AzureHDInsightJob and Get-AzureHDInsightJobOutput would refer to a previous run, giving the illusion of the same error in the not-found case. That error should definitely indicate a problem reading a UTF-8 or Unicode file when one is not expected.
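If you would rather strip the BOM programmatically than rely on Notepad, a minimal sketch (the file name is the one from the question; re-encoding to ASCII mirrors the 'ANSI' advice) is:

# "utf-8-sig" silently drops a leading BOM if one is present
with open("example.hql", "r", encoding="utf-8-sig") as f:
    text = f.read()

# write it back out as plain ASCII, matching the 'ANSI' suggestion
with open("example.hql", "w", encoding="ascii") as f:
    f.write(text)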