PHPMailer sending success but not delivered

My PHP mailer is not working on my PC but works on my friend's. Here is the code of the mailer. I tried it on Kali Linux: it runs, but the emails are never delivered. My friend has a Windows PC and I am also running it on Windows using XAMPP. I set up the environment variable (c:\xampp\php, etc.).
<?php
// Regards
date_default_timezone_set('Asia/Jakarta');
$date = date('F d, Y, h:i A T');

/* W3LL SMTP SETUP */
$smtp_acc = [
    [
        "host" => "smtp-relay.gmail.com",
        "port" => "587",
        "username" => "email@s.com",
        "password" => "smtppass"
    ],
    [
        "host" => "smtp-relay.gmail.com",
        "port" => "587",
        "username" => "email@s.com",
        "password" => "smtppass"
    ],
    [
        "host" => "smtp-relay.gmail.com",
        "port" => "587",
        "username" => "email@s.com",
        "password" => "smtppass"
    ],
];

/* W3LL Features SETUP */
$W3LL_setup = [
    "priority" => 1,
    "userandom" => 0,
    "sleeptime" => 1,
    "replacement" => 1,
    "filesend" => 1,
    "userremoveline" => 0,
    "mail_list" => "file/maillist/tester.txt",
    "fromname" => "from",
    "frommail" => "support@domain.com",
    "subject" => "subject",
    "msgfile" => "file/html/1.html",
    "filepdf" => "",
    "links" => [""],
];

Related

Filebeat multiline pattern for PHP stack trace

I am trying to import the PHP FPM logs into an ELK stack. For this I use Filebeat to read the files. Before this data is sent to Logstash, multiline log entries should be merged.
For this I built the following Filebeat configuration:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: filestream
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - '/var/log/app/fpm/*.log'
  multiline.type: pattern
  multiline.pattern: '^\[\d{2}-\w{3}-\d{4} \d{2}:\d{2}:\d{2} [\w/]*\] PHP\s*at.*'
  multiline.negate: false
  multiline.match: after

processors:
  - add_fields:
      fields.docker.service: "fpm"
But as you can see in the ruby debug output from logstash, the messages were not merged:
{
"#timestamp" => 2021-08-10T13:54:10.149Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"input" => {
"type" => "filestream"
},
"log" => {
"offset" => 344,
"file" => {
"path" => "/var/log/app/fpm/error.log"
}
},
"tags" => [
[0] "beats_input_codec_plain_applied",
[1] "_grokparsefailure"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"#version" => "1",
"message" => "[17-Jun-2021 13:07:56 Europe/Berlin] PHP [WARN] (/var/www/html/Renderer/RendererTranslator.php:92) - unable to translate type integer. It is not a string (/url.php)",
"ecs" => {
"version" => "1.8.0"
}
}
{
"input" => {
"type" => "filestream"
},
"module" => "PHP IES\\ServerException",
"ecs" => {
"version" => "1.8.0"
},
"#version" => "1",
"log" => {
"offset" => 73,
"file" => {
"path" => "/var/log/ies/fpm/error.log"
}
},
"#timestamp" => 2021-06-17T11:10:41.000Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"message" => "core.login"
}
{
"#timestamp" => 2021-08-10T13:54:10.149Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"ecs" => {
"version" => "1.8.0"
},
"input" => {
"type" => "filestream"
},
"tags" => [
[0] "beats_input_codec_plain_applied",
[1] "_grokparsefailure"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"#version" => "1",
"message" => "[17-Jun-2021 13:10:41 Europe/Berlin] PHP at App\\Module\\ComponentModel\\ComponentModel->doPhase(/var/www/html/Component/Container.php:348)",
"log" => {
"offset" => 204,
"file" => {
"path" => "/var/log/app/fpm/error.log"
}
}
}
I tested the regular expression with Rubular and it matches the stack trace messages.
What am I doing wrong here?
Instead of adjusting the Filebeat configuration, I adjusted the application's log configuration.
It now writes JSON files, which Filebeat can read easily, so handling the line breaks is no longer necessary.
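For anyone taking the same route, here is a minimal sketch of a Filebeat input that reads such JSON log files. The path and file extension are illustrative, and it uses the classic log input's json options rather than the filestream input:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - '/var/log/app/fpm/*.json'
  # Each line is a self-contained JSON object; lift its keys to the top
  # level of the event and record any JSON decoding errors on the event.
  json.keys_under_root: true
  json.add_error_key: true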
You need to set multiline.negate to true.

Logstash Aggregate filter plugin Not working properly

Hi, I am new to Logstash and was trying the demo from the documentation here: https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate ("example-1"). I used the exact same script and input but still got different output. I was expecting a single entry in Kibana, but it shows 3 entries. Please help.
filter {
  grok {
    match => [ "message", "%{LOGLEVEL:loglevel} - %{NOTSPACE:taskid} - %{NOTSPACE:logger} - %{WORD:label}( - %{INT:duration:int})?" ]
  }
  if [logger] == "TASK_START" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] = 0"
      map_action => "create"
    }
  }
  if [logger] == "SQL" {
    aggregate {
      task_id => "%{taskid}"
      code => "map['sql_duration'] += event.get('duration')"
      map_action => "update"
    }
  }
  if [logger] == "TASK_END" {
    aggregate {
      task_id => "%{taskid}"
      code => "event.set('sql_duration', map['sql_duration'])"
      map_action => "update"
      end_of_task => true
      timeout => 120
    }
  }
}
INPUT
INFO - 12345 - TASK_START - start
INFO - 12345 - SQL - sqlQuery1 - 12
INFO - 12345 - SQL - sqlQuery2 - 34
INFO - 12345 - TASK_END - end
EXPECTED OUTPUT
{
"message" => "INFO - 12345 - TASK_END - end message",
"sql_duration" => 46
}
MY OUTPUT
{
"host" => "BEN",
"message" => "INFO - 12345 - TASK_START - start\r",
"#timestamp" => 2021-04-27T14:17:28.151Z,
"loglevel" => "INFO",
"taskid" => "12345",
"logger" => "TASK_START",
"path" => "C:/software/Notepad++/log72.log",
"type" => "technical1234",
"label" => "start",
"#version" => "1"
}
{
"host" => "BEN",
"message" => "INFO - 12345 - SQL - sqlQuery1 - 12\r",
"#timestamp" => 2021-04-27T14:17:28.174Z,
"type" => "technical1234",
"label" => "sqlQuery1",
"taskid" => "12345",
"loglevel" => "INFO",
"logger" => "SQL",
"duration" => 12,
"path" => "C:/software/Notepad++/log72.log",
"#version" => "1"
}
{
"host" => "BEN",
"message" => "INFO - 12345 - SQL - sqlQuery2 - 34\r",
"#timestamp" => 2021-04-27T14:17:28.175Z,
"type" => "technical1234",
"label" => "sqlQuery2",
"taskid" => "12345",
"loglevel" => "INFO",
"logger" => "SQL",
"duration" => 34,
"path" => "C:/software/Notepad++/log72.log",
"#version" => "1"
}

How to deal with empty fields in Logstash

I am facing a problem with the Logstash KV filter.
Below is a sample event:
2016-08-15T12:43:04.478Z 103.240.35.216 <190>date=2016-08-15 time=18:13:16 timezone="IST" device_name="CR25iNG" device_id=C2222-123 log_id=010302602002 log_type="Firewall" log_component="Appliance Access" log_subtype="Denied" status="Deny" priority=Information duration=0 fw_rule_id=0 user_name="" user_gp="" iap=0 ips_policy_id=0 appfilter_policy_id=0 application="" application_risk=0 application_technology="" application_category="" in_interface="PortA" out_interface="" src_mac=44:d9:e7:ba:5b:6c src_ip=172.16.16.19 src_country_code= dst_ip=255.255.255.255 dst_country_code= protocol="UDP" src_port=45541 dst_port=10001 sent_pkts=0 recv_pkts=0 sent_bytes=0 recv_bytes=0 tran_src_ip= tran_src_port=0 tran_dst_ip= tran_dst_port=0 srczonetype="" srczone="" dstzonetype="" dstzone="" dir_disp="" connid="" vconnid=""
Below is the KV filter output:
"#version" => "1",
"#timestamp" => "2016-08-16T13:48:30.602Z",
"type" => "cyberoam.input",
"host" => "ip-172-31-6-249",
"time" => "18:13:16",
"timezone" => "IST",
"status" => "Deny",
"priority" => "Information",
"duration" => "0",
"iap" => "0",
"application" => "",
"application_risk" => "0",
"application_technology" => "",
"application_category" => "",
"dst_country_code" => "protocol=UDP",
"recv_pkts" => "0",
"tran_src_ip" => "tran_src_port=0",
"tran_dst_ip" => "tran_dst_port=0",
"srczonetype" => "",
"srczone" => "",
"dstzonetype" => "",
"dstzone" => "",
"dir_disp" => "",
"syslog_severity_code" => 5,
"syslog_facility_code" => 1,
"syslog_facility" => "user-level",
"syslog_severity" => "notice",
"date" => "2016-08-15",
Problem:
"dst_country_code" => "protocol=UDP",
"tran_src_ip" => "tran_src_port=0",
"tran_dst_ip" => "tran_dst_port=0",
The above is caused by the empty values of the keys "dst_country_code", "tran_src_ip" and "tran_dst_ip".
It was suggested that I use mutate gsub to give the empty fields a default value by substituting =\w with ="".
But that never worked.
Please help.
I got a response from the Logstash community, and it worked:
mutate {
  gsub => [ 'message', '= ', '="" ' ]
}
Thanks.
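For context, here is a minimal sketch of how that substitution fits into the filter chain. The kv block is a guess at the original configuration, which the question does not show; the point is that the gsub must run before the kv filter so that an empty value such as dst_country_code= becomes dst_country_code="" and no longer swallows the following key=value pair:
filter {
  # Give empty values an explicit "" before key/value parsing.
  mutate {
    gsub => [ 'message', '= ', '="" ' ]
  }
  # Parse the key=value pairs from the (now well-formed) message.
  kv {
    source => "message"
  }
}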

How to process http posted files in logstash - line by line?

I successfully configured Logstash to process csv files from the file system and put them into Elasticsearch for further analysis.
But our ELK stack is far removed from the original source of the csv files, so I thought about sending the csv files to Logstash via http instead of using the file system.
The issue is that if I use the "http" input, the whole file is taken and processed as one big chunk, and the csv filter only recognizes the first line. As mentioned, the same file works via the "file" input.
The Logstash config looks like this:
input {
  # http {
  #   host => "localhost"
  #   port => 8080
  # }
  file {
    path => "/media/sample_files/debit_201606.csv"
    type => "items"
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["Created", "Direction", "Member", "Point Value", "Type", "Sub Type"]
    separator => " "
    convert => { "Point Value" => "integer" }
  }
  date {
    match => [ "Created", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "UTC"
  }
}
output {
  # elasticsearch {
  #   action => "index"
  #   hosts => ["localhost"]
  #   index => "logstash-%{+YYYY.MM.dd}"
  #   workers => 1
  # }
  stdout {
    codec => rubydebug
  }
}
My goal is to pass the csv via curl. So I switch to the commented-out part of the input section above and then use curl to pass the files:
curl http://localhost:8080/ -T /media/samples/debit_201606.csv
What do I need to do to get Logstash to process the csv line by line?
I tried this and I think what you need to do is to split your input. Here's how you do that:
My configuration:
input {
  http {
    port => 8787
  }
}
filter {
  split {}
  csv {}
}
output {
  stdout { codec => rubydebug }
}
And for my test I created a csv file looking like this:
artur@pandaadb:~/tmp/logstash$ cat test.csv
a,b,c
d,e,f
g,h,i
And now for the test:
artur@pandaadb:~/dev/logstash/conf3$ curl localhost:8787 -T ~/tmp/logstash/test.csv
Outputs:
{
"message" => "a,b,c",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "a",
"column2" => "b",
"column3" => "c"
}
{
"message" => "d,e,f",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "d",
"column2" => "e",
"column3" => "f"
}
{
"message" => "g,h,i",
"#version" => "1",
"#timestamp" => "2016-08-01T15:27:17.477Z",
"host" => "127.0.0.1",
"headers" => {
"request_method" => "PUT",
"request_path" => "/test.csv",
"request_uri" => "/test.csv",
"http_version" => "HTTP/1.1",
"http_host" => "localhost:8787",
"http_user_agent" => "curl/7.47.0",
"http_accept" => "*/*",
"content_length" => "18",
"http_expect" => "100-continue"
},
"column1" => "g",
"column2" => "h",
"column3" => "i"
}
What the split filter does is:
It takes your input message (which is one String including the new-lines) and splits it by the configured value (which by default is a new-line). Then it cancels the original event and re-submits the split events to logstash. It is important that you execute the split before you execute the csv filter.
I hope that answers your question!
Artur
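Applied to the original configuration from the question, the combined filter section might look roughly like the sketch below. This is untested and simply merges the split filter with the csv and date filters from the question; if the file is not comma-separated, the separator option from the original config needs to be added back:
input {
  http {
    host => "localhost"
    port => 8080
  }
}
filter {
  # Split the uploaded file body into one event per line
  # before any per-line parsing happens.
  split {}
  csv {
    columns => ["Created", "Direction", "Member", "Point Value", "Type", "Sub Type"]
    convert => { "Point Value" => "integer" }
  }
  date {
    match => [ "Created", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "UTC"
  }
}
output {
  stdout { codec => rubydebug }
}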

logstash - dynamic field names

I have a problem with dynamic field names in my Logstash configuration.
This is my test config:
input {
  generator {
    lines => [ "May 15 13:42:55 logstash puppet-agent[3551]: Finished catalog run in 43",
               "May 16 14:57:07 logstash puppet-agent[3551]: Starting Puppet client version" ]
    count => 7
  }
}
filter {
  grok {
    match => [ "message", "%{SYSLOGBASE} %{WORD:log}.*" ]
  }
  if "Starting" in [log] {
    metrics {
      meter => [ "%{logsource}.%{log}" ]
      add_tag => [ "metric" ]
      add_field => { "server" => "%{logsource}"
                     "bad" => "true" }
      clear_interval => 5
    }
  }
}
output {
  stdout { codec => rubydebug }
}
and here is my output (just the end of the output):
{
"message" => "May 15 13:42:55 logstash puppet-agent[3551]: Finished catalog run in 43",
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.138Z",
"host" => "logstash.test.lan",
"sequence" => 6,
"timestamp" => "May 15 13:42:55",
"logsource" => "test",
"program" => "puppet-agent",
"pid" => "3551",
"log" => "Finished"
}
{
"message" => "May 16 14:57:07 logstash puppet-agent[3551]: Starting Puppet client version",
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.138Z",
"host" => "logstash.test.lan",
"sequence" => 6,
"timestamp" => "May 16 14:57:07",
"logsource" => "test",
"program" => "puppet-agent",
"pid" => "3551",
"log" => "Starting"
}
{
"#version" => "1",
"#timestamp" => "2016-06-07T07:37:50.288Z",
"message" => "Counting: 7",
"logstash.Starting" => {
"count" => 7,
"rate_1m" => 0.0,
"rate_5m" => 0.0,
"rate_15m" => 0.0
},
"server" => "%{logsource}",
"bad" => "true",
"tags" => [
[0] "metric"
]
}
Why doesn't the server field get logstash as its value from the input logs? %{logsource} works for the meter option, so why not for add_field?
Thanks for the help.
When a log event is received, the SYSLOGBASE information is extracted from the content. This is where the %{logsource} value is defined. If the event isn't coming from a log entry that contains SYSLOGBASE information, then logsource will be undefined.
When you receive a start message, logsource is defined in scope and is added to your message.
The metrics plugin is generating a new message per interval. This message is generated from scratch so it does not have the value of logsource or anything else that would normally be obtained from an individual log entry.
