How to unflatten properties ignored in another map? - AutoMapper

The sources for my destination are split between the root DTO and its child property:
public class Source
{
    public AccountSource Account { get; set; }
    public string AccountKey { get; set; }
    public string AccountPassword { get; set; }
}

public class AccountSource
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class Dest
{
    public Account Account { get; set; }

    public class Account
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string Password { get; set; }
        public string Key { get; set; }
    }
}
Right now I have the following mapping:
CreateMap<Source, Dest>()
    .ForPath(x => x.Account.Id, o => o.MapFrom(x => x.Account.Id))
    .ForPath(x => x.Account.Name, o => o.MapFrom(x => x.Account.Name))
    .ForPath(x => x.Account.Key, o => o.MapFrom(x => x.AccountKey))
    .ForPath(x => x.Account.Password, o => o.MapFrom(x => x.AccountPassword));

CreateMap<AccountSource, Dest.Account>()
    .ForAllMembers(x => x.Ignore());
but it has one issue: new members added to Dest.Account will not be validated.
If I remove the ForPath calls and just leave
CreateMap<Source, Dest>();
CreateMap<AccountSource, Dest.Account>();
then Password and Key are unmapped, so I have to Ignore() them like this: CreateMap<AccountSource, Dest.Account>().ForMember(x => x.Password, x => x.Ignore()).ForMember(x => x.Key, x => x.Ignore()).
But then unflattening does not work (the properties are ignored absolutely, not only when the Account→Account mapping occurs).

By default, ForPath will ignore Dest.Account, but you can always map it explicitly:
CreateMap<Source, Dest>()
    .ForPath(d => d.Account.Key, o => o.MapFrom(s => s.AccountKey))
    .ForPath(d => d.Account.Password, o => o.MapFrom(s => s.AccountPassword))
    .ForMember(d => d.Account, o => o.MapFrom(s => s.Account));

CreateMap<AccountSource, Dest.Account>()
    .ForMember(d => d.Password, o => o.Ignore())
    .ForMember(d => d.Key, o => o.Ignore());

Found another solution:
CreateMap<Source, Dest>()
    .ForMember(x => x.Account, o => o.MapFrom(x => x));

CreateMap<Source, Dest.Account>()
    .ForMember(x => x.Id, o => o.MapFrom(x => x.Account.Id))
    .ForMember(x => x.Name, o => o.MapFrom(x => x.Account.Name))
    .ForMember(x => x.Password, o => o.MapFrom(x => x.AccountPassword))
    .ForMember(x => x.Key, o => o.MapFrom(x => x.AccountKey));
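Since the original concern was that newly added members of Dest.Account could slip through unvalidated, it may be worth asserting the configuration explicitly. A minimal sketch of such a check, assuming AutoMapper's standard MapperConfiguration/AssertConfigurationIsValid API and the maps from the second solution (the setup code itself is illustrative, not from the original post):

using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Source, Dest>()
        .ForMember(x => x.Account, o => o.MapFrom(x => x));

    cfg.CreateMap<Source, Dest.Account>()
        .ForMember(x => x.Id, o => o.MapFrom(x => x.Account.Id))
        .ForMember(x => x.Name, o => o.MapFrom(x => x.Account.Name))
        .ForMember(x => x.Password, o => o.MapFrom(x => x.AccountPassword))
        .ForMember(x => x.Key, o => o.MapFrom(x => x.AccountKey));
});

// Throws AutoMapperConfigurationException if a member added to Dest.Account
// later has neither a matching source member nor an explicit rule.
config.AssertConfigurationIsValid();

With this in a startup check or unit test, the validation gap that motivated the ForAllMembers(x => x.Ignore()) workaround is covered.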

Related

How to catch failures and errors on ShouldQueue imports through an API in Laravel

Through this process I can upload the data contained in a big Excel file; if there is any validation problem, the queued import skips that entry and imports the rest. But I don't know how to get those validation errors back through the API. I tried checking with if ($import->failures()->isNotEmpty()), but since the import runs on a queue I know that won't work. What is the best way to get those validation errors?
On my ImportController:
public function import(Request $request)
{
    $rules = [
        'file' => 'required|mimes:xlsx,xls',
        'type' => 'filled|string|not_in:null',
    ];

    $validator = Validator::make($request->all(), $rules);
    if ($validator->fails()) {
        $errors = $validator->errors();
        return response()->json($errors, 404);
    }

    $type = $request->input('type');
    $file = $request->file;
    $fileName = time() . '--u-' . auth()->user()->id . '.' . $file->extension();
    $location = $file->storeAs('importFile/applicationFile', $fileName);

    $import = new ApplicationImport;
    $import->import($location);

    if ($import->failures()->isNotEmpty()) {
        return response()->json([
            'response' => 'Imported! And have some errors!',
            'errors' => $import->failures(),
        ], 404);
    }

    return response()->json(['response' => 'File send to queue! Worker will import it.'], 200);
}
On ApplicationImport (Using Maatwebsite\Excel)
namespace App\Imports;
use App\Models\Application;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\Importable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Maatwebsite\Excel\Concerns\SkipsFailures;
use Maatwebsite\Excel\Concerns\SkipsOnFailure;
use Maatwebsite\Excel\Concerns\WithHeadingRow;
use Maatwebsite\Excel\Concerns\WithValidation;
use Maatwebsite\Excel\Concerns\WithBatchInserts;
use Maatwebsite\Excel\Concerns\WithChunkReading;
class ApplicationImport implements ToModel, WithHeadingRow, WithValidation, WithBatchInserts, WithChunkReading, ShouldQueue, SkipsOnFailure
{
use Importable, SkipsFailures;
public function rules(): array
{
return [
'*.title' => ['required', 'string', 'max:100', 'unique:applications'],
'*.user_id' => ['required', 'numeric'],
'*.company_id' => ['required', 'numeric'],
'*.course_id' => ['required', 'numeric'],
'*.coursetype_id' => ['required', 'numeric'],
'*.institution_id' => ['required', 'numeric'],
'*.status_id' => ['required', 'numeric'],
'*.internal_ref' => ['required', 'string', 'max:255'],
'*.external_ref' => ['required', 'string', 'max:255'],
'*.intake_month' => ['required', 'string', 'max:255'],
'*.intake_year' => ['required', 'numeric', 'min:4'],
'*.tution_fee' => ['required', 'numeric'],
'*.administration_charge' => ['required', 'numeric'],
'*.created_by' => ['required', 'numeric']
];
}
public function genUID()
{
$app_uid = 'USR-' . mt_rand(10000000, 99999999);
if ($this->uidNumberExists($app_uid)) {
return $this->genUID();
} else {
return $app_uid;
}
}
public function uidNumberExists($app_uid)
{
return Application::Where('uid', $app_uid)->exists();
}
/**
 * @param array $row
 *
 * @return \Illuminate\Database\Eloquent\Model|null
 */
public function model(array $row)
{
return new Application([
'uid' => $this->genUID(),
'title' => $row['title'],
'user_id' => $row['user_id'],
'company_id' => $row['company_id'],
'course_id' => $row['course_id'],
'coursetype_id' => $row['coursetype_id'],
'institution_id' => $row['institution_id'],
'status_id' => $row['status_id'],
'internal_ref' => $row['internal_ref'],
'external_ref' => $row['external_ref'],
'intake_month' => $row['intake_month'],
'intake_year' => $row['intake_year'],
'tution_fee' => $row['tution_fee'],
'administration_charge' => $row['administration_charge'],
'created_by' => $row['created_by'],
]);
}
public function batchSize(): int
{
return 1000;
}
public function chunkSize(): int
{
return 1000;
}
}
You should move your controller code into a queued job where you can get the failures, and then push them to the client, for example through Pusher if you are using it. If you don't implement ShouldQueue on the import class, then you can get the failures directly in the controller.
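To make that concrete, here is a rough sketch of such a job, assuming the import class no longer implements ShouldQueue itself (so import() runs inside this job) and that failures are stashed in the cache under an import id; the class name, cache key, and import-id mechanism are illustrative, not from the original post:

<?php

namespace App\Jobs;

use App\Imports\ApplicationImport;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Cache;

class RunApplicationImport implements ShouldQueue
{
    use Dispatchable, Queueable;

    public $location;
    public $importId;

    public function __construct($location, $importId)
    {
        $this->location = $location;
        $this->importId = $importId;
    }

    public function handle()
    {
        // Run the Excel import inside the queue worker.
        $import = new ApplicationImport;
        $import->import($this->location);

        // Persist row-level validation failures so a separate API endpoint
        // (or a Pusher broadcast) can report them for this import id.
        Cache::put(
            'import-failures:' . $this->importId,
            $import->failures()->toArray(),
            now()->addDay()
        );
    }
}

The controller would then do something like RunApplicationImport::dispatch($location, $importId), return the import id immediately, and expose a second endpoint that reads back the stored failures.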

Integrity constraint violation: 1048 Column 'user_id' cannot be null while importing an Excel file with multiple sheets. I am using the "Laravel Excel" package

I am trying to import an Excel file with multiple sheets. So far my normal import class works fine, but when I came to importing multiple sheets I can't store the user_id.
Here is my code:
class FirstSheetImport implements ToCollection
{
    public function create(Request $user_id)
    {
        $user_id = Auth::user()->id;
        $this->user_id = $user_id;
    }

    public function collection(Collection $rows)
    {
        foreach ($rows as $row) {
            Medicine::create([
                'name' => $row[0] ?? "",
                'dosage_form' => $row[1] ?? "",
                'dosage_strength' => $row[2] ?? "",
                'product_date' => $row[3] ?? "",
                'expire_date' => $row[4] ?? "",
                'unit' => $row[5] ?? "",
                'serial_no' => $row[6] ?? "",
                'user_id' => $this->user_id,
            ]);
        }
    }
}
and this is my normal import class
class MedicineImport implements ToModel, WithMultipleSheets
{
    public function sheets(): array
    {
        return [
            new FirstSheetImport()
        ];
    }

    /**
     * @param array $row
     *
     * @return User|null
     */
    public function __construct($user_id)
    {
        $this->user_id = $user_id;
    }

    public function model(array $row)
    {
        // var_dump($row);
        // die();
        return new Medicine([
            'name' => $row[0] ?? "",
            'dosage_form' => $row[1] ?? "",
            'dosage_strength' => $row[2] ?? "",
            'product_date' => $row[3] ?? "",
            'expire_date' => $row[4] ?? "",
            'unit' => $row[5] ?? "",
            'serial_no' => $row[6] ?? "",
            'user_id' => $this->user_id,
        ]);
    }
}
I got it. In my multiple-sheet import class:
public function __construct($user_id)
{
    $this->user_id = $user_id;
}

public function collection(Collection $rows)
{
    $this->user_id = Auth::user()->id;

    foreach ($rows as $row) {
        Medicine::create([
            'name' => $row[0] ?? "",
            'dosage_form' => $row[1] ?? "",
            'dosage_strength' => $row[2] ?? "",
            'product_date' => $row[3] ?? "",
            'expire_date' => $row[4] ?? "",
            'unit' => $row[5] ?? "",
            'serial_no' => $row[6] ?? "",
            'user_id' => $this->user_id,
        ]);
    }
}
and where I create the new FirstSheetImport instance:
public function sheets(): array
{
    return [
        0 => new FirstSheetImport($this->user_id)
    ];
}
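For completeness, the user id given to that constructor has to come from wherever the import is started. A minimal sketch of a hypothetical controller action (the controller name and the 'file' request field are illustrative, not from the original post):

<?php

namespace App\Http\Controllers;

use App\Imports\MedicineImport;
use Illuminate\Http\Request;
use Maatwebsite\Excel\Facades\Excel;

class MedicineImportController extends Controller
{
    public function import(Request $request)
    {
        // Pass the authenticated user's id into MedicineImport, whose
        // sheets() forwards it to FirstSheetImport's constructor.
        Excel::import(new MedicineImport(auth()->id()), $request->file('file'));

        return back()->with('success', 'Medicines imported');
    }
}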

Logstash script unable to write data to InfluxDB

I am trying to:
1. Pull Logstash monitoring metrics from an index.
2. Filter/process them with a Logstash script.
3. Write them to InfluxDB.
The Logstash script looks like this:
input {
#logstash Metrics
http_poller {
urls => {
logstash_metrics => {
method => post
url => "http://elasticco-###############.com:80//%3C.monitoring-logstash-6-%7Bnow%2Fd%7D%3E//_search?filter_path=aggregations.source_node_ip.buckets"
headers => { Accept => "application/json" }
body => '{"query":{"bool":{"must":[{"range":{"timestamp":{"gte":"now-10m","format":"epoch_millis"}}}]}},"size":0,"aggregations":{"source_node_ip":{"terms":{"field":"source_node.ip","size":20},"aggregations":{"data":{"top_hits":{"size":1,"sort":[{"timestamp":{"order":"desc"}}]}}}}}}'
auth => { user => "elastic" password => "changeme" }
}
}
codec => json
schedule => { every => "10m"}
type => "logstash_metrics"
add_field => {
"region" => "west"
}
}
}
filter {
if [type] == "logstash_metrics"{
mutate {
rename => {
"[aggregations][source_node_ip][buckets]" => "root"
}
}
split { field => "[root]" }
mutate {
rename => {
"[root][data][hits][hits]" => "main_event"
}
}
split { field => "[main_event]" }
mutate {
rename => {
"[main_event][_source][cluster_uuid]" => "cluster_uuid"
"[main_event][_source][source_node][ip]" => "source_node_ip"
"[main_event][_source][source_node][host]" => "source_node_host"
"[main_event][_source][source_node][uuid]" => "source_node_uuid"
"[main_event][_source][logstash_stats][jvm][mem][heap_used_percent]" => "logstash_stats_jvm_mem_heap_used_percent"
"[main_event][_source][logstash_stats][jvm][mem][heap_used_in_bytes]" => "logstash_stats_jvm_mem_heap_used_in_bytes"
"[main_event][_source][logstash_stats][jvm][mem][heap_max_in_bytes]" => "logstash_stats_jvm_mem_heap_max_in_bytes"
"[main_event][_source][logstash_stats][jvm][uptime_in_millis]" => "logstash_stats_jvm_uptime_in_millis"
"[main_event][_source][logstash_stats][jvm][gc][collectors][young][collection_time_in_millis]" => "logstash_stats_jvm_gc_collectors_young_collection_time_in_millis"
"[main_event][_source][logstash_stats][jvm][gc][collectors][young][collection_count]" => "logstash_stats_jvm_gc_collectors_young_collection_count"
"[main_event][_source][logstash_stats][jvm][gc][collectors][old][collection_time_in_millis]" => "logstash_stats_jvm_gc_collectors_old_collection_time_in_millis"
"[main_event][_source][logstash_stats][jvm][gc][collectors][old][collection_count]" => "logstash_stats_jvm_gc_collectors_old_collection_count"
"[main_event][_source][logstash_stats][logstash][pipeline][batch_size]" => "logstash_stats_logstash_pipeline_batch_size"
"[main_event][_source][logstash_stats][logstash][pipeline][workers]" => "logstash_stats_logstash_pipeline_workers"
"[main_event][_source][logstash_stats][logstash][status]" => "logstash_stats_logstash_status"
"[main_event][_source][logstash_stats][logstash][host]" => "logstash_stats_logstash_host"
"[main_event][_source][logstash_stats][process][max_file_descriptors]" => "logstash_stats_process_max_file_descriptors"
"[main_event][_source][logstash_stats][process][cpu][percent]" => "logstash_stats_process_cpu_percent"
"[main_event][_source][logstash_stats][process][open_file_descriptors]" => "logstash_stats_process_open_file_descriptors"
"[main_event][_source][logstash_stats][os][cpu][load_average][5m]" => "logstash_stats_os_cpu_load_average_5m"
"[main_event][_source][logstash_stats][os][cpu][load_average][15m]" => "logstash_stats_os_cpu_load_average_15m"
"[main_event][_source][logstash_stats][os][cpu][load_average][1m]" => "logstash_stats_os_cpu_load_average_1m"
"[main_event][_source][logstash_stats][events][filtered]" => "logstash_stats_events_filtered"
"[main_event][_source][logstash_stats][events][in]" => "logstash_stats_events_in"
"[main_event][_source][logstash_stats][events][duration_in_millis]" => "logstash_stats_events_duration_in_millis"
"[main_event][_source][logstash_stats][events][out]" => "logstash_stats_events_out"
"[main_event][_source][logstash_stats][queue][type]" => "logstash_stats_queue_type"
"[main_event][_source][logstash_stats][queue][events_count]" => "logstash_stats_queue_events_count"
"[main_event][_source][logstash_stats][reloads][failures]" => "logstash_stats_reloads_failures"
"[main_event][_source][logstash_stats][reloads][successes]" => "logstash_stats_reloads_successes"
"[main_event][_source][logstash_stats][timestamp]" => "timestamp"
}
}
mutate {
remove_field => [ "root", "aggregations", "@timestamp", "@version", "main_event" ]
}
}
}
output {
if [type] == "logstash_metrics" {
stdout { codec => rubydebug }
influxdb {
host => "influx-qa-write.##########.com"
port => "8086"
user => "gt######00"
password => "hg3########1"
db => "logstash_statistics"
measurement => "logstash_health_test1"
data_points => {
"logstash_stats_events_in" => "%{logstash_stats_events_in}"
"logstash_stats_logstash_status" => "%{logstash_stats_logstash_status}"
"logstash_stats_logstash_pipeline_workers" => "%{logstash_stats_logstash_pipeline_workers}"
"logstash_stats_events_out" => "%{logstash_stats_events_out}"
"logstash_stats_events_duration_in_millis" => "%{logstash_stats_events_duration_in_millis}"
"logstash_stats_process_cpu_percent" => "%{logstash_stats_process_cpu_percent}"
"logstash_stats_jvm_mem_heap_used_in_bytes" => "%{logstash_stats_jvm_mem_heap_used_in_bytes}"
"logstash_stats_process_open_file_descriptors" => "%{logstash_stats_process_open_file_descriptors}"
"logstash_stats_jvm_uptime_in_millis" => "%{logstash_stats_jvm_uptime_in_millis}"
"logstash_stats_events_filtered" => "%{logstash_stats_events_filtered}"
"logstash_stats_jvm_mem_heap_used_percent" => "%{logstash_stats_jvm_mem_heap_used_percent}"
"logstash_stats_jvm_gc_collectors_young_collection_time_in_millis" => "%{logstash_stats_jvm_gc_collectors_young_collection_time_in_millis}"
"source_node_ip" => "%{source_node_ip}"
"logstash_stats_queue_events_count" => "%{logstash_stats_queue_events_count}"
"logstash_stats_reloads_failures" => "%{logstash_stats_reloads_failures}"
"logstash_stats_logstash_host" => "%{logstash_stats_logstash_host}"
"logstash_stats_jvm_gc_collectors_young_collection_count" => "%{logstash_stats_jvm_gc_collectors_young_collection_count}"
"logstash_stats_os_cpu_load_average_5m" => "%{logstash_stats_os_cpu_load_average_5m}"
"logstash_stats_jvm_gc_collectors_old_collection_time_in_millis" => "%{logstash_stats_jvm_gc_collectors_old_collection_time_in_millis}"
"source_node_uuid" => "%{source_node_uuid}"
"logstash_stats_os_cpu_load_average_15m" => "%{logstash_stats_os_cpu_load_average_15m}"
"logstash_stats_reloads_successes" => "%{logstash_stats_reloads_successes}"
"logstash_stats_logstash_pipeline_batch_size" => "%{logstash_stats_logstash_pipeline_batch_size}"
"source_node_host" => "%{source_node_host}"
"logstash_stats_jvm_gc_collectors_old_collection_count" => "%{logstash_stats_jvm_gc_collectors_old_collection_count}"
"logstash_stats_process_max_file_descriptors" => "%{logstash_stats_process_max_file_descriptors}"
"logstash_stats_jvm_mem_heap_max_in_bytes" => "%{logstash_stats_jvm_mem_heap_max_in_bytes}"
"cluster_uuid" => "%{cluster_uuid}"
"logstash_stats_queue_type" => "%{logstash_stats_queue_type}"
"logstash_stats_os_cpu_load_average_1m" => "%{logstash_stats_os_cpu_load_average_1m}"
"region" => "%{region}"
}
coerce_values => {
"logstash_stats_logstash_pipeline_workers" => "integer"
"logstash_stats_events_in" => "integer"
"logstash_stats_logstash_status" => "string"
"logstash_stats_events_out" => "integer"
"logstash_stats_events_duration_in_millis" => "integer"
"logstash_stats_process_cpu_percent" => "float"
"logstash_stats_jvm_mem_heap_used_in_bytes" => "integer"
"logstash_stats_process_open_file_descriptors" => "integer"
"logstash_stats_jvm_uptime_in_millis" => "integer"
"logstash_stats_events_filtered" => "integer"
"logstash_stats_jvm_mem_heap_used_percent" => "float"
"logstash_stats_jvm_gc_collectors_young_collection_time_in_millis" => "integer"
"source_node_ip" => "string"
"logstash_stats_queue_events_count" => "integer"
"logstash_stats_reloads_failures" => "integer"
"logstash_stats_logstash_host" => "string"
"logstash_stats_jvm_gc_collectors_young_collection_count" => "integer"
"logstash_stats_os_cpu_load_average_5m" => "float"
"logstash_stats_jvm_gc_collectors_old_collection_time_in_millis" => "integer"
"source_node_uuid" => "string"
"logstash_stats_os_cpu_load_average_15m" => "float"
"logstash_stats_reloads_successes" => "integer"
"logstash_stats_logstash_pipeline_batch_size" => "integer"
"source_node_host" => "string"
"logstash_stats_jvm_gc_collectors_old_collection_count" => "integer"
"logstash_stats_process_max_file_descriptors" => "integer"
"logstash_stats_jvm_mem_heap_max_in_bytes" => "integer"
"cluster_uuid" => "string"
"logstash_stats_queue_type" => "string"
"region" => "string"
}
send_as_tags => ["region","source_node_uuid"]
flush_size => 3000
idle_flush_time => 1
retention_policy => "rp_400d"
}
stdout {codec => rubydebug }
}
}
Sample output to the console (stdout) looks good and as expected:
{
"logstash_stats_events_in" => 621,
"logstash_stats_logstash_status" => "green",
"logstash_stats_logstash_pipeline_workers" => 16,
"logstash_stats_events_out" => 621,
"logstash_stats_events_duration_in_millis" => 4539,
"logstash_stats_process_cpu_percent" => 0,
"logstash_stats_jvm_mem_heap_used_in_bytes" => 170390792,
"logstash_stats_process_open_file_descriptors" => 259,
"type" => "logstash_metrics",
"logstash_stats_jvm_uptime_in_millis" => 310770160,
"logstash_stats_events_filtered" => 621,
"logstash_stats_jvm_mem_heap_used_percent" => 0,
"logstash_stats_jvm_gc_collectors_young_collection_time_in_millis" => 21586,
"source_node_ip" => "10.187.8.207",
"logstash_stats_queue_events_count" => 0,
"logstash_stats_reloads_failures" => 0,
"timestamp" => "2018-01-30T15:56:18.270Z",
"logstash_stats_logstash_host" => "ip-187-7-147.dqa.capitalone.com",
"logstash_stats_jvm_gc_collectors_young_collection_count" => 487,
"logstash_stats_os_cpu_load_average_5m" => 0.19,
"logstash_stats_jvm_gc_collectors_old_collection_time_in_millis" => 124,
"source_node_uuid" => "VmarsH2-RMO0HY2u2-A9EQ",
"logstash_stats_os_cpu_load_average_15m" => 0.13,
"logstash_stats_reloads_successes" => 0,
"logstash_stats_logstash_pipeline_batch_size" => 125,
"source_node_host" => "10.187.8.207",
"logstash_stats_jvm_gc_collectors_old_collection_count" => 1,
"logstash_stats_process_max_file_descriptors" => 16384,
"logstash_stats_jvm_mem_heap_max_in_bytes" => 32098877440,
"cluster_uuid" => "LkLw_ASTR7CVQAaX1IzDgg",
"logstash_stats_queue_type" => "memory",
"region" => "west",
"logstash_stats_os_cpu_load_average_1m" => 0.06
(Successfully the formatted output was generated)
But the script above is unable to write this to InfluxDB, and the error log shows the following:
09:56:25.658 [[main]>worker0] DEBUG logstash.outputs.influxdb - Influxdb output: Received event: %{host} %{message}
Exception in thread "[main]>worker0" java.io.IOException: fails
at org.logstash.Event.getTimestamp(Event.java:140)
at org.logstash.ext.JrubyEventExtLibrary$RubyEvent.ruby_timestamp(JrubyEventExtLibrary.java:289)
at org.logstash.ext.JrubyEventExtLibrary$RubyEvent$INVOKER$i$0$0$ruby_timestamp.call(JrubyEventExtLibrary$RubyEvent$INVOKER$i$0$0$ruby_timestamp.gen)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:306)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:136)
at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:60)
at org.jruby.ast.FCallTwoArgNode.interpret(FCallTwoArgNode.java:38)
at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:182)
at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:203)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:326)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:170)
at org.jruby.ast.FCallOneArgNode.interpret(FCallOneArgNode.java:36)
at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:105)
at org.jruby.evaluator.ASTInterpreter.INTERPRET_BLOCK(ASTInterpreter.java:112)
at org.jruby.runtime.Interpreted19Block.evalBlockBody(Interpreted19Block.java:206)
at org.jruby.runtime.Interpreted19Block.yield(Interpreted19Block.java:157)
at org.jruby.runtime.Block.yield(Block.java:142)
at org.jruby.RubyArray.eachCommon(RubyArray.java:1606)
at org.jruby.RubyArray.each(RubyArray.java:1613)
Update: My Logstash is able to communicate with InfluxDB (other scripts are working fine). The environment versions I am using are: Logstash 5.4, InfluxDB 1.4.2, Java 8 (64-bit), logstash-output-influxdb 5.0.3 (output plugin), and Windows 7 Enterprise (64-bit).
Can someone suggest what is going wrong here? Let me know if you require any further information.
Thanks!

How to use pagination in a Kendo grid with partial views in MVC 5

I am using a Kendo grid to show some user information. I have 3 tabs in the Admin view, and each tab is a partial view: Customer, User, and Portfolio. I am loading the Kendo grid while rendering the main Admin view itself, through a view model which consists of 3 collections:
public class AdminViewModel
{
    public IEnumerable<CustUser> CustomerUsers { get; set; }
    public IEnumerable<Customer> Customers { get; set; }
    public IEnumerable<Portfolio> Portfolios { get; set; }
}
And I am passing this view model to each of the 3 partial views. It's working fine, but the problem is when I want to go to the next page in the grid: it just returns the JSON data on the page.
My Kendo grid:
<div class="col-xs-12">
@try
{
@(Html.Kendo().Grid(Model.CustomerUsers)
.Name("User")
.Columns(columns =>
{
columns.Bound(p => p.CustUserID).Visible(false);
columns.Bound(p => p.FirstName);
columns.Bound(p => p.LastName);
columns.Bound(p => p.CustUserName);
columns.Bound(p => p.EmailAddress);
columns.Bound(p => p.IsActive);
columns.Bound(p => p.ModifiedDt).Visible(false);
columns.Bound(p => p.CreatedDt).Visible(false);
columns.Command(command =>
{
//command.Edit();
//command.Destroy().Text("Del");
command.Custom("Delete").Click("ConfirmDeleteUnit").Text("<span class='fa fa-trash-o'></span>").HtmlAttributes(new { #style = "width:40px !important;min-width:0px !important;" }); ;
}).Width(160);
})
.Pageable()
.Sortable()
.Scrollable(scrollable => scrollable.Virtual(true))
.DataSource(dataSource => dataSource
.Ajax()
.PageSize(25)
.Events(events => events.Error("error_handler"))
.Model(model =>
{
model.Id(i => i.CustUserID);
})
.Read(read => read.Action("LazyCustUser_Read", "Admin")
)
//.Update(update => update.Action("InlineEditUnit_Update", "Unit"))
//.Destroy(update => update.Action("InlineEditUnit_Destroy", "Unit"))
)
)
}
catch (Exception exe)
{
CSP.Web.Helpers.Log objLog = new CSP.Web.Helpers.Log();
objLog.LogError(exe.Message.ToString() + this.ToString());
}
</div>
and my Admin controller:
public async Task<ActionResult> Index()
{
if (Session["custUserID"] != null)
{
try
{
var custUsers = await _custUserRepo.GetAllCustUsers();
var customers = await _CustomerRepo.GetAllCustomers();
var portfolios = await _portfolioRepo.GetPortFolios();
if (custUsers == null)
return RedirectToAction("ShowError", "Error");
adminViewModel.CustomerUsers = custUsers;
}
catch (Exception exe)
{
objError.LogError(exe.Message);
return RedirectToAction("ShowError", "Error");
}
return View(adminViewModel);
}
else
{
return RedirectToAction("Index", "Account");
}
}
public async Task<ActionResult> LazyCustUser_Read([DataSourceRequest] DataSourceRequest request)
{
if (Session["custUserID"] != null)
{
try
{
var custUser = await _custUserRepo.GetAllCustUsers();
return Json(custUser.ToDataSourceResult(request), JsonRequestBehavior.AllowGet);
}
catch (Exception exe)
{
Log objLog = new Log();
objLog.LogError(exe.InnerException.ToString());
return RedirectToAction("ShowError", "Error");
}
}
else
{
return RedirectToAction("Index", "Account");
}
}

Can Logstash process multiple outputs simultaneously?

I'm very new to Logstash and Elasticsearch. I am trying to store log files both in Elasticsearch and in a flat file. I know that Logstash supports both outputs, but are they processed simultaneously? Or is it done periodically through a job?
Yes, you can do this by tagging and cloning your inputs with the add_tag option in your shipper config, like so:
input
{
tcp { type => "linux" port => "50000" codec => plain { charset => "US-ASCII" } }
tcp { type => "apache_access" port => "50001" codec => plain { charset => "US-ASCII" } }
tcp { type => "apache_error" port => "50002" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_security" port => "50003" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_application" port => "50004" codec => plain { charset => "US-ASCII" } }
tcp { type => "windows_system" port => "50005" codec => plain { charset => "US-ASCII" } }
udp { type => "network_equipment" port => "514" codec => plain { charset => "US-ASCII" } }
udp { type => "firewalls" port => "50006" codec => plain }
}
filter
{
grok { match => [ "host", "%{IPORHOST:ipaddr}(:%{NUMBER})?" ] }
mutate { replace => [ "fqdn", "%{ipaddr}" ] }
dns { reverse => [ "fqdn", "fqdn" ] action => "replace" }
if [type] == "linux" { clone { clones => "linux.log" add_tag => "savetofile" } }
if [type] == "apache_access" { clone { clones => "apache_access.log" add_tag => "savetofile" } }
if [type] == "apache_error" { clone { clones => "apache_error.log" add_tag => "savetofile" } }
if [type] == "windows_security" { clone { clones => "windows_security.log" add_tag => "savetofile" } }
if [type] == "windows_application" { clone { clones => "windows_application.log" add_tag => "savetofile" } }
if [type] == "windows_system" { clone { clones => "windows_system.log" add_tag => "savetofile" } }
if [type] == "network_equipment" { clone { clones => "network_%{fqdn}.log" add_tag => "savetofile" } }
if [type] == "firewalls" { clone { clones => "firewalls.log" add_tag => "savetofile" } }
}
output
{
#stdout { debug => true }
#stdout { codec => rubydebug }
redis { host => "1.1.1.1" data_type => "list" key => "logstash" }
}
And on your main logstash instance you would do this:
input {
redis {
host => "1.1.1.1"
data_type => "list"
key => "logstash"
type=> "redis-input"
# We use the 'json' codec here because we expect to read json events from redis.
codec => json
}
}
output
{
if "savetofile" in [tags] {
file {
path => [ "/logs/%{fqdn}/%{type}" ] message_format => "%{message}"
}
}
else { elasticsearch { host => "2.2.2.2" }
}
}
FYI, you can study "The life of a Logstash event" in the documentation. The output worker model is currently single-threaded, and outputs receive events in the order they are defined in the config file. Outputs may, however, buffer events temporarily before publishing them; for example, the file output may buffer 2 or 3 events before writing them out.
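In the simplest case the two destinations can even sit in the same output block, and Logstash hands each event to every output in the order they are listed. A minimal sketch with placeholder host and path values (hosts is the newer option name; older releases of the elasticsearch output used host, as in the answer above):

output {
  # Every event is written to Elasticsearch and then appended to the file.
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  file {
    path => "/var/log/logstash/events-%{+YYYY-MM-dd}.log"
  }
}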
First you need to install output plugins:
/usr/share/logstash/bin/logstash-plugin install logstash-output-elasticsearch
/usr/share/logstash/bin/logstash-plugin install logstash-output-file
Then create conf files for output:
cat /etc/logstash/conf.d/nfs-output.conf
output {
file {
path => "/your/path/filebeat-%{+YYYY-MM-dd}.log"
}
}
cat /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
elasticsearch {
hosts => ["elasitc_ip:9200"]
manage_template => true
user => "elastic"
password => "your_password"
}
}
Then:
service logstash restart
