Logstash HTTP Output Failure

I'm using Logstash 6.5.4 on CentOS 7 to collect information from the system and send it to a Node server through the http output plugin. The following is my configuration:
input {
  stdin {}
}
output {
  stdout {}
  http {
    url => "http://nodeserver.example.in:5000/send"
    format => "json"
    http_method => "post"
    headers => ["Authorization", "Bearer ${CLOG_TOKEN}"]
    content_type => "application/json"
  }
}
With the stdout{} plugin I get output on the console, but with the http{} plugin I get an HTTP output failure error:
[2020-05-27T16:19:37,789][ERROR][logstash.outputs.http ] [HTTP Output Failure] Could not fetch URL {:url=>"http://nodeserver.example.in:5000/send", :method=>:post, :body=>"{\"@version\":\"1\",\"host\":\"vm2\",\"message\":\"hello\",\"@timestamp\":\"2020-05-27T10:49:32.699Z\"}", :headers=>{"Authorization"=>"Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTMxNjgzODIsInVzZXJuYW1lIjoic3lzYWRtaW4iLCJvcmdOYW1lIjoiUHJvdmlkZXJPcmciLCJyb2xlIjoid3JpdGVyIiwiaWF0IjoxNTkwNTc2MzgyfQ.vNaPEBhxG26oUYNKBYHKFtE0FH8mqHsKJRd45UjWFZE", "Content-Type"=>"application/json"}, :message=>"Connection reset", :class=>"Manticore::SocketException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-http-5.2.3/lib/logstash/outputs/http.rb:239:in `send_event'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-http-5.2.3/lib/logstash/outputs/http.rb:175:in `send_events'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-http-5.2.3/lib/logstash/outputs/http.rb:124:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:114:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:97:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:373:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:372:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:324:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:286:in `block in start_workers'"], :will_retry=>true}
[2020-05-27T16:19:37,869][INFO ][logstash.outputs.http ] Retrying http request, will sleep for 0 seconds
My server is working fine; through the curl command I'm able to send the data.
I also tried the same configuration on another system with Logstash 6.8.9, and there it works without any issue.
Can anyone suggest why this error is occurring?
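One way to narrow a "Connection reset" like this down is to confirm whether the POST leaves Logstash at all. A minimal sketch of a throwaway receiver that can stand in for the Node endpoint (assumes port 5000 is free on the test host; this is not the actual Node server code):

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    """Prints every POST it receives and answers 200, so the Logstash
    http output can be observed in isolation."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print(self.path, dict(self.headers), self.rfile.read(length))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("0.0.0.0", 5000), EchoHandler).serve_forever()

If this receiver logs the event while the real Node server resets the connection, the problem is on the server side rather than in the Logstash configuration.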

Related

Logstash HTTP Output Plugin Error Could not fetch URL, Network is unreachable (connect failed)

I have configured a couple of pipelines in Logstash. In one of the pipelines I have configured an http output plugin like this:
output {
  http {
    url => "url"
    http_method => "post"
    request_timeout => <timeout>
    automatic_retries => <retry count>
    retry_failed => false
  }
}
For testing purposes, I configured a webhook URL in the url setting and verified that it receives GET/POST requests via curl. But when Logstash tries to post data to the URL using the logstash.outputs.http plugin, it fails with Manticore::SocketException.
Detailed Error:
[HTTP Output Failure] Could not fetch URL {:url=>"url", :method=>:post, :body=>"some-body", :message=>"Network is unreachable (connect failed)", :class=>"Manticore::SocketException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-http-5.2.4/lib/logstash/outputs/http.rb:239:in `send_event'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-http-5.2.4/lib/logstash/outputs/http.rb:175:in `send_events'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-http-5.2.4/lib/logstash/outputs/http.rb:124:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:138:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:293:in `block in start_workers'"], :will_retry=>true}

Logstash beats input "invalid version of beats protocol"

I'm writing a Kibana plugin and a Logstash pipeline. For my tests, I just wrote a Logstash input like this:
input {
  beats {
    port => 9600
    ssl => false
    ssl_verify_mode => "none"
  }
}
But when I try to open a connection with Node (code below):
invoke = (parameters, id, port, host) => {
  // assumed: the 'lumberjack-protocol' package, which matches the
  // client()/writeDataFrame() API used below; the original snippet
  // omitted this require
  var lumberjack = require('lumberjack-protocol');
  console.log(`Sending message in beats, host= ${host}, port= ${port}, message= ${parameters.message}`);
  var connectionOptions = {
    host: host,
    port: port
  };
  var client = lumberjack.client(connectionOptions, {rejectUnauthorized: false, maxQueueSize: 500});
  client.writeDataFrame({"line": id + " " + parameters.message});
}
Logstash gives me "invalid version of beats protocol: 22" and "invalid version of beats protocol: 3":
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 22
at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.11.jar:?]
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.11.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
... 9 more
[2020-08-11T07:49:47,954][INFO ][org.logstash.beats.BeatsHandler] [local: 172.22.0.40:9600, remote: 172.22.0.1:33766] Handling exception: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
[2020-08-11T07:49:47,955][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.49.Final.jar:4.1.49.Final]
at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 3
at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.0.11.jar:?]
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.0.11.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-all-4.1.49.Final.jar:4.1.49.Final]
... 11 more
Instead of using the beats input, you could try the tcp input.
Example:
input {
  tcp {
    port => "9600"
    codec => "json"
  }
}
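With the json codec, each newline-terminated line arriving on the socket is parsed as one JSON event, so any plain TCP client can send data. A minimal Python sketch (host, port, and the payload shape are assumptions):

import json
import socket

# One JSON document per line; the tcp input above parses each line as an event.
with socket.create_connection(("localhost", 9600)) as sock:
    event = {"line": "42 hello from a tcp client"}
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))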
If you are using the beats input and you want Logstash to perform additional processing on the data collected by Filebeat, you need to configure Filebeat to use Logstash.
To do this, edit the Filebeat configuration file: disable the Elasticsearch output by commenting it out, and enable the Logstash output by uncommenting the Logstash section:
output.logstash:
  hosts: ["127.0.0.1:5044"]
You can read more on https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html
There could be many causes for this; in my case the issue was related to my filebeat.yml. I was getting the error below on my Logstash server:
An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.2.6.jar:?]
at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.2.6.jar:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
... 11 more
When I checked Filebeat, the output test failed:
root@ip-10-0-8-193:~# filebeat test output
elasticsearch: http://10.0.13.37:5044...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 10.0.13.37
dial up... OK
TLS... WARN secure connection disabled
talk to server... ERROR Get "http://10.0.13.37:5044": read tcp 10.0.8.193:34940->10.0.13.37:5044: read: connection reset by peer
When I checked my logs more closely, I found the misconfiguration:
[2022-10-12T11:08:06,107][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.0.14.30:9200"]}
The above means I hadn't configured the outputs correctly: I had commented out the Elasticsearch hosts line but missed the output line itself.
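Reconstructed from the filebeat test output above, the broken state presumably looked like this (a sketch, not the actual file):

output.elasticsearch:
  # still enabled, but pointing at the Logstash beats port
  hosts: ["10.0.13.37:5044"]

So Filebeat was speaking the Elasticsearch HTTP protocol to the beats input on port 5044, which is why Logstash reported an invalid beats protocol version.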
Solution: in my case I went back to my filebeat.yml and made the required changes:
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.13.37:5044"]
Make sure the Elasticsearch output is properly commented out.
To understand more about this error we would need to see the filebeat.yml. The input plugin you have used is valid, but in filebeat.yml you might not have set the output.logstash value, or you may have made some other mistake. Can you please check whether you are sending the output to Elasticsearch or to Logstash?
Please ensure you have these lines in your filebeat.yml:
output.logstash:
  hosts: ["127.0.0.1:5044"]

asp.net core site behind proxy: Azure B2C signin results in correlation failed

I've got a web app that uses Azure B2C sign-in. When I run the site locally, with config set to sign in to the Azure B2C tenant, everything works as expected.
I then deploy the site to Azure, where I have two web apps sitting behind a Front Door proxy/load-balancing configuration. I have also tried the same using Traffic Manager.
When I click on the Sign In link, which should redirect me to the B2C sign-in page, I am instead redirected back to my site and get an error on the web page: Correlation failed.
Assuming I have two web apps in the Front Door configuration, called:
mywebapp-eastus
mywebapp-westus
and assuming the public domain name is https://www.mywebapp.com
When I initiate a sign-in, I see this in the response headers of the signin-oidc request:
set-cookie: ARRAffinity=335ad67894a0a02a521f095924a8d7be4f7829a49d21743b7dd9ec8ce66879d7;Path=/;HttpOnly;Domain=mywebapp-eastus.azurewebsites.net
where mywebapp-eastus is actually an individual web app name. I would have expected to see the public domain here, not the individual web app that I connected to.
In the Chrome dev tools, the signin-oidc request then results in an error after the redirect has occurred.
I would expect to see this instead:
ARRAffinity=335ad67894a0a02a521f095924a8d7be4f7829a49d21743b7dd9ec8ce66879d7;Path=/;HttpOnly;Domain=www.mywebapp.com
I don't know if this is the underlying reason for the error.
Here is the code that sets up authentication and cookies:
services.Configure<CookiePolicyOptions>(options =>
{
// This lambda determines whether user consent for non-essential cookies is needed for a given request.
options.CheckConsentNeeded = context => true;
options.MinimumSameSitePolicy = SameSiteMode.None;
});
services.Configure<ApiBehaviorOptions>(options =>
{
options.InvalidModelStateResponseFactory = ctx => new ValidationProblemDetailsResult();
});
services.AddAuthentication(sharedOptions =>
{
sharedOptions.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
sharedOptions.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
sharedOptions.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddAzureAdB2C(options => Configuration.Bind(AppSettings.AzureB2CSettings, options))
.AddCookie(CookieAuthenticationDefaults.AuthenticationScheme, options =>
{
options.ExpireTimeSpan = TimeSpan.FromMinutes(60);
options.SlidingExpiration = true;
options.Cookie.HttpOnly = false;
options.Cookie.SecurePolicy = CookieSecurePolicy.SameAsRequest;
});
Update
I have added x-forwarded-proto options with no noticeable effect:
var forwardingOptions = new ForwardedHeadersOptions()
{
ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
};
forwardingOptions.KnownNetworks.Clear();
forwardingOptions.KnownProxies.Clear();
app.UseForwardedHeaders(forwardingOptions);
I thought I had found a related issue here: x-forwarded-proto-not-working
Any idea how I can resolve this issue? What options can I set so that the public domain is correctly populated in the response headers?
The web app is running ASP.NET Core 2.2 on Linux (web app for containers).
Update 2:
I've added logging to the site. It shows that there's an underlying error: Cookie not found and Correlation failed
Information {Protocol="HTTP/1.1", Method="POST", ContentType="application/x-www-form-urlencoded", ContentLength=722, Scheme="http", Host="webapp-eastus.azurewebsites.net", PathBase="", Path="/signin-oidc", QueryString="", EventId={Id=1}, SourceContext="Microsoft.AspNetCore.Hosting.Internal.WebHost", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} "Request starting HTTP/1.1 POST http://webapp-eastus.azurewebsites.net/signin-oidc application/x-www-form-urlencoded 722"
Debug {EventId={Id=1}, SourceContext="Microsoft.AspNetCore.HttpsPolicy.HstsMiddleware", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} The request is insecure. Skipping HSTS header.
Debug {EventId={Id=1}, SourceContext="Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} "POST" requests are not supported
Debug {EventId={Id=25, Name="RequestBodyStart"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null} Connection id ""0HLKC8D934664"", Request id ""0HLKC8D934664:00000001"": started reading request body.
Debug {EventId={Id=26, Name="RequestBodyDone"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null} Connection id ""0HLKC8D934664"", Request id ""0HLKC8D934664:00000001"": done reading request body.
warn: Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler[15]
'.AspNetCore.Correlation.OpenIdConnect.2KB8HPHJV3KhB2HCDp3C3b5iPXjcdAQLOrz5-6nGnwY' cookie not found.
Warning {EventId={Id=15}, SourceContext="Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} '".AspNetCore.Correlation.OpenIdConnect.2KB8HPHJV3KhB2HCDp3C3b5iPXjcdAQLOrz5-6nGnwY"' cookie not found.
Information {EventId={Id=4}, SourceContext="Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectHandler", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} Error from RemoteAuthentication: "Correlation failed.".
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "User-Agent": ["Edge Health Probe"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-Client-IP": ["1.1.1.1"]
Information {ElapsedMilliseconds=12.3269, StatusCode=302, ContentType=null, EventId={Id=2}, SourceContext="Microsoft.AspNetCore.Hosting.Internal.WebHost", RequestId="0HLKC8D934664:00000001", RequestPath="/signin-oidc", CorrelationId=null, ConnectionId="0HLKC8D934664"} "Request finished in 12.3269ms 302 "
Debug {EventId={Id=6, Name="ConnectionReadFin"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets"} Connection id ""0HLKC8D934664"" received FIN.
Debug {EventId={Id=7, Name="ConnectionWriteFin"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets"} Connection id ""0HLKC8D934664"" sending FIN because: ""The client closed the connection.""
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-Client-Port": ["63761"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-FD-HealthProbe": ["1"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-WAWS-Unencoded-URL": ["/"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "CLIENT-IP": ["1.1.1.1:63761"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-ARR-LOG-ID": ["5022cb4d-7c63-4a78-ad11-c8474246281d"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "DISGUISED-HOST": ["webapp-eastus.azurewebsites.net"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-SITE-DEPLOYMENT-ID": ["webapp-eastus"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "WAS-DEFAULT-HOSTNAME": ["webapp-eastus.azurewebsites.net"]
Debug {SourceContext="AppBuilderExtensions", RequestId="0HLKC8D934663:00000001", RequestPath="/", CorrelationId=null, ConnectionId="0HLKC8D934663"} Header: "X-Original-URL": ["/"]
Debug {EventId={Id=10, Name="ConnectionDisconnect"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel"} Connection id ""0HLKC8D934664"" disconnecting.
Debug {EventId={Id=2, Name="ConnectionStop"}, SourceContext="Microsoft.AspNetCore.Server.Kestrel"} Connection id ""0HLKC8D934664"" stopped.
Just to be clear, sticky sessions have been activated in the Front Door configuration. I also tried stopping one of the web apps during login to see what the effect was: same behaviour. So I don't think this is a DataProtection-related issue.
Update 3
I think I may have found the underlying issue. I found this error:
AADB2C90006: The redirect URI 'https://webapp-eastus.azurewebsites.net/signin-oidc' provided in the request is not registered for the client id 'zzzzzzz-2121-4158-b0f8-9d164c95000'.
Correlation ID: xxxxxxxxx-xxxxxxxxx-xxxxxxx-xxxxxxx
When I looked for the X-Forwarded-* headers in the request, I found they are missing. This is even though I have a block of code that looks like this:
services.Configure<ForwardedHeadersOptions>(options =>
{
options.ForwardedHeaders =
ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
in ConfigureServices, in Startup.cs
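One caveat worth noting here: the parameterless app.UseForwardedHeaders() call below does pick up options registered via DI like this, but the middleware ignores X-Forwarded-* headers coming from proxies it does not recognize. A sketch that mirrors the KnownNetworks/KnownProxies clearing from the inline attempt above (assuming Front Door is the unrecognized proxy):

services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    // Front Door is not a loopback proxy, so without clearing these lists
    // the forwarded headers middleware discards the X-Forwarded-* values.
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});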
In Configure() I have this:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
app.UseForwardedHeaders();
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseCookiePolicy();
app.Use((context, next) =>
{
context.Request.Scheme = "https";
return next();
});
app.UseSession();
app.UseAuthentication();
app.LogAuthenticationRequests(_loggerFactory);
app.UseMvc(routes =>
{
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
}
The question now is, why would the X-Forwarded-* headers be missing from requests?

replicate curl command python 3 urllib request API

This problem is driving me crazy.
I'm writing a very simple Python 3 script to work with an API on a public website.
I am able to do it with curl, but not in Python.
I can't use the requests library or curl in my real environment; they are available just for tests.
This is working:
curl -d "credential_0=XXXX&credential_1=XXXXXX" -c cookiefile.txt https://XXXXXXXXXXXXXXX/LOGIN
curl -d 'json={"devices" : ["00:1A:1E:29:73:B2","00:1A:1E:29:73:B2"]}' -b cookiefile.txt -v https://XXXXXXXXX/api-path --trace-ascii /dev/stdout
and we can see this in the curl debug:
=> Send header, 298 bytes (0x12a)
0000: POST /api-path HTTP/1.1
0034: Host: XXXXXXXXXXXXXXXX
0056: User-Agent: curl/7.47.0
006f: Accept: /
007c: Cookie: csrf_token=751b6bd9-0290-496b-820e-XXXXXXXX; session
00bc: =XXXXXX-6d29-4cf9-8907-XXXXXXXXXXXX
00e3: Content-Length: 60
00f7: Content-Type: application/x-www-form-urlencoded
0128:
=> Send data, 60 bytes (0x3c)
0000: json={"devices" : ["00:1A:1E:29:73:B2","00:1A:1E:29:73:B2"]}
== Info: upload completely sent off: 60 out of 60 bytes
This is the Python code to replicate the second request, which is the problematic one:
string_query = {"devices": ["34:FC:B9:CE:14:7E", "00:1A:1E:29:73:B2"]}
jsonbody_url = urllib.parse.urlencode(string_query)
jsonbody_url = jsonbody_url.encode("utf-8")
req = urllib.request.Request(url, data=jsonbody_url,
                             headers={"Cookie": cookie,
                                      "Content-Type": "application/x-www-form-urlencoded",
                                      "User-Agent": "curl/7.47.0",
                                      "charset": "UTF-8",
                                      "Content-length": len(jsonbody_url),
                                      "Connection": "Keep-Alive"},
                             method='POST')
And the server is completely ignoring the JSON content.
Everything else is working: login and other URL parameters from the same API.
Any ideas?
Try this:
import requests
import json

string_query = {"devices": ["34:FC:B9:CE:14:7E", "00:1A:1E:29:73:B2"]}
headers = {
    "Cookie": cookie,
    "Content-Type": "application/x-www-form-urlencoded",
    "User-Agent": "curl/7.47.0",
    "charset": "UTF-8",
    "Connection": "Keep-Alive"
}
# curl sent the literal form field json={...}, so serialize the dict and
# send it under the "json" key instead of form-encoding the dict directly.
response = requests.post(url, data={"json": json.dumps(string_query)}, headers=headers)
print(response.content)
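Since the question says the requests library isn't available in the real environment, here is the same fix as a pure-urllib sketch (url and cookie are assumed to come from the earlier login step):

import json
import urllib.parse
import urllib.request

string_query = {"devices": ["34:FC:B9:CE:14:7E", "00:1A:1E:29:73:B2"]}
# Reproduce curl's body exactly: json=<serialized JSON object>
body = urllib.parse.urlencode({"json": json.dumps(string_query)}).encode("utf-8")
req = urllib.request.Request(url, data=body,
                             headers={"Cookie": cookie,
                                      "Content-Type": "application/x-www-form-urlencoded",
                                      "User-Agent": "curl/7.47.0"},
                             method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.read())

The original code's urlencode({"devices": [...]}) produced a body like devices=%5B%27...%27%5D (the Python repr of the list), which is why the server ignored it; curl was sending the literal field json={...}.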

Logstash Multiline with Syslog

I'm having some difficulty getting Logstash and multiline to work together.
I am using the Logspout container, which forwards all stdout log entries as syslog to Logstash.
This is the final content that Logstash receives. Here are multiple lines that should represent two events:
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: 2015-02-10 11:55:38.496 INFO 1 --- [tp1302304527-19] c.z.service.DefaultInvoiceService : Creating with DefaultInvoiceService started...
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: 2015-02-10 11:55:48.596 WARN 1 --- [tp1302304527-19] o.eclipse.jetty.servlet.ServletHandler :
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]:
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
<14>2015-02-09T14:25:01Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842)
Every log line starts with a syslog header.
Based on the above log content, I created this Logstash config file:
input {
  udp {
    port => 5000
    type => syslog
  }
}
filter {
  multiline {
    pattern => "^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST:container_name} %{DATA}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601}"
    negate => true
    what => "previous"
    stream_identity => "%{container_name}"
  }
  grok {
    match => [ "message", "(?m)^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST} %{DATA:container_name}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{NUMBER}%{SPACE}---%{SPACE}(?:\[%{DATA:threadname}\])?%{SPACE}%{JAVACLASS:clas
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    remove_field => ["timestamp"]
  }
  if !("_grokparsefailure" in [tags]) {
    mutate {
      replace => [ "source_host", "%{container_name}" ]
      replace => [ "raw_message", "%{message}" ]
      replace => [ "message", "%{logmessage}" ]
      remove_field => [ "logmessage", "host", "source_host" ]
    }
  }
  mutate {
    strip => [ "threadname" ]
  }
}
output {
  elasticsearch { }
}
Now when the above events arrive, the first event is parsed and displayed correctly:
message = "Creating with DefaultInvoiceService started..."
The second event contains this message, which has three issues:
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]:
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 10000 ms while waiting for a server that matches AnyServerSelector{}. Client view of cluster state is {type=Unknown, servers=[{address=mongo:27017, type=Unknown, state=Connecting, exception={com.mongodb.MongoException$Network: Exception opening the socket}, caused by {java.net.UnknownHostException: mongo: unknown error}}]
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842)
<14>2015-02-10T12:59:09Z logspout dev_nginx_1[1]: 192.168.59.3 - - [10/Feb/2015:12:59:09 +0000] "POST /api/invoice/ HTTP/1.1" 500 1115 "http://192.168.59.103/"; "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.94 Safari/537.36" "-"
The message text contains a line with a dev_nginx_1 entry, which does not belong here; it should be treated as a separate event.
Each line contains the prefix <14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]:
Each line has an additional newline.
Questions:
Why is the dev_nginx_1 entry not an event on its own? Why is it considered to belong to the previous one?
How can I get rid of the syslog prefix in each line of the message?
How can I get rid of the additional newlines?
As for (1), you're using container_name in the multiline filter. This is the field after the timestamp; in your example, they're all "logspout", so the grouping looks right to me.
As for (2), each line comes in with the prefix and the timestamp, so you would expect them to be there by default. You are doing a mutate{} to replace message with logmessage, but I don't see that you're setting logmessage. So, how did you think the prefix and timestamp were being removed?
For (1), replace %{SYSLOGHOST:container_name} %{DATA} in your multiline pattern with %{SYSLOGHOST} %{DATA:container_name} (as you use in your grok), as shown below.
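Applied to the filter above, only the pattern line changes:

multiline {
  pattern => "^<%{NUMBER}>%{TIMESTAMP_ISO8601} %{SYSLOGHOST} %{DATA:container_name}(?:\[%{POSINT}\])?:%{SPACE}%{TIMESTAMP_ISO8601}"
  negate => true
  what => "previous"
  stream_identity => "%{container_name}"
}

With container_name now capturing dev_zservice_1 rather than logspout, the dev_nginx_1 line gets its own stream identity and becomes a separate event.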
For (2) and (3), you can try something like this:
mutate {
  gsub => [
    "message", "<\d+>.*?:\s", "",
    "message", "\n(\n)", "\1"
  ]
}
Here, the gsub setting performs two operations:
Examine the field "message", find the substrings from "<14>" to a colon followed by a whitespace, and replace those substrings with empty strings.
Examine the field "message", find the substrings consisting of two consecutive newline characters, and replace them with one newline character. It performs the substitution using the \1 backreference to the group (\n), because if you try to use \n itself, Logstash will actually replace it with \\n, which won't work.
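For example, applied to one of the lines above, the first substitution turns

<14>2015-02-10T12:59:09Z logspout dev_zservice_1[1]: at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)

into

at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)

and the second substitution then collapses the blank lines left behind by the empty <14>... entries into single newlines.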
