Similar to Protractor, I'm looking for some information on getting performance logs with Leadfoot in Intern.
Below is an example of getting those logs in Protractor:
browser.manage().logs().get('performance').then(function (browserLog) {
    if (browserLog.length > 0) {
        JSON.parse(JSON.stringify(browserLog)).forEach(function (entry) {
            console.log('log: ' + entry.message);
        });
    }
});
Yes, if performance logs are available, you can use Leadfoot's getLogsFor() method. Which log types are available depends on the remote environment; you can use getAvailableLogTypes() to find out what applies to your use case.
According to the documentation:
getLogsFor(type: string): Promise.<Array.<LogEntry>>
Gets all logs from the remote environment of the given type. The logs
in the remote environment are cleared once they have been retrieved.
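In an Intern functional test, this.remote is a Leadfoot Command, which proxies the Session methods, so both calls can be chained directly. Below is a minimal sketch assuming Intern 3's object interface; the suite name and page URL are placeholders, not values from your setup:
// Minimal sketch of an Intern functional test (Intern 3 object interface).
// The URL below is a hypothetical placeholder.
define(function (require) {
    var registerSuite = require('intern!object');

    registerSuite({
        name: 'performance logs (sketch)',

        'collect performance log entries': function () {
            return this.remote
                .get('http://localhost/index.html')
                .getAvailableLogTypes()
                .then(function (types) {
                    console.log('available log types: ' + types.join(', '));
                })
                .getLogsFor('performance')
                .then(function (entries) {
                    // Each LogEntry carries a timestamp, level and message.
                    entries.forEach(function (entry) {
                        console.log('log: ' + entry.message);
                    });
                });
        }
    });
});
Note that, as with Protractor, the browser usually only produces a 'performance' log if it was requested through the loggingPrefs capability (for example loggingPrefs: { performance: 'ALL' } for Chrome) in your Intern environment configuration; if getAvailableLogTypes() does not list it, check that first.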
I am generating logs for my client application, which runs with very limited internet connectivity. I store the logs offline and push them to Application Insights once the user is back online. The problem I am facing is that, out of all the logs, only the request logs come through; the rest get discarded. This is happening because of sampling, even though I have already disabled sampling in Startup.cs. Here is my code:
var aiOptions = new Microsoft.ApplicationInsights.AspNetCore.Extensions.ApplicationInsightsServiceOptions();
aiOptions.EnableAdaptiveSampling = false;
services.AddApplicationInsightsTelemetry(aiOptions);
Any suggestions on how to completely remove the sampling so that I can have all the logs in Application Insights?
Check this document to see the different log levels. If you have the latest version of the SDK, ILogger can capture traces without any extra action required, filtered by log level.
Here is the configuration of the logging level:
.ConfigureLogging(builder =>
{
    builder.AddApplicationInsights("ikey");
    // This will capture Information-level traces and above.
    builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("", LogLevel.Information);
})
For complete information, check this SO thread.
Recently, I deployed a very simple Apache Beam pipeline to get some insight into how it behaves when executed on Dataproc as opposed to on my local machine. I quickly realized that, after running it, none of the DoFn- or transform-level logging appeared in the job logs in the Google Cloud Console as I would have expected, and I'm not entirely sure what might be missing.
All of the high level logging messages are emitted as expected:
// This works
log.info("Testing logging operations...")
pipeline
.apply(Create.of(...))
.apply(ParDo.of(LoggingDoFn))
The LoggingDoFn class here is a very basic transform that logs each of the values it encounters, as seen below:
object LoggingDoFn : DoFn<String, ...>() {
private val log = LoggerFactory.getLogger(LoggingDoFn::class.java)
@ProcessElement
fun processElement(c: ProcessContext) {
// This is never emitted within the logs
log.info("Attempting to parse ${c.element()}")
}
}
As detailed in the comments, I can see logging messages outside of the processElement() calls (presumably because those are being executed by the Spark runner), but is there a way to easily expose those within the inner transform as well? When viewing the logs related to this job, we can see the higher-level logging present, but no mention of an "Attempting to parse ..." message from the DoFn.
The job itself is being executed by the following gcloud command, which has the driver log levels explicitly defined, but perhaps there's another level of logging or configuration that needs to be added:
gcloud dataproc jobs submit spark --jar=gs://some_bucket/deployment/example.jar --project example-project --cluster example-cluster --region us-example --driver-log-levels com.example=DEBUG -- --runner=SparkRunner --output=gs://some_bucket/deployment/out
To summarize, log messages are not being emitted to the Google Cloud Console for tasks that would generally be assigned to the Spark runner itself (e.g. processElement()). I'm unsure if it's a configuration-related issue or something else entirely.
I have a Node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to get the traces in GCP. However, when I run the service on Cloud Run, I am getting an error:
"#google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has the Cloud Trace agent role.
The first line in my app.js is:
require('@google-cloud/trace-agent').start();
Running locally, I am using a .env file containing:
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials in the GCP image.
There are two challenges to using this library with Cloud Run:
Despite the note about auto-detection, Cloud Run is an exception: it is not yet auto-detected. This can be addressed for now with some explicit configuration.
Because Cloud Run services only have CPU resources while they are handling a request, queued-up trace data may not be sent before those resources are withdrawn. This can be addressed for now by configuring the trace agent to flush as soon as possible:
const tracer = require('@google-cloud/trace-agent').start({
serviceContext: {
service: process.env.K_SERVICE || "unknown-service",
version: process.env.K_REVISION || "unknown-revision"
},
flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there are still significant race conditions with CPU withdrawal. I filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.
I want to see ALL queries which are processed by my local MongoDB instance.
I tried setting db.setProfilingLevel(2), but I still only get access information and no queries.
Does anybody know how I can log EVERY query?
db.setProfilingLevel(2) causes the MongoDB profiler to collect data for all operations.
Perhaps you are expecting the profiler docs to turn up in the MongoDB server logs? If so, then bear in mind that the profiler output is written to the system.profile collection in whichever database profiling has been enabled.
More details are in the docs, but the short summary is:
// turn up the logging
db.setProfilingLevel(2)
// ... run some commands
// find all profiler documents, most recent first
db.system.profile.find().sort( { ts : -1 } )
// turn down the logging
db.setProfilingLevel(0)
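As a follow-up sketch (the database and collection names below are placeholders, not anything from your setup), each system.profile document records the operation type, namespace, timestamp and the query shape, so you can filter it to see exactly which queries ran:
// Switch to the database that was profiled; 'mydb' and 'orders' are placeholders.
use mydb

// Show the ten most recent read queries against one collection, newest first,
// including the captured query shape and execution time in milliseconds.
db.system.profile.find({ op: 'query', ns: 'mydb.orders' })
                 .sort({ ts: -1 })
                 .limit(10)
                 .pretty()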
I would like to know if it's possible to disable the CloudWatch metrics reporting that the Kinesis integration in Spark Streaming performs.
Here is the code; I get numStreams using the AmazonKinesisClient API.
// Create the Kinesis DStreams
List<JavaDStream<byte[]>> streamsList = new ArrayList<>(numStreams);
for (int i = 0; i < numStreams; i++) {
streamsList.add(
KinesisUtils.createStream(jssc, kinesisAppName, streamName, endpointUrl, regionName,
InitialPositionInStream.TRIM_HORIZON, kinesisCheckpointInterval,
StorageLevel.MEMORY_AND_DISK_2(), accessesKey, secretKey)
);
}
I tried looking through the API and just couldn't find any reference to disabling the CloudWatch reporting in Spark Streaming's Kinesis support.
Here are the warnings that I am trying to get rid of:
17/01/23 17:46:29 WARN CWPublisherRunnable: Could not publish 16 datums to CloudWatch
com.amazonaws.AmazonServiceException: User: arn:aws:iam:::user/Kinesis_Service is not authorized to perform: cloudwatch:PutMetricData (Service: AmazonCloudWatch; Status Code: 403; Error Code: AccessDenied; Request ID: *****)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1377)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:923)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:453)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:415)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:364)
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.doInvoke(AmazonCloudWatchClient.java:984)
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.invoke(AmazonCloudWatchClient.java:954)
at com.amazonaws.services.cloudwatch.AmazonCloudWatchClient.putMetricData(AmazonCloudWatchClient.java:853)
at com.amazonaws.services.kinesis.metrics.impl.DefaultCWMetricsPublisher.publishMetrics(DefaultCWMetricsPublisher.java:63)
at com.amazonaws.services.kinesis.metrics.impl.CWPublisherRunnable.runOnce(CWPublisherRunnable.java:144)
at com.amazonaws.services.kinesis.metrics.impl.CWPublisherRunnable.run(CWPublisherRunnable.java:90)
at java.lang.Thread.run(Unknown Source)
Preface: I know this is kind of an old question, but I just faced this, so I'm posting a solution for anyone who encounters this issue with Spark <= 2.3.3.
It is possible to disable CloudWatch metrics reporting at the KCL (Kinesis Client Library) level with the withMetrics methods when building the client.
Unfortunately, Spark's KinesisInputDStream does not expose a way to change this setting, and to make things worse, the default level is "DETAILED", which sends tens of metrics every 10 seconds.
The way I disabled it was to provide invalid credentials to KinesisInputDStream's cloudWatchCredentials method, i.e.:
.cloudWatchCredentials(SparkAWSCredentials.builder.basicCredentials("DISABLED", "DISABLED").build())
Then comes the issue of CloudWatchAsyncClient logging a warning at each tick, which I silenced by setting the following in Spark's log4j.properties config:
# Silence the Kinesis metrics package. We intentionally provide wrong
# credentials in order to disable CloudWatch reporting, and the resulting
# bad-credential warnings are logged at WARN level, so only keep errors.
log4j.logger.com.amazonaws.services.kinesis.metrics=ERROR
This will suppress the warnings for the metrics package only (such as the one you mentioned) but will not suppress errors, in case those are still needed.
This is nowhere near an ideal solution, but it allowed us to deploy a fix while keeping our existing Spark version in place.
Next step: open a ticket with Spark so that, hopefully, this can be disabled properly in future versions.
Edit: created https://issues.apache.org/jira/browse/SPARK-27420 for tracking.