I have a scenario where Jenkins runs Flyway (a DB migration tool, similar to Liquibase) commands to connect to a database and execute SQL.
The log that gets generated contains the JDBC URL string.
This is masked in the Jenkins console output.
But we also redirect the log to a file (to be sent as a mail attachment) in which the URL is not masked, which is a risk.
Is there any way the masking can be applied inside the log file as well?
Or any way to avoid printing the JDBC URL string altogether?
PS: We also use the Logback framework for Flyway logging.
Currently the URL is printed at INFO level. We do not want to turn INFO logging off, because it carries other necessary information.
Actually, once the log file is generated, I just run a sed command to replace the string in the file, after which the log file is sent as an attachment.
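A minimal sketch of that step, assuming the log file is flyway.log and any string starting with jdbc: should be hidden (adjust the pattern and file name to your setup):

# GNU sed: replace every jdbc: URL with a masked placeholder, editing the file in place
sed -i -E 's|jdbc:[^[:space:]]+|jdbc:****|g' flyway.log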
And if you're wondering how to do this on a Windows machine, you could use PowerShell, or the sed command will be available in cmd if Git is installed with the installer option that adds the Unix tools to the command prompt.
I have a pipeline that I run with Nextflow, which is a workflow framework.
It has an option for viewing real-time logs on an HTTP server.
The command to do this is like so:
nextflow run script.nf --with-weblog http://localhost:8891
But I don't see anything when I open my web browser. I set up port forwarding when logging into the Ubuntu instance, and the Python HTTP server seems to work fine.
I will need help in understanding how I can set this up so I can view the logs generated by my script on the URL provided.
Thanks in advance!
In Nextflow you need to be careful with the leading dashes of command-line parameters. Everything starting with two dashes, like --input, will be forwarded to your workflow/processes (e.g. as params.input), while parameters with a single leading dash, like -entry, are interpreted as parameters by Nextflow itself.
It might be a typo in your question, but to make this work you have to use -with-weblog <url> (note that I used only a single dash here).
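Applied to the command from your question, that would be:

nextflow run script.nf -with-weblog http://localhost:8891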
See the corresponding docs for further information on this:
Nextflow is able to send detailed workflow execution metadata and runtime statistics to a HTTP endpoint. To enable this feature use the -with-weblog as shown below:
nextflow run <pipeline name> -with-weblog [url]
However, this might not be the only problem with your setup. You will also have to store or process the webhooks that Nextflow sends to the server.
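One quick, hypothetical way to check whether your endpoint actually accepts the POST requests Nextflow sends (a plain python3 -m http.server, for instance, only handles GET and answers POST with 501) is a curl smoke test:

# POST a dummy JSON payload to the endpoint from the question; an error status means the server cannot receive the webhooks
curl -i -X POST -H 'Content-Type: application/json' -d '{"event":"test"}' http://localhost:8891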
P.S.: since this is already an old question, how did you proceed? Have you solved the issue yourself in the meantime or given up on it?
I want to execute a PHP file located on my Apache server (localhost/remote) from Processing, but without opening the browser. How can I do this? The PHP script just adds data to a MySQL database using values obtained from a GET request. Is this possible? I tried using link("/receiver.php?a=1&b=2") but it opens a web page containing the PHP output.
Ideally such scripts should be generic so they can be used as a utility from both the web and shell scripts. In case you cannot change or modify the script, I would suggest using curl from the terminal or a bash script to make the HTTP request to the given link.
Refer to this solution for how to make a request with curl.
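For example, a minimal sketch that calls the script from the question without opening a browser (assuming it is served at http://localhost/receiver.php):

# fire the same GET request the link() call would have made; -s silences progress output, -o /dev/null discards the page body
curl -s -o /dev/null "http://localhost/receiver.php?a=1&b=2"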
The Azure CLI can stream all logs (both native and custom - basically anything under \Home\LogFiles\) for an Azure App Service:
azure site log tail sitename
And you can apply a filter to the tail:
azure site log tail sitename --filter
But there appears to be no way to stream a specific, custom log file, e.g. \Home\LogFiles\MyCustomLog.log or \Home\LogFiles\MyLogs\MyCustomLog.log.
The azure site log tail command has a --path option, where --path is a directory path under \Home\LogFiles\, but it streams all logs in that directory.
Is there a way to stream just specific custom log files? If not, I can certainly make one subdirectory per-custom log file and use the --path option that way.
As far as I know, you cannot directly stream a specific custom log file via the Azure CLI. Currently, we use the following command to stream logs from the file system:
azure site log tail [options] [name]
and it supports only these two options:
-p, --path: the log path under LogFiles folder
-f, --filter: filter matching line
It just enables us to use -f "xxx" to filter and stream the log lines containing "xxx" from the specified folder. So if you'd like to stream a specific custom log file, as you said, you could create a subfolder for each custom log file. Besides, you could create your own rule when writing logs; for example, you could prepend "MyCustomLog" to each message when writing the custom log, and then use -f to filter for the messages containing that prefix.
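A hypothetical sketch of both workarounds, reusing the folder and prefix names from this thread:

# 1) one subfolder per custom log file, streamed via --path (relative to \Home\LogFiles\)
azure site log tail sitename --path MyLogs
# 2) a fixed prefix written into every message of one log, streamed via --filter
azure site log tail sitename --filter "MyCustomLog"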
I have just started to use logging for my C# application. I am using NLog to log entries to a *.log file, which I view using Notepad++.
I want to try Sentinel. Although I can view the logs in Sentinel, I am not sure about the initial setup steps - do I have to do the following every time I want to start Sentinel to read a log?
Add new logger
Provider registration - NLog viewer
Visualizing the log
Can't I just start Sentinel and choose from a set of configuration files? If I am running two C# applications, one using log4net and another NLog, do I have to go through these steps again instead of just selecting a config file?
Also, what is the purpose of saving a session in Sentinel?
Once you have a session saved in a file - file.sntl - you can instruct Sentinel to pull that session in on startup by supplying the filename on the command line. I have nlog.sntl saved and use the following from a command script:
@echo off
start /B c:\apps\sentinel\sentinel nlog.sntl
I'm sure you'd be able to create a program shortcut with the same information - I just can't be bothered.
I deployed a Node.js app on Google App Engine following this tutorial: https://github.com/GoogleCloudPlatform/appengine-nodejs-quickstart. It was successful, and now I want to check the logs of the Node.js server, like in development from the terminal console. The VMs are managed by Google, but even if I SSH into them I don't know where to look for the logs.
You can read the stdout of the Docker container that your app runs in by doing docker logs <container id> on the VM instance. You can get the container id from docker ps.
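A short sketch of those two steps, run after SSHing into the VM (the container id is whatever docker ps reports for your app):

# list the running containers and note your app container's id
docker ps
# tail that container's stdout; -f keeps following new output
docker logs -f <container id>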
No need to SSH into the instance though. You can simply fetch the logs from the Developers Console under Monitoring > Logs.
As @tamberg mentioned in a comment, the easiest option I've found for looking at logs produced by Google App Engine instances running Node.js is to simply use the log viewer at:
https://console.cloud.google.com/logs/viewer?resource=gae_app
Detailed instructions from https://cloud.google.com/appengine/docs/standard/nodejs/building-app/viewing-service-logs are as follows:
Open the Logs Viewer in the GCP Console: https://console.cloud.google.com/logs/viewer?resource=gae_app
In the first filter dropdown at the top of the page, ensure GAE Application is selected, and choose Default Service.
Use the second filter dropdown to select only stdout and click OK. This limits the viewer to logs sent to standard output.
Use the text field above the dropdowns to search for the name you used when you submitted your form. You should see the logs corresponding to your submissions.
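If you would rather stay in the terminal, the gcloud CLI can also tail these logs (this assumes the Cloud SDK is installed and authenticated; default is the service name used in the steps above):

# stream the logs of the default App Engine service to the terminal
gcloud app logs tail -s default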
The default logging is really awful. None of my console.log messages show up! There are a few ways you can fix this.
1) Write logs to a log file.
For example, /var/log/app_engine/custom_logs/applogs.log
https://cloud.google.com/appengine/articles/logging
"Cloud Logging and Managed VMs apps Applications using App Engine
Managed VMs should write custom log files to the VM's log directory at
/var/log/app_engine/custom_logs. These files are automatically
collected and made available in the Logs Viewer. Custom log files
must have the suffix .log or .log.json. If the suffix is .log.json,
the logs must be in JSON format with one JSON object per line. If the
suffix is .log, log entries are treated as plain text."
2) Use winston with winston-gae.
Create a transport that will send the logs to App Engine.
3) Use the gcloud-logging module
Too verbose for my liking, but it is another option.