(NLog) When to call ReconfigExistingLoggers?

What types of changes require a call to ReconfigExistingLoggers?
In my particular use case, I load everything from a config file and then:
Delete a Target
Delete a rule that would have logged to the target
This seems to work without me calling ReconfigExistingLoggers, but I wanted to be sure I wasn't missing anything.
Additionally, I'm considering a refactor that would use a variable. This means I would have a Target that uses a variable and a single rule that logs to that Target. At runtime, I would set/update that variable.
Does that require a call to ReconfigExistingLoggers?
My specific use case is around the Syslog target:
When my software starts up, it needs to decide whether to log to SyslogServerA or SyslogServerB. My current approach is:
Configure both servers in my Config file
Configure rules to log to both servers in my Config file
At startup, determine the server I should log to
Remove the Target and Rule for the other one
I can think of several ways to achieve my end goal of only logging to a single syslog server, but I'm not sure which way is best.
For what it's worth: If I have both Targets and both Rules active, I have a runaway memory problem that builds over time. This is why I'm actively disabling an unused UDP Syslog target.

LogManager.ReconfigExistingLoggers() should be called explicitly after adding/updating/removing LoggingRules (like a commit operation). It refreshes the configuration of all active Logger objects. It also performs synchronous initialization of any new NLog targets, so when the call has completed, all changes have been applied.
NLog has the following method to remove an existing target from a configuration:
LoggingConfiguration.RemoveTarget - it will automatically call ReconfigExistingLoggers (along with removing LoggingRules that become empty).
NLog supports changing the LoggingConfiguration while the application is running (e.g. adding/removing LoggingRules and Targets). But it is recommended to register all NLog targets upfront and then use semi-dynamic filtering to enable/disable output to the relevant NLog targets (note that minLevel="Off" means output to that target is disabled).
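A minimal C# sketch of both approaches, assuming a target named "syslogB" (the name is illustrative; requires using NLog; using NLog.Config; using System.Linq):

// Option 1: remove the unused target. RemoveTarget also drops
// LoggingRules that become empty and calls ReconfigExistingLoggers for you.
var config = LogManager.Configuration;
config.RemoveTarget("syslogB");

// Option 2: keep both targets registered, but turn off every rule that
// writes to the unused one, then commit with ReconfigExistingLoggers.
foreach (var rule in config.LoggingRules)
{
    if (rule.Targets.Any(t => t.Name == "syslogB"))
        rule.DisableLoggingForLevels(LogLevel.Trace, LogLevel.Fatal);
}
LogManager.ReconfigExistingLoggers();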

Related

Trigger Logic App only once when FTP files are changed or modified

I am using a Logic App to detect any change in an FTP folder. There are 30+ files, and whenever there is a change the storage copies the file to blob. The issue is that it fires per file: if 30 files are changed, it will fire 30 times. I want it to fire only once no matter how many files in the folder changed. After the blobs are copied, I fire a GET request so that my website is updated as well. Am I using the wrong approach?
Below you can see my whole logic.
Your question says you are using the FTP connector, but your screenshot (which shows the Include file content property on the trigger) suggests you are actually using the SFTP-SSH connector trigger, as the FTP trigger doesn't have that property. Please correct me if my understanding is wrong.
If you are using the When a file is added or modified trigger, it will fire your workflow once for every file that is added or modified; that is its expected behavior.
But if you are using the When files are added or modified (properties only) trigger, it has a Split On setting, which you can disable (it is enabled by default) so that your workflow executes only once for all the files added or modified within the polling interval configured in the How often do you want to check for the item? property.
If it is the FTP connector, the same advice holds: disable the Split On property. For more details refer to this section.
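For context, the Split On toggle maps to the splitOn property in the workflow's code view; a rough sketch of what the trigger definition looks like while Split On is still enabled (the trigger name and recurrence values are illustrative, and the exact shape may vary):

"triggers": {
    "When_files_are_added_or_modified_(properties_only)": {
        "type": "ApiConnection",
        "splitOn": "@triggerBody()",
        "recurrence": {
            "frequency": "Minute",
            "interval": 3
        }
    }
}

Disabling Split On removes the "splitOn" line, so a single run receives the whole array of changed files.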

Spring integration: download file for each update on it from FTPS server

I am using Spring Integration.
On the remote FTPS server, the same file is updated every time; the source system is not prepared to create a new file for each update.
For every update of the file, I need to download it and process it.
I need help creating a filter that listens for every file update.
Use an FtpPersistentAcceptOnceFileListFilter in the filter attribute and a FileSystemPersistentAcceptOnceFileListFilter in the local-filter attribute.
These filters, as well as persisting state so it lives beyond the current execution, also compare the modified time.
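A rough XML sketch of where those two filters plug in (namespace declarations omitted; the metadataStore bean, directories, and channel names are assumptions):

<bean id="acceptOnceFilter"
      class="org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore"/>
    <constructor-arg value="ftpFiles:"/>
</bean>

<bean id="localFilter"
      class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore"/>
    <constructor-arg value="localFiles:"/>
</bean>

<int-ftp:inbound-channel-adapter channel="ftpChannel"
        session-factory="ftpsSessionFactory"
        remote-directory="/remote"
        local-directory="ftp-in"
        filter="acceptOnceFilter"
        local-filter="localFilter">
    <int:poller fixed-rate="5000"/>
</int-ftp:inbound-channel-adapter>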
However, you need to be very careful updating a file server-side, rather than replacing it with a new one for each update. It is entirely possible (and even likely) to fetch a partially updated file (fetch the file before the writer has finished updating it).

Modules initialization in node.js and persisting their state between requests

My scenario: I am going to load a small amount of configuration data, plus some rarely changing data, from the database into, say, exports.config, which I want to use instead of a config file so that the app admin (not a sysadmin :) can configure the application via a web interface. I want to make sure that this data will not be reloaded every time the module is require'd.
Am I right to assume that whatever [initialization] code I have in node.js module (outside of functions definitions) it will be executed only once per process lifetime, regardless how many times I require this module?
Probably a stupid question, but I am struggling to understand some aspects of how node.js functions.
Yes.
The file/module can be required many times per process lifetime, but will be executed only once. At least by default.
This works out nicely for you because you can simply query your config table once at app initialization and the exported values will be constant until the app is restarted.
From the Node.js module caching docs:
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.
If you want to have a module execute code multiple times, then export a function, and call that function.
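A minimal sketch of that pattern (the db module and its querySync call are hypothetical stand-ins for your database access):

// config.js
const db = require('./db'); // hypothetical database module

// Runs exactly once per process, when the module is first required.
exports.config = db.querySync('SELECT key, value FROM app_config');

// elsewhere in the app
const { config } = require('./config');   // first require: executes config.js
const again = require('./config').config; // cached: same object, no re-query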

How to set defaults for perforce client specs

I'm trying to discover how to change the default set of Client Spec options and submit-options.
set P4CLIENT=my_new_client_1
p4 client
Gives me the following default spec:
Client: my_new_client_1
...
Options: noallwrite noclobber nocompress unlocked nomodtime normdir
SubmitOptions: submitunchanged
...
Now, on my machine, I always want to use revertunchanged and rmdir, for example, but it seems like I need to remember to set these manually every time I create a new client.
Is there any way to achieve this? p4 set seems to only affect the things that can be set by environment variables.
You can't change the default client spec template (unless you're the Perforce system administrator) but you can set up and use your own template. You would first create a dummy client with a client spec that has the values that you want:
Client: my_template_client
...
Options: noallwrite noclobber nocompress unlocked nomodtime rmdir
SubmitOptions: revertunchanged
...
Then you just specify that the dummy client should be used as a template when creating new clients:
p4 client -t my_template_client my_new_client_1
The first response here was incorrect:
You CAN create a default clientspec in Perforce using triggers.
Essentially, you create a script that runs on the server whenever someone does a form-out on the client form. This script has to check whether the clientspec already exists, and substitute a sensible "default" if it doesn't (i.e. if it's a new clientspec).
Note that this works well, and it's even in the P4 SysAdmin Guide (the exact example you're looking for is there!), but it can be a bit difficult to debug, as triggers run on the SERVER, not on the client!
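For reference, such a trigger is registered with p4 triggers as a form-out entry along these lines (the trigger name, script path, and interpreter are illustrative):

Triggers:
	ws-defaults form-out client "python /p4/triggers/set_ws_defaults.py %formfile%"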
Manual:
http://www.perforce.com/perforce/r10.1/manuals/p4sag/06_scripting.html
Specific Case Example:
http://www.perforce.com/perforce/r10.1/manuals/p4sag/06_scripting.html#1057213
The Perforce Server Deployment Package (SDP), a reference implementation with best practices for operating a Perforce Helix Core server, includes sample triggers for exactly this purpose. See:
SetWsOptions.py - https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/main/Server/Unix/p4/common/bin/triggers/SetWsOptions.py
SetWsOptionsAndView.py - https://swarm.workshop.perforce.com/projects/perforce-software-sdp/files/main/Server/Unix/p4/common/bin/triggers/SetWsOptionsAndView.py
Using p4 client -t <template_client> is useful and is something a regular user can do; it has a P4V (graphical user interface) equivalent as well. Only Perforce super users can mess with triggers.
There is one other trick for a super user to be aware of: They can designate a client spec to be used as a default if the user doesn't specify one with -t <template_client>. That can be done by setting the configurable template.client. See: https://www.perforce.com/manuals/cmdref/Content/CmdRef/configurables.configurables.html#template.client
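For example, reusing the my_template_client example from the earlier answer (this must be run as a super user):

p4 configure set template.client=my_template_client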
One other suggestion: consider changing the default from submitunchanged to leaveunchanged rather than revertunchanged (as in the sample triggers above). leaveunchanged is better because, if you still want the file checked out, it saves you from having to navigate back to the file and check it out again. If you do want to revert the unmodified file, reverting is slightly easier than checking it out again, which might require more navigating or typing.

selecting a log file

We have multiple log files like database log, weblog, quartzlog in our application.
Any log from files under the package /app/database goes to the database log.
Any log from files under the package /app/offline goes to the quartzlog log.
What we need now is to direct the log statements from one of the Java files under /app/database to quartzlog instead of the database log.
How can we select a particular log file from within a Java file?
You need to define the appropriate appender that logs in the desired file. Read this short introduction to see how you can do it.
Then in the configuration file, you can instruct all messages from a specific package to go in the selected appender:
log4j.logger.my.package = DEBUG, myFileAppender
EDIT:
I believe that in log4j only package-level resolution is possible; you can't use an appender per file or method. You could work around this by adding an extra layer on top of log4j or by implementing your own appender.
For example, instead of log.debug use:
my.loggerproxy.log.debug(message);
If you only need to do it from a single method, then the above will be enough. Just instruct the logger proxy package to be logged in a different file.
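A rough sketch of what such a proxy could look like (the class name and the quartzAppender name are illustrative):

// my/loggerproxy/LoggerProxy.java
package my.loggerproxy;

import org.apache.log4j.Logger;

public class LoggerProxy {

    // Named under my.loggerproxy, so a configuration line such as
    // log4j.logger.my.loggerproxy = DEBUG, quartzAppender
    // routes everything logged here to the quartz log file.
    private static final Logger log = Logger.getLogger(LoggerProxy.class);

    public static void debug(Object message) {
        log.debug(message);
    }
}

The one class under /app/database that should log to quartzlog then calls my.loggerproxy.LoggerProxy.debug(message) instead of its own log.debug(message).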
