I'm working on a project where we want to handle our logging with log4j, and I'm running into some issues that I haven't been able to resolve from the log4j docs or other documentation online.
I get the basic idea of putting logging code throughout the codebase and then having the properties file route the logged data into a hierarchy of appenders, and how to write out to a file. That's fine. It lets me create greppable log files in one hard-coded folder, like this:
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=example.log
But I have two basic questions. First, I want the log location to be dynamic, such as:
log4j.appender.R.File={$processDir}/example.log
Also, every time the user runs this app, a folder is created to hold the output files. I would like the log file to be placed there too, and I'm not sure how to accomplish that.
The other issue (though I think it will be much easier to solve once the first one is addressed) is creating a formatted log that does not necessarily reflect the order in which the app ran: for example, a title, followed by a list of all input files, a list of all output files, and any warnings encountered.
I think for that I would create an object that implements ObjectRenderer and write a doRender method that gives me the info I want.
Does that sound correct?
Thanks!
You can use a variable with this syntax:
log4j.appender.R.File=${processDir}/example.log
You must define the variable as a system property (e.g. -DprocessDir=...) or set it manually (after creating the folder) with:
System.setProperty("processDir",logDir);
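Putting it together, here's a minimal sketch (log4j 1.x). The key point is that the property must be set before log4j parses the configuration, so configure it explicitly after creating the per-run folder. The folder-naming scheme and the properties-file path below are assumptions, not anything log4j requires:

import java.io.File;
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class LoggingBootstrap {
    public static void main(String[] args) {
        // Hypothetical per-run folder; use whatever scheme your app already has.
        String logDir = "runs/run-" + System.currentTimeMillis();
        new File(logDir).mkdirs();                          // create the output folder first
        System.setProperty("processDir", logDir);           // makes ${processDir} resolvable
        PropertyConfigurator.configure("log4j.properties"); // parse the config *after* the property is set
        Logger.getLogger(LoggingBootstrap.class).info("Logging to " + logDir);
    }
}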
In my Karate tests I need to write response IDs to txt files (or any other file format such as JSON). I was wondering if Karate has any capability to do this; I haven't seen it in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also, the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
Here is an example.
EDIT: for others coming across this answer in the future, the right thing to do is:
- Don't write files in the first place. You never need to do this; this question is typically asked by inexperienced folks who think the only way to "save" a response before validation is to write it to a file. Please don't waste your time: just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses in the log file (typically target/karate.log) and also in the HTML report.
- See if karate.write() works for you as per this answer.
- Write a custom Java helper (or a JS function that uses the JVM) to do what you want using Java interop, as sketched below.
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
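For the Java-interop option, a minimal sketch. FileUtil here is a hypothetical helper class you would put on the test classpath; it is not part of Karate:

package demo;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileUtil {
    // Write plain text to the given path, creating parent folders as needed.
    public static void writeText(String path, String content) throws IOException {
        Files.createDirectories(Paths.get(path).toAbsolutePath().getParent());
        Files.write(Paths.get(path), content.getBytes(StandardCharsets.UTF_8));
    }
}

Then from a feature file you can call it via Karate's documented Java interop (responseId is an assumed variable holding the value you extracted):

* def FileUtil = Java.type('demo.FileUtil')
* eval FileUtil.writeText('target/ids.txt', responseId)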
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system sends some basic data in a JSON payload via POST/PUT; Karate then constructs the resulting payload file and stores it in a specific folder, and this newly created payload file is exposed through another GET call.
So, there is this website where I have to log in and insert values in the Add content -> Person roles form, taking the values from an Excel file. I tried entering the values into the database directly but got nowhere; the database structure looks too randomly generated.
I want to know how to approach this problem. I think Python would be the best way, but I am more comfortable with Java. The images below will help explain the situation:
The login form:
The form to be filled:
Try using the Feeds module:
https://www.drupal.org/project/feeds
Install it on your site first, of course. Or look for some similar import module. Maybe this one:
https://www.drupal.org/project/datasources
If neither succeeds, try writing an import script of your own. You have to parse the document (it would be much easier to open it in Excel and export it as CSV if possible: http://php.net/manual/en/function.fgetcsv.php) and loop over the rows, writing the content into the Drupal system. Use Drupal's functions for that; do not write to the database directly. It's not as hard as it looks:
https://www.drupal.org/node/1388922
I've worked through the "Integrating Data" guide on the Spring website and have been trying to determine how to use configuration settings (substitution) in the integration.xml file rather than hard-coding various items. This is primarily driven by a desire to externalise some of the configuration from the XML and take advantage of Spring Boot's support for externalised configuration.
I've been trying to find the solution for a while now, and I suspect it's an easy answer for those who know how.
In the snippet below (taken from the guide) I've used ${outputDir} as a placeholder for a configuration item I'll pass into the application:
<file:outbound-channel-adapter id="files"
mode="APPEND"
charset="UTF-8"
directory="${outputDir}"
filename-generator-expression="'HelloWorld'"/>
Essentially, I'm trying to determine what I need to do to get the ${outputDir} substitution working.
As part of working through the problem I reduced the code down to a demo that I've uploaded to BitBucket:
integration.xml will just copy files from a file:inbound-channel-adapter directory to a file:outbound-channel-adapter directory
The Application class uses Spring Boot to load the configuration into a DemoIntegration instance and it's the fields in that instance that I'd like to substitute into integration.xml at runtime.
Unless I'm mistaken (when I get this to work) I should be able to override the inputDir and outputDir items in integration.xml.
Your integration.xml references ${inputDir}, which is not defined anywhere.
Just to make it work with the existing config, add/change the application.properties file on your classpath so it contains inputDir=/tmp/in and a matching outputDir entry. That way the names match the variables used in the config file.
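For instance, a minimal application.properties along those lines (the paths are just placeholders):

inputDir=/tmp/in
outputDir=/tmp/out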
If you want to stick with your naming, then change the XML to use ${demo.inputDir}. These are the names you are using in your existing application.properties.
And if you want to stick with your @ConfigurationProperties, then you can put #{demoConfiguration.inputDir} in the XML to access the bean where your config is stored. Note that your code currently fails (at least for me) because you basically define the bean twice (once via @EnableConfigurationProperties and once via @ComponentScan + @Component on the config class).
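For reference, a minimal sketch of what that @ConfigurationProperties bean could look like; the class name and the demo prefix are assumptions based on the question, and you should keep only one registration mechanism (e.g. drop @Component if you use @EnableConfigurationProperties):

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component // registers the bean as "demoConfiguration" for the #{demoConfiguration.inputDir} SpEL reference
@ConfigurationProperties(prefix = "demo") // binds demo.inputDir / demo.outputDir from application.properties
public class DemoConfiguration {
    private String inputDir;
    private String outputDir;

    // Getters and setters are required for property binding.
    public String getInputDir() { return inputDir; }
    public void setInputDir(String inputDir) { this.inputDir = inputDir; }
    public String getOutputDir() { return outputDir; }
    public void setOutputDir(String outputDir) { this.outputDir = outputDir; }
}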
I would like to know how I can load some files in a specific order. For instance, I would like to load my files according to their timestamp, in order to make sure that subsequent data updates are replayed in the proper order.
Let's say I have two types of files: deal info files and risk files.
I would like to load T1_Info.csv, then T1_Risk.csv, T2_Info.csv, T2_Risk.csv...
I have tried to implement a comparator, as described on Confluence, but it seems that the load instructions file has priority: it orders the info files and the risk files independently (loading T1_Info.csv, T2_Info.csv and then T1_Risk.csv, T2_Risk.csv...).
Do I have to implement a custom file loader, or is this possible using an ActivePivot configuration?
The loading of the files based on load instructions is done in com.quartetfs.tech.store.csv.impl.CSVDataModelFactory.load(List<FileLoadDescriptor>). The FileLoadDescriptor list you receive is created directly from the load instructions files.
What you can do is create a simple instructions file with two entries, one for deal info and one for risk. Your custom implementation of CSVDataModelFactory will then be called with a list of two items. In your custom implementation you scan the directory where the files are, sort them in the order you want them to be parsed, and call super.load() with the list of FileLoadDescriptor you created from the directory scanning; the sorting step is sketched below.
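The sorting itself can be plain Java. Here is a self-contained sketch of one possible ordering, based on the file-name pattern from the question; the helper names are hypothetical, and in practice this comparator would live inside your custom CSVDataModelFactory:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LoadOrder {
    // "T12_Info.csv" -> 12 (the timestamp prefix)
    static int timestamp(String name) {
        return Integer.parseInt(name.substring(1, name.indexOf('_')));
    }
    // Within the same timestamp, load info before risk.
    static int typeRank(String name) {
        return name.contains("_Info") ? 0 : 1;
    }
    public static void main(String[] args) {
        List<String> files = new ArrayList<>(Arrays.asList(
                "T2_Risk.csv", "T1_Risk.csv", "T2_Info.csv", "T1_Info.csv"));
        files.sort(Comparator.comparingInt(LoadOrder::timestamp)
                             .thenComparingInt(LoadOrder::typeRank));
        System.out.println(files); // [T1_Info.csv, T1_Risk.csv, T2_Info.csv, T2_Risk.csv]
    }
}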
If you also want to load files that are placed in this folder later, add a line to your load instructions that matches all files; that will make the super.load() implementation create a directory watcher for them (you may then want to override createDirectoryWatcher() so that it does not watch the files already present in the folder when load is called).