I have successfully configured an application that uses log4j for its logging to log to a MySQL database (using org.apache.log4j.jdbc.JDBCAppender).
I also have some Perl applications that log to the same database. My Perl apps are set up so that the name of the database table changes every month (log_2010_11, log_2010_10, etc.). At the end of each month, I run reporting scripts on the month just completed, dump the table to an external file (which gets compressed and archived), and then drop the table. This way the total size of the logging database stays within sensible limits.
I would like to do the same with log4j, but there does not appear to be a log4j appender suitable for the purpose.
Is it possible to do something like this:
log4j.appender.SQ=org.apache.log4j.jdbc.JDBCRollingAppender
log4j.appender.SQ.Driver=com.mysql.jdbc.Driver
log4j.appender.SQ.URL=jdbc:mysql://localhost:3306/logs_{%year}_{%month}
Thank you.
I figured out how to do this:
log4j.appender.SQ=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.SQ.Driver=com.mysql.jdbc.Driver
log4j.appender.SQ.URL=jdbc:mysql://localhost:3306/logs
log4j.appender.SQ.sql=INSERT INTO accesslog_%d{yyyy_MM} (date, time, tz, ...
It appears you can just put date format strings into the SQL statement, and JDBCAppender will expand them and log into the corresponding table.
However, it will not create new tables at the start of the new month, so currently I have to manually create the tables beforehand, which is far from ideal.
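One way around that limitation, as a sketch only, is a small job run from cron shortly before the month rolls over. It assumes a MySQL template table named accesslog_template with the same columns the appender writes to, and the mysql-connector-python package; all names and credentials are placeholders:

# create_next_log_table.py - hypothetical helper, run from cron near the end of each month.
import datetime
import mysql.connector

# Work out the first day of next month and derive the table name the appender will use.
today = datetime.date.today()
first_of_next = (today.replace(day=1) + datetime.timedelta(days=32)).replace(day=1)
table_name = first_of_next.strftime("accesslog_%Y_%m")

conn = mysql.connector.connect(host="localhost", user="loguser",
                               password="secret", database="logs")
cur = conn.cursor()
# CREATE TABLE ... LIKE copies the column definitions and indexes of the template table.
cur.execute("CREATE TABLE IF NOT EXISTS {} LIKE accesslog_template".format(table_name))
conn.commit()
conn.close()
print("Ensured table exists:", table_name)

With the table guaranteed to exist ahead of time, the %d{yyyy_MM} pattern in the appender's SQL always has somewhere to write.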
You'd have to write your own appender to do this.
Another option would be to stay with the existing appender and do this:
You have a table in your database named log. Why not make a Perl script that creates a new table at the end of every month, let's say log_12 for December, copies everything from log to log_12, and then deletes everything from log? That way you don't have to mess around with making another appender.
How about a script that runs monthly and dumps that particular table into a backup file, then zips it for archiving? Once complete, truncate the table or delete the rows within the date range.
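A rough sketch of that idea in Python (the table naming, credentials, and the assumption that mysqldump is on the PATH are all hypothetical):

# archive_log_table.py - hypothetical monthly archive job (cron or Task Scheduler).
import datetime
import gzip
import shutil
import subprocess
import mysql.connector

# The table for the month that has just ended.
last_month = datetime.date.today().replace(day=1) - datetime.timedelta(days=1)
table = last_month.strftime("accesslog_%Y_%m")
dump_file = table + ".sql"

# 1. Dump the table to a plain SQL file.
with open(dump_file, "wb") as out:
    subprocess.run(["mysqldump", "-u", "loguser", "-psecret", "logs", table],
                   stdout=out, check=True)

# 2. Compress the dump for archiving.
with open(dump_file, "rb") as src, gzip.open(dump_file + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# 3. Empty the table now that it is safely archived.
conn = mysql.connector.connect(host="localhost", user="loguser",
                               password="secret", database="logs")
conn.cursor().execute("TRUNCATE TABLE " + table)
conn.close()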
Our attachments have been stored in our database for a very long time, and it has caused the database to become huge and our backups to be extremely unreliable. We have moved our attachments to the file system and that shaved off a great deal of size.
Now our largest table is CMS_AttachmentHistory. I've been able to test brute-force deleting every row in SQL (and every row in the CMS_VersionAttachment junction table), but is there a way to accomplish this in the Kentico Admin GUI without having to resort to that?
When I say brute force delete, I mean:
DELETE FROM dbo.CMS_VersionAttachment
DELETE FROM dbo.CMS_AttachmentHistory
There is an option in the GUI that will do this, but it will also impact the page version history. If you go to Settings > Content > Content management and look in the Workflow section, you can see a setting named Version history length. Reducing this to a lower number (I believe 20 is the default) will trim the stored version history to the new value by deleting the unneeded rows.
This will affect all version history, though: not just the attachments, but also the pages themselves. That being the case, you would need to decide whether you need or want to keep the version history of the pages.
If you don't want to lose that history, then I'd say a good option would be to write a script that sets the AttachmentBinary column to null for the records you don't need or want. (Given that you say you now store the files on the filesystem, any current versions will have the correct value, so this is probably all of them.)
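For illustration only, a minimal sketch of such a script in Python with pyodbc; the driver name, server, and credentials are assumptions, and you would want to test it against a backup first:

# null_attachment_binaries.py - hypothetical cleanup sketch; run against a backup first.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Kentico;UID=sa;PWD=secret"
)
cur = conn.cursor()
# Free the space held by historical attachment binaries while keeping the history rows.
cur.execute("UPDATE dbo.CMS_AttachmentHistory SET AttachmentBinary = NULL "
            "WHERE AttachmentBinary IS NOT NULL")
print(cur.rowcount, "rows updated")
conn.commit()
conn.close()

The WHERE clause simply avoids touching rows that are already null.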
I'm not sure about 8.2, but you can try experimenting with the recycle bin for objects. There are a couple of topics on this: topic 1 and topic 2. I just checked: it puts the attachment there every time you delete one, even though I have the "store files on disk" setting enabled. You can do as Kentico recommends and set the binary field to null, or write a script using the API.
I am fairly new to Alteryx.
I would like to create a process/workflow in Alteryx that imports a file from a specified location, where the file to import is controlled by input parameters.
Kindly help me with this.
Thanks,
RTJ
You can connect an Interface Tool: File Browse, via an Interface Tool: Action, to change the file selected in an In/Out: Input Data tool.
You will then want to run the workflow using Run As Analytic App.
In the Developer tool category, you'll find the Dynamic Input tool. This works much like the standard Input Data tool, but it can take incoming records to modify the data it collects.
https://help.alteryx.com/2018.2/DynamicInput.htm
It sounds like you have files in a standard location, but want to be able to dynamically select the ones to load.
Let's say you have a collection of sales files in the format "Sales_20190718.csv" but want to only get sales information for certain dates as specified in your workflow. You can point your Dynamic Input tool to the Sales_20190718.csv, and have it replace the "20190718" part with whatever input you gave to the tool before querying the information.
You could get a similar result by using wildcards in a basic Input Data tool by pulling data from "Sales_*", and ticking the "Output File Name as Field" box. This would load all your sales data (which could take some time) but then you could filter to the relevant files using the new FileName field.
I'm currently maintaining some old Excel files (in the new .xlsx format) with 60+ connections each that feed various tables from a SQL Server database.
I'm in search of some kind of toolkit, module, or standalone script (or anything, really) that lets me bulk-change the command text inside each connection's properties.
The change should be no harder than altering part of the table name, as the new table contains only the information that is actually needed.
So far, the only things that come near what I need are those Python modules, but they don't appear to implement anything for handling connections.
Thank you in advance for any help you can provide.
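For what it's worth, since an .xlsx file is just a ZIP archive, one possible workaround is to rewrite the xl/connections.xml part (where SpreadsheetML keeps connection definitions, including the command text) directly with the standard library. This is only a sketch; the file names, table names, and the assumption that a plain string replacement is safe for your command text are all hypothetical:

# bulk_edit_connections.py - hypothetical sketch: rewrite xl/connections.xml inside an .xlsx.
# Always work on a copy of the workbook.
import zipfile

def replace_command_text(xlsx_path, out_path, old_text, new_text):
    with zipfile.ZipFile(xlsx_path) as src, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "xl/connections.xml":
                # Swap the old table name for the new one in every connection's command text.
                data = data.replace(old_text.encode("utf-8"),
                                    new_text.encode("utf-8"))
            dst.writestr(item, data)

# Example: point every connection at the new table name.
replace_command_text("report.xlsx", "report_fixed.xlsx",
                     "dbo.SalesFull", "dbo.SalesTrimmed")

Opening the result in Excel and refreshing one connection is a quick way to confirm the edit took.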
I am trying to find out if it is possible to lock an rptdesign file.
The idea is to run a report as a service, but without the default parameters being changeable. I know I could just hide the parameter window, but the user could still edit the rptdesign file and hard-code new values.
Does anyone have any previous experience with this?
Is it possible to make an rptdesign file non-editable?
If you want to prevent users from modifying the rptdesign file, you should do it at the OS level, allowing write access only for certain users.
If you want to ensure that the report has not been modified, you can add a hidden field storing the MD5 sum of the report file on disk. You can then compare it with your original sum.
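For reference, computing that checksum is a one-liner; a minimal Python sketch (the file name is just an example):

# md5_of_report.py - compute the MD5 sum of an rptdesign file for later comparison.
import hashlib

with open("sales_report.rptdesign", "rb") as f:
    print(hashlib.md5(f.read()).hexdigest())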
Anyway, your problem is slightly different: you are expecting certain data from your customers and you don't want to be cheated by them. You can use the MD5 sum method, but this is really a matter of trust, or of finding another way to access their data than through the report (e.g. they could give you direct access to the database, or you could agree to store this data at your company, not theirs).
Let me know if this answer helps you better.
I have to run almost 50 queries daily for daily reports and copy and paste the data into Excel sheets. Is there a way to schedule a job in SQL Developer that exports the data from all the queries into an Excel workbook?
You could link the Excel spreadsheets to your queries so they automatically update themselves.
Insert > Data from External Source. I do this with SQL Server a lot, and you can do it with Oracle too if you know the connection strings.
I would comment, but I don't have the rep yet.
I would advise using your operating system to schedule the task. Assuming that this is Windows (as you want to write to Excel), you can use Task Scheduler to set off a cmd script or PowerShell script which calls SQLPLUS, passing in a parameter for the .sql file that you wish to run. It would not be too difficult to output this to a CSV file which can be opened in Excel. If you actually need to write the data to an .xlsx (or similar) file then there are options (e.g. Python libraries that can do this), but it will not be as straightforward.
I am not sure exactly what part of this you need help with, so can I suggest that you consider the steps below? If you want to proceed, do some research and have an attempt at each step, then post a question for each one that you are stuck on, with details of what you have tried:
Schedule a job from your operating system;
Write a script to call SQLplus and execute a .sql file;
Change the query output to CSV and redirect it to a file (or find a way to write directly to an Excel file if this is what you need to do); a rough sketch of that last option follows below.
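If you really need an .xlsx rather than CSV, here is a rough sketch of the Python route mentioned above. The cx_Oracle and openpyxl packages, the connection details, and the queries are all assumptions, not a definitive implementation:

# queries_to_excel.py - hypothetical sketch: run several queries and write one workbook.
# Schedule it with Task Scheduler; adjust the connection string and queries to your own.
import cx_Oracle
import openpyxl

QUERIES = {                       # sheet name -> query (examples only)
    "daily_orders": "SELECT * FROM orders WHERE order_date = TRUNC(SYSDATE)",
    "daily_errors": "SELECT * FROM error_log WHERE log_date = TRUNC(SYSDATE)",
}

conn = cx_Oracle.connect("report_user", "secret", "localhost/XEPDB1")
wb = openpyxl.Workbook()
wb.remove(wb.active)              # drop the default empty sheet

for sheet_name, sql in QUERIES.items():
    cur = conn.cursor()
    cur.execute(sql)
    ws = wb.create_sheet(sheet_name)
    ws.append([col[0] for col in cur.description])   # header row from the column names
    for row in cur:
        ws.append(list(row))
    cur.close()

wb.save("daily_report.xlsx")
conn.close()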