I need log4net to create date-wise log files named like "Application.20130125.txt" (format "Application.yyyyMMdd.txt") at the time the log file is created, not only when the date rolls over at midnight. When the date changes, it should create a new log file named "Application.20130126.txt".
Could you please suggest what changes need to be made to the code below so that the date is part of the file name from the start, and so that rolling creates a new file such as "Application.20130126.txt"?
LogPath = "C:\Logs\"
fileName = "ApplicationName" & "..txt"
hierarchy = DirectCast(LogManager.GetRepository(), Hierarchy)
patternLayout.ConversionPattern = "%m%n"
patternLayout.ActivateOptions()
roller.Layout = patternLayout
roller.RollingStyle = RollingFileAppender.RollingMode.Date
roller.DatePattern = "yyyyMMdd"
roller.AppendToFile = True
roller.StaticLogFileName = True
roller.File = LogPath & fileName
roller.PreserveLogFileNameExtension = True
roller.ActivateOptions()
hierarchy.Root.AddAppender(roller)
hierarchy.Root.Level = Level.Debug
hierarchy.Configured = True
log = LogManager.GetLogger("RollingFileAppender")
Please find my sample below that I have used; it writes to the file as you need:
Dim fileappender = New log4net.Appender.RollingFileAppender()
fileappender.AppendToFile = True
fileappender.Threshold = log4net.Core.Level.Debug
fileappender.File = "MyLogFile_"
fileappender.DatePattern = "yyyyMMdd"
fileappender.StaticLogFileName = False
fileappender.Layout = New log4net.Layout.SimpleLayout()
fileappender.RollingStyle = log4net.Appender.RollingFileAppender.RollingMode.Date
fileappender.ActivateOptions()
DirectCast(log4net.LogManager.GetRepository(), log4net.Repository.Hierarchy.Hierarchy).Root.AddAppender(fileappender)
log4net.Config.BasicConfigurator.Configure(fileappender)
The above creates a log file named "MyLogFile_20130125" today,
and once the date changes it writes to a new file, "MyLogFile_20130126".
You can alternatively test it by setting the DatePattern to "yyyyMMddhhmm", which writes a new log file every minute.
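For comparison only, the same naming scheme (date stamped into the file name at creation time, a new name once the date changes) can be sketched with Python's stdlib logging; this is not log4net, just an illustration of the pattern, and the `make_dated_handler` helper is a hypothetical name:

```python
import datetime
import logging

def make_dated_handler(base="Application", directory="."):
    # Build the file name with today's date at creation time,
    # e.g. Application.20130125.txt -- not only after the first rollover.
    # Calling this again after midnight yields the next day's file name.
    stamp = datetime.date.today().strftime("%Y%m%d")
    path = "%s/%s.%s.txt" % (directory, base, stamp)
    return path, logging.FileHandler(path)

path, handler = make_dated_handler()
log = logging.getLogger("datedemo")
log.addHandler(handler)
log.warning("hello")
```

A real implementation would also re-create the handler when the date changes, which is what log4net's Date rolling style does for you.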
I need to optimize the import of .xls files into MATLAB, because xlsread is very time-consuming with a large number of files. My current xlsread script is as follows:
scriptName = mfilename('fullpath');
[currentpath, filename, fileextension]= fileparts(scriptName);
xlsnames = dir(fullfile(currentpath,'*.xls'));
xlscount = length(xlsnames);
xlsimportdata = zeros(7,6,xlscount);
for k = 1:xlscount
    xlsimport = xlsread(xlsnames(k).name,'D31:I37');
    xlsimportdata(:,1:size(xlsimport,2),k) = xlsimport;
end
I have close to 10k files per week that need processing, and at approx. 2 seconds per file on my current workstation, that comes in at about 5½ hours.
I have read that ActiveX can be used for this purpose; however, that is far beyond my current programming skills, and I have not been able to find a solution elsewhere. Any help on how to do this would be appreciated.
If it is simple to do with ActiveX (or another proposed method), I would also be interested in the data in cells D5 and G3, which I am currently grabbing from 'xlsnames(k,1).name' and 'xlsnames(k,1).date'.
EDIT: updated to reflect the solution
% Get path to .m script
scriptName = mfilename('fullpath');
[currentpath, filename, fileextension]= fileparts(scriptName);
% Generate list of .xls file data
xlsnames = dir(fullfile(currentpath,'*.xls'));
xlscount = length(xlsnames);
SampleInfo = cell(xlscount,2);
xlsimportdata = cell(7,6,xlscount);
% Define xls data ranges to import
SampleID = 'G3';
SampleRuntime = 'D5';
data_range = 'D31:I37';
% Initiate progression bar
h = waitbar(0,'Initiating import...');
% Start actxserver
exl = actxserver('excel.application');
exlWkbk = exl.Workbooks;
for k = 1:xlscount
    % Restart actxserver every 100 loops due to limited system memory
    if mod(k,100) == 0
        exl.Quit
        exl = actxserver('excel.application');
        exlWkbk = exl.Workbooks;
    end
    exlFile = exlWkbk.Open([currentpath filesep xlsnames(k).name]);
    exlSheet1 = exlFile.Sheets.Item('Page 0');
    rngObj1 = exlSheet1.Range(SampleID);
    xlsimport_ID = rngObj1.Value;
    rngObj2 = exlSheet1.Range(SampleRuntime);
    xlsimport_Runtime = rngObj2.Value;
    rngObj3 = exlSheet1.Range(data_range);
    xlsimport_data = rngObj3.Value;
    SampleInfo(k,1) = {xlsimport_ID};
    SampleInfo(k,2) = {xlsimport_Runtime};
    xlsimportdata(:,:,k) = xlsimport_data;
    % Progression bar updater
    progress = round((k / xlscount) * 100);
    importtext = sprintf('Importing %d of %d', k, xlscount);
    waitbar(progress/100,h,importtext);
    disp(['Import progress: ' num2str(k) '/' num2str(xlscount)]);
end
%close actxserver
exl.Quit
% Close progression bar
close(h)
Give this a try. I am not an ActiveX Excel guru by any means. However, this works for me with my small number of test XLS files (3). I never close exlWkbk, so I don't know if memory usage is building or if it is automatically cleaned up when descoped after the next one is opened in its place ... so use at your own risk. I am seeing an almost 2.5x speed increase, which seems promising.
>> timeit(@getSomeXLS)
ans =
    1.8641
>> timeit(@getSomeXLS_old)
ans =
    4.6192
Please leave some feedback if this works on a large number of Excel sheets, because I am curious how it goes.
function xlsimportdata = getSomeXLS()
scriptName = mfilename('fullpath');
[currentpath, filename, fileextension]= fileparts(scriptName);
xlsnames = dir(fullfile(currentpath,'*.xls'));
xlscount = length(xlsnames);
xlsimportdata = zeros(7,6,xlscount);
exl = actxserver('excel.application');
exlWkbk = exl.Workbooks;
dat_range = 'D31:I37';
for k = 1:xlscount
    exlFile = exlWkbk.Open([currentpath filesep xlsnames(k).name]);
    exlSheet1 = exlFile.Sheets.Item('Sheet1'); % Whatever your sheet is called.
    rngObj = exlSheet1.Range(dat_range);
    xlsimport = cell2mat(rngObj.Value);
    xlsimportdata(:,:,k) = xlsimport;
end
exl.Quit
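As a quick sanity check on the numbers above, the timeit results and the question's figures (~10k files per week at ~2 s per file) work out like this:

```python
old_t = 4.6192   # timeit result for getSomeXLS_old (seconds, 3 files)
new_t = 1.8641   # timeit result for getSomeXLS (seconds, 3 files)

speedup = old_t / new_t                # ~2.48x, matching "almost 2.5x"
files_per_week = 10000
old_hours = files_per_week * 2 / 3600  # ~5.6 h, the question's estimate
new_hours = old_hours / speedup        # projected ~2.2 h per week

print(round(speedup, 2), round(old_hours, 1), round(new_hours, 1))
```

So if the speedup holds at scale, the weekly batch would drop from roughly 5½ hours to a bit over 2.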
I am trying to create and serve a zip file for the user that contains iCal files for different workers (each worker has his own iCal file).
The problem is that I get the right number of iCal files in my zip, but the last file contains all the data from the previous workers (as does the one before it, etc.).
This is the code I am using. What am I doing wrong?
cal = Calendar()
import zipfile, cStringIO
exported_chunks_zip = cStringIO.StringIO()
zipf = zipfile.ZipFile(exported_chunks_zip, "w", compression=zipfile.ZIP_DEFLATED)
for i, rec in enumerate(grouped):
    worker = rec['rw_worker_nick'].encode('cp1250')
    for rr in rec["allData"]:
        startDate = rr['rw_date']
        startTime = rr['rw_time_start']
        endTime = rr['rw_time_end']
        evtstart = datetime.datetime.combine(startDate, startTime)
        evtend = datetime.datetime.combine(startDate, endTime)
        event = Event()
        event.add('summary', rec['rw_worker_nick'])
        event.add('dtstart', evtstart)
        event.add('dtend', evtend)
        cal.add_component(event)
    text = cal.to_ical()
    zipf.writestr(worker + '.ics', text)
    text = ''
Any suggestions?
Thank you.
You create only a single Calendar object outside of the for loop and then keep appending events to it. You should instead create a new Calendar object for each worker within the for loop:
for i, rec in enumerate(grouped):
cal = Calendar()
...
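A minimal, self-contained sketch of that fix, using plain strings in place of icalendar's Calendar/Event and `io.BytesIO` in place of cStringIO (the `grouped` records here are made-up stand-ins for the question's data, just to show the per-worker-object pattern):

```python
import io
import zipfile

# Hypothetical stand-in for the grouped worker records in the question.
grouped = [
    {"worker": "alice", "events": ["a1", "a2"]},
    {"worker": "bob", "events": ["b1"]},
]

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zipf:
    for rec in grouped:
        # Fresh per-worker container -- this is the fix. With a single
        # container created outside the loop, events accumulate and each
        # later file contains all earlier workers' data too.
        lines = []
        for ev in rec["events"]:
            lines.append(ev)
        zipf.writestr(rec["worker"] + ".ics", "\n".join(lines))

# Each archive member now holds only its own worker's events.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as z:
    contents = {name: z.read(name).decode() for name in z.namelist()}
print(contents)  # {'alice.ics': 'a1\na2', 'bob.ics': 'b1'}
```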
I need to append a date, as my logfile names contain the date at the end.
e.g :
access_log.2013-12-11
access_log.2013-12-10
access_log.2013-12-09
access_log.2013-12-08
.
.
.
access_log.2013-09-08
I need to set the logpath in the Fail2ban conf file (i.e. jail.local).
I am aware that I can use '*' in the log file name, but since our log files are large and we also store 30 days' worth of them, I thought that is not good practice and will also have performance effects.
logpath = /opt/atlassian/jira/logs/access_log.*
I tested the ones below:
logpath = /opt/atlassian/jira/logs/access_log.%Y-%m-%d
logpath = "/opt/atlassian/jira/logs/access_log.%Y-%m-%d"
logpath = "/opt/atlassian/jira/logs/access_log.'%Y-%m-%d'"
but none of them worked.
Can anyone please help me append a variable date at the end of logpath to cover the above-mentioned log files?
I am not sure how you are getting your dates, but this will create a logpath in the format you desire:
import datetime
d = datetime.date.today().strftime('%Y-%m-%d')
logpath = "/opt/atlassian/jira/logs/access_log/%s" %d
print logpath
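If you need to cover the 30 days of files mentioned in the question rather than only today's, a sketch along the same lines (Python 3 syntax; the path prefix is taken from the question, and `last_n_logpaths` is a hypothetical helper name):

```python
import datetime

def last_n_logpaths(n, base="/opt/atlassian/jira/logs/access_log"):
    # One dated path per day, newest first, matching the
    # access_log.YYYY-MM-DD naming shown in the question.
    today = datetime.date.today()
    return ["%s.%s" % (base, (today - datetime.timedelta(days=i)).strftime("%Y-%m-%d"))
            for i in range(n)]

paths = last_n_logpaths(30)
print(paths[0])
```

Note that jail.local itself will not expand date patterns; the paths have to be generated outside Fail2ban, as in the snippet above.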
I am using the mongoose module for my Express.js app, and I keep getting this error every time I start up the app:
========================================================================================
= Please ensure that you set the default write concern for the database by setting =
= one of the options =
= =
= w: (value of > -1 or the string 'majority'), where < 1 means =
= no write acknowlegement =
= journal: true/false, wait for flush to journal before acknowlegement =
= fsync: true/false, wait for flush to file system before acknowlegement =
= =
= For backward compatibility safe is still supported and =
= allows values of [true | false | {j:true} | {w:n, wtimeout:n} | {fsync:true}] =
= the default value is false which means the driver receives does not =
= return the information of the success/error of the insert/update/remove =
= =
= ex: new Db(new Server('localhost', 27017), {safe:false}) =
= =
= http://www.mongodb.org/display/DOCS/getLastError+Command =
= =
= The default of no acknowlegement will change in the very near future =
= =
= This message will disappear when the default safe is set on the driver Db =
========================================================================================
I cannot figure out how to set the write concern. I am connecting to my database like this:
mongoose.connect('mongodb://localhost/reader')
What you want to do is:
mongoose.connect('mongodb://localhost/reader', {db:{safe:false}})
That will give you the default behavior that existed before this whole explicit write concern thing happened in the mongo driver.
More information here: http://mongoosejs.com/docs/api.html#index_Mongoose-createConnection
It was because of the connect-mongodb package. I changed it to connect-mongo and this fixed the problem!
I have code which reads an Excel file, modifies the xls file, and then saves it as a text file:
[f,n] = uigetfile('*.xls');
[num,text,row] = xlsread(f);
data = num';
temp_dir = pwd;
[f,n] = uiputfile('*.txt');
if isstr(f)
    cd(n);
    fid = fopen(f,'w');
    [rows,cols] = size(text);
    for i = 1:rows
        fprintf(fid,'%s\t',text{i,1:end-1});
        fprintf(fid,'%s\n',text{i,end});
    end
    plats = '%10f\t';
    [rows,cols] = size(data);
    for n = 1:rows-2
        plats = [plats,'%10f\t'];
    end
Now I have around 100 xls files, and I want to run this process on all of them, like batch processing.
I know that I could use:
files_from = dir(fullfile(from_dir,'*.xls'));
for i = 1:length(files_from)
    FileName = files_from(i).name;
    [num,txt,all] = xlsread(fullfile(from_dir,FileName));
    xlswrite(fullfile(to_dir,files_from(i).name),data);
end
but I can't get it right.
Any suggestions?