Apache Tomcat time differs from operating system date and time - Linux

I have apache-tomcat-5.5.17 that I start manually from the terminal. The time on the instance is not the same as the operating system's. I tried to configure the time using php.ini (located at /etc/php.ini) by adding:
[Date]
date.timezone="MyTimeZone"
It didn't take effect on the instance even though I restarted the application and the services. Is there something else that needs modifying?

Well guys, I figured out how to solve this problem. The trick is to modify the catalina.sh script and add the time zone to the Java options:
-Duser.timezone="one of the allowed time zone IDs"
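For example, a sketch of how that option could be wired in; the zone ID and the use of CATALINA_OPTS are illustrative (catalina.sh also sources a bin/setenv.sh if one exists):
# In catalina.sh (or in bin/setenv.sh, which catalina.sh picks up if present):
CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=Europe/Paris"
export CATALINA_OPTS
Note that this only changes the time zone the JVM (and therefore Tomcat) reports; the php.ini date.timezone setting applies to PHP, not to Tomcat.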

Related

Specifying non-local time zone in GlobalConfiguration fails on Linux

My tests will be running on a variety of machines/containers scattered all across the USA, so I would like them all to log their time in Eastern Time. If I do this:
AtataContext.GlobalConfiguration.UseTimeZone(System.TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"))
All my logs generated by Windows machines work fine, but on Linux (Debian 11) I get this:
Error Message:
OneTimeSetUp: System.TimeZoneNotFoundException : The time zone ID 'Eastern Standard Time' was not found on the local computer.
----> System.IO.FileNotFoundException : Could not find file '/usr/share/zoneinfo/Eastern Standard Time'.
Is there something else I need to do to get the time zone synced across all logs?
The previous answer is correct: you can find a time zone under different names depending on the current OS. But in addition to that, here is another solution.
You can add the TimeZoneConverter NuGet package to the project and use its TZConvert class to find a time zone by either its Windows or its Linux (IANA) time zone ID.
AtataContext.GlobalConfiguration.UseTimeZone(TZConvert.GetTimeZoneInfo("US/Eastern"));
This works:
using System;

TimeZoneInfo ET;
if (WebDriverSetup.OSInfo.IsWindows)
{
    // Windows resolves the zone by its Windows ID
    ET = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
}
else
{
    // Linux resolves the zone by its IANA ID
    ET = TimeZoneInfo.FindSystemTimeZoneById("US/Eastern");
}
AtataContext.GlobalConfiguration.UseTimeZone(ET);

BESClient (BigFix) on Red Hat Linux

I have a question about the BESClient (BigFix) config.
I want to install and configure BESAgent-10.0.0.133-rhe6.x86_64, using a Puppet module to deploy it (on machines with RHEL 6, 7, and 8).
I need to know and understand what this parameter in the config file besclient.config means: "effective date = Tue,%2003%20Nov%202020%2010:16:47%20+0100". I know that it is a timestamp.
I noticed that every time Puppet applies the configs on the machines, this timestamp changes.
So I don't know whether this timestamp changing on every Puppet run has any bad effect on the config.
The effective date only records the timestamp at which the setting was set by the BigFix client; the value is URL-encoded, so the example above decodes to "Tue, 03 Nov 2020 10:16:47 +0100". It has no effect on the configuration itself.

TZ (timezone) doesn't work in "/etc/crontab"

I've tried setting the timezone as:
export TZ=Europe/Paris
and as:
TZ=Europe/Paris
But neither of them works.
The server is set up for UTC time, and it needs to remain that way.
My job needs to run at 4:00 am (Paris time), when the server is not being used. However, it runs at 6:00 am (because of UTC time).
How can I fix this?
As this is flagged with CentOS, I assume that you use cron from CentOS:
https://www.unix.com/man-page/centos/5/crontab/
Set the CRON_TZ variable before the section of rules that you want scheduled in a different time zone.
If you want the commands themselves to use that same TZ, you need to add it to the rules manually, as in the example below.
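For example, a sketch of what that could look like in /etc/crontab with cronie (the CentOS cron daemon); the script path is a placeholder:
# Entries below this line are scheduled in Paris time
CRON_TZ=Europe/Paris
# Run at 04:00 Paris time; TZ=... additionally makes the command itself see Paris time
0 4 * * * root TZ=Europe/Paris /usr/local/bin/nightly-job.sh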

SSIS package works from SSMS but not from agent job

I have an SSIS package that loads an Excel file from a network drive. It's designed to load the content and then move the file to an archive folder.
Everything works fine when the following SQL statement runs in an SSMS window.
However, when it's copied into a SQL Agent job and executed from there, the file is neither loaded nor moved, yet the agent log shows "successful".
The same thing also happens with an "SSIS job" step instead of a T-SQL step, even with a proxy using a Windows account (the same account as the SSMS login).
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
P.S. At first a relative path to the network drive was used, then I switched to an absolute path (\\server\folder). It did not solve the issue.
SSIS package jobs run under the context of the SQL Server Agent. Which account is set up to run the SQL Server Agent on the SQL Server? It may need to run as a domain account that has access to the network share.
Or you can copy the Excel file to a local folder on the SQL Server, so the package can access the file there.
Personally I avoid the File System Task - I have found it unreliable. I would replace that with a Script Task, and use .NET methods from the System.IO namespace e.g. File.Move. These are way more reliable and have mature error handling.
Here's a starting point for the System.IO namespace:
https://msdn.microsoft.com/en-us/library/ms404278.aspx
Be sure to select the relevant .NET version using the Other Versions link.
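For illustration, a minimal sketch of such a Script Task body, assuming the package passes in variables named User::SourceFilePath and User::ArchiveFolder as ReadOnlyVariables (both names are placeholders):
// Inside the Script Task's Main() method
string source = Dts.Variables["User::SourceFilePath"].Value.ToString();
string archive = Dts.Variables["User::ArchiveFolder"].Value.ToString();
string target = System.IO.Path.Combine(archive, System.IO.Path.GetFileName(source));
try
{
    // Remove any stale copy, then move the loaded file into the archive folder
    if (System.IO.File.Exists(target))
        System.IO.File.Delete(target);
    System.IO.File.Move(source, target);
    Dts.TaskResult = (int)ScriptResults.Success;
}
catch (System.Exception ex)
{
    // Surface the real error instead of a silent "success"
    Dts.Events.FireError(0, "Archive Excel file", ex.Message, string.Empty, 0);
    Dts.TaskResult = (int)ScriptResults.Failure;
}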
When I have seen things like this in the past, it's been that my package wasn't accessing the path I thought it was at run time; it was looking somewhere else, finding an empty folder, and exiting with success.
SSIS can have a nasty habit of reverting to variable defaults. It may be looking at a different path you used in dev. Maybe hard-code all path values as a test, or put in breakpoints and double-check the run-time values of all variables and parameters.
Other long shots may be:
Name resolution, are you sure the network name is resolving correctly at runtime?
32/64 bit issues. Dev tends to run 32 bit, live may be 64 bit. May interfere with file paths? Maybe force to 32 bit at run time?
There is an issue with the SQL statements not having statement terminators (;), which is causing the problem.
Declare @execution_id bigint ;
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null ;
Select @execution_id ;
DECLARE @var0 smallint = 1 ;
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0 ;
EXEC [SSISDB].[catalog].[start_execution] @execution_id ;
GO
I have faced a similar issue with Service Broker.

Maximum execution time of 300 seconds exceeded in

I know this subject has, like, many duplicates here and there, but trust me, I've spent time going through those posts and my question remains unanswered.
I'm running a PHP script on Debian Linux / Nginx / PHP-FPM / APC, I think that's all.
I'm executing the script from my SSH terminal (CLI) like: php plzrunthisFscript.php &
It used to work flawlessly, but now it returns this famous error.
What I've tried ALREADY that failed (I mean it didn't change anything):
Checked which php.ini is used with a phpinfo(); inside my script (it's /etc/php5/cli/php.ini).
max_execution_time is hardcoded to 0 (meaning no limit) for CLI, if I'm not mistaken.
Tried adding set_time_limit(0); at the beginning of my script.
Tried adding ini_set('max_execution_time', -1);
Tried executing my php command with the parameter -d max_execution_time=0.
Tried executing from a web interface (served by Nginx).
Tried executing from another web interface (served by Apache2 this time); it gives the same error on the page.
Tried setting max_execution_time = 15 in /etc/php5/cli/php.ini to check whether the php.ini that is supposedly used (according to phpinfo() inside my script) is ignored or not: it's ignored (see the quick check below).
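For reference, a quick way to confirm which php.ini the CLI actually loads and what limit it effectively sees (the grep filter is just illustrative):
php --ini
php -i | grep max_execution_time
php -r 'echo ini_get("max_execution_time"), PHP_EOL;'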
Every time, it gives this error, and sometimes I think even after more than 300 seconds, which is really confusing.
If someone has any idea on how to fix this, or has some things I can try, please advise.
Thank you for your time.
God, I finally solved it!
I think this could be considered a Magento bug, in a way.
Here is how to reproduce it:
You have Magento (my version is CE 1.7) running with Apache 2.
One day, you decide to get rid of Apache and try to adopt Nginx.
You configure everything and it's working great, but you end up with this error one day, trying to rebuild your indexes as you often do.
The thing is, when you run (for example): php indexer.php --reindex catalog_url &
This script includes another one called abstract.php, which contains this awesome function:
protected function _applyPhpVariables()
This function looks for a .htaccess file in your Magento root directory, parses every configuration parameter in it, and executes the indexer script with those parameters applied.
How clever ...
Anyway, to solve this you just need to (delete / rename / burn) this .htaccess file, and then everything will go back to normal.
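For context, that root .htaccess typically carries php_value directives along these lines (the values shown are illustrative):
php_value memory_limit 256M
php_value max_execution_time 300
Renaming the file out of the way before running the indexer, for example with mv .htaccess .htaccess.bak, keeps those directives from being picked up.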
Thanks for your help everyone.
