I tried to change the time zone on Red Hat 7 from UTC to Asia/Kuala_Lumpur using the command:
# timedatectl set-timezone Asia/Kuala_Lumpur
but it shows the following:
[root@mykultestrhel04t ~]# timedatectl
Local time: Thu 2019-08-22 06:41:03 UTC
Universal time: Thu 2019-08-22 06:41:03 UTC
RTC time: Thu 2019-08-22 06:41:03
Time zone: Asia/Kuala_Lumpur (UTC, +0000)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a
Instead, I want the result to be:
Time zone: Asia/Kuala_Lumpur (+08, +0800)
How can I change it to (+08, +0800)? Can anyone help?
You can override the timezone for your shell sessions with the TZ environment variable. Note that TZ takes an IANA zone name rather than a country code (MY is only the ISO 3166 code for Malaysia that tzselect asks about, not a valid TZ value):
export TZ=Asia/Kuala_Lumpur
You would normally add this command to your /etc/profile file, the system-wide start-up script that applies to all users.
Note that you can check your correct timezone setting from the shell with the tzselect command. It comes in handy when you want to know what time it is in other countries, or if you just wonder what timezones exist.
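Since the symptom above is a zone name that is accepted but still reported as (UTC, +0000), it is also worth checking whether the tzdata on the box actually maps Asia/Kuala_Lumpur to +08. A quick sanity check, sketched with Python's stdlib (assumes Python 3.9+ for zoneinfo):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# Does the installed tzdata actually map Asia/Kuala_Lumpur to +08?
kl = datetime(2019, 8, 22, 6, 41, tzinfo=ZoneInfo("Asia/Kuala_Lumpur"))
print(kl.utcoffset() == timedelta(hours=8))  # True on a healthy tzdata install
```

If this prints False (or raises), the tzdata package itself is stale or broken, and no amount of timedatectl or TZ tweaking will help until it is reinstalled.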
I am having some trouble configuring the right timezone on our Databricks spark cluster. We want to configure both the timezone in Spark context as well as the system wide timezone (both are in UTC by default). We want all timezones to be Europe/Amsterdam or UTC+02:00 (with support for daylight savings time).
All code snippets are run in a Python Databricks notebook.
Default UNIX system timezone is UTC:
%sh
timedatectl
Output:
Local time: Wed 2021-09-22 11:24:39 UTC
Universal time: Wed 2021-09-22 11:24:39 UTC
RTC time: n/a
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
Default Spark timezone is UTC:
spark.conf.get("spark.sql.session.timeZone")
Out[2]: 'Etc/UTC'
Now when I try to convert a date before and a date after DST is in effect, I get the following results:
from pyspark.sql.functions import to_timestamp

for tz in ['Europe/Amsterdam', 'UTC+02:00', 'UTC']:
    spark.conf.set("spark.sql.session.timeZone", tz)
    print(f'Current SPARK timezone: {spark.conf.get("spark.sql.session.timeZone")}')
    for date in ['2021-09-28 10:30:00', '2021-11-28 10:30:00']:
        df = spark.createDataFrame([(date,)], ['t'])
        dt = [row[0] for row in df.select(to_timestamp(df.t).alias('dt')).collect()][0]
        print(dt)
Output:
Current SPARK timezone: Europe/Amsterdam
2021-09-28 08:30:00
2021-11-28 09:30:00
Current SPARK timezone: UTC+02:00
2021-09-28 08:30:00
2021-11-28 08:30:00
Current SPARK timezone: UTC
2021-09-28 10:30:00
2021-11-28 10:30:00
Question 1: Why does to_timestamp change my dates under some timezone settings? When I convert a string to a timestamp, I would not expect the data to change.
Question 2: Why do UTC+02:00 and Europe/Amsterdam give different results (here UTC+02:00 does not seem to be DST aware)?
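On Question 2, the difference between the two settings can be seen outside Spark as well: the tz database distinguishes region zones (which follow DST rules) from literal fixed offsets (which never change). A sketch with Python's stdlib, not Spark-specific:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

region = ZoneInfo("Europe/Amsterdam")   # IANA region zone: follows DST rules
fixed = timezone(timedelta(hours=2))    # literal +02:00: never changes

for d in (datetime(2021, 9, 28, 10, 30), datetime(2021, 11, 28, 10, 30)):
    print(d.date(), d.replace(tzinfo=region).utcoffset(), d.replace(tzinfo=fixed).utcoffset())
# 2021-09-28 2:00:00 2:00:00   (September: CEST)
# 2021-11-28 1:00:00 2:00:00   (November: CET, but the fixed offset stays +2)
```

So a setting like "UTC+02:00" should never be expected to track DST; only a region name like "Europe/Amsterdam" can.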
Now when I change the UNIX system timezone setting to Europe/Amsterdam with:
%sh
timedatectl set-timezone Europe/Amsterdam
timedatectl
Output:
Local time: Wed 2021-09-22 13:35:49 CEST
Universal time: Wed 2021-09-22 11:35:49 UTC
RTC time: n/a
Time zone: Europe/Amsterdam (CEST, +0200)
System clock synchronized: yes
systemd-timesyncd.service active: yes
RTC in local TZ: no
And when I run the Spark code to convert the two timestamps again, I get different results than before:
Output:
Current SPARK timezone: Europe/Amsterdam
2021-09-28 10:30:00
2021-11-28 10:30:00
Current SPARK timezone: UTC+02:00
2021-09-28 10:30:00
2021-11-28 09:30:00
Current SPARK timezone: UTC
2021-09-28 12:30:00
2021-11-28 11:30:00
Question 3: Why are these results now different? Does Spark also check the system timezone setting besides its own configuration? And why is UTC+02:00 now suddenly DST aware while Europe/Amsterdam is not?
So in general I am a bit lost on how to correctly set the UNIX and Spark timezones on our cluster, so that our logging in Python shows correct timestamps, Spark correctly converts timestamp strings to real timestamps, and the cluster is DST aware. Any help is much appreciated.
I want to get the status of daylight savings time (active/not active, as shown by timedatectl) across platforms. Since I am targeting multiple platforms, I don't want to use timedatectl itself.
So my questions are:
When I use the timedatectl status command on my system, it does not show the DST status, and I don't know why. Can anybody explain this?
My system output:
$ timedatectl status
Local time: Tue 2021-06-29 16:26:16 IST
Universal time: Tue 2021-06-29 10:56:16 UTC
RTC time: Tue 2021-06-29 10:56:16
Time zone: Asia/Kolkata (IST, +0530)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
It does not show a "DST active: yes/no (or n/a)" line, and I want to know why.
Is there a default API or binary, similar to timedatectl, for checking whether DST is active on the system (cross-platform)?
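One portable option, sketched here as an assumption rather than a drop-in answer: Python's stdlib exposes the C library's DST flag via time.localtime(), which works on any platform where the timezone database knows the answer (tm_isdst is -1 when the platform cannot tell):

```python
import time

def dst_status() -> str:
    """Return 'active', 'inactive', or 'n/a' for the current local time."""
    isdst = time.localtime().tm_isdst  # 1 = DST in effect, 0 = not, -1 = unknown
    if isdst > 0:
        return "active"
    if isdst == 0:
        return "inactive"
    return "n/a"

print(dst_status())
```

The same tm_isdst field exists in C's struct tm, so the equivalent check is available from any language with access to localtime().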
I am running PostgreSQL 10 on a Vagrant VM (installed version 2.1.1) with the following OS:
Distributor ID: Debian
Description: Debian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
My locale setting on the server:
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE=de_AT.UTF-8
LC_... ="en_US.UTF-8"
LC_ALL=
timedatectl status gives me the following:
Local time: Mon 2018-07-23 20:58:07 CEST
Universal time: Mon 2018-07-23 18:58:07 UTC
RTC time: Mon 2018-07-23 18:58:06
Time zone: Europe/Vienna (CEST, +0200)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
My settings of my PSQL DB:
Select NOW(); gives me the right point of time.
show timezone; is set to 'Europe/Vienna', which is the right timezone.
The problem:
I have a guest_group table with the following column:
arrival_date DATE NOT NULL,
and the following value:
2018-07-23
When I query my DB, everything works as expected except that I get the following result:
arrivalDate: 2018-07-22T22:00:00.000Z,
Somehow I get the previous day, the 22nd and not the 23rd.
A DB query in the shell gives me the right date: SELECT * FROM guest_group;
Update: I am querying the DB with NodeJS and node-postgres. Everything works perfectly with the queries except the date. After further research, I have come to the conclusion that it has something to do with node-postgres and this problem. Here is a similar problem. I will work on my final solution next week and will share it here.
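The one-day shift is consistent with a calendar DATE being modeled as midnight in the local zone and then serialized in UTC. A minimal illustration of that arithmetic in Python (not node-postgres itself, just the mechanism):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# DATE '2018-07-23' read as midnight in Europe/Vienna (CEST, UTC+2)...
local_midnight = datetime(2018, 7, 23, tzinfo=ZoneInfo("Europe/Vienna"))
# ...then rendered in UTC slips back to the previous day:
print(local_midnight.astimezone(timezone.utc).isoformat())
# 2018-07-22T22:00:00+00:00
```

That matches the arrivalDate: 2018-07-22T22:00:00.000Z seen above, which is why fixing the client's DATE handling (rather than the server's timezone) resolves it.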
I set the AWS system timezone to IST:
$ timedatectl
Local time: Fri 2018-06-15 16:43:20 IST
Universal time: Fri 2018-06-15 11:13:20 UTC
RTC time: Fri 2018-06-15 11:13:20
Time zone: Asia/Kolkata (IST, +0530)
Network time on: yes
NTP synchronized: no
RTC in local TZ: no
but the Play Framework logs are still generated with the UTC time zone:
2018-06-15 11:22:46,002 [INFO] from application in main - Creating Pool for datasource 'default'
I am using Play Framework 2.5, and I run it with sudo sbt clean dist.
Try passing the timezone to the JVM like so:
sbt -Duser.timezone=Asia/Kolkata
In conf/logback.xml there should be a conversion word %date in an element similar to
<pattern>%date [%level] from %logger in %thread - %message%n%xException</pattern>
The date conversion word has the format %date{pattern, timezone}, and by default:
...in the absence of the timezone parameter, the default timezone of the host Java platform is used.
Therefore the JVM, and not the OS, has the final word on the timezone.
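If the goal is only IST timestamps in the logs (leaving the JVM default alone), the timezone can also be pinned in the pattern itself. A sketch based on the pattern quoted above, assuming the stock Play logback configuration:

```xml
<!-- hypothetical conf/logback.xml pattern: render %date in IST regardless of the JVM default -->
<pattern>%date{yyyy-MM-dd HH:mm:ss, Asia/Kolkata} [%level] from %logger in %thread - %message%n%xException</pattern>
```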
I have problems with the time settings on my Linux server. The timezone is set to UTC, and the locale settings are the following (locale -a):
C
C.UTF-8
en_GB.utf8
POSIX
The date command gives me:
Wed Jul 26 09:39:42 UTC 2017
but the correct time is 10:39.
On my desktop I have the same locale settings and UTC timezone, and it shows the correct time.
Could someone explain what's wrong with the server and what should be changed, please?
Thanks in advance.
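Worth noting that locale only affects formatting (month names, date order), while the wall-clock hour comes from the configured timezone. Assuming the server is meant to show UK time (an assumption suggested by the en_GB locale), the one-hour gap matches BST; a sketch:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The server's `date` output, taken at face value as UTC:
utc_now = datetime(2017, 7, 26, 9, 39, 42, tzinfo=timezone.utc)
# In July the UK is on BST (UTC+1), which yields the expected 10:39.
# Europe/London is an assumption here; substitute the zone the server should use.
print(utc_now.astimezone(ZoneInfo("Europe/London")).strftime("%H:%M"))
# 10:39
```

If that is the case, the server's clock is fine and only its timezone needs changing (e.g. timedatectl set-timezone Europe/London); the locale settings are a red herring.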