Set chrony.conf to use "server" instead of "pool" - RHEL

I'm working with RHEL 8, where the ntp package is no longer supported; NTP is now implemented by the chronyd daemon, provided in the chrony package. The config file is set up to use public servers from the pool.ntp.org project (pool 2.rhel.pool.ntp.org iburst). Is there a way to set server instead of pool?
My chrony.conf file:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.rhel.pool.ntp.org iburst

Yes. Comment out the pool line and add one or more server lines with the servers you want:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# pool 2.rhel.pool.ntp.org iburst
server 0.africa.pool.ntp.org iburst
server 0.us.pool.ntp.org
server 0.south-america.pool.ntp.org
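After editing the file, a couple of commands can confirm the change took effect (this assumes a systemd-based RHEL system and the chronyc tool that ships with the chrony package):

```shell
# Restart chronyd so it re-reads /etc/chrony.conf
sudo systemctl restart chronyd
# List the NTP sources chronyd is actually using
chronyc sources -v
```

Each server line you added should show up as its own source in the output, instead of the pool expanding to several addresses.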


Does js-ipfs have a readonly gateway server?

When I start my local ipfs node with ipfs daemon, in the cmd I get this:
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
With this, I can open 127.0.0.1:8080/ipfs/CID and read files from IPFS.
In my Node.js app, when I run ipfs.create(), in the console I get logs about swarms, but not about a readonly gateway server. I have found out that the ipfs.create() function has a Gateway option that defaults to /ip4/127.0.0.1/tcp/9090. But when I run my node, keep my app running, and try to retrieve something with 127.0.0.1:9090/ipfs/CID, I get an ERR_CONNECTION_REFUSED. Why is that? While the app was running, I scanned my ports and nothing was attached to 9090.
I have found the answer. Yes, js-ipfs has a readonly gateway server, but it doesn't start implicitly together with the node; you have to use the ipfs-http-gateway package. The package doesn't really have good instructions, but here is how you do it: import the HttpGateway class from the package, pass your ipfs instance to it as a constructor argument, then call .start() on the HttpGateway instance. The .start() call takes the config options from your ipfs instance, looks for the Addresses -> Gateway option (which defaults to /ip4/127.0.0.1/tcp/9090), and starts the gateway on that port. You can read the code of the package where the HttpGateway class is written, and you'll figure it all out.
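Putting the steps described above into code, a sketch looks roughly like this (it assumes the ipfs and ipfs-http-gateway packages are installed; exact export names can vary between versions, so treat it as a starting point rather than a definitive recipe):

```javascript
// Sketch of starting the readonly gateway alongside a js-ipfs node.
import { create } from 'ipfs'
import { HttpGateway } from 'ipfs-http-gateway'

const ipfs = await create()            // starts the node (this is where the swarm logs appear)
const gateway = new HttpGateway(ipfs)  // pass the node instance to the constructor
await gateway.start()                  // reads Addresses.Gateway, default /ip4/127.0.0.1/tcp/9090
// The readonly gateway should now answer on 127.0.0.1:9090/ipfs/<CID>
```

If you want a different port, change the Addresses.Gateway multiaddr in the node's config before calling .start().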

Spring Data GemFire Server java.net.BindException in Linux

I have a Spring Boot app that I am using to start a Pivotal GemFire CacheServer.
When I jar up the file and run it locally:
java -jar gemfire-server-0.0.1-SNAPSHOT.jar
it runs without issue. The server uses the default properties:
spring.data.gemfire.cache.log-level=info
spring.data.gemfire.locators=localhost[10334]
spring.data.gemfire.cache.server.port=40404
spring.data.gemfire.name=CacheServer
spring.data.gemfire.cache.server.bind-address=localhost
spring.data.gemfire.cache.server.host-name-for-clients=localhost
If I deploy this to a CentOS distribution and run it with the same script, but passing the "test" profile:
java -jar gemfire-server-0.0.1-SNAPSHOT.jar -Dspring.profiles.active=test
with my test profile application-test.properties looking like this:
spring.data.gemfire.cache.server.host-name-for-clients=server.centralus.cloudapp.azure.com
I can see during startup that the server finds the Locator already running on the host (I start it through a separate process with Gfsh).
The server even joins the cluster for about a minute. But then it shuts down because of a bind exception.
I have checked to see if there is anything running on that port (40404) - and nothing shows up
EDIT
Apparently I DO get this exception locally - it just takes a lot longer.
It is almost instant when I start it up on the CentOS distribution. On my Mac it takes around two minutes before the process throws the exception.
Adding a few more images of this:
Two bash windows - left is monitoring GF locally and right I use to check the port and start the Java process:
The server is added to the cluster. Note the timestamp of 16:45:05.
Here is the server added and it appears to be running:
Finally, the exception after about two minutes - again look at the timestamp on the exception - 16:47:09. The server is stopped and dropped from the cluster.
Did you start other servers using Gfsh? That is, with a Gfsh command similar to...
gfsh>start server --name=ExampleGfshServer --log-level=config
Gfsh will start CacheServers listening on the default CacheServer port of 40404.
You have a few options.
1) First, you can disable the default CacheServer when starting a server with Gfsh like so...
gfsh>start server --name=ExampleGfshServer --log-level=config --disable-default-server
2) Alternatively, you can change the CacheServer port when starting other servers using Gfsh...
gfsh>start server --name=ExampleGfshServer --log-level=config --server-port=50505
3) If you are starting multiple instances of your Spring Boot, Pivotal GemFire CacheServer application, then you can vary the spring.data.gemfire.cache.server.port property by declaring it as a System property at startup.
For instance, you can, in the Spring Boot application.properties, do...
#application.properties
...
spring.data.gemfire.cache.server.port=${gemfire.cache.server.port:40404}
And then when starting the application from the command-line...
java -Dgemfire.cache.server.port=48484 -jar ...
Of course, you could just set the SDG property from the command line too...
java -Dspring.data.gemfire.cache.server.port=48484 -jar ...
Anyway, I guarantee you that another process (e.g. a Pivotal GemFire CacheServer) is running with a ServerSocket listening on port 40404. netstat -a | grep 40404 should give you better results.
Hope this helps.
Regards,
John

Read/Read-Write URIs for Amazon Web Services RDS

I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written in Flask.
The HAProxy.cfg file has the following configuration for the DB:
listen mysql
bind 127.0.0.1:3306
mode tcp
balance roundrobin
option mysql-check user haproxy_check
option log-health-checks
server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy, and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
but when I run the APIs in my test environment, the ones that just read from the DB execute fine, while the ones that write to the DB mostly give this error:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use 2 URLs now in this scenario, one for read-only operation and one for writes.
How does this work with Flask and SQLALCHEMY with HAProxy?
How do I tell my APP to use one URL for write operations and other HAProxy URL to read-only operations?
I didn't find any help from the documentation of SQLAlchemy.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds:
db.create_all(bind='read') # from read only
db.create_all(bind='master') # from master
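Beyond create_all, the routing decision your app has to make is simply "plain reads go to the replica, everything else goes to the master". A minimal, framework-free sketch of that decision (pick_bind is an illustrative helper, not a Flask-SQLAlchemy API; the bind names just mirror the SQLALCHEMY_BINDS keys above):

```python
# Bind names mirror the SQLALCHEMY_BINDS configuration shown above.
BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE',
}

def pick_bind(sql: str) -> str:
    """Route plain SELECT statements to the read replica; everything else to the master."""
    return 'read' if sql.lstrip().lower().startswith('select') else 'master'
```

In practice you would attach read-only models to a bind with __bind_key__, or fetch the matching engine from Flask-SQLAlchemy for the bind name this function returns; the sketch only shows the decision itself.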

How to make a Hazelcast cluster use only the set of ports defined by the user

Here is the programmatic configuration for my Hazelcast cluster.
But I am facing a problem: it is using many random ports other than the defined port. What could be the issue?
Config config = new Config();
config.setInstanceName("cluster-1");
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(true);
config.getNetworkConfig().getJoin().getMulticastConfig().setMulticastGroup("224.2.2.3")
.setMulticastPort(54327).setMulticastTimeToLive(32).setMulticastTimeoutSeconds(10);
config.getNetworkConfig().getInterfaces().setEnabled(true).addInterface("192.168.1.23");
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(false);
config.getNetworkConfig().setPort(5900);
You can define outbound port range to be used in your configuration by using the addOutboundPortDefinition method of NetworkConfig as follows:
config.getNetworkConfig().addOutboundPortDefinition("35000-35100");
For adding single ports to use for outbound network operations, you can use the addOutboundPort method of NetworkConfig as follows:
config.getNetworkConfig().addOutboundPort(37000);
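The same restriction can be expressed declaratively. A sketch of the equivalent hazelcast.xml fragment (element names as documented in the reference manual; the port values are just examples matching the code above):

```xml
<hazelcast>
  <network>
    <!-- Fix the member port instead of letting it auto-increment -->
    <port auto-increment="false">5900</port>
    <!-- Restrict outbound connections to this range -->
    <outbound-ports>
      <ports>35000-35100</ports>
    </outbound-ports>
  </network>
</hazelcast>
```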
More info can be found in the Hazelcast reference manual.

Remote ArangoDB access

I am trying to access a remote ArangoDb install (on a windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the Arango server dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html)
It has been a long time since I came back to this. Thanks to the previous comments, I was able to sort this out.
The file to edit is arangod.conf. On a windows machine located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to the IP address of my server (bottom line):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
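For the arangosh part, the shell can be pointed at the same endpoint. The flags below are standard arangosh options, but the address and credentials are placeholders for this particular install — substitute your own:

```shell
arangosh --server.endpoint tcp://192.168.0.14:8529 \
         --server.username root \
         --server.password ""
```

With the endpoint bound to the server's address, the dashboard should likewise be reachable remotely at http://192.168.0.14:8529/_db/_system/_admin/aardvark/standalone.html instead of the 127.0.0.1 URL.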
Please have a look at the managing endpoints documentation. It explains how to bind an endpoint and how to check whether it worked.
