The site administrator should also check that the database details have been correctly specified in config.php

We have configured our Moodle application to connect to a remote database, and I'm getting a database connection error. When I use localhost instead of the remote IP it works. Can anybody help? Thanks in advance.
Below is my config.php file:

To use a remote database, you'll need to set this value in config.php to point at the remote host:
$CFG->dbhost = 'localhost'; // eg 'localhost' or 'db.isp.com' or IP
See config-dist.php in your Moodle folder for details.
//=========================================================================
// 1. DATABASE SETUP
//=========================================================================
// First, you need to configure the database where all Moodle data
// will be stored. This database must already have been created
// and a username/password created to access it.
$CFG->dbtype = 'pgsql'; // 'pgsql', 'mariadb', 'mysqli', 'auroramysql', 'sqlsrv' or 'oci'
$CFG->dblibrary = 'native'; // 'native' only at the moment
$CFG->dbhost = 'localhost'; // eg 'localhost' or 'db.isp.com' or IP
$CFG->dbname = 'moodle'; // database name, eg moodle
$CFG->dbuser = 'username'; // your database username
$CFG->dbpass = 'password'; // your database password
$CFG->prefix = 'mdl_'; // prefix to use for all table names
$CFG->dboptions = array(
    'dbpersist' => false,     // should persistent database connections be
                              // used? set to 'false' for the most stable
                              // setting, 'true' can improve performance
                              // sometimes
    'dbsocket' => false,      // should connection via UNIX socket be used?
                              // if you set it to 'true' or custom path
                              // here set dbhost to 'localhost',
                              // (please note mysql is always using socket
                              // if dbhost is 'localhost' - if you need
                              // local port connection use '127.0.0.1')
    'dbport' => '',           // the TCP port number to use when connecting
                              // to the server. keep empty string for the
                              // default port
    'dbhandlesoptions' => false, // On PostgreSQL poolers like pgbouncer don't
                              // support advanced options on connection.
                              // If you set those in the database then
                              // the advanced settings will not be sent.
    'dbcollation' => 'utf8mb4_unicode_ci', // MySQL has partial and full UTF-8
                              // support. If you wish to use partial UTF-8
                              // (three bytes) then set this option to
                              // 'utf8_unicode_ci', otherwise this option
                              // can be removed for MySQL (by default it will
                              // use 'utf8mb4_unicode_ci'). This option should
                              // be removed for all other databases.
    // 'fetchbuffersize' => 100000, // On PostgreSQL, this option sets a limit
                              // on the number of rows that are fetched into
                              // memory when doing a large recordset query
                              // (e.g. search indexing). Default is 100000.
                              // Uncomment and set to a value to change it,
                              // or zero to turn off the limit. You need to
                              // set to zero if you are using pg_bouncer in
                              // 'transaction' mode (it is fine in 'session'
                              // mode).
    /*
    'connecttimeout' => null, // Set connect timeout in seconds. Not all drivers support it.
    'readonly' => [           // Set to read-only slave details, to get safe reads
                              // from there instead of the master node. Optional.
                              // Currently supported by pgsql and mysqli variety classes.
                              // If not supported silently ignored.
        'instance' => [       // Readonly slave connection parameters
            [
                'dbhost' => 'slave.dbhost',
                'dbport' => '', // Defaults to master port
                'dbuser' => '', // Defaults to master user
                'dbpass' => '', // Defaults to master password
            ],
            [...],
        ],
        Instance(s) can alternatively be specified as:
        'instance' => 'slave.dbhost',
        'instance' => ['slave.dbhost1', 'slave.dbhost2'],
        'instance' => ['dbhost' => 'slave.dbhost', 'dbport' => '', 'dbuser' => '', 'dbpass' => ''],

        'connecttimeout' => 2, // Set read-only slave connect timeout in seconds. See above.
        'latency' => 0.5,     // Set read-only slave sync latency in seconds.
                              // When 'latency' seconds have lapsed after an update to a table
                              // it is deemed safe to use readonly slave for reading from the table.
                              // It is optional. If omitted, once written to a table it will always
                              // use the master handle for reading.
                              // Lower values increase the performance, but setting it too low means
                              // missing the master-slave sync.
        'exclude_tables' => [ // Tables to exclude from the read-only slave feature.
            'table1',         // Should not be used, unless in rare cases when some area of the system
            'table2',         // is malfunctioning and you still want to use the readonly feature.
        ],                    // Then one can exclude offending tables while investigating.

        More info available in lib/dml/moodle_read_slave_trait.php where the feature is implemented.
    ]
    */
    // For all database config settings see https://docs.moodle.org/en/Database_settings
);

Related

Keeping track of users' history in Express to make a 'back' button?

I'm making an Express app and I want to include a 'back' button in the app UI that does essentially what the browser back button does.
I tried holding an array variable on the server that simply collects all of the URL params visited. For example, for the '/:lang' route ...
const browsingHistory = [];

app.get("/:lang", (req, res) => {
  const lang = req.params.lang;
  if (lang === "en" || lang === "fr") {
    const templateVars = {
      menuItems: db[lang].menuItems,
      lang,
    };
    res.render("root", templateVars);
  }
  if (lang !== "favicon.ico") {
    browsingHistory.push(lang);
    console.log(`Browsing history: ${browsingHistory}`);
  }
});
BUT I'm realizing this only works when locally hosted. Once deployed, if there are multiple users simultaneously, how do I keep track of each user's individual history? Or is there a better way of doing this?
Storing the browsing history will require user sessions. On each request, you will have to store the route that the user hits in their session variable.
In Express, this can be accomplished with the express-session library. You will want to initialize each session with some history property that begins as an empty array. Once express-session is set up, you can do something similar to the following:
app.get("/:lang", (req, res) => {
const lang = req.params.lang;
req.session.history.push(lang);
...
});
app.get("/getMyPageHistory", (req, res) => {
res.send(req.session.history);
});
req.session will be unique for each user. So, you can store each user's unique history in this variable.
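For reference, here is a minimal sketch of the setup the snippet above assumes, using express-session's documented options plus a small middleware that initializes the history array (the secret value is a placeholder):
const express = require("express");
const session = require("express-session");

const app = express();

app.use(session({
  secret: "replace-with-a-real-secret", // placeholder; use a strong secret in production
  resave: false,
  saveUninitialized: true,
}));

// Give every new session an empty history array before any route runs.
app.use((req, res, next) => {
  if (!req.session.history) {
    req.session.history = [];
  }
  next();
});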
With that said, if you go down this route, you will eventually want some external session storage. By default, sessions are saved in your server's memory. This introduces a few issues that are explained in the express-session documentation. Here is their warning:
Warning The default server-side session storage, MemoryStore, is purposely not designed for a production environment. It will leak memory under most conditions, does not scale past a single process, and is meant for debugging and developing.
They provide a list of compatible session stores.

loopback3: associate users with different databases

I'm developing a project in LoopBack 3 where I need to create accounts for multiple companies, each company having its own database. I'm fully aware that the LoopBack 3 docs have a section explaining how to create datasources programmatically and how to create models from such a datasource, and I've used that to create the following code. It receives in the request a parameter I called dbname, which switches the model to the wanted datasource.
userclinic.js
Userclinic.observe('before save', async (ctx, next) => {
  const dbname = ctx.instance.dbname; // database selection
  const dbfound = await Userclinic.app.models.Clinics.findOne({ where: { dbname } }); // check that this database really exists among our registered clients' databases
  if (dbfound) { // if database found
    await connectToDatasource(dbname, Userclinic); // link the model to that database
  } else { // otherwise
    next(new Error('cancelled...')); // cancel the save
  }
});
utils.js (from which I export my connectToDatasource method)
const connectToDatasource = (dbname, Model) => {
  console.log("welcome");
  var DataSource = require('loopback-datasource-juggler').DataSource;
  var dataSource = new DataSource({
    connector: require('loopback-connector-mongodb'),
    host: 'localhost',
    port: 27017,
    database: dbname
  });
  Model.attachTo(dataSource);
}

module.exports = {
  connectToDatasource
}
So my problem is that the datasource is actually changing, but the save happens in the previously selected datasource (meaning it saves the instance to the old database) and doesn't save to the new one until I send the request again. So changing the datasource takes two requests to happen, and it also saves the instance in both databases.
I guess that when the request happens, LoopBack checks the datasource related to that model before allowing any action on the model. I really need to get this done by tonight and I hope someone can help out.
PS: if anyone has a solution to this, or knows how to associate multiple clients (users) with multiple databases (programmatically, of course) in any way using LoopBack 3, I'm all ears (eyes).
Thanks in advance.
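One detail worth noting about the utils.js above: it builds a brand-new DataSource, and with it a new MongoDB connection pool, on every request. A minimal sketch of a cached variant, assuming the same connector and host, so each dbname maps to a single reused datasource:
const { DataSource } = require('loopback-datasource-juggler');

const dataSources = {}; // dbname -> DataSource, reused across requests

const connectToDatasource = (dbname, Model) => {
  if (!dataSources[dbname]) {
    dataSources[dbname] = new DataSource({
      connector: require('loopback-connector-mongodb'),
      host: 'localhost',
      port: 27017,
      database: dbname
    });
  }
  Model.attachTo(dataSources[dbname]);
};

module.exports = { connectToDatasource };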

Sequelize Postgres: how to set timezone GMT+1

Hi Stack Overflow team,
I want to know how I can set the timezone +01:00 for a timestamp. I can fetch all data entries with the neutral UTC timezone, but I want to set a default timezone of +01:00 (Europe/Berlin).
Even when I query my database table I want to get the right response. Thank you a lot for your answers.
const Sequelize = require('sequelize');

//CREATE DATABASE tai;
const sequelize = new Sequelize('tai', 'postgres', '', {
  //username: 'root',
  //password: 'root',
  dialect: 'postgres',
  logging: false,
  //storage: "./database.sqlite3",
  host: 'localhost',
  dialectOptions: { useUTC: false },
  typeCast: function (field, next) { // for reading from database
    if (field.type === 'DATETIME') {
      return field.string()
    }
    return next()
  },
  timezone: '+01:00',
  pool: {
    max: 2,
    min: 0,
    acquire: 10000,
    idle: 10000
  }
});
In SQL, you'd change the time zone for either the server or for the client session. If you change the time zone for the client session, you have to do that for every client session, every time you start a new client session. (More or less.)
A change to the server configuration file (postgresql.conf, timezone = '...') could be understood as changing the default for all client connections. Changing the server configuration file is the most robust approach, but you might not have access to it or privileges to change it.
Setting the PGTZ environment variable lets libpq clients send a SET TIME ZONE command to the server when they connect. That could be understood as changing the default for one client.
Executing the SQL statement SET TIME ZONE changes the time zone for a session. So that could be understood as changing the default for one client session.
For Sequelize, I think you want to use options.timezone (search this page for timezone), and you probably want to use "Europe/Berlin" rather than a literal, fixed offset.
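A minimal sketch of that, assuming a recent Sequelize and the same connection details as above (a named zone also tracks daylight saving, unlike a fixed '+01:00'):
const Sequelize = require('sequelize');

const sequelize = new Sequelize('tai', 'postgres', '', {
  dialect: 'postgres',
  host: 'localhost',
  logging: false,
  timezone: 'Europe/Berlin' // named zone instead of a fixed '+01:00' offset
});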

Assigning WebSocket and net.Socket unique ids

I want to assign WebSockets and net.Sockets unique identifiers, so that when a message is received, the client is identified by the identifier attached to the socket.
Previous research:
For WebSocket:
According to this and this, the following is required:
const app = express();
const server = http.createServer(app);
const wss = new WebSocket.Server({ server });

wss.on('connection', (ws) => {
  ws.id = uuid.v4(); // This is the relevant line of code
  ws.on('message', (msg: string) => {
    ...
  });
});
For net.Socket:
Quite the same - according to this, the following is required:
var server = net.createServer();

server.on('connection', function(conn) {
  conn.id = uuid.v4(); // This is the relevant line of code
  conn.on('data', function(data) {
    ...
  });
});
The problem
The error "Property 'id' does not exist on type 'WebSocket' [or 'Socket' accordingly]" during compilation. This explains why.
Optional Solution #1: Cast to any
The updated net.Socket code would be:
var server = net.createServer();

server.on('connection', function(conn) {
  (conn as any).id = uuid.v4();
  conn.on('data', function(data) {
    console.log('Session id:' + (conn as any).id);
  });
});
The problem: it doesn't seem like good practice to me. I don't have references for why, just a hunch for now.
Optional Solution #2: Use a local variable [seems like a better option]
The updated net.Socket code would be:
var server = net.createServer();

server.on('connection', function(conn) {
  const id: string = uuid.v4();
  conn.on('data', function(data) {
    console.log('Session id:' + id);
  });
});
After some experimenting, it seems to work properly.
So - the questions:
In optional solution #2: is it guaranteed that the id variable will always be available in this scope? In other words, is this solution valid, for both net.Socket and WebSocket? Or is there something that I'm missing?
In general: are there additional identifiers (perhaps built-in identifiers in WebSocket and net.Socket) that could be used instead?
[Typescript version: 3.2.4].
I'm not aware of a field that already exists for that.
If you can keep the socket id in a local variable like you do in option 2, that's what I'd do. I generally prefer to avoid adding arbitrary fields to objects. However, sometimes you want the entirety of your application to be able to access the additional piece of data, and in those cases using a local variable won't work.
Adding a custom field
Instead of using a type assertion to any, you could add a file to your project that contains this:
declare module "net" {
  interface Socket {
    id: string;
  }
}
This augments the net.Socket interface to add an id field which is a string. This is a bit neater than a type assertion because a type assertion would let typos go through (e.g. (conn as any).ids). I used your code, removed the type assertion, and added the above in a file named externals.d.ts that I put next to the .ts file containing your code; tsc stopped complaining about the field. Note that you don't need to import this file or refer to it in any way. You just need a tsconfig.json that picks it up along with the rest of your source; by default, it would be picked up due to the .d.ts extension.
In the past I've used type assertions and interface augmentation to add arbitrary fields to DOM nodes, and it worked just fine. However, when I did that, I used field names that were very singular, meaning that there was a very low chance of a clash with other libraries that might want to add their own fields. Your field name is id. I'd be worried about name clashes with other libraries that decide they want to keep track of sockets and add their own id field to a socket.
Using WeakMap
There's another method you can use. You could set up a WeakMap that associates the socket with the id. Here's an illustration. You could have a module socket-map that just exports a map that maps sockets to strings:
import * as net from "net";
export const socketMap = new WeakMap<net.Socket, string>();
And then you'd store the socket with the id when you obtain the socket:
import * as net from "net";
import * as uuid from "uuid";
import { socketMap } from "./socket-map";

var server = net.createServer();

server.on('connection', function(conn) {
  const id = uuid.v4();
  socketMap.set(conn, id); // You store the socket into the map.
  conn.on('data', function(data) {
    console.log('Session id:' + socketMap.get(conn));
  });
});
Then later, in another module, you could get the id back with:
import * as net from "net";
import { socketMap } from "./socket-map";

export function foo(conn: net.Socket) {
  const id = socketMap.get(conn);
}
Here I've just associated the socket with an id string, but you could have any structure you want in the values of the WeakMap. It could be an object that contains a whole slew of information besides an id.
The reason to use WeakMap is that while the keys of a WeakMap object contain references to objects, these references do not count as far as garbage collection goes. So if your application is done with a socket and no longer references it anywhere other than the WeakMap, the reference present in the WeakMap will still allow the socket to be collected by the garbage collector.
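The same WeakMap pattern covers the WebSocket half of the question unchanged. A minimal sketch, assuming the ws package (shown in plain JavaScript for brevity; in TypeScript the map would be typed WeakMap<WebSocket, string>):
const WebSocket = require("ws");
const uuid = require("uuid");

const wsMap = new WeakMap(); // maps each WebSocket to its id

const wss = new WebSocket.Server({ port: 8080 });

wss.on("connection", (ws) => {
  wsMap.set(ws, uuid.v4()); // no property added to the socket object itself
  ws.on("message", (msg) => {
    console.log("Session id:" + wsMap.get(ws));
  });
});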

Failing to automatically reconnect to new PRIMARY after a replica set failover, from Mongoose (MongoDB, Node.js driver)

I made a simple Node.js app with Mongoose as the MongoDB driver and connected it to a MongoDB replica set. The app works fine until I shut down the current primary. When the PRIMARY is down, the replica set automatically elects a new PRIMARY, but after that the Node application doesn't seem to respond to DB queries.
CODE: DB Connection
var options = {
  server: {
    socketOptions: {
      keepAlive: 1,
      connectTimeoutMS: 30000,
      socketTimeoutMS: 90000
    }
  },
  replset: {
    socketOptions: {
      keepAlive: 1,
      connectTimeoutMS: 30000,
      socketTimeoutMS: 90000
    },
    rs_name: 'rs0'
  }
};

var uri = "mongodb://xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017/rstest";
mongoose.connect(uri, options);
CODE: DB Query
router.get('/test', function(req, res) {
  var testmodel = new testModel('test');
  testmodel.save(function(err, doc, numberAffected) {
    if (err) {
      console.log("ERROR: " + err);
      res.status(404).end();
    } else {
      console.log("Response sent");
      res.status(200).end();
    }
  });
});
Steps Followed
Created a MongoDB replica set in three VMs.
Created a simple nodeJS App (Express + Mongoose) with a test API as above
Sent GET request to 'test' continuously with some time interval to the app from a local system.
Took the PRIMARY instance down
Console will log "ERROR: Error: connection closed"
APPLICATION STOPPED RESPONDING TO REQUESTS
Versions:
"express": "4.10.6",
"mongodb": "1.4.23",
"mongoose": "3.8.21",
A sample app that I have done for debugging this issue is available at https://melvingeorge@bitbucket.org/melvingeorge/nodejsmongorssample.git
I am not sure if this is a bug or some misconfiguration on my end. How do I solve this issue?
Write operations are made only on the primary instance. It will take some time for the replica set to elect a new primary server.
from http://docs.mongodb.org/manual/faq/replica-sets/
How long does replica set failover take?
It varies, but a replica set will select a new primary within a minute.
It may take 10-30 seconds for the members of a replica set to declare a primary inaccessible. This triggers an election. During the election, the cluster is unavailable for writes.
The election itself may take another 10-30 seconds.
Check your code with read operations (find/count); as long as there is no primary instance, you can't do write operations.
The 'rs_name' in the replset options is necessary to specify a replica set. You can use mongoose.createConnection(uri, conf, callback) and inspect the final conf in the callback.
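For illustration, a sketch of a connection that names the replica set both in the URI and in the replset options (hostnames are placeholders; the option names match the Mongoose 3.x era used in the question):
var mongoose = require('mongoose');

// Naming the replica set lets the driver track elections and direct
// writes to whichever member is currently primary.
var uri = 'mongodb://host1:27017,host2:27017,host3:27017/rstest?replicaSet=rs0';

mongoose.connect(uri, {
  replset: { rs_name: 'rs0' } // matches the replicaSet named in the URI
});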
It looks like this got fixed in NODE-818 / 2.2.10.
But I am using 2.2.22 and still have a problem like that.
Upon reconnect, the mongo client reconnects to a secondary instead of the newly selected primary, which means I cannot write to the database.
My connection string is like mongodb://mongo1,mongo2,mongo3/db?replicaSet=rs
