I have a Node.js project running on two Google Cloud instances behind a load balancer, and I'm using socket.io. I want to share the sessions between the instances.
Developers usually do this with socket.io-redis, but I don't want to run Redis just for that. I already have Cloud SQL (a.k.a. MySQL), and I want to use MySQL for sharing the sessions.
I understand the whole index.js of the Redis adapter, except this function:
https://github.com/socketio/socket.io-redis/blob/master/index.js#L93
Redis.prototype.onmessage = function(channel, msg){
  var args = msgpack.decode(msg);
  var packet;

  if (uid == args.shift()) return debug('ignore same uid');

  packet = args[0];

  if (packet && packet.nsp === undefined) {
    packet.nsp = '/';
  }

  if (!packet || packet.nsp != this.nsp.name) {
    return debug('ignore different namespace');
  }

  args.push(true);
  this.broadcast.apply(this, args);
};
Since MySQL has no publish/subscribe mechanism, I think getting events from MySQL (subscribing) the way this adapter subscribes to Redis is not possible. Am I right?
Do you know of another solution for sharing socket.io between two machines, without using Redis?
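For what it's worth, since MySQL has no publish/subscribe primitive, any adapter built on it would have to poll. A rough sketch of the idea, assuming a hypothetical socketio_events table (auto-increment id, uid, payload columns), the mysql npm package, and a hypothetical handleRemotePacket() hook that feeds packets into the local broadcast logic:

var mysql = require('mysql');
var pool = mysql.createPool({ host: '...', user: '...', password: '...', database: '...' });

var uid = Math.random().toString(36).slice(2); // this instance's id, like the adapter's uid
var lastSeenId = 0;

// "publish": every outgoing broadcast is INSERTed instead of PUBLISHed
function publish(packet) {
  pool.query('INSERT INTO socketio_events (uid, payload) VALUES (?, ?)',
             [uid, JSON.stringify(packet)]);
}

// "subscribe": poll for rows written since the last check
setInterval(function () {
  pool.query('SELECT id, uid, payload FROM socketio_events WHERE id > ? ORDER BY id',
             [lastSeenId],
             function (err, rows) {
    if (err) return console.error(err);
    rows.forEach(function (row) {
      lastSeenId = row.id;
      if (row.uid === uid) return; // ignore same uid, exactly like onmessage above
      handleRemotePacket(JSON.parse(row.payload)); // hypothetical: re-broadcast locally
    });
  });
}, 250);

So you are right that you cannot subscribe to MySQL the way the adapter subscribes to Redis; you can only approximate it, and the polling interval becomes a latency/load trade-off that Redis pub/sub does not have.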
Related: this YouTube video (at 27:20) talks about populating the cache with routing info to avoid latency during a cold start.
You can either try to get a document you know doesn't exist, or you can use CosmosClient.CreateAndInitializeAsync().
I already have this code set up:
private async Task<Container> CreateContainerAsync(string endpoint, string authKey)
{
    var cosmosClientBuilder = new CosmosClientBuilder(
            accountEndpoint: endpoint,
            authKeyOrResourceToken: authKey)
        .WithConnectionModeDirect(portReuseMode: PortReuseMode.PrivatePortPool, idleTcpConnectionTimeout: TimeSpan.FromHours(1))
        .WithApplicationName(UserAgentSuffix)
        .WithConsistencyLevel(ConsistencyLevel.Session)
        .WithApplicationRegion(Regions.AustraliaEast)
        .WithRequestTimeout(TimeSpan.FromSeconds(DatabaseRequestTimeoutInSeconds))
        .WithThrottlingRetryOptions(TimeSpan.FromSeconds(DatabaseMaxRetryWaitTimeInSeconds), DatabaseMaxRetryAttemptsOnThrottledRequests);

    var client = cosmosClientBuilder.Build();
    var databaseResponse = await CreateDatabaseIfNotExistsAsync(client).ConfigureAwait(false);
    var containerResponse = await CreateContainerIfNotExistsAsync(databaseResponse.Database).ConfigureAwait(false);
    return containerResponse;
}
Is there any way to incorporate CosmosClient.CreateAndInitializeAsync() with it to populate the cache?
If not, is it ok to do this to populate the cache?
public class CosmosClientWrapper
{
    public CosmosClientWrapper(IKeyVaultFacade keyVaultFacade)
    {
        var container = CreateContainerAsync(endpoint, authenticationKey).GetAwaiter().GetResult();

        // Get a document that doesn't exist to populate the routing info:
        container.ReadItemAsync<object>(Guid.NewGuid().ToString(), PartitionKey.None).GetAwaiter().GetResult();
    }
}
The point of CreateAndInitializeAsync or BuildAndInitializeAsync is to pre-establish the connections required to perform data-plane operations against the desired containers (see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes#routing).
If the containers do not exist yet, CreateAndInitializeAsync or BuildAndInitializeAsync makes no sense: there are no target backend endpoints to connect to, so no connections can be pre-established or warmed up. That is why the database/container information is required; the only benefit is warming up the connections to the backend machines that serve those containers.
Please see CosmosClientBuilder.BuildAndInitializeAsync, which creates the Cosmos client and initializes the provided containers. I believe this is what you are looking for.
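Adapting the builder code from the question, a sketch of that might look like the following (the "MyDatabase"/"MyContainer" names are placeholders, and UserAgentSuffix and the other constants are the question's own):

private async Task<Container> CreateAndWarmUpContainerAsync(string endpoint, string authKey)
{
    var cosmosClientBuilder = new CosmosClientBuilder(
            accountEndpoint: endpoint,
            authKeyOrResourceToken: authKey)
        .WithConnectionModeDirect(portReuseMode: PortReuseMode.PrivatePortPool, idleTcpConnectionTimeout: TimeSpan.FromHours(1))
        .WithApplicationName(UserAgentSuffix)
        .WithConsistencyLevel(ConsistencyLevel.Session)
        .WithApplicationRegion(Regions.AustraliaEast)
        .WithRequestTimeout(TimeSpan.FromSeconds(DatabaseRequestTimeoutInSeconds))
        .WithThrottlingRetryOptions(TimeSpan.FromSeconds(DatabaseMaxRetryWaitTimeInSeconds), DatabaseMaxRetryAttemptsOnThrottledRequests);

    // Build the client and pre-establish connections to the backend
    // replicas of the listed (database, container) pairs in one call.
    CosmosClient client = await cosmosClientBuilder.BuildAndInitializeAsync(
        new List<(string databaseId, string containerId)> { ("MyDatabase", "MyContainer") })
        .ConfigureAwait(false);

    return client.GetContainer("MyDatabase", "MyContainer");
}

Note that this only helps when the database and container already exist; on a first run that actually creates them (as the question's CreateDatabaseIfNotExistsAsync/CreateContainerIfNotExistsAsync suggest), there is nothing to warm up yet, as the other answer explains.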
It seems I can't find a proper way to use the admin read/write functions in Cloud Functions. I am working on a messaging function that reads new messages created in the Realtime Database with Cloud Functions (Node.js) and uses the snapshot to reference a path. Here is my initial exports function:
var messageRef = functions.database.ref('Messages/{chatPushKey}/Messages/{pushKey}');
var messageText;

exports.newMessageCreated = messageRef.onCreate((dataSnapshot, context) => {
  console.log("Exports function executed");
  messageText = dataSnapshot.val().messageContent;
  var chatRef = dataSnapshot.key;
  var messengerUID = dataSnapshot.val().messengerUID;
  return readChatRef(messengerUID, chatRef);
});
And here is the function that reads from the value returned:
function readChatRef(someUID, chatKey){
  console.log("Step 2");
  admin.database.enableLogging(true);
  var db;
  db = admin.database();
  var userInfoRef = db.ref('Users/' + someUID + '/User Info');
  return userInfoRef.on('value', function(snap){
    return console.log(snap.val().firstName);
  });
}
In the Firebase Cloud Functions log I can see all the console.logs except for the one inside return userInfoRef.on(...). Is my syntax incorrect? I have attempted several other variations of reading the snap. Perhaps I am not using callbacks correctly? I know for a fact that my service account key and admin features are up to date.
If there is another direction I need to be focusing on please let me know.
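For context, the usual culprit with code like this is that ref.on() registers a persistent listener and returns the callback function itself, not a promise, so Cloud Functions may tear the instance down before the listener ever fires. A sketch of the same read using once(), which does return a promise the function can wait on (keeping the question's paths and field names):

function readChatRef(someUID, chatKey) {
  console.log("Step 2");
  var db = admin.database();
  var userInfoRef = db.ref('Users/' + someUID + '/User Info');

  // once() reads the value a single time and returns a promise;
  // returning that promise keeps the function alive until the read completes.
  return userInfoRef.once('value').then(function (snap) {
    console.log(snap.val().firstName);
    return snap.val();
  });
}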
I'm trying to make a game based on rooms, a lobby and so on (imagine a chat app, except with additional checks/information storing).
Let's say I have a module room.js:
var EventEmitter = require('events');

class Room extends EventEmitter {
  constructor (id, name) {
    super();
    this.id = id;
    this.name = name;
    this.users = [];
  }
}

Room.prototype.addUser = function (user) {
  if (this.users.indexOf(user) === -1) {
    this.users.push(user);
    this.emit('user_joined', user);
  } else {
    /* error handling */
  }
};

module.exports = {
  Room: Room,
  byId: function (id) {
    // where should I look up?
  }
};
How can I get exactly this object (with its events) when running multiple instances? How can I access events emitted by this object?
In a single instance of node, I would do something like:
var rooms = [];
var room = new Room(1234, 'test room');
room.on('user_joined', console.log);
rooms.push(room);
Also, I don't quite understand how Redis actually helps here (is it a replacement for EventEmitter?).
Regards.
EDIT: Would accept PM2 solutions too.
Instead of handling rooms in Node, you can replace them with channels in Redis.
When a new client wants to join a room, the Node.js app returns it the ID of the given room (that is to say, the name of the channel); the client then subscribes to the selected room (your client is directly connected to Redis).
You can use a Redis Set to manage the list of rooms.
In this scenario, you don't need any event emitter, and your node servers are stateless.
However, this first design means Redis is exposed on the Internet (assuming your game is public), so you must activate Redis authentication; and since you would have to give the server password to all clients, it is definitely insecure.
Moreover, Redis is fast enough to make brute-force attacks practical, so exposing it on the Internet is not recommended. That's why all communications should go through a Node instance, even if Redis is used as a backend.
To solve this, you can use socket.io to open sockets between Node and your clients, and make the Node instances (not the client) subscribe to the Redis channel. When a message is published by Redis, send it to the client through the socket. And add a layer of authentication to ensure only valid clients connect to a given channel.
An event emitter is not required here: the Redis client itself is an event emitter (like in the sketch below, based on ioredis).
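A minimal sketch of that server-side pattern, assuming ioredis and socket.io, and a hypothetical room:<id> channel-naming scheme:

var Redis = require('ioredis');
var io = require('socket.io')(3000);

var pub = new Redis();
var sub = new Redis(); // a connection in subscriber mode cannot issue other commands

io.on('connection', function (socket) {
  socket.on('join', function (roomId) {
    pub.sadd('rooms', roomId);     // the Redis Set that lists rooms
    socket.join('room:' + roomId); // local socket.io room on this instance
    pub.publish('room:' + roomId,
                JSON.stringify({ type: 'user_joined', user: socket.id }));
  });
});

// every Node instance subscribes and relays messages to its own sockets
sub.psubscribe('room:*');
sub.on('pmessage', function (pattern, channel, message) {
  io.to(channel).emit('room_event', JSON.parse(message));
});

Here Redis is the shared event bus: each Room's EventEmitter stays local to one process, and the pmessage handler is what lets every instance react to events raised anywhere.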
I am trying to set up a simple scenario using shoe + dnode + sockjs, and I do not know how to broadcast a message to all users connected to the web application.
Do you know if there is a function or method that handles this, or should it be done by "hand"?
AFAIK, you have to roll it by "hand" as you say. Here is what I do:
server.js:
var shoe = require('shoe')

var connectedClients = {}
var conCount = 0

var sock = shoe(function(clientStream) {
  clientStream.id = conCount
  connectedClients[clientStream.id] = clientStream
  conCount += 1
})
somewhere else in your server-side program:
// write to all connected clients
Object.keys(connectedClients).forEach(function(cid) {
  var clientStream = connectedClients[cid]
  clientStream.write(yourData)
})
Note, you'll want to introduce additional logic so you only write to clients that are still connected: remove disconnected clients from connectedClients with something like delete connectedClients[id], as in the sketch below.
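A sketch of that cleanup, assuming the shoe stream behaves like a regular Node stream and emits 'end' when the client disconnects:

var sock = shoe(function(clientStream) {
  clientStream.id = conCount
  connectedClients[clientStream.id] = clientStream
  conCount += 1

  // drop the client from the registry once it disconnects
  clientStream.on('end', function() {
    delete connectedClients[clientStream.id]
  })
})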
Hopefully that helps.
I have a worker role in my hosted service.
The worker role sends e-mail on a daily basis.
But the hosted service has two environments, Staging and Production, so my worker role sends the e-mail twice every day.
I'd like to know how to detect whether the worker is running in Staging or in Production.
Thanks in advance.
As per my question here, you'll see that there is no fast way of doing this. Also, unless you really know what you are doing, I strongly suggest you not do this.
However, if you want to, you can use a really nice library (Azure Service Management via C#) although we did have some trouble with WCF using it.
Here's a quick sample on how to do it (note, you need to include the management certificate as a resource in your code & deploy it to Azure):
private static bool IsStaging()
{
    try
    {
        if (!CloudEnvironment.IsAvailable)
            return false;

        const string certName = "AzureManagement.pfx";
        const string password = "Pa$$w0rd";

        // load certificate
        var manifestResourceStream = typeof(ProjectContext).Assembly.GetManifestResourceStream(certName);
        if (manifestResourceStream == null)
        {
            // should we panic?
            return true;
        }

        var bytes = new byte[manifestResourceStream.Length];
        manifestResourceStream.Read(bytes, 0, bytes.Length);
        var cert = new X509Certificate2(bytes, password);

        var serviceManagementChannel = Microsoft.Toolkit.WindowsAzure.ServiceManagement.ServiceManagementHelper.
            CreateServiceManagementChannel("WindowsAzureServiceManagement", cert);

        using (new OperationContextScope((IContextChannel)serviceManagementChannel))
        {
            var hostedServices =
                serviceManagementChannel.ListHostedServices(WellKnownConfiguration.General.SubscriptionId);

            // because we don't know the name of the hosted service, we'll do something really wasteful
            // and iterate
            foreach (var hostedService in hostedServices)
            {
                var ad =
                    serviceManagementChannel.GetHostedServiceWithDetails(
                        WellKnownConfiguration.General.SubscriptionId,
                        hostedService.ServiceName, true);

                var deployment =
                    ad.Deployments.Where(
                        x => x.PrivateID == Zebra.Framework.Azure.CloudEnvironment.CurrentRoleInstanceId)
                      .FirstOrDefault();

                if (deployment != null)
                {
                    return deployment.DeploymentSlot.ToLower().Equals("staging");
                }
            }
        }

        return false;
    }
    catch (Exception e)
    {
        // if something went wrong, let's not panic
        TraceManager.AzureFrameworkTraceSource.TraceData(System.Diagnostics.TraceEventType.Error, "Exception", e);
        return false;
    }
}
If you're using a SQL server (either Azure SQL or SQL Server hosted in a VM), you could stop the Staging worker role from doing work by only allowing the public IP of the Production instance access to the database server.
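Another pragmatic workaround, if the service-management call above is too heavy: keep an explicit slot marker in the service configuration and check it at startup. A sketch, assuming you define a hypothetical DeploymentSlot setting in each ServiceConfiguration (.cscfg) file and remember to fix it up after a VIP swap:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class SlotGuard
{
    public static bool IsProduction()
    {
        // "DeploymentSlot" is a setting we define ourselves in the .cscfg;
        // Azure does not expose the slot to the running role directly.
        var slot = RoleEnvironment.GetConfigurationSettingValue("DeploymentSlot");
        return string.Equals(slot, "Production", StringComparison.OrdinalIgnoreCase);
    }
}

In the worker's Run() loop you would then skip the e-mail send when IsProduction() returns false. The obvious caveat is that a VIP swap moves the deployments but not your bookkeeping, so this only works if updating the setting is part of your swap procedure.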