I'm currently running a Node.js server. The situation is this: I have a services folder, and inside it there are many service.js files. In each service I export the service object and its functions, e.g.
var service = {};
service.fun1 = fun1;
service.fun2 = fun2;
...
module.exports = service;
function fun1(params) { ... };
Now, I have to require one service from another. For example, whenever I need to use the user service from another service, I will do
var userService = require('services/user.service');
And then I can use it as
userService.fun1(params).then(function(data){ ... });
I have a coupon service that needs to require the user service. In the coupon service I can require every other service except the user service, and I get this error:
TypeError: userService.getById is not a function
So I console.log the userService, and it is {}, which means it was somehow not exported. But from other services I can require the user service in the same way and it works fine.
I also tried requiring other services from the coupon service, and they all work. Only the user service required from the coupon service gives this error.
This is driving me crazy.
------update -------
The code that throws the error is in coupon_service.js:
var userService = require('services/user.service');

function getAllCoupons(params) {
    var deferred = Q.defer();
    if (params.field1 && params.field2 && params.field3 && params.start_index && params.page_length) {
        userService.fun1(params.field2).then(function(data) {
            if (data.response1 == params.expected1 && helper.userIsAdmin(params.response1, data.validate)) {
                deferred.resolve(getAll(params.field2, params.start_index, params.page_length, params.filter));
            } else {
                deferred.resolve(getCoupons(params));
            }
        });
    } else {
        deferred.resolve(getCoupons(params));
    }
    return deferred.promise;
}
Assuming both services are in the same /services/ folder, you should just be able to do:
var userService = require('./user.service');
since the path for the imported service should be relative to the importing service.
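For example, a minimal sketch of the two files (using the placeholder function names from the question; the real services presumably do more):
// services/user.service.js
var service = {};
service.fun1 = fun1;
module.exports = service;
function fun1(params) { ... }

// services/coupon_service.js -- note the relative path
var userService = require('./user.service');
userService.fun1(params).then(function (data) { ... });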
Previously I had an Azure Web App (.NET Core), and it successfully tracked the SQL Server and Service Bus dependencies in Application Insights. Somehow it is not working with Azure Functions.
Environment
.NET 6
dotnet-isolated mode
Default log level set to "Information".
Azure environment using the Consumption plan for Azure Functions.
Application Insights key is configured.
I have Azure API Management in front, and the backend is an Azure Function that calls SQL Server and Service Bus.
The API Management to Azure Function dependency is resolved successfully, but the dependencies from the Azure Function to the other components are not.
I know I am posting my own answer. There is also a chance that in the future there will be a better solution, or that this gets integrated the way it is in in-process mode. Until then, follow these steps.
Add Package
Microsoft.ApplicationInsights.WorkerService
In Program.cs, when configuring the host, add:
services.AddApplicationInsightsTelemetryWorkerService();
More info at
https://learn.microsoft.com/en-us/azure/azure-monitor/app/worker-service
The only way I've managed to solve this issue so far was by setting up a custom middleware:
.ConfigureFunctionsWorkerDefaults(config =>
{
    config.UseMiddleware<AiContextMiddleware>();
})
In the IServiceCollection you simply need to add
.AddApplicationInsightsTelemetryWorkerService()
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;

public class AiContextMiddleware : IFunctionsWorkerMiddleware
{
    private readonly TelemetryClient _client;
    private readonly string _hostname;

    public AiContextMiddleware(TelemetryClient client)
    {
        _client = client;
        // Cloud role name is read from an app setting
        _hostname = Environment.GetEnvironmentVariable("AI_CLOUD_ROLE_NAME");
    }

    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        var operationId = ExtractOperationId(context.TraceContext.TraceParent);

        // Create and start the RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = context.FunctionDefinition.Name,
            Id = context.InvocationId,
            Properties =
            {
                { "ai.cloud.role", _hostname },
                { "AzureFunctions_FunctionName", context.FunctionDefinition.Name },
                { "AzureFunctions_InvocationId", context.InvocationId },
                { "AzureFunctions_OperationId", operationId }
            },
            Context =
            {
                Operation =
                {
                    Id = operationId,
                    ParentId = context.InvocationId,
                    Name = context.FunctionDefinition.Name
                },
                GlobalProperties =
                {
                    { "ai.cloud.role", _hostname },
                    { "AzureFunctions_FunctionName", context.FunctionDefinition.Name },
                    { "AzureFunctions_InvocationId", context.InvocationId },
                    { "AzureFunctions_OperationId", operationId }
                }
            }
        };

        var operation = _client.StartOperation(requestTelemetry);
        try
        {
            await next(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            _client.TrackException(e);
            throw;
        }
        finally
        {
            _client.StopOperation(operation);
        }
    }

    private static string ExtractOperationId(string traceParent)
        => string.IsNullOrEmpty(traceParent) ? string.Empty : traceParent.Split("-")[1];
}
It's definitely not a perfect solution, as you then get two starting log entries, but as the end result you get all log traces plus dependencies correlated to an operation.
That is how I solved this issue in the first place; now I'm revisiting whether there are better ways to solve it.
Let me know too whether you managed to solve this issue on your side.
I am using a Cloud Function written in Node.js to list projects. This is the index.js file containing the method. When I trigger this function, only one project is printed: ProjectA (the Cloud Function also resides in ProjectA). I have another project, ProjectB, which is not printed even though it is also in ACTIVE mode. I have the Owner permission on both projects.
// Note: the code references a `config` object (defined elsewhere) for
// config.ACTIVE and config.METRIC_EXPORT_PUBSUB_VERIFICATION_TOKEN.
const {Resource} = require('@google-cloud/resource');

const resource = new Resource();

async function getProjects() {
    try {
        // Lists all current projects
        const [projects] = await resource.getProjects();
        console.log(`success in getProjects() call`);

        // Set a uniform endTime for all the resulting messages
        const endTime = new Date();
        const endTimeStr = endTime.toISOString();
        // sample 2019-11-12T17:58:26.068483Z

        for (var i = 0; i < projects.length; i++) {
            console.log("Total Projects ", projects.length); // Printing 1 instead of the correct 2

            // Only publish messages for active projects
            if (projects[i]["metadata"]["lifecycleState"] === config.ACTIVE) {
                // Construct a Pub/Sub message
                console.log(`About to send Pub/Sub message ${projects[i]}`);
                const pubsubMessage = {
                    "token": config.METRIC_EXPORT_PUBSUB_VERIFICATION_TOKEN,
                    "project_id": projects[i]["id"],
                    "end_time": endTimeStr
                };
            }
        }
    } catch (err) {
        console.error("Error in getProjects()");
        console.error(err);
        throw err;
    }
}
However, if I try the Google API link
https://cloud.google.com/resource-manager/reference/rest/v1/projects/list#try-it
I get both projects, which I have access to, in the response.
When you execute a Cloud Function, you choose a service account that will execute it; normally it's the App Engine default service account (project-id@appspot.gserviceaccount.com), and that service account should have the Project Owner role.
The API call from the API Explorer uses credentials tied to your user account, not the service account used to execute the Cloud Function, which is why it shows you all your projects.
To fix your issue, just add the service account that you're using to execute the Cloud Function to all your projects with the Project Owner role, although lesser roles (like Project Viewer) are enough to list them.
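For example (the project IDs below are placeholders), granting the function's service account a role on the other project can be done from the command line:
gcloud projects add-iam-policy-binding PROJECT_B_ID --member="serviceAccount:PROJECT_A_ID@appspot.gserviceaccount.com" --role="roles/viewer"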
I have different microservices developed in Hapi + Moleculer.
I used the hapi-moleculer npm module to add Moleculer to Hapi, and I am using Redis as the transporter to communicate between services.
I can call functions of service A from service B...
What I need is to add authentication to calls to other services' functions.
For example, if service A calls a function of service B, it needs to authenticate, to prevent others from connecting to my services.
I am calling services like this:
request.broker.call('users.logout', { });
I saw a module, imicros-auth, for this, but I didn't find it very useful. Is there any other module which can do this, or is there a better approach to custom-coding service-to-service authentication?
It should work like this:
If a service is calling its own function, then no auth is required; if it is calling a function of another service, then it must be authenticated.
One more thing: it should not involve fetching auth data from a DB or anything similar that slows down the service's response; it can be token based or something like that.
Maybe this middleware? https://github.com/icebob/moleculer-protect-services
To use this, you should generate a JWT token with service name for all services and define a list of the permitted services. The middleware will validate the JWT.
Here is the source of the middleware:
const { MoleculerClientError } = require("moleculer").Errors;

module.exports = {
    // Wrap local action handlers (legacy middleware handler)
    localAction(next, action) {
        // If this feature enabled
        if (action.restricted) {
            // Create new handler
            return async function ServiceGuardMiddleware(ctx) {
                // Check the service auth token in Context meta
                const token = ctx.meta.$authToken;
                if (!token)
                    throw new MoleculerClientError("Service token is missing", 401, "TOKEN_MISSING");

                // Verify token & restricted services
                // Tip: For better performance, you can cache the response because it won't change in runtime.
                await ctx.call("guard.check", { token, services: action.restricted });

                // Call the original handler
                return await next(ctx);
            }.bind(this);
        }

        // Return original handler, because feature is disabled
        return next;
    },

    // Wrap broker.call method
    call(next) {
        // Create new handler
        return async function(actionName, params, opts = {}) {
            // Put the service auth token in the meta
            if (opts.parentCtx) {
                const service = opts.parentCtx.service;
                const token = service.schema.authToken;
                if (!opts.meta)
                    opts.meta = {};
                opts.meta.$authToken = token;
            }

            // Call the original handler
            return await next(actionName, params, opts);
        }.bind(this);
    },
};
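To illustrate how this is meant to be wired up, here is a hypothetical sketch (the secret, transporter URL, and service names are made up for the example; guard.check and the restricted/authToken properties are what the middleware above reads):
const jwt = require("jsonwebtoken");
const { ServiceBroker } = require("moleculer");
const ServiceGuard = require("./service-guard.middleware"); // the middleware shown above

const SECRET = process.env.SERVICE_GUARD_SECRET; // shared secret, hypothetical env var

const broker = new ServiceBroker({
    transporter: "redis://localhost:6379",
    middlewares: [ServiceGuard]
});

// Guard service: verifies the caller's JWT and checks it against the allowed services
broker.createService({
    name: "guard",
    actions: {
        check(ctx) {
            const { token, services } = ctx.params;
            const decoded = jwt.verify(token, SECRET); // throws if the token is invalid
            if (!services.includes(decoded.service))
                throw new Error("Service is not allowed to call this action");
            return true;
        }
    }
});

// A protected service: only the "coupons" service may call users.logout
broker.createService({
    name: "users",
    // Token that identifies this service when it calls restricted actions elsewhere
    authToken: jwt.sign({ service: "users" }, SECRET),
    actions: {
        logout: {
            restricted: ["coupons"],
            handler(ctx) { /* ... */ }
        }
    }
});
Since the token is verified in memory rather than against a database, the per-call overhead stays low, which matches the requirement above; caching the guard.check result, as the middleware's comment suggests, reduces it further.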
I hope you can help me (us). I'm working on an API project which has two databases:
Production DB : api.myapp.fr
Testing DB : test.api.myapp.fr
These two databases are writable by the user.
When a user calls our API, they can set the Authorization header to whichever environment they need. For example:
Authorization: s_0
will perform operations on api.myapp.fr, and
Authorization: s_t_0
will perform operations on test.api.myapp.fr.
My question is: how can I do that with Sails?
Actually, I have a policy which checks whether the user is using a production key or a testing key, and I override the default models with the testing ones, like this:
if (!is_production) {
    req.session.isProd = false;
    req.session.logs.environment = "test";
    User = UserTest;
    Payment = PaymentTest;
    PayzenStatus = PayzenStatusTest;
    Transaction = TransactionTest;
    Card = CardTest;
    Doc = DocTest;
}
But you can see the problem: if a user makes a test request and then a production request, the models are still the test ones...
I use my models in services and policies, therefore I can't do
req.models = {};
// If not in production, use the test models
if (!is_production) {
    req.session.isProd = false;
    req.session.logs.environment = "test";
    req.models.User = UserTest;
    req.models.Payment = PaymentTest;
    req.models.PayzenStatus = PayzenStatusTest;
    req.models.Transaction = TransactionTest;
    req.models.Card = CardTest;
    req.models.Doc = DocTest;
}
// Otherwise use the production models
else {
    req.models.User = User;
    req.models.Payment = Payment;
    req.models.PayzenStatus = PayzenStatus;
    req.models.Transaction = Transaction;
    req.models.Card = Card;
    req.models.Doc = Doc;
}
If you have any idea how to achieve this (whatever the approach; we can still make deep changes to our code), I would be really happy to hear it.
Thanks
There are two different ways of doing this.
First, you could set an environment variable on your production host and check that environment variable to see if you are running in prod. If you are, use the URI of the production database.
Secondly, and probably the better way of doing this, is creating a config.js file that allows you to read environment variables. What I do for all my apps is set environment variables for connection info to databases and API keys. When running locally or testing, I have some defaults in my app, but when the environment variables are set, they are read and used. So set the environment variables in production to point to your production databases.
The config.js file I'm posting below contains references to VCAP, which assumes you are running on Cloud Foundry.
config.js
var VCAP_SERVICES = process.env["VCAP_SERVICES"],
    vcapServices;

if (VCAP_SERVICES) {
    vcapServices = JSON.parse(VCAP_SERVICES);
}

function getEnv(propName, defaultValue) {
    if (process.env[propName]) {
        return process.env[propName];
    } else {
        return defaultValue;
    }
}

module.exports = function() {
    return {
        getEnv: getEnv,
        couchDbURL: function() {
            // Default to a local couch installation for development
            if (VCAP_SERVICES) {
                return vcapServices["cloudantNoSQLDB"][0].credentials.url;
            } else {
                return "http://localhost:5984";
            }
        },
        couchDbName: function() {
            return getEnv("COUCHDB_NAME", "mydb");
        }
    };
};
app.js
var config = require("./config")();
console.log(config.couchDbURL());
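So, for example (the database name here is hypothetical), pointing the app at a production database is just a matter of setting the variable before starting it:
COUCHDB_NAME=myapp_production node app.js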
I am using socket.io in Node.js to implement chat functionality in my Azure cloud project. In it, I have been adding the user chat history to tables using Node.js. It works fine when I run it on my local emulator, but strangely, when I deploy it to my Azure cloud it doesn't work, and it doesn't throw any error either, so it's really mind-boggling. Below is my code.
var app = require('express')()
  , server = require('http').createServer(app)
  , sio = require('socket.io')
  , redis = require('redis');

var client = redis.createClient();
var io = sio.listen(server, {origins: '*:*'});
io.set("store", new sio.RedisStore);

process.env.AZURE_STORAGE_ACCOUNT = "account";
process.env.AZURE_STORAGE_ACCESS_KEY = "key";

var azure = require('azure');
var chatTableService = azure.createTableService();
createTable("ChatUser");

server.listen(4002);

// presumably registered inside io.sockets.on('connection', ...) -- omitted here
socket.on('privateChat', function (data) {
    var receiver = data.Receiver;
    console.log(data.Username);
    var chatGUID1 = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
        var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
        return v.toString(16);
    });
    var chatRecord1 = {
        PartitionKey: data.Receiver,
        RowKey: data.Username,
        ChatID: chatGUID1,
        Username: data.Receiver,
        ChattedWithUsername: data.Username,
        Timestamp: new Date(new Date().getTime())
    };
    console.log(chatRecord1.Timestamp);
    queryEntity(chatRecord1);
});

function queryEntity(record1) {
    chatTableService.queryEntity('ChatUser'
        , record1.PartitionKey
        , record1.RowKey
        , function (error, entity) {
            if (!error) {
                console.log("Entity already exists");
            } else {
                insertEntity(record1);
            }
        });
}

function insertEntity(record) {
    chatTableService.insertEntity('ChatUser', record, function (error) {
        if (!error) {
            console.log("Entity inserted");
        }
    });
}
It works on my local emulator but not on the cloud, and I came across a note that the DateTime property of an entity should not be null when creating a record in a cloud table. But I'm pretty sure the way I'm passing Timestamp is fine, right? Any other ideas why it might work locally but not on the cloud?
EDIT:
I have also been getting this error when running the socket.io server, but in spite of this error the socket.io functionality works fine, so I didn't bother to care about it. I have no idea what the error means in the first place.
{ [Error: connect ECONNREFUSED]
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect' }
A couple of things:
You shouldn't need to set Timestamp, the service should be populating that automatically when you insert a record.
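For example, the record from the question could simply omit the Timestamp property (a sketch using the question's field names; the table service fills Timestamp in on insert):
var chatRecord1 = {
    PartitionKey: data.Receiver,
    RowKey: data.Username,
    ChatID: chatGUID1,
    Username: data.Receiver,
    ChattedWithUsername: data.Username
    // no Timestamp: the table service populates it automatically
};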
When running it locally you can set the environment variables to the Windows Azure storage account settings and see if it will successfully write to the table when running on your developer box. Instead of running in the emulator, just set the environment variables and run the app directly with node.exe.
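For example (account name, key, and file name are placeholders), on a Windows developer box that might look like:
set AZURE_STORAGE_ACCOUNT=yourstorageaccount
set AZURE_STORAGE_ACCESS_KEY=yourstoragekey
node server.js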
Are you running in a web role or worker role? I'm assuming it's a cloud service since you mentioned the emulator. If it's a worker role, maybe add some instrumentation to log to file to assist in debugging. If it's a web role you can add an iisnode.yml file in the root of the application, with the following line in the file to enable logging of stdout/stderr:
loggingEnabled: true
This will capture stdout/stderr to an iislog folder under the approot folder on e: or f: of the web role instance. You can remote desktop to the instance and look at the logs to see if the logs you have for successful insertion are occurring.
Otherwise, it's not obvious from the code above what's going on. Similar code worked fine for me. Relevant bits for my test code can be found at https://gist.github.com/Blackmist/5326756.
Hope this helps.