Specifying ClassDefinition in hazelcast.yaml?

Using hazelcast 5.2.1
We are moving from a Java-based config in a custom application to a stand-alone server with a YAML config, since we would like to use the public Docker image as the base for a Hazelcast member. We expect to just add some jar files in ${HZ_HOME}/bin/user-lib and a config file at ${HZ_HOME}/hazelcast.yaml.
Our config gets picked up and the server starts, but when the clients try to put objects, things go wrong. The server logs the error:
com.hazelcast.nio.serialization.HazelcastSerializationException: Cannot write null portable without explicitly registering class definition
How can we add ClassDefinition objects to the config?
We have classes implementing VersionedPortable, and have static ClassDefinition members for them.
Until now we have just added the class definitions programmatically while configuring the member instance in our own applications, but we cannot find a hook to do this when using the YAML config.
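Roughly, the programmatic registration we do today looks like the sketch below (the factory id, class id, and field names are only illustrative; our real classes expose their definitions as static ClassDefinition members):
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.nio.serialization.ClassDefinition;
import com.hazelcast.nio.serialization.ClassDefinitionBuilder;

public class MemberBootstrap {
    public static void main(String[] args) {
        // Illustrative ids and fields only.
        ClassDefinition orderDef = new ClassDefinitionBuilder(1, 100, 1)
                .addLongField("id")
                .addStringField("status")
                .build();

        Config config = new Config();
        // This is the hook we are looking for when using hazelcast.yaml:
        config.getSerializationConfig().addClassDefinition(orderDef);

        Hazelcast.newHazelcastInstance(config);
    }
}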

Related

Can I omit site.pp to execute one class on all nodes

I have a puppetserver and three nodes. On the master I created a createuser.pp file with a createuser class.
Additionally I created site.pp with the following content:
node 'app01', 'app02', 'app03' {
  include createuser
}
Now running puppet agent -tv from each app server creates the user. All works fine.
My question is: can I do the same without defining the site.pp manifest?
I can of course add include createuser at the end of the class definition in createuser.pp, but how can I run it from all three app hosts? Do I need to create a tag?
Thanks for any tips.
Can I do the same without defining the site.pp manifest?
In any version of Puppet still supported, you can do it without a file named site.pp, but not without a "site manifest" in a more general sense. This can be a single file of configurable name (default "site.pp"), or it can be the collective contents of all the files in a specified directory. Either way, the site manifest is the entry point for catalog building. Other manifests are consulted only as needed.
I can of course add include createuser at the end of the class definition in createuser.pp
In current Puppet, top-scope code may appear only in the site manifest (though some historic versions were more lax). Your class should be in a module, not in the site manifest, and in that case, no, you couldn't use an include statement in the same file as the class definition. You need a class declaration either in the site manifest or in some other class directly or indirectly declared by the site manifest.
but how can I run it from all three app hosts? Do I need to create a tag?
On the agent side, tags are useful only for limiting which classes and resources are applied. Only those already in its catalog are available -- you cannot use tags from that end to apply classes or resources that wouldn't ordinarily be applied.
You ensure that the class is in the node's catalog by declaring it in the site manifest (as you are doing now) or in another class directly or indirectly declared in the site manifest.
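For concreteness, a minimal sketch of that layout (the user resource is just a placeholder for whatever createuser really manages):
# modules/createuser/manifests/init.pp
class createuser {
  user { 'appuser':
    ensure => present,
  }
}

# manifests/site.pp -- the entry point, declaring the class for the three app nodes
node 'app01', 'app02', 'app03' {
  include createuser
}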

Environment specific configuration of datasources in loopback4 application

I have just started my first LoopBack project and chosen the loopback4 version for the application. It's purely a server application which will interact with databases (Redis and MongoDB) and call external API services, due to a micro-service architecture.
Now, I have 3 datasources in my application, i.e. MongoDB, Redis, and a REST-based datasource to call external services. I am facing 2 problems going forward.
1. Environment-specific configuration of datasources: I need to maintain configuration for all three datasources according to the NODE_ENV environment variable. For lb3 I found this solution,
https://loopback.io/doc/en/lb3/Environment-specific-configuration.html#data-source-configuration
which does not work in lb4. One solution is to add configuration files named mongodb.staging.json and mongodb.production.json (and the same for the redis and rest datasources) in the directory src/datasources, load the right one according to the NODE_ENV variable using an if condition, and pass it to the constructor of the datasource. It works, but it does not seem nice that this becomes the application's responsibility.
Can somebody suggest an lb4 equivalent of the lb3 solution above?
2. Calling external APIs via a datasource: In lb4, to call external services it is recommended to have a separate REST-based datasource and a service that calls it via a controller. Now, in the REST datasource config, one has to define a template of all the API calls that will be made to the external service: https://loopback.io/doc/en/lb4/REST-connector.html#defining-a-custom-method-using-a-template.
As my application calls the external service heavily, with a relatively large number of request parameters, it becomes really messy to declare each API call with its request params and to maintain this in the datasource config, which will be environment specific.
Can somebody suggest a more robust and cleaner alternative for the above problem?
Thanks in advance!!
Using environment variables in datasource configs
The datasource config is simply a JSON file that's imported into *.datasource.ts. Hence, you can replace that JSON file with a TypeScript file and import it accordingly. LoopBack 4 does not provide any custom variable substitution mechanism; instead, it is recommended to use process.env.
Recent CLI versions replace the JSON config in favour of a single TypeScript file:
import {inject} from '@loopback/core';
import {juggler} from '@loopback/repository';

const config = {
  name: 'db',
  connector: 'memory',
};

export class DbDataSource extends juggler.DataSource {
  static dataSourceName = 'db';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.db', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
The dependency injection in the constructor allows you to override the config programmatically via the IoC container of the application.
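Following the process.env suggestion, the config object can be assembled directly in the datasource file. A rough sketch, where NODE_ENV switches between made-up MONGO_URL_* variables (neither name is defined by LoopBack):
import {inject} from '@loopback/core';
import {juggler} from '@loopback/repository';

// Pick connection details based on NODE_ENV.
const env = process.env.NODE_ENV ?? 'development';

const config = {
  name: 'mongodb',
  connector: 'mongodb',
  url:
    env === 'production'
      ? process.env.MONGO_URL_PRODUCTION
      : process.env.MONGO_URL_STAGING,
};

export class MongodbDataSource extends juggler.DataSource {
  static dataSourceName = 'mongodb';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.mongodb', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
Alternatively, because the injection above is optional, an environment-specific object can be bound to datasources.config.mongodb from the application class, which keeps the datasource file itself untouched.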
Further reading
https://loopback.io/doc/en/lb4/DataSources.html
Calling external APIs without REST connector
The REST connector enforces a well-defined interface for querying external APIs so as to be able to do validation before sending out the request.
If this is not favourable, it is possible to create a new Service as a wrapper to the HTTP queries. From there, you can expose your own functions to handle requests to an external API. As Services do not need to follow a rigid structure, it is possible to customize it to your use-case.
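For instance, a bare-bones service wrapper might look like the sketch below (the endpoint is made up, and the global fetch assumes Node 18+):
export class ExternalApiService {
  // Thin wrapper around one external call; add methods per use-case.
  async getUser(id: string): Promise<unknown> {
    const response = await fetch(`https://api.example.com/users/${id}`);
    if (!response.ok) {
      throw new Error(`External API returned ${response.status}`);
    }
    return response.json();
  }
}
Assuming your application class uses ServiceMixin (the default scaffold does), it can be registered with this.service(ExternalApiService) and injected into controllers with the @service decorator.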
It is also possible to create a new request directly inside the controller using either built-in or external libraries.
Overall, there isn't a 100% right or wrong way of doing certain things in LoopBack 4, which is why the framework provides numerous ways to tackle the same issue.

Meteor 1.3 and configuration

I have a simple question.
When you use Node + webpack you can easily configure whatever you want.
For example, I can set the default path for my app modules in the config.
How can I do it in Meteor 1.3? Is there some config file like webpack's?
Meteor applications can store configuration options like API keys or global settings. An easy way to provide this configuration is with a settings.json file in the root of your Meteor application. The key/value pairs are available only on the server, but you can provide public access to settings by using public:
settings.json
{
  "privateKey": "privateValue",
  "public": {
    "publicKey": "publicValue"
  }
}
These values are available in your app using Meteor.settings.
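For example (assuming the usual client/ and server/ entry points), the values from the settings.json above can be read like this:
// server/main.js -- the whole settings object is visible on the server.
import { Meteor } from 'meteor/meteor';
console.log(Meteor.settings.privateKey);        // "privateValue"
console.log(Meteor.settings.public.publicKey);  // "publicValue"

// client/main.js -- only the public block reaches the browser.
import { Meteor } from 'meteor/meteor';
console.log(Meteor.settings.public.publicKey);  // "publicValue"
Remember to start the app with meteor run --settings settings.json so the file is actually loaded.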
From the Full Meteor Docs:
Meteor.settings contains deployment-specific configuration options. You can initialize settings by passing the --settings option (which takes the name of a file containing JSON data) to meteor run or meteor deploy. When running your server directly (e.g. from a bundle), you instead specify settings by putting the JSON directly into the METEOR_SETTINGS environment variable. If the settings object contains a key named public, then Meteor.settings.public will be available on the client as well as the server. All other properties of Meteor.settings are only defined on the server. You can rely on Meteor.settings and Meteor.settings.public being defined objects (not undefined) on both client and server even if there are no settings specified. Changes to Meteor.settings.public at runtime will be picked up by new client connections.
A good write-up can also be found on TheMeteorChef's Blog

MVC 5 keeps sending me to /Account/Login

Despite adding OWIN authentication to my MVC site, I keep getting redirected to /Account/Login even though I have set authentication to none in the web.config and changed the OWIN LoginPath to /Login/.
I have also noticed that ConfigureAuth(IAppBuilder app) in Startup.cs never gets hit...
I have added the following packages
Microsoft.Owin
Microsoft.AspNet.Identity.Owin
Microsoft.Owin.Security
Microsoft.Owin.Security.Cookies
Microsoft.Owin.Security.OAuth
Owin
Do I need to configure OWIN to use my Startup.cs class or should it just work?
You should take a look at OWIN Startup Class Detection
Every OWIN application has a startup class where you specify components for the application pipeline. There are different ways you can connect your startup class with the runtime, depending on the hosting model you choose (OwinHost, IIS, and IIS-Express). The startup class shown in this tutorial can be used in every hosting application. You connect the startup class with the hosting runtime using one of the following approaches:
Naming Convention: Katana looks for a class named Startup in a namespace matching the assembly name or the global namespace.
OwinStartup Attribute: This is the approach most developers will take to specify the startup class. The following attribute will set the startup class to the TestStartup class in the StartupDemo namespace.
[assembly: OwinStartup(typeof(StartupDemo.TestStartup))]
The OwinStartup attribute overrides the naming convention. You can also specify a friendly name with this attribute; however, using a friendly name requires you to also use the appSetting element in the configuration file.
There are a few other things in addition to the above, which you will find at the link provided.
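As a minimal sketch of the attribute approach (the namespace is illustrative, and ConfigureAuth is assumed to be the method you already have, typically in App_Start/Startup.Auth.cs):
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyMvcSite.Startup))]

namespace MyMvcSite
{
    public partial class Startup
    {
        // Katana invokes this once at startup when the class is discovered.
        public void Configuration(IAppBuilder app)
        {
            ConfigureAuth(app); // lives in the other half of the partial class
        }
    }
}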

ClassLoader - Loading and saving data

Hopefully someone can help me with this.
It is my understanding that using a ClassLoader is the most reliable way to load in content.
import java.io.InputStream;
import java.net.URL;

public class Pipeline {

    public static URL getResource(String filename) {
        return ClassLoader.getSystemResource(filename);
    }

    public static InputStream getResourceAsStream(String filename) {
        return ClassLoader.getSystemResourceAsStream(filename);
    }
}
If you had a file at "[jar bundle]/resources/abc.png", you would load it like this:
URL url = Pipeline.getResource("resources/abc.png");
Loading is simple.
Saving is what's getting me.
I have a program that collects data while running, saves that data on exit, and then loads the data back in next time and keeps adding to it.
The easiest solution, I think, would be to save back into the jar bundle so that the ClassLoader can get at the files. Is this even possible? Or recommended?
I don't mind having my resources outside of the jar, as long as I don't have to resort to 'File' to get at them and save to them (unless it can be done cleanly):
folder/application.jar
folder/resources/abc.png
If you could go back one level from where the ClassLoader is looking, it would be easy to cleanly get data from the directory that actually contains the jar file:
Pipeline.getResource("../resources/abc.png");
Any ideas?
This isn't really what class loaders are meant for. Loading resources from the class loader is meant so that you can bundle up your application as one package, and components can read each other's resources without worrying about how the system you're deploying to is set up.
If the file in the JAR is meant to be changed by the app, then it isn't part of the app and thus probably shouldn't be in the JAR.
I don't have a lot of context on your app, but hopefully my suggestion will be valid for your situation.
I recommend making it a requirement of your app that it has a work area to which it is allowed to read and write, and accepting a configuration setting that specifies where this directory is. Typical ways to do this in Java are with environment variables, system properties or JNDI settings (for container deployments); a small sketch follows the examples below.
Examples:
Tomcat's startup scripts figure out where it is installed, set a system property called catalina.home, and allow you to override it with an environment variable called CATALINA_HOME.
JBoss looks for JBOSS_HOME.
Java application servers typically look for JAVA_HOME to find the JDK.
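A minimal sketch of that pattern (the names myapp.home and MYAPP_HOME are made up for illustration):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkArea {

    // Resolve the work directory from -Dmyapp.home, falling back to the MYAPP_HOME environment variable.
    static Path resolve() throws IOException {
        String home = System.getProperty("myapp.home", System.getenv("MYAPP_HOME"));
        if (home == null) {
            throw new IllegalStateException("Set -Dmyapp.home or MYAPP_HOME");
        }
        Path dir = Paths.get(home, "data");
        Files.createDirectories(dir); // no-op if the directory already exists
        return dir;
    }

    public static void main(String[] args) throws IOException {
        Path counterFile = resolve().resolve("counter.txt");
        // Load the previous value if present, add to it, and write it back.
        int count = Files.exists(counterFile)
                ? Integer.parseInt(Files.readString(counterFile).trim())
                : 0;
        Files.writeString(counterFile, Integer.toString(count + 1));
    }
}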
