How do I verify the consistency level configured on the Java driver? - cassandra

We are currently upgrading from 3.x to 4.x. We are using the programmaticBuilder for the DriverConfigLoader. Below is the code for the same.
DriverConfigLoader driverConfigLoader = DriverConfigLoader.programmaticBuilder()
        .withDuration(DefaultDriverOption.HEARTBEAT_INTERVAL, Duration.ofSeconds(60))
        .withString(DefaultDriverOption.REQUEST_CONSISTENCY, ConsistencyLevel.LOCAL_QUORUM.name())
        .withString(DefaultDriverOption.RETRY_POLICY_CLASS, "DefaultRetryPolicy")
        .withString(DefaultDriverOption.RECONNECTION_POLICY_CLASS, "ConstantReconnectionPolicy")
        .withDuration(DefaultDriverOption.RECONNECTION_BASE_DELAY, Duration.ofSeconds(5))
        .withString(DefaultDriverOption.LOAD_BALANCING_POLICY_CLASS, "DcInferringLoadBalancingPolicy")
        .build();
I wanted to check how to verify that the correct ConsistencyLevel is applied when reads/writes happen. Is there a debug log mechanism available for this purpose?

Your question suggests that you suspect the configured consistency level is not being honoured by the driver, so you're looking for proof that it is. On its own, that concern doesn't quite make sense to me. Perhaps you ran into another problem related to request consistency, and you should post information about that instead.
In any case, the programmatic DriverConfigLoader is provided for convenience, but we discourage its use because it means you are hard-coding configuration within your app, which is bad practice. If you need to make a change, you are forced to recompile your app, precisely because the configuration is hard-coded. Only use the programmatic loader if you have a very specific reason.
The recommended method for configuring the driver options is to use an application configuration file (application.conf). The advantages include:
driver options are configured in a central location,
hot-reload support, and
changes do not require recompiling the app.
To set the basic request consistency to LOCAL_QUORUM:
datastax-java-driver {
    basic {
        request {
            consistency = LOCAL_QUORUM
        }
    }
}
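The same file can also define execution profiles if a subset of requests needs a different consistency; the profile name slow-writes below is just an illustrative example:

```
datastax-java-driver {
    profiles {
        slow-writes {
            basic.request.consistency = EACH_QUORUM
        }
    }
}
```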
For details, see Configuring the Java driver. Cheers!

For DataStax Java Driver 4.x, you can do something like this:
CqlSession session = CqlSession.builder().withConfigLoader(driverConfigLoader).build();
DriverConfig config = session.getContext().getConfig();
config.getProfiles().forEach((name, profile) -> {
    System.out.println("Profile: " + name);
    profile.entrySet().forEach(System.out::println);
    System.out.println();
});
This will print the values for every defined option in every defined profile. It won't print undefined options though.
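If you only care about a single option, you can also read it directly rather than dumping everything. A minimal sketch (assuming the same session variable built above) that fetches the request consistency from the default profile via DriverExecutionProfile:

```java
// Read just the request consistency from the default execution profile
DriverExecutionProfile profile = session.getContext().getConfig().getDefaultProfile();
String consistency = profile.getString(DefaultDriverOption.REQUEST_CONSISTENCY);
System.out.println("Configured consistency: " + consistency);
```

This reflects the configuration the driver resolved at runtime, regardless of whether it came from the programmatic loader or application.conf.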

Related

Can a Node package require a DB connection?

As per the title, can a Node.js package require a database connection?
For example, I have written a specific piece of middleware functionality that I plan to publish via NPM; however, it requires a connection to a NoSQL database. The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value.
Is this considered bad practice?
It is not bad practice as long as you require the database you need and state it clearly in your README.md file. It only becomes bad practice if you work without providing comments in your code or a README.md that guides anyone else reading your code.
Example:
// require your NoSQL database, e.g. MongoDB
const mongoose = require('mongoose');
// connect to the database; 'boy' is the database name
mongoose.connect('mongodb://localhost/boy', function(err) {
    if (err) {
        console.log(err);
    } else {
        console.log("Success");
    }
});
You generally have two choices when your module needs a database and wants to remain as independently useful as possible:
You can load a preferred database in your code and use it.
You can provide the developer using your module a means of passing in a database that meets your specification to be used by your module. The usual way is for the developer to pass the database to your module's constructor function.
In the first case, you may need to allow the developer to specify a disk store path to be used. In the second case, you have to be very specific in your documentation about what kind of database interface is required.
There's also a hybrid option where you offer the developer the option of configuring and passing you a database, but if not provided, then you load your own.
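The hybrid option is essentially constructor injection with a default fallback, and the pattern is the same in any language. A minimal sketch in Java (the Store, DefaultStore, and MyModule names here are hypothetical, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal interface documenting the database contract the module requires
interface Store {
    void save(String key, String value);
}

// Hypothetical built-in default used when the caller supplies nothing
class DefaultStore implements Store {
    private final Map<String, String> data = new HashMap<>();
    public void save(String key, String value) {
        data.put(key, value);
    }
}

// The module: accepts an injected store, or falls back to its own default
class MyModule {
    private final Store store;

    MyModule(Store store) {
        this.store = (store != null) ? store : new DefaultStore();
    }

    Store getStore() {
        return store;
    }
}
```

A caller can pass their own Store implementation, or pass null and get the module's default, which matches the hybrid approach described above.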
The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value. Is this considered bad practice?
No, it's not a bad practice. This would be an implementation of option number 1 above. As long as your customers (the developers using your module) don't mind you loading and using Mongoose, then this is perfectly fine.

I wrote a Liferay module. How to make it configurable by administrators?

I have created a Liferay 7 module, and it works well.
Problem: In the Java source code I hard-coded something that administrators need to modify.
Question: What is the Liferay way to externalize settings? I don't mind if the server has to be restarted, but of course the ability to modify settings on a live running server (via Gogo Shell?) could be cool provided that these settings then survive server restarts.
More specifically, I have a module for which I would like to be able to configure an API key that looks like "3g9828hf928rf98" and another module for which I would like to configure a list of allowed structures that looks like "BASIC-WEB-CONTENT","EVENTS","INVENTORY".
Liferay utilizes standard OSGi configuration. Documenting it all here is quite a task, but it's well laid out in the documentation.
In short:
@Meta.OCD(id = "com.foo.bar.MyAppConfiguration")
public interface MyAppConfiguration {

    @Meta.AD(
        deflt = "blue",
        required = false
    )
    public String favoriteColor();

    @Meta.AD(
        deflt = "red|green|blue",
        required = false
    )
    public String[] validLanguages();

    @Meta.AD(required = false)
    public int itemsPerPage();
}
OCD stands for ObjectClassDefinition. It ties this configuration class/object to the configurable object through the id/pid.
AD is for AttributeDefinition and provides some hints for the configuration interface, which is auto-generated with the help of this meta type.
And when you don't like the appearance of the autogenerated UI, you "only" have to add localization keys for the labels that you see on screen (standard Liferay translation).
You'll find a lot more details on OSGi configuration for example on enroute, though the examples I found are always a bit more complex than just going after the configuration.
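To actually read these values at runtime, a Liferay component typically materializes the configuration object from its component properties on activation. A sketch, assuming Liferay's ConfigurableUtil helper and the MyAppConfiguration interface above (MyService is a hypothetical component name):

```java
import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Modified;

import com.liferay.portal.configuration.metatype.bnd.util.ConfigurableUtil;

@Component(
    configurationPid = "com.foo.bar.MyAppConfiguration",
    immediate = true
)
public class MyService {

    private volatile MyAppConfiguration configuration;

    // Called on activation and again whenever an administrator
    // saves the configuration in System Settings
    @Activate
    @Modified
    protected void activate(Map<String, Object> properties) {
        configuration = ConfigurableUtil.createConfigurable(
            MyAppConfiguration.class, properties);
    }

    public String favoriteColor() {
        return configuration.favoriteColor();
    }
}
```

Because @Modified is handled too, saved changes survive restarts and take effect on the live server without redeploying.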

Refresh Cache in Spring

How can I refresh the Spring cache when I insert data into the database through my services, and also when I add data directly into the database? Can we achieve this?
Note:
I am using the following libraries:
1) net.sf.json-lib
2) spring-support-context
through my services
This is typically achieved in your application's services (e.g. @Service application components) using Spring's @Cacheable and @CachePut annotations, for example...
@Service
class BookService {

    @Cacheable("Books")
    Book findBook(ISBN isbn) {
        ...
        return booksRepository.find(isbn);
    }

    @CachePut(cacheNames = "Books", key = "#book.isbn")
    Book update(Book book) {
        ...
        return booksRepository.save(book);
    }
}
Both @Cacheable and @CachePut will update the cache provider, as the underlying method may call through to the underlying database.
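For an explicit refresh after writes made through your services, Spring's cache abstraction also offers @CacheEvict; a sketch, assuming the same "Books" cache and repository as above:

```java
// Evict a single entry after a write that invalidates it
@CacheEvict(cacheNames = "Books", key = "#isbn")
void delete(ISBN isbn) {
    booksRepository.delete(isbn);
}

// Clear the whole cache, e.g. from a scheduled "refresh" job;
// subsequent reads repopulate it from the database
@CacheEvict(cacheNames = "Books", allEntries = true)
void refreshBooks() {
}
```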
and when I added data directly into database
This is typically achieved by the underlying cache store. For example, in GemFire, you can use a CacheLoader to "read-through" (to your underlying database perhaps) on "cache misses". See GemFire's user guide documentation on "How Data Loaders Work" as an example and more details.
So, back to our example: if the "Book (Store)" database was updated independently of the application (which uses Spring's caching annotation support and infrastructure), then a developer just needs to define a strategy for cache misses. In GemFire that could be a CacheLoader, and when...
bookService.findBook(<some isbn>);
is called resulting in a "cache miss", GemFire's CacheLoader will kick in, load the cache with that book and Spring will see it as a "cache hit".
Of course, our implementation of bookService.find(..) went to the underlying database anyway, but it only retrieves a "single" book. A loader could be implemented to populate an entire set (or range) of books based on some criteria (such as popularity), where the application service expects those particular set of books to be searched for by potential customers, using the application, first, and pre-cache them.
So, while Spring's Cache annotations typically work per entry, a cache store specific strategy can be used to prefetch and, in-a-way, "refresh" the cache, lazily, on the first cache miss.
In summary, while the former can be handled by Spring, the "refresh" per se is typically handled by the caching provider (e.g. GemFire).
Hopefully this gives you some ideas.
Cheers,
John

Java method is successfully executed inside a Java agent, but fails if executed in a Java class in the database's code

In my XPages application, I use the javax.mail library to read mail messages from IMAP accounts. The class I use to get and save messages works perfectly fine if I use it inside a Java agent. However, if I put the exact same class into "Code/Java" in my XPages project, the methods throw javax.mail.NoSuchProviderException: No provider for imaps when I try to get the session store:
Properties props = new Properties();
props.setProperty("mail.imaps.socketFactory.class", "AlwaysTrustSSLContextFactory");
props.setProperty("mail.imaps.socketFactory.port", "993");
props.setProperty("mail.imap.ssl.enable", "true");
props.setProperty("mail.imaps.ssl.trust", "*");
URLName url = new URLName("imaps", server, 993, "", username, password);
Session session = Session.getInstance(props, null);
Store store = session.getStore(url); //THE ERROR OCCURS HERE
store.connect();
The javax.mail library that I added to the project's build path is exactly the same that I use in the Java agent.
Some posts I found for the mentioned type of exception suggest that it might be caused by multiple versions of javax.mail being included in the build path. However, this does not seem to be the case because removing javax.mail from the build path causes the class to not be built.
Does anybody know what's the problem here?
Please check the versions and providers of javax.mail used. You can do this by adding
props.setProperty("mail.debug", "true" );
to your properties.
On the console (the client's Java Console or the server console) you can see the result, something like this:
DEBUG: JavaMail version 1.4ea
and
DEBUG: getProvider() returning javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc]
Alternatively, you can get the list of available providers programmatically (when you have no console access):
Session session = Session.getInstance(props, null);
Provider[] providers = session.getProviders();
String tmpStr = "";
for (Provider p : providers) {
    if (!"".equals(tmpStr)) {
        tmpStr = tmpStr + ",";
    }
    tmpStr = tmpStr + p.getProtocol();
}
[You will see that the list of providers does not contain imaps on the server (8.5.3)]
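If imaps is indeed missing on the server, one possible workaround (hedged; the class name below assumes the com.sun.mail implementation that ships inside the javax.mail jar) is to register the store provider with the session explicitly before asking for the store:

```java
// Explicitly register the IMAP-over-SSL store provider with the session
Session session = Session.getInstance(props, null);
session.setProvider(new Provider(
    Provider.Type.STORE,               // a message store, not a transport
    "imaps",                           // protocol name used in the URLName
    "com.sun.mail.imap.IMAPSSLStore",  // implementing class in the JavaMail jar
    "Oracle",                          // vendor (informational)
    null));                            // version (informational)
Store store = session.getStore("imaps");
```

This sidesteps the provider lookup that fails when the javamail.providers resource is not visible to the server's classloader.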
Was going to comment, but rep isn't high enough yet....
Is the java agent in the same db?
If it is, can you cut it and try the xpage when the agent isn't there? Probably a long shot, but may be worth checking that the agent isn't interfering in any way.
You may want to check out my article with a code sample showing how to read IMAP from Notes. I didn't run that code as an agent, but from the command line. Chances are that it will work better for you. Main difference (possibly?): I used the Google enhanced IMAP classes.
Check it out and let us know if that worked for you!

WCF binding -wsHttpBinding uses a session?

In a previous thread one of the respondents said that using wsHttpBinding used a session. Since I'm working in a clustered IIS environment, should I disable this? As far as I know, sessions don't work in a cluster.
If I need to disable this, how do I go about it?
That probably was me :-) By default, your service and the binding used will determine if a session comes into play or not.
If you don't do anything, and use wsHttpBinding, you'll have a session. If you want to avoid that, you should:
switch to another protocol/binding where appropriate
decorate your service contracts with a SessionMode attribute
If you want to stop a service from ever using a session, you can do so like this:
[ServiceContract(Namespace="....", SessionMode=SessionMode.NotAllowed)]
interface IYourService
{
    ....
}
and you can decorate your service class with the appropriate instance context mode attributes:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
class YourService : IYourService
{
    ....
}
With this, you should be pretty much on the safe side and not get any sessions whatsoever.
Marc
