Solr experts,
At the moment I am using a custom proxy script to accept only requests with the right keypass parameter. Is it possible to configure Solr for such a use case, so that I do not need this proxy script?
For example: localhost/proxy/search?keypass=asdaefva&query=SEARCHPARAMETERS
best regards
Tim
If you have a recent enough Solr version, you can use Solr's built-in support for Authentication and Authorization. This also allows you to limit the collections and operations that a given key (i.e. user:pass) can access.
These are configured in a file named security.json, which is stored either in ZooKeeper (for SolrCloud) or locally on disk (support for using a local file in standalone mode was added later than the original support for using it in cluster mode).
{
  "authentication":{
    "class":"solr.BasicAuthPlugin",
    "credentials":{
      "solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[
      {
        "name":"security-edit",
        "role":"admin"
      }
    ],
    "user-role":{
      "solr":"admin"
    }
  }
}
When running Solr in standalone mode, you need to create the security.json file and put it in the $SOLR_HOME directory for your installation (this is the same place you have located solr.xml and is usually server/solr).
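Once authentication is enabled, clients send standard HTTP Basic credentials instead of a keypass parameter. As a rough illustration (plain HTTP, not a Solr-specific API), here is a minimal TypeScript sketch where the host, core name and credentials are assumptions and Node 18+'s global fetch is used:
// Query a (hypothetical) "mycore" core with HTTP Basic auth.
const credentials = Buffer.from('solr:yourPassword').toString('base64');

async function search(q: string): Promise<unknown> {
  const res = await fetch(
    `http://localhost:8983/solr/mycore/select?q=${encodeURIComponent(q)}`,
    {headers: {Authorization: `Basic ${credentials}`}},
  );
  if (!res.ok) {
    throw new Error(`Solr returned ${res.status}`);
  }
  return res.json();
}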
We are currently upgrading from 3.x to 4.x. We are using the programmaticBuilder for the DriverConfigLoader. Below is the code for the same.
DriverConfigLoader driverConfigLoader = DriverConfigLoader.programmaticBuilder()
    .withDuration(DefaultDriverOption.HEARTBEAT_INTERVAL, Duration.ofSeconds(60))
    .withString(DefaultDriverOption.REQUEST_CONSISTENCY, ConsistencyLevel.LOCAL_QUORUM.name())
    .withString(DefaultDriverOption.RETRY_POLICY_CLASS, "DefaultRetryPolicy")
    .withString(DefaultDriverOption.RECONNECTION_POLICY_CLASS, "ConstantReconnectionPolicy")
    .withDuration(DefaultDriverOption.RECONNECTION_BASE_DELAY, Duration.ofSeconds(5))
    .withString(DefaultDriverOption.LOAD_BALANCING_POLICY_CLASS, "DcInferringLoadBalancingPolicy")
    .build();
I wanted to check how to verify the correct setting of the ConsistencyLevel when a write/read happens. Is there a debug log or print mechanism available for this purpose?
Your question suggests that you don't trust that the configured consistency level is being honoured by the driver, so you're looking for proof that it is. To me this doesn't make sense. Perhaps you ran into another problem related to request consistency, and you should post information about that instead.
In any case, the DriverConfigLoader is provided for convenience, but we discourage its use because it means you are hard-coding configuration within your app, which is bad practice. If you need to make a change, you are forced to recompile your app because the configuration is hard-coded. Only use the programmatic loader if you have a very specific reason.
The recommended method for configuring the driver options is to use an application configuration file (application.conf). The advantages include:
driver options are configured in a central location,
hot-reload support, and
changes do not require recompiling the app.
To set the basic request consistency to LOCAL_QUORUM:
datastax-java-driver {
  basic {
    request {
      consistency = LOCAL_QUORUM
    }
  }
}
For details, see Configuring the Java driver. Cheers!
For DataStax Java Driver 4.x you can do something like this:
CqlSession session = CqlSession.builder().withConfigLoader(driverConfigLoader).build();
DriverConfig config = session.getContext().getConfig();
config.getProfiles().forEach(
    (name, profile) -> {
      System.out.println("Profile: " + name);
      profile.entrySet().forEach(System.out::println);
      System.out.println();
    });
This will print the values for every defined option in every defined profile. It won't print undefined options though.
I have just started my first LoopBack project and chosen LoopBack 4 for the application. It's purely a server application which will interact with databases (Redis and MongoDB) and call external API services as part of a microservice architecture.
Now, I have three datasources in my application: MongoDB, Redis, and a REST-based datasource to call external services. I am facing two problems going forward.
1. Environment-specific configuration of datasources: I need to maintain configuration for all three datasources according to the NODE_ENV environment variable. For LB3 I found this solution:
https://loopback.io/doc/en/lb3/Environment-specific-configuration.html#data-source-configuration
which does not work in LB4. One solution is to add configuration files named mongodb.staging.json and mongodb.production.json (and the same for the Redis and REST datasources) in the src/datasources directory, then load the right config according to the NODE_ENV variable with an if condition and pass it to the constructor of the datasource. It works, but it does not seem nice; this feels like something the framework should handle rather than the application.
Can somebody suggest an equivalent solution for LB4?
2. Calling external APIs via a datasource: In LB4, to call external services it is recommended to have a separate REST-based datasource and a service, and to call it via a controller. In the REST datasource config, one has to define a template for all the API calls that will be made to the external service: https://loopback.io/doc/en/lb4/REST-connector.html#defining-a-custom-method-using-a-template.
As my application calls the external service heavily, with a relatively large number of request parameters, it becomes really messy to declare each API call with its request params and to maintain this in the datasource config, which will also be environment-specific.
Can somebody suggest a more robust and cleaner alternative for the above?
Thanks in advance!!
Using environment variables in datasource configs
The datasource config is simply a JSON file that's imported into *.datasource.ts. Hence, you can replace that JSON file with a TypeScript file and import it accordingly. LoopBack 4 does not provide any custom variable substitution mechanism. Instead, it is recommended to use process.env.
Recent CLI versions replace the JSON config in favour of a single TypeScript file:
import {inject} from '@loopback/core';
import {juggler} from '@loopback/repository';

const config = {
  name: 'db',
  connector: 'memory',
};

export class DbDataSource extends juggler.DataSource {
  static dataSourceName = 'db';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.db', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
The dependency injection in the constructor allows you to override the config programmatically via the IoC container of the application.
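For environment-specific values, a common approach is simply to read them from process.env inside that config object and keep the generated DataSource class unchanged. A minimal sketch, assuming a MongoDB connector and a hypothetical MONGODB_URL variable:
// Hypothetical config for mongodb.datasource.ts: staging and production
// then differ only in the environment variables they set, not in code.
const config = {
  name: 'mongodb',
  connector: 'mongodb',
  url: process.env.MONGODB_URL ?? 'mongodb://localhost:27017/dev-db',
};
Deployments then set MONGODB_URL (or whatever variables you choose) per environment, which replaces the mongodb.staging.json / mongodb.production.json switching logic.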
Further reading
https://loopback.io/doc/en/lb4/DataSources.html
Calling external APIs without REST connector
The REST connector enforces a well-defined interface for querying external APIs so as to be able to do validation before sending out the request.
If this is not favourable, it is possible to create a new Service as a wrapper around the HTTP queries. From there, you can expose your own functions to handle requests to an external API. As Services do not need to follow a rigid structure, it is possible to customize them to your use case.
It is also possible to create a new request directly inside the controller using either built-in or external libraries.
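As a rough illustration of the wrapper approach, here is a minimal sketch of a hand-rolled service class; the class name, endpoint, environment variable, and use of Node 18+'s global fetch are all assumptions, not LoopBack APIs:
// Hypothetical wrapper around the external API. Bind it on the application
// (e.g. via app.service(ExternalApiService)) and inject it into controllers.
export class ExternalApiService {
  private baseUrl = process.env.EXTERNAL_API_URL ?? 'https://api.example.com';

  // Forwards an arbitrary set of query parameters without declaring a
  // connector template for every operation.
  async search(params: Record<string, string>): Promise<unknown> {
    const query = new URLSearchParams(params).toString();
    const res = await fetch(`${this.baseUrl}/search?${query}`);
    if (!res.ok) {
      throw new Error(`External API returned ${res.status}`);
    }
    return res.json();
  }
}
Because the request is built in plain code, adding or changing parameters only touches this class rather than an environment-specific datasource template.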
Overall, there isn't a 100% right or wrong way of doing certain things in LoopBack 4, which is why the framework provides numerous ways to tackle the same issue.
I'm behind a firewall and Lazybones can't reach its repository without a proxy.
I've searched the source and can't seem to find any relevant reference to a proxy.
Support was officially added in version 0.8.1 of Lazybones, albeit via a general mechanism to add arbitrary system properties to the application in its configuration file, ~/.lazybones/config.groovy.
You can read about the details in the project README, but in essence, simply add the following to your config.groovy file:
systemProp {
    http {
        proxyHost = "localhost"
        proxyPort = 8181
    }
    https {
        proxyHost = "localhost"
        proxyPort = 8181
    }
}
You can use the systemProp. prefix to add any system properties to Lazybones, similar to the way it works in Gradle.
Is that what you're looking for? Basically, you need to add some properties to the gradle.properties file.
I am using Cygwin on Windows and I have modified the last line of
~/.gvm/lazybones/current/bin/lazybones
to say
exec "$JAVACMD" "${JVM_OPTS[#]}" -classpath "$CLASSPATH" "-Dhttp.proxyHost=127.0.0.1" "-Dhttp.proxyPort=8888" "-Dhttp.nonProxyHosts=localhost|127.0.0.1" uk.co.cacoethes.lazybones.LazybonesMain "$#"
Please note the quotes around the options. It works very well with my local Fiddler installation.
I have found no better way to enable proxy support due to the way the script is using eval. Maybe a more experienced shell script programmer can come up with a more elegant solution.
I was able to get out through the proxy by setting the JAVA_TOOL_OPTIONS environment variable:
Picked up JAVA_TOOL_OPTIONS: -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080
-Dhttp.nonProxyHosts="lmig.com" -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080
Unfortunately my environment requires authentication, so I couldn't provide the complete proxy configuration this way. I first ran "OWASP Zed Attack Proxy (ZAP)", which allowed me to run a proxy on my own machine (at port 8080) that then provided the complete authentication required.
This then allowed the complete "lazybones list" command to run, which retrieved the contents of the repositories.
Unfortunately I was not able to create an application from those templates because Bintray required a login (though an anonymous login would do) and I couldn't seem to get an additional level of authentication (I received "Unauthorized" from Bintray).
In WSO2 Enterprise Store 1.0.0 there is a lack of security in some respects.
For example, several public files contain sensitive data such as the location and cleartext password of keystores:
/store/config/publisher.json
/publisher/config/publisher.json
I'm still trying to figure out why these data are needed on the client side...
Is there any configuration setting to solve this issue?
You can solve this issue by adding the following URL mapping to the jaggery.conf inside both the publisher and store apps.
{
  "url": "/config/*",
  "path": "/"
}
I'm writing a node CRUD app that requires a few CouchDB views (I'm using express and cradle).
I've got the node app itself controlled with git, but my DB views are currently uncontrolled.
What's the recommended way to put these under source control? I don't want to put the entire database (including data) under source control.
Take a look at couchapp, http://couchapp.org/. You can use that to push your version-controlled design docs to a database.
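If you prefer not to add a tool, the same idea also works with a small script kept in the repository next to the views. A minimal TypeScript sketch, where the database URL, design doc name and view are assumptions and Node 18+'s global fetch is used:
// Hypothetical design document kept under source control.
const designDoc = {
  _id: '_design/app',
  views: {
    by_type: {
      map: 'function (doc) { if (doc.type) { emit(doc.type, null); } }',
    },
  },
};

// Creates or updates the design doc; data documents are left untouched.
async function pushDesignDoc(dbUrl: string): Promise<void> {
  const current = await fetch(`${dbUrl}/${designDoc._id}`);
  const body: Record<string, unknown> = {...designDoc};
  if (current.ok) {
    // Reuse the existing revision so the PUT updates instead of conflicting.
    const existing = (await current.json()) as {_rev: string};
    body._rev = existing._rev;
  }
  const res = await fetch(`${dbUrl}/${designDoc._id}`, {
    method: 'PUT',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(body),
  });
  if (!res.ok) {
    throw new Error(`CouchDB returned ${res.status}`);
  }
}

// Example: pushDesignDoc('http://localhost:5984/mydb');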
Maybe useful: CouchApp can also push regular documents into the database, for example configuration or demo doc(s). For that, put a file in JSON format in the '_docs' folder (at the same level as 'lists', 'shows', etc.).
File: 'any-configure.json'
{
  "_id": "any-configure",
  "fieldA": "...",
  "fieldB": "...",
  ...
}
As pointed out, using couchapp can make it easier to work with design documents.
I have implemented a similar approach in a Java project; there is an example here, along with the class that manages these documents.