Class org.apache.oozie.action.hadoop.SparkMain not found - apache-spark

Following are all the Oozie files I have been using to run the job. I have created the folder /test/jar on HDFS and put the workflow.xml and coordinator.xml files there.
Properties File
nameNode=hdfs://host:8020
jobTracker=host:8050
queueName=default
oozie.use.system.lib.path=true
oozie.coord.application.path=${nameNode}/test/jar/coordinator.xml
oozie.action.sharelib.for.spark=spark2
start=2019-05-22T07:37Z
end=2019-05-22T07:40Z
freq=*/1 * * * *
zone=UTC
user.name=oozie
oozie.action.sharelib.for.spark.exclusion=oozie/jackson
#oozie.libpath=${nameNode}/user/oozie/share/lib
Coordinator File
<coordinator-app xmlns = "uri:oozie:coordinator:0.5" name = "test" frequency = "${freq}" start = "${start}" end = "${end}" timezone = "${zone}">
<controls>
<timeout>1</timeout>
</controls>
<action>
<workflow>
<app-path>${nameNode}/test/jar/workflow.xml</app-path>
</workflow>
</action>
</coordinator-app>
Workflow file
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.5">
<start to="test" />
<action name="test">
<spark xmlns="uri:oozie:spark-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<master>yarn</master>
<mode>cluster</mode>
<name>Spark Example</name>
<class>com.spark.excel.mysql.executor.Executor</class>
<jar>${nameNode}/test/jar/com.spark.excel.mysql-0.1.jar</jar>
<spark-opts>--executor-memory 2G --num-executors 2</spark-opts>
</spark>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Workflow failed, error message [${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end" />
</workflow-app>
I have also set up the sharelib path. Oozie shows spark2 in the sharelib list, and I have added the oozie-sharelib-spark.jar file to spark2 as well. The Oozie job submits and runs, but when it tries to execute the Spark job it throws the error above.

I had the same error. In my case I had to add the following to the properties file:
oozie.use.system.libpath=true
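Note the property name: it is oozie.use.system.libpath (no dot between lib and path), whereas the question's properties file uses oozie.use.system.lib.path. As a minimal sketch, the relevant lines of the properties file would then be (assuming the spark2 sharelib directory actually exists on HDFS):
oozie.use.system.libpath=true
oozie.action.sharelib.for.spark=spark2
oozie.coord.application.path=${nameNode}/test/jar/coordinator.xml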

Related

Unable to connect Hazelcast IMDG locally from .NET Hazelcast client

I have downloaded Hazelcast locally, but when I try to connect to it from the .NET client the connection does not get established.
The exception below is shown in the Hazelcast console.
com.hazelcast.internal.nio.tcp.TcpIpConnection
WARNING: [10.253.200.102]:5701 [dev] [4.0.1] Connection[id=7, /10.253.200.102:5701->/10.253.200.102:56020, qualifier=null, endpoint=null, alive=false, connectionType=NONE] closed. Reason: Exception in Connection[id=7, /10.253.200.102:5701->/10.253.200.102:56020, qualifier=null, endpoint=null, alive=true, connectionType=NONE], thread=hz.epic_northcutt.IO.thread-in-0
java.lang.IllegalStateException: Unknown protocol: CB2
at com.hazelcast.internal.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:116)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:382)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:367)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:293)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:248)
Default Hazelcast config file:
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
<cluster-name>dev</cluster-name>
<network>
<port auto-increment="true" port-count="100">5701</port>
<outbound-ports>
<!--
Allowed port range when connecting to other nodes.
0 or * means use system provided port.
-->
<ports>0</ports>
</outbound-ports>
<join>
<multicast enabled="true">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="false">
<interface>127.0.0.1</interface>
<member-list>
<member>127.0.0.1</member>
</member-list>
</tcp-ip>
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<!--optional, default is us-east-1 -->
<region>us-west-1</region>
<!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
<host-header>ec2.amazonaws.com</host-header>
<!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
<gcp enabled="false">
<zones>us-east1-b,us-east1-c</zones>
</gcp>
<azure enabled="false">
<client-id>CLIENT_ID</client-id>
<client-secret>CLIENT_SECRET</client-secret>
<tenant-id>TENANT_ID</tenant-id>
<subscription-id>SUB_ID</subscription-id>
<cluster-id>HZLCAST001</cluster-id>
<group-name>RESOURCE-GROUP-NAME</group-name>
</azure>
<kubernetes enabled="false">
<namespace>MY-KUBERNETES-NAMESPACE</namespace>
<service-name>MY-SERVICE-NAME</service-name>
<service-label-name>MY-SERVICE-LABEL-NAME</service-label-name>
<service-label-value>MY-SERVICE-LABEL-VALUE</service-label-value>
</kubernetes>
<eureka enabled="false">
<self-registration>true</self-registration>
<namespace>hazelcast</namespace>
</eureka>
<discovery-strategies>
</discovery-strategies>
</join>
<interfaces enabled="false">
<interface>10.10.1.*</interface>
</interfaces>
<ssl enabled="false"/>
<socket-interceptor enabled="false"/>
<symmetric-encryption enabled="false">
<!--
encryption algorithm such as
DES/ECB/PKCS5Padding,
PBEWithMD5AndDES,
AES/CBC/PKCS5Padding,
Blowfish,
DESede
-->
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
<failure-detector>
<icmp enabled="false"/>
</failure-detector>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<security>
<client-block-unmapped-actions>true</client-block-unmapped-actions>
</security>
<queue name="default">
<!--
Maximum size of the queue. When a JVM's local queue size reaches the maximum,
all put/offer operations will get blocked until the queue size
of the JVM goes down below the maximum.
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
<max-size>0</max-size>
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<!--
Number of async backups. 0 means no backup.
-->
<async-backup-count>0</async-backup-count>
<empty-queue-ttl>-1</empty-queue-ttl>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</queue>
<map name="default">
<!--
Data type that will be used for storing recordMap.
Possible values:
BINARY (default): keys and values will be stored as binary data
OBJECT : values will be stored in their object forms
NATIVE : values will be stored in non-heap region of JVM
-->
<in-memory-format>BINARY</in-memory-format>
<!--
Metadata creation policy for this map. Hazelcast may process objects of supported types ahead of time to
create additional metadata about them. This metadata then is used to make querying and indexing faster.
Metadata creation may decrease put throughput.
Valid values are:
CREATE_ON_UPDATE (default): Objects of supported types are pre-processed when they are created and updated.
OFF: No metadata is created.
-->
<metadata-policy>CREATE_ON_UPDATE</metadata-policy>
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<!--
Number of async backups. 0 means no backup.
-->
<async-backup-count>0</async-backup-count>
<!--
Maximum number of seconds for each entry to stay in the map. Entries that are
older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
will get automatically evicted from the map.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0
-->
<time-to-live-seconds>0</time-to-live-seconds>
<!--
Maximum number of seconds for each entry to stay idle in the map. Entries that are
idle(not touched) for more than <max-idle-seconds> will get
automatically evicted from the map. Entry is touched if get, put or containsKey is called.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
-->
<max-idle-seconds>0</max-idle-seconds>
<eviction eviction-policy="NONE" max-size-policy="PER_NODE" size="0"/>
<!--
While recovering from split-brain (network partitioning),
map entries in the small cluster will merge into the bigger cluster
based on the policy set here. When an entry merge into the
cluster, there might an existing entry with the same key already.
Values of these entries might be different for that same key.
Which value should be set for the key? Conflict is resolved by
the policy set here. Default policy is PutIfAbsentMapMergePolicy
There are built-in merge policies such as
com.hazelcast.spi.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
com.hazelcast.spi.merge.PutIfAbsentMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
com.hazelcast.spi.merge.HigherHitsMergePolicy ; entry with the higher hits wins.
com.hazelcast.spi.merge.LatestUpdateMergePolicy ; entry with the latest update wins.
-->
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
<!--
Control caching of de-serialized values. Caching makes query evaluation faster, but it cost memory.
Possible Values:
NEVER: Never cache deserialized object
INDEX-ONLY: Caches values only when they are inserted into an index.
ALWAYS: Always cache deserialized values.
-->
<cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>
</map>
<multimap name="default">
<backup-count>1</backup-count>
<value-collection-type>SET</value-collection-type>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</multimap>
<replicatedmap name="default">
<in-memory-format>OBJECT</in-memory-format>
<async-fillup>true</async-fillup>
<statistics-enabled>true</statistics-enabled>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</replicatedmap>
<list name="default">
<backup-count>1</backup-count>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</list>
<set name="default">
<backup-count>1</backup-count>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</set>
<reliable-topic name="default">
<read-batch-size>10</read-batch-size>
<topic-overload-policy>BLOCK</topic-overload-policy>
<statistics-enabled>true</statistics-enabled>
</reliable-topic>
<ringbuffer name="default">
<capacity>10000</capacity>
<backup-count>1</backup-count>
<async-backup-count>0</async-backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<in-memory-format>BINARY</in-memory-format>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</ringbuffer>
<flake-id-generator name="default">
<prefetch-count>100</prefetch-count>
<prefetch-validity-millis>600000</prefetch-validity-millis>
<epoch-start>1514764800000</epoch-start>
<node-id-offset>0</node-id-offset>
<bits-sequence>6</bits-sequence>
<bits-node-id>16</bits-node-id>
<allowed-future-millis>15000</allowed-future-millis>
<statistics-enabled>true</statistics-enabled>
</flake-id-generator>
<serialization>
<portable-version>0</portable-version>
</serialization>
<lite-member enabled="false"/>
<cardinality-estimator name="default">
<backup-count>1</backup-count>
<async-backup-count>0</async-backup-count>
<merge-policy batch-size="100">HyperLogLogMergePolicy</merge-policy>
</cardinality-estimator>
<scheduled-executor-service name="default">
<capacity>100</capacity>
<durability>1</durability>
<pool-size>16</pool-size>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</scheduled-executor-service>
<crdt-replication>
<replication-period-millis>1000</replication-period-millis>
<max-concurrent-replication-targets>1</max-concurrent-replication-targets>
</crdt-replication>
<pn-counter name="default">
<replica-count>2147483647</replica-count>
<statistics-enabled>true</statistics-enabled>
</pn-counter>
<cp-subsystem>
<cp-member-count>0</cp-member-count>
<group-size>0</group-size>
<session-time-to-live-seconds>300</session-time-to-live-seconds>
<session-heartbeat-interval-seconds>5</session-heartbeat-interval-seconds>
<missing-cp-member-auto-removal-seconds>14400</missing-cp-member-auto-removal-seconds>
<fail-on-indeterminate-operation-state>false</fail-on-indeterminate-operation-state>
<raft-algorithm>
<leader-election-timeout-in-millis>2000</leader-election-timeout-in-millis>
<leader-heartbeat-period-in-millis>5000</leader-heartbeat-period-in-millis>
<max-missed-leader-heartbeat-count>5</max-missed-leader-heartbeat-count>
<append-request-max-entry-count>100</append-request-max-entry-count>
<commit-index-advance-count-to-snapshot>10000</commit-index-advance-count-to-snapshot>
<uncommitted-entry-count-to-reject-new-appends>100</uncommitted-entry-count-to-reject-new-appends>
<append-request-backoff-timeout-in-millis>100</append-request-backoff-timeout-in-millis>
</raft-algorithm>
</cp-subsystem>
<metrics enabled="true">
<management-center enabled="true">
<retention-seconds>5</retention-seconds>
</management-center>
<jmx enabled="true"/>
<collection-frequency-seconds>5</collection-frequency-seconds>
</metrics>
</hazelcast>
Client code:
using Hazelcast.Client;
using Hazelcast.Config;
using Hazelcast.Core;
using System;
namespace HazelCastClientApp1
{
public class HazelcastClientFactory
{
static IHazelcastInstance client;
public static IHazelcastInstance GetClient()
{
if (client == null)
{
InitializeClient();
}
return client;
}
private static void InitializeClient()
{
var cfg = new ClientConfig();
var hazelcastUrl = "127.0.0.1:5701";
cfg.GetNetworkConfig().AddAddress(hazelcastUrl);
client = HazelcastClient.NewHazelcastClient(cfg);
Console.WriteLine("Local address : {0}", client.GetLocalEndpoint().GetSocketAddress());
Console.ReadKey();
}
}
}
The [4.0.1] and Unknown protocol: CB2 mentions in the error message indicate that you are trying to connect a v3 client to a v4 server. The v3 client does not support v4 servers. Please ensure that you are using a compatible client and server.
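For example, if the project references the 3.x Hazelcast .NET client, one way to align versions is to move to the 4.x client package and port the code to its API. A minimal sketch of the package reference, assuming an SDK-style .NET project that uses the Hazelcast.Net NuGet package (the exact version to pin is up to you):
<PackageReference Include="Hazelcast.Net" Version="4.*" />
Alternatively, keep the 3.x client and run a 3.x Hazelcast server.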

How to fix the error while saving the image in gallery?

I was trying to save an image to the gallery via an app.
This is the error I got:
java.lang.IllegalArgumentException: Couldn't find meta-data for a provider with authority com.example.android.fileprovider
I have a button which captures an image from the camera by calling the function below.
Code for saving the image:
val REQUEST_TAKE_PHOTO = 1
private fun dispatchTakePictureIntent() {
Intent(MediaStore.ACTION_IMAGE_CAPTURE).also { takePictureIntent ->
// Ensure that there's a camera activity to handle the intent
takePictureIntent.resolveActivity(packageManager)?.also {
// Create the File where the photo should go
val photoFile: File? = try {
createImageFile()
} catch (ex: IOException) {
// Error occurred while creating the File
Log.i("Exception","${ex.toString()}")
null
}
// Continue only if the File was successfully created
photoFile?.also {
val photoURI: Uri = FileProvider.getUriForFile(
this,
"com.example.android.fileprovider",
it
)
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, photoURI)
startActivityForResult(takePictureIntent, REQUEST_TAKE_PHOTO)
}
}
}
}
lateinit var currentPhotoPath: String
@Throws(IOException::class)
private fun createImageFile(): File {
// Create an image file name
val timeStamp: String = SimpleDateFormat("yyyyMMdd_HHmmss").format(Date())
val storageDir: File? = getExternalFilesDir(Environment.DIRECTORY_PICTURES)
return File.createTempFile(
"JPEG_${timeStamp}_", /* prefix */
".jpg", /* suffix */
storageDir /* directory */
).apply {
// Save a file: path for use with ACTION_VIEW intents
currentPhotoPath = absolutePath
}
}
Now, in the Google documentation I found out that we need to add some more lines to the AndroidManifest.xml file, so I added them, but there is still an error.
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.intentexperiment">
<uses-feature android:name="android.hardware.camera"
android:required="true" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<application
android:allowBackup="true"
android:icon="#mipmap/ic_launcher"
android:label="#string/app_name"
android:roundIcon="#mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="#style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<action android:name="android.intent.action.SET_ALARM" />
<category android:name="android.intent.category.DEFAULT" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
<provider
android:name="android.support.v4.content.FileProvider"
android:authorities="com.example.android.fileprovider"
android:exported="false"
android:grantUriPermissions="true">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="#xml/file_paths"></meta-data>
</provider>
</application>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
<external-files-path
android:name="image" android:path="Android/data/com.example.package.name/files/Pictures" />
</paths>
</manifest>
I don't understand where the error is.
EDIT
I have moved the
<paths>
...
</paths>
block into a separate XML file and changed the android:resource attribute accordingly,
but the error now is
java.lang.RuntimeException: Unable to get provider android.support.v4.content.FileProvider: java.lang.ClassNotFoundException: Didn't find class "android.support.v4.content.FileProvider" on path: DexPathList[[zip file "/data/app/com.example.intentexperiment-l0jZknwE-m3bgGza6RJAug==/base.apk"],nativeLibraryDirectories=[/data/app/com.example.intentexperiment-l0jZknwE-m3bgGza6RJAug==/lib/arm64, /system/lib64]]
EDIT 2
after changing the provider name to
android:name="androidx.core.content.FileProvider"
there is still an error:
java.lang.IllegalArgumentException: Failed to find configured root that contains /storage/emulated/0/Android/data/com.example.intentexperiment/files/Pictures/JPEG_20200425_194311_3114424866623330911.jpg
at androidx.core.content.FileProvider$SimplePathStrategy.getUriForFile(FileProvider.java:744)
at androidx.core.content.FileProvider.getUriForFile(FileProvider.java:418)
at com.example.intentexperiment.MainActivity.dispatchTakePictureIntent(MainActivity.kt:54)
at com.example.intentexperiment.MainActivity.access$dispatchTakePictureIntent(MainActivity.kt:22)
at com.example.intentexperiment.MainActivity$onCreate$1.onClick(MainActivity.kt:31)
at android.view.View.performClick(View.java:6600)
at android.view.View.performClickInternal(View.java:6577)
at android.view.View.access$3100(View.java:781)
at android.view.View$PerformClick.run(View.java:25912)
at android.os.Handler.handleCallback(Handler.java:873)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:193)
at android.app.ActivityThread.main(ActivityThread.java:6923)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:870)
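For reference, the last error means the file being shared is not under any root declared in the FileProvider paths file. Since createImageFile() writes into getExternalFilesDir(Environment.DIRECTORY_PICTURES), a hedged sketch of a matching res/xml/file_paths.xml is shown below; <external-files-path> already points at the app's external files root, so the path attribute is given relative to that root rather than as the full Android/data/... path used above:
<paths xmlns:android="http://schemas.android.com/apk/res/android">
    <external-files-path name="image" path="Pictures" />
</paths>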

Count operation not working on aggregated IgniteDataFrame

I am working with Apache Spark and Apache Ignite. I have a Spark dataset which I wrote to Ignite using the following code:
dataset.write()
.mode(SaveMode.Overwrite)
.format(FORMAT_IGNITE())
.option(OPTION_CONFIG_FILE(), "ignite-server-config.xml")
.option(OPTION_TABLE(), "CUSTOM_VALUES")
.option(OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "ID")
.save();
And I am reading it again to perform a group-by operation, which will be pushed down to Ignite.
Dataset igniteDataset = sparkSession.read()
.format(FORMAT_IGNITE())
.option(OPTION_CONFIG_FILE(), "ignite-server-config.xml")
.option(OPTION_TABLE(), "CUSTOM_VALUES")
.load();
RelationalGroupedDataset idGroupedData = igniteDataset.groupBy(customized_id);
Dataset<Row> result = idGroupedData.agg(count(id).as("count_id"),
count(fid).as("count_custom_field_id"),
count(type).as("count_customized_type"),
count(val).as("count_value"), count(customized_id).as("groupCount"));
Now, I want to get the number of rows returned by the group-by action, so I am calling count() on the dataset as result.count();
When I do this, I get the following exception:
Caused by: org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "SELECT COUNT(1) AS COUNT FROM (SELECT FROM CUSTOM_VALUES GROUP[*] BY CUSTOMIZED_ID) TABLE1 "; expected "., (, USE, AS, RIGHT, LEFT, FULL, INNER, JOIN, CROSS, NATURAL, ,, SELECT"; SQL statement:
SELECT COUNT(1) AS count FROM (SELECT FROM CUSTOM_VALUES GROUP BY CUSTOMIZED_ID) table1 [42001-197]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
at org.h2.message.DbException.getSyntaxError(DbException.java:217)
Other calls such as show() and collectAsList().size() work.
What am I missing here?
I tested your example against the latest community version of GridGain, 8.7.5, which is the open-source edition of GridGain based on the Ignite 2.7.0 sources with a subset of additional fixes (https://www.gridgain.com/resources/download).
Here is the code:
import static org.apache.spark.sql.functions.count;

import org.apache.ignite.spark.IgniteDataFrameSettings;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.RelationalGroupedDataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class Main {
public static void main(String[] args) {
if (args.length < 1)
throw new IllegalArgumentException("You should set the path to client configuration file.");
String configPath = args[0];
SparkSession session = SparkSession.builder()
.enableHiveSupport()
.getOrCreate();
Dataset<Row> igniteDataset = session.read()
.format(IgniteDataFrameSettings.FORMAT_IGNITE()) //Data source
.option(IgniteDataFrameSettings.OPTION_TABLE(), "Person") //Table to read.
.option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath) //Ignite config.
.load();
RelationalGroupedDataset idGroupedData = igniteDataset.groupBy("CITY_ID");
Dataset<Row> result = idGroupedData.agg(count("id").as("count_id"),
count("city_id").as("count_city_id"),
count("name").as("count_name"),
count("age").as("count_age"),
count("company").as("count_company"));
result.show();
session.close();
}
}
Here are the Maven dependencies:
<dependencies>
<dependency>
<groupId>org.gridgain</groupId>
<artifactId>gridgain-core</artifactId>
<version>8.7.5</version>
</dependency>
<dependency>
<groupId>org.gridgain</groupId>
<artifactId>ignite-core</artifactId>
<version>8.7.5</version>
</dependency>
<dependency>
<groupId>org.gridgain</groupId>
<artifactId>ignite-spring</artifactId>
<version>8.7.5</version>
</dependency>
<dependency>
<groupId>org.gridgain</groupId>
<artifactId>ignite-indexing</artifactId>
<version>8.7.5</version>
</dependency>
<dependency>
<groupId>org.gridgain</groupId>
<artifactId>ignite-spark</artifactId>
<version>8.7.5</version>
</dependency>
</dependencies>
Here is the cache configuration:
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="Person"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="sqlSchema" value="PUBLIC"/>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="PersonKey"/>
<property name="valueType" value="PersonValue"/>
<property name="tableName" value="Person"/>
<property name="keyFields">
<list>
<value>id</value>
<value>city_id</value>
</list>
</property>
<property name="fields">
<map>
<entry key="id" value="java.lang.Integer"/>
<entry key="city_id" value="java.lang.Integer"/>
<entry key="name" value="java.lang.String"/>
<entry key="age" value="java.lang.Integer"/>
<entry key="company" value="java.lang.String"/>
</map>
</property>
<property name="aliases">
<map>
<entry key="id" value="id"/>
<entry key="city_id" value="city_id"/>
<entry key="name" value="name"/>
<entry key="age" value="age"/>
<entry key="company" value="company"/>
</map>
</property>
</bean>
</list>
</property>
</bean>
</list>
</property>
Using Spark 2.3.0, which is the only version supported by the ignite-spark dependency, I got the following result on my test data:
Data:
ID,CITY_ID,NAME,AGE,COMPANY,
4,1,Justin Bronte,23,bank,
3,1,Helen Richard,49,bank,
Result:
+-------+--------+-------------+----------+---------+-------------+
|CITY_ID|count_id|count_city_id|count_name|count_age|count_company|
+-------+--------+-------------+----------+---------+-------------+
| 1| 2| 2| 2| 2| 2|
+-------+--------+-------------+----------+---------+-------------+
Also, this code could be fully applied to Ignite 2.7.0.
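If upgrading is not an option and count() on the aggregated frame still fails, a minimal workaround sketch, based on the question's own observation that collectAsList() works (and assuming the number of groups is small enough to collect on the driver):
// Materialize the grouped result on the driver and count it there.
// collectAsList() is reported to work in the question, while count() is not.
long groupCount = result.collectAsList().size();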

log4j2 RollingFileAppender old file gets removed after 7 rollovers

I use the following log4j2 RollingFile appender in my webapp.
<Appenders>
<RollingFile name="logFile"
fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
<PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
<Policies>
<OnStartupTriggeringPolicy/>
</Policies>
</RollingFile>
</Appenders>
With filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i", when the log is rolled over, the old file gets renamed to a filename with an index number (specified with %i), so all old files should get renamed and preserved.
I roll the log over programmatically with the following code:
org.apache.logging.log4j.Logger logManagerLogger = LogManager.getLogger();
Map<String, org.apache.logging.log4j.core.Appender> appenders = ((org.apache.logging.log4j.core.Logger) logManagerLogger).getAppenders();
appenders.forEach((appenderName, appender) -> {
if (appender instanceof RollingFileAppender) {
LOGGER.info("Switching log for appender " + appenderName);
((RollingFileAppender) appender).getManager().rollover();
}
});
But after 7 rollovers, the existing file gets removed (not renamed according to the specified filePattern) and the log continues in a new file.
What could be the issue here?
Set DefaultRolloverStrategy (the default max is 7). In your config it would be:
<Appenders>
<RollingFile name="logFile"
fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
<PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
<Policies>
<OnStartupTriggeringPolicy/>
</Policies>
<DefaultRolloverStrategy max="100"/>
</RollingFile>
</Appenders>
Now it will keep up to 100 rolled-over log files.
If you want an unlimited number of rolled files: according to the Log4j2 documentation, from release 2.8 this can be done by setting the fileIndex attribute to nomax. For example:
<DefaultRolloverStrategy fileIndex="nomax" />
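Putting it together with the appender from the question, a sketch of the nomax variant (assuming log4j2 2.8 or later) would be:
<Appenders>
  <RollingFile name="logFile"
               fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
               filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
    <PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
    <Policies>
      <OnStartupTriggeringPolicy/>
    </Policies>
    <!-- Keep every rolled file; no index-based deletion -->
    <DefaultRolloverStrategy fileIndex="nomax"/>
  </RollingFile>
</Appenders>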

connection not getting established between hazelcast client and hazelcast server

The client and server configs for Hazelcast v3.6 are copied below. I can run the server (listening on 127.0.0.1:5706).
I get the following error on the Hazelcast client side:
[warn] c.h.c.c.n.ClientConnection - Connection [/127.0.0.1:5701] lost. Reason: java.lang.NullPointerException[null]
[warn] c.h.c.s.i.ClusterListenerSupport - Unable to get alive cluster connection, try in 2986 ms later, attempt 1 of 2.
hazelcast-client.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<properties>
<property name="hazelcast.client.shuffle.member.list">true</property>
<property name="hazelcast.client.heartbeat.timeout">60000</property>
<property name="hazelcast.client.heartbeat.interval">5000</property>
<property name="hazelcast.client.event.thread.count">5</property>
<property name="hazelcast.client.event.queue.capacity">1000000</property>
<property name="hazelcast.client.invocation.timeout.seconds">120</property>
</properties>
<network>
<cluster-members>
<address>127.0.0.1:5701</address>
<!-- <address>0.0.0.0</address> -->
</cluster-members>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-timeout>60000</connection-timeout>
<connection-attempt-period>3000</connection-attempt-period>
<connection-attempt-limit>2</connection-attempt-limit>
<socket-options>
<tcp-no-delay>false</tcp-no-delay>
<keep-alive>true</keep-alive>
<reuse-address>true</reuse-address>
<linger-seconds>3</linger-seconds>
<timeout>-1</timeout>
<buffer-size>32</buffer-size>
</socket-options>
<socket-interceptor enabled="false">
<class-name>com.hazelcast.examples.MySocketInterceptor</class-name>
<properties>
<property name="foo">bar</property>
</properties>
</socket-interceptor>
<ssl enabled="false">
<factory-class-name>com.hazelcast.examples.MySslFactory</factory-class-name>
</ssl>
<aws enabled="false" connection-timeout-seconds="11">
<inside-aws>true</inside-aws>
<access-key>TEST_ACCESS_KEY</access-key>
<secret-key>TEST_SECRET_KEY</secret-key>
<region>us-east-1</region>
<host-header>ec2.amazonaws.com</host-header>
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
</network>
<executor-pool-size>40</executor-pool-size> <!-- reduce the pool size after profiling -->
<security>
<credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
<listeners>
<!--<listener>com.hazelcast.examples.MembershipListener</listener>
<listener>com.hazelcast.examples.InstanceListener</listener>
<listener>com.hazelcast.examples.MigrationListener</listener>
-->
</listeners>
<!-- change to kryo -->
<!-- <serialization>
<portable-version>3</portable-version>
<use-native-byte-order>true</use-native-byte-order>
<byte-order>BIG_ENDIAN</byte-order>
<enable-compression>false</enable-compression>
<enable-shared-object>true</enable-shared-object>
<allow-unsafe>false</allow-unsafe>
<data-serializable-factories>
<data-serializable-factory factory-id="1">com.hazelcast.examples.DataSerializableFactory
</data-serializable-factory>
</data-serializable-factories>
<portable-factories>
<portable-factory factory-id="2">com.hazelcast.examples.PortableFactory</portable-factory>
</portable-factories>
<serializers>
<global-serializer>com.hazelcast.examples.GlobalSerializerFactory</global-serializer>
<serializer type-class="com.hazelcast.examples.DummyType"
class-name="com.hazelcast.examples.SerializerFactory"/>
</serializers>
<check-class-def-errors>true</check-class-def-errors>
</serialization>
-->
<native-memory enabled="false" allocator-type="POOLED">
<size unit="MEGABYTES" value="128" />
<min-block-size>1</min-block-size>
<page-size>1</page-size>
<metadata-space-percentage>40.5</metadata-space-percentage>
</native-memory>
<!--
<proxy-factories>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ1" service="sampleService1"/>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ2" service="sampleService1"/>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ3" service="sampleService3"/>
</proxy-factories>
-->
<load-balancer type="random"/>
<!--
Beware that near-cache eviction configuration is different for NATIVE in-memory format.
Proper eviction configuration example for NATIVE in-memory format :
`<eviction max-size-policy="USED_NATIVE_MEMORY_SIZE" eviction-policy="LFU" size="60"/>`
-->
<!-- <near-cache name="default">
<max-size>2000</max-size>
<time-to-live-seconds>90</time-to-live-seconds>
<max-idle-seconds>100</max-idle-seconds>
<eviction-policy>LFU</eviction-policy>
<invalidate-on-change>true</invalidate-on-change>
<in-memory-format>OBJECT</in-memory-format>
<local-update-policy>INVALIDATE</local-update-policy>
</near-cache>
-->
<!--
<query-caches>
<query-cache name="query-cache-name" mapName="map-name">
<predicate type="class-name">com.hazelcast.examples.ExamplePredicate</predicate>
<entry-listeners>
<entry-listener include-value="true" local="false">com.hazelcast.examples.EntryListener</entry-listener>
</entry-listeners>
<include-value>true</include-value>
<batch-size>1</batch-size>
<buffer-size>16</buffer-size>
<delay-seconds>0</delay-seconds>
<in-memory-format>BINARY</in-memory-format>
<coalesce>false</coalesce>
<populate>true</populate>
<eviction eviction-policy="LRU" max-size-policy="ENTRY_COUNT" size="10000"/>
<indexes>
<index ordered="false">name</index>
</indexes>
</query-cache>
</query-caches>
-->
</hazelcast-client>
Hazelcast server
Here is the console message on the server, and the server config file:
console message
INFO: [127.0.0.1]:5701 [dev] [3.6] Established socket connection between /127.0.1.1:5701 and /127.0.0.1:47301
Mar 10, 2016 12:01:48 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [127.0.0.1]:5701 [dev] [3.6] Connection [/127.0.0.1:47301] lost. Reason: java.io.EOFException[Remote socket closed!]
hazelcast.xml (server)
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<network>
<port auto-increment="true" port-count="100">5701</port>
<outbound-ports>
<ports>0-5900</ports>
</outbound-ports>
<join>
<multicast enabled="false">
<!--<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>-->
</multicast>
<tcp-ip enabled="true">
<member>127.0.0.1</member>
</tcp-ip>
</join>
<interfaces enabled="true">
<interface>127.0.0.1</interface>
</interfaces>
<ssl enabled="false" />
<socket-interceptor enabled="false" />
<symmetric-encryption enabled="false">
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<map name="userMap">
<async-backup-count>1</async-backup-count>
<near-cache>
<max-size>5000</max-size>
<invalidate-on-change>true</invalidate-on-change>
</near-cache>
<map-store enabled="false">
<class-name></class-name>
<write-delay-seconds>0</write-delay-seconds>
</map-store>
</map>
</hazelcast>
Client code
ClientConfig clientConfig = new XmlClientConfigBuilder().build(); //the xml file is being loaded
HazelcastInstance hazelcastClient = HazelcastClient.newHazelcastClient(clientConfig);
I do not have a firewall running on my computer. Any thoughts on what I may have misconfigured?
Update:
I am able to connect when I specify the IP address programmatically, so I am assuming the issue is either with my client config or with how I am reading it:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("127.0.0.1");
HazelcastInstance hcastClient = HazelcastClient.newHazelcastClient(clientConfig);
The issue was caused by the following lines in the client config XML file:
<security>
<credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
Once this was commented out, the client was able to connect to the server. I will update once I gather more information regarding its usage.
Credentials-based security only seems to be available in the Enterprise edition, not the community one, but ideally it should either have let the connection be established or generated a meaningful error message.
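For reference, a minimal sketch of the disabled section in hazelcast-client.xml:
<!--
<security>
    <credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
-->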
