Spring State Machine UML stays in memory

I've been working with Spring State Machine for over a year now, trying different ways to implement it according to my requirements, and I've come across a serious issue when I use UML.
I use Papyrus to draw the UML, and I have many UML models stored in a certain location. The one I need is selected dynamically, and that part works. Now I have run into a serious problem. Below is the code that loads the UML.
Resource resource = new FileSystemResource(stmDir+"/"+model+".uml");
UmlStateMachineModelFactory umlBuilder = new UmlStateMachineModelFactory(resource);
umlBuilder.setStateMachineComponentResolver(resolveActionConfig(model));
StateMachineModelFactory<String, String> modelFactory = umlBuilder;
Builder<String, String> builder = StateMachineBuilder.builder();
builder.configureModel().withModel().factory(modelFactory);
builder.configureConfiguration().withConfiguration().beanFactory(new StaticListableBeanFactory());
stateMachine = builder.build();
As you can see, I use new UmlStateMachineModelFactory(resource).
The UmlStateMachineModelFactory class has the following code:
@Override
public StateMachineModel<String, String> build() {
    Model model = null;
    try {
        model = UmlUtils.getModel(getResourceUri(resolveResource()).getPath());
    } catch (IOException e) {
        throw new IllegalArgumentException("Cannot build build model from resource " + resource + " or location " + location, e);
    }
    UmlModelParser parser = new UmlModelParser(model, this);
    DataHolder dataHolder = parser.parseModel();
    // we don't set configurationData here, so assume null
    return new DefaultStateMachineModel<String, String>(null, dataHolder.getStatesData(), dataHolder.getTransitionsData());
}
Every time I create a UmlStateMachineModelFactory, it in turn creates a UmlModelParser.
That class has the following imports:
import org.eclipse.emf.common.util.EList;
import org.eclipse.emf.ecore.util.EcoreUtil;
import org.eclipse.uml2.uml.Activity;
import org.eclipse.uml2.uml.Constraint;
import org.eclipse.uml2.uml.Event;
import org.eclipse.uml2.uml.Model;
import org.eclipse.uml2.uml.OpaqueBehavior;
import org.eclipse.uml2.uml.OpaqueExpression;
import org.eclipse.uml2.uml.PackageableElement;
import org.eclipse.uml2.uml.Pseudostate;
import org.eclipse.uml2.uml.PseudostateKind;
import org.eclipse.uml2.uml.Region;
import org.eclipse.uml2.uml.Signal;
import org.eclipse.uml2.uml.SignalEvent;
import org.eclipse.uml2.uml.State;
import org.eclipse.uml2.uml.StateMachine;
import org.eclipse.uml2.uml.TimeEvent;
import org.eclipse.uml2.uml.Transition;
import org.eclipse.uml2.uml.Trigger;
import org.eclipse.uml2.uml.UMLPackage;
import org.eclipse.uml2.uml.Vertex;
These remain in memory, using up a large amount of heap that never gets collected by the garbage collector. This is causing a lot of trouble, as we are using this in a large-scale application and many instances are created every few minutes.
Please suggest a workaround.
EDIT: I managed to create a singleton wrapper for this, but the problem persists regardless. My colleague found out that the loaded resources are never unloaded, so every time I call builder.build(),
ResourceSet resourceSet = new ResourceSetImpl();
resourceSet.getPackageRegistry().put(UMLPackage.eNS_URI, UMLPackage.eINSTANCE);
resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put(UMLResource.FILE_EXTENSION, UMLResource.Factory.INSTANCE);
resourceSet.createResource(modelUri);
Resource resource = resourceSet.getResource(modelUri, true);
this is called. I wonder if this is causing the heap build-up. Please help.
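For reference, this is roughly what releasing EMF resources would look like if the ResourceSet were accessible. As far as I can tell, UmlStateMachineModelFactory does not expose it, so take this only as a sketch of the cleanup that would need to happen somewhere:
// Hypothetical cleanup after parsing; this is org.eclipse.emf.ecore.resource.Resource,
// not Spring's Resource from the earlier snippet.
for (org.eclipse.emf.ecore.resource.Resource emfResource : resourceSet.getResources()) {
    emfResource.unload();            // release the parsed UML content held by this resource
}
resourceSet.getResources().clear();  // drop the references so the GC can reclaim them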

I pushed some fixes per gh572 to master and 1.2.x. Hopefully those work for you; at least I was able to see garbage collection working better. I'm planning to create releases later this week.

Related

How to read Cassandra FQL logs in Java?

I have a bunch of Cassandra FQL logs with the "cq4" extension. I would like to read them in Java; is there a Java class that those log entries can be mapped into?
These are the logs I see.
I want to read them with the following code:
import net.openhft.chronicle.Chronicle;
import net.openhft.chronicle.ChronicleQueueBuilder;
import net.openhft.chronicle.ExcerptTailer;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        Chronicle chronicle = ChronicleQueueBuilder.indexed("/Users/pavelorekhov/Desktop/fql_logs").build();
        ExcerptTailer tailer = chronicle.createTailer();
        while (tailer.nextIndex()) {
            tailer.readInstance(/*class goes here*/);
        }
    }
}
I think from the code and screenshot you can understand what kind of class I need in order to read the log entries into objects. Does that class exist in some Cassandra Maven dependency?
You are using Chronicle 3.x, which is very old.
I suggest using Chronicle 5.20.123, which is the version Cassandra uses.
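If you pull that version in with Maven, the coordinates would look something like the following. The group and artifact ids are the standard Chronicle Queue ones; treat this as a sketch and double-check the exact version you need:
<dependency>
    <groupId>net.openhft</groupId>
    <artifactId>chronicle-queue</artifactId>
    <version>5.20.123</version>
</dependency>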
I would assume Cassandra has its own tool for reading the contents of these files; however, you can dump the raw messages with net.openhft.chronicle.queue.main.DumpMain.
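As a rough, untested sketch of invoking that dumper from Java: the class name comes from the answer above, but the assumption that it takes the queue directory as its first argument is mine, so check DumpMain's usage for your Chronicle version.
import net.openhft.chronicle.queue.main.DumpMain;

public class DumpFqlLogs {
    public static void main(String[] args) throws Exception {
        // Directory containing the .cq4 files (path taken from the question);
        // the argument handling here is an assumption.
        DumpMain.main(new String[]{"/Users/pavelorekhov/Desktop/fql_logs"});
    }
}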
I ended up cloning Cassandra's GitHub repo from here: https://github.com/apache/cassandra
In their code they have the FQLQueryIterator class, which you can use to read the logs, like so:
SingleChronicleQueue scq = SingleChronicleQueueBuilder.builder().path("/Users/pavelorekhov/Desktop/fql_logs").build();
ExcerptTailer excerptTailer = scq.createTailer();
FQLQueryIterator iterator = new FQLQueryIterator(excerptTailer, 1);
while (iterator.hasNext()) {
    FQLQuery fqlQuery = iterator.next(); // object that holds the log entry
    // do whatever you need to do with that log entry...
}

Groovy to validate if Node exists or not

I'm trying to validate in Groovy whether a node came from an external system: if the external system has a value, the node arrives with a value; if the system doesn't have a value, the node doesn't come in the payload at all.
Based on this, I need to set whether it is a NEW or UPDATE process for that record in an existing node.
Incoming XML is:
<urn:ExternalReqForApprovalImportRequest xmlns:urn="urn:Ariba:Buyer:vrealm_1" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xs="http://www.w3.org/2001/XMLSchema" partition="" variant="">
  <urn:ExternalReqForApprovalInput_Item>
    <urn:item>
      <urn:Name>TEST</urn:Name>
      <urn:Operation>XXXXXX</urn:Operation>
      <urn:HeaderExtrinsics>
        <Extrinsics>
          <Extrinsic name="PRRefID">THIS IS THE NODE THAT MAY OR MAY NOT COME</Extrinsic>
          <Extrinsic name="XXXXXXX">Value</Extrinsic>
          <Extrinsic name="AnotherField">TValue</Extrinsic>
        </Extrinsics>
      </urn:HeaderExtrinsics>
    </urn:item>
  </urn:ExternalReqForApprovalInput_Item>
</urn:ExternalReqForApprovalImportRequest>
I created this Groovy script:
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import java.lang.String;

def Message processData(Message message) {
    map = message.getProperties();
    value = map.get("PRRefID");
    Reader reader = message.getBody(Reader)
    def FG = new XmlParser().parse(reader)
    if (value == null) {
        FG.'urn:ExternalReqForApprovalInput_Item'.'urn:item'.'urn:HeaderExtrinsics'.Operation = "NEW"
    } else {
        FG.'urn:ExternalReqForApprovalInput_Item'.'urn:item'.'urn:HeaderExtrinsics'.Operation = "UPDATE"
    }
    message.setBody(FG);
    return message;
}
What I want to achieve is to validate whether an Extrinsic with name PRRefID comes in the XML: if it comes, I need to set the Operation to UPDATE; if it doesn't, I need to set NEW as the value.
I tried to map the XPath as a property (there is probably a cleaner way to read this directly from the XPath), but my issue right now is changing the value. Since it is an Extrinsic with a specific name, apparently what I have is not the right format for the assignment, so what should it be?
Thank you.
It just needs some syntax tweaks:
FG.'urn:ExternalReqForApprovalInput_Item'.'urn:item'.'urn:HeaderExtrinsics'.Extrinsics.Extrinsic[0].value = "NEW"
You have to specify the namespace wherever necessary.
Note that when you parse, FG becomes the root of the document, so don't specify urn:ExternalReqForApprovalImportRequest again.
I assumed you are trying to update the text node value.
Note also that Extrinsic will be an array of nodes, so you have to refer to it by index.
I finally solved it following your recommendation about the namespace!
import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import java.lang.String;
import groovy.util.XmlSlurper
import groovy.xml.XmlUtil

def Message processData(Message message) {
    map = message.getProperties();
    value = map.get("PRRefID");
    Reader reader = message.getBody(Reader)
    def root = new XmlParser().parse(reader)
    if (value == "") {
        root.'urn:ExternalReqForApprovalInput_Item'.'urn:item'.'urn:Operation'[0].value = "NEW"
    }
    message.setBody(XmlUtil.serialize(root))
    return message;
}
Thank you.

Haxe - Why can I not access a child's attribute without getting an error that the parent does not have the given attribute?

I've recently been getting into Haxe and have just started using HaxeFlixel to load a Tiled .TMX file.
I am creating a TiledMap object and passing it the TMX file path, then I want to iterate over the layers in that object to add them to the game scene. However, when I try to access .tileArray (which is a property of TiledTileLayer), I get the following error:
flixel.addons.editors.tiled.TiledLayer has no field tileArray
Here is the code:
package;

import flixel.FlxState;
import flixel.tile.FlxTilemap;
import flixel.addons.editors.tiled.TiledMap;
import openfl.Assets;

class PlayState extends FlxState
{
    private var _tiled_map:TiledMap;

    override public function create():Void
    {
        _tiled_map = new TiledMap("assets/data/Map1.tmx");
        for (layer in _tiled_map.layers) {
            var layerData:Array<Int> = layer.tileArray;
        }
        super.create();
    }

    override public function update(elapsed:Float):Void
    {
        super.update(elapsed);
    }
}
I've found the following example, http://coinflipstudios.com/devblog/?p=182 , which seems to work fine for people.
So I wanted to check whether the layer object was a TiledTileLayer, as it should be, or a TiledLayer, with the following:
trace(Type.typeof(layer));
Which sure enough yields:
PlayState.hx:24: TClass([class TiledTileLayer])
So if it is a TiledTileLayer which has the field tileArray why is it moaning?
I had a look at the source code (https://github.com/HaxeFlixel/flixel-addons/blob/dev/flixel/addons/editors/tiled/TiledMap.hx#L135) and TiledTileLayer inherits from TiledLayer. layers is an array of type TiledLayer, so I think this is why it is complaining. I can clearly see that the array is storing child objects of TiledLayer, but as soon as I access any props/methods of those children, it complains that the parent does not have that field. Very confusing!
To run I'm using this command: C:\HaxeToolkit\haxe\haxelib.exe run lime test flash -debug -Dfdb
Thank you!
So if it is a TiledTileLayer which has the field tileArray why is it moaning?
It may be a TiledTileLayer in this case, but that may not always be the case. layers is an Array<TiledLayer> after all, so it could be a TiledObjectLayer or a TiledImageLayer as well (which don't have a tileArray field). This can nicely be seen in the code you linked. The concrete type can only be known at runtime, but the error you get happens at compile-time.
If you know for sure there won't be any object or image layers, you can just cast it to a TiledTileLayer. However, just to be safe, it's good practice to check the type beforehand anyway:
for (layer in _tiled_map.layers) {
    if (Std.is(layer, TiledTileLayer)) {
        var tileLayer:TiledTileLayer = cast layer;
        var layerData:Array<Int> = tileLayer.tileArray;
    }
}
It works without this for the tutorial you linked because it was made for an older version of flixel-addons.

Custom update listener to set subtask's fix-version

I'm developing a custom listener which will update a subtask's fix version to the same value as its parent issue.
Currently we are using a post-function in the workflow to set the subtask's fix version according to the parent on subtask creation. However, this doesn't cover cases where the subtask already exists and the parent's fix version gets updated; the new value from the parent task is not propagated to the subtask.
I'm using ScriptRunner and I'm creating a 'Custom listener' for my specific project with the specified event 'Issue Updated'. I added the script as follows:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.config.SubTaskManager
import com.atlassian.jira.event.issue.AbstractIssueEventListener
import com.atlassian.jira.event.issue.IssueEvent
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.project.version.Version
import org.apache.log4j.Logger

class CopyFixVersionFromParentToChild extends AbstractIssueEventListener {
    Logger log = Logger.getLogger(CopyFixVersionFromParentToChild.class);
    SubTaskManager subTaskManager = ComponentAccessor.getComponent(SubTaskManager.class)
    IssueManager issueManager = ComponentAccessor.getComponent(IssueManager.class)

    @Override
    void issueUpdated(IssueEvent event) {
        log.warn("\nIssue updated!!!\n")
        try {
            Issue updatedIssue = event.getIssue()
            if (updatedIssue.issueTypeObject.name == "Parent issue type") {
                Collection<Version> fixVersions = new ArrayList<Version>()
                fixVersions = updatedIssue.getFixVersions()
                Collection<Issue> subTasks = updatedIssue.getSubTaskObjects()
                if (subTaskManager.subTasksEnabled && !subTasks.empty) {
                    subTasks.each {
                        if (it instanceof MutableIssue) {
                            ((MutableIssue) it).setFixVersions(fixVersions)
                            issueManager.updateIssue(event.getUser(), it, EventDispatchOption.ISSUE_UPDATED, false)
                        }
                    }
                }
            }
        } catch (ex) {
            log.debug "Event: ${event.getEventTypeId()} fired for ${event.issue} and caught by script 'CopyVersionFromParentToChild'"
            log.debug(ex.getMessage())
        }
    }
}
The problem is that it doesn't work. I'm not sure whether the problem is that my script logic is encapsulated inside a class. Do I have to register it in some specific way? Or am I using ScriptRunner completely wrong and pasting this script into the wrong section? I checked the code against the JIRA API and it looks like it should work; my IDE doesn't show any warnings or errors.
Also, could anyone give me hints on where to find the logging output from custom scripts like this? Whatever message I put into the logger, I seem unable to find it anywhere in the JIRA logs (although I'm aware the script might not work for now).
Any response is much appreciated, thanks.
Martin
Well, I figured it out.
The approach I posted, which implements the listener as a Groovy class, is used differently than I expected. Script files like that used to be placed in a specific path in the JIRA installation, and ScriptRunner would register them with JIRA as listeners.
In order to create a 'simple' listener script which reacts to the issue-updated event, I had to strip it down to this code:
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.event.type.EventDispatchOption
import com.atlassian.jira.issue.Issue
import com.atlassian.jira.issue.IssueManager
import com.atlassian.jira.issue.MutableIssue
import com.atlassian.jira.project.version.Version

IssueManager issueManager = ComponentAccessor.getComponent(IssueManager.class)
Issue updatedIssue = event.getIssue()
Collection<Version> fixVersions = new ArrayList<Version>()
fixVersions = updatedIssue.getFixVersions()
Collection<Issue> subTasks = updatedIssue.getSubTaskObjects()
subTasks.each {
    if (it instanceof MutableIssue) {
        ((MutableIssue) it).setFixVersions(fixVersions)
        issueManager.updateIssue(event.getUser(), it, EventDispatchOption.ISSUE_UPDATED, false)
    }
}
You paste this into the ScriptRunner interface and it works :-). Hope this helps anyone who's learning ScriptRunner. Cheers.
Matthew

Neo4j Spatial: can't run spatial

I have been trying to work with Neo4j Spatial for my project, but I can't make it work.
With the limited documentation and examples, I figured out how to load an OSM map into the database. But to check whether it is loaded, I am trying to execute a spatial query.
While trying to run my code I get this error:
import.java:69: error: cannot access GremlinGroovyPipeline
.startIntersectSearch(layer, bbox)
^
class file for com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline not found
I understand what's wrong (it can't find the required library), but I don't know how to fix it. When I run the Neo4j Spatial tests, LayerTest.java and TestSpatial.java do include the GeoPipeline library and work perfectly fine. However, when I created my own simple Java file to test Neo4j and tried to execute commands that depend on the GeoPipeline library, I got the error above.
I read the instructions on GitHub for Neo4j Spatial and saw this note:
Note: neo4j-spatial has a mandatory dependency on
GremlinGroovyPipeline from the com.tinkerpop.gremlin.groovy package.
The dependency in neo4j is type 'provided', so when using
neo4j-spatial in your own Java project, make sure to add the following
dependency to your pom.xml, too.
However, I am not using Maven to build my app. It is a simple Java file that I want to run to test whether I understand how everything works.
Here is the code from my Java file:
package org.neo4j.gis.spatial;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.geotools.data.DataStore;
import org.geotools.data.neo4j.Neo4jSpatialDataStore;
import org.geotools.data.simple.SimpleFeatureCollection;
import org.neo4j.gis.spatial.osm.OSMDataset;
import org.neo4j.gis.spatial.osm.OSMDataset.Way;
import org.neo4j.gis.spatial.osm.OSMGeometryEncoder;
import org.neo4j.gis.spatial.osm.OSMImporter;
import org.neo4j.gis.spatial.osm.OSMLayer;
import org.neo4j.gis.spatial.osm.OSMRelation;
import org.neo4j.gis.spatial.pipes.osm.OSMGeoPipeline;
import org.neo4j.graphdb.Direction;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Relationship;
import com.vividsolutions.jts.geom.Envelope;
import com.vividsolutions.jts.geom.Geometry;
import org.neo4j.kernel.impl.batchinsert.BatchInserter;
import org.neo4j.kernel.impl.batchinsert.BatchInserterImpl;
import org.neo4j.kernel.EmbeddedGraphDatabase;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.gis.spatial.pipes.GeoPipeline;
class SpatialOsmImport {
    public static void main(String[] args)
    {
        OSMImporter importer = new OSMImporter("ott.osm");
        Map<String, String> config = new HashMap<String, String>();
        config.put("neostore.nodestore.db.mapped_memory", "90M");
        config.put("dump_configuration", "true");
        config.put("use_memory_mapped_buffers", "true");
        BatchInserter batchInserter = new BatchInserterImpl("target/dependency", config);
        importer.setCharset(Charset.forName("UTF-8"));
        try {
            importer.importFile(batchInserter, "ott.osm", false);
            batchInserter.shutdown();
            GraphDatabaseService db = new EmbeddedGraphDatabase("target/dependency");
            importer.reIndex(db, 10000);
            db.shutdown();
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }

        GraphDatabaseService database = new EmbeddedGraphDatabase("target/dependency");
        try {
            SpatialDatabaseService spatialService = new SpatialDatabaseService(database);
            Layer layer = spatialService.getLayer("layer_roads");
            LayerIndexReader spatialIndex = layer.getIndex();
            System.out.println("Have " + spatialIndex.count() + " geometries in " + spatialIndex.getBoundingBox());

            Envelope bbox = new Envelope(-75.80, 45.19, -75.7, 45.23);
            // Search searchQuery = new SearchIntersectWindow(bbox);
            // spatialIndex.executeSearch(searchQuery);
            // List<SpatialDatabaseRecord> results = searchQuery.getResults();
            List<SpatialDatabaseRecord> results = GeoPipeline
                .startIntersectSearch(layer, bbox)
                .toSpatialDatabaseRecordList();
            doGeometryTestsOnResults(bbox, results);
        } finally {
            database.shutdown();
        }
    }

    private static void doGeometryTestsOnResults(Envelope bbox, List<SpatialDatabaseRecord> results) {
        System.out.println("Found " + results.size() + " geometries in " + bbox);
        Geometry geometry = results.get(0).getGeometry();
        System.out.println("First geometry is " + geometry);
        geometry.buffer(2);
    }
}
It is very simple right now, but I can't make it work. How do I include com.tinkerpop.gremlin.groovy.GremlinGroovyPipeline in my app, so it works?
I run everything on Ubuntu 12.04 and java version "1.7.0_25", Java(TM) SE Runtime Environment (build 1.7.0_25-b15).
Any help is greatly appreciated.
The best way to get all the required dependencies into a place where you can include them in your classpath is to run
mvn dependency:copy-dependencies
in neo4j-spatial, and find the libs to include in target/dependency; see http://maven.apache.org/plugins/maven-dependency-plugin/usage.html
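For example, assuming you cloned neo4j-spatial into ./neo4j-spatial, built it with mvn compile, and keep your source file in a matching org/neo4j/gis/spatial directory (all of these paths are assumptions), compiling and running with a classpath wildcard over the copied jars would look roughly like this:
javac -cp "neo4j-spatial/target/classes:neo4j-spatial/target/dependency/*" org/neo4j/gis/spatial/SpatialOsmImport.java
java -cp ".:neo4j-spatial/target/classes:neo4j-spatial/target/dependency/*" org.neo4j.gis.spatial.SpatialOsmImport
The * wildcard expands to every jar in that directory, which is where dependency:copy-dependencies places them by default.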
