How to undeploy a process with the ODE Deployment API?

I am using the Apache ODE Deployment API to deploy an application. So far so good.
When I try undeploying it I use the following code:
private void undeployProcessInODE() {
    DeploymentService service = new DeploymentServiceLocator();
    try {
        DeploymentServicePortType port = service
                .getDeploymentServiceSOAP11port_http();
        String[] deployedPackages = port.listDeployedPackages();
        String deployedPackage;
        QName[] qNames;
        QName qName;
        for (int i = 0; i < deployedPackages.length; i++) {
            deployedPackage = deployedPackages[i];
            qNames = port.listProcesses(deployedPackage);
            for (int j = 0; j < qNames.length; j++) {
                qName = qNames[j]; // inner loop index; indexing with i here would be a bug
                port.undeploy(qName);
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    } catch (ServiceException e) {
        e.printStackTrace();
    }
}
and it throws
Invocation of operation undeploy failed: org.apache.ode.axis2.OdeFault: Invalid bundle name, only non empty alpha-numerics and _ strings are allowed.
because the local part of the qName is bpel258-156, which I guess is some kind of deploy versioning I don't know how to control. My folder inside WEB-INF/processes is BPEL_process, and the files inside it are named bpel258.bpel and so on. I can't find a reference to anything explaining where the "version" number is added, so I don't know how to avoid this.
Besides, I'm still not sure what "undeploy" means in ODE terms. Is it just deleting my process folder? What is the .deployed file next to my folder, and why is it empty?
I have tried many times to just delete both the folder and the .deployed file, but ODE remembers them and tries to locate them. How do I reset this?
As an extra, I should say I ended up moving the whole ODE folder from the Tomcat I used through Eclipse to standalone Jetty, just so I could have the folder called BPEL_process and overwrite it every time. Before this, ODE would just make a new folder with the versioning number, and I didn't know how to change that. Help with this would also be appreciated.
I'm aware these may be too many questions at once, but I believe they are all strongly related.

Related

P4API.net: how to use P4Callbacks delegates

I am working on a small tool to schedule p4 sync daily at specific times.
In this tool, I want to display the outputs from the P4API while it is running commands.
I can see that the P4API.net has a P4Callbacks class, with several delegates: InfoResultsDelegate, TaggedOutputDelegate, LogMessageDelegate, ErrorDelegate.
My question is: how can I use those? I could not find a single example of that online. A short code example would be amazing!
Note: I am quite a beginner and have never used delegates before.
Answering my own question with an example. I ended up figuring it out by myself; it is a simple event.
Note that this only works with P4Server. My last attempt at getting TaggedOutput from a P4.Connection was unsuccessful; the events were never triggered when running a command.
So, here is a code example:
P4Server p4Server = new P4Server(syncPath);
p4Server.TaggedOutputReceived += P4ServerTaggedOutputEvent;
p4Server.ErrorReceived += P4ServerErrorReceived;
bool syncSuccess = false;
try
{
    P4Command syncCommand = new P4Command(p4Server, "sync", true, syncPath + "\\...");
    P4CommandResult rslt = syncCommand.Run();
    syncSuccess = true;
    // Here you can read the content of the P4CommandResult,
    // but it will only be accessible when the command is finished.
}
catch (P4Exception ex) // Will be caught only when the command has completely failed
{
    Console.WriteLine("P4Command failed: " + ex.Message);
}
And here are the two handler methods; these will be triggered while the sync command is being executed.
private void P4ServerErrorReceived(uint cmdId, int severity, int errorNumber, string data)
{
    Console.WriteLine("P4ServerErrorReceived:" + data);
}

private void P4ServerTaggedOutputEvent(uint cmdId, int ObjId, TaggedObject Obj)
{
    Console.WriteLine("P4ServerTaggedOutputEvent:" + Obj["clientFile"]);
}
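One extra note: since the handlers stay attached to the long-lived P4Server, you may want to detach them when you are done with it. This is just standard C# event unsubscription, using the same names as above:
// Detach the handlers so the P4Server no longer references this object.
p4Server.TaggedOutputReceived -= P4ServerTaggedOutputEvent;
p4Server.ErrorReceived -= P4ServerErrorReceived;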

Wait for uploads to finish

I have written a program that writes multiple files into a SharePoint list using CSOM:
foreach (var file in myFiles)
{
    UploadFileToSharePoint(file.dataq, context, url + file.name);
}
UploadFileToSharePoint(dummy, context, url + "finish.txt");

public static void UploadFileToSharePoint(Byte[] fileStream, ClientContext cCtx, String destUrl)
{
    using (MemoryStream stream = new MemoryStream(fileStream))
    {
        Microsoft.SharePoint.Client.File.SaveBinaryDirect(cCtx, destUrl, stream, true);
    }
}
As you can see, once the N files are uploaded into the target folder, one last file called "finish.txt" is uploaded. Now I have an Event Receiver on that list that checks whether the last file has been uploaded:
public override void ItemAdded(SPItemEventProperties properties)
{
    try
    {
        if (properties.ListItem.FileSystemObjectType == SPFileSystemObjectType.File)
        {
            if (properties.ListItem.File != null && properties.ListItem.File.ParentFolder != null)
            {
                if (properties.ListItem.File.Name.Contains("finish.txt"))
                {
                    // last file inside folder
                    AnalyzeFolder(properties.ListItem);
                }
            }
        }
        base.ItemAdded(properties);
    }
    catch (Exception)
    {
        // logging omitted
    }
}
Occasionally I receive an error
Error moving files: Microsoft.SharePoint.SPException: Save Conflict.
Your changes conflict with those made concurrently by another user. If
you want your changes to be applied, click Back in your Web browser,
refresh the page, and resubmit your changes. --->
System.Runtime.InteropServices.COMException: Save Conflict. Your
changes conflict with those made concurrently by another user.
My guess would be that this happens because the last file is uploaded before the uploads of the earlier files have finished.
How can I make sure that all uploads are finished before I upload the last file to prevent that error?
This isn't exactly what I was wanting, but making the Event Receiver work synchronously by adding
<Synchronization>Synchronous</Synchronization>
to the elements.xml solved that issue.
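For context, here is roughly where that element goes in elements.xml (a sketch; the names, list template ID, and sequence number are placeholders for your own values):
<Receivers ListTemplateId="101">
  <Receiver>
    <Name>FinishFileItemAdded</Name>
    <Type>ItemAdded</Type>
    <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
    <Class>MyNamespace.MyEventReceiver</Class>
    <SequenceNumber>10000</SequenceNumber>
    <Synchronization>Synchronous</Synchronization>
  </Receiver>
</Receivers>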
(I won't mark this as an answer, because originally I did not want that event receiver to be synchronous, but to wait for the other events. So if someone has a better answer: feel free to post it.)
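For anyone who does want to keep the receiver asynchronous, one possible mitigation is to simply retry when a save conflict occurs instead of trying to order the events. A rough sketch (the retry count and backoff are arbitrary assumptions):
private static void RetryOnSaveConflict(Action action, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            action();
            return;
        }
        catch (SPException)
        {
            // Save conflicts are transient; back off a little and try again.
            if (attempt >= maxAttempts) throw;
            System.Threading.Thread.Sleep(500 * attempt);
        }
    }
}
// Usage inside ItemAdded:
// RetryOnSaveConflict(() => AnalyzeFolder(properties.ListItem), 3);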

Nested IMessageQueueClient publish using Servicestack InMemoryTransientMessageService

We are using InMemoryTransientMessageService to chain several one-way notifications between services. We cannot use the Redis provider, and we do not really need it so far; synchronous dispatching is enough.
We are experimenting problems when using a publish inside a service that is handling another publish. In pseudo-code:
FirstService.Method()
_messageQueueClient.Publish(obj);
SecondService.Any(obj)
_messageQueueClient.Publish(obj);
ThirdService.Any(obj)
The second message is never handled. In the following code from ServiceStack's TransientMessageServiceBase, when the second message is published the service is already running ("isRunning"), so it does not try to handle the second one:
public virtual void Start()
{
    if (isRunning) return;
    isRunning = true;

    this.messageHandlers = this.handlerMap.Values.ToList().ConvertAll(
        x => x.CreateMessageHandler()).ToArray();

    using (var mqClient = MessageFactory.CreateMessageQueueClient())
    {
        foreach (var handler in messageHandlers)
        {
            handler.Process(mqClient);
        }
    }

    this.Stop();
}
I'm not sure about the impact of changing this behaviour in order to be able to nest/chain message publications. Do you think it is safe to remove this check? Any other ideas?
After some tests, it seems there is no problem in removing the isRunning check: all nested publications are executed correctly.
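For anyone wanting to reproduce the issue, here is a minimal sketch of the chained registration described above (the message types and handler bodies are illustrative, not our real services, and how you obtain the IMessageQueueClient may differ in your setup):
class FirstMessage { }
class SecondMessage { }

var mqService = new InMemoryTransientMessageService();

mqService.RegisterHandler<FirstMessage>(m =>
{
    // Publishing from inside a handler: with the isRunning check in place,
    // this nested message is queued but never processed.
    using (var mqClient = mqService.MessageFactory.CreateMessageQueueClient())
    {
        mqClient.Publish(new SecondMessage());
    }
    return null;
});

mqService.RegisterHandler<SecondMessage>(m =>
{
    Console.WriteLine("SecondMessage handled");
    return null;
});

mqService.Start();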

Caching requests to reduce processing (TPL?)

I'm currently trying to reduce the number of similar requests being processed in a business layer by:
Caching the requests a method receives
Performing the slow processing task (once for all similar requests)
Returning the result to each requesting method call
Things to note are that:
The original method calls are not currently in an async BeginMethod() / EndMethod(IAsyncResult) pattern
The requests arrive faster than the time it takes to generate the output
I'm trying to use TPL where possible, as I am currently trying to learn more about this library
e.g. improving the following:
byte[] RequestSlowOperation(string operationParameter)
{
    // Perform slow task here...
}
Any thoughts?
Follow up:
class SomeClass
{
    private int _threadCount;

    public SomeClass(int threadCount)
    {
        _threadCount = threadCount;
        int parameter = 0;
        var taskFactory = Task<int>.Factory;
        for (int i = 0; i < threadCount; i++)
        {
            int i1 = i;
            taskFactory
                .StartNew(() => RequestSlowOperation(parameter))
                .ContinueWith(result => Console.WriteLine("Result {0} : {1}", result.Result, i1));
        }
    }

    private int RequestSlowOperation(int parameter)
    {
        Lazy<int> result2;
        var result = _cacheMap.GetOrAdd(parameter, new Lazy<int>(() => RequestSlowOperation2(parameter))).Value;
        //_cacheMap.TryRemove(parameter, out result2); <<<<< Thought I could remove immediately, but this causes blobby behaviour
        return result;
    }

    static ConcurrentDictionary<int, Lazy<int>> _cacheMap = new ConcurrentDictionary<int, Lazy<int>>();

    private int RequestSlowOperation2(int parameter)
    {
        Console.WriteLine("Evaluating");
        Thread.Sleep(100);
        return parameter;
    }
}
Here is a fast, safe and maintainable way to do this:
static ConcurrentDictionary<string, Lazy<byte[]>> cacheMap =
    new ConcurrentDictionary<string, Lazy<byte[]>>();

byte[] RequestSlowOperation(string operationParameter)
{
    // GetOrAdd's value factory receives the key; the Lazy ensures the slow
    // operation itself runs at most once per key.
    return cacheMap.GetOrAdd(
        operationParameter,
        key => new Lazy<byte[]>(() => RequestSlowOperation2(key))).Value;
}

byte[] RequestSlowOperation2(string operationParameter)
{
    // Perform slow task here...
}
This will execute RequestSlowOperation2 at most once per key. Please be aware that the memory held by the dictionary will never be released.
The user delegate passed to the ConcurrentDictionary is not executed under lock, meaning that it could execute multiple times! My solution allows multiple lazies to be created but only one of them will ever be published and materialized.
Regarding locking: this solution will take locks, but it does not matter because the work items are far more expensive than the (few) lock operations.
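On the point that the dictionary's memory is never released: if unbounded growth ever becomes a concern, a crude mitigation is periodic eviction. A sketch (the interval is an arbitrary assumption; callers racing with the clear simply recompute and re-cache):
// Periodically clear the cache so memory can be reclaimed.
// Keep a reference to the timer so it is not garbage collected.
var evictionTimer = new System.Threading.Timer(
    _ => cacheMap.Clear(),
    null,
    TimeSpan.FromMinutes(10),
    TimeSpan.FromMinutes(10));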
Honestly, the use of TPL as a technology here is not really important; this is just a straight-up concurrency problem. You're trying to protect access to a shared resource (the cached data), and to do that, the only approach is to lock. Either that, or, if the cache entry does not already exist, you could allow all incoming threads to generate it and let subsequent requesters benefit from the cached value once it's stored, but there's little value in that if the resource is slow/expensive to generate and cache.
Perhaps some more details will make it clear exactly why you're trying to accomplish this without a lock. I'll happily revise my answer if more detail makes it clearer what you're trying to do.
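For completeness, since the question mentions TPL: the same pattern can be expressed with tasks instead of Lazy<T> (a sketch, with the same caveat about unbounded growth; names mirror the answer above):
static ConcurrentDictionary<string, Task<byte[]>> taskCache =
    new ConcurrentDictionary<string, Task<byte[]>>();

byte[] RequestSlowOperation(string operationParameter)
{
    // One Task per key; concurrent callers for the same key share the Task.
    Task<byte[]> task = taskCache.GetOrAdd(
        operationParameter,
        key => Task.Factory.StartNew(() => RequestSlowOperation2(key)));
    return task.Result; // blocks until the single shared computation completes
}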

How do I tell my C# application to close a file it has open in a FileInfo object or possibly Bitmap object?

So I was writing a quick application to sort my wallpapers neatly into folders according to aspect ratio. Everything is going smoothly until I try to actually move the files (using FileInfo.MoveTo()). The application throws an exception:
System.IO.IOException
The process cannot access the file because it is being used by another process.
The only problem is, there is no other process running on my computer that has that particular file open. I thought that, because of the way I was using the file, some internal system subroutine on a different thread or something might have the file open when I try to move it. Sure enough, a few lines above that, I set a property that calls an event that opens the file for reading. I'm assuming at least some of that happens asynchronously. Is there any way to make it run synchronously? Otherwise I must change that property or rewrite much of the code.
Here are some relevant bits of code; please forgive the crappy Visual C# default names for things, this isn't really a release-quality piece of software yet:
private void button1_Click(object sender, EventArgs e)
{
    for (uint i = 0; i < filebox.Items.Count; i++)
    {
        if (!filebox.GetItemChecked((int)i)) continue;

        // This calls the SelectedIndexChanged event to change the 'selectedImg' variable
        filebox.SelectedIndex = (int)i;
        if (selectedImg == null) continue;

        Size imgAspect = getImgAspect(selectedImg);

        // This is gonna be hella hardcoded for now.
        // In the future this should be changed to be generic
        // and use some kind of setting schema to determine
        // the sort/filter results.
        FileInfo file = ((FileInfo)filebox.SelectedItem);
        if (imgAspect.Width == 8 && imgAspect.Height == 5)
        {
            finalOut = outPath + "\\8x5\\" + file.Name;
        }
        else if (imgAspect.Width == 5 && imgAspect.Height == 4)
        {
            finalOut = outPath + "\\5x4\\" + file.Name;
        }
        else
        {
            finalOut = outPath + "\\Other\\" + file.Name;
        }

        // Me trying to tell C# to close the file
        selectedImg.Dispose();
        previewer.Image = null;

        // This is where the exception is thrown
        file.MoveTo(finalOut);
    }
}
// The suspected event handler
private void filebox_SelectedIndexChanged(object sender, EventArgs e)
{
    FileInfo selected;
    if (filebox.SelectedIndex >= filebox.Items.Count || filebox.SelectedIndex < 0) return;

    selected = (FileInfo)filebox.Items[filebox.SelectedIndex];
    try
    {
        // The suspected line of code
        selectedImg = new Bitmap((Stream)selected.OpenRead());
    }
    catch (Exception) { selectedImg = null; }

    if (selectedImg != null)
        previewer.Image = ResizeImage(selectedImg, previewer.Size);
    else
        previewer.Image = null;
}
I have a longer fix in mind (that's probably more efficient anyway), but it presents more problems still :/
Any help would be greatly appreciated.
Since you are using selectedImg as a class-scoped variable, it keeps a lock on the file while the Bitmap is open. I would use a using statement and then Clone the Bitmap into the variable you are using; this releases the lock the Bitmap keeps on the file.
Something like this:
using (Bitmap img = new Bitmap((Stream)selected.OpenRead()))
{
    selectedImg = (Bitmap)img.Clone();
}
New answer:
I looked at the line where you do an OpenRead(). Clearly, this locks your file. It would be better to provide the file path instead of a stream, because you can't dispose the stream while the bitmap is in use or the bitmap becomes erroneous.
Another thing in your code that could be a bad practice is binding to FileInfo. Better to create a data-transfer object/value object with just the properties you need to show in your control, and bind to a collection of that type. That would help avoid file locks.
On the other hand, you can do a trick: why don't you show images stretched and compressed to screen resolution, so the image size is much smaller than the actual one, and provide a button called "Show in HQ"? That would solve the problem of preloading HD images. When the user clicks the "Show in HQ" button, the image is loaded in memory, and when it is closed, it gets disposed.
Is that OK for you?
If I'm not wrong, FileInfo doesn't lock any file: you're not opening the file, just reading its metadata.
On the other hand, if your application shows images, you should copy the visible ones into memory and load them into your form from a memory stream.
That's reasonable because you can open a file stream, read its bytes into a memory stream, and close the file, leaving no lock against that file.
NOTE: This solution is fine for not-so-large images... Let me know if you're working with HD images.
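A minimal sketch of that memory-stream approach (a hypothetical helper; it assumes the images fit comfortably in memory):
Bitmap LoadBitmapWithoutFileLock(string path)
{
    // Read the file once, then build the bitmap from an in-memory copy,
    // so no handle to the file on disk stays open.
    byte[] bytes = System.IO.File.ReadAllBytes(path);
    var ms = new System.IO.MemoryStream(bytes);
    return new Bitmap(ms); // GDI+ needs the stream alive for the bitmap's lifetime
}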
using (selectedImg = new Bitmap((Stream)selected.OpenRead()))
Will that do it?
