Currently I'm working on a CRM project. In this project I have to send data to a web service and get refined data back. This operation must run in a custom workflow, but I'm stuck; in fact, I have no idea how to do it. Any suggestions?
Here is my service code:
var tmpIncident = getIncidentById(organizationServiceContext);
if (tmpIncident != null) // if we have a valid incident, connect to the service and process the data
{
    GetCustomerInfoService.TransactionServiceClient client = new GetCustomerInfoService.TransactionServiceClient();
    GetCustomerInfoService.TransactionRequest request = new GetCustomerInfoService.TransactionRequest();

    #region authentication
    request.AuthenticationData.UserName = "user";
    request.AuthenticationData.Password = "pass";
    #endregion

    Guid id = Guid.NewGuid(); // create a random guid for the request id
    request.RequestId = id.ToString();
    request.OrderNumber = tmpIncident.vrp_ordernumber;

    GetCustomerInfoService.TransactionResponse response = client.GetTransactionByOrderNumber(request);

    // the service returns the contact id in the response message
    tmpIncident.CustomerId = new EntityReference("Contact", new Guid(response.Message));
    this.updateChanges(organizationServiceContext, tmpIncident);
    client.Close();
}
When I tested the plugin, I received this error:
Error Message:
Unhandled Exception: System.InvalidOperationException: Could not find default endpoint element that references contract 'GetCustomerInfoService.ITransactionService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this contract could be found in the client element.
at System.ServiceModel.Description.ConfigLoader.LoadChannelBehaviors(ServiceEndpoint serviceEndpoint, String configurationName)
at System.ServiceModel.ChannelFactory.InitializeEndpoint(String configurationName, EndpointAddress address)
at System.ServiceModel.ChannelFactory`1..ctor(String endpointConfigurationName, EndpointAddress remoteAddress)
at System.ServiceModel.ConfigurationEndpointTrait`1.CreateSimplexFactory()
at System.ServiceModel.ClientBase`1.CreateChannelFactoryRef(EndpointTrait`1 endpointTrait)
at System.ServiceModel.ClientBase`1.InitializeChannelFactoryRef()
at Vrp.Crm.PluginLibrary2013.GetCustomerInfoService.TransactionServiceClient..ctor() in :line 0
at Vrp.Crm.PluginLibrary2013.CustomWorkflows.SetCumstomerIdToIncident.Execute(CodeActivityContext context) in c:\Veripark\Projects\gisik\DRCRM.VERITOUCH.CRM2013\PluginLibrary2013\CustomWorkflows\CheckSubIncidentForMainIncident.cs:line 72
at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
Assuming that you are using Dynamics CRM, this is a high-level overview:
Create a custom workflow activity that makes the actual call to the webservice. This should help you get started: https://msdn.microsoft.com/en-us/library/gg328515.aspx
The custom activity would have output parameters that return the results from the web service back to the calling CRM workflow; a minimal sketch follows.
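For illustration only, here is a rough sketch of such a custom workflow activity, loosely based on the code in the question. The class name, argument names, and the endpoint URL are placeholders, not the asker's actual names. The key point regarding the exception above: a workflow assembly has no app.config, so the WCF binding and endpoint address have to be supplied in code rather than read from a ServiceModel configuration section.

// Hypothetical sketch - names and the service URL are placeholders.
public class GetCustomerInfoActivity : CodeActivity
{
    [RequiredArgument]
    [Input("Order Number")]
    public InArgument<string> OrderNumber { get; set; }

    [Output("Customer")]
    [ReferenceTarget("contact")]
    public OutArgument<EntityReference> Customer { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Build the binding and endpoint in code; a workflow assembly cannot
        // rely on an app.config/web.config ServiceModel section. The binding
        // type and URL below are assumptions - use whatever your service expects.
        var binding = new BasicHttpBinding();
        var address = new EndpointAddress("https://example.org/TransactionService.svc"); // placeholder URL

        var client = new GetCustomerInfoService.TransactionServiceClient(binding, address);
        var request = new GetCustomerInfoService.TransactionRequest();
        request.RequestId = Guid.NewGuid().ToString();
        request.OrderNumber = OrderNumber.Get(context);

        var response = client.GetTransactionByOrderNumber(request);
        client.Close();

        // Hand the refined data back to the calling workflow as an output parameter.
        Customer.Set(context, new EntityReference("contact", new Guid(response.Message)));
    }
}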
Finally, create the workflow and its triggers that make use of your custom activity. More info:
http://crmbook.powerobjects.com/system-administration/processes/workflows/
https://msdn.microsoft.com/en-us/library/gg328264.aspx
I hope this is enough information to get you heading down the right path.
I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };
            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = new InvokeResult();
            invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            // Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); // pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
        catch (Exception)
        {
            // exception handling and file retrieval omitted in the original post
            throw;
        }
    }
    return salesOrder;
}
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Report parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");

    return reportFileInfo;
}
The problem here is that your action is synchronous, so it is trying to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler, and then your client code should work properly, as it already handles the waiting and checking.
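As a rough illustration (not a definitive implementation), the handler from the question could be wrapped like this. ExportReport and exportTrayLabelReport are the asker's own helpers; capturing the order keys before the delegate runs is an assumption about what the long-running part needs, and returning adapter.Get() immediately means the client retrieves the generated files after polling GetProcessStatus, which its existing loop already does.

// Sketch only - same handler as in the question, made long-running.
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Capture the keys up front; the current record may not be available
    // inside the long-running delegate.
    string orderType = Base.Document.Current.OrderType;
    string orderNbr = Base.Document.Current.OrderNbr;

    PXLongOperation.StartOperation(Base, delegate
    {
        Dictionary<string, string> parameters = new Dictionary<string, string>();
        parameters["SOOrder.OrderType"] = orderType;
        parameters["SOOrder.OrderNbr"] = orderNbr;

        // ExportReport / exportTrayLabelReport are the asker's existing helpers.
        ExportReport(adapter, "IN619217", parameters);
        exportTrayLabelReport(adapter, "SFP");
    });

    // Return immediately; the Invoke call completes while the operation runs.
    return adapter.Get();
}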
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client would periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this or Base, delegate
{
    // your code here
});
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure whether your use case with an external application will play well with an asynchronous PXLongOperation: the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know there's no real practical drawback to doing this, unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be a lost connection after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting a large amount of data from the Acumatica API, as this is also a common cause of failures that you miss in testing but that occur in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />
I have been able to get my mobile Android app to receive messages generated from the Pinpoint Campaign console (https://console.aws.amazon.com/pinpoint/home) to a specific device by targeting the segment to a custom attribute that only that device has.
Pinpoint Campaign config
Mobile push channel
Standard campaign
Segment defined using custom attributes, holdout 0%
Silent notification
Custom JSON
Launch immediate
Now I would like to implement this feature in my Java app using the SDK APIs and target the device's Pinpoint endpoint.
GetEndpointRequest getEndpointRequest = new GetEndpointRequest()
        .withApplicationId(appId)
        .withEndpointId(endpointId);
GetEndpointResult endpointResult = getAmazonPinpointClient().getEndpoint(getEndpointRequest);
// the address used below comes from the endpoint response inside the result
EndpointResponse endpointResponse = endpointResult.getEndpointResponse();

DirectMessageConfiguration directMessageConfiguration =
        new DirectMessageConfiguration().withGCMMessage(
                new GCMMessage().withBody(body).withSilentPush(true).withAction(Action.OPEN_APP));
AddressConfiguration addressConfiguration = new AddressConfiguration().withChannelType(ChannelType.GCM);

MessageRequest messageRequest = new MessageRequest().withMessageConfiguration(directMessageConfiguration)
        .addAddressesEntry(endpointResponse.getAddress(), addressConfiguration);
SendMessagesRequest sendMessagesRequest = new SendMessagesRequest()
        .withApplicationId(appId)
        .withMessageRequest(messageRequest);
The "body" is the same JSON I put in the Pinpoint Campaign console. When I run this, I get back a DeliveryStatus of SUCCESSFUL but the device never receives the message.
{ApplicationId: MY_APP_ID,Result: {clrVUcv-AwA:APA91bHGXkxpDJiw5kOMROA2XTJXuKreMklq9jemHO_KGYTIw6w84Fw9zLv9waMgLgha61IR-kZxgmrnFu-OGp8l6WFgp4Wolh4oOvZwMobGYNgzivv3bGIK83t-e4hiLx1TTaEIeRdQ={DeliveryStatus: SUCCESSFUL,StatusCode: 200,StatusMessage: {"multicast_id":4803589342422496921,"success":1,"failure":0,"canonical_ids":0,"results":[{"message_id":"0:1515105369948916%c551fa42f9fd7ecd"}]},}}}
I have also tried this via the AWS CLI:
aws pinpoint send-messages --application-id MY_APP_ID --message-request "{\"Addresses\":{\"clrVUcv-AwA:APA91bHGXkxpDJiw5kOMROA2XTJXuKreMklq9jemHO_KGYTIw6w84Fw9zLv9waMgLgha61IR-kZxgmrnFu-OGp8l6WFgp4Wolh4oOvZwMobGYNgzivv3bGIK83t-e4hiLx1TTaEIeRdQ\":{\"ChannelType\":\"GCM\"}},\"MessageConfiguration\":{\"GCMMessage\":{\"Body\":\"{\\\"message\\\":\\\"stuff\\\"}\",\"SilentPush\":true}}}"
with a similar result (a 200 status code and a DeliveryStatus of SUCCESSFUL, but the app never receives it). I tried using a "Direct" message in the AWS Pinpoint console, but it does not seem to support the same format (it forces Action and Title/Message instead of a silent push message with custom JSON).
Am I getting the endpoint incorrectly? How do I translate the above campaign into a message? I see there is a sendUserMessages() API call as well, but that doesn't seem to be the right one (I couldn't find where to specify the specific user endpoint).
The client receives the campaign via the registered Service:
public class PushListenerService extends GcmListenerService {

    @Override
    public void onMessageReceived(final String from, final Bundle data) {
        AWSMobileClient.initializeMobileClientIfNecessary(this.getApplicationContext());
        final NotificationClient notificationClient = AWSMobileClient.defaultMobileClient()
                .getPinpointManager().getNotificationClient();
        NotificationClient.CampaignPushResult pushResult =
                notificationClient.handleGCMCampaignPush(from, data, this.getClass());
        Log.e(LOG_TAG, " onMessageReceived - got messages" + data);
        // rest of the handler omitted
    }
}
Do GCM direct messages get sent through the same campaign method or do I have to register a different service to process these?
Found the solution based on the AWS CLI command I was able to run. I should have been using the "Data" element rather than the "Body", and I needed to enable "SilentPush".
EndpointResponse endpointResponse = getPinpointEndpointResponse(appId, pinpointEndpointId);

Map<String, String> data = new HashMap<>();
// construct data here, currently only supports Map<String, String>
// why not HashMap<String, Object> so it can support full JSON????

DirectMessageConfiguration directMessageConfiguration =
        new DirectMessageConfiguration().withGCMMessage(new GCMMessage().withData(data).withSilentPush(true));
AddressConfiguration addressConfiguration = new AddressConfiguration().withChannelType(ChannelType.GCM);

MessageRequest messageRequest = new MessageRequest().withMessageConfiguration(directMessageConfiguration)
        .addAddressesEntry(endpointResponse.getAddress(), addressConfiguration);
SendMessagesRequest sendMessagesRequest = new SendMessagesRequest()
        .withApplicationId(appId)
        .withMessageRequest(messageRequest);
I am trying to integrate Azure App Insights with an Azure Function App (HttpTriggered). I want to add my own keys and values in the "customDimensions" object of the requests table. Right now it only shows the following:
For the query
requests
| where iKey == "449470fb-****" and id == "5e17e23e-****"
I get this:
LogLevel: Information
Category: Host.Results
FullName: Functions.FTAID
StartTime: 2017-07-14T14:24:10.9410000Z
param__context: ****
HttpMethod: POST
param__req: Method: POST, Uri: ****
Succeeded: True
TriggerReason: This function was programmatically called via the host APIs.
EndTime: 2017-07-14T14:24:11.6080000Z
I want to add more key values such as:
EnvironmentName: Development
ServiceLine: Business
Based on this answer, I implemented the ITelemetryInitializer interface as follows:
public class CustomTelemetry : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var requestTelemetry = telemetry as RequestTelemetry;
        if (requestTelemetry == null) return;
        requestTelemetry.Context.Properties.Add("EnvironmentName", "Development");
    }
}
Here is what the run.csx code for the Azure Function App looks like:
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ExecutionContext context, TraceWriter log)
{
    // Initialize the App Insights Telemetry
    TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
    TelemetryConfiguration.Active.TelemetryInitializers.Add(new CustomTelemetry());
    TelemetryClient telemetry = new TelemetryClient();

    var jsonBody = await req.Content.ReadAsStringAsync();
    GetIoItemID obj = new GetIoItemID();
    JArray output = obj.GetResponseJson(jsonBody, log, telemetry);

    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent(output.ToString(), System.Text.Encoding.UTF8, "application/json");
    return response;
}
But this did not work...
I believe that since you're creating the TelemetryClient yourself in this example, you don't need to bother with the telemetry initializer; you could just do
var telemetry = new TelemetryClient();
telemetry.Context.Properties["EnvironmentName"] = "Development";
directly, and everything sent by that instance of the telemetry client will have those properties set. A minimal sketch of the adjusted function follows.
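For instance (sketch only; everything except the telemetry setup is unchanged from the question's run.csx, and GetIoItemID is the asker's own class):

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ExecutionContext context, TraceWriter log)
{
    TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);

    // Set the custom properties directly on this client's context instead of
    // registering an ITelemetryInitializer.
    var telemetry = new TelemetryClient();
    telemetry.Context.Properties["EnvironmentName"] = "Development";
    telemetry.Context.Properties["ServiceLine"] = "Business";

    var jsonBody = await req.Content.ReadAsStringAsync();
    GetIoItemID obj = new GetIoItemID();
    JArray output = obj.GetResponseJson(jsonBody, log, telemetry);

    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent(output.ToString(), System.Text.Encoding.UTF8, "application/json");
    return response;
}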
You'd only need the telemetry initializer if you don't have control over who creates the telemetry client and you want to touch every item of telemetry created anywhere.
I don't know how that TelemetryClient instance gets used downstream in Azure Functions, though, so I'm not entirely positive.
Edit: the Azure Functions post about this says:
We’ll be working hard to get Application Insights ready for production
workloads. We’re also listening for any feedback you have. Please file
it on our GitHub. We’ll be adding some new features like better
sampling controls and automatic dependency tracking soon. We hope
you’ll give it a try and start to gain more insight into how your
Functions are behaving. You can read more about how it works at
https://aka.ms/func-ai
and the example from that func-ai link does a couple of things:
1) It creates the telemetry client statically, up front, once (instead of in each call to the function):
private static TelemetryClient telemetry = new TelemetryClient();
private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
and inside the function it is doing:
telemetry.Context.Operation.Id = context.InvocationId.ToString();
to properly correlate any events you create with your own telemetry client, so you might want to do that too.
2) It appears that you can use the telemetry client you create, but the runtime creates its own telemetry client and sends data there, so anything you set in your telemetry client's context isn't seen by Azure Functions itself.
So, to me, that leads to something you can try:
Add a static constructor in your class, and in that static constructor, register the telemetry initializer as you were doing above. Possibly this gets your initializer added to the configuration before Azure Functions starts creating its request telemetry and calling your method; a rough sketch follows.
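Something along these lines (speculative; whether the host actually picks up the initializer this way is exactly the open question, and the class name is just a placeholder for whatever class hosts your function code):

// Speculative sketch - FunctionAppInsights is a placeholder name.
public static class FunctionAppInsights
{
    private static readonly TelemetryClient telemetry = new TelemetryClient();

    static FunctionAppInsights()
    {
        TelemetryConfiguration.Active.InstrumentationKey =
            System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
        // Register the initializer once, before any request is processed, in the
        // hope that it is in place before the host creates its own RequestTelemetry.
        TelemetryConfiguration.Active.TelemetryInitializers.Add(new CustomTelemetry());
    }

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ExecutionContext context, TraceWriter log)
    {
        // Correlate any custom telemetry with the current invocation.
        telemetry.Context.Operation.Id = context.InvocationId.ToString();

        // ... rest of the function body as in the question ...
        return req.CreateResponse(HttpStatusCode.OK);
    }
}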
If that doesn't work, you might need to post on their GitHub or email the person listed in the article for more details on how to do this?
I'm trying to read a value from a list in a remote SharePoint site (different SP Web App). The web apps are set up with Claims Auth, and the client web app SP Managed account is configured with an SPN. I believe Kerberos and claims are set up correctly, but I am unable to reach the remote server, and the request causes an exception: "The remote server returned an error: (401) Unauthorized."
The exception occurs on the line ctx.ExecuteQuery(); but it is not caught by the if (scope.HasException) check; instead, the exception is caught by the calling code (outside of the using {} block).
When I look at the traffic at the remote server using Wireshark, it doesn't look like the request is even getting to the server; it's almost as if the 401 occurs before the Kerberos ticket is exchanged for the claim.
Here's my code:
using (ClientContext ctx = new ClientContext(contextUrl))
{
    CredentialCache cc = new CredentialCache();
    cc.Add(new Uri(contextUrl), "Kerberos", CredentialCache.DefaultNetworkCredentials);
    ctx.Credentials = cc;
    ctx.AuthenticationMode = ClientAuthenticationMode.Default;

    ExceptionHandlingScope scope = new ExceptionHandlingScope(ctx);
    Web ctxWeb = ctx.Web;
    List ctxList;
    Microsoft.SharePoint.Client.ListItemCollection listItems;

    using (scope.StartScope())
    {
        using (scope.StartTry())
        {
            ctxList = ctxWeb.Lists.GetByTitle("Reusable Content");
            CamlQuery qry = new CamlQuery();
            qry.ViewXml = string.Format(ViewQueryByField, "Title", "Text", SharedContentTitle);
            listItems = ctxList.GetItems(qry);
            ctx.Load(listItems, items => items.Include(
                item => item["Title"],
                item => item["ReusableHtml"],
                item => item["ReusableText"]));
        }
        using (scope.StartCatch()) { }
        using (scope.StartFinally()) { }
    }

    ctx.ExecuteQuery();

    if (scope.HasException)
    {
        result = string.Format("Error retrieving content<!-- Error Message: {0} | {1} -->", scope.ErrorMessage, contextUrl);
    }

    if (listItems.Count == 1)
    {
        Microsoft.SharePoint.Client.ListItem contentItem = listItems[0];
        if (SelectedType == SharedContentType.Html)
        {
            result = contentItem["ReusableHtml"].ToString();
        }
        else if (SelectedType == SharedContentType.Text)
        {
            result = contentItem["ReusableText"].ToString();
        }
    }
}
I realize the part with the CredentialCache shouldn't be necessary with claims, but every single example I can find is either running in a console app or in a client-side application of some kind; this code is running in the code-behind of a regular ASP.NET UserControl.
Edit: I should probably mention that the code above doesn't even work when the remote URL is the root site collection on the same web app as the calling code (which is in a site collection under /sites/); in other words, even when the hostname is the same as the calling code.
Any suggestions of what to try next are greatly appreciated!
Mike
Is there a reason why you are not using the standard OM?
You already said this is running in a web part, which means it runs in the context of the application pool account. Unless you elevate permissions by switching users, it won't authenticate correctly; maybe try that. But I would not use the client OM when you already have access to the server API; a rough sketch of that approach follows.
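For what it's worth, here is a sketch of that idea using the server object model (the site URL is a placeholder, and the list and field names are taken from the question). RunWithElevatedPrivileges is one way of "switching users" to get past the app-pool-account problem; this assumes the remote site lives in the same farm as the code.

// Sketch only - URL is a placeholder; list/field names are from the question.
string result = null;
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite("http://remote-web-app/sites/target"))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Reusable Content"];
        SPQuery query = new SPQuery
        {
            Query = "<Where><Eq><FieldRef Name='Title'/>" +
                    "<Value Type='Text'>" + SharedContentTitle + "</Value></Eq></Where>",
            RowLimit = 1
        };
        SPListItemCollection items = list.GetItems(query);
        if (items.Count == 1)
        {
            // or "ReusableText", depending on SelectedType as in the question
            result = Convert.ToString(items[0]["ReusableHtml"]);
        }
    }
});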
I have developed a Windows service that sends an email for each lead neglected for 7 days.
When I run the code on my machine it works and keeps running, but when I install it as a Windows service, an error occurs:
using (OrganizationServiceProxy serviceProxy =
    new OrganizationServiceProxy(OrganizationUri, HomeRealmUri, Credentials, null))
{
    IOrganizationService service = (IOrganizationService)serviceProxy;

    ColumnSet Indexcol = new ColumnSet(new string[] { columnname });
    QueryByAttribute indexattribute = new QueryByAttribute();
    indexattribute.EntityName = EntityName;
    indexattribute.Attributes.AddRange(RangeAttribute);
    indexattribute.Values.AddRange(RangeValues);
    indexattribute.ColumnSet = Indexcol;

    RetrieveMultipleRequest req_index = new RetrieveMultipleRequest();
    req_index.Query = indexattribute;

    try
    {
        // Error occurs when this line executes
        RetrieveMultipleResponse resp_index =
            (RetrieveMultipleResponse)service.Execute(req_index);
        EntityCollection mcs_index = resp_index.EntityCollection;
    }
    catch (Exception)
    {
        // exception handling omitted in the original post
        throw;
    }
}
and the error is:
The server was unable to process the request due to an internal error. For more information about the error, either turn on IncludeExceptionDetailInFaults (either from ServiceBehaviorAttribute or from the <serviceDebug> configuration behavior) on the server in order to send the exception information back to the client, or turn on tracing as per the Microsoft .NET Framework 3.0 SDK documentation and inspect the server trace logs.
Kindly guide me; I am stuck here. :(