Azure Cognitive Services - Speech sample code fails with authentication error

The program returns: CANCELED: Reason=Error ErrorDetails=WebSocket Upgrade failed with an authentication error (401). Please check for correct subscription key (or authorization token) and region name. SessionId: cbfcdf7f26304343a08de6c398652053
I'm using my free trial subscription key and westus region. This is the sample code found here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/quickstarts/speech-to-text-from-microphone?tabs=unity%2Cx-android%2Clinux%2Cjava-runtime&pivots=programming-language-csharp
using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;
#if PLATFORM_ANDROID
using UnityEngine.Android;
#endif
#if PLATFORM_IOS
using UnityEngine.iOS;
using System.Collections;
#endif
public class Helloworld : MonoBehaviour
{
// Hook up the two properties below with a Text and Button object in your UI.
public Text outputText;
public Button startRecoButton;
private object threadLocker = new object();
private bool waitingForReco;
private string message;
private bool micPermissionGranted = false;
#if PLATFORM_ANDROID || PLATFORM_IOS
// Required to manifest microphone permission, cf.
// https://docs.unity3d.com/Manual/android-manifest.html
private Microphone mic;
#endif
public async void ButtonClick()
{
// Creates an instance of a speech config with specified subscription key and service region.
// Replace with your own subscription key and service region (e.g., "westus").
var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");
// Make sure to dispose the recognizer after use!
using (var recognizer = new SpeechRecognizer(config))
{
lock (threadLocker)
{
waitingForReco = true;
}
// Starts speech recognition, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognition text as result.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
// Checks result.
string newMessage = string.Empty;
if (result.Reason == ResultReason.RecognizedSpeech)
{
newMessage = result.Text;
}
else if (result.Reason == ResultReason.NoMatch)
{
newMessage = "NOMATCH: Speech could not be recognized.";
}
else if (result.Reason == ResultReason.Canceled)
{
var cancellation = CancellationDetails.FromResult(result);
newMessage = $"CANCELED: Reason={cancellation.Reason} ErrorDetails={cancellation.ErrorDetails}";
}
lock (threadLocker)
{
message = newMessage;
waitingForReco = false;
}
}
}
void Start()
{
if (outputText == null)
{
UnityEngine.Debug.LogError("outputText property is null! Assign a UI Text element to it.");
}
else if (startRecoButton == null)
{
message = "startRecoButton property is null! Assign a UI Button to it.";
UnityEngine.Debug.LogError(message);
}
else
{
// Continue with normal initialization, Text and Button objects are present.
#if PLATFORM_ANDROID
// Request to use the microphone, cf.
// https://docs.unity3d.com/Manual/android-RequestingPermissions.html
message = "Waiting for mic permission";
if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
{
Permission.RequestUserPermission(Permission.Microphone);
}
#elif PLATFORM_IOS
if (!Application.HasUserAuthorization(UserAuthorization.Microphone))
{
Application.RequestUserAuthorization(UserAuthorization.Microphone);
}
#else
micPermissionGranted = true;
message = "Click button to recognize speech";
#endif
startRecoButton.onClick.AddListener(ButtonClick);
}
}
void Update()
{
#if PLATFORM_ANDROID
if (!micPermissionGranted && Permission.HasUserAuthorizedPermission(Permission.Microphone))
{
micPermissionGranted = true;
message = "Click button to recognize speech";
}
#elif PLATFORM_IOS
if (!micPermissionGranted && Application.HasUserAuthorization(UserAuthorization.Microphone))
{
micPermissionGranted = true;
message = "Click button to recognize speech";
}
#endif
lock (threadLocker)
{
if (startRecoButton != null)
{
startRecoButton.interactable = !waitingForReco && micPermissionGranted;
}
if (outputText != null)
{
outputText.text = message;
}
}
}
}

The sample code you pasted above still has the placeholder values for the region and subscription key. Just double-checking: did you in fact replace those strings with your own subscription key and region? If so, please turn on logging, run the code again, and provide the log; we can help diagnose from there.
To turn on logging, see https://aka.ms/speech/logging.
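For reference, a minimal sketch of enabling SDK file logging from the Unity script, per the logging page above (the log file path is just an example value; adjust it for your target platform):
// Enable Speech SDK file logging before creating the recognizer.
// Application.persistentDataPath is a writable location on most Unity targets.
var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");
config.SetProperty(PropertyId.Speech_LogFilename, Application.persistentDataPath + "/speech-sdk.log");
using (var recognizer = new SpeechRecognizer(config))
{
// ... run recognition as in the sample above; the log file will capture the connection/401 details.
}
It is also worth checking that the key belongs to a Speech (or Cognitive Services) resource created in the same region string you pass to FromSubscription.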

Related

How to get Microsoft Azure Speech To Text to start transcribing when program is run? (Unity, C#)

I am trying to build a simple app using Microsoft Azure's Cognitive Services Speech-to-Text SDK in Unity3D. I've been following this tutorial, and it worked quite well. The only problem with the tutorial is that the speech-to-text is activated by a button: when you press the button, it transcribes for the duration of a sentence, and you have to press the button again for it to transcribe the next one. I'd like it to start transcribing as soon as the program runs in Unity, rather than having to press a button each time I want to transcribe a sentence.
Here is the code.
public async void ButtonClick()
{
// Creates an instance of a speech config with specified subscription key and service region.
// Replace with your own subscription key and service region (e.g., "westus").
var config = SpeechConfig.FromSubscription("[My API Key]", "westus");
// Make sure to dispose the recognizer after use!
using (var recognizer = new SpeechRecognizer(config))
{
lock (threadLocker)
{
waitingForReco = true;
}
// Starts speech recognition, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognition text as result.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
// Checks result.
string newMessage = string.Empty;
if (result.Reason == ResultReason.RecognizedSpeech)
{
newMessage = result.Text;
}
else if (result.Reason == ResultReason.NoMatch)
{
newMessage = "NOMATCH: Speech could not be recognized.";
}
else if (result.Reason == ResultReason.Canceled)
{
var cancellation = CancellationDetails.FromResult(result);
newMessage = $"CANCELED: Reason={cancellation.Reason} ErrorDetails={cancellation.ErrorDetails}";
}
lock (threadLocker)
{
message = newMessage;
waitingForReco = false;
}
}
}
void Start()
{
if (outputText == null)
{
UnityEngine.Debug.LogError("outputText property is null! Assign a UI Text element to it.");
}
else if (startRecoButton == null)
{
message = "startRecoButton property is null! Assign a UI Button to it.";
UnityEngine.Debug.LogError(message);
}
else
{
// Continue with normal initialization, Text and Button objects are present.
}
}
void Update()
{
lock (threadLocker)
{
if (startRecoButton != null)
{
startRecoButton.interactable = !waitingForReco && micPermissionGranted;
}
}
}
I've tried removing the Button object, but then the speech-to-text won't run.
Any tips or advice would be amazing. Thank you.
Per the comments in the script of the tutorial you referenced:
// Starts speech recognition, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognition text as result.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
But it's not as simple as replacing 'RecognizeOnceAsync' with 'StartContinuousRecognitionAsync', because the behaviours are different: RecognizeOnceAsync essentially turns on your mic for a maximum of 15 seconds and then stops listening.
Instead, make the button a toggle for 'should I listen continuously or not?' using StartContinuousRecognitionAsync and StopContinuousRecognitionAsync, and change your Start function to simply create a new recognizer and wait for its Recognizing event to fire. Below is the script I used to enable this functionality:
using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;
public class HelloWorld : MonoBehaviour
{
public Text outputText;
public Button startRecordButton;
// PULLED OUT OF BUTTON CLICK
SpeechRecognizer recognizer;
SpeechConfig config;
private object threadLocker = new object();
private bool speechStarted = false; //checking to see if you've started listening for speech
private string message;
private bool micPermissionGranted = false;
private void RecognizingHandler(object sender, SpeechRecognitionEventArgs e)
{
lock (threadLocker)
{
message = e.Result.Text;
}
}
public async void ButtonClick()
{
if (speechStarted)
{
await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false); // this stops the listening when you click the button, if it's already on
lock(threadLocker)
{
speechStarted = false;
}
}
else
{
await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false); // this will start the listening when you click the button, if it's already off
lock (threadLocker)
{
speechStarted = true;
}
}
}
void Start()
{
startRecordButton.onClick.AddListener(ButtonClick);
config = SpeechConfig.FromSubscription("KEY", "REGION");
recognizer = new SpeechRecognizer(config);
recognizer.Recognizing += RecognizingHandler;
}
void Update()
{
lock (threadLocker)
{
if (outputText != null)
{
outputText.text = message;
}
}
}
}
And below is a gif of me using this functionality. You'll note that I don't click the button at all (it was only clicked once, prior to the gif being recorded). (Also, sorry for the strange sentences; my coworkers kept interrupting to ask who I was talking to.)
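If you want transcription to begin as soon as the scene loads rather than on the first click, one minimal variation (an untested sketch built from the same calls used above) is to kick off continuous recognition at the end of Start():
async void Start()
{
startRecordButton.onClick.AddListener(ButtonClick);
config = SpeechConfig.FromSubscription("KEY", "REGION");
recognizer = new SpeechRecognizer(config);
recognizer.Recognizing += RecognizingHandler;
// Start listening immediately; the button then acts purely as a stop/restart toggle.
await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
lock (threadLocker)
{
speechStarted = true;
}
}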

How to overlay object on Cloud Recognized Image Target in Vuforia?

I started developing an Android AR app using the Vuforia library a few days ago. I am able to overlay my custom image over the pre-defined stones-and-chips target in the Vuforia sample app. In the Cloud Reco sample, it recognizes the target image and I am able to fetch the metadata for it. How can I now overlay my custom image on this recognized target?
The User Defined Target sample creates an Image Target at runtime and overlays my image, but I want it overlaid only on cloud-recognized targets.
So I tried integrating it into CloudReco.java, so that when an object is cloud-recognized, it builds a target out of it at runtime.
@Override
public void onVuforiaUpdate(State state)
{
// Get the tracker manager:
TrackerManager trackerManager = TrackerManager.getInstance();
// Get the object tracker:
ObjectTracker objectTracker = (ObjectTracker) trackerManager
.getTracker(ObjectTracker.getClassType());
// Get the target finder:
TargetFinder finder = objectTracker.getTargetFinder();
// Check if there are new results available:
final int statusCode = finder.updateSearchResults();
// Show a message if we encountered an error:
if (statusCode < 0)
{
boolean closeAppAfterError = (
statusCode == UPDATE_ERROR_NO_NETWORK_CONNECTION ||
statusCode == UPDATE_ERROR_SERVICE_NOT_AVAILABLE);
showErrorMessage(statusCode, state.getFrame().getTimeStamp(), closeAppAfterError);
} else if (statusCode == TargetFinder.UPDATE_RESULTS_AVAILABLE)
{
// Process new search results
if (finder.getResultCount() > 0)
{
TargetSearchResult result = finder.getResult(0);
// Check if this target is suitable for tracking:
if (result.getTrackingRating() > 0)
{
Trackable trackable = finder.enableTracking(result);
// Trying out the below method
// ==================================================
String targetMetaData = result.getMetaData();
Log.d("TargetMetaData", targetMetaData);
// Create the data set:
dataSetUserDef = objectTracker.createDataSet();
if (!objectTracker.activateDataSet(dataSetUserDef))
{
Log.d(LOGTAG, "Failed to activate data set.");
return;
}
Log.d(LOGTAG, "Successfully loaded and activated data set.");
// Build a target out of the selected image object
ImageTargetBuilder targetBuilder = objectTracker
.getImageTargetBuilder();
if (targetBuilder != null)
{
if (targetBuilder.getFrameQuality() == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_LOW)
{
Log.d("Cloud Reco", "Low Frame Quality");
}
String name;
if (!targetBuilder.build("randomName", 320.0f)) {
Log.d("CloudReco", "could not build target");
}
}
// ===================================================
if (mExtendedTracking)
trackable.startExtendedTracking();
}
}
}
}
However I am getting the below error:
03-10 08:17:15.330 291-28912/? D/ae_mgr: ae_mgr [doPvAEmonitor()][getPreviewFlare] i2FlareOffset = 27 i2FlareGain = 515
03-10 08:17:15.339 28797-28882/com.vuforia.samples.VuforiaSamples I/AR: Successfully created ImageTarget.
03-10 08:17:15.340 291-28927/? D/MtkCam/MtkCamUtils: [queryRawStride]imgFmt(269), imgWidth(1632), stride(1640)
03-10 08:17:15.341 291-28927/? I/iio/ifunc: [_config] path(2),CQ(4),en1(0x40300000),en2(0x38a00003),dma(0x00280080),fmt(0x00000106),ctl(0x00010050),tcm_en(0x98100003),isIspOn(0x1),muxS2(0xc0080308)
03-10 08:17:15.341 291-28927/? I/MdpMgr: [init] +,cqIndex(4),sceID(8)
03-10 08:17:15.341 28797-28882/com.vuforia.samples.VuforiaSamples D/TargetMetaData: {"id":1,"image_url":"\/home\/lakshay\/repos\/barium-repos\/scormREPO\/3\/1\/mobile\/5wAhJHu04Ht_DX1600_DY1600_CX800_CY480.png5wAhJHu04Ht_DX1600_DY1600_CX800_CY480.png"}
03-10 08:17:15.343 28797-28882/com.vuforia.samples.VuforiaSamples I/AR: ObjectTracker: Successfully created dataset
03-10 08:17:15.344 291-28913/? D/flash_mgr_m.cpp: flash_mgr_m.cpp isAFLampOn 0
03-10 08:17:15.345 28797-28882/com.vuforia.samples.VuforiaSamples D/CloudReco: Successfully loaded and activated data set.
03-10 08:17:15.346 28797-28882/com.vuforia.samples.VuforiaSamples E/AR: Could not initiate UserDef Target build process
03-10 08:17:15.346 28797-28882/com.vuforia.samples.VuforiaSamples E/AR: Not in scanning state or invalid frame quality reading of -1. Call build() again in scanning mode with valid frame quality values.
03-10 08:17:15.346 28797-28882/com.vuforia.samples.VuforiaSamples D/CloudReco: could not build target
03-10 08:17:15.355 28797-28810/com.vuforia.samples.VuforiaSamples I/BufferQueueProducer: [SurfaceTexture--1-28797-1](this:0x8731e000,id:1,api:4,p:291,c:28797) queueBuffer: slot 1 is dropped, handle=0x854ba300
I also tried tweaking UserDefinedTargets.java so that before building the Image Target, the TargetFinder checks whether the object matches the cloud targets. But it seems the same image is not matching here.
Here is the tweaked code:
When the camera button is clicked:
// Button Camera clicked
public void onCameraClick(View v)
{
if (isUserDefinedTargetsRunning())
{
// Shows the loading dialog
loadingDialogHandler
.sendEmptyMessage(LoadingDialogHandler.SHOW_LOADING_DIALOG);
// Builds the new target
//startBuild();
//// Calling startBuild as an async task instead
// since it involves network calls
SyncIncoData syncIncoData = new SyncIncoData();
syncIncoData.execute(new String[] { null });
}
}
Building targets:
void startBuild()
{
TrackerManager trackerManager = TrackerManager.getInstance();
ObjectTracker objectTracker = (ObjectTracker) trackerManager
.getTracker(ObjectTracker.getClassType());
if (objectTracker != null)
{
ImageTargetBuilder targetBuilder = objectTracker
.getImageTargetBuilder();
if (targetBuilder != null)
{
if (targetBuilder.getFrameQuality() == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_LOW)
{
showErrorDialogInUIThread();
}
// Tried below code for cloud-reco on this object
// ======================================================
// Initialize target finder:
TargetFinder targetFinder = objectTracker.getTargetFinder();
// Start initialization:
if (targetFinder.startInit(kAccessKey, kSecretKey))
{
Log.d("TargetFind", "init");
targetFinder.waitUntilInitFinished();
}
Log.d("TargetFind", "init over");
int resultCode = targetFinder.getInitState();
if (resultCode != TargetFinder.INIT_SUCCESS)
{
if(resultCode == TargetFinder.INIT_ERROR_NO_NETWORK_CONNECTION)
{
Log.d("TargetFinder", "no net");
}
else
{
Log.d("TargetFinder", "no service");
}
Log.e(LOGTAG, "Failed to initialize target finder.");
return;
}
Log.d("TargetFinder", "validated");
// Get the target finder:
TargetFinder finder = objectTracker.getTargetFinder();
Log.d("TargetFinder", "found");
// Check if there are new results available:
final int statusCode = finder.updateSearchResults();
Log.d("Status Code", String.valueOf(statusCode));
//// status is coming 1 => TargetFinder.UPDATE_NO_REQUEST ////
// Show a message if we encountered an error:
if (statusCode < 0)
{
Log.d("CloudFind", "not found");
return;
}
else if (statusCode == TargetFinder.UPDATE_RESULTS_AVAILABLE) {
//// Not coming here ////
Log.d("CloudFind", "found");
// Process new search results
if (finder.getResultCount() > 0) {
TargetSearchResult result = finder.getResult(0);
String targetMetaData = result.getMetaData();
Log.d("TargetMetaData", targetMetaData);
// Check if this target is suitable for tracking:
// Only if the target is matched and suitable for tracking
// then the Image target should be built
if (result.getTrackingRating() > 0) {
String name;
do
{
name = "UserTarget-" + targetBuilderCounter;
Log.d(LOGTAG, "TRYING " + name);
targetBuilderCounter++;
} while (!targetBuilder.build(name, 320.0f));
refFreeFrame.setCreating();
}
}
}
}
}
}
The async task:
private class SyncIncoData extends AsyncTask<String, String, String> {
@Override
protected String doInBackground(String... ot) {
startBuild();
return null;
}
@Override
protected void onPreExecute() {
super.onPreExecute();
}
@Override
protected void onPostExecute(String result) {
}
@Override
protected void onProgressUpdate(String... values) {
super.onProgressUpdate(values);
}
}
PS:
I am open to changing the approach to achieve this result.
I just need to match the image against a target, which I have in my Vuforia web database, and fetch an image from another server to overlay on it.

PXProcessing SetError not showing in UI Grid

This is my process screen:
As you can see, it throws errors but it doesn't show the error mark on the grid.
After clicking the process button, it just unchecks the checkbox on my records.
I want the grid to look like this (with the red 'x' mark):
This is my graph:
public PXCancel<PayrollFilter> Cancel;
public PXSetup<PayrollSetup> PayrollSetup;
public PXFilter<PayrollFilter> Filter;
[PXFilterable]
public PXFilteredProcessingJoin<PayrollEmployeeProcess, PayrollFilter,
InnerJoin<EPEmployee,
On<PayrollEmployee.employeeID, Equal<EPEmployee.bAccountID>>,
InnerJoin<Branch,
On<EPEmployee.parentBAccountID, Equal<Branch.bAccountID>>>>,
Where<PayrollEmployee.payPeriodID, Equal<Current<PayrollFilter.payPeriodID>>,
And<Branch.branchID, Equal<Current<AccessInfo.branchID>>>>> EmployeePayrollProcess;
#region Constructor
public PayrollProcess()
{
PayrollSetup setup = PayrollSetup.Current;
EmployeePayrollProcess.SetSelected<PayrollEmployeeProcess.selected>();
EmployeePayrollProcess.SetProcessDelegate(delegate (List<PayrollEmployeeProcess> employees)
{
if (Filter.Current == null) return;
var payPeriod = Filter.Current.PayPeriodID ?? 0;
var payrollPeriod = Filter.Current.PayrollPeriodID ?? 0;
if (payPeriod == 0 || payrollPeriod == 0) return;
PXLongOperation.StartOperation(this, delegate ()
{
bool errorOccured = false;
foreach (PayrollEmployeeProcess employee in employees)
{
PayrollRegisterEntry graph = PXGraph.CreateInstance<PayrollRegisterEntry>();
try
{
graph.ProcessPayroll(employee, payPeriod, payrollPeriod);
PXProcessing<PayrollEmployeeProcess>.SetInfo("Employee processed");
}
catch (Exception ex)
{
errorOccured = true;
//employees.IndexOf(employee),
PXProcessing<PayrollEmployeeProcess>.SetError(ex);
}
finally
{
graph.Clear();
}
}
if (errorOccured) throw new PXException("At least one employee was not processed.");
});
});
// EmployeePayrollProcess.
}
Can anyone help me? I'm using Acumatica 6.
Throwing an exception in Acumatica sets the error on the header. To set a row- or field-level error you need to set/raise it explicitly. There are a few ways to set/raise errors; what they have in common is that they don't use the 'throw' keyword.
For a processing screen with a filter, use the following syntax to raise the error:
PXFilteredProcessing<GridDetailDAC, GridFilterDAC>.SetError(rowIndex, new PXSetPropertyException("Error Message", PXErrorLevel.RowError));
For a processing screen without a filter:
PXProcessing<GridDetailDAC>.SetError(rowIndex, new PXException("Error Message"));
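Applied to the posted graph, a hedged sketch (adapted from the code above, not verified against Acumatica 6) would raise the row-level error inside the catch block using the row index, so the grid shows the red 'x' on the failing employee:
PXLongOperation.StartOperation(this, delegate ()
{
bool errorOccured = false;
for (int i = 0; i < employees.Count; i++)
{
PayrollRegisterEntry graph = PXGraph.CreateInstance<PayrollRegisterEntry>();
try
{
graph.ProcessPayroll(employees[i], payPeriod, payrollPeriod);
PXFilteredProcessing<PayrollEmployeeProcess, PayrollFilter>.SetInfo(i, "Employee processed");
}
catch (Exception ex)
{
errorOccured = true;
// Row-level error: marks this grid row instead of only failing the whole operation.
PXFilteredProcessing<PayrollEmployeeProcess, PayrollFilter>.SetError(i,
new PXSetPropertyException(ex.Message, PXErrorLevel.RowError));
}
finally
{
graph.Clear();
}
}
if (errorOccured) throw new PXException("At least one employee was not processed.");
});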

RFCommConnectionTrigger in Windows Universal Apps To detect Incoming Bluetooth Connection

I am working on a Windows Universal App. I want to get data from a Bluetooth device to the Windows Phone. I am using the RfcommConnectionTrigger concept for this purpose.
Here's the code snippet I am using:
var rfTrigger = new RfcommConnectionTrigger();
// Specify what the service ID is
rfTrigger.InboundConnection.LocalServiceId = RfcommServiceId.FromUuid(new Guid("<some_base_guid>"));
//Register RFComm trigger
var rfReg = RegisterTaskOnce(
"HWRFCommTrigger",
"BackgroundLibrary.RFBackgroundTask",
rfTrigger, null
);
SetCompletedOnce(rfReg, OnTaskCompleted);
Here is the RegisterTaskOnce function:
static private IBackgroundTaskRegistration RegisterTaskOnce(string taskName, string entryPoint, IBackgroundTrigger trigger, params IBackgroundCondition[] conditions)
{
// Validate
if (string.IsNullOrEmpty(taskName)) throw new ArgumentException("taskName");
if (string.IsNullOrEmpty(entryPoint)) throw new ArgumentException("entryPoint");
if (trigger == null) throw new ArgumentNullException("trigger");
// Look to see if the name is already registered
var existingReg = (from reg in BackgroundTaskRegistration.AllTasks
where reg.Value.Name == taskName
select reg.Value).FirstOrDefault();
// If already registered, log it and just return the existing registration
if (existingReg != null)
{
Debug.WriteLine("Background task " + taskName + " is already registered in the background");
return existingReg;
}
// Create the builder
var builder = new BackgroundTaskBuilder();
builder.TaskEntryPoint = entryPoint;
builder.Name = taskName;
builder.SetTrigger(trigger);
// Conditions?
if (conditions != null)
{
foreach (var condition in conditions)
{
builder.AddCondition(condition);
}
}
// Register
return builder.Register();
}
Here's the code for SetCompletedOnce, which adds the handler only once:
static private void SetCompletedOnce(IBackgroundTaskRegistration reg, BackgroundTaskCompletedEventHandler handler)
{
// Validate
if (reg == null) throw new ArgumentNullException("reg");
if (handler == null) throw new ArgumentNullException("handler");
// Unsubscribe in case already subscribed
reg.Completed -= handler;
// Subscribe
reg.Completed += handler;
}
I have also written BackgroundLibrary.RFBackgroundTask.cs:
public sealed class RFBackgroundTask : IBackgroundTask
{
public async void Run(IBackgroundTaskInstance taskInstance)
{
BackgroundTaskDeferral deferral = taskInstance.GetDeferral();
try
{
Debug.WriteLine(taskInstance.TriggerDetails.GetType());
taskInstance.Canceled += new BackgroundTaskCanceledEventHandler(OnCanceled);
Debug.WriteLine("RFComm Task Running");
Debug.WriteLine(taskInstance.TriggerDetails.GetType().ToString());
}
catch (System.Exception e)
{
Debug.WriteLine("RFComm Task Error: {0}", e.Message);
}
deferral.Complete();
}
}
The Run method is invoked every time the device tries to open the connection.
The type of the trigger details that I obtain (the type I am debugging in the Run method of RFBackgroundTask.cs) is printed as
Windows.Devices.Bluetooth.Background.RfcommConnectionTriggerDetails
But I am unable to use it because I don't have this class in the BackgroundLibrary project.
The documentation says it provides information about the Bluetooth device that caused this trigger to fire.
It has properties like Socket, RemoteDevice, etc.
I think I am missing something very simple.
Can you please help me out.
Once your background task is launched, simply cast the TriggerDetails object to an RfcommConnectionTriggerDetails object:
public sealed class RFBackgroundTask : IBackgroundTask
{
public async void Run(IBackgroundTaskInstance taskInstance)
{
BackgroundTaskDeferral deferral = taskInstance.GetDeferral();
try
{
taskInstance.Canceled += new BackgroundTaskCanceledEventHandler(OnCanceled);
RfcommConnectionTriggerDetails details = (RfcommConnectionTriggerDetails)taskInstance.TriggerDetails;
StreamSocket socket = details.Socket; // the connected RFCOMM socket
// Access other properties...
}
catch (System.Exception e)
{
Debug.WriteLine("RFComm Task Error: {0}", e.Message);
}
deferral.Complete();
}
}
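From there you can read from the incoming connection. Below is a minimal, untested sketch that would sit inside the try block of Run, before deferral.Complete(); the 1024-byte read size and the UTF-8 string payload are assumptions for illustration (it needs usings for Windows.Networking.Sockets, Windows.Devices.Bluetooth, Windows.Devices.Bluetooth.Background and Windows.Storage.Streams):
RfcommConnectionTriggerDetails details = (RfcommConnectionTriggerDetails)taskInstance.TriggerDetails;
StreamSocket socket = details.Socket;           // the already-connected RFCOMM socket
BluetoothDevice remote = details.RemoteDevice;  // info about the device that connected
using (var reader = new DataReader(socket.InputStream))
{
reader.InputStreamOptions = InputStreamOptions.Partial;
uint count = await reader.LoadAsync(1024);
Debug.WriteLine("Received: " + reader.ReadString(count));
}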

E_SCN_READINCOMPATIBLE Notification error thrown while scanning bar code on MC9090G

I'm using the EMDK 2.5 (VS2008, VC#, .NET CF 3.5) Barcode2 class to write a sample application that scans bar codes. I followed the samples available in the EMDK, namely the CS_Barcode2Sample1 project. Every time I hardware-trigger the scan, the notification E_SCN_READINCOMPATIBLE is thrown and I am not able to retrieve the scanned data. The documentation doesn't say much about the cause of E_SCN_READINCOMPATIBLE, and I had no luck with a Google search. I tried several options, including making use of Symbol.Barcode, and the outcome is the same.
I also tried EMDK 2.3, but the result is the same.
I've pasted the whole code here:
public partial class Form1 : Form
{
private Barcode2 myBarcode2 = null;
public Form1()
{
InitializeComponent();
InitBarcode();
}
public bool InitBarcode()
{
// If the Barcode2 object is already initialized then fail the initialization.
if (myBarcode2 != null)
{
return false;
}
else // Else initialize the reader.
{
try
{
Symbol.Barcode2.Device[] AvailableDevices = Symbol.Barcode2.Devices.SupportedDevices;
if (AvailableDevices.Length == 0)
{
return false;
}
if (AvailableDevices.Length == 1)
{
//get the first available scanner in the list
Symbol.Barcode2.Device MyDevice = AvailableDevices[0];
// Create the reader, based on selected device.
myBarcode2 = new Barcode2(MyDevice);
// Attach a scan notification handler.
//this.myScanNotifyHandler = new Barcode2.OnScanHandler(myBarcode2_ScanNotify);
myBarcode2.OnScan += myBarcode2_ScanNotify;
// Attach a status notification handler.
//this.myStatusNotifyHandler = new Barcode2.OnStatusHandler(myBarcode2_StatusNotify);
myBarcode2.OnStatus += myBarcode2_StatusNotify;
myBarcode2.Config.TriggerMode = TRIGGERMODES.HARD;
// Submit a scan.
myBarcode2.Scan(5000);
}
}
catch (OperationFailureException ex)
{
MessageBox.Show("Exception Raised 1");
return false;
}
catch (InvalidRequestException ex)
{
MessageBox.Show("Exception Raised 2");
return false;
}
catch (InvalidIndexerException ex)
{
MessageBox.Show("Exception Raised 3");
return false;
}
}
return false;
}
private void myBarcode2_ScanNotify(ScanDataCollection scanDataCollection)
{
// Checks if the BeginInvoke method is required because the OnScan delegate is called by a different thread
if (this.InvokeRequired)
{
// Executes the OnScan delegate asynchronously on the main thread
this.BeginInvoke(new Barcode2.OnScanHandler(myBarcode2_ScanNotify), new object[] { scanDataCollection });
}
else
{
// Get ScanData
ScanData scanData = scanDataCollection.GetFirst;
int i;
switch (scanData.Result)
{
case Symbol.Barcode2.Results.SUCCESS:
String str = scanData.Text;
myBarcode2.Config.TriggerMode = TRIGGERMODES.HARD;
myBarcode2.Scan(5000);
break;
case Symbol.Barcode2.Results.E_SCN_READTIMEOUT:
break;
case Symbol.Barcode2.Results.CANCELED:
break;
case Symbol.Barcode2.Results.E_SCN_DEVICEFAILURE:
i = 93;
break;
default:
if (scanData.Result == Symbol.Barcode2.Results.E_SCN_READINCOMPATIBLE)
{
// If the failure is E_SCN_READINCOMPATIBLE, exit the application.
MessageBox.Show("Fatal Error");
this.Close();
return;
}
break;
}
}
}
private void myBarcode2_StatusNotify(StatusData statusData)
{
// Checks if the Invoke method is required because the OnStatus delegate is called by a different thread
if (this.InvokeRequired)
{
// Executes the OnStatus delegate on the main thread
this.Invoke(new Barcode2.OnStatusHandler(myBarcode2_StatusNotify), new object[] { statusData });
}
else
{
int i;
switch (statusData.State)
{
case States.IDLE:
break;
case States.READY:
break;
default:
break;
}
}
}
}
I went through this recently too. As I observed, it is probably because the scanner device is occupied by another application whose scan request has already been queued. Go to memory management, kill the suspect app, and then try your app again.
Refer to the Symbol FAQ.
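On the same theme, it can help to make sure your own app releases the scanner when it closes, so it isn't the one left holding the device. A hedged sketch (the unsubscribe/Dispose pattern follows the EMDK Barcode2 samples; treat the exact calls as an assumption):
protected override void OnClosing(System.ComponentModel.CancelEventArgs e)
{
if (myBarcode2 != null)
{
// Detach handlers and release the scan driver so other apps can claim the scanner.
myBarcode2.OnScan -= myBarcode2_ScanNotify;
myBarcode2.OnStatus -= myBarcode2_StatusNotify;
myBarcode2.Dispose();
myBarcode2 = null;
}
base.OnClosing(e);
}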
