Ambient Light Sensor Interrupt Status register not getting updated - windows-ce

I'm using WinCE 7 with Visual Studio 2008 and writing driver code for an ALS (MAX44009). I have written the following code to read the interrupt status register and display a message when an interrupt occurs, but it only works sporadically. For example, when I cover the sensor with my hand, I get the message only a few times; after that it never enters the data == 1 branch even when an interrupt should fire, and just keeps looping. The threshold timer is 0. The AlsRegRead function does an I2C read, pAlsDrvInfo is the driver context, ADD_ALS_INT_STATUS is 0, and the DumpAlsRegistry function prints the contents of all the registers except register 0x0.
while (1)
{
    AlsRegRead(pAlsDrvInfo, ADD_ALS_INT_STATUS, &data, sizeof(UINT8));
    if (data == 1)
    {
        DumpAlsRegistry(pAlsDrvInfo);
        RETAILMSG(1, (L"Interrupt Received...\r\n"));
    }
}
Please guide me on where I'm making a mistake.

I have found the reason behind this. Two issues were behind it, and both are equally important.
1) The sensor was in a partially damaged state.
2) The loop needs some delay, so I added Sleep(1000) at the start of it.
while (1)
{
    Sleep(1000);
    AlsRegRead(pAlsDrvInfo, ADD_ALS_INT_STATUS, &data, sizeof(UINT8));
    if (data == 1)
    {
        DumpAlsRegistry(pAlsDrvInfo);
        RETAILMSG(1, (L"Interrupt Received...\r\n"));
    }
}
Thanks.

Related

How to "simulate" processing time consuming tasks for FreeRTOS aimed to discuss real-time system topics using Linux simulator

I'm trying to use FreeRTOS to discuss real-time concepts with students, using the POSIX/Linux simulator framework. To accomplish this, I have to find a way of wasting processing time in a controlled way (simulating a task in the "Running" state for a predetermined period of processing time).
Delays are no good because they change the task state to "Blocked", and with a preemptive scheduler that means the scheduler can give the processor to other tasks. Using Linux-native time-control approaches (e.g. building the logic on clock_gettime()) is no good because I don't control the exact running time of a single task, especially with preemption. Plain loops (for, while) don't give me the control I need over processing time either (my computer and my students' computers will take different amounts of time depending on their architectures).
While researching the FreeRTOS documentation, I found the TaskStatus_t struct and the vTaskGetInfo() function, which seemed like they would help. My problem appears when I implement something like:
// Creating the task
if (xTaskCreate(
        app_task,
        task1_info.name,
        configMINIMAL_STACK_SIZE,
        (void *) &task1_info,
        task1_info.priority,
        NULL
    ) != pdPASS) printf("Task create error %s\n", task1_info.name);
// ...
starting_time_ticks = xTaskGetTickCount(); // starting time in ticks
vTaskStartScheduler();
// ...
// The task itself
static void app_task( void *pvParameters )
{
    // ...
    for( ;; )
    {
        // ...
        app_proc_ticks(
            pdMS_TO_TICKS( task_info.proc_time_ms ),
            task_info.name
        ); // consuming processing time...
        // ...
    }
    // ...
}
// Consuming the given number of ticks in order to attain a certain processing time
static void app_proc_ticks( TickType_t proc_time_ticks, uint8_t name[APP_MAX_MSG_SIZE] )
{
    TaskHandle_t xHandle;
    TaskStatus_t xTaskDetails;

    xHandle = xTaskGetHandle( name );
    configASSERT( xHandle );

    vTaskGetInfo( xHandle, &xTaskDetails, pdTRUE, eInvalid );
    TickType_t begin = xTaskDetails.ulRunTimeCounter;
    while( ( xTaskDetails.ulRunTimeCounter - begin ) < proc_time_ticks )
    {
        vTaskGetInfo( xHandle, &xTaskDetails, pdTRUE, eInvalid );
    }
}
For a task_info.proc_time_ms equal to 25 ms, my code reports the task as consuming around 250 ms worth of ticks, an error factor of 10x. The way I measure this is with the following "timestamp" strategy:
static TickType_t get_timestamp_ticks() {
    return xTaskGetTickCount() - starting_time_ticks;
}
As far as I can tell, my problem is understanding and properly converting the time unit of xTaskDetails.ulRunTimeCounter (ticks, ms, or probably something else), and there is probably some tick-to-ms constant I'm not aware of. Right now, to convert from ms to ticks I'm using the pdMS_TO_TICKS() macro, and to convert from ticks to ms I'm multiplying the number of ticks by portTICK_RATE_MS.
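For reference, a minimal sketch of the two conversions I'm describing (standard FreeRTOS macros; whether ulRunTimeCounter actually uses the tick time base is exactly what I'm unsure about):

/* ms -> ticks, as used when calling app_proc_ticks() */
TickType_t proc_time_ticks = pdMS_TO_TICKS( task_info.proc_time_ms );

/* ticks -> ms, as used when printing the timestamps */
uint32_t elapsed_ms = get_timestamp_ticks() * portTICK_RATE_MS;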
Also, after make, I'm running taskset -c 0 ./build/posix_demo to ensure my executable uses a single processor.
I'm not trying to hold on to this solution, though. If anyone could share how to do a time-controlled delay with "real consumption of processing time" for FreeRTOS tasks, I would appreciate it as well.

nRF52 DK BLE can't sleep

I'm using a Nordic nRF52 DK to build a BLE application that broadcasts data through a custom characteristic with Read and Notify properties.
I’m currently working with PlatformIO and Visual Studio Code for this project.
To measure the power consumption I am using the Power Profiler kit.
The power consumption is always above 2.3 mA, which is extremely high according to the Online Power Profiler for BLE.
Online Power Profiler for BLE settings:
{
    "chip": "1",
    "voltage": "3",
    "dcdc": "on",
    "lf_clock": "lfrc",
    "radio_tx": "-40",
    "ble_type": "adv",
    "ble_int": "1000",
    "tx_size": "20"
}
My goal is to put the board to sleep until a new Bluetooth connection is established, then execute the event queue for the sensor value update and other processing. After the disconnection event, the board must be put to sleep again.
First, I tried to implement sleep in an mbed sample project with BLE features, BLE_BatteryLevel.
Note: I removed the blink event from the sample code.
I have added _event_queue.break_dispatch() inside the onDisconnectionComplete callback function, in order to force BLE to exit from its functions.
I do not know if it is the right choice, but I wanted to somehow exit the BLE event queue and let the board sleep.
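A rough sketch of that change (the handler signature follows mbed's ble::Gap::EventHandler interface used by the BLE_BatteryLevel sample; _event_queue is the sample's events::EventQueue member):

// Called by the BLE stack when the peer disconnects; stop dispatching
// the event queue so main() can fall through and (hopefully) sleep.
void onDisconnectionComplete(const ble::DisconnectionCompleteEvent &event) override
{
    _event_queue.break_dispatch();
}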
I have tried the following:
Using a DeepSleepLock object in a block of code in order to execute sleep on its destruction
Using ThisThread::sleep_for(5s)
int main()
{
    while (true)
    {
        ThisThread::sleep_for(5s);
        {
            DeepSleepLock dp;
            BLE &ble = BLE::Instance();
            ble.onEventsToProcess(schedule_ble_events);
            BatteryDemo demo(ble, event_queue);
            demo.start();
            ThisThread::sleep_for(5s);
        }
    }
}
Using void sleep()
int main()
{
    BLE &ble = BLE::Instance();
    ble.onEventsToProcess(schedule_ble_events);
    BatteryDemo demo(ble, event_queue);
    demo.start();
    ble.shutdown();
    sleep();
}
Power Profiler screenshot from BLE onDisconnection
Using hal_sleep()
int main()
{
    BLE &ble = BLE::Instance();
    ble.onEventsToProcess(schedule_ble_events);
    BatteryDemo demo(ble, event_queue);
    demo.start();
    ble.shutdown();
    hal_sleep();
}
Disabling input and output at the start of main
mbed_file_handle(STDIN_FILENO)->enable_input(false);
mbed_file_handle(STDIN_FILENO)->enable_output(false);
Adding rtos::Kernel::attach_idle_hook(&sleep); at the start of main()
int main()
{
    rtos::Kernel::attach_idle_hook(&sleep);
    BLE &ble = BLE::Instance();
    ble.onEventsToProcess(schedule_ble_events);
    BatteryDemo demo(ble, event_queue);
    demo.start();
}
Nothing seems to put the board to sleep; the power consumption is always high.
Power Profiler Screenshots
BLE Enabled State
BLE Disconnected state (sleep)
I couldn't find any example for power consumption and sleep using BLE.

MFC Edit Control EN_KILLFOCUS issue

I am using Visual Studio 2013 and building an MFC dialog-based application. I am running into a strange issue with the kill-focus handling of an edit control.
Please see below:
In my application, I have two Edit Controls on Dialog Box.
1st Edit Control -> IDC_EDIT_QUALITY1
2nd Edit Control -> IDC_EDIT_QUALITY2
I have handled the EN_KILLFOCUS event of both to validate the values.
BEGIN_MESSAGE_MAP(CTestDlg, CDialog)
    ON_EN_KILLFOCUS(IDC_EDIT_QUALITY1, &CTestDlg::OnQuality1EditKillFocus)
    ON_EN_KILLFOCUS(IDC_EDIT_QUALITY2, &CTestDlg::OnQuality2EditKillFocus)
END_MESSAGE_MAP()
void CTestDlg::OnQuality1EditKillFocus()
{
    ValidateQualityParams(IDC_EDIT_QUALITY1);
}

void CTestDlg::OnQuality2EditKillFocus()
{
    ValidateQualityParams(IDC_EDIT_QUALITY2);
}

#define MIN_QUALITY_VALUE 1
#define MAX_QUALITY_VALUE 100

void CTestDlg::ValidateQualityParams(int qualityParamID)
{
    CString strQuality1;
    if (IDC_EDIT_QUALITY1 == qualityParamID)
    {
        m_ctrlQuality1.GetWindowText(strQuality1);
        if ((_ttoi(strQuality1) < MIN_QUALITY_VALUE) || (_ttoi(strQuality1) > MAX_QUALITY_VALUE))
        {
            CString strMessage;
            strMessage.Format(_T("Quality1 value must be between %d to %d."), MIN_QUALITY_VALUE, MAX_QUALITY_VALUE);
            AfxMessageBox(strMessage);
            m_ctrlQuality1.SetSel(0, -1);
            m_ctrlQuality1.SetFocus();
            return;
        }
    }
    CString strQuality2;
    if (IDC_EDIT_QUALITY2 == qualityParamID)
    {
        m_ctrlQuality2.GetWindowText(strQuality2);
        if ((_ttoi(strQuality2) < MIN_QUALITY_VALUE) || (_ttoi(strQuality2) > MAX_QUALITY_VALUE))
        {
            CString strMessage;
            strMessage.Format(_T("Quality2 value must be between %d to %d."), MIN_QUALITY_VALUE, MAX_QUALITY_VALUE);
            AfxMessageBox(strMessage);
            m_ctrlQuality2.SetSel(0, -1);
            m_ctrlQuality2.SetFocus();
            return;
        }
    }
}
Now, the issue happens when, after changing the value in the 1st edit control (IDC_EDIT_QUALITY1), say entering 0 in it and pressing the TAB key, the flow goes as below:
void CTestDlg::OnQuality1EditKillFocus() is called.
It calls ValidateQualityParams(IDC_EDIT_QUALITY1)
Inside ValidateQualityParams, it goes into the if (IDC_EDIT_QUALITY1 == qualityParamID) branch.
As the value I entered is less than MIN_QUALITY_VALUE, it shows the message by calling AfxMessageBox.
Now, even from the call stack of AfxMessageBox, it hits void CTestDlg::OnQuality2EditKillFocus() internally.
So although the call stack of OnQuality1EditKillFocus has NOT finished yet, OnQuality2EditKillFocus gets called from the call stack of AfxMessageBox.
I don't understand the cause of this issue. Has anyone encountered such an issue before?
In my resource.h, I have two distinct values for IDC_EDIT_QUALITY1 and IDC_EDIT_QUALITY2
#define IDC_EDIT_QUALITY1 1018
#define IDC_EDIT_QUALITY2 1020
Please help on this issue.
I believe the EN_KILLFOCUS notification for the IDC_EDIT_QUALITY2 control you are receiving is caused not by the m_ctrlQuality1.SetFocus() call, but instead by the AfxMessageBox() call.
When you press the [Tab] key, IDC_EDIT_QUALITY1 loses the focus and IDC_EDIT_QUALITY2 gets the focus. Then you receive the EN_KILLFOCUS notification for IDC_EDIT_QUALITY1. You display the error message, which causes the application to "yield" (start processing messages again) while the message box is displayed. The m_ctrlQuality1.SetFocus() call won't take place before AfxMessageBox() returns, i.e. before you close the message box, and therefore the EN_KILLFOCUS notification for IDC_EDIT_QUALITY2 can't be the result of that call. I guess it's the result of displaying the message box (IDC_EDIT_QUALITY2 has got the focus, but the message box makes it lose it).
You may work around it by adding a member variable, as Staytuned123 suggested, but in a different setting: name it, say, m_bKillFocusProcessing, set it to TRUE while you are processing ANY EN_KILLFOCUS notification (AfxMessageBox() plus SetFocus()), and set it back to FALSE when you are done processing it; if it's already TRUE, exit without doing anything. That is, only one EN_KILLFOCUS notification may be processed at a time.
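A minimal sketch of that guard, reusing the handler names from the question (m_bKillFocusProcessing is a hypothetical BOOL member of CTestDlg, initialized to FALSE in the constructor):

void CTestDlg::OnQuality1EditKillFocus()
{
    if (m_bKillFocusProcessing)
        return;                                   // another EN_KILLFOCUS is already being handled
    m_bKillFocusProcessing = TRUE;
    ValidateQualityParams(IDC_EDIT_QUALITY1);     // may pop AfxMessageBox and call SetFocus
    m_bKillFocusProcessing = FALSE;
}
// OnQuality2EditKillFocus() gets the same wrapper around ValidateQualityParams(IDC_EDIT_QUALITY2).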
However, such a user interface (displaying a message box on exiting a field) is rather weird. And why reinvent the wheel instead of using the DDX/DDV feature MFC already offers? You can define member variables associated with controls and perform various checks, including range checks. Call UpdateData(TRUE) to perform the checks (for all controls on the dialog) and transfer the data to the member variables. Or you can put some error-displaying controls (usually in red), activated when an error is found, as in .NET or the web.
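For example, a sketch of the DDX/DDV route (the m_nQuality1/m_nQuality2 value members are illustrative, not from the question):

void CTestDlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    DDX_Text(pDX, IDC_EDIT_QUALITY1, m_nQuality1);
    DDV_MinMaxInt(pDX, m_nQuality1, MIN_QUALITY_VALUE, MAX_QUALITY_VALUE);
    DDX_Text(pDX, IDC_EDIT_QUALITY2, m_nQuality2);
    DDV_MinMaxInt(pDX, m_nQuality2, MIN_QUALITY_VALUE, MAX_QUALITY_VALUE);
}

// Then call UpdateData(TRUE) wherever validation should run (e.g. in OnOK);
// it returns FALSE and reports the error if any check fails.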
When you pressed the TAB key, IDC_EDIT_QUALITY2 got the focus. But because the value entered was out of bounds, the program called m_ctrlQuality1.SetFocus(), which in turn caused OnQuality2EditKillFocus() to get called.
Add a member variable, say m_bQuality1OutOfBound, and set it to true right before calling m_ctrlQuality1.SetFocus(). In OnQuality2EditKillFocus(), when m_bQuality1OutOfBound is true, set it to false and don't call ValidateQualityParams(IDC_EDIT_QUALITY2).

How to solve simultaneity bias in Groovy

What is the best way of solving simultaneity bias in Groovy?
I have an issue with a queue. I select all queue entries in state wait and update their state to running. Most of the time it goes okay, but sometimes entries that are already running erroneously get loaded again by another thread.
I remember being shown a way to solve this in Java back when I was studying, but I can't recall how.
The code below should illustrate the problem.
Thread one:
List<Queue> elements = Queue.findAll { state == wait }
sleep(1000) // Illustration of a delay
elements.each {
    it.state = running
    it.save()
}
Thread two (starts at the same time as thread one):
List<Queue> elements = Queue.findAll { state == wait } // The error occurs here, as thread one has already begun executing the same elements.
// The problem should be solved somehow here.
elements.each {
    it.state = running
    it.save()
}
Thread two selects the same elements as thread one, but thread two should not go any further with them, as thread one now owns them.

KeSetSystemAffinityThread behavior

Some questions about the KeSetSystemAffinityThread function, since MSDN is quite laconic.
NOTE: I can't use the more complete KeSetSystemAffinityThreadEx because I must still support Windows XP.
1) How can I restore the previous affinity? The function does not return the old value, so how can I obtain it?
2) Is it true that passing 0 to the function restores the default system affinity? I have found such an assertion in some forums, but I can't find it in the official MS documentation.
3) Is the thread's new system affinity mask maintained after a return to user mode, or is it restored to the default each time the thread enters system mode?
4) What happens if the previous system affinity mask is not restored?
(I'd rather have posted four separate questions, but they seem too interdependent to me.)
Use the undocumented KeRevertToUserAffinityThread(void) on WinXP. A quick search yields little information about this API, but I found an implementation of the same function in ReactOS:
ReactOS KeRevertToUserAffinityThread
It is rather simple so I copy & paste it here:
VOID NTAPI KeRevertToUserAffinityThread ( VOID )
{
    KIRQL OldIrql;
    PKPRCB Prcb;
    PKTHREAD NextThread, CurrentThread = KeGetCurrentThread();

    ASSERT_IRQL_LESS_OR_EQUAL(DISPATCH_LEVEL);
    ASSERT(CurrentThread->SystemAffinityActive != FALSE);

    /* Lock the Dispatcher Database */
    OldIrql = KiAcquireDispatcherLock();

    /* Set the user affinity and processor and disable system affinity */
    CurrentThread->Affinity = CurrentThread->UserAffinity;
    CurrentThread->IdealProcessor = CurrentThread->UserIdealProcessor;
    CurrentThread->SystemAffinityActive = FALSE;

    /* Get the current PRCB and check if it doesn't match this affinity */
    Prcb = KeGetCurrentPrcb();
    if (!(Prcb->SetMember & CurrentThread->Affinity))
    {
        /* Lock the PRCB */
        KiAcquirePrcbLock(Prcb);

        /* Check if there's no next thread scheduled */
        if (!Prcb->NextThread)
        {
            /* Select a new thread and set it on standby */
            NextThread = KiSelectNextThread(Prcb);
            NextThread->State = Standby;
            Prcb->NextThread = NextThread;
        }

        /* Release the PRCB lock */
        KiReleasePrcbLock(Prcb);
    }

    /* Unlock dispatcher database */
    KiReleaseDispatcherLock(OldIrql);
}
Note that the function takes no arguments and simply restores the affinity from fields in the current KTHREAD struct. I guess this answers your questions 1 & 2: just call this function with no arguments. I have done a test on 32-bit WinXP and confirmed this. Question 4 is simple: your thread will continue to run using the processor affinity you've set.
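For illustration, a minimal usage sketch of the set/revert pairing described above (KeRevertToUserAffinityThread is the undocumented routine shown earlier, so you may need to declare its prototype yourself):

/* Temporarily pin the current thread to processor 0 */
KeSetSystemAffinityThread((KAFFINITY)1);

/* ... work that must run on processor 0 ... */

/* Restore the thread's original user affinity (no argument needed) */
KeRevertToUserAffinityThread();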
I have no idea about your question 3, but most likely a switch between user and kernel mode has no effect on the processor affinity currently in effect, since it is stored in the KTHREAD struct.
