MarkLogic Filesystem Log entry [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
I am seeing some slow MarkLogic cluster log entries like the ones below:
2020-01-14 05:55:22.649 Info: Slow background cluster.background.clusters, 5.727 sec
2020-01-14 05:55:22.649 Info: Slow background cluster.background.hosts.AssignmentManager, 5.581 sec
I suspect the filesystem is running slow and is not able to keep up with MarkLogic. I am also seeing the following log entry:
2020-01-14 05:55:53.380 Info: Linux file system mount option 'barrier' is default; recommend faster 'nobarrier' if storage has non-volatile write cache
What is the meaning of the above log entry in MarkLogic? And how can I tell whether the filesystem actually has a slowness problem?

The meaning of the "slow" messages is that a background activity took longer than expected; it is an indicator of starvation.
From your question alone it is impossible to say what is causing it. Typically it is related to the underlying physical infrastructure MarkLogic is running on. MarkLogic does not bring its own filesystem or other resources: it uses the OS's filesystem, memory, and so on, and if the available physical resources are not enough for MarkLogic to serve the requested load, background operations will take longer than expected. This will always be reflected in the log.
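For the 'barrier' notice specifically, you can check which options each filesystem is mounted with by reading the kernel's mount table. A sketch (no MarkLogic-specific paths assumed):

```shell
# List mount point, filesystem type, and options for every mount.
awk '{printf "%-25s %-8s %s\n", $2, $3, $4}' /proc/mounts

# Show any entry that sets barrier/nobarrier explicitly (ext3/ext4).
grep barrier /proc/mounts || echo "no explicit barrier option in any mount entry"
```

If 'barrier' is the default it may not be listed at all, which is exactly the situation the log line describes; as the message says, 'nobarrier' should only be considered when the storage has a non-volatile write cache.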
You can read more here:
Understanding "slow background" messages
https://help.marklogic.com/Knowledgebase/Article/View/508/0/understanding-slow-infrastructure-notifications
29 August 2019 10:54 AM
Introduction
In more recent versions of MarkLogic Server, "slow background" error log messages were added to note and help diagnose slowness.
Details
For "Slow background" messages, the system is timing how long it took to do some named background activity. These activities should not take long and the "slow background" message is an indicator of starvation. The activity can be slow because:
it is waiting on a mutex or semaphore held by some other slow thread;
the operating system is stalling it, possibly because it is thrashing because of low memory.
Looking at the "slow background" messages in isolation is not sufficient to understand the reason - we just know a lot of time passed since the last time we read the time of day clock. To understand the actual cause, additional evidence will need to be gathered from the time of the incident.
Notes:
In general, we do not time how long it takes to acquire a mutex or semaphore as reading the clock is usually more expensive than getting a mutex or semaphore.
We do not time things that usually take about a microsecond.
We do time things that usually take about a millisecond.
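The pattern described above, reading the clock around an activity and logging only when a threshold is crossed, can be sketched like this (the threshold and the sleep are invented stand-ins, not MarkLogic's actual values):

```shell
# Time a "background activity"; warn only if it crossed the threshold.
start=$(date +%s%N)              # nanoseconds since the epoch (GNU date)
sleep 0.02                       # stand-in for the background activity
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
threshold_ms=5                   # invented threshold
if [ "$elapsed_ms" -gt "$threshold_ms" ]; then
  echo "Slow background activity, ${elapsed_ms} ms"
fi
```

Note that, as the article says, only the elapsed wall-clock time is known; the sketch cannot tell you whether the time went to a mutex, the scheduler, or swapping.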
Related Articles
Knowledgebase: Understanding Slow Infrastructure Notifications
Knowledgebase: [Understanding slow 'journal frame' entries in the ErrorLog](https://help.marklogic.com/Knowledgebase/Article/View/460/0/understanding-slow-journal-frame-entries-in-the-errorlog)
Knowledgebase: [Hung Messages in the ErrorLog](https://help.marklogic.com/Knowledgebase/Article/View/35/0/hung-messages-in-the-errorlog)

Related

What does it mean by "user threads cannot take advantage of multithreading or multiprocessing"? [closed]

Closed 8 years ago.
user threads cannot take advantage of multithreading or multiprocessing
source : wikipedia
Does this mean a CPU cannot efficiently execute multiple user threads simultaneously ?
Does this mean a CPU cannot switch between two or more user threads ?
For example : there are two user threads t0 and t1. t0 is the first one to execute. Will t1 only begin execution when t0 has finished or can switching take place ?
PS : This question might look like more than one question but I guess it is just one.
Here's what the page currently says:
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). In this article the term "thread" (without kernel or user qualifier) defaults to referring to kernel threads. User threads as implemented by virtual machines are also called green threads. User threads are generally fast to create and manage, but cannot take advantage of multithreading or multiprocessing and get blocked if all of their associated kernel threads get blocked even if there are some user threads that are ready to run.
As you can see, in one paragraph it states BOTH that user threads can take advantage of multiprocessors (via associated kernel threads, the M:N model) AND that they cannot.
I suggest that you ask your question on the Wikipedia page's Talk page and see if the authors can enlighten you as to what they mean ... and why they are saying it.
But what I think they are saying is that user(-space) threads that aren't backed by multiple kernel threads typically cannot execute simultaneously on multiple cores.
However, I would hesitate to say that this is inherent to user threads per se; i.e. that it would be impossible to implement an OS in which an application could exploit multiple cores without any kernel assistance.
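One way to see the kernel's side of this on Linux: every kernel-scheduled thread of a process appears as a directory under /proc/&lt;pid&gt;/task, while green/user threads multiplexed onto one kernel thread would not. A sketch, inspecting the current shell (which has a single thread):

```shell
pid=$$                                    # this shell, as a stand-in process
ls "/proc/$pid/task"                      # one entry per kernel thread
echo "kernel threads: $(ls "/proc/$pid/task" | wc -l)"
```

A JVM running green threads would show one task entry no matter how many application threads exist; a 1:1 threaded program shows one entry per thread.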

What is the difference between CPU threads and cores? [closed]

Closed 9 years ago.
My Intel CPU has 6 cores and 12 threads. I know that each core can do computation in parallel with the other 5 cores, so if I run a program on all 6 cores, I get a 6-times speed-up. But I cannot understand how that relates to threads. If I run my program on all 12 threads of my 6 cores, will I get a 12-times speed-up?
A thread is a "logical core", it has a full set of registers, uses its own virtual address space, and can perform anything a core can do, so in that sense - you have 12 cores.
However, a thread shares most of its execution resources with its counterpart thread on the same core. Since modern cores can handle multiple instructions at the same time, having two (or more) threads allows you to essentially "throw" instructions from the 2 software threads into a large "pool" and have them executed whenever they're ready. If a single thread already takes up 100% of your core's utilization, you won't gain much from that, but if one of the threads leaves some empty slots (because of branch mispredictions, data dependencies, long memory delays, or any other cause of inefficiency), the other thread sharing the core can use those slots instead, giving you a nice boost (the alternative being to wait until the first thread finishes its time slot and to do an expensive context switch).
In general, you can think of it this way: running 2 software threads on 2 cores gives you the best performance; running them on a single core with simultaneous multithreading is slightly slower, especially if you are execution-bound (less so if you are bound on, e.g., memory latency). If you don't have this feature at all, running the same 2 workloads on a single core requires running them one after the other (in time slots), which would probably be much slower.
Edit: note that there are different ways of implementing this concept; see e.g. Difference between Intel and AMD multithreading.
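On Linux you can read the core/thread split directly. A sketch (the "cpu cores" and "siblings" fields appear on x86 and may be absent elsewhere, hence the fallbacks):

```shell
# Physical cores per package vs hardware threads per package.
grep -m1 'cpu cores' /proc/cpuinfo || echo "cpu cores: field not present"
grep -m1 'siblings'  /proc/cpuinfo || echo "siblings: field not present"
# Logical CPUs the scheduler can use:
nproc
```

On the 6-core/12-thread CPU from the question, "cpu cores" would read 6, "siblings" 12, and nproc would print 12.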
A thread is a "simultaneous" computation on the same core, so one core can manage two threads and effectively act as two cores. This is a very basic answer, I'm afraid.

How can I do multithreading in embedded programs? [closed]

Closed 9 years ago.
Hi, I am an embedded programmer. Recently we came across a project where we are forced to use multithreading. I have used it in Java, but I could not implement it in my embedded code for the 8051. Could anybody please help me?
Threading requires that there be some mechanism to switch threads, typically called a scheduler.
Broadly speaking, there are two types of threading: cooperative, and pre-emptive.
In cooperative threading, each thread does some work and then transfers control back to the scheduler. This is almost like having a grand while(1) {} loop as a program structure, only with more independence between the tasks (though that independence exists only during development). It still suffers from the risk of one task hogging the CPU, or even locking up and preventing anything else from running. In effect, the independence between tasks is only an illusion, an organizational abstraction for the developer.
In pre-emptive multi-tasking, the scheduler (likely driven from a timer interrupt) periodically forces a change of tasks by grabbing execution out of one thread, saving its state, and restarting a different frozen thread. This is a little trickier to set up, but a lot more reliable.
Often with either scheme, you would not write the infrastructure from scratch, but instead would use a primitive operating system or at least scheduler routine developed by others.
For a very small embedded system though, you can also consider that interrupt service routines can themselves provide something akin to alternate threads for handling certain brief and/or urgent tasks. If your serial interrupt fires, you grab some character(s) and store them for later interpretation at a convenient time by something else. Many tasks can be implemented by using interrupts to deal with the immediate part, and then doing resulting work at a later point in a while(1) {} type program structure.
Some might properly laugh at the idea of a scheduler running on an 8051 - though for an oddity of reasons, inexpensive little 8051-equivalent cores end up in some fairly complicated special purpose chips today (typically accessorized by huge amounts of banked memory, and powerful peripheral engines to do the real work), so it's actually not uncommon to see multithreading solutions with dynamic task creation implemented on them in order to manage everything which the device does.
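The cooperative grand-loop structure described above can be sketched even in shell (the task names are invented; on an 8051 each task would be a C function called from the main while(1) loop):

```shell
# Each "task" does one small unit of work and returns to the loop;
# a task that never returned would starve all the others.
task_blink()  { echo "blink: toggle status LED"; }
task_uart()   { echo "uart: poll receive buffer"; }
task_sensor() { echo "sensor: read ADC"; }

for cycle in 1 2; do            # stands in for while(1)
  task_blink
  task_uart
  task_sensor
done
```

The key design constraint is that every task must return quickly; any long wait has to be broken into small polled steps or moved into an interrupt handler.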
The architecture of the 8051 is not amenable to any reasonable preemptive scheduling. At least the stack, and probably more, of the on-chip RDATA/IDATA has to be swapped out to XDATA, and it gets very messy.
The 8051 is good for toaster/washing-machine controllers.
If you want or need functionality such as a preemptive scheduler, move to ARM.

What is the difference between CFQ, Deadline, and NOOP? [closed]

Closed 11 years ago.
I'm recompiling my kernel, and I want to choose an I/O scheduler. What's the difference between these?
If you compile them all, you can select at boot time or per-device which scheduler to use. No need to pick at compile time, unless you are targeting an embedded device where every byte counts. See Documentation/block/switching-sched.txt for details on switching per-device or system-wide at boot.
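Checking and switching per-device at runtime looks like this (sda is an example device name; writing requires root):

```shell
# The active scheduler is shown in brackets, e.g. "noop [deadline] cfq".
cat /sys/block/sda/queue/scheduler 2>/dev/null \
  || echo "no device named sda on this machine"
# To switch (as root):
#   echo deadline > /sys/block/sda/queue/scheduler
```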
The CFQ scheduler allows you to set priorities via the ionice(1) tool or the ioprio_set(2) system call. This allows giving precedence to some processes or forcing others to do their IO only when the system's block devices are relatively idle. The queues are implemented by segregating the IO requests from processes into queues, and handling the requests from each queue similar to CPU scheduling. Details on configuring it can be found in Documentation/block/cfq-iosched.txt.
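A quick sketch of ionice usage (it ships with util-linux; class 2 is best-effort with priorities 0-7, class 3 is idle):

```shell
# Show this shell's current IO scheduling class and priority,
# then run a trivial command at idle IO priority.
ionice -p $$ 2>/dev/null || echo "ionice not available"
ionice -c3 true 2>/dev/null \
  && echo "ran at idle io priority" \
  || echo "idle class not available"
```

Note that these priorities only have an effect under CFQ; the deadline and NOOP schedulers ignore them.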
The deadline scheduler, by contrast, looks at all writes from all processes at once; it sorts the writes by sector number and writes them all in linear fashion. The deadlines mean that it tries to write each block before its deadline expires, but within those deadlines it is free to rearrange blocks as it sees fit. Details on configuring it can be found in Documentation/block/deadline-iosched.txt.
Probably very little in practice.
In my testing, I found that in general NOOP is a bit better if you have a clever RAID controller. Others have reported similar results, but your workload may be different.
However, you can select them at runtime (without reboot) so don't worry about it at compile-time.
My understanding was that the "clever" schedulers (CFQ and deadline) are only really helpful on traditional "spinning disc" devices which don't have a RAID controller.

How to check if a process is in hang state (Linux) [closed]

Closed 6 years ago.
Is there any command in Linux through which I can tell whether a process is in a hang state?
There is no such command, but once I had to do a very dumb hack to accomplish something similar. I wrote a Perl script which periodically (every 30 seconds in my case):
ran ps to find the list of PIDs of the watched processes (along with exec time, etc.)
looped over the PIDs
started gdb, attached to each process by its PID, dumped a stack trace using thread apply all where, and detached from the process
declared a process hung if:
its stack trace didn't change and its exec time didn't change after 3 checks
its stack trace didn't change and its exec time indicated 100% CPU load after 3 checks
A hung process was killed to give the monitoring application a chance to restart the hung instance.
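The gdb step of such a hack can be done non-interactively. A sketch (the target pid here is just the current shell; attaching requires ptrace permission, see /proc/sys/kernel/yama/ptrace_scope):

```shell
pid=$$
if command -v gdb >/dev/null 2>&1; then
  # -batch: run the given commands, then detach and exit.
  # Attaching stops the target briefly, so don't do this too often.
  gdb -batch -p "$pid" -ex "thread apply all where" 2>/dev/null \
    || echo "could not attach to $pid"
else
  echo "gdb not installed here"
fi
```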
But that was a very, very crude hack, done to meet an about-to-be-missed deadline, and it was removed a few days later, after a fix for the buggy application was finally installed.
Otherwise, as all the other responders quite correctly commented, there is no general way to find out whether a process is hung: the hang might occur for way too many reasons, often bound to the application logic.
The only reliable way is for the application itself to be capable of indicating whether it is alive or not. The simplest way might be, for example, a periodic "I'm alive" log message.
You could check the files
/proc/[pid]/task/[thread ids]/status
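For example, the State: line in those files distinguishes running (R), sleeping (S), uninterruptible disk wait (D), stopped (T), and zombie (Z); a long stretch in D usually points at IO trouble rather than an application-level hang. A sketch, inspecting the current shell:

```shell
pid=$$
# Print the scheduler state of every thread of the process.
for t in /proc/"$pid"/task/*; do
  echo "thread ${t##*/}: $(grep '^State:' "$t/status")"
done
```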
What do you mean by ‘hang state’? Typically, a process that is unresponsive and using 100% of a CPU is stuck in an endless loop. But there's no way to determine whether that has happened or whether the process might not eventually reach a loop exit state and carry on.
Desktop hang detectors just work by sending a message to the application's event loop and seeing if there's any response. If there's not for a certain amount of time they decide the app has ‘hung’... but it's entirely possible it was just doing something complicated and will come back to life in a moment once it's done. Anyhow, that's not something you can use for any arbitrary process.
Unfortunately there is no "hung" state for a process. A hang can be a deadlock, where the threads in the process are blocked (a blocked state). It can also be a livelock, where the process is running but doing the same thing again and again; such a process is in the running state. So, as you can see, there is no definite hung state.
As suggested, you can use the top command to see whether the process is using 100% CPU or a lot of memory.
