Exactly what do the phases of distcc mean? Am I already using pump mode? And how do I use pump mode in Cygwin?

From what I have read, using pump mode with distcc requires that you wrap make in the pump script. However, the pump script is not in my path, and I cannot find it as a separate package or included in the distcc package for Cygwin.
However, when I compile with distcc and use distccmon-text to monitor which hosts are contacted and their phase, I clearly see that some of them are sometimes in the Preprocess phase. I thought all preprocessing was done on the client executing the make script when not using pump mode, and that the whole idea of pump mode was preprocessing on the remote hosts (and thus requiring the same include files).
This has left me confused. My main question is: exactly what do the phases of distcc (Startup, Blocked, Connected, Preprocess, Connect, Send, Receive and Done) mean?
And as a sub-question: How can I use pump mode with distcc in Cygwin?

Exactly what do the phases of distcc mean?
Ok, this is embarrassing, but I just wasted four hours on the web trying to answer that question. Next time I'll just pull the source and look at it. But, you raise a good point: It's amazing this isn't readily available information.
THESE ARE MY GUESSES (because I don't want to admit I wasted four hours not answering the question!):
Startup - could otherwise be called "initialization/loading"; not yet ready for the first task
Blocked - awaiting access to a local file or the local processor; I stumbled across recent bug fixes that set a "timeout" of one second while it waited for the processor to become available, and I'm aware that it uses zero-length "flock" files to block at times
Connected - the process initiated contact with a client and is now reserved for a job (??), or is compiling a job (??)
Preprocess - performing the preprocess operation
Connect - hand-shaking with the client for an atomic operation, maybe to become reserved (??)
Send - sending the compiled object file back to the client
Receive - receiving source to be compiled, or receiving zipped headers (if using pump mode)
Done - could otherwise be called "idle/available"
NOTE: Because of Google's "pump-mode" algorithm, there's actually quite a bit of hand-shaking that goes on between the client (running distcc) and the volunteer (running distccd). First, in pump mode, all the headers that are expected to be needed are bundled, zipped, and pushed to the volunteer (where the bundle is unzipped into a local mirror of the client machine's include tree). However, it appears that some further communication between the volunteer and client is possible, to incrementally transfer other headers as needed, and that would explain the richer set of communication phases/states listed above.
Am I already using pump mode?
I very much doubt it, as you did not configure it by wrapping make or scons in the pump script (which is necessary to run the Google algorithm that predicts header usage for bundling and transport to the volunteer), nor can you even find the "pump" script. But I cannot explain your seeing the "Preprocess" state on your volunteers. (You're not referring to the "Preprocess" state on your clients, right? That would be entirely understandable, as by default preprocessing is done on the client.)
Rather, I suppose the implementation makes it possible that the hard-coded state machine moves through ALL the states, INCLUDING "Preprocess", even when there is no preprocessing to do, before it advances to the next state. For example, even if it did no preprocessing on the volunteer side, distccd would still receive the source file, write it to disk, and then launch the compiler. If you're on Cygwin, those steps are not instantaneous, especially with a large source file (especially after all the headers have been included in it). So you might see the "Preprocess" phase linger until it initiates the next phase for the compile operation itself.
HEY ... I don't see an obvious "compile" phase, so it is POSSIBLE that the "preprocess" phase embodies "compile" or "preprocess-and-compile" (since those phases are often combined in many compilers anyway).
Sorry -- I'm just guessing.
And how do I use pump mode in Cygwin?
I haven't tried it, but it is supposed to be possible. Apparently the most common problem with Cygwin is that some Windows compilers cannot handle the default TMPDIR setting when distcc is run under Cygwin. The fix is to put something like export TMPDIR=c:/temp in /etc/profile.
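For what it's worth, here is roughly what a pump-mode invocation is documented to look like once you do have the pump script (it ships in the distcc source tree, so building distcc from source under Cygwin is one way to obtain it if the package omits it). Treat this as a sketch, not a tested Cygwin recipe; the host names and -j value are made up:
export TMPDIR=c:/temp                                            # the Cygwin workaround mentioned above
export DISTCC_POTENTIAL_HOSTS='localhost buildbox1 buildbox2'    # volunteers pump may distribute to
pump make -j8 CC=distcc                                          # pump starts the include server and wraps make
Without the pump wrapper, plain (non-pump) distcc is just make -j8 CC=distcc, with DISTCC_HOSTS listing your volunteers.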
The FAQ may be able to help more: http://distcc.googlecode.com/svn/trunk/doc/web/faq.html

Related

Meaning of the warning "File is touched by more than one package"

I am creating a simple Linux kernel with Buildroot and adding a small driver I've written myself. I created the Config.in file and drivername.mk so that the driver can be selected in make menuconfig successfully.
When executing make to build the image, the compilation goes correctly until my driver starts to compile. It looks like it compiles and creates the image fine, but I get lots of warnings saying that different files in ./lib/gcc/arm-buildroot-linux-uclibcgnueabihf/ are touched by more than one package: [u'host-gcc-initial', u'host-gcc-final'].
Can anyone explain a bit about this issue and what is causing it? Do you need any more info to know what is happening? Is it safe to ignore the warnings?
Thanks in advance
Actually, doing a search on 'touched by more than one package', I found http://lists.busybox.net/pipermail/buildroot/2017-October/205602.html, where we find that this warning can safely be ignored if you're not doing a parallel build and aren't a kernel maintainer.
That said, if you're submitting code for inclusion in the Linux kernel, please be a good citizen and make sure you identify all of the things your code is dependent upon. (I'm not actually an active kernel hacker, so I don't know what method they're using for this right now.)
The basic idea is that there are a bunch of steps in compiling things that need to be done in a logical order. In a small project, we simply specify the dependencies we know to put in, because we also wrote the code that created the dependency. But with a project the size of the kernel, you can guarantee that not everyone does this. Some people instead only specify a dependency if it is needed for things to build properly - and if the default order happens to work, things can go years before someone figures out that a dependency was missing, causing grief when they try to update just the one thing that was the missing dependency and the other code does not get rebuilt as a result.
When you're building in parallel, on the other hand, it becomes a lot more complicated. Now you really need every dependency specified, because there is no longer any inherent, dependable order. Some people will still build serially, while others use two processing threads; I'll use 8, and I've worked in groups that would be inclined to use 30, because they're on a 32-processor machine and don't really need all of those during the off hours. Suddenly, the file you need from a directory that normally got processed 30 directories before yours is being generated at the same time as your file that needs it, because you didn't list the dependency - and everything in those 30 directories that hasn't been processed yet is itself still waiting on dependencies that haven't finished.
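If it helps to see the race concretely, here is a minimal, self-contained sketch (all file and target names are made up) of a dependency that is missing from the Makefile but happens to be satisfied by serial ordering:
cat > Makefile <<'EOF'
all: gen.h use.o

gen.h:
	echo '#define VALUE 42' > gen.h

# use.o really needs gen.h, but the dependency is not declared
use.o: use.c
	cc -c use.c -o use.o
EOF
printf '#include "gen.h"\nint v = VALUE;\n' > use.c
make            # serial: gen.h happens to be built before use.o, so it works
rm -f gen.h use.o
make -j8        # parallel: use.o can race ahead of gen.h and fail to compile
(Recipe lines in the Makefile must be indented with tabs.) The kernel and Buildroot are this same problem multiplied across thousands of files.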

Changing the configuration of an already-built kernel and recompiling only what's been changed

The scenario outlined is this:
Someone has built the Linux kernel from source code.
That person wants to change the build configuration.
They still have all of the object files and temporary files that were produced by the previous build operation.
Given all of that, what needs to be done to rebuild as few things as possible in order to save time?
I understand that these will trigger or necessitate a complete recompilation of the source code:
Running make clean.
Running make menuconfig.
make clean is an obvious course of action to avoid to achieve the desired goal because it deletes all object files, both those that would need to be rebuilt and those that could otherwise be left alone. I don't know why make menuconfig would cause the build system to recompile everything, but I've read on here that that is what it would do.
The problem I see with not having the second avenue open to me is that if I change the configuration manually with a text editor, the options that I change might require changes in other options that depend on them (e.g., IMA_TRUSTED_KEYRING depends on SYSTEM_TRUSTED_KEYRING) and I'd be working without an interface that would automatically make those required secondary changes.
It occurred to me that invoking scripts/kconfig/mconf, the program built and launched by make menuconfig, could possibly be a solution to the problems described in the previous paragraph since it was not stated that mconf is what makes the build system recompile everything. But, it possibly could be that very program, so I do not wish to try it until I know it won't do that.
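For reference, the direct invocation I have in mind (untested, for exactly the reason above) would be something like:
scripts/kconfig/mconf Kconfig
run from the top of the kernel tree; note that make menuconfig normally sets up environment variables such as ARCH and SRCARCH before launching mconf, so invoking it bare may not behave identically.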
Sooooo, how does one achieve the stated objective given the stated scenario?

What dequeues requests queued by blk_execute_rq_nowait

I'm working on increasing a timeout in the SCSI mid-layer driver in Linux. At least, that's the quest. I'm familiarizing myself with the driver. This is turning out to be a formidable task. The Linux Documentation Project seems to be woefully out of date (the tour of the kernel is based on v 1.0.9 ... really?). I also found this from kernel.org. I'm not sure how up-to-date that is either.
A description of the problem is that we send SCSI commands through sg. Any timeout specified in sg_io_hdr_t seems to be ignored if it's longer than 30 seconds. I haven't seen anything in the sg driver code which seems to clamp the timeout to 30 when the requested value is larger. Normally, we submit commands using the write/poll/read method through sg. I've traced through the sg code and I believe calling write(2) takes the following path:
sg_write()
sg_common_write()
blk_execute_rq_nowait()
By no means am I 100% positive of this, but it does seem plausible. My question to kernel developers here is, what call should I grep for which would dequeue this request? I haven't found anything in the references I do have which state this.
Ultimately, I'm looking for where, in the mid-layer, requests like this are dequeued for transmission to the lower layer. My premise is that, if I know what call dequeues requests from the queue used in blk_execute_rq_nowait(), then I can grep through the appropriate source files looking for that and move on from there. (If someone would be kind enough to tell me whether the files listed in the first link are the correct list of files for the SCSI mid-layer in Linux, I thank you in advance. My kernel version: 2.6.32.)
Do I have things incorrect? Are requests like this just taken by the lower layer? I assume "no" because this seems like what the mid-layer is supposed to do: route these things to the proper place.
blk_execute_rq() - this call inserts a request at the back of the I/O scheduler queue, so you should be looking into the I/O scheduler code that dequeues requests. You may want to start off by checking which I/O scheduler your system is running:
cat /sys/block/sda/queue/scheduler
(the output should be something like noop [deadline] cfq, with the active scheduler in brackets), check its tunables under /sys/block/sda/queue/iosched/, and thereafter look into that scheduler's code.
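As a concrete starting point for the grep, the block-layer calls that pull requests off the queue in kernels of that vintage are worth searching for; a sketch (verify these names against your 2.6.32 tree, since the block API was being reworked around then):
grep -rn "blk_peek_request\|blk_fetch_request\|elv_next_request" block/ drivers/scsi/
grep -n "scsi_request_fn" drivers/scsi/scsi_lib.c    # the SCSI mid-layer's request_fn, which drains the queue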

$HASP373 and IEF403I in the z/OS syslog

I have a question about the z/OS log:
I would just like to know whether all operations that get started are always announced by $HASP373 and IEF403I?
And whether the Ended status is always announced by $HASP395 and IEF404I?
The trouble with z/OS is that it's really hard to explain something without introducing another concept that also needs explaining. This, in turn, requires another explanation etc. This is partly due to the z/OS operating system being from a different planet compared to Unix, Windows, OS X etc, all of which are broadly similar.
Those messages are issued by the system for a lot of the work that happens on a mainframe, but not all of it.
All work on z/OS runs in its own address space, which is almost like a mini-VM. There will be many address spaces in a z/OS system (380 in ours currently). A program in an address space is not aware of any other address spaces and thinks it has access to the entire 2Gb (31-bit addressing) range of memory (different address spaces can communicate if necessary & authorised, and more than 2GB is available with 64-bit addressing). A program in one address space cannot crash a program in another address space by overwriting storage. Programs in 2 different address spaces can access the same memory address, but don't affect each other, as they will actually, unbeknown to them, access different memory.
There are 4 types of address spaces:
TSO (Time Sharing Option) - these are users logged on to the system, typing commands and getting responses. They may run scripts, using the languages REXX and Clist (Command Lists - older, generally replaced by REXX) much like Perl and shell scripts, submit batch jobs, write and compile code etc.
BATCH JOBS (or JOB) - This is where you want to run a program, so you create a text file with the name of the program(s) to run and the file(s) that it/they need(s), and SUBMIT it. The system will run the program(s) and tell you when they are done. Whilst they are running, you can go and do something else. You don't even need to be logged on - you can prepare an FTP job (for example) to run at 01:00 whilst you're asleep, and another job to run if the first one works.
STARTED TASKS (STCs) - Very similar to a batch job. Usually started either by the system itself when it starts or by an operator issuing a START command for that STC at the system console. (E.g. 'START DB2' starts the DB2 started task. Alternatively a user may submit a batch job for their own test DB2 system.)
System Address Spaces (SYSAS) - Consider these like Unix daemons, started by the operating system itself for various essential processes. There are also address spaces representing processes running under the 'Unix' half of z/OS (USS - Unix System Services), but that's another story.
There is no such thing as an 'operation' in z/OS terms. Within an address space, many programs may be running, each one identified by a TCB (Task Control Block) or SRB (System Request Block).
However, if you knew that the information you wanted was produced by a normal batch job, then looking for the £HASP373 and £HASP395 messages for that job would be the right place to start. Bear in mind that the message ids (HASP373 and HASP395) might not start with a '£' on your system. '£' is the default, but it is a customisable parameter. $ and # are also fairly common.
I do know what I'm talking about, but if any of the above is not clear, then I haven't explained it very well. I may be guilty of doing exactly what I warned against and explaining an unknown concept by using another unknown concept. :-)
Work gets into z/OS through something called the subsystem interface. Part of this flow is that generally, when an address space is started, it requests work from the subsystem that started the address space through a well-defined interface (IEFSSREQ). This handshake is where things like your HASP messages come from.
Here's a watered down example.
An operator enters a START command from a system console. As part of processing that command, the system creates an address space, and eventually a thread in the new address space says, "ok - I'm ready...give me some work to do". This goes to the primary job entry subsystem, who hands the address space something to do - the internal data structures representing the task that the operator started in this case. As part of this chain, the various $HASP messages are issued, and this works pretty much the same way for TSO sessions, started tasks (STCs) and JCL submitted for a batch job.
JES2/JES3 are examples of subsystems, but there are others.
For example, if our operator added the SUB=MSTR parameter on the start command, the requests wouldn't go through the primary JES - and so there wouldn't be any of the $HASP messages you're looking for. There are plenty of vendor applications that start and manage address spaces outside of JES, and this is the stuff you miss by limiting yourself to the HASP and IEF401 messages.
Also, UNIX Services has a variety of APIs similar to UNIX "fork" that can be used to spawn address spaces without necessarily involving JES.
If you want to know about activity starting and ending, there are better ways - SMF, ENF signals, etc. A great way to learn this stuff if you don't know already is to use the system trace facilities and read some dumps. The wonderful thing about z/OS is that it's all right there, for those who spend the time figuring out where to look.
No. Those messages are for jobs. Not all operations are jobs. An example of an operation that is not a job would be a system command. I don't have a z/OS system at hand right now, but I believe another example of an operation that would not use the messages you reference would be a started task.
This may be helpful, as it attempts to explain z/OS concepts in Unix terms.
A job is something that goes through JES2/JES3. (In your case, JES2.) JES2/JES3 jobs are generally used for batch type of work. For example, a sort job, where I submit something, and come back later and get an answer. However, there's a lot of work running under z/OS that doesn't go through JES2/JES3.
Part of the problem here is what you mean by an operation; for example, while you may get a message saying that DB2 has started, after it's started, it's not going to tell you every time it gets a query. A TSO user might run a REXX exec underneath his/her address space, but that's not going to go through JES.
Another way to look at this is that JES2/JES3 are job management subsystems, but they aren't equivalent to the kernel on a unix/windows system, which does schedule all the work running on the system. For z/OS, there are multiple ways that work can come in to a system; examples include JES2/JES3, TSO, ISPF, CICS, DB2, IMS, via the console, etc. It's then up to the master scheduler/WLM/SRM to manage all the requests that come in through all of the subsystems.
If you have access to a z/OS system, look into SDSF, or whatever you use to manage JES2. The ST panel, under SDSF, is a list of things that are running/eligible to run that are managed by JES2. However, if you look at the DA panel (assuming you have authority to do so), you'll note that there are a lot of address spaces that show up on the DA panel that don't show up in the ST panel.
If address spaces are started through the JES2 subsystem, which is normally the case unless another subsystem or MSTR is specified on the MVS START command, then the message $HASP373 jobname STARTED is issued. Similarly, when the address space ends, message $HASP395 is issued.
The IEF403I and IEF404I messages are issued by the system in similar situations and independent of what either JES2 or JES3 are doing and regardless under what subsystem the address space was started. The messages are only issued when the operator has requested to monitor job names using the SETCON MONITOR or the MONITOR JOBNAMES command. Products for automated operations typically do this.

How to "hibernate" a process in Linux by storing its memory to disk and restoring it later?

Is it possible to 'hibernate' a process in Linux?
Just like 'hibernate' on a laptop, I would like to write all the memory used by a process to disk and free up the RAM. Then later on, I can 'resume the process', i.e. read all the data back from disk into RAM, and continue with my process?
I used to maintain CryoPID, which is a program that does exactly what you are talking about. It writes the contents of a program's address space, VDSO, file descriptor references and states to a file that can later be reconstructed. CryoPID started when there were no usable hooks in Linux itself and worked entirely from userspace (actually, it still does work, depending on your distro / kernel / security settings).
Problems were (indeed) sockets, pending RT signals, numerous X11 issues, and glibc's caching getpid() implementation, amongst many others. Randomization (especially of the VDSO) turned out to be insurmountable for the few of us working on it after Bernard walked away from it. However, it was fun and became the topic of several master's theses.
If you are just contemplating a program that can save its running state and re-start directly into that state, it's far, far easier to just save that information from within the program itself, perhaps when servicing a signal.
I'd like to put a status update here, as of 2014.
The accepted answer suggests CryoPID as a tool to perform checkpoint/restore, but I found the project to be unmaintained and impossible to compile with recent kernels.
Now, I have found two actively maintained projects providing application checkpointing.
The first, and the one I suggest because I have had better luck running it, is CRIU. It performs checkpoint/restore mainly in userspace, and requires the kernel option CONFIG_CHECKPOINT_RESTORE to be enabled in order to work.
Checkpoint/Restore In Userspace, or CRIU (pronounced kree-oo, IPA: /krɪʊ/, Russian: криу), is a software tool for Linux operating system. Using this tool, you can freeze a running application (or part of it) and checkpoint it to a hard drive as a collection of files. You can then use the files to restore and run the application from the point it was frozen at. The distinctive feature of the CRIU project is that it is mainly implemented in user space.
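To give a feel for the CRIU workflow (a sketch; the PID and image directory are placeholders, and a process started from a shell needs the --shell-job option):
sudo criu dump -t <pid> -D /tmp/ckpt --shell-job      # freeze the process and write its image files
sudo criu restore -D /tmp/ckpt --shell-job            # later: rebuild the process from those files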
The second is DMTCP; quoting from their main page:
DMTCP (Distributed MultiThreaded Checkpointing) is a tool to transparently checkpoint the state of multiple simultaneous applications, including multi-threaded and distributed applications. It operates directly on the user binary executable, without any Linux kernel modules or other kernel modifications.
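Again just as a sketch of the workflow (the program name is a placeholder), DMTCP wraps the program at launch and checkpoints it on request:
dmtcp_launch ./myprog               # start the program under DMTCP control
dmtcp_command --checkpoint          # ask the coordinator to write a checkpoint
./dmtcp_restart_script.sh           # later: restart from the generated checkpoint files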
There is also a nice Wikipedia page on the subject: Application_checkpointing
The answers mentioning ctrl-z are really talking about stopping the process with a signal, in this case SIGTSTP. You can issue a stop signal with kill:
kill -STOP <pid>
That will suspend execution of the process. It won't immediately free the memory used by it, but as memory is required for other processes the memory used by the stopped process will be gradually swapped out.
When you want to wake it up again, use
kill -CONT <pid>
The more complicated solutions, like CryoPID, are really only needed if you want the stopped process to be able to survive a system shutdown/restart - it doesn't sound like you need that.
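If you want to confirm that the process really is stopped while you experiment, its state shows up as 'T' in ps:
ps -o pid,stat,comm -p <pid>        # STAT 'T' = stopped; it returns to 'S' or 'R' after kill -CONT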
The Linux kernel has now partially implemented checkpoint/restart features: https://ckpt.wiki.kernel.org/, the status is here.
Some useful information is on LWN (Linux Weekly News):
http://lwn.net/Articles/375855/ http://lwn.net/Articles/412749/ ......
So the answer is "YES"
The issue is restoring the streams - files and sockets - that the program has open.
When your whole OS hibernates, the local files and such can obviously be restored. Network connections don't survive, but code that accesses the internet typically does more error checking and survives the error conditions (or ought to).
If you did per-program hibernation (without application support), how would you handle open files? What if another process accesses those files in the interim? etc?
Maintaining state when the program is not loaded is going to be difficult.
Simply suspending the threads and letting it get swapped to disk would have much the same effect?
Or run the program in a virtual machine and let the VM handle suspension.
Short answer is "yes, but not always reliably". Check out CryoPID:
http://cryopid.berlios.de/
Open files will indeed be the most common problem. CryoPID states explicitly:
Open files and offsets are restored. Temporary files that have been unlinked and are not accessible on the filesystem are always saved in the image. Other files that do not exist on resume are not yet restored. Support for saving file contents for such situations is planned.
The same issues will also affect TCP connections, though CryoPID supports tcpcp for connection resuming.
I extended CryoPID, producing a package called Cryopid2, available from SourceForge. This can migrate a process as well as hibernate it (along with any open files and sockets - data in sockets/pipes is sucked into the process on hibernation and spat back into them when the process is restarted).
The reason I have not been active with this project is that I am not a kernel developer - both this and/or the original CryoPID need someone on board who can get them running with the latest kernels (e.g. Linux 3.x).
The CryoPID method does work - and it is probably the best solution to general-purpose process hibernation/migration in Linux that I have come across.
The short answer is "yes." You might start by looking at this for some ideas: ELF executable reconstruction from a core image (http://vx.netlux.org/lib/vsc03.html)
As others have noted, it's difficult for the OS to provide this functionality, because the application needs to have some error checking built in to handle broken streams.
However, on a side note, some programming languages and tools that use virtual machines explicitly support this functionality, such as the Self programming language.
This is sort of the ultimate goal of a clustered operating system. Matthew Dillon has put a lot of effort into implementing something like this in his DragonFly BSD project.
Adding another workaround: you can use VirtualBox. Run your applications in a regular virtual machine and simply "save the machine state" whenever you want.
I know this is not an answer, but I thought it could be useful when there are no real options.
If for any reason you don't like VirtualBox, VMware and QEMU are just as good.
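If you go that route, saving and resuming can also be scripted from the host instead of clicked in the GUI; for example, with VirtualBox (the VM name is whatever you called yours):
VBoxManage controlvm "myvm" savestate     # suspend the VM and write its RAM to disk
VBoxManage startvm "myvm"                 # later: resume exactly where it left off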
Ctrl-Z increases the chances that the process's pages will be swapped out, but it doesn't free the process's resources completely. The problem with freeing a process's resources completely is that things like file handles and sockets are kernel resources the process gets to use but doesn't know how to persist on its own. So Ctrl-Z is about as good as it gets.
There was some research on checkpoint/restore for Linux back in the 2.2 and 2.4 days, but it never made it past the prototype stage. It is possible (with the caveats described in the other answers) for certain values of possible - if you can write a kernel module to do it, it is possible. But for the common value of possible (can I do it from the shell on a commercial Linux distribution), it is not yet possible.
There's Ctrl+Z in Linux, but I'm not sure it offers the features you specified. I suspect you asked this question because it doesn't.
