Is there any kill_proc() replacement for proprietary Linux kernel drivers?

I'm in the process of porting 4 proprietary (read: non-GPL) Linux kernel drivers (that I didn't write) from RHEL 5.x to RHEL 6.x (2.6.32 kernel). The drivers all use kill_proc() for signalling the user-space "session", but this function has been removed from more recent kernels (somewhere between 2.6.18 and 2.6.32). I've seen this question asked many times here and elsewhere and I've searched fairly extensively, but of the many suggested solutions, none work, due either to the functions no longer being exported or to their requiring a GPL-only function (see below). Does anyone know of a solution that could work for a proprietary driver?
given: kill_proc(pid, sig, 1);
The simplest solution I found was to use: kill_proc_info(sig, SEND_SIG_PRIV, pid); however, kill_proc_info() is no longer exported, so it can't be used.
kill_pid_info() has been suggested (this is what kill_proc_info() calls after taking rcu_read_lock()). kill_pid_info() requires a struct pid*, so I could use: kill_pid_info(sig, SEND_SIG_PRIV, find_vpid(pid)); however, find_vpid() is exported for GPL use only and this is a proprietary driver. Is there another way to get the struct pid*?
kill_pid_info() also takes rcu_read_lock() and then calls group_send_sig_info(). Unfortunately, group_send_sig_info() is not exported, and it also requires a struct task_struct*, but the required find_task_by_vpid() function is not exported either.
Another suggestion was kill_pid(), but this also requires a struct pid*, and again, find_vpid() is only exported for GPL use.
There were also suggestions for send_sig() and send_sig_info(), but these also require a struct task_struct*, and again, find_task_by_pid() is not exported, and pid_task() requires that (GPL'd) find_vpid() to get the struct pid*. Also, these functions don't take rcu_read_lock(), and they pass a FALSE value for the group flag (whereas kill_proc() ended up using a TRUE value), so there could be some subtle differences.
That's all that I could find. Does anyone have a suggestion that will work for my case? Thanks in advance.

Since there have been no responses to my question, I've been reading much of the kernel code and I think I've found a solution.

It seems that the only exported function that provides the same semantics as kill_proc() is kill_pid(). We can't use the GPL find_vpid() function to get the needed struct pid*, but if we can get the struct task_struct*, then we can get the struct pid* from there as:

task->pids[PIDTYPE_PID].pid

Since find_task_by_vpid() is no longer exported, it seems the only way to find the task is to go through the entire task list looking for it. So, the proposed solution is:
int my_kill_proc(pid_t pid, int sig) {
    int error = -ESRCH;            /* default return value */
    struct task_struct* p;
    struct task_struct* t = NULL;
    struct pid* pspid;

    rcu_read_lock();
    p = &init_task;                /* start at init */
    do {
        if (p->pid == pid) {       /* does the pid (not tgid) match? */
            t = p;
            break;
        }
        p = next_task(p);          /* "this isn't the task you're looking for" */
    } while (p != &init_task);     /* stop when we get back to init */

    if (t != NULL) {
        pspid = t->pids[PIDTYPE_PID].pid;
        if (pspid != NULL) error = kill_pid(pspid, sig, 1);
    }
    rcu_read_unlock();
    return error;
}
I know it will take a lot more time to search the whole task list rather than using the hash tables, but it's all I've got. Some concerns/questions that I have:

Is the rcu_read_lock() sufficient for this, or would it be better to use something like preempt_disable() instead?

Can the struct task_struct ever NOT have a PIDTYPE_PID entry in the pids array? And if so, is checking for NULL sufficient?

I'm new to working with the kernel; are there any other suggestions to make this better?
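
A small compatibility macro can keep the driver call sites unchanged across both kernels. This is only a sketch: the 2.6.27 cutoff is an assumption (kill_proc() disappeared somewhere between 2.6.18 and 2.6.32), and COMPAT_KILL_PROC is a made-up name.

#include <linux/version.h>

/* Sketch of a compatibility shim. The 2.6.27 cutoff is an assumption;
 * change the version check to wherever your target kernels actually
 * lost kill_proc(). COMPAT_KILL_PROC is a hypothetical name. */
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 27)
#define COMPAT_KILL_PROC(pid, sig)  my_kill_proc((pid), (sig))
#else
#define COMPAT_KILL_PROC(pid, sig)  kill_proc((pid), (sig), 1)
#endif

Each driver can then call COMPAT_KILL_PROC(pid, sig) wherever it used to call kill_proc(pid, sig, 1).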

Related

Is CGAL 2D Regularized Boolean Set-Operations lib thread safe?

I am currently using the library mentioned in the title, see
CGAL 2D-reg-bool-set-op-pol
The library provides types for polygons and polygon sets which are internally represented as so called arrangements.
My question is: to what extent is this library thread-safe, that is, fit for parallel computation on its objects?
There could be several levels in which thread safety is guaranteed:
1) If I take an object from a library like an arrangement
Polygon_set_2 S;
I might be able to execute
Polygon_2 P;
S.join(P);
and
Polygon_2 Q;
S.join(Q);
in two different concurrent execution units/threads in parallel without harm and get the right result, as if I had done everything sequentially. That would be the highest degree of thread safety/possible parallelism.
2) In fact, a much lesser degree would be enough for me. In that case S and P would be members of a class C, so that two class instances have distinct S and P instances. Then I would like to compute (say) S.join(P) in parallel for a list of instances of the class C, e.g. by calling a suitable member function of C with std::async.
Just to be complete, I insert here a bit of actual code from my project which gives more flesh to these terse descriptions.
// the following typedefs are more or less standard from the
// CGAL library examples.
typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
typedef Kernel::Point_2 Point_2;
typedef Kernel::Circle_2 Circle_2;
typedef Kernel::Line_2 Line_2;
typedef CGAL::Gps_circle_segment_traits_2<Kernel> Traits_2;
typedef CGAL::General_polygon_set_2<Traits_2> Polygon_set_2;
typedef Traits_2::General_polygon_2 Polygon_2;
typedef Traits_2::General_polygon_with_holes_2 Polygon_with_holes_2;
typedef Traits_2::Curve_2 Curve_2;
typedef Traits_2::X_monotone_curve_2 X_monotone_curve_2;
typedef Traits_2::Point_2 Point_2t;
typedef Traits_2::CoordNT coordnt;
typedef CGAL::Arrangement_2<Traits_2> Arrangement_2;
typedef Arrangement_2::Face_handle Face_handle;
// the following type is not copied from the CGAL library example code but
// introduced by me
typedef std::vector<Polygon_with_holes_2> pwh_vec_t;
// the following is an excerpt of my full GerberLayer class,
// which retains only the data members that are used in the join()
// member function. This data is therefore local to the class instance.
class GerberLayer
{
public:
    GerberLayer();
    ~GerberLayer();
    void join();

    pwh_vec_t raw_poly_lis;
    pwh_vec_t joined_poly_lis;
    Polygon_set_2 Saux;
    annotate_vec_t annotate_lis;
    polar_vec_t polar_lis;
};
//
// it is not necessary to understand the workings of the function.
// I deleted all debug and timing output etc. It is just to "showcase" some
// typical operations from the CGAL reg set boolean ops for polygons
// library by Efi Fogel et al.
//
void GerberLayer::join()
{
    Saux.clear();
    auto it_annbase = annotate_lis.begin();
    annotate_vec_t::iterator itann = annotate_lis.begin();
    bool first_block = true;
    int cnt = 0;
    while (itann != annotate_lis.end()) {
        gpolarity akt_polar = itann->polar;
        auto itnext = std::find_if(itann, annotate_lis.end(),
                                   [=](auto a) {return a.polar != akt_polar;});
        Polygon_set_2 Sblock;
        if (first_block) {
            if (akt_polar == Dark) {
                Saux.join(raw_poly_lis.begin() + (itann - it_annbase),
                          raw_poly_lis.begin() + (itnext - it_annbase));
            }
            first_block = false;
        } else {
            if (akt_polar == Dark) {
                Saux.join(raw_poly_lis.begin() + (itann - it_annbase),
                          raw_poly_lis.begin() + (itnext - it_annbase));
            } else {
                Polygon_set_2 Saux1;
                Saux1.join(raw_poly_lis.begin() + (itann - it_annbase),
                           raw_poly_lis.begin() + (itnext - it_annbase));
                Saux.complement();
                pwh_vec_t auxlis;
                Saux1.polygons_with_holes(std::back_inserter(auxlis));
                Saux.join(auxlis.begin(), auxlis.end());
                Saux.complement();
            }
        }
        itann = itnext;
    }
ende:
    joined_poly_lis.clear();
    annotate_lis.clear();
    Saux.polygons_with_holes(std::back_inserter(joined_poly_lis));
}
int join_wrapper(GerberLayer* p_layer)
{
    p_layer->join();
    return 0;
}

// here the parallelism (of the "embarrassing" kind) occurs:
// for every GerberLayer a dedicated task is started, which calls
// the above GerberLayer::join() function
void Window::do_unify()
{
    std::vector<std::future<int>> fivec;
    for (int i = 0; i < gerber_layer_manager.num_layers(); ++i) {
        GerberLayer* p_layer = gerber_layer_manager.at(i);
        fivec.push_back(std::async(join_wrapper, p_layer));
    }
    int sz = wait_for_all(fivec); // written by me, not shown
}
One might think that 2) must be trivially possible, as only "different" instances of polygons and arrangements are in play. But it is conceivable, since the library works with arbitrary-precision points (Point_2t in my code above), that for some implementation reason or other all points are inserted into a list static to the class Point_2t, so that identical points are represented only once in that list. Then there would be nothing like "independent instances of Point_2t", and as a consequence none for Polygon_2 or Polygon_set_2 either, and one could say farewell to thread safety.
I tried to resolve this question by googling (not by analyzing the library code, I have to admit) and would hope for an authoritative answer (hopefully positive as this primitive parallelism would greatly speed up my code).
Addendum:
1)
I implemented this already and made a test run with nothing exceptional occurring and visually plausible results, but of course this proves nothing.
2) The same question for the CGAL 2D-Arrangement-package from the same authors.
Thanks in advance!
P.S.: I am using CGAL 4.7 from the packages supplied with Ubuntu 16.04 (Xenial). A newer version on Ubuntu 18.04 gave me errors, so I decided to stay with 4.7. Should a version newer than 4.7 be thread-safe, but not 4.7, I will of course try to use that newer version.
Incidentally, I could not find out whether the libcgal***.so libraries as supplied by Ubuntu 16.04 are thread-safe as described in the documentation. In particular, I found no reference to the macro CGAL_HAS_THREADS, which is mentioned in the "thread safety" part of the docs, when I looked through the build logs of the Xenial cgal package on Launchpad.
Indeed, there are several levels of thread safety.
The 2D Regularized Boolean Set-Operations package depends on the 2D Arrangement package, and both packages depend on a kernel. For most operations the EPEC kernel is required.
Both packages are thread-safe, except for the rational-arc traits (Arr_rational_function_traits_2).
However, the EPEC kernel is not yet thread-safe when number-type objects are shared among threads. So if you, for example, construct different arrangements in different threads, from different input sets of curves, respectively, you are safe.

finer-grained control than with LD_PRELOAD?

I have a dynamically linked ELF executable on Linux, and I want to swap a function in a library it is linked against. With LD_PRELOAD I can, of course, supply a small library with a replacement for the function that I compile myself. However, what if in the replacement I want to call the original library function? For example, the function may be srand(), and I want to hijack it with my own seed choice but otherwise let srand() do whatever it normally does.
If I were linking to make said executable, I would use the wrap option of the linker but here I only have the compiled binary.
One trivial solution I see is to cut and paste the source code for the original library function into the replacement - but I want to handle the more general case when the source is unavailable. Or, I could hex edit the needed extra code into the binary but that is specific to the binary and also time consuming. Is something more elegant possible than either of these? Such as some magic with the loader?
(Apologies if I were not using the terminology precisely...)
Here's an example of wrapping malloc:
// LD_PRELOAD will cause the process to call this instead of malloc(3)
// report malloc(size) calls
#define _GNU_SOURCE             /* needed for RTLD_NEXT */
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <signal.h>

/* helpers defined elsewhere in this preload library; signatures assumed
 * here so the listing compiles on its own */
extern double get_seconds(void);
extern void send_signal(int signum);

void *malloc(size_t size)
{
    // on first call, get a function pointer for malloc(3)
    static void *(*real_malloc)(size_t) = NULL;
    static int malloc_signal = 0;

    if (!real_malloc)
    {
        // real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        *(void **) (&real_malloc) = dlsym(RTLD_NEXT, "malloc");
    }
    assert(real_malloc);

    if (malloc_signal == 0)
    {
        char *string = getenv("MW_MALLOC_SIGNAL");
        if (string != NULL)
        {
            malloc_signal = 1;
        }
    }

    // call malloc(3)
    void *retval = real_malloc(size);
    fprintf(stderr, "MW! %f malloc size %zu, address %p\n", get_seconds(), size, retval);
    if (malloc_signal == 1)
    {
        send_signal(SIGUSR1);
    }
    return retval;
}
The canonical answer is to use dlsym(RTLD_NEXT, ...).
From the man page:
RTLD_NEXT
Find the next occurrence of the desired symbol in the search
order after the current object. This allows one to provide a
wrapper around a function in another shared object, so that,
for example, the definition of a function in a preloaded
shared object (see LD_PRELOAD in ld.so(8)) can find and invoke
the "real" function provided in another shared object (or for
that matter, the "next" definition of the function in cases
where there are multiple layers of preloading).
See also this article.
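Applied to the srand() case from the question, a preload wrapper might look roughly like this. It is only a sketch: the fixed seed 12345 and the file/library names below are made up for illustration.

#define _GNU_SOURCE             /* for RTLD_NEXT */
#include <dlfcn.h>
#include <stdio.h>

/* Hijack srand(3): log the caller's seed, then seed with our own value
 * by calling the real srand() found further down the search order. */
void srand(unsigned int seed)
{
    static void (*real_srand)(unsigned int) = NULL;

    if (!real_srand)
        *(void **)(&real_srand) = dlsym(RTLD_NEXT, "srand");
    if (!real_srand)            /* dlsym failed; nothing sensible to do */
        return;

    fprintf(stderr, "srand(%u) intercepted, using 12345 instead\n", seed);
    real_srand(12345);          /* illustrative fixed seed */
}

Build it with something like gcc -shared -fPIC -o libseed.so seed.c -ldl and run the unmodified binary as LD_PRELOAD=./libseed.so ./exefile.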
Just for completeness, regarding editing the function name in the binary - I checked and it works but not without potential hiccups. E.g., in the example I mentioned, one can find the offset of "srand" (e.g., via strings -t x exefile | grep srand) and hex edit the string to "sran0". But names of symbols may be overlapping (to save space), so if the code also calls rand(), then there is only one "srand" string in the binary for both. After the change the unresolved references will then be to sran0 and ran0. Not a showstopper, of course, but something to keep in mind. The dlsym() solution is certainly more flexible.

Identifying bug in linux kernel module

I am marking Michael's answer as accepted, as his was the first. Thank you to osgx and employee of the month for additional information and assistance.
I am attempting to identify a bug in a consumer/producer kernel module. This is a problem being given to me for a course in university. My teaching assistant was not able to figure it out, and my professor said it was okay if I uploaded it online (he doesn't think Stack can figure it out!).
I have included the module, the makefile, and the Kbuild.
Running the program does not guarantee the bug will present itself.
I thought the issue was on line 30, since it is possible for a thread to rush to line 36 and starve the other threads. My professor said that is not what he is looking for.
Unrelated question: What is the purpose of line 40? It seems out of place to me, but my professor said it serves a purpose.
My professor said the bug is very subtle. The bug is not deadlock.
My approach was to identify critical sections and shared variables, but I'm stumped. I am not familiar with tracing (as a method of debugging), and was told that while it may help, it is not necessary to identify the issue.
File: final.c
#include <linux/completion.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/module.h>
static int actor_kthread(void *);
static int writer_kthread(void *);
static DECLARE_COMPLETION(episode_cv);
static DEFINE_SPINLOCK(lock);
static int episodes_written;
static const int MAX_EPISODES = 21;
static bool show_over;
static struct task_info {
    struct task_struct *task;
    const char *name;
    int (*threadfn) (void *);
} task_info[] = {
    {.name = "Liz", .threadfn = writer_kthread},
    {.name = "Tracy", .threadfn = actor_kthread},
    {.name = "Jenna", .threadfn = actor_kthread},
    {.name = "Josh", .threadfn = actor_kthread},
};

static int actor_kthread(void *data) {
    struct task_info *actor_info = (struct task_info *)data;
    spin_lock(&lock);
    while (!show_over) {
        spin_unlock(&lock);
        wait_for_completion_interruptible(&episode_cv); //Line 30
        spin_lock(&lock);
        while (episodes_written) {
            pr_info("%s is in a skit\n", actor_info->name);
            episodes_written--;
        }
        reinit_completion(&episode_cv); // Line 36
    }
    pr_info("%s is done for the season\n", actor_info->name);
    complete(&episode_cv); //Why do we need this line?
    actor_info->task = NULL;
    spin_unlock(&lock);
    return 0;
}

static int writer_kthread(void *data) {
    struct task_info *writer_info = (struct task_info *)data;
    size_t ep_num;
    spin_lock(&lock);
    for (ep_num = 0; ep_num < MAX_EPISODES && !show_over; ep_num++) {
        spin_unlock(&lock);
        /* spend some time writing the next episode */
        schedule_timeout_interruptible(2 * HZ);
        spin_lock(&lock);
        episodes_written++;
        complete_all(&episode_cv);
    }
    pr_info("%s wrote the last episode for the season\n", writer_info->name);
    show_over = true;
    complete_all(&episode_cv);
    writer_info->task = NULL;
    spin_unlock(&lock);
    return 0;
}

static int __init tgs_init(void) {
    size_t i;
    for (i = 0; i < ARRAY_SIZE(task_info); i++) {
        struct task_info *info = &task_info[i];
        info->task = kthread_run(info->threadfn, info, info->name);
    }
    return 0;
}

static void __exit tgs_exit(void) {
    size_t i;
    spin_lock(&lock);
    show_over = true;
    spin_unlock(&lock);
    for (i = 0; i < ARRAY_SIZE(task_info); i++)
        if (task_info[i].task)
            kthread_stop(task_info[i].task);
}
module_init(tgs_init);
module_exit(tgs_exit);
MODULE_DESCRIPTION("CS421 Final");
MODULE_LICENSE("GPL");
File: kbuild
Kobj-m := final.o
File: Makefile
# Basic Makefile to pull in kernel's KBuild to build an out-of-tree
# kernel module
KDIR ?= /lib/modules/$(shell uname -r)/build
all: modules
clean modules:
When cleaning up in tgs_exit() the function executes the following without holding the spinlock:
if (task_info[i].task)
kthread_stop(task_info[i].task);
It's possible for a thread that's ending to set its task_info[i].task to NULL between the check and the call to kthread_stop().
I'm quite confused here.
You claim this is a question from an upcoming exam and that it was released by the person delivering the course. Why would they do that? Then you say that the TA failed to solve the problem. If the TA can't do it, how can students be expected to pass?
(professor) doesn't think Stack can figure it out
If the claim is that the level on this website is bad, I definitely agree. But still, claiming it is below the level to be expected from a random university is a stretch. If there is no claim of the sort, I once more ask how students are expected to do it. And what if the problem gets solved?
The code itself is imho unsuitable for teaching, as it deviates too much from common idioms.
Another answer here noted one side effect of the actual problem. Namely, the loop in tgs_exit can race with threads exiting on their own: it tests the ->task pointer and sees it as non-NULL, while it becomes NULL just afterwards. The discussion of whether this can result in a kthread_stop(NULL) call is not really relevant.
Either a kernel thread exiting on its own cleans everything up, OR kthread_stop (and maybe something else) is necessary to do it.
If the former is true, the code suffers from a possible use-after-free. After tgs_exit tests the pointer, the target thread could have exited, maybe prior to the kthread_stop call or maybe just as it was executed. Either way, it is possible that the passed pointer is stale because the area was already freed by the exiting thread.
If the latter is true, the code suffers from resource leaks due to insufficient cleanup: there are no kthread_stop calls if tgs_exit is executed after all threads exit.
The kthread_* API allows threads to just exit, hence the effects are as described in the first variant.
For the sake of argument, let's say the code is compiled into the kernel (as opposed to being loaded as a module), and say the exit func is called on shutdown.
There is a design problem in that there are 2 exit mechanisms, and it turns into a bug because they are not coordinated. A possible solution for this case would set a flag for the writers to stop and would wait for a writer counter to drop to 0.
The fact that the code is in a module makes the problem more acute: unless you kthread_stop, you can't tell whether the target thread is gone. In particular the "actor" threads do:
actor_info->task = NULL;
So the thread is skipped in the exit handler, which can now finish and let the kernel unload the module itself...
spin_unlock(&lock);
return 0;
... but this code (located in the module!) has possibly not been executed yet.
This would not have happened if the code followed the idiom of always using kthread_stop.
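A minimal sketch of that idiom, ignoring the episode/completion logic of the module (which would still need rework): the threads loop on kthread_should_stop() and never touch their own task pointer, and the exit path stops every thread unconditionally, so thread lifetime is owned in exactly one place.

/* sketch only: task_info, ARRAY_SIZE and pr_info come from the module above */
static int actor_kthread(void *data)
{
    struct task_info *actor_info = data;

    while (!kthread_should_stop()) {
        /* ... wait for work and act out episodes (elided) ... */
    }
    pr_info("%s is done for the season\n", actor_info->name);
    return 0;               /* do NOT clear actor_info->task here */
}

static void __exit tgs_exit(void)
{
    size_t i;

    for (i = 0; i < ARRAY_SIZE(task_info); i++)
        if (!IS_ERR_OR_NULL(task_info[i].task))   /* guard a failed kthread_run() */
            kthread_stop(task_info[i].task);      /* blocks until the thread has returned */
}

With this shape the exit handler cannot return, and the module cannot be unloaded, before every thread has actually left the module's code.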
Another issue is that the writers wake everyone up (the so-called "thundering herd" problem), as opposed to at most one actor.
Perhaps the bug one is supposed to find is that each episode has at most one actor? Or maybe that the module can exit while there are episodes written but not yet acted out?
The code is extremely weird, and if you were shown a reasonable implementation of a thread-safe queue in userspace, you should see how what's presented here does not fit. For instance, why does it block immediately without checking for episodes?
Also, fun fact: the locking around the write to show_over plays no role in correctness.
There are more issues and it is quite likely I missed some. As it is, I think the question is of poor quality. It does not look like anything real-world.

Atomicity of writev() system call in Linux

I've looked in the kernel source for linux kernel 4.4.0-57-generic and don't see any locks in the writev() source. Is there something I'm missing? I don't see how writev() is atomic or thread-safe.
Not a kernel expert here, but I'll share my point of view anyway. Feel free to spot any mistakes.
Browsing the kernel source (v4.9, though I wouldn't expect it to be very different) and trying to trace the writev(2) system call, I observe the following chain of function calls:
SYSCALL_DEFINE3(writev, ..)
do_writev(..)
vfs_writev(..)
do_readv_writev(..)
Now the path branches, depending on whether a write_iter method is implemented and hooked into the struct file_operations (reached through the f_op field of the struct file that the system call refers to).
If it's not NULL, the path is:
5a. do_iter_readv_writev(..), which calls the method filp->f_op->write_iter(..) at this point.
If it is NULL, the path is:
5b. do_loop_readv_writev(..), which calls repeatedly in a loop the method filp->f_op->write at this point.
So, as far as I understand, the writev() system call is as thread safe as the underlying write() (or write_iter()) is, which of course can be implemented in various ways, e.g. in a device driver, and may or may not use locks according to its needs and its design.
EDIT:
In kernel v4.4 the paths look pretty similar:
SYSCALL_DEFINE3(writev, ..)
vfs_writev(..)
do_readv_writev(..)
and then it depends on whether the write_iter field of the struct file_operations of the struct file is NULL or not, just like in the v4.9 case described above.
VFS (the Virtual File System layer) by itself doesn't guarantee atomicity of the writev() call. It just calls the filesystem-specific .write_iter method of struct file_operations.
It is the responsibility of the specific filesystem implementation to make the method write to the file atomically.
For example, in the ext4 filesystem the function ext4_file_write_iter uses
mutex_lock(&inode->i_mutex);
to make the write atomic.
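To make the "it is the implementation's responsibility" point concrete, here is a hypothetical sketch of a driver that serializes its own .write_iter with a mutex; the my_* names are made up and do not come from any real filesystem or driver.

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/uio.h>

static DEFINE_MUTEX(my_write_lock);     /* hypothetical device-wide lock */
static char my_buf[4096];               /* hypothetical backing buffer */

static ssize_t my_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
    size_t len = min_t(size_t, iov_iter_count(from), sizeof(my_buf));
    size_t copied;

    /* one writer at a time: the whole iovec is consumed inside the lock,
     * so two concurrent writev() calls cannot interleave their segments
     * (file offset handling is omitted for brevity) */
    mutex_lock(&my_write_lock);
    copied = copy_from_iter(my_buf, len, from);
    mutex_unlock(&my_write_lock);

    return copied;
}

static const struct file_operations my_fops = {
    .owner      = THIS_MODULE,
    .write_iter = my_write_iter,
};

Without such a lock in the implementation, nothing at the VFS level stops two writev() calls on the same file from interleaving.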
Found it in fs.h:
static inline void file_start_write(struct file *file)
{
    if (!S_ISREG(file_inode(file)->i_mode))
        return;
    __sb_start_write(file_inode(file)->i_sb, SB_FREEZE_WRITE, true);
}
and then in super.c:
/*
 * This is an internal function, please use sb_start_{write,pagefault,intwrite}
 * instead.
 */
int __sb_start_write(struct super_block *sb, int level, bool wait)
{
    bool force_trylock = false;
    int ret = 1;

#ifdef CONFIG_LOCKDEP
    /*
     * We want lockdep to tell us about possible deadlocks with freezing
     * but it's it bit tricky to properly instrument it. Getting a freeze
     * protection works as getting a read lock but there are subtle
     * problems. XFS for example gets freeze protection on internal level
     * twice in some cases, which is OK only because we already hold a
     * freeze protection also on higher level. Due to these cases we have
     * to use wait == F (trylock mode) which must not fail.
     */
    if (wait) {
        int i;

        for (i = 0; i < level - 1; i++)
            if (percpu_rwsem_is_held(sb->s_writers.rw_sem + i)) {
                force_trylock = true;
                break;
            }
    }
#endif
    if (wait && !force_trylock)
        percpu_down_read(sb->s_writers.rw_sem + level-1);
    else
        ret = percpu_down_read_trylock(sb->s_writers.rw_sem + level-1);
    WARN_ON(force_trylock & !ret);
    return ret;
}
EXPORT_SYMBOL(__sb_start_write);
Thanks again.

Debugging in Threading Building Blocks

I would like to program in Threading Building Blocks with tasks. But how does one do the debugging in practice?
In general the print method is a solid technique for debugging programs.
In my experience with MPI parallelization, the right way to do logging is for each thread to print its debugging information to its own file (say "debug_irank", with irank the rank in MPI_COMM_WORLD) so that logical errors can be found.
How can something similar be achieved with TBB? It is not clear how to access the thread number in the thread pool, as this is obviously something internal to TBB.
Alternatively, one could add an additional index specifying the rank when a task is generated, but this makes the code rather complicated, since the whole program has to take care of that.
First, get the program working with 1 thread. To do this, construct a task_scheduler_init as the first thing in main, like this:
#include "tbb/tbb.h"
int main() {
tbb::task_scheduler_init init(1);
...
}
Be sure to compile with the macro TBB_USE_DEBUG set to 1 so that TBB's checking will be enabled.
If the single-threaded version works, but the multi-threaded version does not, consider using Intel Inspector to spot race conditions. Be sure to compile with TBB_USE_THREADING_TOOLS so that Inspector gets enough information.
Otherwise, I usually start by adding assertions, because the machine can check assertions much faster than I can read log messages. If I am really puzzled about why an assertion is failing, I use printfs and task ids (not thread ids). The easiest way to create a task id is to allocate one by post-incrementing a tbb::atomic<size_t> and storing the result in the task.
If I'm having a really bad day and the printfs are changing program behavior so that the error does not show up, I use "delayed printfs": stuff the printf arguments into a circular buffer, and run printf on the records later, after the failure is detected. Typically for the buffer I use an array of structs containing the format string and a few word-size values, and make the array size a power of two. Then an atomic increment and mask suffice to allocate slots. E.g., something like this:
const size_t bufSize = 1024;

struct record {
    const char* format;
    void *arg0, *arg1;
};

tbb::atomic<size_t> head;
record buf[bufSize];

void recf(const char* fmt, void* a, void* b) {
    record* r = &buf[head++ & bufSize-1];
    r->format = fmt;
    r->arg0 = a;
    r->arg1 = b;
}

void recf(const char* fmt, int a, int b) {
    record* r = &buf[head++ & bufSize-1];
    r->format = fmt;
    r->arg0 = (void*)a;
    r->arg1 = (void*)b;
}
The two recf routines record the format and the values. The casting is somewhat abusive, but on most architectures you can print the record correctly in practice with printf(r->format, r->arg0, r->arg1) even if the 2nd overload of recf created the record.