strtod() and sprintf() inconsistency under GCC and MSVC

I'm working on a cross-platform app for Windows and Mac OS X, and I have a problem with two standard C library functions:
strtod() - string-to-double conversion
sprintf() - when used for outputting double-precision floating point numbers
Their GCC and MSVC versions return different results in some digits of the mantissa, which becomes crucial when the exponent value is large. An example:
MSVC: 9,999999999999999500000000000000e+032
GCC: 9,999999999999999455752309870428e+32
MSVC: 9,999999999999999500000000000000e+033
GCC: 9,999999999999999455752309870428e+33
MSVC: 9,999999999999999700000000000000e+034
GCC: 9,999999999999999686336610791798e+34
The input test numbers have an identical binary representation under MSVC and GCC.
I'm looking for a well-tested cross-platform open-source implementation of those functions, or just for a pair of functions that would correctly and consistently convert double to string and back.
I've already tried the GCC C library (glibc) implementation, but the code is long and heavily dependent on other source files, so I expect the adaptation to be difficult.
What implementations of string-to-double and double-to-string functions would you recommend?

Converting between floating point numbers and strings is hard - very hard. There are numerous papers on the subject, including:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
How to Print Floating-Point Numbers Accurately
How to Read Floating-Point Numbers Accurately
General Decimal Arithmetic
The last of those is a treasure trove of information on floating point decimal arithmetic.
The GNU glibc implementation is likely to be about as good as it gets - but it won't be short or simple.
Addressing the examples
A double normally stores 16 (some might argue 17) significant decimal digits. MSVC is processing 17 digits. Anything beyond that is noise. GCC is doing as you ask it, but there aren't enough bits in a double to warrant the extra 14 digits you are requesting. If you had 16-byte 'long double' values (SPARC, PPC, Intel x86_64 for Mac), then you might warrant 32 significant figures. However, the differences you are showing are a matter of quality of implementation (QoI); I might even argue that MS is doing a better job than GCC/glibc here (and I don't often say that!).
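To illustrate the 17-digit point, here is a small check I am adding (not part of the original answer): printing with %.17g captures every bit of an IEEE double, so strtod() recovers the original value exactly, while any digits beyond that carry no information.
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double x = 9.9999999999999995e+32;       /* close to the questioner's first example */
    char buf[64];
    snprintf(buf, sizeof buf, "%.17g", x);   /* 17 significant digits suffice to round-trip */
    double y = strtod(buf, NULL);
    printf("%s round-trips exactly: %s\n", buf, (x == y) ? "yes" : "no");
    return 0;
}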

The only algorithm I know for printing the exact value of a floating point number in decimal is as follows:
Convert the mantissa to a decimal integer. You can either do this by pulling apart the bits to read the mantissa directly, or you can write a messy floating point loop that first multiplies the value by a power of two to put it in the range 1<=x<10, then pulls off a digit at a time by casting to int, subtracting, and multiplying by 10.
Apply the exponent by repeatedly multiplying or dividing by 2. This is an operation on the string of decimal digits you generated. Every ~3 multiplications will add an extra digit to the left. Every single division will add an extra digit to the right.
It's slow and ugly but it works...
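As a rough illustration of the first step, here is a minimal sketch I am adding of the "messy floating point loop" variant (assumption: a positive, finite input; a production implementation would pull apart the mantissa bits instead, precisely because this loop loses exactness in the low digits):
#include <stdio.h>

void print_digits(double x) {
    int twos = 0;                  /* net number of doublings applied */
    while (x >= 10.0) { x /= 2.0; twos--; }
    while (x < 1.0)   { x *= 2.0; twos++; }
    for (int i = 0; i < 17; i++) { /* a double carries at most 17 significant digits */
        int d = (int)x;            /* current leading decimal digit */
        putchar('0' + d);
        x = (x - d) * 10.0;
    }
    printf(" (implicit decimal point after the first digit; multiply by 2^%d)\n", -twos);
}

int main(void) {
    print_digits(123.456);
    return 0;
}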

The following function dtoa returns a string that losslessly converts back into the same double.
If you rewrite aisd to test against all of your string-to-double implementations, you'll have output that is portable among them.
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <float.h>
// Return whether a string represents the given double.
int aisd(double f, char* s) {
double r;
return sscanf(s, "%lf", &r) == 1 && r == f;
}
// Return the shortest lossless string representation of an IEEE double.
// Guaranteed to fit in 23 characters (including the final '\0').
char* dtoa(char* res, double f) {
int i, j, lenF = 1e9;
char fmt[8];
if (f > DBL_MAX) { sprintf(res, "1e999"); return res; } // converts to Inf
if (f < -DBL_MAX) { sprintf(res, "-1e999"); return res; } // converts to -Inf
if (isnan(f)) { sprintf(res, "NaN"); return res; } // NaNs don't work under MSVCRT
int e = f ? (int)floor(log10(fabs(f))) + 1 : 0; // only for finite input; log10 misbehaves on 0, Inf, NaN
// compute the shortest representation without exponent ("123000", "0.15")
if (!f || (e > -4 && e < 21)) {
for (i=0; i<=20; i++) {
sprintf(fmt, "%%.%dlf", i);
sprintf(res, fmt, f);
if (aisd(f, res)) { lenF = strlen(res); break; }
}
}
if (!f) return res;
// compute the shortest representation with exponent ("123e3", "15e-2")
for (i=0; i<19; i++) {
sprintf(res, "%.0lfe%d", f * pow(10,-e), e); if (aisd(f, res)) break;
j = strlen(res); if (j >= lenF) break;
while (res[j] != 'e') j--;
res[j-1]--; if (aisd(f, res)) break; // try mantissa -1
res[j-1]+=2; if (aisd(f, res)) break; // try mantissa +1
e--;
}
if (lenF <= strlen(res)) sprintf(res, fmt, f);
return res;
}
See Can't get a NaN from the MSVCRT strtod/sscanf/atof functions for the MSVCRT NaN problem. If you don't need to recognize NaNs, you can output infinity ("1e999") when you get one.
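For reference, a minimal driver for the two functions above might look like this (my addition; it assumes the definitions and headers shown above):
int main(void) {
    char buf[23];                   /* dtoa guarantees at most 23 chars */
    double samples[] = { 0.1 + 0.2, 1e33, 1.0 / 3.0 };
    for (int i = 0; i < 3; i++)
        printf("%s\n", dtoa(buf, samples[i]));
    return 0;
}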

Related

unsigned integer division rounding AVR GCC

I am struggling to understand how to divide an unsigned integer by a factor of 10, rounding to nearest the way a float would round.
uint16_t val = 331 / 10; // two decimals to one decimal place: 3.31 to 3.3
uint16_t val = 329 / 10; // two decimals to one decimal place: 3.29 to 3.3 (not 3.2)
I would like both of these to round to 33 (the 1-decimal-place equivalent of 3.3).
I'm sure the answer is simple if I were more knowledgeable about how processors perform integer division.
Since integer division truncates the result toward zero, just add half of the divisor beforehand, whatever it is. I.e.:
uint16_t val = ((uint32_t)x + 5) / 10; // convert to 32-bit to avoid overflow
or in more general form:
static inline uint16_t divideandround(uint16_t dividend, uint16_t divisor) {
return ((uint32_t)dividend + (divisor >> 1)) / divisor;
}
If you are sure there will be no 16-bit overflow (i.e. values will never exceed 65530), you can speed up the calculation by keeping the values 16-bit:
uint16_t val = (x + 5) / 10;
I think I have worked it out. This seems to give me the right answer; please let me know if I am actually wrong and it fails.
uint16_t val = 329;
if (val%10>=5)
{
val = (val+5)/10;
}
else
{
val = val/10;
}
You can do it with just one 16-bit divmod operation:
#include <stdint.h>
uint16_t udiv10_round (uint16_t n)
{
uint16_t q = n / 10;
uint16_t r = n % 10;
return r >= 5 ? q + 1 : q;
}
When you are optimizing for size (-Os), avr-gcc will compute both quotient and remainder by means of one library call to __udivmodhi4.
When you are optimizing for speed (-O2), avr-gcc might avoid __udivmodhi4 altogether¹ and instead perform a 16×16=32 multiplication with a factor of 0xcccd, so that the quotient and remainder are easy to compute from the high part of that product.
¹ This happens if the MCU you are compiling for supports MUL. If MUL is not supported, avr-gcc still uses the divmod operation, as a 16×16=32 multiplication would not gain anything.
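To make the footnote concrete, here is a C sketch I am adding of the reciprocal-multiplication idea (an illustration of the technique, not avr-gcc's actual output): 0xCCCD / 2^19 slightly over-approximates 1/10, and the error is small enough that the shifted product equals n / 10 for every 16-bit n.
#include <stdint.h>
#include <stdio.h>

static inline uint16_t udiv10_via_mul(uint16_t n) {
    return (uint16_t)(((uint32_t)n * 0xCCCDu) >> 19);
}

static inline uint16_t udiv10_round_via_mul(uint16_t n) {
    uint16_t q = udiv10_via_mul(n);
    uint16_t r = (uint16_t)(n - q * 10);   /* remainder recovered from the quotient */
    return (r >= 5) ? (uint16_t)(q + 1) : q;
}

int main(void) {
    for (uint32_t n = 0; n <= 65535u; n++)  /* exhaustive check of the magic constant */
        if (udiv10_via_mul((uint16_t)n) != (uint16_t)(n / 10u))
            printf("mismatch at %u\n", n);
    printf("329 -> %u\n", (unsigned)udiv10_round_via_mul(329));  /* prints 33 */
    return 0;
}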

GMP setting last digit to zero

I'm looking for the fastest way to set the last digit of a positive number l, declared as mpz_t, to zero. I didn't find a function that could do this already. For example, 6531489321483 should be changed to 6531489321480.
Update
It appears that subtraction and modulo is the superior method for zeroing out the last digit with mpz_t types. Just as @MarkDickinson and @MarcGlisse pointed out, the asymptotic behavior greatly favors using mpz_tdiv_r_ui (or mpz_fdiv_r_ui) over mpz_tdiv_q_ui followed by mpz_mul_ui. My original benchmarks were on relatively small numbers (25 digits). I retested on a 175 digit number and the sub_mod method was nearly 40% faster.
Test value: 1234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789
Result with div_mul: 1234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456780
Result with sub_mod: 1234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456780
time with division followed by multiplication: 6.145656
time with subtraction and modulo: 4.413998
And with a 350 digit number we see that sub_mod is around 85% faster:
Test value: 12345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789
Result with div_mul: 12345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456780
Result with sub_mod: 12345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456789123456789876543212345678912345678987654321234567891234567898765432123456780
time with division followed by multiplication: 10.256122
time with subtraction and modulo: 5.522990
It should be noted that whether we use mpz_tdiv_r_ui or mpz_fdiv_r_ui, the results are almost identical.
Since the sub_mod method was only marginally slower with smaller numbers, it seems reasonable to simply use this method in all cases.
It would be nice to test this on different compilers. I'm currently using clang 5.0.1.
Original
Benchmarks on my machine show that division followed by multiplication is faster than finding the remainder via modulo operator and subtracting.
#include <stdio.h>
#include <time.h>
#include <gmp.h>
void div_mul(mpz_t x) {
mpz_tdiv_q_ui(x, x, 10u);
mpz_mul_ui(x, x, 10u);
}
// Maybe this could be simpler?
void sub_mod(mpz_t x, mpz_t y) {
// N.B. mpz_mod_ui is equivalent to mpz_fdiv_r_ui. Changed to
// mpz_tdiv_r_ui for consistency with div_mul.
mpz_tdiv_r_ui(y, x, 10u);
mpz_sub(x, x, y);
}
Main:
int main() {
mpz_t testVal;
mpz_init(testVal);
mpz_set_str(testVal, "1234567898765432123456789", 10);
gmp_printf("Test value: %Zd\n", testVal);
mpz_t x;
mpz_t y;
mpz_init(x);
mpz_init(y);
mpz_set(x, testVal);
div_mul(x);
gmp_printf("Result with div_mul: %Zd\n", x);
mpz_set(x, testVal);
sub_mod(x, y);
gmp_printf("Result with sub_mod: %Zd\n", x);
const int limit = 100000000;
const double checkPoint0 = (double) clock() / CLOCKS_PER_SEC;
for (int i = 0; i < limit; ++i) {
mpz_set(x, testVal);
div_mul(x);
}
const double checkPoint1 = (double) clock() / CLOCKS_PER_SEC;
const double time_div_mul = checkPoint1 - checkPoint0;
printf("time with division followed by multiplication: %f\n", time_div_mul);
const double checkPoint2 = (double) clock() / CLOCKS_PER_SEC;
for (int i = 0; i < limit; ++i) {
mpz_set(x, testVal);
sub_mod(x, y);
}
const double checkPoint3 = (double) clock() / CLOCKS_PER_SEC;
const double time_sub_mod = checkPoint3 - checkPoint2;
printf("time with subtraction and modulo: %f\n", time_sub_mod);
mpz_clear(testVal);
mpz_clear(x);
mpz_clear(y);
return 0;
}
Output:
Test value: 1234567898765432123456789
Result with div_mul: 1234567898765432123456780
Result with sub_mod: 1234567898765432123456780
time with division followed by multiplication: 2.941251
time with subtraction and modulo: 3.171949
I suspect that one of the reasons the latter method is slightly slower is that two variables are needed, as compound operations on the same line are not available in the C API. If we could use gmpxx, we could write x - x % 10.
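As an aside I will add here: the C API does offer mpz_tdiv_ui, which returns the remainder directly as an unsigned long instead of storing it in an mpz_t, so the scratch variable y can be dropped (I have not benchmarked this variant):
#include <gmp.h>

// One-variable variant: the remainder comes back as an unsigned long,
// so no second mpz_t is needed.
void sub_mod_ui(mpz_t x) {
    mpz_sub_ui(x, x, mpz_tdiv_ui(x, 10u));
}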
Another thought as to why the first method is faster is that div_mul involves two operations with unsigned integers, while the sub_mod method involves an operation with an unsigned integer followed by an operation between mpz_t values.
I tried to get this reproduced on ideone.com but could not get gmp.h loaded. I opted to implement a similar benchmark with type long long int just for fun. You will note the presence of volatile and that the limit is one billion instead of one hundred million as seen above. The volatile was needed to keep the for loop from being optimized away.
Wouldn't converting the number to a string and changing the last character be the fastest way?

How do I properly truncate a 64 bit float to a 32 bit truncated float and back (dropping precision) with Node.js?

So obviously JavaScript has no native 32-bit float number type, but if we're trying to store big data efficiently and many of our values are floats no greater than 100,000 with exactly 2 decimal places, it makes sense to store the 64-bit value in 32 bits by dropping the bits of precision that we don't need.
I tried doing this by simply writing to a 64 bit BE float buffer like so and slicing the first 4 bytes:
// float32 = Number between 0.00 and 100000.00
const setFloat32 = (float32) => {
b64.writeDoubleBE(float32, 0) // b64 = 64 bit buffer
b32 = b64.slice(0, 4)
return b32;
}
And reading it by adding on 4 empty bytes:
// b32 = the 32 bit buffer from the previous func
const readFloat32 = (b32) => {
// b32Empty = empty 32 bit buffer
return Buffer.concat([b32, b32Empty]).readDoubleBE(0);
}
But this modifies flat decimals like:
1.85 => 1.8499994277954102
2.05 => 2.049999237060547
How can I fix my approach to do this correctly, and do so in the most efficient manner for read speed?
If you only want to keep two decimals of precision, you can convert your value to a shifted integer and store that:
function shiftToInteger(val, places) {
// multiply by a constant to shift the decimals you want to keep into
// integer positions, then use Math.round() or Math.floor()
// to truncate the rest of the decimals - depending upon which behavior you want
// then return the shifted integer that will fit into a U32 for storage
return Math.round(val * (10 ** places));
}
This creates a shifted integer that can then be stored in a 32 bit value (with the value limits you described), such as a Uint32Array or Int32Array. To use it when you retrieve it from storage, you would then divide it by 100 to convert it back to a standard Javascript float for usage.
The key is to convert whatever decimal precision you want to keep to an integer so you can store that in a non-float type of value that is just large enough for your max anticipated value. You gain storage efficiency because you're using all the storage bits for the desired precision rather than wasting a lot of unnecessary storage bits on decimal precision that you don't need to keep.
Here's an example:
function shiftToInteger(val, places) {
return Math.round(val * (10 ** places));
}
function shiftToFloat(integer, places) {
return integer / (10 ** places);
}
let x = new Uint32Array(10);
x[0] = shiftToInteger(1.85, 2);
console.log(x[0]); // output shifted integer value
console.log(shiftToFloat(x[0], 2)); // convert back to decimal value

On Linux 64-bit, can ptrace() return a double?

Assuming addr is address of a local variable on stack, are the following correct ways for retrieving the values of variables (ChildPid is tracee's id)?
double data = (double) ptrace(PTRACE_PEEKDATA, ChildPid, addr, 0);
float data = (float) ptrace(PTRACE_PEEKDATA, ChildPid, addr, 0);
Thanks.
The documentation says that PTRACE_PEEKDATA returns a word. It also says
The size of a "word" is determined by the operating-system variant (e.g., for 32-bit Linux it is 32 bits).
So you can't reliably use a single ptrace() call to get at a double on a 32-bit system, just half of it. The other half's address probably depends on whether the stack grows up or down. On a 64-bit system you'd have to figure out which half of the returned word holds the float.
So... it's all very system dependent on what you have to do.
Casting long to double won't get you the desired result. Casting numbers converts the numeric value; it doesn't copy bits. What you need is something like:
long pt = ptrace(PTRACE_PEEKDATA, ChildPid, addr, 0);
double result;
assert(sizeof(pt) == sizeof(result) && "Oops, wrong word size!");
memcpy (&result, &pt, sizeof(result));
To get a float, you need to know which half of the word it occupies (normally addr should be aligned to a word boundary). Thus you need something like the following:
long pt = ptrace(PTRACE_PEEKDATA, ChildPid, addr, 0);
float result;
assert(sizeof(pt) == 2*sizeof(result) && "Oops, wrong word size!");
// either this (for the lower half of the word)
memcpy (&result, &pt, sizeof(result));
// or this (for the upper half of the word)
memcpy (&result, ((char*)&pt)+sizeof(result), sizeof(result));
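Putting the pieces together for the 64-bit case, here is a sketch I am adding: because PTRACE_PEEKDATA delivers the data in its return value, errno has to be cleared first to distinguish a failure from a word that happens to be -1.
#include <sys/ptrace.h>
#include <sys/types.h>
#include <errno.h>
#include <string.h>

/* Assumes a 64-bit Linux target where sizeof(long) == sizeof(double) == 8. */
int peek_double(pid_t child, void *addr, double *out) {
    errno = 0;
    long word = ptrace(PTRACE_PEEKDATA, child, addr, 0);
    if (word == -1 && errno != 0)
        return -1;                        /* genuine ptrace failure */
    memcpy(out, &word, sizeof *out);      /* reinterpret the bits, don't cast */
    return 0;
}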

Longest Subsequence with all occurrences of a character at 1 place

In a sequence S of n characters, each character may occur many times. You want to find the longest subsequence of S where all occurrences of the same character are together in one place.
For example, if S = aaaccaaaccbccbbbab, then the longest such subsequence (the answer) is aaaaaaccccbbbb, i.e. aaa__aaacc_ccbbb_b.
In other words, any alphabet character that appears in S may only appear in one contiguous block in the subsequence. If possible, give a polynomial time algorithm to determine the solution.
Design
Below I give a C++ implementation of a dynamic programming algorithm that solves this problem. An upper bound on the running time (which is probably not tight) is given by O(g*(n^2 + log(g))), where n is the length of the string and g is the number of distinct subsequences in the input. I don't know a good way to characterise this number, but it can be as bad as O(2^n) for a string consisting of n distinct characters, making this algorithm exponential-time in the worst case. It also uses O(ng) space to hold the DP memoisation table. (A subsequence, unlike a substring, may consist of noncontiguous characters from the original string.) In practice, the algorithm will be fast whenever the number of distinct characters is small.
The two key ideas used in coming up with this algorithm were:
Every subsequence of a length-n string is either (a) the empty string or (b) a subsequence whose first element is at some position 1 <= i <= n and which is followed by another subsequence on the suffix beginning at position i+1.
If we append characters (or more specifically character positions) one at a time to a subsequence, then in order to build all and only the subsequences that satisfy the validity criteria, whenever we add a character c, if the previous character added, p, was different from c, then it is no longer possible to add any p characters later on.
There are at least 2 ways to manage the second point above. One way is to maintain a set of disallowed characters (e.g. using a 256-bit array), which we add to as we add characters to the current subsequence. Every time we want to add a character to the current subsequence, we first check whether it is allowed.
Another way is to realise that whenever we have to disallow a character from appearing later in the subsequence, we can achieve this by simply deleting all copies of the character from the remaining suffix, and using this (probably shorter) string as the subproblem to solve recursively. This strategy has the advantage of making it more likely that the solver function will be called multiple times with the same string argument, which means more computation can be avoided when the recursion is converted to DP. This is how the code below works.
The recursive function ought to take 2 parameters: the string to work on, and the character most recently appended to the subsequence that the function's output will be appended to. The second parameter must be allowed to take on a special value to indicate that no characters have been appended yet (which happens in the top-level recursive case). One way to accomplish this would be to choose a character that does not appear in the input string, but this introduces a requirement not to use that character. The obvious workaround is to pass a 3rd parameter, a boolean indicating whether or not any characters have already been added. But it's slightly more convenient to use just 2 parameters: a boolean indicating whether any characters have been added yet, and a string. If the boolean is false, then the string is simply the string to be worked on. If it is true, then the first character of the string is taken to be the last character added, and the rest is the string to be worked on. Adopting this approach means the function takes only 2 parameters, which simplifies memoisation.
As I said at the top, this algorithm is exponential-time in the worst case. I can't think of a way to completely avoid this, but some optimisations can help certain cases. One that I've implemented is to always add maximal contiguous blocks of the same character in a single step, since if you add at least one character from such a block, it can never be optimal to add fewer than the entire block. Other branch-and-bound-style optimisations are possible, such as keeping track of a globally best string so far and cutting short the recursion whenever we can be certain that the current subproblem cannot produce a longer one -- e.g. when the number of characters added to the subsequence so far, plus the total number of characters remaining, is less than the length of the best subsequence so far.
Code
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#include <functional>
#include <map>
using namespace std;
class RunFinder {
string s;
map<string, string> memo[2]; // DP matrix
// If skip == false, compute the longest valid subsequence of t.
// Otherwise, compute the longest valid subsequence of the string
// consisting of t without its first character, taking that first character
// to be the last character of a preceding subsequence that we will be
// adding to.
string calc(string const& t, bool skip) {
map<string, string>::iterator m(memo[skip].find(t));
// Only calculate if we haven't already solved this case.
if (m == memo[skip].end()) {
// Try the empty subsequence. This is always valid.
string best;
// Try starting a subsequence whose leftmost position is one of
// the remaining characters. Instead of trying each character
// position separately, consider only contiguous blocks of identical
// characters, since if we choose one character from this block there
// is never any harm in choosing all of them.
for (string::const_iterator i = t.begin() + skip; i != t.end();) {
if (t.end() - i < best.size()) {
// We can't possibly find a longer string now.
break;
}
string::const_iterator next = find_if(i + 1, t.end(), bind1st(not_equal_to<char>(), *i));
// Just use next - 1 to cheaply give us an extra char at the start; this is safe
string u(next - 1, t.end());
u[0] = *i; // Record the previous char for the recursive call
if (skip && *i != t[0]) {
// We have added a new segment that is different from the
// previous segment. This means we can no longer use the
// character from the previous segment.
u.erase(remove(u.begin() + 1, u.end(), t[0]), u.end());
}
string v(i, next);
v += calc(u, true);
if (v.size() > best.size()) {
best = v;
}
i = next;
}
m = memo[skip].insert(make_pair(t, best)).first;
}
return (*m).second;
}
public:
RunFinder(string s) : s(s) {}
string calc() {
return calc(s, false);
}
};
int main(int argc, char **argv) {
RunFinder rf(argv[1]);
cout << rf.calc() << '\n';
return 0;
}
Example results
C:\runfinder>stopwatch runfinder aaaccaaaccbccbbbab
aaaaaaccccbbbb
stopwatch: Terminated. Elapsed time: 0ms
stopwatch: Process completed with exit code 0.
C:\runfinder>stopwatch runfinder abbaaasdbasdnfa,mnbmansdbfsbdnamsdnbfabbaaasdbasdnfa,mnbmansdbfsbdnamsdnbfabbaaasdbasdnfa,mnbmansdbfsbdnamsdnbfabbaaasdbasdnfa,mnbmansdbfsbdnamsdnbf
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,mnnsdbbbf
stopwatch: Terminated. Elapsed time: 609ms
stopwatch: Process completed with exit code 0.
C:\runfinder>stopwatch -v runfinder abcdefghijklmnopqrstuvwxyz123456abcdefghijklmnop
stopwatch: Command to be run: <runfinder abcdefghijklmnopqrstuvwxyz123456abcdefghijklmnop>.
stopwatch: Global memory situation before commencing: Used 2055507968 (49%) of 4128813056 virtual bytes, 1722564608 (80%) of 2145353728 physical bytes.
stopwatch: Process start time: 21/11/2012 02:53:14
abcdefghijklmnopqrstuvwxyz123456
stopwatch: Terminated. Elapsed time: 8062ms, CPU time: 7437ms, User time: 7328ms, Kernel time: 109ms, CPU usage: 92.25%, Page faults: 35473 (+35473), Peak working set size: 145440768, Peak VM usage: 145010688, Quota peak paged pool usage: 11596, Quota peak non paged pool usage: 1256
stopwatch: Process completed with exit code 0.
stopwatch: Process completion time: 21/11/2012 02:53:22
The last run, which took 8s and used 145Mb, shows how it can have problems with strings containing many distinct characters.
EDIT: Added in another optimisation: we now exit the loop that looks for the place to start the subsequence if we can prove that it cannot possibly be better than the best one discovered so far. This drops the time needed for the last example from 32s down to 8s!
EDIT: This solution is wrong for OP's problem. I'm not deleting it because it might be right for someone else. :)
Consider a related problem: find the longest run of consecutive occurrences of a given character in S. This can be solved in linear time:
char c = . . .; // the given character
int start = -1;
int bestStart = -1;
int bestLength = 0;
int currentLength = 0;
for (int i = 0; i < S.length; ++i) {
if (S.charAt(i) == c) {
if (start == -1) {
start = i;
}
++currentLength;
} else {
if (currentLength > bestLength) {
bestStart = start;
bestLength = currentLength;
}
start = -1;
currentLength = 0;
}
}
// Check the final run, in case S ends with a run of c.
if (currentLength > bestLength) {
bestStart = start;
bestLength = currentLength;
}
if (bestStart >= 0) {
// longest sequence of c starts at bestStart
} else {
// character c does not occur in S
}
If the number of distinct characters (call it m) is reasonably small, just apply this algorithm in parallel to each character, as sketched below. This can be done easily by converting start, bestStart, currentLength, bestLength to arrays of length m. At the end, scan the bestLength array for the index of the largest entry and use the corresponding entry in the bestStart array as your answer. The total complexity is O(mn).
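Here is a C sketch I am adding of that array-based bookkeeping (the answer above uses Java; the translation is mechanical, with m fixed at 256 byte values):
#include <stdio.h>
#include <string.h>

#define M 256  /* one slot per possible byte value */

/* Each input character extends its own run and implicitly ends the runs of
 * all other characters; written naively this is the stated O(mn). */
void longest_uniform_run(const char *S) {
    int start[M], bestStart[M], bestLength[M], currentLength[M];
    for (int c = 0; c < M; c++) {
        start[c] = bestStart[c] = -1;
        bestLength[c] = currentLength[c] = 0;
    }
    size_t n = strlen(S);
    for (size_t i = 0; i < n; i++) {
        unsigned char ch = (unsigned char)S[i];
        for (int c = 0; c < M; c++) {
            if (c == ch) {
                if (start[c] == -1) start[c] = (int)i;
                ++currentLength[c];
            } else if (currentLength[c] > 0) {
                if (currentLength[c] > bestLength[c]) {
                    bestStart[c] = start[c];
                    bestLength[c] = currentLength[c];
                }
                start[c] = -1;
                currentLength[c] = 0;
            }
        }
    }
    int best = 0;
    for (int c = 0; c < M; c++) {
        if (currentLength[c] > bestLength[c]) {  /* close runs that reach the end of S */
            bestStart[c] = start[c];
            bestLength[c] = currentLength[c];
        }
        if (bestLength[c] > bestLength[best]) best = c;
    }
    if (bestLength[best] > 0)
        printf("longest run: %d x '%c' starting at index %d\n",
               bestLength[best], best, bestStart[best]);
}

int main(void) {
    longest_uniform_run("aaaccaaaccbccbbbab");  /* longest run is "aaa" at index 0 */
    return 0;
}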
import java.util.*;
public class LongestSubsequence {
/**
* @param args
*/
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
String str = sc.next();
execute(str);
}
static void execute(String str) {
int[] hash = new int[256];
String ans = "";
for (int i = 0; i < str.length(); i++) {
char temp = str.charAt(i);
hash[temp]++;
}
for (int i = 0; i < hash.length; i++) {
if (hash[i] != 0) {
for (int j = 0; j < hash[i]; j++)
ans += (char) i;
}
}
System.out.println(ans);
}
}
Space: O(256), which is really O(1), since 256 is a constant.
Time: O(n)
