I have an application that plays back audio. It receives encoded audio data over RTP and decodes it into a 16-bit array. The decoded 16-bit array is converted to an 8-bit array (byte array), as this is required for some other functionality.
Even though audio playback is working, it breaks up continuously and the audio output is very hard to recognise. If I listen carefully I can tell it is playing the correct audio.
I suspect this is because I convert the 16-bit data stream into a byte array and use write(byte[], int, int, AudioTrack.WRITE_NON_BLOCKING) of the AudioTrack class for playback.
Therefore I converted the byte array back to a short array and used the write(short[], int, int, AudioTrack.WRITE_NON_BLOCKING) method to see if that would resolve the problem.
However, now there is no sound at all. In the debug output I can see that the short array has data.
What could be the reason?
Here is the AudioTrack initialization:
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
minimumBufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        minimumBufferSize,
        AudioTrack.MODE_STREAM);
Here is the code that converts the short array to a byte array:
for (int i = 0; i < internalBuffer.length; i++) {
    bufferIndex = i * 2;
    buffer[bufferIndex] = shortToByte(internalBuffer[i])[0];
    buffer[bufferIndex + 1] = shortToByte(internalBuffer[i])[1];
}
Here is the method that converts the byte array back to a short array:
public short[] getShortAudioBuffer(byte[] b){
    short audioBuffer[] = null;
    int index = 0;
    int audioSize = 0;
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);

    if ((b == null) && (b.length < 2)){
        return null;
    } else {
        audioSize = (b.length - (b.length % 2));
        audioBuffer = new short[audioSize / 2];
    }

    if ((audioSize / 2) < 2)
        return null;

    byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    for(int i = 0; i < audioSize / 2; i++){
        index = i * 2;
        byteBuffer.put(b[index]);
        byteBuffer.put(b[index + 1]);
        audioBuffer[i] = byteBuffer.getShort(0);
        byteBuffer.clear();
        System.out.print(Integer.toHexString(audioBuffer[i]) + " ");
    }
    System.out.println();
    return audioBuffer;
}
Audio is decoded using the Opus library, and the configuration is as follows:
opus_decoder_ctl(dec,OPUS_SET_APPLICATION(OPUS_APPLICATION_AUDIO));
opus_decoder_ctl(dec,OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC));
opus_decoder_ctl(dec,OPUS_SET_FORCE_CHANNELS(OPUS_AUTO));
opus_decoder_ctl(dec,OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
opus_decoder_ctl(dec,OPUS_SET_PACKET_LOSS_PERC(0));
opus_decoder_ctl(dec,OPUS_SET_COMPLEXITY(10)); // highest complexity
opus_decoder_ctl(dec,OPUS_SET_LSB_DEPTH(16)); // 16bit = two byte samples
opus_decoder_ctl(dec,OPUS_SET_DTX(0)); // default - not using discontinuous transmission
opus_decoder_ctl(dec,OPUS_SET_VBR(1)); // use variable bit rate
opus_decoder_ctl(dec,OPUS_SET_VBR_CONSTRAINT(0)); // unconstrained
opus_decoder_ctl(dec,OPUS_SET_INBAND_FEC(0)); // no forward error correction
Let's assume you have a short[] array which contains the 16-bit, one-channel data to be played.
Each sample is then a value between -32768 and 32767 which represents the signal amplitude at that exact moment, with 0 representing the middle point (no signal). This array can be passed to the audio track with the ENCODING_PCM_16BIT encoding.
But things get weird when ENCODING_PCM_8BIT is used (see AudioFormat).
In this case each sample is encoded by one byte. But each byte is unsigned, which means its value is between 0 and 255, with 128 representing the middle point.
Java has no unsigned byte type; byte is signed, i.e. the values -128...-1 represent the actual values 128...255. So you have to be careful when converting to the byte array, otherwise the result will be noise with the source sound barely recognizable.
short[] input16 = ... // the source 16-bit audio data;
byte[] output8 = new byte[input16.length];
for (int i = 0; i < input16.length; i++) {
    // To convert a 16-bit signed sample to 8-bit unsigned:
    // add 128 (for rounding), then shift right by 8 positions,
    // then add 128 to move the value into the range 0..255.
    int sample = ((input16[i] + 128) >> 8) + 128;
    if (sample > 255) sample = 255; // clip overflow
    output8[i] = (byte)(sample);    // cast to the signed byte type
}
To perform the backward conversion, the principle is the same: each single sample is converted to exactly one sample of the output signal.
byte[] input8 = ... // the source 8-bit unsigned audio data;
short[] output16 = new short[input8.length];
for (int i = 0; i < input8.length; i++) {
    // To convert the signed byte back to an unsigned value, use bitwise AND with 0xFF,
    // then subtract the 128 offset,
    // then scale the value up by 256 to fit the 16-bit range.
    output16[i] = (short)(((input8[i] & 0xFF) - 128) * 256);
}
The issue of not being able to convert the data from a byte array to a short array was resolved by using bitwise operators instead of ByteBuffer. It could be due to not setting the correct parameters on the ByteBuffer, or it may simply not be suitable for this kind of conversion.
Nevertheless, implementing the conversion with bitwise operators resolved the problem. Since the original question has been resolved by this approach, please consider this the final answer.
I will raise a separate topic for the playback issue.
Thank you for all your support.
I've been running Blargg's CPU tests through my Game Boy emulator, and the op r,r test shows that my ADC instruction is not working properly, but that ADD is. My understanding is that the only difference between the two is adding the existing carry flag to the second operand before the addition. As such, my ADC code is the following:
void Emu::add8To8Carry(BYTE &a, BYTE b) //4 cycles - 1 byte
{
    if((Flags >> FLAG_CARRY) & 1)
        b++;

    FLAGCLEAR_N;
    halfCarryAdd8_8(a, b); //generates H flag based on addition
    carryAdd8_8(a, b);     //generates C flag appropriately

    a += b;

    if(a == 0)
        FLAGSET_Z;
    else
        FLAGCLEAR_Z;
}
I entered the following into a test ROM:
06 FE 3E 01 88
That leaves A with the value 0 (Flags = B0) when the carry flag is set, and FF (Flags = 00) when it is not. This is how it should work, as far as my understanding goes. However, it still fails the test.
From my research, I believe that flags are affected in an identical manner to ADD. Literally the only change in my code from the working ADD instruction is the addition of the flag check/potential increment in the first two lines, which my test code seems to prove works.
Am I missing something? Perhaps there's a peculiarity with flag states between ADD/ADC? As a side note, SUB instructions also pass, but SBC fails in the same way.
Thanks
The problem is that b is an 8-bit value. If b is 0xff and carry is set, then adding 1 to b wraps it around to 0, and the subsequent add won't generate a carry even though it should. You get similar problems with the half-carry flag if the lower nybble is 0xf.
This might be fixed if you call halfCarryAdd8_8(a, b + 1); and carryAdd8_8(a, b + 1); when carry is set. However, I suspect that those routines also take byte operands, so you may have to change them internally, perhaps by adding the carry as a separate argument so that you can do tmp = a + b + carry; without overflowing b. But I can only speculate without the source of those functions.
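To illustrate that suggestion, here is a standalone sketch (my own code, not the emulator's): the carry is passed in separately and the sum is computed in an int, so neither b nor the result can wrap at 8 bits. The struct and function names are made up for the example.

#include <cstdint>
#include <cassert>

// Hypothetical illustration of the suggested fix: pass the carry separately
// and widen the arithmetic so nothing wraps at 8 bits before the flags are computed.
struct AdcResult { uint8_t value; bool h, c, z; };

static AdcResult adc8(uint8_t a, uint8_t b, bool carry_in)
{
    int carry = carry_in ? 1 : 0;
    int sum = a + b + carry;                        // widened sum, no wrap-around
    AdcResult r;
    r.h = ((a & 0x0F) + (b & 0x0F) + carry) > 0x0F; // carry out of the low nybble
    r.c = sum > 0xFF;                               // carry out of bit 7
    r.value = (uint8_t)sum;
    r.z = (r.value == 0);
    return r;
}

int main()
{
    // The test from the question: ADC A,B with A = 0x01, B = 0xFE and carry set
    // should leave A = 0x00 with Z, H and C set (Flags = B0).
    AdcResult r = adc8(0x01, 0xFE, true);
    assert(r.value == 0x00 && r.z && r.h && r.c);
    return 0;
}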
On a somewhat related note, there's a fairly simple way to check for carry over all the bits:
int sum = a + b;
int no_carry_sum = a ^ b;
int carry_into = sum ^ no_carry_sum;
int half_carry = carry_into & 0x10;
int carry = carry_into & 0x100;
How does that work? Consider that bitwise "xor" gives the expected result of each bit if there is no carry going in to that bit: 0 ^ 0 == 0, 1 ^ 0 == 0 ^ 1 == 1 and 1 ^ 1 == 0. By xoring sum with no_carry_sum we get the bits where the sum differs from the bit-by-bit addition. sum is only different whenever there is a carry into a particular bit position. Thus both the half carry and carry bits can be obtained with almost no overhead.
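As a quick sanity check of that trick (my own example, not from the original answer): 0x3A + 0x26 = 0x60 carries out of the low nybble but not out of bit 7.

#include <cassert>

int main()
{
    int a = 0x3A, b = 0x26;
    int sum = a + b;                     // 0x60
    int no_carry_sum = a ^ b;            // bit-by-bit sum, ignoring carries
    int carry_into = sum ^ no_carry_sum; // bits that received a carry
    assert((carry_into & 0x10) != 0);    // half carry: 0xA + 0x6 overflows the nybble
    assert((carry_into & 0x100) == 0);   // no full carry: the result fits in 8 bits
    return 0;
}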
I'm trying to convert a 24-bit USB audio stream into a 32-bit stream so my microcontroller's peripherals can play happily with the stream (it can only handle 16- or 32-bit data, like most MCUs...).
The following code is what I got from the MCU's manufacturer... it didn't work as expected and I ended up getting really distorted audio.
// Function takes usb stream and processes the data for our peripherals
// #data - usb stream data
// #byte_count - size of stream
void process_usb_stream(uint8_t *data, uint16_t byte_count) {
    // Etc code that gets buffers ready to read the stream...

    // Conversion here!
    int32_t *buffer;
    int sample_count = 0;

    for (int i = 0; i < byte_count; i += 3) {
        buffer[sample_count++] = data[i] | data[i+1] << 8 | data[i+2] << 16;
    }

    // Send buffer to peripherals for them to use...
}
Any help with converting the data from a 24 bit stream to 32 bit stream would be super awesome! This area of work is very hard for me :(
data[...] is a uint8_t. You need to cast that before shifting, because data[...]<<8 and data[...]<<16 are undefined. They'll either be 0 or unchanged, neither of which is what you want.
Also, you need to shift by another 8 bits to get the full range and put the sign bit in the right place.
Also, you're treating the data as if it were in little-endian format. Make sure it is. I'll assume that's correct, so something like this works:
int32_t *buffer;   // assumed to point at storage set up earlier, as in the question
int sample_count = 0;

for (int i = 0; i+3 <= byte_count; ) {
    int32_t v = ((int32_t)data[i++]) << 8;
    v |= ((int32_t)data[i++]) << 16;
    v |= ((int32_t)data[i++]) << 24;
    buffer[sample_count++] = v;
}
Finally, note that this assumes that byte_count is divisible by 3 -- make sure that's true!
This is DSP stuff; consider also posting this question on http://dsp.stackexchange.com.
In DSP the process of changing the bit depth is called scaling.
16-bit resolution has 65536 possible values, 24-bit resolution has 16777216 possible values, and 32-bit has 4294967296 values, so the factor between each step is 256.
According to https://electronics.stackexchange.com/questions/229268/what-is-name-of-process-used-to-change-sample-bit-depth/229271, reduction from 24 bit to 16 bit is called scaling down and is done by dividing each value by 256.
This can be done by shifting every value right by 8 bits:
y = x >> 8
When scaling down this way, the low 8 bits (the LSBs) are lost.
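As a minimal sketch of that shift (my own illustration, assuming the 24-bit sample has already been sign-extended into an int32_t):

#include <cstdint>

// Scale a signed 24-bit sample down to 16 bits by dropping the low 8 bits.
static int16_t scale_24_to_16(int32_t sample24)
{
    return (int16_t)(sample24 >> 8); // divide by 256; the low byte is lost
}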
Scaling up to 32 bit is more complicated and there are several approaches. The idea described here is to multiply each value by a factor between 2⁰ and 2⁸: push the 24-bit value into a 32-bit register so that its most significant bit lands on bit 31 (a shift of 8) while its least significant bit stays at bit 0 (no shift), with the bits in between shifted by intermediate amounts:
data32[31] = data24[23];   // shifted left by 8
data32[22] = data24[14];   // shifted left by 8
...
data32[0]  = data24[0];    // not shifted
and interpolate the bits you do not get with this (linear interpolation).
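The simplest variant of this (my own sketch, not from this answer, and essentially what the shift-based conversion in the earlier answer produces) is to shift the whole value left by 8 bits and leave the low byte at zero; interpolation or dithering would only be needed to fill in those low bits.

#include <cstdint>

// Scale a signed 24-bit sample (already sign-extended into an int32_t) up to
// 32 bits. Multiplying by 256 equals shifting left by 8; the full 24-bit range
// -8388608..8388607 fits in an int32_t after scaling, so there is no overflow.
static int32_t scale_24_to_32(int32_t sample24)
{
    return sample24 * 256;
}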
Maybe there are much better scaling-up algorithms; ask on http://dsp.stackexchange.com.
See also http://blog.bjornroche.com/2013/05/the-abcs-of-pcm-uncompressed-digital.html for the scaling up problem...
My program recently encountered a weird segfault when running. I want to know if somebody has met this error before and how it could be fixed. Here is some more info:
Basic info:
CentOS 5.2, kernel version is 2.6.18
g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50)
CPU: Intel x86 family
libstdc++.so.6.0.8
My program will start multiple threads to process data. The segfault occurred in one of the threads.
Though it's a multi-thread program, the segfault seemed to occur on a local std::string object. I'll show this in the code snippet later.
The program is compiled with -g, -Wall and -fPIC, and without -O2 or other optimization options.
The core dump info:
Core was generated by `./myprog'.
Program terminated with signal 11, Segmentation fault.
#0 0x06f6d919 in __gnu_cxx::__exchange_and_add(int volatile*, int) () from /usr/lib/libstdc++.so.6
(gdb) bt
#0 0x06f6d919 in __gnu_cxx::__exchange_and_add(int volatile*, int) () from /usr/lib/libstdc++.so.6
#1 0x06f507c3 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::assign(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/lib/libstdc++.so.6
#2 0x06f50834 in std::basic_string<char, std::char_traits<char>, std::allocator<char> >::operator=(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) () from /usr/lib/libstdc++.so.6
#3 0x081402fc in Q_gdw::ProcessData (this=0xb2f79f60) at ../../../myprog/src/Q_gdw/Q_gdw.cpp:798
#4 0x08117d3a in DataParser::Parse (this=0x8222720) at ../../../myprog/src/DataParser.cpp:367
#5 0x08119160 in DataParser::run (this=0x8222720) at ../../../myprog/src/DataParser.cpp:338
#6 0x080852ed in Utility::__dispatch (arg=0x8222720) at ../../../common/thread/Thread.cpp:603
#7 0x0052c832 in start_thread () from /lib/libpthread.so.0
#8 0x00ca845e in clone () from /lib/libc.so.6
Please note that the segfault begins within the basic_string::operator=().
The related code:
(I've shown more code than might be needed; please ignore the coding style for now.)
int Q_gdw::ProcessData()
{
    char tmpTime[10+1] = {0};
    char A01Time[12+1] = {0};
    std::string tmpTimeStamp;

    // Get the timestamp from TP
    if((m_BackFrameBuff[11] & 0x80) >> 7)
    {
        for (i = 0; i < 12; i++)
        {
            A01Time[i] = (char)A15Result[i];
        }

        tmpTimeStamp = FormatTimeStamp(A01Time, 12); // Segfault occurs on this line
And here is the prototype of this FormatTimeStamp method:
std::string FormatTimeStamp(const char *time, int len)
I think such a string assignment is a fairly common operation, but I just don't understand why a segfault could occur here.
What I have investigated:
I've searched the web for answers. I looked here. The reply says to recompile the program with the _GLIBCXX_FULLY_DYNAMIC_STRING macro defined. I tried, but the crash still happens.
I also looked here. It also says to recompile with _GLIBCXX_FULLY_DYNAMIC_STRING, but the author seems to be dealing with a different problem from mine, so I don't think his solution applies to me.
Updated on 08/15/2011
Here is the original code of FormatTimeStamp. I understand the code doesn't look very nice (too many magic numbers, for instance...), but let's focus on the crash issue first.
string Q_gdw::FormatTimeStamp(const char *time, int len)
{
    string timeStamp;
    string tmpstring;

    if (time) // It is guaranteed that "time" is correctly zero-terminated, so don't worry about any overflow here.
        tmpstring = time;

    // Get the current time point.
    int year, month, day, hour, minute, second;
#ifndef _WIN32
    struct timeval timeVal;
    struct tm *p;
    gettimeofday(&timeVal, NULL);
    p = localtime(&(timeVal.tv_sec));

    year   = p->tm_year + 1900;
    month  = p->tm_mon + 1;
    day    = p->tm_mday;
    hour   = p->tm_hour;
    minute = p->tm_min;
    second = p->tm_sec;
#else
    SYSTEMTIME sys;
    GetLocalTime(&sys);

    year   = sys.wYear;
    month  = sys.wMonth;
    day    = sys.wDay;
    hour   = sys.wHour;
    minute = sys.wMinute;
    second = sys.wSecond;
#endif

    if (0 == len)
    {
        // The "time" doesn't specify any time so we just use the current time
        char tmpTime[30];
        memset(tmpTime, 0, 30);
        sprintf(tmpTime, "%d-%d-%d %d:%d:%d.000", year, month, day, hour, minute, second);
        timeStamp = tmpTime;
    }
    else if (6 == len)
    {
        // The "time" specifies "day-month-year" with each being 2-digit.
        // For example: "150811" means "August 15th, 2011".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(4, 2) + "-" + tmpstring.substr(2, 2) + "-" +
                    tmpstring.substr(0, 2);
    }
    else if (8 == len)
    {
        // The "time" specifies "minute-hour-day-month" with each being 2-digit.
        // For example: "51151508" means "August 15th, 15:51".
        // As the year is not specified, the current year will be used.
        string strYear;
        stringstream sstream;
        sstream << year;
        sstream >> strYear;
        sstream.clear();
        timeStamp = strYear + "-" + tmpstring.substr(6, 2) + "-" + tmpstring.substr(4, 2) + " " +
                    tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ":00.000";
    }
    else if (10 == len)
    {
        // The "time" specifies "minute-hour-day-month-year" with each being 2-digit.
        // For example: "5115150811" means "August 15th, 2011, 15:51".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(8, 2) + "-" + tmpstring.substr(6, 2) + "-" + tmpstring.substr(4, 2) + " " +
                    tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ":00.000";
    }
    else if (12 == len)
    {
        // The "time" specifies "second-minute-hour-day-month-year" with each being 2-digit.
        // For example: "305115150811" means "August 15th, 2011, 15:51:30".
        timeStamp = "20";
        timeStamp = timeStamp + tmpstring.substr(10, 2) + "-" + tmpstring.substr(8, 2) + "-" + tmpstring.substr(6, 2) + " " +
                    tmpstring.substr(4, 2) + ":" + tmpstring.substr(2, 2) + ":" + tmpstring.substr(0, 2) + ".000";
    }

    return timeStamp;
}
Updated on 08/19/2011
This problem has finally been tracked down and fixed. The FormatTimeStamp() function has nothing to do with the root cause, in fact. The segfault is caused by a write overflowing a local char buffer.
The problem can be reproduced with the following simpler program (please ignore the bad naming of some variables for now):
(Compiled with "g++ -Wall -g main.cpp")
#include <string>
#include <iostream>
#include <cstring>  // memset
#include <cstdio>   // sprintf

void overflow_it(char * A15, char * A15Result)
{
    int m;
    int t = 0, i = 0;
    char temp[3];

    for (m = 0; m < 6; m++)
    {
        t = ((*A15 & 0xf0) >> 4) * 10;
        t += *A15 & 0x0f;
        A15++;

        std::cout << "m = " << m << "; t = " << t << "; i = " << i << std::endl;

        memset(temp, 0, sizeof(temp));
        sprintf((char *)temp, "%02d", t); // The buggy code: temp is not big enough when t is a 3-digit integer.

        A15Result[i++] = temp[0];
        A15Result[i++] = temp[1];
    }
}

int main(int argc, char * argv[])
{
    std::string str;

    {
        char tpTime[6] = {0};
        char A15Result[12] = {0};

        // Initialize tpTime
        for(int i = 0; i < 6; i++)
            tpTime[i] = char(154); // 154 would result in a 3-digit t in overflow_it().

        overflow_it(tpTime, A15Result);

        str.assign(A15Result);
    }

    std::cout << "str says: " << str << std::endl;

    return 0;
}
Here are two facts we should remember before going on:
1). My machine is an Intel x86 machine, so it uses little-endian byte order. Therefore, for a variable "m" of int type whose value is, say, 10, its memory layout might be like this:
Starting addr: 0xbf89bebc: m(byte#1): 10
               0xbf89bebd: m(byte#2): 0
               0xbf89bebe: m(byte#3): 0
               0xbf89bebf: m(byte#4): 0
2). The program above runs in the main thread. When it enters the overflow_it() function, the variable layout on the thread stack looks like this (only the important variables are shown):
0xbfc609e9 : temp[0]
0xbfc609ea : temp[1]
0xbfc609eb : temp[2]
0xbfc609ec : m(byte#1) <-- Note that m follows temp immediately. m(byte#1) happens to be the byte temp[3].
0xbfc609ed : m(byte#2)
0xbfc609ee : m(byte#3)
0xbfc609ef : m(byte#4)
0xbfc609f0 : t
...(3 bytes)
0xbfc609f4 : i
...(3 bytes)
...(etc. etc. etc...)
0xbfc60a26 : A15Result <-- Data would be written to this buffer in overflow_it()
...(11 bytes)
0xbfc60a32 : tpTime
...(5 bytes)
0xbfc60a38 : str <-- Note the str takes up 4 bytes. Its starting address is **16 bytes** behind A15Result.
My analysis:
1). m is a counter in overflow_it() whose value is incremented by 1 on each iteration of the for loop and whose maximum value is supposed to be no greater than 6. Thus its value is stored entirely in m(byte#1) (remember it's little-endian), which happens to be temp[3].
2). In the buggy line: when t is a 3-digit integer, such as 109, the sprintf() call results in a buffer overflow, because serializing the number 109 to the string "109" actually requires 4 bytes: '1', '0', '9' and a terminating '\0'. Because temp[] is allocated with only 3 bytes, the final '\0' is written to temp[3], which is just m(byte#1), which unfortunately stores m's value. As a result, m's value is reset to 0 every time. (A possible fix is sketched after this analysis.)
3). The programmer's expectation, however, is that the for loop in overflow_it() executes only 6 times, with m incremented by 1 each time. Because m is always reset to 0, the loop actually runs far more than 6 times.
4). Let's look at the variable i in overflow_it(): every time the for loop executes, i is incremented by 2, and A15Result[i] is accessed. However, if you compile and run this program, you'll see that i finally adds up to 24, which means overflow_it() writes data to the bytes ranging from A15Result[0] to A15Result[23]. Note that the object str is only 16 bytes behind A15Result[0], so overflow_it() has "swept through" str and destroyed its correct memory layout.
5). Since std::string is a non-POD data structure, correct use of it depends on the instantiated std::string object having a correct internal state. But in this program, str's internal layout has been forcibly changed from the outside. This should be why the assign() method call finally causes a segfault.
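For completeness, here is a minimal sketch of one possible fix for point 2) (my own suggestion, not part of the original program): give the scratch buffer room for a 3-digit value plus the terminating '\0', and use a bounded write so it can never overflow.

#include <cstdio>

// Hypothetical fix: the output buffer has room for "109" plus '\0', and
// snprintf bounds the write even if t ever grows larger than expected.
static void format_value(char out[4], int t)
{
    std::snprintf(out, 4, "%02d", t); // truncates instead of overflowing
}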
Update on 08/26/2011
In my previous update on 08/19/2011, I said that the segfault was caused by a method call on a local std::string object whose memory layout had been broken, turning it into a "destroyed" object. That is not the whole story, however. Consider the C++ program below:
// C++
#include <iostream>
#include <string>

class A {
public:
    void Hello(const std::string& name) {
        std::cout << "hello " << name;
    }
};

int main(int argc, char** argv)
{
    A* pa = NULL; //!!
    pa->Hello("world");
    return 0;
}
The Hello() call would (typically) succeed. It would succeed even if you assigned an obviously bad pointer to pa. The reason is that the non-virtual methods of a class don't reside within the memory layout of the object, according to the C++ object model. The C++ compiler turns the A::Hello() method into something like, say, A_Hello_xxx(A * const this, ...), which could be a global function. Thus, as long as you don't dereference the "this" pointer, things can appear to go well.
This shows that a "bad" object is NOT, by itself, the root cause of the SIGSEGV. The assign() method is not virtual in std::string, so the "bad" std::string object alone wouldn't cause the segfault. There must be some other reason that finally caused it.
I noticed that the segfault comes from the __gnu_cxx::__exchange_and_add() function, so I then looked into its source code in this web page:
static inline _Atomic_word
__exchange_and_add(volatile _Atomic_word* __mem, int __val)
{ return __sync_fetch_and_add(__mem, __val); }
The __exchange_and_add() finally calls the __sync_fetch_and_add(). According to this web page, the __sync_fetch_and_add() is a GCC builtin function whose behavior is like this:
type __sync_fetch_and_add (type *ptr, type value, ...)
{
    tmp = *ptr;
    *ptr op= value; // Here the "op=" means "+=" as this function is "_and_add".
    return tmp;
}
There it is! The passed-in ptr pointer is dereferenced here. In the 08/19/2011 program, ptr ultimately comes from the "this" pointer of the "bad" std::string object within the assign() method. It is the dereference at this point that actually caused the SIGSEGV segmentation fault.
We could test this with the following program:
#include <bits/atomicity.h>

int main(int argc, char * argv[])
{
    __sync_fetch_and_add((_Atomic_word *)0, 10); // Would result in a segfault.
    return 0;
}
There are two likely possibilities:
- some code before line 798 has corrupted the local tmpTimeStamp object
- the return value from FormatTimeStamp() was somehow bad
The _GLIBCXX_FULLY_DYNAMIC_STRING is most likely a red herring and has nothing to do with the problem.
If you install debuginfo package for libstdc++ (I don't know what it's called on CentOS), you'll be able to "see into" that code, and might be able to tell whether the left-hand-side (LHS) or the RHS of the assignment operator caused the problem.
If that's not possible, you'll have to debug this at the assembly level. Going into frame #2 and doing x/4x $ebp should give you previous ebp, caller address (0x081402fc), LHS (should match &tmpTimeStamp in frame #3), and RHS. Go from there, and good luck!
I guess there could be some problem inside the FormatTimeStamp function, but without the source code it's hard to say anything. Try checking your program under Valgrind; it usually helps with this sort of bug.
I'm trying to read a variable-length char* from user input. I want to be able to specify the length of the string to read when the function is called:
char *get_char(char *message, unsigned int size) {
    bool correct = false;
    char *value = (char*)calloc(size+1, sizeof(char));

    cout << message;
    while(!correct) {
        int control = scanf_s("%s", value);
        if (control == 1)
            correct = true;
        else
            cout << "Enter a correct value!" << endl
                 << message;
        while(cin.get() != '\n');
    }
    return value;
}
Upon running the program and trying to enter a string, I get a memory access violation, so I figured something went wrong when accessing the allocated space. My first idea was that it failed because the size of the scanned char* is not specified within scanf(), but it doesn't work with correctly sized strings either. Even if I give calloc a size of 1000 and try to enter a single character, the program crashes.
What did I do wrong?
You have to specify the size of value to scanf_s:
int control = scanf_s("%s", value, size);
does the trick.
See the documentation of scanf_s for an example of how to use the function:
Unlike scanf and wscanf, scanf_s and wscanf_s require the buffer size to be specified for all input parameters of type c, C, s, S, or [. The buffer size is passed as an additional parameter immediately following the pointer to the buffer or variable.
I omit the rest of the MSDN description here because, in the example they provide, they use scanf instead of scanf_s, which is quite irritating...
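Putting that into the function from the question, a minimal sketch might look like this (assuming MSVC's scanf_s; the buffer itself holds size + 1 bytes, and size is passed as the capacity argument, as in the fix above):

#include <cstdio>
#include <cstdlib>
#include <iostream>

char *get_char(char *message, unsigned int size) {
    bool correct = false;
    char *value = (char*)calloc(size + 1, sizeof(char));

    std::cout << message;
    while (!correct) {
        int control = scanf_s("%s", value, size); // buffer capacity follows the pointer
        if (control == 1)
            correct = true;
        else
            std::cout << "Enter a correct value!" << std::endl << message;
        while (std::cin.get() != '\n')
            ;                                     // discard the rest of the input line
    }
    return value;
}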