Here is my simplified code:
#include <stdio.h>

static void WriteToFile(const wchar_t* msg) {
    FILE* f = fopen("/sdcard/mytest.txt", "w");
    if (f) {
        fprintf(f, "%ls\n", msg);
        fclose(f);
    }
}
// Somewhere else in the code
const wchar_t* msg = L"Hello world";
WriteToFile(msg);
Using %ls to format a wide string seems to work fine on Windows and Ubuntu. However, on Android it writes only the first character, H, to the file.
I even tried to convert the wide string to a multibyte string:
char buf[100];
wcstombs(buf, msg, 100);
However, buf still ends up containing just the one character H.
I have a feeling this is happening because wchar_t is four bytes long on Android. However, I would think the NDK must be aware of this.
How do I fix this? Regards.
Instead of using wcstombs etc., use the APIs from libiconv (https://www.gnu.org/software/libiconv/). You may have to build the libiconv library for Android yourself.
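A minimal sketch of that conversion, assuming libiconv has been built for the NDK target and that wchar_t there is little-endian UTF-32 (as on bionic); error handling is trimmed:

#include <iconv.h>
#include <stdio.h>
#include <wchar.h>

static void WriteToFile(const wchar_t* msg) {
    iconv_t cd = iconv_open("UTF-8", "UTF-32LE");  // wchar_t -> UTF-8
    if (cd == (iconv_t)-1) return;

    char* in = (char*)msg;                          // iconv works on byte pointers
    size_t inLeft = wcslen(msg) * sizeof(wchar_t);
    char out[256];
    char* outPtr = out;
    size_t outLeft = sizeof(out) - 1;

    if (iconv(cd, &in, &inLeft, &outPtr, &outLeft) != (size_t)-1) {
        *outPtr = '\0';                             // NUL-terminate the UTF-8 result
        FILE* f = fopen("/sdcard/mytest.txt", "w");
        if (f) {
            fprintf(f, "%s\n", out);
            fclose(f);
        }
    }
    iconv_close(cd);
}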
public class MyOpaqueBasedJSONDict implements IMyJSONDict {
    private final long _myNativeCPPObj;
    ...
    public IMyJSONDict getMyJSONObj(String keyName) {
        long retVal = nativeGetJSOBObject(_myNativeCPPObj, keyName);
        return (new MyOpaqueBasedJSONDict(retVal));
    }
native implementation:
extern "C" JNIEXPORT jlong JNICALL
Java_com_hexample_myndkapplication_MyOpaqueBasedJSONDict_nativeGetJSOBObject(JNIEnv *env,
                                                                             jobject instance,
                                                                             jlong myNativeCPPObj,
                                                                             jstring keyName_) {
    const char *keyName = env->GetStringUTFChars(keyName_, 0);
    Json::Value *nativeCppJson_ptr = reinterpret_cast<Json::Value *>(myNativeCPPObj);
    Json::Value &map = *nativeCppJson_ptr;
    Json::Value &jsonVal = map[keyName];
    env->ReleaseStringUTFChars(keyName_, keyName);
    return (jlong) &jsonVal;
}
I am not able to understand why I am getting:
JNI DETECTED ERROR IN APPLICATION: use of invalid jobject 0xb4019a80
08-16 03:25:56.785 20537-20537/com.hexample.myndkapplication A/art:
art/runtime/java_vm_ext.cc:410] from long
com.hexample.myndkapplication.MyOpaqueBasedJSONDict.nativeGetJSOBObject
Any clue how to debug invalid-memory errors in the NDK? I am pretty new to Android and NDK development.
In my case, I was using a jobject across multiple native threads. The fix was to take a global reference:
javaCallBackObj = env->NewGlobalRef(jobj);
Note however that the jclass is a class reference and must be protected with a call to NewGlobalRef (see the next section).
(from the Android JNI docs)
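A minimal sketch of that pattern, assuming a long-lived callback object used from more than one native thread (the function and variable names here are illustrative, not from the original code):

#include <jni.h>

static jobject g_callbackObj = nullptr;

extern "C" JNIEXPORT void JNICALL
Java_com_example_app_Bridge_nativeSetCallback(JNIEnv *env, jobject, jobject cb) {
    if (g_callbackObj != nullptr) {
        env->DeleteGlobalRef(g_callbackObj);  // drop any previous reference
    }
    // A local reference dies when this native call returns; promoting it to a
    // global reference keeps the object valid for other attached threads.
    g_callbackObj = env->NewGlobalRef(cb);
}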
For me it was the incoming const char * str parameter that was not playing ball with the CallStaticVoidMethod. To fix this we have to create a new jstring and pass that back to Java instead:
// str is a const char *
jstring x = env->NewStringUTF(str);
env->CallStaticVoidMethod(jclassMainClass, methodId, x);
env->DeleteLocalRef(x);
This makes sense, really: JNI bridges C++ and Java, and Java will only accept a Java string (jstring), not a const char *, even though passing the latter does not cause a compile-time error.
In my case the problem was that I was calling the native function with the right arguments but the wrong return type.
I got a similar problem in my Android app. I found that a String argument was the "invalid jobject" reported by JNI. When I passed a non-empty string as the argument, the error went away. I don't know why, but I hope it can help you as a workaround.
I encountered this issue too; in my case I was using the wrong .so file. It should have been x86, but I used x86_64.
I am using _aligned_malloc in my code, but it is throwing an error, as shown in the image.
CString sBuffer = _T("Hello");
TCHAR* pBuffer;
pBuffer = (TCHAR *)_aligned_malloc(1024, 16);
if (pBuffer == NULL) {
    ...............Error .. msg
}
pBuffer = sBuffer.GetBuffer(sBuffer.GetLength());
..................................................
.........................................................
sBuffer.ReleaseBuffer(sBuffer.GetLength());
if (pBuffer != NULL) {
    _aligned_free(pBuffer);
}
The CString class implements an (LPCTSTR) cast operator that you can use to get a const TCHAR*.
Please note that TCHAR is defined as char in MBCS builds and as wchar_t in Unicode builds; see tchar.h, where it is defined, for details.
If you'd like to modify the contents of the buffer, you'll need to use the GetBuffer() method. Don't forget to call ReleaseBuffer() when you're done. So there is no need to allocate memory manually.
You can also easily construct a CString from a TCHAR*; there is a constructor for that.
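Putting those points together, a minimal sketch (assuming an MFC/ATL project; Demo is an illustrative function name):

#include <atlstr.h>
#include <tchar.h>

void Demo()
{
    CString sBuffer = _T("Hello");

    // Writable access: GetBuffer() exposes the internal buffer and
    // ReleaseBuffer() must be called once you are done modifying it.
    TCHAR *pWrite = sBuffer.GetBuffer(sBuffer.GetLength());
    pWrite[0] = _T('J');  // sBuffer is now "Jello"
    sBuffer.ReleaseBuffer();

    // Read-only access: the (LPCTSTR) cast operator yields a const TCHAR*.
    const TCHAR *pRead = (LPCTSTR)sBuffer;

    // Constructing a CString from a TCHAR* uses the ordinary constructor.
    CString sCopy(pRead);
}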
I'm primarily a C# dev (not much C++ since college), but I'm working on integrating a large collection of existing C++ code into an application. I have a C++/CLI assembly bridging the two worlds, and communication from C# through to C++ works fine. The question I have: the C++ class has a method call that generates a binary blob (think array of bytes in the C# world) that I need to get into C# and process (pass around like a solid bag).
What I'm looking for is advice on how to handle the buffer/wrapper method (C++/CLI) between the two worlds. I assumed that I'd pass a char* and length, but C# sees that as a byte* (I'm assuming that is some C++/CLI "magic").
I also tried passing array<Byte ^>^, but I haven't had much luck translating the char* from the rest of the C++ lib to the byte array... and what I have doesn't smell right.
Have the C++/CLI code hand off a System::IO::UnmanagedMemoryStream to the C# code, which can then use a System.IO.BinaryReader on said stream to extract data as needed.
As an aside, C++/CLI's char is synonymous with C#'s sbyte, C++/CLI's unsigned char is synonymous with C#'s byte, C++/CLI's wchar_t is synonymous with C#'s char, and C++/CLI's array<unsigned char>^ is synonymous with C#'s byte[]. Note that it's array<unsigned char>^ or array<System::Byte>^ rather than array<unsigned char^>^ or array<System::Byte^>^, as System.Byte is a value type rather than a ref type.
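A minimal sketch of that hand-off on the C++/CLI side, assuming the native blob outlives the stream (BlobWrapper and its members are illustrative names, not an existing API):

using namespace System;
using namespace System::IO;

public ref class BlobWrapper
{
    unsigned char* _data;   // blob produced by the native C++ library
    long long _length;

public:
    BlobWrapper(unsigned char* data, long long length)
        : _data(data), _length(length) {}

    // C# sees the return value as an ordinary Stream and can wrap it
    // in a BinaryReader to pull typed values out of the blob.
    UnmanagedMemoryStream^ GetStream()
    {
        return gcnew UnmanagedMemoryStream(_data, _length);
    }
};

On the C# side, new BinaryReader(wrapper.GetStream()) then reads the bytes out without any extra copying.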
You can use an UnmanagedMemoryStream, like so (note that this uses raw pointers, so the C# project must allow unsafe code):
byte[] message = UnicodeEncoding.Unicode.GetBytes("Here is some data.");
IntPtr memIntPtr = Marshal.AllocHGlobal(message.Length);
byte* memBytePtr = (byte*) memIntPtr.ToPointer();
UnmanagedMemoryStream writeStream = new UnmanagedMemoryStream(memBytePtr, message.Length, message.Length, FileAccess.Write);
writeStream.Write(message, 0, message.Length);
writeStream.Close();
The reverse route, roughly:
UnmanagedMemoryStream readStream = new UnmanagedMemoryStream(memBytePtr, message.Length, message.Length, FileAccess.Read);
byte[] outMessage = new byte[message.Length];
readStream.Read(outMessage, 0, message.Length);
readStream.Close();
// Convert back into string for this example
string converted = UnicodeEncoding.Unicode.GetString(outMessage);
Marshal.FreeHGlobal(memIntPtr);
I'm sure that MSDN will have more resources.
I took a shot at it and came up with this. Nothing crazy, just allocate the managed array, copy the data and return it.
header:
#pragma once
using namespace System;
using namespace System::Runtime::InteropServices;
namespace CLRLib
{
    public ref class TwiddlerFunctions
    {
    public:
        static array< Byte >^ GetArray();
    };
}
implementation:
#include "CLRLib.h"
array< Byte >^ CLRLib::TwiddlerFunctions::GetArray()
{
    unsigned char data[] = { 1, 2, 34, 5 };
    // convert the unmanaged array to a managed array
    array< Byte >^ arr = gcnew array< Byte >(sizeof data);
    Marshal::Copy((IntPtr)data, arr, 0, arr->Length);
    return arr;
}
C# side:
using System;
using CLRLib;
class Program
{
    static void Main(string[] args)
    {
        byte[] arr = TwiddlerFunctions.GetArray();
        Console.WriteLine(String.Join(" ", arr)); // 1 2 34 5
    }
}
I'm trying to migrate some managed C++ code to 64 bits.
I have a function that takes varargs, and when I pass a System::String variable to it, it does not appear to be passed correctly.
Here is a simplification of the code that shows the problem:
#include <stdio.h>
#include <stdarg.h>
void test(char* formatPtr, ...)
{
    va_list args;
    int bufSize;
    char buffer[2600];

    /////////////////////////////////////
    // parse arguments from function stack
    /////////////////////////////////////
    va_start(args, formatPtr);
    bufSize = vsprintf(buffer, (const char*) formatPtr, args);
    printf(buffer);
    va_end(args);
}
void main() {
    System::String^ s;
    s = "Shahar";
    test("Hello %s", s);
    getchar();
}
When this code runs in 32 bits, it displays Hello Shahar.
When it runs in 64 bits, it displays Hello Çz∟⌠■.
Assuming I want to make the least amount of changes to the code, how should I fix this?
It looks as though the problem is in the mix between managed code and varargs: the two are not compatible with each other.
I don't know why this works in 32 bits, but it looks like the wrong thing to do.
I changed the code to be managed-only, with no varargs.
The %s specifier expects a C-style null-terminated string, not a System::String^. C++/CLI ships marshaling helpers that can convert a System::String^ to a std::string, from which you can get a C string.
You have other problems too. void main()? Assigning a literal to a char*? Fixed-size buffer?
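A minimal sketch of one possible fix, assuming the msclr marshaling header that ships with Visual C++ (<msclr/marshal_cppstd.h>, compiled with /clr); it also swaps the unbounded vsprintf for vsnprintf:

#include <cstdio>
#include <cstdarg>
#include <string>
#include <msclr/marshal_cppstd.h>

void test(const char* formatPtr, ...)
{
    char buffer[2600];
    va_list args;
    va_start(args, formatPtr);
    vsnprintf(buffer, sizeof buffer, formatPtr, args);  // bounded write
    va_end(args);
    printf("%s", buffer);  // never pass data as the format string
}

int main()
{
    System::String^ s = "Shahar";
    // Convert the managed string to a native std::string before the call,
    // so %s receives a real C string regardless of 32/64-bit target.
    std::string native = msclr::interop::marshal_as<std::string>(s);
    test("Hello %s", native.c_str());
    getchar();
    return 0;
}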
Is it possible to change strings (content and size) in Lua bytecode so that it will still be correct?
It's about translating strings in Lua bytecode. Of course, not every language has the same size for each word...
Yes, it is, if you know what you're doing. Strings are prefixed by their size, stored as an integer; the size and endianness of that integer are platform-dependent. But why do you have to edit the bytecode? Have you lost the sources?
After some diving through the Lua source code I found this solution:
#include "lua.h"
#include "lauxlib.h"
#include "lopcodes.h"
#include "lobject.h"
#include "lundump.h"
/* Definition from luac.c: */
#define toproto(L,i) (clvalue(L->top+(i))->l.p)
writer_function(lua_State* L, const void* p, size_t size, void* u)
{
UNUSED(L);
return (fwrite(p,size,1,(FILE*)u)!=1) && (size!=0);
}
static void
lua_bytecode_change_const(lua_State *l, Proto *f_proto,
int const_index, const char *new_const)
{
TValue *tmp_tv = NULL;
const TString *tmp_ts = NULL;
tmp_ts = luaS_newlstr(l, new_const, strlen(new_const));
tmp_tv = &f_proto->k[INDEXK(const_index)];
setsvalue(l, tmp_tv, tmp_ts);
return;
}
int main(void)
{
lua_State *l = NULL;
Proto *lua_function_prototype = NULL;
FILE *output_file_hnd = NULL;
l = lua_open();
luaL_loadfile(l, "some_input_file.lua");
lua_proto = toproto(l, -1);
output_file_hnd = fopen("some_output_file.luac", "w");
lua_bytecode_change_const(l, lua_function_prototype, some_const_index, "some_new_const");
lua_lock(l);
luaU_dump(l, lua_function_prototype, writer_function, output_file_hnd, 0);
lua_unlock(l);
return 0;
}
First, we start a Lua VM and load the script we want to modify; compiled or not, it doesn't matter. Then we get the function's prototype, find and change its constant table, and dump the prototype to a file.
I hope that gives you the basic idea.
You can try using the decompiler LuaDec. The decompiler lets you modify the strings in generated Lua code that is similar to the original source.
ChunkSpy has A No-Frills Introduction to Lua 5.1 VM Instructions that may help you understand the compiled chunk format and make the changes directly to bytecode if necessary.
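If you do patch the bytecode directly, it helps to read the chunk header first. A minimal sketch, assuming the Lua 5.1 chunk layout that the ChunkSpy document describes (field offsets taken from lundump.c):

#include <stdio.h>

int main(void)
{
    unsigned char h[12];
    FILE *f = fopen("some_input_file.luac", "rb");
    if (f == NULL || fread(h, 1, sizeof h, f) != sizeof h)
        return 1;

    printf("version        : 0x%02x\n", h[4]);  /* 0x51 for Lua 5.1 */
    printf("endianness     : %s\n", h[6] ? "little" : "big");
    printf("sizeof(int)    : %u\n", h[7]);
    printf("sizeof(size_t) : %u\n", h[8]);      /* string length prefixes use this */
    fclose(f);
    return 0;
}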