I've been working with NodeJS 0.11.x builds for some time now, mainly because I believe that generators and the yield keyword bring big advances in asynchronous manageability (see coffy-script and suspend).
That said, there's a serious setback when running bleeding-edge, unstable NodeJS installs: when doing npm install xy-module, gyp will fail (always? sometimes?) when trying to compile any C components.
Is there a general reason this must be so? Is there any trick / patch / configuration I can apply to remedy the situation? If a given module does compile on NodeJS 0.10.x but fails on 0.11.x, should I expect it to compile on 0.12.x as soon as that becomes available?
Update: I cross-posted the issue to the NodeJS mailing list, and Ben Noordhuis was kind enough to share some details. Quoting his message:
The two main changes are as follows:
Persistent<T> no longer derives from Handle<T>. To recreate the
Handle from a Persistent, call Local<T>::New(isolate, persistent).
You can obtain the isolate with Isolate::GetCurrent() (but note that
Isolate::GetCurrent() will probably go away in newer versions of V8.)
The prototype of C++ callbacks and accessors has changed. Before,
your function looked like this:
Handle<Value> MyCallback(const Arguments& args) {
  HandleScope handle_scope;
  /* Do useful work, then: */
  return handle_scope.Close(Integer::New(42));
  /* Or: */
  return handle_scope.Close(String::New("hello"));
  /* Or: */
  return Null();
}
In v0.11 and v0.12 that becomes:
void MyCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  HandleScope handle_scope(isolate);
  /* Do useful work, then: */
  args.GetReturnValue().Set(42);
  /* Or: */
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "hello"));
  /* Or: */
  args.GetReturnValue().SetNull();
}
There have been more changes but these two impact every native add-on.
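To make the first change concrete, here is a minimal sketch of recreating a usable handle from a Persistent under the 0.11-era API. The function and variable names (persistent_obj, UsePersistentObject) are hypothetical; the V8 calls are the ones named in the quote above.

v8::Persistent<v8::Object> persistent_obj;  // assume this was assigned elsewhere

void UsePersistentObject() {
  v8::Isolate* isolate = v8::Isolate::GetCurrent();
  v8::HandleScope handle_scope(isolate);
  // Persistent<T> no longer converts to a handle implicitly; rematerialize a Local first.
  v8::Local<v8::Object> obj = v8::Local<v8::Object>::New(isolate, persistent_obj);
  /* ... use obj ... */
}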
Answered in detail in NodeUp #52: http://nodeup.com/fiftytwo
Summary: major changes in the V8 API, some minor changes in Node, and the changes are still ongoing. But there are two projects designed to help with the problem: NAN (github/rvagg/nan) and shim / node-addon-layer (github/tjfontaine/node-addon-layer).
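For a sense of what the NAN route looks like, here is a minimal sketch using NAN 2.x-style macros. MyCallback, Init and myaddon are hypothetical names, and the exact macro spellings have varied between NAN releases, so treat this as an illustration rather than a drop-in add-on.

#include <nan.h>

// NAN hides the V8 API differences behind macros, so the same source builds
// against both the old and the new callback signatures.
NAN_METHOD(MyCallback) {
  info.GetReturnValue().Set(Nan::New("hello").ToLocalChecked());
}

NAN_MODULE_INIT(Init) {
  Nan::Set(target, Nan::New("myCallback").ToLocalChecked(),
           Nan::GetFunction(Nan::New<v8::FunctionTemplate>(MyCallback)).ToLocalChecked());
}

NODE_MODULE(myaddon, Init)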
Related
I am trying to write a VST3 plugin using the Steinberg VST3 SDK that uses the FFTW library to perform a Fast Fourier Transform on an incoming audio signal. I have followed all the steps for including the FFTW library in my project, and the linker resolves it correctly.
Whenever I use one of the functions that the library provides, such as fftw_malloc, moduleinfotool.exe fails to generate the moduleinfo.json file, exiting with code 1 and failing the build with a very unhelpful error message.
Here is a part of the process function where I tried using the FFTW functions:
tresult PLUGIN_API kw_SquarifyProcessor::process (Vst::ProcessData& data)
{
    if (data.numInputs == 0 || data.numOutputs == 0)
    {
        return kResultOk;
    }

    fftw_complex* in, * out;
    // This is the code that, when included, makes the build crash.
    // Using any other function provided by the FFTW library also crashes the build.
    in = (fftw_complex*)fftw_malloc (sizeof (fftw_complex) * 1024);

    fftw_free (in);    // free again so this minimal repro doesn't leak
    return kResultOk;  // process() must return a tresult on every path
}
I have no idea what to do right now, and I'm amazed that there are virtually zero resources about the VST3 SDK (apart from the documentation, which does not cover cryptic errors like these). If anyone could point me to such resources, or to a guide on performing FFTs within the VST3 SDK, that would be much appreciated as well!
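For reference, the basic FFTW call sequence (allocate, plan, execute, destroy) is independent of the VST3 SDK. A minimal sketch follows; runForwardFft is a hypothetical helper and the buffer size of 1024 is arbitrary.

#include <fftw3.h>

void runForwardFft()
{
    const int N = 1024;
    fftw_complex* in  = (fftw_complex*)fftw_malloc (sizeof (fftw_complex) * N);
    fftw_complex* out = (fftw_complex*)fftw_malloc (sizeof (fftw_complex) * N);

    // Plan creation is expensive; in a real plugin create the plan once
    // (e.g. during setup), not inside the audio processing callback.
    fftw_plan plan = fftw_plan_dft_1d (N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    // ... fill 'in' with samples, then:
    fftw_execute (plan);
    // ... read the spectrum from 'out' ...

    fftw_destroy_plan (plan);
    fftw_free (in);
    fftw_free (out);
}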
Main context
We're trying to get a multi-threaded version of the Ghostscript x64 DLL, to use it through Ghostscript .NET. This component is supposed to "allow running multiple Ghostscript instances simultaneously within a single process", but, as we have verified in our project, it only works until concurrent requests are made to the application. The same behavior can be reproduced by launching the same method using Tasks. The error raised in both cases, whenever a call is made while a previous one is still executing, is:
An error occured when call to 'gsapi_new_instance' is made: -100
Even though it does not seem to be directly related to .NET, I will post a sample of our C# method code, just for context.
// Define switches...
string[] switchesArray = switches.ToArray();

using (GhostscriptProcessor procesador = new GhostscriptProcessor())
{
    try
    {
        procesador.StartProcessing(switchesArray, null);
        byte[] destinationFile = System.IO.File.ReadAllBytes(destinationPath);
        return destinationFile;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        System.IO.File.Delete(sourceFile);
    }
}
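For context, GhostscriptProcessor ultimately drives Ghostscript's native C API, and the -100 above (gs_error_Fatal in Ghostscript's error codes) comes back from that layer. A minimal sketch of the underlying call sequence is shown below; run_ghostscript is a hypothetical wrapper, the header location varies by install, and error handling is simplified.

#include "iapi.h"   /* Ghostscript's gsapi_* C interface */

int run_ghostscript(int argc, char* argv[])
{
    void* instance = NULL;

    int code = gsapi_new_instance(&instance, NULL);
    if (code < 0)
        return code;   /* this is where the -100 surfaces */

    code = gsapi_init_with_args(instance, argc, argv);

    gsapi_exit(instance);
    gsapi_delete_instance(instance);
    return code;
}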
THREADSAFE solution
Starting our investigation, we found this answer by KenS on this post, indicating that the Ghostscript DLL must be built with the GS_THREADSAFE compiler definition.
To clarify: since we use Ghostscript 9.52 x64 to generate our PDFs, we need this x64 DLL compiled in the Release configuration. After trying to compile the Ghostscript sources on a Windows 10 x64 machine, using Visual Studio Community 2017 and Visual Studio Community 2019, we finally managed to build and generate all items (only with VS Community 2019) without the GS_THREADSAFE parameter, just to confirm that the compilation itself is fine, and we checked that the DLLs and executables work. For this process we followed everything we found in the official Ghostscript documentation.
As we have no other guide for including this GS_THREADSAFE parameter, we followed the instructions given in that solution, adding XCFLAGS="-DGS_THREADSAFE=1" to the nmake build commands and using this command line for the Rebuild all option:
cd .. && nmake -f psi\msvc32.mak WIN64= SBR=1 DEVSTUDIO= XCFLAGS=-DGS_THREADSAFE=1 && nmake -f psi\msvc32.mak WIN64= DEVSTUDIO= XCFLAGS=-DGS_THREADSAFE=1 bsc
This approach raises an error during the build:
Error LNK2019 unresolved external symbol errprintf_nomem referenced in
function gs_log_error File \mkromfs.obj 1
Apparently mkromfs.c calls a function named errprintf_nomem, and that symbol cannot be resolved when GS_THREADSAFE is set.
Questions
1 - Is there any public release of Ghostscript that includes x64 DLLs compiled to be THREADSAFE?
And, if not (that's what I'm guessing...)
2 - Is it possible to get this DLL to be THREADSAFE without changing the source code?
3 - Could anyone please provide a step-by-step guide or walkthrough for building an x64 Ghostscript DLL with GS_THREADSAFE using Visual Studio (or any other viable alternative) on Windows 10 x64?
4 - A few posts mention people who have managed to get multithreading working with Ghostscript .NET. I assume those examples all use a GS_THREADSAFE DLL... Is there any other workaround we have missed?
Thanks a lot in advance.
To summarize all these questions, and as a guide for future developers running into the same trouble, these are the answers we've found so far:
1 - As @KenS mentions in his reply: "No, the Ghostscript developers don't actually build thread-safe versions of the binaries."
2 - At this very moment, clearly not, as has been reported in this open bug.
3 - As it seems to be a matter of commercial licensing and support, we won't comment further on this point.
4 - Thanks again to @HABJAN. I fully take back what I stated in my question, as it is possible to have Ghostscript .NET working in multi-threaded scenarios. Below is the solution we applied, in case it is useful to someone.
Based on HABJAN's example, what we did to achieve this was to create a custom class to capture Ghostscript logging:
protected class ConsoleStdIO : Ghostscript.NET.GhostscriptStdIO
{
    public ConsoleStdIO(bool handleStdIn, bool handleStdOut, bool handleStdErr)
        : base(handleStdIn, handleStdOut, handleStdErr)
    {
    }

    public override void StdIn(out string input, int count)
    {
        char[] userInput = new char[count];
        Console.In.ReadBlock(userInput, 0, count);
        input = new string(userInput);
    }

    public override void StdOut(string output)
    {
        // log
    }

    public override void StdError(string error)
    {
        // log
    }
}
In our previous method, we simply pass in an instance of this class, and that avoids errors when multiple tasks are executed at the same time:
// Define switches...
string[] switchesArray = switches.ToArray();

using (GhostscriptProcessor procesador = new GhostscriptProcessor())
{
    try
    {
        procesador.StartProcessing(switchesArray, new ConsoleStdIO(true, true, true));
        byte[] destinationFile = System.IO.File.ReadAllBytes(destinationPath);
        return destinationFile;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        System.IO.File.Delete(sourceFile);
    }
}
Well, it seems to me that you are asking here for technical support.
You clearly want to use Ghostscript in a commercial undertaking; indeed, one might reasonably say you want an enterprise version of Ghostscript. Presumably you don't want to alter the source in order to permit you to use an open source license, because you don't want to pay for a commercial license.
With that in mind the answers to your questions are:
1 - No, the Ghostscript developers don't actually build thread-safe versions of the binaries.
2 - Currently, no. That's probably an oversight.
3 - That would be a technical support question. There's no guarantee of technical support for free users; it's one of the few areas of leverage dual-license vendors have to persuade people to take up a commercial license. So I hope you will understand that I'm not going to provide that.
4 - As far as I can see, no.
I'm wondering whether there is a capability, in any programming language, that lets me choose to compile only a certain part of the code. See the example below.
This is a block of pseudocode:
function foo() {
    if (isDebug) {
        checkSomethingForDebugging();
        print(some debug info);
    }
    toSomeFooThings();
}
This block is for debugging purposes; I want to ignore it (even the if statement) in production.
if (isDebug) {
    checkSomethingForDebugging();
    print(some debug info);
}
One thing I can do is comment out these lines:
function foo() {
    //if (isDebug) {
    //    checkSomethingForDebugging();
    //    print(some debug info);
    //}
    toSomeFooThings();
}
But what if I have thousands of places like this? It would be good if there were a way (a flag) to choose whether or not a certain part of the code gets compiled, like a debug build. Is there anything for this in any programming language? I searched online but had no luck.
Most languages don't have this, but you could certainly write a script which processes the source code somewhere in your build/deploy pipeline and deletes the debug-only parts. An advanced way would be to properly parse the source code and delete the appropriate if blocks. For Python this would be quite easy using either the ast module or just looking for lines saying if is_debug: and then watching the indentation level. For other languages it might be harder. A simpler way, in terms of the preprocessing script, would be to use delimiting comments:
// DEBUGONLY
checkSomethingForDebugging();
print(some debug info);
// ENDDEBUGONLY
In this case the if statement is optional depending on how exactly you want to do things.
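As a sketch of that stripping step (written here in C++ purely for illustration; the marker strings are the ones from the snippet above), a filter that copies its input to its output while dropping everything between the delimiting comments could look like this:

#include <iostream>
#include <string>

int main() {
    std::string line;
    bool skipping = false;
    while (std::getline(std::cin, line)) {
        // Enter/leave "skip" mode on the marker comments and drop the markers themselves.
        if (line.find("// DEBUGONLY") != std::string::npos) { skipping = true; continue; }
        if (line.find("// ENDDEBUGONLY") != std::string::npos) { skipping = false; continue; }
        if (!skipping) std::cout << line << '\n';
    }
    return 0;
}

You would run this over each source file as part of the production build and feed the stripped output to the compiler or interpreter.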
Well, that depends on the compiler you are using. For example, with GCC for the C programming language, you have a whole set of preprocessor directives that can be used for this.
For example:
#ifdef DEBUG
// Your code here...
#endif /* DEBUG */
And when you compile the debug version, you just have to include an extra header that defines the DEBUG macro. There's no need to set any value; just define it.
#define DEBUG
And that's it.
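Equivalently, and this is usually how debug builds are wired up, the macro can be defined on the compiler command line instead of in a header. A minimal sketch (the file and program names are illustrative):

/* main.c -- build the debug variant with:  gcc -DDEBUG -o app main.c  */
#include <stdio.h>

int main(void)
{
#ifdef DEBUG
    /* compiled only when DEBUG is defined */
    printf("debug build\n");
#endif /* DEBUG */
    printf("doing the real work\n");
    return 0;
}

Building without -DDEBUG leaves the debug block out of the binary entirely.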
There are languages (including C, C++ and C#) that can do this using preprocessor directives like #if or #ifdef:
#if DEBUG
checkSomethingForDebugging();
print(some debug info);
#endif
When the code is compiled, if the DEBUG symbol is not set, the code between the two directives is not compiled at all (and doesn't even have to be valid code).
But more importantly, why are you asking? If you're worried about performance, then such checks are very cheap (since they are easily predicted). And if the checks are written right (e.g. if isDebug is a global constant) and compiled using a good compiler, they can even be eliminated as dead code, which makes them completely free.
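As a minimal sketch of the "global constant" case in C++ (the names kIsDebug and foo are illustrative): with optimizations enabled, compilers remove the whole branch when the flag is false, so it costs nothing at runtime.

#include <iostream>

// A compile-time constant flag; the branch below is dead code when it is false
// and a reasonable optimizer eliminates it entirely.
constexpr bool kIsDebug = false;

void foo() {
    if (kIsDebug) {
        std::cerr << "some debug info\n";   // never emitted when kIsDebug is false
    }
    // ... the real work ...
}

int main() {
    foo();
    return 0;
}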
In compiled languages like C we have a preprocessor that can be used to skip parts of the program without compiling them, effectively just excluding them from the source code:
#ifdef SHOULD_RUN_THIS
/* this code not always runs */
#endif
So if SHOULD_RUN_THIS is not defined, the code will never be compiled, let alone run.
In node.js we don't have a direct equivalent of this, so the first thing I can imagine is:
if (config.SHOULD_RUN_THIS) {
    /* this code not always runs */
}
However, in Node there is no way to guarantee that config.SHOULD_RUN_THIS will never change, so the if (...) check will be performed every time in vain.
What would be the most performant way to rewrite it? I can think of:
a) create a separate function to allow V8 optimizations:
function f(args) {
    if (config.SHOULD_RUN_THIS) {
        /* this code not always runs */
    }
}
// ...
f(args);
b) create a variable to store the function and set it to an empty function when not needed:
var f;
if (config.SHOULD_RUN_THIS) {
    f = (args) => {
        /* this code not always runs */
    }
}
else {
    f = function () {} // does nothing
}
// ...
f(args);
c) do not create a separate function, just leave it in place:
if (config.SHOULD_RUN_THIS) {
    /* this code not always runs */
}
What is the most performant way? Maybe some other way...
I personally would adopt ...
if (config.SHOULD_RUN_THIS) {
    require('/path/for/conditional/module');
}
The module code is only required where needed; otherwise it is not even loaded into memory, let alone executed.
The only downside is that it is not immediately clear which modules are being required, since your require statements are no longer all positioned at the top of the file.
ES module dynamic imports take a similar on-demand loading approach.
PS: Using config like this is great since you can, for example, use an environment variable to determine your code path. That is handy when spinning up, say, a bunch of Docker containers that you want to behave differently depending on the env vars passed to the docker run commands.
Apologies for that aside if you are not a Docker fan :) I am waffling now!
If you're looking for a preprocessor for your JavaScript, why not use a preprocessor for your JavaScript? It's Node-compatible and appears to do what you need. You could also look into writing a plugin for Babel or some other JS-mangling tool (or V8 itself!).
If you're looking for a way to do this inside the language itself, I'd avoid any optimizations that target a single engine like V8 unless you're sure that's the only place your code will ever run. Otherwise, as has been mentioned, try breaking the conditional code out into a separate module so it's only loaded if it's actually needed.
I'm trying to port a new version of the Isis2 library from .NET on Windows to Mono/Linux. This new code uses MemoryMappedFile objects, and I suddenly am running into issues with the Mono.Posix.Helper library. I believe that my issues would vanish if I could successfully compile and run the following test program:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO.MemoryMappedFiles;

namespace foobar
{
    class Program
    {
        static int CAPACITY = 100000;

        static void Main(string[] args)
        {
            MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test", CAPACITY);
            MemoryMappedViewAccessor mva = mmf.CreateViewAccessor();
            for (int n = 0; n < CAPACITY; n++)
            {
                byte b = (byte)(n & 0xFF);
                mva.Write<byte>(n, ref b);
            }
        }
    }
}
... at present, when I try to compile this on Mono I get a bewildering set of linker errors: it seems unable to find libMonoPosixHelper.so, although my LD_LIBRARY_PATH includes the directory containing that file. And if I manage to get past that stage, I get "System.NotImplementedException: The requested feature is not implemented." at runtime. Yet I've looked at the Mono implementation of the CreateNew method; it seems fully implemented, and the same is true for the CreateViewAccessor method. So I have a sense that something is going badly wrong when linking to the Mono libraries.
Does anyone have experience with MemoryMappedFile objects under Mono? I see quite a few questions about this kind of issue here and on other sites, but all seem to be old threads...
OK, I figured at least part of this out by inspection of the Mono code implementing this API. In fact they implemented CreateNew in a way that departs pretty drastically from the .NET API, causing these methods to behave very differently from what you would expect.
For CreateNew, they actually require that the file name you specify be the name of an existing Linux file of size at least as large as the capacity you specify, and also do some other checks for access permissions (of course), exclusive access (which is at odds with sharing...) and to make sure the capacity you requested is > 0. So if you had the file previously open, or someone else does, this will fail -- in contrast to .NET, where you explicitly use memory-mapped files for sharing.
In contrast, CreateOrOpen appears to be "more or less" correctly implemented; switching to this version seems to solve the problem. To get the effect of CreateNew, do a Delete first, wrapping it in a try/catch to catch IOException if the file doesn't exist. Then use File.WriteAllBytes to create a file with your desired content. Then call CreateOrOpen. Now this sounds dumb, but it works. Obviously you can't guarantee atomicity this way (three operations rather than one), but at least you get the desired functionality.
I can live with these restrictions as it works out, but they may surprise others, and are totally different from the .NET API definition for MemoryMappedFile.
As for my linking issues, as far as I can tell there is a situation in which Mono doesn't use the LD_LIBRARY_PATH you specify correctly and hence can't find the .so file or .dll file you used. I'll post more on this if I can precisely pin down the circumstances -- on this one, I've worked around the issue by statically linking to the library.