ERROR: In /home/kitware/dashboards/buildbot/paraview-debian6dash-linux-shared-release_opengl2_qt4_superbuild/source-paraview/VTK/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 286
vtkXOpenGLRenderWindow (0x529c2b0): bad X server connection. DISPLAY=Aborted
I understand that the main reason it is not running is that it needs to create a window, which it cannot do remotely.
I am able to run it locally.
This is a benchmark provided by paraview.org.
The issue arises because the process needs access to an X server to create a window and then an OpenGL context for all the rendering. The default Linux binaries shipped by paraview.org rely on X to provide the context. If your server is not going to provide X access to your processes (which is not unheard of), then you should build ParaView with either OSMesa support or EGL support and use that build instead. In those builds, ParaView uses non-X-dependent methods to create the OpenGL context.
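For illustration only (this is a minimal standalone sketch, not ParaView's actual source): the failing step is essentially the X connection attempt below. An X client must be able to open the display named by the DISPLAY environment variable before it can create a window or a GL context, and when no X server is reachable the connection fails and ParaView aborts.

#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>

/* Build with: cc check_x.c -o check_x -lX11 */
int main(void)
{
    const char *display = getenv("DISPLAY");   /* typically unset in a bare ssh session */
    Display *dpy = XOpenDisplay(NULL);         /* NULL means "use $DISPLAY" */
    if (dpy == NULL) {
        fprintf(stderr, "bad X server connection. DISPLAY=%s\n",
                display ? display : "(unset)");
        return EXIT_FAILURE;                   /* this is where ParaView gives up */
    }
    printf("connected to X server on %s\n", DisplayString(dpy));
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}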
There are many ways to run a WebAssembly module besides the Web: Node.js can load a wasm module with --experimental-wasm-modules, and standalone runtimes like wasmtime and lucet can run it too.
So how do I detect the current environment in WASM, and is it possible to apply a restriction on wasm modules so that they can only work on a specific website?
WebAssembly has no built-in I/O capabilities - it cannot access the DOM, read the filesystem, render to the screen, etc. In order to perform any of these tasks it needs to interoperate with the host environment (most typically JavaScript).
As a result, WebAssembly cannot detect its runtime environment. It could ask the host what environment it is executing within, although this could of course be faked!
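To make that concrete, here is a hedged sketch in C compiled to WebAssembly - the import name host_environment and its return convention are invented for this example, they are not part of any standard. The module can only learn about its surroundings by declaring an import that the embedder must supply, and the embedder is free to answer however it likes.

/* Build with e.g.: clang --target=wasm32 -nostdlib -Wl,--no-entry -Wl,--export-all -o detect.wasm detect.c */

/* Imported from the host; the host decides the value and can lie. */
__attribute__((import_module("env"), import_name("host_environment")))
extern int host_environment(void);

/* Returns 1 if the host claims to be a browser (1 = browser, by our own convention). */
int running_in_browser(void)
{
    return host_environment() == 1;
}

A browser page would pass a JavaScript function as env.host_environment when instantiating the module; wasmtime or Node.js would have to provide an equivalent host function. Since any host can supply any answer, this cannot be used as a hard restriction to a single website - it is a convention, not an enforcement mechanism.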
I'm working on an HTTP/1.1 server in C as a learning experience and want to make it performant while still being dynamic. Performing a GET or POST on static files or scripts was easy enough, but I'd like to add the ability to call compiled binaries for greater speed.
Currently, I link these compiled binaries directly into the server binary, but I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries, but I don't want to relink them to handle every request. I also considered creating a new process to run them, however that incurs significant overhead on every request and makes getting the response back to the client difficult (I'm using OpenSSL sockets).
How could I efficiently relink these compiled binaries when they update, without shutting down the server?
I'm testing on Debian Sid and running on an AWS ECS instance with CentOS 7. Both have Linux kernel versions 4.19+
I'd like to be able to update and hot swap them. I considered dynamically linking them as shared libraries
You appear to believe that you can update a shared library (on disk) that a running server binary is currently using, and expect that running server process to start using the updated library.
That is not how shared libraries work. If you try that, your server process will either crash, or continue using the old library (depending on exactly how you update the library on disk).
This can be made to work in limited circumstances if you use dlopen to load the library, and if you can quiesce your server, have it dlclose the previously loaded version, and then dlopen the updated version. But the exact details of making this work are quite tricky.
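Under the assumption that each compiled handler is built as a shared object exporting a known entry point (the symbol handle_request and the handler path below are placeholders), the dlopen/dlclose cycle described above looks roughly like this. Link with -ldl; error handling is kept minimal, and the hard part - guaranteeing that no thread is still executing code from, or holding pointers into, the old library when dlclose runs - is exactly the tricky detail mentioned above.

#include <dlfcn.h>
#include <stdio.h>

typedef int (*handler_fn)(const char *request, char *response, int response_len);

static void      *module  = NULL;   /* handle returned by dlopen */
static handler_fn handler = NULL;   /* entry point inside the loaded library */

/* Call only after the server has been quiesced: unload the old version,
 * then load and resolve the new one. Returns 0 on success. */
int reload_handler(const char *path)
{
    if (module != NULL)
        dlclose(module);

    module = dlopen(path, RTLD_NOW);          /* resolve all symbols immediately */
    if (module == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        handler = NULL;
        return -1;
    }

    handler = (handler_fn)dlsym(module, "handle_request");
    if (handler == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return -1;
    }
    return 0;
}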
I'm in the process of writing an application in C++ on Linux. The goal is to have it load dynamically linked libraries at run-time and to provide all the services that the libraries require. The main aim is to have it act as a black box where code loaded at run-time cannot break out and damage the rest of the system.
I've never done anything like this before and am a little lost as to the best approach to take. If I load all the dynamically linked libraries under a special process and then use something like SELinux to limit the ability of the central daemon to do anything outside of its requirements, would that seem like a reasonable solution?
The reason I ask is that I want to allow people to load code into this container application that then handles all the server-side stuff for them, so things such as security, permissions, networking, logging, etc. are all provided with a simple, clean and cross-platform API regardless of the version of UNIX that the container is running on.
I'm creating a virtual machine to mimic our production web server so that I can share it with new developers to get them up to speed as quickly as possible. I've been through the Vagrant docs; however, I do not understand the advantage of using a generic base box and provisioning everything with Puppet versus packaging a custom box with everything already installed and configured. All I can think of is:
Advantages of using Puppet vs custom packaged box
Easy to keep everyone up to date - ability to put manifests under version control and share the repo so that other developers can simply pull new updates and re-run puppet, i.e. 'vagrant provision'.
Environment is documented in the manifests.
Ability to use puppet modules defined in the production environment to ensure identical environments.
Disadvantages of using Puppet vs custom packaged box
Takes longer to write the manifests than to simply install and configure a custom packaged box.
Building the virtual machine the first time would take longer using puppet than simply downloading a custom packaged box.
I feel like I must be missing some important details; can you think of any more?
Advantages:
As dependencies may change over time, building a new box from scratch will involve either manually removing packages or throwing the box away and repeating the installation process by hand all over again. You could obviously automate the installation with a bash script or some other type of script, but you'd be making calls to the native OS package manager, meaning it will only run on the operating system of your choice. In other words, you're boxed in ;)
As far as I know, Puppet (like Chef) contains a generic and operating system agnostic way to install packages, meaning manifests can be run on different operating systems without modification.
Additionally, those same scripts can be used to provision the production machine, meaning that the development machine and production will be practically identical.
Disadvantages:
Having to learn another DSL, when you may not be planning on ever switching your OS or production environment. You'll have to decide if the advantages are worth the time you'll spend setting it up. Personally, I think that having an abstract and repeatable package management/configuration strategy will save me lots of time in the future, but YMMV.
One great advantage not explicitly mentioned above is the fact that you'd be documenting your setup (properly), and your documentation will be the actual setup - not a (one-time) description of how things were or may have been intended to be.
I'm using the Yii framework and trying to get its unit tests running while connected over ssh on a CentOS server. When I run phpunit, it tries to launch Firefox, which fails with the error "no display specified".
General theory
Error: no display specified
To understand that error message you first have to understand how the X Window System works - that is the name of the framework used by Linux (and other types of Unix) systems to display graphical user interfaces.
X consists of two parts - there is a client and a server. The client is the program that wants to draw the interface - in your case that would be Firefox. The server is a program that makes drawing possible. There are X servers available for all the major operating systems. Linux and OS X usually ship with one; on Windows you will have to find and install one - Cygwin/X is one option, but there are others.
So why is this client/server architecture even necessary?
Most of the time it's not even needed. If you happen to run Linux locally, you will not even notice that there is any kind of client/server communication happening somewhere - but there is.
Where X shines, though, is that this architecture means that network capabilities are built right into it. You can run a client (Firefox) on one machine and display the GUI on a completely different machine. You could run 10 different clients on 10 different machines and have them all display output on a single machine thanks to X. Think VNC or Remote Desktop - X is somewhat similar, but you could say that it's on steroids compared to those. And X has had this ability for a really long time.
Whenever you start up an X client (a program that wants to display a graphical user interface), it looks for an X server. One way for the client to find one is an environment variable named DISPLAY. I'm on OSX and this is what I see.
[~]> echo $DISPLAY
/tmp/launch-ihNtDq/org.x:0
This points to my local X server. It could point to any server on my local network. When the client finds this environment variable it will connect to it and the user interface pops up.
If the client cannot find this environment variable, you will get the familiar
Error: no display specified
Back to Yii
Looks like Yii has Selenium tests bundled. PHPUnit needs to start up Selenium RC to control a Firefox instance to run those tests. Selenium RC (or maybe Firefox itself) fails to find the DISPLAY environment variable and dies with the above error.
How do you solve this issue?
There are 3 options
1) Install Yii, PHPUnit and all their dependencies locally. Selenium runs just fine on Windows. It won't use the X protocol on Windows, so none of that business with X clients and X servers. And you can then run the Yii test suite locally.
2) Install an X server on your Windows box. Then enable 'X Forwarding' in your ssh client settings (or use the -X command line parameter for ssh). When you do that, there will be a DISPLAY variable set when you are logged in to that CentOS server. You can verify it by typing the echo command above. Then the X client on CentOS can talk (show its GUI) to the X server on your Windows machine - all the X traffic is tunneled over the ssh connection. This however means that you need Java (which Selenium RC is written in) and Firefox on that CentOS server. You may or may not have them there.
3) Use a virtual framebuffer - for example Xvfb - an X server that performs all drawing operations in memory, not showing any output anywhere (a typical invocation is shown below).
What good is that? Selenium has commands for taking screenshots at any point during the test run and saving them to files. For example, a typical Selenium test would check whether an element exists on the page - and take a screenshot when it does not. The screenshot would then be saved in a file, which you can view later to determine what the reason for the failure was. Making screenshots works just fine with a virtual framebuffer.
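For option 3, a typical invocation (the display number :99 and the screen geometry are arbitrary choices) looks something like this:
Xvfb :99 -screen 0 1280x1024x24 &
export DISPLAY=:99
The first command starts the in-memory X server in the background, and the second points any X clients started from that shell - in this case Firefox, launched by Selenium RC - at it.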
Final clarification
Note that Selenium tests are merely one type of test that PHPUnit can run. Selenium is not required to write PHPUnit tests; it's an optional add-on. But the Yii test suite apparently relies upon it.
Last, but not least
Integration tests (which Selenium tests are) are not usually run on production systems, because there is a chance that test data is left behind in your production database. Also, getting good test results means isolating from external factors as much as possible - the contents of your production database will be constantly changing, and this may affect your tests.
Normally all tests will be executed somewhere else (your development machine, a dedicated QA server, whatever you have) before the new code is deployed to production servers. After all, the point of tests is to verify that the system works after the changes. There is not much value in running them on production systems - the code does not change after it has been deployed.
Of course, it's up to you - if you see value in running those tests on a production system, go right ahead.
It's simpler than you think. Run Selenium locally on your desktop, and make sure phpunit is set up on the remote server. Then start a reverse SSH tunnel in your SSH connection. This varies depending on your SSH client. In PuTTY, there is a setting for SSH tunnels, and you can reverse the direction by selecting the remote option. Check out this page for details. With OpenSSH from the command line, it's done like this:
ssh -R 4444:localhost:4444 user@remoteserver
This will listen on the remote server on port 4444 and forward it to your Selenium server running on localhost (port 4444) on your desktop.
Once you've done that, you'll need to change the TEST_BASE_URL setting in yourproject/protected/tests/WebTestCase.php to go to the remote server's URL for your yii project.
The simplest way to run a GUI test from another agent machine on a Windows client is to use "psexec" (http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx).