Which files are required for Qt 5.4 QWebChannel Linux deployment?

All,
I have a QtWebEngine-based application which uses all local HTML and JavaScript files. When deploying this to a test environment the web page comes up and is navigable, but WebChannel-based things aren't working. Everything is fine on the development machine; the problem only happens on deployment to the test machine.
This is a self-contained .deb which creates a user on installation and is meant to bring everything along with it. While it runs on a desktop, there is no network connection; everything is inside.
That said, if "everything" really were inside, the WebChannel would be working. Does anyone have a link identifying what external pieces WebChannel requires? There are only two oddities when starting up on the target:
[0629/132921:WARNING:resource_bundle.cc(286)] locale_file_path.empty()
[0629/132921:WARNING:resource_bundle.cc(286)] locale_file_path.empty()
Trust me, I've surfed for that. There are thousands of posts flagging resource_bundle.cc throwing locale_file_path.empty() warnings at all kinds of line numbers, and nothing is offered as a solution. I am making the grand assumption that once the WebChannel supporting files are identified and placed/pointed to, these warnings will go away and life will be good.
qwebchannel.js is deployed, but maybe there is an environment variable I need to set? The index.html file references qwebchannel.js exactly where it is.
Does anyone have a list of, or a link to, the files required when deploying something that uses qwebchannel.js? It isn't throwing an error that identifies much.
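For reference, the client-side wiring in index.html follows the usual qwebchannel.js pattern; the object name "backend" and its signal below are placeholders for whatever the C++ side actually registers:

<script src="qwebchannel.js"></script>
<script>
  // qt.webChannelTransport is injected by QtWebEngine once a QWebChannel
  // is installed on the page's QWebEnginePage.
  new QWebChannel(qt.webChannelTransport, function (channel) {
    // "backend" must match the name passed to registerObject() in C++.
    var backend = channel.objects.backend;
    backend.statusChanged.connect(function (value) { // hypothetical signal
      console.log("update from C++:", value);
    });
  });
</script>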
Thanks,

Related

Variables from my .env are shown on error

I just started using Laravel's Lumen and managed to make it work both locally and on a server. When I was about to start exploring it, my index.php consisted of just:
$app = require __DIR__."/../lumenTest/bootstrap/app.php";
$app->run($app->make('request'));
echo $myundefinedvariable;
This displays an ErrorException: Undefined variable: myundefinedvariable, but inside the "...at Application->Laravel\Lumen\Concerns{closure}" window I can see a giant wall of text with stuff like:
... 'APP_KEY' => 'fake0BqKgHeC72EmT7039B6pDCsJ90key' , ..., 'DB_PASSWORD' => 'secret', ...
My first thought was that maybe it was because I'm running it locally with XAMPP or something, so I went and tried it on the server, and the same thing happened.
Is it normal that sensitive data from my .env file gets shown to everyone after any PHP error?
Is there a way to keep this from happening? (Other than not having any PHP errors, because I tend to have them a lot.)
Additional info:
PHP version 7.1.12
Lumen (5.6.1) (Laravel Components 5.6.*)
The directory "lumenTest" is one level above my www or public and there is where the .env is located, the site is on a Linux server shared host
No, that's not normal. Professional developers consider this amateurish behavior. It's the exact reason why some companies don't even consider using Laravel.
Many people (including me) have already notified them that this is really not done, but the developers don't seem to care. In fact, it's the only framework I know of that thinks it's OK to print critical information on a debug page. Surely a visitor should never see stack traces, SQL queries, or pieces of code... and environment variables are confidential and should never end up in an HTTP response.
The best advice I have is to use a professional MVC framework like ASP.NET, CodeIgniter, or Yii, since there's no telling what else the Laravel devs think is OK to do...
If, on the other hand, you decide to use Laravel anyway, there's a package that counters this: https://github.com/GlaivePro/Hidevara
It's really easy to set up; just make sure you don't forget the app->extend instruction.
On a production server you must not run "composer install" but rather "composer install --no-dev". That way filp/whoops will (should, hopefully) not be installed and cannot be triggered.
For professional development, I certainly recommend against Laravel, since the bar for what they consider acceptable seems to be very low.
As a side note: the developers claim that nothing can go wrong when APP_DEBUG=false, but incidents in the past have shown that the whoops handler can be triggered even when debug mode is disabled. https://www.google.com/amp/s/blog.hacken.io/dangers-of-laravel-debug-mode-enabled%3fhs_amp=true
Yes, if you have debug mode enabled, any sort of data relating to an error can be displayed. This certainly would include sensitive data that would be useful when debugging.
For production, you want all errors to be privately logged, not publicly displayed. For this reason, you will want APP_DEBUG=false in your .env file.
If this is happening while debug mode is already set to false, you will want to configure the hiding/logging of errors at the server level.
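For illustration, the relevant production settings in a Laravel/Lumen .env look something like this (values are placeholders):

# never expose debug output in production
APP_ENV=production
APP_DEBUG=false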

Unable to publish node js site to azure using Visual Studio 2013

I am publishing my Node.js site to Azure using this tutorial - http://blogs.technet.com/b/sams_blog/archive/2014/11/14/azure-websites-deploy-node-js-website-using-visual-studio.aspx
I get the following error, also mentioned in one of the comments on the blog. Any idea what this error is about and how I can fix it? I am able to run my app locally with no issues.
Error: InvalidParameter
Parameter name: index
P.S.: the site is a very basic "Hello world" kind of site, and this is the first time I am using and deploying to Azure.
I created a new project as a "Blank Azure Node.js web application", replaced the resulting package.json and .js files with what I had before, and it publishes fine now.
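For comparison, the blank Azure Node.js template generates roughly this sort of minimal entry point; treat this as a sketch rather than the exact generated file:

// server.js
var http = require('http');
// Azure/iisnode supplies the port through the environment.
var port = process.env.PORT || 1337;
http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello World\n');
}).listen(port);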
All was working fine and then I suddenly got the error! I'm pretty sure it's something in the project, as it's now happening in VS2013 and VS2015 on different computers.
After a lot of searching: it's something to do with templates. For me, Azure TFS CI got things working again, if that's an option for you.
I had this issue with some projects but not with others, all created in a similar way. So I went through every change and every setting I could until eventually I worked it out. I didn't want to give up and just remake them.
Basically, it's file paths. The first thing you notice is that it errors very quickly compared to a usual publish. The first thing triggered is a build, but unlike heavyweight framework languages there's not really much to actually build.
Like all VS builds, it pops out a bin folder; take note of where this appears. This is the key: you want it to appear in the root of your deployment, usually at the same level as the publish profile.
Before I moved my projects to VS, TFS, and Azure, I used git and did the Azure push deployment through git, so I instinctively structured my folders in a similar fashion, with a src folder and all the extra VS baggage a directory higher.
This is where I noticed the bin folder, so I restructured my solution, edited the .njsproj file (in Notepad) to move it in line with the source code, and re-added it to my solution.
Technically speaking, this is a bug in VS: it lets you create the project and specify different locations, which is all fine unless you want to build and publish locally.
Once you get your head around what is going on, you should be able to solve this problem easily and not make the same mistake in the future. If anyone is still confused, comment and I'll grab some screenshots.

How to backup and restore IIS configuration from script

I'm writing a script that sets up a lot of different applications on Windows (mainly svn and open-source servers for http, dns, mail, ftp, and db). The script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This helps not only new developers but all existing developers too: whenever there are changes in the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (it's on the prod server) and restore it automatically on the new developer's machine, so they don't have to install/configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only for the same machine (it uses encryption keys and those keys change between computers).
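For context, the standard appcmd backup workflow looks like this (the backup name is arbitrary):

%windir%\system32\inetsrv\appcmd.exe add backup "DevBackup"
%windir%\system32\inetsrv\appcmd.exe list backup
%windir%\system32\inetsrv\appcmd.exe restore backup "DevBackup"

This works fine for point-in-time restores on the same machine; the machine-specific encryption keys are what break cross-machine restores.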
Is there a way, a scriptable way, to take everything IIS related from one server and restore it on another server, without user intervention and having the restored IIS run exactly as the original?
Thanks in advance!
Francisco
Just putting this here so anyone who comes across this will understand why it wasn't answered: a website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration with one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and with how you access the properties within it using the Get-WebConfigurationProperty cmdlet. One way to get familiar with scripting against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the change there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to gather the proper Get/Set commands for the configuration elements within applicationHost.config.
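As a sketch of the kind of Get/Set pair the Generate Script link produces (the site name and filters here are examples, not from the original post):

Import-Module WebAdministration

# Read the default-document list for a hypothetical site
Get-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Location 'Example Site' -Filter 'system.webServer/defaultDocument/files' `
    -Name 'Collection'

# Enable directory browsing for the same site
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Location 'Example Site' -Filter 'system.webServer/directoryBrowse' `
    -Name 'enabled' -Value 'True'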
I've created something almost exactly like what the OP is looking for and it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
A website involves everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the app pool that may need to be added to certain local groups on the server, to secure bindings that require a certificate to be loaded on the server. You can see that this isn't a simple undertaking (and these are just a small portion of the variables a website may contain).
There is, however, a large set of cmdlets in the WebAdministration module that Microsoft provides out of the box, which you can leverage to develop something like this. I know this question is four years old, but I hope anyone who stumbles on it will find the above useful.

Unable to run a .Net website locally using the System.Web.Security namespace

I took over this .NET 4.0 WebForms website and got the exact same code from the former developer.
It runs fine on his local machine, but on my local machine it craps out on anything to do with the “System.Web.Security” namespace.
If I put a breakpoint where it is failing and try to step into the code for that namespace, it won't let me go any further; it simply will not execute anything to do with the namespace. This happens with all three major browsers.
Since this forum does not allow any attachments, I can't show you anything more.
Does anybody have any ideas what is wrong?
It turned out to be a simple permissions issue on our end: my ID was not allowed access to the database.

XPages build process and replication

I'm wondering if someone can enlighten me a little bit on the XPages build process and how it works with other replica copies of a database. Much of the advice I've seen posted about working with Domino Designer indicates (logically) that you'll get much faster response working on local copies and then replicating those to the server.
I'll usually save my changes locally, build manually, and replicate to the server, and most of the time that seems to work fine. However, on some occasions I've found that when I view the work I've done in the browser on the server copy, it hasn't updated... in fact, in a couple of scary incidents, it displayed a version from several weeks ago (where is it even getting that from??). This isn't a browser caching issue, and I've opened the design elements (XPages, custom controls) on the server copy and verified that the changes ARE there. I end up having to perform a Clean on the server copy of the application (not just a build), and then it displays as expected.
This seems like a foolish question, but you shouldn't have to perform a build on each replica copy, correct? Any thoughts as to what might be the issue here? There is another developer involved, and he works directly on the server since he's in the same location, but we are rarely working at the same time, and never on the same elements. We are not using source control at this time.
We have seen similar behavior ourselves.
In our case, we do development on a server, clean/build the project, and then copy that database as a template to a deployment server. From there, we update the design in the production database.
We have noticed that the build process sometimes fails, especially when working over slower links. So we always repeat the clean/build/refresh process a couple of times, and we try to do it while in the office, with a fast connection between the workstations and the server.
We haven't experienced build problems lately, so repeating the build process obviously helps.
We have also seen that replicating a design between local and server copies sometimes causes build-related problems, which could explain the problems you are seeing. We have stopped using replication because of that and now always work on the server copy directly.
I don't think your not using source control has anything to do with it.
I usually make all changes inside a local template, then perform "Project \ Clean", then update the design in the server database. That works in 99% of cases. If not, I perform "Project \ Clean" once again. I hate this, but it looks like it's the only way to get consistent code into production.
