HKLM\Init on Windows CE 6.0 - windows-ce

I've been trying to make my application run at startup on Windows CE 6.0. Unfortunately, since the device (YFAtlas4) is customized by the manufacturer, I'm unable to place a shortcut in \Windows\Startup (for some mysterious reason).
So now I'm trying to put the path to my application in the HKLM\Init registry key, and here's my problem: is there a way to place an absolute path there? Every example I've seen uses only the application name, but my application has to be installed in the \ResidentFlash\ folder.

Did you try to put the full path there?
There should not be any problem doing so.
If you edit the registry from code, the string you want to store is L"\\ResidentFlash\\AppName.exe". Alternatively, if you have an ActiveSync connection to the device, you can use a remote registry editor and not worry about the double backslashes.
Also, since you are using the HKLM\Init functionality, make sure your application calls SignalStarted so other programs that depend on it can start as well.
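If you do set it from code, a minimal sketch (C++ for Windows CE) might look like the following. The Launch90/Depend90 slot numbers and the dependency on slot 30 (GWES on standard images) are assumptions here, so pick values that are free on your particular image:

    // Minimal sketch: register \ResidentFlash\AppName.exe under HKLM\Init.
    // "Launch90"/"Depend90" are example slot numbers -- verify they are unused.
    #include <windows.h>

    bool AddToInit()
    {
        HKEY hKey;
        DWORD dwDisp;
        if (RegCreateKeyEx(HKEY_LOCAL_MACHINE, L"Init", 0, NULL, 0, KEY_WRITE,
                           NULL, &hKey, &dwDisp) != ERROR_SUCCESS)
            return false;

        // Full path is fine; quote it if it contains spaces.
        const wchar_t szPath[] = L"\\ResidentFlash\\AppName.exe";
        bool ok = RegSetValueEx(hKey, L"Launch90", 0, REG_SZ,
                                (const BYTE*)szPath, sizeof(szPath)) == ERROR_SUCCESS;

        // Optional dependency list: wait for slot 30 (GWES on standard images).
        WORD wDepends[] = { 30 };
        ok = ok && RegSetValueEx(hKey, L"Depend90", 0, REG_BINARY,
                                 (const BYTE*)wDepends, sizeof(wDepends)) == ERROR_SUCCESS;

        RegCloseKey(hKey);
        return ok;
    }

In the launched application itself, the launch slot number arrives on the command line; pass it back via SignalStarted(_wtol(lpCmdLine)) once the program is far enough along for dependent processes to start.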

A fully qualified path should be supported just fine. Be aware that if the path has a space in it, you'll need to wrap it in quotes. Also, if it's a Compact Framework app, it's not as simple as just adding your app to the Init key; often that will fail. See this blog entry on getting CF apps working with the Init key.

Related

Troi plugin function not working in FileMaker WebDirect

For the purpose of being able to select multiple files from the computer or network, I am using the following script command, which works great in native FileMaker 14:
Set Variable [$dosFN; value: TrFile_SelectFileDialog( " -AllowMultipleFiles" ; "Please select one or more files" ;)]
In testing to make sure this works, I am using a custom dialog to display the value of $dosFN, and an example that successfully comes back would show:
From drive as:
C:\Files\img1.jpg
C:\Files\img2.jpg
or from network as:
\\ACI-2008-01\Files\img1.jpg
\\ACI-2008-01\FIles\img2.jpg
What is not working is when I attempt the same thing in a WebDirect environment: performing the same script returns only the following, without even showing a file selection dialog box:
$$-4222
So how can I possibly make this work as desired in a webdirect environment?
This is not possible. This call is supposed to display the file selection dialog, which the plug-in does by calling a function from one of the system libraries. In Web Direct you work with the database via a browser; behind the scenes FileMaker silently converts the layouts and scripts into something that can run in the browser (lots of HTML, CSS, and JavaScript). But it cannot convert everything, and this call is one of the things it cannot convert. As a result, the plug-in only runs on the FileMaker Server, in a completely different environment, and has no way to make a system call on another computer.
You may have better luck with FileMaker's own Insert File script step. It seems to be compatible with Web Direct. It cannot select multiple files though. (Also, other plug-in functions may still work in Web Direct but keep in mind that they actually run on the server, not the computer that runs the browser.)
The latest version of Troi File seems to be WebDirect-compatible, but it has to be installed on the FileMaker Server as a server-side plug-in. In any case, check their documentation first, as it is usually quite detailed, and if that does not help, you might get in touch with their support.
As I understand it, the plug-in runs server-side and has no way to display an interface on the client side (the web browser). I do not believe there is a way to do what you are trying to do with Troi File, but you may just need to contact Troi.

How to use firebird embedded on Linux with IBPP without running a service?

We're about to integrate a Firebird database into our software via IBPP. According to the Firebird documentation, this should be possible.
We already managed to use the Firebird database via IBPP while the service was running, but we want to avoid running a service. On Windows we have already accomplished this; on the Linux side, however, there are two main differences:
Installation
On Windows no installation is necessary. On Linux it seems to be required, as the docs say:
Finally, you can't just ship libfbembed.so with your application and use it to connect to local databases. Under Linux, you always need a properly installed server, be it Classic or Super.
Is this true? I have found the Firebird documentation to be outdated sometimes. If this is still valid, how do we deal with this installation? Can we just run it on the customer's PC? I looked at the shell script; it starts a service, so it seems running this service is needed during the installation process. That would be no problem if the service runs only during installation and is never needed afterwards, but I'm not sure about this.
IBPP
On Windows you just load the DLL via LoadLibrary: we put fbembed.dll, icuuc30.dll and icudt30.dll in any_directory, changed the passage in IBPP where the embedded DLL is loaded to LoadLibrary("any_directory\fbembed.dll"), and added any_directory to the PATH variable. Everything works now. (Aside: by doing this it is possible to call the database via a DLL we created using IBPP. This DLL can be used by every EXE we ship to the customer without caring about the path the EXE is placed in.)
But on Linux I couldn't find the code where this is done. According to this HOWTO, it seems a special directory structure is needed. Is this really necessary? Is it possible to place the .so files in any_directory and run the application from another_directory? Is it necessary to add a LoadLibrary equivalent to the Linux section of IBPP? (BTW: my problem is that I can't really test things myself, because the Linux integration is being done by someone else.)
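For reference, the direct Linux analogue of the LoadLibrary change described above would be dlopen/dlsym. The sketch below is purely illustrative (the path is hypothetical, and whether libfbembed.so tolerates being loaded from an arbitrary directory, without the usual configuration and lock files next to it, is exactly the open question here); link with -ldl:

    // Illustrative only: dlopen the embedded engine from an arbitrary folder,
    // mirroring the LoadLibrary("any_directory\\fbembed.dll") change on Windows.
    // The path below is hypothetical.
    #include <dlfcn.h>
    #include <cstdio>

    int main()
    {
        void* fb = dlopen("/opt/any_directory/libfbembed.so", RTLD_NOW | RTLD_GLOBAL);
        if (!fb) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        // isc_attach_database is part of the regular Firebird client API, so it
        // should resolve if the library loaded at all.
        void* attach = dlsym(fb, "isc_attach_database");
        std::printf("libfbembed.so loaded, isc_attach_database %s\n",
                    attach ? "resolved" : "not resolved");

        dlclose(fb);
        return 0;
    }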

How to backup and restore IIS configuration from script

I'm writing a script that sets up a lot of different applications on Windows (mainly svn and open source servers for http, dns, mail, ftp and db). This script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This not only helps new developers, but also existing developers: whenever there are changes in the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (it's on the Prod server) and restore it automatically on the new developer's machine, so they don't have to install and configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only on the same machine (it uses encryption keys, and those keys change between computers).
Is there a scriptable way to take everything IIS-related from one server and restore it on another server, without user intervention, and have the restored IIS run exactly like the original?
Thanks in advance!
Francisco
Just putting this here so anyone who comes across this will understand why it wasn't answered. A website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration with one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and how you access the properties within it using Get-WebConfigurationProperty. One way to get familiar with scripting against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the changes there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to gather the proper Get/Set commands for the configuration elements within the applicationHost.config file.
I've created something almost exactly like what the OP is looking for and it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
A website can involve everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the app pool that may need to be added to certain local groups on the server, to secure bindings that require a certificate to be loaded on the server. You can see that this isn't a simple undertaking (and these are just a small portion of the variables a website may contain).
There is, however, a large set of cmdlets in the WebAdministration module that Microsoft provides out of the box, which you can leverage to help develop something like this. I know this is four years old, but I hope anyone who stumbles on this will find the above useful.

How to turn off Internet Explorer enhanced security settings in Azure

My site is hosted on Azure. I need to programmatically turn off Internet Explorer's default enhanced security configuration settings whenever I repave or redeploy a new box on Azure.
How do I do this?
I found this article on another site http://jetlounge.net/blogs/teched/archive/2009/10/25/fix-ie-esc-won-t-turn-off-internet-explorer-enhanced-security.aspx. It included the following command line syntax, but on my local box I couldn't find the IEHARDEN.INF file it referred to. I also don't think this solution is Azure-specific.
rundll32.exe setupapi.dll,InstallHinfSection IESoftenAdmin 128 %windir%\inf\IEHARDEN.INF
I need to turn off these default hardening settings on Azure because I have a third-party IE screen-capture DLL that needs to execute JavaScript on web pages.
I think that this approach, shaped as a Windows Azure startup task running in an elevated execution context, will help you.
Just remember that the .bat or .cmd file you create needs to be UTF8 encoded. There used to be some issues with the batch files if they are not UTF8.
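For illustration only, here is the kind of change such a startup task could apply. It assumes the commonly cited Active Setup components that control IE ESC; the two GUIDs below should be verified against the OS family your role runs on, and the task must run elevated:

    // Sketch: disable IE Enhanced Security Configuration by flipping the
    // "IsInstalled" value of its Active Setup components. The GUIDs are the
    // commonly documented ones for Administrators and Users -- verify them.
    #include <windows.h>
    #include <cstdio>

    static bool SetIsInstalled(const wchar_t* subkey, DWORD value)
    {
        HKEY hKey;
        if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, subkey, 0, KEY_SET_VALUE, &hKey)
                != ERROR_SUCCESS)
            return false;
        bool ok = RegSetValueExW(hKey, L"IsInstalled", 0, REG_DWORD,
                                 (const BYTE*)&value, sizeof(value)) == ERROR_SUCCESS;
        RegCloseKey(hKey);
        return ok;
    }

    int main()
    {
        const wchar_t* admins =
            L"SOFTWARE\\Microsoft\\Active Setup\\Installed Components\\"
            L"{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}";
        const wchar_t* users =
            L"SOFTWARE\\Microsoft\\Active Setup\\Installed Components\\"
            L"{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}";

        bool ok = SetIsInstalled(admins, 0) && SetIsInstalled(users, 0);
        std::printf(ok ? "IE ESC turned off for new IE sessions\n"
                       : "Failed -- run elevated and verify the key paths\n");
        return ok ? 0 : 1;
    }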
UPDATE
I decided to update the answer because it would have been too long for a second comment. I want to first make clear that I do not intend to offend anyone; what follows is just my personal view and thoughts.
Well, my vision might be (is) distorted through my own prism. But I think that these specifics have nothing to do with Windows Azure itself.
These are OS-related configuration specifics, and the approach would be one and the same (with some variations) regardless of the (hosting/cloud) provider. If you had to deploy your solution to a dedicated (or virtual) server, you would have had to create some kind of scheduled task or startup task to make these configuration changes, or even log in interactively to make them.
Since Windows Azure offers the startup task, it is up to us (developers) to decide what to do and how to shape the OS according to our needs.
The OS configuration changes that one might need are limited only by the total amount of available Windows Server 2008/R2 configuration options. I personally do not believe these need to be reflected in the Windows Azure documentation; they have their place in the Windows Server documentation. It is arguable which options are "commonly used", because what is common for one person might be "never needed" for others ...

IIS executable not executing

I have been looking at an issue for a week straight, have been unable to figure it out, and am desperate for a fix.
On a client site we have two environments: UAT and PROD. UAT works perfectly (please keep this in mind). We are now trying to deploy the solution to PROD, but certain parts of the solution are not working.
We have developed an ASP.NET application that we provide to clients to allow them to invoke SSIS packages (there are a couple of drop-downs that they first select, then they click a button named "Invoke").
When the user clicks the Invoke button, a batch file named InvokeSSIS.bat is called that assembles a command line call to dtexec with the appropriate parameters.
I'm having a problem with a particular package that is responsible for calling an executable which generates a spreadsheet that I will be importing into my system.
The executable is on a mapped H:\ drive.
I have modified the InvokeSSIS.bat batch file to capture the command it generates. If I execute this command from the command line, it works perfectly. From the web app invoker, the package executes, but the task responsible for calling the executable doesn't run: the entire package takes only 1 second to complete (whereas it should take about a minute).
The executable DOES have a GUI, but it is NOT interactive: when you call it with specific parameters, it automatically runs in batch mode and executes a macro used to generate the desired spreadsheet.
I know this is ok because it works on the UAT server AND it works from the command line!
I have checked the permissions on the executable (by right-clicking the executable and clicking Properties). I have granted Full Control on the executable to the same user specified on the Identity tab of the application pool I am using.
Can someone please help me? As I said I am dying over here!
Please let me know if you have any ideas or what other info you need.
Environment (both UAT and PROD)
OS: Windows Server 2003
IIS 6
asp.net 2.0
SQL Server 2008
Thanks!
Steve
You can't use a mapped drive with IIS.
You must use the \\servername syntax to reach files on other systems.
I agree with user544284 that this is at least in part a mapping issue. I'll ignore for a minute the complete insanity of having a web application call a batch file to start an executable that's on a remote network drive through a drive letter mapping.
Most likely the UAT box has something set up that maps that drive letter for you which Prod is missing.
The only other possibility is that a security violation is occurring. Running .exe files from a network drive is generally frowned upon. Do the two environments have the exact same version of Windows? Are they configured the same with regard to UAC? Any differences here are going to be important.
Which brings up an interesting thought: I wonder if someone logged in to the UAT server using the same account credentials the app pool is using and added the IP address of the machine where the exe lives to the list of "Local Intranet" sites... Or if they installed SSIS on the UAT server itself.
Just because YOU can log in to the server and run it from the command line means nothing. You have to find out whether the drive letter is mapped at all for the user the web app runs under, whether that user has the required security rights, and whether the local OS will allow it regardless.
Okay, I can't ignore it: harebrained is the nicest adjective I can come up with for this "architecture". Do yourself a favor and go back to the drawing board on this one. It has the word "brittle" written all over it, as you have already found. Instead of building out a batch file to call dtexec, just do it directly, either by something like this or this.
