My development machine is a VirtualBox VM running Windows Server 2008 R2.
We are using CRM 2011 with Update Rollup 12.
For development I use .NET Framework 4.
From CRM I call an aspx page that contains a grid of records I can select. After making a selection, I press a button and pass the selection to an assembly. This assembly has a function that checks whether a certain key exists in the registry. If so, it continues; if not, it returns.
The problem I'm facing is that reading the registry with OpenSubKey() fails with an error telling me that I'm not authorized to do so. I use the code below to retrieve the key. The assembly is not signed; signing it doesn't change the result.
// Open the 64-bit or 32-bit view of HKEY_CURRENT_USER, depending on the OS.
RegistryKey localKey = null;
if (Environment.Is64BitOperatingSystem)
{
    localKey = RegistryKey.OpenBaseKey(Microsoft.Win32.RegistryHive.CurrentUser, RegistryView.Registry64);
}
else
{
    localKey = RegistryKey.OpenBaseKey(Microsoft.Win32.RegistryHive.CurrentUser, RegistryView.Registry32);
}
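For context, the lookup that then fails presumably continues from the snippet above along these lines; the subkey path is only a placeholder, since the question does not name the actual key:

// Hypothetical subkey path; the real key name is not given in the question.
RegistryKey subKey = localKey.OpenSubKey(@"Software\SomeVendor\SomeProduct");
if (subKey == null)
{
    // Key not present: return, as described in the question.
    return;
}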
Doing the same thing from a console application with the same assembly works without problems.
Regards,
Martin
Verify that your application pool identity has read access to the registry key in question.
Check which user your application pool runs as in IIS, then open the registry key in regedit and check its permissions.
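If it is not obvious which account the code actually runs under, one quick way to check is to log the current Windows identity from the page before touching the registry; a minimal diagnostic sketch:

// Diagnostic only: log the Windows account the request actually executes as
// (the application pool identity, unless impersonation is enabled).
string identity = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
System.Diagnostics.Trace.WriteLine("Running as: " + identity);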
I have a WebJob that needs to create a JWT token to talk with an external service. The following code works when I run the WebJob on my local machine:
public static string SignES256(byte[] p8Certificate, object header, object payload)
{
    var headerString = JsonConvert.SerializeObject(header);
    var payloadString = JsonConvert.SerializeObject(payload);

    // Import the PKCS#8 private key and sign "header.payload" with ECDSA P-256 / SHA-256 (ES256).
    CngKey key = CngKey.Import(p8Certificate, CngKeyBlobFormat.Pkcs8PrivateBlob);
    using (ECDsaCng dsa = new ECDsaCng(key))
    {
        dsa.HashAlgorithm = CngAlgorithm.Sha256;
        var unsignedJwtData = Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(headerString)) + "." + Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(payloadString));
        var signature = dsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData));
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}
However, when I deploy my WebJob to Azure, I get the following exception:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: NotificationFunctions.QueueOperation ---> System.Security.Cryptography.CryptographicException: The system cannot find the file specified. at System.Security.Cryptography.NCryptNative.ImportKey(SafeNCryptProviderHandle provider, Byte[] keyBlob, String format) at System.Security.Cryptography.CngKey.Import(Byte[] keyBlob, CngKeyBlobFormat format, CngProvider provider)
It says it can't find a specified file, but the parameters I am passing in are not file locations; they are in memory. From what I have gathered, there may be some kind of cryptography setting I need to enable to be able to use the CngKey.Import method, but I can't find any related settings to configure in the Azure portal.
I have also tried using JwtSecurityTokenHandler, but it doesn't seem to handle the ES256 hashing algorithm I need to use (even though it is referenced in the JwtAlgorithms class as ECDSA_SHA256).
Any suggestions would be appreciated!
UPDATE
It appears that CngKey.Import may actually be trying to store the certificate somewhere that is not accessible on Azure. I don't need it stored, so if there is a better way to access the certificate in memory, or to convert it to a different kind of certificate that is easier to use, that would work.
UPDATE 2
This issue might be related to the Azure Web Apps IIS setting that does not load the user profile, as mentioned here. I have enabled this by setting WEBSITE_LOAD_USER_PROFILE = 1 in the Azure portal app settings. I have tried running the code with this setting both via the WebJob and via the Web App in Azure, but I still receive the same error.
I used a decompiler to take a look under the hood at what the CngKey.Import method was actually doing. It looks like it tries to insert the certificate I am using into the "Microsoft Software Key Storage Provider". I don't actually need this; I just need to read the value of the certificate, but that doesn't look possible.
Once I realized a certificate was getting inserted into a store somewhere on the machine, I started thinking about how bad of a thing that would be from a security standpoint if your Azure Web App was running in a shared environment, like it does for the Free and Shared tiers. Sure enough, my VM was on the Shared tier. Scaling it up to the Basic tier resolved the issue.
I know the application name and am trying to find the install location and GUID of the application using InstallShield.
I found the application's registry values (DisplayName, InstallLocation, UninstallString, etc.) manually in the following location:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\{GUID}
But the GUID of the application is different on each client machine, so I'm not able to hard-code the registry path to get these values using the following function:
RegDBGetKeyValueEx();
Can we find the GUID of the application if we know the application name?
Thanks.
You can list the Uninstall keys with code similar to the RegDBQueryKey example:
#define UNINSTALLKEYPATH "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall"
listKeys = ListCreate(STRINGLIST);
RegDBQueryKey(UNINSTALLKEYPATH, REGDB_KEYS, listKeys);
And then you can iterate these keys looking for the appropriate value using code similar to the ListGetNextItem example:
nResult = ListGetFirstItem(listKeys, sItem);
while (nResult != END_OF_LIST)
    RegDBGetKeyValueEx(UNINSTALLKEYPATH ^ sItem, ...); // check each key
    nResult = ListGetNextItem(listKeys, sItem);
endwhile;
Once you find it, you can leverage any other information in that key, or the name of the key itself. (Note: don't forget to destroy the list.)
If you know additional things about this setup, for instance if it's an MSI, there may be more direct approaches that leverage Windows Installer APIs.
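For illustration, a C# sketch of that Windows Installer API route (MsiEnumProducts and MsiGetProductInfo are the actual Windows Installer functions; the wrapper class and method names are made up for this example) could look like this:

using System;
using System.Runtime.InteropServices;
using System.Text;

static class MsiProductLookup
{
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiEnumProducts(uint iProductIndex, StringBuilder lpProductBuf);

    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiGetProductInfo(string szProduct, string szProperty,
        StringBuilder lpValueBuf, ref uint pcchValueBuf);

    // Returns the product code (GUID) of the first installed product whose
    // ProductName matches, or null if none is found.
    public static string FindProductCodeByName(string productName)
    {
        var productCode = new StringBuilder(39); // 38 characters + null terminator
        for (uint index = 0; MsiEnumProducts(index, productCode) == 0; index++)
        {
            uint len = 512;
            var name = new StringBuilder((int)len);
            if (MsiGetProductInfo(productCode.ToString(), "ProductName", name, ref len) == 0 &&
                string.Equals(name.ToString(), productName, StringComparison.OrdinalIgnoreCase))
            {
                return productCode.ToString();
            }
        }
        return null;
    }
}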
This is regarding SharePoint 2010 integration with MS CRM 2011.
While creating a record in CRM, I am trying to create a custom document location for that record and a corresponding folder in SharePoint, so that when the user clicks the documents link on the entity record it does not prompt them to create a folder in SharePoint (trying to avoid the SharePoint prompt for a better user experience).
I have implemented this through a post-create asynchronous plug-in. (I did this from a console program first and it works fine.) I built the plug-in and deployed it to CRM.
When creating a record, it errors out with a message like "An internal server 500 error - Could not load the assembly with public key token …".
But when I debug the plug-in, it fails at the first line where I instantiate the SharePoint client context, with [System.Security.SecurityException] = {"That assembly does not allow partially trusted callers."}
From what I found searching, this requires the AllowPartiallyTrustedCallers attribute in the AssemblyInfo file. As I understand it, this would have to be done in the SharePoint DLLs, because the request goes from my CRM plug-in to the SharePoint DLLs and they are not allowing calls from my assembly. How can we change that?
I have referenced Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll.
What is the alternative to overcome this issue?
I'd appreciate it if someone can help me. Thanks in advance.
Here is my SharePoint code:
ClientContext clientContext = new ClientContext(siteUrl);
CredentialCache cc = new CredentialCache();
cc.Add(new Uri(siteUrl), "NTLM", CredentialCache.DefaultNetworkCredentials);
clientContext.Credentials = cc;
clientContext.AuthenticationMode = ClientAuthenticationMode.Default;

Web web = clientContext.Web;
SP.List list = web.Lists.GetByTitle(listName);

// Create a folder item in the target list/library.
ListItemCreationInformation newItem = new ListItemCreationInformation();
newItem.UnderlyingObjectType = FileSystemObjectType.Folder;
newItem.FolderUrl = siteUrl + "/" + folderlogicalName;
if (!relativePath.Equals(string.Empty))
    newItem.FolderUrl += "/" + relativePath;
newItem.LeafName = newfolderName;

SP.ListItem item = list.AddItem(newItem);
item.Update();
clientContext.ExecuteQuery();
I am passing siteUrl, folderlogicalName, relativePath and newfolderName as parameters.
This works fine from my console application, but when converted to a CRM plug-in it gives the error described above.
I've seen a similar issue before.
CRM plugins run inside a sandbox, so all assemblies and .NET libraries used must allow partially trusted callers (since the CRM sandbox runs under partial trust). It works in the console because there you are executing the code under full trust.
The issue is not necessarily in your code; it could be that a dependency, or a .NET library itself, does not allow partially trusted callers. In your case it sounds like the SharePoint client library is the culprit (a stack trace of the error should reveal exactly where the cause is).
Since you don't have access to the source of the library causing the problem, you will likely have to create a wrapper to overcome the error. However, the wrapper cannot directly reference the problem library, or you will get the same issue. To get around this, you can create a web service that acts as your wrapper and then call that web service from your CRM plugin. This way the full-trust code is executed by the web service, which then returns the result to your calling CRM plugin.
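A rough sketch of that wrapper idea, with the service contract, class names and URL invented for the example; the full-trust implementation behind the service would contain the ClientContext code from the question:

using System.ServiceModel;

// Hypothetical full-trust wrapper service contract. Its implementation, hosted
// outside the CRM sandbox, would run the SharePoint ClientContext code.
[ServiceContract]
public interface IFolderService
{
    [OperationContract]
    void CreateFolder(string siteUrl, string listName, string relativePath, string folderName);
}

// Called from inside the CRM plug-in: the plug-in only references the service,
// never the SharePoint client DLLs, so the partially-trusted-caller check is not hit.
public static class FolderServiceClient
{
    public static void CreateFolder(string siteUrl, string listName, string relativePath, string folderName)
    {
        var factory = new ChannelFactory<IFolderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://yourserver/FolderService.svc")); // hypothetical address
        IFolderService service = factory.CreateChannel();
        try
        {
            service.CreateFolder(siteUrl, listName, relativePath, folderName);
        }
        finally
        {
            ((IClientChannel)service).Close();
            factory.Close();
        }
    }
}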
Here is more info on the error.
Thanks Jason. This works for me.
I would like to add a few additional points to the answer.
1. I added the SharePoint DLLs to the bin folder of the CRM 2011 website.
2. I also deployed the same DLLs to the folder where the async service runs, so that my asynchronous plug-in works.
Thanks once again for the cooperation.
Have you ever tried to run a hosted service in the Windows Azure emulator with full IIS and multiple role instances? Some days ago I noticed that only one of the multiple instances of a web role is started in IIS at a time. The following screenshot illustrates the behavior, and the message box in front of it shows the reason: it appears when trying to start one of the stopped websites in IIS Manager.
Screenshot: IIS with stopped Websites
The sample cloud application contains two web roles, MvcWebRole1 and WCFServiceWebRole1, each configured to use three instances. My first thought was: "Sure! No port collision will happen in the real Azure world because every role instance is its own virtual machine. It cannot work in the emulator!" But after some research and analyzing many parts of the Azure compute emulator, I found out that the compute emulator creates a unique IP for each role instance (in my example from 127.255.0.0 up to 127.255.0.5). This MSDN blog article (http://blogs.msdn.com/b/avkashchauhan/archive/2011/09/16/whats-new-in-windows-azure-sdk-1-5-each-instance-in-any-role-gets-its-own-ip-address-to-match-compute-emulator-close-the-cloud-environment.aspx) by Microsoft employee Avkash Chauhan describes this behavior as well. After that conclusion I came to the following question: why the hell does the compute emulator (more precisely DevFC.exe) not add the IP of the appropriate role instance to the binding information of each website?
I added the IP to each Website by hand and tadaaaaa: every Website can be started without any collisions. The next screenshot demonstrates it with the changed binding information highlighted.
Screenshot: IIS with started Websites
Once again: Why the hell does the emulator not do it for me? I wrote a small static helper method to do the binding extension thing for me on every role start. Maybe someone wants to use it:
using System;
using System.Linq;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Emulator
{
    public static void RepairBinding(string siteNameFromServiceModel, string endpointName)
    {
        // Use a mutex to mutually exclude the manipulation of the IIS configuration.
        // Otherwise server.CommitChanges() will throw an exception!
        using (var mutex = new System.Threading.Mutex(false, "AzureTools.Emulator.RepairBinding"))
        {
            mutex.WaitOne();

            using (var server = new ServerManager())
            {
                var siteName = string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
                var site = server.Sites[siteName];

                // Add the IP of the role instance to the binding information of the website.
                foreach (Binding binding in site.Bindings)
                {
                    // e.g. "*:82:"
                    if (binding.BindingInformation[0] == '*')
                    {
                        var instanceEndpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints[endpointName];
                        string bindingInformation = instanceEndpoint.IPEndpoint.Address.ToString() + binding.BindingInformation.Substring(1);
                        binding.BindingInformation = bindingInformation;
                        server.CommitChanges();
                    }
                    else
                    {
                        throw new InvalidOperationException();
                    }
                }
            }

            // Start all websites of the role once all bindings of all websites of the role are prepared.
            using (var server = new ServerManager())
            {
                var sitesOfRole = server.Sites.Where(site => site.Name.Contains(RoleEnvironment.CurrentRoleInstance.Role.Name));
                if (sitesOfRole.All(site => site.Bindings.All(binding => binding.BindingInformation[0] != '*')))
                {
                    foreach (Site site in sitesOfRole)
                    {
                        if (site.State == ObjectState.Stopped)
                        {
                            site.Start();
                        }
                    }
                }
            }

            mutex.ReleaseMutex();
        }
    }
}
I call the helper method as follows:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        if (RoleEnvironment.IsEmulated)
        {
            AzureTools.Emulator.RepairBinding("Web", "ServiceEndpoint");
        }
        return base.OnStart();
    }
}
I got it!
I have this behavior on three different machines, all of which were recently formatted and set up with clean installations of Windows 8, Visual Studio 2012, the Azure SDK 1.8 and the Azure Tools. So a reinstallation of the Azure SDK and Tools (as Anton suggests) should not change anything. But the cleanliness of my three machines is the crucial point! Anton, do you have Visual Studio 2010 with at least SP1 installed on your machine?

I analyzed IISConfigurator.exe with ILSpy and found the code which sets the IP in the binding information of the websites to '*' (instead of 127.255.0.*). It depends on the static property Microsoft.WindowsAzure.Common.Workarounds.BindToAllIpsWorkaroundEnabled. This property internally uses Microsoft.WindowsAzure.Common.Workarounds.TryGetVS2010SPVersion and leads to setting the IP binding to '*' if the service pack level of Visual Studio 2010 is smaller than 1. TryGetVS2010SPVersion checks four registry keys, and I don't know why, but one of the keys exists in my registry and returns a Visual Studio 2010 SP level of 0 (I never installed VS2010 on any of the three machines!).

Once I changed the value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\DevDiv\vs\Servicing\10.0\SP from 0 to 10 (anything greater than 0 should do), the Azure emulator started setting the 127.255.0.* IPs of the roles in the binding information of all the websites in IIS, and all websites are started correctly.
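For reference, the manual registry edit described above could also be scripted; this is only a convenience sketch, it assumes the SP value is a DWORD (as the 0 found in the registry suggests), and it must run elevated:

using Microsoft.Win32;

class FixVs2010SpLevel
{
    static void Main()
    {
        // Writes the value that TryGetVS2010SPVersion reads; anything greater
        // than 0 disables the '*' binding behavior. Requires administrative rights.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\DevDiv\vs\Servicing\10.0",
            "SP",
            10,
            RegistryValueKind.DWord);
    }
}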
I want my MVC3 web application to access %APPDATA% (e.g. C:\Users\MyUsername\AppData\Roaming on Windows 7) because I store configuration files there. Therefore I created an application pool in IIS with the identity of the user "MyUsername", created that user's profile by logging in with the account, and turned on the option "Load User Profile" (was true by default anyway). Impersonation is turned off.
Now I have the problem that %APPDATA% (in C#):
appdataDir = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData)
resolves to c:\windows\system32\inetsrv instead of C:\Users\MyUsername\AppData\Roaming.
UPDATE: More precisely, the above C# code returns an empty string, so Path.GetFullPath(Path.Combine(appdataDir, "MyAppName")) prepends the current directory to my application name, resulting in c:\windows\system32\inetsrv\MyAppName.
I know I made this work before with the same web application on Windows Server 2008 R2, and now I'm getting this problem with the same major IIS version (7.5) on my Windows 7 machine.
I used the same procedure as before: Created a new user, logged in as that user to create the profile and APPDATA directories, then added the application pool with this identity and finally added the web application to this pool.
Any ideas?
Open your %WINDIR%\System32\inetsrv\config\applicationHost.config and look for <applicationPoolDefaults>. Under <processModel>, make sure you don't have setProfileEnvironment="false". If you do, set it to true.
Application Pools - your application pool - Advanced Settings...
Process Model - set Load User Profile to True.
This helped me.
Taken from https://blogs.msdn.microsoft.com/vijaysk/2009/03/08/iis-7-tip-3-you-can-now-load-the-user-profile-of-the-application-pool-identity/
I experienced the same problem recently. As mentioned by Amit, the problem is that the user profile isn't loaded. The setting applies to all application pools and lives in applicationHost.config (typically C:\Windows\System32\inetsrv\config\applicationHost.config). If you update the applicationPoolDefaults element as follows, it will work:
<applicationPoolDefaults managedRuntimeVersion="v4.0">
<processModel identityType="ApplicationPoolIdentity" loadUserProfile="true" setProfileEnvironment="true" />
</applicationPoolDefaults>
We've tried this with IIS 7.5 and taken it through to production without problems.
You can automate this if you want:
appcmd set config -section:system.applicationHost/applicationPools /applicationPoolDefaults.processModel.setProfileEnvironment:"true" /commit:apphost
Or, if you prefer PowerShell:
Set-WebConfigurationProperty "/system.applicationHost/applicationPools/applicationPoolDefaults/processModel" -PSPath IIS:\ -Name "setProfileEnvironment" -Value "true"
Hope this helps
I am experiencing the same problem. Have you by chance installed the Visual Studio 11 beta? I did recently, and I've noticed a couple of differences in how its .NET 4.0-compatible DLLs work with our code. I'm still trying to track down the problem for certain, but I didn't have this problem before that.
Edit:
After comparing the decompiled sources from 4.0 and 4.5 for GetFolderPath (and related), there are differences. Whether they are the source of the problem...I'm not sure yet.
Edit 2: Here are the relevant changes. I'm working on trying both to see if I get different results. [code removed]
Edit 3:
I've now tried calling SHGetFolderPath directly, which is what the .NET Framework ends up doing anyway. It returns E_ACCESSDENIED (-2147024891 / 0x80070005). I don't know what has changed such that I get this in some specific cases but not in others.
Edit 4:
Since you're getting an empty string, you may want to switch your code to use SHGetFolderPath so you can get the HRESULT and at least know exactly what is happening.
void Main() {
    Console.WriteLine( GetFolderPath( Environment.SpecialFolder.ApplicationData ) );
}

[System.Runtime.InteropServices.DllImport("shell32.dll")]
static extern int SHGetFolderPath(IntPtr hwndOwner, int nFolder, IntPtr hToken, uint dwFlags, StringBuilder pszPath);

private string GetFolderPath( Environment.SpecialFolder folder ) {
    var path = new StringBuilder( 260 ); // MAX_PATH
    var hresult = SHGetFolderPath( IntPtr.Zero, (int) folder, IntPtr.Zero, 0, path );
    Console.WriteLine( hresult.ToString( "X" ) ); // print the HRESULT, e.g. 80070005 = E_ACCESSDENIED
    return path.ToString();
}
The problem is with your IIS settings. The answer is here: Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData) returns String.Empty