How to sign a message with TPM2? - tpm

From MethodA():
First I created the template, then the signing key. Then I saved the context with ContextSave() and marshalled it to a file.
From MethodB():
I unmarshalled the file and did the ContextLoad(); here it fails with an integrity-check error. What did I do wrong?
I created the signing key like this:
var keyTemplate = new TpmPublic(TpmAlgId.Sha1, // Name algorithm
ObjectAttr.UserWithAuth | ObjectAttr.Sign | // Signing key
ObjectAttr.FixedParent | ObjectAttr.FixedTPM | // Non-migratable
ObjectAttr.SensitiveDataOrigin,
null, // No policy
new RsaParms(new SymDefObject(),
new SchemeRsassa(TpmAlgId.Sha1), 2048, 0),
new Tpm2bPublicKeyRsa());
TpmHandle keyHandle = tpm[ownerAuth].CreatePrimary(
TpmRh.Owner, // In the owner-hierarchy
new SensitiveCreate(keyAuth, null), // With this auth-value
keyTemplate, // Describes key
null, // Extra data for creation ticket
new PcrSelection[0], // Non-PCR-bound
out keyPublic, // PubKey and attributes
out creationData, out creationHash, out creationTicket); // Not used here
EDIT 1:
MethodA();
public static void MethodA()
{
try
{
Tpm2Device tpmDevice = new TcpTpmDevice(tpm_host, tpm_port);
//Tpm2Device tpmDevice = new TbsDevice();
tpmDevice.Connect();
var tpm = new Tpm2(tpmDevice);
if (tpmDevice is TcpTpmDevice)
{
tpmDevice.PowerCycle();
tpm.Startup(Su.Clear);
}
//
// The TPM needs a template that describes the parameters of the key
// or other object to be created. The template below instructs the TPM
// to create a new 2048-bit non-migratable signing key.
//
var keyTemplate = new TpmPublic(TpmAlgId.Sha1, // Name algorithm
ObjectAttr.UserWithAuth | ObjectAttr.Sign | // Signing key
ObjectAttr.FixedParent | ObjectAttr.FixedTPM | // Non-migratable
ObjectAttr.SensitiveDataOrigin,
null, // No policy
new RsaParms(new SymDefObject(),
new SchemeRsassa(TpmAlgId.Sha1), 2048, 0),
new Tpm2bPublicKeyRsa());
//
// AuthValue encapsulates an authorization value: essentially a byte-array.
// OwnerAuth is the owner authorization value of the TPM-under-test. We
// assume that it (and other) auths are set to the default (null) value.
// If running on a real TPM, which has been provisioned by Windows, this
// value will be different. An administrator can retrieve the owner
// authorization value from the registry.
//
//var ownerAuth = new AuthValue();
//
// Authorization for the key we are about to create.
//
var keyAuth = new byte[] { 1, 2, 3 };
TpmPublic keyPublic;
CreationData creationData;
TkCreation creationTicket;
byte[] creationHash;
//
// Ask the TPM to create a new primary RSA signing key.
//
TpmHandle keyHandle = tpm[ownerAuth].CreatePrimary(
TpmRh.Owner, // In the owner-hierarchy
new SensitiveCreate(keyAuth, null), // With this auth-value
keyTemplate, // Describes key
null, // Extra data for creation ticket
new PcrSelection[0], // Non-PCR-bound
out keyPublic, // PubKey and attributes
out creationData, out creationHash, out creationTicket); // Not used here
//
// Print out text-versions of the public key just created
//
//Console.WriteLine("New public key\n" + keyPublic.ToString());
Context ctx = tpm.ContextSave(keyHandle);
File.WriteAllBytes("key.bin", Marshaller.GetTpmRepresentation(ctx));
// Clean up.
tpm.FlushContext(keyHandle);
tpm.Dispose();
}
catch (Exception e)
{
Console.WriteLine("Exception occurred: {0}", e.Message);
}
}
MethodB():
public static void MethodB()
{
try
{
Tpm2Device tpmDevice = new TcpTpmDevice(tpm_host, tpm_port);
//Tpm2Device tpmDevice = new TbsDevice();
tpmDevice.Connect();
var tpm = new Tpm2(tpmDevice);
if (tpmDevice is TcpTpmDevice)
{
tpmDevice.PowerCycle();
tpm.Startup(Su.Clear);
}
Context ctx2 = Marshaller.FromTpmRepresentation<Context>(File.ReadAllBytes("key.bin"));
TpmHandle keyHandle = tpm.ContextLoad(ctx2); //integrity check fail

This code is present in both MethodA() and MethodB():
if (tpmDevice is TcpTpmDevice)
{
tpmDevice.PowerCycle();
tpm.Startup(Su.Clear);
}
This is a common pattern in TSS.MSR examples. It checks whether the TPM you're talking to is a simulated device, and if so power-cycles it and starts it up with cleared state (Startup(Su.Clear)), making sure you're starting with a clean slate. Doing this in MethodA() is fine, but by doing it in MethodB() as well you're basically undoing what you did in MethodA(): the power cycle resets the simulator and invalidates the context you saved earlier, so ContextLoad() fails its integrity check.
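A minimal sketch of MethodB() with that reset block removed; it reuses key.bin, tpm_host/tpm_port and the keyAuth value { 1, 2, 3 } from the question. The final Sign() call is an illustrative assumption: the exact name of the null hash-check helper (shown here as TpmHashCheck.Null()) varies between TSS.MSR versions.
public static void MethodB()
{
    // Connect without power-cycling: a power cycle resets the simulator and
    // invalidates any context saved before it.
    Tpm2Device tpmDevice = new TcpTpmDevice(tpm_host, tpm_port);
    tpmDevice.Connect();
    var tpm = new Tpm2(tpmDevice);

    // Reload the key context saved by MethodA().
    Context ctx = Marshaller.FromTpmRepresentation<Context>(File.ReadAllBytes("key.bin"));
    TpmHandle keyHandle = tpm.ContextLoad(ctx);

    // Sign a SHA-1 digest with the reloaded key, using the RSASSA scheme from the template.
    var keyAuth = new byte[] { 1, 2, 3 };
    byte[] digest = System.Security.Cryptography.SHA1.Create()
        .ComputeHash(Encoding.UTF8.GetBytes("message to sign"));
    // Assumption: helper name differs across TSS.MSR versions (TpmHashCheck.Null() here).
    var signature = tpm[keyAuth].Sign(keyHandle, digest,
        new SchemeRsassa(TpmAlgId.Sha1), TpmHashCheck.Null());

    tpm.FlushContext(keyHandle);
    tpm.Dispose();
}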

Related

Unable to get the PublishAssetURL in Azure Media Services

I am trying to upload an mp4 file from a controller into Azure blob storage. Right after the upload finishes, I create an asset from the same blob I just uploaded. Everything seems to work fine, but somehow I am unable to get the publishAssetURL.
var manifestFile = asset.AssetFiles.Where(x =>
x.Name.EndsWith(".ism")).FirstOrDefault();
The issue is on this line: manifestFile is coming back null.
public string CreateAssetFromExistingBlobs(CloudBlobContainer sourceBlobContainer, CloudStorageAccount _destinationStorageAccount, CloudMediaContext _context, AzureStorageMultipartFormDataStreamProvider provider )
{
CloudBlobClient destBlobStorage = _destinationStorageAccount.CreateCloudBlobClient();
// Create a new asset.
IAsset asset = _context.Assets.Create("NewAsset_" + Guid.NewGuid(), AssetCreationOptions.None);
IAccessPolicy writePolicy = _context.AccessPolicies.Create("writePolicy",
TimeSpan.FromHours(24), AccessPermissions.Write);
ILocator destinationLocator =
_context.Locators.CreateLocator(LocatorType.Sas, asset, writePolicy);
// Get the asset container URI and Blob copy from mediaContainer to assetContainer.
CloudBlobContainer destAssetContainer =
destBlobStorage.GetContainerReference((new Uri(destinationLocator.Path)).Segments[1]);
if (destAssetContainer.CreateIfNotExists())
{
destAssetContainer.SetPermissions(new BlobContainerPermissions
{
PublicAccess = BlobContainerPublicAccessType.Blob
});
}
var blob = sourceBlobContainer.GetBlockBlobReference(provider.FileData.FirstOrDefault().LocalFileName);
blob.FetchAttributes();
var assetFile = asset.AssetFiles.Create(blob.Name);
CopyBlob(blob, destAssetContainer);
assetFile.ContentFileSize = blob.Properties.Length;
assetFile.Update();
asset.Update();
destinationLocator.Delete();
writePolicy.Delete();
// Set the primary asset file.
// If, for example, we copied a set of Smooth Streaming files,
// set the .ism file to be the primary file.
// If we, for example, copied an .mp4, then the mp4 would be the primary file.
var ismAssetFile = asset.AssetFiles.ToList().
Where(f => f.Name.EndsWith(".mp4", StringComparison.OrdinalIgnoreCase)).ToArray().FirstOrDefault();
// The following code assigns the first .ism file as the primary file in the asset.
// An asset should have one .ism file.
if (ismAssetFile != null)
{
ismAssetFile.IsPrimary = true;
ismAssetFile.Update();
}
IAsset encodedAsset = EncodeToAdaptiveBitrateMP4Set(asset, _context);
return PublishAssetGetURLs(encodedAsset, _context);
}
private void CopyBlob(ICloudBlob sourceBlob, CloudBlobContainer destinationContainer)
{
var signature = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
Permissions = SharedAccessBlobPermissions.Read,
SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)
});
var destinationBlob = destinationContainer.GetBlockBlobReference(sourceBlob.Name);
if (destinationBlob.Exists())
{
Console.WriteLine(string.Format("Destination blob '{0}' already exists. Skipping.", destinationBlob.Uri));
}
else
{
// Display the size of the source blob.
Console.WriteLine(sourceBlob.Properties.Length);
Console.WriteLine(string.Format("Copy blob '{0}' to '{1}'", sourceBlob.Uri, destinationBlob.Uri));
destinationBlob.StartCopy(new Uri(sourceBlob.Uri.AbsoluteUri + signature));
while (true)
{
// The StartCopyFromBlob is an async operation,
// so we want to check if the copy operation is completed before proceeding.
// To do that, we call FetchAttributes on the blob and check the CopyStatus.
destinationBlob.FetchAttributes();
if (destinationBlob.CopyState.Status != CopyStatus.Pending)
{
break;
}
//It's still not completed. So wait for some time.
System.Threading.Thread.Sleep(1000);
}
// Display the size of the destination blob.
Console.WriteLine(destinationBlob.Properties.Length);
}
}
private IAsset EncodeToAdaptiveBitrateMP4Set(IAsset asset, CloudMediaContext _context)
{
// Declare a new job.
IJob job = _context.Jobs.Create("Media Encoder Standard Job");
// Get a media processor reference, and pass to it the name of the
// processor to use for the specific task.
IMediaProcessor processor = GetLatestMediaProcessorByName("Media Encoder Standard",_context);
// Create a task with the encoding details, using a string preset.
// In this case "Adaptive Streaming" preset is used.
ITask task = job.Tasks.AddNew("My encoding task",
processor,
"Adaptive Streaming",
TaskOptions.None);
// Specify the input asset to be encoded.
task.InputAssets.Add(asset);
// Add an output asset to contain the results of the job.
// This output is specified as AssetCreationOptions.None, which
// means the output asset is not encrypted.
task.OutputAssets.AddNew("Output asset",
AssetCreationOptions.None);
job.StateChanged += new EventHandler<JobStateChangedEventArgs>(JobStateChanged);
job.Submit();
job.GetExecutionProgressTask(CancellationToken.None).Wait();
return job.OutputMediaAssets[0];
}
public void JobStateChanged(object sender, JobStateChangedEventArgs e)
{
//Console.WriteLine("Job state changed event:");
//Console.WriteLine(" Previous state: " + e.PreviousState);
//Console.WriteLine(" Current state: " + e.CurrentState);
switch (e.CurrentState)
{
case JobState.Finished:
//Console.WriteLine();
//Console.WriteLine("Job is finished. Please wait while local tasks or downloads complete...");
break;
case JobState.Canceling:
case JobState.Queued:
case JobState.Scheduled:
case JobState.Processing:
//Console.WriteLine("Please wait...\n");
break;
case JobState.Canceled:
case JobState.Error:
// Cast sender as a job.
IJob job = (IJob)sender;
// Display or log error details as needed.
break;
default:
break;
}
}
private IMediaProcessor GetLatestMediaProcessorByName(string mediaProcessorName, CloudMediaContext _context)
{
var processor = _context.MediaProcessors.Where(p => p.Name == mediaProcessorName).
ToList().OrderBy(p => new Version(p.Version)).LastOrDefault();
if (processor == null)
throw new ArgumentException(string.Format("Unknown media processor: {0}", mediaProcessorName));
return processor;
}
private string PublishAssetGetURLs(IAsset asset, CloudMediaContext _context)
{
// Create a 30-day readonly access policy.
// You cannot create a streaming locator using an AccessPolicy that includes write or delete permissions.
IAccessPolicy policy = _context.AccessPolicies.Create("Streaming policy",
TimeSpan.FromDays(30),
AccessPermissions.Read);
// Create a locator to the streaming content on an origin.
ILocator originLocator = _context.Locators.CreateLocator(LocatorType.OnDemandOrigin, asset,
policy,
DateTime.UtcNow.AddMinutes(-5));
// Display some useful values based on the locator.
//Console.WriteLine("Streaming asset base path on origin: ");
//Console.WriteLine(originLocator.Path);
//Console.WriteLine();
// Get a reference to the streaming manifest file from the
// collection of files in the asset.
var manifestFile = asset.AssetFiles.Where(x => x.Name.EndsWith(".ism")).FirstOrDefault();
// Create a full URL to the manifest file. Use this for playback
// in streaming media clients.
string urlForClientStreaming = originLocator.Path + manifestFile.Name + "/manifest";
// Console.WriteLine("URL to manifest for client streaming using Smooth Streaming protocol: ");
// Console.WriteLine(urlForClientStreaming);
// Console.WriteLine("URL to manifest for client streaming using HLS protocol: ");
return urlForClientStreaming + "(format=m3u8-aapl)";
// Console.WriteLine("URL to manifest for client streaming using MPEG DASH protocol: ");
// Console.WriteLine(urlForClientStreaming + "(format=mpd-time-csf)");
// Console.WriteLine();
}
Checked our logs - the reason you are not getting the streaming URL is that the encode Job failed. At the end of EncodeToAdaptiveBitrateMP4Set(), you should confirm that the final Job status was Finished (i.e. successful). Looking at the encoder logs, it appears that the input file was corrupt.
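For example, one way to surface the failure is to check the job's final state at the end of EncodeToAdaptiveBitrateMP4Set() before returning the output asset. A sketch, assuming the same job variable as above; the ErrorDetails inspection relies on the error-detail properties exposed by the (older) Media Services .NET SDK:
job.GetExecutionProgressTask(CancellationToken.None).Wait();
// Verify the job actually finished before handing the output asset to the publish step.
if (job.State != JobState.Finished)
{
    foreach (ITask jobTask in job.Tasks)
        foreach (var error in jobTask.ErrorDetails)
            Console.WriteLine("Task '{0}' failed: {1} - {2}", jobTask.Name, error.Code, error.Message);
    throw new InvalidOperationException("Encoding job did not finish successfully.");
}
return job.OutputMediaAssets[0];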

OrganizationServiceProxy: No authentication error when a wrong password is set

I'm creating the OrganizationServiceProxy object in the following way:
[ThreadStatic]
public static OrganizationServiceProxy OrgServiceProxy;
// ...
sLog.DebugFormat("Get AuthenticationProviderType...");
AuthenticationProviderType _crmAuthType = this.GetServerType(parameters.DiscoveryUri);
sLog.DebugFormat("Get AuthenticationProviderType - DONE!");
// ...
sLog.Info("Perform metadata download (ServiceConfigurationFactory.CreateConfiguration)...");
IServiceConfiguration<IOrganizationService> _crmServiceConfiguration = ServiceConfigurationFactory.CreateConfiguration<IOrganizationService>(parameters.OrgServiceUri);
sLog.Info("Perform metadata download (ServiceConfigurationFactory.CreateConfiguration) - DONE");
// ...
// enable proxy types
var behavior = new ProxyTypesBehavior() as IEndpointBehavior;
behavior.ApplyClientBehavior(_crmServiceConfiguration.CurrentServiceEndpoint, null);
// ...
public OrganizationServiceProxy GetServiceProxy(ICRMConnectionParameters parameters)
{
// ...
ClientCredentials clientCreds = new ClientCredentials();
clientCreds.Windows.ClientCredential.UserName = parameters.UserName;
clientCreds.Windows.ClientCredential.Password = parameters.Password;
clientCreds.Windows.ClientCredential.Domain = parameters.Domain;
sLog.DebugFormat("Setup client proxy...");
OrgServiceProxy = new OrganizationServiceProxy(_crmServiceConfiguration, clientCreds);
sLog.DebugFormat("Setup client proxy - DONE.");
return OrgServiceProxy;
}
Just note here that AuthenticationProviderType and IServiceConfiguration are statically cached. The code above is part of a class named CRMConnection.
I have one more abstract class (ProxyUser) which contains the following property:
private CRMConnection conn;
// ...
protected OrganizationServiceProxy OrgServiceProxy
{
get
{
//return orgService;
return this.Conn.GetServiceProxy();
}
}
protected CRMConnection Conn
{
get
{
conn = conn ?? new CRMConnection();
return conn;
}
}
In another class that inherits ProxyUser, I have a method with the following code:
ColumnSet columnSet = new ColumnSet();
ConditionExpression condition1 = new ConditionExpression("new_id", ConditionOperator.NotNull);
FilterExpression filter = new FilterExpression(LogicalOperator.And);
filter.AddCondition(condition1);
QueryExpression query = new QueryExpression()
{
EntityName = new_brand.EntityLogicalName,
ColumnSet = columnSet,
Criteria = filter,
NoLock = true
};
EntityCollection res = OrgServiceProxy.RetrieveMultiple(query);
And now we come to the point :)
If I set up correct parameters - organization service URL, discovery service URL, username, password and domain - everything works as expected. BUT when a wrong password is set, the line below simply hangs; nothing happens at all.
EntityCollection res = OrgServiceProxy.RetrieveMultiple(query);
Of course, I'm expecting an authentication-failed error. Any suggestions on what I'm missing here?
Thanks in advance!
I solved this problem by adding the line below in the GetServiceProxy method, where the ClientCredentials are created:
clientCreds.SupportInteractive = false;
I figured this out after I moved the whole logic into a console app. When a wrong password is set and the app runs in debug mode, I get a Windows login prompt. Then I found this answer.
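In context, the credential setup inside GetServiceProxy() from the question then looks roughly like this (same parameters object and _crmServiceConfiguration as above):
ClientCredentials clientCreds = new ClientCredentials();
// Disable the interactive fallback so a wrong password surfaces as an
// authentication failure instead of a hidden Windows login prompt.
clientCreds.SupportInteractive = false;
clientCreds.Windows.ClientCredential.UserName = parameters.UserName;
clientCreds.Windows.ClientCredential.Password = parameters.Password;
clientCreds.Windows.ClientCredential.Domain = parameters.Domain;

OrgServiceProxy = new OrganizationServiceProxy(_crmServiceConfiguration, clientCreds);
return OrgServiceProxy;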

What do GenerateCorrelationId() and ValidateCorrelationId() do?

I see this code within a custom owin handler to do Oauth2. For example here: https://github.com/RockstarLabs/OwinOAuthProviders/blob/master/Owin.Security.Providers/Reddit/RedditAuthenticationHandler.cs
Can someone explain to me in plain English what these two methods do in the context of OAuth2? It seems to be related to CSRF, but I'm not sure how.
When a redirect to an "OAuth 2" partner occurs, there must be some way of correlating the eventual redirect back to your own application with the original redirect that you sent.
The way the Microsoft.Owin AuthenticationHandler accomplishes this:
generates a nonce of random bytes and retains it in a browser cookie
(GenerateCorrelationId)
encrypts this nonce and other information and your job is to pass this in a state query string parameter to the partner (recall that the partner's job is to return this value right back to your application after authenticating the user)
validates the nonce by decrypting the state query string parameter and verifying it matches the value in the cookie stored (ValidateCorrelationId)
Here is the source:
protected void GenerateCorrelationId(AuthenticationProperties properties)
{
if (properties == null)
{
throw new ArgumentNullException("properties");
}
string correlationKey = Constants.CorrelationPrefix +
BaseOptions.AuthenticationType;
var nonceBytes = new byte[32];
Random.GetBytes(nonceBytes);
string correlationId = TextEncodings.Base64Url.Encode(nonceBytes);
var cookieOptions = new CookieOptions
{
HttpOnly = true,
Secure = Request.IsSecure
};
properties.Dictionary[correlationKey] = correlationId;
Response.Cookies.Append(correlationKey, correlationId, cookieOptions);
}
protected bool ValidateCorrelationId(AuthenticationProperties properties,
ILogger logger)
{
if (properties == null)
{
throw new ArgumentNullException("properties");
}
string correlationKey = Constants.CorrelationPrefix +
BaseOptions.AuthenticationType;
string correlationCookie = Request.Cookies[correlationKey];
if (string.IsNullOrWhiteSpace(correlationCookie))
{
logger.WriteWarning("{0} cookie not found.", correlationKey);
return false;
}
var cookieOptions = new CookieOptions
{
HttpOnly = true,
Secure = Request.IsSecure
};
Response.Cookies.Delete(correlationKey, cookieOptions);
string correlationExtra;
if (!properties.Dictionary.TryGetValue(
correlationKey,
out correlationExtra))
{
logger.WriteWarning("{0} state property not found.", correlationKey);
return false;
}
properties.Dictionary.Remove(correlationKey);
if (!string.Equals(correlationCookie, correlationExtra, StringComparison.Ordinal))
{
logger.WriteWarning("{0} correlation cookie and state property mismatch.",
correlationKey);
return false;
}
return true;
}
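For context, an OWIN OAuth handler typically calls the pair roughly like this. The sketch below is modelled on the linked RedditAuthenticationHandler but is not its exact code; the authorization endpoint URL, the Options properties used and the _logger field are illustrative assumptions.
protected override Task ApplyResponseChallengeAsync()
{
    // Outbound leg: before redirecting to the provider, stamp a fresh nonce
    // into a cookie and into the protected state that round-trips via the provider.
    var properties = new AuthenticationProperties { RedirectUri = Request.Uri.ToString() };
    GenerateCorrelationId(properties);
    string state = Options.StateDataFormat.Protect(properties);
    string authorizationEndpoint =
        "https://provider.example/authorize" +                      // hypothetical endpoint
        "?response_type=code" +
        "&client_id=" + Uri.EscapeDataString(Options.ClientId) +
        "&state=" + Uri.EscapeDataString(state);
    Response.Redirect(authorizationEndpoint);
    return Task.FromResult<object>(null);
}
protected override async Task<AuthenticationTicket> AuthenticateCoreAsync()
{
    // Inbound leg: unprotect the returned state and require its nonce to match the cookie.
    string state = Request.Query["state"];
    AuthenticationProperties properties = Options.StateDataFormat.Unprotect(state);
    if (properties == null || !ValidateCorrelationId(properties, _logger))
    {
        // Missing or mismatched nonce: treat the callback as forged (possible CSRF).
        return new AuthenticationTicket(null, properties);
    }
    // ... await the authorization-code/token exchange and build the identity here ...
    var identity = new ClaimsIdentity(Options.AuthenticationType);
    return new AuthenticationTicket(identity, properties);
}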

ServiceStack RabbitMQ: Infinite loop fills up dead-letter queue when RabbitMqProducer cannot redeclare temporary queue in RPC pattern

When I declare a temporary reply queue to be exclusive (e.g. anonymous queue (exclusive=true, autodelete=true) in rpc-pattern), the response message cannot be posted to the specified reply queue (e.g. message.replyTo="amq.gen-Jg_tv8QYxtEQhq0tF30vAA") because RabbitMqProducer.PublishMessage() tries to redeclare the queue with different parameters (exclusive=false), which understandably results in an error.
Unfortunately, the erroneous call to channel.RegisterQueue(queueName) in RabbitMqProducer.PublishMessage() seems to nack the request message in the incoming queue, so that when ServiceStack.Messaging.MessageHandler.DefaultInExceptionHandler tries to acknowledge the request message (to remove it from the incoming queue), the message just stays on top of the incoming queue and gets processed all over again. This procedure repeats indefinitely and results in one dlq-message per iteration, which slowly fills up the dlq.
I am wondering
whether ServiceStack correctly handles the case where ServiceStack.RabbitMq.RabbitMqProducer cannot declare the response queue
whether ServiceStack.RabbitMq.RabbitMqProducer must always declare the response queue before publishing the response
whether it wouldn't be best to have a configuration flag that omits all exchange and queue declaration calls (outside of the first initialization); the RabbitMqProducer would then just assume every queue/exchange is properly set up and simply publish the message
(At the moment our client just declares its response queue to be exclusive=false and everything works fine. But I'd really like to use rabbitmq's built-in temporary queues.)
MQ-Client Code, requires simple "SayHello" service:
const string INQ_QUEUE_NAME = "mq:SayHello.inq";
const string EXCHANGE_NAME="mx.servicestack";
var factory = new ConnectionFactory() { HostName = "192.168.179.110" };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
// Create temporary queue and setup bindings
// this works (because "mq:tmp:" stops RabbitMqProducer from redeclaring response queue)
string responseQueueName = "mq:tmp:SayHello_" + Guid.NewGuid().ToString() + ".inq";
channel.QueueDeclare(responseQueueName, false, false, true, null);
// this does NOT work (RabbitMqProducer tries to declare queue again => error):
//string responseQueueName = Guid.NewGuid().ToString() + ".inq";
//channel.QueueDeclare(responseQueueName, false, false, true, null);
// this does NOT work either (RabbitMqProducer tries to declare queue again => error)
//var responseQueueName = channel.QueueDeclare().QueueName;
// publish simple SayHello-Request to standard servicestack exchange ("mx.servicestack") with routing key "mq:SayHello.inq":
var props = channel.CreateBasicProperties();
props.ReplyTo = responseQueueName;
channel.BasicPublish(EXCHANGE_NAME, INQ_QUEUE_NAME, props, Encoding.UTF8.GetBytes("{\"ToName\": \"Chris\"}"));
// consume response from response queue
var consumer = new QueueingBasicConsumer(channel);
channel.BasicConsume(responseQueueName, true, consumer);
var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
// print result: should be "Hello, Chris!"
Console.WriteLine(Encoding.UTF8.GetString(ea.Body));
}
}
Everything seems to work fine when RabbitMqProducer does not try to declare the queues, like this:
public void PublishMessage(string exchange, string routingKey, IBasicProperties basicProperties, byte[] body)
{
const bool MustDeclareQueue = false; // new config parameter??
try
{
if (MustDeclareQueue && !Queues.Contains(routingKey))
{
Channel.RegisterQueueByName(routingKey);
Queues = new HashSet<string>(Queues) { routingKey };
}
Channel.BasicPublish(exchange, routingKey, basicProperties, body);
}
catch (OperationInterruptedException ex)
{
if (ex.Is404())
{
Channel.RegisterExchangeByName(exchange);
Channel.BasicPublish(exchange, routingKey, basicProperties, body);
}
throw;
}
}
The issue was addressed in ServiceStack version v4.0.32 (fixed in this commit).
The RabbitMqProducer no longer tries to redeclare temporary queues and instead assumes that the reply queue already exists (which solves my problem).
(The underlying cause of the infinite loop (wrong error handling while publishing the response message) probably still exists.)
Edit: Example
The following basic MQ client (which does not use the ServiceStack MQ client and instead depends directly on RabbitMQ's .NET library; it uses ServiceStack.Text for serialization, though) can perform generic RPCs:
public class MqClient : IDisposable
{
ConnectionFactory factory = new ConnectionFactory()
{
HostName = "192.168.97.201",
UserName = "guest",
Password = "guest",
//VirtualHost = "test",
Port = AmqpTcpEndpoint.UseDefaultPort,
};
private IConnection connection;
private string exchangeName;
public MqClient(string defaultExchange)
{
this.exchangeName = defaultExchange;
this.connection = factory.CreateConnection();
}
public TResponse RpcCall<TResponse>(IReturn<TResponse> reqDto, string exchange = null)
{
using (var channel = connection.CreateModel())
{
string inq_queue_name = string.Format("mq:{0}.inq", reqDto.GetType().Name);
string responseQueueName = channel.QueueDeclare().QueueName;
var props = channel.CreateBasicProperties();
props.ReplyTo = responseQueueName;
var message = ServiceStack.Text.JsonSerializer.SerializeToString(reqDto);
channel.BasicPublish(exchange ?? this.exchangeName, inq_queue_name, props, UTF8Encoding.UTF8.GetBytes(message));
var consumer = new QueueingBasicConsumer(channel);
channel.BasicConsume(responseQueueName, true, consumer);
var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
//channel.BasicAck(ea.DeliveryTag, false);
string response = UTF8Encoding.UTF8.GetString(ea.Body);
string responseType = ea.BasicProperties.Type;
Console.WriteLine(" [x] New Message of Type '{1}' Received:{2}{0}", response, responseType, Environment.NewLine);
return ServiceStack.Text.JsonSerializer.DeserializeFromString<TResponse>(response);
}
}
~MqClient()
{
this.Dispose();
}
public void Dispose()
{
if (connection != null)
{
this.connection.Dispose();
this.connection = null;
}
}
}
Key points:
client declares an anonymous queue (i.e. with an empty queue name) via channel.QueueDeclare()
server generates the queue and returns its name (amq.gen-*)
client adds queue name to message properties (props.ReplyTo = responseQueueName;)
ServiceStack automatically sends response to temporary queue
client picks up response and deserializes
It can be used like that:
using (var mqClient = new MqClient("mx.servicestack"))
{
var pingResponse = mqClient.RpcCall<PingResponse>(new Ping { });
}
Important: you've got to use ServiceStack version 4.0.32+.

How to generate random bytes via Cryptographic Service Provider (CSP) without .NET/COM?

Is there a way to generate strong random bytes via Microsoft's Cryptographic Service Provider (CSP) without using .NET/COM? For example, using command line or some other way?
I'd like to use it in NodeJS to be more specific.
Refer to http://technet.microsoft.com/en-us/library/cc733055(v=ws.10).aspx
netsh nap client set csp name = <name> keylength = <keylength>
If this command works for you, just exec it through nodejs. (require('child_process').exec)
Yes, using the Windows API. Here is sample C++ code:
#include <windows.h>
#include <wincrypt.h>
// ==========================================================================
HCRYPTPROV hCryptProv= NULL; // handle for a cryptographic provider context
// ==========================================================================
void DoneCrypt()
{
::CryptReleaseContext(hCryptProv, 0);
hCryptProv= NULL;
}
// --------------------------------------------------------------------------
// acquire crypto context and a key container
bool InitCrypt()
{
if (hCryptProv) // already initialized
return true;
if (::CryptAcquireContext(&hCryptProv , // handle to the CSP
NULL , // container name
NULL , // use the default provider
PROV_RSA_FULL , // provider type
CRYPT_VERIFYCONTEXT )) // flag values
{
atexit(DoneCrypt);
return true;
}
REPORT(REP_ERROR, _T("CryptAcquireContext failed"));
return false;
}
// --------------------------------------------------------------------------
// fill buffer with random data
bool RandomBuf(BYTE* pBuf, size_t nLen)
{
if (!hCryptProv)
if (!InitCrypt())
return false;
size_t nIndex= 0;
while (nLen-nIndex)
{
DWORD nCount= (nLen-nIndex > (DWORD)-1) ? (DWORD)-1 : (DWORD)(nLen-nIndex);
if (!::CryptGenRandom(hCryptProv, nCount, &pBuf[nIndex]))
{
REPORT(REP_ERROR, _T("CryptGenRandom failed"));
return false;
}
nIndex+= nCount;
}
return true;
}
