We're building out an API service for a Node application that's running on Google App Engine. Currently, I have set up Passport to use the 'passport-http-bearer' strategy to handle browserless HTTP requests to our API. This takes a token from the Authorization header of the request and uses it to authenticate the request.
We're also building an on-prem Python program that will request a token from Google, which we will send to the Node app to make an API call. From what I've seen around the web, the best way to do this seems to be a service account associated with the GCP project. Unfortunately, all the tutorials I've found use the service account credentials to make authorized calls to Google APIs; I want to use the service account credentials to make authorized calls to our application's API. My problem is that I can't find any code that takes the bearer token from the request and checks it against the service account to say either "yes, this was generated from the right account" or "no, this request should be rejected". Any insight into how to bridge this gap would be very helpful. Currently my (initial, very poor) bearer strategy is:
passport.use(new BearerStrategy((token, done) => {
  console.log('Bearer called with token: ', token);
  if (token === '<Fake test token for SO>') {
    console.log('  valid token!');
    return done(null, { name: 'api_service' });
  }
  console.log('  invalid token...');
  return done(null, false);
}));
We ended up using an HTTPS request directly to the Google auth endpoint. Here's the code:
// Bearer token strategy for headless requests. This is used to authenticate API calls.
passport.use(new BearerStrategy((token, done) => {
  // Form the request to hit the Google auth endpoint
  const options = {
    host: 'www.googleapis.com',
    path: `/oauth2/v1/tokeninfo?access_token=${token}`,
    headers: {
      Accept: 'application/json'
    }
  };
  // Ask the Google endpoint whether the token carries the service account's email
  https.get(options, (res) => {
    res.setEncoding('utf8');
    let body = '';
    // The response may arrive in several chunks; collect them all before parsing
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      let info;
      try {
        info = JSON.parse(body);
      } catch (parseErr) {
        return done(parseErr, false);
      }
      if (info.email === config.get('SVCACCT_NAME')) {
        // Good request from the service account
        return done(null, { name: 'api_service' });
      }
      // Not the service account
      return done(null, false);
    });
  }).on('error', (err) => {
    console.log('Got API auth error: ', err.message);
    // Error or bad token. Either way, reject it
    return done(err, false);
  });
}));
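Since the question mentions an on-prem Python program, the same tokeninfo check can be sketched in Python with the standard library alone. This is an illustrative port, not production code: the `fetch_json` parameter is injectable so the logic can be exercised without network access, and `expected_email` plays the role of `SVCACCT_NAME` above.

```python
# Stdlib-only sketch mirroring the Node strategy above: ask Google's
# tokeninfo endpoint who the token belongs to, and accept it only if the
# email matches the expected service account.
import json
import urllib.request


def default_fetch(url: str) -> dict:
    # Real lookup; needs network access.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def token_is_from_account(token: str, expected_email: str,
                          fetch_json=default_fetch) -> bool:
    # Same endpoint the Node strategy hits.
    url = "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=" + token
    try:
        info = fetch_json(url)
    except Exception:
        # Network error or bad token: reject either way.
        return False
    return info.get("email") == expected_email
```

Injecting the fetch function also makes it easy to unit-test the accept/reject decision without real tokens.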
We used a shell script with the service account JSON file from the project console to generate the token for testing purposes. (This won't run on a Mac as-is, since it relies on GNU date -d and base64 -w; I had to use a Docker container with jq installed.) Later we'll translate this to Python:
#!/bin/bash
if [ -z "${1}" ]; then
  PROG=$( basename $0 )
  echo "usage: ${PROG} <JSON account file>"
  exit 1
fi
keyfile="${1}"
client_email=$( jq -r '.client_email' $keyfile )
if [ -z "${client_email}" ]; then
  echo "JSON file does not appear to be valid"
  exit 2
fi
private_key=$( jq '.private_key' $keyfile | tr -d '"' )
if [ -z "${private_key}" ]; then
  echo "JSON file does not appear to be valid"
  exit 3
fi
keyfile=$( mktemp -p . privkeyXXXXX )
echo -e $private_key > $keyfile
now=$( date "+%s" )
later=$( date -d '+30 min' "+%s" )
header=$( echo -n "{\"alg\":\"RS256\",\"typ\":\"JWT\"}" | base64 -w 0 )
claim=$( echo -n "{ \"iss\":\"${client_email}\", \"scope\":\"email profile\", \"aud\":\"https://www.googleapis.com/oauth2/v4/token\", \"exp\":${later}, \"iat\":${now} }" | base64 -w 0 )
data="${header}.${claim}"
sig=$( echo -n $data | openssl dgst -sha256 -sign $keyfile -keyform PEM -binary | base64 -w 0 )
rm -f $keyfile
stuff=$( echo "${header}.${claim}.${sig}" | sed 's!\/!%2F!g' | sed 's/=/%3D/g' | sed 's/\+/%2B/g' )
curl -d "grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=${stuff}" https://www.googleapis.com/oauth2/v4/token
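Since we planned to translate this to Python anyway, here is a sketch of the unsigned part of the same JWT assembly (header and claim set). Names are illustrative; the RS256 signature over the returned string would still be produced with the service account's private key (via openssl or a JWT library) before POSTing the assertion. Note this uses the URL-safe base64 alphabet the JWT spec expects, which also removes the need for the sed escaping in the shell version.

```python
# Build the unsigned header.claim portion of the service-account JWT,
# mirroring the shell script above.
import base64
import json


def b64url(data: bytes) -> str:
    # JWT base64url: URL-safe alphabet, padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def build_assertion(client_email: str, now: int) -> str:
    header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
    claim = b64url(json.dumps({
        "iss": client_email,
        "scope": "email profile",
        "aud": "https://www.googleapis.com/oauth2/v4/token",
        "iat": now,
        "exp": now + 1800,  # 30 minutes, as in the shell script
    }).encode())
    # The full assertion is header.claim.signature, where the signature is
    # an RS256 signature over this returned string.
    return header + "." + claim
```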
Google offers the google.oauth2.id_token module to help verify tokens.
verify_oauth2_token can be used to check a Google-issued ID token:
verify_oauth2_token(id_token, request, audience=None)
Verifies an ID token issued by Google's OAuth 2.0 authorization server.
[ ... ]
Returns: The decoded token.
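The "decoded token" the docs mention is just the JWT claim set. A stdlib sketch to peek at those claims looks like this; note it performs no signature, expiry, or audience verification, so in production the verify_oauth2_token call above should do the checking.

```python
# Illustration only: extract the claim set from a JWT without verifying it.
import base64
import json


def decode_claims(id_token: str) -> dict:
    # A JWT is header.payload.signature; the claims live in the payload.
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))
```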
I am working on an application with a Ruby on Rails back-end and an EmberJS front-end.
I would like to achieve the following:
Log the user in with MSAL in the Ember front-end
Get the auth_code and pass it on to the back-end
From the back-end, fetch the access token and refresh token
Use them to send email via the Azure Graph API
Basically, I would like to perform step 1 of Azure AD auth, i.e. fetching the auth_code, on the front-end, then perform step 2 on the back-end, and then refresh the tokens on the back-end as needed.
I believe MSAL provides no good way to achieve this. When the user consents (step 1), it uses the auth_code by itself to fetch the access_token and refresh_token (step 2) and caches them without exposing the values. It invokes the acquireTokenSilent method to refresh the tokens when needed.
I can see that MSAL also has an ssoSilent method, which performs only step 1 of auth and returns the auth code. I tried to use it in the following manner:
signIn = () => {
  myMSALObj
    .loginPopup(loginRequest)
    .then(this.handleResponse)
    .catch((error) => {
      console.error(error);
    });
};

handleResponse = (resp) => {
  console.log('>>>> Response', resp);
  // this.token = resp;
  if (resp !== null) {
    username = resp.account.username;
    // showWelcomeMessage(resp.account);
    resp.account = myMSALObj.getAccountByUsername(username);
    console.log('Resp => ', resp);
    myMSALObj
      .ssoSilent(resp)
      .then((r) => {
        console.log(r);
      })
      .catch((e) => {
        console.log('Error ->', e);
      });
  } else {
    // loadPage();
  }
};
This always ends up in the following error:
InteractionRequiredAuthError: interaction_required: Silent authentication was denied. The user must first sign in and if needed grant the client application access to the scope ...
This happens even when the user has just consented to these scopes.
Is there something I am missing, or is there a better way to achieve this functionality?
Thanks in advance for any help. I am using Rails 6 with Ember 4.1.
I am making a module in Node.js where the user can make their own livestream bot for YouTube. The module requires the user to provide the client ID and client secret of their application. But how do I check whether the client ID and client secret they entered are valid, and if they aren't, throw an appropriate error? Is there an endpoint to do that?
There is no way of validating the client ID or the client secret. The only way is to try them and, if they don't work, display an error to the user.
You could maybe come up with a regex test for the client_id, but I wouldn't recommend it, as the format has changed several times over the years, which would cause problems in your application.
I would just say: try to use it, and if it fails, give them an error message.
Note: I hope you are not prompting people to give you their client ID and client secret, as it's against Google's TOS for people to share their client credentials. If you are encouraging people to give you their credentials, you are telling them to break the TOS.
Valid Code:
function ProjectValid($PostArray){
    // Redirect path registered for the OAuth client; adjust as needed
    $URL = '';
    $curl = curl_init();
    curl_setopt_array($curl, array(
        CURLOPT_URL => 'https://oauth2.googleapis.com/token',
        CURLOPT_POST => true,
        CURLOPT_CUSTOMREQUEST => "POST",
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT => 30,
        CURLOPT_HTTPHEADER => [
            'Content-Type: application/x-www-form-urlencoded',
        ],
        CURLOPT_POSTFIELDS => http_build_query([
            // Deliberately bogus code: a valid client pair answers 'invalid_grant'
            'code' => md5(time()),
            'client_id' => $PostArray['ClientID'],
            'client_secret' => $PostArray['ClientSecret'],
            'redirect_uri' => $_SERVER['REQUEST_SCHEME'].'://'.$_SERVER["HTTP_HOST"].'/'.$URL,
            'grant_type' => 'authorization_code'
        ]),
    ));
    $Response = curl_exec($curl);
    curl_close($curl);
    if($Array = json_decode($Response, true) and isset($Array['error']) and $Array['error'] == 'invalid_grant'){
        // Google rejected only the grant, so the client ID/secret pair is valid
        return true;
    }else{
        return false;
    }
}
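The same "probe with a bogus code" trick translates to other stacks. Here is a hypothetical Python version (function and parameter names are invented, and the HTTP call is injectable so the decision logic can be exercised without real credentials): a valid client_id/client_secret pair makes Google reject only the grant (invalid_grant), while bad credentials yield a different error such as invalid_client.

```python
# Hypothetical Python port of the PHP probe above.
import json
import urllib.error
import urllib.parse
import urllib.request


def default_post(url: str, fields: dict) -> dict:
    # POST form fields and return the JSON body, even for HTTP error statuses.
    data = urllib.parse.urlencode(fields).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as e:
        return json.load(e)


def client_pair_is_valid(client_id: str, client_secret: str,
                         post=default_post) -> bool:
    resp = post("https://oauth2.googleapis.com/token", {
        "code": "bogus",  # never a real grant; we only care how it fails
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": "http://localhost/",
        "grant_type": "authorization_code",
    })
    return resp.get("error") == "invalid_grant"
```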
I have a folder with a list of files in my storage account and have been trying to delete one of the files using a pipeline. In order to get that done I have used "Web" in the pipeline and copied the blob storage URL and access keys.
I tried using the access keys directly under Headers | Authorization. I also tried the Shared Key scheme described at https://learn.microsoft.com/en-us/azure/storage/common/storage-rest-api-auth#creating-the-authorization-header
I even tried getting this to work with curl, but it returned an authentication error every time I ran it:
# Delete a blob in an Azure storage container.
echo "usage: ${0##*/} <storage-account-name> <container-name> <access-key>"
storage_account="$1"
container_name="$2"
access_key="$3"
blob_store_url="blob.core.windows.net"
authorization="SharedKey"
request_method="DELETE"
request_date=$(TZ=GMT LC_ALL=en_US.utf8 date "+%a, %d %h %Y %H:%M:%S %Z")
#request_date="Mon, 18 Apr 2016 05:16:09 GMT"
storage_service_version="2018-03-28"
# HTTP Request headers
x_ms_date_h="x-ms-date:$request_date"
x_ms_version_h="x-ms-version:$storage_service_version"
# Build the signature string
canonicalized_headers="${x_ms_date_h}\n${x_ms_version_h}"
canonicalized_resource="/${storage_account}/${container_name}"
string_to_sign="${request_method}\n\n\n\n\n\n\n\n\n\n\n\n${canonicalized_headers}\n${canonicalized_resource}\ncomp:list\nrestype:container"
# Decode the Base64 encoded access key, convert to Hex.
decoded_hex_key="$(echo -n $access_key | base64 -d -w0 | xxd -p -c256)"
# Create the HMAC signature for the Authorization header
signature=$(printf "$string_to_sign" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$decoded_hex_key" -binary | base64 -w0)
authorization_header="Authorization: $authorization $storage_account:$signature"
curl \
  -H "$x_ms_date_h" \
  -H "$x_ms_version_h" \
  -H "$authorization_header" \
  -H "Content-Length: 0" \
  -X DELETE "https://${storage_account}.${blob_store_url}/${container_name}/myfile.csv_123"
The curl command returns an error:
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:XX
Time:2018-08-09T10:09:41.3394688Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'xxx' is not the same as any computed signature. Server used following string to sign: 'DELETE
You cannot authorize directly from Data Factory to the storage account API. I suggest that you use a Logic App. The Logic App has built-in support for Blob storage:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azureblobstorage
You can call the Logic App from the Data Factory Web activity. Using the body of the Data Factory request, you can pass variables to the Logic App, such as the blob path.
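For illustration, the Web activity body might pass the blob path as a small JSON payload (these property names are invented; the schema of your Logic App's HTTP trigger defines the real ones):

```json
{
    "containerName": "landing",
    "blobPath": "myfolder/myfile.csv"
}
```

The Logic App then feeds those values into its "Delete blob" connector action.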
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Rest;
using Microsoft.Azure.Management.ResourceManager;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.WindowsAzure.Storage;

namespace ClearLanding
{
    class Program
    {
        static void Main(string[] args)
        {
            CloudStorageAccount backupStorageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=yyy;AccountKey=xxx;EndpointSuffix=core.windows.net");
            var backupBlobClient = backupStorageAccount.CreateCloudBlobClient();
            var backupContainer = backupBlobClient.GetContainerReference("landing");
            var tgtBlobClient = backupStorageAccount.CreateCloudBlobClient();
            var tgtContainer = tgtBlobClient.GetContainerReference("backup");
            string[] folderNames = args[0].Split(new char[] { ',', ' ' }, StringSplitOptions.RemoveEmptyEntries);
            foreach (string folderName in folderNames)
            {
                var list = backupContainer.ListBlobs(prefix: folderName + "/", useFlatBlobListing: false);
                foreach (Microsoft.WindowsAzure.Storage.Blob.IListBlobItem item in list)
                {
                    if (item.GetType() == typeof(Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob))
                    {
                        Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob blob = (Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob)item;
                        if (!blob.Name.ToUpper().Contains("DO_NOT_DEL"))
                        {
                            var tgtBlob = tgtContainer.GetBlockBlobReference(blob.Name + "_" + DateTime.Now.ToString("yyyyMMddHHmmss"));
                            tgtBlob.StartCopy(blob);
                            blob.Delete();
                        }
                    }
                }
            }
        }
    }
}
I resolved this by compiling the above code and referencing it via a Custom activity in the pipeline. The code snippet above transfers files from the landing folder to a backup folder and deletes them from landing.
I'm trying to use a Service Principal to access a Batch pool from an Azure Function and am running into authentication issues that I don't understand. The initial login with the Service Principal works fine, but using the credentials it returns to access the Batch pool gives a 401.
Below is a condensed version of my code, with comments at the key points:
module.exports.dispatch = function (context) {
  MsRest.loginWithServicePrincipalSecret('AppId', 'Secret', 'TenantId', function (err, credentials) {
    if (err) throw err;
    // This works, as it prints the credentials
    context.log(credentials);
    var batch_client = new batch.ServiceClient(credentials, accountUrl);
    batch_client.pool.get('mycluster', function (error, result) {
      if (error === null) {
        context.log('Accessed pool');
        context.log(result);
      } else {
        // Request to the Batch service returns a 401
        if (error.statusCode === 404) {
          context.log('Pool not found yet returned 404...');
        } else {
          context.log('Error occurred while retrieving pool data');
          context.log(error);
        }
        // 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly.'
        context.res = { body: error.body.message.value };
        context.done();
      }
    });
  });
};
How can the initial login with a service principal work with no problem, but the credentials it returns not be able to access the Batch pool?
The actual error says to check the Authorization header on the request, which I can see, and the Authorization header isn't even present.
I've triple-checked the Active Directory access control for the Batch account; the app ID and secret are the ones belonging to the owner of the Batch account. Any ideas what to try next?
The credentials expected by the Azure Batch npm client aren't the Azure Active Directory credentials/token, but the keys for the batch account. You can list your keys using the Azure CLI with a command like the following:
az batch account keys list -g "<resource-group-name>" -n "<batch-account-name>"
Then you can create the credentials parameter with those keys:
var credentials = new batch.SharedKeyCredentials('your-account-name', 'your-account-key');
You could still involve a Service Principal here if you wanted to store your Batch keys in something like Key Vault, but then your code would:
Use Service Principal auth against Key Vault to fetch the account name and key
Use the name and key to create the credentials
You cannot use the same OAuth token returned from the Azure Resource Management endpoint with Batch. Assuming your service principal has the correct RBAC permissions, auth with the Azure Batch endpoint: https://batch.core.windows.net/ instead (assuming you are using Public Azure).
You do not need to get the shared key credentials for the Batch account, credentials via AAD should be used instead if you are using an AAD service principal.
I happened to run across this same issue and I didn't have the option of using SharedKeyCredentials so I wanted to share my solution in case anyone else finds it helpful.
As fpark mentions, we need to get an OAuth token to use with Batch instead of the default Azure Resource Management. Below is the original code posted by Mark with the minor modification needed to make it work with Batch:
module.exports.dispatch = function (context) {
  let authOptions = { tokenAudience: 'batch' };
  MsRest.loginWithServicePrincipalSecret('AppId', 'Secret', 'TenantId', authOptions, function (err, credentials) {
    if (err) throw err;
    // This works, as it prints the credentials
    context.log(credentials);
    var batch_client = new batch.ServiceClient(credentials, accountUrl);
    batch_client.pool.get('mycluster', function (error, result) {
      if (error === null) {
        context.log('Accessed pool');
        context.log(result);
      } else {
        // Request to the Batch service returns a 401
        if (error.statusCode === 404) {
          context.log('Pool not found yet returned 404...');
        } else {
          context.log('Error occurred while retrieving pool data');
          context.log(error);
        }
        // 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly.'
        context.res = { body: error.body.message.value };
        context.done();
      }
    });
  });
};
I have had great success with the Microsoft Graph API for accessing users etc. within Azure Active Directory; however, two things that still require EWS and SOAP are retrieving user photos and adding a mail rule to a user's mail account.
I'm using service accounts for everything, and impersonating an account admin to make requests.
After attempting to use the same access token that I am using against the Graph API, I receive the error:
The access token is acquired using an authentication method that is too weak to allow access for this application. Presented auth strength was 1, required is 2.
Reading around, I understand that because EWS requires full privileges against the accounts, you can't just pass the access token; you also have to "do something" with an X.509 certificate.
In my registered app within Azure, I have adjusted the manifest to include a self-signed certificate, so that I have:
"keyCredentials": [{
"customKeyIdentifier": "lhbl...../w0bjA6l1mQ8=",
"keyId": "774D2C35-2D58-.....-AC34B15472BA",
"type": "AsymmetricX509Cert",
"usage": "Verify",
"value": "MIIFtTCCA52gAwIB.....mmgufQ2rW5GSjEEXOlO1c7qw=="
}],
My understanding is that customKeyIdentifier is the Base64 of the certificate's SHA-1 fingerprint, from the command: echo $(openssl x509 -in cert.pem -fingerprint -noout) | sed 's/SHA1 Fingerprint=//g' | sed 's/://g' | xxd -r -ps | base64
The value is literally the certificate content, with the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines removed, and all newlines removed too (otherwise the JSON in the manifest isn't valid).
The keyId is a GUID I generated in the terminal with the uuidgen command; I don't think it's related directly to the certificate in any way.
What I'm not sure of, then, is what I have to change within my code that is going to try to auth against EWS.
I have started out with the node-ews library; my configuration looks like:
var ewsConfig = {
  username: userEmail,
  token: self.accessToken,
  host: 'https://outlook.office365.com/EWS/Exchange.asmx',
  auth: 'bearer'
};
var ews = new EWS(ewsConfig);
var ewsFunction = 'UpdateInboxRules';
ews.run(ewsFunction, ewsArgs)
  .then(result => {
    cb(null, result);
  })
  .catch(err => {
    cb(err);
  });
self.accessToken is the same token that I receive when accessing the Microsoft Graph API.
So, in conclusion, my questions are:
What do I need to do to my request so that I am telling the server to also auth the X.509 certificate? I read that I may need to convert it to a PKCS#12 certificate.
Can I use the same access token that I am successfully using to access the Graph API?
Is there a code snippet anywhere for Node.js doing this?
Is the keyId OK to be any identifier I want to give it?
The response I get back contains:
The response I get back contains:
{ 'content-length': '0',
server: 'Microsoft-IIS/8.5',
'request-id': '9b0d7a1b-85e6-40f6-9af0-7f65fc6669dc',
'x-calculatedfetarget': 'MM1P123CU001.internal.outlook.com',
'x-backendhttpstatus': '401, 401',
'set-cookie': [Object],
'x-feproxyinfo': 'MM1P123CA0026.GBRP123.PROD.OUTLOOK.COM',
'x-calculatedbetarget': 'MM1P123MB1337.GBRP123.PROD.OUTLOOK.COM',
'x-ms-diagnostics': '2000001;reason="The access token is acquired using an authentication method that is too weak to allow access for this application. Presented auth strength was 1, required is 2.";error_category="invalid_token"',
'x-diaginfo': 'MM1P123MB1337',
'x-beserver': 'MM1P123MB1337',
'x-feserver': 'MM1P123CA0026, VI1PR0701CA0059',
'x-powered-by': 'ASP.NET',
'www-authenticate': 'Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000#*", token_types="app_asserted_user_v1 service_asserted_app_v1", authorization_uri="https://login.windows.net/common/oauth2/authorize", error="invalid_token",Basic Realm="",Basic Realm="",Basic Realm=""',
date: 'Tue, 02 May 2017 18:08:54 GMT',
connection: 'close' } }
Thanks, much appreciated
I followed this article to generate the access_token: https://blogs.msdn.microsoft.com/arsen/2015/09/18/certificate-based-auth-with-azure-service-principals-from-linux-command-line/. I did have some issues with JWT signing; I had to use openssl rsa -check -in key.pem to decrypt the key and save it in a text file, and then JWT signing worked. You also need to be impersonating; see https://github.com/CumberlandGroup/node-ews/issues/39
It may help with node-ews, though I have not tested this scenario with node-ews. If you are interested in a more robust approach with EWS Managed API-style coding, I have ported the C# version of ews-managed-api to ews-javascript-api. Here is the sample code to achieve the same with ews-javascript-api, tested and confirmed working:
var ews = require("ews-javascript-api");
ews.EwsLogging.DebugLogEnabled = false;
var exch = new ews.ExchangeService(ews.ExchangeVersion.Exchange2013);
exch.Credentials = new ews.OAuthCredentials("oauth access_token");
exch.Url = new ews.Uri("https://outlook.office365.com/Ews/Exchange.asmx");
exch.ImpersonatedUserId = new ews.ImpersonatedUserId(ews.ConnectingIdType.SmtpAddress, "user@domain.com");
exch.HttpHeaders = { "X-AnchorMailbox": "user@domain.com" };

var rule = new ews.Rule();
rule.DisplayName = "MoveInterestingToJunk";
rule.Priority = 1;
rule.IsEnabled = true;
rule.Conditions.ContainsSubjectStrings.Add("Interesting");
rule.Actions.MoveToFolder = new ews.FolderId(ews.WellKnownFolderName.JunkEmail);

var ruleop = new ews.CreateRuleOperation(rule);
exch.UpdateInboxRules([ruleop], true)
  .then(function (response) {
    console.log("success - update-inboxrules");
    ews.EwsLogging.Log(response, true, true);
  }, function (err) {
    debugger;
    console.log("error in update-inboxrules");
    ews.EwsLogging.Log(err, true, true);
  });