I followed the Swashbuckle installation manual for IIS-hosted APIs. I installed the NuGet package and deployed the API to IIS. The documentation is now available. However, all URLs are suffixed with "-v1", for example "/api/v1/myresource-v1()".
As a consequence, IIS complains that there is no route for this URL. "/api/v1/myresource" would be correct.
Why does this happen, and how do I remove this suffix from all routes?
Thank you.
I don't have a direct solution for this problem, but this workaround works for me:
c.MultipleApiVersions((apiDesc, targetApiVersion) =>
    {
        // requires: using System.Text.RegularExpressions; using System.Globalization;
        // remove any version text from the tags
        apiDesc.ActionDescriptor.ControllerDescriptor.ControllerName = Regex.Replace(apiDesc.ActionDescriptor.ControllerDescriptor.ControllerName, @"v([\d]+)", string.Empty, RegexOptions.IgnoreCase);
        // remove -vX() from the controller's relative path
        apiDesc.RelativePath = Regex.Replace(apiDesc.RelativePath, @"(-v[\d]+\(\))", string.Empty, RegexOptions.IgnoreCase);
        // replace the "v{version}" placeholder in the relative path with the current API version
        apiDesc.RelativePath = Regex.Replace(apiDesc.RelativePath, @"(v[{]+version[}])", targetApiVersion, RegexOptions.IgnoreCase);
        var apiParameterDescription = apiDesc.ParameterDescriptions.FirstOrDefault(p => p.Name == "version");
        if (apiParameterDescription != null)
        {
            apiDesc.ParameterDescriptions.Remove(apiParameterDescription);
        }
        // now filter out any controllers that aren't the target version
        var controllerNamespace = apiDesc.ActionDescriptor.ControllerDescriptor.ControllerType.FullName;
        return CultureInfo.InvariantCulture.CompareInfo.IndexOf(controllerNamespace, $".{targetApiVersion}.", CompareOptions.IgnoreCase) >= 0 &&
               apiDesc.RelativePath.Contains($"{targetApiVersion}/");
    },
    vc =>
    {
        vc.Version("v4", "Api V4");
        vc.Version("v3", "Api V3");
        vc.Version("v2", "Api V2");
        vc.Version("v1", "Api V1");
    }
);
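For context, this snippet is meant to sit inside the EnableSwagger configuration in SwaggerConfig.cs; a minimal sketch, assuming Swashbuckle 5.x for classic ASP.NET Web API:
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        // ... the MultipleApiVersions workaround from above goes here ...
    })
    .EnableSwaggerUi();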
I am currently developing a Web API on .NET 5.0 with Swagger.
I have hosted my application in IIS. I am able to see my Web API working, but Swagger is not responding.
services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "TestWebApi", Version = "v1" });
    c.AddSecurityDefinition("Bearer", new OpenApiSecurityScheme
    {
        In = ParameterLocation.Header,
        Description = "Please insert JWT with Bearer into field",
        Name = "Authorization",
        Type = SecuritySchemeType.ApiKey
    });
    c.AddSecurityRequirement(new OpenApiSecurityRequirement {
        {
            new OpenApiSecurityScheme
            {
                Reference = new OpenApiReference
                {
                    Type = ReferenceType.SecurityScheme,
                    Id = "Bearer"
                }
            },
            new string[] { }
        }
    });
});
Also, in Configure I used a relative path with double dots for the endpoint, as suggested in one of the GitHub issues:
app.UseSwaggerUI(c => c.SwaggerEndpoint("../swagger/v1/swagger.json", "TestWebApi v1"));
I tried browsing the Web API locally, but I am getting a 404 error.
Thanks
app.UseSwaggerUI was inside the env.IsDevelopment() condition. For this reason it worked in VS 2019 but not in the IIS deployment.
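In other words, the fix is to move the Swagger middleware out of the development-only block; a minimal sketch, assuming the standard ASP.NET Core 5.0 Startup template:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    // registered unconditionally, so Swagger also works when deployed to IIS
    app.UseSwagger();
    app.UseSwaggerUI(c => c.SwaggerEndpoint("../swagger/v1/swagger.json", "TestWebApi v1"));
    // ... UseRouting, UseAuthorization, UseEndpoints, etc. ...
}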
Whatever I try, I cannot set an extension property on a User object. Here is a reproducible piece of code:
public async Task CleanTest(string extName)
{
    ExtensionProperty ep = new ExtensionProperty
    {
        Name = extName,
        DataType = "String",
        TargetObjects = { "User" }
    };
    App app = (App)(await _client.Applications.Where(a => a.AppId == _managementAppClientId).ExecuteSingleAsync());
    app.ExtensionProperties.Add(ep);
    await app.UpdateAsync();
    GraphUser user = (GraphUser)(await _client.Users.Where(u => u.UserPrincipalName.Equals("email")).ExecuteSingleAsync());
    string propName = FormatExtensionPropertyName(extName); // formats properly as extension_xxx_name
    user.SetExtendedProperty(propName, "testvalue");
    //user.SetExtendedProperty(extName, "testvalue");
    await user.UpdateAsync(); // fails here
}
According to Fiddler, user.UpdateAsync() doesn't even send a request, and the application fails with an exception:
"The property 'extension_e206e28ff36244b19bc56c01160b9cf0_UserEEEqdbtgd3ixx2' does not exist on type 'Microsoft.Azure.ActiveDirectory.GraphClient.Internal.User'. Make sure to only use property names that are defined by the type."
This issue is also being tracked here:
https://github.com/Azure-Samples/active-directory-dotnet-graphapi-console/issues/28
I've got an alternative workaround for this bug, for those who want to use the version 5.7 OData libraries rather than redirecting to the v5.6.4 versions.
Add a request pipeline configuration handler:
// initialize in the usual way
ActiveDirectoryClient activeDirectoryClient =
    AuthenticationHelper.GetActiveDirectoryClientAsApplication();
// after initialization, add a handler to the request pipeline configuration
activeDirectoryClient.Context
    .Configurations.RequestPipeline
    .OnMessageWriterSettingsCreated(UndeclaredPropertyHandler);
In the handler, change the ODataUndeclaredPropertyBehaviorKinds value on the writer settings to SupportUndeclaredValueProperty. The inner settings field is non-public, so it has to be reached via reflection:
private static void UndeclaredPropertyHandler(MessageWriterSettingsArgs args)
{
    var field = args.Settings.GetType().GetField("settings",
        BindingFlags.NonPublic | BindingFlags.Instance);
    var settingsObject = field?.GetValue(args.Settings);
    var settings = settingsObject as ODataMessageWriterSettings;
    if (settings != null)
    {
        settings.UndeclaredPropertyBehaviorKinds =
            ODataUndeclaredPropertyBehaviorKinds.SupportUndeclaredValueProperty;
    }
}
Just in case you are still looking for a solution to this problem, or someone else is facing the same issue:
I had a similar issue, and it looks like, at least for me, the problem was in the latest version of the "Microsoft.Data.Services.Client" package, 5.7.0 (or in one of its dependencies). When I downgraded to the previous version, 5.6.4, it worked like a charm.
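For reference, the downgrade can be done from the NuGet Package Manager Console (package name as above):
Install-Package Microsoft.Data.Services.Client -Version 5.6.4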
I had the same symptoms: updating the extended property was failing even without any request being made (I also used Fiddler).
Hope it helps!
Artem Liman
I am trying to use pkgcloud (Node.js) OpenStack with Bluemix Object Storage, but when I supply all the required parameters as shown on the official page, it always returns 401. I tried the request with Postman as described on Bluemix, and that works.
I created a package that is able to authorize correctly. It is just a copy of pkgcloud with a few fixes.
EDIT: IT IS WORKING! V2 support was shut down by Bluemix and it now has only V3 support, but I once again found and fixed the issues.
Remember to use the newest version (2.0.0).
This is how you can use it now:
var pkgcloud = require('pkgcloud-bluemix-objectstorage');

// Create a config object
var config = {};

// Specify OpenStack as the provider
config.provider = "openstack";

// Authentication url
config.authUrl = 'https://identity.open.softlayer.com/';
config.region = 'dallas';

// Use the service catalog
config.useServiceCatalog = true;

// true for applications running inside Bluemix, otherwise false
config.useInternal = false;

// projectId as provided in your Service Credentials
config.tenantId = 'xxx';
// userId as provided in your Service Credentials
config.userId = 'xxx';
// username as provided in your Service Credentials
config.username = 'xxx';
// password as provided in your Service Credentials
config.password = 'xxx';

// This part is NOT in the original pkgcloud. This is how it works with the newest
// version of Bluemix and pkgcloud as of 22.12.2015. In reality, anything you put in
// config.auth will be sent in the body to the server, so if you need to change
// anything to make it work, you can. PS: yes, these are the same credentials as in
// the config above; I do not fill this in automatically, to keep it transparent.
config.auth = {
    forceUri: "https://identity.open.softlayer.com/v3/auth/tokens", // force the URI to v3; usually you take the base auth URL and append /v3/auth/tokens (at least in Bluemix)
    interfaceName: "public", // use "public" for apps outside Bluemix and "internal" for apps inside Bluemix; there is also an "admin" interface, which I personally do not know the purpose of
    "identity": {
        "methods": [
            "password"
        ],
        "password": {
            "user": {
                "id": "***", // userId
                "password": "***" // userPassword
            }
        }
    },
    "scope": {
        "project": {
            "id": "***" // projectId
        }
    }
};

console.log("config: " + JSON.stringify(config));

// Create a pkgcloud storage client
var storageClient = pkgcloud.storage.createClient(config);

// Authenticate to OpenStack
storageClient.auth(function (error) {
    if (error) {
        console.error("storageClient.auth() : error creating storage client: ", error);
    }
    else {
        // Print the identity object, which contains your Keystone token.
        console.log("storageClient.auth() : created storage client: " + JSON.stringify(storageClient._identity));
    }
});
PS: You should be able to connect to this service from outside Bluemix, so you can test it on your localhost.
The lines below are the old content for version 1.2.3; read on only if you want to use the v2 identity version of pkgcloud, which worked with Bluemix before January 2016.
EDIT: It looks like Bluemix shut down support for v2 OpenStack identity and only supports v3, which is not supported by pkgcloud at all. So this does not work anymore (at least for me).
The problem is actually in the authorization process between pkgcloud and Bluemix: Bluemix expects slightly different authorization. I created a package that is able to authorize correctly. It is just a copy of pkgcloud with a few fixes.
And this is how you can use it:
var pkgcloud = require('pkgcloud-bluemix-objectstorage');

// Create a config object
var config = {};

// Specify OpenStack as the provider
config.provider = "openstack";

// Authentication url
config.authUrl = 'https://identity.open.softlayer.com/';
config.region = 'dallas';

// Use the service catalog
config.useServiceCatalog = true;

// true for applications running inside Bluemix, otherwise false
config.useInternal = false;

// projectId as provided in your Service Credentials
config.tenantId = 'xxx';
// userId as provided in your Service Credentials
config.userId = 'xxx';
// username as provided in your Service Credentials
config.username = 'xxx';
// password as provided in your Service Credentials
config.password = 'xxx';

// This part is NOT in the original pkgcloud. This is how it worked with Bluemix and
// pkgcloud as of 22.12.2015. In reality, anything you put in config.auth will be sent
// in the body to the server, so if you need to change anything to make it work, you
// can. PS: yes, these are the same credentials as in the config above; I do not fill
// this in automatically, to keep it transparent.
config.auth = {
    tenantId: "xxx", // projectId
    passwordCredentials: {
        userId: "xxx", // userId
        password: "xxx" // password
    }
};

console.log("config: " + JSON.stringify(config));

// Create a pkgcloud storage client
var storageClient = pkgcloud.storage.createClient(config);

// Authenticate to OpenStack
storageClient.auth(function (error) {
    if (error) {
        console.error("storageClient.auth() : error creating storage client: ", error);
    }
    else {
        // Print the identity object, which contains your Keystone token.
        console.log("storageClient.auth() : created storage client: " + JSON.stringify(storageClient._identity));
    }
});
We decided to implement search functionality in our API, which is developed in ServiceStack. We chose Lucene.Net since we heard it was a great indexer for searches.
We created a worker role whose job is to create the indexes in an Azure Storage folder, following Leon Cullen's tutorial. We use the AzureDirectory library specified in that post, so we could use the latest Azure SDK.
Then in our API project we added the references for Lucene.Net and AzureDirectory as well; our endpoint ended up looking like this:
public object Post(SearchIndex request)
{
    List<Product> products = new List<Product>();
    var pageSize = -1;
    var totalpages = -1;
    int.TryParse(ConfigurationManager.AppSettings["PageSize"], out pageSize);
    if (request.Page == 0)
    {
        request.Page = 1;
    }
    // Get Azure settings
    AzureDirectory azureDirectory;
    try
    {
        // This is the line where we get the Access denied exception thrown at us
        azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
        IndexSearcher searcher;
        using (new AutoStopWatch("Creating searcher"))
        {
            searcher = new IndexSearcher(azureDirectory);
        }
        using (new AutoStopWatch(string.Format("Search for {0}", request.SearchString)))
        {
            string[] searchfields = new string[] { "Id", "Name", "Description" };
            var hits = searcher.Search(QueryMaker(request.SearchString, searchfields), request.Page * pageSize);
            int count = hits.ScoreDocs.Count();
            // compute the page count, rounding up for a partial last page
            float temp_totalpages = (float)hits.ScoreDocs.Count() / (float)pageSize;
            if (temp_totalpages > (int)temp_totalpages)
            {
                totalpages = (int)temp_totalpages + 1;
            }
            else
            {
                totalpages = (int)temp_totalpages;
            }
            foreach (ScoreDoc match in hits.ScoreDocs)
            {
                Document doc = searcher.Doc(match.Doc);
                int productId = int.Parse(doc.Get("Id"));
                Product product = Db.Select<Product>("Id={0}", productId).FirstOrDefault();
                products.Add(product);
            }
        }
        return new SearchIndexResult { result = products.Skip((int)((request.Page - 1) * 10)).Take(pageSize).ToList(), PageSize = pageSize, TotalPages = totalpages };
    }
    catch (Exception e)
    {
        return new HttpResult(HttpStatusCode.NoContent, "azureDirectory. Parameter: " + request.SearchString + ". e: " + e.Message);
    }
}
If we run this locally it works as expected, returning the results we were expecting. But when we published our API to Azure and tried to access the search endpoint, we received a 403 error with the message 'Access to the path "D:/AzureDirectory" is denied'.
We're confused as to why it is trying to access that folder at all: the folder name looks wrong and appears to be a local path, and we don't know why it works fine locally but stops working once deployed to Azure.
The worker role runs without problems; it's the API side that cannot access the folder in Azure Storage. Are we missing an important configuration step? The tutorial we followed wasn't very clear for beginners with Lucene.Net or Azure Storage, so we fear we might have missed something. We've checked our connection strings, though, and everything seems OK.
For reference, here is the relevant AzureDirectory source:
https://github.com/azure-contrib/AzureDirectory/blob/master/AzureDirectory/AzureDirectory.cs
When you do this:
azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
this executes:
var cachePath = Path.Combine(Path.GetPathRoot(Environment.SystemDirectory), "AzureDirectory");
var azureDir = new DirectoryInfo(cachePath);
if (!azureDir.Exists)
    azureDir.Create();
var catalogPath = Path.Combine(cachePath, _containerName);
var catalogDir = new DirectoryInfo(catalogPath);
if (!catalogDir.Exists)
    catalogDir.Create();
_cacheDirectory = FSDirectory.Open(catalogPath);
Since no cache directory was passed in, AzureDirectory defaults its local cache to the root of the system drive ("D:\AzureDirectory" on Azure), which the web app cannot write to. So a simple solution for you might be to place that cache directory under the site root:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info), true);
I got it to work. I took the latest version of AzureDirectory from GitHub, got the latest NuGet packages for Azure Storage etc., and recreated the index.
In addition to @brykneval's answer: I tried his solution, but the last parameter, bool compressBlob = false, which he set to true, made my local debugging fail with a 404 exception from the AzureDirectory library, and when I published to an Azure web app it threw an exception with the message System.IO.InvalidDataException: Block length does not match with its complement.
I removed the last parameter from the constructor and everything works like a charm. Hope this helps anyone.
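In other words, a sketch reusing the variable names from the answer above; leaving the argument out relies on it being optional, as this answer's experience suggests:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
// no compressBlob argument passed, so blobs are not compressed
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info));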
I'm trying to read a value from a list in a remote SharePoint site (different SP Web App). The web apps are set up with Claims Auth, and the client web app SP Managed account is configured with an SPN. I believe Kerberos and claims are set up correctly, but I am unable to reach the remote server, and the request causes an exception: "The remote server returned an error: (401) Unauthorized."
The exception occurs on the line ctx.ExecuteQuery(), but it is not caught by the if (scope.HasException) check; instead, the exception is caught by the calling code (outside of the using {} block).
When I look at the traffic at the remote server using Wireshark, it doesn't look like the request is even getting to the server; it's almost as if the 401 occurs before the Kerberos ticket is exchanged for the claim.
Here's my code:
using (ClientContext ctx = new ClientContext(contextUrl))
{
    CredentialCache cc = new CredentialCache();
    cc.Add(new Uri(contextUrl), "Kerberos", CredentialCache.DefaultNetworkCredentials);
    ctx.Credentials = cc;
    ctx.AuthenticationMode = ClientAuthenticationMode.Default;
    ExceptionHandlingScope scope = new ExceptionHandlingScope(ctx);
    Web ctxWeb = ctx.Web;
    List ctxList;
    Microsoft.SharePoint.Client.ListItemCollection listItems;
    using (scope.StartScope())
    {
        using (scope.StartTry())
        {
            ctxList = ctxWeb.Lists.GetByTitle("Reusable Content");
            CamlQuery qry = new CamlQuery();
            qry.ViewXml = string.Format(ViewQueryByField, "Title", "Text", SharedContentTitle);
            listItems = ctxList.GetItems(qry);
            ctx.Load(listItems, items => items.Include(
                item => item["Title"],
                item => item["ReusableHtml"],
                item => item["ReusableText"]));
        }
        using (scope.StartCatch()) { }
        using (scope.StartFinally()) { }
    }
    ctx.ExecuteQuery();
    if (scope.HasException)
    {
        result = string.Format("Error retrieving content<!-- Error Message: {0} | {1} -->", scope.ErrorMessage, contextUrl);
    }
    if (listItems.Count == 1)
    {
        Microsoft.SharePoint.Client.ListItem contentItem = listItems[0];
        if (SelectedType == SharedContentType.Html)
        {
            result = contentItem["ReusableHtml"].ToString();
        }
        else if (SelectedType == SharedContentType.Text)
        {
            result = contentItem["ReusableText"].ToString();
        }
    }
}
I realize the part with the CredentialCache shouldn't be necessary with claims, but every single example I can find is either running in a console app or in a client-side application of some kind; this code is running in the code-behind of a regular ASP.NET UserControl.
Edit: I should probably mention, the code above doesn't even work when the remote URL is the root site collection on the same web app as the calling code (which is in a site collection under /sites/)--in other words, even when the hostname is the same as the calling code.
Any suggestions of what to try next are greatly appreciated!
Mike
Is there a reason why you are not using the standard server OM?
You already said this is running in a web part, which means it runs in the context of the application pool account. Unless you elevate permissions by switching users, it won't authenticate correctly. Maybe try that. But I would not use the client OM when you already have access to the server API.
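For illustration, the same read with the server OM under elevated permissions might look like this. A sketch only: it assumes the code runs on a server in the same farm as the target site, and it reuses contextUrl, ViewQueryByField, SharedContentTitle, and result from the question:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite site = new SPSite(contextUrl))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["Reusable Content"];
        SPQuery qry = new SPQuery();
        qry.ViewXml = string.Format(ViewQueryByField, "Title", "Text", SharedContentTitle);
        SPListItemCollection items = list.GetItems(qry);
        if (items.Count == 1)
        {
            result = items[0]["ReusableHtml"].ToString();
        }
    }
});
Note that RunWithElevatedPrivileges runs the delegate as the application pool identity, so the SPSite must be created inside the delegate for the elevation to take effect.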