I deployed an ASP.NET Core 7 application to a Linux Web App in Azure.
When I access the URL I get an Application Error and the logs show:
System.IO.FileNotFoundException:
The configuration file 'settings..json' was not found and is not optional.
It seems the environment value is missing from the file name, so it should be:
settings.production.json
In the Azure App Service Configuration I have:
[
{
"name": "ASPNETCORE_ENVIRONMENT",
"value": "production",
"slotSetting": false
}
]
And the application Program.cs code is:
Serilog.Log.Logger = new Serilog.LoggerConfiguration()
.WriteTo.Console(LogEventLevel.Verbose)
.CreateBootstrapLogger();
try {
Serilog.Log.Information("Starting up");
WebApplicationBuilder builder = WebApplication.CreateBuilder(new WebApplicationOptions {
Args = args,
WebRootPath = "webroot"
});
builder.Configuration
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile("settings.json", false, true)
.AddJsonFile($"settings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json", false, true)
.AddEnvironmentVariables();
// Remaining code
Am I doing something wrong, or did something change in .NET 7?
In short, this problem occurs because the settings.production.json file was not included at the time of release.
We can verify this by uploading the 'settings.production.json' file to the scm site. The URL is https://<your_appname>.scm.azurewebsites.net/newui.
Solution:
Official doc: Include files
Sample: Use ResolvedFileToPublish in ItemGroup
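A minimal sketch of what that ItemGroup could look like in the .csproj (assuming settings.production.json sits at the project root; the target name here is arbitrary):
<Target Name="AddSettingsToPublish" AfterTargets="ComputeFilesToPublish">
  <ItemGroup>
    <!-- Publish settings.production.json alongside the app -->
    <ResolvedFileToPublish Include="settings.production.json">
      <RelativePath>settings.production.json</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>
For a single file, marking it as Content with CopyToPublishDirectory set to PreserveNewest should achieve the same result.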
So I've been working with the .NET 7 preview and have been trying to deploy a WASM project with identity and authentication, which works fine locally. When I deploy, the website 500s, and digging into some of the logs, I get:
2022-11-07T13:42:28.854805951Z fail: Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler[3]
2022-11-07T13:42:28.854856853Z Exception occurred while processing message.
2022-11-07T13:42:28.854865053Z System.NullReferenceException: Object reference not set to an instance of an object.
2022-11-07T13:42:28.856255318Z at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.IdentityServerJwtBearerOptionsConfiguration.ResolveAuthorityAndKeysAsync(MessageReceivedContext messageReceivedContext)
2022-11-07T13:42:28.856286120Z at Microsoft.AspNetCore.Authentication.JwtBearer.JwtBearerHandler.HandleAuthenticateAsync()
In my Program.cs I have
builder.Services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(connectionString));
builder.Services.AddDatabaseDeveloperPageExceptionFilter();
builder.Services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = true)
.AddEntityFrameworkStores<ApplicationDbContext>();
builder.Services.AddIdentityServer()
.AddApiAuthorization<ApplicationUser, ApplicationDbContext>();
builder.Services.AddAuthentication()
.AddIdentityServerJwt()
.AddJwtBearer()
.AddGoogle(googleOptions =>
{
googleOptions.ClientId = builder.Configuration["Authentication:Google:ClientId"];
googleOptions.ClientSecret = builder.Configuration["Authentication:Google:ClientSecret"];
});
builder.Services.AddControllersWithViews();
builder.Services.AddRazorPages();
builder.Services.AddHttpContextAccessor();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseMigrationsEndPoint();
app.UseWebAssemblyDebugging();
}
else
{
app.UseExceptionHandler("/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseBlazorFrameworkFiles();
app.UseStaticFiles();
app.UseRouting();
app.UseIdentityServer();
app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();
app.MapControllers();
app.MapFallbackToFile("index.html");
app.Run();
I've been trying to follow different Duende guides, but even when I eventually get it to run locally, I still get the same error. I've tried removing the JWT lines in AddAuthentication(), and that also did not seem to help.
You need to have this in your appsettings.json:
"IdentityServer": {
"Key": {
"Type": "Development"
}
},
Change the type to the correct one.
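For production that means pointing at a real signing certificate instead of the development key. A hedged sketch of one common shape from the ApiAuthorization docs (the Name below is just a placeholder for your own certificate's subject):
"IdentityServer": {
"Key": {
"Type": "Store",
"StoreName": "My",
"StoreLocation": "CurrentUser",
"Name": "CN=MyApplication"
}
}
A "File" type (with FilePath and Password pointing at a .pfx) is another option if you are not using a certificate store.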
Here is my config:
// cypress/plugins/index.js
module.exports = (on, config) => {
require('#cypress/code-coverage/task')(on, config);
//require('#bahmutov/cypress-extends')(on, config);
return config
}
I am getting an ERROR when trying to run Cypress in an Azure pipeline script (within a cypress/included container). This error doesn't occur when I run locally.
The function exported by the plugins file threw an error.
We invoked the function exported by `/root/e2e/cypress/plugins/index.js`, but it threw an error.
Error: Cannot find module '#cypress/code-coverage/task'
Require stack:
- /root/e2e/cypress/plugins/index.js
- /root/.cache/Cypress/9.1.1/Cypress/resources/app/packages/server/lib/plugins/child/run_plugins.js
The only unusual thing I am doing is this:
// cypress/config/cypress.local.json
{
"extends": "../../cypress.json",
"baseUrl": "https://localhost:4200"
}
And a normal cypress.json config:
// /cypress.json
{
"baseUrl": "http://localhost:4200",
"proxyUrl": "",
"defaultCommandTimeout": 10000,
"video" : false,
"screenshotOnRunFailure" : true,
"experimentalStudio": true,
"projectId": "seixri",
"trashAssetsBeforeRuns" : true,
"videoUploadOnPasses" : false,
"retries": {
"runMode": 0,
"openMode": 0
},
"viewportWidth": 1000,
"viewportHeight": 1200
}
The problem here might be that Cypress does not support extending the configuration file in the way you did, as also stated here: https://www.cypress.io/blog/2020/06/18/extending-the-cypress-config-file/
In my opinion there are two suitable solution approaches:
Approach 1: Use separate configuration files (my recommendation)
As extending an existing configuration file does not work, I would recommend having separate configuration files, e.g. one for local use and one for execution in Azure pipelines. You could then simply add two separate commands to your package.json, like:
"scripts": {
"cy:ci": "cypress run --config-file cypress/cypress.json",
"cy:local": "cypress run --config-file cypress/cypress.local.json"
},
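With that in place, you would run npm run cy:local on your machine and npm run cy:ci in the pipeline.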
Docs: https://docs.cypress.io/guides/references/configuration
Approach 2: Set configuration options in your tests
Cypress gives you the option to override configuration values directly in your tests. For example, if you have configured the following in cypress.json:
{
"viewportWidth": 1280,
"viewportHeight": 720
}
You can change the viewportWidth in your test like:
Cypress.config('viewportWidth', 800)
Docs: https://docs.cypress.io/api/cypress-api/config#Syntax
I am building a chatbot on Bot Framework 4 on .NET Core 2.2. The chatbot has LUIS and QnA Maker integrated, and it works perfectly fine locally in the Emulator with and without security (Microsoft App ID/Password). After I deploy it to Azure using Azure DevOps, it gives me the error below:
I have followed the instructions here. It works fine locally but not after deployment on Azure.
Here is my appsettings.json.
{
"Logging": {
"LogLevel": {
"Default": "Warning"
}
},
"botFilePath": "nlp-with-dispatch.bot",
"botFileSecret": "",
"MicrosoftAppId": "a8402bb0-3a7a-4727-a2b1-e8012b009732",
"MicrosoftAppPassword": "<password here>",
"QnAKnowledgebaseId": "55c79164-f0f1-4b4e-ab7e-1a5481227683",
"QnAEndpointKey": "<key here>",
"QnAEndpointHostName":
"https://<name>.azurewebsites.net/qnamaker",
"LuisAppId": "44d2cf32-153d-4d57-b5ac-30e34be7faa3",
"LuisAPIKey": "<key here>",
"LuisAPIHostName": "westus",
"AllowedHosts": "*"
}
EDIT 1: I am getting the following in browser console when I try to test from Test in Web Chat.
EDIT 2:
When I add the Microsoft App ID and Password in the Emulator while working on localhost, my bot gets an Authentication error in the Emulator.
EDIT 3: This is the exception I am getting
POST to CivicTheBot failed: POST to the bot's endpoint failed with HTTP status 403 System.Exception at Microsoft.Bot.ChannelConnector.BotAPI.ThrowOnFailedStatusCode
EDIT 4:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.BotFramework;
using Microsoft.Bot.Builder.Integration.AspNet.Core;
using Microsoft.Bot.Connector.Authentication;
using Microsoft.Extensions.DependencyInjection;
using IntermediatorBotSample.Middleware;
using Microsoft.Extensions.Configuration;
using Microsoft.Bot.Builder.Core.Extensions;
using System;
using Microsoft.Bot.Builder.TraceExtensions;
namespace Microsoft.BotBuilderSamples
{
public class Startup
{
public IConfiguration Configuration
{
get;
}
public Startup(IHostingEnvironment env)
{
var builder = new ConfigurationBuilder()
.SetBasePath(env.ContentRootPath)
.AddJsonFile("appsettings.json", optional: true,
reloadOnChange: true)
.AddJsonFile($"appsettings.{env.EnvironmentName}.json",
optional: true)
.AddEnvironmentVariables();
Configuration = builder.Build();
}
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
// Create the Bot Framework Adapter with error handling enabled.
services.AddSingleton<IBotFrameworkHttpAdapter,
AdapterWithErrorHandler>();
// Create the bot services (LUIS, QnA) as a singleton.
services.AddSingleton<IBotServices, BotServices>();
// Create the bot as a transient.
services.AddTransient<IBot, DispatchBot>();
// Create the User state.
services.AddSingleton<UserState>();
services.AddMvc().AddControllersAsServices();
services.AddSingleton(_ => Configuration);
services.AddBot<DispatchBot>(options =>
{
// options.CredentialProvider = new ConfigurationCredentialProvider(Configuration);
options.Middleware.Add(new CatchExceptionMiddleware<Exception>(async (context, exception) =>
{
await context.TraceActivityAsync("Bot Exception", exception);
await context.SendActivityAsync($"Sorry, it looks like something went wrong: {exception.Message}");
}));
// Handoff middleware
options.Middleware.Add(new HandoffMiddleware(Configuration));
});
services.AddMvc(); // Required Razor pages
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
else
{
app.UseHsts();
}
app.UseDefaultFiles();
app.UseStaticFiles();
app.UseBotFramework();
//app.UseHttpsRedirection();
app.UseMvc();
}
}
}
I am new to Docker as well as Azure Batch. The problem I currently have is that I have two .NET console applications: one runs locally (it creates the pool, job and task on Azure Batch programmatically), and for the second one I have created a Docker image and pushed it to Azure Container Registry. Now the thing is, when I create the CloudTask from the locally running application as mentioned below
TaskContainerSettings cmdContainerSettings = new TaskContainerSettings(
imageName: "myrepository.azurecr.io/pipeline:latest",
containerRunOptions: "--rm"
);
CloudTask containerTask = new CloudTask(
id: "task1",
commandLine: cmdLine);
containerTask.ContainerSettings = cmdContainerSettings;
Console.WriteLine("Task created");
await batchClient.JobOperations.AddTaskAsync(newJobId, containerTask);
Console.WriteLine("-----------------------");
and add it to the BatchClient, the exception I get in Azure Batch (Azure portal) is this:
System.UnauthorizedAccessException: Access to the path '/home/_azbatch/.dotnet' is denied. ---> System.IO.IOException: Permission denied
--- End of inner exception stack trace ---
What can be the problem? Thank you.
As the comment ended up being the answer, I'm posting it here for clarity for future viewers:
The task needs to be run with elevated rights.
e.g.
containerTask.UserIdentity = new UserIdentity(new AutoUserSpecification(elevationLevel: ElevationLevel.Admin, scope: AutoUserScope.Task));
See the docs for more info
I am still not able to pull the image from Docker. I am using Node.js; the following is my config for creating the task:
const taskConfig = {
"id": "task-new-2",
"commandLine": "bash -c 'node index.js'",
"containerSettings": {
"imageName": "xxx.xx.io/xx-test:latest",
"containerRunOptions": "--rm",
"username": "xxx",
"password": "tfDlZ",
"registryServer": "xxx.xx.io",
// "workingDirectory": "AZ_BATCH_NODE_ROOT_DIR"
},
"userIdentity": {
"autoUser": {
"scope": "pool",
"elevationLevel": "admin"
}
}
}
Question regarding SAML configuration.
I'm currently running GitLab 9.1 CE on CentOS 7. I have an Apache instance on the front end as a reverse proxy to GitLab, handling HTTP(S).
My gitlab.rb has the following configured:
external_url 'http://external.apache.server/gitlab/'
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'saml'
gitlab_rails['omniauth_block_auto_created_users'] = false
# gitlab_rails['omniauth_auto_link_ldap_user'] = false
gitlab_rails['omniauth_auto_link_saml_user'] = true
# gitlab_rails['omniauth_external_providers'] = ['twitter', 'google_oauth2']
# gitlab_rails['omniauth_providers'] = [
# {
# "name" => "google_oauth2",
# "app_id" => "YOUR APP ID",
# "app_secret" => "YOUR APP SECRET",
# "args" => { "access_type" => "offline", "approval_prompt" => "" }
# }
# ]
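(For context, a SAML setup would normally also declare a 'saml' entry under omniauth_providers. The block below is only a placeholder sketch in the same style as the commented-out Google example; the IdP URLs, fingerprint and issuer are assumptions you would replace with your own values.)
# Placeholder sketch only, values are assumptions
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "saml",
    "label" => "Company SAML",
    "args" => {
      "assertion_consumer_service_url" => "https://external.apache.server/gitlab/users/auth/saml/callback",
      "idp_cert_fingerprint" => "<IdP certificate fingerprint>",
      "idp_sso_target_url" => "https://idp.example.com/saml/sso",
      "issuer" => "https://external.apache.server/gitlab",
      "name_identifier_format" => "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"
    }
  }
]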
In order to set up SAML, my provider is asking for the information returned from http://external.apache.server/gitlab/users/auth/saml/metadata, which returns a 404.
Reading the SAML documentation, it mentions that GitLab needs to be configured for SSL; I'm not sure if this is why the URL mentioned above returns a 404.
The problem with enabling SSL is that my external URL already provides it, and if I use it as-is (https://external.apache.server) then GitLab looks for a key/cert for that domain on the box, which doesn't seem correct. I don't want to change the external URL, as it should be fronted by Apache. I'm a bit confused about what the proper configuration should be.
Thanks