Kentico 13 - Any Docker image available for Kentico CMS Admin

I am using Kentico 13 and planning to use two Docker containers, one to host the admin site and the other to host the user-facing site. I have used a Linux container to host the user-facing site; do you provide any image for the Kentico 13 admin like you have for Kentico 12?
One more question I have is related to the connection string. The user-facing site (a .NET Core application) uses a connection string which is present in the app settings file, and the Kentico DLL uses that connection string. Is it possible for me to move it to an environment variable instead of keeping it in app settings? It would be easier for me to use a single Docker image and change the environment variable to point to different DB instances.
Docker image: Is it possible to have two Docker images (one Windows based for the admin and the other Linux based)? And is there any Docker image available for Kentico 13, just like there is for Kentico 12?
Connection string: Since the connection string is present in the appsettings file, I need to build multiple Docker images (one per environment). If I could read the connection string from an environment variable, it would be easier to use a single image with different environment variable values for each environment.
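For the environment-variable approach: ASP.NET Core's default configuration builder already reads environment variables as a configuration source, so a connection string defined under `ConnectionStrings` in appsettings.json can be overridden with a variable named with a double underscore separator. A sketch, assuming `kentico13-live` as the image name and `CMSConnectionString` as the connection string key (adjust both to your setup):

```shell
# One image, different DB per environment: override the connection string
# at run time instead of baking it into appsettings.json.
# "kentico13-live" and the server/database values are placeholders.
docker run -d \
  -e "ConnectionStrings__CMSConnectionString=Data Source=dev-sql;Initial Catalog=Kentico13;User Id=sa;Password=<secret>" \
  -p 8080:80 \
  kentico13-live
```

The same image can then point at dev, staging, or production by changing only the `-e` value in each environment's compose file or pipeline.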

To my knowledge Kentico does not have an updated/public Docker image for Kentico Xperience 13. Also, since the admin tool runs on the full .NET Framework (4.x), early versions of 12 and 13 had known issues running inside a container that isn't Windows based. So I would check out this guide and make sure you have updated to a recent hotfix as well.
https://dev.to/truelime/top-3-reasons-to-use-docker-with-kentico-xperience-2689


Using GitLab + Read the Docs for documentation on a Private VM: RTD build fails

Background
I am a technical writer trying to use Read the Docs to generate documentation for one of our products. As we have a non-disclosure agreement covering any publication, I have to host the documentation on a virtual machine so that customers with intranet access can read it.
Installation
GitLab
My VM runs CentOS 8. I installed GitLab Community Edition through Docker and created a repository for my Markdown source code under the root account, the address of the repo being http://${vm_address}/root/${repo_name}. The GitLab container runs on port 20 of my VM.
Read the Docs
As RTD does not officially support on-premises deployment, I pulled an unofficial image from Docker Hub; see vassilvk/readthedocs. This RTD container runs on port 8000 of my VM. I use the username "admin" to log into RTD.
Procedure I Took to Integrate GitLab and RTD
To import the source code from my GitLab instance, I did the following:
On the Project page, click Import a Project.
Click Import Manually on the left panel.
In the Project Details page, fill in the fields as follows:
Project name: ${my_project_name}
Repository URL: ${Clone_With_HTTP_Address} I copied the URL from the "Clone with HTTP" field under the Clone button dropdown in GitLab
Repository Type: Git
In the Advanced Project Options, I set Documentation Type to Sphinx HTML.
Click Finish.
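Before digging into the RTD container settings, it can help to confirm that the repository builds with plain Sphinx, since an RTD failure with exit code 1 usually just means `sphinx-build` itself failed. A sketch, assuming the Sphinx sources (conf.py and a master document) live in a `docs/` directory and that the Markdown pages are wired in through a Markdown bridge such as recommonmark (both are assumptions; adjust to your layout):

```shell
# Reproduce the RTD build locally: if this fails, the RTD container will too.
# "docs/" is a placeholder for wherever conf.py lives in the repo.
pip install sphinx recommonmark
sphinx-build -b html docs/ docs/_build/html
```

If this command fails locally with the same error, the problem is in the Sphinx project (e.g. a missing conf.py), not in the GitLab or RTD containers.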
Result
The build fails with error code 1.
Question
What did I do wrong with the RTD project settings?
Is something wrong with my RTD or GitLab container settings?
Do I still need to install Sphinx on the VM?
As we have a non-disclosure agreement for any publication, I have to host the documentation
This does not follow at all. You must be looking at the wrong ReadTheDocs. There are two sites:
ReadTheDocs.org - that one is the free, publicly visible hosting.
ReadTheDocs.com - that's the one you want, it hosts private repositories for businesses exactly like yours.
Unless you're in a well-managed, secure IT environment, running random Docker images on your own VM will almost certainly lead to inadvertent disclosure. Are you in the hosting business? No. Don't play at being a hosting business when all you want is to write some private documentation. There are products for that.

Problems running .NET Core 3.1 and SQL Server 2017 Linux images on Linux host

I'm running a Web API using .NET Core 3.1 with EF Core 3.1 connecting to a SQL Server 2017 database. Every component is built in a Linux Docker image and I'm using docker-compose with Nginx as a reverse proxy to route to my Web API.
When I test my deployment on Windows 10 using Docker Desktop, everything works perfectly. When I test the exact same images, with the exact same docker-compose.yml file, on a Linux host (Ubuntu 18.04), the connection from the API to the database seems to drop from time to time, randomly, which is a problem when it happens in the middle of a DB write operation. I either get a 504 (timeout) or, worse, a lock on a database row which blocks access to that table.
Again, if I deploy in Windows, this never happens. Only happens when the Docker host is Linux.
Any help would be greatly appreciated. I've been trying to find some solution for the past 3 days, but nothing I've tried corrected that behaviour on my Linux box.
One more thing: on Windows, instead of connecting the API to the DB by name (the container's name), I'm using host.docker.internal in the connection string. There is no equivalent for Linux. I've read about a couple of hacks, but none seem to solve my problem.
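On the host.docker.internal point: since Docker Engine 20.10 there is a supported equivalent on Linux via the special `host-gateway` value, although inside a compose network the compose service name is usually the better choice anyway. A sketch (service and image names are placeholders):

```shell
# Docker 20.10+ on Linux: map host.docker.internal to the host's gateway IP.
docker run --add-host=host.docker.internal:host-gateway my-api

# Inside a docker-compose network, prefer the service name from the
# compose file (e.g. a service called "db") in the connection string:
#   Server=db,1433;Database=MyDb;User Id=sa;Password=<secret>
```

Using the service name keeps the traffic on the compose network and removes the Windows/Linux difference entirely.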

Deploying Strapi to Azure

I want to deploy Strapi to Azure. Has anyone here had experience doing this and getting it up and running completely? Somehow I couldn't find any detailed instructions on how to do that in Azure. I'm looking for something as easy as deploying to Heroku - though it's fine if it requires more steps, as long as I can make it work completely.
These are the complete instructions, which I have also put in the README of the repository.
Strapi-Azure 3.1.3
This is a working repository of Strapi 3.1.3 which you can deploy directly as an Azure Web App. This requires a paid subscription, minimum of a B1 plan (estimated 32 USD), so we can enable the 64-bit platform configuration and the Always On feature.
To get started, let us first create and configure our Azure Web App:
Create an instance:
Name: The name of your choice that is still available
Publish: Code
Runtime stack: Node 12 LTS
Operating System: Windows
Region: select a region near you
SKU and Size: select B1 (minimum)
Configure the Environment variables:
Add the following key-value pairs:
For HOST, ping your .azurewebsites.net instance and use the IP it resolves to
Configure the Platform Settings
In the General Settings tab (beside the Application Settings), change the Platform from 32 Bit to 64 Bit
To confirm if you are indeed now on 64 Bit mode, go to Console and run node -p "process.arch"
Install yarn:
Go again to Console and run: npm install -g yarn
Deploy a copy of the strapi-azure repo from your GitHub account
In the Deployment Center tab, connect your GitHub account and browse your copy of strapi-azure
Select App Service build service as your build provider
Select repository and branch
Deploy!
Build your Admin UI using Kudu service
Go to Advanced Tools -> Go -> expand the Debug console from the toolbar -> CMD
Inside the wwwroot directory (site/wwwroot/), execute yarn build
See it in action 😊
It should not be any different from installing Strapi on any VM (Azure, AWS, GCP, or even a local VM).
The quick start guide should help you set things up and run the Strapi server --> https://strapi.io/documentation/3.x.x/getting-started/quick-start.html
In short: install Node.js, npm, and Strapi (via npm), execute strapi new cms --quickstart, and you should be good to go (with the default configuration).
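The quick-start flow above looks roughly like this; package names and flags changed between Strapi releases, so treat it as a sketch for the 3.x line referenced in this thread (`cms` is just a project folder name):

```shell
# Install Node.js and npm first, then the Strapi CLI, then scaffold a project.
npm install -g strapi
strapi new cms --quickstart   # creates the project and starts it with the default (SQLite) config
```

Once the server is running, the admin panel is served on the configured host/port and you can create the first admin user from the browser.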
Assuming you have it within a GIT repository, I may have some useful insights.
When I set mine up, I created an App Service hosted on Windows - for some reason I found the Linux ones very unstable. I then used the Deployment Center to set up the connection between my repository, hosted on Azure DevOps, and my App Service. When it deploys, IISNode will automatically be set up with an appropriate web.config file for getting a Node.js server up and running.
You may need to ensure you are running in production (assuming this is what you want); you can set this up by going to App Service - Configuration - Application Settings (tab) and creating a new variable called NODE_ENV set to "production".
I also found it useful to set WEBSITE_NODE_DEFAULT_VERSION and specify the version - in my case it was "10.15.2".
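Those two settings can also be applied from the Azure CLI instead of the portal; a sketch, with the resource group and app names as placeholders:

```shell
# Set the Node environment and runtime version on the App Service.
# "my-rg" and "my-strapi-app" are placeholders for your own names.
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-strapi-app \
  --settings NODE_ENV=production WEBSITE_NODE_DEFAULT_VERSION=10.15.2
```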
For the database I used Cosmos DB with the Mongo API, hosted on Azure, and it worked OK - the main problem I found was that I was getting charged a lot for its usage; I'm not quite sure at this stage how to get around that.
One thing that did catch me out was the "port" setting within config/environments/production/server.json - I was hard-coding a port, which doesn't work within IISNode; it needs to be set to something like
"host": "your.domain.com",
"port": "${process.env.PORT || 1280}"
You will also need to setup your database settings in config/environments/production/database.json file.
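For reference, a production database.json for the Mongoose connector (as used with Cosmos DB's Mongo API) looks roughly like this in the Strapi 3.x-beta layout this thread uses - the host, username, and password values are placeholders, best supplied through environment variables:

```json
{
  "defaultConnection": "default",
  "connections": {
    "default": {
      "connector": "mongoose",
      "settings": {
        "host": "${process.env.DATABASE_HOST}",
        "port": 10255,
        "database": "strapi",
        "username": "${process.env.DATABASE_USERNAME}",
        "password": "${process.env.DATABASE_PASSWORD}"
      },
      "options": {
        "ssl": true
      }
    }
  }
}
```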
Happy to work through any further points - let me know.

Local Service Fabric cluster does not allow the same application type with a different version

The following post (on stackoverflow.com):
Design of Application in Azure Service Fabric
suggested that it is possible to have side-by-side installations of the same application type with different versions. I tried to install a new version of my application (fabric:/ServiceFabApp1, type ServiceFabApp1Type, version 2.0.0) on my local cluster, which already runs an application with the same name and type at version 1.0.3, and got the following error:
An application with name 'fabric:/ServiceFabApp1' already exists, its Type is 'ServiceFabApp1Type' and Version is
'1.0.3'.
You must first remove the existing application before a new application can be deployed or provide
a new name for the application.
Is it by design that the application type (for multiple versions) can be the same but the application name must be different for each version? Or does it simply not work on the local cluster but work in the Azure cloud? Or is my interpretation of the information in the above link incorrect?
Application types (e.g. ServiceFabricApp1Type) can have one or more versions, but an application instance (e.g. fabric:/ServiceFabricApp1) can only run one version at a time.
Thus, if you want to have two different versions of your application type running in your local cluster, you will need two different application instances - say, fabric:/ServiceFabricApp1 running version 1.0.0 and fabric:/ServiceFabricApp2 running version 2.0.0. The easiest way to do this with the VS tools is to create two application parameter files, each of which defines a distinct app instance name. You can then choose which instance to target with the version you're building. To move back and forth between versions of the type in VS, you'll probably want to create a branch for each.
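With the type provisioned, the two named instances can also be created explicitly from the command line, for example with sfctl (the names and versions below mirror the question and are otherwise placeholders):

```shell
# Two application instances of the same type, each pinned to its own version.
sfctl application create --app-name fabric:/ServiceFabApp1 \
    --app-type ServiceFabApp1Type --app-version 1.0.3
sfctl application create --app-name fabric:/ServiceFabApp2 \
    --app-type ServiceFabApp1Type --app-version 2.0.0
```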
When you deploy an SF application, there are several steps:
1. Copy the application package to the SF cluster image store
2. Provision the application
3. Deploy or upgrade the application
Step #1 is just copying the package to the SF cluster image store.
Step #2 provisions a new version of the application so that SF can either deploy that application, or upgrade an existing application if it has already been deployed.
Step #3 depends on what you've done before. If you have already deployed version X of your app, you can't deploy version X+1. You can only upgrade/downgrade.
If you need to run multiple instances of an application with the same version, you'll need to create different packages in which the application name is unique (a multi-tenant scenario).
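The three numbered steps above map onto sfctl commands roughly as follows (paths, names, and versions are placeholders):

```shell
# 1. Copy the application package to the cluster image store
sfctl application upload --path ./ServiceFabApp1Pkg

# 2. Provision the new version of the application type
sfctl application provision --application-type-build-path ServiceFabApp1Pkg

# 3. Upgrade the already-deployed instance to the provisioned version
sfctl application upgrade --app-id ServiceFabApp1 \
    --app-version 2.0.0 --parameters "{}" --mode Monitored
```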

Ready to use Image for SharePoint 2013 environment

I have a requirement for a ready-to-use image of a SharePoint 2013 development environment, with all the necessary installations (SQL Server, language packs, etc.) and configurations (app configuration, service application configuration, search configuration, etc.) already done, so that developers can directly use it, by either mounting or running it, and start development right away.
The environment considered here is a single-server environment with all tiers on the same server. However, any suggestions for a multi-server environment will also be helpful.
Thanks
