Using GitLab + Read the Docs for documentation on a private VM: RTD build fails

Background
I am a technical writer trying to use Read the Docs to generate documentation for one of our products. As any publication is covered by a non-disclosure agreement, I have to host the documentation on a virtual machine, for customers with intranet access to read.
Installation
GitLab
My VM runs CentOS 8. I installed GitLab Community Edition through Docker. I created a repository for my Markdown source code under the root account, the address of the repo being http://${vm_address}/root/${repo_name}. The GitLab container runs on port 20 of my VM.
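For reference, the container was started along these lines (a sketch, not my exact command; the volume paths are illustrative, and the container's HTTP port is published on port 20 as described):
docker run -d --name gitlab --restart always \
  -p 20:80 -p 443:443 -p 2222:22 \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/logs:/var/log/gitlab \
  -v /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest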
Read the Docs
As RTD does not officially support on-premises deployment, I pulled an unofficial image from Docker Hub; see vassilvk/readthedocs. This RTD container runs on port 8000 of my VM. I use the username "admin" to log into RTD.
Procedure I Took to Integrate GitLab and RTD
To import the source code from my GitLab into RTD, I did the following:
On the Project page, click Import a Project.
Click Import Manually on the left panel.
In the Project Details page, fill in the fields as follows:
Project name: ${my_project_name}
Repository URL: ${Clone_With_HTTP_Address} (I copied the URL from the "Clone with HTTP" field in the dropdown under the Clone button in GitLab)
Repository Type: Git
In the Advanced Project Options, I set Documentation Type to Sphinx HTML.
Click Finish.
Result
The build fails with error code 1.
Question
What did I do wrong in the RTD project settings?
Is something wrong with my RTD or GitLab container settings?
Do I still need to install Sphinx on the VM?

As we have a non-disclosure agreement for any publication, I have to host the documentation
This does not follow at all. You must be looking at the wrong ReadTheDocs. There are two sites:
ReadTheDocs.org - that one is the free, publicly visible hosting.
ReadTheDocs.com - that's the one you want, it hosts private repositories for businesses exactly like yours.
Unless you're in a well-managed, secure IT environment, running random Docker images on your own VM will almost certainly lead to inadvertent disclosure. Are you in the hosting business? No. Don't play at being a hosting business when all you want is to write some private documentation. There are products for that.

Related

Deploying Strapi to Azure

I want to deploy Strapi to my Azure. Has anyone here had experience doing this and getting it completely up and running? Somehow I couldn't find any detailed instructions for doing that on Azure. I'm looking for something that is as easy as deploying to Heroku - but it's fine if it requires more steps, as long as I can make it work completely.
These are the complete instructions, which I have also included in the README of the repository.
Strapi-Azure 3.1.3
This is a working repository of Strapi 3.1.3 which you can deploy as-is as an Azure Web App. This requires a paid subscription, minimum of a B1 plan (estimated 32 USD), so we can enable the 64-bit platform configuration and the Always On feature.
To get started, let us first create and configure our Azure Web App:
Create an instance:
Name: The name of your choice that is still available
Publish: Code
Runtime stack: Node 12 LTS
Operating System: Windows
Region: select one near you
SKU and Size: select B1 (minimum)
Configure the Environment variables:
Add the following key-value pairs:
For HOST, make a ping to your .azurewebsites.net instance and use the IP it returns
Configure the Platform Settings
In the General Settings tab (beside the Application Settings), change the Platform from 32 Bit to 64 Bit
To confirm that you are indeed now in 64-bit mode, go to Console and run node -p "process.arch"
Install yarn:
Go again to Console and run: npm install -g yarn
Deploy a copy of the strapi-azure repo from your GitHub account
In the Deployment Center tab, connect your GitHub account and browse your copy of strapi-azure
Select App Service build service as your build provider
Select repository and branch
Deploy!
Build your Admin UI using Kudu service
Go to Advanced Tools -> Go -> expand the Debug console from the toolbar -> CMD
Inside the wwwroot directory (site/wwwroot/), execute yarn build
See it in action 😊
It should not be any different from installing Strapi on any VM (Azure, AWS, GCP, or even a local VM).
The quick start guide should help you set things up and run the Strapi server: https://strapi.io/documentation/3.x.x/getting-started/quick-start.html
Primarily: install Node.js, npm, and strapi (via npm). Execute strapi new cms --quickstart and you should be good to go (with the default configuration).
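Spelled out as commands, that minimal path is roughly the following sketch (it assumes Node.js and npm are already installed; the exact strapi package tag or version you need may differ):
# install the strapi CLI globally, then scaffold and start a quick-start app
npm install -g strapi
strapi new cms --quickstart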
Assuming you have it within a GIT repository, I may have some useful insights.
When I set mine up, I created an App Service hosted on Windows - for some reason I found the Linux ones very unstable. I then used the Deployment Center to set up the connection between my repository, hosted on Azure DevOps, and my App Service. When it deploys, IISNode will automatically be set up with an appropriate web.config file for getting a NodeJS server up and running.
You may need to ensure you are running in production (assuming this is what you want). You can set this up by going to the App Service - Configuration - Application Settings (tab) - and setting up a new variable called "NODE_ENV" set to "PRODUCTION".
I also found it useful to set "WEBSITE_NODE_DEFAULT_VERSION" and specify the version - in my case it was "10.15.2".
For the database I used a CosmosDB with the Mongo API, hosted on Azure, and it worked OK - the main problem I found was that I was getting charged a lot for its usage; I'm not quite sure at this stage how to get around that.
One thing that did catch me out was setting the "port" variable within config/environments/production/server.json - I was hard-coding a port, which doesn't work within IISNode - it needs to be set to something like:
{
  "host": "your.domain.com",
  "port": "${process.env.PORT || 1280}"
}
You will also need to set up your database settings in the config/environments/production/database.json file.
Happy to work through any further points, let me know

How could I prohibit anonymous access to my NodeRed UI Dashboard on IBM Cloud(Bluemix)?

I'm working with Node-RED, on an IBM Cloud (Bluemix) boilerplate. I know that there is a way, by changing the values of the environment variables NODE_RED_USERNAME and NODE_RED_PASSWORD, to change the username and password for the flow editor. But what about the UI dashboard, i.e. when using dashboard nodes? I want to forbid access to
https://noderedservicename.mybluemix.net/ui/
I know that in the code I can do what I want by changing the httpNodeAuth variable in the settings.js file. What is the way to do that on IBM Cloud?
Thank you in advance!
You need to add httpNodeAuth (not httpAdminAuth, as that one controls access to the Node-RED editor and can be done with the environment variables discussed in the other answer) to the app/bluemix-settings.js file.
Something like this:
...
httpStatic: path.join(__dirname,"public"),
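// httpNodeAuth protects the routes served by HTTP nodes, which includes the
// dashboard under /ui; "pass" is a bcrypt hash, not the plain-text password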
httpNodeAuth: {user:"user",pass:"$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN."},
functionGlobalContext: { },
...
Details of how to generate the pass can be found here
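If that link is unavailable: one way the Node-RED documentation suggests generating the pass value is a one-liner against the bcryptjs module that Node-RED itself depends on (run it in the app directory so the module resolves; replace the final argument with your password):
node -e "console.log(require('bcryptjs').hashSync(process.argv[1], 8));" your-password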
There are a number of ways you can edit the file. Some include linking the Node-RED deployment to a git repository, or downloading the whole app, editing the file, and pushing it back to Bluemix (when you first deploy Node-RED from the starter pack, it gives you instructions on how to download the source to make changes and then push them back; you can get to these instructions by clicking on the "Getting started" link in your Node-RED Bluemix console page).
But the quickest/simplest/dirtiest way is probably to just SSH into the instance and change the file with something like vi. Details on how to ssh to an app instance can be found here. But the following should work:
cf ssh [app name]
Once you have edited the file you will need to tell Bluemix to restart the app. You can do this from the web console or with the cf command-line tool.
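For example, with the cf CLI (same [app name] placeholder as above):
cf restart [app name]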
(The changes made by this method will not survive if the app is restaged, or if Bluemix decides to move your instance to another machine internally, because it will rebuild the app from the pushed sources. The permanent solution is to download the source, edit it, and push it back.)
This link will help you but it's written in Japanese.
http://dotnsf.blog.jp/archives/1030376575.html
Summary
You can define the "user-defined" environment variables through the IBM Cloud dashboard.
These include the variables that protect the Node-RED GUI.
You have to set them as follows:
NODE_RED_USERNAME : username
NODE_RED_PASSWORD : password
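If you prefer the command line to the dashboard, the same user-defined variables can be set with the cf CLI and applied with a restage (app name is a placeholder):
cf set-env [app name] NODE_RED_USERNAME username
cf set-env [app name] NODE_RED_PASSWORD password
cf restage [app name]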

How do you permanently delete a folder inside an Azure VSTS project?

I have folders within both Azure's VSTS (a TFVC repository) and TFS that I need to permanently delete. On TFS this was quite easily done using the tf destroy $/<MyProject>/<Folder_To_Delete> command in a command window on the server on which TFS is running. The page on learn.microsoft.com that describes the "tf destroy" command (https://learn.microsoft.com/en-us/vsts/tfvc/destroy-command-team-foundation-version-control) shows that this command is also available for VSTS; however, I have been unable to get the command to work from a Developer Command Prompt window running on my local box.
tf destroy $/<MyProject>/<Folder_To_Delete> /collection:https://<MyTeamService>.visualstudio.com/<MyProject> /login:<userid>,<password>
The error I receive back is
TF31002: Unable to connect to this Team Foundation Server:
Team Foundation Server Url:
Possible reasons for failure include:
- The name, port number, or protocol for the Team Foundation Server is incorrect.
- The Team Foundation Server is offline.
- The password has expired or is incorrect.
Technical information (for administrator):
The remote server returned an error: (404) Not Found.
However, if I put the URL into a browser, my VSTS instance shows up. So the 404 looks to me like Azure is blocking outside efforts to permanently delete on VSTS. I have logged onto the Azure portal expecting to find something like the Advanced Tools option you would find on Web App Services, but Team Services / Team Projects has nothing like this. Can someone explain to me how to properly execute the "tf destroy" command against Azure Team Services? Or does Azure's VSTS just lack support for permanently deleting individual folders and files?
The tf destroy command requires a collection URL. In VSTS, there is no concept of a collection, only team projects. All team projects are created under the Default Collection.
To use the tf destroy command with VSTS, your collection URL must be in the following format:
https://accountname.visualstudio.com/DefaultCollection
By putting a collection URL of https://accountname.visualstudio.com/Project Name, the command was looking for a collection called Project Name in the VSTS account, which does not exist.
This command works:
Open a Developer Command Prompt in administrator mode and issue the following command and supply your credentials.
tf destroy $/Project Name/Folder To Delete /collection:https://accountname.visualstudio.com/DefaultCollection
Permanently destroying an item/folder in VSTS also uses the Destroy Command (Team Foundation Version Control):
tf destroy [/keephistory] <itemspec1>[;<versionspec>][<itemspec2>...<itemspecN>]
[/stopat:<versionspec>] [/preview] [/startcleanup] [/noprompt] [/silent] [/login:username,[password]] [/collection:TeamProjectCollectionUrl]
/collection specifies the team project collection. However, in VSTS you only have one collection, and there is no collection name in the URL as there is with TFS. Multiple collections are an open request on UserVoice:
Let us create multiple collections on Visual Studio Team Services
https://developercommunity.visualstudio.com/idea/365419/let-us-create-multiple-collections-on-visual-studi.html
So, when you specify /collection for VSTS on the tf command line, you just need to enter https://xxx.visualstudio.com
Also pay attention to /login:<userid>,<password> - you are using the wrong format; it should be /login:userid,[password]. Add /preview to test first (when tf destroy runs in preview mode, the files are not actually destroyed).
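Putting those pieces together, a preview run looks like this (account, item path, and credentials are placeholders; the item path matches the prompt shown below):
tf destroy $/Scrum/NugetTest/Capture1025.PNG /collection:https://xxx.visualstudio.com /login:userid,password /preview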
When you remove the /preview and perform the real destroy, you will also get a confirmation prompt:
Do you want to destroy $/Scrum/NugetTest/Capture1025.PNG and all of
its children? (Yes/No/All)
Select Yes to delete the folder and the files in it, or All to delete it with all of its children.
https://<MyTeamService>.visualstudio.com/<MyProject> is the wrong URL. It should just be https://<MyTeamService>.visualstudio.com/.
The parameter is asking for a project collection, not a team project within the collection.
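With that correction, the command from the question becomes:
tf destroy $/<MyProject>/<Folder_To_Delete> /collection:https://<MyTeamService>.visualstudio.com/ /login:<userid>,<password>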

How to do remote staging in liferay 6.1.1 GA2?

I have a site where local staging worked fine, but when I tried to connect it through a remote server it did not work, giving the error that the connection can't be established. Has anyone tried this?
This is the configuration with the error message:
This blog post (disclaimer: my own) explains how to do it with https - you can omit long parts of it if you don't want encryption. It also covers 6.0, but the general principle is still the same.
You want to pay special attention to the paragraph Allow access to webservices in that article and check if your publishing server (the "stage") has access to the live server. In general, if this is not on localhost, it requires configuration as mentioned in that article.
As you indicate that you can't connect to your production server from your staging server, please check by opening a browser on the staging server and connecting to the production server - go to http://production-server-name:8080/api/axis and validate that you can connect (note: you get the authoritative result for this test only when not accessing localhost as the production system, so do run the browser on the staging system!) - with this test you can rule out the first possibility, that your remote system is being disallowed. Once this succeeds, you'll need credentials for the production server to be entered on the staging server - the account that you use needs permission to change all the data it needs to change when publishing content (and pages etc.)
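If the staging server has no browser available, an equivalent reachability check from its shell is (server name is a placeholder):
curl -I http://production-server-name:8080/api/axis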
The error message you give in the added screenshot can appear when the current user on staging does not have access to the production system (with the credentials used) - verify that you have the same user account that you are using on your staging system (the one that gets the error message from the screenshot) in your production system. Synchronize the passwords of the two.
In your comment you give the information that you're using different versions for the staging and the production environment - I don't expect that to work, so this might be the root cause. Test with both systems at the same version.
A couple important points to keep in mind with remote publishing:
If you're not on LDAP (or you have different LDAPs for different environments), you should validate that your user account is exactly the same in both source and target environments. So, if you're on the QA site and you want to remote publish to production, your screen name, email address, and password should all be the same.
Email address is uber important. Depending on which distribution (version) of Liferay you are on, the remote publish code uses your email address irrespective of whether or not you have portal-ext.properties configured to use the screen name.
You should have the Administrator role on both sides. It may not be required in every scenario, but giving that role to users who do remote publishing has saved me time and effort debugging why someone's remote publish didn't work. Debugging this process takes a very long time.
If remote publishing is causing you problems (and it probably is, or you wouldn't be here), try doing LAR file exports/imports. This is important since remote publish failures are not exactly helpful in telling you what failed; they just tell you that they failed. Surprisingly, there are often problems in the export process, and you can sometimes pinpoint some bad documents or a funky development thing you did using Global scope and portlet preferences that caused your remote publish to fail. I generally use this order in this situation:
a) documents and media (exclude thumbnails or your LAR file will likely double in size; also exclude ranks if you're not using them) from the wrench icon in the control panel
b) web content from the wrench icon in the control panel
c) public pages (include data > web content display, but remove all the other data checkboxes), include permissions, include categories
d) private pages (same options as public pages)
If you already have the Administrator role and it's saying you don't have permission to remote publish to the remote site, set up your user on the target environment with the "Site Administrator" or "Site Owner" role.
A little late for first and foremost, but anytime you have something that's not working (remote publishing or otherwise), check the logs before you do anything else. The Liferay code base doesn't include a lot of helpful logging, but you do occasionally get a nugget of information that helps you piece together enough to do root cause analysis.
Cheers! HTH

Setting Up Kudu On IIS

A couple of days ago, Microsoft released the engine they're using to do git deployments to Azure. I've had a task on my TODO list for a while to get that kind of functionality set up on my DEV IIS server, so I'm interested in trying out Kudu for that purpose.
The "Getting Started" document shows how to run the web front-end, but everything in there uses "http://localhost:PORTNUMBER" type URL's for git repositories, site URL's, etc.
I realize this is probably getting too far ahead of them, but I'm wondering if anyone has pointers for how to set it up using real domains on "regular" IIS instead of all of the localhost bits?
This is an old question, so I'm giving an updated answer with more current info, since I just worked through setting up Kudu on an internal deployment server. The currently selected answer only applies if you are running Kudu directly from within a development environment.
If you are deploying to a "production" type environment and don't want to install Visual Studio on the target server, there is a good guide on the project website on github.
https://github.com/projectkudu/kudu/wiki/Deploying-to-a-server
On the target server, you will need to install:
MSBuild ( http://go.microsoft.com/fwlink/?LinkId=309745 ) - comes with Visual Studio
NodeJS ( http://nodejs.org )
Git ( http://git-scm.com/downloads )
Back on your development machine, clone the git repo and build using the "build.cmd" file, following instructions in the above link.
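That step is roughly the following (the repo URL is the project's GitHub address; run from a Visual Studio developer command prompt so MSBuild is on the PATH):
git clone https://github.com/projectkudu/kudu.git
cd kudu
build.cmd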
In running build.cmd I got several test failures which blocked the build from producing artifacts. These were all related to Mercurial, which we don't use. Installing a Mercurial client didn't make them magically go away, so I disabled the tests rather than sink a bunch of time into debugging my environment.
Your build output will indicate the failures. I disabled them by commenting out the [Fact] attribute.
These are the tests I disabled:
tests/Kudu.Core.Test/HgRepositoryFacts.cs (all tests)
Once you have a successful build that has created all the items in the artifacts folder, you can move on to deploying the Kudu website and web service code. The instructions below set up a distinct web application instance, rather than dumping everything in c:\inetpub\wwwroot, which is how the project's instructions read.
Copy "artifacts\Release\KuduWeb" to the target area on the server where your website will run from. I run my kudu install with a separate host header, but you could as easily use a separate port or run as the root website. This directory will be the root for your web application.
Create an empty "App_Data" folder immediately under the KuduWeb folder.
Copy "artifacts\Release\SiteExtensions\Kudu" to the same level as the folder in step 1 and rename to "Kudu.Services.Web". This location is set as a relative path in the KuduWeb web.config file - setting serviceSitePath.
Open IIS Admin and create a website pointing to the "KuduWeb" folder from step 1.
Configure the app pool from step 4 to run as "LocalSystem". This is required to manage IIS Sites.
Create a new folder "apps" at the same level as KuduWeb. This is where deployments will be sent. Note: this location is controlled in the KuduWeb web.config file - setting "sitesPath"
Change filesystem permissions to grant "Users" full access to the "apps" folder created in the above step.
On starting my Kudu website, I got the following error.
Parser Error Message: Could not load file or assembly 'System.Web.Mvc, Version=5.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
For some reason it didn't copy the appropriate MVC version into the deployment artifacts.
If you hit this error, the MVC 5 file can be obtained via NuGet. I found that my source code was built against 5.1.0, so this is the appropriate link:
https://www.nuget.org/packages/Microsoft.AspNet.Mvc/5.1.0
In order to extract the dll, I set up a new dummy project and used NuGet to pull down the dll via the package manager console.
Install-Package Microsoft.AspNet.Mvc -Version 5.1.0
Once you get the binary, copy it from the package directory ( .\packages\Microsoft.AspNet.Mvc.5.1.0\lib\net45\System.Web.Mvc.dll ) to the website bin directory on the target machine.
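On the target machine that copy might look like this (the destination depends on where you put KuduWeb in step 1; the path here is illustrative):
copy System.Web.Mvc.dll C:\sites\KuduWeb\bin\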
At this point you are up and running. Use the web interface to create your application. It will create a subfolder under the "apps" directory with a tree that should be self-explanatory. It will also create two new websites for your application:
kudu_{your-app-name}
kudu_service_{your-app-name}
In a production situation, you should create an additional website, running on an appropriate port/host header, that points to: .\apps\{your-app-name}\site\wwwroot
Now you can add a git remote for your deployment. Go to your source location in a git console (e.g. Git Bash) and add the remote as identified by Kudu. Note: you may need to change localhost in the URL to the appropriate server name.
git remote add deploy http://<your-server>:52711/your-app-name.git
Push your code to the new "deploy" remote and see what happens. You should see all the normal push messages, plus the build output.
git push deploy master
My initial push failed to build and deploy due to "node" not being recognized. It was in the PATH, so a server reset convinced the PATH environment variable to refresh. You may find additional errors to work through. For instance, I had an issue with an MSBuild import causing a hiccup.
error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Visual Studio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found.
YMMV, but these are all solvable problems now. Good continuous deploying!
The project automatically sets up two websites in IIS for each application you add using the web front end. Kudu doesn't automatically map the bindings for them, but it's relatively easy to open IIS and find the two sites named "kudu_appname" and "kudu_appname_service". The service website is the one that you point Git to, and the other one is the site itself. Just add public bindings to them by right-clicking and choosing "Edit Bindings". You can then add public hostnames to them.
This is the easy part. The hard part that I'm still working on is getting authentication working so any random Joe isn't able to push to my kudu repository!
