MarkLogic REST service returns 405 Method Not Allowed from some servers - Linux

MarkLogic 8, Linux (CentOS & RHEL 6). I've set up the same REST service, user, and roles on each of three MarkLogic instances (2x CentOS, 1x RHEL 6), and I've separately checked that the settings for these entries are identical on each host. The CentOS boxes are VMs (VirtualBox on my local machine), where one is the original and the other a clone of it. The RHEL 6 machine is a networked development server. I'm using curl, via the Windows 7 command line, to PUT a single test file into the Documents database. The curl command I use is:
curl --basic --user <user>:<pwd> --upload-file "<file path>" -H "Content-type: text/plain" -X PUT "http://<host name>:<port number>/v1/documents?database=<database name>&uri=<test uri>"
I get a "405 Method not allowed", as a simple XML document [source = MarkLogic?] from ML on the RH6 and the cloned Centos machine but not from my original Centos VM where ML shows the file has loaded correctly. MarkLogic error logs show no errors on any of the hosts.
Any ideas on where I should start looking to resolve this issue?

I had missed the fact that I hadn't entered the default error handler and URL rewriter settings of "/MarkLogic/rest-api/error-handler.xqy" and "/MarkLogic/rest-api/rewriter.xml" on the app server on the two MarkLogic hosts that were reporting the issue.
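For reference, the same two settings can also be applied through MarkLogic's Management API (default port 8002). This is a hedged sketch; the admin credentials, host, and app server name are placeholders for your own values:
curl --anyauth --user <admin-user>:<admin-pwd> -X PUT -H "Content-Type: application/json" -d "{\"url-rewriter\":\"/MarkLogic/rest-api/rewriter.xml\",\"error-handler\":\"/MarkLogic/rest-api/error-handler.xqy\"}" "http://<host>:8002/manage/v2/servers/<appserver-name>/properties?group-id=Default"
After a change like this, re-running the original PUT against the previously failing hosts should succeed instead of returning 405.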

Related

How to set up custom hostnames and ports for servers (e.g. node.js) running in WSL 2

(I've provided a simple working solution in response)
I recently moved from macOS to WSL 2. I have two node servers running within WSL 2 (Ubuntu distro). Each must be accessible through a custom hostname, for development vs production purposes. I've had difficulty accessing the node servers via custom hostnames (i.e. set in some /etc/hosts file), especially given WSL 2's dynamic IP, which changes on each WSL/PC boot. How does one go about setting custom hostnames in WSL 2?
Scenario:
Each node.js app server (again running within WSL 2) must be accessed from the browser with the following urls/custom hostnames:
www.app1.com:3010
www.app2.com:3020
After searching around I have found the following relatively simple process works. I thought I'd share and save some time and headache for those new to WSL 2. Note, although I'm using node as the server stack, this process should more or less be the same for other app/web server stacks.
Note the following SE post is the basis of the solution. It's also worthwhile to examine MSFT's reference on WSL vs WSL 2. Also note, I haven't provided deep rationale on why these steps are required, why we might need custom hostnames, IPv6 options in /etc/hosts, the meaning of 127.0.0.1, loopback addresses, WSL 2 and distro management, etc. These subjects are beyond the scope of this post.
Simple scenario:
nodeApp1: node application server with custom hostname: 'www.app1.com' on port 3010 (or whatever)
nodeApp2: node application server with custom hostname: 'www.app2.com' on port 3020 (or whatever)
Each node.js app server (again running within WSL 2) can be accessed from the browser with the following URLs:
www.app1.com:3010
www.app2.com:3020
Two key items:
The correct hosts file to modify is on the Windows side (not in the WSL distro), at C:\Windows\System32\drivers\etc\hosts (yes, in Windows folders). This is a 'hot' update, so no WSL 2 reboot is needed. The content for this scenario is:
127.0.0.1 localhost
127.0.0.1 www.app1.com
127.0.0.1 www.app2.com
255.255.255.255 broadcasthost
::1 localhost www.app1.com www.app2.com
Add C:\Users\<you>\.wslconfig with the following content (yes, in Windows folders):
[wsl2]
localhostForwarding=true
Note: there's a reference to this in WSL 2 Ubuntu distro's /etc/hosts.
Also note, this requires a WSL shutdown and restart; shutting down your terminal is insufficient, but a full machine reboot is not required. Simply run:
wsl --shutdown (in Powershell) or
wsl.exe --shutdown (within Ubuntu)
Then restart the Windows Terminal app (or any WSL terminal) to access the updated WSL 2 environment. The apps with custom URLs/hostnames will now work in the browser permanently, and WSL 2's dynamic IP is circumvented.
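As a quick sanity check (a hedged sketch; the hostnames and ports come from the scenario above, and app1.js is an assumed entry-point filename):
# in PowerShell or cmd, after the WSL restart above
ping www.app1.com          # expect replies from 127.0.0.1 (or ::1)
# inside WSL 2 (Ubuntu), start the app on its port
node app1.js               # assumed entry point listening on 3010
Then http://www.app1.com:3010 should load in a Windows browser.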

IIS deployment issue

I am running IIS on a standalone Windows Server 2012 machine and getting a mysterious issue:
I have a web API (developed with .NET 4.6) which implements the SCEP protocol over HTTP GET, and the app folder is C:\inetpub\wwwroot\app\mywebapi.
If I create a virtual directory under Default Web Site, then it works (for instance, http://localhost/app/mywebapi).
Test procedure:
using a browser to visit: http://localhost/app/mywebapi?operation=GetCACert
using sscep (a command-line test tool): sscep getca -u http://localhost/app/mywebapi -c ca.crt
Results: both cases work.
If I create a new website, then it does not work in some cases (for instance: http://mywebapi). (The hosts file has been edited, so mywebapi already resolves as a hostname.)
using a browser to visit: http://mywebapi?operation=GetCACert
using sscep (a command-line test tool): sscep getca -u http://mywebapi -c ca.crt
Results: the browser request works (OK), but sscep does not - IIS returns 404.
Does anybody know about this issue?
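For reference, a hedged sketch of how the standalone site with a host-header binding described above would typically be created (the site name and path come from the question; appcmd is IIS's standard command-line tool):
%windir%\system32\inetsrv\appcmd add site /name:mywebapi /bindings:http/*:80:mywebapi /physicalPath:C:\inetpub\wwwroot\app\mywebapi
If the binding, application pool, or physical path differs from the virtual-directory setup that works, that difference is a good place to start comparing.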

Jenkins Error 128 / Git Error 403: Jenkins can't connect to my Bitbucket repository

OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping one VM from another.
I'm trying to connect my Jenkins app, hosted on a VM, to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my Git repository I get this:
Failed to connect to repository : Command "/usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in a terminal... and there it seems to work. I can also push, clone, pull, etc.
(Screenshot: the same command succeeding in the terminal.)
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if the problem came from permissions; it seems that's not the case.
But I see that there is no output when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, Git in Jenkins interprets it as no answer and returns that error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
That person has the same problem but in a different way; I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates a permissions problem. I would suggest checking these common mistakes first:
a) try https instead of http - my SCM only uses https;
b) check that admin is the right user - the SCM by default uses scmadmin.
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the proxy.
I was thinking that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know if it is linked or not, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem was linked to my proxy settings.
I disabled all my proxy settings in Linux, and then I was able to run from a terminal the command that didn't work in Jenkins.
Logged in as the jenkins user (sudo su jenkins), the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
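For anyone else hitting this, a hedged sketch of the cleanup (the Jenkins home path varies by install; /var/lib/jenkins is an assumption):
sudo cat /var/lib/jenkins/proxy.xml       # inspect the stale proxy settings
sudo rm /var/lib/jenkins/proxy.xml        # remove them (or edit the file instead)
sudo systemctl restart jenkins            # on older init systems: sudo service jenkins restart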
There can be a Git version mismatch.
I would suggest updating Git; maybe it will resolve your issue.

Web2Py on AWS EC2 Linux

I have an instance running Linux at Amazon AWS EC2 after carefully following the instructions provided by Amazon here: Setting Up to Host a Web App on AWS.
I have set up the security groups as described in the documentation provided by Amazon.
The default security group has all traffic, all protocols, on all ports open.
In addition to the above security rule, I have set up SSH on port 22 and then, using CyberDuck (a great FTP app), uploaded the Web2Py source code into a folder named web2py at AWS.
After successfully FTPing the source code into this web2py folder, I SSH'ed into the AWS machine using Terminal (on my Mac locally), with the my-keys-file.pem on hand:
ssh -i my-keys-file.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
(where the xx are the numbers in the Public DNS as they appear on my instance on EC2 page)
Then I checked whether my AWS instance has Python installed; it does.
Thus, I proceeded to run Web2Py:
python2.6 web2py.py
password = pwd
It warns that the GUI is not available since the Tk library is not installed, but Massimo says here (http://comments.gmane.org/gmane.comp.python.web2py/129181) that it's not critical.
Running Web2Py:
If I try:
python web2py.py -a pwd -i 0.0.0.0 -p 80
It says:
there is an error with the Rocket server on that specific port (it is used by another process that is not willing to share...)
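(A hedged aside: ports below 1024 are privileged on Linux, so Rocket typically cannot bind to port 80 as a normal user, which can surface as exactly this error even when no other process holds the port. A sketch of the workaround, reusing the flags above:
sudo python web2py.py -a pwd -i 0.0.0.0 -p 80
Alternatively, stay on a high port such as 8000 and open that port instead.)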
If I try:
python web2py.py -a pwd
it prints nothing (which begs the question: is web2py running?), and when I try to access the web2py server at
http://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/
or
https://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/admin
in both cases it says the page is not available because it takes too long to respond (nothing about a security cause).
If I try:
python web2py.py -a pwd -i 0.0.0.0 -p 8000
again - it prints nothing (is web2py running?)
trying to access the Web2Py server at
http://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/
or
https://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/admin
in both cases it says the page is not available, same as above.
I have tried to use the IP address instead, but it is immediately translated to the Amazon format of ec2-xx-xx-xx-xxx.etc...
I have tried to access web2py by explicitly adding the port (8000) to the address - it still doesn't work, giving no reason except that the page is not available.
My questions:
Is there any DETAILED recipe on how to install AND run Web2Py on AWS EC2?
Is the web2py server running? How can I know if it is running? If it is not - what am I doing incorrectly?
If the web2py server is running, how can I access it?
Any help would be much appreciated.
Thanks
I have deployed my Web2py to an EC2 instance running Ubuntu, but I guess you can adapt the same approach to your system.
The simplest way to deploy Web2py is to follow the 'One step production deployment' script introduced in the official Web2py book.
wget http://web2py.googlecode.com/hg/scripts/setup-web2py-ubuntu.sh
chmod +x setup-web2py-ubuntu.sh
sudo ./setup-web2py-ubuntu.sh
Running this will install and configure everything you need.
When finished, simply type your IP or domain name into a web browser and you will see the default web2py website.
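If it helps, a hedged sketch of running web2py in the background and answering "is web2py running?" locally (the flags mirror the question; the log filename is my own):
nohup python web2py.py -a pwd -i 0.0.0.0 -p 8000 > web2py.log 2>&1 &
curl -I http://localhost:8000/            # any HTTP status line back means the server is up
Remote access additionally needs port 8000 open in the EC2 security group, and the /admin application normally refuses remote connections over plain HTTP.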

How to send a file to SharePoint from Linux, creating non-existent directories

I have a problem sending a file from Linux to SharePoint. Everything is fine if I am uploading to an existing directory; I use this method:
curl --ntlm --user username:password --upload-file myfile.xls https://sharepointserver.com/sites/mysite/myfile.xls
Unfortunately, a problem arises when I point the target to a non-existent directory, like:
curl --ntlm --user username:password --upload-file myfile.xls https://sharepointserver.com/sites/mysite/nonexist/myfile.xls
I would like it to create all the necessary directories on the path. I've tried the "--create-dirs" curl option, but it doesn't work.
Any ideas how to achieve this? It doesn't have to be curl, actually; I can use a different method available on Linux.
As the name (Client URL) suggests, curl will not let you create new directories on a remote server over http/https while uploading files.
For downloads over http/https, the --create-dirs option applies only to the local machine, creating the local directories needed to store the downloaded content.
However, when using ftp/sftp to a server, you can create new directories on the remote server (see curl's --ftp-create-dirs option).
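As one possible workaround (a hedged sketch, not verified against your server): SharePoint document libraries often expose WebDAV, so the missing folder may be creatable with an MKCOL request before the upload:
curl --ntlm --user username:password -X MKCOL "https://sharepointserver.com/sites/mysite/nonexist/"
curl --ntlm --user username:password --upload-file myfile.xls "https://sharepointserver.com/sites/mysite/nonexist/myfile.xls"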
