Standard install of GitLab produces 404 - gitlab

I have installed GitLab v13 on Ubuntu 20.04 using the standard procedure, with external_url set to a relative URL, i.e., http://www.example.com/gitlab.
I get a 404 when I navigate to the URL via the web browser.
I tried the basic troubleshooting found on GitLab's site, but that does not fix the problem.
I am not running any firewall, and port 80 is not blocked.
What else should I try?

First, note that GitLab's minimum CPU/memory requirements are higher than 1 CPU / 2 GB RAM.
Second, make sure all configuration files listed in "Install GitLab under a relative URL" are modified.
The troubleshooting page suggests checking the external_url format in gitlab.rb.
And issue 244 suggests, for testing, to force the IP address in /etc/hosts.
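For reference, a minimal sketch of what those settings look like, assuming the Omnibus package (the hostname and IP below are placeholders, not values from the question):

# /etc/gitlab/gitlab.rb - serve GitLab under a relative URL
external_url 'http://www.example.com/gitlab'

# apply the change
sudo gitlab-ctl reconfigure

# for testing only: pin the hostname to the server's IP in /etc/hosts
echo '192.0.2.10 www.example.com' | sudo tee -a /etc/hosts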

My solution was to shut down the existing Apache web server running on the same machine, at which point I no longer received a 404. However, due to the minimum requirements - I am only running a single-core CPU with 2 GB RAM - I now receive a 502.
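For anyone hitting the same port conflict, this is roughly how it can be confirmed and cleared, assuming Apache runs as the stock apache2 service on Ubuntu:

# see which process is already bound to port 80
sudo ss -tlnp | grep ':80'

# stop (and optionally disable) the conflicting Apache instance
sudo systemctl stop apache2
sudo systemctl disable apache2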

Related

How should I configure OpenProject?

Hello everyone, I would like some help with the installation of OpenProject. I'll say right away that I have tried everything, and the result is always the same: when I try to access it from the browser at http://<ip address>, the error message is always the following:
Service Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
Apache/2.4.41 (Ubuntu) Server at 54.172.171.10 Port 80
I should point out that I performed the installation following the procedure on the official website, here: https://www.openproject.org/docs/installation-and-operations/installation/packaged/#ubuntu-2004
Not satisfied, I also decided to follow some videos on YouTube, where both the installation and the browser access magically succeed, for example: https://www.youtube.com/results?search_query=how+install+and+configure+openproject
I should also state that I have enabled all ports, both inbound and outbound.
I am also wondering: if Apache and PostgreSQL are installed as part of the OpenProject installation process, why am I getting a "Service Unavailable" error from Apache? Finally, I will add that no error occurred during the installation, and the Linux distribution was freshly installed and updated from all the repositories, with all the dependencies necessary for installing OpenProject.
I hope someone can help me solve this.
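No answer was recorded here; purely as a sketch of first diagnostics on an Ubuntu 20.04 packaged install (the service and command names assume the official openproject package with its bundled Apache):

# is Apache up, and is anything listening on port 80?
sudo systemctl status apache2
sudo ss -tlnp | grep ':80'

# a 503 from Apache usually means the OpenProject app server behind it is down;
# re-running the packaged configuration wizard often surfaces the cause
sudo openproject configure

# watch the Apache error log while reloading the page in the browser
sudo tail -f /var/log/apache2/error.log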

Reinstall Sitecore: IIS can't resolve host (ERR_NAME_NOT_RESOLVED)

I decided to reinstall a Sitecore 8 instance via the wizard: I removed the instance and installed a new one with the same name, XYZ.
But after reinstalling it I am getting an error in the browser: ERR_NAME_NOT_RESOLVED.
I checked the IIS bindings, checked the hosts file, reset DNS, restarted the PC, etc., and I still get this error in every browser.
How can I fix it? What is the issue?
I found this solution https://support.microsoft.com/en-us/kb/2823477 but I can't understand how the Sitecore installer could change that setting.
Generally, the process of site resolution goes in the following sequence:
DNS - find the IP address for the hostname (taken from the request's Host header)
Access IIS at that IP (and port, if not the default 80)
IIS checks its bindings against the hostname from the header and serves the corresponding website.
The website being resolved has a (merged) web.config in its root folder, with a <sites> node in which all sites served by the current Sitecore instance are listed. Order does matter! The first successful match (by hostname, port, or default) wins.
The site found in the previous step has a startItem property, which is the Sitecore item served by default.
Please go and carefully check all those steps to see where the chain breaks; a couple of quick checks are sketched below. I have also previously written a blog post with more details on this, which you may find helpful:
http://blog.martinmiles.net/post/how-websites-are-resolved-with-sitecore
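To sanity-check the first steps of that chain on the server itself, something along these lines can help (Windows command prompt; xyz.local is a placeholder for whatever hostname the browser cannot resolve):

:: does the name resolve at all, and to which address?
ipconfig /flushdns
ping xyz.local

:: is there a matching entry in the hosts file?
type C:\Windows\System32\drivers\etc\hosts

:: do the IIS bindings actually include that hostname?
%windir%\system32\inetsrv\appcmd list sites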
Do any sites work with a local name configured in 'hosts'?
You may need to disable the loopback check in your TCP/IP stack. Windows uses this by default on many systems as a countermeasure against man-in-the-middle attacks. A registry change is needed to allow a machine to refer to itself using a name that is not its own hostname. Sorry, but I can't remember the actual key.
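For reference, the value usually cited for this is DisableLoopbackCheck; take the following as a pointer to verify rather than a definitive fix:

:: allow the machine to answer requests addressed to a name other than its own hostname
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
:: a reboot (or at least an IIS restart) is typically needed afterwards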

Configuring Request Tracker 4.0 with Apache2 on Linux Mint 14 Nadia

My coworker installed Linux Mint 14 Nadia onto a VM (using VirtualBox) and followed this tutorial to install Apache, MySQL, and PHP: http://community.linuxmint.com/tutorial/view/486. He then used the readme from http://www.bestpractical.com/rt/docs/4.0/ to install Request Tracker 4.0. Both of those went pretty well, with very few hiccups along the way, from what he told me. Now he's handed the task over to me and I'm attempting to get Request Tracker 4.0 configured correctly with the Apache server. Currently I can visit localhost and get the following message:
It works! This is the default web page for this server. The web server
software is running but no content has been added, yet.
I also configured it so when you visit localhost/rt you SHOULD see the Request Tracker interface, but I'm instead receiving the following page, and this is where I've spent most of my time stumped:
You're almost there! You haven't yet configured your webserver to run
RT. You appear to have installed RT's web interface correctly, but
haven't yet configured your web server to "run" the RT server which
powers the web interface. The next step is to edit your webserver's
configuration file to instruct it to use RT's mod_perl or FastCGI
handler. If you need commercial support, please contact us at
sales@bestpractical.com.
After a few moments it redirects me to bestpractical.com/rt/rt-broken-install.html. (only allowed 2 links apparently?)
I assume I have something misconfigured but am unsure what. I've been googling and fiddling around with this for most of yesterday and today with no luck. It doesn't help that I'm fairly inexperienced with the Linux environment, I'm sure.
If I understand how he installed it, he wants to set it up using FastCGI, so I visited requesttracker.wikia.com/wiki/FastCGI and followed the guides there, but the documentation is quite awful and doesn't always line up with my environment, so I've had to put in a lot of guess-and-check work. I'll provide the code I've added to my config files so you can see where I'm at for now.
000-default in /etc/apache2/sites-enabled:
Alias /rt /opt/rt4/share/html
Alias /NoAuth/images /var/www/rt/share/html/NoAuth/images/
AddHandler fastcgi-script fcgi
ScriptAlias / /var/www/rt/sbin/rt-server.fcgi/
<Directory /opt/rt4/share/html/>
Order allow,deny
Allow from all
</Directory>
RT_SiteConfig.pm in /opt/rt4/etc:
Set($WebPath, '/rt');
Set($WebBaseURL, 'http://localhost');
If any more information is needed, please let me know. Thanks in advance for any help!
The RT docs for web deployment give more detailed info for setting up Apache with fastcgi and for running at '/rt'. I think you'll want to initially try using the suggested Apache configurations and see if that gets you past the setup page.
(Note that those docs are available in the RT install as well in the docs directory.)
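Purely as a sketch in the spirit of those docs (Apache 2.2 syntax with mod_fastcgi and the default /opt/rt4 paths; verify the exact directives against the web_deployment document shipped with your RT version):

# serve RT under /rt through its FastCGI handler
AddDefaultCharset UTF-8
ScriptAlias /rt /opt/rt4/sbin/rt-server.fcgi/

<Location /rt>
    Order allow,deny
    Allow from all
    Options +ExecCGI
    AddHandler fastcgi-script fcgi
</Location>

With $WebPath set to '/rt' in RT_SiteConfig.pm, as in the question, the ScriptAlias prefix and the Location path need to match that value.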

404 not found on client website on Mac only - works on other computers, smartphones, tablets

I created a website based on WordPress with a custom theme setup. The live website renders fine on every computer, smartphone, and tablet I've tried, except for my Mac, which I use as my local development machine. I have tried various browsers on my Mac. The offline development version renders fine.
When I visit the website http://www.redroselimos.com/ I get the following error:
Not Found
The requested URL / was not found on this server.
I suspect it is a DNS issue or perhaps a Mac configuration issue?
Mac OSX 10.7.4
Chrome 22.0.1229.79
MAMP PRO 2.0.5
DNS: 208.67.222.222/208.67.220.220 (openDNS)
If it shows up fine on every other machine, then the site really is up, and it can only be a networking issue on your Mac. Most likely, since it is your dev setup, you have other settings in place. The 404 error means you are at least reaching a web server, so you are getting out to the internet. There are a few things you can do:
You could check your /etc/hosts file to see if you send that domain to another IP - which would be my first suspicion.
Also check the httpd.conf on the server to see if you handle your dev machine's IP differently, or if it's rerouted in .htaccess.
You can also try tail -f /var/log/httpd/access_log or whatever your log file is and then try hitting it again from your Mac and see what comes through.
Try going through a proxy from your Mac to your site; proxify.com and hidemyass.com both work for this. That way you'll see if it's strictly an IP issue.
This should point out exactly what the error is.
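As a concrete starting point, something like this (domain from the question, DNS server from the OpenDNS addresses listed) shows whether the hosts file or DNS is sending the Mac somewhere unexpected:

# any local override for the domain on the Mac?
grep redroselimos /etc/hosts

# what does public DNS say the site's address is?
dig www.redroselimos.com @208.67.222.222 +short

# and on the web server itself, watch the access log while reloading from the Mac
tail -f /var/log/httpd/access_log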
I have a Mac, Lion 10.7.5, Safari 6.0.1.
No error. I surfed to the link you wrote and I can see the site (top left a login form, in the center header a kind of gallery, some other stuff in the center).
I have no special configuration, only defaults. If you didn't change your configuration, then that is not the problem; if you did, try restoring the default settings.
I think the problem is with localhost, because you are trying to reach your own machine from the outside while being inside. I don't see a valid reason to do that: if your Mac is the host of the site, you just need to browse to localhost.
By the way, there is an update to version 10.7.5 for Lion.
What causes the Hosts File 404 Error in Mac OS X
The hosts file is found on the user's computer. It is not found using Spotlight search or the Finder; it is a hidden system file (on Mac OS X it lives at /etc/hosts).
The hosts file is used to map domain hostnames to IP addresses. Example: MyDomain.com shown in the browser is really a number on the internet, 111.11.11.11.
This mapping process is called "resolving" the alphabetic name typed into the browser into the numerical address used by the internet.
Most hosts files do not have many entries, because most names are resolved through the DNS lookup tables provided by the internet provider.
A hosts file can cause the local machine to present a 404 Cannot Find Page error when a domain entry in the list is wrong or out of date.
One day the domain entry correctly pointed to 222.22.22.22, but the site has since moved to 333.33.33.33.
Another machine will reach 333.33.33.33 without any problem, because its own hosts file has either an up-to-date entry or no entry at all.
So, to stop the 404 error, the local machine's hosts file needs to be corrected: the out-of-date entry is either updated or removed completely.
If the entry is completely removed from the hosts file, the machine simply falls back to the internet provider's DNS to "resolve" the name into a numerical internet address.
Instructions for editing the hosts file are here: How to Edit the Hosts File in Mac OS X with Terminal. Knowing how to use Terminal is required.
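In practice, that edit and the follow-up cache flush look roughly like this (the example entry is hypothetical):

# open the hosts file with admin rights and fix or delete the stale line,
# e.g. one that looks like: 222.22.22.22  www.mydomain.com
sudo nano /etc/hosts

# then flush the DNS cache (the command depends on the OS X version)
sudo killall -HUP mDNSResponder      # 10.7 Lion and later
sudo dscacheutil -flushcache         # 10.6 and earlier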

Hgweb "Push" in IIS returning 502 (bad gateway)

I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHG to push, but the result is the same when using the mercurial command line.
I had followed the tutorial here: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx to set up hgweb.
Looks like an old question, but someone is bound to come across it again. I was close to drawing a black circle on a wall and ... anyhow, the issue for us was the way the central repository was created. We had cloned it from Bitbucket while remotely connected to the machine as the local administrator.
The issue was in the [Repository]\.hg folder: you need to set correct permissions on it. For a test, try adding Everyone -> Full Control (see the icacls sketch below). Please make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour: the push appeared to work fine, except for the Bad Gateway at the very end. After the correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users, and then grant those local users the appropriate permissions on the .hg folder.
Hope it helps someone.
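For the temporary Everyone -> Full Control test mentioned above, a sketch using icacls (the repository path is a placeholder):

:: grant Everyone full control on the repository's .hg folder, recursively
:: (for diagnosis only; tighten this to a dedicated account afterwards)
icacls "C:\Repos\MyRepo\.hg" /grant Everyone:(OI)(CI)F /T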
Try using the ISAPI module method instead of the CGI that executes python.exe, as documented here. There's also another related, and possibly duplicate, question here as well.
Take a look at the 'push_ssl' setting in your hgweb.config file.
I was getting the same error (I had mine set to '*'), and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the 'allow_push' setting in order to get past further errors (or take another look at your authorization).
NOTE: At least in my case, having 'push_ssl = false' wasn't enough, as that resulted in further errors (authorization failed).
(Again this is simply a temporary solution until the server can be properly secured.)
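For reference, the relevant section of hgweb.config typically looks something like this; it is a sketch for a trusted internal network, not a recommended production setting:

[web]
# allow pushing over plain HTTP (less secure; acceptable only behind other protections)
push_ssl = false
# who may push; '*' is wide open, so prefer a named list of users
allow_push = *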
This can happen for different reasons; to get more details about the error, run
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may occur because of a proxy server, or simply when the Mercurial web server "forgets" about repositories it needs to serve. If you are using TortoiseHg Workbench, go to the Workbench UI, Repository -> Start Web Server, and make sure that your repository is in the list of served repos.
Try using https instead of http in .hg/hgrc; that is how I resolved this problem for code.google.com.
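Concretely, that means changing the default path in the repository's .hg/hgrc (the URL shown is a placeholder):

[paths]
# was: default = http://example.com/hg/myrepo
default = https://example.com/hg/myrepo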
I had this issue, and the problem ended up being the server running out of disk space.
