Stack fails to download any GHC - haskell

I am renting an Ubuntu server. I want to execute a build on it, but Stack is failing to download GHC. I could not find any solution on the internet. I also tried downgrading Stack, but that also fails when trying to download GHC. Can you help me? Do you have a solution or workaround for this?
stack install
Preparing to install GHC to an isolated location.
This will not interfere with any system-level installation.
Preparing to download ghc-8.10.3 ...
Download expectation failure: HttpExceptionRequest Request {
host = "downloads.haskell.org"
port = 443
secure = True
requestHeaders = [("User-Agent","The Haskell Stack")]
path = "/~ghc/8.10.3/ghc-8.10.3-x86_64-deb9-linux.tar.xz"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ConnectionTimeout
Thank you for your time.

I suspect that this is an issue with the firewall settings on the rented Ubuntu server. The industry standard is to lock these down tightly by default when the server is first created, so it is very likely that Stack does not have permission to open the connection it needs to download GHC. To get a more detailed answer, it would help to know more about the Ubuntu server you are renting. You might also check whether you can reach https://downloads.haskell.org with something like wget or curl.
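For example, a quick check from the server, using the host and path from the error output above (the ufw line assumes Ubuntu's default firewall front end; your provider may use something else):
curl -vI https://downloads.haskell.org/~ghc/8.10.3/ghc-8.10.3-x86_64-deb9-linux.tar.xz
wget --spider https://downloads.haskell.org/~ghc/8.10.3/ghc-8.10.3-x86_64-deb9-linux.tar.xz
sudo ufw status verbose
If curl or wget also times out, the block is at the network level rather than in Stack itself.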

Related

Haskell stack connection timeout

I have installed Stack on Ubuntu under WSL2 on Windows 10. The installation completed successfully, but when I test stack with
stack path --local-bin
I get the following error message:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ConnectionTimeout
I have seen some other, older posts about issues like this one, but none that were resolved. Also, I am not behind a proxy; this is my personal computer, and I turned the firewall completely off. That said, when I attempt this over a VPN connection I get a different error. Could it be an SSL/HTTPS issue? Since WSL2 technically has a different IP address from Windows, could the connection be getting blocked on the Amazon side?
For the record, when attempting the command over a VPN, the error I get is:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change the snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(InternalException (HandshakeFailed Error_EOF))
Update
Reverting to WSL-1 "solves" the problem, so the issue is something specific to WSL-2. I replicated the problem with a fresh install of Windows on a separate machine, but haven't found a way around the issue yet.
I have WSL2 Ubuntu 20.04 installed on my PC and fixed this problem by changing the contents of /etc/resolv.conf:
cd /etc
sudo *your favorite editor* resolv.conf
Add the Google DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
This fixed Stack not working for me.
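One caveat worth adding: WSL2 regenerates /etc/resolv.conf on every restart by default, so the edit above may not survive a reboot. A minimal sketch of making it stick, assuming the standard /etc/wsl.conf mechanism:
# /etc/wsl.conf - stop WSL from regenerating resolv.conf on boot
[network]
generateResolvConf = false
Then run wsl --shutdown from Windows, recreate /etc/resolv.conf with the nameserver lines above, and restart the distribution.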

RemoteCertificateNameMismatch when auth using SslStream on linux

I've come across some strange behaviour of the .NET SslStream when running my .NET Core app in a Linux environment.
Here is the code:
TcpClient cl = new TcpClient();
cl.Connect("52.209.63.190", 443);
var ssl = new SslStream(cl.GetStream());
ssl.AuthenticateAsClient("api.bitfinex.com");
Authentication succeeds when running on Windows.
But the same code ends with an auth error (RemoteCertificateNameMismatch) on Linux.
dotnet --info:
.NET Command Line Tools (2.1.4)
Product Information:
Version: 2.1.4
Commit SHA-1 hash: 5e8add2190
Runtime Environment:
OS Name: fedora
OS Version: 27
OS Platform: Linux
RID: linux-x64
Base Path: /usr/share/dotnet/sdk/2.1.4/
Microsoft .NET Core Shared Framework Host
Version : 2.0.5
Build : 17373eb129b3b05aa18ece963f8795d65ef8ea54
Why does the code behave so differently on Linux?
How can I handle it and pass SSL authentication?
Thank you in advance.
So, you can connect to that host with
TcpClient cl = new TcpClient();
cl.Connect("api.bitfinex.com", 443);
var ssl = new SslStream(cl.GetStream());
ssl.AuthenticateAsClient("api.bitfinex.com");
I don't know how you got the IP address of api.bitfinex.com, but it's behind Cloudflare, and you may not need to connect to Bitfinex by its real IP address. When you connect to a raw IP, the certificate the server presents may not match the hostname you pass to AuthenticateAsClient, which is exactly what RemoteCertificateNameMismatch reports.
But if it is required to connect to that specific IP address, you can override the verification callback before you make any connection:
System.Net.ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, errors) => true;
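A hedged side note on the snippet above: ServicePointManager affects HttpWebRequest-based clients, and a raw SslStream may ignore it. For the code in the question, the callback can instead be handed to the SslStream constructor; a minimal sketch (requires System.Net.Sockets and System.Net.Security):
TcpClient cl = new TcpClient();
cl.Connect("52.209.63.190", 443);
// WARNING: returning true accepts every certificate and disables validation entirely
var ssl = new SslStream(cl.GetStream(), false,
    (sender, certificate, chain, errors) => true);
ssl.AuthenticateAsClient("api.bitfinex.com");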
Looks like the answer is simple: a too-old dotnet version.
2.0 seems to have some SSL issues, which were fixed as of 2.1.
When I installed the newest one (2.1.3), my app still didn't work, because I had to uninstall the previous version (2.0.5) manually to be able to use 2.1.3.
Now the app ends with the same result on both Windows and Linux environments.
Many thanks to M. Hovhannisyan. I started trying different Linux versions and figured out what I did wrong.

Error 500 when accessing Gitlab browser interface

I'm running an instance of Gitlab Omnibus CE, version 8.15.2, on CentOS 7.3.1611. Upgrading from the 8.14 release family didn't go quite according to plan; since doing that, I've been unable to access the Gitlab browser interface.
When I try to access the browser interface, I can access the login screen and log in, but after I'm logged in, going to any page results in an Error 500: Whoops, something went wrong on our end.
So I used gitlab-ctl tail to grab some log data for what's happening, and it looks like it's a problem with PostgreSQL's data for one of my projects:
http://pastebin.com/VDMk0eKr
But I'm not sure how I should fix this. Any ideas?
It's a known issue that has been fixed in the newest release, 8.15.3. If you don't want to upgrade GitLab, there is an existing workaround (Edit: as mentioned in the comments, the workaround does not always work, so consider upgrading as the primary fix).
File:
/opt/gitlab/embedded/service/gitlab-rails/app/models/concerns/has_status.rb
Replace
builds = scope.select('count(*)').to_sql
created = scope.created.select('count(*)').to_sql
success = scope.success.select('count(*)').to_sql
pending = scope.pending.select('count(*)').to_sql
running = scope.running.select('count(*)').to_sql
skipped = scope.skipped.select('count(*)').to_sql
canceled = scope.canceled.select('count(*)').to_sql
with
builds = scope.select('count(*)').reorder(nil).to_sql
created = scope.created.select('count(*)').reorder(nil).to_sql
success = scope.success.select('count(*)').reorder(nil).to_sql
pending = scope.pending.select('count(*)').reorder(nil).to_sql
running = scope.running.select('count(*)').reorder(nil).to_sql
skipped = scope.skipped.select('count(*)').reorder(nil).to_sql
canceled = scope.canceled.select('count(*)').reorder(nil).to_sql
And restart GitLab.
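On an Omnibus install, that restart is typically:
sudo gitlab-ctl restart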
I had the same issue and the above didn't work, so I ran the following commands to downgrade.
To check the current version installed:
sudo dpkg -l | grep gitlab-ce
To see which versions were available:
sudo apt-cache madison gitlab-ce | less
and the following to "downgrade", since I was at 9.2.0-rc2.ce.0 as shown by the above command:
sudo apt-get install gitlab-ce=9.2.0-rc1.ce.0

How to configure the pubsub component for a Tigase server on an Ubuntu machine (localhost)?

I have installed the Tigase server on my Linux machine.
After successfully installing Tigase, I found the following in etc/init.properties:
--user-db = mysql
--admins = admin#username
--user-db-uri = jdbc:mysql://localhost/tigasedb?user=tigase&password=tigase12
config-type = --gen-config-def
--virt-hosts = username
--debug = server
Now I want to install the pubsub component into the already installed server.
You need to reinstall using the advanced configuration options instead of the basic installation. The installer will then present options for which components to configure with the server.
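Alternatively, components can be enabled directly in etc/init.properties. A minimal sketch, assuming the stock PubSub component class (verify the class name against your Tigase version's documentation):
--comp-name-1 = pubsub
--comp-class-1 = tigase.pubsub.PubSubComponent
Restart Tigase after saving the file so the component is loaded.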

Database error during MODx setup

Using MAMP/phpMyAdmin on Mac OS X Lion, I'm trying to install MODx on a virtual host. During the process, I encounter this:
I've been looking around and have not yet found anyone with the same problem. The file it claims is missing does in fact exist at that location. I'm attaching my database setup as well in case it helps. I'd greatly appreciate any help with this, as databases/virtual hosts are very much not my forte.
Are your login and password correct? MAMP by default uses root for both. Try that out.
Sounds like a configuration problem.
http://rtfm.modx.com/display/revolution20/Troubleshooting+Installation
If it's not a connection error (that will be covered below), I'd suspect that PDO is not installed or active, or that a cache (eAccelerator, APC, etc.) is interfering.
This is from the MODX site:
1) Make sure you have eAccelerator disabled during install. eAccelerator can cause problems when doing the heavy lifting during the install process.
2) PDO Error Messages
If you are getting PDO-related error messages during install, before proceeding to the specific error messages below, please confirm that your PDO configuration is set up correctly. You can do so by running this code (replace user/password/database/host with your setup):
<?php
/* Connect to a MySQL database using driver invocation */
$dsn = 'mysql:dbname=testdb;host=localhost';
$user = 'dbuser';
$password = 'dbpass';
try {
    $dbh = new PDO($dsn, $user, $password);
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage();
}
?>
If this fails, then your PDO setup is not configured correctly.
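As an extra sanity check (a hedged suggestion, not from the MODX docs), you can also confirm the PDO MySQL driver is present in the PHP build MAMP is using:
php -m | grep -i pdo
Both PDO and pdo_mysql should appear in the output.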
As it turns out, I had forgotten to turn off Web Sharing, so somewhere along the way there was a conflict between the two Apache configurations.
