Haskell stack connection timeout

I have installed Stack on Ubuntu under WSL2 on Windows 10. The installation completed successfully, but when I test Stack with
stack path --local-bin
I get the following error message:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change the snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ConnectionTimeout
I have seen some other posts about issues like this one, but none that were resolved, and they are older. Also, I am not behind a proxy; this is my personal computer, and I turned the firewall completely off. That said, when I attempt this over a VPN connection I get a different error. Could it be an SSL/HTTPS issue? Since WSL2 technically has a different IP address from Windows, perhaps the connection is being blocked on the Amazon side.
For the record, when attempting the command over a VPN, the error I get is:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change the snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(InternalException (HandshakeFailed Error_EOF))
Update
Reverting to WSL 1 "solves" the problem, so the issue is something specific to WSL 2. I replicated the problem with a fresh install of Windows on a separate machine, but haven't found a way around it yet.
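One way to narrow this down from inside WSL2 is to test DNS resolution and the raw HTTPS connection separately (a diagnostic sketch, using the host from the error output above):
nslookup s3.amazonaws.com
curl -v https://s3.amazonaws.com/
If the nslookup fails while the name resolves fine on the Windows side, the problem is WSL2's DNS setup rather than anything on the Amazon side.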

I have WSL2 Ubuntu 20.04 installed on my PC.
I fixed this problem by changing the contents of /etc/resolv.conf:
cd /etc
sudo *your favorite editor* resolv.conf
I added Google's DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
This fixed Stack for me.
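Note that WSL2 regenerates /etc/resolv.conf each time the distribution starts, so an edit like this can be overwritten. To make it stick, you can first disable the auto-generation in /etc/wsl.conf and then restart WSL:
# /etc/wsl.conf: stop WSL2 from regenerating /etc/resolv.conf
[network]
generateResolvConf = false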

Related

Stack fails to download any GHC

I am renting an Ubuntu server. I want to execute a build on it, but Stack is failing to download GHC. I could not find any solution on the internet. I also tried downgrading Stack, but that fails when trying to download GHC too. Can you help me? Do you have a solution or workaround?
stack install
Preparing to install GHC to an isolated location.
This will not interfere with any system-level installation.
Preparing to download ghc-8.10.3 ...
Download expectation failure: HttpExceptionRequest Request {
host = "downloads.haskell.org"
port = 443
secure = True
requestHeaders = [("User-Agent","The Haskell Stack")]
path = "/~ghc/8.10.3/ghc-8.10.3-x86_64-deb9-linux.tar.xz"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ConnectionTimeout
Thank you for your time.
I suspect that this is an issue with the firewall settings on the rented Ubuntu server. The industry standard is to have these very tight by default when the server is first created, so it is very likely that the outbound connection Stack needs to download GHC is being blocked. To give a more detailed answer, it would help to know more about the Ubuntu server you are renting. You might also check whether you can reach https://downloads.haskell.org with something like wget or curl.
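For example (the tarball URL is the one from the error output above):
# Does an HTTPS connection to the download host succeed at all?
curl -v https://downloads.haskell.org/
# Can the exact GHC tarball be reached?
wget --spider https://downloads.haskell.org/~ghc/8.10.3/ghc-8.10.3-x86_64-deb9-linux.tar.xz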

Error whilst trying to create a DigitalOcean droplet via Terraform

Hi, I am trying to run my Terraform script to bring my server up, but I am hitting this very strange issue. Google results have turned up nothing.
digitalocean_droplet.ubuntubox: Creating...
Error: Error creating droplet: Post "https://api.digitalocean.com/v2/droplets": dial tcp: lookup api.digitalocean.com on [::1]:53: read udp [::1]:52870->[::1]:53: read: connection refused
on droplet_backup.tf line 2, in resource "digitalocean_droplet" "ubuntubox":
2: resource "digitalocean_droplet" "ubuntubox" {
This is my droplet_backup.tf file with the droplet block:
resource "digitalocean_droplet" "ubuntubox" {
image = "ubuntu-20-04-x64"
name = "Valheim_Server"
region = "LON1"
#size = "s-4vcpu-8gb"
size = "s-1vcpu-1gb"
private_networking = "true"
ssh_keys = [var.ssh_fingerprint]
}
These errors suggest that your host is unable to look up DigitalOcean's API endpoint (api.digitalocean.com).
Port 53 is DNS, and the [::1]:53: read udp [::1]:52870->[::1]:53 in your error shows the lookup is being sent to the local IPv6 loopback address and refused, which suggests this is where the issue is arising.
Can you dig api.digitalocean.com A, or nslookup api.digitalocean.com, or perhaps (although this is ICMP, not TCP) ping api.digitalocean.com?
If I dig the host:
;; ANSWER SECTION:
api.digitalocean.com. 169 IN A 104.16.182.15
api.digitalocean.com. 169 IN A 104.16.181.15
And, using a DNS lookup service (e.g. Google's), these values are corroborated.
Symlinking /run/systemd/resolve/resolv.conf to /etc/resolv.conf fixed the issue.
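For reference, the symlink can be created like this (assuming systemd-resolved is managing DNS on the machine):
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf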

terraform-provider-vsphere winrm config reset upon clone customization

Environment
vSphere 6
VM OS = Win Server 2016
terraform version = 0.11.7
terraform-provider-vsphere version = 1.4.1
Issue / Question
I've noticed that using the customization block will reset the winrm config I had preconfigured on the template.
I've attempted to work around this by configuring winrm on the fly with run_once_command_list, but that seems to operate as fire-and-forget: the provisioner is triggered before the command list has finished executing.
Any ideas?
Specific details can be found in the terraform-provider-vsphere GitHub issue.
For Windows 10, you can install the built-in OpenSSH server and use an SSH connection to transfer the file:
provisioner "file" {
source = "BuildAgent1/buildAgent.properties"
destination = "f:\\BuildAgent\\conf\\buildAgent.properties"
connection {
type = "ssh"
user = "user"
password = "password"
timeout = "30m"
}
}

Cannot configure OpenShift 3 with Tornado server

I'm trying to migrate my Tornado app from OpenShift 2 to OpenShift 3, and I don't know how to actually set up the route, service, etc.
First I create a simple Python 3.5 application on RHEL 7. In the advanced options I set up the git repo and add the APP_FILE variable. Cloning and the app build finish successfully, and when I execute curl localhost:8080 in the web console terminal, it seems to work.
But the service's root link returns this message:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
I haven't actually changed anything in the route and service configuration; I guess I should set something up there, but I have no idea how.
Here is my wsgi.py:
#!/usr/bin/env python
import importlib.machinery

if __name__ == '__main__':
    print('Executing __main__ ...')
    ip = 'localhost'
    port = 8080
    app = importlib.machinery.SourceFileLoader("application", 'wsgi/application').load_module("application")

    from wsgiref.simple_server import make_server
    httpd = make_server(ip, port, app.application)
    print('Starting server on http://{0}:{1}'.format(ip, port))
    httpd.serve_forever()
And the application:
#!/usr/bin/env python
import os
import sys
import tornado.wsgi
from wsgi.openshift import handlers

if 'OPENSHIFT_REPO_DIR' in os.environ:
    sys.path.append(os.path.join(os.environ['OPENSHIFT_REPO_DIR'], 'wsgi',))
    virtenv = os.environ['OPENSHIFT_PYTHON_DIR'] + '/virtenv/venv'
    os.environ['PYTHON_EGG_CACHE'] = os.path.join(virtenv, 'lib/python3.3/site-packages')
    virtualenv = os.path.join(virtenv, 'bin/activate_this.py')
    try:
        exec(compile(open(virtualenv).read(), virtualenv, 'exec'), dict(__file__=virtualenv))
    except IOError:
        pass

settings = {
    'cookie_secret': 'TOP_SECRET',
    'static_path': os.path.join(os.getcwd(), 'wsgi/static'),
    'template_path': os.path.join(os.getcwd(), 'wsgi/templates'),
    'xsrf_cookies': False,
    'debug': True,
    'login_url': '/login',
}

application = tornado.wsgi.WSGIApplication(handlers, **settings)
EDIT:
Here is some console oc output:
> oc status
In project photoservice on server https://api.starter-us-west-1.openshift.com:443
http://photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com to pod port 8080-tcp (svc/photoservice)
dc/photoservice deploys istag/photoservice:latest <-
bc/photoservice source builds git#bitbucket.org:ashchuk/photoservice.git#master on openshift/python:3.5
deployment #1 deployed 3 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
> oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
photoservice photoservice-photoservice.a3c1.starter-us-west-1.openshiftapps.com photoservice 8080-tcp None
I just changed ip = 'localhost' to ip = '0.0.0.0' as Graham said, and this worked.
Here is an explanation:
If you use localhost or 127.0.0.1 it will only accept requests from the network loopback device. This can only be connected to by clients running on the same host (container). You need to listen on all network interfaces, indicated by 0.0.0.0 to be able to accept requests from outside of the host (container). If you don't do that, OpenShift cannot connect to your application to proxy requests to it.
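As a minimal standalone illustration of the difference (a plain wsgiref sketch, not the app from the question):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']

# '0.0.0.0' listens on all interfaces, so the platform's router can reach the
# server from outside the container; 'localhost' would accept connections only
# from inside the container itself.
httpd = make_server('0.0.0.0', 8080, app)
httpd.serve_forever()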

Getting a WCF service (both host/client) to work on https on Linux with Mono

I have a small test console application that serves as the WCF host and another console application that serves as the client.
The client can reach the host via http, everything works fine so far.
But when switching to https, I get the following error:
Error: System.Net.WebException: Error: SendFailure (Error writing headers) --->
System.Net.WebException: Error writing headers --->
System.IO.IOException: The authentication or decryption has failed. --->
Mono.Security.Protocol.Tls.TlsException: The authentication or decryption has failed.
...
The steps I have attempted so far to solve the issue:
I have verified that the ca-certificates-mono package is installed
I have imported the CA certs to the machine store (though why do I need this if I work with a self-signed cert?) with
sudo mozroots --import --machine --sync
I created a self-signed cert for testing (as described in the Mono Security FAQ) with
makecert -r -eku 1.3.6.1.5.5.7.3.1 -n "CN=Cert4SSL" -sv cert.pvk cert.cer
I added it to the mono cert store
sudo certmgr -add -c -m Trust cert.cer
I have also tested with other stores (Root, My), and with the user's store instead of the machine's; none worked, and I got the same error on each attempt
I assigned the cert to the port my service uses
httpcfg -add -port 6067 -cert cert.cer -pvk cert.pvk
I added a callback that ignores certificate validation
ServicePointManager.ServerCertificateValidationCallback += (o, certificate, chain, errors) => true;
This did not help either (but it got called, and the cert object looked all right in the debugger).
The client uses this code to call the WebService:
IService svcClient2 = null;
string address2 = "https://localhost:6067/TestService";
BasicHttpBinding httpBinding2 = new BasicHttpBinding();
httpBinding2.TransferMode = TransferMode.Buffered;
httpBinding2.Security.Mode = BasicHttpSecurityMode.Transport;
httpBinding2.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
httpBinding2.MessageEncoding = WSMessageEncoding.Text;
httpBinding2.UseDefaultWebProxy = true;
ChannelFactory<IService> channelFac2 = new ChannelFactory<IService>( httpBinding2, new EndpointAddress( address2 ) );
svcClient2 = channelFac2.CreateChannel();
string res2 = svcClient2.TestHello( "Bob" ); // <----- this is where I get the exception
Any help is appreciated; I feel like I'm running in circles.
A few details about the environment:
I am using Ubuntu 14.04 LTS and Mono 4.0.2; the IDE is MonoDevelop.
Edit: I have now built the very same projects with Visual Studio and C#; there everything works as expected, and the client can connect to the host over both http and https.
If I copy the Mono build over to my Windows machine, I run into the same issue and error message as on Ubuntu.
Could this be a Mono-related issue?
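One thing worth checking: the stack trace goes through Mono.Security.Protocol.Tls, the legacy managed TLS stack, which as far as I know only speaks SSL3 and TLS 1.0, so the handshake will fail against an endpoint that insists on a newer protocol. You can probe which versions the endpoint accepts with openssl s_client (host and port taken from the client code above):
# Succeeds only if the endpoint still accepts TLS 1.0
openssl s_client -connect localhost:6067 -tls1
# Compare with TLS 1.2
openssl s_client -connect localhost:6067 -tls1_2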
