Error whilst trying to create DigitalOcean droplet via Terraform - Linux

Hi, so I am trying to run my Terraform script to get my server up, but I get this very strange issue. Google results have come up with nothing.
digitalocean_droplet.ubuntubox: Creating...
Error: Error creating droplet: Post "https://api.digitalocean.com/v2/droplets": dial tcp: lookup api.digitalocean.com on [::1]:53: read udp [::1]:52870->[::1]:53: read: connection refused
on droplet_backup.tf line 2, in resource "digitalocean_droplet" "ubuntubox":
2: resource "digitalocean_droplet" "ubuntubox" {
This is my droplet_backup.tf file with the droplet block:
resource "digitalocean_droplet" "ubuntubox" {
image = "ubuntu-20-04-x64"
name = "Valheim_Server"
region = "LON1"
#size = "s-4vcpu-8gb"
size = "s-1vcpu-1gb"
private_networking = "true"
ssh_keys = [var.ssh_fingerprint]
}

These errors suggest that your host is unable to look up DigitalOcean's API endpoint (api.digitalocean.com).
Port 53 is DNS, and the fact that your error includes [::1]:53: read udp [::1]:52870->[::1]:53 (a resolver on localhost) suggests that this is where the issue is arising.
Can you dig api.digitalocean.com A or nslookup api.digitalocean.com or perhaps (although this is ICMP, not TCP) ping api.digitalocean.com?
If I dig the host:
;; ANSWER SECTION:
api.digitalocean.com. 169 IN A 104.16.182.15
api.digitalocean.com. 169 IN A 104.16.181.15
And, using a DNS lookup service (e.g. Google's), these values are corroborated.

Symlinking /run/systemd/resolve/resolv.conf to /etc/resolv.conf fixed the issue.
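For reference, a minimal sketch of that fix on a host running systemd-resolved (the paths are the standard systemd ones; adjust if your distribution differs):
sudo mv /etc/resolv.conf /etc/resolv.conf.bak        # keep a backup of the broken file
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
cat /etc/resolv.conf                                 # should now list real upstream nameservers
dig api.digitalocean.com A                           # verify the lookup succeeds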

Related

Terraform vCloud provider is crashing when using terraform plan

I am trying to automate the deployment of VMs in vCloud using Terraform.
The server that I am using doesn't have an internet connection, so I had to install Terraform and the VCD provider offline.
terraform init worked, but terraform plan crashes...
Terraform version: 1.0.11
VCD provider version: 3.2.0 (I am using this version because we have vCloud Director 9.7).
This is a test script, to see if Terraform works:
terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.2.0"
    }
  }
}

provider "vcd" {
  user                 = "test"
  password             = "test"
  url                  = "https://test/api"
  auth_type            = "integrated"
  vdc                  = "Org1VDC"
  org                  = "System"
  max_retry_timeout    = "60"
  allow_unverified_ssl = "true"
}

resource "vcd_org_user" "my-org-admin" {
  org         = "my-org"
  name        = "my-org-admin"
  description = "a new org admin"
  role        = "Organization Administrator"
  password    = "change-me"
}
When I run terraform plan I get the following error:
Error: Plugin did not respond
...
The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ConfigureProvider call. The plugin logs may contain more details
Stack trace from the terraform-provider-vcd_v3.2.0 plugin:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xaf3b75]
...
Error: The terraform-provider-vcd_v3.2.0 plugin crashed!
In the logs I can see a lot of DEBUG messages where the provider is trying to connect to GitHub:
provider.terraform-provider-vcd_v3.2.0: github.com/vmware/go-vcloud-director/v2/govcd.(*VCDClient).Authenticate(...)
And I only saw 2 ERROR messages:
plugin.(*GRPCProvider).ConfigureProvider: error="rpc error: code = Unavailable desc = transport is closing"
Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory
This is the first time I am configuring Terraform offline and using the VCD provider.
Did I miss something?
I have found the issue.
In the url I was using the IP address of the vCloud API, and for some reason Terraform didn't like that, which was causing the crash. After changing it to the FQDN, Terraform started working again.
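As an illustration (the IP and hostname below are hypothetical placeholders, not values from my environment), the change was essentially:
provider "vcd" {
  # url = "https://10.0.0.10/api"           # IP address in the url - provider crashed
  url = "https://vcloud.example.com/api"     # FQDN - works
  # ... remaining settings unchanged
}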
Kind regards

Google Drive API in Celery Task is Failing

Latest update: HTTP requests within the task are working, but not HTTPS.
I am trying to use a Celery task to upload files to Google Drive once the files have been uploaded to the local web server for backup.
I saw multiple questions asking similar things. I cannot make the Google API work in a Celery task, but it works when I run it without delay(). Those questions didn't receive any answers, e.g.
Question 1, where @chucky is struggling like me.
Implementation and Information:
Server: Django Development Server (localhost)
Celery: Working with RabbitMQ
Database: Postgres
GoogleDriveAPI: V3
I was able to get credentials and a token for accessing Drive files and display the first ten files, if the quickstart file is run separately.
Google Drive API Quickstart.py
Running this Quickstart.py shows the file and folder list of the Drive.
So I added the same code, with all the required imports, to the tasks.py task create_upload_folder() to test whether the task will work and show the list of files.
I am running it via an AJAX call, but I keep getting this error.
Tracing back shows that the root of the error is:
[2021-07-13 21:10:03,979: WARNING/MainProcess]
[2021-07-13 21:10:04,052: ERROR/MainProcess] Task create_upload_folder[2463ad5b-4c7c-4eba-b862-9417c01e8314] raised unexpected: ServerNotFoundError('Unable to find the server at www.googleapis.com')
Traceback (most recent call last):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1346, in _conn_request
conn.connect()
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1136, in connect
sock.connect((self.host, self.port))
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 257, in connect
if socket_connect(fd, address):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 40, in socket_connect
err = descriptor.connect_ex(address)
It's failing on name resolution (it can't find the IP of www.googleapis.com), most likely because it can't contact a DNS server that has the record (or can't contact any DNS server at all).
Make sure your DNS server is properly set up, or, if you are behind a corporate proxy/VPN, that you're actually using it.
You can verify it is working by fetching the IPs manually:
$ nslookup www.googleapis.com
Non-authoritative answer:
Name: www.googleapis.com
Address: 172.217.23.234
Name: www.googleapis.com
Address: 216.58.201.74
Name: www.googleapis.com
Address: 172.217.23.202
Name: www.googleapis.com
Address: 2a00:1450:4014:80c::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:800::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:80d::200a
If you can fetch the IPs manually, then there's a connectivity problem with Python itself not being aware of the proxies (that might have been set up on your PC), and for this try to use:
http_proxy=http://your.proxy:port
https_proxy=http://your.proxy:port
in the environment, as a command prefix, or directly in the HTTP client configuration that httplib2 uses.
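For example, a sketch of the command-prefix form (the proxy address is a placeholder, and the Celery app name vuetransfer is an assumption based on the paths in the traceback; substitute your own):
# set the proxies only for the worker process
http_proxy=http://your.proxy:port https_proxy=http://your.proxy:port celery -A vuetransfer worker -l info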
The major problem is with using httplib2 with Python 3 (or some other complication): even though the Google API client for Python says it is fully supported, you can have problems with requests. At least, the problem is there for me with Python 3 on Windows.
After a lot of research I found that falling back to Python 2 is one solution, but another is to use httplib2shim. After creating the credentials for your service, and before calling build() for your service, you need to call httplib2shim.patch():
...
import httplib2shim  # requires the httplib2shim package

httplib2shim.patch()
service = build(API_SERVICE_NAME, API_VERSION, credentials=creds)
This will solve the issue of httplib2 not being able to find www.googleapis.com.

ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

I'm attempting to install Nginx on an EC2 instance using the Terraform remote-exec provisioner, but I keep running into this error.
ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
This is what my code looks like:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
connection {
type = "ssh"
host = self.public_ip
user = "ec2-user"
private_key = file(var.private_key_path)
}
provisioner "remote-exec" {
inline = [
"sudo yum install nginx -y",
"sudo service nginx start"
]
}
}
Security group rules are set up to allow SSH from anywhere.
And I'm able to SSH into the box from my local machine.
Not sure if I'm missing something really obvious here. I've tried a newer version of Terraform, but it's the same issue.
If your EC2 instance is using an AMI for an operating system that uses cloud-init (the default images for most Linux distributions do), then you can avoid the need for Terraform to log in over SSH at all by using the user_data argument to pass a script to cloud-init:
resource "aws_instance" "nginx" {
ami = data.aws_ami.aws-linux.id
instance_type = "t2.micro"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.allow_ssh.id]
user_data = <<-EOT
yum install nginx -y
service nginx start
EOT
}
For an operating system that includes cloud-init, the system will run cloud-init as part of the system startup and it will access the metadata and user data API to retrieve the value of user_data. It will then execute the contents of the script, writing any messages from that operation into the cloud-init logs.
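If you need to check on that script afterwards, a small sketch (standard cloud-init locations, assuming a typical Amazon Linux / cloud image layout) is to log in once manually and run:
cloud-init status                            # reports whether user data processing has finished
sudo less /var/log/cloud-init-output.log     # stdout/stderr of the user_data script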
What I've described above is the official recommendation for how to run commands to set up your compute instance. The documentation says that provisioners are a last resort, and one of the reasons given is to avoid the extra complexity of having to correctly configure SSH connectivity and authentication. That is the very complexity that has caused you to ask this question, so I think following the advice in the documentation is the best way to address it.

Terragrunt + Terraform with modules + GitLab

I'm managing my infrastructure (IaC) at AWS with Terragrunt + Terraform.
I already added the SSH key and GPG key to GitLab and left the branch unprotected in the repository to do a test, but it didn't work.
This is the module call, which is equivalent to Terraform's main.tf.
# ---------------------------------------------------------------------------------------------------------------------
# Terragrunt configuration
# ---------------------------------------------------------------------------------------------------------------------
terragrunt = {
  terraform {
    source = "git::ssh://git@gitlab.compamyx.com.br:2222/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}
# ---------------------------------------------------------------------------------------------------------------------
# Blueprint parameters
#
zone_id = "ZDU54ADSD8R7PIX"
name    = "k8s"
type    = "CNAME"
ttl     = "5"
records = ["tmp-elb.com"]
The point is that when I run terragrunt init, in one of the modules I get the following error:
ssh: connect to host gitlab.company.com.br port 2222: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[terragrunt] 2020/02/05 15:23:18 Hit multiple errors:
exit status 1
I ran the test
ssh -vvvv -T gitlab.companyx.com.br -p 2222
and it also timed out.
This doesn't appear to be a terragrunt or terraform issue at all, but rather, an issue with SSH access to the server.
If you are getting a timeout, it seems like it's most likely a connectivity issue (i.e., a firewall/network ACL is blocking access on that port from where you are attempting to access it).
If it were an SSH key issue, you'd get an "access denied" or similar issue, but the timeout definitely leads me to believe it's connectivity.
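As a quick check (hostname and port taken from the question; nc may need to be installed), you can confirm it is connectivity rather than authentication with something like:
nc -vz gitlab.companyx.com.br 2222            # times out => network/firewall problem, not a key problem
ssh -p 2222 -T git@gitlab.companyx.com.br     # only meaningful once the port is reachable
If it only works from a VPN or from a whitelisted network, the firewall/ACL rules are what need to change.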

Haskell stack connection timeout

I have installed stack on WSL Ubuntu using WSL2 on Windows 10. The installation completed successfully, but when I test stack with
stack path --local-bin
I get the following error message:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
ConnectionTimeout
I have seen some other posts about issues like this one, but none that are resolved, and they are older. Also, I am not on a proxy, this is my personal computer, and I turned the firewall completely off. That said, when I attempt this over a VPN connection I get a different error. Could it be an SSL/HTTPS issue, since WSL2 is technically a different IP address from Windows, and so the connection is being blocked on the Amazon side?
For the record, when attempting the command on a VPN, the error I get is:
Writing implicit global project config file to:
/home/jdgallag/.stack/global-project/stack.yaml
Note: You can change the snapshot via the resolver field there.
HttpExceptionRequest Request {
host = "s3.amazonaws.com"
port = 443
secure = True
requestHeaders = [("Accept","application/json"),("User-Agent","The Haskell Stack")]
path = "/haddock.stackage.org/snapshots.json"
queryString = ""
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = ResponseTimeoutDefault
requestVersion = HTTP/1.1
}
(InternalException (HandshakeFailed Error_EOF))
Update
Reverting to WSL-1 "solves" the problem, so the issue is something specific to WSL-2. I replicated the problem with a fresh install of Windows on a separate machine, but haven't found a way around the issue yet.
I have WSL2 Ubuntu 20.04 installed on my PC.
I fixed this problem by changing the contents of /etc/resolv.conf:
cd /etc
sudo *your favorite editor* resolv.conf
I added the Google DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
This fixed stack not working for me.
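One caveat (this is an assumption about WSL2's default behaviour, not something from the answer above): WSL2 regenerates /etc/resolv.conf on restart, so to make the change persistent you can first disable that generation in /etc/wsl.conf:
# /etc/wsl.conf (create it if it doesn't exist)
[network]
generateResolvConf = false
Then run wsl --shutdown from Windows, restart your distribution, and re-create /etc/resolv.conf with the nameserver lines above.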
