TL;DR: Everything network-related works perfectly except one specific binary (in this case, elm).
I am running a fresh Arch machine; I am connected via Wi-Fi and have network access.
However, elm does not seem to know that. Running elm make fails when it tries to download the dependencies. (This is a project imported from somewhere else.)
I could not connect to https://package.elm-lang.org to get the latest list of
packages, and I was unable to verify your dependencies with the information I
have cached locally.
Are you able to connect to the internet? These dependencies may work once you
get access to the registry!
Adding the IP of package.elm-lang.org to /etc/hosts fixes that, but it then throws a similar error for github.com. I can keep doing that, but surely there is a way to convince elm to access the internet.
I'm not using a proxy or anything like that. My connection obviously works and seems stable. elm init also fails for the same reasons, so I'm unable to test in a brand-new directory.
Thank you all for your help :)
Apparently a fresh Arch install uses the systemd-resolved daemon for DNS, but elm decides to just read resolv.conf directly (which is blank), and then defaults to 127.0.0.1 as the DNS server.
Setting a DNS server manually in resolv.conf did the trick.
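For anyone hitting the same wall, a minimal sketch of the two usual fixes, assuming a standard systemd-resolved setup (the 1.1.1.1 below is just an example resolver; substitute your own):

# Option 1: point resolv.conf at the systemd-resolved stub resolver
sudo ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf

# Option 2: write a nameserver into resolv.conf by hand
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf

Either way, elm picks up a usable DNS server the next time it reads the file.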
Related
I am not sure if this is the correct place to ask my question, but I am really out of ideas, and the clock is ticking.
In short, I got a new machine that I need to make development-ready.
The project is based on rather old program versions; updating them is a task of its own.
I have set up Vagrant (1.8.1) with VirtualBox (5.0.14). Chef (0.10.0) created all dependencies successfully, and I can SSH into the machine and see that all is fine: all services are running as set in the Vagrantfile.
The Vagrant box is the latest ubuntu/trusty64. My host machine is macOS High Sierra (10.13.3).
Now, I open for example a MySQL editor (MySQL Workbench) and it connects to the box; I can see the DB and manipulate it.
My problem is with Node.js (I think). When I run my tests, it simply refuses to connect to the box. More precisely, it attempts to connect to 127.0.0.1:3306 (MySQL) and errors out, while MySQL Workbench performs the same connection without problems.
It seems the port forwarding in Vagrant works fine, as MySQL Workbench is being forwarded to the box. Node.js is not being forwarded, or something.
Is it Node doing this? Is there something else that I need to allow?
I have tried more different things than I can count, and it is always the same issue.
Is there something I can do to Node so it behaves like MySQL Workbench? Any idea is appreciated.
This identical setup used to work before, but not now.
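For anyone comparing notes: the forwarded-port mapping lives in the Vagrantfile, and you can verify it from the host without involving Node at all. A sketch, assuming a guest-3306-to-host-3306 mapping (check your actual Vagrantfile; the numbers may differ):

# Vagrantfile (assumed mapping)
config.vm.network "forwarded_port", guest: 3306, host: 3306

# From the host: test the forward directly with the mysql client
mysql -h 127.0.0.1 -P 3306 -u root -p

If the client connects but Node does not, the forward itself is fine and the problem lies in the Node connection settings rather than in Vagrant.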
I'm trying to set up a Rust programming environment for a local user on a Windows 10 laptop that is usually connected to my company domain. Installing the stable version of Rust with rustup via rustup-init.exe completed without problems, but every time I try to use cargo to install tools or libraries I get an error message like the following:
warning: spurious network error (5 tries remaining): [2/-1] failed to send request: The operation timed out
This happens both from my company network and from my home one. I managed to set up Rust for my domain account without problems.
I suppose this is network-related, or it might involve the Sophos software my company uses as firewall/anti-virus; what puzzles me is that just about every other network-related utility I tried, from git to curl, works without problems.
I'd like to use this additional user because there are utilities my company blocks for domain users but not for local ones, such as Dropbox.
In my case, I had used a proxy once, and it was still set as a variable in CMD.
To show your proxy settings, in an administrator cmd type:
netsh winhttp show proxy
If you have one, you can reset it with:
netsh winhttp reset proxy
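cargo also has its own proxy setting, so if a stale proxy is the culprit you can set or clear it explicitly rather than relying on the environment. A sketch for ~/.cargo/config (the address is a placeholder):

[http]
proxy = "proxy.example.com:8080"

Deleting that section removes the override again.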
I spent a good hour trying to figure this out and came across two potential solutions.
First, there could be an issue with an ssh: dependency; I fixed it by starting the ssh agent:
eval `ssh-agent -s`
ssh-add
cargo build
Second, I had this rewrite rule set up in my global ~/.gitconfig:
[url "ssh://git@github.com/"]
insteadOf = https://github.com/
Removing this section from ~/.gitconfig also solved the issue.
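A related knob, in case it helps someone: cargo can be told to shell out to the system git binary (which honors your ssh agent and ~/.gitconfig) instead of using its built-in git library. A minimal sketch for ~/.cargo/config:

[net]
git-fetch-with-cli = true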
I don't have a definite explanation, but cargo works correctly when the VPN to my office is active. I guess it really is something to do with the security software.
I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://<my-server>:7474/webadmin, Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this: it was due to the size of the compute/VM I was creating. The problem shows up when running on Extra Small instances. I created a new installation using a Small instance and everything now works :).
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4:
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
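For Neo4j 1.x that boils down to one line in conf/neo4j-server.properties, which is commented out by default so the server only listens on localhost:

# conf/neo4j-server.properties
org.neo4j.server.webserver.address=0.0.0.0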
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server will auto-start. The only thing you need to take care of, when constructing your VM, is to open an Input Endpoint, with any public port you want (preferably 7474 to stay true to Neo4j) and internal port 7474.
Note that the UI has changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And... once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo', then you do not want http://neo:7474 or http://neo.cloudapp.net:7474; you need to use your cloud service name (you had to create a name for the service when you deployed the VM).
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.
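If you prefer the command line over the portal, the classic (ASM) Azure cross-platform CLI could add the endpoint as well; a sketch, assuming a VM named my-neo4j-vm (the name, and that you have the classic CLI installed, are assumptions):

azure vm endpoint create my-neo4j-vm 7474 7474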
I have a pretty strange problem with collectd. I'm not new to collectd; I used it for a long time on CentOS-based boxes, but now we have Ubuntu 12.04 LTS boxes, and I have a really strange issue.
So: version 5.2 on Ubuntu 12.04 LTS. Two boxes residing on Rackspace (maybe important, but I'm not sure). The network plugin is configured using two local IPs, without any firewall in between and without any security (just to try a simple client-server scenario).
On both servers collectd writes into the configured folders as it should, but the server machine never writes the data received from the client.
I troubleshot this with tcpdump, and I can clearly see UDP traffic and collectd data, including the hostname and plugin names from my client machine, arriving at the server, but they are never flushed to the appropriate folder (as configured in collectd). I'm also running everything as root, to rule out permission problems.
Does anyone have any idea, or similar experience with this? Or maybe a suggestion for troubleshooting this, besides crawling the internet (I think I clicked every sensible link Google gave me in the last two days) and checking the network layer (which looks fine)?
And just a small note: exactly the same happened with the official 4.10.2 version from Ubuntu's repo. After hours of troubleshooting that, I moved to version 5.
I'd suggest trying out the quite generic troubleshooting procedure based on the csv and logfile plugins, as described in this answer. As everything seems to be fine locally, follow this procedure on the server, activating only the network plugin (in addition to logfile, csv and possibly rrdtool).
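For reference, a minimal server-side configuration along those lines might look like this sketch (IP, port and paths are placeholders); with csv active, anything the network plugin actually dispatches shows up as plain files under DataDir, which separates a receive problem from a write problem:

LoadPlugin logfile
LoadPlugin network
LoadPlugin csv

<Plugin logfile>
  LogLevel info
  File "/var/log/collectd.log"
</Plugin>

<Plugin network>
  Listen "192.168.0.10" "25826"
</Plugin>

<Plugin csv>
  DataDir "/var/lib/collectd/csv"
</Plugin>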
With no other way of fixing this, I upgraded my Ubuntu to 12.04.2 LTS (3.2.0-24-virtual) and everything just started working fine, without any intervention.
Is anyone aware of an easy way of duplicating and renaming a virtual PC (can be MS VPC, VMWare or Virtual Box), which is running SharePoint, K2 and acting as a domain controller? I’m looking for a method of creating an image which can be quickly and easily copied and run by multiple parties on the same network simultaneously without name conflicts. It’s either that or go through a ground-up build on each and every machine as far as I can see.
I'd advise against it: renaming an installed SharePoint machine is sure to cause you pain, indefinitely and unexpectedly. The way to go is with scripted installs:
create copy of a VM with OS
rename machine + run sysprep
script install SQL
script install MOSS
script configure MOSS (replaces config wizard + a lot of manual settings)
It can all be done unattended.
As a shortcut to install short-lived development machines I have used the following. Just make sure the SharePoint configuration wizard runs after the rename and there should be no problem.
create a copy of a VM having: OS+SQL+MOSS(no config wiz)
rename machine
script configure MOSS
It has the advantage that your development machines are identically installed, and it takes about 10 minutes to create a fresh one. This approach skips sysprep, but the machines are renamed, so you can run them all on your network. Not running sysprep has never caused me grief, but I wouldn't do it for production environments. Scripting the MOSS configuration makes sure it will work on the renamed environment (and all MOSS farms are configured exactly the same: same ports, same SSP setup, etc. Yay!).
For MOSS configuration scripting see http://stsadm.blogspot.com/2008/03/sample-install-script.html
Plenty of samples for SQL out there too.
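For a flavor of what "script configure MOSS" means in practice, here is a sketch of the kind of psconfig.exe sequence such sample scripts run after the rename (server, database, account, password and port are all placeholders; take the real sequence from the linked script):

psconfig -cmd configdb -create -server SQLSERVER01 -database SharePoint_Config -user DOMAIN\spfarm -password P@ssw0rd
psconfig -cmd helpcollections -installall
psconfig -cmd secureresources
psconfig -cmd services -install
psconfig -cmd installfeatures
psconfig -cmd adminvs -provision -port 8080 -windowsauthprovider onlyusentlm
psconfig -cmd applicationcontent -install

Because the config database is created after the rename, it is the new machine name that ends up in SQL Server.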
SharePoint doesn't like having the server renamed from under its feet (so to speak). Neither does SQL Server (which I assume you'd have installed on the VM for the installation). I'm not sure about a DC being renamed; there are probably problems there as well...
Having said that, there are some instructions I've read for renaming both SharePoint machines and SQL Server machines, so you might get somewhere.
On the third hand, I've tried it a few times and always ended up rebuilding the server from the ground up for SharePoint as it can get subtly mangled in ways which aren't always apparent straight away (the admin interface and shared services seem to be especially easy to confuse). I've found that I can build a vanilla MOSS install pretty quickly these days...
SharePoint writes the name of the server into configuration tables in SQL Server, so if you change the name of the server, things stop working.
What you can do is install just the OS, then take a copy each time you need a new machine. Run sysprep to give the machine a new name, then install SQL Server and MOSS.
This is not exactly what you are after, but it should save you some time.
I've done this, and it wasn't too bad.
Rename the SharePoint-server first, then rename the Windows server.
This posting has a nice checklist.
Don't forget to remove the NIC node from the settings file of the virtual machine, otherwise you get name collision due to duplicate MAC addresses. Here's a how-to.
I believe the solutions above are really good. But I would suggest an alternative ...
If this is a development virtual PC, I would suggest the following:
Do not rename the server
Change the IP address to be on a different network
Change the MAC address so that there are no packet collisions
Since you are using it as a development VPC, edit the computer's lmhosts file so the entry points to the new IP address
You might want to skip step 2 and stay on the same network, but the hosts file entry will still point back to the right machine. For example, if your server name was "myserver" and it pointed to 192.168.1.100, the local IP (via a hosts file entry), then if you copy the server, give it the IP 192.168.1.150, and edit the hosts file to point myserver to 192.168.1.150, the system will still work flawlessly. There will be some domain name collisions in the event log of the machine, but it won't affect your development.
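Concretely, the hosts file change on the copied machine is a single line (address and name per the example above):

# %SystemRoot%\System32\drivers\etc\hosts on the copy
192.168.1.150    myserver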