Cargo fails with "spurious network error: The operation timed out" on Windows 10 when using a local user - rust

I'm trying to set up a Rust programming environment for a local user on a Windows 10 laptop that is usually connected to my company domain. Installing the stable version of Rust with rustup via rustup-init.exe completed without problems, but every time I try to use cargo to install tools or libraries I get an error message like the following:
warning: spurious network error (5 tries remaining): [2/-1] failed to send request: The operation timed out
This happens both from my company network and from my home one. I managed to set up Rust for my domain account without problems.
I suppose this is network-related, or it might involve the Sophos software my company uses as firewall/anti-virus; what puzzles me is that just about every other network-related utility I have tried works without problems, from git to curl.
I'd like to use this additional user because there are utilities my company blocks for domain users but not for local ones, such as Dropbox.

In my case, I had used a proxy once, and it was still set as a variable in CMD.
To show your proxy settings, open an administrator cmd and type:
netsh winhttp show proxy
If one is configured, you can reset it with:
netsh winhttp reset proxy
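If a proxy does turn out to be in play, cargo can also be told about it explicitly. This is just a sketch, and the host and port below are placeholders: either set the standard proxy environment variable before running cargo,
set HTTPS_PROXY=http://proxy.example.com:8080
or add an [http] section to cargo's config file (~/.cargo/config.toml, or ~/.cargo/config on older toolchains):
[http]
proxy = "proxy.example.com:8080"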

I spent a good hour trying to figure this out and came across two potential solutions.
The first: there could be an issue with an ssh: dependency; I fixed it by starting the ssh agent:
eval `ssh-agent -s`
ssh-add
cargo build
The second was a URL rewrite set up in a global ~/.gitconfig:
[url "ssh://git@github.com/"]
insteadOf = https://github.com/
Removing this from ~/.gitconfig also solved the issue.
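If you'd rather not edit ~/.gitconfig by hand, the same rewrite can be removed with git config; this is a sketch assuming the section is named exactly as shown above:
git config --global --remove-section url."ssh://git@github.com/"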

I don't have a definite explanation, but cargo works correctly when the VPN towards my office is active. I guess it really is something to do with security software.

Related

Elm not able to access the network

TL;DR: Everything network-related is working perfectly except one specific binary (in this case, elm).
I am running a new Arch machine; I am connected via Wi-Fi and have network access.
However, elm does not seem to know that. Running elm make fails when it tries to download the dependencies. (This is a project imported from somewhere else.)
I could not connect to https://package.elm-lang.org to get the latest list of
packages, and I was unable to verify your dependencies with the information I
have cached locally.
Are you able to connect to the internet? These dependencies may work once you
get access to the registry!
Adding the IP of package.elm-lang.org to /etc/hosts fixes that, but it then throws a similar error for github.com. I can keep doing that, but surely there is a way to convince elm to access the internet.
I'm not using a proxy or anything like that. My connection obviously works and seems stable. elm init also fails for the same reasons, so I'm unable to test in a brand-new directory.
Thank you all for your help :)
Apparently a fresh Arch install uses the systemd-resolved daemon for DNS, but elm decides to just read /etc/resolv.conf directly (which is blank) and then defaults to 127.0.0.1 as the DNS server.
Setting a DNS server manually in resolv.conf did the trick.
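For completeness, the fix can look like either of the following, run as root; the nameserver address is just an example of a public resolver, and the symlink path assumes systemd-resolved is running:
echo "nameserver 1.1.1.1" > /etc/resolv.conf
or, to let systemd-resolved keep managing DNS:
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf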

TortoiseSVN Error: Could not send request body: an existing connection was forcibly closed by the remote host

Let me preface this by saying I have basically 0 knowledge of web development. That being said, I'll still try to provide you with as much information as I possibly can. Our client is using IIS7 on a Windows Server 2008 R2 machine. The TortoiseSVN error they're getting is this:
Error: Could not send request body: an existing connection was forcibly closed by the remote host.
Using the powers of Google, it seems there are two possible things that could be occurring here. As it is a 4 GB file, I've seen people mention that it could be a configuration issue (the timeout may be a little short, or I might need to enable a setting somewhere to allow committing larger files), or that it could be a network issue. It might be useful to note that they can commit smaller files.
I've already tried disabling the firewall, as well as the antivirus, on the server and having them retry, but that didn't work. They are trying to upload from a desktop to the server, and they are on the same network through a gigabit switch. I'm sure I'm missing useful information for you guys, but I'm a total noob to web dev, their setup, and actually understanding what they're trying to do. If you need any more information from me, I'll be glad to provide it.
The problem could be overly strict timeout options configured in Apache2's reqtimeout module. I simply disabled it:
a2dismod reqtimeout
/etc/init.d/apache2 restart
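If disabling the module entirely feels too heavy-handed, the same module can instead be given more generous limits. The values below are only illustrative, and the file path assumes a Debian/Ubuntu layout:
# /etc/apache2/mods-available/reqtimeout.conf
RequestReadTimeout header=60-120,MinRate=500
RequestReadTimeout body=60,MinRate=500
followed by the same Apache restart as above.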
Credit to: https://serverfault.com/questions/297562/svn-https-problem-could-not-read-status-line-connection-was-closed-by-ser

Using Git to clone from a Windows machine to a Linux webserver (in house)

OK, I am looking for a way to use Git to keep a web site up to date between my local machine (the git repository) and my web site (a git clone of the repository).
I have initialized the repository (on a Windows 7 machine) and added all the files to the repo on my local machine. I now need to get the repo to the webserver (a Linux-based machine). I can access the webserver via PuTTY and ssh. How do I go about cloning the repo into the appropriate directory to serve the web site?
I have tried the following from my Linux-based machine: git clone git+ssh://myuser@10.1.0.135/d/webserver/htdocs/repo
I keep receiving: connect to host 10.1.0.35 port 22: Connection timed out
Both machines are in house, with the webserver sitting outside of the internal network on a different IP range (outside of the firewall). I came from Subversion and can easily svn commit/update to and from the webserver and my machine without issue.
Thanks for any guidance on this!
The best resource I've found for doing this is located here.
The problem I had was that a git clone issued from the *nix environment using the above suggestions could not find the path to the repo properly.
I was able to fix this by starting the git daemon with the --base-path and --export-all params.
So, from the windows box:
git daemon --base-path=C:/source/ --export-all
Then from the *nix box (mac in my case):
git clone git://<local ip>/<project name>
My directory structure on the windows box is:
c:\source\<project name> - this is where the .git folder lives
Here is a walkthrough someone else did. It goes step by step showing how to do what you want.
The IP address 10.1.0.135 is reserved for private networks, which means that it only refers to your local Windows computer when used within your home network. If you're running the git clone command with that address on your server, 10.1.0.135 refers to a completely different computer, which explains why the connection isn't working.
Here's my suggestion: instead of trying to clone the repository on your home computer, first create an empty repository on the server
server$ git init /path/to/repository
and then push changes from your computer to the server's repository
home$ git remote add website ssh://myuser@server/path/to/repository
home$ git push website
You can call the remote something other than "website" if you want.
For slightly more advanced usage, I've written a blog post explaining how to set up staging and production servers and maintain them with git. If you don't want to deal with a staging server, though, I also link to a couple of tutorials about a simple two-repository setup to manage a website with git, which is basically what it sounds like you're looking for.
Sounds like your Windows 7 machine (in particular, port 22) may not be accessible from outside of the firewall. With Subversion, the webserver is likely accessible to both machines. Also, the IP for your Windows machine is a non-routable IP, which means your firewall is likely also NAT'ing your internal network.
You could approach this by opening port 22 in the firewall, or setting up port-forwarding in the firewall to point to your Windows machine. But you should probably create the git repo on the server, then clone from that to your Windows machine instead. You could use scp -r to get that initial repo on the server, though someone with more git experience may be able to tell you a better way.
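For that initial copy, a minimal sketch (the paths and hostname are placeholders):
scp -r /d/webserver/htdocs/repo myuser@webserver:/srv/git/repo
After that, the Windows machine can add the server copy as a remote and push/pull to it over ssh as usual.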
Good idea to do this with Git, if you need to check it into a version control system anyhow.
Just wanted to mention you could also look at the rsync utility; e.g. googling "rsync Windows" brings up some nice results.
Rsync is specifically made for keeping directory trees in sync across machines. It does this intelligently, not transferring files that are already on the other side, and you can use compression. It has tons of features and is typically used in UNIX production environments. There are ways to run it on Windows as well.
In any case:
Check your firewall settings on both machines; the relevant ports need to be open. In your case, port 22 is probably blocked.
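As a quick sanity check of that theory, you can try the address from the question directly and watch where it stalls; for example, from the webserver:
ssh -v myuser@10.1.0.135
If the verbose output hangs and ends in a connection timeout, the traffic is being dropped by a firewall or never routed, rather than rejected by an SSH server.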

Is Mercurial Server a must for using Mercurial?

I am trying to pick version control software for our team, but I don't have much experience with it. After searching and googling, it seems Mercurial is worth a try. However, I am a little bit confused about some general points. Basically, our team has only five people, and we all connect to a server machine that will be used to store the repositories. The server is a Red Hat Linux system. We will probably use a mostly centralized workflow. Because I like the local-commit idea, I still prefer a DVCS-style tool. Now I am trying to install Mercurial. Here are my questions.
1) Does the server used for the repositories always need to have the "mercurial-server" software installed? Or does it depend on which kind of workflow we use? In other words, is it true that if no centralized workflow is used, the server only needs the Mercurial client installed?
I am confused about the term "mercurial-server". Or does it simply mean that the Mercurial installed on the server is always called the "mercurial server", regardless of whether the workflow is centralized? In addition, because we all work on that server, does it mean only one copy of Mercurial needs to be installed there? We all have our own user directories, such as /home/Cassie, /home/John, and /home/Joe.
2) Is SSH a must? Or does it depend on what kind of connection there is between the users and the server? Since we all work on the server itself, SSH is not required, right?
Thank you very much,
There are two things that can be called a "mercurial server".
One is simply a social convention that "repository X on the shared drive is our common repository". You can safely push and pull to that mounted repository and use it as a common "trunk" for your development.
The second is actual server software that allows Mercurial clients to connect remotely. There are many options for setting this up yourself, as well as options for other remote hosting.
Take a look at the first link for a list of the different connection options. But as a specific answer to #2: No, you don't need to use SSH, but it's often the simplest option if you're in an environment using it anyways.
The term that you probably want to use, rather than "mercurial server", is "remote repository". This term is used to describe the "other repository" (the one you're not executing the command from) for push/pull/clone/incoming/outgoing/others-that-i'm-forgetting commands. The remote repository can be either another repository on the same disk, or something over a network.
Typically you use one shared repository to share the code between the developers. While you don't technically need it, it has the advantage that synchronization is easier when there is a single place holding the latest code.
In the simplest case this can be a repository on a simple file share where file locking is possible (NFS or SMB), where each developer has write access. In this scenario there is no need to have mercurial installed on the server, but there are drawbacks:
Every developer must have a Mercurial version installed that can handle the repository format on the share (for example, when the repo on the share was created with Mercurial 1.9, a developer with 1.3 can't access it)
Every developer can issue destructive operations on the shared repo, including the deletion of the whole repo.
You can't reliably run hooks on such a repo, since the hooks are executed on the developer machines, and not on the server
I suggest using the http or ssh method. You need to have Mercurial installed on the server for this (I'm not taking the http-static method into account, since you can't push into an http-static path), and you get the following advantages:
the Mercurial version on the server does not need to be the same as the clients', since Mercurial uses a version-independent wire protocol
you can't perform destructive operations via these protocols (you can only append new revisions to a remote repo, but never remove any of them)
The decision between http and ssh depends on your local network environment. http has the advantage that it bypasses many corporate firewalls, but you need to take care about secure authentication when you want to push stuff over http back to the server (or don't want everybody to see the content). On the other hand, ssh has the drawback that you might need to secure the server so that the clients can't run arbitrary programs there (it depends on how trustworthy your clients are).
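For a rough idea of what the two variants look like from a developer's machine, here is a sketch with a made-up host and path:
hg clone ssh://hguser@hgserver//srv/hg/project
hg clone http://hgserver:8000/project
Once cloned, hg pull and hg push work against whichever URL the clone was made from.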
I second Rudi's answer that you should use http or ssh access to the main repository (we use http at work).
I want to address your question about "mercurial-server".
The basic Mercurial software does offer three server modes:
Using hg serve; this serves a single repository, and I think it's mostly used for quick hacks (when the main server is down and you need to pull some changes from a colleague, for example); a quick sketch of this follows after the list.
Using hgwebdir.cgi; this is a cgi script that can be used with an HTTP server such as Apache; it can serve multiple repositories.
Using ssh (Secure Shell) access; I don't know much about it, but I believe that it is more difficult to set up than the hgwebdir variant.
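For the quick-hack case, a minimal sketch (the port is the default and the repository path is arbitrary):
cd /path/to/project
hg serve --port 8000
A colleague can then run hg pull http://yourmachine:8000/ for as long as the ad-hoc server is up.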
There is also a separate software package called "mercurial-server". This is provided by a different company; its homepage is http://www.lshift.net/mercurial-server.html. As far as I can tell, this is a management interface for option 3, the mercurial ssh server.
So, no, you don't need to have mercurial-server installed; the mercurial package already provides a server.

linux gedit: I always get "GConf Error: failed to contact configuration server ..."

How come I always get
"GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)"
when I start 'gedit' from a shell from my superuser account?
I've been using GUI apps as a logged-in user and as a secondary user for 15+ years on various UNIX machines. There are plenty of good reasons to do so (remote shells, testing of configuration files, running multiple sessions of programs that only allow one instance per user, etc.).
There's a bug at launchpad that explains how to eliminate this message by setting the following environment variable.
export DBUS_SESSION_BUS_ADDRESS=""
The technical answer is that gedit is a Gtk+/GNOME program and expects to find a running gconf session for its configuration. But when you run it as a separate user who isn't logged in on the desktop, there is no such session, so it spits out a warning telling you. The failure should be benign, though, and the editor will still run.
The real answer is: don't do that. You don't want to be running GUI apps as anything but the logged-in user, in general. And you never want to be running any GUI app as root, ever.
For some distributions (RHEL, CentOS) you may need to install the dbus-x11 package:
sudo yum install dbus-x11
Additional details here.
Setting and exporting DBUS_SESSION_BUS_ADDRESS to "" fixed the problem for me. I only had to do this once and the problem was permanently solved. However, if you have a problem with your umask setting, as I did, then the GUI applications you are trying to run may not be able to properly create the directories and files they need to function correctly.
I suggest creating (or, have created) a new user account solely for test purposes. Then you can see if you still have the problem when logged in to the new user account.
I ran into this issue myself on several different servers. I tried all of the suggestions listed here: made sure ~/.dbus had proper ownership, ran service messagebus restart, etc.
It turns out that my ~/.dbus was mode 755, and the problem went away when I changed the mode to 700. I found this when comparing known working servers with servers showing this error.
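In other words, assuming the same situation:
chmod 700 ~/.dbus
chown -R $USER ~/.dbus
The chown line covers the ownership check mentioned above and is only needed if the directory isn't already owned by your user.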
I understand there are several different answers to this problem, as I have been trying to solve this for 3 days.
The one that worked for me was to
rm -r .gconf
rm -r .gconfd
in my home directory. Hope this helps somebody.
