I am quite new to Perforce and I am facing an issue concerning the P4HOST value.
Here's the situation: I have one, let's say, classic setup with a connection, a workspace name, etc., and the host set to the local machine name. Everything works perfectly.
I have another connection that is almost the same, except the host must not be the local machine name in order to reach the correct server. If I set the host to my local machine name, I end up talking to the wrong server. If I set the host in P4V, I get the error "Client can only be used from host." and it breaks everything for this setup.
To fix this I tried to set the host value manually with this command: p4 set P4HOST=myhost. It works, except that I can no longer access my other repositories because, I think, it's a global value, and since the other configurations don't use a specific host, they fail.
Given my configuration, what can I do? Is it possible to set P4HOST for a specific setup without affecting everything else? Is there another way?
Thank you very much!
Edit: I don't know if this is useful, but the normal host value I use looks like myname-PC, and the failing one looks like apath/toanotherpath.
P4HOST's job is to keep you from using the same workspace from different machines. If you use the same workspace from different machines, you're going to have a bad time. (Why exactly is its own topic -- for purposes of this answer, just take my word for it that you do not want to use one workspace from different client machines. Dead rising from their graves, cats and dogs living together, that kind of thing. Bad time.)
When you create a workspace, its Host: value is set to your current P4HOST value (which defaults to the client machine hostname). If you try to use that workspace with a DIFFERENT host value, it's a strong clue to the server that you're trying to use it from more than one machine (which, as established, is a Bad Time), and so the server gives you that error (to try to stop you before you have a BAD TIME).
So it sounds like this workspace that you're trying to use was created on a different client host machine -- which means that using that workspace is probably going to lead to a bad time. Create a new workspace for the client machine that you're on.
Alternatively (and only if you're really sure it's the right thing to do), you can change the Host in that workspace to match your current machine. Note that if you find yourself having to do this more than once, you're probably in the process of generating a bad time for yourself.
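If you do go that route, a minimal sketch looks like this (the workspace and path names are placeholders, not from the question):

p4 client my_workspace

That opens the workspace spec in your editor; change the Host: line to the machine you are actually on, or delete the line entirely to allow use from any host (with all the caveats above). And if what you actually need is a different P4HOST per setup rather than one global p4 set value, Perforce's P4CONFIG mechanism scopes settings to a directory tree:

p4 set P4CONFIG=.p4config
echo P4HOST=myhost> C:\work\setup2\.p4config

Any p4 command run under C:\work\setup2 picks up that host value; your other setups are unaffected.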
So, I have a collection of Windows Server 2016 virtual machines that are used to run some tests in pairs. To perform these tests, I copy a selection of scripts and files from the network onto the machine before running the tests.
I'm basically using a selection of scripts that have been around here since before my time, and while I would like to use other methods, so much of our infrastructure relies on these scripts that overhauling the system would be a colossal task.
First up, I sort out the mapped drives with:
net use X: \\network\location1 /user:domain\user password
net use Y: \\network\location2 /user:domain\user password
and so on
Soon after, I use rsync to copy files from a location under /cygdrive/y/somewhere to /cygdrive/c/somewhere_else.
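The copy itself is roughly this (the flags and paths are illustrative, not the exact originals):

rsync -av /cygdrive/y/somewhere/ /cygdrive/c/somewhere_else/

Here -a recurses and preserves attributes, and -v lists files as they are transferred.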
During the rsync, I get errors saying that "files have vanished" (I'm currently unable to post the exact error; I will edit this later to include it). When I check what's currently in the /cygdrive directory, all I see is /cygdrive/c; everything else has disappeared.
I've tried making a symbolic link to /cygdrive/y in a different location, I've tried adding /persistent:yes to the net use commands, and I've changed the power settings on the network card so it doesn't sleep. None of these worked.
I'm currently looking into the settings for the virtual machines themselves, but I have my doubts, as we have other virtual Windows machines that do not seem to have this issue.
Has anyone heard of anything similar, and/or does anyone know of a decent method to troubleshoot this?
Right, so I've been working on this all day and finally noticed a positive change, but since my systems are in VMware's vCloud, this may not work for everyone. It was simply a matter of turning the VM off and upgrading the Virtual Hardware Version to the latest version. I have noticed, though, that on restart one of the first messages that comes up says the computer is "disabling group policies".
I did a bit of research into this and found that Windows 8 and 10 (no mention of any Windows Server versions) both automatically update Group Policy in the background, disconnecting and reconnecting mapped drives to recreate them.
It's possible that changing the Group Policy drive-map action from "Replace" (which deletes and recreates the mapping) to "Update" would fix this issue, and that the Virtual Hardware upgrade happened to resolve it in a similar way.
I just installed MATLAB Distributed Computing Server on a bunch of machines and it works, but only for those physically connected to the cluster's network. For remote access, those machines are two SSH hops away. How is this problem usually solved? I thought about setting up a VPN, but that seems like a last resort.
What I want is for everybody in the lab, using their own copies of MATLAB with the correct toolbox, to run their code on the cluster more or less effortlessly. I guess I could ask everybody to just tar-ball their files and use a remote installation of MATLAB, somehow forwarding the GUI session (VNC or X forwarding), but that seems ugly.
Any help?
It is possible to set up "remote access" to a cluster running MDCS so that clients without direct access can submit jobs to it. The documentation for this starts here:
http://www.mathworks.com/help/mdce/configure-parallel-computing-products-for-a-generic-scheduler.html
I'm not quite sure how to configure things so that submission can work across two SSH connections; the example integration scripts that ship with MDCS all presume only one. (One way to collapse the two hops into one is sketched after the list below.) However, it should be possible provided that:
The client can put the job and task files somewhere the execution nodes can see them
The client can trigger the appropriate qsub or whatever on the cluster headnode
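As a hedged sketch, assuming both hops are plain SSH: an OpenSSH ProxyJump entry (OpenSSH 7.3 or newer) on each client machine can make the headnode look like a single-hop target, which is all the stock integration scripts expect. The hostnames here are hypothetical:

# ~/.ssh/config on the client machine
Host cluster-head
    HostName headnode.internal.example
    ProxyJump user@gateway.example.com

After that, ssh cluster-head qsub ... behaves like a one-hop connection, and job and task files can be staged over the same channel with scp or sftp.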
You might also consider simply contacting MathWorks installation support.
I've just installed Neo4j 1.8.2 onto Azure by following this step-by-step process...
http://de.slideshare.net/neo4j/neo4j-on-azure-step-by-step-22598695
Unfortunately, when I browse to http://:7474/webadmin Fiddler says Error 10061 - No connection could be made because the target machine actively refused it.
I've followed the instructions exactly and haven't received any errors.
Any help much appreciated.
So, I think I got to the bottom of this. It seems it was due to the size of the compute/VM instance I was creating: the problem appears when running on Extra Small instances. I created a new installation using a Small instance and everything now works. :)
Try setting the server to accept connections from all hosts, and maybe use a newer Neo4j, say 1.9.4:
http://docs.neo4j.org/chunked/stable/security-server.html#_secure_the_port_and_remote_client_connection_accepts
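On the 1.x series, that usually comes down to one line in the server properties file; a sketch (the file location may vary by install, so check the linked docs):

# conf/neo4j-server.properties
# bind the webserver to all interfaces instead of localhost only
org.neo4j.server.webserver.address=0.0.0.0

Restart the Neo4j service afterwards for the change to take effect.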
The way the VM Depot image is set up, it's pre-configured to allow all hosts to connect, and the Neo4j server auto-starts. The only thing you need to take care of when creating your VM is to open an input endpoint, with any public port you want (preferably 7474, to stay true to Neo4j) and internal port 7474.
Note that the UI has changed a bit since the how-to was published: you can specify the endpoint as the last step before creating your virtual machine. Other than that, the instructions should be the same. And once the VM is up and running (it'll take about 5-10 minutes), you just visit http://yourservicename.cloudapp.net:7474 and you should see the web admin. Note: this is not the same as your VM name. If you named your VM something like 'neo', then you do not want http://neo:7474 or http://neo.cloudapp.net:7474; you need to use your cloud service name (you had to create a name for the service when you deployed the VM).
I've deployed that image several times in demos, and just tried again right now to make sure nothing wonky happened. Worked perfectly.
I'm trying to set up a Jenkins system where a certain program has to be run on a board on the network, accessed via telnet. We're talking about hundreds of such jobs here, so we will be setting up multiple boards. Each job therefore has to be allocated a board, but the catch is that only one job can use a given board at a time, otherwise the program fails.
The solution I have right now is a master-slave setup where I connect to the same machine using SSH (so a master and multiple slaves, all on the same machine). Each slave node then has a label for the IP address of the board the program has to telnet to. This works, scheduling-wise, but it might cause issues because all the nodes connect to the same machine using SSH. Connecting to the boards themselves using SSH is not an option.
Is there any way to get the same functionality as above, but without using SSH to connect to the same machine? Basically I want to be able to say: we have n available machines; when a job comes in, give it one of those machines and pass it a label belonging to that machine (its IP address in this case); now there are n-1 machines left.
Mutual exclusion comes close, but does not provide the above functionality, and jobs waiting for a resource take up one of a node's executors.
Thanks a lot!
I realize your problem was probably solved years ago, but in case someone else is looking for the answer and runs into this:
You can use "Lockable resources" plugin and set the ip address as the name of the resource and use label such use test-board-ip.It is simple and easy to use.
Another possibility is the "External Resource Dispatcher" plugin. It provides a few more options, but it has a bug that causes it to hang sometimes, and it seems to no longer be maintained (the last updates are from 2013).
Maybe you should have a look at the Locks and Latches plugin. With it, you can lock a resource, requiring each job to take the lock on the board it wants before running:
https://wiki.jenkins-ci.org/display/JENKINS/Locks+and+Latches+plugin
I'm supposed to access a server, but when I use WinSCP with the FTP protocol to log in, I just get a warning that:
The requested name is valid, but no data of the requested type was found.
Connection failed.
I really have very little experience with working remotely on servers, or even logging into them. What are my alternatives?
This is the WSANO_DATA error. Quoting the Microsoft documentation:
The usual example for this is a host name-to-address translation attempt ... which uses the DNS (Domain Name Server). An MX record is returned but no A record—indicating the host itself exists, but is not directly reachable.
(This can happen, for example, with newly registered domain names that are not fully set up yet.)
See:
https://learn.microsoft.com/en-us/windows/win32/winsock/windows-sockets-error-codes-2#WSANO_DATA or
https://winscp.net/eng/docs/message_name_no_data
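To see which records your hostname actually has, you can query DNS yourself; from a Windows command prompt (the domain is a placeholder):

nslookup -type=A ftp.example.com
nslookup -type=MX ftp.example.com

If the MX query returns data but the A query finds nothing, you are in exactly the situation described above.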
It could have been a temporary issue. Also make sure you specify your hostname without the leading ftp:// (though the latest version of WinSCP will strip it automatically).
You can find a very nice discussion of the same issue with WinSCP here.
You can also try FileZilla or PuTTY.
If you are typing your address like ftp://ftp.domain.com or something similar, remove the first part and keep just ftp.domain.com in the host address box.
You might want to consider PuTTY, which comes with a number of tools, including an SSH client and a secure-copy tool like WinSCP called pscp. Possibly even more valuable is the psftp client, which allows secure FTP to remote servers. PuTTY can be run from a USB drive, making it easy to carry with you to any computer and letting you log in to your server from all over the world.
You're probably using WinSCP to send files to or fetch files from the server, right? You might want to state that in your question. For that, you're probably better off with FileZilla (you need the FileZilla client, not the server).