I am trying to get some Linux ClearCase clients to work with our existing Windows infrastructure. All of the VOBs and servers are hosted on Windows machines.
Using an existing VOB, I was able to create a tag in the Linux region to refer to the VOB, and then create a view on the Linux client. This client was able to connect to the VOB and pull the files in when I updated the config spec. However, while it can view files, it does not have permission to edit them.
The usernames match:
Linux: user1
Windows: DOMAIN\user1
The ClearCase admin panel is set to "Use this domain to map UNIX user and group names" with DOMAIN selected.
However, the authentication does not appear to be working. From the Linux client, if I run
credmap windowsServer
I get Nobody/Nobody for the remote username and group ID. If I run from the Windows server
credmap linuxClient
it times out and I get
credmap: Error: Unable to contact albd_server on host
Investigating further, albd_list on the Linux client shows that the albd server is running, and it even finds the albd_server on the Windows machine:
albd_server addr = 166.20.20.81, port= 371
albd_list 166.20.17.118
albd_server addr = 166.20.17.118, port= 371
Going in the opposite direction returns
albd_list 166.20.20.81
noname: Error: Unable to contact albd_server on host '166.20.20.81'
cannot contact albd
Ping works from the Windows host to the Linux host, and I am even connected to the Linux host over SSH from the Windows machine at the moment.
If anyone has any ideas on what to look for next, you'd be my hero :(
You get limited ClearCase functionality when you access a ClearCase server (running on Windows) from a Linux client. To learn more, read about CCFS.
I'd suggest you consider migrating your VOB server to Linux. That way you get full ClearCase functionality, including dynamic views.
I can confirm that having the VOB server on Windows means it won't be fully accessible from a Linux client, even though the official IBM documentation describes the CCFS setup to follow.
(See "Configure UNIX or Linux clients to access Windows VOBs", which you must have seen.)
In particular, I never managed to get credential mapping fully working from Linux to Windows (the other way works well).
And you need to make sure your view storage is accessible from Linux (see "Creating a view on a NAS device").
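For that last point, a rough sketch of what creating the view can look like when the view storage sits on a NAS export reachable from both sides (the host name and storage paths here are placeholders, not values from the question):
# host name and storage paths below are placeholders; adjust to your NAS export
cleartool mkview -tag user1_linux_view \
  -host nas-server \
  -hpath /net/nas-server/views/user1_linux_view.vws \
  -gpath /net/nas-server/views/user1_linux_view.vws \
  /net/nas-server/views/user1_linux_view.vws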
That leaves you with inter-environment solutions, like:
CCRC (ClearCase Remote Client, for CC7.x)
ClearTeam (for CC 8.x)
See "Feature Comparison Matrix for CCRC, CTE, CCWeb, Native ClearCase GUI and SCM Adapter".
This wound up being something stupid. There was a firewall running on the Linux machine blocking the albd_server port.
Fixing that still did not resolve the credential mapping issues, but it at least let me eliminate one more potential cause. Thanks.
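For anyone hitting the same wall, checking and opening the albd port looks roughly like this, assuming iptables is the firewall in use (the exact tooling will depend on your distribution):
# List the current rules; either an explicit rule or a blanket REJECT/DROP will block albd
iptables -L -n
# Allow the albd_server port (371) through, both UDP and TCP
iptables -I INPUT -p udp --dport 371 -j ACCEPT
iptables -I INPUT -p tcp --dport 371 -j ACCEPT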
I have created an ACL to access a web server from an Oracle XE 11gR2 user on Windows, and calling UTL_HTTP.request() works fine.
I have created the same ACL for an Oracle Enterprise 11gR2 database user on a Red Hat Linux server, but UTL_HTTP.request() gives the classic error ORA-24247: network access denied by access control list (ACL).
I have checked and rechecked the ACL definitions, and they are identical on both machines.
All the Windows, Linux and remote host computers are on the same network.
I tested curl from the Linux machine to the remote server and it worked OK.
Running UTL_HTTP.request() on Linux as the SYS user also worked OK.
No proxies.
So 2 questions:
Is the SYS user not affected by the ACL rules?
Is there something undocumented that needs to be done on the Linux server to make this work?
I read a post where the OID was the problem, but I have no idea how to check that.
Any tip what to do would be appreciated.
Thanks in advance.
Please see "How to configure Access Control List" at:
http://www.oracleflash.com/36/Oracle-11g-Access-Control-List-for-External-Network-Services.html
Above the "How to configure Access Control List", provides an explanation of why you could be receiving the error. It also may be because Oracle is installed on the Windows machine as SYSTEM or similar, either way, creating the ACL via that method should do the job.
I am maintaining a virtual machine on a cloud service running Linux (SLES). At some point, someone logged in, did some major things (e.g. chmod 777 on ALL files), and, with some other things he did, messed up the system.
It would be no surprise if he actually hacked it, but...
The VM is hosted inside a VPN (unreachable from outside the VPN), and the output of last for root shows a user connected through tty1 (!!!), with no IP address, while all of my connections, root and user alike, are on pts/X.
My thinking (not that I am an expert) leads to one conclusion: this user must have had physical (?) access to the cloud service, since a tty is only reachable locally.
Which means that, if that is true, the "attacker" must be someone inside the cloud hosting company.
Question:
Is there ANY way you can connect remotely to a server/cloud service virtual machine using ttyX?
Correct me on any point you see as wrong; as I mentioned, I am not an expert, but I am more than willing to learn.
Depending on the hypervisor, it may provide a remote console, which is essentially a local console connected to from a remote place. There is also the IPMI protocol, which can be used to connect to the host machine and open a console with the SOL (serial-over-LAN) command.
Other than that, the user might be connecting using VNC, which would also show up as a tty connection.
IPMI SOL: http://www.alleft.com/sysadmin/ipmi-sol-inexpensive-remote-console/
Remote qemu guest console: How to switch to qemu monitor console when running with "-curses"
VNC on guests: https://askubuntu.com/questions/262700/qemu-kvm-vnc-support
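To make that concrete, two small examples; the BMC address and credentials below are placeholders:
# Show login records with the originating host; a console/SOL login on tty1
# shows no remote IP, unlike your pts/X sessions
last -F root
# Attach to the machine's console over IPMI serial-over-LAN (requires access to the BMC)
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate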
I was setting up a Perforce server and only noticed options for localhost and the like. What I'm trying to do is set up the server on a desktop machine at one household, and then be able to connect to it over the internet with the P4V client to access the files from another household. I know that I'll have to forward some ports and such, but what setup files do I need to edit to do this? I can only find info for servers that are all run on the same network, like at a business, and nothing for over the internet. I've set up a TeamSpeak server like this, where to connect you type in the IP address and port and then connect to the server, but this doesn't have options like that, that I've seen anyway. This will all be done on Windows 7 64-bit machines: the server on a desktop, and the clients on desktops and laptops. All help is appreciated, and I'll be posting back with updates on what I'm doing so others can follow this as well if needed.
The server accepts TCP/IP connections, which can come from any client machine that has TCP/IP connectivity. The Perforce server setting that tells it which IP address and port number to listen on is P4PORT: http://www.perforce.com/perforce/r12.2/manuals/cmdref/env.P4PORT.html
Since you're on a Windows machine, your server will probably be run as a Windows service, and hence its P4PORT setting will be held in the registry section for that service. You can edit the service's configuration with a registry editor such as RegEdit, or, more simply, you can use 'p4 set -S Perforce P4PORT=my-host-name:my-port-number' to specify the desired IP address and port.
Then restart your Perforce service from the Services Control Panel and you're good to go!
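For example, the whole sequence might look like this (the service name "Perforce", port 1666 and the public IP are placeholders; the router at the server's household must forward that port to the server machine):
# On the server, from an administrator prompt: tell the service which port to listen on
p4 set -S Perforce P4PORT=1666
# Restart the service so the new setting takes effect
net stop Perforce
net start Perforce
# From a client at the other household, point P4V (or the command line)
# at the server's public address and the forwarded port
p4 -p 203.0.113.5:1666 info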
I've recently been working with the NAO. We're trying to connect the NavChip to it and do some experiments related to robot navigation. The NAO runs a modified 2.6 Linux kernel on its Geode system. I've managed to get my NavChip working on it (I needed to compile the Linux cp210x kernel module, etc.). I can therefore run a C program that came with the NavChip and collect data from it. However, the data can only be logged on the local file system. I'd like to stream this data over the network to a Windows machine, since all the processing is MATLAB based. Would anyone have any suggestions on how I can send this data from the NAO to a Windows machine?
The NAO's system is pretty limited. It has ssh, and some common utilities like cat etc., but nothing advanced.
I'm not sure I understand the problem properly, but I think you've answered your own question: you mentioned ssh is installed, so why not just scp the file? Use an SSH client on the Windows box to connect remotely and download the relevant log file.
If you really do need to push the file from the remote host to the local machine (rather than connecting to the remote host and downloading it), then netcat should work; see here: http://www.g-loaded.eu/2006/11/06/netcat-a-couple-of-useful-examples/
Otherwise, just write your own socket program in C and pipe the file across (should be pretty trivial).
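If you go the netcat route, the transfer can be as simple as this sketch; the host name, port, file names and logger program are placeholders:
# On the Windows machine (e.g. ncat from Nmap, or nc under Cygwin): listen and save to a file
ncat -l 9999 > navchip_data.log
# On the NAO: stream the logger's output straight to the Windows machine...
./navchip_logger | nc windows-host 9999
# ...or push an already written log file
nc windows-host 9999 < navchip.log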
I used to use MAMP (or just a local Apache/PHP/MySQL stack) to work on web projects. I've since graduated to a live Ubuntu server which is much closer to the production environments for the sites I work on.
Now I'm trying to take this a step further to optimize my workflow. My goal is to have a Linux server running in VirtualBox that automounts a local folder share (from the host) and uses a symlink to gain access to the files (i.e. client:/var/www/dev is a symlink to host:/Users/charlie/dev/).
I don't want to keep my files stored on the virtual server if it can be avoided. I prefer having direct local access to the files and not having to wait out buffering issues between the host and the client. For example, if I have several files that live on the client open in my IDE and I close my laptop, there's a hiccup as soon as I open it again: my IDE has open projects that reference folders and files on a network share that isn't available yet, and in the few seconds it takes for the virtual machine to wake up, OS X is already reporting that the share can't be found and was disconnected, the IDE chokes up, and so on.
So what am I asking? Well, is this safe / are there obvious pitfalls I'm not seeing / better ways to do this?
Edit: For anyone who stumbles upon this post, the final setup is a Linux virtual machine running in VirtualBox on a Mac, with NFS and a symlink from my Apache web root to my mount.
I used NFS Manager (http://www.bresink.com/osx/NFSManager.html) to set up the NFS server on my host computer with user mapping to my primary account. This ensures that when my VM mounts the NFS share it can do whatever it needs (reading, writing, modifying). Then I added this line to /etc/fstab on my VM to automount the share on boot: "123.456.89.1:/Users/charlie/nfs_share /mnt/nfs_share nfs" (where 123.456.89.1 is my host's IP on the virtual NAT).
The result is a killer development environment where I can use Finder, Aptana (or whatever your editor of choice is), Photoshop, etc. to work on files locally and simultaneously test them in my "real" Apache/Lighttpd/MySQL/PHP environment!
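For reference, the pieces boil down to something like this; the paths and addresses are the ones from above, NFS Manager generates the exports entry for you, and the -network range is only an assumption about the VirtualBox network:
# /etc/exports on the OS X host: export the folder, mapping all requests to my user
# (the 192.168.56.0 network is an assumed VirtualBox range; adjust to yours)
/Users/charlie/nfs_share -mapall=charlie -network 192.168.56.0 -mask 255.255.255.0
# /etc/fstab on the Linux guest: mount the share at boot
123.456.89.1:/Users/charlie/nfs_share /mnt/nfs_share nfs defaults 0 0
# On the guest: point the Apache web root at the mount
sudo ln -s /mnt/nfs_share /var/www/dev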
I am using the exact same setup for accessing my documents folder between my Ubuntu host and the Windows guest. The same goes for my iMac. The only issue when editing on the two platforms is the CR/LF line endings, but that will be no issue in your setup.