OPC server not connecting remotely - RPC/DCOM

I need to implement an OPC server on Windows XP. I downloaded an OPC library and an OPC client (an application, not a library) and implemented my OPC server. When I use the client on my own machine everything runs normally, but when I connect from a remote computer I do not see my server. I understand that DCOM is a potentially dangerous technology. I found a configuration manual and did everything in it, but nothing changed: I disabled the Windows Firewall, added port 135 to the firewall exceptions, and in dcomcnfg granted local and remote access to the "Anonymous" and "Everyone" groups, and local and remote launch & activation to the "Administrators" and "Everyone" groups. Still nothing changed. I did not grant rights on my DCOM component itself, because I reasoned that I am only trying to get the list of servers, not to work with them yet. My Microsoft network has no domain and no Active Directory; can I achieve the desired result in this case?

There are a number of things which can go wrong with OPC DA over DCOM. Off the top of my head, you could try the following (a command-line sketch for the first and third points follows this list):
Check whether the OPCEnum service is running on the server computer. This service provides the list of installed OPC servers to potential clients. It's part of the OPC Foundation redistributables.
Make sure that whatever dcomcnfg changes you applied are done on both the server and the client computer.
If you're using only local users, try creating a dedicated user for OPC access on both the server and the client computer, e.g. call it "opc". Then grant all the rights to this user in the "COM Security" section of dcomcnfg, and run both the server and the client as "opc". Make sure local users authenticate as themselves (see "Security Options" in the local security policy).
If all else fails, a workaround can be to deploy the server on the client computer, register it, and then remove it. That worked for me once.
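A minimal sketch for the first and third points, assuming an elevated command prompt on both machines (the account name "opc" and the password are placeholders, not required values):

    rem -- point 1: check that the OPCEnum browsing service is installed and running
    sc query OPCEnum
    rem -- start it if the query shows STOPPED
    net start OPCEnum

    rem -- point 3: create the same dedicated account on BOTH machines,
    rem -- with an identical password (blank passwords generally won't authenticate)
    net user opc Pl@ceholder1 /add

After creating the account, set "Network access: Sharing and security model for local accounts" to "Classic - local users authenticate as themselves" under Local Security Policy > Security Options.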

The most common cause is that DCOM has not been configured properly. I find this guide very useful:
ftp://ftp.nist.gov/pub/mel/michalos/Software/Github/MTConnectSolutions/MtcOpcAgent/doc/DCOM_Config_Step_by_Step.pdf
This other guide also gives you a good overview of remote OPC DA:
http://www.kepware.com/Support_Center/SupportDocuments/Remote%20OPC%20DA%20-%20Quick%20Start%20Guide%20(DCOM).pdf
I had a similar problem when I tried to communicate with a remote OPC server on a different PC. Please pay attention to point 2 of the second guide (2. Users and Groups): make sure both PCs are logged in under the same user account with the same password.
2.1 Domains and Workgroups
When working within a workgroup, each user will need to be created locally on each computer involved in the connection. Furthermore, each user account must have the same password in order for authentication to occur. A blank password is not valid in most cases. Because changes may need to be made to the local security policy on each computer, remote connectivity within a workgroup has the potential to be the least secure connection. For more information, refer to Local Security Policies.
When working within a domain, local users and groups are not required to be added to each computer. A domain uses a central database that contains the user accounts and security information. If working within a domain is preferred, a network administrator may have to implement the changes.
Mixing domains and workgroups will require both computers to authenticate with the lesser of the two options. This means that the domain computer will require the same configuration as it would if it were on a workgroup. Local user accounts must be added to the domain computer.

Related

How do I connect Release Management 2013 client on a non-domain Windows 10 box?

I've got 2 machines:
A corporate desktop machine running Windows 7 SP1, which resides on the corporate domain and which I log into using a corporate domain account.
A personal laptop that I use when working from home via the Cisco VPN client, but which presently sits on my desk connected to the corporate WiFi (though today I also had it wired and on the same subnet as my desktop machine). This machine is not on the corporate domain; I log into it with a Microsoft Account.
I need to run the Visual Studio 2013 Release Management client from both machines. My desktop machine works fine when entering either the IP address or the URL into the Release Management Server URL field: everything hooks up and all is glorious.
On my Windows 10 laptop however, it's a different story. Every attempt to connect is met with the error:
The server specified could not be reached. Please ensure the information that is entered is valid (please contact your Release Management administrator for assistance). <-- I'm the admin
I can ping the machine both with IP address and with hostname, ruling out DNS issues. Both client machines are on the same subnet. Both machines are using the same outbound port.
Checking the event log, I see a bunch of "Message: The remote server returned an error: (401) Unauthorized." entries.
Checking with Fiddler, on my desktop machine, I can walk through the handshake of each of the stages of startup and all is good. But in Fiddler on my laptop I see 3 401 Unauthorized errors before Release Management Client bombs and returns the rather uninformative message I posted above.
I've attempted to create a shadow account on my laptop and do the Shift-Right Click-Run As Different User dance, but I must be missing something because I can't get this to run.
I've talked to the network administrator who suggests that I should be able to access all of the same resources from both machines and that it must be a Release Management issue.
Is this an incompatibility between VS2013 Release Management & Windows 10 or something else? Has anyone else had this issue and overcome it? I have access to be able to administer the Release Management environment if there's changes that need to be made there and I'm a local administrator on both machines. I'm not however a domain administrator if changes need to be made there.
I would bet you simply have a security issue as the workstation is not domain-joined and the WPF client is using Integrated Authentication.
Often creating a local "shadow" user with the same username and password, and running the client app under that account (run as), works.
Another option is to join the workstation to the domain or use a domain-joined VM.
After fully investigating the situation, it appears to have been a combination of factors. I am posting a response because this appears to be a relatively common problem:
The workstation was sending an unexpected credential to the server. To get around this, you have to configure the user account on the server without a domain in the username and create a shadow account on your local machine. When running the client application, you must either log into this shadow account on the local machine, or SHIFT+RIGHT CLICK and choose "Run as", entering your local shadow credentials. This passes the shadow account to the server, which will then authenticate without referencing the domain. OR
Create a user account on the server that matches the credentials on your local machine, in the MACHINENAME\LocalUsername form.
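As a concrete sketch of the run-as approach (the machine name, username, and client path below are placeholders; use your own):

    rem launch the client under the local shadow account
    runas /user:MYLAPTOP\joshua "C:\Path\To\ReleaseManagement\Client.exe"

If you only need the shadow credentials for the network hop, runas also has a /netonly switch that keeps your local identity but presents the given credentials to remote servers.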
There also appeared to be a network issue when attempting to connect to the Release Management server from the non-domain machine while connected inside the network. When connecting via the VPN from home, this was resolved, but only after we'd ensured the server and local machine accounts were correctly configured. The domain-joined machine always connected properly.

TF30063: You are not authorized to access

We're currently having an issue where, when someone tries to access our TFS server via Visual Studio, they're hit with: Error TF30063: You are not authorized to access.
The TFS server is on a different domain from the one the connecting client machines are on. There is a domain trust between the two, and other shared resources work fine.
I have found that it does temporarily work if you open an RDP (remote) connection to the server in the background and log in using your local domain credentials. After leaving the remote session connected and trying again via Visual Studio, it works fine.
Another related thing to point out: looking at the Administrators group permissions on the TFS server, it does not resolve the usernames of the users in the list until they initiate an RDP connection at least once after a reboot; until then it shows their SIDs.
Things I've tried so far:
Adding Windows and Generic Credentials to the Credential Manager on the TFS server for their domain accounts. I thought it might be an issue with the server not caching their credentials, which meant an RDP connection needed to exist each time.
Enabling Windows Authentication in IIS
Adding the path to Trusted Sites in Internet Options
Enabling Network access: Allow anonymous SID/Name translation in Group Policy for the machine.
Creating a registry value under HKLM\System\CurrentControlSet\Control\Lsa called TurnOffAnonymousBlock and setting it to 1, which is essentially what the Group Policy setting above does.
None of these, however, seems to have fixed the issue.
Any suggestions would be greatly appreciated!
If there is a domain trust in place, you should just add the user's AD account (the one they log into their machine with) as a valid user in TFS.
For example, if TFS is in Domain A and the user's laptop is in Domain B (and they log into their laptop with a Domain B account), then you need to ensure that Domain A trusts Domain B (either a two-way trust, or one-way with A trusting B). Then you just need to add the user's Domain B account as, say, a TFS Contributor, and they should be able to access TFS without doing anything special.
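If you'd rather script the grant, the TFSSecurity command-line tool that ships with TFS can add the account to a group. The project, group, and collection URL below are placeholders, and the exact syntax can vary between TFS versions, so treat this as a sketch:

    rem add the Domain B user to a project's Contributors group (names are examples)
    TFSSecurity /g+ "[MyProject]\Contributors" n:DOMAINB\jsmith /collection:http://tfsserver:8080/tfs/DefaultCollection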

Active FTP on Azure virtual machine

I have set up FTP in IIS 8.0 on an Azure Windows Server 2012 virtual machine.
After following the instructions in this post (http://itq.nl/walkthrough-hosting-ftp-on-iis-7-5-a-windows-azure-vm-2/), I've been able to make FTP work fine in passive mode, but it fails when trying to connect in active mode from FileZilla. The FTP client can connect to the server in active mode, but it fails with a timeout error when trying to execute the LIST command.
I carefully verified that endpoints for ports 20 and 21 are set on the Azure VM without pointing to a probe port, and that Windows Firewall allows external connections to VM ports 20 and 21.
I can't figure out why active mode doesn't work while passive mode works fine.
I know there are other users with the same issue.
Has anyone succeeded in setting up active FTP on an Azure VM?
This previous response is incorrect: https://stackoverflow.com/a/20132312/5347085
I know this because I worked with Azure support extensively. The issue has nothing to do with the server not being able to connect to the client, and my testing method eliminated client-side issues as a possibility.
After working with Azure support for 2 weeks, their assessment of the problem was essentially this: "Active-mode FTP uses a series of random ports from a large range for the data channel from the client to the server. You can only add 150 endpoints to an Azure VM, so you couldn't possibly add all those ports and get active FTP working 100%. To do this you would need to use an instance-level public IP, essentially bypassing the endpoint mechanism altogether, putting your VM directly on the internet, and relying entirely on the native OS firewall for protection."
If you HAVE to use active-mode FTP on Azure and are OK with putting your VM on a public IP, they provided this link:
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-instance-level-public-ip/
UPDATE: Official response from Azure Support:
Josh,
First of all, thanks for your patience on this. As I mentioned in my last email, I was working with our Technical Advisors, who are Support Escalation Engineers, on reproducing this environment in Azure. Our tests were configured using WS_FTP 7.7 (your version is 7.1) and the WS_FTP 12 client, as well as the Windows FTP client. The results of our testing were the same as what you are experiencing: we were able to log in to the server, but we get the same command port/LIST failures.
As we discussed previously, active FTP uses a random port for the data plane on the client side. The server connects via 21 and 20, but the incoming port is a random ephemeral port. In passive FTP, this can be defined, and therefore endpoints can be created for each port you use as part of the data plane.
Based on our extensive testing yesterday, I would not expect any other active FTP solution to work. The Escalation Engineer who assisted yesterday also discussed this with other members of his team, and they have not seen any successful active FTP deployments in Azure.
In conclusion, my initial thoughts have been confirmed by our testing, and active FTP will not work in the Azure environment at this time. We are always striving to improve Azure's offering, so this may be something that will work in the future as we continue to grow.
You will need to move to a passive FTP setup if you are going to host this FTP server in Azure.
When using active FTP, the client initiates a connection to port 21 on the FTP server. This is the command (control) channel, and this connection usually succeeds. However, for the data channel the FTP server then opens a new connection back to the client, classically from its own port 20 to the port the client announced. This channel is used for all data transfers, including directory listings.
So, in your case, active FTP isn't working because the server can't initiate a connection back to the client. This is a problem either on the server side (an outbound firewall rule) or on the client/network side. That's usually a good thing, because you don't want internet-based servers to be able to open connections to client machines.
In passive mode there is a clear client/server distinction, where the client initiates both connections to the server. Passive mode is recommended, so if you got that working I'd stick with it.
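To make the asymmetry concrete, here is roughly what the control-channel conversation looks like in active mode (the IP and ports are made-up examples; the last two numbers of the PORT command encode the data port, 195*256+149 = 50069):

    USER josh
    PASS *****
    PORT 192,168,1,50,195,149      <- client: "connect back to 192.168.1.50:50069"
    200 PORT command successful.
    LIST                           <- server must now open a NEW inbound connection to the client

If anything between the two hosts (NAT, Azure endpoints, a firewall) blocks that inbound data connection, the login succeeds but LIST times out, which matches the symptom above.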

How to access a shared network drive in node.js

In IIS, I have a web service which runs under an application pool whose identity is a user that has access to a drive on a remote machine. This way, when the web service runs and tries to access the remote machine to read a file, we do not get any authorization errors.
I have now written my first Node.js app, but I am not sure how to give the app access to a file stream on the remote machine. I have the UNC path to the remote machine's file I want to read, but I am not sure whether I have to pass in the user's credentials to access the file, or whether I have to run the Node.js app under certain credentials.
Any clues?
I know there is node for IIS, but is there another way of doing this without IIS?
Update:
I just ran my app under my own user account, which is configured to allow access to the remote machine, and I had access without changing my code (in other words, just using the UNC path directly). However, how can I do this using another user's credentials (i.e. impersonation in Node.js)?
Node does not handle impersonation the way .NET applications do: Node.js is not Windows-specific, so it does not know about the Windows way of handling rights and elevation.
But that is not a problem. I have used the following technique on large financial networks.
As you pointed out yourself, the best solution is to have a dedicated user account for your Node.js application with sufficient rights to access the UNC path, but with no other rights. Then, when you run the application, run it as this user.
Let me suggest that you set up a service to run the Node.js application and specify the user account in the service definition. This makes it much easier and safer: if your application is ever hacked, the attacker will not be able to escape the restrictions of that account.
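Once the account is in place, the Node code itself needs nothing special; it just uses the UNC path. A minimal sketch (the server and share names are made up):

    // read a file from a UNC share; the process identity must have rights to it
    // note the doubled backslashes in the JavaScript string literal
    const fs = require('fs');

    fs.readFile('\\\\fileserver\\reports\\daily.txt', 'utf8', (err, data) => {
      if (err) {
        // EPERM/EACCES here usually means the account running node lacks share or NTFS rights
        console.error('Failed to read UNC path:', err.message);
        return;
      }
      console.log(data);
    });

For the service wrapper itself, tools such as NSSM or the node-windows package let you install a Node script as a Windows service and set the log-on account it runs under.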

IIS moving virtual directory to file share breaks impersonation of logged on user

We have an instance of IIS 6 running an intranet website with Windows Authentication and impersonate = true, so that it uses the NT credentials passed in by the client's browser.
The AppPool is set to run as a network service user, serviceAcctX, so that we can undo impersonation in rare cases (to read or write a resource that the client user does not have access to).
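For reference, the relevant web.config section for this kind of setup looks roughly like this (a simplified sketch, not the literal file):

    <!-- Windows auth plus impersonation: requests run under the caller's NT token -->
    <system.web>
      <authentication mode="Windows" />
      <identity impersonate="true" />
    </system.web>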
It works perfectly when the source of the virtual directory is on a local drive: the logged-in user is authenticated, and page content is customized based on authorization settings.
Our infrastructure team is trying to move the virtual directory source to a file share on a remote server. We have already gotten past the issue of the .NET security policy by adding full trust for that specific file share path, and we have set the Connect As property to the same serviceAcctX the AppPool runs as.
The site starts fine. However, the client user is not impersonated: the request is processed using the default serviceAcctX credentials instead of the client's NT credentials as before.
Is there a way to have the client impersonation still work as before and still have the virtual directory on a file share? Any pointers are greatly appreciated.
I'd put this in the category of Not A Good Idea.
There are a number of potential problems that crop up and you are introducing a lot of dependent complexity.
Instead, I'd go for something a little more "offline" than this. Use File Replication to keep the files in sync between your web server(s) and remote server.
Although slightly complex, it increases the survivability of your application. Meaning, if the remote server reboots, goes down, or there is a network problem between the two, your app is still functional. Further, you are still able to have the files on the remote server.
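As one concrete way to do that (using a scheduled robocopy mirror rather than the File Replication service; the paths below are placeholders):

    rem mirror the remote share into the local content directory
    rem /MIR keeps the trees identical; /R and /W bound retry stalls
    robocopy \\remoteserver\webcontent C:\inetpub\wwwroot\app /MIR /R:2 /W:5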
You may have to check the "Trust this computer for delegation" checkbox in Active Directory for the web server in order for the user's token to be passed on.
