Connecting Blue Prism on a local machine to a Blue Prism Application Server on a different machine

I am not able to connect Blue Prism to a Blue Prism Application Server on a different machine. I get this error:
The following error has occurred:
The caller was not authenticated by the service.
However, I was able to connect Blue Prism to the Application Server on the same machine. What do I need to do when the Application Server and Blue Prism are on different machines? Is there some change to be made in the BPServer.exe.config file to allow remote machines to connect to the Application Server?

The default server connection mode (WCF: SOAP with Message Encryption & Windows Authentication) authenticates using Windows / Active Directory.
If the remote machine and the server are in different domains or don't share an Active Directory, that may be the cause.
WCF: Insecure connection mode does not validate server identity using Active Directory -- try that instead.

The Blue Prism Server service needs to be set up with what we call a "service account". When doing this you should be logged in as the service account, because it uses Windows Authentication, i.e. the credentials of the logged-in account. This part I assume is working fine for you. In order to reach the App Server connection from an Interactive Client on a different box, you need to set the SPN for this particular service account, which will clear this error. Follow these steps: https://bpdocs.blueprism.com/bp-7-0/en-us/installation/bp-enterprise/SPN-configuration.htm
Reach out to the people who handle Active Directory at your organisation - for us they had the right level of access to run the "setspn" command.
It can also be caused by the situation described above with different domains - meaning the Active Directory is not shared - but this is less likely within one organisation.
The steps are similar to those for the issue that arose after a Microsoft update in January 2022, described here: https://help.blueprism.com/Alerts/1784860762/Latest-on-Windows-updates-from-11th-January-2022-causing-authentication-issues-in-Blue-Prism.
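As a rough sketch of what the AD admins typically run (not taken from the linked guide - the HTTP service class, the server FQDN, the port 8199 and the account name svc-blueprism are all placeholders, so substitute the values from the SPN configuration guide and your own environment):

# Register an SPN for the address the clients connect to, against the Windows
# account the Blue Prism Server service runs under (all names are placeholders)
setspn -S HTTP/bpserver01.corp.example.com:8199 CORP\svc-blueprism

# List what is already registered for that account
setspn -L CORP\svc-blueprism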

Related

Serve static files from network share as gmsa

I am trying to serve static files from a file server running Windows Server 2016. I would like to use a group managed service account for the connection.
I have attempted to configure IIS on Windows Server 2012 to use the gMSA. The Test-ADServiceAccount cmdlet returns True for the gMSA I am attempting to use on the IIS host. Under the basic settings of the IIS site configuration, I used the "Connect as" button and set it to the gMSA account with no password. The prompt then says "Connect as 'gmsa-foo$'". However, when I press "OK", I get an error that the specified password is invalid.
Can I use a gmsa to allow access to the remotely hosted static files that I want to serve? Do I need to use a particular version of Windows Server to do so?
Make sure you have added the gMSA account as the application pool identity.
It should be noted that this account may show unexpected behavior in IIS manager. For example, if you click on “Basic Settings” for an application that uses this account for its application pool, “Test Settings” may give you an error indicating “the user name or password is incorrect”. Usually, this can be ignored. Browsing any page in the application would be a better test – as long as you don’t receive a 503 response, the application pool username/password is fine.
You can find more information in the following document:
Windows Server 2012: Group Managed Service Accounts
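For reference, a minimal sketch of setting the application pool identity to the gMSA from PowerShell; the pool name "StaticFiles" and the account DOMAIN\gmsa-foo$ are placeholders:

# Requires the RSAT ActiveDirectory module and the WebAdministration module on the IIS host
Import-Module ActiveDirectory
Import-Module WebAdministration

# Confirm the gMSA is installed and usable on this host (should return True)
Test-ADServiceAccount -Identity 'gmsa-foo'

# Run the application pool as the gMSA
Set-ItemProperty 'IIS:\AppPools\StaticFiles' -Name processModel -Value @{
    identityType = 3                   # 3 = SpecificUser
    userName     = 'DOMAIN\gmsa-foo$'  # no password: Windows retrieves it for gMSAs
}

If the application still returns 503s after this, check that the gMSA also has NTFS/share permissions on the network path holding the static files.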

Azure VM: the user account used to connect to remote PC did not work

I have an Azure Virtual Machine joined to Azure Active Directory. A user from this AD has been added to the machine as an admin. Other people can successfully RDP to the machine with this user's credentials, but I get an error saying "The user account used to connect to remote PC did not work. Try again". Well, I have been trying the whole day. Does anyone know what can cause this?
The fun fact is, I can RDP to the machine using the local admin, but it keeps failing with the AD user.
I tried connecting with Microsoft Remote Desktop for Mac, mstsc for Windows and with Remote Desktop Connection Manager. The same result everywhere.
I tried different username formats:
alex.sikilinda#mydomain.com - other people can successfully log in using this format
AzureAD\alex.sikilinda#mydomain.com - for windows client getting the same error, for Microsoft Remote Desktop for Mac getting "Your session ended because of an error. If this keeps happening, contact your network administrator for assistance. Error code: 0x807"
AzureAD\AlexSikilinda mstsc error - "Remote machine is AAD joined. If you are signing in to your work account, try using work email instead", Mac - "Your session ended because of an error. If this keeps happening, contact your network administrator for assistance. Error code: 0x807"
Microsoft Remote Desktop for Mac version 10.2.3 (1343)
Windows 10 version 16299 (also tried with 1803 on another machine, the same result).
I also came across the same error on an AAD-joined Windows 10 machine, and I tried the following to solve it:
Change the VM Remote Desktop settings as in the picture
Create a new RDP config file
Open mstsc.exe, click on Show Options and then click Save As (give it a new name such as AzureAD_RDP and save it somewhere easy to find).
Open the saved file using Notepad. Verify that the following two lines are present; if not, add them and save (a PowerShell sketch that generates the whole file follows these steps).
enablecredsspsupport:i:0
authentication level:i:2
RDP to the target VM
Open the RDP config file that you just edited, enter the IP address of the VM, do not enter any username, and then connect.
Here you could use AzureAD\UPN or username to log in.
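As a convenience, a minimal sketch that generates such an .rdp file from PowerShell; the VM IP address and the output path are placeholders:

# Build an .rdp file with CredSSP disabled, matching the settings above
$vmIp = '203.0.113.10'                                # placeholder: the VM's public IP
$rdpPath = "$env:USERPROFILE\Desktop\AzureAD_RDP.rdp"
@"
full address:s:$vmIp
enablecredsspsupport:i:0
authentication level:i:2
"@ | Set-Content -Path $rdpPath

# Launch the connection; sign in as AzureAD\user@yourdomain.com when prompted
mstsc.exe $rdpPath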
I haven't tried disabling NLA (and wouldn't recommend it); in my case it was legacy MFA getting in the way of signing in to the VM, even though it was only enabled for the account and not enforced.
In my case, we're using Conditional Access with MFA, but we had to exclude the VM from the cloud apps (Azure Windows VM Sign-In), because we're not using Windows Hello (thanks Microsoft for a half-baked solution!).
See Login to Windows virtual machine in Azure using Azure Active Directory authentication for more details.

How do I connect Release Management 2013 client on a non-domain Windows 10 box?

I've got 2 machines:
A corporate desktop machine which is running Windows 7 SP1 which resides on the corporate domain and which I log into using a corporate domain account.
A personal laptop that I use when working from home via the Cisco VPN client but presently sits on my desk connected to the corporate WiFi (though I had it connected to the wire and on the same subnet as my desktop machine today also). This machine is not on the corporate domain; I log into this machine with a Microsoft Account.
I need to run Visual Studio 2013 Release Management Client from both machines. The machine on my desktop works fine when entering either the IP address or the URL into the Release Management Server URL entry field and everything hooks up and all is glorious.
On my Windows 10 laptop however, it's a different story. Every attempt to connect is met with the error:
The server specified could not be reached. Please ensure the
information that is entered is valid (please contact your Release
Management administrator for assistance). <-- I'm the admin
I can ping the machine both with IP address and with hostname, ruling out DNS issues. Both client machines are on the same subnet. Both machines are using the same outbound port.
Checking the event log, I see a bunch of entries with Message: The remote server returned an error: (401) Unauthorized.
Checking with Fiddler, on my desktop machine, I can walk through the handshake of each of the stages of startup and all is good. But in Fiddler on my laptop I see 3 401 Unauthorized errors before Release Management Client bombs and returns the rather uninformative message I posted above.
I've attempted to create a shadow account on my laptop and do the Shift-Right Click-Run As Different User dance, but I must be missing something because I can't get this to run.
I've talked to the network administrator who suggests that I should be able to access all of the same resources from both machines and that it must be a Release Management issue.
Is this an incompatibility between VS2013 Release Management & Windows 10 or something else? Has anyone else had this issue and overcome it? I have access to be able to administer the Release Management environment if there's changes that need to be made there and I'm a local administrator on both machines. I'm not however a domain administrator if changes need to be made there.
I would bet you simply have a security issue as the workstation is not domain-joined and the WPF client is using Integrated Authentication.
Often creating a local "shadow" user with same username and password, and running the client app under that account (run as) works.
Another option is to join the workstation to the domain or use a domain-joined VM.
After fully investigating the situation, it appears to have been a combination of factors. I am posting a response because this appears to be a relatively common problem:
The workstation was sending an unexpected credential to the server. To get around this, you have to configure the user account on the server without a domain in the username and create a shadow account on your local machine. When running the client application, you must either log into this shadow account on the local machine, or SHIFT + RIGHT CLICK and choose "Run as", entering your local shadow credentials. This then passes the shadow account to the server, which authenticates it without referencing the domain. OR
Create a user account on the server that matches the credentials on your local machine, including MACHINENAME\LocalUsername.
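A minimal sketch of the shadow-account variant, run in an elevated prompt on the non-domain laptop; the username, password and client install path are assumptions and the credentials must match the account configured on the Release Management server:

# Create the local "shadow" account (same username/password as on the server)
net user rmuser 'P@ssw0rd123!' /add

# Start the Release Management client under that account (the equivalent of
# Shift + right-click > Run as different user); the install path is an assumption.
runas "/user:$env:COMPUTERNAME\rmuser" '"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Release Management\Client\bin\ReleaseManagementConsole.exe"'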
Separately, there appeared to be a network issue when attempting to connect to the Release Management Server from the non-domain machine while connected inside the network. When connecting via the VPN from home this was resolved, but only after we'd ensured the server account and local machine accounts were correctly configured. The domain-joined machine always connected properly.

Active FTP on Azure virtual machine

I have set up FTP in IIS 8.0 on an Azure Windows Server 2012 virtual machine.
After following the instructions in this post (http://itq.nl/walkthrough-hosting-ftp-on-iis-7-5-a-windows-azure-vm-2/) I've been able to make FTP work fine in passive mode, but it fails when trying to connect in active mode from FileZilla. The FTP client can connect to the server in active mode, but it fails with a timeout error message when trying to execute the LIST command.
I carefully verified that the 20 and 21 endpoints are set on the Azure VM without pointing to a probe port, and that Windows Firewall allows external connections to VM ports 20 and 21.
I can't figure out why active mode doesn't work while passive mode works fine.
I know there are other users with the same issue.
Has anyone succeeded in setting up active FTP on an Azure VM?
This previous response is incorrect. https://stackoverflow.com/a/20132312/5347085
I know this because I worked with Azure support extensively. The issue has nothing to do with the server not being able to connect to the client, and my testing method eliminated client-side issues as a possibility.
After working with Azure support for 2 weeks, their assessment of the problem was essentially that “Active Mode FTP uses a series of random ports from a large range for the data channel from the client to the server. You can only add 150 endpoints to an Azure VM, so you couldn't possibly add all those ports and get Active FTP working 100%. In order to do this you would need to use 'Instance level public IP' and essentially bypass the endpoint mechanism altogether, put your VM directly on the internet, and rely entirely on the native OS firewall for protection.”
If you HAVE to use Active Mode FTP on Azure and are OK with putting your VM on a public IP, he provided this link:
https://azure.microsoft.com/en-us/documentation/articles/virtual-networks-instance-level-public-ip/
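For what it's worth, a minimal sketch of assigning an instance-level public IP with the classic (Service Management) Azure PowerShell module, along the lines of that article; the cloud service, VM and IP names are placeholders:

# Give a classic (ASM) VM an instance-level public IP so it is reachable
# directly, bypassing the cloud service endpoints (names are placeholders)
Get-AzureVM -ServiceName 'ftp-cloudservice' -Name 'ftp-vm' |
    Set-AzurePublicIP -PublicIPName 'ftp-pip' |
    Update-AzureVM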
UPDATE: Official response from Azure Support:
Josh,
First of all thanks with your patience on this. As I mentioned in my last email I was working with our Technical Advisors which are Support Escalation Engineers on reproducing this environment in Azure. Our tests were configured using WS_FTP 7.7 (Your version 7.1) and WS_FTP 12 client as well as the Windows FTP client. The results of our testing were the same as you are experiencing. We were able to log in to the server, but we get the same Command Port/List failures.
As we discussed previously Active FTP uses a random port for the data plane on the client side. The server connects via 21 and 20, but the incoming port is a random ephemeral port. In Passive FTP, this can be defined and therefore endpoints can be created for each port you use for part of the data plane.
Based on our extensive testing yesterday I would not expect any other Active FTP solution to work. The escalation Engineer that assisted yesterday also discussed this with other members of his team and they have not seen any successful Active FTP deployments in Azure.
In conclusion, my initial thoughts have been confirmed with our testing and Active FTP will not work in the Azure environment at this time. We are always striving to improve Azure’s offering so this may be something that will work in the future as we continue to grow.
You will need to move to a passive FTP setup if you are going to host this FTP server in Azure.
When using active FTP, the client initiates the connection to port 21 on the FTP server. This is the command or control channel, and this connection usually succeeds. However, the FTP server then initiates the data connection back to the client, from its port 20 to an ephemeral port the client announces with the PORT command. This data channel is used for all data transfers, including directory listings.
So, in your case, active FTP isn't working because the server can't initiate a connection back to the client. This is either a problem on the server (outbound firewall rule) or on the client itself. This is usually a good thing, because you don't want internet-based servers to be able to open connections to client machines.
In passive mode there is a clear client/server distinction where the client initiates both connections to the server. Passive mode is recommended, so if you got that working I'd stick with it.
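If you stay on passive mode, a minimal sketch of pinning the passive data-channel port range in IIS FTP so you can create matching Azure endpoints and firewall rules; the range 7000-7014 is an example, not a requirement:

# Requires the WebAdministration module on the VM; restrict the passive
# data-channel ports to a small, known range, then restart the FTP service
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.ftpServer/firewallSupport' -Name 'lowDataChannelPort' -Value 7000
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.ftpServer/firewallSupport' -Name 'highDataChannelPort' -Value 7014
Restart-Service ftpsvc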

OPC server does not connect from a remote computer

I need to implement an OPC server on Windows XP. I downloaded an OPC library and an OPC client (an application, not a library). I implemented my OPC server; when I use the client on my own machine everything runs normally, but when I connect from a remote computer I do not see my server. I understand that DCOM technology is potentially dangerous. I found this manual and did everything in it, but nothing changed. I disabled my Windows Firewall and also added port 135 to the Windows Firewall exceptions. In dcomcnfg I granted local and remote access to the "anonymous" and "all" groups, and granted local and remote launch & activation to the "administrators" and "all" groups. Still nothing changed. I did not grant rights on my DCOM component itself, because my reasoning was: I can't even get the list of servers, let alone work with them. There is no domain or Active Directory in my Microsoft network; can I achieve the desired result in this case?
There are a number of things which can go wrong with OPC DA over DCOM. Off the top of my head, you could try the following:
Check if the OPCEnum service is running on the server computer. This service provides the list of OPC servers to potential clients. It's part of the OPC Foundation redistributable.
Make sure that whatever dcomcnfg changes you applied are done both on the server and the client computer.
If you're using only local users, try creating a dedicated user for OPC access on both the server and the client computer, e.g. call it "opc". Then grant all the rights to this user in the "COM security" section of dcomcnfg. Run both the server and the client as "opc". Make sure that local users authenticate as themselves (see "Security options" in local policies).
If all else fails, a workaround can be to deploy the server on the client computer, register it, and then remove it. That worked for me once.
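A minimal sketch of the first and third points above, run from a command prompt or PowerShell on both machines; the username and password are placeholders and must be identical on the server and the client:

# Check that the OPC browsing service exists and is running (first point)
sc.exe query OpcEnum

# Create the same dedicated local user on BOTH machines (third point)
net user opc 'Str0ngPassw0rd1' /add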
The most common error is that DCOM has not been configured properly. I find this guide very useful:
ftp://ftp.nist.gov/pub/mel/michalos/Software/Github/MTConnectSolutions/MtcOpcAgent/doc/DCOM_Config_Step_by_Step.pdf
Also this other guide gives you a big understanding of a Remote OPC DA:
http://www.kepware.com/Support_Center/SupportDocuments/Remote%20OPC%20DA%20-%20Quick%20Start%20Guide%20(DCOM).pdf
I had a similar problem when I tried to communicate with a remote OPC server on a different PC. Please pay attention to point number 2 of the second guide (2. Users and Groups): make sure both PCs are logging in under the same user account with the same password.
2.1 Domains and Workgroups: When working within a workgroup, each user will need to be created locally on each computer involved in the connection. Furthermore, each user account must have the same password in order for authentication to occur. A blank password is not valid in most cases. Because changes may need to be made to the local security policy on each computer, remote connectivity within a workgroup has the potential to be the least secure connection. For more information, refer to Local Security Policies.
When working within a domain, local users and groups are not required to be added to each computer. A domain uses a central database that contains the user accounts and security information. If working within a domain is preferred, a network administrator may have to implement the changes. Mixing domains and workgroups will require both computers to authenticate with the lesser of the two options. This means that the domain computer will require the same configuration as it would if it were on a workgroup. Local user accounts must be added to the domain computer.
