Can Coded UI Test Agent and Test Controller communicate with each other while on different networks? - coded-ui-tests

We are trying to get a Test Controller and a Test Agent to communicate with each other so that Coded UI test scripts can be executed remotely, given that the Test Agent and Test Controller are on machines on two different networks.
Can anyone suggest what needs to be done in order to get this going? We have tried a lot, but were not able to get the Test Agent registered with a Test Controller outside its network.

The Troubleshooting Guide from Microsoft states:
The technology used to connect the remote test execution components is .NET Remoting over TCP ports.
The network diagrams indicate that the two components need to be network-reachable from each other. If they are not on the same network, you will need some sort of VPN or port-forwarding solution. See also the guide on configuring ports.
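Once a VPN or port-forwarding path is in place, a quick sanity check before attempting agent registration is to confirm that the controller's TCP port is actually reachable from the agent machine. A minimal sketch (the hostname is a placeholder; 6901 is the Test Controller's default port, so adjust if yours was reconfigured):

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from the Test Agent machine, e.g.:
#   is_port_reachable("controller.example.com", 6901)
# "controller.example.com" is a placeholder for your controller's address.
```

If this returns False, no amount of configuration on the agent side will get registration working until the network path itself is fixed.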

Related

Connection Monitor Vs Connection Troubleshoot Vs IP Flow Verify

There is a whole suite of tools available to monitor networks in Azure. Of these, I am trying to understand the difference between Connection Monitor, Connection Troubleshoot, and IP Flow Verify. The last two are part of Network Watcher.
To me, it seems like these are meant for the same purpose: to check whether communication is allowed between a source and a destination.
What exactly are the differences between these services, and what are the use cases for each?

What is the best architecture for a web-app communicating with a gRPC service?

I have built a website with chess.js and Java chess libraries that communicates with a custom C++ chess engine via gRPC with Python. I am new to web dev and especially gRPC, so I am not sure which architecture I should be going for when it comes to hosting.
My questions are below:
Do the website and gRPC service need to be hosted on separate server instances and connected via API?
Everything is hosted locally right now, and I use two ports (5000 for the website and 8080 for the server). If the site and server aren't separate, is this how they will communicate with each other on a single server (one local port)?
I am using this website as a showcase in my portfolio for job searching, so I am looking for free/cheap hosting that also provides decent RAM, since the C++ chess engine is fairly computationally intense. Does anyone have suggestions for which hosting service I should use for this?
I was considering free hosting for the website and then a cheap dedicated server for the service (if the two should be separate). Is this a bad idea?
Taking all tips and tricks that anyone has to offer. Again, totally novice to web dev, hosting, servers, etc.
NOTE This is an architecture rather than a programming question and is discouraged on Stack Overflow.
The website and gRPC service may be hosted on the same server (as you're doing locally). You have the flexibility of running both processes (website and gRPC service) on a single, more powerful host, or separately on two hosts.
NOTE Although most often gRPC communicates over TCP sockets, it is possible to use UNIX sockets and even buffered memory too.
If you run both processes on a single host, you will want to consider connecting the website to the gRPC service via localhost (127.0.0.1 or the loopback device). Using localhost, network traffic doesn't leave the host.
If you run both processes on different hosts, traffic must travel across a network. This is slower and will likely incur charges when hosted.
You will want to decide whether the gRPC service should be exposed to any network traffic other than your website. In many cases, a gRPC service is used to provide an API to facilitate integration by 3rd-parties. If you definitely don't want the gRPC service accessed by other things, then you'll want to ensure either that it's bound to localhost (see above; and thereby inaccessible to anything other than other processes e.g. your website on the host) or firewalled such that only the website is permitted to send traffic to it.
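The bind-address point can be illustrated with plain sockets; the same idea applies to the address you give a gRPC server. A sketch (everything here is illustrative, not taken from the question's setup):

```python
import socket

def make_server(bind_addr: str) -> socket.socket:
    """Create a listening TCP socket.

    Binding to 127.0.0.1 keeps the service reachable only from processes
    on the same host; binding to 0.0.0.0 exposes it on every interface.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    return srv

# Private to this host (the website connects via 127.0.0.1:<port>):
#   make_server("127.0.0.1")
# Reachable from other hosts (firewall it if you do this):
#   make_server("0.0.0.0")
```

With gRPC's Python API the equivalent choice is passing `"127.0.0.1:50051"` rather than `"[::]:50051"` to `server.add_insecure_port(...)`.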
You can find cheap hosting of virtual machines (VMs), and you'll likely want to consider hosting both processes on a single VM, ensuring that you constrain the resources you pay for and that you secure traffic (as above).
You may wish to consider containerizing the application. In this case, while it's possible to run both processes in a single container, this is not considered good practice. You should thus consider two containers (website and gRPC server). Many hosting/cloud platforms provide container hosting, and this is generally easier than managing VMs (since you don't need to patch/update the OS and other dependencies). If you can find a platform that accepts a Docker Compose description or a Kubernetes Deployment in which you describe both your services and how they interact, such that the gRPC service is only accessible to the website, that could be ideal.
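As a rough sketch of the two-container approach (image names, service names, and port numbers are placeholders, not taken from the question): a minimal Docker Compose file in which only the website is published externally, while the gRPC service is reachable solely over the Compose-internal network:

```
services:
  website:
    image: my-website:latest        # placeholder image name
    ports:
      - "80:5000"                   # only the website is exposed publicly
    environment:
      GRPC_TARGET: "engine:8080"    # reach the engine via the internal network
  engine:
    image: my-chess-engine:latest   # placeholder image name
    # no "ports:" entry, so the gRPC service is not reachable from outside
```

Because the `engine` service publishes no ports, outside traffic can never reach it directly; the website addresses it by service name on the internal network.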

How can I mock remote network devices in Linux?

I am making a system that needs to be able to determine if a host is reachable or not by pinging it.
As part of a small end-to-end smoke test suite, I want to be able to bring up hosts and tear them down during the test suite, to test that my system responds correctly. Unfortunately, actually spinning up remote hosts and tearing them down is costly and extremely slow.
Is there any way I can mock this in Linux?
Bonus points if this doesn't require running the test suite as root.
My hope is that I can create a few virtual interfaces, assign arbitrary IP addresses to them, and bring them up/down during the test to simulate hosts going down and coming back up. I should even be able to simulate open ports on the hosts using netcat, which would also be tremendously useful. I haven't had any luck figuring this out yet, though (if it's even possible); I suspect my combined Google-fu and network-engineering skill points are too low.
Depending on your network requirements, I think Docker could fit your needs.
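One root-free trick worth knowing: on Linux the entire 127.0.0.0/8 range is routed to the loopback interface by default, so you can simulate a host with an open port on, say, 127.0.0.2 simply by listening there, and "tear it down" by closing the socket. A sketch under that assumption (addresses and ports are arbitrary):

```python
import socket

class MockHost:
    """Simulate a host with one open TCP port on a 127.0.0.0/8 address.

    On Linux the whole 127.0.0.0/8 block hits the loopback interface,
    so no root privileges and no interface configuration are needed.
    """

    def __init__(self, addr: str = "127.0.0.2", port: int = 0):
        self.addr = addr
        self.port = port  # 0 lets the OS pick a free port
        self._srv = None

    def up(self) -> None:
        """Start listening, i.e. bring the fake host's port 'up'."""
        self._srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._srv.bind((self.addr, self.port))
        self._srv.listen(1)
        self.port = self._srv.getsockname()[1]

    def down(self) -> None:
        """Close the socket, so connection attempts are refused again."""
        if self._srv:
            self._srv.close()
            self._srv = None
```

Note the caveat for ping-based checks: loopback addresses answer ICMP regardless of whether anything is listening, so this only simulates open/closed ports. For true ping-level up/down behaviour you would need network namespaces (root) or the Docker approach.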

Node Red - Accessing dashboard from remote server

I have a question regarding the Node-RED dashboard. I've got my dashboard all set up and working. Now I want to be able to access the dashboard from outside my local network. Right now I do this through a VNC server. What needs to happen next is that clients need to be able to access the dashboard, but of course they must not get access to my VNC server. I have done my fair amount of Google work. I (somewhat) understand that a service like ngrok (ngrok.com) or dataplicity (dataplicity.com) is what I am looking for. What would be the best way of setting this up safely?
Might be useful to clarify: I'm using a Raspberry Pi!
Thanks in advance!
If you want to give the outside world access to your dashboard, you can also consider hosting your Node-RED application in the cloud. See the links at the bottom left of the page https://nodered.org/docs/getting-started/
Most of those services have a free tier, so it might cost you nothing.
If you cannot deploy your complete Node-RED application in the cloud (e.g. because it is reading local sensors), then you can split it into two Node-RED applications: one running locally and one (with the dashboard) running in the cloud. Of course, the two Node-RED applications then need to exchange messages: for this, the cloud services mentioned on that page also provide a secure way to send and receive events from the Node-RED cloud application.

Create a load balancer and RADIUS authentication under Linux

I have a CentOS 6 server working as a gateway, receiving two internet connections from two ISPs. What I need to do is load-balance those two connections and forward the traffic through a third network card into the internal network.
I also need to use a RADIUS server to perform network authentication for the users.
Solutions already tried:
I tried to create a bridge between the two input connections; it worked, but I'm not able to perform traffic control.
I also tried to install FreeRADIUS.
My questions are:
1. Is it possible to perform the load balancing from FreeRADIUS, meaning I can use it alone for the whole solution?
2. If not, can anyone please guide me to a solution or a utility to perform such a task?
P.S. I can't use a dedicated firewall such as ZeroShell or EndianFirewall; I need to implement the solution under CentOS 6.
Yes, you can create pools of home servers and forward requests between them; see raddb/proxy.conf.
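For question 1 specifically, the home-server pooling looks roughly like the fragment below (addresses, shared secrets, and names are placeholders; the comments in raddb/proxy.conf are the authoritative reference). Note that this load-balances RADIUS authentication requests across backend servers; it does not balance the ISP uplinks themselves, which needs routing-level tooling.

```
home_server radius1 {
        type = auth
        ipaddr = 192.0.2.10      # placeholder backend address
        port = 1812
        secret = testing123      # placeholder shared secret
}

home_server radius2 {
        type = auth
        ipaddr = 192.0.2.11
        port = 1812
        secret = testing123
}

home_server_pool auth_pool {
        type = load-balance      # distribute requests across the servers
        home_server = radius1
        home_server = radius2
}

realm example.com {
        auth_pool = auth_pool
}
```

The `type` setting on the pool controls the distribution strategy; `load-balance` spreads requests, while `fail-over` only moves to the next server when one stops responding.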
