In the azure-java-sdk samples there is an example, PublishEventsWithWebSocketsAndProxy.java:
// By default, the AMQP port 5671 is used, but clients can use web sockets, port 443.
// When using web sockets, developers can specify proxy options.
// ProxyOptions.SYSTEM_DEFAULTS can be used if developers want to use the JVM configured proxy.
ProxyOptions proxyOptions = new ProxyOptions(ProxyAuthenticationType.DIGEST,
    new Proxy(Proxy.Type.HTTP, InetSocketAddress.createUnresolved("10.13.1.3", 9992)),
    "digest-user", "digest-user-password");

// Instantiate a client that will be used to call the service.
EventHubProducerClient producer = new EventHubClientBuilder()
    .transportType(AmqpTransportType.AMQP_WEB_SOCKETS)
    .proxyOptions(proxyOptions)
    .connectionString(connectionString)
    .buildProducerClient();
But I don't understand the parameter values: 10.13.1.3, digest-user, digest-user-password.
Can anyone tell me whether these are the values I have to use if my organization has a Web Application Firewall when connecting to Azure?
The parameters that you're referring to configure access to a proxy server. In this case, 10.13.1.3 is the address of the proxy server to use, digest-user is the username for authentication, and digest-user-password is the associated password. That set of credentials is used because the example specifies ProxyAuthenticationType.DIGEST.
With respect to your scenario, how you need to configure the client to use your organization's proxy isn't something that can be answered generally. It will depend heavily on how your organization configures the environment, manages network flow, and applies security practices.
I'd recommend reaching out to an IT Ops representative in your organization for that information; once you have it, we can help with any questions or problems that you run into while configuring the client to work within that context.
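For illustration, once IT Ops gives you the proxy details, they plug into the same builder calls shown in the sample. Below is a minimal sketch with placeholder values (the host, port, and credentials are hypothetical and would be replaced with whatever your organization provides); ProxyOptions.SYSTEM_DEFAULTS, mentioned in the sample's comments, is the alternative if the proxy is already configured at the JVM level:

import java.net.InetSocketAddress;
import java.net.Proxy;

import com.azure.core.amqp.AmqpTransportType;
import com.azure.core.amqp.ProxyAuthenticationType;
import com.azure.core.amqp.ProxyOptions;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;

public class PublishThroughCorporateProxy {
    public static void main(String[] args) {
        String connectionString = System.getenv("EVENT_HUB_CONNECTION_STRING"); // placeholder

        // Option 1: reuse the JVM-configured proxy (e.g. -Dhttps.proxyHost / -Dhttps.proxyPort).
        ProxyOptions jvmDefaults = ProxyOptions.SYSTEM_DEFAULTS;

        // Option 2: explicit proxy details supplied by IT Ops (all values below are placeholders).
        ProxyOptions explicitProxy = new ProxyOptions(
                ProxyAuthenticationType.BASIC, // or DIGEST / NONE, depending on the proxy
                new Proxy(Proxy.Type.HTTP, InetSocketAddress.createUnresolved("proxy.example.org", 8080)),
                "proxy-user",
                "proxy-password");

        EventHubProducerClient producer = new EventHubClientBuilder()
                .transportType(AmqpTransportType.AMQP_WEB_SOCKETS) // web sockets are required when proxying
                .proxyOptions(jvmDefaults)                         // or explicitProxy
                .connectionString(connectionString)
                .buildProducerClient();

        producer.close();
    }
}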
I am planning on building a K8s cluster with many microservices (each running in pods with services ensuring communication). I'm trying to understand how to ensure communication between these microservices is secure. By communication, I mean HTTP calls between microservice A and microservice B's API.
Usually, I would implement an OAuth flow, where an auth server would receive some credentials as input and return a JWT. And then the client could use this JWT in any subsequent call.
I expected K8s to have some built-in authentication server that could generate tokens (like a JWT) but I can't seem to find one.
K8s does have authentication for its API server, but that only seems to authenticate calls that perform Kubernetes specific actions such as scaling a pod or getting secrets etc.
However, there is no mention of simply authenticating HTTP calls (GET, POST, etc.).
Should I just create my own authentication server and make it accessible via a service or is there a simple and clean way of authenticating API calls automatically in Kubernetes?
Not sure how to answer this broad question, but I will try my best.
There are multiple solutions you could apply, but there is nothing built into K8s for application-level auth that you can use.
Either you set up a third-party OAuth or IAM server, or you write your own auth microservice.
There are two different areas here that you should not mix up:
For service-to-service communication (service A to service B), it would be best to use a service mesh like Istio or Linkerd, which provides mutual TLS for security and is also easy to set up.
The connections between services will then be encrypted and secured, but it's on you to set it up and manage it.
If you just run plain traffic inside your backend, you can follow the same method that you described: passing plain HTTP with a JWT payload between the backend services (a rough verification sketch follows below).
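To make that concrete, here is a minimal sketch of how a receiving service could verify such a JWT. It is only an illustration: it assumes the com.auth0:java-jwt library, an HMAC shared secret, and a custom "scope" claim, none of which come from the original answer (Keycloak, for example, would typically issue RSA-signed tokens instead):

import com.auth0.jwt.JWT;
import com.auth0.jwt.JWTVerifier;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;
import com.auth0.jwt.interfaces.DecodedJWT;

public class JwtCheck {
    // Hypothetical shared secret; in practice it would come from a secret store or the issuer's public key.
    private static final String SECRET = System.getenv("JWT_SHARED_SECRET");

    /** Returns true if the token is valid and carries the (hypothetical) scope this service expects. */
    public static boolean isAuthorized(String bearerToken) {
        try {
            JWTVerifier verifier = JWT.require(Algorithm.HMAC256(SECRET))
                    .withIssuer("auth-service") // hypothetical issuer name
                    .build();
            DecodedJWT jwt = verifier.verify(bearerToken);
            String scope = jwt.getClaim("scope").asString();
            return scope != null && scope.contains("service-b:read");
        } catch (JWTVerificationException e) {
            // Bad signature, expired token, or wrong issuer.
            return false;
        }
    }
}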
Keycloak is a good option for the OAuth server; I would also recommend checking out oauth2-proxy.
Here are a few articles that might also be helpful:
https://medium.com/codex/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-a980c996c259
My own article on Keycloak with Kong API Gateway on Kubernetes:
https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56
GitHub files for the POC: https://github.com/harsh4870/POC-Securing-the--application-with-Kong-Keycloak
Keycloak deployment on K8s: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
I have currently configured express-gateway to communicate with a service on my backend, exposed on a unique port on my machine, and it's working fine. The gateway serves as a proxy to the services and currently does some security checks and JWT authentication, so only authorized requests (through JWT validation) get sent to the services as configured. However, I'm concerned that if I don't put some sort of authentication on my service, then anyone who knows the port (or URL) my service runs on can access it directly and bypass the gateway. I'm looking for a way to set up some sort of auth between the gateway and the service (maybe through keys) so that only the gateway can communicate with the services and not any other client. I currently can't find anything in the docs specifically for that. Also, if there's something wrong with my architecture, I'd appreciate it if you could point it out. Thank you.
The path between Express Gateway and your back end should be on a private, encrypted network so there is no way for anyone to bypass the gateway.
With this architecture, you don’t need to authenticate on the server side, and if you use Express Gateway scopes, you don’t even need to check whether the user is authorized to perform the requested action.
We need some async workers for 1-2 minute tasks and then want to provide the user with feedback from these tasks.
The idea would be to use the RabbitMQ MQTT-over-WebSockets plugin and provide the feedback to the user directly in the browser when the calculations are done.
In our "old" stack we have some API endpoints as a layer between the user (browser) and the RabbitMQ services, which more or less act as fire-and-forget.
As mentioned, we now need to provide feedback, and we thought it would be great to use WebSockets (the RabbitMQ MQTT plugin) for that.
But we are wondering: how do we secure the exposed WebSocket endpoint for each user? Currently it's not a problem, as we have AMQPS clients with X.509 certificates.
Our new feature needs public access, so we cannot authenticate the user beforehand.
Is there a way to use the exposed endpoint directly and securely, or do we need a layer in between, as we have now?
The RabbitMQ Web MQTT plugin supports TLS. You can then use a username / password to authenticate the user, or use client certificates.
If you need public access then there is no way to secure the endpoint. This applies to all MQTT brokers, not just RabbitMQ.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
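As an illustration of the username/password option over WebSockets mentioned above, here is a minimal client sketch using the Eclipse Paho Java client, which accepts ws/wss broker URIs. The host name, TLS port, /ws path, topic, and credentials below are assumptions to be adapted to your setup:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class ResultFeedbackSubscriber {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL; RabbitMQ's Web MQTT plugin serves the /ws path, the TLS port is whatever you configure.
        String brokerUrl = "wss://rabbitmq.example.org:15676/ws";

        MqttClient client = new MqttClient(brokerUrl, MqttClient.generateClientId(), new MemoryPersistence());

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("result-consumer");                         // credentials from your auth backend
        options.setPassword("result-consumer-password".toCharArray());
        options.setCleanSession(true);

        client.connect(options);

        // Illustrative topic: one result topic per task.
        client.subscribe("results/my-task-id", 1, (topic, message) ->
                System.out.println("Feedback: " + new String(message.getPayload())));
    }
}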
If you use a pluggable authentication source (sorry, not familiar with what RabbitMQ offers here), e.g. one that stores users/passwords in a database, then you can generate a short-lived set of credentials for each session; the web page can request these from the server via a REST API and then use them to authenticate the MQTT connection over WebSockets.
This means the credentials are only exposed for a short time as temporary variables in the browser, and they can be revoked easily as soon as the web session/actions are complete. A rough server-side sketch of this idea follows below.
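The sketch below assumes the RabbitMQ management plugin is enabled and uses its HTTP API to create a throwaway user per session; the endpoint URL, admin credentials, and permission patterns are placeholders, and a pluggable backend such as rabbitmq_auth_backend_http would be another way to achieve the same thing:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.UUID;

public class SessionCredentialIssuer {
    private static final HttpClient HTTP = HttpClient.newHttpClient();
    // Hypothetical management endpoint and admin credentials.
    private static final String MGMT_URL = "http://rabbitmq.internal:15672";
    private static final String ADMIN_AUTH =
            "Basic " + Base64.getEncoder().encodeToString("admin:admin-password".getBytes());

    /** Creates a short-lived RabbitMQ user for one browser session and returns {username, password}. */
    public static String[] issueSessionCredentials() throws Exception {
        String user = "session-" + UUID.randomUUID();
        String password = UUID.randomUUID().toString();

        // Create the user via the management HTTP API.
        put("/api/users/" + user, "{\"password\":\"" + password + "\",\"tags\":\"\"}");
        // Grant permissions on the default vhost ("/" is URL-encoded as %2F); tighten these patterns in practice.
        put("/api/permissions/%2F/" + user,
                "{\"configure\":\".*\",\"write\":\".*\",\"read\":\".*\"}");

        // Later, revoke with DELETE /api/users/<user> once the session or task is finished.
        return new String[] { user, password };
    }

    private static void put(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(MGMT_URL + path))
                .header("Authorization", ADMIN_AUTH)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Management API call to " + path + " failed: " + response.statusCode());
        }
    }
}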
I want to extend IdentityServer 3 with an 'admin' part where users can manage things like users, clients, etc. This part should be secured by the same ID server implementation (same app in IIS). Do I have to build a separate app, or can I extend the same ID server solution? How do I configure the OWIN startup then? When I have
app.Map("/Identity"....)
how do I add:
app.UseOpenIdConnectAuthentication
This results in an 'external' login provider, but that is not what I want. I also tried to add:
app.Map("/admin", config => config.UseOpenIdConnectAuthentiaction())
But that does not work as well, so:
How to have ID server and a client combined in one Solution?
Please help.
Have a look at IdentityManager, provided by the developers of IdentityServer. This will get you up & running very quickly.
Security Model
The security model can be configured to only allow users running on the same machine or can be configured to use any Katana based authentication middleware to authenticate users.
Hosting Options
IdentityManager is hosted as OWIN middleware. It can be configured with the UseIdentityManager extension method for Katana
This is how you "Get started"
I created an iWidget for IBM Connections, which has to retrieve data from our external web application through the provided proxy. However, said application requires the user to be authenticated before providing an answer.
Is there any general recommendation on how to solve this? I'm aware that I can get the current user from the iScope of the widget, but just forwarding this information to our application is not secure, since everyone could create such a request, pretending to be any user. I also know that the proxy can be configured to forward LTPA credentials, but I don't know how to validate such a token; maybe IBM provides a library for this task that I'm just not aware of?