My team and I are planning to develop a new customer administration panel from scratch for organizing servers and their configs as well as customers and invoices. In the end, we want to manage our whole company with one system.
Our team consists of two companies, which split approximately one year ago. That's why we want a secure but still connected backend.
There will be an independent server which runs the website itself and a WebSocket for communication between the server side and the front end. Now we're facing the following obstacle: for security and customer-privacy reasons, we have to store customer data on our companies' own servers. So the WebSocket on the independent server has to fetch data from server A or server B, depending on which user (company) is logged in.
We are running out of ideas for how the data can be stored separately and securely. If we want to fetch data from server A, the auth credentials for it have to be stored on the independent server, don't they? A developer from company B would then be able to query that data.
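To make the concern concrete, here is a rough sketch (all names and URLs are made up) of what the gateway would apparently have to do – and why the credentials for both backends end up sitting on the shared server:

```typescript
// Hypothetical sketch of the routing problem on the independent gateway:
// for every request it must pick the right company backend and use that
// backend's credentials, so those credentials live on the shared server.
interface BackendConfig {
  baseUrl: string;   // e.g. the internal API of server A or server B
  apiToken: string;  // credential that ends up stored on the gateway
}

const backends: Record<"companyA" | "companyB", BackendConfig> = {
  companyA: { baseUrl: "https://server-a.example/api", apiToken: "SECRET_A" },
  companyB: { baseUrl: "https://server-b.example/api", apiToken: "SECRET_B" },
};

// Called by the WebSocket handler once the logged-in user's company is known.
async function fetchCustomerData(
  company: "companyA" | "companyB",
  customerId: string,
): Promise<unknown> {
  const backend = backends[company];
  const res = await fetch(`${backend.baseUrl}/customers/${customerId}`, {
    headers: { Authorization: `Bearer ${backend.apiToken}` },
  });
  return res.json();
}
```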
In addition, there should be the ability to share files, configs, users and customers, because departments of both companies work together on the same customers. This collaboration can be compared to several doctors all needing the same records for each patient.
Can you help us figure out how to secure our planned customer administration panel? It's not like Nextcloud, where everyone runs their own instance, because the "brain" – just collecting data from our company servers – should live on the independent server.
Hoping for some ideas.
Regards!
I'm asking for your help to better understand whether my cross-domain tracking (Adobe Analytics via Experience Cloud ID) is working properly. To me it seems it is not.
As you can see in screenshot 1, my visits might come from Domain A and go, within the same session, to Domain B. We're collecting data from both domains into the same AA Report Suite.
The Experience Cloud ID Service is active in the same way for both configurations (same MC org ID, as you can see).
In Analytics Workspace (screenshot 2) I created a fallout analysis to show how visits move from Domain A to B. The analysis is based on two segments containing visits that in turn include hits for Domain A or Domain B.
I expected to see Domain A visits also flow through to Domain B, but it seems they don't. No visits are going there... how can that be?
Am I missing something in the Experience Cloud ID configuration?
Thanks so much
The out-of-the-box setup for the Adobe Experience Cloud Visitor ID Service requires the browser to be able to make a third-party call to a subdomain under demdex.net, and it then stores a cookie containing the user's identifier under the demdex.net domain. See Adobe's KB for a more detailed description of the process.
If the browser for whatever reason cannot save or read that cookie, then as the visitor goes from site A to site B, the JavaScript library (i.e. Visitor.js) will keep requesting a fresh set of identifiers from demdex.net or, if it fails to contact demdex.net, will generate a set of identifiers locally.
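As a quick sanity check, you can compare the Experience Cloud ID that Visitor.js reports on each domain within one session. This is only a rough sketch assuming the standard Visitor.js global is present on the page; substitute your own IMS org ID:

```typescript
// Run this in the browser console on Domain A and then on Domain B within
// the same session: if cross-domain stitching works, both domains should
// report the same Experience Cloud ID (MCMID).
declare const Visitor: {
  getInstance(orgId: string): { getMarketingCloudVisitorID(): string };
};

const orgId = "YOUR_ORG_ID@AdobeOrg"; // placeholder for your IMS org ID
// Note: if the ID is not ready yet this can return an empty string; the
// method also accepts a callback if you prefer to wait for it.
const mcid = Visitor.getInstance(orgId).getMarketingCloudVisitorID();
console.log("MCMID on", location.hostname, "=", mcid);
// Different MCMIDs on the two domains usually mean the demdex.net
// third-party cookie was blocked, so each domain generated its own ID.
```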
I am a beginner web designer and I am struggling to find relevant information online about how I should go about managing API keys for clients! I would really appreciate any tips or insights!
I have my own Google account and already have my own API key (JavaScript API) for my own website. However, when creating websites for clients, is it okay to use the same API key? Or should I create a new API key for each client in my own account (creating new "projects")? Or should I be creating a Google account for each client and then creating each client an API key through their own account?
I also know that there are usage limits on API keys, so I want to make sure I don't exceed these if I use one key for multiple sites. How can I monitor this?
Looking for any advice on the best and most efficient way to go about this. I don't know too much about how API keys work!
Much appreciated :)
I will be using the Google API as an example. Yes, you should always create a new project for each client. There are a multitude of reasons to do this, and you already mentioned some of them:
API query usage limit.
Separated client billing & usage breakdown for each project.
Security and revocation of compromised APIs.
Restricted security profiles: domain whitelisting, IP addresses, device usage, etc.
Access management and role management.
Traffic and analytical reasons.
Creating credentials
Depending on your organisation's needs and project scale: for us, we create credentials (API key / OAuth ID / service account key) for every platform the key will be used on. For example, if we are developing an e-commerce website that comes with an app, we would issue 3 keys (1 for web, 1 for the Android app, 1 for the iOS app). This lets us fine-tune the access permissions and track usage.
What works for you?
If you are a freelancer or work in a small enterprise, the least you should do is separate every client into their own project. There is no need to create a new Google account for each project. (You can always transfer ownership of a project to another account if your client requests it at a later time.)
The above screenshot shows how we categorize items in our account: for each project we are contracted for (possibly with the same client) we create a separate project entry.
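As a rough illustration (the key below is a placeholder), each client site then loads the Maps JavaScript API with the key from its own project, so usage, billing and restrictions stay separated per client:

```typescript
// Minimal sketch: one key per client project, restricted to that client's
// domain under "Application restrictions" in the Cloud Console.
const CLIENT_MAPS_API_KEY = "AIza...client-specific-key"; // placeholder

function loadMapsApi(onReady: () => void): void {
  // The Maps JavaScript API is loaded via a script tag and calls the
  // global function named in the callback parameter once it is ready.
  (window as any).initMap = onReady;
  const script = document.createElement("script");
  script.src = `https://maps.googleapis.com/maps/api/js?key=${CLIENT_MAPS_API_KEY}&callback=initMap`;
  script.async = true;
  document.head.appendChild(script);
}

loadMapsApi(() => {
  // new google.maps.Map(...) would go here
  console.log("Maps API ready for this client site");
});
```

Per-key usage can then be monitored per project in the Cloud Console under APIs & Services, which also answers your quota-monitoring question.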
We are planning a new multi-tenancy Azure application based on ASP.NET MVC.
Customer data must be completely separated. Customer A may know nothing about customer B, not even its existence.
In addition to our business logic, each customer may create their own users and groups and maintain private contacts and calendars with user management.
To meet these criteria, I would like to use Active Directory and Exchange Server. According to my research, Exchange Server 2013 is capable of handling multiple tenants and domains.
So my idea is the following:
I'm not able to post images. So please take a look to http://img144.imagevenue.com/img.php?image=551844509_AD_Structure_122_27lo.jpg
A main domain is created in Azure (parent domain)
Users in this domain are just for our global app support
Each customer is separated into its own Active Directory child domain
Our support has administrative privileges in each customer domain
Each customer has a single admin account and is able to create several users and groups
The customer can discover his own domain, but not other customer domains or the parent domain
An Exchange Server 2013 server is installed on a VM.
Each customer domain is connected to the Exchange server (multi-tenancy feature)
Contacts, tasks and calendars are managed with Exchange for each customer
Customer A is not able to discover or find any other customer or their data
Login will be done with WIF, and the user will interact as a domain user in his own domain
We do not want to use Office 365
Is this scenario and structure possible? And is it possible with Azure?
We will migrate about 3,000-5,000 customers to this application, and over the next few years we expect to grow to 20,000 customers.
Other features would be nice:
We want to host our own database servers in our datacenters and connect them through a VPN to our Azure application, to prevent Microsoft from copying them to the U.S.
Same for shared files and customer files
Single Sign On from customer site to the application
Yes, the scenario is possible and can be done within Azure. You can also connect your datacenter to Azure via VPN. As for SSO support within your application, that's up to you to build in, but it is doable. One thing I can point out, without knowing too much about your application or the end product you're trying to build, is that you'll have to take into account the per-user license ramifications for the Exchange portion. It sounds like you're leveraging Exchange for most features minus mail. You'll need to look into SPLA if you haven't done so already.
Another item to point out is the density per server with Exchange. You don't go into detail about what your Exchange architecture looks like, so I assume you're installing an Exchange server with all roles on a single server.
At the moment, I would say your design is at a rough stage and needs to be worked out in more detail to ensure that what you're trying to do is practical and that the tools you're using are the correct ones for what you want to accomplish.
I have a client that has chosen to use Business Catalyst for their public-facing services, and they want to access roughly four different servers for various activities. The design team has put forth a requirement to be able to log into these various servers using unique login forms on Business Catalyst for each destination.
The first issue is having a login form within an HTTPS page. Business Catalyst has "secure zones" which can be exposed to users that have already logged into Business Catalyst, and I believe there is a way to do so without login by opening up the secure zone to a range of IP addresses. That doesn't feel like a good-faith move by any developer (a secure zone is an oxymoron if it has to be exposed to everybody), so let me know if that passes the sanity check. Having the user log in to Business Catalyst just so they can log in to one of the secure servers is not going to work from a UX perspective.
The second issue is that Business Catalyst states that it must be within a secure zone before it can do any work with the external tools I need it to work with. This might be solved by resolving the first issue, but this has more to do with form queries in general. I have content modules that need to query these servers, without login, to pull non-critical information down as a response.
I have performed a non-exhaustive search over this weekend to try and find a graceful solution to this challenge, but it doesn't appear to be something that Business Catalyst was designed to handle.
For those of you who prefer the TL;DR:
I need a secure way to log in to 1 of 4 servers from Business Catalyst without logging in to Business Catalyst (current implementation theory noted above).
I need a way to query non-critical information responses from 1 of 4 servers, again without logging in to Business Catalyst (such as returning cost-estimate results).
It is not acceptable to have the user log in to Business Catalyst just to run queries or log in to 1 of 4 servers.
It may not be possible to allow a user to access the other servers using their Business Catalyst session handles.
When a user logs in to BC, they get a cookie of the form VSVxxxxxx, where xxxxxx is the BC site ID. The content of the cookie is the hashed active session ID. BC also exposes two web service APIs – CRM and eCommerce. The CRM web service has a method Contact_IsLoggedIn, which takes two parameters – a user ID and a session ID. The session ID is the one from the user's VSVxxxxxx cookie. It returns true/false, indicating whether the user is really logged in to BC.
Note that BC has somewhat strange session handling... the session lasts for 30 minutes no matter whether the user clicks around on the site or not.
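As a very rough sketch of how one of your external servers could use this, assuming a Node.js backend (the endpoint URL, SOAP namespace and parameter names below are placeholders – check the BC web services documentation for the real ones):

```typescript
import type { IncomingMessage } from "http";

const BC_SITE_ID = "123456";                                   // placeholder site ID
const CRM_ENDPOINT = "https://REPLACE-WITH-BC-CRM-WEBSERVICE"; // placeholder URL

// Read the VSV<siteID> cookie from the request that hit the external
// server and ask BC's CRM web service whether that session is valid.
async function isLoggedInToBC(req: IncomingMessage, userId: string): Promise<boolean> {
  const cookies = req.headers.cookie ?? "";
  const match = cookies.match(new RegExp(`VSV${BC_SITE_ID}=([^;]+)`));
  if (!match) return false;                        // no BC session cookie at all
  const sessionId = decodeURIComponent(match[1]);  // hashed active session ID

  // Placeholder SOAP envelope for Contact_IsLoggedIn(userId, sessionId).
  const envelope = `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Contact_IsLoggedIn xmlns="PLACEHOLDER_NAMESPACE">
      <userId>${userId}</userId>
      <sessionId>${sessionId}</sessionId>
    </Contact_IsLoggedIn>
  </soap:Body>
</soap:Envelope>`;

  const res = await fetch(CRM_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "text/xml; charset=utf-8" },
    body: envelope,
  });
  const body = await res.text();
  return body.includes(">true<"); // crude check of the Contact_IsLoggedIn result
}
```

If the call returns true, the external server can trust that the request came from an active BC session without asking the user to log in again there.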
I work in a large company, and I'm interested in best practices for internal security standards. We have a large ($500 million +) investment in SAP, and we also have .Net and a bit of Java EE in our internal environment.
I've found some documentation from MS and SAP, but it's outdated and not very specific.
So far, it looks like we could end up using Active Directory as the standard user store for all non-SAP applications, and SAP CUA / Portal for SAP applications.
Some concerns I have about AD are:
Being able to aggressively time out applications on shared computers (a small number of our applications run in remote offices in rural areas with a limited number of shared machines; in these cases, a supervisor with "power user" privileges could use an application, and then a clerk who should have only basic privileges could use the same machine immediately afterwards)
Being able to force the user to enter a username and password instead of just having the credentials read from the user's workstation - Because it's pulling the same credentials for the desktop and email, it won't currently ask users to log in. This is a concern for applications on shared computers as well. (See the explanation in the previous bullet)
As far as synchronization between AD and CUA is concerned, I want to approach this very carefully. We have a limited budget, and I want to make sure that if we end up putting something in place to synchronize the stores, it's rock solid and provides excellent value. If we can't find something like this, I'd be comfortable coming back with a recommendation that the stores remain independent. SSO would be ideal, but I've worked on trying to get an SSO application up before SAML, and it wasn't pretty.
Acronyms:
SSO: Single Sign-On
SAML: Security Assertion Markup Language
CUA: Central User Administration (for SAP)
There are a lot of possibilities on this subject.
We had a customer that updated both their AD and their SAP user list from SAP HR. The idea was that the OM module contained all employees. A list of all active employees could be exported daily to the LDAP with basic information (first name, last name, employeeId, login...). For the SAP system, units/functions/jobs needing SAP access were tagged, and users were created/removed daily.
In fact, all employees had an SAP account, but only those tagged had a "dialog" one. Those accounts are allowed to connect via SAPGUI; the others had to use the portal, which is a less costly licence. A set of rules was used to assign the roles for the managed users. The goal was to minimize user management and limit the inexorable growth of authorizations that comes from moving from job to job within an organisation. (This was for 105,000 employees, with a lot of personnel movement.)
Thus SAP was not directly linked to the AD, but they were synchronised. Depending on the system (development, quality, integration, production), SAP was configured with a time-out. You could also have different passwords for separate systems.
Of course the reverse is also possible: query an LDAP from SAP to manage SAP's accounts, without being directly linked to the LDAP. Transaction LDAP can probably give you some information.
hope this helps
Edit: the synchronisation was done by an ABAP program. That program ran every day at four and created/deleted/modified accounts in the LDAP. After that, another program added some technical information to the LDAP entries, information that was not available in the SAP HR system (such as the mail server to use for a given employee, depending on their location around the world). The entries were then checked for consistency and sent to the master LDAP.
This program only managed personnel and units. Groups (authorizations for other applications) were managed either manually or by other programs. Thus non-SAP data was also stored in the LDAP.
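The original job was ABAP, but to illustrate the nightly reconciliation it performed, here is a small sketch in TypeScript (all names are hypothetical): take the active-employee list from HR, diff it against the logins already in the LDAP, and derive the create/modify/delete sets.

```typescript
// Hypothetical model of one HR record exported by the nightly job.
interface Employee {
  employeeId: string;
  firstName: string;
  lastName: string;
  login: string;
  needsDialogAccount: boolean; // "tagged" units/functions/jobs get a SAPGUI account
}

// Compare today's active employees against the logins currently in the LDAP.
function reconcile(hrActive: Employee[], ldapLogins: Set<string>) {
  const hrByLogin = new Map(hrActive.map((e) => [e.login, e] as const));

  const toCreate = hrActive.filter((e) => !ldapLogins.has(e.login));
  const toDelete = [...ldapLogins].filter((login) => !hrByLogin.has(login));
  const toUpdate = hrActive.filter((e) => ldapLogins.has(e.login)); // re-push attributes

  return { toCreate, toUpdate, toDelete };
}

// A later step would enrich the entries with non-SAP attributes (e.g. the
// mail server for the employee's location), check them for consistency and
// push the result to the master LDAP, as described above.
```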
Regards
Why is it a problem if users don't have to log in? Wouldn't that be more convenient for users? And wouldn't it give them further incentive to log out of the application?
The project I'm working on now uses AD, and we have a mapping table inside SAP to map AD accounts to SAP accounts. Synchronisation is manual, which may or may not work for you, but there's no real technical risk.
I wish I could give you more information, but I haven't been very involved with that side of things. I can look into it, though.
You might want to look at OpenSSO - it has agents for SAP and it will integrate with AD as the user store. It's also pretty solid - Verizon use it for 40 million customers to log in to their web site.
IMHO.
Using different users in one Windows session is not a good solution, especially for users authenticated in AD.
What usually happens is that USER1 leaves the SAP client running without closing it, and then USER2 works on the same machine.
You end up with non-personalized users. And don't forget that users don't like to follow all the instructions.
We used a thin client such as Citrix together with SSO. That gives a full split of data and authorization between users, and each user gets their own session on the workstation. The good thing is that no critical data is stored on the workstation.
It's not a good idea and not secure, but you can use "Run as" to start the application as different users in a Windows environment within the same session. However, that is not a secure solution for a big company.