Google Hangouts: handling participant visibility from the server (security)

I am making a game with Google Hangouts where I need to control which participants can communicate with each other.
I want to be sure that a player cannot change the list of participants he can see by calling a JavaScript function, because from what I understand each participant can change his own visibility of other participants, and I want to block that.
So I am wondering whether it is possible to control visibility between participants server-side.

Short answer: No.
Medium answer: It really depends exactly what you need, but probably not.
Much longer answer:
It sounds like you want finer-grained control over both visibility and message sending/state sharing than the Hangout API allows. The Hangout API reflects what participants can see in the actual hangout today: everyone else who is in the hangout. The shared state is shared with all other members of the hangout running the same app, and visibility covers all users in the hangout running the same app.
If you want to restrict or limit this (for example, if people are divided into teams and you want a "team chat"), you would need to use your own server to coordinate this communication on at least some level. Your server would either need to actually relay the communication between team members, or distribute a shared secret that each team would use as a cipher key for their shared state.
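As a rough sketch of the shared-secret approach (the Hangout app itself is JavaScript; Python is used here purely to illustrate the scheme, and the key distribution over your own server is an assumption):

    # Sketch: your server hands each team a symmetric key; clients encrypt
    # their payload with it before writing to the shared state, so members
    # of other teams only ever see ciphertext.
    from cryptography.fernet import Fernet

    # Server side: generate one key per team and send it only to that
    # team's members over an authenticated channel.
    team_key = Fernet.generate_key()

    # Client side: encrypt before submitting to the Hangout shared state...
    cipher = Fernet(team_key)
    payload = cipher.encrypt(b'{"move": "attack", "target": "B4"}')

    # ...and decrypt what teammates wrote; anyone without the key gets an
    # InvalidToken error instead of the plaintext.
    plaintext = cipher.decrypt(payload)

Note that this hides content from other teams, but it does not stop a player from tampering with his own client; only logic kept on your server is truly out of the players' reach.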
One trickier possible solution might be to have each team run a different app. Since each app only shares state with the same app running on another participant's machine, and can list only the other members who are running the same app, this might be a valid solution in some cases.

Related

How should I build this app over communication apps?

These days I am finding myself in the position of having to implement, for one of my college courses, a system that should act as a giant wrapper over many communication apps like Gmail, Facebook Messenger, and maybe even WhatsApp. To put it simply, you should have a giant web interface where you can authorize Gmail and Messenger and use them at once when required. I am thinking of going with a REST API to manage the user's services, authorized via OAuth2. I am also thinking of using Node.js and Express.js in the backend and React.js in the frontend. I found some sweet libraries on npm that should take care of interacting with the APIs involved (https://www.npmjs.com/package/node-gmail-api, this one for instance), but I am also doubtful about this approach; for example, I have no idea how to keep the user notified about incoming mails or messages. I am in dire need of some expertise since, I forgot to mention, I am quite the newbie in this field. To sum it up, my question is: how would you implement such an infrastructure? Is my approach viable, or am I bound to hit some really hard-to-overcome obstacles?
As a college exercise it would be a really fun experiment, so it is definitely worth the time you want to put into it. However, once you want to add more features, the complexity will go up pretty fast.
Here are a couple of ideas you can think of:
It's pretty clear that your system can't do more than the capabilities exposed by the APIs of the communication apps (e.g. you can't have notifications for Gmail if the API doesn't offer that capability).
For that reason, you should carefully study the APIs and what functionality they expose. They have public docs that you can check out (Gmail API, Facebook Messenger API).
Some of the apps you want to communicate with may not have an official API (e.g. WhatsApp) - those kinds of details you definitely want to know from the start.
Based on the analysis of those APIs, you should lay out a list of requirements for your system, which can be extracted from all the APIs, for example: message notifications, file transfers, user profiles, etc.
In this way, you know exactly what capabilities your system should have, and you don't end up implementing a feature that is available only in 1 API out of 4.
Also, it will be a bit challenging to design your system from a user perspective, because the apps have different usage patterns: chat apps, where messages come in real-time, vs. email, which is not real-time communication. That's just a detail anyway; the gist of your project is to play with those APIs.
Also, it may be worth checking out the Gateway Aggregation pattern, which is related to this project: you may want the user to send a message to multiple apps using a single request to your service.
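A minimal sketch of that pattern, in Python for brevity (a Node.js/Express backend would look analogous; the send_via_gmail and send_via_messenger helpers are hypothetical stand-ins for real API clients):

    # Sketch of a gateway-aggregation endpoint: one request from the
    # frontend fans out to several communication services.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def send_via_gmail(user, text):      # hypothetical Gmail API wrapper
        ...

    def send_via_messenger(user, text):  # hypothetical Messenger wrapper
        ...

    SENDERS = {"gmail": send_via_gmail, "messenger": send_via_messenger}

    @app.route("/message", methods=["POST"])
    def send_message():
        body = request.get_json()
        results = {}
        for service in body["services"]:   # e.g. ["gmail", "messenger"]
            try:
                SENDERS[service](body["user"], body["text"])
                results[service] = "ok"
            except Exception as exc:       # report per-service failures
                results[service] = str(exc)
        return jsonify(results)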

One agent, several customers

I'm using Google Dialogflow (formerly API.AI) with one agent, and I would like to be able to handle queries for many different customers with that one agent.
The reason behind this is that I simply can't create a new GCP project/agent for each customer, due to the overhead and the quota limits on GCP projects.
I am looking for suggestions on tackling this. I am afraid that Dialogflow's algorithm will become cluttered and start confusing intents if I add too many that are closely related. I would like the agent to only go through the customer's list of intents and not the whole list of intents.
At the same time, I have common Small Talk intents that should be shared between clients. This means that setting the context as the client's ID may not be a fully viable solution, because the common pool of intents will not be used then.
To sum up, there are:
Common intents, shared between all clients (small talk for instance)
Client-specific intents (for instance, "What is [client-specific acronym]?")
How can I identify my user's (attached to one client) intent (ideally in a single request) given this set-up?
Thank you.
What you're doing is probably sufficient for simple agents.
For more complicated agents, you might want to identify the client in the webhook for the first Intent (which you certainly have to do anyway) and then set a long-lived context for just that client. Then you can have other Intents which are tailored for the client and only get triggered if the context is present. To be clear, and quoting the documentation:
Input contexts limit intents to be matched only when certain contexts are set.
If you have components of the conversation that apply to all clients (small talk, common questions about using the service, etc.), then you could make those Intents not require a Context. If you have fulfillment for them, you'll still get any Contexts (and their parameters) that are active, so you can still handle them with client-specific information.
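For example, a minimal webhook sketch for Dialogflow ES (v2 fulfillment format) in Python; the identify_client lookup and the context name are assumptions:

    # Sketch: the first Intent's webhook identifies the client, then sets a
    # long-lived output context that client-specific Intents require as an
    # input context.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def identify_client(req):
        # hypothetical: map the calling session/user to a client ID
        ...

    @app.route("/webhook", methods=["POST"])
    def webhook():
        req = request.get_json()
        session = req["session"]      # "projects/<project>/agent/sessions/<id>"
        client_id = identify_client(req)
        return jsonify({
            "fulfillmentText": "Welcome back!",
            "outputContexts": [{
                # client-specific Intents list this as an input context,
                # so they can only be matched while it is active
                "name": session + "/contexts/client-" + str(client_id),
                "lifespanCount": 50,  # keeps the context "long-lived"
                "parameters": {"clientId": client_id},
            }],
        })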
Finally, however - you shouldn't worry too much about how many projects you have. If you get close to the limit, you can request a higher quota.

Expression Engine: When to use channels and when not to use them?

I am still a relative newcomer to ExpressionEngine as a developer and a user. I am faced with the problem that a lot of my knowledge is being passed to me by users who have found ways to accomplish tasks traditionally undertaken by developers (e.g. product libraries) using the channels system.
What I wondered was what people's views are on when it is best to advise a client to use this and when not to.
Let me use an example: a client wants a system with venues where events can take place. The previous developer chose to use the membership system for the venues and the channels system for the events, and wrote some custom code to attempt to knit the two together, specifically because there are not enough hooks to accomplish some background automated tasks, like looking up the long/lat of a venue's address when it is created or updated.
I am largely picking up after someone else's work, but it's not their fault; it was the information they were given, as they were also new to the system.
In any other project this would be a master-detail type setup: events belong to venues. I'd probably write two custom tables, editors in the admin area via modules, and then use regular custom code in the pages to display and act upon the info. This way, I could control what's happening when a user hits submit.
However, the instigating party is a veteran user of Expression Engine and instructed the previous developer in the manner of "oh, just put it all in the channels and then there's this tag and that tag and so on".
So, am I missing the point, or am I right that this does not fit the channels system? And when should you use channels, and when not?
Thanks friends.
This question is very hypothetical and every developer will give you a different answer as it all depends on the requirement and how that EE developer rolls.
Fundamentally, ExpressionEngine allows you to approach builds in many ways; none are right or wrong, albeit some are easier, some harder, and others just plain daft.
Basically Channels are groups of data "entries" - these can be anything. Using your example, venues could be one channel with fields created relevant to the subject (e.g. location, size, price, etc). And another channel for events with different fields (e.g. date, type, location).
Mostly anything can be slotted into a channel. But member details are best held within the native member functionality (although there is a commercial add-on that holds member data in a channel).
You reference the previous developer's approach. This could be because they used a third-party add-on that required the data to be held separately from channels, because of a lack of understanding of the best approach, or just because the developer decided to approach it that way! I suspect the last developer then associated a member (venue) with an entry (event) to link the event to the venue. Basic EE functionality allows for related entries, which lets you associate one entry with another (e.g. Event -> Venue), or you can use the excellent Playa add-on, so that approach is really not necessary.
Personally, I would always store the data in channels, and people/members in the native membership functionality (e.g. admin, visitors to the site, customers, etc.). I'd only build an add-on (utilising its own tables/data) to store additional information if it was way outside what EE could store.
To answer your practical question (honestly, it's stretching the scope of what Stack Overflow questions are supposed to be): you should use a channel for Venues and a channel for Events, and the Venue field in the Event entry should be a "Related Entries" fieldtype linked to the Venues channel. That's the "standard" EE way, and the most similar to a traditional database schema.
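A minimal template sketch of that setup (assuming the native Relationships fieldtype in EE 2.x; the channel and field names here are made up):

    {exp:channel:entries channel="events" orderby="date" sort="asc"}
        <h2>{title}</h2>
        {related_entries id="event_venue"}
            <p>Venue: {title}, {venue_address}</p>
        {/related_entries}
    {/exp:channel:entries}

Inside the {related_entries} pair the variables refer to the related Venue entry, which is what makes this behave like a traditional master-detail join.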

What is the simplest protocol to securely tether a hardware device to a network?

After the Sony PSN debacle, I am trying to find examples of secure hardware tethering to a network. There are two use cases in particular:
1. A computer downloads a piece of software that then uniquely and securely ties it to a cloud service.
2. A hardware manufacturer uniquely labels a hardware device that then negotiates membership on the network.
Given that the hardware device might have to change (revocation or service enhancements), it feels like #2 becomes #1.
The broad outline is this:
- connect to the service via HTTPS to protect against man-in-the-middle attacks
- device generates a GUID and presents it via HTTPS to service
- service records GUID against account
- on success, service 'enables' device
But how do you protect the GUID so that it cannot be stolen?
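For concreteness, here is roughly the device-side half of that outline (a sketch only; the /enroll endpoint and response shape are made up):

    # Sketch: generate a GUID and present it to the service over HTTPS.
    # TLS protects the GUID in transit, but not on the device itself.
    import uuid
    import requests

    device_guid = str(uuid.uuid4())

    resp = requests.post(
        "https://service.example.com/enroll",        # hypothetical endpoint
        json={"guid": device_guid, "account": "user@example.com"},
        timeout=10,
    )
    resp.raise_for_status()
    enabled = resp.json().get("enabled", False)      # service 'enables' device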
I just wanted to comment here:
Sony's PSN issues started with horrible practices with regards to their QA environment.
First, they defaulted to trusting anything that was sent to those servers using their developer toolkit. The reason they did this was that the dev kit used to cost upwards of $10k US, and therefore they thought anyone who paid that amount would be on the up and up. However, when they radically lowered the price, things changed externally, and they didn't account for it.
The second issue with PSN was that the security between QA and live was, well, weak at best and easily circumvented. My understanding is that you could send commands to live using QA credentials. Because QA credentials were used, all chargeable actions were approved without money changing hands, and the actions were applied to live accounts. When several people told Sony about this, they did nothing.
A third issue was a reliance on hardware based encryption keys. Even hardware encryption keys installed on the devices can be figured out.
Point is, Sony dug their own grave on it so I wouldn't use anything they did as a template for how to do things. Heck, a lot of their websites were open to SQL injection which in today's day and age should get you fired.
Another example here is the iPhone. Each iPhone has a unique identifier that installed apps can grab and send back across the network, similar to a serial number. Some apps use this ID to try to tie a particular device to a person. However, it's trivial to fabricate IDs and broadcast them, so this hasn't worked out so well for the partners. Also, Apple does not expose a way for app producers to verify that a given ID (UUID) is valid.
A third example is mobile phone carriers. They use a particular ID baked into your SIM card to identify your account, in order to know whom to bill when a call is made. This ID is verified whenever the phone checks in with the network. However, we're dealing with radio signals, and any device that can broadcast a correct ID can gain access. Point is, honest people think that only AT&T-approved devices can get on an AT&T network. Reality is, anything can, but AT&T is going to bill the owner of that particular ID...
That said, any software you have running on a remote device that is not under your direct control is likely to be hacked. The popularity of the device will increase the likelihood of it happening sooner rather than later.
Where do we go from here?
On a basic level you associate an ID with an account in your service. PSN, Apple and others have done this. When an ID is broadcast, you need to verify that it exists AND that it's tied to an active account. If both pass then you have two options: either perform the action requested OR request additional verification.
For any actions that require money to be spent, do the additional verification (usually some form of username/password), capture the funds, then perform the action. Go one step further and every time a bad login is entered, send an email to the user on file. Further, automatically send a receipt. These are typically done so that your honest users can tell when something is going on.
Anything else just let through.
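As a sketch of that decision flow (the in-memory account store and the commented-out alerting are stand-ins for your real systems):

    # Sketch: verify the ID exists and maps to an active account, then
    # escalate to credential checks only for chargeable actions.
    ACCOUNTS = {"device-guid-1": {"active": True, "password": "hunter2"}}

    def handle_request(device_id, chargeable, credentials=None):
        account = ACCOUNTS.get(device_id)
        if account is None or not account["active"]:
            return "denied"               # don't reveal which check failed
        if chargeable:
            if credentials != account["password"]:
                # send an email alert to the user on file here
                return "additional verification required"
            # capture funds and send a receipt here, then fall through
        return "ok"                       # anything else just let through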
Bearing in mind, of course, that QA credentials should NOT work in your Live environment. Those systems should not be tied to each other under any condition and, quite frankly, should even live on separate hardware. In other words, QA and Live should NOT share a login database.
The thing here is that you shouldn't care about the device itself; just the account. You can't control the device as it's out of your hands; heck you can't even be sure it hasn't been physically tampered with. (XBox has been fighting this one with people adding resistors or burning out certain components to get past physical security features).
So, IMHO, do a bit to keep honest people honest, but overall don't worry about it. Now, you should transfer everything via SSL or some other encrypted connection between the device and your cloud so that you don't leak IDs to anyone who wants to grab them. This will help protect those honest people.
Further, you shouldn't have a direct way to query from the outside whether an ID is valid or not. This will make it a bit more difficult for a hacker to find existing valid IDs and take over accounts. If you want to get fancy, you could honeypot those requests and track the hackers down in order to sue them into oblivion, but that takes time and resources companies don't normally have. You could also log all of the requests that contained bad IDs and use that to track hackers down.
Note that even after the device has been "enabled" I still suggest you have two levels of authentication. The first is for simple actions like downloading free content; the second kicks in anytime there is a fee associated. Again, we're trying to protect your honest subscribers.
For the dishonest ones, you will have to apply some statistical analysis to the transactions coming across. Things like the transaction rate can help identify bots and allow you to kill their IDs. There are others, but they'll be unique to your application.
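For instance, a crude transaction-rate check (the window and threshold are arbitrary and would need tuning per application):

    # Sketch: flag IDs whose request rate looks bot-like by counting
    # transactions inside a sliding one-minute window.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_PER_WINDOW = 30                  # arbitrary threshold

    recent = defaultdict(deque)          # device ID -> recent timestamps

    def looks_like_bot(device_id):
        now = time.time()
        q = recent[device_id]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()                  # drop requests outside the window
        return len(q) > MAX_PER_WINDOW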
This was long-winded. But my point is:
You can't secure the ID or anything else you pass out.
You can't ensure the requests are coming from your devices or your own approved devices.
You better take actions to keep QA and production separate for those building software for these devices using your services.
You better take actions to protect your normal honest users.
Trust NOTHING.
Due to the above you should evaluate your business model so that you don't care what device was used and instead focus on the individual accounts themselves; which you do have control over.
I am not sure I entirely understand the question, but I think you want some sort of device to hold on to a GUID assigned to it by a web service, and you don't want someone finding out what that GUID is, correct?
If so, there isn't a lot you can do. You have already mentioned one option... using HTTPS during the assigning of the ID. That is a good start, but remember that anyone who has physical access to the device can do a lot of things to look up this ID.
In short, it is impossible to completely hide it. Someone can always reverse engineer it. There are folks out there reading data right out of memory with hardware.

Considerations regarding a p2p social network

While there are many social networks in the wild, most rely on data stored on a central site owned by a third party.
I'd like to build a solution where the data remains local on members' systems. Think of the project as an address book which automagically updates a contact's data as soon as that contact changes its coordinates. This base idea might get extended later on...
Updates will be transferred using public/private key cryptography via a central host. The sole role of the host is to be a store-and-forward intermediary. Private keys remain private on each member's system.
If two clients are both online and a p2p connection can be established, the clients can transfer data telegrams without the central host.
Thus, sender and receiver will be the only parties able to create authentic messages.
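A minimal sketch of that telegram flow using PyNaCl (the relay host only ever sees ciphertext; names and payload are made up):

    # Sketch: the sender encrypts and authenticates a telegram for exactly
    # one recipient; the store-and-forward host can neither read nor forge it.
    from nacl.public import PrivateKey, Box

    # Each member generates a keypair locally; only public keys are shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob: authenticated encryption, so Bob also knows
    # the telegram really came from the holder of Alice's private key.
    telegram = Box(alice_key, bob_key.public_key).encrypt(
        b'{"contact": "alice", "phone": "+1-555-0100"}')

    # Bob decrypts (and implicitly verifies) on his side.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(telegram)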
Questions:
Are there certain protocols which I should adopt?
Are there any security concerns I should keep in mind?
Are there certain services which should be integrated or used somehow?
More technically:
Should I use, e.g., Amazon- or Google-provided services?
Or is it better to use a raw web server? If yes: why?
Which algorithm and key length should be used?
UPDATE-1
I googled my own question title and found this academic project, developed in 2008/09: http://www.lifesocial.org/.
The solution you are describing sounds remarkably like email, with encrypted messages as the payload, and an application rather than a human being creating the messages.
It doesn't really sound like "p2p": in most P2P protocols, the only requirement for central servers is discovery, whereas you're using store and forward.
As a quick proof of concept, I'd set up an email server and build an application that sends emails to addresses registered on that server, encrypted using PGP. The tooling and libraries are available, so you should be able to get that up and running in days rather than weeks. In my experience, building a throw-away PoC for this kind of question is a great way of sifting out the nugget of the idea.
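A sketch of that PoC in Python (the addresses and mail server are made up; assumes python-gnupg with the recipient's public key already imported into the local keyring):

    # PGP-encrypt the payload, then hand it to an ordinary mail server,
    # which plays the store-and-forward role.
    import smtplib
    from email.mime.text import MIMEText

    import gnupg   # python-gnupg, wraps a local gpg installation

    gpg = gnupg.GPG()
    encrypted = gpg.encrypt('{"status": "moved", "city": "Berlin"}',
                            "bob@example.com")

    msg = MIMEText(str(encrypted))
    msg["From"], msg["To"] = "alice@example.com", "bob@example.com"
    msg["Subject"] = "contact-update"    # the application, not a human, sends these

    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)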
The second issue is that the nature of a social network is that it's a network. Your design may require you to store more than the data of the two direct contacts; you may also have to store their friends, or at least the public interactions those friends have had.
This may not be part of your plan, but if it is, you need to think it through early on: you may end up having to transmit the entire social graph to each participant for local storage, which creates a scalability problem.
The paper about Safebook might be interesting for you.
Also you could take a look at other distributed OSN and see what they are doing.
None of the federated networks mentioned at http://en.wikipedia.org/wiki/Distributed_social_network is actually distributed. What Stefan intends to do is indeed new and has only been explored by some proprietary folks.
I've been thinking about the same concept for the last two years. I've finally decided to give it a try using Python.
I've spent the better part of last night and this morning writing a sockets communication script & server. I also plan to remove the central server from the equation, as it's just plain cumbersome and there's no point to it when all the members could keep copies of their friends' keys.
Each profile could be accessed via a hashed string of someone's public key. My social network relies on nodes and pods. Pods are computers which have their ports open to the network; they help with relaying traffic, as most firewalls block incoming socket requests. Nodes store information and share it with other nodes. Each node will get a directory of active pods which may be used to relay its traffic.
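For example, deriving the profile address from the key is a one-liner (a sketch, using a PyNaCl key and SHA-256):

    # Sketch: a profile's address is just a digest of its public key, so
    # anyone holding the key can look the profile up on a node and verify it.
    import hashlib
    from nacl.public import PrivateKey

    key = PrivateKey.generate()
    profile_id = hashlib.sha256(bytes(key.public_key)).hexdigest()
    print(profile_id)   # hex digest used as the profile's address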
The PeerSoN project looks like something you might be interested in: http://www.peerson.net/index.shtml
They have done a lot of research and the papers are available on their site.
Some thoughts about it:
protocols to use: you could look at existing P2P programs and their designs
security concerns: privacy. Take great care not to open doors: a whole system can get compromised because you opened some door.
services: you could integrate with the regular social networks through their APIs
People will have to install a program on their computers and remember to open it every time, like any P2P client. Leaving everything on a web server means a smaller footprint and less need for user action.
Somehow you'll need a centralized server to manage the searches. You can't just broadcast to the whole internet to find friends. Otherwise you'll have to rely upon email requests to add someone, and to do that you'll need to know the email in advance.
The fewer friends/contacts who use your program, the fewer new people will want to use it, since it won't have contact information available.
I see that your server will be a store and forward, so the update problem is solved.
