Why has JXTA been abandoned? Any alternatives out there? - p2p

P2P/grid computing seems like a promising concept, and JXTA looks like the only all-in-one framework for it. Is there a reason this field is so sparsely pursued?

I led the releases of JXTA 2.6 and 2.7 - JXTA is not completely abandoned. Some people have posted patches on the 2.6 branch, and these could easily be merged into the 2.7 branch.
There are many reasons why people did not carry on participating in JXTA:
Oracle did not follow-up on their duties regarding project governance, which left the project in a limbo state.
Oracle did not follow-up on a request to move the project to Apache.
The code base was old. We cleaned it up and implemented unit tests, but moving the project to the next level would have required a lot of rewriting, and there were not enough volunteers.
But more fundamentally, the reason few P2P frameworks took off is that P2P is fundamentally complex once you get into the details. Most people don't realize this until they get their hands dirty. It is not possible to implement P2P 'in a simple way'.
So it has nothing to do with all-Java clients, licensing fees or anything else.
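To give a sense of the moving parts, here is roughly what bootstrapping a minimal JXSE edge peer looks like. This is only a sketch based on the NetworkManager API from the 2.5+ line (details vary by release), and it only gets you to the starting line - rendezvous configuration, advertisements, discovery and pipes all come after this:

    import java.io.File;
    import net.jxta.peergroup.PeerGroup;
    import net.jxta.platform.NetworkManager;

    public class MinimalPeer {
        public static void main(String[] args) throws Exception {
            // Configure this JVM as an "edge" peer with a local config/cache directory.
            NetworkManager manager = new NetworkManager(
                    NetworkManager.ConfigMode.EDGE,
                    "MinimalPeer",
                    new File(".jxta-cache").toURI());

            // Joins the net peer group; discovery, pipes, etc. all hang off it.
            PeerGroup netGroup = manager.startNetwork();

            // An edge peer only sees beyond its subnet once it reaches a rendezvous peer.
            boolean connected = manager.waitForRendezvousConnection(30_000);
            System.out.println("Rendezvous reached: " + connected
                    + " (group: " + netGroup.getPeerGroupName() + ")");

            manager.stopNetwork();
        }
    }

Even this 'hello world' forces you to think about config modes, peer identity and rendezvous reachability, which is exactly the kind of complexity I am talking about.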
Update (August 2013): You thought JXTA/JXSE was dead? Well someone worked further on it and developed a DZone tutorial (unfortunately, SO does not allow links to Dzone, so Google: JXSE and Equinox Tutorial).
Update (November 2013): A group of people is working on new releases of JXTA. For more information, register on the mailing lists.

Interestingly, what was missing from all the P2P initiatives of the past was a motivation for a peer to stay active. The question was always: why would a peer keep running CPU-draining, verbose XML-based protocols?
Trust was another factor - how can I trust a peer? As a key member of the team, I can say we introduced security, but security doesn't address trust.
To make it even worse, JXTA introduced the concept of super nodes, defeating the very concept of peer-to-peer.
However, not everything was that bad. JXTA provided a lot of new concepts. One was edge computing, with JXME and JXTA sitting together - you could call it the precursor of today's fog computing, where the heavy lifting was on the JXTA node and some intelligence sat on the constrained JXME nodes.
Fast forward, and blockchain has addressed most if not all of the questions that P2P platforms could not answer: trust, incentivizing peers, tamper proofing and much more.
P2P is still alive :)
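To make the tamper-evidence point concrete, here is a toy hash chain - this shows only the chaining idea, not how any real blockchain is implemented, and the entries are invented:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    public class TinyHashChain {
        public static void main(String[] args) throws Exception {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            byte[] prev = new byte[32]; // "genesis" link: all zeros
            String[] entries = {"peer A stored chunk 17", "peer B served chunk 17"};
            for (String entry : entries) {
                sha256.reset();
                sha256.update(prev); // each hash covers the previous one...
                sha256.update(entry.getBytes(StandardCharsets.UTF_8)); // ...plus the new entry
                prev = sha256.digest();
                System.out.println(Base64.getEncoder().encodeToString(prev) + "  " + entry);
            }
            // Rewriting any earlier entry changes every later hash, so peers who
            // agree on the latest hash can detect tampering anywhere in the history.
        }
    }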

I think it's for the same reasons that RMI, CORBA, and Jini aren't much in favor: complex and closed.
Simple and open win most of the time.
It might have had something to do with all-Java clients or licensing fees or something else.
It could be competition. MPI is a widely accepted messaging standard for computing. Hadoop is getting a lot of traction.
UPDATE: The accepted answer discusses why people may or may not choose to participate in JXTA. I think my answer has more to do with user adoption, which is different. Mine goes back to the origins of JXTA, not the details of releases 2.6 and 2.7.

If you work with Linux, try this: http://www.p2pns.org/
"P2PNS (Peer-to-Peer Name Service) is a distributed name service using a peer-to-peer network. The current focus of P2PNS is to provide a secure and efficient SIP name resolution for decentralized VoIP ( P2PSIP)."
In most cases, name resolution is enough to build a P2P app on top of it.
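P2PNS itself runs as a separate daemon, but the point is that an application only needs a tiny register/resolve surface on top of it. A hypothetical app-side interface (the names here are invented for illustration, not the actual P2PNS API) could be as small as:

    /** Hypothetical app-side view of a P2P name service; not the real P2PNS API. */
    public interface P2PNameService {

        /** Announce a human-readable name for this peer's current address. */
        void register(String name, String address) throws Exception;

        /** Look the name up in the overlay; null if nobody has registered it. */
        String resolve(String name) throws Exception;
    }

Once names resolve to addresses, peers can open direct connections to each other, which is why name resolution alone gets you most of the way to a P2P app.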

Related

Why does BACnet not emphasize secure communication?

When I looked into the BACnet communication protocol, I found very little about secure communication - almost none. BACnet is commonly used in building automation controls. However, without mandatory authentication or encryption, wouldn't it be easy to hack just by walking into a building and tapping into the building network? Am I missing something?
Your assumptions are correct - for the older standard. The BACnet ASHRAE SSPC 135 committee has addressed the security issue with the new BACnet/SC datalink. It has not been officially released yet, but you can get a sneak peek: http://www.bacnet.org/Bibliography/B-SC-Whitepaper-v10_Final_20180710.pdf
To be fair, it's not necessarily that easy to rock up to a building and plug in to the building controls network; that's the unsaid part - a lot of BACnet real-world security relies upon physical security/access (whether explicitly/actively so or implicitly).
That said, BACnet did introduce a section (#24) covering "Network Security" (present in the 2012 revision), but it was later deleted, and even while it was present, companies (and a fair amount of the community) didn't seem to show much active or delivered interest.
But even with the advent of BACnet/SC, it is a lot to take on (in terms of complexity, time and overall investment) for a technical person, never mind a less technical building manager or supervisor; there needs to be more support from the committee.

How do we "test" our security policy?

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
They are competent, thorough and know their stuff.
They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: how do we know a tester satisfies requirement 1? And even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell, security software seems to come in two flavours: complex open-source security stuff for security experts, and expensive, complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's WebGoat, which walks you through common web app vulns. Or you can do some reading - the NIST SP 800 series is a fantastic, freely downloadable intro to computer security concepts, and the Hacking Exposed series does a good job of teaching the very basic stuff. After that, Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security: how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
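SQL injection, for example, is one of the easier ones to both test for and fix. A minimal sketch, assuming a JDBC-backed lookup (the table and column names are invented):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class UserLookup {

        // Vulnerable: the attacker's input becomes part of the SQL text, so
        // findByNameUnsafe(c, "' OR '1'='1") returns every row in the table.
        static ResultSet findByNameUnsafe(Connection c, String name) throws Exception {
            return c.createStatement()
                    .executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Safe: the driver sends the value as a bound parameter, never as SQL text.
        static ResultSet findByName(Connection c, String name) throws Exception {
            PreparedStatement ps =
                    c.prepareStatement("SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
        }
    }

A simple automated test then feeds probe strings like ' OR '1'='1 into every input and asserts that they behave like ordinary, non-matching values.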
Automated security vulnerability software (static code analysis and runtime vulnerability scanning) is useful, but it is only ever going to be one piece of your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (e.g. change management, data retention policy, patching), you may find the PCI-DSS specification a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this much activity.
I may have to alter this answer depending on my experiences, but while continuing to wade through the acres of verbiage on my quest for something approachable, I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't switch on the attack mode against our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab. (I would link, but lack of rep and anti-spam rules prevent this.) WebScarab is more technically oriented, it seems; looking over the feature list, it does more stuff, but it doesn't have a pen-test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
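For anyone wanting to reproduce the proxy part: ZAP listens as a local HTTP proxy (on localhost:8080 by default), so you just point your browser or HTTP client at it and the passive scanner sees your traffic. A minimal sketch using the JDK 11+ HttpClient (the target URL is a placeholder; intercepting HTTPS additionally requires trusting ZAP's generated root CA):

    import java.net.InetSocketAddress;
    import java.net.ProxySelector;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ThroughZap {
        public static void main(String[] args) throws Exception {
            // Route all requests through ZAP's local proxy so it can record/scan them.
            HttpClient client = HttpClient.newBuilder()
                    .proxy(ProxySelector.of(new InetSocketAddress("localhost", 8080)))
                    .build();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://your-app.example/login")) // your site here
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status via ZAP: " + response.statusCode());
        }
    }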
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.

Do you use 30 day trial servers to do development work? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I know this is an odd question, but I need to ask it to get information to present to a client. Their lead network admin wants me to work on 30-day trial versions of servers like SharePoint and SQL Server to develop projects for their clients. While I will do as they ask, I'm not convinced this is the best way to go about developing software or troubleshooting previously developed software. To be honest, I've never done custom development against a trial version of any server or software.
What arguments are there for and against working on trial software/servers?
Pro: It enables you to mock up a concept and see whether the development path will be easy before you shell out large amounts of money for the real deal.
Cons: It could trap you in a vicious cycle of wiping your virtual machine and re-installing the OS, the trial version, and your product (you do use source control, correct?) if they are hoping that this will alleviate the need to ever pay for the real product.
Suggestion: If you don't mind unsolicited advice, I would determine why the lead admin wants to use the trial versions and then go from there. Until you know the reasons, you cannot respond to them.
If they are doing it for the pro reason, then determine if you feel comfortable working with the possibility of switching technologies 30 days into your build. (Can you do it efficiently?)
If they are doing it to avoid spending money, present some of the alternate open source / free options that you are comfortable developing with. If they will not change their modus operandi at that point, then do what is necessary, knowing what you will be walking away from / getting in to.
(And if you don't mind one more bit of unsolicited advice: if they are doing it for the con reason and will not change, WALK AWAY.)
Point them at BizSpark. Microsoft is begging people to use their stuff. A hundred bucks will get you everything on the map for 3 years or until you start making money.
Oh, to answer your question: if I needed to get funding for technology not present in the infrastructure, or to do a proof of concept, I would not think twice about using evals. That is what they are for: I would be evaluating the suitability of the product for use with my designs. Seems easy to me. Maybe I am just, hold on, I have to give my parrot a cracker... ;-)
Apart from the ethical arguments, there are practical ones:
What are you supposed to do if development overruns? Start reinstalling everything, wasting several days doing so?
Additionally, if the client is so strapped for cash that they want to do this, how can you be certain they will pay you (either due to cash flow problems, or simply because of their shady ethics)?
I'm pretty sure that that kind of use is a violation of license terms. Trial editions of servers are for evaluating a product. And if you are in fact creating a product, then you have gone way beyond evaluation.
I would never work under such terms. If you are developing a concrete product, get proper licenses for the development tools. I know that the developer edition of SQL Server is not hugely expensive (compared to a version licensed for production use), so I would imagine the same goes for SharePoint.
And then there is, of course, as already mentioned, the question of what you do when the trial period expires.
I wouldn't mind doing this so long as the job is shorter than 30 days. Make sure your work contract says they're paying for the time worked and not specific deliverables, because your deliverables are time-bombed.
Also be prepared to walk away. If this company doesn't have resources to get the right software, you don't want to be there longer than 30 days anyhow.
Microsoft provides several pre-built virtual machines that contain full stacks:
(Server 2008/SQL 2008/SharePoint), (Server 2003/SQL/Project Server), etc.
They are time bombed, but often (not always) Microsoft will provide a new image after the time out.
The benefit of using these images is that they are already configured and good to go.
As an example here is a beta of sharepoint 2010 (http://www.microsoft.com/downloads/details.aspx?FamilyID=0c51819b-3d40-435c-a103-a5481fe0a0d2&displaylang=en).
If the project has a quick timeline, it provides the developers access to the configured stack right away, with no ramp up time of building new virtual machines.
Especially when working on beta/early-release software, this is great.
The SQL Server evaluation's download page mentions that the evaluation license is good for 180 days, and specifically advertises it as a tool you can use for mission-critical applications. This tells me MS is fine with your using it for development work.
To answer a question with more questions:
How long does this project run?
What phase of the effort are you in now?
Is this an internal/proof-of-concept project, or something that your customer(s) will be using for a long time?
If you are going to need SQL Server for operations & maintenance support months past the initial evaluation period, you ought to get a license for the full version of it. Also consider what your customers are using, so that you can reproduce any bugs that come back from them.
I don't think it's ethical to continually renew evaluation licenses to have a longer evaluation period. Companies call them "evaluations" as a try-before-you-buy, not a keep-trying-without-buying.
I'm not sure what others are seeing as unethical here. If the project is short enough to be completed within the 30-day trial, I don't see any issues. I think that's a great use of trials: if a product can't handle a client's applications, then it isn't a good option and you can use something else.
I think others here have given some good advice regarding projects longer than 30 days, along with some good contract ideas.
How in-house do the servers have to be? Would a hosted solution work for them (Dreamhost, Amazon Web Services, whatever)? Some hosting systems provide pretty complex machine images (lots of stuff pre-installed - definitely AWS, presumably most others), decreasing setup time/effort. I think those come with licenses, though I honestly don't know. Plus, in at least some cases, they only pay for what they use.
Obviously no good if the physical machine needs to be in-house, or if things are otherwise super-sensitive.

Domain repository for requirements management - build or buy?

In my organisation, we have some very inefficient processes around managing requirements: tracking what was actually delivered in which versions, whether subsequent releases break previous functionality, and so on - it's currently all managed manually. The requirements are spread over several documents and issue trackers, and the implementation details live in code in Subversion, Jira, and TestLink. I'm trying to put together a system that consolidates the requirements info, so that it is sourced from a single, authoritative source, is accessible via standard interfaces (web services, browsers, etc.), and can be automatically validated against. The actual domain knowledge is not that complicated, but it is highly proprietary and non-standard (i.e., not just customers with addresses, emails, etc.), and it is relational: customers have certain functionalities, features switched on/off, and specific datasources hooked up - all on specific versions. So modelling this should be straightforward.
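To make the shape concrete, here is a rough sketch of the kind of model I have in mind (all class and field names are invented for illustration, not our real schema):

    import java.util.EnumSet;
    import java.util.List;

    // A named external datasource a customer can be hooked up to.
    record DataSource(String name, String connectionInfo) {}

    // Switchable functionality; hypothetical feature names.
    enum Feature { REPORTING, EXPORT, AUDIT_TRAIL }

    // What a customer actually got in a specific version.
    record Release(String version,
                   EnumSet<Feature> enabledFeatures,
                   List<DataSource> dataSources) {}

    record Customer(String name, List<Release> releases) {}

Validation would then be a matter of comparing what a Release says against what is actually deployed.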
Can anyone advise on the best approach for this? I am certain that I could develop a system from scratch that matches the requirements exactly, in, say, Ruby on Rails, Grails, or some other RAD framework. But I'm having difficulty getting management buy-in; they would feel safer with an off-the-shelf solution.
Can anyone recommend such a system? Or am I better off building it from scratch, as I feel I am? I'm afraid a bought system would take just as long to deploy, and would not meet our requirements.
Thanks for any advice.
I believe that you are describing two different problems. The first is getting everyone to standardize and the second is selecting a good tool for requirements management. I wouldn't worry so much about the tool as I would the process and the people. Having the best tool in the world won't help if your various project managers don't want to share.
So, my suggestion is to start simple. Grab Redmine or Trac and take on the challenge of getting everyone to standardize. Once you have everyone in the right mindset then you can improve the tools you use for storage.
{disclaimer - mentioning my employer's product}
My brief experiments with a commercial tool, RequisitePro, seemed pretty good to me. It allowed one to annotate existing Word docs, create a real-time linked database of the identified requisites, and then perform lots of analysis and tracking on them.
Sometimes when I see a commercial product I think, "Oh, nice glossy bits, but I could knock up the fundamentals in Perl in a weekend." That's not the case with this stuff. I would certainly look at commercial products in this space and experiment with a couple (ReqPro has a free trial; I guess the competition will too) before spending time on my own development.
Thanks a mill for the reply. I will take a look at RequisitePro; at least I'll be following the "Nobody ever got fired for buying IBM" strategy ;) You're right, and I kinda knew it: in these situations, buy is better. It is tempting when I can visualise throwing it together quickly, but there are other tradeoffs and risks with that approach.
Thanks,
Justin
While RequisitePro enforces a standard, and that can certainly help you in your task, I'd second Mark on trying to standardize the input by agreement with personnel and using a more flexible tool like Trac or Redmine (both of which have incredibly fast deploy and setup times, especially if you host them from a VM), or even a custom one if you can get management to endorse your project.

How to integrate telecommuters in an agile process? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I'm sure that all of us have had to deal with telecommuters at some point in time, and I'm facing a situation now where my new project will have a "core" group of office workers and some off-site telecommuters. Not wanting to repeat past mistakes, I'd really like to know what ways people have tried in the past to effectively integrate telecommuters in an agile process, namely scrum.
My first fear is that the telecommuters will be the first ones to break the "daily scrum" routine. And, as human nature often goes, once that gets broken, it's hard to resume and get people back on track. Scrum recommends enforcing small, fun "penalties" for people missing or being late to the daily scrum, like donating a few bucks to a jar which would later be used to buy a case of beers for the end-project party or something. This is obviously something that would be difficult to enforce online.
The other big problem with telecommuters is the "out of sight, out of mind" problem. Aside from using webcams/skype/teleconferencing, what other tips do people have for keeping the team as closely knit as possible?
Also, what about dealing with telecommuters from different timezones? At the moment, we're lucky enough not to have this problem, but it's definitely a possibility at some point in the future. How have other teams dealt with this problem?
Instant messaging really helps with the "out of sight, out of mind" issue, as each person's status (Available, Busy, On the bog, etc.) is visible to all. Also, by responding to messages they reinforce the idea that they're generally available.
I wouldn't worry about the Scrum meeting issue, joining a meeting via teleconf is often easier than attending in person.
Set the ground rules upfront. Don't be wishy-washy about them.
You've probably eliminated the "I got stuck in traffic" excuse for missing the meeting or whatever when they're working from home (or a satellite site) and so there's no reason to expect less out of them.
Take advantage of technology:
Use IM. We use it here and it is great for 'reaching out and touching' the guy four states away. Make it a requirement to be available via IM.
Use other tools to help break down the barriers. It'll depend on your situation.
If you're having the daily meeting, it should be clear to everyone that you're going to be asking the questions:
What did you accomplish since we last met?
What are you going to be doing today?
What's in the way that needs to be moved?
Just because you can't see Matt in his cube doesn't give him the right to be lazy, unproductive, or unresponsive. It's like dealing with my kids: let them know the rules and what is expected, and then nobody can claim ignorance.
We have had success using these tools:
Assembla for project management (source control, wiki, scrum tool)
Skype for voice communication
Google Talk for IM
We are a team of 3 developers, spread across a range of 6 time zones.
I spent a year as the only remote guy on an Agile team. I called into a conference line for the daily scrum, as well as the planning/review meetings. I kept in contact during the day via IM/e-mail/phone.
I think it worked pretty well overall. The biggest constant drawback was not being able to see the physical whiteboard we used to track the scrum. We discussed moving to some sort of online tool to do this, but it never happened.
I was one timezone away, and I just considered it part of the telecommute tradeoff that I would work the hours that the rest of team kept.
As far as penalties for missing the scrum go: to a certain degree you should enforce this loosely, via the beer jar or whatever. But if someone is consistently missing or late to required meetings, then their manager needs to address that.
There are a number of techniques that you can use - remember, the purpose of colocation is to encourage collaboration and communication. A few things can help out.
If your team is all nearby, think about having core days when everybody can come into the office. My current team allows working from home on Mondays and Fridays, and everybody comes into the office Tuesday through Thursday.
For distributed teams, I have had good success using wikis instead of giant sheets of paper on the wall. The nice thing about wikis is that they encourage the team to edit the forms to meet the team's needs, as opposed to adapting to a more formal tool.
Another advantage of having a wiki is that each person can have their own page to share pictures of their vacations and hobbies - this makes remote people more real.
When you have a distributed team, I want to second the use of instant messaging that includes a status (Available, Away (grabbing a cup of coffee), Busy (in a meeting)) - these can include notes if people switch between working at home and at the office.
Webcams are an inexpensive and valuable tool
Invest in a decent speaker phone (we like Polycom phones) for your group conference calls
Use tools like LiveMeeting to promote remote pair programming
A technique for doing stand ups over the phone is to have the person talking say the name of someone else in the group who has not gone yet - this keeps everyone paying attention.
For iteration (sprint) planning meetings, follow up with meeting minutes or a communication plan to make sure that everyone is on the same page. Not being colocated means a tad more documentation and intentionality in communicating.
Good luck
SCRUM and many other agile methods really do depend on physical proximity - it is hard to integrate telecommuters into any development process where integration happens frequently, but these particular processes are especially hostile to disembodied developers.
You will have to adapt the processes to the situation at hand. Video conferencing using webcams is actually very usable, and in fact you might want to experiment with having their webcam on all the time in their cubicle/work area, so people can just walk up and ask a question as they would with any other coworker.
But at the end of the day, you simply have to expect things to go differently for them - they aren't going to be able to fully participate in many processes if you are an agile shop.
-Adam
Make sure they attend the daily standup via webcam; as you said that's the first mis-step down a slippery slope. We try to have all meetings done with a RoundTable as well which really helps.
I've been doing this for two months (working in Canada with the core team in Dublin) and so far everything has been going really well.
See Scott Hanselman's writeup on his first year working remotely at Microsoft - definitely some good tips there. One Year Later.
Instead of a beer jar, the privilege of telecommuting itself could be part of the bargain for participation when required. If the team is not responsible enough to telecommute properly, then they probably shouldn't be telecommuting. A more fun penalty for occasional tardiness could be to use a funny avatar to represent the person missing from the meeting.
Other methods of keeping people closely knit is using collaboration tools such as Wikis and project tracking tools such as Basecamp or FogBugz.
For differing timezones, early meetings will need to be scheduled around the furthest-west time zone, unless someone is on the opposite side of the world, which is a bigger problem. In that case, scheduling will probably be based on who is in charge.
We have been able to manage daily scrums in our environment even with distributed teams over the phone.
It helps to use software such as Rally and Basecamp to manage the process.
One place I worked used Asterisk instead of a normal phone system. It worked well because when you are working from home, you simply log on and people can call your direct line number; outsiders don't need to know. Even though phone call costs are relatively trivial these days, having an "always on" connection encourages more communication. The sound quality is better too.
For telecommuters/distributed teams, I recommend getting a decent phone - with most desk phones, folks on the other end can't hear anyone standing more than a few feet away from the phone during a standup.
When you do your demos of working code for stakeholders at the end of the iteration, use webex or livemeeting or something to share the desktop and a camera to show the speaker so that your distributed participants can see what's going on. (Even better would be to ask your telecommuters to attend during iteration boundaries to participate in person).
I recommend getting folks together for a few weeks at the beginning of the project during the inception/kickoff phase so folks can build interpersonal relationships. It's amazing how helpful the face-to-face interaction up front can be to build a foundation for teamwork.
Use a distributed card wall. I like Mingle (http://mingle.thoughtworks.com), but I haven't used other tools, so can't comment on them.
For retrospectives, it's useful to have a proxy in the room using IM to communicate with your distributed team members... so that any comments the distributed folks have can be written onto a piece of paper (or post-it, or however you do yours).
As for your fears of "out of sight, out of mind", my preference for things like this is to not create solutions for problems that have not yet materialized. If you find that your team is becoming disconnected (prime discussion points for retrospectives), then you can facilitate a team discussion on how to deal with any issues that arise. Again, the team should help identify the problem and the solution rather than having a manager or scrum master dictate solutions. Start with an assumption of trust.
Distributed Scrum requires good preparation. It is not just about the tools.
We have supported many rollouts in distributed environments, and there was one fundamental point - people.
The most efficient approach is to start with ALL people in one location. They have to meet in person so they can know each other as people, not just as someone virtual on the other side of the world. As I used to say, team members need to smell each other.
For release planning, meet at one location if possible. Change locations so you visit all of them, to gain context and an understanding of the culture, habits and people. For sprint planning, use video meetings, screen sharing, etc. It is not necessary to travel (it would be too often).
Clear roles and team organization must be established. You have to have a Product Owner and Scrum Masters, and you should consider getting the PO and SM as close to the team as possible. You definitely have to get them into face-to-face meetings (it is about the face, not the location) every day.
A definition of done, if agreed by the team, helps everyone share the same understanding of what "done" means. In a distributed environment it is a must.
You will need good communication tools for daily stand-ups. We found Skype or Office Communicator usable for dailies. We use audio AND chat; especially in an international environment, chat helps you understand people. Keep the communication channel open after the daily so team members can discuss what is necessary outside of the daily report.
And, most importantly, do regular retrospectives with all team members in all locations. Do not forget to implement the ideas coming out of the retrospectives. Teams in other locations will need local support to help them implement those ideas.
I work on a team of 5. To facilitate our telecommuting workplace we use:
Asana - project and task management
Google Talk + your favorite IM client (I use Pidgin)
RingCentral - VoIP telephone
Gmail - asynchronous communication (i.e. email)
Dropbox - file transfer and backup
TeamViewer - screen sharing, training, and presentations
Even with these tools it is easy to fall short on your process so it is important to establish some best practices for your team based on your dynamic. For example, we have two chief practices:
Communicate often - because we are not in the same location when communicating, it is easy to forget that you are working on a team. For our team, we update our tasks in Asana with comments describing ideas, obstacles, and task completeness. When immediate assistance or feedback is needed, don't wait; seek assistance via IM, or via email if the person is offline.
Err on the side of over-communication - this pertains more to Asana comments and emails, but in general we found it is better to give more information than is needed (within bounds).
