I am looking to create a simple mobile agent system that will deal with 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network service discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder, etc. My question is: which one should I use? I also need to set up the base code for it to work. Can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I had created a method of type Agent, but I couldn't pass it through the remote method implementation. I have been reading about TCP and UDP socket programming and was thinking it might be more sensible to do it using socket programming, with the server sending datagram packets to multiple clients. In that case, would this still be called an agent?
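To illustrate what I had in mind, here is a rough, hypothetical sketch (in Python just for brevity; the addresses and task names are made up) of a server pushing task descriptions to clients as UDP datagrams:

```python
# Hypothetical sketch: a dispatcher pushing task descriptions to clients as UDP datagrams.
# Only data moves here, not code or execution state, which is why I'm unsure it counts as an "agent".
import json
import socket

CLIENTS = [("192.168.1.10", 9999), ("192.168.1.11", 9999)]  # made-up client addresses

def dispatch(task_name, payload):
    message = json.dumps({"task": task_name, "payload": payload}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for address in CLIENTS:
            sock.sendto(message, address)

dispatch("db_update", {"table": "contacts"})
```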
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 90s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks - I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE and I agree with the first answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, pick a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow pages: other agents don't have to know which agents are running and which services they supply; they can simply ask the DF.
JADE's ContractNetBehaviours also help simplify communication between agents.
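To make the yellow-pages idea concrete, here is a rough conceptual sketch in Python (illustration only - JADE's real API is Java, where you register a DFAgentDescription with the DF and search it): agents register the services they offer, and other agents look services up by type instead of knowing who is running.

```python
# Conceptual illustration of a directory facilitator ("yellow pages"); not JADE code.
class DirectoryFacilitator:
    def __init__(self):
        self._registry = {}  # service type -> set of agent names

    def register(self, agent_name, service_type):
        self._registry.setdefault(service_type, set()).add(agent_name)

    def deregister(self, agent_name, service_type):
        self._registry.get(service_type, set()).discard(agent_name)

    def search(self, service_type):
        # Callers ask "who offers meeting scheduling?" instead of naming specific agents.
        return sorted(self._registry.get(service_type, set()))

df = DirectoryFacilitator()
df.register("scheduler-agent-1", "meeting-scheduling")
df.register("db-agent-1", "database-update")
print(df.search("meeting-scheduling"))  # ['scheduler-agent-1']
```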
I'm a web developer and completely new to the LoadRunner suite.
Our client has already provided us with some LoadRunner actions that I need to run to test a site hosted on the intranet of the company I'm currently working at.
The computer I'm using cannot handle more than 7 vusers, so I was asked to use Amazon EC2 for load generators.
Before I ask my company to pay for Amazon services, I need to know: would I be able to test our internal page exactly as I do with the load generator on my localhost, or does the page under test need to be publicly accessible from the internet?
Any feedback will be appreciated. Thanks.
Please read carefully what James wrote. You said you are a web developer, so the task that was given to you is roughly equivalent to "write a new DB access layer".
You didn't mention which protocol you are using, but I will assume TruClient (based on the 7 vusers per machine). I will also assume you are using the latest version of LoadRunner, or at least something from the 12.6X family.
1) You already have an out-of-the-box solution for AWS in the form of StormRunner (https://www.microfocus.com/en-us/products/stormrunner-load-agile-cloud-testing/overview). If you want to check whether the solution works for you, please request a couple of execution hours from the sales team and try it. If your company has a valid license for LoadRunner, I don't think this will be an issue.
2) You have a simple integration for EC2 and the like built into the Controller application. In the Controller, go to Tools -> Manage cloud accounts. If you run a small test, the cost should not be too great, I assume.
3) If you are a developer, we have a new offering called TruWeb, which is a transport-level protocol that should be more developer friendly. It can run many more users per machine, so you will be able to use it to test on an EC2 micro machine (free tier). The caveat is that you will have to write some JavaScript code and will not be able to reuse the actions given to you. You can download TruWeb from here - https://marketplace.microfocus.com/appdelivery/content/truweb - and it comes with the LoadRunner installation out of the box since 12.58. If you need further assistance with TruWeb, feel free to email us at truweb_forum#microfocus.com
I hope this will give you some directions.
a) You need training. This is not a discipline that someone is socially promoted into and finds success in.
b) Expect that it will take at least six months to begin delivering value in this field, longer if you are not working with a mentor.
c) This is a question of application communication architecture. Architecture is one of the foundation skills for a performance tester/engineer/architect.
d) It is not recommended that you use the controller as a load generator, and it is not recommended that you use just one load generator. Either will cause your test to fail an audit from a more mature testing firm. Go with a minimum of three: two for primary load, and one for a control set of a single virtual user of each type. Design your tests to allow for the examination of control timing records compared to the global set, so you can tell whether you have an application issue or a load generator issue.
e) You will need to coordinate with your network team for two reasons. One, you may need to open outbound ports (covered in the documentation) to allow your controller to communicate with your load generators. Two, you absolutely will have to coordinate a tunnel from the outside internet to your internal applications under test. Expect that security will be paramount: only your requests, and no other requests, should pass through the tunnel. There are many mechanisms to address this, from a custom HTTP header to certificates (a sketch of the header idea follows this list). Speak with your network security professionals about the setup and configuration you will be able to implement.
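Purely as an illustration of the custom-header mechanism (the header name and token below are made up, and the real check would live in whatever gateway or reverse proxy terminates the tunnel): the load generators add an agreed header to every request, and the tunnel endpoint rejects anything without it.

```python
# Illustration only: reject tunnel traffic that doesn't carry the agreed load-test header.
# The header name and token are hypothetical; configure the real check in your gateway/proxy.
EXPECTED_HEADER = "X-Load-Test-Token"           # hypothetical header name
EXPECTED_TOKEN = "replace-with-a-shared-secret"

def is_authorised_test_request(headers):
    """headers: a dict-like mapping of HTTP header names to values."""
    return headers.get(EXPECTED_HEADER) == EXPECTED_TOKEN

print(is_authorised_test_request({"X-Load-Test-Token": "replace-with-a-shared-secret"}))  # True
print(is_authorised_test_request({}))  # False
```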
The self-paced training for LoadRunner is available for download. It takes about three days to go through. This is the absolute minimum before you pick up this tool in anger. Ideally, you would go through training with a certified instructor and be paired with a mentor for a period. The length of time for the mentor is directly related to the number of foundation skills you bring to the table.
I have a few questions:
Is it possible to implement "private messages" within GetStream?
Or, for example, can I combine the GetStream API with http://social-stream.dit.upm.es/ ? (That system is written in Ruby on Rails.)
Is it possible to control and change the algorithm, i.e. how machine learning works in getstream.io?
I mean, I didn't find much information about machine learning in the documentation or in my getstream account. Maybe I can read about it somewhere in more detail?
Does machine learning work only on paid plans, or on the free plan as well?
Does getstream.io have a specific API for machine learning purposes?
For example, if we build some additional feature like "private messages" on our side, which GetStream doesn't have in its API, how can we apply machine learning to this new feature?
You may ask, "why do you need machine learning for PMs?"
It's not only PMs. Here are a few examples:
a) If a user's PMs contain certain keywords, we can determine which topics interest that user.
b) In another scenario, we can analyze images in posts (we use our own engine for that purpose); if an image correlates certain keywords with a specific topic, and we look at who likes/votes for that image, we can show those users more relevant content.
c) There are dozens of other examples for which we need to control the machine learning process.
Even if we implement machine learning on our side (from scratch, just for private messages and related features), how can we connect our machine learning results with the results in GetStream? If we use them separately, it could be inefficient and bring unpredictable (even negative) results.
To clarify: I am not a developer. I am the owner, but I understand project management and the entire development process very well.
My question comes from a lack of understanding of how the API works.
Thanks in advance!
Stream helps you build activity streams and newsfeeds. A central aspect of that is the follow relationship. The same building blocks also make it easy to build notification feeds and private messages. Stream isn't built for private messages, but we have dozens of apps using us for that purpose. It works for many apps, but depending on your feature set your mileage may vary.
As for your second question: Stream provides scalable newsfeeds, analytics, and personalization (machine learning). For larger customers we extensively customize the machine learning component. There are similarities between apps, but it's definitely something that needs to be tailor-made. At the moment, analytics and personalization are only available to our largest customers. More information can be found here: http://getstream.io/personalization/
Using your own machine learning and Stream's at the same time is pretty easy. You simply track engagement events both in Stream and in your own system. That will allow you to run your own analysis.
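A rough sketch of that dual tracking (the Stream calls follow the stream-python feed API as documented; the record_engagement helper is entirely hypothetical and stands in for your own analytics store):

```python
import stream  # getstream.io Python client (pip install stream-python)

client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET")

def record_engagement(user_id, verb, object_id):
    # Hypothetical stand-in for your own analytics/ML pipeline (DB insert, Kafka, etc.).
    print(f"local analytics: {user_id} {verb} {object_id}")

def track_like(user_id, message_id):
    # 1. Keep a copy of the event in your own system for your own machine learning.
    record_engagement(user_id, "like", message_id)
    # 2. Also record the activity in Stream so its side of the data stays in sync.
    feed = client.feed("user", user_id)
    feed.add_activity({"actor": user_id, "verb": "like", "object": message_id})

track_like("42", "message:1001")
```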
Say I have a bunch of webservers each serving hundreds of requests/s, and I want to see real-time stats like:
Request rate over the last 5s, 60s, 5 min, etc.
Number of unique users seen, again per time window
Or, in general, for a bunch of timestamped events, I want to see real-time derived statistics - what's the best way to go about it?
I've considered having each GET request update a global counter somewhere, then sampling that at various intervals, but at the event rates I'm seeing it's hard to get a distributed counter that's fast enough.
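Roughly what I had in mind, as a single-process sketch only (per-second buckets in a ring, summed over the last N seconds) - the part I'm stuck on is doing this across many servers at the rates I'm seeing:

```python
# Single-process sketch: per-second buckets in a ring buffer, summed over a sliding window.
# The hard part is aggregating equivalents of this across many servers fast enough.
import threading
import time

class WindowedCounter:
    def __init__(self, max_window=300):        # keep up to 5 minutes of per-second buckets
        self.max_window = max_window
        self.buckets = [0] * max_window
        self.stamps = [0] * max_window          # which second each bucket currently holds
        self.lock = threading.Lock()

    def incr(self, n=1):
        now = int(time.time())
        i = now % self.max_window
        with self.lock:
            if self.stamps[i] != now:           # bucket is stale, reuse it for this second
                self.stamps[i] = now
                self.buckets[i] = 0
            self.buckets[i] += n

    def rate(self, window):
        """Average events/sec over the last `window` seconds."""
        now = int(time.time())
        with self.lock:
            total = sum(self.buckets[s % self.max_window]
                        for s in range(now - window + 1, now + 1)
                        if self.stamps[s % self.max_window] == s)
        return total / window

counter = WindowedCounter()
counter.incr()
print(counter.rate(5), counter.rate(60))
```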
Any ideas welcome!
Added: Servers are Linux running Apache/mod_wsgi, with a Python (Django) stack.
Added: To give a sense of the event rates I want to track stats for, they're coming in at over 10K events/s. Even incrementing a distributed counter at that rate is a challenge.
You might like to help us try out the beta of our agent for application performance monitoring in Python web applications.
http://newrelic.com
It delves more into application performance than just the web server, but since any bottlenecks generally aren't going to be in the web server but in your application, that is going to be more useful anyway.
Disclaimer: I work for New Relic and this is the project I am working on. It is a paid product, but the beta means it is free for now with all features. Later, when that changes, if you don't want to pay for it, there is still a Lite subscription level which is free and gives you basic web metrics reporting, which still covers some of what you are after. Anyway, right now would be a great opportunity to make use of it to debug your performance while you can.
Virtually all good servers provide this kind of functionality out of the box. For example, Apache has the mod_status module and Glassfish supports JMX. Furthermore, there are many commercial packages for monitoring clusters, such as Hyperic and Zenoss.
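For example, mod_status exposes a machine-readable view at /server-status?auto that you can poll and parse (whether fields like ReqPerSec appear depends on having ExtendedStatus On, and the exact URL depends on your Apache configuration):

```python
# Rough sketch: poll Apache mod_status's machine-readable output and pull out a few fields.
# Requires mod_status enabled; "Total Accesses"/"ReqPerSec" need ExtendedStatus On.
from urllib.request import urlopen

def fetch_server_status(url="http://localhost/server-status?auto"):
    with urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

stats = fetch_server_status()
print(stats.get("Total Accesses"), stats.get("ReqPerSec"), stats.get("BusyWorkers"))
```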
What web or application server are you using? It is difficult to provide a solution without that information.
Look at using WebSockets; their overhead is much smaller than that of an HTTP request, and they are very well suited to real-time web applications. See http://nodeknockout.com/ for Node-based WebSocket examples.
http://en.wikipedia.org/wiki/WebSocket
You will need to run a daemon if you want to run it on your Apache server.
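As a rough sketch of such a daemon (this assumes the third-party Python `websockets` package, version 10 or later, and a made-up current_stats() source), it would simply push the latest numbers to every connected browser once a second:

```python
# Rough sketch of a stats-push daemon using the `websockets` package (v10+ assumed).
import asyncio
import json
import websockets

CONNECTED = set()

def current_stats():
    # Hypothetical: replace with however you sample your own counters.
    return {"req_per_sec_5s": 0, "unique_users_60s": 0}

async def handler(websocket):
    CONNECTED.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        CONNECTED.discard(websocket)

async def broadcaster():
    while True:
        websockets.broadcast(CONNECTED, json.dumps(current_stats()))
        await asyncio.sleep(1)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await broadcaster()

asyncio.run(main())
```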
Also take a look at http://kaazing.com/ if you want less hassle but are willing to fork out some cash.
On the Windows side, Performance Monitor is the tool you should investigate.
As Jared O'Connor said, you should specify what kind of web server you want to monitor.
While there are many social networks in the wild, most rely on data stored on a central site owned by a third party.
I'd like to build a solution where data remains local on members' systems. Think of the project as an address book which automagically updates a contact's data as soon as that contact changes their coordinates. This base idea might get extended later on...
Updates will be transferred using public/private key cryptography via a central host. The sole role of the host is to be a store-and-forward intermediary. Private keys remain private on each member's system.
If two clients are both online and a P2P connection can be established, the clients could transfer data telegrams without the central host.
Thus, sender and receiver will be the only parties able to create authentic messages.
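For illustration, here is a minimal sketch of the sign-then-encrypt idea using the Python `cryptography` package (RSA-OAEP can only seal small payloads directly; a real implementation would use a hybrid scheme, encrypting the payload with a symmetric key and only wrapping that key with RSA):

```python
# Minimal sketch: sign an update with the sender's private key, encrypt it for the recipient.
# RSA-OAEP only fits small payloads; real systems wrap a symmetric key instead (hybrid encryption).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

update = b'{"contact": "alice", "phone": "+49 ..."}'

signature = sender_key.sign(
    update,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
ciphertext = recipient_key.public_key().encrypt(
    update,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# The recipient decrypts with their private key and verifies with the sender's public key.
plaintext = recipient_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature,
    plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```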
Questions:
Are there certain protocols I should adopt?
Are there any security concerns I should keep in mind?
Are there certain services that should be integrated or used somehow?
More technically:
Should I use services provided by e.g. Amazon or Google?
Or is it better to use a raw web server? If yes: why?
Which algorithm and key length should be used?
UPDATE-1
I googled my own question title and found this academic project, developed in 2008/09: http://www.lifesocial.org/.
The solution you are describing sounds remarkably like email, with encrypted messages as the payload, and an application rather than a human being creating the messages.
It doesn't really sound like "p2p" - in most P2P protocols, the only requirement for central servers is discovery - you're using store & forward.
As a quick proof of concept, I'd set up an email server and build an application that sends emails to addresses registered on that server, encrypted using PGP - the tooling and libraries are available, so you should be able to get that up and running in days rather than weeks. In my experience, building a throw-away PoC for this kind of question is a great way of sifting out the nugget of the idea.
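A hedged sketch of that PoC (this assumes the python-gnupg package, the recipient's public key already imported into the local keyring, and an SMTP server on localhost; all addresses are made up):

```python
# Rough PoC sketch: encrypt an "address book update" with the recipient's PGP key and email it.
# Assumes python-gnupg, the recipient's key in the local keyring, and an SMTP server on localhost.
import smtplib
from email.message import EmailMessage

import gnupg

gpg = gnupg.GPG()
update = '{"contact": "alice", "phone": "+49 ..."}'
encrypted = gpg.encrypt(update, "bob@example.org")
assert encrypted.ok, encrypted.status

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "contact-update"
msg.set_content(str(encrypted))  # ASCII-armoured PGP payload

with smtplib.SMTP("localhost") as smtp:
    smtp.send_message(msg)
```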
The second issue is that the nature of a social network is that it's a network. Your design may require you to store more than the data of the two direct contacts - you may also have to store their friends, or at least the public interactions those friends have had.
This may not be part of your plan, but if it is, you need to think it through early on - you may end up having to transmit the entire social graph to each participant for local storage, which creates a scalability problem....
The paper about Safebook might be interesting for you.
You could also take a look at other distributed OSNs (online social networks) and see what they are doing.
None of the federated networks mentioned on http://en.wikipedia.org/wiki/Distributed_social_network is actually distributed. What Stefan intends to do is indeed new and was only explored by some proprietary folks.
I've been thinking about the same concept for the last two years. I've finally decided to give it a try using Python.
I've spent the better part of last night and this morning writing a socket communication script & server. I also plan to remove the central server from the equation, as it's just plain cumbersome and there's no point to it when all the members could keep copies of their friends' keys.
Each profile could be accessed via a hashed string of someone's public key. My social network relies on nodes and pods. Pods are computers which have their ports open to the network. They help with relaying traffic as most firewalls block incoming socket requests. Nodes store information and share it with other nodes. Each node will get a directory of active pods which may be used to relay their traffic.
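A tiny sketch of what I mean by accessing a profile via a hashed public key (the key bytes here are just an example):

```python
# Tiny sketch: derive a stable profile identifier from a member's public key.
import hashlib

def profile_id(public_key_bytes: bytes) -> str:
    # Any canonical encoding works, as long as everyone hashes the same bytes.
    return hashlib.sha256(public_key_bytes).hexdigest()

print(profile_id(b"-----BEGIN PUBLIC KEY----- ...example... -----END PUBLIC KEY-----"))
```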
The PeerSoN project looks like something you might be interested in: http://www.peerson.net/index.shtml
They have done a lot of research and the papers are available on their site.
Some thoughts about it:
protocols to use: you could look at existing P2P programs and how they are designed
security concerns: privacy. Take great care not to open doors: a whole system can get compromised because you left some door open.
services: you could integrate with the regular social networks through their APIs
People will have to install a program on their computers and remember to open it every time, like any P2P client. Leaving everything on a web server requires less user action.
Somehow you'll need a centralized server to manage searches. You can't just broadcast to the internet to find friends. Otherwise you'll have to rely upon email requests to add someone, and to do that you'll need to know their email address in advance.
The fewer friends/contacts who use your program, the fewer people will want to use it, since it won't have contact information available.
I see that your server will be store-and-forward, so the update problem is solved.
I'm sure many of you are familiar with the IBM i5 series emulator (the green-screen terminal interface).
My company uses this religiously, and there is no business logic in it, so any time someone in our finance department makes a human error, it accepts it and adds it to the database. Not to mention it's ugly, hard to use, unintuitive, etc.
I would like to create a front end for this interface so that we can control the logic before it's submitted to the system (we don't control the system itself), so in effect I need to make my own emulator app.
However, I can't seem to find any information on how to interface with the iSeries, namely how to log in, send commands, and view or gather data from the screens it would normally send back.
Any suggestions?
The problem is not the iSeries but the software package your company is running on it.
There ARE advantages to using green screens: they're fast and almost unbeatable at data entry, provided you get used to them.
But to answer your question, the iSeries is a J2EE-enabled machine: an HTTP server comes installed and, depending on the version of the iSeries, WebSphere might already be installed, or you are entitled to install it. Then you can use JT400, which is the Java toolkit for OS/400, containing the JDBC drivers to connect to the database and the necessary classes for calling programs.
If you prefer PHP, there is a flavor of the Zend Framework made to work on the iSeries, but I have never tried it.
I'd recommend that you take a look at both the Attachmate Verastream Host Integrator (VHI) and IBM's Host Access Transformation Services (HATS) products. They effectively just screen-scrape the green-screen terminals to allow you to pull and push data, and provide macro recording and editing tools to automate the process. App integration can be achieved via web services or html/jsp/servlet programming (plus .Net for VHI and EJBs for HATS). They do come with enterprise pricing, however, which may be an obstacle for some. They do have free trial offerings for evaluation purposes to help determine if they are an appropriate solution to your problem.
What software package are they using? Most programs that I use in the 5250 emulator have some business logic that error-checks the data before adding it to the database. Can you give us some more information so we can point you in a better direction?
There are vendors that sell products that screen-scrape the 5250 data stream and produce a web front end. Or you can write your own front end in the language of your choice and just do SQL calls to the database.
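For the SQL route, a hedged example (this assumes the IBM i Access ODBC driver is installed and pyodbc is available; the host, credentials, library, and table are placeholders):

```python
# Rough sketch: query DB2 for i directly over ODBC instead of scraping 5250 screens.
# The driver name assumes IBM i Access is installed; names and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};SYSTEM=my-iseries-host;UID=MYUSER;PWD=secret"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM MYLIB.INVOICES WHERE AMOUNT > ?", 10000)
for row in cursor.fetchall():
    print(row)
conn.close()
```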
There's got to be some source code. Start by looking at the menus and menu options your users are accessing and figure out what's running behind them.
Use the command STRPDM to look for source code - look in different libraries (they are like folders).
You might have source code in "members" named something like xxxMNUSRC, xxxRPGSRC (RPG program source), xxxCLSRC (CL programs), or xxxDDSSRC (display/screen source and physical/logical file source).
Objects are "compiled" objects such as files (tables), screens, and printer files (reports).
Stay away from Qxxx and #xxx libraries - those are system libraries.
http://systeminetwork.com/ is a good resource for iSeries related questions.