I have a few questions:
Is it possible to implement private messages with GetStream?
Or, for example, can I combine the GetStream API with http://social-stream.dit.upm.es/ (a system written in Ruby on Rails)?
Is it possible to control and change the algorithm behind the machine learning in getstream.io?
I mean, I found very little information about machine learning in the documentation and in my GetStream account. Is there somewhere I can read about it in more detail?
Does machine learning work only on paid plans, or on the free plan as well?
Does getstream.io have a specific API for machine learning?
For example, if we build some additional features on our side that GetStream's API doesn't have, like private messages, how can we apply machine learning to those new features?
You may ask, "why do you need machine learning for private messages?"
It's not only for private messages. Here are a few examples:
a) If a user mentions certain keywords in private messages, we can determine which topics interest that user.
b) In another scenario, we can analyze images in posts (we use our own engine for that). If an image correlates with certain keywords and a specific topic, and we look at who liked/voted for that image, we can show those users more relevant content.
c) There are dozens of other examples where we need to control the machine learning process.
Even if we implement machine learning on our side (from scratch, just for private messages and other related features), how can we connect our machine learning results with GetStream's results? If we use them separately, it could be inefficient and produce unpredictable (even negative) results.
To clarify: I am not a developer. I am the owner, but I understand project management and the whole development process very well.
My questions come from a lack of understanding of how the API works.
Thanks in advance!
Stream helps you build activity streams and newsfeeds. A central aspect of that is the follow relationship. The same building blocks also make it easy to build notification feeds and private messages. Stream isn't built specifically for private messages, but we have dozens of apps using us for that purpose. It works for many apps, but depending on your feature set your mileage may vary.
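As a rough illustration of that "same building blocks" point, here is a minimal sketch using the stream-python client that treats a conversation as a feed and each private message as an activity. The "messages" feed group, IDs, and keys are made-up examples; you would configure the feed group in your Stream dashboard.

    import stream  # pip install stream-python

    # Placeholder credentials; use your own app's key and secret.
    client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET")

    # One feed per conversation ("messages" is an assumed feed group you would
    # create in the Stream dashboard; the ID just encodes the two participants).
    conversation = client.feed("messages", "alice_bob")

    # Each private message is stored as an activity on that feed.
    conversation.add_activity({
        "actor": "user:alice",
        "verb": "message",
        "object": "message:1",
        "text": "Hi Bob, did you see the new design?",  # custom field
    })

    # Either participant reads the conversation by fetching the feed.
    for activity in conversation.get(limit=10)["results"]:
        print(activity["actor"], activity.get("text"))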
As for your second question: Stream provides scalable newsfeeds, analytics, and personalization (machine learning). For larger customers we extensively customize the machine learning component. There are similarities between apps, but it's definitely something that needs to be tailor-made. At the moment, analytics and personalization are only available to our largest customers. More information can be found here: http://getstream.io/personalization/
Using your own machine learning and Stream's at the same time is pretty easy. You simply track engagement events both in Stream and in your own system. That will allow you to run your own analysis.
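To make the "track it in both places" idea concrete, here is a hedged sketch: each engagement event is written both to Stream (as an activity, via the stream-python client) and to a local SQLite table that your own ML jobs can read later. The feed group, table name, and event fields are illustrative, not part of Stream's API.

    import sqlite3

    import stream  # pip install stream-python

    client = stream.connect("YOUR_API_KEY", "YOUR_API_SECRET")

    # Your own store for engagement events (a local SQLite table as a stand-in).
    db = sqlite3.connect("engagement_events.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS engagement "
        "(user_id TEXT, verb TEXT, object TEXT, ts TEXT DEFAULT CURRENT_TIMESTAMP)"
    )

    def track_event(user_id: str, verb: str, obj: str) -> None:
        """Record one engagement event both in Stream and in our own database."""
        # 1) Send it to Stream as an activity on the user's feed
        #    ("user" is an assumed feed group configured in the Stream dashboard).
        client.feed("user", user_id).add_activity(
            {"actor": f"user:{user_id}", "verb": verb, "object": obj}
        )
        # 2) Keep our own copy so an in-house ML pipeline can analyse it later.
        db.execute(
            "INSERT INTO engagement (user_id, verb, object) VALUES (?, ?, ?)",
            (user_id, verb, obj),
        )
        db.commit()

    track_event("42", "like", "post:1001")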
I'm a web developer and completely new to the LoadRunner suite.
Our client has already provided us with some LoadRunner actions that I need to run to test a site hosted on the intranet of the company I'm currently working at.
The computer I'm using cannot handle more than 7 vusers, so I was asked to use Amazon EC2 for the load generators.
Before I ask my company to pay for Amazon services, I need to know: would I be able to test our internal page from my computer exactly as I do with the load generator on localhost, or does the page under test need to be publicly accessible from the internet?
Any feedback will be appreciated. Thanks.
Please read carefully what James wrote. You said you are a web developer so the task that was given to you is roughly equivalent to "write a new DB access layer".
You didn't mention which protocol you are using but I will assume TruClient (based on the 7 vUsers per machine). I will also assume you are using the latest version of LoadRunner or at least something from the 12.6X family.
1) You already have a solution for AWS out of the box in the form of StormRunner (https://www.microfocus.com/en-us/products/stormrunner-load-agile-cloud-testing/overview). If you want to test if the solution works for you please request a couple of execution hours from the sales team and try it. If your company has a valid license for LoadRunner I don't think this will be an issue.
2) You have a simple integration with EC2 and the like built into the Controller application: in the Controller, go to Tools -> Manage Cloud Accounts. If you run a small test, the cost should not be too great, I assume.
3) If you are a developer, we have a new offering called TruWeb, which is a transport-level protocol that should be more developer friendly. It can run many more users per machine, so you will be able to test from an EC2 micro instance (free tier). The caveat is that you will have to write some JavaScript code and will not be able to reuse the actions given to you. You can download TruWeb from https://marketplace.microfocus.com/appdelivery/content/truweb, and it also comes out of the box with the LoadRunner installation since 12.58. If you need further assistance with TruWeb, feel free to email us at truweb_forum#microfocus.com
I hope this will give you some directions.
a) You need training. This is not a discipline that someone is socially promoted into and finds success in.
b) Expect that it will take at least six months to begin delivering value in this field, longer if you are not working with a mentor.
c) This is a question of application communication architecture. Architecture is one of the foundation skills for a performance tester/engineer/architect.
d) It is not recommended that you use the controller as a load generator, nor that you use just one load generator. Either of these will cause your test to fail an audit from a more mature testing firm. Go with a minimum of three load generators: two for primary load, and one for a control set running a single virtual user of each type. Design your tests so you can compare the control timing records against the global set, to understand whether you have an application issue or a load generator issue.
e) You will need to coordinate with your network team for two reasons. One, you may need to open outbound ports (covered in the documentation) to allow your controller to communicate with your load generators. Two, you absolutely will have to coordinate a tunnel from the outside internet to your internal applications under test. Expect that security will be paramount: only your requests, and no other requests, should pass through the tunnel. There are many mechanisms to address this, from a custom HTTP header to certificates. Speak with your network security professionals about the setup and configuration you will be able to implement.
The self-paced training for LoadRunner is available for download. It takes about three days to go through; consider this the absolute minimum before you pick up this tool in anger. Ideally, you would go through training with a certified instructor and be paired with a mentor for a period. The length of time with the mentor is directly related to the number of foundation skills you bring to the table.
I want to build a constantly running forex trading application (one that keeps running even when no web page is open).
But I don't know the best way to do it. It's software that should run constantly with a constant internet connection. Should I write it in Python as a desktop application or as a web application, or is there something better?
Thank you for your support,
I don't see any point in making a web application in your case. The architecture of algorithmic trading systems is a broad subject, but most of the time it's an application connected only to a market data provider and a broker. A trading system can include a web application, for example for browsing your portfolio and historical trades, or for letting external events trigger trades, but you should focus on the strategy itself rather than on a time-consuming UI.
You should learn some basics from books or online courses, depending on your programming language, but I think Python is a great choice for rapid prototyping of strategies. You can also try existing platforms like Quantopian, which can give you a quick start with strategy testing.
Focus your efforts on thorough testing of the strategy, because most of the time strategies look good on paper but turn out to be utter garbage in practice, and without a strategy there is no need for infrastructure.
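To make that concrete, here is a minimal sketch (not a real strategy) of the kind of long-running process described above: a loop that polls a market data source, applies a simple rule, and sends orders to a broker. fetch_price, should_buy, and place_order are placeholders for whatever data and broker APIs you end up using.

    import random
    import time

    _last_price = 1.10  # simulated quote; replace fetch_price() with a real data feed

    def fetch_price(pair: str) -> float:
        """Placeholder for your market data provider; here it just random-walks."""
        global _last_price
        _last_price += random.uniform(-0.001, 0.001)
        return _last_price

    def should_buy(history: list) -> bool:
        """Placeholder strategy: buy when price dips 1% below its 20-tick average."""
        if len(history) < 20:
            return False
        avg = sum(history[-20:]) / 20
        return history[-1] < 0.99 * avg

    def place_order(pair: str, side: str, amount: int) -> None:
        """Placeholder for your broker's order API."""
        print(f"{side} {amount} {pair}")

    def run(pair: str = "EUR/USD", interval_s: int = 60) -> None:
        history = []
        while True:  # runs forever, independently of any web page
            try:
                history.append(fetch_price(pair))
                if should_buy(history):
                    place_order(pair, "buy", 1000)
            except Exception as exc:
                print(f"error, will retry: {exc}")  # one bad tick shouldn't kill the loop
            time.sleep(interval_s)

    if __name__ == "__main__":
        run()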
There are lots of technologies for developing "always on" scanners, and I won't scour the internet for you. However, to give you an example, you can use something like Azure Functions -- it can run your scanner on a schedule, and it can be built with Python, if that's your language of choice.
Of course, there are a lot of third-party tools to choose from as well.
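If Azure Functions appeals to you, a timer-triggered function using the Python (v2) decorator-based programming model might look roughly like the sketch below; the schedule, function name, and check_market() helper are illustrative placeholders.

    # function_app.py -- sketch of an Azure Functions timer trigger
    # (Python v2 decorator-based programming model). The schedule, function
    # name, and check_market() helper are illustrative placeholders.
    import logging

    import azure.functions as func

    app = func.FunctionApp()

    def check_market() -> None:
        """Placeholder: pull quotes from your data provider and run the scanner logic."""
        logging.info("scanning the forex market...")

    @app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer", run_on_startup=False)
    def forex_scanner(timer: func.TimerRequest) -> None:
        # NCRONTAB schedule above = every 5 minutes; no browser needs to stay open.
        if timer.past_due:
            logging.warning("timer is running late")
        check_market()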
I have an e-commerce website and I need to display products on the homepage based on user interest, like the advertising Facebook and Google show based on the searches we do on the internet.
Is there any API from Facebook, Google, or any other site for fetching user interests?
I have wondered how Facebook, Google, and Booking.com track us collaboratively even though they are different companies.
Is there a common gateway or interface where these big companies share user info, using cookies to track users across all of them, as a centralized big-data model for behavioral advertising?
I am looking for answers from experts on this. I really need to build a system for tracking user interest based on their searches on Google and other websites.
What you need is a recommender system: using a machine learning algorithm, you recommend products to people based on ratings and interest, drawing on previous data from the same user or from other users (just like the recommended videos you get on YouTube). A toy sketch of the idea follows below.
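To give a feel for the idea (nothing more than a toy), here is a tiny user-based collaborative filtering sketch in plain Python: it scores products liked by users similar to the current user and suggests the ones they haven't rated yet. The data and names are made up.

    import math

    # Toy user -> {product: rating} data; in practice this comes from your own database.
    ratings = {
        "alice": {"laptop": 5, "mouse": 4, "desk": 1},
        "bob":   {"laptop": 4, "mouse": 5, "keyboard": 4},
        "carol": {"desk": 5, "chair": 4, "lamp": 3},
    }

    def cosine(u: dict, v: dict) -> float:
        """Cosine similarity between two sparse rating vectors."""
        common = set(u) & set(v)
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        norm_u = math.sqrt(sum(x * x for x in u.values()))
        norm_v = math.sqrt(sum(x * x for x in v.values()))
        return dot / (norm_u * norm_v)

    def recommend(user: str, top_n: int = 3) -> list:
        """Score products liked by similar users that `user` hasn't rated yet."""
        scores = {}
        for other, other_ratings in ratings.items():
            if other == user:
                continue
            sim = cosine(ratings[user], other_ratings)
            for item, rating in other_ratings.items():
                if item not in ratings[user]:
                    scores[item] = scores.get(item, 0.0) + sim * rating
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("alice"))  # ['keyboard', 'chair', 'lamp'] -- bob is most similar to alice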
This topic is too big for me to explain step by step here, and you first need a good understanding of basic machine learning, such as classification, regression, etc.
So if you're interested, check out the Coursera course Machine Learning (Stanford University), taught by machine learning rockstar Andrew Ng. It doesn't just teach you machine learning; it takes you from somebody with no idea about the subject to an expert (technically). The course uses MATLAB/Octave and has an entire section on recommender systems, which is exactly what you need. Once you've finished, just implement what you learned in the language of your choice!
PS:
You can always look up tutorials online for implementing recommender systems, but you would waste a lot of time because you'd have no idea what you're doing without understanding the theory, which you will master in the course I've recommended. The course can also be found on YouTube, but taking it for free on Coursera will help you more because you'll get hands-on programming experience with the different subjects.
Google and Facebook have their own algorithms to track user activity, and they use them to show ads on their websites.
I don't think that is available for general use.
I don't think you will be able to track interests live the way Facebook, Google, Amazon, and Twitter do. They collect your interests from their own platforms.
They also manage large ad networks, so once you click an ad, it is tracked. Google Play and iTunes also have access to your phone.
I am looking to create a simple mobile agent system which will handle 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network service discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder, etc. My question is: which one should I use? I also need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of a mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created an agent-type method, but I couldn't pass it through the remote method invocation. I have been reading about TCP and UDP socket programming, and was thinking it might be simpler to do it with socket programming. In that case, would it still be called an agent? I was thinking of having the server send datagram packets to multiple clients.
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 90's, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks - I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE, and I agree with the previous answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If you do choose to proceed, pick a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow pages: agents don't have to know which other agents are running or what services they supply; they can simply ask the DF.
JADE's ContractNetBehaviours also help simplify communication.
I'm working on a project that gives teachers and students a way to interact with one another, using Azure for content delivery. However, since this is basically a free service (and a non-profit site), not every teacher can buy a copy of Encoder Pro to encode their streams.
This is where I'm at a crossroads and not sure which path to go down. I want teachers to be able to stream their desktops and interact with students, probably using the MSN or Facebook chat services, since that's infrastructure I don't need to pay for. But the question remains: how do they capture their desktop? And would Azure be able to convert that into a "smooth streaming" file, so that people with lower-bandwidth connections can watch the stream reliably? I know Azure can function as a CDN, but I'm not sure if it can do the conversion to live smooth streaming so that students can actually make use of the service.
Any ideas would be helpful. I'm kind of brainstorming right now and working on the client end of things, but I've slowed down until I can figure out this problem.
Thanks!
To answer part of your question, Azure recently added a Media Services component. It's still in preview mode (free for now). Think of it as a hosted Expression Encoder Pro exposed through a set of APIs. For more info, see https://www.windowsazure.com/en-us/develop/net/how-to-guides/media-services/