Issues Getting Started with Restcomm - VoIP

I've been trying to get started with programming with Restcomm for a few weeks now, and I'm having trouble figuring out what I need in order to get set up with all of the services.
So far, I have gotten myself situated with the Restcomm software via the AWS Marketplace; I am able to log into the software, but I have not yet been able to register a phone number. Whenever I select a number from the page by clicking "Register Number," a message comes up saying that registration "failed," with no additional information.
Additionally, I have downloaded and unzipped to my desktop the folders for Mobicents (which I have not read much about yet) and the Telscale USSD Gateway (which I have read most of the background documentation for, but have not yet put to use because of my inability to get Restcomm working).
I have really been trying to make sense of all of these pieces on my own, but I'm at a point of frustration. Could I get some guidance walking me through what I need in order to get started with Restcomm and have telephone-driven functions interact with a simple database?
Thanks!

If you have been able to log into the Restcomm AMI, you should be able to use the pre-packaged demo apps. Here is the documentation explaining how to test the demo apps: http://docs.telestax.com/restcomm-testing-default-demos/

Related

How can I schedule a YouTube livestream entirely from Linux?

I have a setup on a Raspberry Pi (with its native camera) that uses a cronjob to start an ffmpeg session with its output streaming to YouTube. I re-use the same stream key each time, which is written into my ffmpeg scripts. This all works perfectly each week, automatically starting and stopping at the desired time.
However, each week PRIOR to that livestream, I have to "manually" go into YouTube Studio and "schedule" a new future event. This is easy enough, since it lets me "reuse" previous settings -- all I have to change is the Title, date, and time. But I would love to figure out a way to automate that part of the process as well. I assume it involves using the YouTube Data API, but I'm not well versed in APIs, JSON, etc.
(I do have a strong Linux background, bash scripting skills, and general programming background.)
My final solution just needs to:
create the new scheduled event (maybe 12 hours prior to going live), with Title, Date, Time, "Unlisted" status, category, and so forth -- all the usual settings I do manually within Studio
retrieve the assigned URL for the upcoming stream (my script will then email that to me)
So, basically, I'm asking for help getting started with the API, or whatever method is capable of doing this. I would prefer to code it on the same Pi that does the ffmpeg encoding (although in a pinch, I could create the schedule from another computer, even Windows). Any examples would be great.
So far, all I have done is create my Google project, enable the YouTube Data API in the project, and create my API key. But I'm not sure where to go from there.
If Python suits you as an implementation language, then I'd recommend using Google's APIs Client Library for Python.
Basically, this library is of good quality and (compared to other client libraries) simple to use. It will, for example, insulate you from having to deal explicitly with REST API calls, JSON and the like. Your code will also work under both GNU/Linux and Windows.
You may begin your journey by reading the official getting started doc: YouTube Live Streaming API Overview. Then I recommend absorbing these two important documents: Life of a Broadcast and Understanding Broadcasts and Streams.
Then go read, understand and run the following sample program from Google: create_broadcast.py. Of course, you'll have to adapt that code to your use case.
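For orientation, here is a stripped-down sketch (not Google's official sample) of what the adapted code might look like once you have authorized credentials in hand; the OAuth part is covered below. The function name and the ordering assumption about your reusable stream are placeholders of mine; the calls themselves are the client library's liveBroadcasts/liveStreams methods:

```python
# Minimal sketch (not the official sample): schedule an unlisted broadcast
# and return its watch URL. Assumes `creds` is an authorized OAuth 2.0
# credentials object (see the OAuth sketch further below).
from googleapiclient.discovery import build

def schedule_broadcast(creds, title, start_time_iso):
    youtube = build("youtube", "v3", credentials=creds)

    # Create the scheduled broadcast (the "event" you normally set up in Studio).
    broadcast = youtube.liveBroadcasts().insert(
        part="snippet,status,contentDetails",
        body={
            "snippet": {
                "title": title,
                "scheduledStartTime": start_time_iso,  # e.g. "2024-06-30T18:00:00Z"
            },
            "status": {"privacyStatus": "unlisted"},
            "contentDetails": {"enableAutoStart": True, "enableAutoStop": True},
        },
    ).execute()

    # Re-use your existing persistent stream key: look up the stream and bind it.
    streams = youtube.liveStreams().list(part="id,snippet,cdn", mine=True).execute()
    stream_id = streams["items"][0]["id"]  # assumes your reusable stream is listed first
    youtube.liveBroadcasts().bind(
        id=broadcast["id"], part="id", streamId=stream_id
    ).execute()

    # The broadcast id doubles as the video id, so the URL can be derived from it.
    return "https://youtu.be/" + broadcast["id"]
```

A small wrapper script called from cron could run this roughly 12 hours before going live and pipe the returned URL into the email step you already have.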
You'll have to exercise patience and perseverance (since you say you have no prior experience with the YouTube Data API). This API is non-trivial, but it will pay off at the end of your (programming) journey (you mentioned that you have a programming background).
A special mention: to be able to call the live streaming APIs, you will first need to get acquainted with OAuth 2.0 authorization and authentication: Implementing OAuth 2.0 Authentication. There's an official document that you need to absorb: OAuth 2.0 for Mobile & Desktop Apps.
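To make the OAuth part concrete, here is a minimal sketch of the installed-app flow using the google-auth-oauthlib helper that accompanies the client library. The file names (client_secret.json, token.json) are placeholders; the first run needs an interactive browser consent, after which the saved token can be refreshed unattended on the Pi:

```python
# Minimal sketch of the installed-app OAuth 2.0 flow (file names are placeholders).
# The first run opens a browser for consent; afterwards the saved token is
# refreshed silently, so the weekly cron job can run unattended.
import os
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/youtube"]

def get_credentials(client_secrets="client_secret.json", token_file="token.json"):
    creds = None
    if os.path.exists(token_file):
        creds = Credentials.from_authorized_user_file(token_file, SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())               # silent refresh on the Pi
        else:
            flow = InstalledAppFlow.from_client_secrets_file(client_secrets, SCOPES)
            creds = flow.run_local_server(port=0)  # one-time interactive consent
        with open(token_file, "w") as fh:
            fh.write(creds.to_json())              # persist for next time
    return creds
```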
A few more references: the live streaming API has official documentation too. The main site documenting the client library is Google API Client Library for Python Docs. Its source is public and can be found in the client library's public repo under the docs directory.
It is also useful to look at the YouTube Data API's list of all instance methods.

How should I build this app over communication apps?

These days I find myself in the position of having to implement, for one of my college courses, a system that should act as a giant wrapper over many communication apps like Gmail, Facebook Messenger, and maybe even WhatsApp. To put it simply, you would have a single web interface where you can authorize Gmail and Messenger and use them together when required. I am thinking of going with a REST API to manage the user's services, authorized via OAuth2. I am also thinking of using Node.js and Express.js on the backend and React.js on the frontend. I found some sweet libraries on npm that should take care of interacting with the involved APIs (https://www.npmjs.com/package/node-gmail-api, this one for instance), but I am also doubtful about this approach; for example, I have no idea how to keep the user notified about incoming mails or messages. I am in dire need of some expertise since, I forgot to mention, I am quite the newbie in this field. To sum it up, my question is: how would you implement such an infrastructure? Is my approach viable, or am I bound to hit some really hard-to-overcome obstacles?
As a college exercise it would be a really fun experiment, so it's definitely worth the time you want to put into it. However, once you want to add more features, the complexity will go up pretty fast.
Here are a couple of ideas you can think of:
It's pretty clear that your system can't do more than what the APIs of the communication apps expose (e.g. you can't have notifications for Gmail if the API doesn't offer that capability).
For that reason, you should carefully study the APIs and what functionality they expose. They have public docs that you can check out (Gmail API, Facebook Messenger API).
Some of the apps you want to communicate with may not have an official API (e.g. WhatsApp) - those kinds of details you definitely want to know from the start.
Based on the analysis of those APIs, you should lay out a list of requirements for your system, which can be extracted from all the APIs, for example: message notifications, file transfers, user profiles, etc.
In this way, you know exactly what capabilities your system should have, and you don't end up implementing a feature that is available only in 1 API out of 4.
Also, it would be a bit challenging to design your system from a user perspective, because the apps have different usage patterns: chat apps, where messages arrive in real time, vs. email, which is not real-time communication. That's just a detail anyway; the gist of your project is to play with those APIs.
Also, it may be worth checking out the Gateway Aggregation Pattern, which is related to this project: you may want the user to send a message to multiple apps with a single request to your service.
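To make the aggregation idea concrete, here is a rough, language-agnostic sketch (written in Python rather than the Node.js stack you mentioned, but the shape is the same): one aggregation endpoint fans a single "send" request out to per-service adapters. send_via_gmail and send_via_messenger are hypothetical adapters you would implement on top of the respective APIs, not real library functions:

```python
# Gateway aggregation sketch: one request fans out to several provider adapters.
# The adapter functions below are placeholders for your own API-calling code.
import asyncio

async def send_via_gmail(user, recipient, text):
    ...  # call the Gmail API here

async def send_via_messenger(user, recipient, text):
    ...  # call the Messenger Send API here

ADAPTERS = {"gmail": send_via_gmail, "messenger": send_via_messenger}

async def aggregate_send(user, targets, recipient, text):
    """Fan one request out to every requested service and report per-service status."""
    tasks = {name: asyncio.create_task(ADAPTERS[name](user, recipient, text))
             for name in targets if name in ADAPTERS}
    results = {}
    for name, task in tasks.items():
        try:
            await task
            results[name] = "sent"
        except Exception as exc:   # one failing provider shouldn't sink the rest
            results[name] = f"failed: {exc}"
    return results

# Example call: asyncio.run(aggregate_send(current_user, ["gmail", "messenger"], "bob", "hi"))
```

The same structure translates directly to an Express route handler that awaits several provider calls and returns a per-service status object.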

I want to use the Alexa API to read my grocery list (for display in a web app). Must I write a "skill"?

First my apologies if this is a dumb question, or has been asked before. I've been searching for a couple of days and can't find an answer. This usually means I'm asking a dumb question. ;-)
I have a rPi and touchscreen in my kitchen; it displays helpful things like appointments for next 7 days, weather next 7 days, news headlines, etc. It's a web app I wrote in Angular 7, and it queries a NodeJS v8 backend which I also wrote. This was a hobby project to learn Angular and Node.
The Node app performs all interactions with the outside world, using Google's Calendar API to get appointments, Yahoo Weather API, newsapi.org. All of these integrations followed the same pattern -- obtain some apikey token, and use that when invoking some API method to get the requested data in a JSON wad.
Now, I would like to also get the grocery list from my Echo. After reviewing Amazon's Alexa API documentation, it doesn't appear that I can do this the same way as the previous three integrations I did. Must I really write a "skill" for this, though I never intend to invoke it on the Echo?
And if so, could you point to a decent sample? None of the samples provided by Amazon utilize the Lists API.
Yes, you must create a skill in order to receive an access_token. And then the Out-of-Session Interaction feature might be useful.
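Roughly, once your skill (with the Lists read permission granted) has yielded an access token, your backend can call the List Management REST API like any other HTTP API. Here is a sketch of the shape of those calls (shown in Python, though the same requests work from your Node backend); the endpoints and field names are taken from the Alexa List Management API documentation, so double-check them against the current docs:

```python
# Rough sketch, assuming you already have an Alexa access token with Lists
# read permission (obtained via your skill). Endpoints follow the Alexa
# List Management REST API; verify against the current documentation.
import requests

ALEXA_API = "https://api.amazonalexa.com"

def get_shopping_list_items(access_token):
    headers = {"Authorization": f"Bearer {access_token}"}

    # 1. Fetch metadata for all household lists (shopping list, to-do list, ...).
    lists = requests.get(f"{ALEXA_API}/v2/householdlists/", headers=headers).json()

    # 2. Pick the shopping list and fetch its active items.
    for lst in lists.get("lists", []):
        if "shopping" in lst["name"].lower():
            list_id = lst["listId"]
            items = requests.get(
                f"{ALEXA_API}/v2/householdlists/{list_id}/active",
                headers=headers,
            ).json()
            return [item["value"] for item in items.get("items", [])]
    return []
```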

BlueCove Client Doesn't Recognize PyBlueZ Service

I have a bluetooth server running on my Ubuntu desktop written using PyBlueZ. That server is advertising a certain service via UUID. I have tested this server thoroughly using a client I also wrote with PyBlueZ.
Now I am attempting to implement a Java client using BlueCove. My BlueCove client can successfully locate my desktop (I checked by printing out the "user friendly name"). However, when I do a service search on my desktop for the advertised UUID, no services are found. I do not get an error or anything, I simply do not get any callbacks for services discovered.
Is there anything unusual that might be going on when connecting these two systems? The UUIDs are identical, although I have noticed that the PyBlueZ format asks for dashes "-" in the UUID, whereas BlueCove expects one continuous string of hex digits. I assume this is a purely presentational issue.
Finally, in an unrelated matter, are there any BlueCove communities out there? I have found an extreme shortage of resources for answering my BlueCove questions...
The mistake I made was that I did not realize searchServices() was an asynchronous method which required as much time as device discovery to execute. I was not sleep()'ing to give it a chance to find services.
Now that I'm giving searchServices() time to complete, it is at least invoking the serviceDiscovered callback.

IBM iSeries Frontend Development

I'm sure many of you are familiar with the IBM i5 series emulator (it looks terrible).
My company uses this religiously, and there is no business logic in it, so any time someone in our finance department makes a human error, it accepts the entry and adds it to the database. Not to mention it's ugly, hard to use, not intuitive, etc.
I would like to create a frontend for this interface so that we can control the logic before it's submitted to the system (we don't control the system itself), so in effect I need to make my own emulator app.
However, I can't seem to find any information on how to interface with the iSeries, namely how to log in, send commands, and view or gather data from the screens it would normally send back.
Any suggestions?
The problem is not the iSeries but the software package your company is running on it.
There ARE advantages to using green screens: it's fast and it's almost unbeatable at data entry, provided you get used to it.
But to answer your question: the iSeries is a J2EE-enabled machine. An HTTP server comes installed and, depending on the version of the iSeries, WebSphere might already be installed, or you may be entitled to install it. Then you can use JT400, the Java toolkit for OS/400, which contains the JDBC drivers to connect to the database and the necessary classes for calling programs.
If you prefer PHP, there is a flavor of the Zend Framework made to work on the iSeries, but I have never tried it.
I'd recommend that you take a look at both the Attachmate Verastream Host Integrator (VHI) and IBM's Host Access Transformation Services (HATS) products. They effectively screen-scrape the green-screen terminals to let you pull and push data, and they provide macro recording and editing tools to automate the process. App integration can be achieved via web services or html/jsp/servlet programming (plus .Net for VHI and EJBs for HATS). They do come with enterprise pricing, however, which may be an obstacle for some. They offer free trials for evaluation purposes to help determine whether they are an appropriate solution to your problem.
What software packages are they using? Most programs that I use in the 5250 emulator have some business logic that error-checks the data before adding it to the database. Can you give us some more information so we can point you in a better direction?
There are vendors that sell products that screen-scrape the 5250 data stream and produce a web front-end. Or you can write your own front-end in the language of your choice and just make SQL calls to the database.
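If you go the direct-SQL route, something like the following sketch is the general idea. It assumes the IBM i Access ODBC driver is installed on the machine running your front-end; the driver name, host, credentials, and the MYLIB.ORDERS table are all placeholders, and your own validation logic would run before the INSERT:

```python
# Minimal sketch of talking to the iSeries database directly over ODBC.
# Driver name, host, credentials, and table are placeholders for your setup.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};"
    "SYSTEM=my-iseries-host;"
    "UID=MYUSER;PWD=MYPASS;"
)
cur = conn.cursor()

# Validate the entry in your own front-end first, THEN write it to the table
# the green-screen application uses.
cur.execute(
    "INSERT INTO MYLIB.ORDERS (ORDNO, AMOUNT) VALUES (?, ?)",
    ("A1234", 199.99),
)
conn.commit()

# Reading data back works the same way.
for row in cur.execute("SELECT ORDNO, AMOUNT FROM MYLIB.ORDERS"):
    print(row.ORDNO, row.AMOUNT)

conn.close()
```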
There's got to be some source code. Start by looking at the menu and menu option your users are accessing and figure out what's running behind them.
Use command STRPDM to look for source code - look in different libraries (they are like folders)
You might have source code in a "member" called something like xxxMNUSRC, xxxRPGSRC (RPG program source), xxxCLSRC (CL programs), or xxxDDSSRC (display/screen source, physical/logical file source).
Objects are "compiled" objects, such as files (tables), screens, and printer files (reports).
Stay away from Qxxx and #xxx libraries - those are system libraries.
http://systeminetwork.com/ is a good resource for iSeries related questions.
