I am trying to do the following:
I want a SIP User Agent to perform the following steps on receiving an inbound call (call set up request).
1) Read the caller ID from the SIP request and log the details to a file
2) Drop the call (terminate the call without picking up the call)
I have not been able to find a high-level API that will let me script this interaction. I have taken a look at JAIN, but it seems to be a very low-level API, and I imagine it will require a lot of work to get the above interaction coded up and working. Can anyone suggest an appropriate API to implement the above?
NOTE: I have tried Voxeo.com and their CCXML-based apps are great, but their pricing is aimed at big companies, so Voxeo is not an option.
There are quite a few open source SIP stacks around; two examples of many are pjsip and sipsorcery (as a disclaimer, I do some dev work on the latter). Which one suits will depend on your language and preferences. There are also lots of SIP tools around that may be a more efficient approach for you, such as SIPp.
Apart from those options, and given your very simple requirements, you could probably get away with 20 or 30 lines of code that listen on a UDP socket, parse the incoming INVITE to extract the From header, and then send back a rejection response by changing the top line of the request to make it a response and sending it back to where it came from.
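For what it's worth, here is a rough Node.js sketch of that idea. It is a minimal listener, not a compliant SIP stack; the port, the 603 Decline status, and the log file name are assumptions.

```js
// Very rough sketch: log the From header of an incoming INVITE and reply
// with a 603 Decline. Not a real SIP stack.
const dgram = require('dgram');
const fs = require('fs');

const socket = dgram.createSocket('udp4');

socket.on('message', (msg, rinfo) => {
  const text = msg.toString();
  if (!text.startsWith('INVITE')) return; // ignore everything but call setup

  // 1) Read the caller ID from the From header and log it to a file
  const from = (text.match(/^From:.*$/m) || [''])[0];
  fs.appendFileSync('calls.log', `${new Date().toISOString()} ${from}\n`);

  // 2) Reject the call: new status line plus the headers SIP expects to be
  //    echoed back (Via, From, To, Call-ID, CSeq)
  const echoed = text
    .split('\r\n')
    .filter((line) => /^(Via|From|To|Call-ID|CSeq):/i.test(line));
  const response = ['SIP/2.0 603 Decline', ...echoed, 'Content-Length: 0', '', ''].join('\r\n');

  socket.send(response, rinfo.port, rinfo.address);
});

socket.bind(5060); // standard SIP port, assumed
```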
If you're using C, try eXosip; you could easily do whatever you want.
It's clear that JAIN SIP can be quite painful (it's really the configuration that hurts; otherwise the API is quite high-level for manipulating messages), but you could take the jain-sip-presence-proxy, remove almost everything from its INVITE handler, and build your own message.
If you're using Java, you can use peers, which provides a high-level API in the net.sourceforge.peers.sip.core.useragent package. The entry point is the UserAgent class; take a look at the gui package if you want to see how it is used. Traces are written to log files, so you can track calls.
ivrworx, but it can only handle one scenario at a time.
Asterisk PBX can act as a simple SIP client and do just that; however, if you want to integrate something into your own solution, take a look at: http://sipsimpleclient.org/projects/sipsimpleclient/wiki/SipMiddlewareApi
Bots are amazing, unless you're Google Analytics
After many months of learning to host my own Discord bot, I finally figured it out! I now have a node server running on my localhost that sends and receives data from my Discord server; it works great. I can do all kinds of things I want with my Discord bot.
Given that I work with analytics every day, one project I want to figure out is how to send data to Google Analytics (specifically GA4) from this node server.
NOTE: I have had success in sending data to my Universal Analytics property. However, as awesome as it was to finally see pageviews coming in, it was equally heartbreaking to recall that Google will be getting rid of Universal Analytics in July of this year.
I have tried the following options:
GET/POST requests to the collect endpoint
This option presented itself as impossible from the get-go. In order to send a request to the collection endpoint, a client_id must be sent along with the request itself. And this client_id is something that must be generated using Google's client id algorithm. So, I can't just make one up.
If you consider this option possible, please let me know why.
Install googleapis npm package
At first, I thought I could just install the googleapis package and be ready to go, but that idea fell on its face immediately too. With this package, I can't send data to GA; I can only read data with it.
Find and install a GTM npm package
There are GTM npm packages out there, but I quickly found out that they all require there to be a window object, which is something my node server would not have because it isn't a browser.
How I did this for Universal Analytics
My biggest goal is to do this without using Python, Java, C++, or any other language I don't already know, because that route would require me to learn new languages. Surely it's possible with NodeJS alone... no?
I eventually stumbled upon the idea of actually hosting a webpage as some sort of pseudo-proxy that would send data from the page to GA when accessed by something like a page scraper. It was simple. I created an HTML file that has Google Tag Manager installed on it, and all I had to do was use the puppeteer npm package.
It isn't perfect, but it works and I can use Google Tag Manager to handle and manipulate input, which is wonderful.
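For reference, a minimal sketch of that puppeteer approach, assuming a hosted relay page with GTM installed on it; the URL, user agent string, and dataLayer event are placeholders, and the GTM tag/trigger setup is up to you.

```js
// Sketch of the pseudo-proxy idea: load a page that has GTM installed and
// push an event into its dataLayer. URL and event fields are made up.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Use a custom user agent string rather than the default headless one
  await page.setUserAgent('Mozilla/5.0 (Windows NT 10.0; Win64; x64) discord-relay');
  await page.goto('https://example.com/ga-relay.html', { waitUntil: 'networkidle0' });

  await page.evaluate(() => {
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({ event: 'discord_message', channel: 'general' });
  });

  await browser.close();
})();
```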
Unfortunately, this same method will not work for GA4, because GA4 automatically excludes all identified bot traffic, and there is no way to turn that setting off. It is a very useful feature for GA4, giving it quite a bit more integrity than UA, and I'm not arguing with that fact, but it is now the bane of my entire goal.
https://support.google.com/analytics/answer/9888366?hl=en
Where to go from here?
I'm nearly at the end of my wits on figuring this one out. So, either an npm package exists out there that I haven't found yet, or this is a futile project.
Does anyone have any experience in sending data from NodeJS to GA4? (or even GTM?) How did you do it?
...and this client_id is something that must be generated using Google's client id algorithm. So, I can't just make one up...
Why, of course you can. GA4 generates it pretty much the same as UA does. You don't need anything from google to do it.
Besides, instead of mimicking just requests to the collect endpoint, you may just wanna go the Measurement Protocol (MP) route right away: https://developers.google.com/analytics/devguides/collection/protocol/ga4 The links #dockeryZ gave work perfectly fine. Maybe try opening them in incognito, or in a different browser? Maybe you have a plugin blocking analytics urls.
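For illustration, a minimal Node 18+ sketch of an MP hit, assuming a measurement ID and API secret created in the GA4 admin UI; the event name and params are placeholders, and the client_id is just "random.timestamp", roughly what the browser library generates, so you can make one up and reuse it per user.

```js
// Hedged sketch of a GA4 Measurement Protocol event from Node 18+ (global fetch).
const measurementId = 'G-XXXXXXXXXX'; // placeholder from the GA4 admin UI
const apiSecret = 'YOUR_API_SECRET';  // placeholder from the GA4 admin UI

function makeClientId() {
  // "random.timestamp" is the shape GA's own client id takes
  return `${Math.floor(Math.random() * 1e9)}.${Math.floor(Date.now() / 1000)}`;
}

async function sendEvent(clientId, name, params) {
  const url = `https://www.google-analytics.com/mp/collect?measurement_id=${measurementId}&api_secret=${apiSecret}`;
  const res = await fetch(url, {
    method: 'POST',
    body: JSON.stringify({ client_id: clientId, events: [{ name, params }] }),
  });
  console.log('GA4 MP responded with', res.status); // a 2xx means the hit was accepted
}

sendEvent(makeClientId(), 'discord_message', { channel: 'general' });
```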
Moreover, you don't really need to reinvent the wheel. Node already has a few packages to send events to GA4; here's one that looks good: https://www.npmjs.com/package/ga4-mp?activeTab=readme
Or you can just use gtag directly to send events. I see a lot of people doing it even on the front-end: https://www.npmjs.com/package/ga-gtag Gtag has a whole API not described there. Here's more on gtag: https://developers.google.com/tag-platform/gtagjs/reference Note how the library allows you to set the client id there.
The only caveat there is that you'll have to track client ids and session ids manually. Shouldn't be too bad though. Oh, and you will have to redefine the concept of a pageview, I guess. The obvious definition is whenever someone posts in a channel that is different from the previous post in the session. Still, this will have to be defined in the code.
Don't worry about Google's bot traffic detection. It's really primitive. Just make sure your user agent doesn't scream "bot" in it. Make something better up.
I'm building a search app using Node.js and Express, and I want to add an autocomplete feature. Previously I used Socket.io to build a chat app, so Socket.io came to mind first.
But I did some research, and it looks like many people are using AJAX for autocomplete, so what are the differences between the two implementations?
I don't really have much experience with TCP and HTTP protocols so I would really appreciate clear and simple answers for noobs :)
First things first: your use case is to create an autocomplete feature. What does that mean? It means that when you type a letter into your input field, you query the server with the term you want to find and receive all the autocomplete values.
As a developer, when you read these feature details you should have the word event in mind, in our case the keypress event. So each time this event is triggered, you want to query the server to get the autocomplete list.
What possibilities do you have to do that?
The first and most commonly used option for this type of scenario is a simple AJAX call, which sends a request and, when it finishes, updates the autocomplete with the corresponding details. As we can see, in this case a request can potentially be made for each letter typed, so you usually implement a debounce function to reduce the number of calls (see the sketch below). The good thing here is that the connection is closed once you have received your details, and there are millions of jQuery plugins that do this just fine.
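A minimal browser-side sketch of that debounced AJAX call; the /autocomplete endpoint, the 300 ms delay, and renderSuggestions are assumptions.

```js
// Debounced autocomplete: only query the server once the user pauses typing.
const input = document.querySelector('#search');
let timer;

input.addEventListener('keyup', () => {
  clearTimeout(timer);
  timer = setTimeout(async () => {
    const res = await fetch(`/autocomplete?term=${encodeURIComponent(input.value)}`);
    const suggestions = await res.json();
    renderSuggestions(suggestions); // hypothetical helper that fills the dropdown
  }, 300); // 300 ms debounce window, tune as needed
});
```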
The second approach is to use socket.io, which is also a viable option: you open your connection once, and for each keypress event you emit your request for details, which will usually be faster because you reuse the existing connection (see the sketch below). The downside is that you will need to build it yourself; I do not know of any plugins that implement autocomplete with socket.io.
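And a sketch of the socket.io variant; the 'autocomplete'/'suggestions' event names and the search() helper are assumptions, and the client part reuses input and renderSuggestions from the sketch above.

```js
// --- server (Node, Express + socket.io v4) ---
const http = require('http');
const express = require('express');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

io.on('connection', (socket) => {
  socket.on('autocomplete', async (term) => {
    socket.emit('suggestions', await search(term)); // search() is a hypothetical lookup
  });
});

server.listen(3000);

// --- client (browser, with the socket.io client script loaded) ---
// const socket = io();
// input.addEventListener('keyup', () => socket.emit('autocomplete', input.value));
// socket.on('suggestions', renderSuggestions);
```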
Conclusion
Socket.io
faster due to reuse of existing connection
more work to implement, very few plugins/extensions
good for the case when you already using socket.io on your app
overkill just for the autocomplete feature
Ajax
slower in comparison with socket.io
tons of plugins
overall a good solution for this use case.
Socket.io/Websockets are primarily for real-time interactions between the server and the client(s). Socket.io also requires a constant connection and more setup to have the server respond to a single client. Either way, the speed will primarily depend on server processing. In the case of a search autocomplete, where you're literally sending a request to the server and expecting a single response back to the requesting client, I'd personally go with the AJAX route. This question has a few good answers that go into detail about this a bit more: What is the disadvantage of using websocket/socket.io where ajax will do?
What I Heard:
WebHooks: They are just HTTP POSTs, not a new protocol or any new technology. Let me put it in an example. Let's say we want to watch a directory for any changes and ping the user whenever anything is changed. I write C# code watching the directory for changes, and when something happens, I do an HTTP POST to let the user know something changed that might interest them.
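The same idea sketched in Node.js rather than C#; the watched directory and subscriber URL are placeholders the user would have registered beforehand.

```js
// Webhook as plain HTTP POST: watch a directory and notify a subscriber URL
// on every change. Payload shape is arbitrary; URL is a placeholder.
const fs = require('fs');

const subscriberUrl = 'https://example.com/hooks/dir-changed'; // registered by the user

fs.watch('./watched-dir', (eventType, filename) => {
  // Node 18+ has a global fetch
  fetch(subscriberUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ eventType, filename, at: new Date().toISOString() }),
  }).catch(console.error);
});
```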
Azure Functions: The best way I can explain it is hosting bits and pieces of reusable code online and hitting them via an HTTP call whenever needed, without worrying about infrastructure or any supporting platform.
What I want to know:
Why is the name WebHook making so much noise? I mean, it's very clear and straightforward programming that you do to tell your users that something happened, via some API calls or event listeners.
Can someone please help me understand these terminologies if I've got them wrong? Some examples along with your explanation would also help.
I'm fairly new to programming and this question is about making sure I get the HTTP protocol correctly. My issue is that when I read about HTTP request/response, it looks like it needs to be in a very specific format with a status code, HTTP version number, headers, a blank line followed by the body.
However, after creating a web app with nodejs/express, I never once had to actually write code that made an HTTP response in this format (I'm assuming, although I don't know for sure that other frameworks like ruby on rails or python/Django are the same). In the express app, I just set up the route handlers to render the appropriate pages, when a request was made to that route.
Is this because express is actually putting the response in the correct HTTP format behind the scenes? In other words, if I looked at the expressJS code, would there be something in that code that actually makes an HTTP response in the HTTP format?
My confusion is that, it seems like the HTTP request/response format is so important but somehow I never had to write any code dealing with it for a node/express application. Maybe this is the entire point of a framework like express... to take out the details so that developers can deal with business logic. And if that is correct, does anyone ever write web apps without a framework to do this. Would you then be responsible for writing code that puts the server's response into the exact HTTP format?
I'm fairly new to programming and this question is about making sure I get the HTTP protocol correctly. My issue is that when I read about HTTP request/response, it looks like it needs to be in a very specific format with a status code, HTTP version number, headers, a blank line followed by the body.
Just to give you an idea, there are probably hundreds of specifications that have something to do with the HTTP protocol. They deal with not only the protocol itself, but also with the data format/encoding for everything you send including headers and all the various content types you can send, authentication schemes, caching, status codes, URL decoding, etc.... You can see some of the specifications involved just by looking here: https://www.w3.org/Protocols/.
Now a simple request and a simple text response could get away with only knowing a few of these specifications, but life is not always that simple.
Is this because express is actually putting the response in the correct HTTP format behind the scenes? In other words, if I looked at the expressJS code, would there be something in that code that actually makes an HTTP response in the HTTP format?
Yes, there would. A combination of Express and the HTTP library that is built into node.js handles all the details of the specification for you. That's the advantage of using a library/framework. They even handle different versions of the protocol, and feedback from thousands of other developers has helped them to clean up edge case bugs. A good library/framework allows you to still control any detail about the response (headers, content types, status codes, etc..) without making you have to go through the detail work of actually creating the exact response. This is a good thing. It lets you write code faster and lets you ride on the shoulders of others who have already figured out minutiae details that have nothing to do with the logic of your app.
In fact, one could say the same about the TCP protocol below the HTTP protocol. No regular app developer wants to write their own TCP stack. Instead, you just want a working TCP stack that you can use that's already been tuned and debugged for you.
However, after creating a web app with nodejs/express, I never once had to actually write code that made an HTTP response in this format (I'm assuming, although I don't know for sure that other frameworks like ruby on rails or python/Django are the same). In the express app, I just set up the route handlers to render the appropriate pages, when a request was made to that route.
Yes, this is a good thing. The framework did the detail work for you. You just call res.setHeader(), res.status(), res.cookie(), res.send(), res.json(), etc... and Express makes the entire response for you.
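For example, a typical Express handler only states the parts you care about, and Express emits the full HTTP response; the route and payload here are made up.

```js
// Express writes the status line, headers, blank line, and body for you.
const express = require('express');
const app = express();

app.get('/widgets/:id', (req, res) => {
  res.status(200)
     .set('Cache-Control', 'no-store')        // header you control explicitly
     .cookie('lastWidget', req.params.id)     // Set-Cookie handled for you
     .json({ id: req.params.id, ok: true });  // serialized and sent as a complete response
});

app.listen(3000);
```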
And if that is correct, does anyone ever write web apps without a framework to do this. Would you then be responsible for writing code that puts the server's response into the exact HTTP format?
If you didn't use a framework or library of any kind and were programming at the raw TCP level, then yes you would be responsible for all the details of the HTTP protocol. But, hardly anybody other than library developers ever does this because frankly it's just a waste of time. Every single platform has at least one open source library that does this already and even if you were working on a brand new platform, you could go get an open source body of code and port it to your platform much quicker than you could write all this yourself.
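Just to make the contrast concrete, here is roughly what "no framework at all" looks like with Node's raw TCP (net) module; a toy that ignores the request entirely and is nowhere near HTTP-compliant.

```js
// Handling HTTP yourself at the TCP level: status line, headers, blank line,
// body, all written by hand.
const net = require('net');

net.createServer((socket) => {
  socket.on('data', () => {
    const body = 'hello';
    socket.end(
      'HTTP/1.1 200 OK\r\n' +
      'Content-Type: text/plain\r\n' +
      `Content-Length: ${Buffer.byteLength(body)}\r\n` +
      'Connection: close\r\n' +
      '\r\n' +
      body
    );
  });
}).listen(8080);
```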
Keep in mind that one of the HUGE advantages of node.js is that there's an enormous body of open source code (mostly in NPM and Github) already prepackaged to work with node.js. And, because node.js is server-side where code memory isn't usually tight and where code just comes from the local hard disk at server init time, there's little downside to grabbing a working and tested package that does what you already need, even if you're only going to use 5% of the functionality in the package. Or, worst case, clone an existing repository and modify it to perfectly suit your needs.
Is this because express is actually putting the response in the correct HTTP format behind the scenes?
Yes, exactly, HTTP is so ubiquitous that almost all programming languages / frameworks handle the actual writing and parsing of HTTP behind the scenes.
Does anyone ever write web apps without a framework to do this. Would you then be responsible for writing code that puts the server's response into the exact HTTP format?
Never (unless you're writing code that needs very low level tweaking of HTTP code or something)
I have a system where various rss feeds are added. I want to follow the content and be notified when new content is added in the feeds without having to check them one by one.
I found out there is a pubsubhubbub protocol and that publishers can use various hubs which implement this protocol in their feeds. This is how I found out about superfeedr and I'm trying to work with their XMPP API. I installed their nodejs library and made a few subscribe tests that worked fine.
Is it possible to use the node superfeedr module to subscribe to a feed that doesn't use superfeedr? For example I found one that has:
<link rel='hub' href='http://pubsubhubbub.appspot.com/'/>
Do I have to handle each hub separately or I can just send them the same requests based on the protocol?
Alex, I created Superfeedr.
Yes, of course it is possible to subscribe to a feed that doesn't use Superfeedr. Superfeedr acts as a default hub. You can add any feed, and you should get notifications for it. The only difference is that you may see delays. We poll feeds every 15 minutes, so, unless there are strong caches, you should see messages no later than 15 minutes after they've been published.
2 and 3 are probably not relevant given 1. However, I believe there are a couple of other PubSubHubbub libraries, but they all require that your endpoint be outside the firewall... and all of them will only work for feeds that use the PubSubHubbub protocol. Even though your application will use each hub separately, the code should be the same, so that's transparent for you (see the sketch below).
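For what it's worth, a plain PubSubHubbub subscription request looks the same whichever hub the feed declares (Superfeedr, pubsubhubbub.appspot.com, ...). A hedged Node 18+ sketch; the feed and callback URLs are placeholders, and your callback must be publicly reachable so the hub can verify it and deliver notifications.

```js
// Generic PubSubHubbub subscribe: a form-encoded POST to the hub the feed declares.
const params = new URLSearchParams({
  'hub.mode': 'subscribe',
  'hub.topic': 'https://example.com/some-feed.xml',    // the feed you want to follow
  'hub.callback': 'https://your-server.example/push',  // your public endpoint
});

fetch('http://pubsubhubbub.appspot.com/', { method: 'POST', body: params })
  .then((res) => console.log('hub replied with', res.status)) // 202 = verification pending
  .catch(console.error);
```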
I hope this helps.