I am trying to find a way to send a large number of SIP INVITEs from my Linux OS to a remote application that accepts SIP INVITEs.
I found a way to send many SIP INVITEs from the same source (i.e. ip.ethernetcard_local_linux_os, the IP of the local Linux machine's Ethernet card):
sipp -sn uac ip.remote.app -i ip.ethernetcard_local_linux_os -m 10 -s "name.user"
This sends 10 SIP INVITEs. The problem is that when I look at the traffic on the remote side (using tcpdump), I see that the source is always the same (ip.ethernetcard_local_linux_os). Is there a way to mimic different sources, i.e. to pretend that multiple clients are talking to the remote app?
Use a SIP stress-test tool such as SIPp to generate varied INVITE messages. SIPp can inject values from an external CSV file to vary the calls; from its documentation:
Injecting values from an external CSV during calls
You can use "-inf file_name" as a command line parameter to inject values into the scenario. The first line of the file states whether the data is to be read in sequence (SEQUENTIAL), in random order (RANDOM), or in a user-based manner (USER). Each subsequent line corresponds to one call and has one or more ';'-delimited data fields, which can be referenced as [field0], [field1], ... in the XML scenario file. Example:
SEQUENTIAL
sipp1
sipp2
sipp3
...
The file will be read in sequence (the first call uses the first data line, the second call the second, and so on). Wherever the keyword "[field0]" appears in the scenario file, it is replaced by "sipp1", "sipp2" or "sipp3", depending on the call.
As before, use
sipp -sn uac ip.remote.app -i ip.ethernetcard_local_linux_os -m 10 -s "name.user"
and add -inf file_name and -sf uac.xml to the command line.
In the XML scenario file (the standard uac example taken from the SIPp web page), replace
sip:sipp@[local_ip]:[local_port]>;tag=[call_number]
with
sip:[field0]@[local_ip]:[local_port]>;tag=[call_number]
That is it.
Basically, I am using wget on a file containing multiple URLs:
wget -i list_of_urls
and for each row in "list_of_urls", wget performs a log-in step to the FTP server that I'm downloading from. It does the log-in step automatically, without me entering any username or password. Each line produces the output
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|130.14.250.13|:21... connected.
Logging in as anonymous ... Logged in!
followed by the file downloading.
Is there any way to log in only for the first row and then use that login to download all the following rows? Since the URLs point to the same FTP server, just different files, logging in for each row feels wasteful.
Edit: changed from "website" to "FTP server" since that was what I actually meant, thanks. Added a sample output of the log-in message.
After some fiddling around, I think using the rsync protocol solved the problem. This works in this case because the file host runs both FTP and rsync servers containing the same files. I then simply (for small file sizes) use
rsync $(tr '\n' ' ' <list_of_urls) /usrpath/
which was much faster than using wget on the FTP URLs. I had to include the $(tr '\n' ' ' <list_of_urls) because the list of URLs is newline-separated, while rsync takes space-separated file arguments on the command line. It seems that the rsync protocol in this case logs in only once and then downloads all the files, which would explain why it went so much faster.
Another problem arises with this method when list_of_urls is very long, which I haven't solved yet.
I'm coding a program that performs some actions with WebDriver and AutoIt in Python, and I want to do two things before I start selling my code:
Add a one-time activation code to my software, so that the program works on only one PC.
Make my program able to receive updates from the internet once I add more features to the code or correct existing ones.
Is this possible with Python alone? What is the method to do it?
On the client side, you need to use the hard-disk serial and/or the UUID of a partition, or an operating-system install timestamp plus something else, to generate a serial code.
On the server side, you need an API that stores the hard-disk serial and validates whether a given computer is authorized. Your client then checks on load whether the activation is valid.
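As a rough illustration of the client-side part only, here is a minimal Python sketch. It uses the MAC address via uuid.getnode() as the machine-specific value (a stand-in for the hard-disk serial, which needs platform-specific tools to read), and the salt and hashing scheme are my own assumptions rather than anything prescribed above:

import hashlib
import uuid

SECRET_SALT = "replace-with-your-own-secret"  # hypothetical shared secret

def machine_fingerprint():
    """Build an identifier for this PC from machine-specific values."""
    # uuid.getnode() returns the MAC address of one network interface;
    # a real implementation would mix in more sources (disk serial,
    # partition UUID, OS install timestamp) as suggested above.
    raw = str(uuid.getnode()) + SECRET_SALT
    return hashlib.sha256(raw.encode()).hexdigest()

if __name__ == "__main__":
    # The client would send this value to your licensing API once during
    # activation, and re-check on startup that the stored activation
    # still matches the machine it is running on.
    print(machine_fingerprint())

Keep in mind that any purely client-side check can be bypassed by editing the code, so the server-side validation described above is the part that actually matters.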
The second question I can't answer.
Regarding the second part of your question:
Create a text file containing the latest version of your application and put it on your web server, e.g. http://download.example.com/example-app-version.txt, so that you can fetch and read its value later.
In your Python code, download and read that text when your app runs (for Python 3+, use 'import urllib.request' and urllib.request.urlretrieve) and compare it against the installed version (an if statement).
E.g.
if latestVer > installedVer:
    # update
else:
    # application continues
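Putting the pieces together, a minimal self-contained sketch of that check could look like the following. The URL and the installed-version constant are placeholders, and it reads the file with urllib.request.urlopen instead of urlretrieve so that nothing has to be written to disk:

import urllib.request

INSTALLED_VERSION = "1.2.0"  # hypothetical: the version baked into this build
VERSION_URL = "http://download.example.com/example-app-version.txt"

def parse_version(text):
    # "1.10.2" -> (1, 10, 2), so comparisons are numeric rather than lexical
    return tuple(int(part) for part in text.strip().split("."))

def update_available():
    with urllib.request.urlopen(VERSION_URL, timeout=10) as response:
        latest = parse_version(response.read().decode("utf-8"))
    return latest > parse_version(INSTALLED_VERSION)

if update_available():
    print("A newer version is available, downloading update...")
    # download and install the new build here
else:
    print("Application is up to date, continuing...")

Comparing versions as tuples of integers avoids the trap where the string "1.10.0" would compare as smaller than "1.9.0".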
I'm starting my project, which is simply about reading I/Q data from SDR software such as GNU Radio as input for my own application. I thought about using a pipe to do so, but I don't really know how to use one in this case. Another idea is to get the I/Q data directly from the sound card.
I would like to ask what the most effective way to get this data is. Thanks.
Named pipes are a very common way to do this. The concept is simple. First, you create a named pipe using the mkfifo command:
$ mkfifo my_named_pipe
$ ls -l
prw-rw-r-- 1 user user 0 Dec 16 10:04 my_named_pipe
As you can see, there's a new file-like thing with a 'p' flag.
Next, configure your GNU Radio app to write to this pipe (e.g. by using a File Sink or File Descriptor Sink block).
Then, all you need to do is configure your app to read from this file. Note that the GNU Radio app and your app need to run at the same time.
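For example, if the flowgraph writes complex samples into the pipe (GNU Radio's gr_complex, i.e. pairs of 32-bit floats, which numpy reads as complex64), a reading application in Python could look roughly like this sketch; the pipe name and chunk size are arbitrary choices:

import numpy as np

PIPE_PATH = "my_named_pipe"   # the FIFO created with mkfifo above
CHUNK = 4096                  # samples per read, arbitrary

# Opening a FIFO for reading blocks until the writer (GNU Radio) opens it too.
with open(PIPE_PATH, "rb") as pipe:
    while True:
        raw = pipe.read(CHUNK * 8)            # 8 bytes per complex64 sample
        raw = raw[: len(raw) - len(raw) % 8]  # drop any partial trailing sample
        if not raw:
            break                             # writer closed the pipe
        samples = np.frombuffer(raw, dtype=np.complex64)
        # ... process the I/Q samples here; as a placeholder, print the
        # number of samples and their average power
        print(len(samples), np.mean(np.abs(samples) ** 2))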
Of course, you could consider simply writing your app in GNU Radio. Using Python blocks, it's very easy to get started.
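For instance, with GNU Radio's embedded Python blocks, a toy sink that just prints the average power of the incoming I/Q stream is only a few lines (the block and its behaviour are made up for illustration; it assumes a GNU Radio version with Python block support):

import numpy as np
from gnuradio import gr

class power_printer(gr.sync_block):
    """Toy sink block: prints the average power of incoming I/Q samples."""

    def __init__(self):
        gr.sync_block.__init__(self, name="power_printer",
                               in_sig=[np.complex64], out_sig=None)

    def work(self, input_items, output_items):
        samples = input_items[0]
        print(np.mean(np.abs(samples) ** 2))
        return len(samples)

You would connect it after the source in the flowgraph like any other block, and do your processing directly inside work().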
My goal is to run a bash script automatically whenever any new file is added to a particular directory or any subdirectory of that particular directory.
Detailed scenario:
I am creating an automated process for file submission from teachers to students and vice versa. The sender uploads a file and it is stored inside the Uploads directory on the LAMP server in a format such as "name_course-name_filename.pdf". I want some method so that whenever a file is stored inside the Uploads folder, a script is called at that moment to send the file to the list of receivers.
From the database I can find the list of receivers for that particular course and student.
My only concern is how to call a script automatically and make it work on each individual file whenever the content of the directory changes. Cron would do this at intervals, but it is not real-time.
Linux provides a nice mechanism for this purpose called inotify. inotify is primarily a C API, but shell utilities have been built on top of it as well. You should use inotifywait from inotify-tools (the package name in Debian) for this. Here is a basic example:
#!/bin/bash
directory="/tmp" # or whatever you are interested in
inotifywait -m -e create "$directory" |
while read folder eventlist eventfile
do
echo "the following events happened in folder $folder:"
echo "$eventlist $eventfile"
done
Update:
If the problem gets more complicated, for example if you have to monitor recursive, dynamic directory structures, you should have a look at incron. It's a cron-like daemon which executes scripts on certain events, but the events are file-system events rather than timer events.
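For illustration only (the watched path and the script name are made up), an entry in the incrontab, edited with incrontab -e, has the form <path> <event mask> <command>; incron substitutes $@ with the watched directory and $# with the name of the file the event refers to:

/var/www/uploads IN_CLOSE_WRITE,IN_MOVED_TO /usr/local/bin/notify_receivers.sh $@/$#

Using IN_CLOSE_WRITE rather than IN_CREATE means the script only fires once the uploader has finished writing the file, which is usually what you want when processing uploads.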
There is another option to 'inotifywait':
-d --daemon
Same as --monitor, except run in the background logging events to a file
that must be specified by --outfile. Implies --syslog.
For completeness:
-m --monitor
Instead of exiting after receiving a single event, execute indefinitely.
The default behaviour is to exit after the first event occurs.
Within the do-done block of your 'while' statement, you might parse each event report for interesting details, then use 'case-esac' to take action based on each event that you care about.
For something that you plan to rely on for your operations, you might also consider replacing the hard-coded '$directory' with some sort of configuration file. Such a file might include the path and filename, the interesting events for that path and file, and a script to run when those events happen.
The script might take the list of events as parameters and then 'case-esac' again.
Just one man's ramblings,
~~~ 8d;-Dan
I have a computer (say computer A). Whenever computer A gets a connection on a particular telnet port, it launches a program.
This program on computer A handles login, authentication, etc. One of the jobs it can do is receive files. It handles this by launching gKermit:
/usr/local/bin/gkermit -e 8000 -w -a /location/x/ -ir
I have a second program on computer B. This second program connects to computer A:
mPid = forkpty(&mPort, buffer, &mCurrTermattr, NULL);
...
if (mPid == 0)  /* child */
{
    execl("/usr/bin/telnet", "telnet", mComPort.name.c_str(), NULL);
}
Now the parent process of the program can use the file descriptor mPort to send and receive data (e.g. to log into computer A and tell it to receive a file).
The problem is that when computer B launches gKermit to send a file, it cannot communicate with computer A's gKermit:
system("gkermit -d gkermit.txt -X -e 8000 -i -s testfile.txt");
One would think that, since we are talking over mPort, we could redirect the stdio of computer B's system() call to use that mPort by doing:
dup2(mPort, STDIN_FILENO)
However, this does not do the trick. Any help would be appreciated.
I may be wrong, but you need to redirect stdout (and maybe stdin, if the kermit communication is bidirectional). Also, I'm a little curious what mPort is, a pipe? Do you read and write to it? Usually, you have two file descriptors, one for reading and one for writing.
Thanks for the responses, jpalecek.
It seems that adding:
dup2(mPort, STDOUT_FILENO)
now allows gKermit to communicate in both directions, which of course makes sense. Ugh.