I use a Linksys PAP2 for my fax line, and I can toggle modem mode by dialing *99 before the number; however, I do not want to have to dial it every time I send a fax. Is there any way to incorporate the modem toggle into the dial plan? I have googled and haven't found anything. I'm afraid the answer may be no, but I wanted to give this a try.
I know faxing over IP is not reliable, yet I would still like to entertain it. Are there other analog adapters that are more suitable?
Suppose it's the same number you want to use for sending the fax; let's assume it's "16131234567". Just add the following rule to your dial plan: (|<16131234567:*9916131234567>|)
When you dial the fax number, the ATA will automatically add *99 before it.
My PAP has the *99 modem code too. I researched this some time ago. *99 does the following: force DTMF inband, echo cancel off, echo suppress off, silence suppress off, force G.711/ulaw, and call waiting off. With respect to modems and fax, due to the half-duplex nature of fax, there may be a benefit to leaving echo cancellation on. It's ordinary dial-up data, not fax, where disabling the echo canceller helps most. However, some setups may be applying "echo suppress" and/or silence suppression, both of which hurt fax greatly.
In terms of making fax reliable over VoIP, the number one best thing you can do is complain, complain, and complain to your ISP and get them to reduce the packet jitter on your line! There is T.38 to experiment with, which was designed to work over jittery lines, but not all ATAs and providers support it. However, there is no ATA that is really "better" than another for fax and modems. VoIP primarily uses the G.711/ulaw codec, which is what the PSTN uses. It's mainly the high packet jitter on many residential lines that makes fax not work. The local nodes can be oversold, but providers generally want to deliver high speed (quantity), not low jitter (quality).
Oh! The one biggest myth going around is to disable ECM mode. DON'T! ECM mode was invented precisely to improve success rates over noisy and unreliable phone lines. In fact, ECM helps greatly with VoIP. Turning off ECM is "I don't care if my faxes are unreadable" mode, a throwback to the early 80s, when fax machines used thermal roll paper. Turning off ECM will increase the apparent "success" rate because there will be fewer errors reported by the machine in front of you. However, pages may be missing, cut off in the middle, unreadable, etc., while your machine still indicates a "success". I've heard from too many people who learned this the hard way, when recipients started telling them that pages were missing.
One last thing I want to touch on is baud rate. Generally, many machines have two modes: 9600 without ECM and 14400 with ECM, plus, if you have it, Super G3 at 33600. Of course, SG3 is going to give the most problems. Generally, reducing the baud rate below 14400 isn't necessary, since it's not a big jump and 14400 is more tolerant of noise anyhow. Don't switch to 9600 if it means disabling ECM mode.
I am trying to create a special military RADIO transmitter.
Basically, the flow is:
A soldier will receive a message to transmit (about 10 times a day). Each message is exactly 1024 bits long.
He will insert this message into the radio and validate it is inserted correctly.
The RADIO will repetitively transmit this message.
It is very important that the transmitter not be hacked, because it is critical in times of emergency.
So, the assistance I ask of you is: how to perform stage 2 without risking getting the radio infected.
If I transfer the data using a DOK (a USB flash drive), it may be hacked.
If I make the user type in all 1024 bits, it will be safe but exhausting.
Any Ideas? (unlimited budget)
(It’s important to say that the data being transmitted is not a secret)
Thanks for any help you can supply!
Danny
Edit:
Basically, I want to create the most secure way to transfer a fixed number of bits (in this case 1024) from one computer (which may be infected) to another, air-gapped computer, without any risk of a virus being transferred as well.
I don't mind if a hacker changes the data that is transferred from the infected computer; I just want the length of the data to be exactly 1024 bits, and to avoid a virus being inserted into the other computer.
Punch card (https://en.wikipedia.org/wiki/Punched_card) sounds like a good option, but an old one.
Any alternatives?
The transmitter is in the field, and is one dead soldier away from falling into enemy hands at any time. The enemy will take it apart, dissect it, learn how it works, and use the protocol to send fraudulent messages that may contain exploit code to you, with or without the original equipment. You simply cannot prevent a transmitter, or an otherwise mocked-up "enemy" version of a transmitter, from potentially transmitting bad stuff, because those are outside of your control. This is the old security adage "never trust the client" taken to its most extreme level. Even punch cards could be tampered with.
Focus on what you can control: The receiving (or host) computer (which, contrary to your description, is not air-gapped, as it is receiving outside communication in your model) will need to validate the messages that come in from the client source; this validation will need to check for malicious messages and handle them safely (don't do anything functional with them; just log them, alert somebody, and move on with life).
Your protocol should only be treating inbound messages like text or identifiers for message types. Under no circumstances should you be trying to interpret them as machine language instructions, and any SQL queries or strings that this message is appended to should be properly sanitized. This will prevent the host from executing any nasties that do come in.
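To make that concrete, here is a minimal Python sketch of the receiving side, assuming (purely for illustration) that the message arrives as a 256-character hex string over some serial or file interface. The point is that the input is checked for exact length and legal characters, and is never interpreted as anything but opaque data:

import logging
import re
from typing import Optional

EXPECTED_BITS = 1024                          # fixed message length from the question
HEX_RE = re.compile(r"[0-9A-Fa-f]{256}")      # 1024 bits == 256 hex characters

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("receiver")

def accept_message(raw: str) -> Optional[bytes]:
    """Validate an inbound message; treat it strictly as data, never as code."""
    raw = raw.strip()
    if not HEX_RE.fullmatch(raw):
        log.warning("rejected message: wrong length or illegal characters")
        return None                           # log it, alert somebody, move on
    payload = bytes.fromhex(raw)              # exactly 128 bytes == 1024 bits
    log.info("accepted %d-bit message", len(payload) * 8)
    return payload                            # hand the opaque bytes to the transmitter

# Example: a well-formed message is accepted; a malformed one is only logged.
accept_message("ab" * 128)
accept_message("definitely not 1024 bits of hex")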
I'm in a situation where we are hooking up to a device that may speak a variety of different baud rates depending on model. Some of which may be non-standard, like 10000, but that's another problem for another day.
Ideally I could use Qt to auto-detect the baud rate, but from my research that's likely not possible, for a few reasons, which I'm okay with. However, is there any native Linux-based method to auto-detect the baud rate of the connected device? Even a third-party open source application could suffice.
Linux serial drivers don't support autobauding, because most hardware doesn't support it, because there's no agreement on how it might work. It's highly application-specific.
If you're using FTDI serial adapters, most of them support bit-bang mode, and you can use them as a digital oscilloscope in that mode to get a bitstream that's very easy to autobaud on.
On other devices, the simplest way towards autobauding is to set the device to 2-3x the highest baudrate you expect, then treat the input data like a chunked digital oscilloscope, taking account of error bits, and use heuristics to detect the baud rate. It will succeed in a surprising number of cases, but you must get the statistical model of the data source right. I don't know of any pre-canned solutions for that.
Some additional kernel support could be had to better timestamp the input from the UART (whether hardware or USB) and thus decrease the uncertainty in your data, and thus the number of samples you need to take to detect the baud rate.
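As a rough illustration of that oscilloscope-plus-heuristics idea (not a pre-canned solution; it assumes you already have a list of edge timestamps for the RX line, e.g. captured via FTDI bit-bang mode, and the capture itself is not shown), a Python sketch might look like this:

# Estimate the baud rate from edge timestamps (in seconds) on the RX line.
STANDARD_BAUDS = [1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200]

def estimate_baud(edge_times, allow_nonstandard=True):
    """Shortest gap between edges ~ one bit time. This only holds if the traffic
    contains at least one isolated 0 or 1 bit, which ordinary ASCII usually does."""
    gaps = sorted(b - a for a, b in zip(edge_times, edge_times[1:]) if b > a)
    if not gaps:
        raise ValueError("need at least two distinct edge timestamps")
    raw = 1.0 / gaps[0]                       # crude heuristic: minimum pulse width
    nearest = min(STANDARD_BAUDS, key=lambda b: abs(b - raw))
    # If the raw estimate is far from every standard rate (think 10000 baud),
    # report the raw value instead of forcing it onto a standard one.
    if allow_nonstandard and abs(nearest - raw) / raw > 0.05:
        return round(raw)
    return nearest

# Synthetic edges from a 9600-baud byte (bit time ~104.2 us):
print(estimate_baud([0.0, 104.2e-6, 312.6e-6, 416.8e-6]))   # -> 9600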
"Some of which may be non-standard, like 10000, but that's another problem for another day."
No biggie. I figured it out 16 years ago :) This is the answer you're looking for. If you think that the API is sick as in very, very sick, then you'd be right.
I would like to track a large number of beacons (~500) at once within a 50-100 m radius via an app on an iPhone (5s). I've had a look at the spec and online, and I can't see whether there is any limit on the number of beacons you can track at once using BLE. Does anyone know if there is a limit on the number of beacons you can track, or whether an iPhone 5s would be up to the task of tracking that many beacons?
You used the word track, but iOS has two different methods: monitoring and ranging.
You can set a maximum of 20 regions to monitor. (Found in documentation for the startMonitoringForRegion: method.) Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time).
The other method is ranging, which is to find all the beacons within the Bluetooth range of the device (typically around 100 feet give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documentation for this, and I have only four beacons that I'm testing with, and four at a time works.
Here's one way to handle your situation. Make all your 500 beacons use the same UUID, and make a beacon region using the initWithProximityUUID:identifier: method. (The identifier is just for you; it doesn't affect anything.) Start monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons is found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near.
I'll add to Tim Tisdall's answer, which sets out the right framework. I can't speak to the specific capabilities of the iPhone 5s, or iOS in general, but I don't see any reason why it wouldn't return every ADV_IND packet (i.e. beacon transmission) that it receives.
The question is, will the 500 beacons be able to transmit their ADV_IND packets without collisions?
It takes about 0.128ms to transmit an ADV_IND packet. The time between advertising transmissions is configurable between 20ms and 10240ms (at intervals of 0.625ms), so the probability of collisions depends on the configuration of the beacons.
Based on the Poisson distribution, the probability of a collision for any given ADV_IND packet is 1-exp(-2*N*(0.128/AI)), where N is the number of beacons within range, AI is the time in milliseconds of the advertising interval (assuming all the beacons are configured the same), and the 0.128 is the time in milliseconds it takes to send the ADV_IND packet. (See http://www3.cs.stonybrook.edu/~jgao/CSE590-fall09/aloha-analysis.pdf if you want an explanation.)
For 500 beacons with the maximum advertising interval of about 10 seconds, there will be a collision about once every 81 packets (or about 6 out of 500). If you're willing to wait for a couple intervals (i.e. 30 seconds), there's a good chance you'll be able to receive all 500 ADV_IND packets.
On the other hand, if the advertising interval is smaller, say 500ms, you'll have a collision about 23% of the time (or 113 out of 500). You'd have to wait for several more intervals to improve the probability that you'd see the broadcasts from all the beacons.
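For what it's worth, here is a small Python check of those two numbers, using the same formula and the 0.128 ms packet time assumed above:

from math import exp

def collision_probability(n_beacons, adv_interval_ms, packet_ms=0.128):
    """ALOHA-style collision probability for a single ADV_IND packet."""
    return 1 - exp(-2 * n_beacons * packet_ms / adv_interval_ms)

for interval in (10240, 500):
    p = collision_probability(500, interval)
    print(f"AI={interval} ms: p={p:.3f} (~{round(500 * p)} of 500 packets collide)")
# AI=10240 ms: p=0.012 (~6 of 500 packets collide)
# AI=500 ms: p=0.226 (~113 of 500 packets collide)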
The other way to look at it is that the more beacons you have, the longer you have to wait to make sure you receive all their packets. (The math to calculate the delay to receive the packets with a certain probability from the number of beacons and the advertising interval is too much for me today.)
One caveat: if you want to connect to these beacons, as opposed to just receiving the ADV_IND packet, that requires an exchange of two more packets on the advertising channels, and the probability of a collision in the advertising channels goes up a bit.
If I am reading your question right, you want to put all 500 iBeacons within 100 meters of each other, meaning their transmissions will overlap. You will probably run into radio congestion problems long before you run into any limitations of iOS7 or your phone.
I have successfully tested 20 iBeacons in close proximity without problems, but 500 iBeacons is an extreme density. This discussion on the hardware issue suggests you may run into trouble.
At a minimum, the collisions of the transmissions of 500 iBeacons will make it take longer for your iOS device to see each iBeacon. Normally, iOS7 provides a ranging update once per second for each iOS device, but you may find that you get updates much less often. Whether or not less frequent updates are acceptable depends on your application.
Even if delays are acceptable, I would absolutely test this before counting on it working at all. Unfortunately, that means getting your hands on lots of iBeacons.
I don't agree. It is true that BLE beacons only transmit advertising data, but the transmission of that data lasts about 3 ms (considering the three advertising channels).
With 500 beacons, without considering any collisions, the scanner would take about 1.5 s just to see them all.
But if all beacons are configured the same way (same advertising interval), collisions are inevitable, which leads to undiscovered beacons. Even if the advertising interval differs between beacons, collisions still occur. To reduce the collision probability, one should use a longer advertising interval, but this leads to longer discovery latency.
This reasoning is very rough; it doesn't take many effects into account and is just an order-of-magnitude calculation.
By the way, the question is not easy: there are many parameters that play a role, some known and some unknown. But I've been working with BLE for about a year and, to me, 500 is a huge number, and there is a real possibility that you won't see the majority of the nodes because of collisions.
I was doing some research into iBeacons because of this question (I had no idea what it was about).
It seems that on the "beacon" side of things, all that happens is that general advertising packets are sent out. It's similar to how a device advertises that you can connect to it. However, you don't actually connect to iBeacons; the phone just reads those advertising packets. There's no built-in limitation on how many advertising packets a device can receive.
So, it wouldn't surprise me if 500 iBeacons ran with no issues. The advertising packets are small and are spaced out in time (they are repeated every X ms). There's no communication going from the phone to the iBeacon; the phone is simply receiving the packets it hears. If there's interference on one packet, it'll likely manage to get the next one.
I am currently working on a project involving a Lego Mindstorms kit. The brick is the NXT and I was curious about the bluetooth ping rates.
I ran a test of 100 pings on it and got some interesting results. The latencies seemed to fall into bands. I increased to 10,000 pings and it highlighted this trend even more clearly. Does anyone know what could cause this to happen?
In case it is relevant, the distance between the sender and receiver was about 3 metres.
A few reasons:
Buffering and internal timers to flush buffers can cause it.
It also depends on the ping interval (i.e., the time between subsequent pings), as the link might go into power-save mode during inactivity, and it will take a finite time to come back up.
Size of the ping packets
What Bluetooth profile is being used here?
At work, we just got a large number of exotic cellular devices that need to be programmed. To do this, you plug in a standard home telephone and dial a series of numbers, with pauses between them.
To me, this is a task that begs to be automated, and we've got one Linux desktop (a test Asterisk machine) with a modem on it.
So, how can I automate this task?
Simply send the necessary AT commands to your modem via the modem's corresponding /dev device, e.g. ATDT 12,456567,21
I think you should be able to open the modem device (often symlinked from /dev/modem) and enter modem codes to reset the modem (ATZ, perhaps), then the code to dial (ATD), then the number, with "," for a pause.
You can automate this in probably almost any language that allows you to write to the device file.
Take a look at the reference here:
http://www.zoltrix.com/support_html/modem/USEMODEM.HTM
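As a sketch of that idea in Python (the device path, phone number, and timings below are placeholders, and it assumes the pyserial package), the whole job reduces to writing a few AT strings to the serial device:

import time
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"          # hypothetical device node; often /dev/modem or /dev/ttyS0
NUMBER = "5551234,,1234,567"   # hypothetical digits; each "," is an S8-register pause

def send(ser, cmd, wait=2.0):
    """Write one AT command, then give the modem a moment to respond."""
    ser.write((cmd + "\r").encode("ascii"))
    time.sleep(wait)
    return ser.read(ser.in_waiting or 1).decode("ascii", errors="replace")

with serial.Serial(PORT, 115200, timeout=1) as ser:
    print(send(ser, "ATZ"))                    # reset to the stored profile
    print(send(ser, "ATDT " + NUMBER + ";"))   # tone-dial, ";" stays in command mode
    time.sleep(30)                             # let the far end digest the digits
    print(send(ser, "ATH0"))                   # hang up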
My typical dial out string (all directed at the modem device):
ATZ (Dear modem, forget everything you knew)
ATS11=33 (I liked dialing fast)
ATF0 (Auto negotiate link speed)
ATL3 (I like it loud)
ATM3 (I only like hearing the handshake loudly)
AT&G(x) (In case you have a US modem and need to use it in the rest of the world (guard tone))
AT&K3 (hw flow control, if not available use software via AT&K4)
AT&R1 (CTS (clear to send) is always on. Wrapping RJ-11 connections in static-free softener sheets helps this.)
Finally, and most importantly:
ATDT (number) (Dial a number using DTMF) Depending on the age, your modem may support ATDP (pulse dialing).
Just keep in mind, +++ is an escape sequence returning you to the modem console :) Have fun. +++ followed by ATH0 and you've hung up. ATH1 takes it off hook and does little else. ATA answers an incoming data call. A comma (,) is a pause.
Yeah, others linked to the Hayes AT command set, I actually used it for years as a SysOp of a BBS :)
Finally, screw Kermit, use Zmodem.
Links: Synchronet, WWiV, the rest are an exercise for the reader, though I humbly suggest searching for Renegade, Telegard, TaG and others.
Oh dear, I'm off on a tangent.
If you need to pause and respond to replies coming back from the device, this is exactly what expect was invented for.
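If you'd rather stay in Python, the pexpect package offers the same pattern; here is a minimal sketch (the port name and phone number are made up) that waits for the modem's replies instead of sleeping blindly:

import serial                    # pip install pyserial
from pexpect import fdpexpect    # pip install pexpect

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=5)   # hypothetical port
modem = fdpexpect.fdspawn(ser.fileno(), timeout=10)

modem.send("ATZ\r")
modem.expect("OK")                               # block until the modem acknowledges
modem.send("ATDT 5551234,,42;\r")                # made-up number; "," pauses, ";" stays in command mode
modem.expect(["OK", "NO CARRIER", "BUSY"])       # react to whichever reply arrives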
Use the Hayes command set:
The following commands are understood by virtually all modems supporting an AT command set, whether old or new.
D - Dial the following number and then handshake
P - Pulse Dial
T - Touch Tone Dial
W - Wait for the second dial tone
R - Reverse to answer-mode after dialing
# - Wait for up to 30 seconds for one or more ringbacks
, - Pause for the time specified in register S8 (usually 2 seconds)
; - Remain in command mode after dialing.
! - Flash switch-hook (Hang up for a half second, as in transferring a call.)
L - Dial last number
See Linux Modem-HOWTO for details.