Run AT commands from adb shell on Redmi 7 - linux

I tried this:
echo -e "ATD123456789;\r" > /dev/smd0
and then when I ran:
cat /dev/smd0
I got this output:
ATD123456789;
Is that what I'm supposed to see? The phone didn't respond to the command.
Update: The phone made a call when I used smd7 or smd11. The problem is I'm trying to send SMS messages using AT+CMGS and it's not working.
Update 2: I run this command: cat /dev/smd7 & echo -e "AT+CMGS=24;\r" > /dev/smd7
Then I enter the PDU message and I get this: /system/bin/sh: 079...771B: not found

As you probably know, the command
ATD<number>;\r
performs a voice call to the destination number <number> (without the semicolon ; the call type would depend on the current setting of the AT+FCLASS command).
By default, the OK result code is received as soon as the remote phone starts ringing, so after a few seconds. It can take even longer if there are network problems or if the remote number is unavailable or doesn't exist.
The default timeout of the ATD command during a voice call is 30 s, and it can be changed with the ATS7 command. For example, to set a 1-minute timeout:
ATS7=60
The answer you get is the command echo: by default, the modem echoes every character sent to its AT port (the echo can be disabled with the ATE0 command and enabled again with ATE1). Receiving it is proof that the modem is powered on and communicating correctly.
So, even though I'm aware that's not the only thing you expect to see (you would like an answer!), you are actually supposed to see it.
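For example, to turn the echo off on the port used in the question (assuming /dev/smd0 is the port you are working with):
echo -e "ATE0\r" > /dev/smd0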
Some advice to help you receive an answer:
Start with simpler commands that have shorter timeouts, for example the very basic AT.
Make sure to wait at least the maximum command timeout.
Run the cat command in the background before you start issuing commands:
cat /dev/smd0 &
echo -e "AT\r" > /dev/smd0
OK
Note: I'm not aware of any timeout in the cat command.
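Regarding Update 2: the "not found" error comes from the shell, not from the modem. The echo returns immediately, so the PDU you type afterwards goes to the shell prompt, and the shell tries to run it as a command. The PDU has to be written to the device itself and terminated with Ctrl-Z (character 0x1A); also, unlike ATD, AT+CMGS takes no trailing semicolon. A rough sketch (the PDU is a placeholder you must replace, and 24 must match your PDU's length in octets, not counting the SMSC part):
cat /dev/smd7 &                                 # keep printing the modem's responses
echo -e "AT+CMGF=0\r" > /dev/smd7               # select PDU mode
echo -e "AT+CMGS=24\r" > /dev/smd7              # 24 = length of the PDU in octets, excluding the SMSC part
sleep 1                                         # give the modem time to print its '>' prompt
printf '%s\032' '<your PDU here>' > /dev/smd7   # \032 (octal) is Ctrl-Z, which submits the message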

To have an interactive session you can use:
strace 2>/dev/null -e inject=ioctl:retval=0 microcom /dev/smdXX
Without strace, microcom exits with an ioctl error.
strace makes microcom think the ioctl succeeded, so it is allowed to continue and run.

Related

Telnet to server, login, and return command result in one line

I've looked at several other solutions, but none appear to be working the way I need.
I have an embedded controller running Linux (Dreadnaught) and a router also running Linux. I want to read the routing table (just the WAN IP of the default route) of the router, from the controller. My controller has telnet and wget, but does not have ssh or curl. I'd like to do this in a single command with a single result, so I can send the one command from an internal program and parse/save one result.
If I telnet to the router from my PC, either of these two commands gives me the exact result I need:
route |grep default|cut -c 17-32
or
dbctl get route/default/current_gateway
The route command takes about 30 seconds (not sure why), even without grep and cut, but dbctl is instant for all intents and purposes.
I've tried the eval method per Telnet to login with username and password to mail Server, but that shows all the telnet interactions; I want just the final string result.
I had a poke around at wget, but it looks to be for downloading files, not executing commands.
I'm hoping for:
somecommand server=1.2.3.4 user=myuser passwd=MyP#s$ command='dbctl get route/default/current_gateway'
which just returns:
8.7.6.5
Then my internal program (ISaGRAF, but shouldn't be relevant) can send one string to cmd and be returned 1 string, which I can use for my own nefarious purposes (well, I'm just going to log it actually).
If there's absolutely no other way, I can drop a sh script on to the requesting controller, but I'd rather not (extra steps to install, not as portable).
Solved as I was reviewing the question, but looking for suggestions - is this the cleanest method? Could it be done better?
OK, I poked around at the eval method again. Yes, it shows me the full interaction, but it's easy to just get the bits I need, using head and tail:
eval "{ sleep 2; echo myuser; sleep 1; echo MyP#s$; sleep 1; echo 'dbctl get route/default/current_gateway'; sleep 2; }" |telnet 1.2.3.4 |head -n 5|tail -n 1
eval returns the full interaction:
Entering character mode
Escape character is '^]'.
login: myuser
Password:
admin@myrouter:~# dbctl get route/default/current_gateway
8.7.6.5
admin@myrouter:~#
So I just need head and tail to grab the one line I want using |head -n 5|tail -n 1
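Since you asked whether it could be done better: the same approach can be wrapped in a small shell function so your program only has to pass four arguments. This is just a sketch; the function name is my own, the timings are the same guesses as in the one-liner above, the head -n 5 count still depends on how many lines your particular router prints, and the eval isn't needed when the brace group is piped directly:
# Hypothetical wrapper: telnet_cmd <server> <user> <password> <command>
telnet_cmd() {
    server=$1 user=$2 pass=$3 cmd=$4
    # Feed the login sequence and the command to telnet, then keep only the result line
    { sleep 2; echo "$user"; sleep 1; echo "$pass"; sleep 1; echo "$cmd"; sleep 2; } \
        | telnet "$server" | head -n 5 | tail -n 1
}
# Usage:
# telnet_cmd 1.2.3.4 myuser 'MyP#s$' 'dbctl get route/default/current_gateway'
# prints: 8.7.6.5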

Update Bash commands every 2 seconds (without re-running code every time)

For my first bash project I am developing a simple bash script that shows basic information about my system:
#!/bin/bash
UPTIME=$(w)
MHZ=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq)
TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
#UPTIME shows the uptime of the device
#MHZ shows the overclocked specs
#TEMP shows the current CPU Temperature
echo "$UPTIME" #displays uptime
echo "$MHZ" #displays overclocked specs
echo "$TEMP" #displays CPU Temperature
MY QUESTION: How can I code this so that the uptime and CPU temperature refresh every 2 seconds without regenerating everything each time? (I just want these two variables to update without having to enter the file path again and re-run the whole script.)
This code is already working fine on my system, but after it executes, the information isn't updated: the command has finished and the shell is standing by for the next command instead of updating variables such as UPTIME in real time.
I hope someone understands what I am trying to achieve; sorry about my poor wording of the idea.
Thank you in advance...
The watch command will help you here: it re-runs your script every two seconds, no loop needed.
watch ./filename.sh
It refreshes the output of that command every two seconds.
watch - execute a program periodically, showing output fullscreen
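watch's default interval is already two seconds; if you want to set it explicitly, or highlight what changed between updates, the -n and -d options do that:
watch -n 2 -d ./filename.sh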
I'm not sure I fully understand the main goal, but here's an answer to the basic question "How can I code this so that the uptime and CPU temperature refresh every two seconds?":
#!/bin/bash
while :; do
UPTIME=$(w)
MHZ=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq)
TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
#UPTIME shows the uptime of the device
#MHZ shows the overclocked specs
#TEMP shows the current CPU Temperature
echo "$UPTIME" #displays uptime
echo "$MHZ" #displays overclocked specs
echo "$TEMP" #displays CPU Temperature
sleep 2
done
Let me suggest some modifications.
For such a simple job I recommend not using external utilities. So instead of $(cat file) you can use $(<file). This is cheaper, as bash does not have to launch cat.
On the other hand, if reading one of those devices returns only one line, you can use the bash built-in read, like read ENV_VAR <single_line_file. That is even cheaper. If there are more lines and, for example, you want the 2nd line, you could use something like { read line_1; read line_2; } <file.
As far as I can see, w prints much more than you need; I assume you only want the header line, which is exactly what uptime prints. The external uptime utility reads the /proc/uptime pseudo-file, so to avoid calling externals you can read that pseudo-file directly.
The looping part also uses the external sleep(1) utility. For this, the timeout feature of the read built-in can be used instead.
So in short the script would look like this:
while :; do
# /proc/uptime has two fields, uptime and idle time
read UPTIME IDLE </proc/uptime
# I don't have these pseudo-files on my system, so the whole line is read as-is
# Maybe some formatting is needed; for MHZ, /proc/cpuinfo may also be used
read MHZ </sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
read TEMP </sys/class/thermal/thermal_zone0/temp
# Bash supports only integer arithmetic, so chop off the fractional part
UPTIME_SEC=${UPTIME%.*}
UPTIME_HOURS=$((UPTIME_SEC/3600))
echo "Uptime: $UPTIME_HOURS hours"
echo $MHZ
echo $TEMP
# read consumes stdin, so pressing ENTER makes it return immediately
read -t 2
done
This does not call any external utility and does not fork at all. So instead of executing 3 external utilities (using the expensive fork and execve system calls) every 2 seconds, it executes none. Far fewer system resources are used.
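If you want to verify that on your own system (assuming strace is available), you can trace the process-creation system calls while the loop runs for a few iterations and then press Ctrl-C; apart from the initial execve of the shell itself, no further execve or clone calls should appear:
strace -f -e trace=process bash ./filename.sh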
You could use while [ : ] and sleep 2.
You need the awesome power of loops! Something like this should be a good starting point:
while true ; do
echo 'Uptime:'
w 2>&1 | sed 's/^/ /'
echo 'Clocking:'
sed 's/^/ /' /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
echo 'Temperature:'
sed 's/^/ /' /sys/class/thermal/thermal_zone0/temp
echo '=========='
sleep 2
done
That should give you your three sections, with the data of each nicely indented.

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want. It's not a major issue, but it is annoying.
1.) I run a third-party program that produces some output on stderr. Some of it is useful, some of it is routine stuff I don't care about and don't want dumped to screen; however, I do want the useful parts of the stderr dumped to screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps my errors at the right time but then returns a bash prompt, and I want to summarise the status of the errors at the end of the function; echoing there prints the text after the prompt, meaning I have to press Enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However what I really get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
Stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, while the function still has work left to do.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt comes back (I suggest 1 second more than the time you expect the function to need for processing the remaining lines).
I managed to do this as follows:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (It works fine with or without the additional time command.)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution >(ProcessErrors) runs as a separate, asynchronous process; when ./TestErrorStream.sh ends, neither it nor the wrapper waits for that process to finish. That's why we need the final sleep 6.
#!/bin/bash
function ProcessErrors {
while read data; do
echo Line was:"$data"
done
sleep 5
echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

Shebang causes script to fail

I'm quite bad at bash, and I'm trying to write a script that connects to all my switches with OpenSSH in order to do some configuration.
I created an array containing all 25 of my switches, and then I used a loop to open an SSH connection to each of them.
As I'm on Windows and using bash, I've installed Cygwin.
However, I had to use expect and write my password in plain text, as the switches are quite limited and this is the best option for me (I won't manually put my RSA key on every single switch, as it would take as much time as writing the configuration on every switch by hand).
I use the shebang #!/usr/bin/expect -f so that the expect commands are recognised. When I do this, the expect syntax (spawn, expect, interact) works perfectly, but my array doesn't work.
I get the following error message:
extra characters after close-quote
while executing "arrayname=("172.21.21.20" "172.20.55.55" ... "
When I change the shebang to #!/bin/bash, the expect commands are not found anymore:
./stationsnmp.sh: line 20: spawn : command not found
couldn't read file "assword": no such file or directory
./stationsnmp.sh: line 24: send : command not found
./stationsnmp.sh: line 27: send : command not found
./stationsnmp.sh: line 28: interact : command not found
I'm really not a pro in bash, which explains why I can't solve this little problem... Some help would be welcome.
EDIT: Below is a part of my code
#!/bin/bash
switch=("172.20.0.229" "172.20.0.232" "172.20.0.233" "172.21.0.15" "172.21.0.16" "172.21.2.1" "172.20.2.250" "172.21.3.1" "172.20.3.250" "172.21.4.1" "172.20.4.250" "172.21.6.1" "172.20.6.250" "172.21.7.1" "172.20.7.250" "172.21.8.1" "172.20.8.250" "172.20.9.250" "172.21.9.1" "172.21.10.1" "172.20.10.250" "172.21.11.1" "172.20.11.250" "172.21.12.1" "172.21.12.250")
nmb=`echo ${#switch[@]}`
set timeout 3
for ((ii=0; ii<=$nmb; ii++))
#for ii in {0..${#switch[@]}}
do
if [ ${switch[$ii]:5:1} -eq 1 ]
then
ipdc=`echo ${switch[ii]} | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.'`"10"
spawn ssh admin@switch[$ii]
expect "*assword*"
send "PASS\r"
interact
exit
fi
done
You are mixing bash and expect; those are two entirely different languages. You probably want a bash wrapper script with proper option handling (see getopts) that takes a list of IP addresses and executes your expect script for each IP address passed to it. If your expect script is small, you might want to embed it in your shell script rather than keeping it in a separate file.
EDIT:
#!/bin/bash
switches=("172.20.0.229" "172.20.0.232")
for ip in "${switches[@]}"; do
expect - "${ip}" <<-'EOT'
set host [lindex $argv 0]
set timeout 3
spawn ssh -l admin $host
expect "*assword*"
send "PASS\r"
interact
exit
EOT
done
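For completeness, here is a rough sketch of the other variant mentioned above: keeping the expect code in its own file with the expect shebang (so the bash array never meets expect at all) and calling it from a plain bash loop. The file names and the password are placeholders.
switch-loop.sh:
#!/bin/bash
switches=("172.20.0.229" "172.20.0.232")
for ip in "${switches[@]}"; do
    ./login.exp "$ip"
done
login.exp (made executable with chmod +x login.exp):
#!/usr/bin/expect -f
set host [lindex $argv 0]
set timeout 3
spawn ssh -l admin $host
expect "*assword*"
send "PASS\r"
interact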

How can I perform some commands on Intersystem cache from shellscript?

I want to perform some commands on Intersystems Cache from a shell script. One solution I know of is to make a config file, but the problem is that I don't know how to use a config file from a shell script. Is there any other solution for this?
For example, what I have to run on Cache is:
csession instancename
zn "area"
area>D ^%RI
Device:some/device/path
Next: (it should just take Enter)
This can be accomplished from a Linux shell: simply keep a log of the commands you need to perform and then put them into a script. Here's an example of logging into Cache and writing "Hello world" -- note that this also assumes you need to authenticate.
echo -e "username\npassword\nW \"Hello world\"\nH\n" | csession instance
Note that every command you would have run manually is in there, separated by "\n", the character that represents the Enter key.
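If the command list gets longer, the same input can be written as a here-document instead of one long echo -e string. This is just a sketch of the same approach, with the same placeholder credentials as above; quoting the EOF delimiter keeps the shell from expanding anything inside the block:
csession instance << 'EOF'
username
password
W "Hello world"
H
EOF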
It is possible (for some operating systems) to run the Cache terminal in batch mode. For example:
echo: off
wait for:Username
send: SYS<CR>
wait for:Password
send: XXX<CR>
echo: on
logfile: Somepath\myFile.log
send: ZN "AREA"
wait for:AREA>
send: D ^%RI
wait for:Device:
send: some/device/path
wait for:Next:
send: <CR>
This is documented in the Intersystems cache terminal documentation, especially the using terminal in batch mode section and the terminal scripts section.
This is a very old question, but I came across the same thing, and with a little R&D I found a workaround that is very simple.
Let's say I have this file (it can have any extension, with each command on a separate line):
myScript.scr
zn "%SYS"
for e="a","b","c" { w e,! }
On UNIX, you pass it to the Cache terminal by piping it into csession with the Linux pipe (|) operator:
cat myScript.scr | csession {instance_name}
E.g.
cat myScript.scr | csession CACHE
Output:
a
b
c
Note:
• Don't split a command across multiple lines, or csession will throw a <SYNTAX> error. (See how I wrote the for loop.)
• Extra knowledge: InterSystems Ensemble supports Cache Terminal batch mode on Windows, while on Linux there is no cterm to run the scripts.
• But Linux gives you this workaround instead ;).
Hope this helps you guys!! cheers :D
