Phone not entering system recovery, adb, fastboot - firefox-os

I flashed a wrong base image binary (flame) (http://1drv.ms/1rCB954) on my ZTE OPEN device. Now it's stuck on the main Firefox screen and refuses to enter recovery, and it isn't detected by adb or fastboot either. Is there still a way to recover my phone?
sudo ./flash.sh
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached
roamer2 device
Partition table...
< waiting for device >
sending 'partition' (33 KB)...
OKAY [ 0.005s]
writing 'partition'...
FAILED (remote: unknown partition name)
Finished. total time: 0.007s
Flash nCPU...
sending 'modem' (32369 KB)...
OKAY [ 2.963s]
writing 'modem'...
FAILED (remote: unknown partition name)
Finished. total time: 2.965s
sending 'rpm' (143 KB)...
OKAY [ 0.014s]
writing 'rpm'...
FAILED (remote: unknown partition name)
Finished. total time: 0.017s
sending 'tz' (331 KB)...
OKAY [ 0.033s]
writing 'tz'...
FAILED (remote: unknown partition name)
Finished. total time: 0.035s
sending 'sbl1' (239 KB)...
OKAY [ 0.023s]
writing 'sbl1'...
FAILED (remote: unknown partition name)
Finished. total time: 0.025s
sending 'sdi' (10 KB)...
OKAY [ 0.002s]
writing 'sdi'...
FAILED (remote: unknown partition name)
Finished. total time: 0.005s
sending 'fsg' (829 KB)...
OKAY [ 0.079s]
writing 'fsg'...
FAILED (remote: unknown partition name)
Finished. total time: 0.082s
Flash Apps...
sending 'aboot' (354 KB)...
OKAY [ 0.034s]
writing 'aboot'...
FAILED (remote: unknown partition name)
Finished. total time: 0.037s
sending 'boot' (7514 KB)...
OKAY [ 0.686s]
writing 'boot'...
OKAY [ 1.290s]
Finished. total time: 1.976s
sending 'system' (267069 KB)...
FAILED (remote: data too large)
Finished. total time: 0.001s
sending 'persist' (4264 KB)...
OKAY [ 0.390s]
writing 'persist'...
FAILED (remote: flash write failure)
Finished. total time: 0.655s
sending 'recovery' (8816 KB)...
OKAY [ 0.805s]
writing 'recovery'...
OKAY [ 1.536s]
Finished. total time: 2.342s
sending 'cache' (5304 KB)...
OKAY [ 0.485s]
writing 'cache'...
OKAY [ 1.376s]
Finished. total time: 1.861s
sending 'userdata' (36604 KB)...
OKAY [ 3.339s]
writing 'userdata'...
FAILED (remote: flash write failure)
Finished. total time: 9.362s
sending 'usbmsc' (20480 KB)...
OKAY [ 1.868s]
writing 'usbmsc'...
FAILED (remote: unknown partition name)
Finished. total time: 1.870s
Done...
rebooting...
Finished. total time: 0.001s
Just close the windows as you wish.
- waiting for device -
Please help.

I'm not sure where to locate official images; ZTE had an eBay site with images organized by country. Whenever I soft-brick a phone, I basically follow Jan's comment above to manually reboot the phone into the bootloader. Once that's done, you should be able to see the phone listed by running fastboot devices. Then the series of commands (which the flash scripts usually run) is simply:
fastboot flash boot boot.img
fastboot flash system system.img
fastboot flash userdata userdata.img
fastboot reboot
I just did this today with a Geeksphone Peak, but I've done it on almost every phone.

Related

Nuxt.js / Node.js Nginx requests per second

I'm trying to prepare a CentOS server to run a Nuxt.js (Node.js) application via an Nginx reverse proxy.
First, I fire up a simple test server that returns an HTTP 200 response with the text "ok". It easily handles ~10,000 requests/second with ~10 ms of mean latency.
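For reference, such a baseline server might look roughly like this (a minimal sketch; the original post does not include its source, and the port is assumed):
const http = require('http');
// Return a plain-text 200 "ok" for every request.
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
}).listen(3000);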
Then I switch to the hello-world Nuxt example app (npx create-nuxt-app) and use the weighttp HTTP benchmarking tool to run the following command:
weighttp -n 10000 -t 4 -c 100 localhost:3000
The results are as follows:
starting benchmark...
spawning thread #1: 25 concurrent requests, 2500 total requests
spawning thread #2: 25 concurrent requests, 2500 total requests
spawning thread #3: 25 concurrent requests, 2500 total requests
spawning thread #4: 25 concurrent requests, 2500 total requests
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done
finished in 9 sec, 416 millisec and 115 microsec, 1062 req/s, 6424 kbyte/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed,
0 errored
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 61950000 bytes total, 2000000 bytes http, 59950000 bytes data
As you can see, it won't climb over 1062 req/s. Sometimes I can reach something like ~1700 req/s if I ramp up the concurrency parameter, but no more than that.
I'm expecting a simple hello-world example app to handle at least ~10,000 req/s on this machine without high latency.
I've tried checking file limits, open-connection limits, Nginx workers, etc., but couldn't find the root cause, so I'm really looking forward to any ideas on where to at least start searching for it.
I can provide any logs or any other additional info if needed.

NodeJS socket.io unable to handle arrival rate in performance test

When performance-testing my Node.js socket.io app, it seems unable to handle the desired number of concurrent WebSocket connections.
I am testing the application in a Docker environment with the following specs:
CPUs: 2
Ram: 4 GB
The application is stripped down to a bare minimum that only accepts WebSocket connections, using socket.io + express.js (a sketch of what that might look like follows).
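For context, a bare-minimum server of that shape might look roughly like this (a sketch only; the actual code is not in the post, and the port and "echo" handler are assumed from the test scenario below):
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);

io.on('connection', function (socket) {
  // Echo back whatever arrives on the "echo" channel,
  // matching the artillery scenario's emit.
  socket.on('echo', function (data) {
    socket.emit('echo', data);
  });
});

server.listen(5000);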
I perform the tests with the help of artillery.io; the test scenario is:
config:
  target: "http://127.0.0.1:5000"
  phases:
    - duration: 100
      arrivalRate: 20
scenarios:
  - engine: "socketio"
    flow:
      - emit:
          channel: "echo"
          data: "hello"
      - think: 50
Report:
Summary report # 16:54:31(+0200) 2018-07-30
Scenarios launched: 2000
Scenarios completed: 101
Requests completed: 560
RPS sent: 6.4
Request latency:
min: 0.1
max: 3
median: 0.2
p95: 0.5
p99: 1.4
Scenario counts:
0: 2000 (100%)
Codes:
0: 560
Errors:
Error: xhr poll error: 1070
timeout: 829
So I get a lot of xhr poll errors.
While I monitor the CPU and memory stats, the highest value for the CPU is only 43.25%. Memory only gets as high as 4%.
Even when I alter my test to an arrival rate of 20 over a timespan of 100 seconds, I still get XHR poll errors.
So are these test numbers beyond the capability of Node.js + socket.io with these specs, or is something else not working as expected? Perhaps the Docker environment or the Artillery software?
Any help or suggestions would be appreciated!
Side note: I've already looked into Node.js clustering for scaling, but I'd like to get the most out of one process first (a rough sketch of the clustering approach follows, for reference).
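For what it's worth, the clustering approach mentioned in the side note would look roughly like this (a minimal sketch; './app' is a hypothetical entry point, not the poster's code):
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; each gets its own event loop.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Workers share the listening port; the master distributes connections.
  require('./app'); // hypothetical module that calls server.listen()
}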
Update 1
After some more testing with a WebSocket stress-test script found here: https://gist.github.com/redism/11283852
It seems I hit some sort of limit when I use an arrival rate higher than 50 or try to establish more than roughly 1900 connections.
Up to 1900 connections, almost every connection gets established, but beyond that number the XHR poll errors grow exponentially.
Still no high CPU or Memory values for the docker containers.
The XHR poll error in detail:
Error: xhr poll error
at XHR.Transport.onError (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transport.js:64:13)
at Request.<anonymous> (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:128:10)
at Request.Emitter.emit (D:\xxx\xxx\api\node_modules\component-emitter\index.js:133:20)
at Request.onError (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:309:8)
at Timeout._onTimeout (D:\xxx\xxx\api\node_modules\engine.io-client\lib\transports\polling-xhr.js:256:18)
at ontimeout (timers.js:475:11)
at tryOnTimeout (timers.js:310:5)
at Timer.listOnTimeout (timers.js:270:5)
type: 'TransportError', description: 503
Update 2
Changing the transport to "websocket" in the artillery test gives somewhat better performance.
Test case:
config:
  target: "http://127.0.0.1:5000"
  socketio:
    transports: ["websocket"]
  phases:
    - duration: 20
      arrivalRate: 200
scenarios:
  - engine: "socketio"
    flow:
      - emit:
          channel: "echo"
          data: "hello"
      - think: 50
Results: the arrival rate is no longer the issue, but I hit some kind of limit at 2020 connections. After that it gives a "Websocket error".
So is this a limit on Windows 10, and can it be changed? Is this limit the reason why the tests with long-polling perform so badly?

Connect exception in gatling- What does this mean?

I ran the config below in Gatling from my local machine to verify 20K requests per second.
scn
  .inject(
    atOnceUsers(20000)
  )
It gave the errors below in the report. What do these mean in Gatling?
j.n.ConnectException: Can't assign requested address: /xx.xx.xx:xxxx      3648   83.881 %
j.n.ConnectException: connection timed out: /xx.xx.xx:xxxx                 416    9.565 %
status.find.is(200), but actually found 500                                201    4.622 %
j.u.c.TimeoutException: Request timeout to not-connected after 60000ms      84    1.931 %
Are these timeouts happening because the server isn't processing the requests, or because the requests never leave my local machine?
Most probably yes, that's the reason.
It seems your simulation compiled successfully and started.
If you look at the error messages, you will see percentages after each line (83.881%, 9.565%, 1.931%). This means the requests were actually generated and sent, and some of them failed; the percentages are computed against the total number of failures.
If some of the requests are OK and you get these errors, then Gatling did its job: it stress-tested your application.
Try simulating with a lower number of users, for example:
scn
  .inject(
    rampUsers(20) over (10 seconds)
  )
If that works, then your application is simply not capable of handling 20000 requests at once.
For more info on how to set up a simulation, see here.

u-boot hangs on soft reboot

I'm having a subtle issue where, if I put my ARM device (U-Boot + Linux) through a soft-reboot cycle (stress test), it fails after 100+ cycles. The serial output I capture in the failing scenario is:
...
g_txrx_mode=1
g_profileid=1
id=0x1F11 board_type=0x0004 HAS_POE_SUPPORT=1
Not POE
read_rbf_header_from_ext4 - filename = e30.core.rbf filesize = 7317252
cff_from_mmc_ext4:writing e30.core.rbf length 13 num_files 0
Full Configuration Succeeded.
crestron_load_rbf: use core e30.core.rbf length 13 rval 1
Booting from primary
Writing to MMC(0)... done
dram_init: id 1f11 (id & 0x0001) 1 has_dsp/has_dante0
DDRCAL: Success
INFO : Skip relocation as SDRAM is non secure memory
Reserving 2048 Bytes for IRQ stack at: ffe2f708
DRAM : 512 MiB
On a successful reboot, the next printed lines are:
WARNING: Caches not enabled
MMC: In: serial
Out: serial
Err: serial
It seems it failed between 'skip_relocation()' and 'enable_caches()'. But why only after 100+ attempts? Could it be a memory issue? A memory-timing issue? And how can I debug it?

Meteor app deployed to Digital Ocean stuck at 100% CPU and OOM

I have a Meteor (0.8.0) app deployed using Meteor Up to DigitalOcean that's been stuck at 100% CPU, only to crash with out-of-memory and start up again at 100% CPU. It's been stuck like this for the past 24 hours. The weird part is that nobody is using the server, and meteor.log isn't showing many clues. I've got MongoHQ with oplog for the database.
Digital Ocean specs:
1GB Ram 30GB SSD Disk New York 2 Ubuntu 12.04.3 x64
[Screenshot showing the issue]
Note that the screenshot was captured yesterday, and the server has stayed pegged at 100% CPU until it crashes with out-of-memory. The log shows:
FATAL ERROR: Evacuation Allocation failed - process out of memory
error: Forever detected script was killed by signal: SIGABRT error:
Forever restarting script for 5 time
Top displays:
26308 meteorus 20 0 1573m 644m 4200 R 98.1 64.7 32:45.36 node
How it started:
I have an app that takes in a list of emails via CSV or Mailchimp OAuth, sends them off to FullContact via their batch process call (http://www.fullcontact.com/developer/docs/batch/), and then updates the Meteor collections according to the response status. A snippet from a 200-response handler:
if (result.statusCode === 200) {
  var data = JSON.parse(result.content);
  // FullContact rate-limit headers, logged for monitoring.
  var rate_limit = result.headers['x-rate-limit-limit'];
  var rate_limit_remaining = result.headers['x-rate-limit-remaining'];
  var rate_limit_reset = result.headers['x-rate-limit-reset'];
  console.log(rate_limit);
  console.log(rate_limit_remaining);
  console.log(rate_limit_reset);
  _.each(data.responses, function(resp, key) {
    // Keys look like "...=<email>"; take the address after the '='.
    var email = key.split('=')[1];
    if (resp.status === 200) {
      var sel = {
        email: email,
        listId: listId
      };
      // Upsert the profile, then persist the FullContact data on success.
      Profiles.upsert({
        email: email,
        listId: listId
      }, {
        $set: sel
      }, function(err, result) {
        if (!err) {
          console.log("Upsert ", result);
          fullContactSave(resp, email, listId, Meteor.userId());
        }
      });
      // Mark all raw CSV rows for this email as processed.
      RawCsv.update({
        email: email,
        listId: listId
      }, {
        $set: {
          processed: true,
          status: 200,
          updated_at: new Date().getTime()
        }
      }, {
        multi: true
      });
    }
  });
}
Locally, on my wimpy Windows laptop running Vagrant, I have no performance issues whatsoever processing hundreds of thousands of emails at a time. But on DigitalOcean it seems it can't even handle 15,000 (I've seen the CPU spike to 100% and then crash with OOM before, but afterwards it usually stabilizes... not this time). What worries me is that the server hasn't recovered at all despite little to no activity on the app. I've verified this through analytics: GA shows 9 sessions total over the 24 hours, doing little more than hitting / and bouncing, and Mixpanel shows only 1 logged-in user (me) in the same timeframe. And the only thing I've done since the initial failure is check the facts package, which shows:
mongo-livedata: observe-multiplexers 13, observe-drivers-oplog 13, oplog-watchers 16, observe-handles 15, time-spent-in-QUERYING-phase 87828, time-spent-in-FETCHING-phase 82
livedata: invalidation-crossbar-listeners 16, subscriptions 11, sessions 1
Meteor APM also doesn't show anything out of the ordinary, and meteor.log doesn't show any Meteor activity aside from the OOM and restart messages. MongoHQ isn't reporting any slow-running queries or much activity: 0 queries, updates, inserts, and deletes on average, from staring at their monitoring dashboard. So as far as I can tell, there hasn't been much activity for 24 hours, and certainly nothing intensive. I've since tried to install New Relic and Nodetime, but neither is quite working: New Relic shows no data, and meteor.log has a nodetime debug message
Failed loaded nodetime-native extention.
So when I try to use Nodetime's CPU profiler it comes up blank, and the heap snapshot returns Error: V8 tools are not loaded.
I'm basically out of ideas at this point, and since Node is pretty new to me it feels like I'm taking wild stabs in the dark here. Please help.
Update: The server is still pegged at 100% CPU four days later. Even an init 6 doesn't help: the server restarts, the node process starts, and it jumps right back up to 100% CPU. I tried other tools like memwatch and webkit-devtools-agent but could not get them to work with Meteor.
The following is the strace output:
strace -c -p 6840
Process 6840 attached - interrupt to quit
^CProcess 6840 detached
% time seconds usecs/call calls errors syscall
77.17 0.073108 1 113701 epoll_wait
11.15 0.010559 0 80106 39908 mmap
6.66 0.006309 0 116907 read
2.09 0.001982 0 84445 futex
1.49 0.001416 0 45176 write
0.68 0.000646 0 119975 munmap
0.58 0.000549 0 227402 clock_gettime
0.10 0.000095 0 117617 rt_sigprocmask
0.04 0.000040 0 30471 epoll_ctl
0.03 0.000031 0 71428 gettimeofday
0.00 0.000000 0 36 mprotect
0.00 0.000000 0 4 brk
100.00 0.094735 1007268 39908 total
So it looks like the node process spends most of its time in epoll_wait.
I had a similar issue. I didn't need oplog, and it was suggested that I add the Meteor package disable-oplog. I did, and CPU usage dropped a lot. If you are not really taking advantage of oplog, it might be better to disable it: do meteor add disable-oplog and see what happens.
I hope this helps.
Are you using Meteor Up?
I also use New York 2.
In my local environment (Ubuntu Server on VirtualBox) it works great with only 512 MB and 1 core.
I'm having the same issue on a DigitalOcean 4 GB RAM, 2-core VPS + Meteor Up (and my app, of course).
LOCAL ENVIRONMENT on VirtualBox - 1 CORE - 512 MB - New York 2 - Ubuntu 14.04 x86.
-------------------------------------
>Meteor.js = 0.8.0,
>Node = 0.10.26,
>MongoDB shell version = 2.4.10,
>%CPU = 20.8 avg,
>%MEM = 27.4 avg
DIGITALOCEAN 4 GB RAM - 2 CPUS - ubuntu 14.04 x64.
-------------------------------------
>Meteor.js = 0.8.0,
>Node = 0.10.26,
>MongoDB shell version = 2.4.10,
>%CPU = 101.8 avg,
>%MEM = 27.4 avg
> PID meteoru+ 20 0 1644244 796692 6228 R **102.2** **32.7** 84:47.08 node
Also, my app does something like yours. I'm using the CFS package from Atmosphere, and node-csv to read the CSV that I upload. The upload works great, and node-csv works great too... but I can't confirm whether that's the problem; it seems to be Node running on DigitalOcean.
My MongoDB works great also...
I was new to VPSes, and the first thing I tried to do was run my script. The problem was that I had started the same server with node and pm2 a couple of times.
Solution:
Run pm2 kill to kill all processes run by your process manager.
Run killall node to kill any remaining node processes.
Run pm2 start <your_server>.js to run your server again.
