Emulator is not running after AOSP build on Ubuntu 14.04 - emulation

After building successfully with ~$ make -j, running ~$ emulator shows the problem below.
I am using Ubuntu 14.04.
sh: 1: glxinfo: not found
emulator: WARNING: system partition size adjusted to match image file (1792 MB > 200 MB)
emulator: WARNING: data partition size adjusted to match image file (550 MB > 200 MB)
sh: 1: glxinfo: not found
emulator: WARNING: encryption is off
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 149 ()
Minor opcode of failed request: 2
Serial number of failed request: 35
Current serial number in output stream: 36
QObject::~QObject: Timers cannot be stopped from another thread
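The "glxinfo: not found" lines mean the mesa-utils package is missing, and the BadAlloc failure usually points at host-side GL/GLX trouble rather than at the AOSP build itself. A minimal sketch of checks to try, assuming a stock Ubuntu 14.04 host; which -gpu backends exist depends on the emulator build:
sudo apt-get install mesa-utils      # provides glxinfo, which the emulator shells out to
glxinfo | grep -i "opengl renderer"  # confirm the host GL driver actually works
emulator -gpu off                    # fall back to guest-side software rendering
emulator -gpu swiftshader            # host-side software GL, if this backend is available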

Related

WineBottler returning error message every time I try to open a .exe file on Mac

I'm trying to open a plugin for SNAP (to process Sentinel-3 imagery) on my Mac - the plugin downloads as an .exe file, which means I need to open it using WineBottler. Every time I try to open the file, however, I get this error message:
###BOTTLING### default.sh
/var/folders/rz/rr6ytzhx5gl60f1v1tbc67xm0000gn/T/AppTranslocation/6CDA1855-FA78-4A2A-A976-2C1A539F36ED/d/WineBottler.app/Contents/Frameworks/WBottler.framework/Resources/bottler.sh: line 39: /Applications/Wine.app/Contents/Resources/bin/wine: Bad CPU type in executable
###BOTTLING### Gathering debug Info...
Versions
OS...........................: darwin21
Wine.........................:
WineBottler..................: 1.8.6
Wineticks....................: 20220411-next - sha256sum: b6370f13c4dc410023f2a4e4e9a4385d2a0420031666c2f30befccc9b39c8f65
Environment
PWD..........................: '/Applications/Wine.app/Contents/Resources/bin'
PATH.........................: /Applications/Wine.app/Contents/Resources/bin:/usr/bin:/bin:/usr/sbin:/sbin
USER.........................: hannah
HOME.........................: /Users/hannah
COMPUTERNAME.................: hannah's MacBook Air
BUNDLERESOURCEPATH...........: /var/folders/rz/rr6ytzhx5gl60f1v1tbc67xm0000gn/T/AppTranslocation/6CDA1855-FA78-4A2A-A976-2C1A539F36ED/d/WineBottler.app/Contents/Frameworks/WBottler.framework/Resources
WINEPREFIX...................: /Applications/Wine.app/Contents/Resources
WINEPATH.....................: /Applications/Wine.app/Contents/Resources/bin
LD_LIBRARY_PATH..............: /Applications/Wine.app/Contents/Resources/lib:/opt/X11/lib:/usr/X11/lib
DYLD_FALLBACK_LIBRARY_PATH...: /Applications/Wine.app/Contents/Resources/lib:/usr/lib:/opt/X11/lib:/usr/X11/lib
SILENT.......................:
http_proxy...................:
https_proxy..................:
ftp_proxy....................:
socks5_proxy.................:
Bottle
TEMPLATE.....................:
BOTTLE.......................: /Users/hannah/Desktop/Untitled.app
INSTALLER_URL................: /Users/hannah/Desktop/iCOR_Setup_3.0.0.exe
INSTALLER_IS_ZIPPED..........: 0
INSTALLER_NAME...............: iCOR_Setup_3.0.0.exe
INSTALLER_ARGUMENTS..........:
REMOVE_MONO..................:
REMOVE_GECKO.................:
REMOVE_USERS.................:
REMOVE_INSTALLERS............:
WINETRICKS_ITEMS.............: winxp
DLL_OVERRIDES................:
EXECUTABLE_PATH..............: winefile
EXECUTABLE_ARGUMENTS.........:
EXECUTABLE_VERSION...........: 1.0.0
BUNDLE_COPYRIGHT.............: © Your Company
BUNDLE_IDENTIFIER............: com.yourcompany.yourapp
BUNDLE_CATEGORYTYPE..........: public.app-category.business
SILENT.......................:
Hardware:
Hardware Overview:
Model Name: MacBook Air
Model Identifier: MacBookAir7,2
Processor Name: Dual-Core Intel Core i5
Processor Speed: 1.6 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache (per Core): 256 KB
L3 Cache: 3 MB
Hyper-Threading Technology: Enabled
Memory: 4 GB
System Firmware Version: 476.0.0.0.0
OS Loader Version: 540.120.3~22
SMC Version (system): 2.27f2
Serial Number (system): C02QM1XWG941
Hardware UUID: EE27242F-C2B2-59E6-AAED-D598D1D61044
Provisioning UDID: EE27242F-C2B2-59E6-AAED-D598D1D61044
###BOTTLING### Create .app...
###BOTTLING### Enabling CoreAudio, Colors, Antialiasing and flat menus...
/var/folders/rz/rr6ytzhx5gl60f1v1tbc67xm0000gn/T/AppTranslocation/6CDA1855-FA78-4A2A-A976-2C1A539F36ED/d/WineBottler.app/Contents/Frameworks/WBottler.framework/Resources/bottler.sh: line 134: /Applications/Wine.app/Contents/Resources/bin/wine: Bad CPU type in executable
### LOG ### Command '/Applications/Wine.app/Contents/Resources/bin/wine regedit /tmp/reg.reg' returned status 126.
###ERROR### Command '/Applications/Wine.app/Contents/Resources/bin/wine regedit /tmp/reg.reg' returned status 126.
Task returned with status 1.
I've tried downloading the 'stable' version of WineBottler, and downloading and redownloading it, to no avail - it always returns this message. I can't seem to find any way of getting around this, or any recently posted questions (a lot are from 2010-15 and their solutions are outdated).
Does anyone know what I can do to get around this and open it? It's driving me insane!!!
Thanks!
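On an Intel Mac running darwin21 (macOS Monterey), "Bad CPU type in executable" usually means the wine binary bundled inside Wine.app is 32-bit (i386), and macOS Catalina and later only run 64-bit executables. A quick way to confirm, using the path from the log above:
file /Applications/Wine.app/Contents/Resources/bin/wine
# "Mach-O executable i386" would mean the bundled Wine is 32-bit only and cannot
# run on this macOS version; a 64-bit Wine build (for example WineHQ's packages
# or CrossOver) would be needed instead of this WineBottler bundle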

Glusterfs geo-replication issue

I have been using geo-replication (georep) for the last two months and posted this on their GitHub, but no answers so far.
Description of problem: after copying ~8TB without any issue, some nodes are flipping between Active and Faulty, with the following error message in the gsyncd log:
ssh> failed with UnicodeDecodeError: 'ascii' codec can't decode byte 0xf2 in position 60: ordinal not in range(128).
Default encoding in all machines is utf-8
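Since the UnicodeDecodeError is raised around the ssh worker, it may be worth confirming that a UTF-8 locale is actually in effect for non-interactive ssh sessions and for the Python process doing the decoding, not just for login shells. A minimal check; geoaccount and slavenode1 are placeholders taken from the log further below:
ssh geoaccount@slavenode1 'locale'   # locale seen by a non-interactive ssh command
ssh geoaccount@slavenode1 'python3 -c "import locale; print(locale.getpreferredencoding())"'
python3 -c "import locale; print(locale.getpreferredencoding())"   # same check on the master side
# if any of these report ANSI_X3.4-1968 / ascii, the UTF-8 default is not
# reaching the gsyncd processes even though the machines are configured for utf-8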
Command to reproduce the issue:
gluster volume geo-replication master_vol user@slave_machine::slave_vol start
The full output of the command that failed:
The command itself runs fine, but the session has to be started for the failure to appear, so the command is not the issue on its own.
Expected results:
No such failures, copy should go as planned
Mandatory info:
The output of the gluster volume info command:
Volume Name: volname
Type: Distributed-Replicate
Volume ID: d5a46398-9638-4b50-9db0-4cd7019fa526
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x 2 = 24
Transport-type: tcp
Bricks: 24 bricks (names omitted; not relevant and too long to list)
Options Reconfigured:
features.ctime: off
cluster.min-free-disk: 15%
performance.readdir-ahead: on
server.event-threads: 8
cluster.consistent-metadata: on
performance.cache-refresh-timeout: 1
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: WARNING
performance.flush-behind: off
performance.cache-size: 5GB
performance.cache-max-file-size: 1GB
performance.io-thread-count: 32
performance.write-behind-window-size: 8MB
client.event-threads: 8
network.inode-lru-limit: 1000000
performance.md-cache-timeout: 1
performance.cache-invalidation: false
performance.stat-prefetch: on
features.cache-invalidation-timeout: 30
features.cache-invalidation: off
cluster.lookup-optimize: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 33
storage.owner-gid: 33
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly
cluster.rebal-throttle: lazy
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
The output of the gluster volume status command:
I don't think this is relevant as everything seems fine; I'll post it if needed.
The output of the gluster volume heal command:
Same as before
- Provide logs present on following locations of client and server nodes:
/var/log/glusterfs/
Those aren't the relevant ones since this is georep; posting the exact issue (this log is from a master volume node):
[2022-09-23 09:53:32.565196] I [master(worker /bricks/brick1/data):1439:process] _GMaster: Entry Time Taken [{MKD=0}, {MKN=0}, {LIN=0}, {SYM=0}, {REN=0}, {RMD=0}, {CRE=0}, {duration=0.0000}, {UNL=0}]
[2022-09-23 09:53:32.565651] I [master(worker /bricks/brick1/data):1449:process] _GMaster: Data/Metadata Time Taken [{SETA=0}, {SETX=0}, {meta_duration=0.0000}, {data_duration=1663926812.5656}, {DATA=0}, {XATT=0}]
[2022-09-23 09:53:32.566270] I [master(worker /bricks/brick1/data):1459:process] _GMaster: Batch Completed [{changelog_end=1663925895}, {entry_stime=None}, {changelog_start=1663925895}, {stime=(0, 0)}, {duration=673.9491}, {num_changelogs=1}, {mode=xsync}]
[2022-09-23 09:53:32.668133] I [master(worker /bricks/brick1/data):1703:crawl] _GMaster: processing xsync changelog [{path=/var/lib/misc/gluster/gsyncd/georepsession/bricks-brick1-data/xsync/XSYNC-CHANGELOG.1663926139}]
[2022-09-23 09:53:33.358545] E [syncdutils(worker /bricks/brick1/data):325:log_raise_exception] : connection to peer is broken
[2022-09-23 09:53:33.358802] E [syncdutils(worker /bricks/brick1/data):847:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-GcBeU5/38c083bada86a45a28e6710377e456f6.sock geoaccount#slavenode6 /usr/libexec/glusterfs/gsyncd slave mastervol geoaccount#slavenode1::slavevol --master-node masternode21 --master-node-id 08c7423e-c2b6-4d40-adc8-d2ded4f66608 --master-brick /bricks/brick1/data --local-node slavenode6 --local-node-id bc1b3971-50a7-4b32-a863-aaaa02419de6 --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin --master-dist-count 12}, {error=1}]
[2022-09-23 09:53:33.358927] E [syncdutils(worker /bricks/brick1/data):851:logerr] Popen: ssh> failed with UnicodeDecodeError: 'ascii' codec can't decode byte 0xf2 in position 60: ordinal not in range(128).
[2022-09-23 09:53:33.672739] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
[2022-09-23 09:53:45.477905] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
- Is there any crash? Provide the backtrace and coredump
Log provided above.
Additional info:
Master volume: 12x2 Distributed-Replicate setup; it has been working for a couple of years now with no big issues as of today. 160TB of data.
Slave volume: 2x(5+1) Distributed-Disperse setup, created exclusively to be a slave georep node. We managed to copy 11TB of data from the master node, but it's now failing.
The operating system / glusterfs version:
On ALL nodes: Glusterfs version= 9.6
Master nodes OS: CentOS 7
Slave nodes OS: Debian11
Extra questions
I don't really know if this is the place to ask, but while we're at it, any guidance on how to improve sync performance? We tried raising the sync_jobs parameter to 9 (from 3), but as we saw (while it was working) it would only copy from 3 nodes at most, at a "low" speed (about 40% of our bandwidth). It could go as high as 1 Gbps, but the maximum we got was 370 Mbps.
Also, is there any in-depth documentation for georep? The material we found was too basic, and we'd like more docs to read and dig into.
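For reference, session tuning values like the sync_jobs parameter mentioned above are set through the geo-replication config subcommand; a minimal sketch, assuming the volume and user names from the reproduce command above and the sync_jobs option name as used by the asker:
gluster volume geo-replication master_vol user@slave_machine::slave_vol config sync_jobs 9
gluster volume geo-replication master_vol user@slave_machine::slave_vol config   # list the current session settings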

Running OpenOCD fails with jtagRocketConfig

This is what I get when I try to connect Software RTL simulation and OpenOCD:
xPack OpenOCD, x86_64 Open On-Chip Debugger 0.10.0+dev-00068-ge1e63ef30 (2020-03-16-05:57)
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'jtag'
Info : Initializing remote_bitbang driver
Info : Connecting to localhost:38000
Info : remote_bitbang driver initialized
Info : This adapter doesn't support configurable speed
Error: fflush: Broken pipe
Error: read: count=-1, error=Broken pipe
Error: Trying to use configured scan chain anyway...
Error: fflush: Broken pipe
Error: fflush: Broken pipe
Warn : Bypassing JTAG setup events due to errors
Error: fflush: Broken pipe
Error: fflush: Broken pipe
Error: failed jtag scan: -4
Error: Unsupported DTM version: 12
Info : Listening on port 3333 for gdb connections
Error: Target not examined yet
Error: Unsupported DTM version: 12
Error: fflush: Broken pipe
Error: failed: -4
I applied the instructions in section 8.2.2.1, Creating a DTM+JTAG Config.
The following is the content of OpenOCD config file:
interface remote_bitbang
remote_bitbang_host localhost
remote_bitbang_port 38000
set _CHIPNAME riscv
jtag newtap $_CHIPNAME cpu -irlen 5
set _TARGETNAME $_CHIPNAME.cpu
target create $_TARGETNAME riscv -chain-position $_TARGETNAME
gdb_report_data_abort enable
init
halt
And this is the command I use to run the simulation:
./simulator-chipyard-jtagRocketConfig +jtag_rbb_enable=1 --rbb-port=38000 <TESTNAME>
Update: I managed to run OpenOCD.
This is the current content of the OpenOCD config file:
adapter_khz 10000
interface remote_bitbang
remote_bitbang_host $::env(REMOTE_BITBANG_HOST)
remote_bitbang_port $::env(REMOTE_BITBANG_PORT)
set _CHIPNAME riscv
jtag newtap $_CHIPNAME cpu -irlen 5 -expected-id 0x10e31913
set _TARGETNAME $_CHIPNAME.cpu
target create $_TARGETNAME riscv -chain-position $_TARGETNAME
$_TARGETNAME configure -work-area-phys $::env(WORK_AREA) -work-area-size 8096 -work-area-backup 1
gdb_report_data_abort enable
gdb_report_register_access_error enable
# Expose an unimplemented CSR so we can test non-existent register access
# behavior.
riscv expose_csrs 2288
riscv expose_custom 1,12345-12348
init
set challenge [riscv authdata_read]
riscv authdata_write [expr $challenge + 1]
halt
And this is how I run OpenOCD:
REMOTE_BITBANG_HOST=localhost REMOTE_BITBANG_PORT=38000 WORK_AREA=0x1212340000 openocd --command 'gdb_port 0' --command 'tcl_port disabled' --command 'telnet_port disabled' -f openocd.cfg
I also installed Verilator (v4.028).
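Once OpenOCD is up, the usual next step is attaching a RISC-V gdb to the port it reports in its "Listening on port ... for gdb connections" line (3333 in the first log above; the run command here overrides it with 'gdb_port 0'). A minimal sketch, assuming a riscv64-unknown-elf toolchain is on the PATH and reusing the <TESTNAME> binary passed to the simulator:
riscv64-unknown-elf-gdb -ex 'target extended-remote localhost:3333' <TESTNAME>
# replace 3333 with whatever port the running OpenOCD instance reports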

libGL error - Unable to load drivers with OpenGL code in Ubuntu

I just installed PyOpenGL and proceeded to practice with this tutorial. It starts with this simple code that creates a window:
from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.GLU import *

window = 0                   # glut window number
width, height = 500, 400     # window size

def draw():                  # ondraw is called all the time
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)  # clear the screen
    glLoadIdentity()         # reset position
    # ToDo draw rectangle
    glutSwapBuffers()        # important for double buffering

# initialization
glutInit()                   # initialize glut
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH)
glutInitWindowSize(width, height)           # set window size
glutInitWindowPosition(0, 0)                # set window position
window = glutCreateWindow("noobtuts.com")   # create window with title
glutDisplayFunc(draw)        # set draw function callback
glutIdleFunc(draw)           # draw all the time
glutMainLoop()
But when I try to run it I get this set of errors:
An error ocurred while starting the kernel
libGL error: unable to load driver: nouveau_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: nouveau
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadContext
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 6 (X_GLXIsDirect)
Serial number of failed request: 43
Current serial number in output stream: 42
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 155 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 42
Current serial number in output stream: 43
Can someone guide me on what they mean or how to fix them?
There are some discussions on the internet about this issue, but I haven't found one with a solid solution. I'm using Ubuntu 16 by the way. Thanks
It seems you installed some proprietary drivers and things broke after that. You should uninstall the newly installed NVIDIA driver and reinstall it. Then, from the main window, search for the Additional Drivers application and, in that window, select the driver you installed.
If this doesn't help and you have not installed any additional drivers, please run the following command in a terminal and see whether you have this file:
locate nouveau_dri.so
If it can't be found, install it, and that should fix the error.
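Both nouveau_dri.so and swrast_dri.so ship with the Mesa DRI driver packages on Ubuntu, so if locate can't find them, reinstalling Mesa is a reasonable next step. A minimal sketch for Ubuntu 16.04; package names may differ on other releases, and the last line only applies if the proprietary NVIDIA driver is in use:
sudo updatedb                        # refresh the locate database
locate nouveau_dri.so swrast_dri.so
sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri
sudo ubuntu-drivers autoinstall      # optional: reselect a matching NVIDIA driver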

Out of memory errors when executing Jest on Ubuntu

I'm trying to execute Jest on Ubuntu 14.04.02, in a virtual machine with 4 GB of RAM, using node version 0.12.2 and npm 2.0.0-alpha-5.
free shows me:
             total       used       free     shared    buffers     cached
Mem:          3.8G       199M       3.6G       976K       1.1M        18M
When I run npm test, I keep getting a variety of out of memory errors:
Error: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
FATAL ERROR: Committing semi space failed. Allocation failed - process out of memory
# Fatal error in ../deps/v8/src/heap/store-buffer.cc, line 132
# CHECK(old_virtual_memory_->Commit(reinterpret_cast<void*>(old_limit_), grow * kPointerSize, false)) failed
Any idea what the minimum memory requirement is... or whether I have misconfigured something that is leading to this?
It turns out downgrading to node version 0.10.32, installed via npm, fixed the issue.
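If downgrading isn't an option, two other things worth trying are raising V8's old-space limit and pinning the node version with a version manager. A minimal sketch; the jest script path shown is a typical layout and may differ per project:
node --max-old-space-size=2048 ./node_modules/.bin/jest   # let V8 use up to ~2 GB of old space
npm install -g n && n 0.10.32                             # or pin the node version that worked, via the "n" manager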

Resources