Using Sublime to edit Yocto files leads to failure to start the bitbake server

If I open my Yocto project's folder with Sublime under Ubuntu 16.04 and try to build with:
bitbake <image>
I get these errors:
ERROR: Unable to start bitbake server (None)
ERROR: Server log for this session (/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/build-openstlinuxeglfs-stm32mp1-sw25v00/bitbake-cookerdaemon.log):
--- Starting bitbake server pid 4602 at 2020-02-01 02:59:00.519051 ---
Traceback (most recent call last):
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/bb/daemonize.py", line 83, in createDaemon
function()
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/bb/server/process.py", line 469, in _startServer
self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/bb/cooker.py", line 210, in __init__
self.initConfigurationData()
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/bb/cooker.py", line 396, in initConfigurationData
self.add_filewatch(mc.getVar("__base_depends", False), self.configwatcher)
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/bb/cooker.py", line 306, in add_filewatch
watcher.add_watch(f, self.watchmask, quiet=False)
File "/local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/layers/openembedded-core/bitbake/lib/pyinotify.py", line 1924, in add_watch
raise WatchManagerError(err, ret_)
pyinotify.WatchManagerError: add_watch: cannot watch /local/STM32MP15-Ecosystem-v1.1.0/Distribution-Package/openstlinux-4.19-thud-mp1-19-10-09/build-openstlinuxeglfs-stm32mp1-sw25v00/conf WD=-1, Errno=No space left on device (ENOSPC)
ERROR: No space left on device or exceeds fs.inotify.max_user_watches?
ERROR: To check max_user_watches: sysctl -n fs.inotify.max_user_watches.
ERROR: To modify max_user_watches: sysctl -n -w fs.inotify.max_user_watches=<value>.
ERROR: Root privilege is required to modify max_user_watches.
Closing the editor and issuing the command again works correctly.
Other editors (such as gedit) do not exhibit this behavior.
I know I can live without Sublime, but I want to understand the cause of the errors.

You're running out of inotify watches. Programs such as Sublime and the one you're running here (among others) use inotify watches to detect changes to the file system, for example to track when files are modified or when the contents of a directory change.
There's a (user settable) upper limit to the number of watches that can be in use at once, and the rather cryptic error message you're seeing here is a symptom of the limit being reached and the program failing to obtain a watch.
The default value for the maximum inotify watches may not be set high enough on your system for the software (and volume of files) that you're using, but you can change that if you like.
The output at the bottom of your error diagnostic information shows how you can view and adjust the upper limit. The following question also shows how to do this:
https://unix.stackexchange.com/questions/13751/kernel-inotify-watch-limit-reached
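For example, on a typical Linux system you can inspect and raise the limit like this (a sketch; 524288 is just a commonly used value, pick what suits your machine):

# Check the current limit
sysctl -n fs.inotify.max_user_watches
# Raise it for the running system (requires root)
sudo sysctl -w fs.inotify.max_user_watches=524288
# Persist the new value across reboots
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p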

Related

shutil.rmtree() error when trying to remove NFS-mounted directory

Attempting to execute shutil.rmtree(path) on a directory managed by NFS consistently fails. Below you can see that os.rmdir(path) within shutil.rmtree(path) causes the exception. Is there a more robust way for me to achieve the expected result?
It appears that it removes all of the files, yet a hidden .nfs file remains in the directory for a short amount of time. I'm guessing that the process from which I'm calling rmtree has an open file handle to one of the files inside the directory, which, when deleted, apparently causes NFS to write a new hidden file. That would cause os.rmdir to fail on attempting to remove a non-empty directory.
Traceback (most recent call last):
File "/home/me/pre3/lib/python3.6/shutil.py", line 484, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/me/pre3/lib/python3.6/shutil.py", line 482, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty:
NFS details:
$ nfsstat -m
/home/me/nfs from XXX.YYY.ZZZ:/mnt/path/to/nfs
Flags: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=50,retrans=2,sec=sys,mountaddr=REDACTED,mountvers=3,mountport=832,mountproto=udp,local_lock=none,addr=REDACTED
I'm using Python 3.6.6 on Ubuntu 16.04.
If the Python logging module is logging to the target output directory, it will maintain an open file. A workaround is to call logging.shutdown() first, then call shutil.rmtree(path). This is not a general answer to the broader question, however.
You could try defining an error handler function to be passed to the onerror arg for shutil.rmtree: https://docs.python.org/3/library/shutil.html#shutil.rmtree
def handle_rmtree_err(function, path, excinfo):
    # function is the os call that failed, path its argument,
    # and excinfo the sys.exc_info() triple from the failure
    print(f"{function.__name__} failed on {path}: {excinfo[1]}")

shutil.rmtree(my_path, onerror=handle_rmtree_err)
There are all sorts of reasons why a process may be holding onto a file, so I can't tell you what the error handler should do exactly.
If you haven't figured out what is holding onto the file, try $ lsof | grep .nfsXXXX.
If all else fails you could time.sleep(secs) and retry shutil.rmtree.
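If you end up doing that from outside the process, the same sleep-and-retry idea can be scripted around the removal; a rough sketch (the target directory is a placeholder, and a try/except loop around shutil.rmtree with time.sleep inside your own process is the more direct equivalent):

for i in $(seq 1 10); do
    # retry until shutil.rmtree succeeds; NFS needs a moment to drop its .nfs* placeholder
    python3 -c "import shutil; shutil.rmtree('/home/me/nfs/stale_dir')" && break
    sleep 1
done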

Python in Visual Studio Code getting [WinError 5] Access denied from ideScripts

It is my first time posting here, so I apologize if this question is not correctly formatted.
I installed VSC to use it for developing STM32 code. I found the damongranlabs ideScripts on GitHub, which would help greatly. While running the update.py script, I get the [WinError 5] access denied error. VSC is using PowerShell as a terminal, and I am running Windows 10 with Python 3.8 32-bit.
I have tried running VSC as an admin with no luck.
The following is what I get in powershell after attempting to run the script:
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Try the new cross-platform PowerShell https://aka.ms/pscore6
PS D:\Development\STM32\cubemxprojects\476EncTest> & d:/Development/STM32/cubemxprojects/476EncTest/ideScripts/update.py
Update started.
Existing '.vscode' folder used.
One STM32CubeMX file found: 476EncTest.ioc
Existing 'Makefile' file found (restored from '.backup').
Copy of file (new name: Makefile): D:/Development/STM32/cubemxprojects/476EncTest/Makefile.backup
Makefile 'print-variable' function added.
Valid 'buildData.json' file found.
Valid 'toolsPaths.json' file found.
'toolsPaths.json' file updated!
Enter path(s) to OpenOCD configuration file(s):
Example: 'target/stm32f0x.cfg'. Absolute or relative to OpenOCD /scripts/ folder.
If more than one file is needed, separate with comma.
Paste here and press Enter: C:\Users\omis2\AppData\Roaming\GNUMCUEclipse\GNU MCU Eclipse\OpenOCD\0.10.0-12-20190422-2015\scripts\target\stm32l4x.cfg
Enter path or command for 'stm32SvdPath':
Paste here and press Enter: C:\Users\omis2\AppData\Roaming\GNUMCUEclipse\Keil.STM32L4xx_DFP.2.2.0\CMSIS\SVD\STM32L4x6.svd
ERROR (55 seconds).
Unexpected error occured during 'Update' procedure. Exception:
Traceback (most recent call last):
File "D:\Development\STM32\cubemxprojects\476EncTest\ideScripts\update.py", line 56, in <module>
makefileData = makefile.getMakefileData(makeExePath, gccExePath)
File "D:\Development\STM32\cubemxprojects\476EncTest\ideScripts\updateMakefile.py", line 93, in getMakefileData
projectName = self.getMakefileVariable(makeExePath, gccExePath, self.mkfStr.projectName)[0]
File " D:\Development\STM32\cubemxprojects\476EncTest\ideScripts\updateMakefile.py", line 366, in getMakefileVariable
proc = Popen(arguments, stdout=PIPE)
File "C:\Users\omis2\AppData\Local\Programs\Python\Python38- 32\lib\subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\omis2\AppData\Local\Programs\Python\Python38- 32\lib\subprocess.py", line 1307, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
PermissionError: [WinError 5] Access is denied
Thanks for any help

GCF Node10 deploy failed: "Function failed on loading user code. Error message: Provided code is not a loadable module."

After making some adjustments (a rather big PR) that basically add a Google Cloud Storage connection to this function, deployment started to fail. Unfortunately, the error message is pretty unclear and therefore doesn't give me much of a hint. Locally and in tests things run fine, so I'm a bit lost about which direction to search. The logs don't provide insights either.
I can't easily share the changes in the PR, unfortunately. Worst case, I'll revert and go piece by piece from there, but that's a tedious process.
The service account used in the deployment has been given (write) access to the bucket, but I don't think this error hints at permissions; if it did, I would expect the error message to be more insightful.
Command used:
gcloud beta functions deploy eventStreamPostEvent --runtime nodejs10 --memory 128MB --trigger-http --source ./dist --service-account $DEPLOY_SERVICE_ACCOUNT --verbosity debug
Deploying function (may take a while - up to 2 minutes)...
..............................failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: Provided code is not a loadable module.
Could not load the function, shutting down.
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 985, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 795, in Run
resources = command_instance.Run(args)
File "/usr/lib/google-cloud-sdk/lib/surface/functions/deploy.py", line 231, in Run
enable_vpc_connector=True)
File "/usr/lib/google-cloud-sdk/lib/surface/functions/deploy.py", line 175, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 300, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 356, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=3, message=Function failed on loading user code. Error message: Provided code is not a loadable module.
Could not load the function, shutting down.
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function failed on loading user code. Error message: Provided code is not a loadable module.
Could not load the function, shutting down.
I hope someone knows what is causing this error.
Stackdriver logs show me nothing more than:
protoPayload: {
  #type: "type.googleapis.com/google.cloud.audit.AuditLog"
  authenticationInfo: {…}
  methodName: "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction"
  requestMetadata: {
    destinationAttributes: {…}
    requestAttributes: {…}
  }
  resourceName: "projects/<projectName>/locations/europe-west1/functions/eventStreamPostEvent"
  serviceName: "cloudfunctions.googleapis.com"
  status: {
    code: 3
    message: "INVALID_ARGUMENT"
  }
}
I had the same issue, and it seems the message comes from here.
When you have multiple .js files with some subfolders in the root folder of your function, by default (without any further specification) you need to name the entry module index.js or function.js.
I found that by deploying the function using the node8 runtime; its error messages are clearer.
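A quick sanity check is to list the folder you pass to --source and confirm the entry module sits at its top level (a sketch; ./dist is taken from the question's command):

ls ./dist
# expect index.js (or function.js) and package.json at the top level, not in a subfolder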
Usually (at least for me) the cause of OperationError: code=3 is an error in importing the modules you have defined.
Fixed this by:
deleting node_modules
rm -r .\node_modules\
optional: you can do npm i after deleting node_modules and test your function locally before deploying.
then deleting .gcloudignore and deploying as usual.
For me, the problem was caused by having installed one of my node_modules in the wrong directory (.., one level up). Make sure all of the node_modules you need are in the right place. This can easily happen if you have multiple functions in subfolders.
Your source code must contain an entry point function that has been correctly specified in your deployment, either via Cloud console or Cloud SDK.
Source: https://cloud.google.com/functions/docs/troubleshooting#entry-point
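If your exported function goes by a different name than the deployed function, you can also point the deployment at it explicitly; a hedged sketch based on the question's command (the --entry-point value must match the name your module exports):

gcloud beta functions deploy eventStreamPostEvent \
  --runtime nodejs10 \
  --memory 128MB \
  --trigger-http \
  --source ./dist \
  --entry-point eventStreamPostEvent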

QEMU simple backend tracing doesn't print anything

I'm trying to get a simple trace file from QEMU.
I followed the instructions in docs/tracing.txt with this command: qemu-system-x86_64 -m 2G -trace events=/tmp/events ../qemu/test.img
I'd like to get just a simple trace file.
I've got a trace-<pid> file; however, it doesn't have anything in it.
Build with the 'simple' trace backend:
./configure --enable-trace-backends=simple
make
Create a file with the events you want to trace:
echo bdrv_aio_readv > /tmp/events
echo bdrv_aio_writev >> /tmp/events
Run the virtual machine to produce a trace file:
qemu -trace events=/tmp/events ... # your normal QEMU invocation
Pretty-print the binary trace file:
./scripts/simpletrace.py trace-events trace-* # Override * with QEMU <pid>
I followed these instructions.
Please, can somebody give me some advice for this situation?
Thanks!
I got the same problem by following the same document:
https://fossies.org/linux/qemu/docs/tracing.txt
I got nothing because bdrv_aio_readv and bdrv_aio_writev were not enabled by default, at least in the version I compiled. You need to open the trace-events file in the source directory and look for lines without disable; e.g., I used:
echo "load_file" > /tmp/events
Then start QEMU.
After the guest has started, I run:
./scripts/simpletrace.py trace-events trace-<pid>
I got:
load_file 1474.156 pid=5249 name=kvmvapic.bin path=qemu-2.8.0-rc0/pc-bios/kvmvapic.bin
load_file 22437.571 pid=5249 name=vgabios-stdvga.bin path=qemu-2.8.0-rc0/pc-bios/vgabios-stdvga.bin
load_file 10034.465 pid=5249 name=efi-e1000.rom
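To find other events that are enabled by default, you can search the trace-events file in the source tree for lines without the disable property (a sketch; run it from the QEMU source root):

grep -v 'disable' trace-events | grep -vE '^(#|$)' | head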
You can also add -monitor stdio to the QEMU command line; after it has started, you can run the following command in the QEMU monitor:
(qemu) info trace-events
load_file : state 1
vm_state_notify : state 1
balloon_event : state 0
cpu_out : state 0
cpu_in : state 0
A state of 1 means the event is enabled.
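Events can also be toggled at runtime from that same monitor prompt, e.g. (a sketch; the event still has to be compiled in, i.e. not marked disable at build time):

(qemu) trace-event balloon_event on
(qemu) info trace-events
balloon_event : state 1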
Modify the trace-events file in the source tree
As of v2.9.0 you also have to remove the disable from the lines you want to enable there, e.g.:
-disable exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
+exec_tb(void *tb, uintptr_t pc) "tb:%p pc=0x%"PRIxPTR
and recompile.
Here is a minimal fully automated runnable example that boots Linux and produces traces: https://github.com/cirosantilli/linux-kernel-module-cheat
For example, I used the traces to count how many boot instructions Linux has: https://github.com/cirosantilli/linux-kernel-module-cheat/blob/c7bbc6029af7f4fab0a23a380d1607df0b2a3701/count-boot-instructions.md
I have a lightly patched QEMU as a submodule, the key commit is: https://github.com/cirosantilli/qemu/commit/e583d175e4cdfb12b4812a259e45c679743b32ad

Mercurial largefiles not working on Windows Server 2008

I'm trying to get the largefiles extension working on a mercurial server under Windows Server 2008 / IIS 7.5 with the hgweb.wsgi script.
When I clone a repo with largefiles locally (but using https://domain/, not a file system path) everything gets cloned fine, but when I try it on a different machine I get abort: remotestore: largefile XXXXX is missing
Here's the verbose output:
requesting all changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 177 changes to 177 files
calling hook changegroup.lfiles: <function checkrequireslfiles at 0x0000000002E00358>
updating to branch default
resolving manifests
getting .hglf/path/to.file
...
177 files updated, 0 files merged, 0 files removed, 0 files unresolved
getting changed largefiles
getting path/to.file:c0c81df934cd72ca980dd156984fa15987e3881d
abort: remotestore: largefile c0c81df934cd72ca980dd156984fa15987e3881d is missing
Both machines have the extension working. I've tried disabling the firewall but that didn't help. Do I have to do anything to set up the extension besides adding it to mercurial.ini?
Edit: If I delete the files from the server's AppData\Local\largefiles\ directory, I get the same error when cloning on the server, unless I use a filesystem path to clone, in which case the files are added back to AppData\Local\largefiles\.
Edit 2: Here's the debug output and traceback:
177 files updated, 0 files merged, 0 files removed, 0 files unresolved
getting changed largefiles
using http://domain
sending capabilities command
getting largefiles: 0/75 lfile (0.00%)
getting path/to.file:64f2c341fb3b1adc7caec0dc9c51a97e51ca6034
sending statlfile command
Traceback (most recent call last):
File "mercurial\dispatch.pyo", line 87, in _runcatch
File "mercurial\dispatch.pyo", line 685, in _dispatch
File "mercurial\dispatch.pyo", line 467, in runcommand
File "mercurial\dispatch.pyo", line 775, in _runcommand
File "mercurial\dispatch.pyo", line 746, in checkargs
File "mercurial\dispatch.pyo", line 682, in <lambda>
File "mercurial\util.pyo", line 463, in check
File "mercurial\commands.pyo", line 1167, in clone
File "mercurial\hg.pyo", line 400, in clone
File "mercurial\extensions.pyo", line 184, in wrap
File "hgext\largefiles\overrides.pyo", line 629, in hgupdate
File "hgext\largefiles\lfcommands.pyo", line 416, in updatelfiles
File "hgext\largefiles\lfcommands.pyo", line 398, in cachelfiles
File "hgext\largefiles\basestore.pyo", line 80, in get
File "hgext\largefiles\remotestore.pyo", line 56, in _getfile
Abort: remotestore: largefile 64f2c341fb3b1adc7caec0dc9c51a97e51ca6034 is missing
The _getfile function throws an exception because the statlfile command returns that the file wasn't found.
I've never used python myself, so I don't know what I'm doing while trying to debug this :D
AFAIK the statlfile command gets executed on the server so I can't debug it from my local machine. I've tried running python -m win32traceutil on the server, but it doesn't show anything. I also tried setting accesslog and errorlog in the server's mercurial config file, but it doesn't generate them.
I run hg through the hgweb.wsgi script, and I have no idea if/how I can get into the python debugger using that, but if I could get the debugger running on the server I could narrow down the problem...
Finally figured it out: the extension tries to write temporary files to %windir%\System32\config\systemprofile\AppData\Local, which was causing permission errors. The call was wrapped in a try/except block that ended up returning the "file not found" error.
I'm just posting this for anyone else coming into the thread from a search.
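If you hit the same thing, one possible fix is to grant the IIS worker identity write access to that profile directory; a sketch (run from an elevated prompt; "IIS AppPool\DefaultAppPool" is a placeholder for whatever identity your application pool actually runs as):

icacls "%windir%\System32\config\systemprofile\AppData\Local" /grant "IIS AppPool\DefaultAppPool":(OI)(CI)M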
There's currently an issue using the largefiles extension in the mercurial python module when hosted via IIS. See this post if you're encountering issues pushing large changesets (or large files) to IIS via TortoiseHg.
The problem ultimately turns out to be a bug in SSL processing introduced in Python 2.7.3 (probably explaining why there are so many unresolved posts of people looking into problems with Mercurial). Rolling back to Python 2.7.2 let me get a little further ahead (blocked at 30 MB pushes instead of 15 MB), but to properly solve the problem I had to install the IISCrypto utility to completely disable transfers over SSLv2.
