Unknown error while debugging Rust application in VS Code - rust

I am trying to debug a fairly large Rust project in VS Code.
The launch.json has this:
{
    "type": "lldb",
    "request": "launch",
    "name": "Debug executable 'rpfm_ui'",
    "cargo": {
        "args": [
            "build",
            "--bin=rpfm_ui",
            "--package=rpfm_ui"
        ],
        "filter": {
            "name": "rpfm_ui",
            "kind": "bin"
        }
    },
    "args": [],
    "cwd": "${workspaceFolder}"
},
But when I try to run the application, I get the following:
Finished dev [unoptimized + debuginfo] target(s) in 9.53s
Raw artifacts:
{
fileName: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
name: 'rpfm_ui',
kind: 'bin'
}
Filtered artifacts:
{
fileName: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
name: 'rpfm_ui',
kind: 'bin'
}
configuration: {
type: 'lldb',
request: 'launch',
name: "Debug executable 'rpfm_ui'",
args: [],
cwd: '${workspaceFolder}',
relativePathBase: 'c:\\Users\\ole_k\\Desktop\\rpfm-master',
program: 'c:\\Users\\ole_k\\Desktop\\rpfm-master\\target\\debug\\rpfm_ui.exe',
sourceLanguages: [ 'rust' ]
}
Listening on port 49771
[adapter\src\terminal.rs:99] FreeConsole() = 1
[adapter\src\terminal.rs:100] AttachConsole(pid) = 1
[adapter\src\terminal.rs:104] FreeConsole() = 1
[2020-06-27T20:43:04Z ERROR codelldb::debug_session] process launch failed: unknown error
Debug adapter exit code=0, signal=null.
I have also seen this:
PS C:\Users\ole_k\Desktop\rpfm-master> & 'c:\Users\ole_k.vscode\extensions\vadimcn.vscode-lldb-1.5.3\adapter\codelldb.exe' 'terminal-agent' '--port=49628'
Error: Os { code: 10061, kind: ConnectionRefused, message: "No connection could be made because the target machine actively refused it." }
[2020-06-27T20:29:08Z ERROR codelldb::debug_session] process launch failed: unknown error
If I run the application from the terminal inside vs code (cargo run --bin rpfm_ui) it works.
There are some external dependencies which are in folders outside of the root folder.
I can debug other projects in the solution, which share a lot of the code but not the external dependencies.
I am running as administrator.
Any ideas on how to resolve the issue?
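One thing that might be worth trying, given that the second log shows codelldb's terminal-agent being refused on its port: the CodeLLDB extension has a terminal launch property that controls where the debuggee's I/O goes, and pointing it at the debug console should avoid spawning that terminal agent at all. This is only a sketch of that workaround, not a confirmed fix, and it assumes your CodeLLDB version supports the terminal attribute:
{
    "type": "lldb",
    "request": "launch",
    "name": "Debug executable 'rpfm_ui' (debug console)",
    "cargo": {
        "args": ["build", "--bin=rpfm_ui", "--package=rpfm_ui"],
        "filter": { "name": "rpfm_ui", "kind": "bin" }
    },
    "args": [],
    "cwd": "${workspaceFolder}",
    // Assumption: "console" keeps program I/O in the debug console instead of
    // launching codelldb's terminal agent, which is the part failing above.
    "terminal": "console"
}
If that does not help, it at least narrows the failure down to the launch itself rather than the terminal plumbing.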

Related

nuxt3 + @nuxt/content ... ERROR Failed to parse source for import analysis because the content contains invalid JS syntax

I would like to use @nuxt/content for some content from a database. When starting the development environment with npm run dev, I get the following message:
ERROR Failed to parse source for import analysis because the content contains invalid JS syntax. If you are using JSX, make sure to name the file with the .jsx or .tsx extension.
and
[Vue warn]: Failed to resolve component: nuxt-content
My config:
// nuxt.config.ts
export default defineNuxtConfig({
    modules: [
        '@nuxtjs/tailwindcss',
        '@pinia/nuxt',
        '@nuxt/content',
    ],
    imports: {
        dirs: ['stores'],
    },
    buildModules: [
        '@nuxt/postcss8',
        '@pinia/nuxt',
    ],
    build: {
        postcss: {
            plugins: {
                tailwindcss: {},
                autoprefixer: {},
            },
        },
    },
    css: [
        '@/assets/style/main.scss',
    ],
})
When I remove the @nuxt/content module, the error message disappears as well.
By the way, I use Vite in this project.
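Not from the original post, but the unresolved nuxt-content component hints at a v1/v2 mix-up: in Nuxt Content v2, the line used with Nuxt 3 and Vite, the old <nuxt-content> component was replaced by <ContentDoc> and <ContentRenderer>. A minimal sketch of a catch-all page using the v2 component (the file name is illustrative, not taken from the project above):
<!-- pages/[...slug].vue (hypothetical file) -->
<template>
  <main>
    <!-- ContentDoc fetches and renders the document matching the current route -->
    <ContentDoc />
  </main>
</template>
That addresses the component warning; the import-analysis error may have a separate cause.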

Log all failed attempts in TestCafe quarantine mode?

I have quarantine mode enabled in my TestCafe configuration:
"ci-e2e": {
"browsers": [
"chrome:headless"
],
"debugOnFail": false,
"src": "./tests/e2e/*.test.ts",
"concurrency": 1,
"quarantineMode": true,
"reporters": [
{
"name": "nunit3",
"output": "results/e2e/testResults.xml"
},
{
"name": "spec"
}
],
"screenshots": {
"takeOnFails": true,
"path": "results/ui/screenshots",
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Screenshot-${QUARANTINE_ATTEMPT}.png"
},
"video": {
"path": "results/ui/video",
"failedOnly": true,
"pathPattern": "${DATE}_${TIME}/${FIXTURE}/${TEST}/Video-${QUARANTINE_ATTEMPT}"
}
},
Now when an attempt fails, I get an entry in the log (the NUnit XML log file) with information about the failed runs but only one stack trace. I do get a screenshot for each failed run.
<failure>
    <message>
        <![CDATA[ ❌ AssertionError: ... Run 1: Failed Run 2: Failed Run 3: Failed ]]>
    </message>
    <stack-trace>
        here we have stack-trace for only one failed run
    </stack-trace>
</failure>
I want a log entry with a stack trace for each failed run of each failed test. Is it possible to configure TestCafe this way? If not, what do I need to do?
There is a mistake in the config file: the option for reporters should be named reporter, not reporters. This means TestCafe doesn't use these reporters at all, and you may just be looking at an outdated results file.
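Assuming the rest of the configuration stays the same, the corrected block would look roughly like this (note the singular reporter key):
"reporter": [
    {
        "name": "nunit3",
        "output": "results/e2e/testResults.xml"
    },
    {
        "name": "spec"
    }
],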

Can't deploy Angular Universal to Firebase Functions

Whenever I try to deploy my Angular Universal app, the hosting gets deployed without issue, but I'm faced with the following error whenever I run ng deploy:
Functions did not deploy properly.
Everything works without error when I run npm run build:ssr, though, so I'm not sure what is causing this error. Here is my firebase.json file:
{
    "hosting": [
        {
            "target": "iquench-website",
            "public": "dist\\dist\\browser",
            "ignore": [
                "**/.*"
            ],
            "headers": [
                {
                    "source": "*.[0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f][0-9a-f].+(css|js)",
                    "headers": [
                        {
                            "key": "Cache-Control",
                            "value": "public,max-age=31536000,immutable"
                        }
                    ]
                }
            ],
            "rewrites": [
                {
                    "source": "**",
                    "function": "ssr"
                }
            ]
        }
    ],
    "functions": {
        "source": "dist"
    }
}
and my .firebaserc:
{
    "projects": {
        "default": "PROJECT_NAME"
    },
    "targets": {
        "PROJECT_NAME": {
            "hosting": {
                "iquench-website": [
                    "PROJECT_NAME"
                ]
            }
        }
    }
}
Here are the logs when deploying:
=== Deploying to 'PROJECT_NAME'...
i deploying functions, hosting
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
+ functions: required API cloudfunctions.googleapis.com is enabled
+ functions: required API cloudbuild.googleapis.com is enabled
i functions: preparing dist directory for uploading...
i functions: packaged dist (5.72 MB) for uploading
+ functions: dist folder uploaded successfully
i hosting[PROJECT_NAME]: beginning deploy...
i hosting[PROJECT_NAME]: found 115 files in dist\dist\browser
+ hosting[PROJECT_NAME]: file upload complete
i functions: current functions in project: backupFirestore(us-central1), deleteUser(us-central1), ssr(us-central1)
i functions: uploading functions in project: ssr(us-central1)
i functions: updating Node.js 10 function ssr(us-central1)...
+ scheduler: required API cloudscheduler.googleapis.com is enabled
! functions[ssr(us-central1)]: Deployment error.
Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
Functions deploy had errors with the following functions:
ssr
To try redeploying those functions, run:
firebase deploy --only "functions:ssr"
To continue deploying other features (such as database), run:
firebase deploy --except functions
Functions did not deploy properly.
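As a practical next step (not part of the original post): the deploy output says the function failed while loading user code and points at the function logs, and the Firebase CLI can pull those for just the ssr function:
firebase functions:log --only ssr
A common culprit for this class of error is the functions source directory (dist here) missing a package.json whose main points at the compiled server bundle, or missing its runtime dependencies, but that is an assumption to verify against the logs rather than something shown above.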

Debugging Python in a Docker container using debugpy and VS Code results in timeout/connection refused

I'm trying to set up native debugging for a Python script running in Docker for Visual Studio Code using debugpy. Ideally I'd like to just hit F5 and be on my way (including a build phase if needed). Currently I'm bouncing between a timeout caused by the debugpy.listen(5678) call in the script, surfaced in the VS Code editor as Exception has occurred: RuntimeError: timed out waiting for adapter to connect, and a connection-refused error.
I created a launch.json from the documentation provided by Microsoft:
launch.json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach to Integration (test)",
            "type": "python",
            "request": "attach",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/test",
                    "remoteRoot": "/test"
                }
            ],
            "port": 5678,
            "host": "127.0.0.1"
        }
    ]
}
Building the image looks like this so far:
Dockerfile
FROM python:3.7-slim-buster as base
RUN apt-get -y update; apt-get install -y vim git cmake
WORKDIR /
RUN mkdir .cache src in out config log
COPY requirements.txt .
RUN pip install -r requirements.txt; rm requirements.txt
#! TODO: config folder needs to be a mapped volume so they can change creds without rebuild
WORKDIR /src
COPY test ../test
COPY config ../config
COPY src/ .
#? D E B U G I M A G E
FROM base as debug
RUN pip install debugpy
CMD python -m debugpy --listen 0.0.0.0:5678 ../test/edu.employer._test.py
#! P R O D U C T I O N I M A G E
# FROM base as prod
# CMD [ "python", "/test/edu.employer._test.py" ]
Some examples I found try to simplify things with a docker-compose.yaml, but I'm unsure if I need one at this point.
docker-compose.yaml
services:
  tester:
    container_name: tester
    image: employer/test:1.0.0
    build:
      context: .
      target: debug
      dockerfile: test/edu.employer._test.Dockerfile
    volumes:
      - ./out:/out
      - ./.cache:/.cache
      - ./log:/log
    ports:
      - 5678:5678
which I based off the CLI command: docker run -it -v $(pwd)/out:/out -v $(pwd)/.cache:/.cache -v $(pwd)/log:/log employer/test:1.0.0
The "critical" parts of my script just listen and wait for the debugger:
from __future__ import absolute_import
# Standard
import os
import sys
# 3rd Party
import debugpy
debugpy.listen(5678)
debugpy.wait_for_client()
# 1st Party. NOTE: All source files are in /src, so we can add that path here for testing
# and batch import all integrations files. Not very clean however
sys.path.insert(0, os.path.join('/', 'src'))
import integrations as ints
You have to configure the debugger with debugpy.listen(("0.0.0.0", 5678)).
This happens because, by default, debugpy listens on localhost only. Since your Docker container is effectively another host from VS Code's point of view, you have to bind to 0.0.0.0.
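Applied to the script from the question, only the listen call changes (the host tuple is the fix; the port stays 5678 to match the published port):
import debugpy

# Bind on all interfaces so VS Code on the host can reach the container;
# the default localhost binding only accepts connections from inside it.
debugpy.listen(("0.0.0.0", 5678))
debugpy.wait_for_client()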
It turns out I needed to create a tasks.json file and provide the details on running the image:
tasks.json
{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "2.0.0",
    "tasks": [
        {
            "type": "docker-run",
            "label": "docker-run: debug",
            "dependsOn": ["docker-build"],
            "dockerRun": {
                "image": "employer/test:1.0.0"
                // "env": {
                //     "FLASK_APP": "path_to/flask_entry_point.py"
                // }
            },
            "python": {
                "args": [],
                "file": "/test/edu.employer._test.py"
            }
        }
    ]
}
and define a preLaunchTask:
{
    "name": "Docker: Python",
    "type": "docker",
    "request": "launch",
    "preLaunchTask": "docker-run: debug",
    "python": {
        "pathMappings": [
            {
                "localRoot": "${workspaceFolder}/test",
                "remoteRoot": "/test"
            }
        ],
        // "projectType": "django"
    }
}

VSCode/Win10 - Cannot find runtime 'node' on PATH

To reproduce:
Use my Windows 10 PC to open VSCode
Create a Hello World js file
Launch as node application from the VSCode Debugger view
Produces error: Cannot find runtime 'node' on PATH
Given:
My launch.json:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Build",
            "type": "node",
            "request": "launch",
            "program": "${workspaceRoot}/build.js",
            "stopOnEntry": false,
            "args": [],
            "cwd": "${workspaceRoot}",
            "env": {
                "NODE_ENV": "development"
            },
            "console": "internalConsole",
            "sourceMaps": false
        }
    ]
}
I have moved the nodejs entry to within the first 2048 characters of my PATH variable.
I have restarted my computer.
I have restarted VSCode.
From both CMD and the Integrated Terminal (sysnative/cmd):
PATH contains D:/Program Files/nodejs
where node returns D:/Program Files/nodejs/node.exe
From the Google Developer Console:
process.env.PATH contains D:/Program Files/nodejs
Running the following code produces D:/Program Files/nodejs/node.exe:
(function () {
    const cp = require('child_process');
    const env = Object.assign({}, process.env, {
        ATOM_SHELL_INTERNAL_RUN_AS_NODE: '1',
        ELECTRON_NO_ATTACH_CONSOLE: '1'
    });
    const result = cp.spawnSync('where', ['node'], {
        detached: true,
        stdio: ['ignore', 'pipe', process.stderr],
        env,
        encoding: 'utf8'
    });
    console.log(result.stdout);
})();
Additional
Also, .NET Core applications behave identically: the terminal and cmd work, and dotnet is on the PATH, but launching from the VS Code debugger view fails to find the CLI tools on the PATH.
Attaching to an existing dotnet process produces a different error:
Please set up the launch configuration file for your application. command 'csharp.listProcess' not found
Not sure if related, but F12 for jumping to declaration is unresponsive.
Update
I've been doing some debugging, and it looks like the following code produces Command Failed: echo test:
require('child_process')
.execSync('echo test', {cwd: workspaceRoot, env: process.env});
Under the hood, it winds up calling
require('child_process')
.spawnSync('cmd', ['/s', '/c', '"echo test"'], {cwd: workspaceRoot, env: process.env});
The command it builds under the hood is C:\Windows\System32\cmd.exe /s /c "echo test" which I tested and does indeed print test.
The spawnSync call reveals that the exit code was 3221225477 (0xC0000005, an access violation).
In fact, every time I use child_process to execute something via cmd, the exit code is 3221225477. I can get spawnSync to start processes other than cmd, though. This works:
require('child_process')
.spawnSync('node', ['build.js'], {cwd: workspaceRoot, env: process.env});
