How to change the default number of lines visible in debug mode? - node.js

It shows only 2 lines before and after the current statement. I would like to change the number of lines shown each time I press n (next) or c (continue). Quoting the documentation, the list(n) function:
list(5) - List scripts source code with 5 line context (5 lines before
and after)
Debug Info - NodeJS
For example executing:
$: node debug app.js
break in c:\nodejs\app.js:3
1
2
> 3 var fs= require('fs');
4
5 console.log('Hello World!');
debug>
I want to change the default number of context lines shown each time I step through statements. Alternatively, what would be the best way to do this?
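For reference, the context can be widened manually at the prompt after each step (which is exactly the repetition I'd like to avoid), e.g.:
debug> list(10)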

Related

SED style Multi address in Python?

I have an app that parses multiple Cisco show tech files. These files contain the output of multiple router commands in a structured way; let me show you a snippet of a show tech output:
`show clock`
20:20:50.771 UTC Wed Sep 07 2022
Time source is NTP
`show callhome`
callhome disabled
Callhome Information:
<SNIPET>
`show module`
Mod Ports Module-Type Model Status
--- ----- ------------------------------------- --------------------- ---------
1 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
2 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
3 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
4 52 16x10G + 32x10/25G + 4x100G Module N9K-X96136YC-R ok
21 0 Fabric Module N9K-C9504-FM-R ok
22 0 Fabric Module N9K-C9504-FM-R ok
23 0 Fabric Module N9K-C9504-FM-R ok
<SNIPET>
My app currently uses both SED and Python scripts to parse these files. I use SED to parse the show tech file looking for a specific command's output; once I find it, I stop SED. This way I don't need to read the whole file (these can be very big files). This is a snippet of my SED script:
sed -E -n '/`show running-config`|`show running`|`show running config`/{
p
:loop
n
p
/`show/q
b loop
}' $1/$file
As you can see, I am using a multi-address range in SED. My question specifically is: how can I achieve something similar in Python? I have tried multiple combinations of the DOTALL and MULTILINE flags, but I can't get the result I'm expecting. For example, I can get a match for the command I'm looking for, but the Python regex won't stop until the end of the file after the first match.
I am looking for something like this
sed -n '/`show clock`/,/`show/p'
I would like the regex match to stop parsing the file and print the results immediately after seeing `show again. I hope that makes sense, and thank you all for reading and for your help.
You can use nested loops.
import re

def process_file(filename):
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                print(line)
                for line1 in f:
                    print(line1)
                    if re.search(r'`show', line1):
                        return
The inner for loop will start from the next line after the one processed by the outer loop.
You can also do it with a single loop using a flag variable.
import re

def process_file(filename):
    in_show = False
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                in_show = True
                print(line)
                continue  # don't treat the opening marker as the terminating `show
            if in_show:
                print(line)
                if re.search(r'`show', line):
                    return
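If you prefer a single regular expression over explicit loops, a non-greedy pattern combined with re.DOTALL stops at the next `show marker instead of running on to the end of the file, which seems to be the behaviour you were missing. A minimal sketch (note that, unlike the sed approach, it reads the whole file into memory; extract_block is just an illustrative name):
import re

# from the opening marker up to (and including) the next `show
BLOCK = re.compile(
    r'(?:`show running-config`|`show running`|`show running config`).*?`show',
    re.DOTALL,  # let . match newlines; .*? keeps the match non-greedy
)

def extract_block(filename):
    with open(filename) as f:
        text = f.read()  # reads the entire file, unlike sed's early quit
    m = BLOCK.search(text)
    return m.group(0) if m else None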

rampUser method is getting stuck in gatling 3.3

I am having issues using the rampUser() method in my Gatling script. The run gets stuck after the following entry, which shows it had made it halfway through.
Version : 3.3
================================================================================
2019-12-18 09:51:44 45s elapsed
---- Requests ------------------------------------------------------------------
> Global (OK=2 KO=0 )
> graphql / request_0 (OK=1 KO=0 )
> rest / request_0 (OK=1 KO=0 )
---- xxxSimulation ---------------------------------------------------
[##################################### ] 50%
waiting: 1 / active: 0 / done: 1
================================================================================
I am seeing the following in the log, which is repeated forever while the log size keeps growing:
09:35:46.495 [GatlingSystem-akka.actor.default-dispatcher-2] DEBUG io.gatling.core.controller.inject.open.OpenWorkload - Injecting 0 users in scenario xxSimulation, continue=true
09:35:47.494 [GatlingSystem-akka.actor.default-dispatcher-6] DEBUG io.gatling.core.controller.inject.open.OpenWorkload - Injecting 0 users in scenario xxSimulation, continue=true
The above issue happens only with rampUser and does not happen with:
atOnceUsers()
rampUsersPerSec()
rampConcurrentUsers()
constantConcurrentUsers()
constantUsersPerSec()
incrementUsersPerSec()
Is there a way to mimic rampUser() in some other way, or is there a solution for this?
My code is very minimal:
setUp(
  scenarioBuilder.inject(
    rampUsers(2).during(1 minutes)
  )
).protocols(protocolBuilder)
I have been stuck on this for some time; my earlier post with more information can be found here.
Can any of the gatling experts help me on this?
Thanks for looking into it.
It seems you have slightly incorrect syntax for rampUsers. You should try removing the . before during.
I have this code in my own script and it works fine:
setUp(userScenario.inject(
  // atOnceUsers(4),
  rampUsers(24) during (1 seconds))
).protocols(httpProtocol)
Also, the example in the Gatling documentation (open model) is written without a dot:
setUp(
  scn.inject(
    nothingFor(4 seconds), // 1
    atOnceUsers(10), // 2
    rampUsers(10) during (5 seconds), // HERE
    constantUsersPerSec(20) during (15 seconds), // 4
    constantUsersPerSec(20) during (15 seconds) randomized, // 5
    rampUsersPerSec(10) to 20 during (10 minutes), // 6
    rampUsersPerSec(10) to 20 during (10 minutes) randomized, // 7
    heavisideUsers(1000) during (20 seconds) // 8
  ).protocols(httpProtocol)
)
My guess is that the syntax can't be parsed, so 0 is substituted instead. (Here is an example of rounding; not applicable here, but as a reference: gatling-user-injection-constantuserspersec)
Also, you mentioned that the other methods work; could you paste the working code as well?

How to profile a vim plugin written in python

Vim offers the :profile command, which is really handy. But it is limited to Vim script -- when it comes to plugins implemented in python it isn't that helpful.
Currently I'm trying to understand what is causing a large delay in Denite. Since it doesn't happen in vanilla Vim, but only under some specific conditions that I'm not sure how to reproduce, I couldn't find which setting/plugin is interfering.
So I turned to profiling, and this is what I got from :profile:
FUNCTION denite#vim#_start()
Defined: ~/.vim/bundle/denite.nvim/autoload/denite/vim.vim line 33
Called 1 time
Total time: 5.343388
Self time: 4.571928
count total (s) self (s)
1 0.000006 python3 << EOF
def _temporary_scope():
    nvim = denite.rplugin.Neovim(vim)
    try:
        buffer_name = nvim.eval('a:context')['buffer_name']
        if nvim.eval('a:context')['buffer_name'] not in denite__uis:
            denite__uis[buffer_name] = denite.ui.default.Default(nvim)
        denite__uis[buffer_name].start(
            denite.rplugin.reform_bytes(nvim.eval('a:sources')),
            denite.rplugin.reform_bytes(nvim.eval('a:context')),
        )
    except Exception as e:
        import traceback
        for line in traceback.format_exc().splitlines():
            denite.util.error(nvim, line)
        denite.util.error(nvim, 'Please execute :messages command.')
_temporary_scope()
if _temporary_scope in dir():
    del _temporary_scope
EOF
1 0.000017 return []
(...)
FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
1 5.446612 0.010563 denite#helper#call_denite()
1 5.396337 0.000189 denite#start()
1 5.396148 0.000195 <SNR>237_start()
1 5.343388 4.571928 denite#vim#_start()
(...)
I tried to use the python profiler directly by wrapping the main line:
import cProfile
cProfile.run(_temporary_scope(), '/path/to/log/file')
, but no luck -- just a bunch of errors from cProfile. Perhaps it is because of the way Python is started from Vim, as it is hinted here that it only works on the main thread.
I guess there should be an easier way of doing this.
The python profiler does work by enclosing the whole code,
cProfile.run("""
(...)
""", '/path/to/log/file')
, but it is not that helpful. Maybe it is all that is possible.
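Another option, left here only as a sketch I have not verified inside Denite, is to drive the profiler object directly around the single call instead of handing a string to cProfile.run(). This assumes it sits inside the same python3 block, right where _temporary_scope() is invoked, and reuses the log-file placeholder from above:
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()              # start collecting stats
_temporary_scope()             # the call we want to profile
profiler.disable()             # stop collecting
profiler.dump_stats('/path/to/log/file')

# inspect the 20 most expensive calls by cumulative time
pstats.Stats('/path/to/log/file').sort_stats('cumulative').print_stats(20)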

How do I search and replace in vi to eliminate a random number of random characters preceding a known string?

I have text files which look like this:
0 298047498 /directory1/app/20170417/file1.blob 0 f191
e 6569844 /directory1/app/20170417/file2.blob 0 f191
344 /directory1/app/20170417/file3.blob 0
8946 /directory1/app/20170417/file4.blob 0
196496 /directory1/app/20170417/file5.blob 0
9 182340752 /directory1/app/20170417/file6.blob 0 f191
68802 /directory1/app/20170417/file7.blob 0
I want to remove everything prior to the first / and everything after the file extension.
Results should look like this:
/directory1/app/20170417/file1.blob
/directory1/app/20170417/file2.blob
/directory1/app/20170417/file3.blob
Is there a way to do this using vi search and replace?
This type of question may be better placed here: https://vi.stackexchange.com/
But for now:
You can, e.g., use a simple vim macro in which you collect all the keystrokes you need to edit one line, and then repeat this macro as many times as you need.
Here are the keystrokes for one line:
dt/WD
d = delete..
t = ..till the first "/"
W = [shift]+[w] jumps to the next Word (after the "file-location-string")
D = [shift]+[d] deletes till the end of the current line
If you want to record this as a macro, do the following, with the keystrokes from above in between, like this:
qmdt/WD[home][down]q
qm = start recording a macro into register "m"
... keystrokes from above
[home][down] = the [home] key followed by the [arrow down] key, to move to the next line (for convenience)
q = end the macro recording
Now execute that macro with:
@m
And if you added the [down] key, you can do something like:
7@m
which runs your macro 7 times, once for each of your 7 lines.
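Since the question asks specifically about search and replace: a single substitute over all lines should also do it. A sketch, assuming the paths never contain whitespace (so \S covers the whole filename):
:%s#^[^/]*\(/\S*\).*#\1#
It discards everything before the first /, captures the path itself, and drops the rest of the line.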

Force lshosts command to return megabytes for "maxmem" and "maxswp" parameters

When I type "lshosts" I am given:
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
server1 X86_64 Intel_EM 60.0 12 191.9G 159.7G Yes ()
server2 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
server3 X86_64 Intel_EM 60.0 12 191.9G 191.2G Yes ()
I am trying to get lshosts to return maxmem and maxswp in megabytes, not gigabytes. I am trying to send Xilinx ISE jobs to my LSF cluster; however, the software expects integer megabyte values for maxmem and maxswp. From debugging, it appears that the software grabs these parameters using the lshosts command.
I have already checked in my lsf.conf file that:
LSF_UNIT_FOR_LIMITS=MB
I have tried searching the IBM Knowledge Base, but to no avail.
Do you use a specific command to specify maxmem and maxswp units within the lsf.conf, lsf.shared, or other config files?
Or does LSF force return the most practical unit?
Any way to override this?
LSF_UNIT_FOR_LIMITS should work, if you completely drained the cluster of all running, pending, and finished jobs. According to the docs, MB is the default, so I'm surprised.
That said, you can use something like this to transform the results:
$ cat to_mb.awk
function to_mb(s) {
    # the last character selects the unit: K -> 1, M -> 2, G -> 3
    e = index("KMG", substr(s, length(s)))
    # numeric part without the unit suffix (awk strings are 1-indexed)
    m = substr(s, 1, length(s) - 1)
    return m * 10^((e - 2) * 3)
}
{ print $1 " " to_mb($6) " " to_mb($7) }
$ lshosts | tail -n +2 | awk -f to_mb.awk
server1 191900 159700
server2 191900 191200
server3 191900 191200
The to_mb function should also handle 'K' or 'M' units, should those pop up.
If LSF_UNIT_FOR_LIMITS is defined in lsf.conf, lshosts will always print the output as a floating point number, and in some versions of LSF the parameter is defined as 'KB' in lsf.conf upon installation.
Try searching for any definitions of the parameter in lsf.conf and commenting them all out so that the parameter is left undefined; I think in that case it defaults to printing an integer number of megabytes.
(Don't ask me why it works this way)
