vstest.console.exe randomly misses some tests - vstest

I am setting up unit tests to be executed on a build server (Jenkins).
I have noticed that sometimes vstest decides to ignore some of my tests. They are not skipped or failed; they are simply ignored, as if they weren't there to begin with. If I run the test build again without any changes, the problem goes away.
There do not seem to be any vstest crashes in the console log. It looks like the test run completed normally, just with the wrong number of total tests.
I checked the test assembly with a decompiler and all the test methods are there.
TRX output file looks complete and valid.
In the trend graph below, red is failed, blue is passed. Notice the dip at #78? That's where I'm missing 3 out of 13 tests:
And here are vstest summaries from two consecutive builds, with no changes in between:
#78: Total tests: 10. Passed: 6. Failed: 4. Skipped: 0.
#79: Total tests: 13. Passed: 9. Failed: 4. Skipped: 0.
Did anybody encounter something like this? Should I consider ditching vstest in favor of a more reliable testing framework before I get too deeply entrenched in this?

Sorry about the bug. This is a regression in VS 2015. I fixed this bug in January and the fix will be included in VS 2015 Update 2. (I'm a software engineer at Microsoft.) The tests are actually run, but the results are sent back asynchronously, and sometimes we closed the test host process before all of the results had been sent back to the test engine process.
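Until Update 2 is available, one hedged workaround on the Jenkins side is a sanity check that compares the number of results in the TRX file against the test count you expect for the assembly. A minimal sketch in Python; the TRX path and the expected count are placeholders for your setup, not anything vstest provides:
import sys
import xml.etree.ElementTree as ET

# Placeholders for your build: adjust to your TRX location and known test count.
TRX_PATH = "TestResults/results.trx"
EXPECTED_TOTAL = 13

ns = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}
root = ET.parse(TRX_PATH).getroot()
results = root.findall("./t:Results/t:UnitTestResult", ns)
counters = root.find("./t:ResultSummary/t:Counters", ns)
total = int(counters.get("total")) if counters is not None else len(results)

print("TRX reports %d tests (%d result entries)" % (total, len(results)))
if total < EXPECTED_TOTAL:
    print("ERROR: expected %d tests; some results appear to have been dropped" % EXPECTED_TOTAL)
    sys.exit(1)
Failing the build this way at least turns a silently shrunken test run into a visible error.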

Related

Synchronization problem while executing Simulink FMU in ROS 2-Gazebo (TF_OLD_DATA warning)

I'm working on a co-simulation project between Simulink and Gazebo. The aim is to move a robot model in Gazebo along the trajectory coordinates computed in Simulink. I'm using MATLAB R2022a, ROS 2 Dashing and Gazebo 9.9.0 on a computer running Ubuntu 18.04.
The problem is that when launching the FMU with the fmi_adapter, I get the following message. It is tagged as [INFO], but it is actually messing up my whole project.
[fmi_adapter_node-1] [INFO] [fmi_adapter_node]: Simulation time 1652274762.959713 is greater than timer's time 1652274762.901340. Is your step size to large?
Note that the simulation time is higher than the timer's time. Even if I try to change the step size with the optional argument of the fmi_adapter_node, the same log appears with small differences in the times. I'm using the following commands:
ros2 launch fmi_adapter fmi_adapter_node.launch.py fmu_path:=FMI/Trajectory/RobotMARA_SimulinkFMU_v2.fmu # default step size: 0.2
ros2 launch fmi_adapter fmi_adapter_node.launch.py fmu_path:=FMI/Trajectory/RobotMARA_SimulinkFMU_v2.fmu _step_size:=0.001
As you would expect, the outputs of the FMU are the xyz coordinates of the robot trajectory at each time step. Since the fmi_adapter_node creates topics for both inputs and outputs, I'm reading the output xyz values by means of 3 subscribers with the following code. Those coordinates are then used to program the robot trajectories with the MoveIt Python API.
When I run the previous Python code, I get the following warning over and over, and the robot manipulator doesn't actually move.
[ WARN] [1652274804.119514250]: TF_OLD_DATA ignoring data from the past for frame motor6_link at time 870.266 according to authority unknown_publisher
Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained
The previous warning is explained here, but I'm not able to fix it. I've tried clicking Reset in RViz, but nothing changes. I've also tried the following without success:
ros2 param set /fmi_adapter_node use_sim_time true # it just sets the timer's time to 0
It seems that the clock is taking negative values, so there is a synchronization problem.
Any help is welcome.
The warning message is emitted by the FMIAdapterNode if the timer's period is only slightly greater than the simulation step size and the timer is preempted by other processes or threads.
I created an issue at https://github.com/boschresearch/fmi_adapter/issues/9 which explains this in more detail and lists two possible fixes. It would be great if you could contribute to this discussion.
I assume that the TF_OLD_DATA error is not related to the fmi_adapter. Looking at the code snippet on ROS Answers, I wondered whether the x, y, z values are re-published at all, given that the lines
pose.position.x = listener_x.value
pose.position.y = listener_y.value
pose.position.z = listener_z.value
are not inside a callback and are executed even before rospy.spin(), but maybe that's just because the snippet is truncated.
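For completeness, here is a minimal sketch of the restructuring being suggested, following the rospy-style snippet referenced above: update the pose inside the subscriber callbacks and act on it from a timer (or from the callbacks themselves) once rospy.spin() is processing messages. The topic names and message types are assumptions based on the question, not the actual topic names created by the fmi_adapter:
import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import Pose

pose = Pose()

# Keep the latest FMU outputs; these run for every incoming message.
def on_x(msg): pose.position.x = msg.data
def on_y(msg): pose.position.y = msg.data
def on_z(msg): pose.position.z = msg.data

def on_timer(event):
    # Use the current pose here, e.g. hand it to the MoveIt commander.
    rospy.loginfo("target pose: %.3f %.3f %.3f",
                  pose.position.x, pose.position.y, pose.position.z)

rospy.init_node("trajectory_follower")
rospy.Subscriber("/fmi_adapter_node/x", Float64, on_x)   # assumed topic names
rospy.Subscriber("/fmi_adapter_node/y", Float64, on_y)
rospy.Subscriber("/fmi_adapter_node/z", Float64, on_z)
rospy.Timer(rospy.Duration(0.2), on_timer)
rospy.spin()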

Is there a way of listing all tests in a Cargo project without running them?

Waiting for a large test suite to run is painful, so I collect the duration of each test from cargo test and use a simple heuristic to find failures fast (I order tests by probability of failure / last-run duration and run them in that order).
This is great, but it doesn't have a way of knowing about new tests. If I could list all tests, I could detect new tests and add them to the high risk group that gets run first.
You can run cargo test -- --list to list all tests and benchmarks. The output format is:
glonk: benchmark
hurz: test
1 test, 1 benchmark
You can suppress the summary line by passing the --format=terse flag.
Note that --list is a command line flag that is passed to the test binary itself, and not a Cargo flag. You can get a full list of flags accepted by the test binary using cargo test -- --help.
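If you feed that output back into the ordering tool, detecting new tests is just a set difference against the names you already have timings for. A rough sketch (in Python for illustration); the test-durations.json file and its format are assumptions about the existing tooling, not anything Cargo provides:
import json
import subprocess

# List tests without running them; filter on the ": test" suffix so stray
# output lines and benchmarks are ignored.
out = subprocess.run(
    ["cargo", "test", "--", "--list", "--format=terse"],
    capture_output=True, text=True, check=True,
).stdout
current = {line[: -len(": test")] for line in out.splitlines() if line.endswith(": test")}

try:
    with open("test-durations.json") as f:
        known = set(json.load(f))      # tests we already have timings for
except FileNotFoundError:
    known = set()

new_tests = current - known            # no history yet: schedule these first
print("new tests to prioritise:", sorted(new_tests))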

Under Chisel 3, it takes 10 minutes to compile the Verilator-generated C++ of Rocket Chip. Are there any ways to speed this up?

We are modifying Rocket Chip code. After each modification, we need to run the assembly programs, to be sure everything still runs correctly.
To do this, the steps are:
1) Run Chisel, to generate Verilog
2) Run the verilog through Verilator, to generate C++
3) Compile generated C++
4) Run tests
Step 3 takes about 10 times longer than it did under Chisel 2: about 10 minutes, which slows down development.
Is there any way to speed this up?
I have found that a non-trivial amount of build and run time is spent on not-really-synthesizable constructs that are used for verification support.
For example, I disable the TLMonitors through the Config options. You can find an example in the subsystem Configs.
class WithoutTLMonitors extends Config ((site, here, up) => {
  case MonitorsEnabled => false
})

OpenAI universe-starter-agent not training

I've been trying to run OpenAI's universe-starter-agent example found here. However, using an m4.16xlarge instance on AWS with 32 workers, the agent's training results don't improve after 0.6 hours (over 30 minutes), even though the GitHub page states that "the agent is able to solve the same environment in 10 minutes".
The progress was monitored through TensorBoard. Note that the example given on GitHub was for 16 workers, and it converges to an episode reward of 21 within 30 minutes, while in my case, with double the number of workers and the same amount of training time, the reward doesn't improve. I also took a look at the log and there don't seem to be any errors. The command I used to run the script is:
python train.py --num-workers 32 --env-id PongDeterministic-v3 --log-dir /tmp/pong
The only thing that I find a little dubious is that when running the script, the following error was shown, but it didn't abort the run: "failed to connect to server".
Has anyone else run the starter agent, and/or run into similar issue? If so, how did you solve it?
Thanks!
Problem solved: I downgraded TensorFlow from 1.0.0 to 0.11.0 and it trained as expected!

npm is very slow on Windows 10

This question is basically a duplicate of this one, except that the accepted answer on that question was, "it's not actually slower, you just weren't running the timing command correctly."
In my case, it actually is slower! :)
I'm on Windows 10. Here's the output from PowerShell's Measure-Command (the TotalMilliseconds line represents wall-clock time):
PS> Measure-Command {npm --version}
Days : 0
Hours : 0
Minutes : 0
Seconds : 1
Milliseconds : 481
Ticks : 14815261
TotalDays : 1.71472928240741E-05
TotalHours : 0.000411535027777778
TotalMinutes : 0.0246921016666667
TotalSeconds : 1.4815261
TotalMilliseconds : 1481.5261
A few other numbers, for comparison:
.\node_modules\.bin\mocha: 1300ms
npm run test (just runs mocha): 3300ms
npm help: 1900ms
The node interpreter itself is OK: node -e 0: 180ms
It's not just npm that's slow... mocha reports that my tests only take 42ms, but as you can see above, it takes 1300ms for mocha to run those 42ms of tests!
I've had the same trouble. Do you have Symantec Endpoint Protection? Try disabling Application and Device Control in Change Settings > Client Management > General > Enable Application and Device Control.
(You could disable SEP altogether; for me the command is: "%ProgramFiles(x86)%\Symantec\Symantec Endpoint Protection\smc.exe" -stop.)
If you have some other anti-virus, there's likely a way to disable it as well. Note that closing the app in the Notification area might not stop the virus protection. The problem is likely with any kind of realtime protection that scans a process as it starts. Since node and git are frequently-invoked short-running processes, this delay is much more noticeable.
In PowerShell, I like to measure the performance of git status both before and after that change: Measure-Command { git status }
I ran into this problem long ago; I think it was caused by an extension I had. I use Visual Studio Code, and when it has no extensions and is running Bash:
//GIT Bash Configuration
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
it actually flies. I use both OSes, so I can tell the difference. Try using different tools and disabling some extensions.
And if that still doesn't work, check your antivirus; maybe it's slowing down the process.
I'd been googling this all day with no luck. I decided to uninstall Java to see what would happen and, bingo, it solved my problem. I know this is an old thread, but I found myself coming back to it so many times to see if I'd missed anything.
off topic:
Got to figure out how to get Java working now 🤦
Didn't know about Measure-Command, so I'll be using that in the future!
I had this problem. When I tried to run one of my work applications at home, I realized that on my work laptop the app started in 2 minutes, but on my personal notebook it took 5 minutes or more.
After trying some possible solutions, I finally found that the problem was that I had installed Git Bash on my D drive partition, which is an HDD. When I reinstalled it on the C drive, which is an SSD, the app started faster. I also moved Node.js to the C drive to prevent other issues.

Resources