I am able to generate code coverage reports using c8.
But how do I output the total coverage % every time? Currently, it only prints the total coverage % when it falls below the threshold.
But I want to see the % every time it runs.
Config:
{
  "include": "app",
  "check-coverage": true,
  "lines": 99,
  "reporter": ["text", "cobertura"]
}
I just needed to add "text-summary" as one of the reporters.
"reporter": ["text","text-summary","cobertura"]
I am writing a LoadTestShape class. I want to be able to set the number of transactions per minute. If I try to set it by specifying user_count, it doesn't work because the RPS will vary from CPU to CPU.
Say I want to set 1000 transactions per minute; how can I achieve that in Locust?
The only way to "achieve" this in Locust is to implement pacing. The logic is (see the sketch after this list):
You know how much time one iteration takes for one user.
If the user manages to finish faster, you need to introduce some "think time" so the thread "sleeps" until the desired time.
If the iteration is slower, no sleep is required.
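A minimal sketch of that pacing logic inside a Locust task (the endpoint, user count and pacing interval are made-up numbers: 25 users each completing one iteration every 1.5 seconds gives roughly 1000 transactions per minute):

import time
from locust import HttpUser, task

class PacedUser(HttpUser):
    # hypothetical target: 1000 transactions/minute with 25 users
    # -> each user must complete one iteration every 1.5 seconds
    pacing_seconds = 1.5

    @task
    def my_transaction(self):
        started = time.monotonic()
        self.client.get("/")  # placeholder endpoint
        elapsed = time.monotonic() - started
        # "think time": sleep only if the iteration finished faster than the pacing interval
        if elapsed < self.pacing_seconds:
            time.sleep(self.pacing_seconds - elapsed)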
This answer shows how you can create a custom load shape
Alternatively, you can consider switching to a load testing tool which provides such functionality, e.g. Apache JMeter with the Constant Throughput Timer.
For this kind of timing it's enough to add a LoadTestShape class to locustfile.py, like the code below, which I use for one of my tests. You can change the number of users or the other parameters as you want (the docstring describes each parameter):
from locust import LoadTestShape

class StagesShape(LoadTestShape):
    """
    A simple load test shape class that has different user counts and spawn rates
    at different stages.

    Keyword arguments:
    stages -- A list of dicts, each representing a stage with the following keys:
        duration -- When this many seconds pass the test is advanced to the next stage
        users -- Total user count
        spawn_rate -- Number of users to start/stop per second
        stop -- A boolean that can stop that test at a specific stage
        stop_at_end -- Can be set to stop once all stages have run.
    """

    # durations are cumulative run times, so each stage below lasts 60 seconds
    stages = [
        {"duration": 60, "users": 3, "spawn_rate": 0.05},
        {"duration": 120, "users": 6, "spawn_rate": 0.05},
        {"duration": 180, "users": 9, "spawn_rate": 0.05},
        {"duration": 240, "users": 12, "spawn_rate": 0.05},
        {"duration": 300, "users": 15, "spawn_rate": 0.05},
        {"duration": 360, "users": 18, "spawn_rate": 0.05},
        {"duration": 420, "users": 21, "spawn_rate": 0.05},
        {"duration": 480, "users": 24, "spawn_rate": 0.05},
        {"duration": 540, "users": 27, "spawn_rate": 0.05},
        {"duration": 600, "users": 30, "spawn_rate": 0.05},
    ]

    def tick(self):
        run_time = self.get_run_time()

        for stage in self.stages:
            if run_time < stage["duration"]:
                tick_data = (stage["users"], stage["spawn_rate"])
                return tick_data

        return None
I ended up using constant_throughput to control the number of requests per second.
For different times of the day I'd use a different throughput value.
Simply set wait_time = constant_throughput(0.1). Based on the RPS you want, set the value lower for fewer requests and higher for more RPS.
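For example, a minimal sketch assuming a hypothetical target of 1000 transactions per minute spread across 100 users (the endpoint and numbers are placeholders, adjust them for your own test):

from locust import HttpUser, task, constant_throughput

class ThroughputUser(HttpUser):
    # constant_throughput(x) caps each user at x task iterations per second,
    # so 1000/minute across 100 users is (1000 / 60) / 100 ≈ 0.167 iterations/sec per user
    wait_time = constant_throughput((1000 / 60) / 100)

    @task
    def my_transaction(self):
        self.client.get("/")  # placeholder endpoint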
We have the following output at the end of our Jest test run:
Test Suites: 273 passed, 273 total
Tests: 1 skipped, 1923 passed, 1924 total
Snapshots: 61 passed, 61 total
Time: 38.885 s, estimated 39 s
You see there is one skipped test.
When I search my test files for it.skip, test.skip, or just skip, I find nothing.
What I also tried is outputting the test run into JSON via:
jest --json --outputFile=testrun.json
In the top of the file I find this information:
{
  "numFailedTestSuites": 0,
  "numFailedTests": 0,
  "numPassedTestSuites": 273,
  "numPassedTests": 1923,
  "numPendingTestSuites": 0,
  "numPendingTests": 1,
  "numRuntimeErrorTestSuites": 0,
  "numTodoTests": 0,
  "numTotalTestSuites": 273,
  "numTotalTests": 1924,
  ...
}
So it looks like numPendingTests is the one pointing to the skipped test. But when I search the output file, again, there is no trace of a skipped test. In fact, I did a search for "status": "[a-z] and there is no status to be found other than passed.
Short of looking through 270+ test suites, how else could a skipped test hide from me? Is there any way to find it?
As mentioned by johnrsharp in a comment, another way to skip tests in Jest is to prefix it (or test) with x, so if you want to scan the files for skipped tests, you also need to look out for xit or xtest.
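If you want to scan for all of these patterns at once, a small helper script along these lines would do it (the file glob is a hypothetical layout, adjust it to your project):

import re
from pathlib import Path

# patterns that mark a skipped test in Jest: it.skip / test.skip and the xit / xtest aliases
SKIP_PATTERN = re.compile(r"\b(?:xit|xtest|it\.skip|test\.skip)\s*\(")

for path in Path(".").rglob("*.test.*"):  # e.g. *.test.js, *.test.ts
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
        if SKIP_PATTERN.search(line):
            print(f"{path}:{lineno}: {line.strip()}")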
I'm trying to follow this tutorial on hyperparameter tuning on AI Platform: https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-on-google-cloud-platform-is-now-faster-and-smarter.
My configuration yaml file looks like this:
trainingInput:
  hyperparameters:
    goal: MINIMIZE
    hyperparameterMetricTag: loss
    maxTrials: 4
    maxParallelTrials: 2
    params:
      - parameterName: learning_rate
        type: DISCRETE
        discreteValues:
          - 0.0005
          - 0.001
          - 0.0015
          - 0.002
The expected output:
"completedTrialCount": "4",
"trials": [
{
"trialId": "3",
"hyperparameters": {
"learning_rate": "2e-03"
},
"finalMetric": {
"trainingStep": "123456",
"objectiveValue": 0.123456
},
},
Is there any way to customize the trialId instead of the default numeric values (e.g. 1, 2, 3, 4, ...)?
It is not possible to customize the trialId as it is dependent on the parameter maxTrials in your hyperparameter tuning config.
maxTrials only accepts integers, so the values assigned to trialId will range from 1 to your defined maxTrials.
This is also shown in the example from the post you linked, where maxTrials: 40 is set and the resulting JSON shows trialId: 35, which is within the range of maxTrials:
This indicates that 40 trials have been completed, and the best so far
is trial 35, which achieved an objective of 1.079 with the
hyperparameter values of nembeds=18 and nnsize=32.
Can we make a celery task run at 1:30, 3:00, 4:30, 6 AM using single crontab function?
i.e. 'schedule': crontab(minute=30, hour='1, 3, 4, 6') will run it at 1:30, 3:30, 4:30 and 6:30 AM,
but I want it to run every 90 minutes from 1:30 to 6 AM.
I would create two separate schedules (not separate functions), like this:
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "task_one": {
        "task": "path.to.task.my_task_function",
        "schedule": crontab(minute="30", hour="1,4"),
    },
    "task_two": {
        "task": "path.to.task.my_task_function",
        # minute defaults to "*", so set it explicitly to run on the hour
        "schedule": crontab(minute="0", hour="3,6"),
    },
}
Here, both schedules point to the same function, my_task_function(...), but with separate schedule configs.
With this setting, task_one will execute at 1:30 and 4:30 UTC, whereas task_two will execute at 3:00 and 6:00 UTC.
I had this script to convert some .txt files into .hdf5 files, which were later used as input for another function.
I had implemented this before and everything was running smoothly (about 2 weeks ago).
name = "ecg.hdf5"
sampling_rate = 250;
ecg = np.genfromtxt('ecg.txt')
hf = h5py.File(name, 'w')
# Create subgroups and add dataset
signals = hf.create_group('signals/ECG/raw')
ecg = signals.create_dataset('ecg',data = ecg)
# max and min for plot limits
ecg_max = max(ecg)
ecg_min = min(ecg)
# Add attributes
ecg.attrs.create('json','{"name": "signal0", "resolution": 16, "labels": ["I"], "units": {"signal": {"max": %f, "min": %f}, "time": {"label": "second"}}, "sampleRate": %d, "type": "/ECG/raw/ecg"}' %(ecg_max,ecg_min,sampling_rate))
hf.close()
As I was running it, I kept getting an error and couldn't add the attribute; the message ends with something like:
...error adding attribute...
Any idea, please?
Thanks in advance
Solved it: updating the h5py version from 2.9.0 to 2.10.0 fixed the error.
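If you hit the same error, a quick sanity check is to print which h5py version your script is actually importing (a minimal sketch):

import h5py

# the error above went away after upgrading from 2.9.0 to 2.10.0
print(h5py.__version__)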