YOLOv5 get boxes, scores, classes, nums - PyTorch

I'm trying to integrate object tracking with Deep SORT into my project, and I need to get the boxes, scores, classes, and nums.
Loading the pretrained YOLOv5 model:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()
Getting the prediction:
result = model(img)
print(result.shape)
print(result)
torch.Size([8, 6])
tensor([[277.50000, 379.25000, 410.50000, 478.75000, 0.90625, 2.00000],
[404.00000, 205.12500, 498.50000, 296.00000, 0.88623, 2.00000],
[262.50000, 247.75000, 359.50000, 350.25000, 0.88281, 2.00000],
[210.50000, 177.75000, 295.00000, 261.75000, 0.83154, 2.00000],
[195.50000, 152.50000, 257.75000, 226.00000, 0.78223, 2.00000],
[137.00000, 146.75000, 168.00000, 162.00000, 0.55713, 2.00000],
[ 96.00000, 130.12500, 132.50000, 161.12500, 0.54199, 2.00000],
[ 43.56250, 89.56250, 87.68750, 161.50000, 0.50146, 5.00000]], device='cuda:0')
So now my question is: how do I get the boxes, scores, classes, and nums into separate variables?
I need that for the object tracking.
I tried it once with the example from the PyTorch documentation:
result.xyxy[0]
but in my case I get an error:
Tensor has no attribute xyxy

The output from the model is a plain torch tensor and has no xyxy attribute, so you need to extract the values manually. Either you can go through each detection one by one:
import torch

# Dummy detections with the same (N, 6) layout: x1, y1, x2, y2, conf, class
det = torch.rand(8, 6)
for *xyxy, conf, cls in det:
    print(*xyxy)  # the four corner coordinates
    print(conf)   # confidence score
    print(cls)    # class id
or you can slice the detections tensor:
xyxy = det[:, 0:4]  # box coordinates
conf = det[:, 4]    # confidence scores
cls = det[:, 5]     # class ids
print(xyxy)
print(conf)
print(cls)
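For the Deep SORT integration, a minimal sketch of packing those slices into separate boxes, scores, classes, nums variables might look like this (the exact layout your tracker expects is an assumption here, not something the question specifies):
# det is the (N, 6) detections tensor: x1, y1, x2, y2, confidence, class
boxes = det[:, 0:4].cpu().numpy()              # (N, 4) corner coordinates
scores = det[:, 4].cpu().numpy()               # (N,) confidence scores
classes = det[:, 5].cpu().numpy().astype(int)  # (N,) class ids
nums = boxes.shape[0]                          # number of detections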

Related

tune hyperparameters of XGBRanker

I'm trying to optimize the hyperparameters of my XGBoost Ranker model, but I can't.
Here is what my table (df in the code) looks like:
query  relevance  features
1      5          5.4.7....
1      3          6........
2      5          3........
2      3          8........
3      2          1........
Then I split my table into train and test, with only one query in the test set:
gss = GroupShuffleSplit(test_size=1, n_splits=1).split(df, groups=df['query'])
X_train_inds, X_test_inds = next(gss)
train_data = df.iloc[X_train_inds]
X_train = train_data.drop(columns=["relevance"])
Y_train = train_data.relevance
test_data = df.iloc[X_test_inds]
X_test = test_data.drop(columns=["relevance"])
Y_test = test_data.relevance
and build groups, which holds the number of rows per query:
groups = train_data.groupby('query').size().to_frame('size')['size'].to_numpy()
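As a sanity check (ignoring for a moment that one query is held out for testing), the toy table above would give groups = [2, 2, 1], and in general the sizes must sum to the number of training rows:
print(groups)                           # e.g. array([2, 2, 1]) for the toy table
assert groups.sum() == len(train_data)  # xgboost requires this to hold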
And then I run my model and try to optimize the hyperparameters with a RandomizedSearchCV:
param_dist = {'n_estimators': randint(40, 1000),
              'learning_rate': uniform(0.01, 0.59),
              'subsample': uniform(0.3, 0.6),
              'max_depth': [3, 4, 5, 6, 7, 8, 9],
              'colsample_bytree': uniform(0.5, 0.4),
              'min_child_weight': [0.05, 0.1, 0.02]}
scoring = sklearn.metrics.make_scorer(sklearn.metrics.ndcg_score, k=10,
                                      greater_is_better=True)
model = xgb.XGBRanker(tree_method='hist',
                      booster='gbtree',
                      objective='rank:ndcg')
clf = RandomizedSearchCV(model,
                         param_distributions=param_dist,
                         cv=5,
                         n_iter=5,
                         scoring=scoring,
                         error_score=0,
                         verbose=3,
                         n_jobs=-1)
clf.fit(X_train, Y_train, group=groups)
Then I get the following error message, which seems to be related to my construction of groups, but I don't see why (knowing that without the random search the model works):
Check failed: group_ptr_.back() == num_row_ (11544 vs. 9235) : Invalid group structure. Number of rows obtained from groups doesn't equal to actual number of rows given by data.
Same problem as here: (Tuning XGBRanker produces error for groups)

Invalid parameter imputer: Check the list of available parameters with `estimator.get_params().keys()`

When I try to run a RandomForestRegressor with a Pipeline and param_grid:
# numerical_columns defined analogously (the names appear in the error output below)
numerical_columns = ['lotSize', 'age', 'landValue', 'livingArea', 'pctCollege',
                     'bedrooms', 'fireplaces', 'bathrooms', 'rooms']
nominal_columns = ['heating', 'fuel', 'sewer', 'waterfront', 'newConstruction', 'centralAir']
numerical_pipeline = Pipeline([('imputer', SimpleImputer(strategy='mean')),
                               ('scaler', StandardScaler())])
nominal_pipeline = Pipeline([('imputer', SimpleImputer(strategy='most_frequent')),
                             ('encoder', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer([
    ('numerical_transformer', numerical_pipeline, numerical_columns),
    ('nominal_transformer', nominal_pipeline, nominal_columns),
])
pipeline = Pipeline([
    ('preprocessor', preprocessor),
    ('regressor', RandomForestRegressor(random_state=0))
])
model = pipeline.fit(X_train, y_train)
param_grid = [
    {'imputer__strategy': ['mean', 'median'],
     'regressor__n_estimators': [3, 10, 30],
     'regressor__max_features': [2, 4, 6]},
    {'imputer__strategy': ['mean', 'median'],
     'regressor__bootstrap': [False],
     'regressor__n_estimators': [3, 10],
     'regressor__max_features': [2, 3, 4]},
]
gridSearch = GridSearchCV(model, param_grid, cv=3,
                          scoring='neg_mean_squared_error',
                          return_train_score=True)
I get this error
ValueError: Invalid parameter imputer for estimator Pipeline(steps=[('preprocessor',
    ColumnTransformer(transformers=[('numerical_transformer',
                                     Pipeline(steps=[('imputer', SimpleImputer()),
                                                     ('scaler', StandardScaler())]),
                                     ['lotSize', 'age', 'landValue', 'livingArea',
                                      'pctCollege', 'bedrooms', 'fireplaces',
                                      'bathrooms', 'rooms']),
                                    ('nominal_transformer',
                                     Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent')),
                                                     ('encoder', OneHotEncoder(handle_unknown='ignore'))]),
                                     ['heating', 'fuel', 'sewer', 'waterfront',
                                      'newConstruction', 'centralAir'])])),
    ('regressor', RandomForestRegressor(random_state=0))]). Check the list of available parameters with `estimator.get_params().keys()`.
I've been reading documentation for the past hour and still haven't managed to find a solution. Is there a problem with my preprocessor? I've tried changing the strategy to mean instead of most_frequent, but then I get a "cannot convert string to float" error.
You've misspecified one of the hyperparameters, imputer__strategy. Your model is a pipeline containing a column transformer containing pipelines, so you need a name for each of those. I believe you need
preprocessor__numerical_transformer__imputer__strategy
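With that nested name, a sketch of the corrected grid (keeping the original entries otherwise unchanged) would be:
param_grid = [
    {'preprocessor__numerical_transformer__imputer__strategy': ['mean', 'median'],
     'regressor__n_estimators': [3, 10, 30],
     'regressor__max_features': [2, 4, 6]},
    {'preprocessor__numerical_transformer__imputer__strategy': ['mean', 'median'],
     'regressor__bootstrap': [False],
     'regressor__n_estimators': [3, 10],
     'regressor__max_features': [2, 3, 4]},
]
Note this only tunes the numerical imputer; the nominal pipeline's imputer would need its own preprocessor__nominal_transformer__imputer__strategy key.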

How to pass group information in sklearn Random Grid Search for XGBRanker?

When I'm trying to perform a randomized grid search on an XGBRanker model, I keep getting the following error:
/workspace/src/objective/rank_obj.cc:52: Check failed: gptr.size() != 0 && gptr.back() == info.labels_.Size(): group structure not consistent with #rows
The error seems to be regarding the structure of the group information passed. I'm passing the size of each group. If there are N rows and 2 groups then the array passed would be [g1_size, g2_size].
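As a small worked example of that structure (toy numbers, purely illustrative):
# 5 rows in 2 groups: the first 3 rows form group 1, the last 2 form group 2
groups = [3, 2]
assert sum(groups) == 5  # xgboost checks that the group sizes cover every row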
I'm not sure where I'm going wrong, since I'm able to fit the model without any issues. Only when I try to perform RandomizedSearchCV do I face this error. The code snippet is as follows:
model = xgb.XGBRanker(
    objective="rank:ndcg",
    max_depth=10,
    n_estimators=100,
    verbosity=1)
param_dist = {'n_estimators': [100, 200, 300],
              'learning_rate': [1e-3, 1e-4, 1e-5],
              'subsample': [0.8, 0.9, 1],
              'max_depth': [5, 6, 7]}
fit_params = {"group": groups}
scoring = make_scorer(ndcg_score, greater_is_better=True)
clf = RandomizedSearchCV(model,
                         param_distributions=param_dist,
                         cv=5,
                         n_iter=5,
                         scoring=scoring,
                         error_score=0,
                         verbose=3,
                         n_jobs=-1)
clf.fit(X_train, Y_train, **fit_params)

Multiple entity recognition with spaCy in Python: error

I am stuck on a problem and seeking help from you. I am trying to train multiple entity types using spaCy.
Following is my training data:
response = [
    ('java developer with java and html css javascript ',
     {'entities': [(0, 14, 'jobtitle'),
                   (0, 4, 'skills'),
                   (34, 37, 'skills'),
                   (38, 49, 'skills')]}),
    ('looking for software engineer with java python',
     {'entities': [(12, 29, 'jobtitle'),
                   (40, 46, 'skills'),
                   (35, 39, 'skills')]})
]
Here is the training code where I have the issue:
nlp = spacy.blank("en")
optimizer = nlp.begin_training()
for i in range(20):
    random.shuffle(TRAIN_DATA)
    for text, annotations in TRAIN_DATA:
        nlp.update([text], [annotations], sgd=optimizer)
Error:
ValueError: [E103] Trying to set conflicting doc.ents: '(0, 14, 'jobtitle')' and '(0, 4, 'skills')'. A token can only be part of one entity, so make sure the entities you're setting don't overlap.
As the error message explains, spaCy's NER model does not support overlapping entity spans, so you can't train a model on these annotations as they stand.
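As a minimal sketch of one possible workaround (the keep_longest helper below is my own, not part of spaCy), you could drop the shorter of any two overlapping spans before training; this also supplies the TRAIN_DATA the training loop expects:
def keep_longest(entities):
    # Sort longest-first so longer spans win; keep a span only if it
    # does not overlap anything already kept (spans are half-open)
    kept = []
    for start, end, label in sorted(entities, key=lambda e: e[1] - e[0], reverse=True):
        if all(end <= s or start >= e for s, e, _ in kept):
            kept.append((start, end, label))
    return sorted(kept)

TRAIN_DATA = [(text, {'entities': keep_longest(ann['entities'])})
              for text, ann in response]
For the first example this keeps (0, 14, 'jobtitle') and drops the overlapping (0, 4, 'skills'). Alternatively, newer spaCy versions offer a span categorizer component that does allow overlapping spans.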

I am using the SVR() function for regression and am unable to optimize its parameters using #PSO with #pyswarms

Optimizing the parameters of SVR() using the pyswarms PSO implementation.
My dataset has 200 samples with 9 features each, and I have to predict one output parameter. I already did this by calling the SVR() function with its default parameters, but the results are not satisfactory. Now I want to optimize its parameters using the PSO algorithm, but I am unable to do it.
model = SVR()model.fit(Xtrain,ytrain)
pred_y = model.predict(Xtest)
param = {'kernel': ('linear', 'poly', 'rbf', 'sigmoid'),
         'C': [1, 5, 10],
         'degree': [3, 8],
         'coef0': [0.01, 10, 0.5],
         'gamma': ('auto', 'scale')}
import pyswarms as ps
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=param)
best_cost, best_pos = optimizer.optimize(model, iters=100)
2019-08-13 12:19:48,551 - pyswarms.single.global_best - INFO - Optimize for 100 iters with {'kernel': ('linear', 'poly', 'rbf', 'sigmoid'), 'C': [1, 5, 10], 'degree': [3, 8], 'coef0': [0.01, 10, 0.5], 'gamma': ('auto', 'scale')}
pyswarms.single.global_best: 0%| |0/100
TypeError: 'SVR' object is not callable
There is an error in the first line: two statements got mixed together by mistake. They should be two lines:
1. model = SVR()
2. model.fit(Xtrain, ytrain)
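Beyond that, pyswarms optimizes a numeric cost function, not an estimator object, and the options dict it expects holds the swarm coefficients c1, c2 and w rather than an sklearn-style parameter grid. A rough sketch of tuning the two continuous parameters C and gamma this way (the svr_cost helper and the bounds are assumptions for illustration, not from the question):
import numpy as np
import pyswarms as ps
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def svr_cost(particles):
    # particles has shape (n_particles, 2); the columns are (C, gamma)
    costs = []
    for C, gamma in particles:
        score = cross_val_score(SVR(C=C, gamma=gamma), Xtrain, ytrain,
                                cv=3, scoring='neg_mean_squared_error').mean()
        costs.append(-score)  # pyswarms minimizes, so negate the score
    return np.array(costs)

bounds = (np.array([0.1, 1e-4]), np.array([100.0, 1.0]))  # lower, upper per dimension
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options={'c1': 0.5, 'c2': 0.3, 'w': 0.9},
                                    bounds=bounds)
best_cost, best_pos = optimizer.optimize(svr_cost, iters=100)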
