Creating a function for my code with datetime and timedelta - python-3.x

I have finished my code and it works when I run it. However, I need to turn this into a function, so that I can call the function with any input date string and get the same results. This is my code:
from datetime import datetime, timedelta
dateStr = '20170101'  # e.g. a user-entered date string
dateObj = datetime.strptime(dateStr, '%Y%m%d')
timeStep = timedelta(days=1)
dateObj2 = dateObj + timeStep
days15 = [dateObj + timeStep*i for i in range(15)]
print(days15)
Output:
[datetime.datetime(2017, 1, 1, 0, 0), ...]
I need to be able to pass in
date_str = "20170817"
results = days_15(date_str)
print(results)
And then get the same results. Any hints or help? Thank you.

You just need to add a def statement before your code to define the function, and rather than printing the result, return it:
from datetime import datetime, timedelta
def days_15(dateStr):
    dateObj = datetime.strptime(dateStr, '%Y%m%d')
    timeStep = timedelta(days=1)
    return [dateObj + timeStep * i for i in range(15)]
my_date_str = "20170817"
results = days_15(my_date_str)
print(results)
Output:
[
datetime.datetime(2017, 8, 17, 0, 0), datetime.datetime(2017, 8, 18, 0, 0),
datetime.datetime(2017, 8, 19, 0, 0), datetime.datetime(2017, 8, 20, 0, 0),
datetime.datetime(2017, 8, 21, 0, 0), datetime.datetime(2017, 8, 22, 0, 0),
datetime.datetime(2017, 8, 23, 0, 0), datetime.datetime(2017, 8, 24, 0, 0),
datetime.datetime(2017, 8, 25, 0, 0), datetime.datetime(2017, 8, 26, 0, 0),
datetime.datetime(2017, 8, 27, 0, 0), datetime.datetime(2017, 8, 28, 0, 0),
datetime.datetime(2017, 8, 29, 0, 0), datetime.datetime(2017, 8, 30, 0, 0),
datetime.datetime(2017, 8, 31, 0, 0)
]

Related

How to resolve the error raised at _name = "label" if "label" in features[0].keys() else "labels" in Hugging Face NER

I am trying to run custom NER on my data using offset values. I tried to replicate the approach from this link: https://huggingface.co/course/chapter7/2
I keep getting an error raised at the line
_name = "label" if "label" in features[0].keys() else "labels"
DATA BEFORE tokenize_and_align_labels FUNCTIONS
{'texts': ['WASHINGTON USA WA DRIVER LICENSE BESSETTE Lamma 4d DL 73235766 9 Class AM to Iss 22/03/2021 Ab Exp 07130/2021 DOB 2/28/21 1 BESSETTE 2 GERALD 8 6930 NE Grandview Blvd, keyport, WA 86494 073076 12 Restrictions A 9a End P 16 Hgt 5\'-04" 15 Sex F 18 Eyes BLU 5 DD 73235766900000000000 Gerald Bessette', ] }
'tag_names': [
[
{'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'},
{'start': 135, 'end': 141, 'tag': 'FIRST_NAME', 'text': 'GERALD'},
{'start': 124, 'end': 122, 'tag': 'LAST_NAME', 'text': 'BESSETTE'},
{'start': 81, 'end': 81, 'tag': 'ISSUE_DATE', 'text': '22/03/2021'},
{'start': 99, 'end': 109, 'tag': 'EXPIRY_DATE', 'text': '07130/2021'},
{'start': 114, 'end': 121, 'tag': 'DATE_OF_BIRTH', 'text': '2/28/21'},
{'start': 51, 'end': 59, 'tag': 'DRIVER_LICENSE_NUMBER', 'text': '73235766'},
{'start': 144, 'end': 185, 'tag': 'ADDRESS', 'text': '6930 NE Grandview Blvd, keyport, WA 86494'}
],
DATA AFTER tokenize_and_align_labels FUNCTIONS
{'input_ids':
[[0, 305, 8684, 2805, 9342, 10994, 26994, 42560, 39951, 163, 12147, 3935, 6433, 6887, 1916, 204, 417, 13925, 6521, 1922, 4390, 4280, 361,
4210, 3326, 7, 19285, 820, 73, 3933, 73, 844, 2146, 2060, 12806, 321, 5339, 541, 73, 844, 2146, 14010, 387, 132, 73, 2517, 73, 2146, 112,
163, 12147, 3935, 6433, 132, 272, 39243, 495, 290, 5913, 541, 12462, 2374, 5877, 12543, 6, 762, 3427, 6, 9342, 290, 4027, 6405, 13470, 541,
5067, 316, 40950, 2485, 83, 361, 102, 4680, 221, 545, 289, 19377, 195, 32269, 3387, 113, 379, 15516, 274, 504, 26945, 12413, 791, 195, 27932,
6521, 1922, 4390, 36400, 45947, 151, 14651, 163, 3361, 3398, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
'attention_mask':
[[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'offset_mapping': [[(0, 0), (0, 1), (1, 10), (11, 14), (15, 17), (18, 20), (20, 24), (25, 28), (28, 32), (33, 34), (34, 37), (37, 39), (39, 41),
(42, 45), (45, 47), (48, 49), (49, 50), (51, 53), (54, 56), (56, 58), (58, 60), (60, 62), (63, 64), (65, 70), (71, 73),
(74, 76), (77, 80), (81, 83), (83, 84), (84, 86), (86, 87), (87, 89), (89, 91), (92, 94), (95, 98), (99, 100), (100, 102),
(102, 104), (104, 105), (105, 107), (107, 109), (110, 112), (112, 113), (114, 115), (115, 116), (116, 118), (118, 119),
(119, 121), (122, 123), (124, 125), (125, 128), (128, 130), (130, 132), (133, 134), (135, 136), (136, 140), (140, 141),
(142, 143), (144, 146), (146, 148), (149, 151), (152, 157), (157, 161), (162, 166), (166, 167), (168, 171), (171, 175),
(175, 176), (177, 179), (180, 181), (181, 183), (183, 185), (186, 188), (188, 190), (190, 192), (193, 195), (196, 204),
(204, 208), (209, 210), (211, 212), (212, 213), (214, 217), (218, 219), (220, 222), (223, 224), (224, 226), (227, 228),
(228, 230), (230, 232), (232, 233), (234, 236), (237, 240), (241, 242), (243, 245), (246, 250), (251, 253), (253, 254),
(255, 256), (257, 259), (260, 262), (262, 264), (264, 266), (266, 269), (269, 277), (277, 280), (281, 287), (288, 289),
(289, 292), (292, 296), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0),
(0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0), (0, 0)]
'labels': [[24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 2, 10, 10, 18, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 3, 11, 11, 11, 11, 19, 24, 24, 1, 9, 9, 9, 17, 24, 24, 24, 24, 24, 24, 4, 12, 20, 24, 0, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
16, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 7, 15, 15, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24],
My Code:
import transformers
from transformers import AutoTokenizer
from transformers import AutoTokenizer,BertModel,BertTokenizer
from transformers import RobertaModel,RobertaConfig,RobertaForTokenClassification
from transformers import TrainingArguments, Trainer
# from transformers.trainer import get_tpu_sampler
from transformers.trainer_pt_utils import get_tpu_sampler
from transformers.data.data_collator import DataCollator, InputDataClass
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification
import torch
from torch.nn import CrossEntropyLoss, MSELoss
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data.dataloader import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torch.utils.data.sampler import RandomSampler
from torchcrf import CRF
import dataclasses
import logging
import warnings
import tqdm
import os
import numpy as np
from typing import List, Union, Dict
os.environ["WANDB_DISABLED"] = "true"
print(transformers.__version__)
import evaluate
metric = evaluate.load("seqeval")
model_checkpoint = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) #add_prefix_space=True
def isin(a, b):
    return a[1] > b[0] and a[0] < b[1]

def tokenize_and_align_labels(examples, label2id, max_length=256):
    tokenized_inputs = tokenizer(examples["texts"], truncation=True, padding='max_length', max_length=max_length, return_offsets_mapping=True)
    print("tokenization done")
    labels = []
    for i, label_idx_for_single_input in enumerate(tqdm.tqdm(examples["tag_names"])):
        labels_for_single_input = ['O' for _ in range(max_length)]
        text_offsets = tokenized_inputs['offset_mapping'][i]
        for entity in label_idx_for_single_input:
            # e.g. entity {'start': 281, 'end': 296, 'tag': 'PERSON_NAME', 'text': 'Gerald Bessette'}
            tag = entity['tag']
            tag_offset = [entity['start'], entity['end']]
            affected_token_ids = [j for j in range(max_length) if isin(tag_offset, text_offsets[j])]
            if len(affected_token_ids) < 1:
                continue
            if any(labels_for_single_input[j] != 'O' for j in affected_token_ids):
                # entity overlap, skip it
                continue
            for j in affected_token_ids:
                labels_for_single_input[j] = 'I_' + tag
            labels_for_single_input[affected_token_ids[-1]] = 'L_' + tag
            labels_for_single_input[affected_token_ids[0]] = 'B_' + tag
        label_ids = [label2id[x] for x in labels_for_single_input]
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs
import json

data = []
with open('data.json', 'r') as f:
    for line in f:
        data.append(json.loads(line))

l = []
for k, v in data[0].items():
    l.append({'text': k, 'spans': v})

train_set = [
    [
        x['text'],
        [{'start': y["start"], 'end': y["end"], 'tag': y["label"], 'text': y["ngram"]} for y in x['spans']]
    ] for x in l
]
## count labels in dataset
from collections import Counter
e = []
for x in train_set:
    for y in x[1]:
        e.append(y['tag'])
Counter(e).most_common()
## get label list
ori_label_list = []
for line in train_set:
    ori_label_list += [entity['tag'] for entity in line[1]]
ori_label_list = sorted(list(set(ori_label_list)))

label_list = []
for prefix in 'BIL':
    label_list += [prefix + '_' + x for x in ori_label_list]
label_list += ['O']
label_list = sorted(list(set(label_list)))
print(label_list)
print(len(label_list))
label2id = {n:i for i,n in enumerate(label_list)}
id2label= {str(i):n for i,n in enumerate(label_list)}
# id2label = {str(i): label for i, label in enumerate(label_names)}
# label2id = {v: k for k, v in id2label.items()}
train_examples ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]}
train_examples = tokenize_and_align_labels(train_examples,label2id)
# train_examples = train_examples.map(tokenize_and_align_labels(label2id),batched=True)
print("here")
print(train_examples.keys())
print(len(train_examples['labels']))
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'offset_mapping', 'labels'])
# 775
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)
# collator=data_collator(train_examples)
# def compute_metrics(eval_preds):
# logits, labels = eval_preds
# predictions = np.argmax(logits, axis=-1)
#
# # Remove ignored index (special tokens) and convert to labels
# true_labels = [[label_list[l] for l in label if l != -100] for label in labels]
# true_predictions = [
# [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
# for prediction, label in zip(predictions, labels)
# ]
# all_metrics = metric.compute(predictions=true_predictions, references=true_labels)
# return {
# "precision": all_metrics["overall_precision"],
# "recall": all_metrics["overall_recall"],
# "f1": all_metrics["overall_f1"],
# "accuracy": all_metrics["overall_accuracy"],
# }
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint,id2label=id2label,label2id=label2id,)
print(model.config.num_labels)
args = TrainingArguments(
    "bert-finetuned-ner",
    # evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=2,
    weight_decay=0.01,
    # push_to_hub=True,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_examples,
    # eval_dataset=train_examples,
    data_collator=data_collator,
    # compute_metrics=compute_metrics,
    tokenizer=tokenizer,
)
trainer.train()
ERROR
_name = "label" if "label" in features[0].keys() else "labels"
AttributeError: 'tokenizers.Encoding' object has no attribute 'keys'
I think the tokenized_inputs object that you create and return in tokenize_and_align_labels is likely a tokenizers Encoding/BatchEncoding object, not a dict or Dataset object (check this by printing type(my_object) when in doubt), which is why the rows the Trainer pulls out of it have no keys.
You should apply your tokenizer to your examples using the map function of Dataset, as in this example from the documentation.
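A minimal sketch of that approach (assuming the Hugging Face datasets library is installed, and reusing train_set, tokenize_and_align_labels and label2id from the code above):
from datasets import Dataset

raw_examples = {'texts': [x[0] for x in train_set], 'tag_names': [x[1] for x in train_set]}
raw_dataset = Dataset.from_dict(raw_examples)
train_dataset = raw_dataset.map(
    lambda batch: tokenize_and_align_labels(batch, label2id),
    batched=True,
    remove_columns=raw_dataset.column_names,  # drop the raw 'texts'/'tag_names' columns
)
train_dataset = train_dataset.remove_columns("offset_mapping")  # the collator only needs input_ids, attention_mask, labels
Each row of train_dataset is then a plain dict, so DataCollatorForTokenClassification and Trainer(train_dataset=train_dataset, ...) can consume it.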

bokeh hbar_stack not rendering properly when using datetimes

I'm trying to plot an hbar_stack with datetimes on the x axis, with no luck. I've done normal hbar plots with datetimes before without problems, so it has to be something with hbar_stack.
Here is the code with some static data:
import datetime
from math import radians
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, DatetimeTickFormatter
from bokeh.palettes import Spectral

start_date = datetime.datetime(2020, 7, 10, 10, 26, 15, 240666)
end_date = datetime.datetime(2020, 7, 10, 13, 27, 33, 741238)
tasks = ['task 1', 'task 2', 'task 3', 'task 4']
status = ['status_1', 'status_2', 'status_3', 'status_4']
exports = {'tasks': tasks, 'status_1': [datetime.datetime(2020, 7, 10, 13, 26, 59, 531234),
datetime.datetime(2020, 7, 10, 13, 25, 16, 666837),
datetime.datetime(2020, 7, 10, 10, 37, 16, 368927),
datetime.datetime(2020, 7, 10, 10, 26, 15, 240666)],
'status_2': [None, datetime.datetime(2020, 7, 10, 13, 27, 33, 741238),
datetime.datetime(2020, 7, 10, 11, 37, 7, 629667),
datetime.datetime(2020, 7, 10, 10, 27, 5, 540767)],
'status_3': [None, None, None, datetime.datetime(2020, 7, 10, 10, 54, 17, 738024)],
'status_4': [None, None, None, datetime.datetime(2020, 7, 10, 11, 2, 15, 196620)]}
p = figure(y_range=tasks, x_range=[start_date, end_date], x_axis_type='datetime', title="Tasks timeline",
tools=["hover,pan,reset,save,wheel_zoom"], tooltips=None)
p.xaxis.formatter = DatetimeTickFormatter(
days=["%m-%d-%Y"],
months=["%m-%d-%Y"],
years=["%m-%d-%Y"],
)
p.xaxis.major_label_orientation = radians(30)
p.hbar_stack(status, y='tasks', height=0.2, color=Spectral[11][:len(status)], source=ColumnDataSource(exports))
As one can see from the data, the datetimes are only minutes apart, but they render as if they were years apart. On hovering the data (x, y), the x value is not shown as a date but as a big number like 1.589e+12. Any help is appreciated.
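hbar_stack stacks the coordinate values cumulatively, and on a datetime axis those values are just milliseconds since the epoch (which is why the hover shows a raw number like 1.589e+12), so the stacked sums end up years apart. Drawing the bars with a plain hbar and explicit left/right edges avoids the stacking; for example, with the first status column: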
from datetime import datetime
from bokeh.plotting import figure, show

dts = [
    datetime(2020, 7, 10, 13, 26, 59, 531234),
    datetime(2020, 7, 10, 13, 25, 16, 666837),
    datetime(2020, 7, 10, 10, 37, 16, 368927),
    datetime(2020, 7, 10, 10, 26, 15, 240666),
]
# because the datetimes are in reverse order
ends = dts[0:-1]
starts = dts[1:]
p = figure(plot_height=350, x_axis_type="datetime", y_range=["a", "b", "c"])
p.hbar(y=["b", "b", "b"], left=starts, right=ends,
       line_color="white", fill_color=["red", "blue", "orange"])
p.xaxis.formatter.hours = ["%b %Y %H:%M"]
show(p)
which yields:

How to convert "2011-2012" to datetime object in mongodb in python?

Hi, I am saving a year range as the string "2011-2012" in the db. I want to convert it to datetime objects, i.e. something like datetime.datetime(2011, 1, 1, 0, 0)-datetime.datetime(2012, 1, 1, 0, 0).
Can you please help me?
You could use rrule from the dateutil package:
from datetime import datetime
from dateutil.rrule import rrule, MONTHLY
# input string:
years = '2011-2012'
# map to integer:
years = tuple(map(int, years.split('-')))
# obtain a datetime object for the starting year and the total years:
start_year = datetime(years[0], 1, 1)
total_years = years[1]-years[0]
# use rrule to generate a monthly range:
monthrange = []
dt_range = list(rrule(freq=MONTHLY, count=total_years*12+1, dtstart=start_year))
# [datetime.datetime(2011, 1, 1, 0, 0),
# datetime.datetime(2011, 2, 1, 0, 0),
# datetime.datetime(2011, 3, 1, 0, 0),
# datetime.datetime(2011, 4, 1, 0, 0),
# datetime.datetime(2011, 5, 1, 0, 0),
# datetime.datetime(2011, 6, 1, 0, 0),
# datetime.datetime(2011, 7, 1, 0, 0),
# datetime.datetime(2011, 8, 1, 0, 0),
# datetime.datetime(2011, 9, 1, 0, 0),
# datetime.datetime(2011, 10, 1, 0, 0),
# datetime.datetime(2011, 11, 1, 0, 0),
# datetime.datetime(2011, 12, 1, 0, 0),
# datetime.datetime(2012, 1, 1, 0, 0)]
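If only the two endpoint datetimes are needed rather than the full monthly range, here is a minimal sketch without dateutil (same "2011-2012" input as above):
from datetime import datetime

year_str = '2011-2012'
start, end = (datetime(int(y), 1, 1) for y in year_str.split('-'))
# start -> datetime.datetime(2011, 1, 1, 0, 0)
# end   -> datetime.datetime(2012, 1, 1, 0, 0)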

Get the list of RGB pixel values of each superpixel

I have an RGB image of dimension (224, 224, 3). I applied superpixel segmentation to it using the SLIC algorithm, as follows:
import numpy as np
from skimage import io as skimageIO  # assumed import behind the skimageIO name
from skimage.segmentation import slic

img = skimageIO.imread("first_image.jpeg")
print('img shape', img.shape)  # (224, 224, 3)
segments_slic = slic(img, n_segments=1000, compactness=0.01, sigma=1)  # up to 1000 segments
segments_slic.shape  # (224, 224)
The number of returned segments:
np.max(segments_slic)
Out[49]: 595
From 0 to 595. So, we have 596 superpixels (regions).
Let's take a look at segments_slic[0]
segments_slic[0]
Out[51]:
array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5,
5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7,
8, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9,
10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12,
12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14,
14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16,
16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18,
18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20,
20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21,
21, 21, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23,
23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25,
25, 25, 25])
What I would like to get:
For each superpixel region, build two arrays as follows:
1) An array containing the indexes of the pixels belonging to the same superpixel.
For instance,
superpixel_list[0] contains all the indexes of the pixels belonging to superpixel 0.
superpixel_list[400] contains all the indexes of the pixels belonging to superpixel 400.
2) superpixel_pixel_values[0]: contains the pixel values (in RGB) of the pixels belonging to superpixel 0.
For instance, let's say that pixels 0, 24, 29, 53 belong to superpixel 0. Then we get
superpixel[0] = [[223,118,33],[245,222,198],[98,17,255],[255,255,0]]  # RGB values of pixels belonging to superpixel 0
What is the efficient/optimized way to do that? (Because I have a dataset of images to loop over.)
EDIT-1
def sp_idx(s, index=True):
    u = np.unique(s)
    if index:
        return [np.where(s == i) for i in u]
    else:
        return [s[s == i] for i in u]
        # return [s[np.where(s == i)] for i in u] gives the same but is slower

superpixel_list = sp_idx(segments_slic)
superpixel = sp_idx(segments_slic, index=False)
In superpixel_list we are supposed to get a list containing the indexes of the pixels belonging to the same superpixel.
For instance,
superpixel_list[0] is supposed to hold all the indexes of the pixels assigned to superpixel 0.
However, I get the following:
superpixel_list[0]
Out[73]:
(array([ 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2,
3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 5, 5,
5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7,
7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 10,
10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13]),
array([0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5,
6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6,
7, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 6, 0, 1, 2, 3, 4, 5, 0, 1,
2, 3, 4, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2]))
Why two arrays?
In superpixel[0], for instance, we are supposed to get the RGB pixel values of each pixel assigned to superpixel 0, as follows:
for instance, if pixels 0, 24, 29, 53 are assigned to superpixel 0, then:
superpixel[0] = [[223,118,33],[245,222,198],[98,17,255],[255,255,0]]
However, when I use your function I get the following:
superpixel[0]
Out[79]:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
Thank you for your help
This can be done using np.where and the resulting indices.
def sp_idx(s, index=True):
    u = np.unique(s)
    return [np.where(s == i) for i in u]

superpixel_list = sp_idx(segments_slic)
superpixel = [img[idx] for idx in superpixel_list]
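np.where(s == i) returns a tuple of row and column index arrays, so img[idx] fancy-indexes the image and yields an (N, 3) array with the RGB values of the N pixels in superpixel i; indexing the label array itself (s[s == i], the index=False branch above) only returns the segment ids, which is why the earlier attempt produced arrays of zeros for superpixel 0. As a small follow-up sketch (assuming img and segments_slic from the question), the mean colour of every superpixel is then:
import numpy as np
mean_colours = np.array([img[idx].mean(axis=0) for idx in superpixel_list])  # shape (n_superpixels, 3)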

Strange Behaviour when Updating Cassandra row

I am using pyspark and pyspark-cassandra.
I have noticed this behaviour on multiple versions of Cassandra (3.0.x and 3.6.x) using COPY, sstableloader, and now saveToCassandra in pyspark.
I have the following schema:
CREATE TABLE test (
    id int,
    time timestamp,
    a int,
    b int,
    c int,
    PRIMARY KEY ((id), time)
) WITH CLUSTERING ORDER BY (time DESC);
and the following data
(1, datetime.datetime(2015, 3, 1, 0, 18, 18, tzinfo=<UTC>), 1, 0, 0)
(1, datetime.datetime(2015, 3, 1, 0, 19, 12, tzinfo=<UTC>), 0, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 22, 59, tzinfo=<UTC>), 1, 0, 0)
(1, datetime.datetime(2015, 3, 1, 0, 23, 52, tzinfo=<UTC>), 0, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 32, 2, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 32, 8, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 43, 30, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 44, 12, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 48, 49, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 49, 7, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 50, 5, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 50, 53, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 51, 53, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 51, 59, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 54, 35, tzinfo=<UTC>), 1, 1, 0)
(1, datetime.datetime(2015, 3, 1, 0, 55, 28, tzinfo=<UTC>), 0, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 55, 55, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 0, 56, 24, tzinfo=<UTC>), 0, 3, 0)
(1, datetime.datetime(2015, 3, 1, 1, 11, 14, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 1, 11, 17, tzinfo=<UTC>), 2, 1, 0)
(1, datetime.datetime(2015, 3, 1, 1, 12, 8, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 1, 12, 10, tzinfo=<UTC>), 0, 3, 0)
(1, datetime.datetime(2015, 3, 1, 1, 17, 43, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 1, 17, 49, tzinfo=<UTC>), 0, 3, 0)
(1, datetime.datetime(2015, 3, 1, 1, 24, 12, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 1, 24, 18, tzinfo=<UTC>), 2, 1, 0)
(1, datetime.datetime(2015, 3, 1, 1, 24, 18, tzinfo=<UTC>), 1, 2, 0)
(1, datetime.datetime(2015, 3, 1, 1, 24, 24, tzinfo=<UTC>), 2, 1, 0)
Towards the end of the data, there are two rows which have the same timestamp.
(1, datetime.datetime(2015, 3, 1, 1, 24, 18, tzinfo=<UTC>), 2, 1, 0)
(1, datetime.datetime(2015, 3, 1, 1, 24, 18, tzinfo=<UTC>), 1, 2, 0)
It is my understanding that when I save to Cassandra, one of these will "win" - there will only be one row.
After writing to cassandra using
rdd.saveToCassandra(keyspace, table, ['id', 'time', 'a', 'b', 'c'])
Neither row appears to have won. Rather, the rows seem to have "merged".
1 | 2015-03-01 01:17:43+0000 | 1 | 2 | 0
1 | 2015-03-01 01:17:49+0000 | 0 | 3 | 0
1 | 2015-03-01 01:24:12+0000 | 1 | 2 | 0
1 | 2015-03-01 01:24:18+0000 | 2 | 2 | 0
1 | 2015-03-01 01:24:24+0000 | 2 | 1 | 0
Rather than the 2015-03-01 01:24:18+0000 containing (1, 2, 0) or (2, 1, 0), it contains (2, 2, 0).
What is happening here? I can't for the life of me figure out what is causing this behaviour.
This is a little-known effect that comes from batching data together. Batching writes assigns the same timestamp to all inserts in the batch. Then, if two writes are done with the exact same timestamp, there is a special merge rule, since there was no "last" write. The Spark Cassandra Connector uses intra-partition batches by default, so this kind of clobbering of values is very likely to happen.
The behavior with two identical write timestamps is a merge based on the greater value.
Given a table (key, a, b):
Batch
    Insert "foo", 2, 1
    Insert "foo", 1, 2
End batch
The batch gives both mutations the same timestamp. Cassandra cannot choose a "last written" value since they both happened at the same time; instead it just chooses the greater value of the two for each column. The merged result will be
"foo", 2, 2
