Text in table cells not available in Word doc (python-docx) - python-3.x

I'm trying to extract the text from certain columns in tables saved in docx files, so I'm using the python-docx library to parse the documents, but it only returns the text from certain cells. I've used opc-diag to get the XML for the Word doc, and I've pasted a snippet below. The only cells I can read text from are the ones containing numbers (so 1 in the snippet), but I can't see what's different about those cells in the XML. I know I might end up having to write my own parser (I can't use Word, as the code will be hosted in an AWS service), but I feel like I'm missing something obvious. Has anybody come across anything like this before? I found some other Stack Overflow answers mentioning <w:sdt> tags causing problems, but I don't have any of those.
The code I'm using to extract text from cells -
for table in raw_script.tables:
    column_data = []
    for column in table.columns:
        for cell in column.cells:
            if cell.text not in column_data:
                column_data.append(cell.text)
    print(column_data)
That prints ['', '1', '2', '3', '4', '5'], which isn't what I want!
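As a fallback I've also tried pulling the w:t text nodes straight out of each cell's underlying XML, bypassing cell.text. This is only a rough sketch (it relies on the private _tc attribute of python-docx cells, so it may break between versions, and 'script.docx' is just a placeholder filename):
from docx import Document

# Standard WordprocessingML namespace used by the w: prefix in document.xml.
W_NS = 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'

doc = Document('script.docx')  # placeholder filename
for table in doc.tables:
    for row in table.rows:
        for cell in row.cells:
            # cell._tc is the underlying lxml element for <w:tc>;
            # collect every <w:t> descendant and join their text.
            texts = cell._tc.findall('.//{%s}t' % W_NS)
            print(''.join(t.text or '' for t in texts))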
The document.xml snippet, if it helps -
<w:body>
<w:tbl>
<w:tblPr>
<w:tblW w:w="10556" w:type="dxa"/>
<w:tblLayout w:type="fixed"/>
<w:tblLook w:val="04A0" w:firstRow="1" w:lastRow="0" w:firstColumn="1" w:lastColumn="0" w:noHBand="0" w:noVBand="1"/>
</w:tblPr>
<w:tblGrid>
<w:gridCol w:w="2978"/>
<w:gridCol w:w="794"/>
<w:gridCol w:w="1622"/>
<w:gridCol w:w="4084"/>
<w:gridCol w:w="1078"/>
</w:tblGrid>
<w:tr w:rsidR="00C409AE" w:rsidTr="00305E71">
<w:trPr>
<w:cantSplit/>
<w:trHeight w:val="2127"/>
</w:trPr>
<w:tc>
<w:tcPr>
<w:tcW w:w="2978" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:hideMark/>
</w:tcPr>
<w:p w:rsidR="00C409AE" w:rsidRPr="00D01A3D" w:rsidRDefault="00CF5F0E" w:rsidP="00C409AE">
<w:pPr>
<w:spacing w:after="0" w:line="240" w:lineRule="auto"/>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:szCs w:val="20"/>
<w:u w:val="single"/>
<w:lang w:val="en-AU" w:eastAsia="en-AU"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:szCs w:val="20"/>
<w:u w:val="single"/>
<w:lang w:val="en-AU" w:eastAsia="en-AU"/>
</w:rPr>
<w:t>
Sunset</w:t>
</w:r>
</w:p>
<w:p w:rsidR="00C409AE" w:rsidRPr="00D01A3D" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:spacing w:after="0" w:line="240" w:lineRule="auto"/>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:szCs w:val="20"/>
<w:u w:val="single"/>
<w:lang w:val="en-AU" w:eastAsia="en-AU"/>
</w:rPr>
</w:pPr>
</w:p>
<w:p w:rsidR="00C409AE" w:rsidRPr="00D01A3D" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:b/>
<w:color w:val="000000"/>
<w:u w:val="single"/>
</w:rPr>
</w:pPr>
<w:r w:rsidRPr="00D01A3D">
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:b/>
<w:color w:val="000000"/>
<w:u w:val="single"/>
</w:rPr>
<w:t>
Series Title: 10:00:02</w:t>
</w:r>
</w:p>
<w:p w:rsidR="00C409AE" w:rsidRPr="00D01A3D" w:rsidRDefault="00CF5F0E" w:rsidP="00C409AE">
<w:pPr>
<w:spacing w:after="0" w:line="240" w:lineRule="auto"/>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:b/>
<w:color w:val="000000"/>
<w:u w:val="single"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:b/>
<w:color w:val="000000"/>
<w:u w:val="single"/>
</w:rPr>
<w:t>
Sample Script</w:t>
</w:r>
</w:p>
<w:p w:rsidR="00C409AE" w:rsidRPr="00305E71" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:spacing w:after="0" w:line="240" w:lineRule="auto"/>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:szCs w:val="20"/>
<w:u w:val="single"/>
<w:lang w:val="en-AU" w:eastAsia="zh-TW"/>
</w:rPr>
</w:pPr>
</w:p>
</w:tc>
<w:tc>
<w:tcPr>
<w:tcW w:w="794" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:noWrap/>
<w:hideMark/>
</w:tcPr>
<w:p w:rsidR="00C409AE" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:spacing w:after="0" w:line="240" w:lineRule="auto"/>
<w:jc w:val="center"/>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:i/>
<w:iCs/>
<w:color w:val="000000"/>
<w:lang w:val="en-AU" w:eastAsia="en-AU"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Arial" w:hAnsi="Arial" w:cs="Arial"/>
<w:i/>
<w:iCs/>
<w:color w:val="000000"/>
</w:rPr>
<w:t>
1</w:t>
</w:r>
</w:p>
</w:tc>
<w:tc>
<w:tcPr>
<w:tcW w:w="1622" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:noWrap/>
<w:hideMark/>
</w:tcPr>
<w:p w:rsidR="00C409AE" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:b/>
<w:bCs/>
<w:color w:val="000000"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:b/>
<w:bCs/>
<w:color w:val="000000"/>
</w:rPr>
<w:t>
SONG:</w:t>
</w:r>
</w:p>
</w:tc>
<w:tc>
<w:tcPr>
<w:tcW w:w="4084" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:hideMark/>
</w:tcPr>
<w:p w:rsidR="00C409AE" w:rsidRDefault="00CF5F0E" w:rsidP="00C409AE">
<w:pPr>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
<w:t>
# Theme Music Lyrics</w:t>
</w:r>
</w:p>
<w:p w:rsidR="00CF5F0E" w:rsidRDefault="00CF5F0E" w:rsidP="00C409AE">
<w:pPr>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
<w:t>
# Second Line Of Theme</w:t>
</w:r>
</w:p>
</w:tc>
<w:tc>
<w:tcPr>
<w:tcW w:w="1078" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:noWrap/>
<w:hideMark/>
</w:tcPr>
<w:p w:rsidR="00C409AE" w:rsidRDefault="00C409AE" w:rsidP="00C409AE">
<w:pPr>
<w:jc w:val="right"/>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:cs="Calibri"/>
<w:color w:val="000000"/>
</w:rPr>
<w:t>
10:00:01</w:t>
</w:r>
</w:p>
</w:tc>
</w:tr>
<w:tr w:rsidR="00C409AE" w:rsidTr="00305E71">
<w:trPr>
<w:cantSplit/>
<w:trHeight w:val="1512"/>
</w:trPr>
<w:tc>
<w:tcPr>
<w:tcW w:w="2978" w:type="dxa"/>
<w:tcBorders>
<w:top w:val="nil"/>
<w:left w:val="nil"/>
<w:bottom w:val="nil"/>
<w:right w:val="nil"/>
</w:tcBorders>
<w:shd w:val="clear" w:color="auto" w:fill="auto"/>
<w:hideMark/>
</w:tcPr>

Related

Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0 (when checking argument mat1 in method wrapper_addmm)

I trained a Faster R-CNN to detect tools. I had already defined my model and everything worked. But to get cleaner code without global variables, I tried to write a class MyModel that automatically defines every object and trains the model. Inside this class I define my dataset as self.dataset = ToolDataset.
That dataset class defines my input (an image) and my output (a target, which is a dictionary with bboxes, labels, area, ...).
Then I build a data loader (so I have a self.data_loader), and I use the train_one_epoch function from the engine library. I pass it my model (a Faster R-CNN), my data loader, and the device, which is cuda:0 (I printed it). This function iterates over my data loader: it builds a list of images and a list of targets, and moves the values of both lists to that device.
Then it calls model(images, targets), and at this step I get the error about two devices being found (I pasted the error at the end of this message).
I get the error even though every tensor (my images and every value of my target dictionaries) returns True for tensor.is_cuda, so I really don't understand why the error says there is also a CPU tensor. Here are my train and train_one_epoch functions, and my images and targets variables.
The train method:
def train(self, num_epoch = 10, gpu = True):
    if gpu:
        CUDA_LAUNCH_BLOCKING = "1"
        #torch.set_default_tensor_type(torch.FloatTensor)
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    use_cuda = torch.cuda.is_available()
    device = torch.device("cuda:0" if use_cuda else "cpu")
    model.to(device)
    if self.multi_object_detection == False:
        num_classes = 2  # ['Tool', 'background']
    else:
        print("need to set a multi object detection code")
    in_features = torch.tensor(model.roi_heads.box_predictor.cls_score.in_features, dtype=torch.int64).to(device)
    print("in_features = {}".format(in_features))
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    print("model.roi_heads.box_predictor {}".format(model.roi_heads.box_predictor))
    model_parameters = filter(lambda p: p.requires_grad, model.parameters())
    #params = sum([np.prod(p.size()) for p in model_parameters])
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0005)
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    gc.collect()
    num_epochs = 5
    FILE_model_dict_gpu = "model_state_dict__gpu_lab2_and_lab7_5epoch.pth"
    list_of_list_losses = []
    print("device = ", device)
    if (self.data_loader.dataset) == None:
        self.build_dataloader(device)
    for epoch in tqdm(range(num_epochs)):
        # Train for one epoch, printing every 10 iterations
        train_his_, list_losses, list_losses_dict = train_one_epoch(model, optimizer, self.data_loader, device, epoch, print_freq=10)
        list_of_list_losses.append(list_losses)
        # Compute losses over the validation set
        #val_his_ = validate_one_epoch(model, val_data_loader, device, print_freq=10)
        # Update the learning rate
        print("lr before update : ", lr_scheduler)
        lr_scheduler.step()
        print("lr after update : ", lr_scheduler)
        # Store loss values to plot learning curves afterwards
        if epoch == 0:
            train_history = {k: [v] for k, v in train_his_.items()}
            #val_history = {k: [v] for k, v in val_his_.items()}
        else:
            for k, v in train_his_.items():
                train_history[k] += [v]
            # for k, v in val_his_.items(): val_history[k] += [v]
        # The model could be saved inside the loop by adding a criterion, e.g. if the validation loss decreases
        # torch.save(model, save_path)
    torch.cuda.empty_cache()
    gc.collect()
The train_one_epoch function (I print some information that is shown in the output at the end of this message):
def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq):
    model.train()
    metric_logger = utilss.MetricLogger(delimiter=" ")
    metric_logger.add_meter('lr', utilss.SmoothedValue(window_size=1, fmt='{value:.6f}'))
    header = 'Epoch: [{}]'.format(epoch)
    list_losses = []
    list_losses_dict = []
    for i, values in tqdm(enumerate(metric_logger.log_every(data_loader, print_freq, header))):
        images, targets = values
        for image in images:
            print("before the to(device) operation, image.is_cuda = {}".format(image.is_cuda))
        images = list(image.to(device, dtype=torch.float) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        #images = [image.cuda() for image in images]
        for image in images:
            print(image)
            print("after the to(device) operation, image.is_cuda = {}".format(image.is_cuda))
        for target in targets:
            for t, dict_value in target.items():
                print("after the to(device) operation, dict_value.is_cuda = {}".format(dict_value.is_cuda))
        print("images = {}".format(images))
        print("targets = {}".format(targets))
        # Feed the training samples to the model and compute the losses
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        # Reduce losses over all GPUs for logging purposes
        loss_dict_reduced = utilss.reduce_dict(loss_dict)
        losses_reduced = sum(loss for loss in loss_dict_reduced.values())
        loss_value = losses_reduced.item()
        print("Loss is {}, stopping training".format(loss_value))
        if not math.isfinite(loss_value):
            print("Loss is {}, stopping training".format(loss_value))
            print(loss_dict_reduced)
            sys.exit(1)
        list_losses.append(loss_value)
        # Reset the optimizer's gradients
        optimizer.zero_grad()
        # Compute gradients for backpropagation
        losses.backward()
        # Update the weights
        optimizer.step()
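One way to narrow this down is a small device audit just before the forward pass (a sketch of my own, not part of the engine code), listing any model parameters or buffers that are still on the CPU, since the inputs themselves already report is_cuda = True:
def report_cpu_tensors(model):
    # Print any parameter or buffer that has not been moved to the GPU.
    for name, p in model.named_parameters():
        if not p.is_cuda:
            print("parameter still on CPU:", name)
    for name, b in model.named_buffers():
        if not b.is_cuda:
            print("buffer still on CPU:", name)

# e.g. call report_cpu_tensors(model) just before loss_dict = model(images, targets)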
And here is my output with the error (my images and targets, followed by the error):
in_features = 1024
model.roi_heads.box_predictor FastRCNNPredictor(
(cls_score): Linear(in_features=1024, out_features=2, bias=True)
(bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
)
device = cuda:0
100%|██████████| 515/515 [00:00<00:00, 112118.06it/s]
100%|██████████| 761/761 [00:00<00:00, 111005.96it/s]
0%| | 0/5 [00:00<?, ?it/s]
0it [00:00, ?it/s]
before the to(device) operation, image.is_cuda = True
tensor([[[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0235, 0.0235, 0.0235, ..., 0.0235, 0.0235, 0.0235],
[0.0353, 0.0353, 0.0353, ..., 0.0314, 0.0314, 0.0314]],
[[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0039, 0.0039, 0.0039],
[0.0235, 0.0235, 0.0235, ..., 0.0157, 0.0157, 0.0157],
[0.0353, 0.0353, 0.0353, ..., 0.0235, 0.0235, 0.0235]],
[[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0078, 0.0078, 0.0078],
[0.0235, 0.0235, 0.0235, ..., 0.0196, 0.0196, 0.0196],
[0.0353, 0.0353, 0.0353, ..., 0.0275, 0.0275, 0.0275]]],
device='cuda:0')
after the to(device) operation, image.is_cuda = True
after the to(device) operation, dict_value.is_cuda = True
after the to(device) operation, dict_value.is_cuda = True
after the to(device) operation, dict_value.is_cuda = True
after the to(device) operation, dict_value.is_cuda = True
after the to(device) operation, dict_value.is_cuda = True
images = [tensor([[[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0235, 0.0235, 0.0235, ..., 0.0235, 0.0235, 0.0235],
[0.0353, 0.0353, 0.0353, ..., 0.0314, 0.0314, 0.0314]],
[[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
[0.0078, 0.0078, 0.0078, ..., 0.0000, 0.0000, 0.0000],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0039, 0.0039, 0.0039],
[0.0235, 0.0235, 0.0235, ..., 0.0157, 0.0157, 0.0157],
[0.0353, 0.0353, 0.0353, ..., 0.0235, 0.0235, 0.0235]],
[[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
[0.0078, 0.0078, 0.0078, ..., 0.0118, 0.0118, 0.0118],
...,
[0.0078, 0.0078, 0.0078, ..., 0.0078, 0.0078, 0.0078],
[0.0235, 0.0235, 0.0235, ..., 0.0196, 0.0196, 0.0196],
[0.0353, 0.0353, 0.0353, ..., 0.0275, 0.0275, 0.0275]]],
device='cuda:0')]
targets = [{'boxes': tensor([[1118.8964, 0.0000, 1368.9186, 399.3243],
[1043.0958, 111.4863, 1332.4319, 426.1295]], device='cuda:0',
dtype=torch.float64), 'labels': tensor([1, 1], device='cuda:0'), 'index': tensor([311], device='cuda:0'), 'area': tensor([99839.9404, 91037.6485], device='cuda:0', dtype=torch.float64), 'iscrowd': tensor([0], device='cuda:0')}]
/home/nathaneberrebi/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448278899/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
0it [00:02, ?it/s]
0%| | 0/5 [00:02<?, ?it/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-15-51a35da5b1fe> in <module>
----> 1 class_model.train()
<ipython-input-7-d44d099a7743> in train(self, num_epoch, gpu)
144
145 # Train for one epoch, printing every 10 iterations
--> 146 train_his_, list_losses, list_losses_dict = train_one_epoch(model, optimizer, self.data_loader, device, epoch, print_freq=10)
147 list_of_list_losses.append(list_losses)
148 # Compute losses over the validation set
<ipython-input-6-347c12a81a2f> in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
519
520 # Feed the training samples to the model and compute the losses
--> 521 loss_dict = model(images, targets)
522 losses = sum(loss for loss in loss_dict.values())
523
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/lib/python3.8/site-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
95 features = OrderedDict([('0', features)])
96 proposals, proposal_losses = self.rpn(images, features, targets)
---> 97 detections, detector_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
98 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
99
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/lib/python3.8/site-packages/torchvision/models/detection/roi_heads.py in forward(self, features, proposals, image_shapes, targets)
752 box_features = self.box_roi_pool(features, proposals, image_shapes)
753 box_features = self.box_head(box_features)
--> 754 class_logits, box_regression = self.box_predictor(box_features)
755
756 result: List[Dict[str, torch.Tensor]] = []
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/lib/python3.8/site-packages/torchvision/models/detection/faster_rcnn.py in forward(self, x)
280 assert list(x.shape[2:]) == [1, 1]
281 x = x.flatten(start_dim=1)
--> 282 scores = self.cls_score(x)
283 bbox_deltas = self.bbox_pred(x)
284
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
94
95 def forward(self, input: Tensor) -> Tensor:
---> 96 return F.linear(input, self.weight, self.bias)
97
98 def extra_repr(self) -> str:
~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1845 if has_torch_function_variadic(input, weight):
1846 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847 return torch._C._nn.linear(input, weight, bias)
1848
1849
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
Thank you very much for your help; I've had this issue for a while. I also cannot torch.jit.trace my previous model (from before I tried to clean up my code with a class that builds every object from a single train function) because of the same error, and I need to fix that to use the model from C++ code.
Let me know if you need any further information.
Here is my torch env:
PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.8 (64-bit runtime)
Python platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce RTX 3060 Laptop GPU
Nvidia driver version: 460.80
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.2
[pip3] numpydoc==1.1.0
[pip3] torch==1.9.0
[pip3] torchaudio==0.9.0a0+33b2469
[pip3] torchvision==0.10.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.1.74 h6bb024c_0 nvidia
[conda] mkl 2021.2.0 h06a4308_296
[conda] mkl-service 2.4.0 py38h497a2fe_0 conda-forge
[conda] mkl_fft 1.3.0 py38h42c9631_2
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.18.5 pypi_0 pypi
[conda] numpy-base 1.20.2 py38hfae3a4d_0
[conda] numpydoc 1.1.0 py_1 conda-forge
[conda] pytorch 1.9.0 py3.8_cuda11.1_cudnn8.0.5_0 pytorch
[conda] torch 1.9.0 pypi_0 pypi
[conda] torchaudio 0.9.0 py38 pytorch
[conda] torchvision 0.10.0 py38_cu111 pytorch

setpts outputting wrong duration (FFmpeg)

I have a video with a duration of 14 seconds. I applied the setpts filter to make it 3 times slower, so the total duration should be 42s (14*3), but the output is a 49s long video. Here is the command:
ffmpeg -i video.mp4 -filter_complex "[0:v]setpts=3*PTS[v];[0:a]atempo=0.6,atempo=0.5[a]" -map "[v]" -map "[a]" output.mp4
What is wrong with this command? I tried it on different videos, but they all have the same weird issue. Any help will be appreciated. Regards.
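For reference, a quick sanity check on the tempo arithmetic (just back-of-the-envelope numbers, not a confirmed diagnosis): the chained atempo=0.6,atempo=0.5 multiplies out to 0.30 rather than the 1/3 ≈ 0.333 that would match a 3x slowdown, so the 14.72s audio stream would stretch to roughly 49s while the video only reaches about 44s:
# Back-of-the-envelope stream durations (assumes the container runs as long as the longest stream).
input_duration = 14.72            # duration reported by ffmpeg for video.mp4, in seconds
video_out = input_duration * 3    # setpts=3*PTS  -> ~44.2 s of video
audio_factor = 0.6 * 0.5          # chained atempo factors multiply -> 0.30
audio_out = input_duration / audio_factor   # ~49.1 s of audio
print(round(video_out, 1), round(audio_out, 1))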
Log:
ffmpeg version git-2020-08-31-4a11a6f Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 10.2.1 (GCC) 20200805
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libsrt --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libgsm --enable-librav1e --enable-libsvtav1 --disable-w32threads --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 58.100 / 56. 58.100
libavcodec 58.101.101 / 58.101.101
libavformat 58. 51.101 / 58. 51.101
libavdevice 58. 11.101 / 58. 11.101
libavfilter 7. 87.100 / 7. 87.100
libswscale 5. 8.100 / 5. 8.100
libswresample 3. 8.100 / 3. 8.100
libpostproc 55. 8.100 / 55. 8.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
creation_time : 2018-11-20T06:28:07.000000Z
Duration: 00:00:14.72, start: 0.000000, bitrate: 688 kb/s
Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(tv, unknown/bt470bg/unknown), 198x360 [SAR 1:1 DAR 11:20], 592 kb/s, 29.93 fps, 29.93 tbr, 29931 tbn, 59.86 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 96 kb/s (default)
Metadata:
creation_time : 2018-11-20T06:28:07.000000Z
handler_name : IsoMedia File Produced by Google, 5-11-2011
File 'output.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
Stream #0:0 (h264) -> setpts
Stream #0:1 (aac) -> atempo
setpts -> Stream #0:0 (libx264)
atempo -> Stream #0:1 (aac)
Press [q] to stop, [?] for help
[libx264 # 0000021e37f15340] using SAR=1/1
[libx264 # 0000021e37f15340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 # 0000021e37f15340] profile High, level 1.3, 4:2:0, 8-bit
[libx264 # 0000021e37f15340] 264 - core 161 - H.264/MPEG-4 AVC codec - Copyleft 2003-2020 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: isommp42
encoder : Lavf58.51.101
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 198x360 [SAR 1:1 DAR 11:20], q=-1--1, 29.93 fps, 29931 tbn, 29.93 tbc (default)
Metadata:
encoder : Lavc58.101.101 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
encoder : Lavc58.101.101 aac
frame= 1313 fps=256 q=-1.0 Lsize= 2241kB time=00:00:44.11 bitrate= 416.2kbits/s dup=875 drop=0 speed=8.61x
video:1500kB audio:693kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.182632%
[libx264 # 0000021e37f15340] frame I:7 Avg QP:21.61 size: 18425
[libx264 # 0000021e37f15340] frame P:331 Avg QP:24.78 size: 3593
[libx264 # 0000021e37f15340] frame B:975 Avg QP:31.57 size: 223
[libx264 # 0000021e37f15340] consecutive B-frames: 0.8% 0.2% 1.6% 97.5%
[libx264 # 0000021e37f15340] mb I I16..4: 3.6% 31.1% 65.3%
[libx264 # 0000021e37f15340] mb P I16..4: 0.8% 1.3% 4.2% P16..4: 37.0% 29.0% 15.9% 0.0% 0.0% skip:11.7%
[libx264 # 0000021e37f15340] mb B I16..4: 0.0% 0.0% 0.2% B16..8: 8.4% 2.8% 1.2% direct: 0.4% skip:87.0% L0:61.8% L1:31.0% BI: 7.3%
[libx264 # 0000021e37f15340] 8x8 transform intra:22.3% inter:18.7%
[libx264 # 0000021e37f15340] coded y,uvDC,uvAC intra: 73.2% 85.7% 52.9% inter: 11.6% 7.4% 0.7%
[libx264 # 0000021e37f15340] i16 v,h,dc,p: 25% 33% 22% 19%
[libx264 # 0000021e37f15340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 28% 19% 4% 5% 4% 7% 5% 7%
[libx264 # 0000021e37f15340] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 26% 13% 5% 7% 6% 7% 6% 7%
[libx264 # 0000021e37f15340] i8c dc,h,v,p: 38% 29% 22% 11%
[libx264 # 0000021e37f15340] Weighted P-Frames: Y:2.7% UV:0.3%
[libx264 # 0000021e37f15340] ref P L0: 86.1% 8.9% 3.6% 1.4% 0.1%
[libx264 # 0000021e37f15340] ref B L0: 94.7% 4.6% 0.8%
[libx264 # 0000021e37f15340] ref B L1: 99.3% 0.7%
[libx264 # 0000021e37f15340] kb/s:280.02
[aac # 0000021e3847c900] Qavg: 729.586

Replace one element in multiple lists of a list

I have multiple lists inside a list and I want to replace "\xa0" in each of them, but I don't know how to do this. My sample list looks like:
[['0001/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18033999', '\xa0'],
['0001/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '40693008366', '\xa0'],
['0002/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'APLU750808254', 'HTHC18032101'],
['0002/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '02037823030', '\xa0'],
['0003/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18032365', '\xa0'],
['0003/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SHAE19030155', '\xa0'],
['0004/18-19', 'NHAVA SHEVA SEA (INNSA1)', '0258A33647', 'LLLNVS842311NVS'],
['0004/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '17602776476', '\xa0'],
['0005/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'APLU750808254', 'HTHC18032101'],
['0005/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A190301057', '\xa0'],
['0006/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SZWY18030109', '\xa0'],
['0006/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '40684842450', '3986'],
['0007/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SRL18030520', '\xa0'],
['0007/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'HDMUJPNS1768154', '\xa0'],
['0008/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'YSNBF18030315', '\xa0'],
['0008/19-20', 'MUMBAI', 'CTLQD19036504', '\xa0'],
['0009/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A180300433', '\xa0'],
['0009/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '51404381786', 'X8867ANKF7X'],
['0010/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A180300587', '\xa0'],
['0010/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SRL19030377', '\xa0']]
I need help.
Try the below code, hope this helps.
data = [['0001/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18033999', '\xa0'], ['0001/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '40693008366', '\xa0'], ['0002/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'APLU750808254', 'HTHC18032101'], ['0002/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '02037823030', '\xa0'], ['0003/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18032365', '\xa0'], ['0003/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SHAE19030155', '\xa0'], ['0004/18-19', 'NHAVA SHEVA SEA (INNSA1)', '0258A33647', 'LLLNVS842311NVS'], ['0004/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '17602776476', '\xa0'], ['0005/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'APLU750808254', 'HTHC18032101'], ['0005/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A190301057', '\xa0'], ['0006/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SZWY18030109', '\xa0'], ['0006/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '40684842450', '3986'], ['0007/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SRL18030520', '\xa0'], ['0007/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'HDMUJPNS1768154', '\xa0'], ['0008/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'YSNBF18030315', '\xa0'], ['0008/19-20', 'MUMBAI', 'CTLQD19036504', '\xa0'], ['0009/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A180300433', '\xa0'], ['0009/19-20', 'SAHAR AIR CARGO ACC (INBOM4)', '51404381786', 'X8867ANKF7X'], ['0010/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'SNKO02A180300587', '\xa0'], ['0010/19-20', 'NHAVA SHEVA SEA (INNSA1)', 'SRL19030377', '\xa0']]
newdata = [[sent.replace(u'\xa0', u' ') for sent in lst]for lst in data]
print(newdata)
in_list = [['123', '\xa0'], ['123', '\xa0'], ['123', '\xa0'], ['123', '\xa0']]
out_list = [[i.replace('\xa0', '') if i == '\xa0' else i for i in sub_list] for sub_list in in_list]
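A quick note on the two snippets above (my reading of them, not from the original answer): the first replaces every "\xa0" with a regular space wherever it appears inside a string, while the second only touches elements that are exactly "\xa0" and turns them into empty strings. A tiny check:
data = [['0001/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18033999', '\xa0']]

with_space = [[s.replace('\xa0', ' ') for s in row] for row in data]  # first snippet's behaviour
emptied = [['' if s == '\xa0' else s for s in row] for row in data]   # equivalent to the second snippet

print(with_space)  # [['0001/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18033999', ' ']]
print(emptied)     # [['0001/18-19', 'NHAVA SHEVA SEA (INNSA1)', 'LSSZEC18033999', '']]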

Color KML objects by group

I have a question regarding groups in KML.
I have a dataset consisting of 50 objects. These objects have attributes, e.g. severity. Is there any way to classify my KML document based on these severity classes (1/1.5/2), or based on any of the other attributes? I have already created folders manually, but the bigger the dataset gets, the more work it becomes. I would also like to color the objects based on the classification. I have attached my document below. Maybe someone has an idea of how to approach this?
<?xml version="1.0" encoding="utf-8" ?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document id="root_doc">
<Schema name="Flood_2017_KML" id="Flood_2017_KML">
<SimpleField name="ID" type="int"></SimpleField>
<SimpleField name="GlideNumbe" type="string"></SimpleField>
<SimpleField name="Country" type="string"></SimpleField>
<SimpleField name="OtherCount" type="string"></SimpleField>
<SimpleField name="long" type="float"></SimpleField>
<SimpleField name="lat" type="float"></SimpleField>
<SimpleField name="Area" type="float"></SimpleField>
<SimpleField name="Began" type="string"></SimpleField>
<SimpleField name="Ended" type="string"></SimpleField>
<SimpleField name="Validation" type="string"></SimpleField>
<SimpleField name="Dead" type="int"></SimpleField>
<SimpleField name="Displaced" type="int"></SimpleField>
<SimpleField name="MainCause" type="string"></SimpleField>
<SimpleField name="Severity" type="float"></SimpleField>
</Schema>
<Folder><name>Flood_2017_KML</name>
<Folder id="Severity1_flood_2017">
<Style id="transGreyPoly">
<LineStyle>
<width>1</width>
<color>3c8C8C8C</color>
</LineStyle>
<PolyStyle>
<color>3c8C8C8C</color>
</PolyStyle>
</Style>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4441</SimpleData>
<SimpleData name="Country">Peru</SimpleData>
<SimpleData name="long">-77.572950000000006</SimpleData>
<SimpleData name="lat">-5.250831000000000</SimpleData>
<SimpleData name="Area">288499.131403999985196</SimpleData>
<SimpleData name="Began">2017/02/01</SimpleData>
<SimpleData name="Ended">2017/02/07</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">1</SimpleData>
<SimpleData name="Displaced">12000</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">1.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>-78.859612,-8.472832 -80.740118,-5.695045 -79.255508,-4.505716 -77.57295,-2.721449 -74.405783,-2.02883 -74.702704,-5.995598 -78.859612,-8.472832</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4457</SimpleData>
<SimpleData name="Country">Angola</SimpleData>
<SimpleData name="long">13.656325000000001</SimpleData>
<SimpleData name="lat">-8.717518999999999</SimpleData>
<SimpleData name="Area">24002.582783800000470</SimpleData>
<SimpleData name="Began">2017/03/21</SimpleData>
<SimpleData name="Ended">2017/04/08</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">11</SimpleData>
<SimpleData name="Displaced">344</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">1.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>14.126451,-9.709634 13.384147,-9.758848 12.988251,-8.965265 13.186199,-8.519087 13.087225,-7.77524 13.33466,-7.676191 14.324399,-8.321313 14.126451,-9.709634</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4460</SimpleData>
<SimpleData name="Country">Malawi</SimpleData>
<SimpleData name="long">33.871761999999997</SimpleData>
<SimpleData name="lat">-10.364181000000000</SimpleData>
<SimpleData name="Area">24405.783080000001064</SimpleData>
<SimpleData name="Began">2017/04/04</SimpleData>
<SimpleData name="Ended">2017/04/18</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">4</SimpleData>
<SimpleData name="Displaced">0</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">1.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>34.56458,-11.306672 33.228432,-11.206827 33.178945,-9.42169 34.119197,-9.521335 34.56458,-11.306672</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
</Folder>
<Folder id="Severity1.5_flood_2017">
<Style id="transGreenPoly">
<LineStyle>
<width>1</width>
<color>507832F0</color>
</LineStyle>
<PolyStyle>
<color>507832F0</color>
</PolyStyle>
</Style>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4433</SimpleData>
<SimpleData name="Country">Germany</SimpleData>
<SimpleData name="long">9.583276000000000</SimpleData>
<SimpleData name="lat">54.705274000000003</SimpleData>
<SimpleData name="Area">18991.845394600000873</SimpleData>
<SimpleData name="Began">2017/01/02</SimpleData>
<SimpleData name="Ended">2017/01/05</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">0</SimpleData>
<SimpleData name="Displaced">0</SimpleData>
<SimpleData name="MainCause">Winter Storm Axel</SimpleData>
<SimpleData name="Severity">1.500000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>13.532608,54.306792 13.33466,53.662268 10.414927,53.415802 9.425187,54.457616 9.227239,55.251098 9.524161,55.994746 9.920057,55.944961 9.623135,54.953381 10.019031,54.407731 10.761336,54.109839 11.058258,53.911344 12.493381,54.009796 13.532608,54.306792</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
</Folder>
<Folder id="Severity2_flood_2017">
<Style id="transPinkPoly">
<LineStyle>
<width>1</width>
<color>5014B45A</color>
</LineStyle>
<PolyStyle>
<color>5014B45A</color>
</PolyStyle>
</Style>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4445</SimpleData>
<SimpleData name="Country">Chile</SimpleData>
<SimpleData name="long">-70.248874999999998</SimpleData>
<SimpleData name="lat">-30.939481000000001</SimpleData>
<SimpleData name="Area">183781.025771999993594</SimpleData>
<SimpleData name="Began">2017/02/24</SimpleData>
<SimpleData name="Ended">2017/03/03</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">3</SimpleData>
<SimpleData name="Displaced">1200</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>-70.644771,-35.947863 -72.030407,-36.04634 -71.733485,-32.277917 -71.634511,-29.699471 -70.842719,-26.129643 -69.852979,-25.832622 -68.467343,-26.626701 -69.358109,-28.411367 -69.951953,-30.4937 -70.347849,-31.782749 -70.248875,-33.072047 -70.644771,-35.947863</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4450</SimpleData>
<SimpleData name="GlideNumbe">FL-2017-000018-PER</SimpleData>
<SimpleData name="Country">Peru</SimpleData>
<SimpleData name="long">-75.148087000000004</SimpleData>
<SimpleData name="lat">-11.004229000000000</SimpleData>
<SimpleData name="Area">810942.342724999994971</SimpleData>
<SimpleData name="Began">2017/02/01</SimpleData>
<SimpleData name="Ended">2017/03/22</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">78</SimpleData>
<SimpleData name="Displaced">70000</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>-70.050927,-17.99786 -73.020147,-16.70712 -76.08834,-14.226255 -78.56269,-9.960576 -80.938066,-4.504871 -77.770898,-4.010598 -75.989366,-6.094124 -73.119121,-9.566619 -69.358109,-14.626329 -70.050927,-17.99786</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4456</SimpleData>
<SimpleData name="GlideNumbe">MS-2017-000033-COL</SimpleData>
<SimpleData name="Country">Colombia</SimpleData>
<SimpleData name="long">-76.113083000000003</SimpleData>
<SimpleData name="lat">2.187014000000000</SimpleData>
<SimpleData name="Area">28634.320610300001135</SimpleData>
<SimpleData name="Began">2017/04/01</SimpleData>
<SimpleData name="Ended">2017/04/08</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">314</SimpleData>
<SimpleData name="Displaced">0</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>-75.197574,1.640966 -76.286288,1.343995 -77.028593,1.939405 -76.880132,2.881474 -76.484236,3.030034 -75.741931,3.029661 -75.346035,2.434424 -75.197574,1.640966</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4463</SimpleData>
<SimpleData name="GlideNumbe">FL-2017-000038-IRN</SimpleData>
<SimpleData name="Country">Iran</SimpleData>
<SimpleData name="long">46.169280000000001</SimpleData>
<SimpleData name="lat">37.704303000000003</SimpleData>
<SimpleData name="Area">40807.872714999997697</SimpleData>
<SimpleData name="Began">2017/04/15</SimpleData>
<SimpleData name="Ended">2017/04/21</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">42</SimpleData>
<SimpleData name="Displaced">0</SimpleData>
<SimpleData name="MainCause">Torrential Rain</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>47.530173,37.35593 46.738381,36.711703 45.204284,36.662888 44.808388,38.745718 46.639407,38.645625 47.530173,37.35593</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4458</SimpleData>
<SimpleData name="GlideNumbe">TC-2017-000031-AUS</SimpleData>
<SimpleData name="Country">Australia</SimpleData>
<SimpleData name="OtherCount">New Zealand</SimpleData>
<SimpleData name="long">148.681590000000000</SimpleData>
<SimpleData name="lat">-21.974972999999999</SimpleData>
<SimpleData name="Area">258485.657990000006976</SimpleData>
<SimpleData name="Began">2017/03/28</SimpleData>
<SimpleData name="Ended">2017/04/08</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">6</SimpleData>
<SimpleData name="Displaced">20000</SimpleData>
<SimpleData name="MainCause">Tropical Cyclone Debbie</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>152.244654,-25.250018 149.077487,-25.645118 146.900059,-23.858911 145.118527,-20.089444 146.108267,-18.304829 147.295955,-19.693847 148.879539,-20.686372 149.671331,-22.471883 152.046707,-24.258189 152.244654,-25.250018</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
<Placemark>
<ExtendedData><SchemaData schemaUrl="#Flood_2017_KML">
<SimpleData name="ID">4461</SimpleData>
<SimpleData name="Country">New Zealand</SimpleData>
<SimpleData name="long">176.790203999999989</SimpleData>
<SimpleData name="lat">-38.204245999999998</SimpleData>
<SimpleData name="Area">23530.760163599999942</SimpleData>
<SimpleData name="Began">2017/04/05</SimpleData>
<SimpleData name="Ended">2017/04/21</SimpleData>
<SimpleData name="Validation">News</SimpleData>
<SimpleData name="Dead">0</SimpleData>
<SimpleData name="Displaced">2200</SimpleData>
<SimpleData name="MainCause">Heavy Rain</SimpleData>
<SimpleData name="Severity">2.000000000000000</SimpleData>
</SchemaData></ExtendedData>
<MultiGeometry><Polygon><outerBoundaryIs><LinearRing><coordinates>178.027379,-37.758764 177.334561,-39.196423 175.800464,-38.898133 175.553029,-37.212069 175.948925,-37.4602 176.592256,-38.005975 177.384048,-38.055959 178.027379,-37.758764</coordinates></LinearRing></outerBoundaryIs></Polygon></MultiGeometry>
</Placemark>
</Folder>
</Folder>
</Document></kml>
Assuming that you're working in Google Earth, unfortunately you'll need to try some different software for this. Google Earth has no way to do automatic grouping or classification of KML files. The only option it offers for such things is in the import workflow, e.g. when you import a shapefile into Earth Pro and choose to use buckets for styling.
This is the kind of thing that you'll probably want to do in some GIS software with real analytical capabilities. QGIS is a great option that's free and open source, though it does have a bit of a learning curve. In there you'll be able to maintain your dataset, do grouping by attributes, and export to KML.
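If the source data is also available as plain records (rather than only as the finished KML), the grouping and per-class coloring can be done when the file is generated. Below is a minimal sketch along those lines; the records and the output filename are illustrative assumptions, while the style colors reuse the ones from the document above:
from xml.sax.saxutils import escape

# Illustrative records only: (id, severity, "lon,lat lon,lat ..." ring coordinates).
records = [
    (4441, 1.0, "-78.859612,-8.472832 -80.740118,-5.695045 -74.405783,-2.02883 -78.859612,-8.472832"),
    (4433, 1.5, "13.532608,54.306792 13.33466,53.662268 10.414927,53.415802 13.532608,54.306792"),
    (4445, 2.0, "-70.644771,-35.947863 -72.030407,-36.04634 -71.733485,-32.277917 -70.644771,-35.947863"),
]

# One aabbggrr KML color per severity class (reusing the colors from the document above).
colors = {1.0: "3c8C8C8C", 1.5: "507832F0", 2.0: "5014B45A"}

def folder_for(severity, items):
    # Build one <Folder> holding a shared <Style> plus all placemarks of this class.
    style = ('<Style id="sev{s}"><LineStyle><width>1</width><color>{c}</color></LineStyle>'
             '<PolyStyle><color>{c}</color></PolyStyle></Style>').format(s=severity, c=colors[severity])
    placemarks = "".join(
        '<Placemark><name>{name}</name><styleUrl>#sev{s}</styleUrl>'
        '<Polygon><outerBoundaryIs><LinearRing><coordinates>{coords}</coordinates>'
        '</LinearRing></outerBoundaryIs></Polygon></Placemark>'.format(
            name=escape(str(pid)), s=severity, coords=coords)
        for pid, _, coords in items)
    return '<Folder id="Severity{s}_flood_2017">{style}{pm}</Folder>'.format(
        s=severity, style=style, pm=placemarks)

folders = "".join(folder_for(sev, [r for r in records if r[1] == sev]) for sev in sorted(colors))

kml = ('<?xml version="1.0" encoding="utf-8"?>'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>{f}</Document></kml>').format(f=folders)

with open("flood_2017_grouped.kml", "w", encoding="utf-8") as fh:
    fh.write(kml)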

FFmpeg stops encoding after x minutes when using Mjpeg

I'm using ffmpeg in a node application with this command:
ffmpeg -seekable 0 -i http://127.0.0.1:8100/Mjpeg/1?authToken=xxx -video_size 1280x720 -r 30 -pix_fmt yuv420p -y D:\Video\pflyers\test.mp4
The encoding would stop after 28:53 every time. After some reading I figured I had to spawn the child process instead of exec'ing it, because of the large stderr output.
Before doing that, I wanted to see whether that was in fact the issue, so I tried adding:
-nostats -hide_banner -loglevel panic
to avoid the large output to stderr. FFmpeg still stopped after 28:53. Next I tried writing stderr to log.txt instead of using the options above. I did so by adding this to the end:
2> log.txt
Still it would stop at 28:53.
Finally, I tried running the command in cmd.exe, and the encoding stopped at 29:14.
Comparing the output of ffmpeg run from node with the run from cmd.exe, what I realized was that the encoding stopped when log.txt reached 388 kB.
How can I fix this?
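The spawn-and-drain idea I was considering looks roughly like this; the sketch below uses Python's subprocess purely to illustrate the principle of consuming stderr continuously so the pipe can never fill up (the URL and output path are the placeholders from the command above, and "ffmpeg.log" is just an example log name):
import subprocess

cmd = [
    "ffmpeg", "-seekable", "0",
    "-i", "http://127.0.0.1:8100/Mjpeg/1?authToken=xxx",
    "-video_size", "1280x720", "-r", "30", "-pix_fmt", "yuv420p",
    "-y", r"D:\Video\pflyers\test.mp4",
]

proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True)
with open("ffmpeg.log", "w") as log:
    # Keep reading stderr while ffmpeg runs so the pipe buffer never blocks the process.
    for line in proc.stderr:
        log.write(line)
proc.wait()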
Here's the full output:
C:\Users\VossVind>ffmpeg -seekable 0 -i http://127.0.0.1:8100/Mjpeg/1?authToken=xxx -video_size 1280x720 -r 30 -pix_fmt yuv420p -y D:\Video\pflyers\test.mp4
ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Input #0, mpjpeg, from 'http://127.0.0.1:8100/Mjpeg/1?authToken=xxx':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 96:96 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[swscaler # 0000019639b27fc0] deprecated pixel format used, make sure you did set range correctly
[libx264 # 0000019639944040] using SAR=1/1
[libx264 # 0000019639944040] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 # 0000019639944040] profile High, level 3.1
[libx264 # 0000019639944040] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'D:\Video\pflyers\test.mp4':
Metadata:
encoder : Lavf57.83.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 30 fps, 15360 tbn, 30 tbc
Metadata:
encoder : Lavc57.107.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
More than 1000 frames duplicated 114432kB time=00:03:17.80 bitrate=4739.3kbits/s dup=1000 drop=0 speed=0.95x
frame=52544 fps= 29 q=-1.0 Lsize= 1050136kB time=00:29:11.36 bitrate=4912.0kbits/s dup=8757 drop=0 speed=0.972x
video:1049508kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.059803%
[libx264 # 0000019639944040] frame I:261 Avg QP:20.59 size: 51862
[libx264 # 0000019639944040] frame P:15400 Avg QP:23.72 size: 33007
[libx264 # 0000019639944040] frame B:36883 Avg QP:24.55 size: 14989
[libx264 # 0000019639944040] consecutive B-frames: 0.9% 15.8% 2.5% 80.8%
[libx264 # 0000019639944040] mb I I16..4: 17.1% 82.1% 0.8%
[libx264 # 0000019639944040] mb P I16..4: 4.3% 48.4% 0.2% P16..4: 12.3% 7.5% 4.8% 0.0% 0.0% skip:22.5%
[libx264 # 0000019639944040] mb B I16..4: 2.7% 15.6% 0.0% B16..8: 22.3% 7.0% 1.8% direct: 3.8% skip:46.8% L0:52.3% L1:34.0% BI:13.8%
[libx264 # 0000019639944040] 8x8 transform intra:88.6% inter:92.9%
[libx264 # 0000019639944040] coded y,uvDC,uvAC intra: 59.4% 33.6% 2.6% inter: 19.3% 10.9% 0.6%
[libx264 # 0000019639944040] i16 v,h,dc,p: 57% 30% 12% 1%
[libx264 # 0000019639944040] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 18% 45% 2% 1% 1% 2% 2% 3%
[libx264 # 0000019639944040] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 52% 24% 12% 2% 2% 2% 2% 2% 2%
[libx264 # 0000019639944040] i8c dc,h,v,p: 63% 17% 20% 1%
[libx264 # 0000019639944040] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 # 0000019639944040] ref P L0: 43.8% 7.5% 29.1% 19.6%
[libx264 # 0000019639944040] ref B L0: 71.8% 21.2% 7.0%
[libx264 # 0000019639944040] ref B L1: 93.9% 6.1%
[libx264 # 0000019639944040] kb/s:4908.78
Link to -v 48 verbose logging: https://pastebin.com/YwQx8bB2
