How to stack Sentinel bands? - python-3.x

I work with many Sentinel-2 images whose 12 bands I would like to stack into a single file. My images are in ENVI format (.img and .hdr).
I tried to do the concatenation with the rsgislib module, using the following code:
import rsgislib
from rsgislib import imageutils

imagePath = "/run/media/afavro/Elements/Acquisitions_Copernicus2/Sentinel-2/THEIA/2A/resampling/subset_20181005_944_J_resampled.data/"
nom = 'ROI_Resize__Layer__Band_1_SENTINEL2A_20181005_104840_944_L2A_T31TDJ_D_V1_9'
imageList = [nom + '_ATB_R1.hdr',
             nom + '_ATB_R2.hdr',
             nom + '_SRE_B2.hdr',
             nom + '_SRE_B3.hdr',
             nom + '_SRE_B4.hdr',
             nom + '_SRE_B5.hdr',
             nom + '_SRE_B6.hdr',
             nom + '_SRE_B7.hdr',
             nom + '_SRE_B8.hdr',
             nom + '_SRE_B8a.hdr',
             nom + '_SRE_B9.hdr',
             nom + '_SRE_B10.hdr',
             nom + '_SRE_B11.hdr',
             nom + '_SRE_B12.hdr',
             nom + '_FRE_B2.hdr',
             nom + '_FRE_B3.hdr',
             nom + '_FRE_B4.hdr',
             nom + '_FRE_B5.hdr',
             nom + '_FRE_B6.hdr',
             nom + '_FRE_B7.hdr',
             nom + '_FRE_B8.hdr',
             nom + '_FRE_B8a.hdr',
             nom + '_FRE_B9.hdr',
             nom + '_FRE_B10.hdr',
             nom + '_FRE_B11.hdr',
             nom + '_FRE_B12.hdr']
bandNamesList = ['_ATB_R1',
                 '_ATB_R2',
                 '_SRE_B2',
                 '_SRE_B3',
                 '_SRE_B4',
                 '_SRE_B5',
                 '_SRE_B6',
                 '_SRE_B7',
                 '_SRE_B8',
                 '_SRE_B8a',
                 '_SRE_B9',
                 '_SRE_B10',
                 '_SRE_B11',
                 '_SRE_B12',
                 '_FRE_B2',
                 '_FRE_B3',
                 '_FRE_B4',
                 '_FRE_B5',
                 '_FRE_B6',
                 '_FRE_B7',
                 '_FRE_B8',
                 '_FRE_B8a',
                 '_FRE_B9',
                 '_FRE_B10',
                 '_FRE_B11',
                 '_FRE_B12']

# Output image
outputImage = 'SENTINEL2A_20181005-104840-944_L2A_T31TDH_D_V1-9_stack.envi'

# Format and type
gdalFormat = 'ENVI'
dataType = rsgislib.TYPE_16UINT

# Stack
imageutils.stackImageBands(imageList, bandNamesList, outputImage, None, 0, gdalFormat, dataType)
But whatever parameters I change, I always end up with the same error message:
"There are 26 images to stack
ROI_Resize__Layer__Band_1_SENTINEL2A_20181005_104840_944_L2A_T31TDJ_D_V1_9_ATB_R1.hdr
ERROR 4: ROI_Resize__Layer__Band_1_SENTINEL2A_20181005_104840_944_L2A_T31TDJ_D_V1_9_ATB_R1.hdr: no such files or folders
Traceback (most recent call last):
File "<ipython-input-4-9b52a8bb11b5>", line 70, in <module>
imageutils.stackImageBands(imageList, bandNamesList, outputImage, None, 0, gdalFormat, dataType)
error: Could not open image ROI_Resize__Layer__Band_1_SENTINEL2A_20181005_104840_944_L2A_T31TDJ_D_V1_9_ATB_R1.hdr"
Do you have any advice for me?

You are passing in the header files. I believe you should be passing the image files themselves (e.g., .img if that is the extension you are using). Note also that imagePath is defined but never joined onto the filenames, so GDAL looks for them in the current working directory, which would explain the "no such files or folders" error.
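A minimal sketch of that fix, reusing the imagePath and nom variables from the question (the '.img' extension is an assumption here; use whatever extension the binary ENVI files actually carry):

import os

suffixes = (['_ATB_R1', '_ATB_R2'] +
            ['_SRE_B' + b for b in ['2', '3', '4', '5', '6', '7', '8', '8a', '9', '10', '11', '12']] +
            ['_FRE_B' + b for b in ['2', '3', '4', '5', '6', '7', '8', '8a', '9', '10', '11', '12']])
# Full paths to the binary image files, not the .hdr headers.
imageList = [os.path.join(imagePath, nom + s + '.img') for s in suffixes]
bandNamesList = suffixes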

Related

Python error upon exif data extraction via Pillow module: invalid continuation byte

I am writing a piece of code to extract exif data from images using Python. I downloaded the Pillow module using pip3 and am using some code I found online:
from PIL import Image
from PIL.ExifTags import TAGS

imagename = "path to file"
image = Image.open(imagename)
exifdata = image.getexif()

for tagid in exifdata:
    tagname = TAGS.get(tagid, tagid)
    data = exifdata.get(tagid)
    if isinstance(data, bytes):
        data = data.decode()
    print(f"{tagname:25}: {data}")
On some images this code works. However, for images I took on my Olympus camera I get the following error:
GPSInfo : 734
Traceback (most recent call last):
File "_pathname redacted_", line 14, in <module>
data = data.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf0 in position 30: invalid continuation byte
When I remove the data = data.decode() part, I get the following:
GPSInfo : 734
PrintImageMatching : b"PrintIM\x000300\x00\x00%\x00\x01\x00\x14\x00\x14\x00\x02\x00\x01\x00\x00\x00\x03\x00\xf0\x00\x00\x00\x07\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x0b\x008\x01\x00\x00\x0c\x00\x00\x00\x00\x00\r\x00\x00\x00\x00\x00\x0e\x00P\x01\x00\x00\x10\x00`\x01\x00\x00 \x00\xb4\x01\x00\x00\x00\x01\x03\x00\x00\x00\x01\x01\xff\x00\x00\x00\x02\x01\x83\x00\x00\x00\x03\x01\x83\x00\x00\x00\x04\x01\x83\x00\x00\x00\x05\x01\x83\x00\x00\x00\x06\x01\x83\x00\x00\x00\x07\x01\x80\x80\x80\x00\x10\x01\x83\x00\x00\x00\x00\x02\x00\x00\x00\x00\x07\x02\x00\x00\x00\x00\x08\x02\x00\x00\x00\x00\t\x02\x00\x00\x00\x00\n\x02\x00\x00\x00\x00\x0b\x02\xf8\x01\x00\x00\r\x02\x00\x00\x00\x00 \x02\xd6\x01\x00\x00\x00\x03\x03\x00\x00\x00\x01\x03\xff\x00\x00\x00\x02\x03\x83\x00\x00\x00\x03\x03\x83\x00\x00\x00\x06\x03\x83\x00\x00\x00\x10\x03\x83\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\t\x11\x00\x00\x10'\x00\x00\x0b\x0f\x00\x00\x10'\x00\x00\x97\x05\x00\x00\x10'\x00\x00\xb0\x08\x00\x00\x10'\x00\x00\x01\x1c\x00\x00\x10'\x00\x00^\x02\x00\x00\x10'\x00\x00\x8b\x00\x00\x00\x10'\x00\x00\xcb\x03\x00\x00\x10'\x00\x00\xe5\x1b\x00\x00\x10'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x05\x05\x00\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x05\x05\x05\x00\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
ResolutionUnit : 2
ExifOffset : 230
ImageDescription : OLYMPUS DIGITAL CAMERA
Make : OLYMPUS CORPORATION
Model : E-M10MarkII
Software : Version 1.2
Orientation : 1
DateTime : 2020:02:13 15:02:57
YCbCrPositioning : 2
YResolution : 350.0
Copyright :
XResolution : 350.0
Artist :
How should I fix this problem? Should I use a different Python module?
I did some digging and figured out the answer to my own question. I originally postulated that the rest of the metadata was hidden in the PrintImageMatching byte blob shown above.
b"PrintIM\x000300\x00\x00%\x00\x01\x00\x14\x00\x14\x00\x02\x00\x01\x00\x00\x00\x03\x00\xf0\x00\x00\x00\x07\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\n\x00\x00\x00\x00\x00\x0b\x008\x01\x00\x00\x0c\x00\x00\x00\x00\x00\r\x00\x00\x00\x00\x00\x0e\x00P\x01\x00\x00\x10\x00`\x01\x00\x00 \x00\xb4\x01\x00\x00\x00\x01\x03\x00\x00\x00\x01\x01\xff\x00\x00\x00\x02\x01\x83\x00\x00\x00\x03\x01\x83\x00\x00\x00\x04\x01\x83\x00\x00\x00\x05\x01\x83\x00\x00\x00\x06\x01\x83\x00\x00\x00\x07\x01\x80\x80\x80\x00\x10\x01\x83\x00\x00\x00\x00\x02\x00\x00\x00\x00\x07\x02\x00\x00\x00\x00\x08\x02\x00\x00\x00\x00\t\x02\x00\x00\x00\x00\n\x02\x00\x00\x00\x00\x0b\x02\xf8\x01\x00\x00\r\x02\x00\x00\x00\x00 \x02\xd6\x01\x00\x00\x00\x03\x03\x00\x00\x00\x01\x03\xff\x00\x00\x00\x02\x03\x83\x00\x00\x00\x03\x03\x83\x00\x00\x00\x06\x03\x83\x00\x00\x00\x10\x03\x83\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\t\x11\x00\x00\x10'\x00\x00\x0b\x0f\x00\x00\x10'\x00\x00\x97\x05\x00\x00\x10'\x00\x00\xb0\x08\x00\x00\x10'\x00\x00\x01\x1c\x00\x00\x10'\x00\x00^\x02\x00\x00\x10'\x00\x00\x8b\x00\x00\x00\x10'\x00\x00\xcb\x03\x00\x00\x10'\x00\x00\xe5\x1b\x00\x00\x10'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x05\x05\x00\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x05\x05\x05\x00\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00##\x80\x80\xc0\xc0\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
That assumption wasn't correct. Although the above is metadata, it simply isn't the metadata I am looking for (in my case the FocalLength attribute); rather, it appears to be Olympus-specific metadata. The solution was to read all of the metadata. I found a piece of code on Stack Overflow that works very well: In Python, how do I read the exif data for an image?.
I used the following code by Nicolas Gervais:
import os, sys
from PIL import Image
from PIL.ExifTags import TAGS

for (k, v) in Image.open(sys.argv[1])._getexif().items():
    print('%s = %s' % (TAGS.get(k), v))
I replaced sys.argv[1] with the path name to the image file.
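As an aside, the original UnicodeDecodeError itself can be avoided by telling decode how to handle invalid bytes, at the cost of mangling them; a hedged sketch of that one-line change to the loop above:

if isinstance(data, bytes):
    data = data.decode(errors='replace')  # undecodable bytes become U+FFFD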
Alternate Solution
As MattDMo mentioned, there are also libraries dedicated to reading EXIF data in Python. One that looks promising is ExifRead, which can be downloaded by typing the following in the terminal:
pip install ExifRead
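A minimal sketch of pulling the tag I was after with ExifRead (the path is a placeholder; tag keys follow ExifRead's 'IFD TagName' convention):

import exifread

with open('path to file', 'rb') as f:  # ExifRead wants a binary file handle
    tags = exifread.process_file(f)
print(tags.get('EXIF FocalLength'))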

Reading a semicolon (';') separated raw text in Python

I have managed to find some solutions online but often without an explanation. I am new to Python and normally choose to rework my data in Excel. I would, however, like to learn how to deal with a problem like the following:
I was given data for Brazil in this form. The website says to "save the file as a csv". It looks like a mess...
Espírito Santo;
Dados;
Ano;População total;Homens;Mulheres;Nascimentos;Óbitos;Taxa de Crescimento Geométrico;Taxa Bruta de Natalidade;Taxa Bruta de Mortalidade;Esperança de Vida ao Nascer;Esperança de Vida ao Nascer - Homens;Esperança de Vida ao Nascer - Mulheres;Taxa de Mortalidade Infantil;Taxa de Mortalidade Infantil - Homens;Taxa de Mortalidade Infantil - Mulheres;Taxa de Fecundidade Total;Razão de Dependência - Jovens 0 a 14 anos;Razão de Dependência - Idosos 65 ou mais anos;Razão de Dependência;Índice de Envelhecimento;
2010;3596057;1772936;1823121;54018;19734;x;15.02;5.49;75.93;71.9;80.19;11.97;13.59;10.28;1.73;34.49;10.17;44.67;29.41;
2011;3642595;1795501;1847094;55387;19923;1.29;15.21;5.47;76.36;72.35;80.59;11.3;12.87;9.66;1.77;33.72;10.41;44.13;30.77;
2012;3689347;1818188;1871159;55207;20142;1.28;14.96;5.46;76.76;72.78;80.96;10.69;12.2;9.1;1.75;32.98;10.68;43.65;32.17;
2013;3736386;1841035;1895351;56785;20396;1.27;15.2;5.46;77.14;73.19;81.31;10.14;11.6;8.6;1.8;32.29;10.97;43.26;34.22;
2014;3784361;1864376;1919985;57964;20676;1.28;15.32;5.46;77.51;73.58;81.64;9.64;11.06;8.15;1.83;31.73;11.31;43.04;35.59;
2015;3832826;1887984;1944842;58703;20979;1.28;15.32;5.47;77.85;73.95;81.95;9.19;10.56;7.74;1.85;31.29;11.69;42.98;37.44;
2016;3879376;1910629;1968747;55091;21282;1.21;14.2;5.49;78.18;74.31;82.24;8.78;10.11;7.38;1.73;30.84;12.13;42.97;39.35;
2017;3925341;1932993;1992348;58530;21624;1.18;14.91;5.51;78.49;74.65;82.5;8.42;9.71;7.06;1.84;30.52;12.61;43.13;41.31;
2018;3972388;1955930;2016458;58342;22016;1.2;14.69;5.54;78.79;74.97;82.76;8.09;9.34;6.77;1.83;30.31;13.14;43.45;43.6;
2019;4018650;1978483;2040167;58106;22419;1.16;14.46;5.58;79.06;75.27;82.99;7.79;9;6.52;1.83;30.12;13.71;43.83;45.45;
I used MS Word to replace the ";" with "," and Excel's import-from-text feature to try to get a more familiar data frame.
How would you approach data in this form using Python? Save it as a .csv and then import it again with pandas? I am hoping for a better solution that keeps it as a triple-quoted (""") string.
You can tell the csv parser what the delimiter is; in this case it is ';':
import csv

with open('filepath.csv') as csv_file:
    reader = csv.reader(csv_file, delimiter=';')
How do you get this data? As a file? As a string?
If you have a file you can use pandas to read the csv, which gives you a pandas DataFrame:
import pandas

pandas.read_csv('filepath.csv', sep=';')
You can find more info here: https://realpython.com/python-csv/
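For the file shown in the question, a hedged sketch that also skips the two title lines ("Espírito Santo;" and "Dados;") and drops the empty trailing column created by the semicolon at the end of each line:

import pandas as pd

df = pd.read_csv('filepath.csv', sep=';', skiprows=2)
df = df.dropna(axis=1, how='all')  # remove the all-empty column from the trailing ';'
print(df.head())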

Use of Scrapy's regular expressions

I'm practicing scraping newspaper articles with Scrapy. I have some problems sub-stringing text from web pages. With the built-in re and re_first functions I can set where the search starts, but I can't find how to set where it ends.
Here is the code:
import scrapy
from spider.items import Articles
from scrapy.selector import Selector
from scrapy.http import HtmlResponse

class QuotesSpider(scrapy.Spider):
    name = "lastampa"
    allowed_domains = ['lastampa.it']

    def start_requests(self):
        urls = [
            'http://www.lastampa.it/2017/10/26/economia/lavoro/datalogic-cerca-laureati-e-laureandi-per-la-ricerca-e-sviluppo-rPsS8gVM5ZX7gEZcugklwJ/pagina.html'
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        items = []
        item = Articles()
        item['date'] = response.xpath('//div[contains(@class, "ls-articoloDataPubblicazione")]').re_first(r'content=\s*(.*)')
        item['author'] = response.xpath('//div[contains(@class, "ls-articoloAutore")]').re_first(r'">\s*(.*)')
        item['title'] = response.xpath('//div[contains(@class, "ls-articoloTitolo")]').re_first(r'<h3>\s*(.*)')
        item['subtitle'] = response.xpath('//div[contains(@class, "ls-articoloCatenaccio")]').re_first(r'">\s*(.*)')
        item['text'] = response.xpath('//div[contains(@class, "ls-articoloTesto")]').re_first(r'<p>\s*(.*)')
        items.append(item)
Well, with this code I can get the text I need, but I also get all the trailing tags up to the end of the matched path.
e.g.
'subtitle': 'Gli inserimenti saranno in Italia, Stati Uniti, Cina, Vietnam</div>'
How can I exclude the ending </div> (or any other characters after a defined point)?
Can someone shed some light on this? Thanks.
You don't have to mess with regular expressions like this if you only need to extract the text from elements. Just use text() in your XPath expressions. For example, for the subtitle field, extract it using:
item['subtitle'] = response.xpath('//div[contains(@class, "ls-articoloCatenaccio")]/text()').extract_first().strip()
which, in your example, will produce:
u'Gli inserimenti saranno in Italia, Stati Uniti, Cina, Vietnam'
Article text is a bit more complicated, as it's spread over child elements. However, you can extract it from there, post-process it and store as a single string like this:
item['text'] = ''.join([x.strip() for x in response.xpath('//div[contains(#class, "ls-articoloTesto")]//text()').extract()])
which will produce:
u'Datalogic, leader mondiale nelle tecnologie di identificazione automatica dei dati e dei processi di automazione industriale, ha lanciato la selezione di giovani talenti laureati o laureandi in ingegneria e/o in altre materie scientifiche che abbiano voglia di confrontarsi con una realt\xe0 internazionale. Il primo step delle selezioni si \xe8 tenuto in occasione dell\u2019Open Day: i candidati hanno sostenuto un primo colloquio con senior manager nazionali e internazionali, arrivati per l\u2019occasione anche da sedi estere. Durante l\u2019Open Day \xe8 stata presentata l\u2019azienda insieme alle 80 posizioni aperte di cui una cinquantina in Italia. La ricerca dei candidati \xe8 rivolta a laureandi e neolaureati di tutta Italia interessati a far parte di una realt\xe0 dinamica e internazionale, provenienti da Facolt\xe0 tecnico-scientifiche quali Ingegneria, Fisica, Informatica.Ai giovani verr\xe0 proposto un interessante percorso di carriera che permetter\xe0 di approfondire e sviluppare tecnologie all\u2019avanguardia nei 10 centri di Ricerca dell\u2019azienda dislocati tra Italia, Stati Uniti, Cina e Vietnam, dove si ha l\u2019opportunit\xe0 di approfondire tutte le tematiche relative all\u2019elettronica e al software design: dallo sviluppo di sistemi operativi mobili, di sensori laser e tecnologie ottiche, allo studio di sistemi di intelligenza artificiale. Informazioni sul sito www.datalogic.com.Alcuni diritti riservati.'
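If you would rather stay with regular expressions, you can mark where the match must stop by making the capture group non-greedy and anchoring it to the next tag; a hedged sketch against the subtitle field from the question:

item['subtitle'] = response.xpath('//div[contains(@class, "ls-articoloCatenaccio")]').re_first(r'">\s*(.*?)\s*<')

Here (.*?) stops at the first following '<', so the trailing </div> is no longer captured.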

Handling data compression with non-ASCII values while reading and writing file

I am trying to learn lossless compression algorithms using Python 3. So far I have implemented Huffman coding, the Burrows-Wheeler transform, and move-to-front, which can handle up to 256 unique characters based on their ASCII values. Basically, I read a UTF-8 text file, convert its characters to a single string, and then transform that string to compress it. All the algorithms work perfectly, but the problem lies in reading a file with non-ASCII characters: if I read the file as text, the ordinal value of some special characters goes up to 8221, and the move-to-front algorithm gives this error:
ValueError: 8221 is not in list
To read the file I tried:
with open('test.txt', 'r', encoding='utf-8') as f:
    data = f.readlines()
charData = ''.join(str(x.encode('utf-8'))[2:-1] for x in data)
huffmanEncode(mtfEncoding(bwt_suffixArray(charData)))
(That is: encode each line, take the string representation of the resulting bytes, and slice the b'' wrapper off it.)
which converts this-> 'you’ll have to check'
to this-> 'you\xe2\x80\x99ll have to check'
Now I take this string, compress it, then decompress it. Decompression works perfectly and I get back the string that represents the Unicode text. My question is how to get the original content of the file back. I tried:
print(bytes(decompressedStr).decode('utf-8'))
#Gives:
>>>TypeError: string argument without an encoding
and:
print(codecs.encode(str,decompressedStr).decode('utf-8'))
#Gives same exact string back:
>>>you\xe2\x80\x99ll have to check
Is there a more efficient way to do this? If not, how do I convert the string representing the Unicode text back to a UTF-8 string?
Compression algorithms work on bytes, which is what an encoded file contains. Open your original file in binary mode:
with open('test.txt', 'rb') as f:
    data = f.read()
Don't decode it to Unicode characters, whose ordinal values can be much larger than a byte. Compress the bytes, decompress the bytes, then decode the result to Unicode.
Full example:
#!python3
#coding:utf8
import lzma

text = '''Hola! Yo empecé aprendo Español hace dos mes en la escuela. Yo voy la universidad. Yo tratar estudioso Español tres hora todos los días para que yo saco mejor rápido. ¿Cosa algún yo debo hacer además construir mí vocabulario? Muchas veces yo estudioso la palabras solo para que yo construir mí voabulario rápido. Yo quiero empiezo leo el periódico Español la próxima semana. Por favor correcto algún la equivocaciónes yo hisciste. Gracias!'''

# Create a file containing non-ASCII characters:
with open('test.txt', 'w', encoding='utf8') as f:
    f.write(text)

# Read the raw bytes data.
with open('test.txt', 'rb') as f:
    data = f.read()

# Note: The file write/read can be skipped by encoding the original Unicode text
# to bytes manually:
#
#   data = text.encode('utf8')

# Using a built-in Python compression/decompression algorithm.
compressed_data = lzma.compress(data)
decompressed_data = lzma.decompress(compressed_data)

print('original length =', len(data))
print('compressed length =', len(compressed_data))
print('decompressed length =', len(decompressed_data))
assert data == decompressed_data

# Now decode the byte data back to Unicode.
print(decompressed_data.decode('utf8'))
Output:
original length = 455
compressed length = 372
decompressed length = 455
Hola! Yo empecé aprendo Español hace dos mes en la escuela. Yo voy la universidad. Yo tratar estudioso Español tres hora todos los días para que yo saco mejor rápido. ¿Cosa algún yo debo hacer además construir mí vocabulario? Muchas veces yo estudioso la palabras solo para que yo construir mí voabulario rápido. Yo quiero empiezo leo el periódico Español la próxima semana. Por favor correcto algún la equivocaciónes yo hisciste. Gracias!
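Applied to the byte-oriented pipeline from the question, reading in binary mode means every symbol is already an int in range(256), so a 256-entry move-to-front table can never see a value like 8221; a minimal sketch (bwt_suffixArray, mtfEncoding and huffmanEncode are the asker's own functions, assumed here to accept a bytes sequence):

with open('test.txt', 'rb') as f:
    data = f.read()
assert all(0 <= b < 256 for b in data)  # bytes iterate as ints 0-255
# huffmanEncode(mtfEncoding(bwt_suffixArray(data)))  # feed bytes, not str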

Python not getting the right value in an Excel cell

I want to color the interior of a cell according to its content; however, when I access its value I always get 1.0. The value is calculated by a formula.
Colorisation code:
def _colorizeTop10RejetsSheet(self):
    """Colors the positions on the "Top 10 Rejets" sheet."""
    start_position = (5, 12)
    last_line = 47
    for x in range(start_position[0], last_line + 1):
        current_cell = self.workbook.Sheets("Top 10 Rejets").Cells(x, start_position[1])
        current_cell.Interior.Color = self._computePositionColor(current_cell.Value)

def _computePositionColor(self, position):
    """Colors positions 1 to 5 red and 6 to 10 orange."""
    if position < 6:
        return self.RED
    elif position < 11:
        return self.ORANGE
    else:
        return self.WHITE
Excel cell formula (French-locale Excel):
=SI(ESTNA(RECHERCHEV(CONCATENER(TEXTE($F23;0);TEXTE($G23;"00");$H23;$I23);Données!$J:$P;7;FAUX));MAX(Données!$P:$P);RECHERCHEV(CONCATENER(TEXTE($F23;0);TEXTE($G23;"00");$H23;$I23);Données!$J:$P;7;FAUX))
How could I get the calculated value?
I'm using Python 2.7 and communicating with Excel through win32com.
Thanks
Adding this to the beginning of the _colorizeTop10RejetsSheet method did the trick:
self.xl.Calculate()
self.xl is the object returned by win32.Dispatch('Excel.Application')
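For context, a minimal sketch of where that call sits (assuming win32com.client is imported as win32, matching the setup above):

import win32com.client as win32

xl = win32.Dispatch('Excel.Application')
xl.Calculate()  # force a recalculation so formula cells expose their computed values
# ... then read current_cell.Value as before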
