Get the name of a PNG image using glob and rsplit

I have a set of images named as follows:
1234_hello_BV56.png
1256_how_5t.png
I want to store in a variable labels only the names between the underscores, so that I get hello, how; rest_left should hold 1234, 1256 and rest_right should hold BV56, 5t.
To sum up, for an input like:
1234_hello_BV56.png
I want to get the following:
label=hello
rest_left=1234
rest_right=BV56
To do so I tried the following:
import os
import glob
os.chdir(path)
images_name = glob.glob("*.png")
First try:
set_img = set([x.rsplit('.', 1)[0] for x in images_name])
It only separates the whole filename from the .png extension.
Second try:
label, sep, rest = img.partition('_')
It returns only the sequence before the first '_'.

UPDATE:
In [304]: left, labels, right = list(zip(*[os.path.splitext(x)[0].split('_') for x in images_name]))
In [305]: left
Out[305]: ('1234', '1256')
In [306]: labels
Out[306]: ('hello', 'how')
In [307]: right
Out[307]: ('BV56', '5t')
Is that what you want?
In [266]: [os.path.splitext(x)[0].split('_') for x in images_name]
Out[266]: [['1234', 'hello', 'BV56'], ['1256', 'how', '5t']]
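Putting the update together as a self-contained sketch (the file list is hard-coded here in place of the glob.glob("*.png") call from the question):

```python
import os

# Stand-in for images_name = glob.glob("*.png")
images_name = ["1234_hello_BV56.png", "1256_how_5t.png"]

# Drop the extension, split on '_', then transpose with zip
left, labels, right = zip(*(os.path.splitext(x)[0].split('_') for x in images_name))

print(left)    # ('1234', '1256')
print(labels)  # ('hello', 'how')
print(right)   # ('BV56', '5t')
```

Note that the three-way unpacking assumes every filename contains exactly two underscores; a name with more or fewer would raise a ValueError.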


python3 : append string to every list

I am using Python 3 with gitpython and generating the result shown below:
0bf35c4cf243e0fe13adbe7aeba99a03ddf6acfd refs/release/17.xp.0.95/head
d0c5f748e65488ce2e90c1ed027c2da252a5c6a2 refs/release/17.xp.0.96/head
530bdbf8f06859d8aca55cee7b57e27e68e87a94 refs/release/17.xp.0.97/head
0dd0342466540bc38e26ef74af6c8837d165cae5 refs/release/17.xp.0.98/head
919b78fb737b00830a8e48353b0f977c442600dd refs/release/17.xp.0.99/head
But I want to append the string "acme" to every line, for example:
0bf35c4cf243e0fe13adbe7aeba99a03ddf6acfd refs/release/17.xp.0.95/head
acme
d0c5f748e65488ce2e90c1ed027c2da252a5c6a2 refs/release/17.xp.0.96/head
acme
530bdbf8f06859d8aca55cee7b57e27e68e87a94 refs/release/17.xp.0.97/head
acme
0dd0342466540bc38e26ef74af6c8837d165cae5 refs/release/17.xp.0.98/head
acme
919b78fb737b00830a8e48353b0f977c442600dd refs/release/17.xp.0.99/head
acme
Below is the code I am using; please advise how to append/concatenate the string to the end of every line.
import os, re, sys, argparse
import git

if len(sys.argv) < 2:
    print('Usage : --track <track name> without "track/" ')
    sys.exit()

input_track = sys.argv[1].strip()
print("Checking for the track name - track/", input_track)

def show_ref(input_track, gitname):
    url = "git#github/" + gitname + ".git"
    g = git.cmd.Git()
    ig1 = g.ls_remote(url, "refs/heads/track/" + input_track).split('\d')
    print("Branch for glide-test:\n", '\n'.join(ig1))
    for x in range(13, 20):
        ig6 = g.ls_remote(url, "refs/release/" + str(x) + "." + input_track + ".*/head").split('|')
        print('\n'.join(ig6))
    # "\n".join(map(lambda word: word+"x", s.split("\n")))

show_ref(input_track, "acme")
You can simply modify this line:
ig6 = g.ls_remote(url,"refs/release/"+str(x)+"."+input_track+".*/head").split('|')
by adding the string "acme" to the string you build, like this:
ig6 = g.ls_remote(url,"refs/release/"+str(x)+"."+input_track+".*/head acme").split('|')
Is that what you meant?
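Alternatively, following the idea in the commented-out line of the question, you can post-process the ls_remote output and insert "acme" after each line. A minimal sketch, with a hard-coded string standing in for the value returned by g.ls_remote(...):

```python
# Stand-in for the string returned by g.ls_remote(...)
output = (
    "0bf35c4cf243e0fe13adbe7aeba99a03ddf6acfd refs/release/17.xp.0.95/head\n"
    "d0c5f748e65488ce2e90c1ed027c2da252a5c6a2 refs/release/17.xp.0.96/head"
)

# Append "acme" on its own line after every line of the output
result = "\n".join(line + "\nacme" for line in output.splitlines())
print(result)
```

This keeps the git call itself untouched and does the decoration purely on the returned text.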

How to get python to recognize missing value when converting list to dictionary?

I'm working in Python and have tried a number of different variations; this is my latest. I'm trying to convert the "users" list to a dictionary that looks like this:
{
"Grae Drake": 98110,
"Bethany Kok": None,
"Alex Nussbacher": 94101,
"Darrell Silver": 11201,
}
It should show each user's name and zip code, but one user is missing a zip code, so I want it to show None where the zip code is missing. Converting isn't the issue; I'm trying to make it dynamic enough to recognize the missing zip code and insert None instead.
users = [["Grae Drake", 98110], ["Bethany Kok"], ["Alex Nussbacher", 94101], ["Darrell Silver", 11201]]

def user_contacts():
    for name, z in users:
        [None if z is None else z for z in users]
    user_dict = dict(users)
    return user_dict
One possible solution:
users = [["Grae Drake", 98110], ["Bethany Kok"], ["Alex Nussbacher", 94101], ["Darrell Silver", 11201]]
d = dict((u + [None])[:2] for u in users)
print(d)
Prints:
{'Grae Drake': 98110, 'Bethany Kok': None, 'Alex Nussbacher': 94101, 'Darrell Silver': 11201}
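The trick is that u + [None] pads every sublist with a trailing None, and [:2] then trims it back to exactly two elements, so a complete [name, zip] pair is unchanged while a lone [name] becomes [name, None]:

```python
users = [["Grae Drake", 98110], ["Bethany Kok"]]

# Pad every sublist with None, then keep only the first two elements
pairs = [(u + [None])[:2] for u in users]
print(pairs)  # [['Grae Drake', 98110], ['Bethany Kok', None]]

d = dict(pairs)
print(d)      # {'Grae Drake': 98110, 'Bethany Kok': None}
```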

Issues reading NetCDF4 LST files from Copernicus

I have a series of hourly LST data files from Copernicus. These can be read and displayed fine in Panoply (Windows version 4.10.6), but they:
Crash QGIS 2.18.13 when opened with the 'NetCDF4 Browser' plugin (v0.3)
Read the LST variable incorrectly with the following R ncdf4 code:
file1 <- nc_open('g2_BIOPAR_LST_201901140400_E114.60S35.43E119.09S30.22_GEO_V1.2.nc')
file2 <- nc_open('g2_BIOPAR_LST_201901140500_E114.60S35.43E119.09S30.22_GEO_V1.2.nc')
# attributes(file1)$names

# Just for one variable for now
dat_new <- cbind(
  ncvar_get(file1, 'LST'),
  ncvar_get(file2, 'LST'))
dim(dat_new)
print(dim(dat_new))

var <- file1$var['LST']$LST

# Create a new file
file_new <- nc_create(
  filename = 'Copernicus_LST.nc',
  # We need to define the variables here
  vars = ncvar_def(
    name = 'LST',
    units = var$units,
    dim = dim(dat_new)))

# And write to it
ncvar_put(
  nc = file_new,
  varid = 'LST',
  vals = dat_new)

# Finally, close the file
nc_close(file_new)
Returns:
[1] "Error, passed variable has a dim that is NOT of class ncdim4!"
[1] "Error occurred when processing dim number 1 of variable LST"
Error in ncvar_def(name = "LST", units = var$units, dim = dim(dat_new)) :
This dim has class: integer
Similarly, using the Python netCDF4 approach:
compiled = netCDF4.MFDataset(['g2_BIOPAR_LST_201901140400_E114.60S35.43E119.09S30.22_GEO_V1.2.nc', 'g2_BIOPAR_LST_201901140500_E114.60S35.43E119.09S30.22_GEO_V1.2.nc'])
Returns
ValueError: MFNetCDF4 only works with NETCDF3_* and NETCDF4_CLASSIC formatted files, not NETCDF4
I'm presuming that this is an issue with the file formatting from Copernicus... Has anyone else encountered this? Have put these two example files here.
Thanks!
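A note on the R error: the message says the passed dim "is NOT of class ncdim4", which matches the code above passing dim(dat_new) (a plain integer vector) to ncvar_def, where ncdf4 expects dimension objects built with ncdim_def. On the Python side, since MFDataset rejects NETCDF4-format files, one workaround is to open each file separately and stack the grids yourself along a new time axis. A minimal sketch of that stacking step, using small numpy arrays as stand-ins for the two LST grids (in the real case each would come from something like netCDF4.Dataset(fname)['LST'][:]; the shapes here are invented):

```python
import numpy as np

# Stand-ins for the LST grids read from the two hourly files
lst1 = np.full((3, 4), 290.0)   # hypothetical 3x4 grid, hour 1
lst2 = np.full((3, 4), 291.5)   # hypothetical 3x4 grid, hour 2

# Stack along a new leading "time" axis instead of binding the grids
# side by side, which keeps the spatial dimensions intact
dat_new = np.stack([lst1, lst2], axis=0)
print(dat_new.shape)  # (2, 3, 4)
```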

Python3 decode removes white spaces when should be kept

I'm reading a binary file that contains firmware for an STM32. I deliberately placed two const strings in the code, which let me read the SW version and description from a given file.
When you open the binary file in a hex editor, or even in Python 3, you can see the correct form. But when I run text = data.decode('utf-8', errors='ignore'), it removes the zero bytes from the file! I don't want this, as I rely on those separator characters to properly split and extract the strings that interest me.
(preview of the end of the data variable)
Svc\x00..\Src\adc.c\x00..\Src\can.c\x00defaultTask\x00Task_CANbus_receive\x00Task_LED_Controller\x00Task_LED1_TX\x00Task_LED2_RX\x00Task_PWM_Controller\x00**SW_VER:GN_1.01\x00\x00\x00\x00\x00\x00MODULE_DESC:generic_module\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00**Task_SuperVisor_Controller\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x06\x07\x08\t\x00\x00\x00\x00\x01\x02\x03\x04..\Src\tim.c\x005!\x00\x08\x11!\x00\x08\x01\x00\x00\x00\xaa\xaa\xaa\xaa\x01\x01\nd\x00\x02\x04\nd\x00\x00\x00\x00\xa2J\x04'
(preview of text, i.e. what I receive after decode)
r g # IDLE TmrQ Tmr Svc ..\Src\adc.c ..\Src\can.c
defaultTask Task_CANbus_receive Task_LED_Controller Task_LED1_TX
Task_LED2_RX Task_PWM_Controller SW_VER:GN_1.01
MODULE_DESC:generic_module
Task_SuperVisor_Controller ..\Src\tim.c 5! !
d d J
module = []  # result list (assumed defined elsewhere in the full script)

with open(path_to_file, "rb") as binary_file:
    # Read the whole file at once
    data = binary_file.read()

text = data.decode('utf-8', errors='ignore')

# get the index of the "SW_VER:" string in the file
sw_ver_index = text.rfind("SW_VER:")
if sw_ver_index != -1:
    # SW_VER found: retrieve the value, e.g. for "SW_VER:WB_2.01"
    # the value starts at offset 7 and ends at offset 14
    sw_ver_value = text[sw_ver_index + 7:sw_ver_index + 14]
    module.append(('DESC:', sw_ver_value))
else:
    # SW_VER not found
    module.append(('DESC:', 'N/A'))

# get the index of the "MODULE_DESC:" string in the file
module_desc_index = text.rfind("MODULE_DESC:")
if module_desc_index != -1:
    # MODULE_DESC found
    module_desc_substring = text[module_desc_index + 12:]
    module_desc_value = module_desc_substring.split()
    module.append(('DESC:', module_desc_value[0]))
    print(module_desc_value[0])
As you can see, my whitespace characters are gone, while they should be present.
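Rather than decoding the whole blob and fighting with what errors='ignore' does to the non-text bytes, one robust alternative is to split the raw bytes on the NUL (b'\x00') separators first and decode each field on its own. A sketch, with a short hard-coded byte string standing in for the data read from the file:

```python
# Stand-in for the bytes read from the binary file
data = b"Task_PWM_Controller\x00SW_VER:GN_1.01\x00\x00\x00MODULE_DESC:generic_module\x00\x00\xaa\xaa"

# Split on NUL separators and decode each non-empty field separately
fields = [chunk.decode('utf-8', errors='ignore') for chunk in data.split(b'\x00') if chunk]

# Pick out the two marker strings, falling back to 'N/A' if absent
sw_ver = next((f[len("SW_VER:"):] for f in fields if f.startswith("SW_VER:")), 'N/A')
desc = next((f[len("MODULE_DESC:"):] for f in fields if f.startswith("MODULE_DESC:")), 'N/A')
print(sw_ver)  # GN_1.01
print(desc)    # generic_module
```

This way the field boundaries come from the raw bytes, so nothing the decoder drops can merge two adjacent strings together.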

using as.ppp on data frame to create marked process

I am using a data frame to create a marked point process with the as.ppp function. I get an error: Error: is.numeric(x) is not TRUE. The data I am using is as follows:
dput(head(pointDataUTM[,1:2]))
structure(list(POINT_X = c(439845.0069, 450018.3603, 451873.2925,
446836.5498, 445040.8974, 442060.0477), POINT_Y = c(4624464.56,
4629024.646, 4624579.758, 4636291.222, 4614853.993, 4651264.579
)), .Names = c("POINT_X", "POINT_Y"), row.names = c(NA, -6L), class = c("tbl_df",
"tbl", "data.frame"))
I can see that the first two columns are numeric, so I do not know why it is a problem.
> str(pointDataUTM)
Classes ‘tbl_df’, ‘tbl’ and 'data.frame': 5028 obs. of 31 variables:
$ POINT_X : num 439845 450018 451873 446837 445041 ...
$ POINT_Y : num 4624465 4629025 4624580 4636291 4614854 ...
Then I also checked for NA values, which shows there are none:
> sum(is.na(pointDataUTM$POINT_X))
[1] 0
> sum(is.na(pointDataUTM$POINT_Y))
[1] 0
Even when I pass only the first two columns of the data frame, as.ppp gives this error:
Error: is.numeric(x) is not TRUE
5.stop(sprintf(ngettext(length(r), "%s is not TRUE", "%s are not all TRUE"), ch), call. = FALSE, domain = NA)
4.stopifnot(is.numeric(x))
3.ppp(X[, 1], X[, 2], window = win, marks = marx, check = check)
2.as.ppp.data.frame(pointDataUTM[, 1:2], W = studyWindow)
1.as.ppp(pointDataUTM[, 1:2], W = studyWindow)
Could someone tell me what is the mistake here and why I get the not numeric error?
Thank you.
The critical check is whether PointDataUTM[,1] is numeric, rather than PointDataUTM$POINT_X.
Since PointDataUTM is a tbl object (a tibble), what is probably happening is that the subset operator for the tbl class returns a data frame, not a numeric vector, when a single column is extracted, whereas the $ operator returns a numeric vector.
I suggest you convert your data to a data.frame using as.data.frame() before calling as.ppp.
In the next version of spatstat we will make our code more robust against this kind of problem.
I'm on the phone, so I can't check, but I think it happens because you have a tibble and not a data.frame. Please try converting to a data.frame using as.data.frame first.
