Writing to an existing Julia data file with the same key

Assume that we have a .jld file with two keys, "hi" and "bye", created as follows:
import JLD
file = JLD.jldopen("test.jld","a+")
file["hi"] = randn(1)
file["bye"] = randn(1)
JLD.close(file)
Now what should I do if I want to change the value saved in test.jld under the key "hi", without affecting the value for the key "bye"?
I tried the following code:
file = JLD.jldopen("test.jld","a+")
file["hi"] = randn(1)
JLD.close(file)
but it fails with the error Error creating dataset //hi.

Once you have created the JLD file, you should use load and save to change values, i.e.:
julia> using JLD
julia> filed = JLD.load("test.jld")
Dict{String,Any} with 2 entries:
"bye" => [-0.275391]
"hi" => [-0.869752]
julia> filed["hi"] = randn(1)
1-element Array{Float64,1}:
-0.3132472191308679
julia> JLD.save("test.jld", filed)
julia> filed = JLD.load("test.jld")
Dict{String,Any} with 2 entries:
"bye" => [-0.275391]
"hi" => [-0.313247]

Related

Python pylibmc syntax / how can I use mc.set in a loop

I want to set keys in a for loop, and read them back in another script also using a loop.
To test that memcached is working, I made these simple scripts:
a.py
import pylibmc
mc = pylibmc.Client(["127.0.0.1"], binary=True,
                    behaviors={"tcp_nodelay": True,
                               "ketama": True})
mc["key_1"] = "Value 1"
mc["key_2"] = "Value 2"
mc["key_3"] = "Value 3"
mc["key_4"] = "Value 4"
b.py:
import pylibmc
mc = pylibmc.Client(["127.0.0.1"], binary=True,
                    behaviors={"tcp_nodelay": True,
                               "ketama": True})
print("%s" % (mc["key_1"]))
print("%s" % (mc["key_2"]))
print("%s" % (mc["key_3"]))
print("%s" % (mc["key_4"]))
This works fine, but I have no clue how to rewrite the memcache lines to be used in a for loop.
I tried several things, but nothing worked.
What I want is something like this:
for index in range(0, 4):
    mc["key_(index)"] = "Value (index)"
You can use f-strings to build both the key and the value:
for index in range(0, 4):
    key = f"key_{index}"
    mc[key] = f"Value {index}"
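If it helps, here is a minimal sketch of both scripts rewritten with loops, assuming the same local memcached instance as in the snippets above:
a.py
import pylibmc

mc = pylibmc.Client(["127.0.0.1"], binary=True,
                    behaviors={"tcp_nodelay": True, "ketama": True})

# write key_0 .. key_3
for index in range(0, 4):
    mc[f"key_{index}"] = f"Value {index}"
b.py
import pylibmc

mc = pylibmc.Client(["127.0.0.1"], binary=True,
                    behaviors={"tcp_nodelay": True, "ketama": True})

# read the same keys back in order
for index in range(0, 4):
    print(mc[f"key_{index}"])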

Issues reading NetCDF4 LST files from Copernicus

I have a series of hourly LST data files from Copernicus. These can be read and displayed fine in Panoply (Windows version 4.10.6), but they:
crash QGIS 2.18.13 when opened with the 'NetCDF4 Browser' plugin (V0.3);
read the LST variable incorrectly with the following R ncdf4 code:
file1 <- nc_open('g2_BIOPAR_LST_201901140400_E114.60S35.43E119.09S30.22_GEO_V1.2.nc')
file2 <- nc_open('g2_BIOPAR_LST_201901140500_E114.60S35.43E119.09S30.22_GEO_V1.2.nc')
# attributes(file1)$names
# Just for one variable for now
dat_new <- cbind(
  ncvar_get(file1, 'LST'),
  ncvar_get(file2, 'LST'))
dim(dat_new)
print(dim(dat_new))
var <- file1$var['LST']$LST
# Create a new file
file_new <- nc_create(
  filename = 'Copernicus_LST.nc',
  # We need to define the variables here
  vars = ncvar_def(
    name = 'LST',
    units = var$units,
    dim = dim(dat_new)))
# And write to it
ncvar_put(
  nc = file_new,
  varid = 'LST',
  vals = dat_new)
# Finally, close the file
nc_close(file_new)
Returns:
[1] "Error, passed variable has a dim that is NOT of class ncdim4!"
[1] "Error occurred when processing dim number 1 of variable LST"
Error in ncvar_def(name = "LST", units = var$units, dim = dim(dat_new)) :
This dim has class: integer
Similarly, using the python netCDF4 approach
compiled = netCDF4.MFDataset(['g2_BIOPAR_LST_201901140400_E114.60S35.43E119.09S30.22_GEO_V1.2.nc','g2_BIOPAR_LST_201901140500_E114.60S35.43E119.09S30.22_GEO_V1.2.nc'])
Returns
ValueError: MFNetCDF4 only works with NETCDF3_* and NETCDF4_CLASSIC formatted files, not NETCDF4
I'm presuming that this is an issue with the file formatting from Copernicus... Has anyone else encountered this? I have put the two example files here.
Thanks!
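For what it's worth, here is a minimal sketch of one possible workaround in Python (not a definitive fix): open each file with netCDF4.Dataset, which accepts the NETCDF4 format that MFDataset rejects, and stack the LST arrays yourself with numpy. This assumes the two files share the same grid; the file names are the ones used above.
import numpy as np
import netCDF4

files = ['g2_BIOPAR_LST_201901140400_E114.60S35.43E119.09S30.22_GEO_V1.2.nc',
         'g2_BIOPAR_LST_201901140500_E114.60S35.43E119.09S30.22_GEO_V1.2.nc']

lst_slices = []
for f in files:
    # Dataset (unlike MFDataset) opens NETCDF4-formatted files
    with netCDF4.Dataset(f) as ds:
        lst_slices.append(ds.variables['LST'][:])

# stack the hourly fields along a new leading (time) axis
lst = np.ma.stack(lst_slices)
print(lst.shape)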

How to convert the describe table output to a list or map in Groovy

How can I convert the output below to a map or list in Groovy?
col_name,data_type,comment
"brand","string",""
"tactic_name","string",""
"tactic_id","string",""
"content_description","string",""
"id","bigint",""
"me","bigint",""
"npi","bigint",""
"fname","string",""
"lname","string",""
"addr1","string",""
"addr2","string",""
"city","string",""
"state","string",""
"zip","int",""
"event","string",""
"event_date","timestamp",""
"error_flag","string",""
"error_reason","string",""
"vendor","string",""
"year","int",""
"month","int",""
"",,
"# Partition Information",,
"# col_name ","data_type ","comment "
"",,
"vendor","string",""
"year","int",""
"month","int",""**
I need to separate the partition columns into one map and the normal columns into another map.
Expected output:
[[brand,string],[...]]
Try this code:
CsvParser is used to read the text, but your text needs some alteration before parsing, so I did some text processing to fit it into CSV format.
#Grab('com.xlson.groovycsv:groovycsv:0.2')
import com.xlson.groovycsv.CsvParser
def csv = '''col_name,data_type,comment
"brand","string",""
"tactic_name","string",""
"tactic_id","string",""
"content_description","string",""
"id","bigint",""
"me","bigint",""
"npi","bigint",""
"fname","string",""
"lname","string",""
"addr1","string",""
"addr2","string",""
"city","string",""
"state","string",""
"zip","int",""
"event","string",""
"event_date","timestamp",""
"error_flag","string",""
"error_reason","string",""
"vendor","string",""
"year","int",""
"month","int",""
"",,
"# Partition Information",,
"# col_name ","data_type ","comment "
"",,
"vendor","string",""
"year","int",""
"month","int",""**'''
def maptxt = csv.split('"# Partition Information",,')
def map1txt = maptxt[0].trim()
def map2txt = maptxt[1].trim().readLines().collect {
    it = it.replace('#', '')
    it = it.replaceAll("\\s", "")
}.join('\n')

println getAsMap(map1txt)
println getAsMap(map2txt)

Map getAsMap(def txt) {
    Map ret = [:]
    def data = new CsvParser().parse(txt)
    for (each in data) {
        if (each.col_name) // empty keys are neglected
            ret[each.col_name] = each.data_type
    }
    ret
}
Your text has rows with an empty col_name; this code skips those rows.

How do you use Lua in Redis to return a usable result to Node.js

One of the modules I am implementing for my mobile app API gets all outstanding notifications for a submitted username.
I use a list called username:notifications to store the IDs of all outstanding notifications.
For example, in my test case, ['9','10','11'] is the result of calling
lrange username:notifications 0 -1
So I wrote a Lua script that runs LRANGE and, for each result, runs
hgetall notification:id
For some reason, Lua could not send the resulting table back to Node.js in a usable state. I'm wondering if anyone has a solution for making multiple hgetall requests and returning them to Node.js.
Here goes the rest of the code:
-- #KEYS: "username"
-- #ARGV: username
-- gets all fields from a hash as a dictionary
local hgetall = function (key)
    local bulk = redis.call('HGETALL', key)
    local result = {}
    local nextkey
    for i, v in ipairs(bulk) do
        if i % 2 == 1 then
            nextkey = v
        else
            result[nextkey] = v
        end
    end
    return result -- note: without this return the helper yields nil
end

local result = {}
local fields = redis.call('LRANGE', ARGV[1], 0, -1)
for i, field in ipairs(fields) do
    result[field] = hgetall('notification:'..field)
end
return result
You cannot return a "dictionary" from a Lua script; it is not a valid Redis type (see here).
What you can do is something like this:
local result = {}
local fields = redis.call('LRANGE', ARGV[1], 0, -1)
for i=1,#fields do
    -- use the raw flat HGETALL reply (field1, value1, field2, value2, ...)
    local t = redis.call('HGETALL', 'notification:' .. fields[i])
    result[#result+1] = fields[i]
    result[#result+1] = #t/2
    for j=1,#t do
        result[#result+1] = t[j]
    end
end
return result
The result is a simple list with this format:
[ field_1, nb_pairs_1, pairs..., field_2, nb_pairs_2, ... ]
You will need to decode it in your Node program.
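For illustration only, here is a minimal sketch of that decoding step, written in Python rather than Node purely to show how the flat layout unpacks (the notification fields and values below are made up):
# Hypothetical flat reply in the layout described above:
# [ field_1, nb_pairs_1, k1, v1, ..., field_2, nb_pairs_2, ... ]
reply = ['9', 2, 'title', 'hello', 'read', '0',
         '10', 1, 'title', 'bye']

notifications = {}
i = 0
while i < len(reply):
    field = reply[i]
    nb_pairs = int(reply[i + 1])
    i += 2
    pairs = reply[i:i + 2 * nb_pairs]
    # pair up consecutive elements into key/value
    notifications[field] = dict(zip(pairs[0::2], pairs[1::2]))
    i += 2 * nb_pairs

print(notifications)
# {'9': {'title': 'hello', 'read': '0'}, '10': {'title': 'bye'}}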
EDIT: there is another solution, probably simpler in your case: encode the result in JSON and return it as a string.
Just replace the last line of your code with:
return cjson.encode(result)
and decode from JSON in your Node code.

Split a string from SQLite

I don't know SQLite, but I have to work with a database that is already built. I'm programming with Corona SDK. The problem: I have a column called "answers" in this format: House,40|Bed,20|Mirror,10 etc.
I want to split the string on "," and "|" like this:
VARIABLE A=House
VARIABLE A1=40
VARIABLE B=Bed
VARIABLE B1=20
VARIABLE C=Mirror
VARIABLE C1=10
I'm sorry for my English. Thanks to everybody.
Try this. If you want to simply remove the characters, you can use the following:
Update 3:
local myString = "House;home;flat,40|Bed;bunk,20|Mirror,10"
local myTable = {}
local tempTable = {}
local count_1 = 0
for word in string.gmatch(myString, "([^,|]+)") do
    myTable[#myTable+1] = word
    count_1 = count_1 + 1
    tempTable[count_1] = {} -- Multi Dimensional Array
    local count_2 = 0
    for word_ in string.gmatch(myTable[#myTable], "([^,|,;]+)") do
        count_2 = count_2 + 1
        local str_ = word_
        tempTable[count_1][count_2] = str_
        --print(count_1.."|"..count_2.."|"..str_)
    end
end
print("------------------------")
local myTable = {} -- Resetting my table, just for using it again :)
for i=1,count_1 do
    for j=1,#tempTable[i] do
        print("tempTable["..i.."]["..j.."] = "..tempTable[i][j])
        if(j==1)then myTable[i] = tempTable[i][j] end
    end
end
print("------------------------")
for i=1,#myTable do
    print("myTable["..i.."] = "..myTable[i])
end
--[[ So now you will have a multidimensional array tempTable with
elements as:
tempTable = {{House,home,flat},
{40},
{Bed,bunk},
{20},
{Mirror},
{10}}
So you can simply take any random/desired value from each.
I am taking any of the 3 from the string "House,home,flat" and
assigning it to var1 below:
--]]
var1 = tempTable[1][math.random(3)]
print("var1 ="..var1)
-- So, as per your need, you can check var1 as:
for i=1,#tempTable[1] do -- #tempTable[1] means the count of array 'tempTable[1]'
    if(var1==tempTable[1][i])then
        print("Ok")
        break;
    end
end
----------------------------------------------------------------
-- Here you can print myTable(if needed) --
----------------------------------------------------------------
for i=1,#myTable do
    print("myTable["..i.."]="..myTable[i])
end
--[[ The output is as follows:
myTable[1]=House
myTable[2]=40
myTable[3]=Bed
myTable[4]=20
myTable[5]=Mirror
myTable[6]=10
Is it is what you are looking for..?
]]--
----------------------------------------------------------------
Keep coding............. :)
