Use TransformDataset without using AnalyzeAndTransformDataset - python-3.x

I am trying to use TensorFlow Transform and I would like to serialize a whole pipeline composed of different transformations. Let's say I have a transformation that doesn't have to be fitted (such as a feature interaction between numeric columns). I want to use the TransformDataset function directly on the preprocessing function I have already defined, but it seems this is not possible.
If I run something like this:
import pprint
import tempfile

import apache_beam as beam
import pandas as pd
import tensorflow as tf
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata, schema_utils

NUMERIC_FEATURE_KEYS = ['a', 'b', 'c']
impute_dictionary = dict(b=1.0, c=0.0)

RAW_DATA_FEATURE_SPEC = dict([(name, tf.io.FixedLenFeature([], tf.float32)) for name in NUMERIC_FEATURE_KEYS])
RAW_DATA_METADATA = dataset_metadata.DatasetMetadata(schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC))


def interaction_fn(inputs):
    outputs = inputs.copy()
    new_numeric_feature_keys = []
    for i in range(len(NUMERIC_FEATURE_KEYS)):
        for j in range(i, len(NUMERIC_FEATURE_KEYS)):
            if i == j:
                outputs[f'{NUMERIC_FEATURE_KEYS[i]}_squared'] = outputs[NUMERIC_FEATURE_KEYS[i]] * outputs[NUMERIC_FEATURE_KEYS[i]]
                new_numeric_feature_keys.append(f'{NUMERIC_FEATURE_KEYS[i]}_squared')
            else:
                outputs[f'{NUMERIC_FEATURE_KEYS[i]}_{NUMERIC_FEATURE_KEYS[j]}'] = outputs[NUMERIC_FEATURE_KEYS[i]] * outputs[NUMERIC_FEATURE_KEYS[j]]
                new_numeric_feature_keys.append(f'{NUMERIC_FEATURE_KEYS[i]}_{NUMERIC_FEATURE_KEYS[j]}')
    NUMERIC_FEATURE_KEYS.extend(new_numeric_feature_keys)
    return outputs


if __name__ == '__main__':
    temp = tempfile.gettempdir()

    data = pd.DataFrame(dict(
        a=[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
        b=[1.0, 1.0, 1.0, 2.0, 0.0, 1.0],
        c=[0.9, 2.0, 1.0, 0.0, 0.0, 0.0]
    ))
    data.to_parquet('data_no_nans.parquet')

    x = {}
    for col in data.columns:
        x[col] = tf.constant(data[col], dtype=tf.float32, name=col)

    with beam.Pipeline() as pipeline:
        with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
            raw_data = pipeline | 'ReadTrainData' >> beam.io.ReadFromParquet('data_no_nans.parquet')
            raw_dataset = (raw_data, RAW_DATA_METADATA)
            transformed_data, _ = (raw_data, interaction_fn) | tft_beam.TransformDataset()
            transformed_data | beam.Map(pprint.pprint)
I get the error
2020-02-11 15:49:37.025525: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-11 15:49:37.132944: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7f87ddda6d30 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-11 15:49:37.132959: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
WARNING:tensorflow:Tensorflow version (2.1.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
WARNING:tensorflow:Tensorflow version (2.1.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
Traceback (most recent call last):
File "/Users/andrea.marchini/Hackathon/tfx_test/foo.py", line 56, in <module>
transformed_data, _ = (raw_data, interaction_fn) | tft_beam.TransformDataset()
File "/Users/andrea.marchini/.local/share/virtualenvs/tfx_test-jg7eSsGQ/lib/python3.7/site-packages/apache_beam/transforms/ptransform.py", line 482, in __ror__
pvalueish, pvalues = self._extract_input_pvalues(left)
File "/Users/andrea.marchini/.local/share/virtualenvs/tfx_test-jg7eSsGQ/lib/python3.7/site-packages/tensorflow_transform/beam/impl.py", line 908, in _extract_input_pvalues
dataset_and_transform_fn)
TypeError: cannot unpack non-iterable PCollection object
Is TransformDataset supposed to be used only on the result of AnalyzeAndTransformDataset?

Maybe you could try this:
transformed_data = (raw_dataset, interaction_fn) | tft_beam.TransformDataset()
I think it tried to unpack raw_data, which does not contain the metadata. Moreover, TransformDataset returns only one value, not two.
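Putting both fixes together, the pipeline body would look something like this (a sketch following the suggestion above, untested; whether TransformDataset accepts a raw preprocessing function rather than a transform_fn produced by AnalyzeDataset may depend on the tensorflow_transform version):
with beam.Pipeline() as pipeline:
    with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
        raw_data = pipeline | 'ReadTrainData' >> beam.io.ReadFromParquet('data_no_nans.parquet')
        raw_dataset = (raw_data, RAW_DATA_METADATA)
        # Pass the (data, metadata) pair, not the bare PCollection, and
        # bind the result to a single variable; in TFT a "dataset" is itself
        # a (data, metadata) pair, so it can be unpacked afterwards.
        transformed_dataset = (raw_dataset, interaction_fn) | tft_beam.TransformDataset()
        transformed_data, transformed_metadata = transformed_dataset
        transformed_data | beam.Map(pprint.pprint)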

Related

How can I utilize the JAX library on my code with a numpy take related error: "NotImplementedError: The 'raise' mode to jnp.take is not supported."

Due to my need to speed up my written code, I have converted it to pure NumPy code to evaluate the runtime that way, and then with the JAX accelerator in Python. I don't know if my code is appropriate for acceleration by JAX, but my brief previous studies and experience with JAX encourage me to try vectorizing or parallelizing the prepared NumPy code with JAX. For an initial test, I put the jax.jit decorator on the function, but it got stuck at the first line of my code. It raised the following error in Colab:
<__array_function__ internals> in take(*args, **kwargs)
UnfilteredStackTrace: NotImplementedError: The 'raise' mode to jnp.take is not supported.
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
NotImplementedError Traceback (most recent call last)
<__array_function__ internals> in take(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/jax/_src/numpy/lax_numpy.py in _take(a, indices, axis, out, mode)
5437 elif mode == "raise":
5438 # TODO(phawkins): we have no way to report out of bounds errors yet.
-> 5439 raise NotImplementedError("The 'raise' mode to jnp.take is not supported.")
5440 elif mode == "wrap":
5441 indices = mod(indices, _constant_like(indices, a.shape[axis_idx]))
NotImplementedError: The 'raise' mode to jnp.take is not supported.
I don't know how to handle this code with JAX. This error relates to np.take, although I guess it will get stuck again at some other lines, e.g. those that contain reduce.
The sample code is:
import numpy as np
import jax

pp_ = np.array([[0.75, 0.5, 0.5], [15, 10, 15], [0.5, 3., 0.35], [15, 17, 15]])
rr_ = np.array([1, 3, 2, 5], dtype=np.float64)
gg_ = np.array([-0.48305741, -1])
ee_ = np.array([[0, 2], [1, 3]], dtype=np.int64)

@jax.jit
def JAX_acc(pp_, rr_, gg_, ee_):
    rr_act = np.take(rr_, ee_)
    r_add = np.add.reduce(rr_act, axis=1)
    pc_dis = np.sum((r_add, gg_), axis=0)
    ang_ = np.arccos((rr_act ** 5 + pc_dis[:, None] ** 2) / 1e5)
    pl_rad = rr_act * np.cos(ang_)
    pp_act = np.take(pp_, ee_, axis=0)
    pc_vec = -np.subtract.reduce(pp_act, axis=1)
    pc_ = pp_act[:, 0, :] + pc_vec / np.linalg.norm(pc_vec, axis=1)[:, None] * np.abs(pl_rad[:, 0][:, None])
    return print(pc_dis, pc_, pl_rad)

JAX_acc(pp_, rr_, gg_, ee_)
Main question: Could the JAX library be utilized for this example? How?
Shall I use other modules instead of np.take?
I would appreciate help in curing this code with JAX.
---------------- solved by the update ----------------
I would be grateful for any other explanations on the following extraneous questions (not needed):
Which math operations (-, +, *, ...) or their NumPy equivalents (np.power, np.sum, ...) will be faster using JAX? Will the NumPy ones be handled by JAX in a better scheme (in terms of speed) than the plain math operators?
Does JAX CPU mode need a different writing style than TPU mode? I haven't used the latter so far.
Updates:
I have changed the code to use the corresponding jnp modules based on @jakedvp's comment, and the problem with np.take is gone:
import jax.numpy as jnp

def JAX_acc_jnp(pp_, rr_, gg_, ee_):
    rr_act = jnp.take(rr_, ee_)
    r_add = jnp.sum(rr_act, axis=1)  # .squeeze()
    pc_dis = jnp.add(r_add, gg_)
    ang_ = jnp.arccos((rr_act ** 5 + pc_dis[:, None] ** 2) / 1e5)
    pl_rad = rr_act * jnp.cos(ang_)
    pp_act = jnp.take(pp_, ee_, axis=0)
    pc_vec = jnp.diff(pp_act, axis=1).squeeze()
    pc_ = pp_act[:, 0, :] + pc_vec / jnp.linalg.norm(pc_vec, axis=1)[:, None] * jnp.abs(pl_rad[:, 0][:, None])
    return pc_dis, pc_, pl_rad
For pc_dis and pc_ the results are correct, but pl_rad differs because ang_ comes out differently: all its values are -1.0927847e-10. Perhaps this is because the true values have around 13 decimal places and JAX changed the dtype to float32; I don't know. If so, how can I specify which dtype JAX uses?
larger data sizes: pp_, rr_, gg_, ee_
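Regarding the dtype question at the end: JAX defaults to 32-bit values, but 64-bit support can be enabled globally at startup. A minimal illustration (this is the documented jax_enable_x64 flag, not something specific to the code above):
import jax
# Must be set before any arrays are created.
jax.config.update("jax_enable_x64", True)

import jax.numpy as jnp
print(jnp.array([1.0]).dtype)  # float64 instead of the default float32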

Tensorflow sampling from float32 and float64

I have a memory issue running float64 on the cluster, so I want to try out float32. However, with float32 I get NaN after a few iterations of the optimization. I have Monte Carlo sampling within the model. Now I am trying to see what the difference in loss values is when comparing the two dtypes.
But I am having a problem, as I can't get the exact (or approximately equal) samples generated from both dtypes. Here is a sample of the issue:
float32:
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np

tfd = tfp.distributions
mu = tf.constant([1., 2, 3], dtype=np.float32)
cov = tf.Variable([[0.36, 0.12, 0.06], [0.12, 0.29, -0.13], [0.06, -0.13, 0.26]], dtype=np.float32)
scale = tf.linalg.cholesky(cov)
mvn = tfd.MultivariateNormalTriL(
    loc=mu,
    scale_tril=scale)
b = mvn.sample(1, seed=123, dtype=np.float32)
tf.print(b)
[[0.958210826 1.73758364 3.31611514]]
float64:
import tensorflow as tf
import tensorflow_probability as tfp
import numpy as np

tfd = tfp.distributions
mu = tf.constant([1., 2, 3], dtype=np.float64)
cov = tf.Variable([[0.36, 0.12, 0.06], [0.12, 0.29, -0.13], [0.06, -0.13, 0.26]], dtype=np.float64)
scale = tf.linalg.cholesky(cov)
mvn = tfd.MultivariateNormalTriL(
    loc=mu,
    scale_tril=scale)
tf.random.set_seed(1234)
b = mvn.sample(1, seed=123, dtype=np.float64)
tf.print(b)
[[2.4163821132534813 2.7382581482188852 3.1792761513055856]]
So my question is: is there a way to get at least approximately equal samples irrespective of the dtype?
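One possible workaround (my own sketch, not guaranteed to match TFP's internal sampling path): draw the standard normal noise once with a stateless op in float64, then apply the loc/scale transform in each dtype, so the two results differ only by rounding error:
import tensorflow as tf

mu64 = tf.constant([1., 2, 3], dtype=tf.float64)
cov64 = tf.constant([[0.36, 0.12, 0.06],
                     [0.12, 0.29, -0.13],
                     [0.06, -0.13, 0.26]], dtype=tf.float64)
scale64 = tf.linalg.cholesky(cov64)

# The same base noise for both dtypes, fixed by the stateless seed.
z = tf.random.stateless_normal([1, 3], seed=[123, 0], dtype=tf.float64)

b64 = mu64 + tf.linalg.matvec(scale64, z)
b32 = tf.cast(mu64, tf.float32) + tf.linalg.matvec(
    tf.cast(scale64, tf.float32), tf.cast(z, tf.float32))
tf.print(b64)
tf.print(b32)  # agrees with b64 up to float32 rounding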

How to use specific GPUs in keras for multi-GPU training?

I have a server with 4 GPUs. I want to use exactly 2 of them for multi-GPU training.
The Keras documentation provided here gives some insight about how to use multiple GPUs, but I want to select specific GPUs. Is there a way to achieve this?
from keras import backend as K
import tensorflow as tf

c = []
for d in ['/device:GPU:2', '/device:GPU:3']:
    with K.tf.device(d):
        config = tf.ConfigProto(intra_op_parallelism_threads=4,
                                inter_op_parallelism_threads=4,
                                allow_soft_placement=True,
                                device_count={'CPU': 1, 'GPU': 2})
        a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
        b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
        c.append(tf.matmul(a, b))
with tf.device('/cpu:0'):
    sum = tf.add_n(c)
session = tf.Session(config=config)
K.set_session(session)
I think this should work. You need the indices of the GPU devices you want to use; in this case they are 2 and 3. Relevant links:
1) https://github.com/carla-simulator/carla/issues/116
2) https://www.tensorflow.org/guide/using_gpu#using_multiple_gpus
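A simpler alternative (my addition, not from the linked answers) is to hide the other GPUs from TensorFlow entirely via the standard CUDA environment variable, set before TensorFlow initializes:
import os
# Only physical GPUs 2 and 3 are visible; inside the process they
# appear as '/device:GPU:0' and '/device:GPU:1'.
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3'
import tensorflow as tf  # import (or initialize) TensorFlow only after this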
The best way is to compile the Keras model with a tf.distribute strategy, by creating and compiling your model in the strategy's scope. For example:
import contextlib

def model_scope(devices):
    if 1 < len(devices):
        strategy = tf.distribute.MirroredStrategy(devices)
        scope = strategy.scope()
    else:
        scope = contextlib.suppress()  # Python 3.4+
    return scope

devices = ['/device:GPU:2', '/device:GPU:3']
with model_scope(devices):
    # create and compile your model
    model = get_model()
    model.compile(optimizer=optimizer, loss=loss)
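With MirroredStrategy(devices), the variables of any model built inside the scope are replicated across exactly the listed devices and gradients are aggregated between them, so the remaining GPUs are left untouched.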

numpy code works in REPL, script says type error

Copying and pasting this code into the python3 REPL works, but when I run it as a script, I get a type error.
"""Softmax."""
scores = [3.0, 1.0, 0.2]
import numpy as np
from math import e
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
results = []
x = np.transpose(x)
for j in range(len(x)):
exps = [np.exp(s) for s in x[j]]
_sum = np.sum(np.exp(x[j]))
softmax = [i / _sum for i in exps]
results.append(softmax)
final = np.vstack(results)
return np.transpose(final)
# pass # TODO: Compute and return softmax(x)
print(softmax(scores))
# Plot softmax curves
import matplotlib.pyplot as plt
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
plt.plot(x, softmax(scores).T, linewidth=2)
plt.show()
The error I get running the script via CLI is the following:
bash$ python3 softmax.py
Traceback (most recent call last):
File "softmax.py", line 22, in <module>
print(softmax(scores))
File "softmax.py", line 13, in softmax
exps = [np.exp(s) for s in x[j]]
TypeError: 'numpy.float64' object is not iterable
This kind of crap makes me so nervous about running interpreted code in production with libraries like these, seriously unreliable and undefined behaviour is totally unacceptable IMO.
At the top of your script, you define
scores = [3.0, 1.0, 0.2]
This is the argument in your first call of softmax(scores). When converted to a numpy array, scores is a 1-d array with shape (3,).
You pass scores into the function, and then it is converted to a numpy array by the call
x = np.transpose(x)
However, it is still 1-d, with shape (3,). The transpose function swaps dimensions, but it does not add a dimension to a 1-d array. In effect, transpose is a "no-op" when applied to a 1-d array.
Then, in the loop that follows, x[j] is a scalar of type numpy.float64, so it does not make sense to write [np.exp(s) for s in x[j]]. x[j] is a scalar, not a sequence, so you can't iterate over it.
In the bottom part of your script, you redefine scores as
x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])
Now scores is a 2-d array (scores.shape is (3, 80)), so you don't get an error when you call softmax(scores).
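A version that handles both the 1-d and the 2-d case is easy to write once you vectorize along axis 0; this is my own sketch, not the original poster's code:
import numpy as np

def softmax(x):
    x = np.asarray(x, dtype=float)
    # Subtracting the max is a standard trick for numerical stability.
    e = np.exp(x - x.max(axis=0))
    return e / e.sum(axis=0)

print(softmax([3.0, 1.0, 0.2]))        # 1-d input works
print(softmax(np.ones((3, 5))).shape)  # columns of a 2-d input sum to 1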

rpy2 automatic NumPy conversion NA/NaN/Inf in foreign function call

When trying to call the BayesTree R package from Python with simple data, I am getting the error "NA/NaN/Inf in foreign function call" even though all of the data are positive real numbers.
Source Code
import numpy as np
# R interface for python
import rpy2
# For importing R packages
from rpy2.robjects.packages import importr
# Activate conversion from numpy to R
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()

train_x_py = np.array([[0.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0]])
# Any 3-length float vector fails for training y
train_y_py = np.array([1.0, 2.0, 3.0])
test_x_py = np.array([[0.2, 0.0],
                      [0.2, 0.2],
                      [1.0, 0.2]])

# Create R versions of the training and testing data
train_x = rpy2.robjects.r.matrix(train_x_py, nrow=3, ncol=2)
train_y = rpy2.robjects.vectors.FloatVector(train_y_py)
test_x = rpy2.robjects.r.matrix(test_x_py, nrow=3, ncol=2)

print(train_x)
print(train_y)
print(test_x)

BayesTree = importr('BayesTree')
response = BayesTree.bart(train_x, train_y, test_x,
                          verbose=False, ntree=100)
# The 7th return value is the estimated response
response = response[7]
print(response)
Code Output / Error
[,1] [,2]
[1,] 0 0
[2,] 0 1
[3,] 1 1
[1] 1 2 3
[,1] [,2]
[1,] 0.2 0.0
[2,] 0.2 0.2
[3,] 1.0 0.2
Traceback (most recent call last):
File "broken_rpy2.py", line 32, in <module>
verbose=False, ntree=100)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/functions.py", line 178, in __call__
return super(SignatureTranslatedFunction, self).__call__(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/rpy2/robjects/functions.py", line 106, in __call__
res = super(Function, self).__call__(*new_args, **new_kwargs)
rpy2.rinterface.RRuntimeError: Error in (function (x.train, y.train, x.test = matrix(0, 0, 0), sigest = NA, :
NA/NaN/Inf in foreign function call (arg 7)
The error on line 32 to which it is referring is:
response = BayesTree.bart(train_x, train_y, test_x,
verbose=False, ntree=100)
System Setup
Operating System: Mac OS X Sierra 10.12.6
Python Version: Python 3.6.1
R Version: R 3.4.1
Python Packages: pip 9.0.1, rpy2 2.8.6, numpy 1.13.0
Question
Is this my own user error, or is this a bug in rpy2?
This is a problem in the R package "BayesTree". You can reproduce the problem in R directly with the following code (assuming you have installed the BayesTree package).
library(BayesTree)

train_x = matrix(c(0, 0, 1, 0, 1, 1), nrow=3, ncol=2)
train_y = as.vector(c(1, 2, 3))
test_x = matrix(c(.2, .2, 1., .0, .2, .2), nrow=3, ncol=2)
result = bart(train_x, train_y, test_x, verbose=FALSE, ntree=100)