I am a beginner in TensorFlow, trying to understand its various APIs and features. Most of the APIs in TensorFlow have a name argument, as shown below:
# Create a variable.
w = tf.Variable(0, name="abc")
In most programming examples only "w" is used. I don't understand when the optional name argument comes into play: for example, how "abc" is used inside the Variable function, and how it is used in the lines that follow after executing the statement above. Kindly help.
The name argument is mostly there so you can restore your variables from a saved checkpoint; see https://www.tensorflow.org/guide/saved_model. It is also helpful for finding your variables or operations when the graph is visualized in TensorBoard.
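As a minimal sketch (TF 1.x style, matching the snippet in the question), the name determines the variable's name in the graph and the key a checkpoint stores its value under, independently of the Python identifier w:
import tensorflow as tf

w = tf.Variable(0, name="abc")
print(w.name)  # "abc:0" -- the graph name, not the Python name "w"

saver = tf.train.Saver()  # the checkpoint stores the value under the key "abc"
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")  # example path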
Related
I've been trying to use Set-AzVMRunCommand instead of the Invoke-AzVMRunCommand alternative, as it gives more options like protected parameters and better feedback. I've used the equivalent in Az CLI for some time with good results. Documentation is here: https://learn.microsoft.com/en-us/powershell/module/az.compute/set-azvmruncommand?view=azps-9.2.0
I'm struggling to form the parameter object that it expects, which is described as follows:
ALIASES
COMPLEX PARAMETER PROPERTIES
To create the parameters described below, construct a hash table containing the appropriate properties. For information on hash tables, run Get-Help about_Hash_Tables.
PARAMETER <IRunCommandInputParameter[]>: The parameters used by the script.
  Name <String>: The run command parameter name.
  Value <String>: The run command parameter value.
PROTECTEDPARAMETER <IRunCommandInputParameter[]>: The parameters used by the script.
  Name <String>: The run command parameter name.
  Value <String>: The run command parameter value.
In az CLI the params are formed rather simply, using a format like the below:
--parameters "serverName =xyz" "databaseName =abc"
I've tried creating an array of hashtables, an individual hashtable containing multiple entries, etc. but all to no avail. I'm sure that it's obvious to someone out there, so thought I'd ask if anyone knew how to use this correctly? I can't find much out there in terms of extra info or documented examples.
I think I may be supposed to create a hashtable within a parameter object of type IRunCommandInputParameter, just not sure yet, ref: https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.powershell.cmdlets.compute.models.api20210701.iruncommandinputparameter?view=az-ps-latest
From MathiasR.Jessen in the comments:
Set-AzVMRunCommand ... -Parameter @(@{Name='serverName';Value='xyz'},@{Name='databaseName';Value='abc'})
An array containing a hashtable for each required parameter. Works just fine, thank you!
I am looking at the Kedro library, as my team is considering using it for our data pipeline.
While going through the official tutorial (Spaceflights), I came across this function:
def preprocess_companies(companies: pd.DataFrame) -> pd.DataFrame:
    """Preprocess the data for companies.

    Args:
        companies: Source data.
    Returns:
        Preprocessed data.
    """
    companies["iata_approved"] = companies["iata_approved"].apply(_is_true)
    companies["company_rating"] = companies["company_rating"].apply(_parse_percentage)
    return companies
companies is the name of the CSV file containing the data.
Looking at the function, my assumption is that (companies: pd.DataFrame) is shorthand to read the "companies" dataset as a DataFrame. If so, I do not understand what -> pd.DataFrame at the end means.
I tried looking at the Python documentation for this style of code but did not manage to find anything.
Any help in understanding this is appreciated.
Thank you
This is the way of declaring the types of your inputs: in (companies: pd.DataFrame), companies is the argument and pd.DataFrame is its type. In the same way, -> pd.DataFrame is the type of the output.
Overall it says that a companies of type pd.DataFrame goes in, and a pd.DataFrame comes back out.
I hope that helps.
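A minimal sketch (plain Python plus pandas, nothing Kedro-specific) showing that these annotations are just hints and are not enforced at runtime:
import pandas as pd

def preprocess_companies(companies: pd.DataFrame) -> pd.DataFrame:
    """companies is annotated as a DataFrame; -> pd.DataFrame annotates the return type."""
    return companies

# Annotations are only hints; Python does not check them when the function is called.
print(preprocess_companies.__annotations__)
# {'companies': <class 'pandas.core.frame.DataFrame'>, 'return': <class 'pandas.core.frame.DataFrame'>}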
The -> notation is type hinting, as is the : part in the companies: pd.DataFrame function definition. This is not essential to do in Python but many people like to include it. The function definition would work exactly the same if it didn't contain this but instead read:
def preprocess_companies(companies):
This is a general Python thing rather than anything kedro-specific.
The way that kedro registers companies as a kedro dataset is completely separate from this function definition and is done through the catalog.yml file:
companies:
  type: pandas.CSVDataSet
  filepath: data/01_raw/companies.csv
There will then be a node defined (in pipeline.py) to specify that the preprocess_companies function should take the kedro dataset companies as its input:
node(
    func=preprocess_companies,
    inputs="companies",  # THIS LINE REFERS TO THE DATASET NAME
    outputs="preprocessed_companies",
    name="preprocessing_companies",
),
In theory the name of the parameter in the function itself could be completely different, e.g.
def preprocess_companies(anything_you_want):
... although it is very common to give it the same name as the dataset.
In this situation companies is technically any DataFrame. However, when wrapped in a Kedro Node object the correct dataset will be passed in:
Node(
    func=preprocess_companies,      # The function posted above
    inputs='raw_companies',         # Kedro will read from a catalog entry called 'raw_companies'
    outputs='processed_companies',  # Kedro will write to a catalog entry called 'processed_companies'
)
In essence the parameter name isn't really important here; it has been named this way so that the person reading the code knows it is semantically about companies, but the function name does that too.
The above is technically a simplification since I'm not getting into MemoryDataSets but hopefully it covers the main points.
I have functions process and matrix. The following code works:
process(matrix({{2,4,6},{8,10,12},{14,16,20}}))
However, the following doesn't work:
n='matrix({{2,4,6},{8,10,12},{14,16,20}})'
process(n)
It throws an error. The reason is obvious: process receives n as a string rather than the output of the function matrix. So the basic difficulty here is evaluating the string in variable n and then passing the result as an argument to the function process. The loadstring function is of no use here, as matrix is a local function and can't be referenced from loadstring.
Is there any workaround for this? I hope I have stated the problem clearly: it is about evaluating a string and then passing the result as an argument to another function. Any help will be appreciated. Thanks.
as matrix is local function
Lua takes local declarations seriously. If a variable is declared local, it can only be accessed by code which is statically within the local scope of that variable. Strings which you later turn into code are not statically in the local scope and therefore cannot access local variables.
Now, with Lua 5.2+, you can provide load with an environment parameter: a table which represents the global environment against which that Lua chunk will be built. If that table contains a matrix value, then the loaded string can access it. For Lua 5.1, you'd have to use setfenv on the function returned by loadstring to accomplish a similar effect. The Lua 5.2+ method would look like this:
local env = {matrix = matrix}
local func = load("return matrix({{2,4,6},{8,10,12},{14,16,20}})", nil, "t", env)
process(func())
Note the following:
You must create an explicit table which is the global environment. There's nothing you can pass that says "make my locals available"; you have to put every local you'd like to access there. Which is, generally speaking, why you pass these things as parameters or just make them globals.
You explicitly need the "return " there if you want to get the results of the call to matrix.
You have to call the function. Functions are values in Lua, so you can freely pass them around. But if you want to pass the results of a function to another function, you have to actually call it.
I'm new to TensorFlow. While checking the implementation of feature_column.categorical_column_with_hash_bucket, I found this code:
sparse_id_values = string_ops.string_to_hash_bucket_fast(
    sparse_values, self.hash_bucket_size, name='lookup')
I'm not sure why name='lookup' is used here; is it related to lookup_ops.py? The documentation for tf.string_to_hash_bucket_fast specifies:
name: A name for the operation (optional).
But I don't quite understand. Trying to go deeper into the source code, I found it's wrapped in a generated wrapper interface, and I can't even find a detailed implementation of the algorithm. Any suggestions?
tf.string_to_hash_bucket_fast() creates an actual op in the graph. It's called StringToHashBucketFast in the native implementation; see the source code in tensorflow/core/kernels/string_to_hash_bucket_op.cc. In particular, it has a gradient. So the name can be helpful to recognize this op in the graph, for example in TensorBoard.
The name lookup in this place explains the meaning of sparse_id_values: it's a conversion of sparse_values to ids (hashes).
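As a rough sketch (TF 1.x graph mode, matching the code above), the name just labels the op that gets created, which is what you would see in TensorBoard:
import tensorflow as tf

values = tf.constant(["a", "b", "c"])
ids = tf.string_to_hash_bucket_fast(values, num_buckets=10, name="lookup")
print(ids.op.name)  # "lookup" (or "lookup_1", ... if that name is already taken in the graph)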
I am exploring the convolution layers in Keras from:
https://github.com/fchollet/keras/blob/master/keras/layers/convolutional.py#L233
Everywhere I found the following type of code lines:
@interfaces.legacy_conv1d_support
@interfaces.legacy_conv2d_support
What is the working and role of these lines? I searched on Google but could not find an answer anywhere. Please explain.
These lines starting with @ are called decorators in Python. Check out this page to read a brief summary about them. The basic function of these decorators is that they wrap the following function in another function which adds some kind of "wrapper" behaviour, like preprocessing the arguments, changing the accessibility of the function, etc.
Taking a look at the interfaces.py file you will see this:
legacy_conv1d_support = generate_legacy_interface(
    allowed_positional_args=['filters', 'kernel_size'],
    conversions=[('nb_filter', 'filters'),
                 ('filter_length', 'kernel_size'),
                 ('subsample_length', 'strides'),
                 ('border_mode', 'padding'),
                 ('init', 'kernel_initializer'),
                 ('W_regularizer', 'kernel_regularizer'),
                 ('b_regularizer', 'bias_regularizer'),
                 ('W_constraint', 'kernel_constraint'),
                 ('b_constraint', 'bias_constraint'),
                 ('bias', 'use_bias')],
    preprocessor=conv1d_args_preprocessor)
So, the use of this function is basically to rename parameters. Why is this? The Keras API changed the names of some arguments of some functions (like W_regularizer -> kernel_regularizer). To allow users to keep running old code, they added this decorator, which simply replaces the old argument names with the corresponding new parameter names before calling the real function. This allows you to run "old" Keras 1 code even though you have installed Keras 2.
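A hypothetical, much simplified sketch of what such a legacy-interface decorator does (not the actual Keras implementation; the names rename_kwargs and conv1d here are made up for illustration):
import functools

def rename_kwargs(conversions):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Replace old keyword-argument names with their new equivalents.
            for old, new in conversions:
                if old in kwargs:
                    kwargs[new] = kwargs.pop(old)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rename_kwargs([('nb_filter', 'filters'), ('filter_length', 'kernel_size')])
def conv1d(filters, kernel_size):
    return filters, kernel_size

print(conv1d(nb_filter=32, filter_length=3))  # (32, 3) -- old-style argument names still work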
Tl;dr: These lines are just there for compatibility reasons. As these are internal aspects of Keras, there is nothing you have to worry about or take care of.