tf.get_variable
import tensorflow as tf

help(tf.get_variable)
"""
module tensorflow.python.ops.variable_scope:

get_variable(name, shape=None, dtype=None, initializer=None, regularizer=None, trainable=True, collections=None, caching_device=None, partitioner=None, validate_shape=True, use_resource=None, custom_getter=None, constraint=None)

    Creates a new variable or reuses an existing one.

    reuse: prefixes the name with the current variable scope, for example:

    ```python
    def foo():
        with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
            v = tf.get_variable("v", [1])
        return v

    v1 = foo()  # Creates v.
    v2 = foo()  # Gets the same, existing v.
    assert v1 == v2
    ```

    initializer:
    1. If `None` (the default), the default initializer passed in
       the variable scope is used.
    2. If that one is `None` too, a `glorot_uniform_initializer` is used.
    3. If a Tensor, the variable is initialized to this value and shape.

    regularizer:
    1. If `None` (the default), the default regularizer passed in
       the variable scope is used.
    2. If that is `None` too, no regularization is performed.

    partitioner: (for future)
    If a partitioner is provided, a `PartitionedVariable` is returned.
    Accessing this object as a `Tensor` returns the shards concatenated along
    the partition axis.

    Some useful partitioners are available.
    See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`.

    Args:
      name: string, the name of the new or existing variable.

      shape: list or tuple, shape of the new or existing variable.

      dtype: type, type of the new or existing variable (defaults to `DT_FLOAT`).

      initializer: initializer for the variable if one is created.

      regularizer: a (Tensor -> Tensor or None) function; the result of
        applying it on a newly created variable will be added to the collection
        `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization.

      trainable: defaults to True. If `True`, also adds the variable to the graph
        collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).

      (rest for future)

      collections: list of graph collection keys to add the Variable to.
        Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
      caching_device: optional device string or function describing where the
        Variable should be cached for reading. Defaults to the Variable's
        device. If not `None`, caches on another device. Typical use is to
        cache on the device where the Ops using the Variable reside, to
        deduplicate copying through `Switch` and other conditional statements.
      partitioner: optional callable that accepts a fully defined `TensorShape`
        and `dtype` of the Variable to be created, and returns a list of
        partitions for each axis (currently only one axis can be partitioned).
      validate_shape: if False, allows the variable to be initialized with a
        value of unknown shape. If True (the default), the shape of `initial_value`
        must be known.
      use_resource: if False, creates a regular Variable. If True, creates an
        experimental ResourceVariable instead, with well-defined semantics.
        Defaults to False (will later change to True).
        When eager execution is enabled this argument is always forced to be True.
      custom_getter: callable that takes as a first argument the true getter, and
        allows overwriting the internal get_variable method.
        The signature of `custom_getter` should match that of this method,
        but the most future-proof version will allow for changes:
        `def custom_getter(getter, *args, **kwargs)`. Direct access to
        all `get_variable` parameters is also allowed:
        `def custom_getter(getter, name, *args, **kwargs)`. A simple identity
        custom getter that simply creates variables with modified names is:
        ```python
        def custom_getter(getter, name, *args, **kwargs):
            return getter(name + '_suffix', *args, **kwargs)
        ```

    Returns:
      The created or existing `Variable`
      (or `PartitionedVariable`, if a partitioner was used).

    Raises:
      ValueError:
      1. when creating a new variable and shape is not declared,
      2. when violating reuse during variable creation,
      3. when `initializer` dtype and `dtype` don't match,
      4. or when reuse is set inside `variable_scope`.
"""

def foo():
    with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
        v = tf.get_variable("v", [1])
    return v

v1 = foo()  # Creates v.
v2 = foo()  # Gets the same, existing v.
assert v1 == v2
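The initializer precedence described above (explicit initializer, then the scope's default, then a Tensor that fixes value and shape) can be sketched as follows. This is a minimal sketch of the TF 1.x API, written against `tf.compat.v1` (my choice, not from the original post) so it also runs under TensorFlow 2, where eager execution must first be disabled:

```python
# Sketch of get_variable's initializer precedence; TF 1.x API via tf.compat.v1.
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # get_variable's reuse machinery is graph-mode
tf1.reset_default_graph()

# 1. An explicit initializer wins.
a = tf1.get_variable("a", shape=[2],
                     initializer=tf1.constant_initializer(7.0))

# 2. Otherwise the enclosing variable scope's default initializer is used.
with tf1.variable_scope("s", initializer=tf1.zeros_initializer()):
    b = tf1.get_variable("b", shape=[2])

# 3. A Tensor initializer fixes both value and shape (no `shape=` needed).
c = tf1.get_variable("c", initializer=tf.constant([1.0, 2.0, 3.0]))

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    a_val, b_val, c_val = sess.run([a, b, c])

print(a_val, b_val, c_val)
```

On a plain TF 1.x install the `tf.compat.v1` indirection is unnecessary and `tf.get_variable` can be called directly, as in the post's own example.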
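The regularizer hook works the same way: whatever (Tensor -> Tensor) function you pass is applied to the new variable, and the resulting loss is added to the `tf.GraphKeys.REGULARIZATION_LOSSES` collection. A minimal sketch, again via `tf.compat.v1` (an assumption so it runs on TensorFlow 2 as well):

```python
# Sketch of the regularizer argument; the loss lands in REGULARIZATION_LOSSES.
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()
tf1.reset_default_graph()

# Any (Tensor -> Tensor) function qualifies; tf.nn.l2_loss has that signature.
w = tf1.get_variable("w", shape=[3, 3],
                     initializer=tf1.ones_initializer(),
                     regularizer=tf.nn.l2_loss)

reg_losses = tf1.get_collection(tf1.GraphKeys.REGULARIZATION_LOSSES)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    reg_val = sess.run(reg_losses[0])  # l2_loss(ones(3, 3)) = 9 / 2 = 4.5

print(len(reg_losses), reg_val)
```

Summing this collection and adding it to the data loss is the usual way to apply weight decay in TF 1.x graphs.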
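The identity custom getter from the docstring can be exercised end to end. A sketch via `tf.compat.v1`; the `"_renamed"` suffix and `suffix_getter` name are illustrative, not from the original:

```python
# Sketch of custom_getter: an identity getter that only renames variables.
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()
tf1.reset_default_graph()

def suffix_getter(getter, name, *args, **kwargs):
    # `name` arrives already prefixed with the current variable scope;
    # forward everything to the true getter, changing only the name.
    return getter(name + "_renamed", *args, **kwargs)

with tf1.variable_scope("m", custom_getter=suffix_getter):
    v = tf1.get_variable("v", shape=[1])

print(v.name)
```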