A Deeper Look at TensorFlow's Graph Mechanism
What should we know about TensorFlow's core concept, the computational graph?
Let's look at the official API documentation for Graph:
tf.Graph
Defined in tensorflow/python/framework/ops.py.
The Graph class is defined in the ops.py file.
A Graph contains a set of tf.Operation objects, which represent units of computation; and tf.Tensor objects, which represent the units of data that flow between operations.
A default Graph is always registered, and accessible by calling tf.get_default_graph. To add an operation to the default graph, simply call one of the functions that defines a new Operation:
c = tf.constant(4.0)
assert c.graph is tf.get_default_graph()
Alternatively, a graph can be used via a context manager:
Another typical usage involves the tf.Graph.as_default context manager, which overrides the current default graph for the lifetime of the context:
g = tf.Graph()
with g.as_default():
  # Define operations and tensors in `g`.
  c = tf.constant(30.0)
  assert c.graph is g
Important note: This class is not thread-safe for graph construction. All operations should be created from a single thread, or external synchronization must be provided. Unless otherwise specified, all methods are not thread-safe.
A Graph instance supports an arbitrary number of "collections" that are identified by name. For convenience when building a large graph, collections can store groups of related objects: for example, the tf.Variable uses a collection (named tf.GraphKeys.GLOBAL_VARIABLES) for all variables that are created during the construction of a graph. The caller may define additional collections by specifying a new name.
Properties
building_function
Returns True if this graph represents a function.
collections
Returns the names of the collections known to this graph.
For a newly created graph this is empty by default.
finalized
True if this graph has been finalized.
False by default for a new graph.
graph_def_versions
The GraphDef version information of this graph.
For details on the meaning of each version, see GraphDef.
Returns: A VersionDef.
I tried it: an empty graph returns producer: 24. I don't fully understand what this version information represents.
seed
The graph-level random seed of this graph.
version
Returns a version number that increases as ops are added to the graph.
Note that this is unrelated to tf.Graph.graph_def_versions.
Returns: An integer version that increases as ops are added to the graph.
In effect, this counts how many operations (variable ops and compute ops alike) have been added to the graph.
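A quick sketch of how the version property behaves, assuming the tensorflow package is installed (tf.Graph works the same way for graph construction in both 1.x and 2.x):

```python
import tensorflow as tf

g = tf.Graph()
assert g.version == 0  # a brand-new graph has no ops yet

with g.as_default():
    tf.constant(1.0)   # adds one Const op to g
v_after_first = g.version
assert v_after_first > 0

with g.as_default():
    tf.constant(2.0)   # each new op bumps the version further
assert g.version > v_after_first
```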
Methods
__init__
Creates a new, empty Graph.
add_to_collection(name, value)
Stores value in the collection with the given name.
Note that collections are not sets (in the Python sense), so it is possible to add a value to a collection several times.
Args:
- name: The key for the collection. The GraphKeys class contains many standard names for collections.
- value: The value to add to the collection.
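A minimal sketch of the "collections are lists, not sets" point (the collection name "my_things" is just an arbitrary example):

```python
import tensorflow as tf

g = tf.Graph()
g.add_to_collection("my_things", "a")
g.add_to_collection("my_things", "a")  # not a set: the duplicate is kept

assert g.get_collection("my_things") == ["a", "a"]
```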
add_to_collections(names, value)
Stores value in the collections given by names.
Note that collections are not sets, so it is possible to add a value to a collection several times. This function makes sure that duplicates in names are ignored, but it will not check for pre-existing membership of value in any of the collections in names.
names can be any iterable, but if names is a string, it is treated as a single collection name.
Args:
- names: The keys for the collections to add to. The GraphKeys class contains many standard names for collections.
- value: The value to add to the collections.
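A small sketch of the two behaviors described above: duplicate names are ignored, and a plain string counts as one collection name (the collection names here are arbitrary examples):

```python
import tensorflow as tf

g = tf.Graph()

# Duplicate names in the iterable are ignored, so 1 is stored once per key.
g.add_to_collections(["x", "y", "x"], 1)
assert g.get_collection("x") == [1]
assert g.get_collection("y") == [1]

# A plain string is treated as a single collection name,
# not as an iterable of characters.
g.add_to_collections("xy", 2)
assert g.get_collection("xy") == [2]
assert g.get_collection("x") == [1]  # unchanged
```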
as_default()
Returns a context manager that makes this Graph the default graph.
This method should be used if you want to create multiple graphs in the same process. For convenience, a global default graph is provided, and all ops will be added to this graph if you do not create a new graph explicitly. Use this method with the with keyword to specify that ops created within the scope of a block should be added to this graph.
The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function.
The following code examples are equivalent:
# 1. Using Graph.as_default():
g = tf.Graph()
with g.as_default():
  c = tf.constant(5.0)
  assert c.graph is g

# 2. Constructing and making default:
with tf.Graph().as_default() as g:
  c = tf.constant(5.0)
  assert c.graph is g
Returns: A context manager for using this graph as the default graph.
as_graph_def(from_version=None, add_shapes=False)
Returns a serialized GraphDef representation of this graph.
The serialized GraphDef can be imported into another Graph (using tf.import_graph_def) or used with the C++ Session API.
This method is thread-safe.
Args:
- from_version: Optional. If this is set, returns a GraphDef containing only the nodes that were added to this graph since its version property had the given value.
- add_shapes: If true, adds an "_output_shapes" list attr to each node with the inferred shapes of each of its outputs.
Returns: A GraphDef protocol buffer.
Raises: ValueError: If the graph_def would be too large.
This method is handy for inspecting the detailed structure of the graph currently in use, for example:
print(tf.get_default_graph().as_graph_def())
# Prints detailed information for each node; one node looks like this:
node {
  name: "Variable/initial_value"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
        }
        float_val: 1.0
      }
    }
  }
}
as_graph_element(obj, allow_tensor=True, allow_operation=True)
Returns the object referred to by obj, as an Operation or Tensor.
This function validates that obj represents an element of this graph, and gives an informative error message if it is not.
This function is the canonical way to get/validate an object of one of the allowed types from an external argument reference in the Session API.
This method may be called concurrently from multiple threads.
Args:
- obj: A Tensor, an Operation, or the name of a tensor or operation. Can also be any object with an _as_graph_element() method that returns a value of one of these types.
- allow_tensor: If true, obj may refer to a Tensor.
- allow_operation: If true, obj may refer to an Operation.
Returns: The Tensor or Operation in the Graph corresponding to obj.
Raises:
- TypeError: If obj is not a type we support attempting to convert to types.
- ValueError: If obj is of an appropriate type but invalid. For example, an invalid string.
- KeyError: If obj is not an object in the graph.
cc = tf.constant(30.0)
g = tf.Graph()
with g.as_default():
  c = tf.constant(30.0)
  print(g.as_graph_element(cc))
  print(g.as_graph_element(c))
Since cc is not an element of this graph, an error is raised:
ValueError: Tensor Tensor("Const:0", shape=(), dtype=float32) is not an element of this graph.
clear_collection(name)
Clears all values in a collection.
Args:
- name: The key for the collection. The GraphKeys class contains many standard names for collections.
colocate_with(op, ignore_existing=False)
Returns a context manager that specifies an op to colocate with.
Note: this function is not for public use, only for internal libraries. For example:
a = tf.Variable([1.0])
with g.colocate_with(a):
  b = tf.constant(1.0)
  c = tf.add(a, b)
b and c will always be colocated with a, no matter where a is eventually placed.
NOTE: Using a colocation scope resets any existing device constraints.
If op is None then ignore_existing must be True and the new scope resets all colocation and device constraints.
Args:
- op: The op to colocate all created ops with, or None.
- ignore_existing: If true, only applies colocation of this op within the context, rather than applying all colocation properties on the stack. If op is None, this value must be True.
Raises:
- ValueError: if op is None but ignore_existing is False.
In short: ops created inside this context manager are placed in the same location as the given op, wherever that op ends up (e.g. on another device).
container(container_name)
Returns a context manager that specifies the resource container to use.
Stateful operations, such as variables and queues, can maintain their states on devices so that they can be shared by multiple processes. A resource container is a string name under which these stateful operations are tracked. These resources can be released or cleared with tf.Session.reset().
For example:
with g.container("experiment0"):
  # All stateful Operations constructed in this context will be placed
  # in resource container "experiment0".
  v1 = tf.Variable([1.0])
  v2 = tf.Variable([2.0])
  with g.container("experiment1"):
    # All stateful Operations constructed in this context will be
    # placed in resource container "experiment1".
    v3 = tf.Variable([3.0])
    q1 = tf.FIFOQueue(10, tf.float32)
  # All stateful Operations constructed in this context will be
  # created in the "experiment0" container.
  v4 = tf.Variable([4.0])
  q1 = tf.FIFOQueue(20, tf.float32)
  with g.container(""):
    # All stateful Operations constructed in this context will be
    # placed in the default resource container.
    v5 = tf.Variable([5.0])
    q3 = tf.FIFOQueue(30, tf.float32)

# Resets container "experiment0", after which the state of v1, v2, v4, q1
# will become undefined (such as uninitialized).
tf.Session.reset(target, ["experiment0"])
v1, v2, v4, and q1 are placed in the "experiment0" resource container, so after the reset runs their state becomes undefined.
Args:
- container_name: container name string.
Returns: A context manager for defining resource containers for stateful ops, yields the container name.
control_dependencies(control_inputs)
Returns a context manager that specifies control dependencies.
Use with the with keyword to specify that all operations constructed within the context should have control dependencies on control_inputs. For example:
with g.control_dependencies([a, b, c]):
  # `d` and `e` will only run after `a`, `b`, and `c` have executed.
  d = ...
  e = ...
Multiple calls to control_dependencies() can be nested, and in that case a new Operation will have control dependencies on the union of control_inputs from all active contexts.
with g.control_dependencies([a, b]):
  # Ops constructed here run after `a` and `b`.
  with g.control_dependencies([c, d]):
    # Ops constructed here run after `a`, `b`, `c`, and `d`.
As the code shows, each level contributes its own dependencies; ops in the innermost block only run after a, b, c, and d have all executed.
You can pass None to clear the control dependencies:
with g.control_dependencies([a, b]):
  # Ops constructed here run after `a` and `b`.
  with g.control_dependencies(None):
    # Ops constructed here run normally, not waiting for either `a` or `b`.
    with g.control_dependencies([c, d]):
      # Ops constructed here run after `c` and `d`, also not waiting
      # for either `a` or `b`.
N.B. The control dependencies context applies only to ops that are constructed within the context. Merely using an op or tensor in the context does not add a control dependency. Ops must be created inside the context manager for the dependency to take effect; the following wrong and right examples illustrate this point:
# WRONG
def my_func(pred, tensor):
  t = tf.matmul(tensor, tensor)
  with tf.control_dependencies([pred]):
    # The matmul op is created outside the context, so no control
    # dependency will be added.
    return t

# RIGHT
def my_func(pred, tensor):
  with tf.control_dependencies([pred]):
    # The matmul op is created in the context, so a control dependency
    # will be added.
    return tf.matmul(tensor, tensor)
Args:
- control_inputs: A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.
Returns: A context manager that specifies control dependencies for all operations constructed within the context.
Raises:
- TypeError: If control_inputs is not a list of Operation or Tensor objects.
create_op(op_type, inputs, dtypes, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)
Creates an Operation in this graph.
This is a low-level interface for creating an Operation. Most programs will not call this method directly, and instead use the Python op constructors, such as tf.constant(), which add ops to the default graph.
device(device_name_or_function)
Returns a context manager that specifies the default device to use.
The device_name_or_function argument may either be a device name string, a device function, or None:
- If it is a device name string, all operations constructed in this context will be assigned to the device with that name, unless overridden by a nested device() context.
- If it is a function, it will be treated as a function from Operation objects to device name strings, and invoked each time a new Operation is created. The Operation will be assigned to the device with the returned name.
- If it is None, all device() invocations from the enclosing context will be ignored.
For information about the valid syntax of device name strings, see the documentation in DeviceNameUtils.
For example:
with g.device('/device:GPU:0'):
  # All operations constructed in this context will be placed
  # on GPU 0.
  with g.device(None):
    # All operations constructed in this context will have no
    # assigned device.

# Defines a function from `Operation` to device string.
def matmul_on_gpu(n):
  if n.type == "MatMul":
    return "/device:GPU:0"
  else:
    return "/cpu:0"

with g.device(matmul_on_gpu):
  # All operations of type "MatMul" constructed in this context
  # will be placed on GPU 0; all other operations will be placed
  # on CPU 0.
N.B. The device scope may be overridden by op wrappers or other library code. For example, a variable assignment op v.assign() must be colocated with the tf.Variable v, and incompatible device scopes will be ignored.
Args:
- device_name_or_function: The device name or function to use in the context.
Yields: A context manager that specifies the default device to use for newly created ops.
finalize()
Finalizes this graph, making it read-only.
After calling g.finalize(), no new operations can be added to g. This method is used to ensure that no operations are added to a graph when it is shared between multiple threads, for example when using a tf.train.QueueRunner.
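A small sketch of finalize() in action; the RuntimeError on adding an op to a finalized graph is the behavior I observe, assuming a standard TensorFlow install:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    tf.constant(1.0)

g.finalize()        # the graph is now read-only
assert g.finalized

try:
    with g.as_default():
        tf.constant(2.0)  # attempting to add a new op now fails
    added = True
except RuntimeError:
    added = False
assert not added
```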
get_all_collection_keys()
Returns a list of collections used in this graph.
get_collection(name, scope=None)
Returns a list of values in the collection with the given name, or an empty list if nothing has been added to it.
This is different from get_collection_ref(), which always returns the actual collection list if it exists, in that this method returns a new list each time it is called.
Args:
- name: The key for the collection. For example, the GraphKeys class contains many standard names for collections.
- scope: (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix.
Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
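A sketch of the scope filter: a plain scope string acts as a name prefix via re.match (the op and collection names here are arbitrary examples):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0, name="layer1/a")
    b = tf.constant(2.0, name="layer2/b")

g.add_to_collection("tensors", a)
g.add_to_collection("tensors", b)

# Without a scope, every value comes back (as a fresh list).
everything = g.get_collection("tensors")
assert everything[0] is a and everything[1] is b

# A scope without regex tokens filters by name prefix via re.match.
filtered = g.get_collection("tensors", scope="layer1")
assert len(filtered) == 1 and filtered[0] is a
```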
get_collection_ref(name)
Returns a list of values in the collection with the given name.
If the collection exists, this returns the list itself, which can be modified in place to change the collection. If the collection does not exist, it is created as an empty list and the list is returned.
This is different from get_collection(), which always returns a copy of the collection list if it exists and never creates an empty collection.
Args:
- name: The key for the collection. For example, the GraphKeys class contains many standard names for collections.
Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection.
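A sketch contrasting the live reference with the copy (collection names are arbitrary examples):

```python
import tensorflow as tf

g = tf.Graph()
g.add_to_collection("vals", 1)

ref = g.get_collection_ref("vals")  # the actual underlying list
ref.append(2)                       # mutates the collection in place
assert g.get_collection("vals") == [1, 2]

copy = g.get_collection("vals")     # a fresh copy
copy.append(3)                      # does not touch the collection
assert g.get_collection("vals") == [1, 2]

# get_collection_ref creates an empty list for a missing collection;
# get_collection would just return [] without creating anything.
assert g.get_collection_ref("new") == []
```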
get_name_scope()
Returns the current name scope.
For example:
with tf.name_scope('scope1'):
  with tf.name_scope('scope2'):
    print(tf.get_default_graph().get_name_scope())
would print the string "scope1/scope2".
Returns: A string representing the current name scope.
get_operation_by_name(name)
Returns the Operation with the given name.
The returned operation carries its inputs, op type, attributes, and so on, for example:
name: "divi"
op: "RealDiv"
input: "a"
input: "b"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
This method may be called concurrently from multiple threads.
Args:
- name: The name of the Operation to return.
Returns: The Operation with the given name.
Raises:
- TypeError: If name is not a string.
- KeyError: If name does not correspond to an operation in this graph.
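A minimal sketch of looking an op up by name, including the KeyError for a missing name:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    tf.constant(1.0, name="x")

op = g.get_operation_by_name("x")
assert op.name == "x"
assert op.type == "Const"

try:
    g.get_operation_by_name("missing")
    found = True
except KeyError:
    found = False
assert not found
```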
get_operations()
Return the list of operations in the graph, for example:
[<tf.Operation Const type=Const>, <tf.Operation Const_1 type=Const>, <tf.Operation mul type=Mul>, <tf.Operation divi type=RealDiv>]
You can modify the operations in place, but modifications to the list such as inserts/delete have no effect on the list of operations known to the graph.
This method may be called concurrently from multiple threads.
Returns:
A list of Operations.
get_tensor_by_name(name)
Returns the Tensor with the given name.
This method may be called concurrently from multiple threads.
Args:
- name: The name of the Tensor to return.
Returns: The Tensor with the given name.
Raises:
- TypeError: If name is not a string.
- KeyError: If name does not correspond to a tensor in this graph.
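A minimal sketch; note that tensor names take the form "<op_name>:<output_index>":

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0, name="a")

# "a:0" is output 0 of the op named "a".
t = g.get_tensor_by_name("a:0")
assert t is a
```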
gradient_override_map(op_type_map)
EXPERIMENTAL: A context manager for overriding gradient functions.
This context manager can be used to override the gradient function that will be used for ops within the scope of the context.
For example:
@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
  # ...

with tf.Graph().as_default() as g:
  c = tf.constant(5.0)
  s_1 = tf.square(c)  # Uses the default gradient for tf.square.
  with g.gradient_override_map({"Square": "CustomSquare"}):
    s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
                        # gradient of s_2.
Args:
- op_type_map: A dictionary mapping op type strings to alternative op type strings.
Returns: A context manager that sets the alternative op type to be used for one or more ops created in that context.
Raises:
- TypeError: If op_type_map is not a dictionary mapping strings to strings.
is_feedable(tensor)
Returns True if and only if tensor is feedable (i.e. its value can be supplied via feed_dict).
is_fetchable(tensor_or_op)
Returns True if and only if tensor_or_op is fetchable (i.e. its value can be retrieved with Session.run).
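These two predicates pair with prevent_feeding and prevent_fetching, described below; a small sketch:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(1.0)

assert g.is_feedable(c)      # by default, tensors can be fed
assert g.is_fetchable(c.op)  # and ops/tensors can be fetched

g.prevent_feeding(c)         # mark c as unfeedable
assert not g.is_feedable(c)

g.prevent_fetching(c.op)     # mark the op as unfetchable
assert not g.is_fetchable(c.op)
```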
name_scope(name)
Returns a context manager that creates hierarchical names for operations.
A graph maintains a stack of name scopes. A with name_scope(...): statement pushes a new name onto the stack for the lifetime of the context.
The name argument will be interpreted as follows:
- A string (not ending with '/') will create a new name scope, in which name is appended to the prefix of all operations created in the context. If name has been used before, it will be made unique by calling self.unique_name(name).
- A scope previously captured from a with g.name_scope(...) as scope: statement will be treated as an "absolute" name scope, which makes it possible to re-enter existing scopes.
- A value of None or the empty string will reset the current name scope to the top-level (empty) name scope.
For example:
with tf.Graph().as_default() as g:
  c = tf.constant(5.0, name="c")
  assert c.op.name == "c"
  c_1 = tf.constant(6.0, name="c")
  assert c_1.op.name == "c_1"

  # Creates a scope called "nested"
  with g.name_scope("nested") as scope:
    nested_c = tf.constant(10.0, name="c")
    assert nested_c.op.name == "nested/c"

    # Creates a nested scope called "inner".
    with g.name_scope("inner"):
      nested_inner_c = tf.constant(20.0, name="c")
      assert nested_inner_c.op.name == "nested/inner/c"

    # Create a nested scope called "inner_1".
    with g.name_scope("inner"):
      nested_inner_1_c = tf.constant(30.0, name="c")
      assert nested_inner_1_c.op.name == "nested/inner_1/c"

      # Treats `scope` as an absolute name scope, and
      # switches to the "nested/" scope.
      with g.name_scope(scope):
        nested_d = tf.constant(40.0, name="d")
        assert nested_d.op.name == "nested/d"

        with g.name_scope(""):
          e = tf.constant(50.0, name="e")
          assert e.op.name == "e"
Study this code carefully; it shows the naming rules under multiple levels of nesting very clearly.
The name of the scope itself can be captured by with g.name_scope(...) as scope:, which stores the name of the scope in the variable scope. This value can be used to name an operation that represents the overall result of executing the ops in a scope. For example:
inputs = tf.constant(...)
with g.name_scope('my_layer') as scope:
  weights = tf.Variable(..., name="weights")
  biases = tf.Variable(..., name="biases")
  affine = tf.matmul(inputs, weights) + biases
  output = tf.nn.relu(affine, name=scope)
NOTE: This constructor validates the given name. Valid scope names match one of the following regular expressions:
[A-Za-z0-9.][A-Za-z0-9_.\-/]* (for scopes at the root)
[A-Za-z0-9_.\-/]* (for other scopes)
Args:
- name: A name for the scope.
Returns: A context manager that installs name as a new name scope.
Raises:
- ValueError: If name is not a valid scope name, according to the rules above.
prevent_feeding(tensor)
Marks the given tensor as unfeedable in this graph.
prevent_fetching(op)
Marks the given op as unfetchable in this graph.
unique_name(name, mark_as_used=True)
Return a unique operation name for name.
Note: You rarely need to call unique_name() directly. Most of the time you just need to create with g.name_scope() blocks to generate structured names. unique_name is used to generate structured names, separated by "/", to help identify operations when debugging a graph. Operation names are displayed in error messages reported by the TensorFlow runtime, and in various visualization tools such as TensorBoard.
If mark_as_used is set to True, which is the default, a new unique name is created and marked as in use. If it's set to False, the unique name is returned without actually being marked as used. This is useful when the caller simply wants to know what the name to be created will be.
Args:
- name: The name for an operation.
- mark_as_used: Whether to mark this name as being used.
Returns: A string to be passed to create_op() that will be used to name the operation being created.
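A small sketch of unique_name, including the mark_as_used=False "preview" behavior (the names "op" and "probe" are arbitrary examples):

```python
import tensorflow as tf

g = tf.Graph()
assert g.unique_name("op") == "op"    # first use: returned unchanged
assert g.unique_name("op") == "op_1"  # already used, so a suffix is added

# With mark_as_used=False the name is only previewed, not reserved,
# so the next real request still gets the plain name.
assert g.unique_name("probe", mark_as_used=False) == "probe"
assert g.unique_name("probe") == "probe"
```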
As a supplement, here are the standard collections defined inside Graph:
- GLOBAL_VARIABLES: the default collection of Variable objects, shared across distributed environments (model variables are a subset of these). See tf.global_variables for more details. Commonly, all TRAINABLE_VARIABLES variables will be in MODEL_VARIABLES, and all MODEL_VARIABLES variables will be in GLOBAL_VARIABLES; that is, trainable variables are a subset of model variables, which are a subset of global variables.
- LOCAL_VARIABLES: the subset of Variable objects that are local to each machine. Usually used for temporary variables, like counters. Note: use tf.contrib.framework.local_variable to add to this collection.
- MODEL_VARIABLES: the subset of Variable objects that are used in the model for inference (feed forward). Note: use tf.contrib.framework.model_variable to add to this collection.
- TRAINABLE_VARIABLES: the subset of Variable objects that will be trained by an optimizer. See tf.trainable_variables for more details.
- SUMMARIES: the summary Tensor objects that have been created in the graph. See tf.summary.merge_all for more details.
- QUEUE_RUNNERS: the QueueRunner objects that are used to produce input for a computation. See tf.train.start_queue_runners for more details.
- MOVING_AVERAGE_VARIABLES: the subset of Variable objects that will also keep moving averages. See tf.moving_average_variables for more details.
- REGULARIZATION_LOSSES: regularization losses collected during graph construction.
To wrap up, the key points about Graph:
- Context managers are used heavily when defining graphs
- A default graph is already defined for you
- Graphs have their own "collections" concept, which sees a lot of practical use
- Graphs have a "container" concept, which is rarely used in practice
- You can control which device performs the computation
- The name_scope mechanism keeps a graph's structure clear

A few questions worth thinking about:
- Since a default graph is already defined, when do we need to define new graphs ourselves?
- If we define multiple graphs, how do we connect them and pass variables between them?
- Why have a computational graph in the first place?
Reference: https://tensorflow.google.cn/api_docs/python/tf/Graph