[Binary HDF5 file (Keras-format model weights) — raw bytes, not human-readable text. Recoverable metadata from the dump:]

File attributes:
- backend: tensorflow
- keras_version: 2.2.4-tf
- layer_names: transformer_encoder

weight_names (all variables live under the prefix stack_13/transformer/transformer_encoder/):
- transformer_input_embedding:
  - embedding/embeddings:0
  - position_embedding/dense/kernel:0, position_embedding/dense/bias:0
- encoder_stack, with blocks transformer_encoder_block through transformer_encoder_block_8 (at least nine blocks; the dump is truncated partway through block 8). Each block contains:
  - transformer_self_attention: a layer_norm (gamma:0, beta:0), then multi_head_attention with an attention_qkv_projection holding three weight_norm_dense layers (kernel:0, g:0), each followed by its own layer_norm (gamma:0, beta:0), and a final output weight_norm_dense (kernel:0, g:0)
  - transformer_feed_forward: a layer_norm (gamma:0, beta:0), then a stack of two weight_norm_dense layers (kernel:0, bias:0, g:0 each)
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_self_attention_8/self_attention_8/multi_head_attention_8/attention_qkv_projection_8/layer_norm_51/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_self_attention_8/self_attention_8/multi_head_attention_8/attention_qkv_projection_8/layer_norm_51/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_self_attention_8/self_attention_8/multi_head_attention_8/weight_norm_dense_59/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_self_attention_8/self_attention_8/multi_head_attention_8/weight_norm_dense_59/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/layer_norm_17/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/layer_norm_17/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_16/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_16/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_16/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_17/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_17/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_8/transformer_feed_forward_8/stack_8/weight_norm_dense_17/g:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/layer_norm_18/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/layer_norm_18/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_60/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_60/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_52/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_52/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_61/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_61/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_53/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_53/beta:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_62/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/weight_norm_dense_62/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_54/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/attention_qkv_projection_9/layer_norm_54/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/weight_norm_dense_63/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_self_attention_9/self_attention_9/multi_head_attention_9/weight_norm_dense_63/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/layer_norm_19/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/layer_norm_19/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_18/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_18/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_18/g:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_19/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_19/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_9/transformer_feed_forward_9/stack_9/weight_norm_dense_19/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/layer_norm_20/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/layer_norm_20/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_64/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_64/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_55/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_55/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_65/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_65/g:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_56/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_56/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_66/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/weight_norm_dense_66/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_57/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/attention_qkv_projection_10/layer_norm_57/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/weight_norm_dense_67/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_self_attention_10/self_attention_10/multi_head_attention_10/weight_norm_dense_67/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/layer_norm_21/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/layer_norm_21/beta:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_20/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_20/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_20/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_21/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_21/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_10/transformer_feed_forward_10/stack_10/weight_norm_dense_21/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/layer_norm_22/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/layer_norm_22/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_68/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_68/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_58/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_58/beta:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_69/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_69/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_59/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_59/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_70/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/weight_norm_dense_70/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_60/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/attention_qkv_projection_11/layer_norm_60/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/weight_norm_dense_71/kernel:0 
stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_self_attention_11/self_attention_11/multi_head_attention_11/weight_norm_dense_71/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/layer_norm_23/gamma:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/layer_norm_23/beta:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_22/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_22/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_22/g:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_23/kernel:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_23/bias:0 stack_13/transformer/transformer_encoder/encoder_stack/transformer_encoder_block_11/transformer_feed_forward_11/stack_11/weight_norm_dense_23/g:0 