TensorFlow 2.20 introduces significant changes, including the deprecation of tf.lite in favor of LiteRT for on-device inference, which offers improved GPU acceleration and a unified interface for Neural Processing Units (NPUs). The release also adds autotune.min_parallelism to tf.data.Options for faster input pipeline warm-up.
3 min read · From blog.tensorflow.org
Table of contents
- tf.lite is being replaced by LiteRT
- Faster input pipeline warm-up with tf.data
- Changes to I/O GCS filesystem package
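To illustrate the tf.lite deprecation, the sketch below converts a trivial function to the .tflite flatbuffer format and runs it with the soon-to-be-removed `tf.lite.Interpreter`; under LiteRT the interpreter import reportedly moves to the `ai-edge-litert` package while the rest of the workflow stays the same. The model itself is a hypothetical example for demonstration only.

```python
import numpy as np
import tensorflow as tf


# A trivial model: double the input. input_signature fixes the shape so the
# converter can trace a single concrete function.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return x * 2.0


converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]
)
tflite_model = converter.convert()  # serialized .tflite flatbuffer bytes

# Deprecated in 2.20: tf.lite.Interpreter. With LiteRT installed, the
# stated replacement is `from ai_edge_litert.interpreter import Interpreter`,
# used the same way.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.arange(4, dtype=np.float32).reshape(1, 4)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

Because LiteRT keeps the interpreter API shape (allocate_tensors, set_tensor, invoke, get_tensor), migrating mostly means swapping the import rather than rewriting inference code.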
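The new autotune knob can be sketched as follows; this is a minimal example assuming `min_parallelism` lives under `tf.data.Options().autotune` as the release notes describe, with a `hasattr` guard so the snippet also runs on pre-2.20 versions, and the value 4 chosen arbitrarily for illustration.

```python
import tensorflow as tf

# An input pipeline whose map stage is tuned by AUTOTUNE. Normally autotune
# ramps parallelism up from a low starting point during warm-up.
ds = tf.data.Dataset.range(8).map(
    lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE
)

options = tf.data.Options()
# New in TF 2.20: start autotuning from a higher parallelism floor so the
# pipeline warms up faster. Guarded because older versions lack the field.
if hasattr(options.autotune, "min_parallelism"):
    options.autotune.min_parallelism = 4
ds = ds.with_options(options)

values = [int(v) for v in ds.as_numpy_iterator()]
```

Setting a floor on parallelism trades a little extra warm-up CPU for skipping the slow initial ramp, which matters most for short-running jobs dominated by the first few epochs.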