Can cutting-edge deep learning algorithms run on low-power edge devices such as smartwatches? Our partner ThinGenious and researchers from Politecnico di Torino proved they can! Check out the preprint of our latest article on how multi-head cross-attention blocks can be replaced by efficient convolutions, using knowledge distillation to reduce a model’s computational requirements and memory footprint.
