ENOT Releases AI-Driven Optimization Tech for Deep Neural Networks, Democratizing AI for all

ENOT, a leading developer of neural network optimization tools, has released AI-driven optimization technology for deep neural networks, aimed at AI developers and edge AI companies. Integrating ENOT’s framework makes deep neural networks faster, smaller and more energy-efficient, from cloud to edge computing. The technology achieves optimization ratios of up to 20x acceleration and up to 25x compression, reducing total computing resource (hardware) costs by as much as 70%.

ENOT applies a unique neural architecture selection technology that outperforms known methods for neural network compression and acceleration. The framework exposes a simple-to-use Python API that can be quickly integrated into various neural network training pipelines to greatly accelerate and compress neural networks.
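The article does not show ENOT’s actual API, so the sketch below is only an illustration of the general idea of dropping a post-training compression step into an existing PyTorch pipeline. It uses standard PyTorch dynamic quantization as a stand-in technique; none of it should be read as ENOT’s method or interface.

```python
# Illustrative only: ENOT's real Python API is not shown in the article.
# Standard PyTorch dynamic quantization stands in for a compression step
# that slots into an existing training pipeline after training finishes.
import io
import torch
import torch.nn as nn

# A small example network standing in for a production model.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# ... the usual training loop would run here ...

# Post-training compression: quantize Linear layer weights to INT8.
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def checkpoint_bytes(m: nn.Module) -> int:
    # Serialize the state dict to memory to compare checkpoint size.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print(f"original:   {checkpoint_bytes(model):,} bytes")
print(f"compressed: {checkpoint_bytes(compressed):,} bytes")
```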

The framework allows users to automate the search for the optimal neural network architecture, taking into account latency, RAM and model size constraints for different hardware and software platforms. ENOT’s neural network architecture search engine automatically finds the best possible architecture from millions of available options, weighing parameters such as the following (a simplified, illustrative search sketch appears after this list):

  • input resolution
  • depth of neural network
  • operation type
  • activation type
  • number of neurons at each layer, and
  • bit width for the target hardware platform for NN inference.
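To make the list concrete, here is a deliberately simplified, hypothetical sketch of a constrained architecture search over these knobs. It uses plain random sampling under a model-size budget; ENOT’s actual search engine is not described in detail here and is certainly more sophisticated. All names and numbers in the sketch are illustrative assumptions.

```python
# Illustrative sketch only: a toy random search over the knobs listed above
# (input resolution, depth, activation, width, bit width) under a model-size
# budget. This is NOT ENOT's search engine; it only shows the shape of the
# problem: sample a configuration, reject it if it violates a hardware
# constraint, and keep the best feasible candidate.
import random
import torch
import torch.nn as nn

SEARCH_SPACE = {
    "input_resolution": [96, 128, 160, 224],
    "depth":            [4, 8, 11, 16],           # number of layers
    "activation":       [nn.ReLU, nn.SiLU, nn.Hardswish],
    "width":            [32, 64, 128, 256],       # neurons per layer
    "bit_width":        [8, 16, 32],              # target inference precision
}

MAX_MODEL_MB = 2.0  # example RAM / model-size budget for the target device

def build_candidate(cfg) -> nn.Module:
    layers, in_features = [], cfg["input_resolution"] ** 2
    for _ in range(cfg["depth"]):
        layers += [nn.Linear(in_features, cfg["width"]), cfg["activation"]()]
        in_features = cfg["width"]
    layers.append(nn.Linear(in_features, 10))
    return nn.Sequential(*layers)

def model_size_mb(model: nn.Module, bit_width: int) -> float:
    params = sum(p.numel() for p in model.parameters())
    return params * bit_width / 8 / 1e6

best = None
for _ in range(50):
    cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    size = model_size_mb(build_candidate(cfg), cfg["bit_width"])
    if size > MAX_MODEL_MB:
        continue  # violates the hardware constraint; discard
    score = -size  # placeholder objective; a real search trades off accuracy and latency
    if best is None or score > best[0]:
        best = (score, cfg, size)

if best:
    print("best config under the budget:", best[1], f"({best[2]:.2f} MB)")
```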

This technology helps customers achieve significant cost savings and launch products faster, reducing time to market.

The ENOT framework is aimed at companies that run neural networks on edge devices, across sectors such as:

  • Electronics
  • Healthcare
  • Oil and gas
  • Autonomous driving
  • Cloud computing
  • Telecom
  • Mobile apps
  • Internet of Things (IoT)
  • Robotics

ENOT has already successfully completed more than 20 pilot projects, optimizing neural networks for several leading tech giants from around the world as well as a dozen medium-sized OEMs and AI companies.

ENOT delivered a 13.3x acceleration of a neural network used by a top-three smartphone manufacturer as part of its image enhancement process. The optimization reduced the network depth from 16 layers to 11 and the input resolution from 224×224 pixels to 96×96 pixels, with practically zero loss of accuracy.
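As a rough illustration of why cutting depth and input resolution yields large speedups, the toy benchmark below times a plain convolutional stack at the two configurations mentioned above. It does not reproduce ENOT’s pipeline or the 13.3x figure, which depend on the actual model and target hardware; the numbers it prints are machine-dependent.

```python
# Illustrative only: measure how a 16-layer model at 224x224 compares with an
# 11-layer model at 96x96. A plain 3x3 conv stack stands in for the real model.
import time
import torch
import torch.nn as nn

def conv_stack(depth: int) -> nn.Module:
    layers = [nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Conv2d(32, 32, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers).eval()

@torch.no_grad()
def latency_ms(model: nn.Module, resolution: int, runs: int = 10) -> float:
    x = torch.randn(1, 3, resolution, resolution)
    model(x)  # warm-up pass
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    return (time.perf_counter() - start) / runs * 1e3

before = latency_ms(conv_stack(16), 224)
after = latency_ms(conv_stack(11), 96)
print(f"before: {before:.1f} ms  after: {after:.1f} ms  speedup: {before / after:.1f}x")
```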

Another project with the same company delivered a 5.1x acceleration for a photo-denoising neural network – again with an almost imperceptible change in image quality – even though the network had already been manually optimized. For end users, this translates to faster processing and significantly extended battery life.

Sergey Aliamkin, CEO and founder of ENOT, commented, “Today, neural networks are widely used in production and in applications. They need to be more efficient in their consumption of computational resources, and more affordable. Their implementation should be faster, better and cheaper for all.

“ENOT is at the forefront of next-level AI optimization, helping bring fast, real-time AI on high-throughput data into reality, and our journey has only just begun, with examples such as the Weedbot laser weeding machine, which gained a 2.7x speedup thanks to the ENOT framework.”
