Hardware acceleration is increasingly essential in applications that rely on real-time analytics and artificial intelligence. This has driven several industry efforts to find suitable solutions.
Last year, IBM established the AI Hardware Center, a research and development lab aimed at increasing AI performance by 1000x in the next ten years.
According to Karl Freund, a senior analyst at Moor Insights & Strategy, improving AI performance requires reducing bit lengths while still preserving the accuracy of higher-bit-length calculations.
The common 32-bit and 16-bit formats are workable for most computation tasks, but for deep neural networks, IBM may need to go even smaller than 8 bits to achieve its goal.
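To make the trade-off concrete, the sketch below quantizes a tensor of synthetic fp32 weights onto progressively narrower integer grids and measures the round-trip error. It is a generic illustration of linear quantization, not IBM's method, and the data is synthetic; the challenge Freund describes is keeping this error from degrading model accuracy as the bit length shrinks.

```python
import numpy as np

# Synthetic fp32 weights standing in for a trained layer.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000).astype(np.float32)

def quantize_roundtrip(x, bits):
    """Symmetric linear quantization to a signed b-bit grid, then back."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.abs(x).max() / levels      # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -levels, levels)
    return (q * scale).astype(np.float32)

for bits in (16, 8, 4, 2):
    err = np.abs(weights - quantize_roundtrip(weights, bits)).mean()
    print(f"{bits:>2}-bit grid: mean abs error = {err:.6f}")
```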
It recently published a paper on “Hybrid 8-bit Floating Point” (HFP8), a format that applies different levels of precision to different parts of the computation. IBM demonstrated accuracy comparable to 16-bit math, but at 4x lower cost.
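The “hybrid” in the name refers to using different 8-bit layouts for different phases of training: the paper pairs a 1-4-3 format (1 sign, 4 exponent, 3 mantissa bits) for the forward pass with a 1-5-2 format for gradients, which need more dynamic range. The sketch below simulates rounding fp32 values onto such small float grids in software; it illustrates the idea only, ignoring details of IBM's actual scheme such as its shifted exponent bias.

```python
import numpy as np

def round_to_float(x, exp_bits, man_bits):
    """Round fp32 values to the nearest value on a small float grid
    (1 sign / exp_bits / man_bits), ignoring inf/NaN encodings."""
    bias = 2 ** (exp_bits - 1) - 1
    min_exp, max_exp = 1 - bias, bias                  # unbiased exponent range
    x = np.asarray(x, dtype=np.float32)
    sign, mag = np.sign(x), np.abs(x)
    # Power-of-two bucket of each magnitude, clamped to the representable
    # range; values below the smallest normal fall onto the subnormal grid.
    exp = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), min_exp, max_exp)
    step = 2.0 ** (exp - man_bits)                     # grid spacing at that exponent
    largest = (2 - 2.0 ** -man_bits) * 2.0 ** max_exp  # clamp overflow
    return sign * np.minimum(np.round(mag / step) * step, largest)

x = np.float32([0.1234, -3.7, 0.002])
print(round_to_float(x, exp_bits=4, man_bits=3))       # forward-pass style (1-4-3)
print(round_to_float(x, exp_bits=5, man_bits=2))       # gradient style (1-5-2)
```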
“This approach could theoretically enable someone to build a chip for training deep neural networks that would use ¼ the chip area, or perhaps deliver 4 times the performance at the same cost,” said Freund.
IBM is not the only technology company researching smaller bit formats; Google and Nvidia have both published papers on 8-bit formats for AI. Nvidia has been exploring 8-bit formats for five years now, stating in 2016 that the format would be enough for AI.
However, times have changed since 2016, and IBM is now working on 2-bit AI chips that can meet the accuracy requirements of 16-bit computation. In a paper published in July last year, the company showed its attempts to build an accurate 2-bit format that heavily outperforms 8-bit. While there is still a way to go, it shows that 8-bit is not the end of the road for everyone.
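At 2 bits there are only four representable values per number, so how the range is clipped matters enormously. The sketch below shows uniform 2-bit quantization with a fixed clipping threshold, in the spirit of clipping-based approaches such as IBM Research's PACT; it is our own illustration under those assumptions, not the exact scheme from IBM's paper.

```python
import numpy as np

def quantize_2bit(x, clip):
    """Uniform 2-bit quantization after clipping to [-clip, clip].

    Four output levels: {-clip, -clip/3, clip/3, clip}."""
    x = np.clip(np.asarray(x, dtype=np.float32), -clip, clip)
    levels = 2 ** 2 - 1                       # 4 values -> 3 steps
    scale = 2 * clip / levels
    return np.round((x + clip) / scale) * scale - clip

acts = np.float32([-2.0, -0.4, 0.05, 0.7, 1.9])
print(quantize_2bit(acts, clip=1.0))          # -> [-1, -1/3, 1/3, 1, 1]
```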