
ARM’s new chip design aims to revolutionize AI and machine learning



One of the most influential companies in tech has jumped on the Seattle-area bandwagon: Arm, which designs the mobile chips that power virtually every smartphone on the planet, is opening a Bellevue-based engineering center as it eyes the talent pool in the Pacific Northwest.


Starting next year, ARM processors will get significantly faster thanks to big changes in the company's Cortex-A chip designs. ARM is taking a page from rivals like AMD that have focused on raising the performance threshold in chips.




ARM’s new chip design focuses on AI and machine learning



But applications like virtual reality and machine learning need more performance, and ARM is preparing its processors to take on those emerging applications. ARM is adding more cores, instructions, and faster pipelines in smaller spaces to boost performance.


DynamIQ adds more performance without compromising ARM's power efficiency focus, Ronco said. Most devices from ARM don't require cooling fans, and that will remain the case with DynamIQ features in chip designs.


Architectures get modified with market demands of the time, and the chip industry is seeing the emergence of machine learning as a primary workload, said Dean McCarron, principal analyst at Mercury Research.


A chip could have different types of cores: for example, an eight-core chip with four fast CPUs for demanding applications and four slower CPUs for lighter, low-power tasks. That is already a part of ARM's "big.LITTLE" design but is being enhanced with DynamIQ.
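The idea can be sketched as a toy dispatcher (this is not ARM's actual scheduler; the threshold and the task loads below are invented purely for illustration):

```python
# Toy sketch of the big.LITTLE idea: route each task to a core class
# based on its estimated load. The 0.5 cutoff is an arbitrary assumption.
BIG_THRESHOLD = 0.5

def pick_cluster(load: float) -> str:
    """Return the core cluster a task with the given load should run on."""
    return "big" if load >= BIG_THRESHOLD else "LITTLE"

# Hypothetical workloads on a 0..1 load scale.
tasks = {"video_decode": 0.9, "email_sync": 0.1, "sensor_poll": 0.05}
placement = {name: pick_cluster(load) for name, load in tasks.items()}
print(placement)
# {'video_decode': 'big', 'email_sync': 'LITTLE', 'sensor_poll': 'LITTLE'}
```

In a real SoC this decision is made dynamically by the OS scheduler and the hardware, not by a fixed threshold.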


At the heart of these improvements is the ability of CPUs to do more low-level, power-efficient processing of machine-learning tasks. Machine-learning inference combines many such low-precision calculations to produce an approximate answer to a question. The improvements include more half-precision floating point operations, which are common in machine-learning-focused chips like Intel's upcoming Knights Mill and AI GPUs from AMD and Nvidia.
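Half precision trades accuracy for throughput and memory: each FP16 value occupies two bytes instead of FP32's four, so twice as many fit in a register file or cache line. A quick sketch using NumPy's float16 as a stand-in for hardware half-precision units:

```python
import numpy as np

# FP32 keeps a small increment that FP16 rounds away entirely:
# FP16 has about 3 decimal digits of precision vs FP32's ~7.
a32 = np.float32(1.0) + np.float32(1e-4)
a16 = np.float16(1.0) + np.float16(1e-4)
print(a32)  # 1.0001 -- the increment survives in FP32
print(a16)  # 1.0    -- rounded away in FP16

# The payoff: half the memory traffic for the same number of values,
# which is why ML-focused chips lean on FP16 arithmetic.
x = np.ones(1024, dtype=np.float32)
print(x.astype(np.float16).nbytes, "vs", x.nbytes)  # 2048 vs 4096 bytes
```

Neural-network training and inference tolerate this rounding well, which is why half precision became the common currency of ML accelerators.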


Arm Holdings has introduced the Armv9 architecture, the first overhaul of its CPU architecture in a decade, with a heavy emphasis on security, artificial intelligence (AI), and machine learning (ML).


Also known as AI accelerators, AI chips are processors that perform AI-based calculations and other tasks very fast. They deliver the computing power for training algorithms and running applications that traditional computer chips cannot provide. The range of innovative AI-optimized chip architectures is constantly expanding, with startups and well-established chip companies alike launching hardware optimized for machine learning, natural language processing, deep learning, and other areas of AI.


Given the ever-growing demands of AI applications, more chip makers are engaging in collaborations and product launches. Nvidia Corporation, an American technology company, recently teamed up with Arm Holdings (Arm), a multinational semiconductor and software design company, with the aim of helping IoT chip companies easily incorporate AI technology into their designs. Baidu, a Chinese technology company, unveiled an AI chip for fast computing in various AI scenarios. Google, an American tech giant, recently developed the third version of its AI-focused Tensor Processing Units (TPUs), which perform a variety of tasks such as word recognition. Arm itself recently unveiled two new AI chips that deliver exceptional computational capability.


In February 2020, Arm, the UK-based chipmaker, introduced two new processor designs that promise excellent computational capability for companies building machine-learning-powered devices.


The Cortex-M55 processor delivers machine learning performance that's up to 15 times better than its previous iterations, and a five-fold improvement in digital signal processing performance. And the Ethos-U55 NPU, the industry's first microNPU, is designed to pair with the Cortex-M55 to handle heavier workloads.


Together the two chips can provide a 480-fold uplift in machine learning performance compared with previous units while maintaining a small, power-efficient package. When the chips hit the market in 2021, they will enable tech companies to implement AI applications locally on smaller devices without sacrificing performance or size.
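These headline multipliers compose multiplicatively; dividing out the figures quoted above gives the implied contribution of the microNPU (an inferred factor, not one Arm states directly):

```python
# Figures quoted in the article.
cpu_ml_uplift = 15       # Cortex-M55 alone vs earlier Cortex-M parts
combined_uplift = 480    # Cortex-M55 paired with the Ethos-U55

# Implied extra factor contributed by the Ethos-U55 microNPU offload.
npu_factor = combined_uplift / cpu_ml_uplift
print(npu_factor)  # 32.0
```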


Jeremy Kahn (Bloomberg) -- ARM, the U.K.-based semiconductor design firm, introduced a new chip targeted at markets ranging from self-driving cars to artificial intelligence. It could also give ARM a better chance of making inroads against Intel in the lucrative server and data center market.


The new design may help ARM, bought for $32 billion last year by SoftBank Group Corp., compete with chips engineered for neural networks, a promising type of artificial intelligence software. Rivals like Intel Corp. and International Business Machines Corp. have recently unveiled chips designed for these applications.


SoftBank Chief Executive Officer Masayoshi Son snapped up ARM to bet on the Internet of Things -- the idea that everything from refrigerators to industrial robots will be connected to the Internet in the future. Son hopes ARM will become the leading designer of chips in these devices.


Over and above the pure performance improvements, the new chip designs from ARM also come with dedicated processing components for machine learning and on-board AI computation. With AI and machine learning set to be the next big thing in the coming years, any computational improvement in this area will lead to significant gains.


In addition to two new CPU designs, ARM also unveiled its new Mali GPU: the Mali-G72. Successor to the G71, the new GPU comes with 32 shader cores and offers 20 percent better performance density and 25 percent higher energy efficiency. In its slides, ARM claims that the G72 is 17 percent better than the G71 in machine learning benchmarks.


The new Arm v9 focuses on three areas: performance, security, and machine learning (ML) capabilities. Arm says the design will provide more than a 30% CPU performance boost over the next two generations of mobile and infrastructure CPUs.
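If that >30% figure accrues across the two generations, the implied per-generation gain, assuming equal compounding (our assumption, not Arm's), works out to roughly 14%:

```python
import math

total_uplift = 1.30                # >30% over two generations (article)
per_gen = math.sqrt(total_uplift)  # equal compounding across two steps
print(round((per_gen - 1) * 100, 1))  # 14.0 -- percent per generation
```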


Chips based on the decade-old v8 design offer some of the best performance-per-watt in computing today. Arm-based chips power nearly every smartphone, along with many tablets and an increasing number of laptops.


Apple Silicon, which uses an Arm v8-based design, will soon power every Apple computing product. All iPhones and iPads use Apple Silicon chips, and the company is in a two-year transition that will end with all Macs running Apple's Arm-based silicon. The Arm-based M1 chip powers the latest MacBook Air, MacBook Pro, and Mac mini.


Wait, what? I thought Apple was an ARM Holdings co-founder with a permanent architectural license and its own custom PC design that is radically different from, and better than, the small-core designs ARM pushes with Cortex-A for smartphones, and the somewhat better (but still not very good) Marvell and N1 core designs used on servers. Those server parts constitute only about 3% of the market, forcing Amazon, Microsoft, Google and others to make their own core designs, and causing Marvell, HP and most other ARM server vendors to drop out, leaving Ampere as the only player. Even Fujitsu, which makes ARM supercomputers, relies on a custom design (a combination of the SPARC-based RISC license it bought from Sun back in the day and things it licensed from ARM). While the M1 chip has a single-core score that rivals the Intel Core i7 and i9, the best Cortex core for PCs and mobile barely surpasses the Intel Pentium. (Qualcomm is hyping the multicore score, but even there it takes eight performance cores merely to rival the Geekbench 5 score of a quad-core Intel i5.) I thought Apple having its own big-core design that ARM Holdings can't come close to was why Nvidia's purchase of ARM Holdings was "meh" for Apple: its custom CPU and GPU designs are several times better than Cortex, Mali (the ARM Holdings GPU), and even Nvidia's GPUs (whether the old architecture or the new Ampere one) anyway.


The new Cortex-M55 is a new generation of IP most closely related to the M33, but it brings a few architectural advances that promise large performance and flexibility improvements for machine learning as well as vector instructions.


Multicore chipsets using the big.LITTLE design match low-power cores, which handle undemanding tasks and preserve battery life, with higher-speed cores aimed at taking care of more demanding tasks. Usually these cores are arranged in symmetric layouts: two or four low-power cores matched with the same number of high-performance cores.
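On Linux, software can cooperate with such layouts by pinning itself to one cluster. A minimal sketch using `os.sched_setaffinity`; note that the numbering of the low-power cluster as CPUs 0-3 is a hypothetical assumption that varies by SoC and kernel:

```python
import os

# Which CPUs may this process currently run on?
all_cpus = os.sched_getaffinity(0)

# Hypothetical big.LITTLE numbering: little cluster = CPUs 0-3.
little_cluster = {c for c in all_cpus if c < 4}

if little_cluster:
    # Pin a background task to the low-power cores...
    os.sched_setaffinity(0, little_cluster)
    assert os.sched_getaffinity(0) == little_cluster
    # ...then restore the full mask for demanding work.
    os.sched_setaffinity(0, all_cpus)
```

In practice the kernel's energy-aware scheduler makes these placement decisions automatically; explicit pinning is mostly used for benchmarking or latency-sensitive services.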


All of these architectures are more flexible than previous versions, and the chiplet/tile approach provides a way for big chipmakers to customize their chips while still serving a broad customer base. Meanwhile, systems companies such as Google, Meta, and Alibaba are taking this a step further, designing chips from scratch that are tuned specifically to their data type and processing goals.


Application-specific vs. general-purpose

A big challenge for design teams is that more of the design is becoming front-loaded. Instead of just creating the chip architecture and then working out the details in the design process, more of it needs to be addressed right at the architectural level.


UK chip designer ARM has revealed its latest line of CPUs and GPUs designed specifically for AI devices. Called the Cortex-A75, Cortex-A55, and Mali-G72, the processors use the firm's DynamIQ technology.


It's claimed the A75 allows for a "massive single-thread compute uplift," said Nandan Nayampally, the firm's vice president and general manager for compute products, while the A55 is designed for greater processing efficiency, and the G72 GPU was created for VR, gaming, and machine-learning workloads.


ARM says it expects to help ship 100 billion chips in the next five years and says all the new chips were designed to "power the most advanced compute". "We need to enable faster, more efficient and secure distributed intelligence between computing at the edge of the network and into the cloud," it said.

