This Robotic Inchworm Just Made It to Higher Ground



It can now go from crawling horizontally to climbing vertically

The robot's gait sequence from crawling on the ground to climbing on the vertical wall

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Each “step” for an inchworm may be small, but the diversity of terrains and orientations that these critters can crawl over is vast. They are able to inch their way across both horizontal and vertical surfaces, and use their great dexterity to navigate over uneven terrain. For these reasons, many researchers have sought over the years to create robots inspired by the inchworm.

Guoying Gu, a professor at the Robotics Institute in the School of Mechanical Engineering at Shanghai Jiao Tong University, recognizes a number of benefits of such robots. “However, most of [the] existing inchworm-inspired soft robots have limited and specified working environments,” he says. “Especially, [the ability to] transition between horizontal and vertical planes has remained elusive.”

But Gu and his colleagues have made significant headway in developing such a versatile robot, which they describe in a study recently published in IEEE Transactions on Robotics.

Transitioning between horizontal and vertical planes is notably difficult for soft robots because they must be both strong and flexible: strong enough to lift a foot from the ground, and flexible enough to reach a foothold on the vertical wall.

To allow more flexibility, Gu's team endowed their robot with three fiber-reinforced pneumatic actuators, which give precise control over the “tail,” “head,” and “body” of the robot. A control system monitors the positioning of the actuators and coordinates movement across the robot's whole body, allowing it to achieve the “Ω” shape of an inchworm as it crawls.

The design also includes two pressure suckers built from a double layer of silicone. Air between the layers can be pumped out, stiffening the suckers and making them better able to withstand external force and torque when suctioned to challenging surfaces.

The pneumatic actuators and suckers work together synchronously to propel the robotic inchworm forward. Just like a real inchworm, the robot extends its body, attaches its front foot (that is, its front pressure sucker), and contracts its body before suctioning its back foot and taking another step forward. It can achieve a top speed of 21 millimeters per second on horizontal planes and 15 mm/s on vertical walls. In terms of loads, the robot can carry 500 grams (about 15 times its own weight) on horizontal planes, or 20 grams on vertical walls.
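To make the cycle concrete, here is a minimal sketch in Python of the gait sequence described above. The Sucker class and the print statements are hypothetical stand-ins for the team's pneumatic hardware, not the authors' control code; the point is only the ordering, which keeps at least one foot anchored at all times.

```python
# A minimal sketch (assumed interfaces, not the authors' control code) of
# one gait cycle: extend the body, plant the front sucker, free the rear,
# contract, then re-anchor the rear. At least one foot grips at all times.

from dataclasses import dataclass


@dataclass
class Sucker:
    """Stand-in for one double-layer silicone pressure sucker."""
    name: str
    attached: bool = False

    def attach(self) -> None:
        # In hardware: pump out the air between the silicone layers,
        # stiffening the sucker and sealing it to the surface.
        self.attached = True
        print(f"{self.name} sucker attached")

    def release(self) -> None:
        self.attached = False
        print(f"{self.name} sucker released")


def crawl_step(front: Sucker, rear: Sucker) -> None:
    """One forward step of the inchworm gait described in the article."""
    assert rear.attached, "the rear foot must anchor the robot first"
    print("extend body (pressurize actuators, reach forward)")
    front.attach()           # plant the front foot at the new foothold
    rear.release()           # free the back foot
    print("contract body (the Ω shape pulls the rear forward)")
    rear.attach()            # re-anchor, ready for the next cycle
    front.release()


front, rear = Sucker("front"), Sucker("rear", attached=True)
for _ in range(2):           # run two steps of the cycle
    crawl_step(front, rear)
```

The same ordering, mirrored, lets the front foot reach from the floor onto the wall during the horizontal-to-vertical transition, which is why the suckers' vacuum stiffening matters: the anchored foot must resist the torque of the extended body.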

“It is the first time to achieve transition locomotion of a soft mobile robot between horizontal and vertical planes, which may expand the workspace of the soft robot,” says Gu, noting that the robot could be useful for tasks such as inspection, cleaning, maintenance, and surveillance. The researchers also note that this design could be adapted for water, if the muscles were hydraulically actuated.

“[Our] next steps include implementing more sensors to further automate the control of the robot, reduce the size of the actuation system to make the robot untethered, and explore the possibility for our soft robot to move in more complicated environments, like on the ceiling and in unstructured territories,” says Gu.

Michelle Hampson is a freelance writer based in Halifax. She frequently contributes to Spectrum's Journal Watch coverage, which highlights newsworthy studies published in IEEE journals.

Samsung could double performance of neural nets with processing-in-memory

Samsung added AI compute cores to DRAM memory dies to speed up machine learning.

John von Neumann’s original computer architecture, where logic and memory are separate domains, has had a good run. But some companies are betting that it’s time for a change.

In recent years, the shift toward more parallel processing and a massive increase in the size of neural networks mean processors need to access more data from memory more quickly. And yet “the performance gap between DRAM and processor is wider than ever,” says Joungho Kim, an expert in 3D memory chips at Korea Advanced Institute of Science and Technology, in Daejeon, and an IEEE Fellow. The von Neumann architecture has become the von Neumann bottleneck.

What if, instead, at least some of the processing happened in the memory? Less data would have to move between chips, and you’d save energy, too. It’s not a new idea. But its moment may finally have arrived. Last year, Samsung, the world’s largest maker of dynamic random-access memory (DRAM), started rolling out processing-in-memory (PIM) tech. Its first PIM offering, unveiled in February 2021, integrated AI-focused compute cores inside its Aquabolt-XL high-bandwidth memory. HBM is the kind of specialized DRAM that surrounds some top AI accelerator chips. The new memory is designed to act as a “drop-in replacement” for ordinary HBM chips, said Nam Sung Kim, an IEEE Fellow, who was then senior vice president of Samsung’s memory business unit.

Last August, Samsung revealed results from tests in a partner’s system. When used with the Xilinx Virtex UltraScale+ (Alveo) AI accelerator, the PIM tech delivered a nearly 2.5-fold performance gain and a 62 percent cut in energy consumption for a speech-recognition neural net. Samsung has been providing samples of the technology integrated into the current generation of high-bandwidth DRAM, HBM2. It’s also developing PIM for the next generation, HBM3, and for the low-power DRAM used in mobile devices. It expects to complete the standard for the latter with JEDEC in the first half of 2022.

There are plenty of ways to add computational smarts to memory chips. Samsung chose a design that’s fast and simple. HBM consists of a stack of DRAM chips linked vertically by interconnects called through-silicon vias (TSVs). The stack of memory chips sits atop a logic chip that acts as the interface to the processor.

A quick look at where the major players stand on processing-in-memory:

- Samsung, the world’s largest DRAM maker, is furthest along: it is offering the Aquabolt-XL with integrated AI computing cores, has also developed an AI accelerator for memory modules, and is working to standardize AI-accelerated DRAM.

- SK hynix, the second-largest DRAM maker, unveiled results with Purdue University in 2020 for Newton, an AI-accelerating HBM DRAM, but decided not to commercialize it and to pursue PIM for standard DRAM instead.

- Micron Technology, the third-largest DRAM maker, says it does not have a processing-in-memory product. However, in 2019 it acquired the AI-tech startup Fwdnxt, with the goal of developing “innovation that brings memory and computing closer together.”

- Rambus, which designs DRAM interface technology, did an exploratory design for processing-in-memory DRAM focused on reducing the power consumption of high-bandwidth memory (HBM).

- An Israeli startup has developed memory with integrated processing cores designed to accelerate queries in data analytics.

The highest data bandwidth in the stack lies within each chip, followed by the TSVs, and finally the connections to the processor. So Samsung chose to put the processing on the DRAM chips to take advantage of the high bandwidth there. The compute units are designed to do the most common neural-network calculation, called multiply and accumulate, and little else. Other designs have put the AI logic on the interface chip or used more complex processing cores.
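A toy calculation makes the bandwidth argument concrete. The Python sketch below models a matrix-vector product (a batch of multiply-and-accumulate reductions) and counts the bytes that must cross the memory-to-processor link in each case. The dimensions and the 16-bit word size are illustrative assumptions, not figures from Samsung's design.

```python
# Toy traffic model (illustrative numbers, not Samsung's design): when the
# multiply-and-accumulate units sit beside the DRAM banks, the weights never
# cross the memory-to-processor link; only the input vector goes in and the
# accumulated results come out.

import numpy as np

M, N = 4096, 4096                 # assumed layer dimensions
BYTES_PER_VALUE = 2               # assumed 16-bit values

W = np.random.rand(M, N).astype(np.float16)   # weights resident in the banks
x = np.random.rand(N).astype(np.float16)      # activations sent to the memory
y = W @ x                                     # M multiply-and-accumulate sums

host_mac = M * N * BYTES_PER_VALUE     # processor-side MAC: stream every weight out
pim_mac = (N + M) * BYTES_PER_VALUE    # bank-side MAC: vector in, results out

print(f"MAC on the processor: {host_mac / 2**20:.0f} MiB over the link")
print(f"MAC in the banks:     {pim_mac / 2**10:.0f} KiB over the link")
# Roughly a 2,000x cut in link traffic in this toy case; real-world gains
# depend on how often weights and activations are reused.
```

The asymmetry also explains why the in-memory compute units are kept so simple: multiply-and-accumulate dominates neural-network inference, so accelerating just that one operation next to the banks captures most of the available benefit.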

Samsung’s two largest competitors, SK hynix and Micron Technology, aren’t quite ready to take the plunge on PIM for HBM, though they’ve each made moves toward other types of processing-in-memory.

Icheon, South Korea–based SK hynix, the No. 2 DRAM supplier, is exploring PIM from several angles, says Il Park, vice president and head of memory-solution product development. For now it is pursuing PIM in standard DRAM chips rather than HBM, which might be simpler for customers to adopt, says Park.

HBM PIM is more of a mid- to long-term possibility for SK hynix. At the moment, customers are already dealing with enough issues as they try to move HBM DRAM physically closer to processors. “Many experts in this domain do not want to add more, and quite significant, complexity on top of the already busy situation involving HBM,” says Park.

That said, SK hynix researchers worked with Purdue University computer scientists on a comprehensive design of an HBM-PIM product called Newton in 2019. Like Samsung’s Aquabolt-XL, it places multiply-and-accumulate units in the memory banks to take advantage of the high bandwidth within the dies themselves.


Meanwhile, Rambus, based in San Jose, Calif., was motivated to explore PIM because of power-consumption issues, says Rambus fellow and distinguished inventor Steven Woo. The company designs the interfaces between processors and memory, and two-thirds of the power consumed by a system-on-chip and its HBM memory goes to transporting data horizontally between the two chips. Transporting data vertically within the HBM uses much less energy because the distances are so much shorter. “You might be going 10 to 15 millimeters horizontally to get data back to an SoC,” says Woo. “But vertically you’re talking on the order of a couple hundred microns.”
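Woo's distances suggest a rough back-of-envelope, sketched below in Python under the first-order assumption that interconnect energy per bit scales linearly with wire length (since the wire capacitance being charged grows roughly with distance). The fraction of traffic moved in-stack is a hypothetical input for illustration, not a Rambus figure.

```python
# Back-of-envelope from the distances Woo quotes, assuming (first-order)
# that energy per bit scales linearly with wire length.

HORIZONTAL_MM = 12.5   # "10 to 15 millimeters" from the HBM to the SoC
VERTICAL_MM = 0.2      # "on the order of a couple hundred microns" in-stack

length_ratio = HORIZONTAL_MM / VERTICAL_MM
print(f"a horizontal hop is ~{length_ratio:.0f}x longer than a vertical one")

# If two-thirds of SoC-plus-HBM power goes to data transport, then moving a
# hypothetical fraction f of that traffic onto short vertical hops scales
# its energy by ~1/length_ratio:
TRANSPORT_SHARE = 2 / 3
f = 0.5                # assumed fraction of traffic kept inside the stack
saving = TRANSPORT_SHARE * f * (1 - 1 / length_ratio)
print(f"rough system-level energy saving: {saving:.0%}")   # about 33% here
```

Even under these crude assumptions, the roughly 60-fold difference in distance shows why keeping computation inside the stack is attractive, and why Rambus's design adds TSVs rather than widening the horizontal link.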

Rambus’s experimental PIM design adds an extra layer of silicon at the top of the HBM stack to do AI computation. To avoid the potential bandwidth bottleneck of the HBM’s central through-silicon vias, the design adds TSVs to connect the memory banks with the AI layer. Having a dedicated AI layer in each memory chip could allow memory makers to customize memories for different applications, argues Woo.

How quickly PIM is adopted will depend on how desperate the makers of AI accelerators are for the memory-bandwidth relief it provides. “Samsung has put a stake in the ground,” says Bob O'Donnell, chief analyst at Technalysis Research. “It remains to be seen whether [PIM] becomes a commercial success.”

This article appears in the January 2022 print issue as "AI Computing Comes to Memory Chips."