Tiniest ever 128Gbit NAND flash chip flaunted

A little bit of TLC from SanDisk, Toshiba


SanDisk and Toshiba have jointly developed the world's smallest 128Gbit NAND flash chip, using a 3-bit-per-cell (TLC) design and a 19nm process.

The chip, just 170.6mm² in area, is SanDisk's fifth generation of TLC technology. It uses something called All Bit-Line (ABL) programming, plus a raft of technology tweaks described in a SanDisk white paper (PDF), to reach a write speed of 18MB/sec and a 400Mbit/s transfer rate through a toggle-mode interface.

SanDisk's fifth generation 128Gbit TLC chip die

For example, the white paper says:

In our first generation X3, we reported 8MB/sec write performance. Scaling to 19nm degrades the performance significantly. Process and cell structure changes, such as Air Gap recovers some of the degradation but that is not enough. In this design we have a) adopted a 16KB page size to double the performance capability, b) temperature compensation scheme for 10 per cent performance boost, and an enhanced 3-step programming to reduce FG-FG coupling by 95 per cent. A combination of these design features and process/cell structure changes allowed us to reach 18MB/sec on this advanced 19nm technology node.

The chip is in production already, with SanDisk saying products using it began shipping late in 2011 – although it doesn't say which products. A 64Gbit version of the chip compatible with the MicroSD format has been developed and a production ramp has started.

The company says that its 128Gbit TLC chip has enough performance to replace 2-bit MLC chips in certain applications, and hints pretty clearly at smartphones, tablets and SSDs.

How cost-effective this is going to be is open to question. On the face of it, a 128GB SSD built with 128Gbit TLC chips should be significantly cheaper than one built from 2-bit MLC chips, as fewer dies are needed for the same capacity.
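As a back-of-the-envelope sketch of that die-count saving (our own arithmetic, not SanDisk's figures – the hypothetical MLC die is assumed to have the same 19nm die area, and so two-thirds of the bits):

```python
import math

TARGET_GBIT = 128 * 8  # a 128GB drive needs 1024 gigabits of raw flash

# 3-bit TLC die as announced: 128Gbit per die
tlc_dies = math.ceil(TARGET_GBIT / 128)

# Hypothetical 2-bit MLC die of the same area: 2/3 the bits, i.e. 256/3 Gbit
mlc_dies = math.ceil(TARGET_GBIT * 3 / 256)

print(tlc_dies, mlc_dies)  # 8 TLC dies vs 12 MLC dies
```

A third fewer dies per drive is where the headline cost advantage comes from – before endurance is taken into account.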

But TLC has much lower endurance – write cycles – than MLC; a fifth of MLC's cycles, or even fewer. SanDisk doesn't say what the endurance is, which is a bad sign. We could readily imagine that a 3-bit product needs so much more over-provisioning of flash than a 2-bit product that the cost advantages are significantly eroded.

Until SanDisk and Toshiba announce actual products using their TLC chips we can't know what the endurance statistics are going to be, and what the cost penalties are going to be to turn a low endurance number into an acceptable one through over-provisioning and, perhaps, better controller technology to get usable data out of TLC cells nearing the end of their life.
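To see why the endurance gap matters, here is a crude drive-lifetime model. All the figures below are assumptions for illustration only – SanDisk has not published endurance numbers for this die – with MLC pegged at a typical 3,000 P/E cycles and TLC at a fifth of that:

```python
def lifetime_tbw(usable_gb, pe_cycles, write_amplification):
    """Rough total terabytes writable before the flash wears out."""
    return usable_gb * pe_cycles / write_amplification / 1000

# 2-bit MLC: assumed 3,000 P/E cycles, modest over-provisioning (WA ~ 2.0)
mlc = lifetime_tbw(usable_gb=128, pe_cycles=3000, write_amplification=2.0)

# 3-bit TLC: assumed 750 P/E cycles, with heavier over-provisioning
# pulling write amplification down to ~1.5
tlc = lifetime_tbw(usable_gb=128, pe_cycles=750, write_amplification=1.5)

print(round(mlc), round(tlc))  # 192 vs 64 TBW
```

Even with the extra over-provisioning helping the TLC drive, on these assumed numbers it still sustains only a third of the MLC drive's total writes – which is why the controller technology matters so much.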

Anobit, the controller company acquired by Apple, says its signal processing-based technology can make TLC flash as long-lived and as reliable as MLC flash. If controller tech and over-provisioning can actually deliver acceptable and affordable endurance for TLC NAND-based SSDs and other enterprise flash drive formats, such as PCIe cards, then we're set to see a good bump up in flash product capacity over the next six to eighteen months as products hit the market.

SanDisk and Toshiba presented their fantastically small NAND chippery at the International Solid-State Circuits Conference (ISSCC) in San Francisco on 22 February. This follows on from OCZ demonstrating a TLC drive at CES 2012. Intel and Micron also have TLC technology and will probably introduce 20nm TLC chips later this year. Ditto Samsung. The flash market and its customers are going to get a lot of TLC in the next few months. ®

Biting the hand that feeds IT © 1998–2022