1000 Cores... And This Is NOT a GPU
One small step for man. 1000 cores for my next general-purpose CPU. UC Davis just built something amazing!
I wished for a Xeon E7-8890 v4 for my birthday. 24 cores! Sure, it costs over $7,000 USD, but what could be better than 24 cores? Well, as you might have guessed, I didn't get my birthday wish. But maybe that was good luck. It turns out that a group at the University of California, Davis, Department of Electrical and Computer Engineering has designed a CPU chip with 1,000 independent cores! They are calling it "KiloCore"; it computes 1.78 trillion instructions per second and contains 621 million transistors. The chip was fabricated by IBM using 32 nm CMOS technology. KiloCore was unveiled during the 2016 Symposium on VLSI Technology and Circuits on June 16 in Honolulu.
Some of you are going to say, "This isn't big news; I have an Nvidia card in my computer with nearly as many cores as that." But there is a big difference: the Nvidia chips are Single-Instruction-Multiple-Data (SIMD) GPGPUs (General-Purpose Graphics Processing Units). Their lineage goes back to special processing boards designed by Silicon Graphics in the 1980s. Before then, all engineering workstations used the CPU to render graphics. SGI's dedicated graphics processor boards took advantage of the fact that rendering can be decomposed into a massively parallel set of identical computations performed on individual pixels (or small groups of pixels). This produced speedups of tens or hundreds of times in the rendering pipeline and permitted real-time 3-D model rendering for the CAD/CAM systems of that era. But the paradigm for programming a GPU is different from programming a conventional CPU. The power comes from having all of the individual processing units run many instances of the exact same program, each with different input data. A very different way of thinking about the problem is required, and it can often be counterintuitive to conventional programming norms (e.g., more threads running smaller programs can be much faster). For the curious, here's a link to information about the Nvidia CUDA platform.
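To make that mindset concrete, here is a small sketch in plain Python (using NumPy as a stand-in for SIMD hardware; the function names and pixel data are my own illustration, not anything from the KiloCore or CUDA toolchains). The data-parallel version expresses the work as one small operation applied to every pixel at once, rather than one instruction stream walking a loop:

```python
import numpy as np

# Illustration only: the SIMD mindset means applying the same small
# "program" (add, then clamp) to many data elements simultaneously.

def brighten_scalar(pixels, amount):
    # Conventional CPU style: one sequential loop over every pixel.
    out = []
    for p in pixels:
        out.append(min(p + amount, 255))
    return out

def brighten_simd(pixels, amount):
    # Data-parallel style: conceptually one thread per pixel, all
    # executing the identical instructions on different data.
    return np.minimum(np.asarray(pixels) + amount, 255)

pixels = [10, 200, 250, 128]
assert brighten_scalar(pixels, 10) == [20, 210, 255, 138]
assert brighten_simd(pixels, 10).tolist() == [20, 210, 255, 138]
```

Both functions compute the same result; on real SIMD hardware, the second formulation is the one that scales across thousands of lanes.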
The KiloCore chip is an array of 1,000 CPUs. They can all run their individual programs independently, just like the multicore PCs (and phones, tablets, etc.) that we're all familiar with today. This could be a serious threat to the expensive and difficult-to-program GPGPUs we use today.
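That independence is the key difference. As a rough analogy in ordinary Python (threads standing in for cores; the tasks here are hypothetical examples, not KiloCore tooling), a MIMD machine lets each core run its own unrelated program at the same time, something a SIMD lane cannot do:

```python
import threading

# Illustration only: in a MIMD design, each "core" runs its own program
# on its own data; SIMD lanes must all follow one instruction stream.

results = {}

def count_words(text):
    # "Program" for core 1: text processing.
    results["words"] = len(text.split())

def checksum(data):
    # "Program" for core 2: a completely different computation.
    results["checksum"] = sum(data) % 256

t1 = threading.Thread(target=count_words, args=("one small step for man",))
t2 = threading.Thread(target=checksum, args=([10, 20, 300],))
t1.start(); t2.start()
t1.join(); t2.join()

print(results["words"], results["checksum"])  # 5 74
```

On KiloCore, each of the 1,000 cores could in principle hold a different small program like these, which is exactly the familiar multicore model scaled way up.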
Bevan Baas, Professor of Electrical and Computer Engineering at UC Davis, leads the KiloCore design team with the help of graduate students Aaron Stillmaker, Jon Pimentel, Timothy Andreas, Bin Liu, Anh Tran, and Emmanuel Adeagbo. They announced their project here.
Of course, even though the chip is a collection of CPUs, programming management issues remain, centered on getting all the programs and data into and out of these cores. Nothing is ever quite as simple as we would hope. But the team has completed a version 1 compiler and a program mapping tool to help automate program development for these chips.
And as if all of that weren't great enough, these chips are astonishingly power efficient, too. One of these chips, operating at 115 billion instructions per second, consumed only 0.7 watts of power. You could run it off an AA battery! I don't know about you, but I can hardly wait until these devices start showing up in my computers. I would not be surprised to see a next-generation supercomputer based on a chip similar to this one, and it won't have the power and cooling requirements of a small city.
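A quick back-of-the-envelope check puts those numbers in perspective (the AA battery capacity below is an assumed typical value for an alkaline cell, not a figure from the announcement):

```python
# Sanity-checking the quoted power figures.
POWER_W = 0.7           # chip power at the quoted operating point
RATE_IPS = 115e9        # 115 billion instructions per second
AA_CAPACITY_WH = 3.0    # assumed: ~2000 mAh x 1.5 V for a typical AA cell

picojoules = POWER_W / RATE_IPS * 1e12   # energy per instruction, in pJ
hours_on_one_aa = AA_CAPACITY_WH / POWER_W

print(f"{picojoules:.1f} pJ per instruction")    # 6.1 pJ per instruction
print(f"{hours_on_one_aa:.1f} hours on one AA")  # 4.3 hours on one AA
```

Roughly 6 picojoules per instruction, and about four hours of full-tilt computation from a single AA cell, which is why the battery claim is plausible.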
I know I've said this before in previous articles, but with the advent of massively multicore computers and quantum computers, it is time for us to seriously think about new algorithms. There are things we can actually do now that we couldn't dream of a decade ago. We are starting a new age of computing.
Opinions expressed by DZone contributors are their own.