One of the talks that I attended at the GPU Technology Conference was “Early Evaluation of the Jetson TK1 Development Board for Power and Performance” given by Jee Choi, a graduate research assistant at Georgia Tech.
Here are the slides for the talk.
I had been thinking about the implications of building large clusters of Tegra K1 based machines for GPU based computation in data centers. The idea is that computation per watt is the key metric for deployment. Broadly, over the last several years the question has shifted from how much raw computation is available to how much computation is available at a given price. What drives that price has changed over the years; now it correlates more closely with the power consumed than with the cost of the hardware itself. Computers in the cloud becoming commodities, and all that.
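To make the metric concrete, here is a minimal sketch of the performance-per-watt comparison a power-budgeted deployment cares about. The numbers are hypothetical placeholders for a small embedded board versus a discrete accelerator, not measured values from the talk.

```python
# Illustrative sketch: performance per watt as the deployment metric.
# All figures below are hypothetical, chosen only to show the arithmetic.

def perf_per_watt(gflops: float, watts: float) -> float:
    """Sustained performance per watt, in GFLOPS/W."""
    return gflops / watts

# Hypothetical embedded board: modest throughput, very low power.
embedded = perf_per_watt(gflops=300.0, watts=10.0)    # 30.0 GFLOPS/W

# Hypothetical discrete card: far higher raw throughput, much higher power.
discrete = perf_per_watt(gflops=4000.0, watts=235.0)  # ~17.0 GFLOPS/W

# Under a fixed data-center power budget, the ratio decides how much
# total computation you can deploy, not the raw GFLOPS of one device.
print(f"embedded: {embedded:.1f} GFLOPS/W, discrete: {discrete:.1f} GFLOPS/W")
```

Under these assumed numbers the embedded part wins on the ratio even though the discrete card wins on raw throughput, which is exactly the tension the talk examines.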
Jee Choi stated that there is a broad range of factors to consider when examining power consumption. Low power is just that: low power consumption. It does not mean that the power consumed per unit of computation is necessarily lower on a Jetson than on a Titan or a K40. There are many factors to consider, which are outlined in the talk.
Micro-benchmarks showed interesting results, and according to them the high performance computing idea of “Race to Halt” may not be valid. “Race to Halt” means computing everything as fast as possible and then dropping into a halt/idle state.
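The intuition can be shown with a toy energy model: total energy is active power times active time plus idle power times idle time. The power and time figures below are hypothetical, chosen only to illustrate why racing is not always the energy winner when idle power is not near zero.

```python
# Toy energy model for "Race to Halt" vs. running slower.
# All numbers are hypothetical, purely for illustration.

def energy_joules(active_power_w: float, active_time_s: float,
                  idle_power_w: float, idle_time_s: float) -> float:
    """Total energy over an interval = active energy + idle energy."""
    return active_power_w * active_time_s + idle_power_w * idle_time_s

DEADLINE_S = 10.0  # the work must finish within this window either way

# Strategy A (race to halt): burn high power briefly, then idle to the deadline.
race = energy_joules(active_power_w=10.0, active_time_s=4.0,
                     idle_power_w=2.0, idle_time_s=DEADLINE_S - 4.0)

# Strategy B: run slower at lower power for the whole window, no idle time.
slow = energy_joules(active_power_w=5.0, active_time_s=DEADLINE_S,
                     idle_power_w=2.0, idle_time_s=0.0)

print(f"race: {race:.0f} J, slow: {slow:.0f} J")  # race: 52 J, slow: 50 J
```

With a 2 W idle floor, the slow-and-steady strategy uses less total energy, which is consistent with the benchmarks' suggestion that race to halt is not automatically the right policy on this class of hardware.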
I saw it live, and I’ve listened to it again on the recording. Good stuff.