What once seemed a dead end in hardware development – enormous chips – is helping Cerebras Systems solve its clients’ deep learning bottlenecks, allowing them to train AI models in hours instead of weeks, says Andrew Feldman, CEO and cofounder of the AI computer startup.

According to Feldman, the company’s clients report that current AI systems take a long time to train: training runs for demanding neural networks can last up to six weeks, which limits how many ideas can be tested. But, Feldman says, the company’s new chip solves this pain point:

“The idea is to test more ideas. If you can [train a network] instead in 2 or 3 hours, you can run thousands of ideas.”

The gigantic chip sits inside the CS-1 computer, which is built almost entirely around this one superchip – Cerebras’s Wafer Scale Engine (WSE).

Some customers, such as Argonne National Laboratory, which specialises in supercomputing, have already received the new hardware.

Feldman said the breakthrough performance the company has achieved should start to become apparent this year, as more clients get the chance to test more ideas.