Thinking outside the graphics processing unit (GPU) box can reward you with faster time to solution for artificial intelligence (AI) and a lower total cost of ownership (TCO).
Intel reports that Intel® Xeon® Scalable processors account for 70 percent of AI data center inference. Several factors appear to drive this popularity. Many organizations are finding that x86 data-center processors with built-in accelerators can boost performance without specialized software workarounds or expensive hardware add-ons. In many cases, these AI-enhanced CPUs also enable an open, unified platform that delivers better performance for both AI and non-AI workloads while helping to lower TCO.