Article: Accelerating Endpoint Inferencing - FirstEDA


June 6th, 2019 – By: Kevin Fogarty


Chipmakers are getting ready to debut inference chips for endpoint devices, even though the rest of the machine-learning ecosystem has yet to be established.


What infrastructure does exist today sits mostly in the cloud, on edge-computing gateways, or in company-specific data centers, which most companies continue to operate. Tesla, for example, has its own data center, as do most major carmakers, banks, and virtually every Fortune 1000 company. And while some workloads have been moved into public clouds, the majority of data is staying put for privacy reasons.