Google Coral USB Accelerator

Datasheet | Get started guide

The Coral USB Accelerator adds an Edge TPU coprocessor to your system. It includes a USB connector you can plug into a host computer to perform accelerated ML inferencing. The on-board Edge TPU is a small ASIC designed by Google that provides high-performance ML inferencing at a low power cost.

Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at almost 400 FPS, in a power-efficient manner. See more performance benchmarks.

Supports all major platforms
Connects via USB to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10.

Supports TensorFlow Lite
No need to build models from the ground up: TensorFlow Lite models can be compiled to run on the Edge TPU.

Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.

Features:
– Models are built using TensorFlow
– Fully supports MobileNet and Inception architectures, though custom architectures are possible
– Compatible with Google Cloud

Specifications:
– ML accelerator: Google Edge TPU coprocessor; 4 TOPS (int8); 2 TOPS per watt
– Connector: USB 3.0 Type-C* (data/power)
– Dimensions: 65 mm x 30 mm
* Compatible with USB 2.0, but inferencing speed is slower.

System requirements:
– One of the following operating systems:
  – Linux Debian 6.0 or higher, or any derivative thereof (such as Ubuntu 10.0+), on an x86-64 or ARM64 system architecture
  – macOS 10.15, with either MacPorts or Homebrew installed
  – Windows 10
– One available USB port (for the best performance, use a USB 3.0 port)
– Python 3.5, 3.6, or 3.7
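To illustrate what running a compiled TensorFlow Lite model on the accelerator looks like, here is a minimal classification sketch using Google's `pycoral` Python API. It assumes the device is attached, the `pycoral` library is installed, and that you have a model already compiled for the Edge TPU; the file names (`mobilenet_v2_edgetpu.tflite`, `labels.txt`, `parrot.jpg`) are placeholders for your own files, not files shipped with the device.

```python
# Sketch of Edge TPU inference with pycoral (requires an attached USB
# Accelerator; model, label, and image file names are placeholders).
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.dataset import read_label_file
from pycoral.adapters import common, classify
from PIL import Image

# Load a .tflite model that was compiled for the Edge TPU.
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to the size the model expects, then run inference.
image = Image.open('parrot.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top prediction with its score.
labels = read_label_file('labels.txt')
for c in classify.get_classes(interpreter, top_k=1):
    print(labels.get(c.id, c.id), c.score)
```

An uncompiled TensorFlow Lite model will still run through this API, but entirely on the host CPU; only models passed through the Edge TPU Compiler execute on the accelerator.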