| Parameter | Value |
| --- | --- |
| 3D sensing technology | Parallel Structured Light |
| Output data | 3D points (x, y, z), normals (x, y, z), depth map (z), color image (RGB), texture (grayscale intensity), confidence (float) |
| Dimensions | S & S+: 308 × 68 × 85 mm; M & M+: 428 × 68 × 85 mm; L & L+: 628 × 68 × 85 mm |
| Baseline | 550.0 mm |
| Weight | 1150 g |
| Temperature working range | Full: 0–40 °C; optimal: 22–25 °C |
| Power | PoE or 24 V |
| Data connection | 1 Gbit Ethernet |
| Processing unit | NVIDIA Pascal™ architecture GPU with 256 CUDA cores |

| Model | Scanning range (mm) | Optimal scanning distance / sweet spot (mm) | Scanning area at sweet spot (mm) |
| --- | --- | --- | --- |
| S | 366–558 | 444 | 424 × 312 |
| S+ | 630–1574 | 907 | 828 × 610 |
| M | 497–939 | 653 | 588 × 444 |
| M+ | 630–1574 | 907 | 841 × 610 |
| L | 778–3034 | 1252 | 1027 × 836 |
| L+ | 1300–3780 | 1944 | 1656 × 1288 |
Operation mode: Scanner (static scene)

| Parameter | Value |
| --- | --- |
| Resolution | 2 million 3D points (1680 × 1200) |
| Maximum FPS | 2 fps |

| Model | Point-to-point distance at sweet spot (mm) | Calibration accuracy, 1 σ (mm) | Temporal noise, 1 σ (mm) |
| --- | --- | --- | --- |
| S | 0.250 | 0.150 | 0.050 |
| S+ | 0.520 | 0.500 | 0.100 |
| M | 0.370 | 0.250 | 0.050 |
| M+ | 0.520 | 0.300 | 0.050 |
| L | 0.720 | 0.900 | 0.100 |
| L+ | 1.150 | 1.500 | 0.400 |
Operation mode: Camera (dynamic scene)

| Parameter | Value |
| --- | --- |
| Resolution | 2 million 3D points (1680 × 1200) |
| Maximum FPS | 20 fps |

| Model | Point-to-point distance at sweet spot (mm) | Calibration accuracy, 1 σ (mm) | Temporal noise, 1 σ (mm) |
| --- | --- | --- | --- |
| S | 0.370 | 0.300 | 0.100 |
| S+ | 0.760 | 1.000 | 0.150 |
| M | 0.550 | 0.500 | 0.100 |
| M+ | 0.550 | 0.500 | 0.100 |
| L | 1.050 | 1.250 | 0.100 |
| L+ | 1.680 | 2.050 | 0.550 |
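To make the tables above easier to use programmatically, here is a minimal sketch (in Python, with function and variable names of our own choosing) that encodes the per-model scanning ranges and Scanner-mode calibration accuracies, then filters the lineup against an application's working distance and error budget:

```python
# Per-model specs taken from the tables above.
# Scanning range in mm; calibration accuracy (1 sigma, Scanner mode) in mm.
SPECS = {
    "S":  {"range": (366, 558),   "accuracy": 0.150},
    "S+": {"range": (630, 1574),  "accuracy": 0.500},
    "M":  {"range": (497, 939),   "accuracy": 0.250},
    "M+": {"range": (630, 1574),  "accuracy": 0.300},
    "L":  {"range": (778, 3034),  "accuracy": 0.900},
    "L+": {"range": (1300, 3780), "accuracy": 1.500},
}

def candidate_models(distance_mm, max_error_mm):
    """Return models whose scanning range covers the working distance
    and whose 1-sigma calibration accuracy meets the error budget."""
    return [
        name for name, spec in SPECS.items()
        if spec["range"][0] <= distance_mm <= spec["range"][1]
        and spec["accuracy"] <= max_error_mm
    ]
```

For example, a part 700 mm away with a 0.4 mm error budget rules out the S (out of range) and the S+ (accuracy too coarse), leaving the M and M+ as candidates.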
The need for automation is a pressing demand across logistics and e-commerce companies. Automation answers the growing volume of online shopping and international shipping, and it frees companies from traditional picking and handling, work that is repetitive, physically strenuous, and error-prone. With automation, human operators can move to safer and more complex tasks, increasing their value while improving throughput, accuracy, and reliability. Yet even seemingly simple tasks such as picking and placing objects can be challenging for robots in industrial environments, hence the need for sensors that provide high-performance vision for detecting, picking, and placing objects accurately.
3D vision camera systems save manual labor time and relieve humans of work in hazardous environments. They increase efficiency by reducing labor costs and improving product quality. Machine vision also allows for longer operating hours, enhancing productivity and raising revenues, and it strengthens in-house production capability by reducing the need for outsourcing, saving additional costs and tightening quality control. Continuous inspection by these cameras ensures accuracy, making mandatory review processes and problem tracing easier.
The MotionCam-3D Color captures high-accuracy, high-resolution 3D area snapshots of large work areas that may be in arbitrary motion. It is based on Photoneo's patented CMOS sensor and Parallel Structured Light technology, which, unlike sequential structured light devices, can scan dynamic scenes. Each Photoneo 3D sensor performs its computations on board and delivers the results in several data formats, such as point cloud, normals, depth map, and confidence map. The data is transferred to a computer running the driver software over a 1 Gbps Ethernet connection.
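As a rough illustration of how a depth map (z) relates to the 3D point output, the sketch below back-projects a depth image through a standard pinhole camera model. The intrinsic parameters here are illustrative placeholders, not MotionCam-3D calibration values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (z in mm) to an N x 3 point cloud using
    the pinhole camera model. fx, fy, cx, cy are intrinsics in pixels;
    they are illustrative here, not actual device calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # lateral offset scales with depth
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no measurement
```

A real device performs this conversion on board, which is why the sensor can stream a ready-made point cloud rather than raw images.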
The camera is available in several versions. The S version suits small scenes that require the highest possible accuracy. The S+ shares the same body size but offers a larger scanning volume and an extended scanning range. For medium-sized applications, the M model works well thanks to its significantly larger scanning range, while the more accurate M+ version can be used for both medium and large projects. For large projects beyond what the other models can handle, the L version provides a significant upgrade, and the L+ version offers the longest scanning range of all, for the largest objects and volumes.
Principle
These Photoneo 3D sensors are measurement devices based on the optical triangulation principle: the projection unit casts modulated light onto the scanned object, the camera captures the reflection, and the object's distance is computed from it. In the MotionCam-3D (Color), scenes may be in arbitrary motion or vibration because the device is powered by the patented Parallel Structured Light™ principle, which delivers high-quality 3D reconstruction of dynamic scenes from structured light scanning. Thanks to the clever sensor design, the device acquires the scene in a single snapshot rather than through the sequential exposures of standard image sensors. With the Parallel Structured Light method, it is as if the 3D scene freezes in time.
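The triangulation relationship can be sketched numerically: depth is proportional to focal length and baseline, and inversely proportional to the measured disparity between where the pattern is projected and where it is observed. In the sketch below, only the 550 mm baseline comes from the spec table; the focal length and disparity are made-up illustrative values:

```python
def triangulate_depth(focal_px, baseline_mm, disparity_px):
    """Classic triangulation for stereo / structured light:
    depth = f * b / d. Focal length and disparity in pixels,
    baseline in mm, result in mm."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Illustrative: a 2000 px focal length and 1000 px disparity with the
# 550 mm baseline from the spec table give a depth of 1100 mm.
z = triangulate_depth(2000.0, 550.0, 1000.0)
```

Because depth error grows with the square of distance for a fixed disparity uncertainty, a long baseline such as 550 mm helps keep accuracy high at range.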
Since the system depends on the reflection of projected light, the best objects for scanning have rough surfaces, such as wood, rubber, and paper. It also performs well on matte-finished objects such as sand-blasted aluminum and cast iron, as well as on unpolished plastics, fruit, skin, textiles, and plants.
The device may not deliver comparable results on highly reflective materials such as mirrors and polished metals, or on liquids and transparent objects. Smoke and dispersed particles can also degrade the 3D data.
Scanning Performance
The scanning range is specified by two values, the minimum and maximum distance. The shortest minimum, 366 mm, belongs to the S version, while the L+ version reaches as far as 3780 mm. The best results come from scanning at the sweet spot, the optimal scanning distance corresponding to the focus distance of the 2D camera; it ranges from 444 mm on the S version to 1944 mm on the L+. The scanning area at the sweet spot ranges from 424 × 312 mm on the S version to 1656 × 1288 mm on the L+.
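These numbers are roughly self-consistent: dividing the scanning area at the sweet spot by the 1680 × 1200 resolution approximates the point-to-point distance in the spec tables, to within about 15% (the spec values run slightly higher, presumably because coverage is not perfectly uniform). A quick sanity check, with a function name of our own choosing:

```python
def approx_p2p(width_mm, height_mm, cols=1680, rows=1200):
    """Rough point-to-point spacing at the sweet spot: scanning area
    divided by sensor resolution, taking the coarser of the two axes."""
    return max(width_mm / cols, height_mm / rows)

# For the S model (424 x 312 mm at the sweet spot) this gives about
# 0.26 mm, close to the 0.250 mm point-to-point distance in the spec.
spacing_s = approx_p2p(424, 312)
```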
Operation Mode
MotionCam-3D (Color) devices can switch between Camera (dynamic) mode and Scanner (static) mode. Camera mode captures moving objects without blur even when the device itself is moving, so it can be mounted on moving structures or robotic arms and still capture dynamic scenes. It offers a resolution of 2 million 3D points (1680 × 1200) at up to 20 fps. The point-to-point distance at the sweet spot ranges from 0.370 mm on the S version to 1.680 mm on the L+. Calibration accuracy (1 σ) runs from 0.300 mm (S) to 2.050 mm (L+), and temporal noise (1 σ) from 0.100 mm (S) to 0.550 mm (L+).
In Scanner mode, both the scene and the device must be static. It still offers a resolution of 2 million 3D points (1680 × 1200), but at a maximum of only 2 fps. The point-to-point distance at the sweet spot ranges from 0.250 mm (S) to 1.150 mm (L+), the calibration accuracy (1 σ) from 0.150 mm (S) to 1.500 mm (L+), and the temporal noise (1 σ) from 0.050 mm (S) to 0.400 mm (L+). All devices also support a 2D mode for 2D texture output.
PhoXi Control
For additional manual control, Photoneo devices ship with the PhoXi Control application. Its graphical user interface (GUI) is used to set up the scanning environment, configure advanced scanner parameters, and visualize the output; it also serves as a powerful debugging tool. The Application Programming Interface (API) triggers the same behavior as the GUI, executing the scan, returning the output, and displaying it simultaneously in the GUI. The API is the central platform for building custom applications on Photoneo 3D sensors. The device performs all computations on board, which facilitates development and reduces the computing demands on the host.
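A typical application flow, connect, trigger, retrieve, might look like the sketch below. Note that this is a hypothetical stand-in, not the actual PhoXi Control API; the real class and method names differ and should be taken from Photoneo's documentation:

```python
# Hypothetical workflow sketch only: the Scanner class and its methods
# are illustrative inventions, NOT the PhoXi Control API.
class Scanner:
    """Stand-in for a device handle obtained from the control software."""
    def __init__(self, ip):
        self.ip = ip
        self.connected = False

    def connect(self):
        self.connected = True          # real code: open the 1 Gbit Ethernet link
        return self

    def trigger_scan(self):
        if not self.connected:
            raise RuntimeError("call connect() first")
        # On a real device the projector fires, the on-board GPU computes
        # the 3D data, and the frame is streamed back over Ethernet.
        return {"points": [], "depth_map": [], "confidence": []}

frame = Scanner("192.168.1.10").connect().trigger_scan()
```

The point of the pattern is that the host only orchestrates; all heavy computation stays on the sensor, as described above.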
Wevolver 2023