FOR ADVANCED ROBOTICS & AUTOMATION
We provide businesses the opportunity to develop their own robotic applications by making standalone perception systems modular, accessible and reliable.
Autonomy is hard to scale today because the perception module alone demands a large perception and embedded-engineering team.
Our modular sensor hardware and extensible software speed up the development process for robot manufacturers and budding enthusiasts alike.
This lowers the barriers to adopting vision technologies, so that even the smallest robotics companies can enjoy cutting-edge perception technology from Day 1.
Visual sensor calibration is often unnecessarily complicated and time-consuming, and technical errors can occur unpredictably. Engineers often have to visit the site to conduct routine maintenance or repairs.
This mundane work is now handled automatically by Vilota's cloud monitoring and calibration packages. Your engineers can focus on what they do best, and your company saves operating time and cost.
Sensors are commonly built to work in isolation, which creates data fragmentation and perception blind spots on your robots.
Our sensors are designed to operate in a network environment and communicate with one another to share the temporal and spatial data they collect. This allows for real-time, robust 360° perception of any environment.
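As a toy illustration only (the data format and sensor names below are assumptions, not Vilota's actual protocol), sharing temporal data between networked sensors comes down to aligning observations by timestamp before merging their fields of view:

```python
# Toy sketch of fusing timestamped observations from two networked
# sensors: align by nearest timestamp, then merge fields of view.
# The dict format and 'fov_deg' field are illustrative, not a real API.

def nearest(obs_list, t_ns, tol_ns=50_000_000):
    """Return the observation closest in time to t_ns, within tol_ns."""
    best = min(obs_list, key=lambda o: abs(o["t_ns"] - t_ns))
    return best if abs(best["t_ns"] - t_ns) <= tol_ns else None

front = [{"t_ns": 100_000_000, "fov_deg": (-90, 90)}]
rear = [{"t_ns": 110_000_000, "fov_deg": (90, 270)}]

match = nearest(rear, front[0]["t_ns"])
if match is not None:
    # Combined coverage spans both fields of view: 360 degrees here.
    coverage = (front[0]["fov_deg"], match["fov_deg"])
    print(coverage)  # -> ((-90, 90), (90, 270))
```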
High level understanding of digital images and videos
Computation and processing right at the source of data acquisition
In-house know-how and leading technologies in combining multiple streams of sensory data
To realise our vision of democratising 3D perception, we offer both software and hardware that bring state-of-the-art 3D vision to your robots and businesses.
A hardware-neutral, API-level product that performs computation onboard to deliver immediately actionable sensory data
API offered:
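For illustration only, "immediately actionable sensory data" could take a shape like the sketch below. The `PoseEstimate` type and `on_pose` callback are hypothetical names invented for this example, not Vilota's actual API:

```python
from dataclasses import dataclass

# Hypothetical message type: what "immediately actionable sensory data"
# from an onboard perception API might look like. Not Vilota's actual API.
@dataclass
class PoseEstimate:
    timestamp_ns: int      # time of capture, in nanoseconds
    position_m: tuple      # (x, y, z) in metres, world frame
    quaternion: tuple      # (w, x, y, z) orientation
    confidence: float      # 0.0 .. 1.0

def on_pose(pose: PoseEstimate) -> str:
    """Toy consumer: act on the estimate without touching raw images."""
    if pose.confidence < 0.5:
        return "hold"      # low confidence: stop and re-localise
    return "navigate"      # good estimate: hand it to the planner

sample = PoseEstimate(1_700_000_000_000, (1.2, 0.4, 0.0), (1.0, 0.0, 0.0, 0.0), 0.9)
print(on_pose(sample))  # -> navigate
```

The point of an API-level product is that the application only ever sees this kind of compact, ready-to-use estimate; the camera drivers and perception pipeline stay behind the interface.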
Our cameras are collectively known as OmniSense. Our goal for the end-state OmniSense is to include a communication protocol on top of onboard compute and AI chips. This allows for visual navigation in challenging environments, where multi-sensor redundancy can be achieved with vision sensors mounted on both mobile objects and static infrastructure.
Vision Kit Lite is credit-card sized and suitable for mounting on drones and small robots. It covers 360 degrees for navigation, tracking and monitoring purposes, and comprises our first-generation OmniSense and a compute system with an embedded Perceptive Kernel.
Our Streaming Dev Kit contains edge compute and vision sensors, and is suited to robotics researchers and developers who want hands-on experience with our Perceptive Kernel MovTrack.
The Dev Kit streams over a low-latency network, with data processing done onboard.
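Because processing happens onboard, only compact results need to cross the network. As a sketch under stated assumptions (the little-endian layout of a uint64 timestamp plus seven 32-bit floats is invented for illustration, not the Dev Kit's actual wire format), a fixed-size pose packet suited to low-latency streaming could be built with Python's standard `struct` module:

```python
import struct

# Illustrative wire format for a compact pose message, the kind of
# fixed-size packet a low-latency stream might carry. The layout
# (little-endian: uint64 timestamp + 7 float32 fields) is an
# assumption for this sketch, not the Dev Kit's actual protocol.
POSE_FMT = "<Q7f"  # timestamp_ns, x, y, z, qw, qx, qy, qz

def encode_pose(ts_ns, xyz, quat):
    """Pack one pose into a fixed-size binary payload."""
    return struct.pack(POSE_FMT, ts_ns, *xyz, *quat)

def decode_pose(payload):
    """Unpack a payload back into (timestamp, position, quaternion)."""
    ts_ns, x, y, z, qw, qx, qy, qz = struct.unpack(POSE_FMT, payload)
    return ts_ns, (x, y, z), (qw, qx, qy, qz)

packet = encode_pose(1_700_000_000_000, (1.0, 2.0, 0.5), (1.0, 0.0, 0.0, 0.0))
print(len(packet))  # -> 36  (8-byte timestamp + 7 * 4-byte floats)
ts, xyz, quat = decode_pose(packet)
```

A 36-byte payload fits comfortably in a single UDP datagram, which is one way such a stream could keep latency low.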
You may also drop us a request for our brochure & factsheet.