Color camera sensors often operate by placing a 2×2 pattern of color filters in front of every 2×2 block of monochrome pixels in the sensor (often duplicating green to fill all four positions). This is shown in the figure below, where each filter rejects all but a single frequency range of light. There is smooth fall-off between […]
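The 2×2 layout described above is the classic Bayer mosaic. As a minimal sketch of the idea (not from the original article; the RGGB ordering, NumPy usage, and function name are illustrative assumptions), the following shows how a full-color image would be sampled through such a filter pattern, each pixel keeping only the channel its filter passes:

```python
import numpy as np

def make_bayer_mosaic(rgb):
    """Sample an H x W x 3 RGB image through an assumed RGGB 2x2 filter pattern.

    Filter layout per 2x2 block (green duplicated in two positions):
        R G
        G B
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters (even rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters (odd rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters
    return mosaic

# Example: mosaic a small random image and inspect the raw sensor values.
rgb = np.random.rand(4, 4, 3)
print(make_bayer_mosaic(rgb))
```

Recovering a full-color image from such a mosaic is the demosaicing step, which interpolates the two missing channels at each pixel.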
Category: Training Materials
Multi-Camera Video Stitching Solutions
Stitching together video from multiple overlapping cameras can provide both a wide field of view and high resolution compared to a single camera viewing the same total region. During the development of multi-camera video systems for several clients and for internal ReliaSolve projects, the following solutions have proven useful. Synchronization: When the camera system […]
The Thing About Video
There are rules of thumb for data processing that apply across a wide range of data sources and analysis techniques. These let us ignore some costs and apply abstractions that simplify workflow and enable clean programming interfaces and orthogonal composition of software components. The thing about video is that it breaks the rules. Here are […]
VR Concepts Illustrated Using OSVR
Published as chapter 32 of the book VR Gems (William Sherman, editor), VR_Concepts_Illustrated_Using_OSVR describes the technology required to develop effective immersive environment systems. The immersive nature of virtual and augmented reality systems engages the human visual system in ways that require a wider field of view and lower latency than other 3D computer graphics systems to provide artifact-free rendering […]
Scientific Visualization Workshop
ReliaSolve offers a full-day hands-on training workshop in scientific visualization, including 2D scalar fields, 3D scalar fields (volumes) and vector visualization. It includes both perceptual-psychology-based theoretical content drawn from the UNC Visualization in the Sciences graduate course and hands-on visualization using freely-available toolkits. Participants receive USB drives containing all materials, software, and sample data sets. […]
Model-based SLAM
Using insights from the human visual system, this linked document describes thoughts on making use of built-in inertial sensors and the rich texture prevalent in natural scenes to drive the construction of a multi-scale model from live drone video. The basic approach is to construct a 3D model of the environment in which a […]