This site describes resources for building, assembling, and operating a cage-mounted Kiosk System for testing, training, and enriching nonhuman primates.
The Kiosk Station (KS-1) is described in detail in the article
Womelsdorf, Thomas, Neumann, Watson, Banaie Boroujeni, Hassani, Parker, Hoffman (2021) A Kiosk Station for the Assessment of Multiple Cognitive Domains and Enrichment of Monkeys.
The resources described below (and in the article) include (1) technical drawings/sketches, (2) a bill-of-materials overview with a manual for Kiosk assembly and cable management, (3) I/O hardware details for connecting a computer to reward pumps, as well as (4-6) overviews of the behavioral control suite with example experiments and analysis scripts.
Kiosk Station KS-1 hardware information and assembly guide: The Kiosk Station KS-1 consists of three parts; technical drawings for most components are on GitHub. The Kiosk sits in a frame that is custom tailored to a typical apartment cage. The drawings show a frame that is flexibly mounted into a cage made by primateproducts.com. The Kiosk hooks and screws into this frame with ease; using a hydraulic trolley, a single person can mount and unmount a Kiosk. A manual and assembly guide with all details are available online at:
I/O SynchBox: USB-connected input/output control for multiple Kiosk devices:
The I/O SynchBox is a USB-connected device designed to synchronize and collect data from several pieces of equipment. It allows triggering reward pumps or synchronizing cameras, and it supports connecting a joystick for behavioral control. The I/O box firmware and design are freely available in a GitHub repository that also contains a manual and tools for rebuilding it:
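As a minimal sketch of how a host computer might drive such a USB device over a serial link: the actual SynchBox command set is defined in its manual and firmware on GitHub, so the command string and pump-channel range below are purely illustrative placeholders, not the real protocol.

```python
def make_reward_command(pump_id: int, duration_ms: int) -> bytes:
    """Encode a hypothetical 'trigger reward pump' command as an ASCII line.

    The 'RWD <pump> <ms>' format is an assumption for illustration only;
    consult the SynchBox manual for the real command syntax.
    """
    if not 0 <= pump_id <= 3:
        raise ValueError("pump_id out of range")
    if duration_ms <= 0:
        raise ValueError("duration_ms must be positive")
    return f"RWD {pump_id} {duration_ms}\n".encode("ascii")


def send_command(port, cmd: bytes) -> None:
    """Write one command to an already-open serial port object."""
    port.write(cmd)
```

In practice the port would be opened with a library such as pySerial, e.g. `serial.Serial("/dev/ttyUSB0", 115200)`, and passed to `send_command`.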
NeuroCam: Synchronized multi-camera experiment-monitoring system:
Animal performance in the Kiosk Station is monitored from outside the housing area via a web interface that configures, starts, and preprocesses up to 5 cameras using the NeuroCam. The NeuroCam is a set of scripts designed to aggregate video footage from multiple cameras and to let a user monitor and annotate that footage in real time. Frame alignment is performed in post-processing, and video footage is saved frame-by-frame to allow later analysis without motion-compression artifacts. The NeuroCam's software and hardware are documented in a user guide with code and instructions for rebuilding at:
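To illustrate the idea of post-hoc frame alignment across cameras, here is a minimal sketch that matches each frame of a reference camera to the nearest-in-time frame of another camera by timestamp. This is not the NeuroCam's actual alignment code, just the general timestamp-matching technique its frame-by-frame saving enables.

```python
from bisect import bisect_left


def align_frames(ref_times, cam_times):
    """For each reference-camera timestamp, return the index of the
    temporally nearest frame from another camera.

    Both inputs are lists of capture timestamps in ascending order,
    as would be logged alongside frame-by-frame saved footage.
    """
    pairs = []
    for t in ref_times:
        i = bisect_left(cam_times, t)
        # Compare the two neighbouring frames and keep the nearer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(cam_times)]
        j = min(candidates, key=lambda j: abs(cam_times[j] - t))
        pairs.append(j)
    return pairs
```

Aligning by recorded timestamps in post-processing avoids having to hardware-lock all cameras to a common shutter clock.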
Unified Suite for Experiments (USE): Temporally precise behavioral control with 3D-rendered objects: The Unified Suite for Experiments (USE) is a freely available psychophysical suite controlling cognitive tasks with touch, joystick, or gaze.
The technical and programming details are described in detail in the paper
Watson, Voloh, Thomas, Hasan & Womelsdorf (2019) USE: An integrative suite for temporally-precise psychophysical experiments in virtual environments for human, nonhuman, and artificially intelligent agents. J Neurosci Methods 326:108374.
USE is programmed in the Unity3D framework (using C#), which uses an advanced video engine for fast 3D rendering and is popular in the computer game world. Excellent Unity-specific tutorials are available, e.g., at https://unity3d.com/learn/tutorials. Unity3D is freely available online here (https://store.unity.com).
How the USE suite is organized to control cognitive tasks, present visual stimuli, register subject responses, and provide reward feedback is described in tutorials available on GitHub at
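The stimulus-response-feedback organization described above can be sketched as a simple trial state machine. USE itself implements this in C# within Unity; the Python below is only a language-agnostic illustration of the control flow, with all names invented for the example.

```python
from enum import Enum, auto


class TrialState(Enum):
    FIXATION = auto()
    STIMULUS = auto()
    RESPONSE = auto()
    FEEDBACK = auto()


def run_trial(correct_target, get_response, on_reward):
    """Step through one trial: show the stimulus, collect a response
    (touch, joystick, or gaze), and deliver reward feedback if correct.

    get_response and on_reward are injected callables standing in for
    the suite's input devices and reward hardware.
    """
    for state in (TrialState.FIXATION, TrialState.STIMULUS):
        pass  # a real suite would render each epoch for a set duration
    response = get_response()
    correct = response == correct_target
    if correct:
        on_reward()  # e.g., pulse a reward pump via the I/O hardware
    return correct
```

Separating task logic from input and reward callables like this is what lets one control suite drive many different tasks and devices.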