A flexible user-interface for audiovisual presentation and interactive control in neurobehavioral experiments
Christopher T Noto, Suleman Mahzar, James Gnadt, Jagmeet S Kanwal
Email: [email protected]
CTN: Participated in experimental design, surgical procedures, writing scripts, data acquisition and analysis, and writing the manuscript.
SM: Participated in experimental design, assisted with surgical procedures, writing scripts, and data acquisition.
JG: Designed original experimental set up, participated in scientific discussions, animal acquisition, care and surgeries.
JSK: Conceptualized and participated in experimental design, data analysis and writing the manuscript.
All authors read and agreed to the final version of the manuscript.
Competing interests: No competing interests have been disclosed.
Accepted 2013 May 16; Collection date 2013.
This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Data associated with the article are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).
Version Changes
Updated. Changes from Version 1
We thank the referees for their reviews of our manuscript and we have addressed the issues raised in the revision. Principally, we address issues relating to the sampling rates for specific analog channels collected during our experiments, hardware specifications (ADC conversion, sequencer stepping rate), and the general pitfalls inherent in using video monitoring systems in neurophysiology.
A major problem facing behavioral neuroscientists is a lack of unified, vendor-distributed data acquisition systems that allow stimulus presentation and behavioral monitoring while recording neural activity. Numerous systems perform one of these tasks well independently, but to our knowledge a useful package with a straightforward user interface does not exist. Here we describe the development of a flexible, script-based user interface that enables customization for real-time stimulus presentation, behavioral monitoring and data acquisition. The experimental design can also incorporate neural microstimulation paradigms. We used this interface to deliver multimodal, auditory and visual (images or video) stimuli to a nonhuman primate and acquire single-unit data. Our design is cost-effective and works well with commercially available hardware and software. It incorporates a script providing high-level control of data acquisition via a sequencer running on a digital signal processor, enabling behaviorally triggered control of the presentation of visual and auditory stimuli. Our experiments were conducted in combination with eye-tracking hardware. The script, however, is designed to be broadly useful to neuroscientists who may want to deliver stimuli of different modalities using any animal model.
Introduction
In neurophysiological research, correlating neural signals driven by stimulus presentation with behavioral responses needs to be completed within a limited time frame, generally less than 2 hours when working with non-human primates. This requires effective and efficient control of stimulus presentation, data acquisition, and monitoring of behavior for reward and task progression. Behavioral neuroscientists must continuously struggle both to keep up with technological advances that accelerate data throughput and to customize stimulus delivery and data acquisition systems to do cutting-edge research. This adds to the burden of labor-intensive electrophysiological recordings from single or multiple neurons in awake-behaving animals, which nevertheless continue to be one of the most reliable and useful ways to understand neural computations and function. Stimulus presentation paradigms may also need to be routinely modified to conform to the goals of an experiment. All of this has to be accomplished under the constraint of maintaining the experimental animal in a healthy condition until the experiment has run its course, which may take from weeks to months. Moreover, user requirements, dictated by the scientific data and state of knowledge, are a moving target that makes it difficult for for-profit vendors to meet all the needs of their customers. Laboratory heads are frequently faced with the choice of either hiring a permanent programmer, at the cost of tens of thousands of dollars in annual salary, to create and maintain a new program, or abandoning a particular line of experiments that scientifically may be the right direction in which to proceed. Even the choice of hardware and software packages that laboratory personnel can interface with and manipulate easily depends largely upon the available expertise of those working in the laboratory, and frequently shifts with the departure of key personnel.
To effectively meet our own needs for the study of gaze control in response to the presentation of audiovisual stimuli, we developed a user-interface that provides a template for others facing a similar challenge. Specifically, we describe an experimental design that uses a custom-written script to control communication between the Presentation software package (Neurobehavioral Systems, Inc., Albany, CA) and data acquisition hardware (Cambridge Electronic Design, Ltd., Cambridge, UK) running with vendor-provided Spike2 software. Each package runs independently on a separate personal computer ( Figure 1A ).
Figure 1. System hardware connectivity and experiment flow chart.
A . The system is divided into three levels: Data Acquisition and Behavioral Monitoring, Stimulus Presentation Control, and Stimulus Presentation. B . Typical flow of experiment based on performance of subjects. C . Logical flow chart for behavioral tasks used in the training and testing paradigms.
Rapid eye movements, or saccades, channel important visual information into the association and prefrontal cortex where it is integrated with previous knowledge to take decisive action 1 – 3 . Much research effort has gone into validating the role of the superior colliculus (SC) 4 – 7 and cortical areas, such as the frontal eye fields 2 and lateral intraparietal areas 1 , in the control of eye movements during visual saccades. Less is known about the kinematic properties and control of auditory priming of saccades 8 , 9 and virtually nothing is known about the modulation of visual saccades by contextual auditory information. Clearly, sensory recall of auditory objects as well as error-correction and decision mechanisms underlying memory-guided saccade initiation or head orientation need to be invoked 9 – 11 . It is less clear, however, where in the brain these two functionally distinct mechanisms might converge 12 , 13 .
We used our newly developed audiovisual presentation and control scripts to acquire new data on responses to auditory and visual stimuli in a reward-driven behavioral task that involved tracking eye movements in a nonhuman primate. Our goal was to facilitate the exploration of neurons that integrate multimodal sensory information from naturalistic stimuli to elicit adaptive behavior. As a first step, we trained monkeys to associate relatively novel sounds, including animal vocalizations, with images that were also considered novel for monkeys maintained in a captive environment. To begin to explore the pontine circuitry creating such associations as well as eye movements, we first used species-specific calls to identify complex auditory stimulus-driven neurons in the inferior colliculus (IC), and naturalistic images to identify visual and saccade-driven neurons in the SC. This narrowed our search space for finding audiovisual neurons, located potentially at the boundary region between the IC and the SC, and for testing whether reward modulated their activity. We focused on eye movements as the adaptive behavior since these can be accurately tracked, provide a rapid response, and are controlled by neural activity within the SC 4 – 7 .
Materials and methods
We used two software packages, Presentation (Neurobehavioral Systems, Inc., Albany, CA) and Spike2 (Cambridge Electronic Design, Ltd.), in conjunction with data acquisition hardware (Power1401 plus, Cambridge Electronic Design, Ltd.), to control stimulus presentation based on our subject’s behavior. This required communication between the Presentation and Spike2 software via serial and parallel ports to either advance or terminate a subject’s task in real-time, based on correct or incorrect behavioral responses, respectively. Data acquisition at a relatively high sampling rate (0.1 ms resolution) by the Power1401 was performed concurrently with stimulus presentation and behavioral monitoring. Our design integrates hardware that is either routinely available in a neurophysiology laboratory or commonly available from vendors ( Table 1 and Figure 1A ). Presentation software is readily available from Neurobehavioral Systems for online download ( http://www.neurobs.com ). We chose Presentation because of its large, comprehensive scripting language and intuitive user interface, and because the software allows a simple method of communicating via both the serial and parallel ports of a personal computer running a Windows operating system. The software itself is easy to use, and numerous example scripts make the language easy to learn. Spike2 and the Power1401 plus are available for purchase from Cambridge Electronic Design ( http://www.ced.co.uk/indexu.shtml ). We chose Spike2 and the Power1401 plus because of their extensive scripting capabilities, ease of use, inherent control of data acquisition (here, an analog-to-digital converter cycled through the incoming signals at 1 MHz), independent sequencer control, and straightforward manipulation of both parallel and serial ports (see http://www.ced.co.uk/pru.shtml for hardware specifications). Each software package runs independently on its own personal computer to avoid compromising processor resources. Under Presentation control, video is output by a Radeon 9250 video card to a 55” Visio flat-panel HD TV, and sound is output by a SoundBlaster audio card through a Bose speaker system. A 16 ms error is inherent in the presentation of visual stimuli due to the 60 Hz refresh rate of the LCD monitor. For experiments designed to perturb subconscious elements of the visual system, display delays could be accounted for by additional code written into the scripts discussed below, or the LCD display may be substituted with some other form of imaging, e.g. a fast stepping motor turning a vertically oriented circular slide tray.
Table 1. A list of computer and electronic items used in our stimulus presentation, behavioral monitoring, and protocol control system (standard data acquisition and electronic equipment used for electrophysiology is not listed here).
Animal care and preparation
Three Rhesus monkeys ( Macaca mulatta ; 2 males and 1 female), acquired from a research facility at Wake Forest University, were available during various stages of testing and data acquisition for the development of the protocols described here. Compatible animals were housed in paired grooming/contact cages (~2.5 cubic meters), in a room with a light and dark cycle set by an automatic day/night timer (light from 6AM to 6PM daily) and with full view of colony mates in a large open room. Cages were continuously equipped with swings, mirrors, foraging devices and/or small toys. Daily care and medical maintenance of the animals, including a balanced diet of dry food formula, vegetables and fruit, were routinely provided. Environmental enrichment for the monkeys included playing of natural sounds, radio or TV and daily handling, mock grooming and socialization by laboratory personnel.
Surgical procedures: eye coil implantation and neural recordings
Animals were prepared for participation in the experiments in two surgeries. In the first surgery, we implanted a head-restraining device and one scleral eye coil. With the head secured in a stereotaxic device, a 5 cm midline incision was made in the scalp. Periosteum and muscle were retracted using blunt techniques and the calvarium was scraped free of soft tissue. A 3 cm stainless steel bar, which fits the head-restraining apparatus of the primate chair, was attached vertically to the calvarium using surgical stainless steel screws, and a stainless steel recording chamber was anchored to the skull using screws and a mound of acrylic bone cement 14 , 15 . The screws were mounted into small burr holes in the bone and buried in the bone acrylic along with the head post and electrical connectors. A scleral eye coil was implanted on one eye. Briefly, the conjunctiva was cut near the limbus and reflected to expose the sclera. A coil made of three turns of Teflon-insulated wire was sutured to the sclera using 6-0 Vicryl, and the conjunctiva was sutured back over the coil. The ends of the coil wire were led out of the orbit subdermally to the acrylic cap, where they were attached to a small electrical connector. One week post-surgery, we began a daily task-specific training regimen. Once training proceeded to an acceptable level, generally within a few months, another aseptic surgery was performed to implant an eye coil on the second eye, and one or two stainless steel recording chamber(s) were mounted into the head cap under stereotaxic guidance. The acrylic overlying the appropriate portion of the skull was removed using dental burrs in a hand drill and a 15 mm craniotomy was made. Stainless steel recording cylinders were placed over the craniotomy and cemented into place with bone or dental acrylic. The sterile interior of each cylinder was secured with a threaded Teflon cap having a pressure-release vent.
Post-surgical maintenance included prophylactic antibiotics for 7 days (Baytril, daily 2–5 mg/kg) and 2–5 days of narcotic analgesics (buprenorphine, 0.05–0.1 mg/kg BID) followed by 3–5 days of acetaminophen (5–10 mg/kg). Flunixin, a non-steroidal anti-inflammatory agent, was administered for 1 to 3 days (0.5–1 mg/kg). We also monitored body weight and food/water intake daily, performed maintenance of the skin margin, and cleaned the recording cylinders.
Behavioral training
During behavioral training, the monkeys sat in a Plexiglas primate chair within a cube of magnetic field coils. To prevent the recorded eye movements from being confounded by head motion, and to stabilize the head while electrodes were inserted in the brain, the head was restrained painlessly by clamping the head post to a device on the chair. To motivate the subjects to perform adequately, for five days per week they received their daily fluids as a reward for proper behavior. When daily training or experiments were terminated prematurely, fluids were supplemented up to the normal daily level for that subject. Fluid intake was monitored and recorded daily. Additionally, pulpy fruit or vegetables were used to reward good behavior when returning the animal to the home cage.
Using standard behavioral shaping procedures, the animals were trained to fixate and to follow small visual or auditory stimuli by rewarding them with a drop of fruit juice from a gravity-fed “straw” for successfully completing each series of eye movements defined by the presentation of the stimuli. Training and experimental procedures were performed for no longer than 5 hours per day, usually for 1–3 hours. Animals exhibiting discomfort were readjusted within the chair or returned to their home cage. The daily manipulations for the animals did not produce pain or distress. The cooperative demeanor of the monkeys gives us reason to believe that they find the laboratory situation stimulating and the social interaction with the investigators satisfying.
All surgical and experimental procedures were performed in accordance with federal and institutional guidelines on the care and use of laboratory animals as part of protocols approved by the Georgetown University Animal Care and Use Committee (protocol #09-025).
Stimulus display and trial design
Figure 1B shows what is displayed on the screen and the actions of the subject in response to the presentation of a visual stimulus. Figure 1C is a logical flow diagram showing the various steps, listed as 4 tasks, in the experimental scheme. The tasks are described as follows:
1. Association task:
A sound is played and the associated target image is simultaneously presented at the center of the screen. In our experiments, short (1 s) tone bursts and natural sounds (communication calls) were presented at a level of ~80 dB SPL (sound pressure level).
2. Left-right-association task:
A sound is played and an associated target image is simultaneously presented, centered at a user-selected horizontal distance from the center of the screen, either on the left or the right side (the side is chosen randomly at run-time).
3. Single distracter task:
(a). A sound is played and, at the same time, a “green dot” is presented at the center, with simultaneous presentation of the associated target image and a distracter image on either side of the circle. The positions of the images are decided randomly at run-time.
(b). The target image and distracter image are retained on screen and eye position is monitored.
4. Multiple distracters task:
(a). A sound is played and, at the same time, a “green dot” is presented at the center, with simultaneous presentation of an associated “target image” and multiple (user-selected) distracter images at user-specified locations on the screen. The positions of the images are deliberately kept fixed in this task.
(b). The target image and distracter images are retained on the screen and eye position is monitored.
Experimental design
Running the script described in Figure 2A provides a user-interface in Spike2 that begins a cascade of dialog boxes requesting information relevant to the experiment (e.g. subject name) and the basic parameters needed to monitor the behavior of the subject (e.g. detection window size, reward duration). After the basic information is supplied ( Figure 2B ), a list of experimental scenarios is presented from which the user selects the condition a subject will face. We have programmed a number of saccade-related tasks that use one (or more) of eight audio stimuli to direct our subject’s behavior to learned associations of visual images. A check-box arrangement indicates the combination of stimuli the user intends to use in the experiment. In addition, a number of timing variables (‘Time to get on Target’, ‘Initial Fixation Time’, ‘Fixation Time for Reward’) are adjustable by the user. Clicking the ‘OK’ button collapses the association-training dialog box, allowing the user to hit the ‘Run’ button to initiate the scenario or to select a different scenario. From this point forward, the parameters dialog, the experimental scenario dialog, and a quit option are always available as buttons on the Spike2 program interface. Selecting another experimental scenario automatically names and saves the current data file while initiating data collection into a new file for the newly selected scenario. Clicking the ‘quit’ button saves the current data file, terminates the presentation of the ongoing stimulus to the subject, and ends execution of the Spike2 script.
Figure 2. Details of script and user input.
A . Flow chart showing the design of experiment control sequence shared between the four scripts. B . The initial interactive user-interface used to collect basic information about the experiment set-up. C . User-interface used to collect the initial parameters for behavioral monitoring of subjects during experiments.
The association-training paradigm
Although quite simple, our ‘Association Training Paradigm’ allows us to illustrate the inner workings of 1) the “Spike2 control” script, 2) the “sequencer script”, and 3) the “presentation script” as they operate across all the scenarios currently available to the user. Before going on, we should outline what we expect from the program and subject, so we can better describe the interweaving functions of these three scripts. Figure 1B shows the progression of stimuli if the subject succeeds across all phases of the trial or fails at any point in the trial. This task has three phases: 1) an initial black screen or timeout screen, 2) an initial fixation target, and 3) test stimulus presentation. During phase 1, behavior is not actively monitored. The duration of the timeout is set to 2 seconds in the presentation script. At the inception of phase 2, the sequencer, acting through the script loaded into the Power1401 memory, begins monitoring eye position. The subject must first acquire the target and maintain gaze on it within a small “forgiveness” window for a user-defined epoch of time. Successful fixation of the target advances the scenario to phase 3 by a command issued first from the sequencer to the Spike2 ‘control’ script and then from the ‘control’ script to the presentation script. Failure results in a reset to the black screen and a brief timeout, using the same flow from sequencer to presentation script. A response token is sent directly back from the presentation script to both the ‘control’ script and the sequencer, ensuring that all three scripts remain synchronized. Phase 3 consists of the presentation of our test stimuli, here the co-presentation of an image and sound. Successful fixation of the image within a forgiveness window equal to the size of the image, and for the user-defined time set via the dialog box shown in Figure 3A , results in the delivery of a reward to the subject as commanded by the sequencer. Either successful fixation or failure to look at the image then commands a reset of the experimental process to the black screen for a 2 second refresh period.
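The logic of this task can be condensed into a short state-machine sketch. The Python below is purely illustrative; the actual implementation is divided across the Spike2 ‘control’ script, the sequencer, and the Presentation script (provided as supplementary files), and the function names and timing values here are assumptions made for the example.

```python
# Minimal sketch of the three-phase association trial described above.
# Illustrative Python only; show() and eye_on_target() are placeholders for
# the Presentation commands and the sequencer's eye-position tests.
import random
import time

TIMEOUT_S = 2.0  # black-screen timeout set in the presentation script

def show(stimulus):
    """Placeholder for a command sent to the Presentation computer."""
    print("present:", stimulus)

def eye_on_target(duration_ms):
    """Placeholder for the sequencer's forgiveness-window test; here it
    simply succeeds 80% of the time instead of polling eye position."""
    return random.random() < 0.8

def run_trial(params):
    """One trial; returns True if the subject earned a reward."""
    show("black screen")                        # phase 1: not monitored
    time.sleep(TIMEOUT_S)
    show("fixation target")                     # phase 2: monitored
    if not eye_on_target(params["time_to_get_on_target"]):
        return False                            # failure: reset and timeout
    if not eye_on_target(params["initial_fixation_time"]):
        return False
    show("image + sound")                       # phase 3: test stimulus
    if eye_on_target(params["fixation_time_for_reward"]):
        print("reward pulse from Power1401")    # commanded by the sequencer
        return True
    return False

params = {"time_to_get_on_target": 500,
          "initial_fixation_time": 300,
          "fixation_time_for_reward": 500}      # ms; illustrative values
run_trial(params)
```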
Figure 3. Menu-driven user interface.
A . The list of proposed paradigms users may select from to start a session; currently available are Calibration and the Association Tasks, and the list includes “walk” tasks that were not implemented in the present version. “Walks” are saccade tasks designed to use a single target that appears on a black screen in various locations, moving in patterns prescribed by the selection buttons and subsequent dialog boxes that may be added by the user. They can be used to train the animal and record metrics of its eye movements. B . User-interface that allows for selection of the auditory-visual pairing used during the association training paradigm and the timing criteria necessary for successful completion of the task.
Script components
Our stimulus delivery and data acquisition package consists of four primary components that operate in conjunction with one another. A “sequencer” script written from within Spike2 is loaded to the Power1401 for real-time monitoring of eye position and saccades. Sequencer scripts (included in the “Sequencer Files” file below) are ultimately responsible for issuing the commands that direct progression through a task and reward the subject. Two scripts (included in the “Spike2 Control Scripts” file below) operate in the Spike2 software environment. The first script controls the interfaces into which a user inputs relevant control parameters. The second script provides the functional control between the scripts running on the Spike2 computer and the script running on the Presentation computer. One script (included in the “Presentation Scripts” file below) runs in the Presentation software environment, commanding the output of stimuli and communicating, over the parallel line (to the Power1401) and the serial line (to the computer running Spike2), a time-stamp indicating when the presentation script commands the presentation of a stimulus.
Sequencer script
The sequencer script downloaded to the Power1401 module runs on its own clock, ticking at 1 µs independently of the Spike2 computer, but communicates with that computer through a high-speed USB port. The sequencer script consists of two parts: 1) the initialization section and 2) the monitoring section. The initialization section loads the user-defined variables set while interacting with the dialog boxes created by the ‘interface’ script. In our example of the association-training scenario, the variables loaded are the edges of the forgiveness window, the three timing criteria (time to get on target, initial fixation time, and fixation time for reward), and the reward pulse duration. The sequencer cannot act on these values directly, so we convert them to sequencer-relevant values. The edges of the forgiveness window are converted from the user-defined values in degrees to digital-to-analog converter (DAC) values. The timing criteria and reward pulse duration are converted from milliseconds to sequencer steps. The monitoring section is made up of as many sections as there are scenario phases (here three), each with specific tasks. The first task checks that the subject acquires the fixation target within the user-defined epoch of time after it is presented. The state of fixation, success or failure, is sent to the Spike2 ‘control’ script. If the sequencer determines the subject has worked within the task bounds, the sequencer steps to the next phase of the monitoring section and waits for confirmation from the ‘control’ script that the scenario has advanced. The second task checks that fixation is maintained on the target for the specified time. Once again, information about the state of fixation, either success or failure, is sent to the Spike2 ‘control’ script. If the sequencer determines the subject has worked within the task bounds, the sequencer steps to the final phase of the monitoring section and waits for confirmation from the ‘control’ script that the status has advanced. The final phase operates exactly as the second phase, except that if the subject complies, a reward pulse is sent from the Power1401 to a reward delivery system through one of the digital I/O ports. Regardless of whether the trial is deemed a success or failure, the sequencer then returns to the initialization section and resets the variables to their initial user-defined state. The process loops with each trial.
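To make these conversions concrete, the sketch below shows how user-entered units might be translated into sequencer-relevant units. The scale factors are hypothetical placeholders for illustration, not documented Power1401 values.

```python
# Hypothetical unit conversions of the kind the initialization section
# performs. The scale factors are illustrative assumptions only.

DAC_UNITS_PER_DEG = 6553.6 / 10.0  # e.g. if 10 deg of gaze spans 6553.6 DAC units
STEPS_PER_MS = 100                 # e.g. a sequencer stepping every 10 microseconds

def window_to_dac(center_deg, half_width_deg):
    """Convert a forgiveness window (degrees) to DAC edge values."""
    lo = (center_deg - half_width_deg) * DAC_UNITS_PER_DEG
    hi = (center_deg + half_width_deg) * DAC_UNITS_PER_DEG
    return int(lo), int(hi)

def ms_to_steps(t_ms):
    """Convert a timing criterion (ms) to a count of sequencer steps."""
    return int(t_ms * STEPS_PER_MS)

# Example: a +/-2 degree window around straight-ahead, 500 ms fixation.
print(window_to_dac(0.0, 2.0))   # -> (-1310, 1310)
print(ms_to_steps(500))          # -> 50000
```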
Multiple sequencer files necessary for monitoring and rewarding correct animal behavior during the various tasks.
The Spike2 ‘control’ script
The ‘control’ script runs in the background on the Spike2 computer and uses the first bit of the COM-1 port to communicate with the Presentation computer, controlling advancement through the Presentation script. The serial line conveys the hexadecimal representation of the words (descriptors and terminators) used to call images and sounds, and the response tokens, between the acquisition and presentation computers. This port was opened, written to, and closed by the respective portions of the scripts running on the acquisition or presentation computers. Each task consists of a looping “do case” function, with progress through the function determined by the fixation state passed from the sequencer. In our example of the association-training task, there are three fixation-failure situations and three successful-fixation situations. The first failure situation occurs when the screen is black and the subject has no target to fixate. The script simply calls for the presentation of the fixation target by issuing a command to the computer running Presentation. The second and third failure situations are similar and initiate a command for the presentation of the black screen to the subject. The first success case assumes that the subject’s gaze is directed toward the fixation target’s location when there is none present. In this situation, the script calls for the fixation target to be presented, just as in the first failure case. The second success situation calls for the Presentation script to display the test stimulus. The third success situation initiates a reset of the screen to a blank (black) display by the Presentation script. Following each call to the Presentation computer, the ‘control’ script listens for a reply on bit 1 of the COM-1 port. Upon receipt of the response, the “do case” state is returned to the sequencer to allow advancement through the monitoring sections of the script, ensuring proper stepwise alignment of all three scripts throughout the task.
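For readers more familiar with general-purpose languages, the control side of this handshake can be sketched in Python with pyserial, as below. The port settings are assumptions, and the real script is written in Spike2’s script language rather than Python.

```python
# Sketch of the control-script side of the serial handshake, using pyserial.
# Port name, baud rate, and tokens are assumed placeholders; the actual
# 'control' script runs in the Spike2 scripting environment.
import serial

def send_command(port, word):
    """Send a descriptor word plus terminator, then wait for the
    presentation script's response token."""
    port.write(word.encode("ascii") + b"\n")   # e.g. b"grendot\n"
    reply = port.readline()                    # token from the other computer
    return reply.strip()

if __name__ == "__main__":
    com1 = serial.Serial("COM1", baudrate=9600, timeout=1.0)
    # Failure case 1: screen is black, so call up the fixation target.
    token = send_command(com1, "grendot")
    # Success case 2: subject fixated, so display the test stimulus.
    token = send_command(com1, "SndPICn")
    com1.close()
```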
The two control scripts that mediate between users and the presentation script, as well as the configuration used to collect data.
The presentation script
The Presentation script running on the Presentation computer acts as a slave to the Spike2 ‘control’ script, receiving instructions and replying through the first bit of the COM port. This script has three primary sections: 1) the video monitor setup, 2) image and sound object creation, and 3) the experimental loop. The first section of this script requires the user to predefine the current display properties (resolution and color depth), including the height and width of the monitor and the distance of the subject from the screen. In this way, Presentation calibrates itself so that target and image positions may be stated in degrees and drawn at the appropriate size. The second section predefines all the potential objects that may be called during the experiment after selection via a dialog box ( Figure 3B ), and their associations, if any. For example, our fixation target is a small green dot. We created an object (e.g. named ‘greendot’) that holds all the relevant information about how Presentation draws our fixation target (e.g. dot size, color of the background, etc.) when a call is made to the object. In the third section, the experimental loop monitors the COM-1 port for communication from the Spike2 computer. This loop is largely comprised of “if, then, else” statements. Each communication from the ‘control’ script is pre-defined so that when the ‘control’ script shunts words and terminators (e.g. ‘grendot\n’) to the Presentation computer, the experimental loop recognizes the word (grendot) and terminator (\n) and falls into the appropriate “if” statement. In the case of the ‘grendot\n’ combination, the “if” statement calls our object ‘greendot’ so that the fixation target is displayed on the monitor, and at the same time triggers a reply to the ‘control’ script on the COM-1 port and to the Power1401 on bit 8 of the parallel port (a 1 ms low-high-low transistor-transistor logic (TTL) pulse). The script then returns to the loop, listening for the next command from the Spike2 computer. In this way, each object may be called in any sequence as commanded by the Spike2 ‘control’ script. In the case of our example, the next word that the loop would receive would be ‘SndPICn\n’. Similarly, the loop falls into the appropriate “if” statement, displays the test stimulus, replies to the Spike2 computer and Power1401, and returns to the loop.
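A corresponding sketch of the presentation-side loop follows, again as hedged Python rather than Presentation’s own scripting language. The dispatch table stands in for the chain of “if, then, else” statements, and the TTL pulse is a stub since parallel-port output is hardware-specific.

```python
# Sketch of the presentation-side experimental loop. The real script runs in
# Presentation's scripting language; names and port settings are placeholders.
import serial

def show_green_dot():
    print("drawing fixation target")   # stands in for the 'greendot' object

def show_sound_and_image():
    print("playing sound + drawing image")

def pulse_ttl():
    """Stub: 1 ms low-high-low TTL on parallel-port bit 8 (hardware-specific)."""
    print("TTL time-stamp to Power1401")

DISPATCH = {b"grendot": show_green_dot,      # word -> stimulus object
            b"SndPICn": show_sound_and_image}

def experimental_loop(port):
    while True:
        word = port.readline().strip()       # wait for word + '\n' terminator
        action = DISPATCH.get(word)
        if action is None:
            continue                         # unrecognized word: keep listening
        action()                             # display/play the object
        pulse_ttl()                          # time-stamp to the Power1401
        port.write(b"ok\n")                  # response token to 'control' script

if __name__ == "__main__":
    experimental_loop(serial.Serial("COM1", baudrate=9600, timeout=None))
```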
The single necessary presentation script needed to display images and sounds upon command from the Spike2 control script.
Paired sound and image files necessary for running all association tasks, frequency tone ranges used for testing frequency tuning of neurons (not discussed in the text) and species-specific communication calls.
Data acquisition and analysis
Up to five days a week, a two-hour neural recording period occurred between 10AM and 4PM to ensure overlap with veterinary staff hours. Animals were moved from their home cage to an adjacent room for neural recording sessions while seated comfortably in a primate chair. In the recording room, the animal’s head was fixed facing forward, in full view of the LED (light emitting diode) monitor set 48 inches in front of them, with the center of the screen at the approximate height of the animal’s straight-ahead gaze. Extracellular neuronal recordings were made using standard electrophysiological methods in behaving subjects using fine-wire tungsten microelectrodes (31 gauge, Microprobe, Inc.) mounted in a guide tube of stainless steel hypodermic tubing 16 . Transdural penetrations were made by a hydraulic microdrive (FHC, Inc.) advancing a tungsten electrode through the bore of a 21 gauge hypodermic needle mounted in a micropositioner that attaches to the outside of the chronic recording cylinders on the animal’s head. Neuronal activity was recorded on the hard drive of the data acquisition computer running the Spike2 control scripts via a high-impedance amplifier system (AMC Systems, Inc.). We collected one channel of raw neural signal at either 25 kHz or 50 kHz, four channels corresponding to horizontal and vertical eye position at 1 kHz, one auditory channel at 25 kHz, and one channel of spike timestamps at 10 kHz, generated on-the-fly during data acquisition using an adjustable threshold set on the channel collecting the neural signal. Digitizing the raw neural signal allowed for post-hoc analysis using the Spike2 software, which provides software window discriminators and level detectors as well as various forms of waveform analysis, including template matching and spike sorting using PCA algorithms.
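This channel scheme can be restated as a small configuration table. The sketch below simply recomputes the aggregate waveform load from the rates given above; the channel labels are illustrative, not the names used in our Spike2 configuration.

```python
# The acquisition channels and sampling rates described above, restated as a
# configuration table. Channel labels are illustrative assumptions.
CHANNELS = {
    "raw_neural":  {"kind": "waveform", "rate_hz": 25_000},  # or 50 kHz
    "eye_h_left":  {"kind": "waveform", "rate_hz": 1_000},
    "eye_v_left":  {"kind": "waveform", "rate_hz": 1_000},
    "eye_h_right": {"kind": "waveform", "rate_hz": 1_000},
    "eye_v_right": {"kind": "waveform", "rate_hz": 1_000},
    "audio":       {"kind": "waveform", "rate_hz": 25_000},
    "spike_times": {"kind": "event",    "rate_hz": 10_000},  # threshold-triggered
}

total = sum(c["rate_hz"] for c in CHANNELS.values() if c["kind"] == "waveform")
print(f"aggregate waveform sampling: {total/1000:.0f} kHz")  # well under the 1 MHz ADC cycle
```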
Data recording and analysis were conducted using Spike2 software (Cambridge Electronic Design, Ltd.). Custom-written scripts were used to build raster plots and peristimulus-time histograms (PSTHs) for display of processed data from well-isolated single units whenever possible. Only sample data from single- or few-unit activity are provided here to demonstrate feasibility for the purposes of this project, which was designed for the development of experimental control procedures.
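As an indication of the underlying computation, a minimal PSTH routine is sketched below in Python/numpy. Our actual analysis was implemented as Spike2 scripts, and the spike and onset times here are fabricated purely for the example.

```python
# Minimal PSTH computation of the kind performed by our Spike2 analysis
# scripts. Illustrative Python/numpy, not the actual analysis code.
import numpy as np

def psth(spike_times_s, stim_onsets_s, window_s=(-0.1, 0.5), bin_s=0.010):
    """Peristimulus-time histogram: mean firing rate (spikes/s) per 10 ms bin."""
    edges = np.arange(window_s[0], window_s[1] + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for onset in stim_onsets_s:
        aligned = spike_times_s - onset            # align spikes to stimulus onset
        counts += np.histogram(aligned, bins=edges)[0]
    return edges[:-1], counts / (len(stim_onsets_s) * bin_s)

# Example with fabricated timestamps: 40 presentations, uniform random spikes.
rng = np.random.default_rng(0)
onsets = np.arange(40) * 2.0                       # one presentation every 2 s
spikes = np.sort(rng.uniform(0, 80, 4000))         # fabricated spike times (s)
bins, rate = psth(spikes, onsets)
```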
Results and discussion
As proof of concept, we present here a number of behavioral and neural responses from various brain structures that are activated in response to naturalistic stimuli presented within our experimental set-up. To reiterate, we were primarily concerned with capturing 4 basic types of neural responses: 1) visual, 2) auditory, 3) saccade-related, and 4) reward-driven. This analysis utilizes the timestamps placed in the data files by the presentation script’s 1 ms TTL pulse sent to the Power1401 during data acquisition. Neurons were recorded from the midbrain in the putative inferior and superior colliculi (IC and SC, respectively) of one of our nonhuman primate subjects. Figure 4 shows typical neural responses in the IC following the presentation of complex communication sounds or “calls” that contained acoustic features preferred by the neuron 17 – 20 . Of the ten neurons from which electrophysiological activity was recorded, all responded to at least one of the seven sounds presented. As an example, Figure 4 shows the responses of two neurons from the same animal to the same three sounds. We found that each sound produced a distinct temporal response pattern. These patterns ranged from no change or transient increases in the overall firing rate (upper left panel) to intense phase-locked responses to acoustic features within a call (lower right panel).
Figure 4. Responses of collicular neurons to communication sounds.
Amplitude envelopes (top) and raster and PSTH (10 ms bins) plots superimposed on spectrograms of three different call types (grunt, harmonic, noisy) to show the responses of 2 neurons (neuron 1 is from a female and neuron 2 from a male) in the monkey inferior colliculus. Each call presentation was repeated 40 times per histogram. Grey vertical dotted lines indicate sound onset. Note the response build-up to the third predominant amplitude modulation in the last call. Average first-peak response latency to calls was 20.9 ± 3.5 ms (n = 10). Responses with a potential for temporal facilitation are enclosed by ellipses, although response enhancement may also depend on the basic acoustic patterns within complex sounds or on amplitude tuning. Calls were downloaded from the following web site: http://www.soundboard.com/sb/Rhesus_Monkey_sounds.aspx .
The neuron shown in Figure 5A and 5B illustrates the characteristic visual activity one expects to find while recording from rostral-superficial layers of the SC 21 . Once gaze was directed to position the eyes within the receptive field of this neuron, we observed steady, low-rate firing within ~20 ms. In this example, the subject was required to make a saccade to capture a sound-associated image. After the fixation target was extinguished and the target image was presented in the peripheral field of vision, the neural response declined and resumed only when the eyes were positioned again on the target image. The neuron shown in Figure 5C and 5D fits the characteristics attributed to neurons of the intermediate layers of the SC 22 , 23 : namely, a 60–80 ms build-up in activity followed by a burst of spikes just prior to the initiation of direction-dependent saccades to our visual stimulus. Examination of the neural data collected during “spontaneous” eye movement behavior shows that this neuron’s preferred saccade vector (>20 degrees amplitude, 137 degrees direction) is well off the axis of our stimulus.
Figure 5. Fixation and saccade-related neural activity in the intermediate superior colliculus.
A . Summed histogram from multiple trials triggering visual stimulus-induced activity in a “fixation” neuron located at the rostral pole and superficial layers of the superior colliculus (SC). Electrophysiological responses (timestamps for spikes) were aligned to the time at which the subject acquired the fixation target to begin the trial. B . Saccade-triggered transient suppression of neural activity in a different neuron located within intermediate layers and caudal to the fixation-neuron in the same animal. Dashed vertical line is at time zero for stimulus presentation in “A” and for target fixation in “B”. The width of the grey bar indicates the neuron’s visual delay (>20 ms). C . Raster plots (above) and binned profile of summed response (below) to compare neural activity during rightward (top panel) and leftward (bottom panel) saccades. Grey bins indicate the build-up phase while the black bins indicate the burst phase of the neuron. Dashed vertical line indicates saccade onset. D . Heat map of saccade-related neural activation. Black box enclosed by dashed lines indicates position of the target image relative to central gaze. Solid white arrow represents the vector for the preferred saccade as indicated by firing rate of the neuron.
Example data collected in the IC and SC showing auditory, visual and saccade-related activity, as well as reward-related activity.
Of the total population of neurons studied in the SC by Jay and Sparks 12 , 79% showed saccade-related bursts prior to eye movements to either visual or auditory target stimuli, suggesting that saccades evoked by either stimulus share a common efferent pathway to generate the movement. Meredith et al. 13 recorded 113 neurons in the SC (82/113 were auditory-visual neurons) of anesthetized cats during presentation of single and temporally overlapping sensory stimuli. The peak response in neural firing to multisensory signals occurred when stimuli were presented concurrently, with the second stimulus starting <100 ms after the first. Since then, research has shown that in the deep layers of the SC most neurons respond to both visual and auditory stimuli; 99 of 121 SC neurons showed significant alteration in firing rates due to eye position 12 .
Approximately 60% of neurons within the IC have been shown to respond not only to sound 17 , but to some extent to visual- and saccade-related activity 24 , 25 . Inputs from the lateral nucleus of the IC and the nucleus of the brachium of the IC to the SC also exist 26 . This pathway may be responsible in part for the auditory activity observed in the deep layers of the SC 27 and is one route via which auditory information can influence saccadic eye movements. The response of IC neurons to visual stimuli and during eye movements is much less robust than the activity observed following visual stimulation and during saccades in the SC. The use of natural stimuli is expected to boost the responses of IC neurons in an audiovisual recall task, revealing multi-sensory integration that can influence saccade-related activity. Figure 6 illustrates the activity of a neuron that is putatively considered reward-dependent 28 , 29 . The neuron was located rostral to the IC and deep to the region known to contain neurons controlling saccade-related activity in the SC. The neural activity was clearly phase-locked to the task, but was less obviously linked to auditory stimuli ( Figure 6A ), contrary to what one would ordinarily expect of IC neurons (compare with Figure 4 ). This activity was not strictly linked to visual stimuli, nor was it saccade-related in terms of SC activity. The activity of this neuron seemed to indicate an expectation of reward that builds up based on successfully meeting task-related milestones ( Figure 6B ). During the task, and especially following the onset of the sound, very distinct differences existed in the firing pattern of this neuron between the two conditions.
Figure 6. Reward induced neural activity.
A . Single experimental trial illustrating stimulus presentation and related neural activity. During these trials, the subject was rewarded for successful discrimination of a sound-associated (target) image from a distracter image. B . Spike density waveform averaged from 20 trials aligned on fixation target onset. All other behavioral and stimulus markers are centered at their average time of occurrence and grey boxes indicate the first standard deviation in event time.
Both auditory and visual activity in space is read out in the SC in a manner that is appropriate for generating accurate saccades to sounds and images, respectively, although visually evoked saccades have higher velocity, greater precision and shorter reaction times than auditory evoked saccades 4 , 11 , 30 . These and many other findings clearly indicate an extensive auditory input to the SC. Briefly, visual information from the retina drives the development of, and maintains, a spatial representation of auditory space in the IC 31 – 33 . This has been demonstrated in owls 33 , 34 and is believed to be true in mammals. We presume that retinal inputs pass through the optic tectum and the superficial layers of the superior colliculus (SCs) before converging on auditory areas in the IC ( Figure 7 ). A pathway from the retina to the SCs to the IC is known to exist in mammals 35 . Over the long term, the convergence of visual and auditory signals reinforces an enduring spatial map in the IC. Recently, many neurons within the IC (the brachium of the IC, the external capsule of the IC, and the core of the IC) have been shown to respond not only to sound 17 , but to some extent to visual- and saccade-related activity, and in some cases their responses are modifiable by reward 24 , 25 .
Figure 7. Schematic representing neural circuits creating visual-auditory interactions in the tectum.
The superior (SC) and inferior colliculi (IC) receive direct visual (shades of green) and auditory (shades of red) projections, respectively, and have reciprocal connections with each other. The IC also receives emotive inputs from the amygdala (AM) either directly 43 or via reward circuitry in the ventral striatum 44 , and has reciprocal connections with the auditory cortex (AC) for cognitive processing 48 . Saccadic eye movements are controlled by outputs from the SC via local burst generators (BG) driving motor neurons innervating extraocular muscles (EOM). The SC receives information from the visual cortex and premotor neurons in the frontal cortex, particularly the frontal eye fields. AC = Auditory Cortex; CN = cochlear nucleus (VIIIn); MN = motor neurons (nuclei of cranial nerves III, IV and VI); PM = Pre-motor Cortex; VC = Visual Cortex.
In many species, including humans, that rely predominantly on vision for survival, auditory cues may trigger eye movements either for interaction with the environment or for communication with conspecifics. Many researchers point to the SC and IC as components of a multi-modal sensory integration system, in which visual and auditory signals within the brain merge into a co-dependent representation of the world 3 – 5 , 36 , 37 . Neurophysiological and anatomical data support the idea that this linkage occurs only two or three synapses beyond the retina and auditory nerve. Signals sent out of the IC and SC are also fed back onto their independent systems, helping to modulate behavior (see Figure 7 ).
In summary, collecting behavioral and neural data using our suite of scripts and hardware together with subsequent analysis yielded new insights, providing strong evidence for the advantage of using a novel and customized paradigm. Our scripted user-interface demonstrated that pairing auditory and visual stimuli caused modest changes in activity throughout the trial period in a receptive neuron located deep within the SC. This was in contrast to the response of the same neuron presented with the same stimuli when the animal listened to them passively. The SC appears to be the site where sensory signals encoded in different frames of reference converge, and are translated into a common coordinate system commanding movement execution 11 (e.g. retinotopic-centered commands to resolve motor error). Integration of auditory and visual information also appears to occur at this site. A major cortico-collicular auditory projection suggests that the cortex may direct this integration via the IC, particularly during the learning phase 38 , 39 . After that, subcortical circuits may function autonomously for computing a reaction.
Conclusions
In conclusion, we have developed a simple and relatively straightforward user-interface that directs and monitors subject behavior as well as acquires data. This particular set-up and the customized paradigms used in this experiment may be impossible for vendors of commercial stimulus presentation and data acquisition software and hardware to develop for the general neuroscience community, given the specific needs of each research laboratory. Our experimental design and custom scripts, however, are flexible enough to meet virtually all experimental control and data acquisition needs of those interested in conducting behaviorally controlled, response-based experiments. We have used a modified design to run psychophysics experiments on human subjects, and these can be combined with dense-array EEG recordings in response to the presentation of auditory and visual stimuli 40 . In essence, our template can be used to build any type of subject-interactive experiment. There is high potential for applying our pragmatic design to control neurobehavioral experiments using readily available hardware and software. Our studies, using earlier methodologies, showed that arousal has a role in bottom-up modulation of thalamic activity in the control of eye movements 41 , 42 . Our new methodology allowed us to discover the location of audiovisual neurons at which reward-based, and possibly anxiety-driven, influences may converge to modulate behavior 43 – 47 . Studying these circuits in intact, normal animals is important to decipher the interplay of excitation and inhibition between different neural circuits for dynamic control of eye movement and gaze control.
Acknowledgements
Mr. Dolphus Truss was especially helpful in assisting with animal care issues.
Funding Statement
Work was supported in part by grant EY015870 to JSK from the National Eye Institute (NEI). We also thank the Biomedical Graduate Research Organization (BGRO) of Georgetown University for financial support to JSK during the later phase of this project. The content is the sole responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
- 1. Andersen RA, Mountcastle VB: The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. J Neurosci. 1983;3(3):532–548 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 2. Bichot NP, Schall JD, Thompson KG, et al. : Visual feature selectivity in frontal eye fields induced by experience in mature macaques. Nature. 1996;381(6584):697–699 10.1038/381697a0 [ DOI ] [ PubMed ] [ Google Scholar ]
- 3. Miller EK, Cohen JD: An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 2001;24:167–202 10.1146/annurev.neuro.24.1.167 [ DOI ] [ PubMed ] [ Google Scholar ]
- 4. Sparks DL: Response properties of eye movement-related neurons in the monkey superior colliculus. Brain Res. 1975;90(1):147–152 10.1016/0006-8993(75)90690-3 [ DOI ] [ PubMed ] [ Google Scholar ]
- 5. Guthrie BL, Porter JD, Sparks DL, et al. : Corollary discharge provides accurate eye position information to the oculomotor system. Science. 1983;221(4616):1193–1195 10.1126/science.6612334 [ DOI ] [ PubMed ] [ Google Scholar ]
- 6. Sparks DL: The neural encoding of the location of targets for saccadic eye movements. J Exp Biol. 1989;146:195–207 [ DOI ] [ PubMed ] [ Google Scholar ]
- 7. Sparks DL, Nelson JS: Sensory and motor maps in the mammalian superior colliculus. TINS. 1987;10(8):312–317 10.1016/0166-2236(87)90085-3 [ DOI ] [ Google Scholar ]
- 8. Lewis JW, Wightman FL, Brefczynski JA, et al. : Human brain regions involved in recognizing environmental sounds. Cereb Cortex. 2004;14(9):1008–1021 10.1093/cercor/bhh061 [ DOI ] [ PubMed ] [ Google Scholar ]
- 9. Zella JC, Brugge JF, Schnupp JW, et al. : Passive eye displacement alters auditory spatial receptive fields of cat superior colliculus neurons. Nat Neurosci. 2001;4(12):1167–1169 10.1038/nn773 [ DOI ] [ PubMed ] [ Google Scholar ]
- 10. Valentine DE, Moss CF: Spatially selective auditory responses in the superior colliculus of the echolocating bat. J Neurosci. 1997;17(5):1720–1733 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 11. Bergeron A, Matsuo S, Guitton D, et al. : Superior colliculus encodes distance to target, not saccade amplitude, in multi-step gaze shifts. Nat Neurosci. 2003;6(4):404–413 10.1038/nn1027 [ DOI ] [ PubMed ] [ Google Scholar ]
- 12. Jay MF, Sparks DL: Sensorimotor integration in the primate superior colliculus. II. Coordinates of auditory signals. J Neurophysiol. 1987;57(1):35–55 [ DOI ] [ PubMed ] [ Google Scholar ]
- 13. Meredith MA, Nemitz JW, Stein BE, et al. : Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. J Neurosci. 1987;7(10):3215–3229 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 14. Robinson DA: A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Trans Biomed Eng. 1963;10:137–145 [ DOI ] [ PubMed ] [ Google Scholar ]
- 15. Ramcharan EJ, Gnadt JW, Sherman SM, et al. : Single-unit recording in the lateral geniculate nucleus of the awake behaving monkey. Methods. 2003;30(2):142–151 10.1016/S1046-2023(03)00075-6 [ DOI ] [ PubMed ] [ Google Scholar ]
- 16. Noto CT, Gnadt JW: Saccade trajectories evoked by sequential and colliding stimulation of the monkey superior colliculus. Brain Res. 2009;1295:99–118 10.1016/j.brainres.2009.07.069 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 17. Versnel H, Zwiers MP, van Opstal AJ, et al. : Spectrotemporal response properties of inferior colliculus neurons in alert monkey. J Neurosci. 2009;29(31):9725–9739 10.1523/JNEUROSCI.5459-08.2009 [ DOI ] [ PMC free article ] [ PubMed ] [ Google Scholar ]
- 18. Portfors CV: Combination sensitivity and processing of communication calls in the inferior colliculus of the moustached bat, Pteronotus parnellii . An Acad Bras Cienc. 2004;76(2):253–257 10.1590/S0001-37652004000200010 [ DOI ] [ PubMed ] [ Google Scholar ]
- 19. Suta D, Kvasnak E, Popelar J, et al. : Representation of species-specific vocalizations in the inferior colliculus of the guinea pig. J Neurophysiol. 2003;90(6):3794–3808 10.1152/jn.01175.2002 [ DOI ] [ PubMed ] [ Google Scholar ]
- 20. Klug A, Bauer EE, Hanson JT, et al. : Response selectivity for species-specific calls in the inferior colliculus of Mexican free-tailed bats is generated by inhibition. J Neurophysiol. 2002;88(4):1941–1954 [ DOI ] [ PubMed ] [ Google Scholar ]
Referee response for version 2
Bruce Cumming
Competing interests: No competing interests were disclosed.
This would be a much stronger contribution if the authors added measurements of the critical parameters, including the quality of video synchronization and the loop delay. Their responses place some theoretical limits on these values, but measurements would be much better.
Nonetheless, as a forum for publishing code that can be used by people not concerned with these details, this may be useful.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Farrel Robinson
The amendments made to the article have improved the manuscript.
Referee response for version 1
The title is appropriate. The content of this article clearly describes the authors’ interface, its components and connections, and the behaviour of the system. The authors support their assertion that they have developed a flexible and useful system for controlling and recording behavioural experiments, and they show enough appropriate examples to support their assertions.
Vincent Ferrera
This paper describes an interesting but fairly standard set-up for neurophysiology data acquisition in behaving subjects. It would be nice to have more details about how timing and synchronization were verified, and about what special capabilities of the hardware justify its selection. Nevertheless, it is good to know what other people are using, and this is therefore a useful contribution.
Minor comments
The Power 1401 is rather expensive. What special capabilities justify this cost?
Introduction: “which nevertheless continues to be the most reliable and useful way to understand neural computations and function.” This statement is debatable.
The communication between the Spike2 and Presentation computers appears critical. More details would be welcome. What exactly is conveyed by the 1-bit COM port? What is the signal timing diagram? How is this bit manipulated?
A critical issue regarding the use of LCD displays for neurophysiology experiments is stimulus timing. How did the experimenters determine the relationship between the video output of the presentation computer and the actual appearance of stimuli on the display?
The hardware for the neural signals is not clear. Was the neural data stored on a third computer, or on the Spike2 computer? How many channels, and at what sampling rate? How were spikes detected – window discriminator or waveform analysis?
Jagmeet Kanwal
Competing interests: No competing interests.
Thank you for taking the time to review our manuscript. Below we provide our detailed responses to specific comments, reflecting the changes made in the updated version of the manuscript.
We generally agree with the reviewer’s point. Our main objective was to document an example for the benefit of others, given that the F1000Research journal provides a mechanism for archiving stimuli, scripts and raw data that the research community at large can readily access. We provide additional details in the revised version of the manuscript as suggested by the reviewer. Specifically, we include information relating to our data collection sampling parameters, serial line communication timing, analog-to-digital converter (ADC) sampling rates and clock speeds for sequencer monitoring of behavior.
The technical specifications can be found at: http://www.ced.co.uk/pru.shtml, which we now cite in the revised version. We chose the Power 1401 specifically because the hardware can convert analog signals at high (1 MHz) rates while simultaneously reading the ADC channels, allowing the user to monitor subject behavior. Thus, the behavioral monitoring script loaded into the 1401’s sequencer ran at a 1 µs tick rate; i.e., one line of code was read with each tick of the clock. Combining these two capabilities gave us practically instantaneous monitoring of our subject and the ability to respond to its behavior with sub-millisecond precision. In the examples presented, we simply rewarded the animal for maintaining its eye position within a defined spatiotemporal window. One could extend this monitoring by defining a threshold on eye velocity and using that threshold to output a TTL pulse that triggers an external stimulator with similar precision.
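For readers unfamiliar with this style of control loop, the sketch below summarizes the logic in ordinary code. It is an illustration only, not the CED sequencer program itself (which is written in the 1401 sequencer instruction set); the window bounds, velocity threshold, and I/O helper functions are hypothetical placeholders.

```python
# Illustrative sketch (not the CED sequencer code) of the behavioral
# monitoring loop described above. Window bounds, the velocity
# threshold, and the I/O helpers are hypothetical placeholders.

import time

FIX_WINDOW = (-2.0, 2.0, -2.0, 2.0)   # deg: x_min, x_max, y_min, y_max
HOLD_TIME = 0.5                        # s the eye must stay in the window
VEL_THRESHOLD = 100.0                  # deg/s, e.g. to flag a saccade

def in_window(x, y, win=FIX_WINDOW):
    """True if gaze falls inside the spatial window."""
    x_min, x_max, y_min, y_max = win
    return x_min <= x <= x_max and y_min <= y <= y_max

def monitor(read_eye_position, deliver_reward, pulse_ttl, dt=1e-3):
    """Poll gaze; reward sustained fixation, pulse TTL on fast movements."""
    prev = read_eye_position()
    entered = None                         # time the eye entered the window
    while True:
        x, y = read_eye_position()
        # Crude speed estimate from successive samples.
        speed = ((x - prev[0]) ** 2 + (y - prev[1]) ** 2) ** 0.5 / dt
        if speed > VEL_THRESHOLD:
            pulse_ttl()                    # e.g. trigger an external stimulator
        if in_window(x, y):
            entered = entered or time.monotonic()
            if time.monotonic() - entered >= HOLD_TIME:
                deliver_reward()
                entered = None             # restart the hold timer
        else:
            entered = None
        prev = (x, y)
        time.sleep(dt)
```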
This statement now reads: “which nevertheless continues to be one of the most reliable and useful ways to understand neural computations and function.”
Bit 1 of the serial line (COM port) conveys the hexadecimal representations of the words used to call images and sounds (descriptors and terminators) and of the response tokens passed between the acquisition and presentation computers, respectively. This bit was opened, written to, and closed by the respective portions of the scripts running on the acquisition and presentation computers. The signal passed between the computers in less than a millisecond. We are unclear which “signal timing diagram” the reviewer is referring to, since we do not use this term in the manuscript.
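To make the exchange concrete, a minimal sketch using the pyserial package follows. The port name, baud rate, and the byte values of the descriptor and terminator are illustrative assumptions, not the values used in our scripts.

```python
# Hypothetical sketch of the token-at-a-time serial handshake between
# the acquisition and presentation computers. Port, baud rate, and
# token values are illustrative assumptions.

import serial  # pyserial

def send_stimulus_request(port="COM1", descriptor=0x0A, terminator=0x0D):
    """Write a stimulus descriptor, then block until the presentation
    computer returns a terminator indicating the stimulus was shown."""
    com = serial.Serial(port, baudrate=9600, timeout=1.0)
    try:
        com.write(bytes([descriptor]))     # open/write: request the stimulus
        reply = com.read(1)                # wait for the response token
        return len(reply) == 1 and reply[0] == terminator
    finally:
        com.close()                        # close the line after each exchange
```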
We agree that LCD displays introduce an inherent error in the timing of image presentation because of their refresh rates (at 60 Hz, the error could be up to ~16 ms per frame). The error can be exacerbated by slow rendering on the video card, although this is less of an issue now than in the past. We did not determine the exact error of our system; however, the response time of the display was 8 ms, roughly half the refresh period, meaning the system carries a maximum error of approximately 16 ms to refresh the entire image. For computing absolute perceptual or behavioral response delays, this display delay can be subtracted from the timing of the behavioral response (in our case, eye movements), though we were not interested in these particular parameters, only in using eye movements to control stimulus presentation and reward delivery.
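To make the arithmetic explicit, using only the figures quoted above:

```python
# The display timing figures quoted above, made explicit.
REFRESH_HZ = 60.0                      # LCD refresh rate
PANEL_RESPONSE_MS = 8.0                # quoted pixel response time

frame_ms = 1000.0 / REFRESH_HZ         # ~16.7 ms between refreshes
# An image change issued just after a refresh appears up to ~one
# frame (~16 ms) late, the maximum error discussed above.
print(f"refresh period = {frame_ms:.1f} ms; "
      f"panel response = {PANEL_RESPONSE_MS:.0f} ms")
```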
The neural signal passed through a filter and amplifier before undergoing analog-to-digital conversion by the 1401. We stored the data on the same computer running the acquisition script. We typically collected one channel of raw neural signal at either 25 kHz or 50 kHz, four channels corresponding to horizontal and vertical eye position at 1 kHz, one auditory channel at 25 kHz, and one channel of spike timestamps at 10 kHz, generated on the fly during acquisition using an adjustable threshold set on the channel collecting the raw neural data. Digitizing the raw neural signal also allowed post-hoc analysis in the Spike2 software, which provides software window discriminators and level detectors as well as various forms of waveform analysis, including template matching and spike sorting with PCA algorithms.
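The on-line timestamping amounts to detecting upward threshold crossings on the digitized trace. As an illustration only (the actual detection runs inside Spike2, not in user code), a minimal sketch of the idea follows; the sampling rate and threshold are assumed values.

```python
# Minimal sketch of threshold-based spike timestamping on a digitized
# trace. The real detection runs inside Spike2; the sampling rate and
# threshold here are illustrative.

import numpy as np

def spike_timestamps(trace, threshold, fs=25_000.0):
    """Return times (s) of upward threshold crossings in `trace`."""
    above = trace >= threshold
    # Stamp a spike where the signal first rises through the threshold.
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings / fs

# Example: a noisy trace with two injected "spikes".
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, 50_000)
trace[[10_000, 30_000]] += 1.0
print(spike_timestamps(trace, threshold=0.5))   # ~ [0.4, 1.2] s
```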
The authors describe their system for presenting audio-visual stimuli and making neural recordings by combining two packages, 'Presentation' and 'Spike2'. It sounds like a very reasonable arrangement for their purposes. But they do not make a case that this system achieves anything special methodologically. People have been doing experiments similar to those they describe for many (20+) years with a variety of different systems. It seems that the advance here is that this system is easy to use and set up. This is very likely to be true provided users do substantially similar experiments, but that is true for almost any system. How easy would it be to do something else? Without some objective demonstration of the system's range, it is impossible to say whether there is a real methodological contribution here.
Smaller questions
An important problem is how precisely video events are synchronized with the neurophysiology. The moment at which the display computer requests an image change and the moment at which the first pixels of the new image actually appear on the LCD are two different things, and may well not even be separated by a fixed delay (depending on the details of both the display and the rendering). How is synchronization achieved? What is the delay? How variable is it? These crucial parameters are not reported.
What is the total loop delay from detecting some event in the A/D stream (an eye movement, a spike) to a change in some output (an electrical stimulus, an image refresh with a new image)? If detecting these events depends upon the Spike2 control script, then delays can be quite long; implementing them in the sequencer is much harder. This loop delay potentially places fundamental limits on the range of possible applications (e.g., whether gaze-contingent displays are possible, or whether cancellation tests with antidromic stimulation can be performed).
Since the paper's objective is to describe a software/hardware system, the details about surgery and training, and most of the results, seem irrelevant. This space would be better used to describe measurements of the system's performance.
In principle one might do something similar by combining other separate systems, e.g., any other commercial electrophysiology system with the Psychophysics Toolbox for the display. What are the merits of the different possibilities? Without a comparison of the available options, this description is of limited use.
I have read this submission. I believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above.
We thank the referee for taking the time to review our manuscript; we have addressed the issues raised in the revision. Below we respond to each comment.
For those inclined to use these two packages, we offer a clear, working set of scripts as an example that can serve as a springboard in the development of their own experiments. In this way, our paper makes a significant methodological contribution, saving others precious time to focus on their experimental design. We were motivated by the F1000Research journal’s ability to provide a mechanism for archiving stimuli, scripts and raw data that the research community at large can readily access.
Smaller questions:
We agree that LCD displays introduce an inherent error in the timing of image presentation because of their refresh rates (at 60 Hz, the error could be up to ~16 ms per frame). The error can be exacerbated by slow rendering on the video card, although this is less of an issue now than in the past. The delay in our system was at most ~16 ms; the exact error, which we did not determine, is likely much less. We now include information on these issues in the revised version of the manuscript.
The analog-to-digital converter (ADC) cycles through the analog signals at 1 MHz, giving the behavioral monitoring script loaded into the 1401’s sequencer a read rate of 1 µs; one line of code is acted upon with each tick of the clock, and the sequencer has direct access to the ADC signals. The latency of event monitoring therefore depends on the length of the sequencer script loop. In our example, we sampled the position of the eye every 9 µs while in the loop, so even a loop of a couple of hundred lines of code completes in hundreds of microseconds, keeping loop delays short enough to implement cancellation tests with antidromic stimulation, if needed. As indicated earlier, the delays of concern originate more with display issues for visual stimulation using naturalistic stimuli (simpler stimuli can be presented by other means), but these are still much smaller than perceptual delays, which are on the order of a couple of hundred milliseconds. For perturbing subconscious perception, display delays could either be accounted for in the stimulation paradigm, or the LCD display could be replaced with a motorized slide projector.
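As a back-of-the-envelope check on these numbers (the 1 µs tick is the figure given above; the instruction counts are illustrative):

```python
# Loop-delay budget implied by the 1 us sequencer tick quoted above.
TICK_US = 1.0                          # one instruction per clock tick

def loop_delay_us(instructions_per_loop):
    """Worst-case time to notice and react to an event in the loop."""
    return instructions_per_loop * TICK_US

print(loop_delay_us(9))                # 9 us: our eye-position sampling loop
print(loop_delay_us(200))              # ~200 lines still react within 0.2 ms
```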
We describe relevant details regarding system performance as well as offer the results obtained in our study as a proof of concept, highlighting a potential scientific advance that may not be possible with a more cumbersome system.
We agree that there are many options available for conducting neurophysiological studies and sometimes making the best choice can be difficult. A comparison between specific options would be helpful, but is somewhat arbitrary in the absence of knowledge of all available equipment and experimental goals. Therefore, we simply provide one example for a specific set of experiments and leave it to the reader to make the necessary comparisons given their objectives and available equipment. For this reason, we also believe it is important to illustrate the usage of the proposed setup in generating new findings and describe them adequately as well as provide animal protocols, as indicated by the editors/publishers. To that end, this manuscript accomplishes a clear and specific methodological goal.