API Reference#
ant package#
Subpackages#
Submodules#
ant.realtime_nf module#
- class ant.realtime_nf.NFRealtime(subject_id, visit, session, subjects_dir, montage, mri=False, subject_fs_id='fsaverage', subjects_fs_dir=None, filtering=False, l_freq=1.0, h_freq=40.0, artifact_correction=False, ref_channel='Fp1', save_raw=True, save_nf_signal=True, config_file=None, verbose=None)#
Bases:
object
EEG/Neurofeedback real-time session.
This class sets up a subject session for real-time EEG/neurofeedback processing. It validates inputs, configures session parameters and prepares logging. Optional preprocessing settings such as filtering and artifact correction can be specified.
- Parameters:
subject_id (str) – Unique identifier for the subject. Must be a non-empty string.
visit (int) – Visit number (>= 1).
session ({"baseline", "main"}) – Session type.
subjects_dir (str) – Path to the directory containing subject data.
montage (str) – EEG montage: either a built-in montage name or a path to a .bvct file.
mri (bool, default False) – Whether MRI data are available.
subject_fs_id (str, default "fsaverage") – FreeSurfer subject ID.
subjects_fs_dir (str | None, default None) – Path to FreeSurfer subjects directory.
filtering (bool, default False) – Whether to apply band-pass filtering.
l_freq (float, default 1.0) – Low-frequency cutoff (Hz).
h_freq (float, default 40.0) – High-frequency cutoff (Hz).
artifact_correction ({False, "orica", "lms"}, default False) – Artifact-correction method.
ref_channel (str, default "Fp1") – Reference EEG channel used when artifact_correction is "lms".
save_raw (bool, default True) – Whether to save raw EEG data.
save_nf_signal (bool, default True) – Whether to save the neurofeedback signal.
config_file (str | None, default None) – Path to a YAML configuration file. If None, uses the project default.
verbose (int | None, default None) – Logging verbosity level.
- Raises:
ValueError – If any parameter fails validation.
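Examples
A minimal construction sketch; the subject ID, subjects_dir path, and montage name below are illustrative placeholders, not project defaults.
>>> session = NFRealtime(
...     subject_id="sub-01",
...     visit=1,
...     session="baseline",
...     subjects_dir="/data/eeg_subjects",
...     montage="standard_1020",
...     filtering=True,
...     l_freq=1.0,
...     h_freq=40.0,
... )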
- Attributes:
modality_params – Current modality parameters.
Methods
compute_inv_operator() – Compute and save the inverse operator for source localization.
connect_to_lsl([chunk_size, mock_lsl, ...]) – Connect to a Lab Streaming Layer (LSL) EEG stream.
create_report([overwrite]) – Create an HTML report summarizing the neurofeedback session.
get_blink_template([max_iter, method]) – Identify the eye-blink component from the baseline EEG and store its template.
get_default_params() – Default parameters defined in the configuration YAML.
initiate_brain_plot([clim, camera_position, ...]) – Initialize the 3D brain plot for real-time visualization.
plot_brain(data[, interval, mode, freq_range]) – Plot dynamic brain activation or power.
plot_rt([bufsize]) – Visualise raw EEG signals from the LSL stream in real time.
record_baseline(baseline_duration[, winsize]) – Record a baseline EEG segment for neurofeedback feature extraction.
record_main(duration[, modality, picks, ...]) – Record EEG data and extract neural features for neurofeedback.
run_orica(n_channels[, learning_rate, ...]) – Initialize an online ICA (ORICA) instance for real-time artifact removal.
save([nf_data, acq_delay, artifact_delay, ...]) – Save neurofeedback session data to disk.
scale_down(ch_idx) – Halve the vertical scale for the given channel index.
scale_up(ch_idx) – Double the vertical scale for the given channel index.
update_nf_plot(new_vals[, labels]) – Update the neurofeedback plot in real time.
- VALID_ARTIFACT_METHODS = {'lms', 'orica', False}#
- VALID_SESSIONS = {'baseline', 'main'}#
- compute_inv_operator()#
Compute and save the inverse operator for source localization.
Notes
Stores the operator in self.inv.
Saves it under subject_dir/inv/visit_{visit}-inv.fif.
Examples
>>> session.compute_inv_operator()
- connect_to_lsl(chunk_size=10, mock_lsl=False, fname=None, n_repeat=inf, bufsize_baseline=4, bufsize_main=3, acquisition_delay=0.001, timeout=5.0)#
Connect to a Lab Streaming Layer (LSL) EEG stream.
This method connects to a live or mock LSL stream, sets the montage and metadata, and exposes the stream’s public methods as attributes.
- Parameters:
chunk_size (int, default 10) – Number of samples per chunk for streaming.
mock_lsl (bool, default False) – If True, stream a pre-recorded EEG file instead of live LSL data.
fname (str | Path | None, default None) – Path to the EEG file for mock streaming. Uses sample data if None.
n_repeat (int | float, default np.inf) – Number of times to repeat the mock recording.
bufsize_baseline (int, default 4) – Buffer size for ‘baseline’ sessions.
bufsize_main (int, default 3) – Buffer size for ‘main’ sessions.
acquisition_delay (float, default 0.001) – Delay (s) between consecutive acquisition attempts.
timeout (float, default 5.0) – Max time (s) to wait for an LSL connection.
- Raises:
FileNotFoundError – If mock_lsl is True and the specified fname does not exist.
ConnectionError – If the LSL stream cannot be connected within timeout.
- Return type:
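Examples
A connection sketch; the fname below is a placeholder for a pre-recorded EEG file, used only when mock_lsl is True.
>>> session.connect_to_lsl(chunk_size=10, timeout=5.0)  # live stream
>>> session.connect_to_lsl(mock_lsl=True, fname="sub-01_baseline-raw.fif", n_repeat=1)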
- create_report(overwrite=True)#
Create an HTML report summarizing the neurofeedback session.
The report includes baseline recordings, EEG sensor visualizations, and—if applicable—brain-label figures for source-based modalities.
- Parameters:
overwrite (bool, optional) – If True (default), overwrite an existing report file with the same name.
- Return type:
Notes
Baseline raw data is added without PSD or butterfly plots.
For sensor-space modalities, both 2-D topomap and 3-D sensor layouts are shown.
For source-space modalities, brain-label figures are added via plot_glass_brain().
The HTML file is saved to <subject_dir>/reports with a name containing the subject ID, visit number, and modality.
Examples
>>> session.create_report(overwrite=True)
- get_blink_template(max_iter=800, method='infomax')#
Identify the eye-blink component from the baseline EEG and store its template.
- Parameters:
max_iter (int, default 800) – Maximum number of iterations for the ICA fit.
method (str, default 'infomax') – ICA algorithm used to identify the blink component (e.g., 'infomax' or 'fastica').
Examples
>>> session.get_blink_template(max_iter=1000, method="fastica")
- get_default_params()#
Default parameters defined in the configuration YAML.
- Returns:
Default parameters for all neural-feature modalities.
- Return type:
dict
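Examples
A sketch of inspecting the defaults; treating the keys as modality names is an assumption based on the description above.
>>> defaults = session.get_default_params()
>>> sorted(defaults)  # available modality names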
- initiate_brain_plot(clim=[0, 0.6], camera_position='yz', azimuth=45)#
Initialize the 3D brain plot for real-time visualization.
This method sets up the cortical surface mesh, scalar arrays, and vertex indices for both hemispheres. It then initializes the PyVista plotter to visualize brain activation.
Notes
The subjects_dir currently points to a fixed FreeSurfer directory. Update this path as needed.
The method calls setup_surface to generate surface mesh data and setup_plotter to create the PyVista plotter.
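Examples
A minimal sketch using the documented defaults.
>>> session.initiate_brain_plot(clim=[0, 0.6], camera_position="yz", azimuth=45)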
- property modality_params: dict#
Current modality parameters.
- Returns:
Dictionary with the current neural-feature modality parameters.
- Return type:
dict
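Examples
A read-only sketch; "sensor_power" is assumed to be one of the configured modalities.
>>> params = session.modality_params
>>> params.get("sensor_power")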
- plot_brain(data, interval=0.05, mode='activation', freq_range=(8.0, 13.0))#
Plot dynamic brain activation or power.
- Parameters:
data (ndarray, shape (n_channels, n_times)) – EEG/MEG data in volts.
interval (float) – Pause (s) between updates in the animation.
mode ({'activation', 'power'}) – 'activation' shows source activation; 'power' shows band-limited source power.
freq_range (tuple of (float, float)) – Frequency range (fmin, fmax) in Hz to compute power. Used only if mode=’power’.
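Examples
A sketch assuming data is an (n_channels, n_times) array already pulled from the stream; the alpha band below is illustrative.
>>> session.plot_brain(data, mode="power", freq_range=(8.0, 13.0))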
- plot_rt(bufsize=0.2)#
Visualise raw EEG signals from the LSL stream in real time.
- Parameters:
bufsize (float) – Buffer/window size in seconds for the StreamReceiver display. Default is 0.2 s.
Notes
Uses mne_lsl.stream_viewer.StreamViewer to create a real-time display of the EEG stream.
Examples
>>> session.plot_rt(bufsize=0.2)
- record_baseline(baseline_duration, winsize=3.0)#
Record a baseline EEG segment for neurofeedback feature extraction.
Continuously streams data from the LSL connection for the given duration, fetching samples in chunks of length winsize seconds. The baseline is saved to disk as a Raw FIF file and an inverse operator is computed.
- Parameters:
baseline_duration (float) – Duration of the baseline recording in seconds.
winsize (float, default 3.0) – Chunk length in seconds used when fetching samples from the stream.
- Returns:
The method saves the baseline data and updates:
self.raw_baseline (mne.io.Raw) – The recorded baseline as a Raw object.
The internal inverse operator, via self.compute_inv_operator().
- Return type:
None
Notes
Output file: <subject_dir>/baseline/visit_<visit>-raw.fif.
The LSL stream remains connected after this method completes.
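Examples
A short sketch recording a three-minute baseline fetched in 3-second chunks.
>>> session.record_baseline(baseline_duration=180, winsize=3.0)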
- record_main(duration, modality='sensor_power', picks=None, winsize=1.0, estimate_delays=False, modality_params=None, show_raw_signal=True, show_nf_signal=True, time_window=10.0, show_brain_activation=False, show_design_viz=True, design_viz='VisualRorschach', use_ring_buffer=False)#
Record EEG data and extract neural features for neurofeedback.
Streams EEG data from the LSL connection for duration seconds, applies the chosen feature-extraction modality (or list of modalities), and optionally visualizes the neurofeedback signal in real time.
- Parameters:
duration (float) – Desired recording length in seconds.
modality (str | list of str, default 'sensor_power') – Neurofeedback feature(s) to extract.
picks (str | list of str | None, default None) – Channel names to include. If None, all channels are used.
winsize (float, default 1.0) – Window size in seconds for fetching data from the buffer.
estimate_delays (bool, default False) – If True, acquisition, artifact, method, and plot delays are measured.
modality_params (dict | None, default None) – Optional parameter overrides for each modality.
show_raw_signal (bool, default True) – Show real-time raw EEG signal plot.
show_nf_signal (bool, default True) – Show real-time neurofeedback feature plot.
time_window (float, default 10.0) – Length of time window (seconds) displayed in the NF plot.
show_brain_activation (bool, default False) – Show the real-time 3-D brain activation plot.
show_design_viz (bool, default True) – Show the neurofeedback “design” visualization (e.g., py5 sketch).
design_viz (str, default 'VisualRorschach') – Name of the visualization preset.
use_ring_buffer (bool, default False) – If True, use ring-buffer processing instead of fixed winsize pulls.
- Raises:
NotImplementedError – If a requested modality is not implemented.
AssertionError – If a source-based modality is used with non-None picks.
- Return type:
Notes
Baseline artifact correction supports ‘lms’ or ‘orica’.
Real-time visualization uses PyQt6 + pyqtgraph.
Examples
>>> session.record_main(duration=120,
...                     modality="sensor_power",
...                     picks=["C3", "C4"],
...                     winsize=2)
- run_orica(n_channels, learning_rate=0.1, block_size=256, online_whitening=True, calibrate_pca=False, forgetfac=1.0, nonlinearity='tanh', random_state=None)#
Initialize an online ICA (ORICA) instance for real-time artifact removal.
- Parameters:
n_channels (int) – Number of EEG channels to include.
learning_rate (float) – Learning rate for online ICA updates.
block_size (int) – Number of samples processed per update.
online_whitening (bool) – Whether to apply online whitening to the data.
calibrate_pca (bool) – If True, calibrates PCA before running ICA.
forgetfac (float) – Forgetting factor for ORICA update (1.0 means no forgetting).
nonlinearity (str) – Nonlinearity function used in ICA (e.g., ‘tanh’, ‘cube’).
random_state (int | None) – Random seed for reproducibility.
Notes
After initialization, use self.orica.transform(data_chunk) to process data and optionally remove identified artifact components.
Examples
>>> session.run_orica(n_channels=64, learning_rate=0.05)
- save(nf_data=True, acq_delay=True, artifact_delay=True, method_delay=True, raw_data=False, format='json')#
Save neurofeedback session data to disk.
Disconnects the LSL stream and writes neurofeedback results, delay metrics, and (optionally) raw EEG data to the subject directory.
- Parameters:
nf_data (bool, optional) – If True, save the neurofeedback data. Default is True.
acq_delay (bool, optional) – If True, save acquisition delay data. Default is True.
artifact_delay (bool, optional) – If True, save artifact-related delay data. Default is True.
method_delay (bool, optional) – If True, save method delay data. Default is True.
raw_data (bool, optional) – If True, save raw EEG data. (Not implemented.) Default is False.
format (str, optional) – File format to save the data. Currently only 'json' is supported. Default is 'json'.
- Raises:
NotImplementedError – If raw_data=True, since saving raw EEG data is not yet implemented.
Notes
Creates the folders neurofeedback, delays, main, and reports inside self.subject_dir if they do not exist.
JSON files are named using the subject ID and visit number.
Examples
>>> session.save(nf_data=True, acq_delay=False, raw_data=False)
- scale_down(ch_idx)#
Halve the vertical scale for the given channel index.
- scale_up(ch_idx)#
Double the vertical scale for the given channel index.
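Examples
A sketch adjusting the display scale of the first plotted channel; the index 0 is illustrative.
>>> session.scale_up(ch_idx=0)
>>> session.scale_down(ch_idx=0)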
- update_nf_plot(new_vals, labels=None)#
Update the neurofeedback plot in real time.
- Parameters:
new_vals (list of float) – New neurofeedback values, one per plotted modality.
labels (list of str | None, default None) – Labels for the plotted neurofeedback traces.
Notes
On the first call the plot data structures and y-axis ticks are created; subsequent calls simply roll and update the data.
Values are normalised by the modality-specific scaling factors in self.scales_dict and vertically offset by self.shifts so that multiple modalities can be visualised together.
Examples
>>> session.update_nf_plot([0.3, 0.7], labels=["Sensor Power", "Band Ratio"])