Patch-clamp data analysis in Python: animate time series data

As part of data presentation, animating electrophysiological traces can be an additional resource to capture attention and interest in presentations, lectures, or websites.

In this tutorial, I will show how to easily animate patch-clamp traces, or any time series data, in Python. Additionally, I will show how to reproduce the characteristic clicking sound associated with electrophysiological recordings of single neurons, for instance, when neurons in the visual cortex fire action potentials in response to specific visual stimulus patterns, as in the classical electrophysiological experiments by Hubel and Wiesel. See the video below.


This tutorial is part of a series of tutorials on patch-clamp data analysis for Python and Clampfit. Find all the posts here.

Feel free to email me if something is unclear or the code does not work as expected. If you find this tutorial helpful, consider sharing it, as it could also be useful to others. Additionally, a big thanks in advance if you acknowledge the post in your work (see how here and here).


The example data used in this tutorial is a cell-attached patch-clamp recording, sometimes called the on-cell configuration. This configuration is “easily” obtained by creating a tight seal between the electrode tip and the membrane surface, just before applying negative pressure to break the membrane and reach the whole-cell or other configurations. See Figure 1 from Hamill et al., 1981, summarizing the different patch-clamp configurations. Cell-attached has been widely used to study ion channels and was indeed the first application of the patch-clamp method to record currents from single ion channels, particularly acetylcholine receptors at the neuromuscular junction (Neher and Sakmann, 1976).

Figure 1. Schematic representation of patch-clamp configurations. Image from Hamill et al., 1981.

In the cell-attached configuration, we can record both in voltage-clamp and current-clamp. Action potentials (post) can be recorded as current spikes across the membrane patch in voltage-clamp (Figure 2), whereas current-clamp mode can be used to record the membrane potential, with some caveats. In fact, even a loose-patch seal (in the megaohm range) is enough to record the currents associated with action potentials, allowing recordings that last several hours. For cell-attached recordings, the pipette is generally filled with extracellular solution or artificial cerebrospinal fluid (ACSF).

Figure 2. Cell-attached recording of current spikes from a hippocampal pyramidal cell (A), and a zoom-in of one current spike from A showing its negative-to-positive shape (B). From Perkins, 2006.

One important consideration for voltage-clamp recordings in the cell-attached configuration is that the potential across the patch equals the actual membrane potential minus the potential inside the pipette (Figure 3). For example, with a membrane potential of -65 mV and a holding potential of -65 mV, the patch potential is -65 - (-65) = 0 mV. This means that setting the holding potential between -60 and -70 mV, as you would do before break-in to whole-cell (Figure 1), brings the potential across the patch close to 0 mV (see Figure 3), thereby opening voltage-gated ion channels and depolarizing the cell. However, even the commonly used holding potential of 0 mV in this configuration (guilty as charged) can depolarize the cell when the seal is very tight. In practice, if we do not want to affect the membrane potential, we have to find the holding potential at which no current flows from the amplifier. For a deep dive into the cell-attached configuration, I highly recommend reading Perkins, 2006.

Figure 3. Voltage clamp in cell-attached recordings. The actual patch potential (Epatch) is the membrane potential (Em) minus the holding potential (HP). From the book “Patch Clamping” by Molleman, 2003.

The cell-attached technique has been successfully applied to study neuronal firing properties both in vivo and ex vivo. The advantage of ex vivo preparations, such as brain slices (see resources), is better control over environmental and technical conditions while preserving local connectivity. Although some neurons in brain slices keep firing spontaneously ex vivo, others do not because their excitatory inputs have been cut. In such cases, several approaches have been used to induce spontaneous activity in brain slices to mimic in vivo conditions:


The example data for this tutorial is a cell-attached recording of a Lhx6+ neuron expressing the fluorescent protein GFP from a prefrontal cortical slice of a Lhx6 Cre-lox mouse (post). You can download the ‘abf’ file here.

If you would like to try this tutorial without even installing Python, I have uploaded the Jupyter Notebook to Google Colab. Click the “Open in Colab” button below (or right-click it to open the link in a new tab or window). In Google Colab, you only need to upload the ‘.abf’ file and run the cells by clicking the ‘Play’ icon or pressing ‘Ctrl + Enter’.

Open In Colab

A brief note on Lhx6. Lhx6 is a transcription factor required for the differentiation of cortical GABAergic neurons originating in the medial ganglionic eminence (Du et al., 2008), a region of the embryonic telencephalon (Figure 4). Lhx6 is part of the complex and precise molecular machinery (see Lim et al., 2018) that controls the migration and development of interneurons from the ganglionic eminences. Cortical interneurons mainly derive from the medial and caudal eminences, whereas interneurons in the striatum and olfactory bulb are generated in the lateral eminence (Figure 4).

Figure 4. Development of mouse glutamatergic and GABAergic interneurons. On the left, a schematic of a mouse brain at embryonic day 15 shows the lateral, medial, and caudal ganglionic eminences (LGE, MGE, CGE) and the main final destinations of the interneurons originating in each area. The boxes highlight important transcription factors for the development of GABAergic and glutamatergic neurons. On the right, a coronal section shows the migration of interneurons from the MGE. In gray, the subventricular zone that gives rise to glutamatergic neurons. Adapted from Fitzgerald et al., 2020.

I used ACSF as both the pipette and bath solution to record the neuronal firing with a tight seal in voltage-clamp at a holding potential of 0 mV. This was part of a set of experiments in which I tried to pharmacologically elicit firing activity and then test the effect of ethanol on the firing rate. In this particular experiment, neuronal firing started 3 minutes after bath application of carbachol (20 µM), and the example data is a 10-second fragment recorded when the neuron started to fire action potentials.

General recommendations
Do not overwrite the raw data, annotate/track the analysis files, and keep your files organized with clear file names. Acquiring good practices in data management is hard, but it really pays off. If you do not know how to start, check these guidelines for data management and Python (“The Good Research Code Handbook” by Patrick Mineault). A possible folder layout is sketched below.
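
For instance, a minimal project layout could look like this (the folder names are only an illustration, adapt them to your own workflow):

project/
├── Data/       # Raw '.abf' files, never overwritten
├── Analysis/   # Derived files: figures, videos, audio
└── Notebooks/  # Jupyter Notebooks with the analysis code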


The following is a brief explanation of how to install Python (one method among several), create an environment, and use the packages in Jupyter Notebooks.

Important notes
▪ Replace path/to/directory, filename, or myenv with your actual directory, file name, and Python environment name, respectively. This post by Jean-Marc Alkazzi will help you understand what a Python environment is.
▪ The # symbol is used in Python for comments, and what you write after the symbol is not read as code by Python.
▪ For locating files and folders, it is important to understand what the working directory is and the difference between absolute and relative paths. Read here about these concepts, and see the snippet below to print your current working directory.
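
As a quick check of these concepts, here is a minimal snippet (standard library only) that prints the current working directory from Python:

import os

# Relative paths are resolved from this directory
print(os.getcwd())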


  • Install a Python distribution: Miniconda or Anaconda. The main difference is that Anaconda has more libraries/packages pre-installed and has a graphical user interface called Navigator.
  • Run the Anaconda Prompt and type the following commands:
    • To change the path to your working/data directory: cd path/to/directory. Filenames and paths without a root folder are assumed to be located under the working directory.
    • To create a Conda environment (where libraries, packages, etc. are isolated from other environments): conda create -n myenv. Once you have created the environment, skip this step the next time. If you installed Miniconda, create the environment with some basic packages: conda create -n myenv pip jupyterlab matplotlib ipympl numpy pandas scipy.
    • Then, activate your new environment: conda activate myenv
    • Other packages can be installed in your Python environment with pip install or conda install: pip install name-of-the-package or conda install name-of-the-package. Note: write the package names without spaces.
    • Run the Jupyter Notebook (more about notebooks here): jupyter lab


Importing the Python libraries is only necessary at the beginning of your script. Besides the common libraries for data analysis (Matplotlib, NumPy, Pandas, and SciPy), you may need to install moviepy and pyABF.

If you have issues playing the videos created with Python, you will probably need to install the ffmpeg packages. The K-Lite Codec Pack might fix potential audio issues on Windows. Windows also recommends using ‘.mp4’ files encoded with H.264 video and AAC audio.
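
To quickly verify that matplotlib can find ffmpeg, you can run this minimal check (the path in the last comment is just a placeholder):

from matplotlib import animation

# True if matplotlib can find the ffmpeg executable
print(animation.FFMpegWriter.isAvailable())

# If it prints False, point matplotlib to your ffmpeg binary:
# import matplotlib
# matplotlib.rcParams['animation.ffmpeg_path'] = 'path/to/ffmpeg'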

To enable the interactive features of Matplotlib (the plotting library) in Jupyter lab, activate ipympl with the command %matplotlib widget. With this backend, visualization problems may arise if you use plt.plot(), so use fig, ax = plt.subplots() instead. The line plt.close('all') helps prevent problems when you run many plots in Jupyter lab.

# Libraries for numpy arrays and data tables
import numpy as np
import pandas as pd

# Import abf files
import pyabf

# Find Peaks
import scipy
from scipy import signal  # Filtering
from scipy.signal import find_peaks  # Find Peaks function
from scipy.io.wavfile import write  # Audio from NumPy array

# Plots and animations
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib import animation

# Merge video and audio files
from moviepy.editor import VideoFileClip, AudioFileClip

# Interactive plots in Jupyter lab
%matplotlib widget  
plt.close('all')


You can use f-strings with a variable such as ‘experiment_id’ to load the recording and save the results consistently throughout the Notebook.
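
For example (the experiment ID below is hypothetical):

experiment_id = '20230809_neuron01_a'  # Hypothetical experiment ID

# Reuse the same ID for loading data and saving results
data_path = f'Data/{experiment_id}.abf'
save_path = f'Analysis/{experiment_id}_video.mp4'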


ABF files
I used the package pyABF to load the ‘Axon’ example file. These files can also be read with the package Neo. See the other code snippets for loading ‘txt’ and ‘csv’ files.

# Experiment ID
experiment_id = 'Cell-attached_Lhx6_Carbachol'  # E.g. 20230809_neuron01_a

# Load the ABF file
data = f'path/to/folder/{experiment_id}.abf'
abf = pyabf.ABF(data)
print(abf)


TXT files

# Load the saved text file and create NumPy arrays
file_txt = np.loadtxt(f'Data/{experiment_id}.txt', skiprows=1)  # Skip header row

time = file_txt[:, 0]        # Assuming time is in the first column
y_variable = file_txt[:, 1]  # Assuming the signal is in the second column (avoid the name 'signal', which would shadow scipy.signal)


CSV files

df = pd.read_csv(f'path/to/file/{experiment_id}.csv',
                 delimiter=",", keep_default_na=False)

fs = 10000  # Sampling rate in Hz (adjust to your data)
time = df['time']/fs        # Convert sample index to seconds
y_variable = df['y_label']  # Use your actual column name

# Convert the pandas Series to NumPy arrays
time_np = time.to_numpy()
y_variable_np = y_variable.to_numpy()


Save the ‘abf’ file as ‘txt’ (ASCII) file
In case you need to export ABF files as ASCII files to share with collaborators.

# Save the abf file as ASCII
np.savetxt(f'path/to/file/{experiment_id}.txt', 
           np.column_stack((abf.sweepX, abf.sweepY)), 
           fmt='%.6e', delimiter='\t', 
           header=f"{abf.sweepLabelX}\t{abf.sweepLabelY}", 
           comments='')


The example data is already a fragment of the original trace extracted in Clampfit (see how to do it here). However, you can directly select the region of interest within the code, for example: abf.sweepY[0:20000] and abf.sweepX[0:20000]. See the sketch below for slicing by time instead.
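
If you prefer to slice by time rather than by sample index, here is a minimal sketch (the 2-4 s window is just an example):

# Select a region of interest by time (in seconds)
fs = int(abf.dataPointsPerMs * 1000)  # Sampling rate
start_s, end_s = 2, 4                 # Example window

abf.setSweep(0)
y_roi = abf.sweepY[int(start_s * fs):int(end_s * fs)]
t_roi = abf.sweepX[int(start_s * fs):int(end_s * fs)]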

# Sampling rate
fs = int(abf.dataPointsPerMs * 1000)

# Quick plot to see the trace/s
plt.figure(figsize=(6,3))

# To select channel/sweep: abf.setSweep(sweepNumber=0, channel=0)
for sweepNumber in abf.sweepList:  # Only 1 sweep in the example
    abf.setSweep(sweepNumber)
    y_variable = abf.sweepY
    time = abf.sweepX
    
    # Plot
    plt.plot(time, y_variable)
    plt.ylabel(abf.sweepLabelY)
    plt.xlabel(abf.sweepLabelX)

# Print the sampling rate
print("Sampling rate:", fs)

# Show the plot
plt.tight_layout()  # Adjust the padding around the plot
plt.show()

[Output]:

To know more about filtering in Python, see this previous post.

# Lowpass Bessel filter
b_lowpass, a_lowpass = signal.bessel(4,     # Order of the filter
                                     2000,  # Cutoff frequency
                                     'low', # Type of filter
                                     analog=False,  # Analog or digital filter
                                     norm='phase',  # Critical frequency normalization
                                     fs=fs)  # fs: sampling frequency

signal_filtered = signal.filtfilt(b_lowpass, a_lowpass, y_variable)

# Adjust the baseline if needed
# signal_filtered = signal_filtered - np.median(signal_filtered)

# Simple plot
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(time, signal_filtered)
plt.xlabel(abf.sweepLabelX)
plt.ylabel(abf.sweepLabelY)

# Show the plot
plt.tight_layout()
plt.show()

[Output]:


Skip this code cell if you do not want to show markers for the detected spikes or events. For more details about peak detection, read this previous post.

# Assign the variables here to simplify the code
time = abf.sweepX
y_variable = signal_filtered  # Raw or filtered signal

# Threshold for peak detection (absolute value)
peaks_threshold = 50

# Find peaks function
peaks, peaks_dict = find_peaks(-y_variable,  # Note: invert the polarity of the signal to detect negative peaks
                               height=peaks_threshold)

# Plot the detected spikes in the trace
fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(time, y_variable)

# Red dot for each detected spike
ax.plot(peaks/fs, y_variable[peaks], "r.")
ax.set_xlabel(abf.sweepLabelX)
ax.set_ylabel(abf.sweepLabelY)

fig.tight_layout()  # Adjust layout
plt.show()

[Output]:
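
As an optional sanity check (my own addition, not required for the animation), you can estimate the mean firing rate from the detected peaks:

# Convert peak sample indices to spike times in seconds
spike_times = peaks / fs

# Mean firing rate over the trace duration
duration = time[-1] - time[0]
print(f'Spikes: {len(peaks)}, mean firing rate: {len(peaks) / duration:.2f} Hz')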


To animate any time series using matplotlib, you can use the function FuncAnimation. In my limited experience, the trick is to plot enough frames to create a smooth animation while keeping it lightweight enough to render quickly in Python and save as a small file. To achieve this, adjust the ‘interval’ variable (the number of data points added per frame) according to your data so the final video plays at around 20-30 frames per second. For more examples of matplotlib animations, check out the resources section.
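
As a minimal sketch, this is how you could derive ‘interval’ from a target frame rate (the numbers match the example data: 10 s sampled at 10 kHz):

# Choose the samples-per-frame 'interval' from a target video frame rate
fs = 10000        # Sampling rate (Hz)
duration_s = 10   # Trace duration (s)
target_fps = 25   # Aim for around 20-30 frames per second

interval = int(fs / target_fps)         # 400 samples per frame
frames = (fs * duration_s) // interval  # 250 frames in total
print(interval, frames)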

# Initialize the plot
fig, ax = plt.subplots(figsize=(8, 3)) 

# Define plot parameters
line, = ax.plot([], [], lw=1, color='tab:blue')  # Initialize a line plot
events, = ax.plot([], [], 'o', color='magenta', markersize=3)  # Optional: detected events

# Axis options
ax.set_xlabel(abf.sweepLabelX)  # Set x-axis label
ax.set_ylabel("Current (pA)")  # You can also use 'abf.sweepLabelY'
ax.set_xlim(np.min(time), np.max(time))  # Set x-axis limits 

# Set y-axis limits with some upper and bottom blank space
ax.set_ylim(np.min(y_variable) + (0.15 * np.min(y_variable)), 
            np.max(y_variable) + (0.1 * np.max(y_variable)))  

# Animation function
interval = 400  # Number of data points added per frame; adapt to your sampling rate (fs)

def animate(frame):
    end_frame = (frame + 1) * interval  # Last data point shown in this animation frame
    line.set_data(time[:end_frame], y_variable[:end_frame])  # Update line plot data

    # Comment out the 'events' lines if you do not want to show detected peaks
    events_time = time[peaks[peaks < end_frame]]  # Extract event times
    events_signal = y_variable[peaks[peaks < end_frame]]
    events.set_data(events_time, (np.min(y_variable[peaks]) * 1.05))  # Event dots at fixed height
    # events.set_data(events_time, events_signal * 1.1)  # Event dots at variable heights
    
    return line, events

# Create the animation
frames = len(time) // interval

anim = animation.FuncAnimation(fig, animate, frames=frames,
                               interval=5,  # Delay between frames (ms), only for the live preview
                               blit=True, repeat=False)

print("Animation (number of frames):", frames)

fig.tight_layout()  # Adjust layout
plt.show()  # Display the animation

Save the animation as video

The video is created using matplotlib.animation.FFMpegWriter (which relies on the ffmpeg library to encode the video) and saved as ‘.mp4’ with anim.save. Note: The magic command %%time at the top of the cell measures the cell execution time.

%%time

# Animation parameters
duration_s = 10  # Select the length of the video here

anim_fps = frames/duration_s

extra_args = [
    '-vcodec', 'libx264',  # Video codec
    '-s', '2400x900',]  # Set the output resolution, same ratio as figure size

# Create the video with FFMpegWriter
writer = animation.FFMpegWriter(fps=anim_fps, bitrate=3000, 
                                codec="h264",  extra_args=extra_args)

# Save path
anim.save(f'path/to/file/{experiment_id}_video.mp4', writer=writer)

# Print the fps and duration of the final video
print('Video_duration (s):', frames/anim_fps)
print('Video_fps:', anim_fps)

[Output]:


Save the animation as gif

The animation can also be saved as a GIF with matplotlib.animation.PillowWriter:

writergif = animation.PillowWriter(fps=anim_fps, bitrate=2000)

anim.save(f'path/to/file/{experiment_id}_video.gif', writer=writergif)

[Output]:


What is the sound of an action potential? Most patch-clamp amplifiers allow you to connect a speaker and listen to the crackling sound of spikes, which is very cool. I am not sure whether Clampex allows you to save the audio, but there is a workaround with Python. Here I show code snippets that use the libraries scipy wavfile, SoundFile, or IPython display to convert time-series data into audio files. Feel free to play around with the parameters and jazz up your presentations.

Scipy wavfile

# Audio parameters
amplitude = np.iinfo(np.int16).max  # Get the max value of the audio type (e.g. 16 bits)
audio_fs = int(len(y_variable)/duration_s)  # Audio rate so the WAV lasts exactly 'duration_s' seconds, in sync with the video
gain = 1  # Set to 1 if not needed

# Normalize the ephys data in the range [-1, 1]
normalized_data = y_variable / np.max(np.abs(y_variable))

# Scale data option 1: to the audio range and, optionally, multiply by some gain
# scaled_data = np.int16(normalized_data * amplitude * gain)

# Scale data option 2: clipping the data to get less background noise
scaled_data = np.int16(np.clip(normalized_data, -1, -0.06) * amplitude * gain)

# Save the NumPy array as a WAV file
# write(f'path/to/file/{experiment_id}_scipy.wav', rate=audio_fs, data=-scaled_data)
write(f'Analysis/Animation/{experiment_id}_audio.wav', rate=audio_fs, data=scaled_data)
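
To verify the result (an optional check I added), you can read the WAV file back and confirm that its duration matches the video:

from scipy.io import wavfile

# Read the WAV file back and check its duration
rate, audio = wavfile.read(f'Analysis/Animation/{experiment_id}_audio.wav')
print('Audio duration (s):', len(audio) / rate)  # Should match duration_s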


SoundFile
Save the NumPy array as a WAV file using ‘SoundFile’.

import soundfile as sf

sf.write(f'path/to/file/{experiment_id}_soundfile.wav', 
         scaled_data, audio_fs, subtype='PCM_16')


IPython
To save the audio, click on the 3 vertical dots and click “Download”.

from IPython.display import Audio

# You can also convert the time-series data directly
# Note: Audio normalizes the data to the maximum amplitude by default
Audio(y_variable, rate=audio_fs)


You can easily merge the video and audio files created earlier using the library MoviePy. Change the parameters as needed to fix or improve the video results.

Alternatively, you can use free and easy-to-use video editors like OpenShot Video Editor or Shotcut (both cross-platform). If you also need a basic audio editor, try Audacity.

# Load the video and audio files
video_clip = VideoFileClip(f'path/to/file/{experiment_id}_video.mp4')
audio_clip = AudioFileClip(f'path/to/file/{experiment_id}_audio.wav')

# Set the audio of the video clip
video_clip = video_clip.set_audio(audio_clip)

# Save the video
save_path = f'path/to/file/{experiment_id}_merge.mp4'

video_clip.write_videofile(save_path, 
                           codec='libx264', bitrate='3000k',
                           audio_codec='aac', temp_audiofile='temp_audio.m4a', 
                           remove_temp=True)

# Close the clips
video_clip.close()
audio_clip.close()

[Final output]:

Bonus track. Animation of a current-clamp recording from a culture of stem-cell-derived Nkx2.1 interneurons. You can download the ‘.abf’ file here. In this example, I cranked up the gain to 10 to get a noisier sound. Together with Lhx6 and many other molecular markers, Nkx2.1 is a crucial transcription factor regulating the development of cortical interneurons generated in the medial ganglionic eminence (Butt et al., 2008).
