In [1]:
from IPython.display import display, HTML  # IPython.core.display is deprecated
display(HTML("<style>.container { width:100% !important; }</style>"))
In [2]:
%%capture --no-display
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"
import sys
import pickle as pkl
import interactive_plotting as iplot
In [3]:
from bokeh.io import output_notebook
output_notebook()
Loading BokehJS ...
In [4]:
aud_res_df_path = '../results/main_auditory_results_df.pkl'
with open(aud_res_df_path, "rb") as f:
    aud_res_df = pkl.load(f)
aud_res_df = aud_res_df.loc[aud_res_df['nonlinearity']=='sigmoid'].query('noise_ratio==0.5')
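As a minimal sketch of the row filtering above (the dataframe here is synthetic; only the column names are taken from the cell), `.loc` with a boolean mask and `.query` can be chained to keep a single hyperparameter setting:

```python
import pandas as pd

# Synthetic stand-in for the results dataframe; column names come from the
# notebook, the values are made up for illustration.
df = pd.DataFrame({
    'nonlinearity':   ['sigmoid', 'relu', 'sigmoid', 'sigmoid'],
    'noise_ratio':    [0.5, 0.5, 0.25, 0.5],
    'final_val_cost': [0.12, 0.15, 0.11, 0.09],
})

# Keep only sigmoid runs at a noise ratio of 0.5, as in the cell above.
subset = df.loc[df['nonlinearity'] == 'sigmoid'].query('noise_ratio == 0.5')
print(len(subset))  # 2 rows survive the filter
```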
In [5]:
%%capture --no-display
this_save_dir = 'auditory_images/'
root_dir='./'
aud_url_pd = iplot.get_urls_for_auditory_df_data(aud_res_df,  # already filtered to sigmoid above
                                                 'log_reg_factor', 'num_hidden_units', 'final_val_cost',
                                                 server_address='', root_dir=root_dir, this_save_dir=this_save_dir,
                                                 verbose=False)
In [6]:
%%capture --no-display
iplot.plot_linked_heatmap_weights(aud_url_pd, x_key='log_reg_factor', y_key='num_hidden_units',
                                  RF_plot_title = 'Corresponding spectrotemporal RFs', 
                                  add_t_slider=False)

Figure 8-Figure supplement 2 | Interactive figure exploring the relationship between the strength of L1 regularization on the network weights and the structure of the RFs the network produces when the network is trained on auditory inputs.

The left-hand panel shows the performance of the network with the hyperparameter settings specified on the x- and y-axes. The x-axis signifies the strength of L1 regularization placed on the weights of the network during training; the y-axis signifies the number of hidden units in the network. The colour represents the predictive capacity of the model, as measured by the prediction error (mean squared error) on a held-out validation set.
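The heatmap amounts to arranging one validation cost per (regularization strength, hidden-unit count) pair into a 2-D grid. A sketch of that arrangement with pandas, using a hypothetical miniature grid of runs (the real `aud_res_df` holds one row per trained network):

```python
import pandas as pd

# Hypothetical miniature grid of runs; values are invented for illustration.
runs = pd.DataFrame({
    'log_reg_factor':   [-6, -6, -4, -4],
    'num_hidden_units': [100, 200, 100, 200],
    'final_val_cost':   [0.20, 0.18, 0.15, 0.22],
})

# Arrange the runs into the 2-D grid the heatmap colours: one row per
# hidden-unit count, one column per regularization strength.
grid = runs.pivot(index='num_hidden_units', columns='log_reg_factor',
                  values='final_val_cost')
print(grid.shape)         # (2, 2)
print(grid.loc[100, -4])  # 0.15
```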

How to interact with the figure:

Hover over a point in the left-hand panel to show the corresponding spectrotemporal receptive fields of the network in the right-hand panel. Using the controls beside the right-hand panel, zoom, pan, and reset the image to explore the shapes of the spectrotemporal receptive fields. Many hidden units’ weight matrices decayed to near zero during training; these inactive units were excluded from the analysis and are not shown.
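One simple way to identify such inactive units is to threshold the per-unit norm of the weight matrix. This is only a sketch with a made-up threshold and synthetic weights; the paper's exact exclusion criterion may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weight matrix: 5 hidden units x 20 inputs. Units 1 and 3 are
# "inactive" -- their weights have decayed to near zero under L1 pressure.
W = rng.normal(size=(5, 20))
W[1] *= 1e-6
W[3] *= 1e-6

# Threshold the per-unit L2 norm of the weights (illustrative cutoff).
norms = np.linalg.norm(W, axis=1)
active = norms > 1e-3
print(active.sum())  # 3 active units remain
```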

In [7]:
vis_res_df_path = '../results/main_visual_results_df.pkl'
with open(vis_res_df_path, "rb") as f:
    vis_res_df = pkl.load(f)
vis_res_df = vis_res_df.query('noise_ratio==0.5 and log_reg_factor>-8')
In [8]:
%%capture --no-display
this_save_dir = 'visual_images/'
root_dir='./'
visual_url_pd = iplot.get_urls_for_visual_df_data(vis_res_df, 'log_reg_factor', 'num_filters', 
                                                  'final_val_cost', server_address='', 
                                                  root_dir=root_dir, this_save_dir=this_save_dir, 
                                                  verbose=False)
In [9]:
%%capture --no-display
iplot.plot_linked_heatmap_weights(visual_url_pd, x_key='log_reg_factor', y_key='num_filters', 
                                  RF_plot_title = 'Corresponding spatial RFs', 
                                  add_t_slider=True)

Figure 8-Figure supplement 3 | Interactive figure exploring the relationship between the strength of L1 regularization on the network weights and the structure of the RFs the network produces when the network is trained on visual inputs.

The left-hand panel shows the performance of the network with the hyperparameter settings specified on the x- and y-axes. The x-axis signifies the strength of L1 regularization placed on the weights of the network during training; the y-axis signifies the number of hidden units (filters) in the network. The colour represents the predictive capacity of the model, as measured by the prediction error (mean squared error) on a held-out validation set.

How to interact with the figure:

Hover over a point in the left-hand panel to show the corresponding spatial receptive fields of the network in the right-hand panel. Using the controls to the right of the right-hand panel, zoom, pan, and reset the image to explore the shapes of the spatial receptive fields. Move the slider labelled 'time step' to change which time step of the spatial receptive fields is shown. Some hidden units’ weight matrices decayed to near zero during training; these inactive units were excluded from the analysis and are not shown.
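The 'time step' slider simply indexes the temporal axis of the spatiotemporal weights. A sketch with a synthetic array (the shape here is invented; only the idea of a time axis comes from the figure):

```python
import numpy as np

# Hypothetical spatiotemporal RFs: (units, time steps, height, width).
rfs = np.arange(4 * 7 * 8 * 8, dtype=float).reshape(4, 7, 8, 8)

# Each slider position selects one spatial (height x width) slice of every
# unit's RF at that time step.
t = 3
spatial_slice = rfs[:, t]
print(spatial_slice.shape)  # (4, 8, 8)
```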