NuPIC
0.2.7.dev0
Numenta Platform for Intelligent Computing

Class implementing the temporal pooler algorithm as described in the published Cortical Learning Algorithm documentation. More...
Classes  
class  SegmentUpdate 
Class used to carry instructions for updating a segment. More...  
Public Member Functions  
def  __init__ 
Construct the TP. More...  
def  saveToFile 
Implemented in TP10X2.TP10X2.saveToFile.  
def  loadFromFile 
Implemented in TP10X2.TP10X2.loadFromFile.  
def  reset 
Reset the state of all cells. More...  
def  resetStats 
Reset the learning and inference stats. More...  
def  getStats 
Return the current learning and inference stats. More...  
def  printState 
Print an integer array that is the same shape as activeState. More...  
def  printConfidence 
Print a floating point array that is the same shape as activeState. More...  
def  printColConfidence 
Print up to maxCols values from a flat floating point array. More...  
def  printStates 
def  printOutput 
def  printInput 
def  printParameters 
Print the parameter settings for the TP.  
def  printActiveIndices 
Print the list of [column, cellIdx] indices for each of the active cells in state. More...  
def  printComputeEnd 
Called at the end of inference to print out various diagnostic information based on the current verbosity level. More...  
def  printSegmentUpdates 
def  printCell 
def  printCells 
def  getNumSegmentsInCell 
def  getNumSynapses 
def  getNumStrongSynapses 
def  getNumStrongSynapsesPerTimeSlot 
def  getNumSynapsesPerSegmentMax 
def  getNumSynapsesPerSegmentAvg 
def  getNumSegments 
def  getNumCells 
def  getSegmentOnCell 
def  addToSegmentUpdates 
Store a dated potential segment update. More...  
def  removeSegmentUpdate 
Remove a segment update (called when seg update expires or is processed) More...  
def  computeOutput 
Computes output for both learning and inference. More...  
def  getActiveState 
Return the current active state. More...  
def  getPredictedState 
Return a numpy array, predictedCells, representing the current predicted state. More...  
def  predict 
This function gives the future predictions for <nSteps> timesteps starting from the current TP state. More...  
def  getAvgLearnedSeqLength 
def  inferBacktrack 
This "backtracks" our inference state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells. More...  
def  inferPhase1 
Update the inference active state from the last set of predictions and the current bottom-up input. More...  
def  inferPhase2 
Phase 2 for the inference state. More...  
def  updateInferenceState 
Update the inference state. More...  
def  learnBacktrack 
This "backtracks" our learning state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells. More...  
def  learnPhase1 
Compute the learning active state given the predicted state and the bottom-up input. More...  
def  learnPhase2 
Compute the predicted segments given the current set of active cells. More...  
def  updateLearningState 
Update the learning state. More...  
def  compute 
Handle one compute, possibly learning. More...  
def  infer 
def  learn 
def  updateSegmentDutyCycles 
This gets called on every compute. More...  
def  columnConfidences 
Compute the column confidences given the cell confidences. More...  
def  topDownCompute 
Top-down compute: generate expected input given output of the TP. More...  
def  trimSegmentsInCell 
This method goes through a list of segments for a given cell and deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining. More...  
def  trimSegments 
This method deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining. More...  
def  cleanUpdatesList 
Removes any update that would be for the given col, cellIdx, segIdx. More...  
def  finishLearning 
Called when learning has been completed. More...  
def  checkPrediction2 
This function will replace checkPrediction. More...  
def  isSegmentActive 
A segment is active if it has >= activationThreshold connected synapses that are active due to activeState. More...  
def  getSegmentActivityLevel 
This routine computes the activity level of a segment given activeState. More...  
def  getBestMatchingCell 
Find weakly activated cell in column with at least minThreshold active synapses. More...  
def  getBestMatchingSegment 
For the given cell, find the segment with the largest number of active synapses. More...  
def  getCellForNewSegment 
Return the index of a cell in this column which is a good candidate for adding a new segment. More...  
def  getSegmentActiveSynapses 
Return a segmentUpdate data structure containing a list of proposed changes to segment s. More...  
def  chooseCellsToLearnFrom 
Choose n random cells to learn from. More...  
def  processSegmentUpdates 
Go through the list of accumulated segment updates and process them as follows: More...  
def  adaptSegment 
This function applies segment update information to a segment in a cell. More...  
def  getSegmentInfo 
Returns information about the distribution of segments, synapses and permanence values in the current TP. More...  
Public Attributes  
version  
numberOfCols  
cellsPerColumn  
initialPerm  
connectedPerm  
minThreshold  
newSynapseCount  
permanenceInc  
permanenceDec  
permanenceMax  
globalDecay  
activationThreshold  
doPooling  
Allows pooling to be turned off.  
segUpdateValidDuration  
burnIn  
Used for evaluating the prediction score.  
collectStats  
If true, collect training/inference stats.  
seed  
verbosity  
pamLength  
maxAge  
maxInfBacktrack  
maxLrnBacktrack  
maxSeqLength  
maxSegmentsPerCell  
maxSynapsesPerSegment  
outputType  
activeColumns  
cells  
Cells are indexed by column and index within the column. Every self.cells[column][index] contains a list of segments. Each segment is a structure of class Segment.  
lrnIterationIdx  
iterationIdx  
segID  
Unique segment ID, so we can put segments in hashes.  
currentOutput  
pamCounter  
pamCounter gets reset to pamLength whenever we detect that the learning state is making good predictions (at least half the columns predicted). More...  
collectSequenceStats  
If True, the TP will compute a signature for each sequence.  
resetCalled  
This gets set when we receive a reset and cleared on the first compute following a reset. More...  
avgInputDensity  
We keep track of the average input density here.  
learnedSeqLength  
Keeps track of the length of the sequence currently being learned. More...  
avgLearnedSeqLength  
Keeps track of the moving average of all learned sequence length. More...  
segmentUpdates  
We store the lists of segment updates, per cell, so that they can be applied later during learning, when the cell gets bottom-up activation. More...  
cellConfidence  
colConfidence  
lrnActiveState  
infActiveState  
lrnPredictedState  
infPredictedState  
Class implementing the temporal pooler algorithm as described in the published Cortical Learning Algorithm documentation.
The implementation here attempts to closely match the pseudocode in the documentation. This implementation does contain several additional bells and whistles such as a column confidence measure.
Todo:
- Document other constructor parameters.
- Have some higher-level flags for fast learning, HiLo, Pooling, etc.
Constructor & Destructor Documentation

def __init__(self,
             numberOfCols=500,
             cellsPerColumn=10,
             initialPerm=0.11,
             connectedPerm=0.50,
             minThreshold=8,
             newSynapseCount=15,
             permanenceInc=0.10,
             permanenceDec=0.10,
             permanenceMax=1.0,
             globalDecay=0.10,
             activationThreshold=12,
             doPooling=False,
             segUpdateValidDuration=5,
             burnIn=2,
             collectStats=False,
             seed=42,
             verbosity=VERBOSITY,
             checkSynapseConsistency=False,
             pamLength=1,
             maxInfBacktrack=10,
             maxLrnBacktrack=5,
             maxAge=100000,
             maxSeqLength=32,
             maxSegmentsPerCell=-1,
             maxSynapsesPerSegment=-1,
             outputType='normal')
Construct the TP.
pamLength  Number of time steps to remain in "Pay Attention Mode" after we detect we've reached the end of a learned sequence. Setting this to 0 disables PAM mode. When we are in PAM mode, we do not burst unpredicted columns during learning, which in turn prevents us from falling into a previously learned sequence for a while (until we run through another 'pamLength' steps). The advantage of PAM mode is that it requires fewer presentations to learn a set of sequences which share elements. The disadvantage of PAM mode is that if a learned sequence is immediately followed by a set of elements that should be learned as a 2nd sequence, the first pamLength elements of that sequence will not be learned as part of that 2nd sequence. 
maxAge  Controls global decay. Global decay will only decay segments that have not been activated for maxAge iterations, and will only do the global decay loop every maxAge iterations. The default (maxAge=1) reverts to the behavior where global decay is applied every iteration to every segment. Using maxAge > 1 can significantly speed up the TP when global decay is used. 
maxSeqLength  If not 0, we will never learn more than maxSeqLength inputs in a row without starting over at start cells. This sets an upper bound on the length of learned sequences and thus is another means (besides maxAge and globalDecay) by which to limit how much the TP tries to learn. 
maxSegmentsPerCell  The maximum number of segments allowed on a cell. This is used to turn on "fixed size CLA" mode. When in effect, globalDecay is not applicable and must be set to 0 and maxAge must be set to 0. When this is used (> 0), maxSynapsesPerSegment must also be > 0. 
maxSynapsesPerSegment  The maximum number of synapses allowed in a segment. This is used to turn on "fixed size CLA" mode. When in effect, globalDecay is not applicable and must be set to 0 and maxAge must be set to 0. When this is used (> 0), maxSegmentsPerCell must also be > 0. 
outputType  Can be one of the following: 'normal', 'activeState', 'activeState1CellPerCol'. 'normal': output the OR of the active and predicted state. 'activeState': output only the active state. 'activeState1CellPerCol': output only the active state, and at most 1 cell/column. If more than 1 cell is active in a column, the one with the highest confidence is sent up. Default is 'normal'. 
doPooling  If True, pooling is enabled. False is the default. 
burnIn  Used for evaluating the prediction score. Default is 2. 
collectStats  If True, collect training / inference stats. Default is False. 
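A minimal construction sketch, assuming the nupic.research.TP import path from this release; the parameter values below are illustrative, not recommendations. It shows the "fixed size CLA" mode constraints described above: both per-cell caps > 0, with globalDecay and maxAge forced to 0.

    # Construction sketch (import path assumed from this NuPIC release).
    from nupic.research.TP import TP

    tp = TP(numberOfCols=2048,
            cellsPerColumn=32,
            initialPerm=0.21,
            connectedPerm=0.50,
            minThreshold=11,
            newSynapseCount=20,
            activationThreshold=14,
            globalDecay=0.0,           # required in fixed-size CLA mode
            maxAge=0,                  # required in fixed-size CLA mode
            maxSegmentsPerCell=128,    # > 0 turns fixed-size CLA mode on
            maxSynapsesPerSegment=32,  # must also be > 0
            pamLength=3,
            collectStats=True)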
Member Function Documentation

def adaptSegment(self, segUpdate)
This function applies segment update information to a segment in a cell.
Synapses on the active list get their permanence counts incremented by permanenceInc. All other synapses get their permanence counts decremented by permanenceDec. We also increment the positiveActivations count of the segment.
segUpdate  SegmentUpdate instance 
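A minimal sketch of the update rule just described; the synapse representation below is a hypothetical stand-in for the TP's internal one.

    # Hypothetical synapse representation: [srcCol, srcCellIdx, permanence].
    def adapt_segment_sketch(synapses, activeIdxs,
                             permanenceInc=0.10, permanenceDec=0.10,
                             permanenceMax=1.0):
        for idx, syn in enumerate(synapses):
            if idx in activeIdxs:
                # active synapses are reinforced, capped at permanenceMax
                syn[2] = min(permanenceMax, syn[2] + permanenceInc)
            else:
                # all other synapses are weakened, floored at 0
                syn[2] = max(0.0, syn[2] - permanenceDec)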
def addToSegmentUpdates(self, c, i, segUpdate)
Store a dated potential segment update.
The "date" (iteration index) is used later to determine whether the update is too old and should be forgotten. This is controlled by parameter segUpdateValidDuration.
c  TODO: document 
i  TODO: document 
segUpdate  TODO: document 
def checkPrediction2(self, patternNZs, output=None, colConfidence=None, details=False)
This function will replace checkPrediction.
This function produces goodness-of-match scores for a set of input patterns, by checking for their presence in the current and predicted output of the TP. Returns a global count of the number of extra and missing bits, the confidence scores for each input pattern, and (if requested) the bits in each input pattern that were not present in the TP's prediction.
patternNZs  a list of input patterns that we want to check for. Each element is a list of the nonzeros in that pattern. 
output  The output of the TP. If not specified, then use the TP's current output. This can be specified if you are trying to check the prediction metric for an output from the past. 
colConfidence  The column confidences. If not specified, then use the TP's current self.colConfidence. This can be specified if you are trying to check the prediction metrics for an output from the past. 
details  if True, also include details of missing bits per pattern. 
Returns: [ totalExtras, totalMissing, [conf_1, conf_2, ...], [missing1, missing2, ...] ]
totalExtras  a global count of the number of 'extras', i.e. bits that are on in the current output but not in the or of all the passed in patterns 
totalMissing  a global count of all the missing bits, i.e. the bits that are on in the or of the patterns, but not in the current output 
conf_i  the confidence score for the i'th pattern in patternsToCheck. This consists of 3 items as a tuple: (predictionScore, posPredictionScore, negPredictionScore) 
missing_i  the bits in the i'th pattern that were missing in the output. This list is only returned if details is True. 
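A usage sketch, assuming a constructed TP instance tp and a hypothetical list expectedNZ holding the non-zero bit indices of the expected input:

    # With details=True, a fourth element (missing bits per pattern) is
    # included in the returned list.
    totalExtras, totalMissing, confidences, missing = tp.checkPrediction2(
        patternNZs=[expectedNZ], details=True)
    predictionScore, posScore, negScore = confidences[0]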
def chooseCellsToLearnFrom(self, c, i, s, n, activeState)
Choose n random cells to learn from.
This function is called several times while learning with timeStep = t-1, so we cache the set of candidates for that case. It's also called once with timeStep = t, and we cache that set of candidates.
def cleanUpdatesList(self, col, cellIdx, seg)
Removes any update that would be for the given col, cellIdx, segIdx.
NOTE: logically, we need to do this when we delete segments, so that if an update refers to a segment that was just deleted, we also remove that update from the update list. However, I haven't seen it trigger in any of the unit tests yet, so it might mean that it's not needed and that situation doesn't occur, by construction.
def columnConfidences(self, cellConfidences=None)
Compute the column confidences given the cell confidences.
If None is passed in for cellConfidences, the stored cell confidences from the last compute are used.
cellConfidences  Cell confidences to use, or None to use the current cell confidences. 
def compute(self, bottomUpInput, enableLearn, computeInfOutput=None)
Handle one compute, possibly learning.
bottomUpInput  The bottom-up input, typically from a spatial pooler 
enableLearn  If True, perform learning 
computeInfOutput  If None, the default behavior is to disable the inference output when enableLearn is on. If True, compute the inference output. If False, do not compute the inference output. 
It is an error to have both enableLearn and computeInfOutput set to False. By default, we don't compute the inference output when learning because it slows things down, but you can override this by passing in True for computeInfOutput.
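A typical call pattern, sketched under the assumption that each input is a binary numpy array of length numberOfCols; 'dataset' is a hypothetical iterable of sequences.

    # Learn a set of sequences, resetting the TP between them.
    for sequence in dataset:
        for pattern in sequence:
            tp.compute(pattern, enableLearn=True, computeInfOutput=False)
        tp.reset()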
def computeOutput(self)
Computes output for both learning and inference.
In both cases, the output is the boolean OR of activeState and predictedState at t. Stores currentOutput for checkPrediction.
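Conceptually (a sketch of the 'normal' output type, not the internal code):

    import numpy

    # Element-wise OR of the active and predicted cell states at time t;
    # both arrays are assumed to have shape (numberOfCols, cellsPerColumn).
    currentOutput = numpy.logical_or(activeState, predictedState)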
def finishLearning(self)
Called when learning has been completed.
This method just calls trimSegments(). (finishLearning is here for backward compatibility)
def getActiveState(self)
Return the current active state.
This is called by the node to obtain the sequence output of the TP.
def getAvgLearnedSeqLength(self)
def getBestMatchingCell(self, c, activeState, minThreshold)
Find weakly activated cell in column with at least minThreshold active synapses.
c  which column to look at 
activeState  the active cells 
minThreshold  minimum number of synapses required 
def getBestMatchingSegment(self, c, i, activeState)
For the given cell, find the segment with the largest number of active synapses.
This routine is aggressive in finding the best match. The permanence value of synapses is allowed to be below connectedPerm. The number of active synapses is allowed to be below activationThreshold, but must be above minThreshold. The routine returns the segment index. If no segments are found, then an index of -1 is returned.
c  TODO: document 
i  TODO: document 
activeState  TODO: document 
def getCellForNewSegment(self, colIdx)
Return the index of a cell in this column which is a good candidate for adding a new segment.
When we have fixed size resources in effect, we ensure that we pick a cell which does not already have the max number of allowed segments. If none exists, we choose the least used segment in the column to re-allocate.
colIdx  which column to look at 
def getNumCells(self)
def getNumSegments(self)
def getNumSegmentsInCell(self, c, i)
c  column index 
i  cell index within column 
def getNumStrongSynapses(self)
def getNumStrongSynapsesPerTimeSlot(self)
def getNumSynapses(self)
def getNumSynapsesPerSegmentAvg(self)
def getNumSynapsesPerSegmentMax(self)
def getPredictedState(self)
Return a numpy array, predictedCells, representing the current predicted state.
predictedCells[c][i] represents the state of the i'th cell in the c'th column.
def getSegmentActiveSynapses(self, c, i, s, activeState, newSynapses=False)
Return a segmentUpdate data structure containing a list of proposed changes to segment s.
Let activeSynapses be the list of active synapses where the originating cells have their activeState output = 1 at time step t. (This list is empty if s is None since the segment doesn't exist.) newSynapses is an optional argument that defaults to False. If newSynapses is True, then newSynapseCount - len(activeSynapses) synapses are added to activeSynapses. These synapses are randomly chosen from the set of cells that have learnState = 1 at timeStep.
c  TODO: document 
i  TODO: document 
s  TODO: document 
activeState  TODO: document 
newSynapses  TODO: document 
def getSegmentActivityLevel(self, seg, activeState, connectedSynapsesOnly=False)
This routine computes the activity level of a segment given activeState.
It can tally up only connected synapses (permanence >= connectedPerm), or all the synapses of the segment, at either t or t-1.
seg  TODO: document 
activeState  TODO: document 
connectedSynapsesOnly  TODO: document 
def getSegmentInfo(self, collectActiveData=False)
Returns information about the distribution of segments, synapses and permanence values in the current TP.
If requested, also returns information regarding the number of currently active segments and synapses.
Returns: (nSegments, nSynapses, nActiveSegs, nActiveSynapses, distSegSizes, distNSegsPerCell, distPermValues, distAges)
nSegments  total number of segments 
nSynapses  total number of synapses 
nActiveSegs  total no. of active segments (0 if collectActiveData is False) 
nActiveSynapses  total no. of active synapses (0 if collectActiveData is False) 
distSegSizes  a dict where d[n] = number of segments with n synapses 
distNSegsPerCell  a dict where d[n] = number of cells with n segments 
distPermValues  a dict where d[p] = number of synapses with perm = p/10 
distAges  a list of tuples (ageRange, numSegments) 
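A usage sketch unpacking the returned tuple (tp is a constructed TP instance):

    (nSegments, nSynapses, nActiveSegs, nActiveSynapses,
     distSegSizes, distNSegsPerCell, distPermValues, distAges) = \
        tp.getSegmentInfo(collectActiveData=True)
    print("segments=%d synapses=%d active segments=%d"
          % (nSegments, nSynapses, nActiveSegs))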
def getSegmentOnCell(self, c, i, segIdx)
c  column index 
i  cell index in column 
segIdx  TODO: document 
Returns the segment as the following list: [ [segmentID, sequenceSegmentFlag, positiveActivations, totalActivations, lastActiveIteration, lastPosDutyCycle, lastPosDutyCycleIteration], [col1, idx1, perm1], [col2, idx2, perm2], ... ]
segmentId  TODO: document 
sequenceSegmentFlag  TODO: document 
positiveActivations  TODO: document 
totalActivations  TODO: document 
lastActiveIteration  TODO: document 
lastPosDutyCycle  TODO: document 
lastPosDutyCycleIteration  TODO: document 
[col1,idx1,perm1]  TODO: document 
def getStats(self)
Return the current learning and inference stats.
This returns a dict containing all the learning and inference stats we have collected since the last resetStats(). If collectStats is False, then None is returned.
The following keys are returned in the dict when @ref collectStats is True:
nPredictions  the number of predictions. This is the total number of inferences excluding burn-in and the last inference. 
curPredictionScore  the score for predicting the current input (predicted during the previous inference) 
curMissing  the number of bits in the current input that were not predicted to be on. 
curExtra  the number of bits in the predicted output that are not in the next input 
predictionScoreTotal  the sum of every prediction score to date 
predictionScoreAvg  predictionScoreTotal / nPredictions 
pctMissingTotal  the total number of bits that were missed over all predictions 
pctMissingAvg  pctMissingTotal / nPredictions 
prevSequenceSignature  signature for the sequence immediately preceding the last reset. 'None' if collectSequenceStats is False 
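A usage sketch; the TP must have been constructed with collectStats=True, and 'testSequence' is a hypothetical iterable of input arrays.

    tp.resetStats()
    for pattern in testSequence:
        tp.compute(pattern, enableLearn=False, computeInfOutput=True)
    stats = tp.getStats()          # None when collectStats is False
    if stats is not None:
        print(stats['predictionScoreAvg'], stats['pctMissingAvg'])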
def infer(self, bottomUpInput)
def inferBacktrack(self, activeColumns)
This "backtracks" our inference state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells.
activeColumns  The list of active column indices 
This will adjust @ref infActiveState['t'] if it does manage to lock on to a sequence that started earlier. It will also compute infPredictedState['t'] based on the possibly updated @ref infActiveState['t'], so there is no need to call inferPhase2() after calling inferBacktrack().
This looks at:
- @ref infActiveState['t']
This updates/modifies:
- @ref infActiveState['t']
- @ref infPredictedState['t']
- @ref colConfidence['t']
- @ref cellConfidence['t']
How it works:
This method gets called from updateInferenceState when we detect either of the following two conditions:
1. The current bottom-up input had too many unexpected columns
2. We fail to generate a sufficient number of predicted columns for the next time step.
Either of these two conditions indicates that we have fallen out of a learned sequence.
Rather than simply "giving up" and bursting on the unexpected input columns, a better approach is to see if perhaps we are in a sequence that started a few steps ago. The real world analogy is that you are driving along and suddenly hit a dead-end; you will typically go back a few turns and pick up again from a familiar intersection.
This backtracking goes hand in hand with our learning methodology, which always tries to learn again from start cells after it loses context. This results in a network that has learned multiple, overlapping paths through the input data, each starting at a different point. The lower the global decay and the more repeatability in the data, the longer each of these paths will end up being.
The goal of this function is to find out which starting point in the past leads to the current input with as much context as possible. This gives us the best chance of predicting accurately going forward. Consider the following example, where you have learned the following sub-sequences which have the given frequencies:

    ?-Q-C-D-E      10X   seq 0
    ?-B-C-D-F       1X   seq 1
    ?-B-C-H-I       2X   seq 2
    ?-B-C-D-F       3X   seq 3
    ?-Z-A-B-C-D-J   2X   seq 4
    ?-Z-A-B-C-H-I   1X   seq 5
    ?-Y-A-B-C-D-F   3X   seq 6

    ...-W-X-Z-A-B-C-D  <= input history
                    ^  current time step

Suppose, in the current time step, the input pattern is D and you have not predicted D, so you need to backtrack. Suppose we can backtrack up to 6 steps in the past; which path should we choose? From the table above, we can see that the correct answer is to assume we are in seq 4. How do we implement the backtrack to give us this right answer? The current implementation takes the following approach:
1. Start from the farthest point in the past.
2. For each starting point S, calculate the confidence of the current input, conf(startingPoint=S), assuming we followed that sequence. Note that we must have learned at least one sequence that starts at point S.
3. If conf(startingPoint=S) is significantly different from conf(startingPoint=S-1), then choose S-1 as the starting point.
The assumption here is that starting point S-1 is the starting point of a learned sub-sequence that includes the current input in its path and that started the longest ago. It thus has the most context and will be the best predictor going forward.
From the statistics in the above table, we can compute what the confidences will be for each possible starting point:

    startingPoint   confidence of D
    B (t-2)         4/6 = 0.667   (seq 1,3)/(seq 1,2,3)
    Z (t-4)         2/3 = 0.667   (seq 4)/(seq 4,5)

First of all, we do not compute any confidences at starting points t-1, t-3, t-5, t-6 because there are no learned sequences that start at those points.
Notice here that Z is the starting point of the longest sub-sequence leading up to the current input. Even though starting at t-2 and starting at t-4 give the same confidence value, we choose the sequence starting at t-4 because it gives the most context, and it mirrors the way that learning extends sequences. 
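The confidences in that table can be checked directly; this worked snippet just redoes the arithmetic from the example frequencies above.

    # Starting at B (t-2): sequences starting with B are 1, 2, 3 with
    # frequencies 1, 2, 3; of these, seqs 1 and 3 contain D.
    confB = (1.0 + 3.0) / (1.0 + 2.0 + 3.0)   # 4/6 = 0.667
    # Starting at Z (t-4): sequences starting with Z are 4, 5 with
    # frequencies 2, 1; of these, only seq 4 contains D.
    confZ = 2.0 / (2.0 + 1.0)                 # 2/3 = 0.667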
def inferPhase1(self, activeColumns, useStartCells)
Update the inference active state from the last set of predictions and the current bottom-up input.
This looks at:
- @ref infPredictedState['t-1']
This modifies:
- @ref infActiveState['t']
activeColumns  list of active bottom-ups 
useStartCells  If True, ignore previous predictions and simply turn on the start cells in the active columns 
def inferPhase2(self)
Phase 2 for the inference state.
This computes the predicted state, then checks to ensure that the predicted state is not over-saturated, i.e. that it does not look too much like a burst. Over-saturation indicates that there were so many separate paths learned from the current input columns to the predicted input columns that bursting on the current input columns would most likely generate mix-and-match errors on cells in the predicted columns. If we detect this situation, we instead turn on only the start cells in the current active columns and regenerate the predicted state from those.
This looks at:
- @ref infActiveState['t']
This modifies:
- @ref infPredictedState['t']
- @ref colConfidence['t']
- @ref cellConfidence['t']
def isSegmentActive(self, seg, activeState)
A segment is active if it has >= activationThreshold connected synapses that are active due to activeState.
Notes: we studied various cutoffs, none of which seemed to be worthwhile; a list comprehension didn't help either.
seg  TODO: document 
activeState  TODO: document 
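A sketch of this test; the segment representation below is a hypothetical stand-in for the internal one.

    # 'segment' is a hypothetical list of (srcCol, srcCellIdx, permanence);
    # 'activeState' is a boolean array indexed by [col][cellIdx].
    def is_segment_active_sketch(segment, activeState,
                                 connectedPerm=0.50, activationThreshold=12):
        nActive = sum(1 for (c, i, perm) in segment
                      if perm >= connectedPerm and activeState[c][i])
        return nActive >= activationThreshold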
def learn(self, bottomUpInput, computeInfOutput=None)
def learnBacktrack(self)
This "backtracks" our learning state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells.
This will adjust @ref lrnActiveState['t'] if it does manage to lock on to a sequence that started earlier.
This method gets called from updateLearningState when we detect either of the following two conditions:
1. Our PAM counter (@ref pamCounter) expired
2. We reached the maximum allowed learned sequence length (@ref maxSeqLength)
Either of these two conditions indicates that we want to start over on start cells.
Rather than start over on start cells on the current input, we can accelerate learning by backtracking a few steps and seeing if perhaps a sequence we already at least partially know has already started.
This updates/modifies:
- @ref lrnActiveState['t']
This trashes:
- @ref lrnActiveState['t-1']
- @ref lrnPredictedState['t']
- @ref lrnPredictedState['t-1']
def learnPhase1(self, activeColumns, readOnly=False)
Compute the learning active state given the predicted state and the bottom-up input.
activeColumns  list of active bottom-ups 
readOnly  True if being called from backtracking logic. This tells us not to increment any segment duty cycles or queue up any updates. 
This looks at:
- @ref lrnPredictedState['t-1']
This modifies:
- @ref lrnActiveState['t']
def learnPhase2(self, readOnly=False)
Compute the predicted segments given the current set of active cells.
readOnly  True if being called from backtracking logic. This tells us not to increment any segment duty cycles or queue up any updates. 
This computes lrnPredictedState['t'] and queues up any segments that became active (and the list of active synapses for each segment) into the segmentUpdates queue.
This looks at:
- @ref lrnActiveState['t']
This modifies:
- @ref lrnPredictedState['t']
- @ref segmentUpdates
def predict(self, nSteps)
This function gives the future predictions for <nSteps> timesteps starting from the current TP state.
The TP is returned to its original state at the end before returning.
nSteps  The number of future time steps to be predicted 
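A usage sketch; the return value is assumed to hold the per-step column predictions.

    # Look three steps ahead; per the description above, the TP's internal
    # state is restored afterwards, so this is safe to call mid-stream.
    futurePredictions = tp.predict(nSteps=3)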
def printActiveIndices(self, state, andValues=False)
Print the list of [column, cellIdx] indices for each of the active cells in state.
state  TODO: document 
andValues  TODO: document 
def printCell(self, c, i, onlyActiveSegments=False)
def printCells(self, predictedOnly=False)
def printColConfidence(self, aState, maxCols=20)
Print up to maxCols values from a flat floating point array.
aState  TODO: document 
maxCols  TODO: document 
def printComputeEnd(self, output, learn=False)
Called at the end of inference to print out various diagnostic information based on the current verbosity level.
output  TODO: document 
learn  TODO: document 
def printConfidence(self, aState, maxCols=20)
Print a floating point array that is the same shape as activeState.
aState  TODO: document 
maxCols  TODO: document 
def printInput(self, x)
def printOutput(self, y)
def printSegmentUpdates(self)
def printState(self, aState)
Print an integer array that is the same shape as activeState.
aState  TODO: document 
def printStates(self, printPrevious=True, printLearnState=True)
def processSegmentUpdates(self, activeColumns)
Go through the list of accumulated segment updates and process them as follows:
- if the segment update is too old, remove the update
- else if the cell received bottom-up input, update its permanences
- else if it's still being predicted, leave it in the queue
- else remove it
activeColumns  TODO: document 
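The decision logic sketched in illustrative Python; every name below is a hypothetical stand-in for the TP's internal bookkeeping, not the actual implementation.

    def process_updates_sketch(updates, lrnIterationIdx, validDuration,
                               gotBottomUp, stillPredicted,
                               applyUpdate, removeUpdate):
        for (createDate, update) in list(updates):
            if lrnIterationIdx - createDate > validDuration:
                removeUpdate(update)      # expired: too old
            elif gotBottomUp:
                applyUpdate(update)       # cell became active: apply changes
                removeUpdate(update)
            elif stillPredicted:
                pass                      # leave it in the queue
            else:
                removeUpdate(update)      # no longer predicted: drop it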
def removeSegmentUpdate(self, updateInfo)
Remove a segment update (called when seg update expires or is processed)
updateInfo  tuple (creationDate, SegmentUpdate) 
def reset(self)
Reset the state of all cells.
This is normally used between sequences while training. All internal states are reset to 0.
def resetStats(self)
Reset the learning and inference stats.
This will usually be called by user code at the start of each inference run (for a particular data set).
def topDownCompute(self, topDownIn=None)
def trimSegments(self, minPermanence=None, minNumSyns=None)
This method deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining.
minPermanence  Any syn whose permanence is 0 or < minPermanence will be deleted. If None is passed in, then self.connectedPerm is used. 
minNumSyns  Any segment with less than minNumSyns synapses remaining in it will be deleted. If None is passed in, then self.activationThreshold is used. 
def trimSegmentsInCell(self, colIdx, cellIdx, segList, minPermanence, minNumSyns)
This method goes through a list of segments for a given cell and deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining.
colIdx  Column index 
cellIdx  Cell index within the column 
segList  List of segment references 
minPermanence  Any syn whose permanence is 0 or < minPermanence will be deleted. 
minNumSyns  Any segment with less than minNumSyns synapses remaining in it will be deleted. 
def updateInferenceState(self, activeColumns)
Update the inference state.
Called from compute() on every iteration.
activeColumns  The list of active column indices. 
def updateLearningState(self, activeColumns)
Update the learning state.
Called from compute() on every iteration.
activeColumns  List of active column indices 
def updateSegmentDutyCycles(self)
This gets called on every compute.
It determines if it's time to update the segment duty cycles. Since the duty cycle calculation is a moving average based on a tiered alpha, it is important that we update all segments on each tier boundary.
Member Data Documentation
activationThreshold 
activeColumns 
avgLearnedSeqLength 
Keeps track of the moving average of all learned sequence length.
cellConfidence 
cellsPerColumn 
colConfidence 
connectedPerm 
currentOutput 
globalDecay 
infActiveState 
infPredictedState 
initialPerm 
iterationIdx 
learnedSeqLength 
Keeps track of the length of the sequence currently being learned.
lrnActiveState 
lrnIterationIdx 
lrnPredictedState 
maxAge 
maxInfBacktrack 
maxLrnBacktrack 
maxSegmentsPerCell 
maxSeqLength 
maxSynapsesPerSegment 
minThreshold 
newSynapseCount 
numberOfCols 
outputType 
pamCounter 
pamCounter gets reset to pamLength whenever we detect that the learning state is making good predictions (at least half the columns predicted).
Whenever we do not make a good prediction, we decrement pamCounter. When pamCounter reaches 0, we start the learn state over again at start cells.
pamLength 
permanenceDec 
permanenceInc 
permanenceMax 
resetCalled 
This gets set when we receive a reset and cleared on the first compute following a reset.
seed 
segmentUpdates 
We store the lists of segment updates, per cell, so that they can be applied later during learning, when the cell gets bottom-up activation.
We store one list per cell. The lists are identified with a hash key which is a tuple (column index, cell index).
segUpdateValidDuration 
verbosity 
version 