NuPIC  0.2.7.dev0
Numenta Platform for Intelligent Computing
TP Class Reference

Class implementing the temporal pooler algorithm as described in the published Cortical Learning Algorithm documentation. More...

Inheritance diagram for TP:
TP10X2

Classes

class  SegmentUpdate
 Class used to carry instructions for updating a segment. More...
 

Public Member Functions

def __init__
 Construct the TP. More...
 
def saveToFile
 Implemented in TP10X2.TP10X2.saveToFile.
 
def loadFromFile
 Implemented in TP10X2.TP10X2.loadFromFile.
 
def reset
 Reset the state of all cells. More...
 
def resetStats
 Reset the learning and inference stats. More...
 
def getStats
 Return the current learning and inference stats. More...
 
def printState
 Print an integer array that is the same shape as activeState. More...
 
def printConfidence
 Print a floating point array that is the same shape as activeState. More...
 
def printColConfidence
 Print up to maxCols number from a flat floating point array. More...
 
def printStates
 
def printOutput
 
def printInput
 
def printParameters
 Print the parameter settings for the TP.
 
def printActiveIndices
 Print the list of [column, cellIdx] indices for each of the active cells in state. More...
 
def printComputeEnd
 Called at the end of inference to print out various diagnostic information based on the current verbosity level. More...
 
def printSegmentUpdates
 
def printCell
 
def printCells
 
def getNumSegmentsInCell
 
def getNumSynapses
 
def getNumStrongSynapses
 
def getNumStrongSynapsesPerTimeSlot
 
def getNumSynapsesPerSegmentMax
 
def getNumSynapsesPerSegmentAvg
 
def getNumSegments
 
def getNumCells
 
def getSegmentOnCell
 
def addToSegmentUpdates
 Store a dated potential segment update. More...
 
def removeSegmentUpdate
 Remove a segment update (called when seg update expires or is processed) More...
 
def computeOutput
 Computes output for both learning and inference. More...
 
def getActiveState
 Return the current active state. More...
 
def getPredictedState
 Return a numpy array, predictedCells, representing the current predicted state. More...
 
def predict
 This function gives the future predictions for <nSteps> timesteps starting from the current TP state. More...
 
def getAvgLearnedSeqLength
 
def inferBacktrack
 This "backtracks" our inference state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells. More...
 
def inferPhase1
 Update the inference active state from the last set of predictions and the current bottom-up. More...
 
def inferPhase2
 Phase 2 for the inference state. More...
 
def updateInferenceState
 Update the inference state. More...
 
def learnBacktrack
 This "backtracks" our learning state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells. More...
 
def learnPhase1
 Compute the learning active state given the predicted state and the bottom-up input. More...
 
def learnPhase2
 Compute the predicted segments given the current set of active cells. More...
 
def updateLearningState
 Update the learning state. More...
 
def compute
 Handle one compute, possibly learning. More...
 
def infer
 
def learn
 
def updateSegmentDutyCycles
 This gets called on every compute. More...
 
def columnConfidences
 Compute the column confidences given the cell confidences. More...
 
def topDownCompute
 Top-down compute - generate expected input given output of the TP. More...
 
def trimSegmentsInCell
 This method goes through a list of segments for a given cell and deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining. More...
 
def trimSegments
 This method deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining. More...
 
def cleanUpdatesList
 Removes any update that would be for the given col, cellIdx, segIdx. More...
 
def finishLearning
 Called when learning has been completed. More...
 
def checkPrediction2
 This function will replace checkPrediction. More...
 
def isSegmentActive
 A segment is active if it has >= activationThreshold connected synapses that are active due to activeState. More...
 
def getSegmentActivityLevel
 This routine computes the activity level of a segment given activeState. More...
 
def getBestMatchingCell
 Find weakly activated cell in column with at least minThreshold active synapses. More...
 
def getBestMatchingSegment
 For the given cell, find the segment with the largest number of active synapses. More...
 
def getCellForNewSegment
 Return the index of a cell in this column which is a good candidate for adding a new segment. More...
 
def getSegmentActiveSynapses
 Return a segmentUpdate data structure containing a list of proposed changes to segment s. More...
 
def chooseCellsToLearnFrom
 Choose n random cells to learn from. More...
 
def processSegmentUpdates
 Go through the list of accumulated segment updates and process them as follows: More...
 
def adaptSegment
 This function applies segment update information to a segment in a cell. More...
 
def getSegmentInfo
 Returns information about the distribution of segments, synapses and permanence values in the current TP. More...
 

Public Attributes

 version
 
 numberOfCols
 
 cellsPerColumn
 
 initialPerm
 
 connectedPerm
 
 minThreshold
 
 newSynapseCount
 
 permanenceInc
 
 permanenceDec
 
 permanenceMax
 
 globalDecay
 
 activationThreshold
 
 doPooling
 Allows pooling to be turned off.
 
 segUpdateValidDuration
 
 burnIn
 Used for evaluating the prediction score.
 
 collectStats
 If true, collect training/inference stats.
 
 seed
 
 verbosity
 
 pamLength
 
 maxAge
 
 maxInfBacktrack
 
 maxLrnBacktrack
 
 maxSeqLength
 
 maxSegmentsPerCell
 
 maxSynapsesPerSegment
 
 outputType
 
 activeColumns
 
 cells
 Cells are indexed by column and index in the column. Every self.cells[column][index] contains a list of segments. Each segment is a structure of class Segment.
 
 lrnIterationIdx
 
 iterationIdx
 
 segID
 Unique segment ID, so we can put segments in hashes.
 
 currentOutput
 
 pamCounter
 pamCounter gets reset to pamLength whenever we detect that the learning state is making good predictions (at least half the columns predicted). More...
 
 collectSequenceStats
 If True, the TP will compute a signature for each sequence.
 
 resetCalled
 This gets set when we receive a reset and cleared on the first compute following a reset. More...
 
 avgInputDensity
 We keep track of the average input density here.
 
 learnedSeqLength
 Keeps track of the length of the sequence currently being learned. More...
 
 avgLearnedSeqLength
 Keeps track of the moving average of all learned sequence lengths. More...
 
 segmentUpdates
 We store the lists of segment updates, per cell, so that they can be applied later during learning, when the cell gets bottom-up activation. More...
 
 cellConfidence
 
 colConfidence
 
 lrnActiveState
 
 infActiveState
 
 lrnPredictedState
 
 infPredictedState
 

Detailed Description

Class implementing the temporal pooler algorithm as described in the published Cortical Learning Algorithm documentation.

The implementation here attempts to closely match the pseudocode in the documentation. This implementation does contain several additional bells and whistles such as a column confidence measure.

Todo:

Document other constructor parameters.

Have some higher level flags for fast learning, HiLo, Pooling, etc.

Constructor & Destructor Documentation

def __init__ (   self,
  numberOfCols = 500,
  cellsPerColumn = 10,
  initialPerm = 0.11,
  connectedPerm = 0.50,
  minThreshold = 8,
  newSynapseCount = 15,
  permanenceInc = 0.10,
  permanenceDec = 0.10,
  permanenceMax = 1.0,
  globalDecay = 0.10,
  activationThreshold = 12,
  doPooling = False,
  segUpdateValidDuration = 5,
  burnIn = 2,
  collectStats = False,
  seed = 42,
  verbosity = VERBOSITY,
  checkSynapseConsistency = False,
  pamLength = 1,
  maxInfBacktrack = 10,
  maxLrnBacktrack = 5,
  maxAge = 100000,
  maxSeqLength = 32,
  maxSegmentsPerCell = -1,
  maxSynapsesPerSegment = -1,
  outputType = 'normal' 
)

Construct the TP.

Parameters
pamLength: Number of time steps to remain in "Pay Attention Mode" after we detect we've reached the end of a learned sequence. Setting this to 0 disables PAM mode. When we are in PAM mode, we do not burst unpredicted columns during learning, which in turn prevents us from falling into a previously learned sequence for a while (until we run through another 'pamLength' steps). The advantage of PAM mode is that it requires fewer presentations to learn a set of sequences which share elements. The disadvantage of PAM mode is that if a learned sequence is immediately followed by a set of elements that should be learned as a 2nd sequence, the first pamLength elements of that sequence will not be learned as part of that 2nd sequence.
maxAge: Controls global decay. Global decay will only decay segments that have not been activated for maxAge iterations, and will only do the global decay loop every maxAge iterations. Setting maxAge=1 reverts to the behavior where global decay is applied every iteration to every segment. Using maxAge > 1 can significantly speed up the TP when global decay is used.
maxSeqLength: If not 0, we will never learn more than maxSeqLength inputs in a row without starting over at start cells. This sets an upper bound on the length of learned sequences and thus is another means (besides maxAge and globalDecay) by which to limit how much the TP tries to learn.
maxSegmentsPerCell: The maximum number of segments allowed on a cell. This is used to turn on "fixed size CLA" mode. When in effect, globalDecay is not applicable and must be set to 0 and maxAge must be set to 0. When this is used (> 0), maxSynapsesPerSegment must also be > 0.
maxSynapsesPerSegment: The maximum number of synapses allowed in a segment. This is used to turn on "fixed size CLA" mode. When in effect, globalDecay is not applicable and must be set to 0 and maxAge must be set to 0. When this is used (> 0), maxSegmentsPerCell must also be > 0.
outputType: Can be one of the following: 'normal', 'activeState', 'activeState1CellPerCol'. 'normal': output the OR of the active and predicted state. 'activeState': output only the active state. 'activeState1CellPerCol': output only the active state, and at most 1 cell/column. If more than 1 cell is active in a column, the one with the highest confidence is sent up. Default is 'normal'.
doPooling: If True, pooling is enabled. Default is False.
burnIn: Used for evaluating the prediction score. Default is 2.
collectStats: If True, collect training / inference stats. Default is False.
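For example, here is a minimal usage sketch (not part of the generated documentation; it assumes NuPIC is installed and that TP is importable from nupic.research.TP, so adjust the import for your installation):

    import numpy
    from nupic.research.TP import TP

    # Construct a small TP; parameters not listed keep the defaults shown above.
    tp = TP(numberOfCols=50, cellsPerColumn=4,
            initialPerm=0.5, connectedPerm=0.5,
            minThreshold=10, newSynapseCount=10,
            permanenceInc=0.1, permanenceDec=0.0,
            activationThreshold=8, globalDecay=0,
            burnIn=1, pamLength=10)

    # Learn a simple repeating sequence of sparse binary vectors.
    for _ in range(10):
      for i in range(5):
        x = numpy.zeros(50, dtype='uint32')
        x[i * 10:(i + 1) * 10] = 1              # one block of 10 active columns
        tp.compute(x, enableLearn=True, computeInfOutput=False)
      tp.reset()                                # sequence boundary between presentations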

Member Function Documentation

def adaptSegment (   self,
  segUpdate 
)

This function applies segment update information to a segment in a cell.

 Synapses on the active list get their permanence counts incremented by
 permanenceInc. All other synapses get their permanence counts decremented
 by permanenceDec.

 We also increment the positiveActivations count of the segment.
Parameters
segUpdate: SegmentUpdate instance
Returns
True if some synapses were decremented to 0 and the segment is a candidate for trimming
def addToSegmentUpdates (   self,
  c,
  i,
  segUpdate 
)

Store a dated potential segment update.

The "date" (iteration index) is used later to determine whether the update is too old and should be forgotten. This is controlled by parameter segUpdateValidDuration.

Parameters
c: TODO: document
i: TODO: document
segUpdate: TODO: document
def checkPrediction2 (   self,
  patternNZs,
  output = None,
  colConfidence = None,
  details = False 
)

This function will replace checkPrediction.

 This function produces goodness-of-match scores for a set of input patterns,
 by checking for their presence in the current and predicted output of the
 TP. Returns a global count of the number of extra and missing bits, the
 confidence scores for each input pattern, and (if requested) the
 bits in each input pattern that were not present in the TP's prediction.
Parameters
patternNZs: a list of input patterns that we want to check for. Each element is a list of the non-zeros in that pattern.
output: The output of the TP. If not specified, then use the TP's current output. This can be specified if you are trying to check the prediction metric for an output from the past.
colConfidence: The column confidences. If not specified, then use the TP's current self.colConfidence. This can be specified if you are trying to check the prediction metrics for an output from the past.
details: if True, also include details of missing bits per pattern.
Returns
list containing:
           [
             totalExtras,
             totalMissing,
             [conf_1, conf_2, ...],
             [missing1, missing2, ...]
           ]
Return values
totalExtras: a global count of the number of 'extras', i.e. bits that are on in the current output but not in the OR of all the passed-in patterns
totalMissing: a global count of all the missing bits, i.e. the bits that are on in the OR of the patterns, but not in the current output
conf_i: the confidence score for the i'th pattern in patternNZs. This consists of 3 items as a tuple: (predictionScore, posPredictionScore, negPredictionScore)
missing_i: the bits in the i'th pattern that were missing in the output. This list is only returned if details is True.
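A sketch of unpacking the result with details=False (tp is an assumed TP instance and inputNZ an assumed list of the non-zero indices of the expected input):

    totalExtras, totalMissing, confidences = tp.checkPrediction2([inputNZ])
    predictionScore, posScore, negScore = confidences[0]   # scores for the first pattern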
def chooseCellsToLearnFrom (   self,
  c,
  i,
  s,
  n,
  activeState 
)

Choose n random cells to learn from.

 This function is called several times while learning with timeStep = t-1, so
 we cache the set of candidates for that case. It's also called once with
 timeStep = t, and we cache that set of candidates.
Returns
tuple (column index, cell index).
def cleanUpdatesList (   self,
  col,
  cellIdx,
  seg 
)

Removes any update that would be for the given col, cellIdx, segIdx.

NOTE: logically, we need to do this when we delete segments, so that if an update refers to a segment that was just deleted, we also remove that update from the update list. However, I haven't seen it trigger in any of the unit tests yet, so it might mean that it's not needed and that situation doesn't occur, by construction.

def columnConfidences (   self,
  cellConfidences = None 
)

Compute the column confidences given the cell confidences.

If None is passed in for cellConfidences, it uses the stored cell confidences from the last compute.

Parameters
cellConfidences: Cell confidences to use, or None to use the current cell confidences.
Returns
Column confidence scores
def compute (   self,
  bottomUpInput,
  enableLearn,
  computeInfOutput = None 
)

Handle one compute, possibly learning.

Parameters
bottomUpInput: The bottom-up input, typically from a spatial pooler.
enableLearn: If True, perform learning.
computeInfOutput: If None, the default behavior is to disable the inference output when enableLearn is on. If True, compute the inference output. If False, do not compute the inference output.
Returns
TODO: document
 It is an error to have both enableLearn and computeInfOutput set to False

 By default, we don't compute the inference output when learning because it
 slows things down, but you can override this by passing in True for
 computeInfOutput
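For example (a sketch; tp is an assumed TP instance and x a binary numpy vector of length numberOfCols):

    tp.compute(x, enableLearn=True, computeInfOutput=False)  # learn only
    tp.compute(x, enableLearn=False)                         # infer only (inference output is computed)
    tp.compute(x, enableLearn=True, computeInfOutput=True)   # learn and infer together
    # Passing enableLearn=False and computeInfOutput=False is an error.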
def computeOutput (   self)

Computes output for both learning and inference.

In both cases, the output is the boolean OR of activeState and predictedState at t. Stores currentOutput for checkPrediction.

def finishLearning (   self)

Called when learning has been completed.

This method just calls trimSegments(). (finishLearning is here for backward compatibility)

def getActiveState (   self)

Return the current active state.

This is called by the node to obtain the sequence output of the TP.

def getAvgLearnedSeqLength (   self)
Returns
Moving average of learned sequence length
def getBestMatchingCell (   self,
  c,
  activeState,
  minThreshold 
)

Find weakly activated cell in column with at least minThreshold active synapses.

Parameters
c: which column to look at
activeState: the active cells
minThreshold: minimum number of synapses required
Returns
tuple (cellIdx, segment, numActiveSynapses)
def getBestMatchingSegment (   self,
  c,
  i,
  activeState 
)

For the given cell, find the segment with the largest number of active synapses.

This routine is aggressive in finding the best match. The permanence value of synapses is allowed to be below connectedPerm. The number of active synapses is allowed to be below activationThreshold, but must be above minThreshold. The routine returns the segment index. If no segments are found, then an index of -1 is returned.

Parameters
c: TODO: document
i: TODO: document
activeState: TODO: document
def getCellForNewSegment (   self,
  colIdx 
)

Return the index of a cell in this column which is a good candidate for adding a new segment.

 When we have fixed size resources in effect, we ensure that we pick a
 cell which does not already have the max number of allowed segments. If
 none exists, we choose the least used segment in the column to re-allocate.
Parameters
colIdx: which column to look at
Returns
cell index
def getNumCells (   self)
Returns
the total number of cells
def getNumSegments (   self)
Returns
the total number of segments
def getNumSegmentsInCell (   self,
  c,
  i 
)
Parameters
c: column index
i: cell index within column
Returns
the total number of segments in cell (c, i)
def getNumStrongSynapses (   self)
Todo:
implement this, it is used by the node's getParameter() call
def getNumStrongSynapsesPerTimeSlot (   self)
Todo:
implement this, it is used by the node's getParameter() call
def getNumSynapses (   self)
Returns
the total number of synapses
def getNumSynapsesPerSegmentAvg (   self)
Returns
the average number of synapses per segment
def getNumSynapsesPerSegmentMax (   self)
Todo:
implement this, it is used by the node's getParameter() call, it should return the max # of synapses seen in any one segment.
def getPredictedState (   self)

Return a numpy array, predictedCells, representing the current predicted state.

 predictedCells[c][i] represents the state of the i'th cell in the c'th
 column.
Returns
numpy array of predicted cells, representing the current predicted state. predictedCells[c][i] represents the state of the i'th cell in the c'th column.
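A sketch of reducing the per-cell prediction to per-column predictions (tp is an assumed TP instance):

    predictedCells = tp.getPredictedState()       # shape (numberOfCols, cellsPerColumn)
    predictedCols = predictedCells.max(axis=1)    # non-zero where any cell in the column is predicted
    predictedColIndices = predictedCols.nonzero()[0]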
def getSegmentActiveSynapses (   self,
  c,
  i,
  s,
  activeState,
  newSynapses = False 
)

Return a segmentUpdate data structure containing a list of proposed changes to segment s.

Let activeSynapses be the list of active synapses where the originating cells have their activeState output = 1 at time step t. (This list is empty if s is None since the segment doesn't exist.) newSynapses is an optional argument that defaults to False. If newSynapses is True, then newSynapseCount - len(activeSynapses) synapses are added to activeSynapses. These synapses are randomly chosen from the set of cells that have learnState = 1 at timeStep.

Parameters
c: TODO: document
i: TODO: document
s: TODO: document
activeState: TODO: document
newSynapses: TODO: document
def getSegmentActivityLevel (   self,
  seg,
  activeState,
  connectedSynapsesOnly = False 
)

This routine computes the activity level of a segment given activeState.

 It can tally up only connected synapses (permanence >= connectedPerm), or
 all the synapses of the segment, at either t or t-1.
Parameters
seg: TODO: document
activeState: TODO: document
connectedSynapsesOnly: TODO: document
def getSegmentInfo (   self,
  collectActiveData = False 
)

Returns information about the distribution of segments, synapses and permanence values in the current TP.

If requested, also returns information regarding the number of currently active segments and synapses.

Returns
tuple described below:
     (
       nSegments,
       nSynapses,
       nActiveSegs,
       nActiveSynapses,
       distSegSizes,
       distNSegsPerCell,
       distPermValues,
       distAges
     )
Return values
nSegments: total number of segments
nSynapses: total number of synapses
nActiveSegs: total no. of active segments (0 if collectActiveData is False)
nActiveSynapses: total no. of active synapses (0 if collectActiveData is False)
distSegSizes: a dict where d[n] = number of segments with n synapses
distNSegsPerCell: a dict where d[n] = number of cells with n segments
distPermValues: a dict where d[p] = number of synapses with perm = p/10
distAges: a list of tuples (ageRange, numSegments)
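A sketch of unpacking the returned tuple (tp is an assumed TP instance):

    (nSegments, nSynapses, nActiveSegs, nActiveSynapses,
     distSegSizes, distNSegsPerCell, distPermValues, distAges) = tp.getSegmentInfo()
    avgSynsPerSeg = float(nSynapses) / max(1, nSegments)   # guard against an empty TP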
def getSegmentOnCell (   self,
  c,
  i,
  segIdx 
)
Parameters
c: column index
i: cell index in column
segIdx: TODO: document
Returns
list representing the segment on cell (c, i) with index segIdx.
 Returns the segment as the following list:

     [  [segmentID, sequenceSegmentFlag, positiveActivations,
         totalActivations, lastActiveIteration,
         lastPosDutyCycle, lastPosDutyCycleIteration],
        [col1, idx1, perm1],
        [col2, idx2, perm2], ...
     ]
Return values
segmentId: TODO: document
sequenceSegmentFlag: TODO: document
positiveActivations: TODO: document
totalActivations: TODO: document
lastActiveIteration: TODO: document
lastPosDutyCycle: TODO: document
lastPosDutyCycleIteration: TODO: document
[col1, idx1, perm1]: TODO: document
def getStats (   self)

Return the current learning and inference stats.

This returns a dict containing all the learning and inference stats we have collected since the last resetStats(). If collectStats is False, then None is returned.

Returns
dict
 The following keys are returned in the dict when collectStats is True:
Return values
nPredictions: the number of predictions. This is the total number of inferences excluding burn-in and the last inference.
curPredictionScore: the score for predicting the current input (predicted during the previous inference)
curMissing: the number of bits in the current input that were not predicted to be on.
curExtra: the number of bits in the predicted output that are not in the next input
predictionScoreTotal: the sum of every prediction score to date
predictionScoreAvg: predictionScoreTotal / nPredictions
pctMissingTotal: the total number of bits that were missed over all predictions
pctMissingAvg: pctMissingTotal / nPredictions
prevSequenceSignature: signature for the sequence immediately preceding the last reset. None if collectSequenceStats is False
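A sketch of collecting and reading the stats (assumes the TP was constructed with collectStats=True):

    tp.resetStats()
    # ... run tp.compute(...) over an inference data set ...
    stats = tp.getStats()                       # None unless collectStats is True
    if stats is not None:
        print("avg prediction score: %s" % stats['predictionScoreAvg'])
        print("avg pct missing:      %s" % stats['pctMissingAvg'])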
def infer (   self,
  bottomUpInput 
)
Todo:
document
def inferBacktrack (   self,
  activeColumns 
)

This "backtracks" our inference state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells.

Parameters
activeColumns: The list of active column indices
 This will adjust infActiveState['t'] if it does manage to lock on to a
 sequence that started earlier. It will also compute infPredictedState['t']
 based on the possibly updated infActiveState['t'], so there is no need to
 call inferPhase2() after calling inferBacktrack().

 This looks at:
     - infActiveState['t']

 This updates/modifies:
     - infActiveState['t']
     - infPredictedState['t']
     - colConfidence['t']
     - cellConfidence['t']

 How it works:
 -------------------------------------------------------------------
 This method gets called from updateInferenceState when we detect either of
 the following two conditions:

 1. The current bottom-up input had too many unexpected columns.
 2. We fail to generate a sufficient number of predicted columns for the
    next time step.

 Either of these two conditions indicate that we have fallen out of a
 learned sequence.

 Rather than simply "giving up" and bursting on the unexpected input
 columns, a better approach is to see if perhaps we are in a sequence that
 started a few steps ago. The real-world analogy is that you are driving
 along and suddenly hit a dead end; you will typically go back a few turns
 and pick up again from a familiar intersection.

 This back-tracking goes hand in hand with our learning methodology, which
 always tries to learn again from start cells after it loses context. This
 results in a network that has learned multiple, overlapping paths through
 the input data, each starting at different points. The lower the global
 decay and the more repeatability in the data, the longer each of these
 paths will end up being.

 The goal of this function is to find out which starting point in the past
 leads to the current input with as much context as possible. This gives us
 the best chance of predicting accurately going forward. Consider the
 following example, where you have learned the following sub-sequences which
 have the given frequencies:

                   ? - Q - C - D - E      10X      seq 0
                   ? - B - C - D - F      1X       seq 1
                   ? - B - C - H - I      2X       seq 2
                   ? - B - C - D - F      3X       seq 3
           ? - Z - A - B - C - D - J      2X       seq 4
           ? - Z - A - B - C - H - I      1X       seq 5
           ? - Y - A - B - C - D - F      3X       seq 6

         ----------------------------------------
       W - X - Z - A - B - C - D          <= input history
                               ^
                               current time step

 Suppose, in the current time step, the input pattern is D and you have not
 predicted D, so you need to backtrack. Suppose we can backtrack up to 6
 steps in the past, which path should we choose? From the table above, we can
 see that the correct answer is to assume we are in seq 4. How do we
 implement the backtrack to give us this right answer? The current
 implementation takes the following approach:

 1. Start from the farthest point in the past.
 2. For each starting point S, calculate the confidence of the current
    input, conf(startingPoint=S), assuming we followed that sequence.
    Note that we must have learned at least one sequence that starts at
    point S.
 3. If conf(startingPoint=S) is significantly different from
    conf(startingPoint=S-1), then choose S-1 as the starting point.

 The assumption here is that starting point S-1 is the starting point of
 a learned sub-sequence that includes the current input in its path and
 that started the longest ago. It thus has the most context and will be
 the best predictor going forward.

 From the statistics in the above table, we can compute what the confidences
 will be for each possible starting point:

     startingPoint           confidence of D
     -----------------------------------------
     B (t-2)               4/6  = 0.667   (seq 1,3)/(seq 1,2,3)
     Z (t-4)               2/3  = 0.667   (seq 4)/(seq 4,5)

 First of all, we do not compute any confidences at starting points t-1, t-3,
 t-5, t-6 because there are no learned sequences that start at those points.

 Notice here that Z is the starting point of the longest sub-sequence leading
 up to the current input. Even though starting at t-2 and starting at t-4
 give the same confidence value, we choose the sequence starting at t-4
 because it gives the most context, and it mirrors the way that learning
 extends sequences.
def inferPhase1 (   self,
  activeColumns,
  useStartCells 
)

Update the inference active state from the last set of predictions and the current bottom-up.

 This looks at:
     - infPredictedState['t-1']
 This modifies:
     - infActiveState['t']
Parameters
activeColumns: list of active bottom-ups
useStartCells: If true, ignore previous predictions and simply turn on the start cells in the active columns
Returns
True if the current input was sufficiently predicted, OR if we started over on startCells. False indicates that the current input was NOT predicted, and we are now bursting on most columns.
def inferPhase2 (   self)

Phase 2 for the inference state.

This computes the predicted state, then checks to ensure that the predicted state is not over-saturated, i.e. that it does not look too much like a burst. Over-saturation indicates that so many separate paths have been learned from the current input columns to the predicted columns that bursting on the current input columns would most likely generate mix-and-match errors on cells in the predicted columns. If we detect this situation, we instead turn on only the start cells in the current active columns and re-generate the predicted state from those.

Returns
True if we have a decent guess as to the next input. Returning False from here indicates to the caller that we have reached the end of a learned sequence.

This looks at:
    - infActiveState['t']

This modifies:
    - infPredictedState['t']
    - cellConfidence['t']
    - colConfidence['t']

def isSegmentActive (   self,
  seg,
  activeState 
)

A segment is active if it has >= activationThreshold connected synapses that are active due to activeState.

 Notes: studied various cutoffs, none of which seem to be worthwhile
        list comprehension didn't help either
Parameters
seg: TODO: document
activeState: TODO: document
def learn (   self,
  bottomUpInput,
  computeInfOutput = None 
)
Todo:
document
def learnBacktrack (   self)

This "backtracks" our learning state, trying to see if we can lock onto the current set of inputs by assuming the sequence started up to N steps ago on start cells.

 This will adjust lrnActiveState['t'] if it does manage to lock on to a
 sequence that started earlier.
Returns
>0 if we managed to lock on to a sequence that started earlier. The value returned is how many steps in the past we locked on. If 0 is returned, the caller needs to change active state to start on start cells.

How it works:

This method gets called from updateLearningState when we detect either of the following two conditions:

  1. Our PAM counter (pamCounter) expired
  2. We reached the max allowed learned sequence length

Either of these two conditions indicate that we want to start over on start cells.

Rather than start over on start cells with the current input, we can accelerate learning by backtracking a few steps and seeing if a sequence we already (at least partially) know had started earlier.

def learnPhase1 (   self,
  activeColumns,
  readOnly = False 
)

Compute the learning active state given the predicted state and the bottom-up input.

Parameters
activeColumns: list of active bottom-ups
readOnly: True if being called from backtracking logic. This tells us not to increment any segment duty cycles or queue up any updates.
Returns
True if the current input was sufficiently predicted, OR if we started over on startCells. False indicates that the current input was NOT predicted well enough to consider it "inSequence".

This looks at:
    - lrnPredictedState['t-1']

This modifies:
    - lrnActiveState['t']

def learnPhase2 (   self,
  readOnly = False 
)

Compute the predicted segments given the current set of active cells.

Parameters
readOnly: True if being called from backtracking logic. This tells us not to increment any segment duty cycles or queue up any updates.

This computes lrnPredictedState['t'] and queues up any segments that became active (and the list of active synapses for each segment) into the segmentUpdates queue.

This looks at:
    - lrnActiveState['t']

This modifies:
    - lrnPredictedState['t']
    - segmentUpdates

def predict (   self,
  nSteps 
)

This function gives the future predictions for <nSteps> timesteps starting from the current TP state.

The TP is returned to its original state at the end before returning.

  1. We save the TP state.
  2. Loop for nSteps:
    1. Turn on cells with lateral support from the current active cells.
    2. Set the predicted cells as the next step's active cells. In the learn and infer methods this step uses the input to correct the predictions; here we don't use any input.
  3. Revert the TP state to the time before prediction.
Parameters
nSteps: The number of future time steps to be predicted
Returns
all the future predictions - a numpy array of type "float32" and shape (nSteps, numberOfCols). The ith row gives the TP prediction for each column at a future timestep (t+i+1).
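A sketch (tp is an assumed, already-trained TP instance):

    futurePreds = tp.predict(nSteps=3)           # float32 array, shape (3, numberOfCols)
    top10AtT1 = futurePreds[0].argsort()[-10:]   # 10 most strongly predicted columns at t+1
    # The TP's internal state is restored before predict() returns.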
def printActiveIndices (   self,
  state,
  andValues = False 
)

Print the list of [column, cellIdx] indices for each of the active cells in state.

Parameters
state: TODO: document
andValues: TODO: document
def printCell (   self,
  c,
  i,
  onlyActiveSegments = False 
)
Todo:
document
def printCells (   self,
  predictedOnly = False 
)
Todo:
document
def printColConfidence (   self,
  aState,
  maxCols = 20 
)

Print up to maxCols number from a flat floating point array.

Parameters
aState: TODO: document
maxCols: TODO: document
def printComputeEnd (   self,
  output,
  learn = False 
)

Called at the end of inference to print out various diagnostic information based on the current verbosity level.

Parameters
output: TODO: document
learn: TODO: document
def printConfidence (   self,
  aState,
  maxCols = 20 
)

Print a floating point array that is the same shape as activeState.

Parameters
aState: TODO: document
maxCols: TODO: document
def printInput (   self,
  x 
)
Todo:
document
def printOutput (   self,
  y 
)
Todo:
document
def printSegmentUpdates (   self)
Todo:
document
def printState (   self,
  aState 
)

Print an integer array that is the same shape as activeState.

Parameters
aState: TODO: document
def printStates (   self,
  printPrevious = True,
  printLearnState = True 
)
Todo:
document
def processSegmentUpdates (   self,
  activeColumns 
)

Go through the list of accumulated segment updates and process them as follows:

 if the segment update is too old, remove the update
 else if the cell received bottom-up, update its permanences
 else if it's still being predicted, leave it in the queue
 else remove it.
Parameters
activeColumns: TODO: document
def removeSegmentUpdate (   self,
  updateInfo 
)

Remove a segment update (called when seg update expires or is processed)

Parameters
updateInfo: tuple (creationDate, SegmentUpdate)
def reset (   self)

Reset the state of all cells.

This is normally used between sequences while training. All internal states are reset to 0.

def resetStats (   self)

Reset the learning and inference stats.

This will usually be called by user code at the start of each inference run (for a particular data set).

def topDownCompute (   self,
  topDownIn = None 
)

Top-down compute - generate expected input given output of the TP.

Parameters
topDownIn: top-down input from the level above us
Returns
best estimate of the TP input that would have generated bottomUpOut.
def trimSegments (   self,
  minPermanence = None,
  minNumSyns = None 
)

This method deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining.

Parameters
minPermanence: Any synapse whose permanence is 0 or < minPermanence will be deleted. If None is passed in, then self.connectedPerm is used.
minNumSyns: Any segment with less than minNumSyns synapses remaining in it will be deleted. If None is passed in, then self.activationThreshold is used.
Returns
tuple (numSegsRemoved, numSynsRemoved)
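A sketch (tp is an assumed TP instance that has finished training):

    numSegsRemoved, numSynsRemoved = tp.trimSegments()  # defaults to connectedPerm / activationThreshold
    # finishLearning() is the backward-compatible wrapper that calls trimSegments().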
def trimSegmentsInCell (   self,
  colIdx,
  cellIdx,
  segList,
  minPermanence,
  minNumSyns 
)

This method goes through a list of segments for a given cell and deletes all synapses whose permanence is less than minPermanence and deletes any segments that have less than minNumSyns synapses remaining.

Parameters
colIdx: Column index
cellIdx: Cell index within the column
segList: List of segment references
minPermanence: Any synapse whose permanence is 0 or < minPermanence will be deleted.
minNumSyns: Any segment with less than minNumSyns synapses remaining in it will be deleted.
Returns
tuple (numSegsRemoved, numSynsRemoved)
def updateInferenceState (   self,
  activeColumns 
)

Update the inference state.

Called from compute() on every iteration.

Parameters
activeColumns: The list of active column indices.
def updateLearningState (   self,
  activeColumns 
)

Update the learning state.

Called from compute() on every iteration.

Parameters
activeColumns: List of active column indices.
def updateSegmentDutyCycles (   self)

This gets called on every compute.

It determines if it's time to update the segment duty cycles. Since the duty cycle calculation is a moving average based on a tiered alpha, it is important that we update all segments on each tier boundary.

Member Data Documentation

activationThreshold
Todo:
document
activeColumns
Todo:
document
avgLearnedSeqLength

Keeps track of the moving average of all learned sequence lengths.

cellConfidence
Todo:
document
cellsPerColumn
Todo:
document
colConfidence
Todo:
document
connectedPerm
Todo:
document
currentOutput
Todo:
document
globalDecay
Todo:
document
infActiveState
Todo:
document
infPredictedState
Todo:
document
initialPerm
Todo:
document
iterationIdx
Todo:
document
learnedSeqLength

Keeps track of the length of the sequence currently being learned.

lrnActiveState
Todo:
document
lrnIterationIdx
Todo:
document
lrnPredictedState
Todo:
document
maxAge
Todo:
document
maxInfBacktrack
Todo:
document
maxLrnBacktrack
Todo:
document
maxSegmentsPerCell
Todo:
document
maxSeqLength
Todo:
document
maxSynapsesPerSegment
Todo:
document
minThreshold
Todo:
document
newSynapseCount
Todo:
document
numberOfCols
Todo:
document
outputType
Todo:
document
pamCounter

pamCounter gets reset to pamLength whenever we detect that the learning state is making good predictions (at least half the columns predicted).

Whenever we do not make a good prediction, we decrement pamCounter. When pamCounter reaches 0, we start the learn state over again at start cells.

pamLength
Todo:
document
permanenceDec
Todo:
document
permanenceInc
Todo:
document
permanenceMax
Todo:
document
resetCalled

This gets set when we receive a reset and cleared on the first compute following a reset.

seed
Todo:
document
segmentUpdates

We store the lists of segment updates, per cell, so that they can be applied later during learning, when the cell gets bottom-up activation.

We store one list per cell. The lists are identified with a hash key which is a tuple (column index, cell index).

segUpdateValidDuration
Todo:
document
verbosity
Todo:
document
version
Todo:
document

The documentation for this class was generated from the following file: