psychsim package

Submodules

psychsim.action module

class psychsim.action.Action(arg={}, description=None)[source]

Bases: dict

Variables:special – a list of keys that are reserved for system use
agentLess()[source]

Utility method that returns a subject-independent version of this action
Return type:Action

clear() → None. Remove all items from D.[source]
getParameters()[source]
Returns:list of special parameters for this action
Return type:list(str)
match(pattern)[source]
parse(element)[source]
root()[source]
Returns:the base action table, with only special keys “subject”, “verb”, and “object”
Return type:Action
special = ['subject', 'verb', 'object']
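
A minimal usage sketch (the agent name, verb, and object below are illustrative, not part of the API):

    from psychsim.action import Action, makeActionSet

    # Construct an action using the reserved keys 'subject', 'verb', 'object'
    act = Action({'subject': 'Alice', 'verb': 'moveTo', 'object': 'store'})

    # match() checks the action against a table of key-value patterns
    if act.match({'verb': 'moveTo'}):
        print(act, 'is a movement action')

    # makeActionSet wraps a single subject/verb/object triple into an ActionSet
    moves = makeActionSet('Alice', 'moveTo', 'store')
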
class psychsim.action.ActionSet[source]

Bases: frozenset

agentLess()[source]

Utility method that returns a subject-independent version of this action set
Return type:ActionSet

get(key, default=None)[source]
items()[source]
match(pattern)[source]
Parameters:pattern (dict) – a table of key-value patterns that the action must match
Returns:the first action that matches the given pattern, or C{None} if none
Return type:Action
psychsim.action.act2dict(actions)[source]
Returns:a dictionary (indexed by actor) of actions equivalent to the Action, ActionSet, or dictionary passed in
psychsim.action.filterActions(pattern, actions)[source]
Returns:the subset of given actions that match the given pattern
psychsim.action.makeActionSet(subject, verb, obj=None)[source]
psychsim.action.powerset(iterable)[source]

Utility function, taken from the itertools recipes in the Python documentation: powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)
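The recipe referenced above, transcribed directly from the Python itertools documentation:

    from itertools import chain, combinations

    def powerset(iterable):
        """powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"""
        s = list(iterable)
        # Chain together the combinations of every size, from 0 to len(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
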

psychsim.agent module

class psychsim.agent.Agent(name, world=None)[source]

Bases: object

Variables:
  • name – agent name
  • world – the environment that this agent inhabits
  • actions – the set of possible actions that the agent can choose from
  • legal – a set of conditions under which certain action choices are allowed (default is that all actions are allowed at all times)
  • omega – the set of observable state features
  • x – X coordinate to be used in UI
  • y – Y coordinate to be used in UI
  • color – color name to be used in UI
  • belief_threshold (float) – belief-update outcomes whose likelihood is below this threshold are pruned (default is None, which means no pruning)
addAction(action, condition=None, description=None, codePtr=False)[source]
addModel(name, **kwargs)[source]
Adds a new possible model for this agent (to be used either as the true model or as a mental model that another agent has of it). Possible arguments are:
  • R: the reward table for the agent under this model (default is True), KeyedTree → float
  • beliefs: the beliefs the agent has under this model (default is True), MatrixDistribution
  • horizon: the horizon of the value function under this model (default is True), int
  • rationality: the rationality parameter used in a quantal response function when modeling others (default is 10), float
  • discount: discount factor used in lookahead
  • selection: selection mechanism used in decide
  • parent: another model that this model inherits from (default is True)
Parameters:name (str) – the label for this model
Returns:the model created
Return type:dict
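A hedged sketch of adding a mental model (the model name and parameter values are illustrative):

    # Assumes `agent` is an existing psychsim.agent.Agent in a World
    model = agent.addModel('agent_myopic',
                           horizon=1,        # look only one step ahead
                           rationality=5.0,  # softer quantal-response choices
                           parent=agent.get_true_model())
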
add_action(action, condition=None, description=None, codePtr=False)[source]
Parameters:condition (KeyedPlane) – optional legality condition
Returns:the action added
Return type:ActionSet
belief2model(parent, belief, find_match=True)[source]
Parameters:find_match (bool) – if True, then try to find an existing model that matches the beliefs (takes time, but reduces model proliferation)
compilePi(model=None, horizon=None, debug=False)[source]
compileV(model=None, horizon=None, debug=False)[source]
create_belief_state(state=None, model=None, include=None, ignore=None, stateType=<class 'psychsim.pwl.state.VectorDistributionSet'>)[source]

Handles all combinations of state type and specified belief type

decide(state=None, horizon=None, others=None, model=None, selection=None, actions=None, keySet=None, debug={}, context='')[source]

Generate an action choice for this agent in the given state

Parameters:
  • state (KeyedVector) – the current state in which the agent is making its decision
  • horizon (int) – the value function horizon (default is the horizon specified in the model)
  • others (str → ActionSet) – the optional action choices of other agents in the current time step
  • model (str) – the mental model to use (default is the model specified in the state)
  • selection (str) – how to translate the value function into an action selection:
    - random: choose one of the maximum-value actions at random
    - uniform: return a uniform distribution over the maximum-value actions
    - distribution: return a distribution (à la quantal response or softmax) using the rationality of the given model
    - consistent: make a deterministic choice among the maximum-value actions (default setting for a model)
    - None: use the selection method specified by the given model (default)
  • actions – possible action choices (default is all legal actions)
  • keySet – subset of state features to project over (default is all state features)
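A hedged sketch of querying a decision (assumes a fully initialized World with state and turn order already set up; the exact structure of the returned dictionary may vary):

    result = agent.decide(selection='consistent')
    # The result is a dictionary of decision information; under the deciding
    # model it includes the chosen action, among other entries
    print(result)
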
deleteModel(name)[source]

Deletes the named model from the space

Warning

does not check whether there are remaining references to this model

expand_value(node, actions, model=None, subkeys=None, horizon=None, update_beliefs=True, debug={}, context='')[source]

Expands a given value node by a single step, updating the sequence of states and expected rewards accordingly

expectation(other, model=None, state=None)[source]
Returns:what I expect this other agent to do
filter_models(state=None, **kwargs)[source]
findAttribute(name, model)[source]
Returns:the name of the nearest ancestor model (including the given model itself) that specifies a value for the named feature
find_action(pattern: Dict[str, str]) → psychsim.action.ActionSet[source]
Returns:An ActionSet containing an Action that matches all of the field-value pairs in the pattern, if any exist
getActions(vector=None, actions=None)[source]
getAttribute(name, model)[source]
Returns:the value for the specified parameter of the specified mental model
getBelief(vector=None, model=None)[source]
Parameters:model – the model of the agent to use, default is to use model specified in the state vector
Returns:the agent’s belief in the given world
getLegalActions(vector=None, actions=None)[source]
Parameters:
  • vector – the world in which to test legality
  • actions – the set of actions to test legality of (default is all available actions)
Returns:the set of possible actions to choose from in the given state vector
Return type:{ActionSet}
getReward(model=None)[source]
getState(feature, state=None, unique=False)[source]
get_nth_level(n, state=None, **kwargs)[source]
Returns:a list of the names of all nth-level models for this agent
get_true_model(unique=True)[source]
Returns:the name of the “true” model of this agent, i.e., the model by which the real agent is governed in the real world
Return type:str
Parameters:unique (bool) – If True, assume there is a unique true model (default is True)
hasAction(atom)[source]
Returns:True iff this agent has the given action (possibly in combination with other actions)
Return type:bool
ignore(agents, model=None)[source]
index2model(index, throwException=False)[source]

Convert a numeric representation of a model to a name
Parameters:index (int) – the numeric representation of the model
Return type:str

model2index(model)[source]

Convert a model name to a numeric representation
Parameters:model (str) – the model name
Return type:int

n_level(n, parent_models=None, null={}, prefix='', **kwargs)[source]
Warning:Does not check whether there are existing models
oldvalue(vector, action=None, horizon=None, others=None, model=None, keys=None)[source]

Computes the expected value of a state vector (and optional action choice) to this agent

Parameters:
  • vector (KeyedVector) – the state vector (not distribution) representing the possible world under consideration
  • action (ActionSet) – prescribed action choice for the agent to evaluate; if None, then use the agent’s own action choice (default is None)
  • horizon (int) – the number of time steps to project into the future (default is the agent’s horizon)
  • others (str → ActionSet) – optional table of actions being performed by other agents in this time step (default is no other actions)
  • model – the model of this agent to use (default is True)
  • keys – subset of state features to project over in computing future value (default is all state features)
predict(vector, name, V, horizon=0)[source]

Generate a distribution over possible actions based on a table of values for those actions
Parameters:
  • vector – the current state vector
  • name – the name of the agent whose behavior is to be predicted
  • V – either a ValueFunction instance, or a dictionary of float values indexed by actions

printModel(model=None, buf=None, index=None, prefix='', reward=False, previous=None)[source]
printReward(model=True, buf=None, prefix='')[source]
resetBelief(state=None, model=None, include=None, ignore=None, stateType=<class 'psychsim.pwl.state.VectorDistributionSet'>)[source]
reward(vector=None, model=None, recurse=True)[source]
Parameters:recurse (bool) – True iff it is OK to recurse into another agent’s reward (default is True)
Returns:the reward I derive in the given state (under the given model, the default being the True model)
Return type:float
setAttribute(name, value, model=None)[source]

Set a parameter value for the given model(s)
Parameters:
  • name (str) – the feature of the model to set
  • value – the new value for the parameter
  • model – the model to set the parameter for, where None means set it for all (default is None)

setBelief(key, distribution, model=None, state=None)[source]
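
A hedged sketch of seeding a belief (assumes a list-valued 'location' feature was defined for an agent named Bob; all names are illustrative):

    from psychsim.probability import Distribution
    from psychsim.pwl import stateKey

    agent.setBelief(stateKey('Bob', 'location'),
                    Distribution({'home': 0.7, 'store': 0.3}))
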
setHorizon(horizon, model=None)[source]
Parameters:model – the model to set the horizon for, where None means set it for all (default is None)
setLegal(action, tree)[source]

Sets the legality decision tree for a given action
Parameters:
  • action – the action whose legality we are setting
  • tree (KeyedTree) – the decision tree for the legality of the action

setParameter(name, value, model=None)[source]
setPolicy(policy, model=None)[source]
setReward(tree, weight=0, model=None)[source]

Adds/updates a goal weight within the reward function for the specified model.

setState(feature, value, state=None, noclobber=False, recurse=False)[source]
Parameters:recurse – if True, set this feature to the given value for all agents’ beliefs (and beliefs of beliefs, etc.)
set_fully_observable()[source]

Helper method that sets up observations for this agent so that it observes everything (within reason)

set_observations(unobservable=None)[source]
stateEstimator(state, actions, horizon=None)[source]
updateBeliefs(state=None, actions={}, horizon=None, context='')[source]
updateBeliefsOLD(trueState=None, actions={}, max_horizon=None, context='')[source]

Warning

Even if this agent starts with True beliefs, its beliefs can deviate after actions with stochastic effects (i.e., the world transitions to a specific state with some probability, but the agent only knows a posterior distribution over that resulting state). If you want the agent’s beliefs to stay correct, then set the static attribute on the model to True.

value(belief, action, model, horizon=None, others=None, keySet=None, updateBeliefs=True, debug={}, context='')[source]
valueIteration(horizon=None, ignore=None, model=True, epsilon=1e-06, debug=0, maxIterations=None)[source]

Compute a value function for the given model

zero_level(parent_model=None, null=None)[source]
Return type:str
class psychsim.agent.ValueFunction(xml=None)[source]

Bases: object

Representation of an agent’s value function, either from caching or explicit solution

actionTable(name, state, horizon)[source]
Returns:a table of values for actions for the given agent in the given state
add(name, state, action, horizon, value)[source]

Adds the given value to the current value function

get(name, state, action, horizon, ignore=None)[source]
printV(agent, horizon)[source]
set(name, state, action, horizon, value)[source]
psychsim.agent.explain_decision(decision)[source]

psychsim.graph module

Class definition for representation of dependency structure among all variables in a PsychSim scenario

class psychsim.graph.DependencyGraph(myworld=None)[source]

Bases: dict

Representation of dependency structure among PsychSim variables

clear() → None. Remove all items from D.[source]
computeEvaluation()[source]

Determine the order in which to compute new values for state features

computeGraph(agents=None, state=None, belief=False)[source]
computeLineage()[source]

Adds ancestors to every node and also computes layers

deleteKeys(toDelete)[source]
getEvaluation()[source]
getLayers()[source]
getRoot()[source]
items() → a set-like object providing a view on D's items[source]
keys() → a set-like object providing a view on D's keys[source]
values() → an object providing a view on D's values[source]

psychsim.helper_functions module

psychsim.modeling module

class psychsim.modeling.Domain(fname, logger=<RootLogger root (WARNING)>)[source]

Bases: object

Structure for representing model-building information
Variables:
  • idFields (str[]) – fields representing unique IDs for each record
  • filename (str) – root filename for all model-related files
  • fields (str → dict) – key of mappings from fields of data to model variables
  • data – table of records with relevant variables
  • variations – list of dependency variations to explore
  • models – list of model name codes to explore
  • targets – set of fields to predict

links2model(links)[source]
processData(raw)[source]

Takes in raw data and extracts the relevant fields

readDataFile(fname)[source]
readInputData(fname=None)[source]
readKey(fname=None)[source]
readPredictions(fname=None)[source]
readVariations(fname=None)[source]
recordID(record)[source]
targetHistogram(missing=None, data=None)[source]
unmatched()[source]
writePredictions(fname=None)[source]
psychsim.modeling.leaf2matrix(tree, key)[source]
psychsim.modeling.noisyOrTree(tree, value)[source]

psychsim.probability module

class psychsim.probability.Distribution(args=None, rationality=None)[source]

Bases: object

A probability distribution over hashable objects

addProb(element, value)[source]

Utility method that increases the probability of the given element by the given value

add_prob(element, probability)[source]

Utility method that increases the probability of the given element by the given value
Parameters:
  • element – the domain element
  • probability (float) – the probability to add for this element

clear()[source]
domain()[source]
Returns:the sample space of this probability distribution
Return type:generator
element_to_str(element)[source]
entropy()[source]
Returns:entropy (in bits) of this distribution
epsilon = 1e-08
expectation()[source]
Returns:the expected value of this distribution
Return type:float
first()[source]
Returns:the first element in this distribution’s domain (most useful if there’s only one element)
get(element)[source]
getProb(element)[source]
is_complete(epsilon=None)[source]
Returns:True iff the total probability mass is 1 (or within epsilon of 1)
items()[source]
keys()[source]
max(k=1, number=1)[source]
Parameters:
  • k – the number of most probable elements to return (default is 1)
  • number – the number of values to return for each element: the element itself if 1, element and probability if 2, element, probability, and index in domain if 3
Returns:the top k most probable elements in this distribution (breaking ties by returning the highest-valued element)

normalize()[source]

Normalizes the distribution so that the sum of values = 1

probability()[source]
Returns:the total probability mass in this distribution
prune(epsilon=1e-08)[source]
prune_elements(epsilon=1e-08)[source]

Merge any elements that are within epsilon of each other

prune_probability(threshold, normalize=False)[source]

Removes any elements in the distribution whose probability is strictly less than the given threshold
Parameters:normalize – normalize the distribution after pruning if True (default is False)
Returns:the probability mass remaining after pruning (and before any normalization)

prune_size(k)[source]

Remove the least likely elements to get the domain down to size k
Returns:the remaining total probability

remove_duplicates()[source]

Makes sure all elements are unique (combines probability mass when appropriate)
Warning:modifies this distribution in place

replace(old, new)[source]

Replaces one element in the sample space with another. Raises an exception if the original element does not exist, and an exception if the new element already exists (i.e., it does not do a merge)

sample() → Tuple[Any, float][source]
Returns:an element sampled from this domain according to this distribution, along with its probability
scale_prob(factor)[source]
Returns:a new Distribution whose probability values have all been multiplied by the given factor
select(maximize=False)[source]

Reduce distribution to a single element, sampled according to this distribution
Returns:the probability of the selection made

set(element)[source]

Reduce distribution to be 100% for the given element
Parameters:element – the element that will be the only one with nonzero probability

sorted_string()[source]
values()[source]
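
A minimal sketch of the Distribution interface described above (element names are illustrative):

    from psychsim.probability import Distribution

    d = Distribution({'heads': 0.4, 'tails': 0.4})
    d.addProb('heads', 0.2)     # bump one element's probability mass
    assert d.is_complete()      # total mass is now 1 (within epsilon)
    element, prob = d.sample()  # draw an element along with its probability
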

psychsim.reward module

psychsim.reward.achieveFeatureValue(key, value, agent)[source]
psychsim.reward.achieveGoal(key, agent)[source]
psychsim.reward.maximizeFeature(key, agent)[source]
psychsim.reward.minimizeDifference(key1, key2, agent)[source]
psychsim.reward.minimizeFeature(key, agent)[source]
psychsim.reward.null_reward(agent)[source]
Returns:a reward function that always returns 0
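
A hedged sketch of attaching these reward generators to agents (assumes agents alice and bob in a World where the named features are defined):

    from psychsim.reward import maximizeFeature, minimizeDifference
    from psychsim.pwl import stateKey

    # Weight 1 on Alice's own health, weight 0.5 on parity with Bob's wealth
    alice.setReward(maximizeFeature(stateKey(alice.name, 'health'),
                                    alice.name), 1.0)
    alice.setReward(minimizeDifference(stateKey(alice.name, 'wealth'),
                                       stateKey(bob.name, 'wealth'),
                                       alice.name), 0.5)
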

psychsim.shell module

psychsim.shell.act(label)[source]
psychsim.shell.choose()[source]
psychsim.shell.loadScenario(filename)[source]
psychsim.shell.printHelp()[source]
psychsim.shell.step(actions=None)[source]

psychsim.world module

class psychsim.world.World(xml=None, stateType=<class 'psychsim.pwl.state.VectorDistributionSet'>)[source]

Bases: object

Variables:
  • agents – table of agents in this world, indexed by name
  • state – the distribution over states of the world
  • variables – definitions of the domains of possible variables (state features, relationships, observations)
  • symbols – utility storage of symbols used across all enumerated state variables
  • dynamics – table of action effect models
  • dependency – dependency structure among state features that impose temporal constraints
  • history – accumulated list of outcomes from simulation steps
  • termination – list of conditions under which the simulation terminates (default is none)
addActionEffects()[source]

For backward compatibility with scenarios that didn’t do this from the beginning

addAgent(agent, setModel=True, avoid_beliefs=True)[source]
addDynamics(tree, action=True, enforceMin=False, enforceMax=False)[source]
addTermination(tree, action=True)[source]

Adds a possible termination condition to the list

add_agent(agent, setModel=True, avoid_beliefs=True)[source]
applyEffect(state, effect, select=False, max_k: Optional[int] = None) → float[source]
audit()[source]

Pre-flight simulation check

clearCoords()[source]
decodeVariable(key, distribution)[source]
defineRelation(subj, obj, name, domain=<class 'float'>, lo=0.0, hi=1.0, **kwargs)[source]

Defines a binary relationship between two agents
Parameters:
  • subj (str) – one of the agents in the relation (if a directed link, it is the “origin” of the edge)
  • obj (str) – one of the agents in the relation (if a directed link, it is the “destination” of the edge)
  • name (str) – the name of the relation (e.g., the verb to use between the subject and object)

defineState(entity, feature, domain=<class 'float'>, lo=0.0, hi=1.0, description=None, combinator=None, codePtr=False)[source]
defineVariable(key, domain=<class 'float'>, lo=0.0, hi=1.0, description=None, combinator=None, codePtr=False, avoid_beliefs=True)[source]

Define the type and domain of a given element of the state vector

Parameters:
  • key (str) – string label for the column being defined
  • domain (class) – the domain of values for this feature. Acceptable values are:
    - float: continuous range
    - int: discrete numeric range
    - bool: True/False value
    - list: enumerated set of discrete values
    - ActionSet: enumerated set of actions, of the named agent (as key)
  • lo (float/int/list) – for float/int features, the lowest possible value. for list features, a list of possible values.
  • hi (float/int) – for float/int features, the highest possible value
  • description (str) – optional text description explaining what this state feature means
  • combinator – how should multiple dynamics for this variable be combined
define_state(entity, feature, domain=<class 'float'>, lo=0, hi=1, description=None, combinator=None, codePtr=False)[source]

Defines a state feature associated with a single agent, or with the global world state.
Parameters:entity (str) – if None, the given feature is on the global world state; otherwise, it is local to the named agent
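
A minimal world-setup sketch using defineState/setState from above (the agent and feature names are illustrative):

    from psychsim.world import World
    from psychsim.agent import Agent

    world = World()
    alice = Agent('Alice')
    world.addAgent(alice)

    # A continuous feature on Alice, plus an enumerated world-level feature
    world.defineState(alice.name, 'health', float, lo=0.0, hi=1.0,
                      description='Physical well-being')
    world.defineState(None, 'weather', list, ['sunny', 'rainy'])
    world.setState(alice.name, 'health', 1.0)
    world.setState(None, 'weather', 'sunny')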

deltaAction(state=None, actions=None, horizon=None, tiebreak=None, keySubset=None, debug={}, context='')[source]
deltaOrder(actions, vector)[source]

Warning

assumes that no one is acting out of turn

Returns:the new turn sequence resulting from the performance of the given actions
deltaState(actions, state, uncertain=False)[source]

Computes the change across a subset of state features

deltaTurn(state, actions=None)[source]

Computes the change in the turn order based on the given actions
Parameters:
  • start (VectorDistributionSet) – the original state
  • end (VectorDistributionSet) – the final state (which will be modified to reflect the new turn order)
Returns:the dynamics functions applied to update the turn order

encodeVariable(key, value)[source]
explain(outcomes, level=1, buf=None)[source]

Generate a more readable interpretation of outcomes generated by step

Parameters:
  • outcomes (dict[]) – the return value from step
  • level (int) – the level of explanation detail:
    0. No explanation
    1. Agent decisions
    2. Agent value functions
    3. Agent expectations
    4. Effects of expected actions
    5. World state (possibly subjective) at each step
  • buf – the string buffer to put the explanation into (default is standard out)
explainAction(state=None, agents=None, buf=None, level=1)[source]
Parameters:agents – subset of agents whose actions will be extracted (default is all acting agents)
explainDecision(decision, buf=None, level=2, prefix='')[source]

Subroutine of explain for explaining agent decisions

float2value(key, flt)[source]
getAction(name=None, state=None, unique=False)[source]
Returns:the ActionSet last performed by the given entity
getActionEffects(joint, keySet, dynamics=None)[source]
Parameters:uncertain – True iff there is uncertainty about which actions will be performed
getActions(vector, agents=None, actions=None)[source]
Returns:the set of all possible action combinations that could happen in the given state
getAncestors(keySubset, actions)[source]
Returns:a set of keys that potentially influence at least one key in the given set of keys (including this set as well)
getConditionalDynamics(action, key, tree=None)[source]
getDescription(key, feature=None)[source]
getDynamics(key, action, state=None)[source]
getFeature(key, state=None, unique=False)[source]
Parameters:
  • key (str) – the label of the state element of interest
  • state (VectorDistribution) – the distribution over possible worlds (default is the current world state)
Returns:a distribution over values for the given feature
Return type:psychsim.probability.Distribution
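
A hedged sketch of reading a feature back out (continuing the setup sketch above; stateKey is assumed to come from psychsim.pwl):

    from psychsim.pwl import stateKey

    dist = world.getFeature(stateKey('Alice', 'health'))  # a Distribution
    value = world.getFeature(stateKey('Alice', 'health'),
                             unique=True)                 # a single value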

getMentalModel(modelee, vector)[source]
getModel(modelee, state=None, unique=False)[source]
Returns:the name of the model of the given agent indicated by the given state vector. If the given agent is a list, descends down the recursive beliefs to return the model at the bottom of that recursion
Return type:str
getState(entity, feature, state=None, unique=False)[source]

For backward compatibility
Parameters:
  • entity (str) – the name of the entity of interest (None if the feature of interest is of the world itself)
  • feature (str) – the state feature of interest
  • unique – assume there is a unique true value and return it (not a Distribution)

getTurnDynamics(key, actions)[source]
getValue(key, state=None)[source]

Helper method that returns a single value from a vector or a singleton distribution
Parameters:
  • key (str) – the label of the state element of interest
  • state (VectorDistribution or psychsim.pwl.KeyedVector) – the distribution over possible worlds (default is the current world state)
Returns:a single value for the given feature

get_current_models(state=None, cycle_check=False, all_models=None, tree=None, recurse=True)[source]
has_agent(agent)[source]
Parameters:agent (Agent or str) – the agent (or agent name) to look for
Returns:True iff this World already has an agent with the same name
Return type:bool
initialize()[source]
memory = False
modelGC(check=False)[source]

Garbage collect orphaned models.

nearestVector(vector, vectors)[source]
next(vector=None)[source]
Returns:a list of agents (by name) whose turn it is in the current epoch
Return type:str[]
printBeliefs(name, state=None, buf=None, prefix='', beliefs=True)[source]
printDelta(old, new, buf=None, prefix='')[source]

Prints a kind of diff patch for one state vector with respect to another
Parameters:
  • old (psychsim.pwl.KeyedVector) – the “original” state vector
  • new (VectorDistribution) – the state vector we want to see the diff of

printState(distribution=None, buf=None, prefix='', beliefs=True, first=True, models=None)[source]

Utility method for displaying a distribution over possible worlds
Parameters:
  • distribution (VectorDistribution) – the distribution over worlds to display
  • buf – the string buffer to put the string representation in (default is standard output)
  • prefix (str) – a string prefix (e.g., tabs) to insert at the beginning of each line
  • beliefs (bool) – if True, print out inaccurate beliefs, too

printVector(vector, buf=None, prefix='', first=True, beliefs=False, csv=False, models=None)[source]

Utility method for displaying a single possible world
Parameters:
  • vector (psychsim.pwl.KeyedVector) – the possible world to display
  • buf – the string buffer to put the string representation in (default is standard output)
  • prefix (str) – a string prefix (e.g., tabs) to insert at the beginning of each line
  • first (bool) – if True, then the first line is the continuation of an existing line (default is True)
  • csv (bool) – if True, then print the vector as comma-separated values (default is False)
  • beliefs (bool) – if True, then print any agent beliefs that might deviate from this vector as well (default is False)

pruneModels(vector)[source]

Do you want a version of a possible world without all the fuss of agent models? Then this is the method for you!

reachable(state=None, transition=None, horizon=-1, ignore=[], debug=False)[source]

Note:the __predecessors__ entry for each reachable vector is a set of possible preceding states (i.e., those whose value must be updated if the value of this vector changes)
Returns:a transition matrix among states reachable from the given state (default is the current state)
Return type:psychsim.pwl.KeyedVector → ActionSet → VectorDistribution

resymbolize(state=None)[source]
rotateTurn(name, state=None)[source]

Changes the given state vector so that the named agent is up next, preserving the current turn sequence

save(filename)[source]
Returns:the filename used (possibly with a .psy extension added)
Return type:str
scaleState(vector)[source]

Normalizes the given state vector so that all elements occur in [0,1]
Parameters:vector (psychsim.pwl.KeyedVector) – the vector to normalize
Returns:the normalized vector
Return type:psychsim.pwl.KeyedVector

setAllParallel()[source]

Utility method that sets the order to be all agents (who have actions) acting in parallel

setDynamics(key, action, tree, enforceMin=False, enforceMax=False, codePtr=False)[source]

Defines the effect of an action on a given state feature
Parameters:
  • key (str) – the key of the affected state feature
  • action (Action or ActionSet) – the action affecting the state feature
  • tree (psychsim.pwl.KeyedTree) – the decision tree defining the effect
  • codePtr (bool) – if True, tags the dynamics with a pointer to the module and line number where the tree is defined

setFeature(key, value, state=None, noclobber=False, recurse=False)[source]
setJoint(distribution, state=None)[source]

Sets the state for a combination of state features
Parameters:distribution (VectorDistribution) – the joint distribution to join to the current state
Raises:
  • ValueError – if the joint is over features already present in the state
  • ValueError – if the joint is not over at least two features

setMentalModel(modeler, modelee, distribution, model=None)[source]

Sets the distribution over mental models one agent has of another entity
Note:normalizes the given distribution
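
A hedged sketch of assigning mental models (model names are illustrative; continues the earlier sketches):

    # Give Bob an alternative model, then let Alice believe Bob is probably
    # governed by it (the distribution is normalized per the note above)
    bob.addModel('bob_altruist', rationality=10.0)
    world.setMentalModel(alice.name, bob.name,
                         {'bob_altruist': 0.8, bob.get_true_model(): 0.2})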

setModel(modelee, distribution, state=None, model=None)[source]
setOrder(order)[source]

Equivalent to the more pythonic set_order

setParallel(flag=True)[source]

Turns on multiprocessing when agents have turns in parallel
Parameters:flag (bool) – multiprocessing is on iff True (default is True)

setState(entity, feature, value, state=None, noclobber=False, recurse=False)[source]

For backward compatibility
Parameters:
  • entity (str) – the name of the entity whose state feature we’re setting (does not have to be an agent)
  • feature (str) – the state feature to set
  • recurse – if True, set this feature to the given value for all agents’ beliefs (and beliefs of beliefs, etc.)

setTurnDynamics(name, action, tree)[source]

Convenience method for setting custom dynamics for the turn order
Parameters:
  • name (str) – the name of the agent whose turn dynamics are being set
  • action (Action or ActionSet) – the action affecting the turn order
  • tree (psychsim.pwl.KeyedTree) – the decision tree defining the effect on this agent’s turn order

set_feature(key, value, state=None, noclobber=False, recurse=False)[source]

Set the value of an individual element of the state vector
Parameters:
  • key (str) – the label of the element to set
  • value (float or psychsim.probability.Distribution) – the new value for the element
  • state (VectorDistribution) – the state distribution to modify (default is the current world state)
  • recurse – if True, set this feature to the given value for all agents’ beliefs (and beliefs of beliefs, etc.)

set_order(order)[source]

Initializes the turn order to the given order
Parameters:order (str[] or {str}[]) – the turn order, as a list of names (each agent acts in sequence) or a list of sets of names (agents within a set act in parallel)
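
For example (continuing the earlier sketches):

    world.setOrder([alice.name, bob.name])    # Alice then Bob, in sequence
    world.setOrder([{alice.name, bob.name}])  # or both acting in parallel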

step(actions=None, state=None, real=True, select=False, keySubset=None, horizon=None, tiebreak=None, updateBeliefs=True, debug={}, threshold=None, context='', max_k=None)[source]

The simulation method
Parameters:
  • actions (str → ActionSet) – optional argument setting a subset of actions to be performed in this turn
  • state (VectorDistribution) – optional initial state distribution (default is the current world state distribution)
  • real (bool) – if True, then modify the given state; otherwise, this is only hypothetical (default is True)
  • threshold (float) – outcomes with a likelihood below this threshold are pruned (default is None, i.e., no pruning)
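
A hedged sketch of the main simulation loop (assumes at least one termination condition was registered via addTermination):

    while not world.terminated():
        world.step(select=True)  # resolve stochastic effects to one outcome
        world.printState()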

terminated(state=None)[source]

Evaluates world states with respect to termination conditions
Parameters:state (psychsim.pwl.KeyedVector or VectorDistribution) – the state vector (or distribution thereof) to evaluate (default is the current world state)
Returns:True iff the given state (or all possible worlds, if a distribution) satisfies at least one termination condition
Return type:bool

updateModels(outcome, vector)[source]
value2float(key, value)[source]
Returns:the float value (appropriate for storing in a psychsim.pwl.KeyedVector) corresponding to the given (possibly symbolic, bool, etc.) value
psychsim.world.loadWorld(filename)[source]
psychsim.world.scaleValue(value, entry)[source]
Returns:a new float value that has been normalized according to the feature’s domain

Module contents