abstochkin.agentstatedata

Class for storing the state of all agents of a certain species during an AbStochKin simulation.

  1"""
  2Class for storing the state of all agents of a certain species during an
  3AbStochKin simulation.
  4"""
  5
  6#  Copyright (c) 2024-2025, Alex Plakantonakis.
  7#
  8#  This program is free software: you can redistribute it and/or modify
  9#  it under the terms of the GNU General Public License as published by
 10#  the Free Software Foundation, either version 3 of the License, or
 11#  (at your option) any later version.
 12#
 13#  This program is distributed in the hope that it will be useful,
 14#  but WITHOUT ANY WARRANTY; without even the implied warranty of
 15#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 16#  GNU General Public License for more details.
 17#
 18#  You should have received a copy of the GNU General Public License
 19#  along with this program.  If not, see <http://www.gnu.org/licenses/>.
 20
 21from copy import deepcopy
 22from dataclasses import dataclass, field
 23
 24import numpy as np
 25
 26
 27@dataclass
 28class AgentStateData:
 29    """
 30    Class for storing the state of all agents of a certain species during an
 31    AbStochKin simulation.
 32
 33    Attributes
 34    ----------
 35    p_init : int
 36        The initial population size of the species whose data is
 37        represented in an `AgentStateData` object.
 38    max_agents : int
 39        The maximum number of agents for the species whose data
 40        is represented in an `AgentStateData` object.
 41    reps : int
 42        The number of times the AbStochKin algorithm will repeat a simulation.
 43        This will be the length of the `asv` list.
 44    fill_state: int
 45
 46    asv_ini : numpy.ndarray
 47        Agent-State Vector (asv) is a species-specific 2-row vector to monitor
 48        agent state according to Markov's property. This array is the initial
 49        asv, i.e., at `t=0`. The array shape is `(2, max_agents)`
 50    asv : list of numpy.ndarray
 51        A list of length `reps` with copies of `asv_ini`. Each simulation run
 52        uses its corresponding entry in `asv` to monitor the state of all
 53        agents.
 54    """
 55
 56    p_init: int  # initial population size
 57    max_agents: int  # maximum number of agents represented in asv
 58    reps: int  # number of times simulation is repeated
 59    fill_state: int
 60
 61    asv_ini: np.ndarray = field(init=False, default_factory=lambda: np.array([]))
 62    asv: list[np.ndarray] = field(init=False, default_factory=lambda: list(np.array([])))
 63
 64    def __post_init__(self):
 65        # Set up initial (t=0) agent-state vector (asv):
 66        self.asv_ini = np.concatenate(
 67            (np.ones(shape=(2, self.p_init), dtype=np.int8),
 68             np.full(shape=(2, self.max_agents - self.p_init),
 69                     fill_value=self.fill_state,
 70                     dtype=np.int8)),
 71            axis=1
 72        )
 73
 74        # Set up separate copy of the initial `asv` for each repetition of the
 75        # algorithm to facilitate parallelization of ensemble runs.
 76        self.asv = [deepcopy(self.asv_ini) for _ in range(self.reps)]
 77
 78    def apply_markov_property(self, r: int):
 79        """
 80        The future state of the system depends only on its current state.
 81        This method is called at the end of each time step in an AbStochKin
 82        simulation. Therefore, the new agent-state vector becomes the
 83        current state.
 84        """
 85        self.asv[r][0, :] = self.asv[r][1, :]
 86
 87    def cleanup_asv(self):
 88        """ Empty the contents of the array `asv`. """
 89        self.asv = list(np.array([]))
 90
 91    def get_vals_o1(self,
 92                    r: int,
 93                    stream: np.random.Generator,
 94                    p_vals: np.ndarray,
 95                    state: int = 1):
 96        """
 97        Get random values in [0,1) at a given time step for agents of a given
 98        state. Agents of other states have a value of zero.
 99
100        Get probability values at a given time step for agents of a given state.
101        Agents of other states have a transition probability of zero.
102
103        Notes
104        -----
105        Note that only elements of the `asv` that have the same state in the
106        previous and current time steps are considered. This is to ensure that
107        agents that have already transitioned to a different state in the
108        current time step are not reconsidered for a possible transition.
109        """
110        nonzero_elems = np.all(self.asv[r] == state, axis=0)
111        final_rand_nums = stream.random(self.max_agents) * nonzero_elems
112        final_p_vals = p_vals * nonzero_elems
113
114        return final_rand_nums, final_p_vals
115
116    def get_vals_o2(self,
117                    other,
118                    r: int,
119                    stream: np.random.Generator,
120                    p_vals: np.ndarray,
121                    state: int = 1):
122        """
123        Get random values in [0,1) at a given time step for interactions between
124        agents of a given state. Agents of other states have a value of zero.
125
126        Get probability values at a given time step for interactions between
127        agents of a given state. Interactions of agents in other states
128        have a transition probability of zero.
129
130        Notes
131        -----
132        Note that only elements of the `asv` that have the same state in the
133        previous and current time steps are considered. This is to ensure that
134        agents that have already transitioned to a different state in the
135        current time step are not reconsidered for a possible transition.
136        """
137        nonzero_rows = np.all(self.asv[r] == state, axis=0).reshape(-1, 1)
138        nonzero_cols = np.all(other.asv[r] == state, axis=0).reshape(1, -1)
139
140        rand_nums = stream.random(size=(self.max_agents, other.max_agents))
141
142        final_rand_nums = rand_nums * nonzero_rows * nonzero_cols
143        final_p_vals = p_vals * nonzero_rows * nonzero_cols
144
145        return final_rand_nums, final_p_vals
146
147    def __str__(self):
148        return f"Agent-State Vector with \n" \
149               f"Initial population size: {self.p_init}\n" \
150               f"Maximum number of agents: {self.max_agents}\n" \
151               f"Repeat simulation {self.reps} times\n" \
152               f"Fill state: {self.fill_state}"
@dataclass
class AgentStateData:

Class for storing the state of all agents of a certain species during an AbStochKin simulation.

Attributes
  • p_init (int): The initial population size of the species whose data is represented in an AgentStateData object.
  • max_agents (int): The maximum number of agents for the species whose data is represented in an AgentStateData object.
  • reps (int): The number of times the AbStochKin algorithm will repeat a simulation. This will be the length of the asv list.
  • fill_state (int): The state value assigned to the max_agents - p_init agent slots that are not part of the initial population.
  • asv_ini (numpy.ndarray): Agent-State Vector (asv): a species-specific 2-row array for monitoring agent state according to the Markov property. This array is the initial asv, i.e., at t=0. The array shape is (2, max_agents).
  • asv (list of numpy.ndarray): A list of length reps with copies of asv_ini. Each simulation run uses its corresponding entry in asv to monitor the state of all agents.
AgentStateData(p_init: int, max_agents: int, reps: int, fill_state: int)
p_init: int
max_agents: int
reps: int
fill_state: int
asv_ini: numpy.ndarray
asv: list[numpy.ndarray]
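
A minimal usage sketch, following the source above: constructing an AgentStateData object builds asv_ini with ones for the p_init initial agents and fill_state everywhere else, plus one working copy of asv_ini per repetition. The parameter values below are illustrative.

from abstochkin.agentstatedata import AgentStateData

# 3 of a maximum of 5 agents exist at t=0; 2 repetitions of the simulation.
data = AgentStateData(p_init=3, max_agents=5, reps=2, fill_state=0)

print(data.asv_ini)
# [[1 1 1 0 0]
#  [1 1 1 0 0]]
print(len(data.asv))  # 2: one working copy of asv_ini per repetition
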
def apply_markov_property(self, r: int):

Apply the Markov property: the future state of the system depends only on its current state. This method is called at the end of each time step in an AbStochKin simulation, so that the newly computed agent states (row 1 of the agent-state vector) become the current states (row 0).
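
For instance, suppose agent 0 leaves state 1 during a time step of repetition r = 0 (a hypothetical scenario; the new state is written to row 1 by the simulation). This call then makes row 1 the current state:

from abstochkin.agentstatedata import AgentStateData

data = AgentStateData(p_init=2, max_agents=3, reps=1, fill_state=0)
data.asv[0][1, 0] = 0          # hypothetical: agent 0 leaves state 1 this step
data.apply_markov_property(r=0)
print(data.asv[0])
# [[0 1 0]
#  [0 1 0]]
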

def cleanup_asv(self):

Empty the contents of the list asv.

def get_vals_o1(self, r: int, stream: numpy.random.Generator, p_vals: numpy.ndarray, state: int = 1):

Get random values in [0, 1) and transition probability values at a given time step for agents of a given state. Agents of other states get a value of zero in both returned arrays.

Notes

Only elements of the asv that have the same state in the previous and current time steps are considered. This ensures that agents that have already transitioned to a different state in the current time step are not reconsidered for a possible transition.
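
A sketch of how a caller might use the two returned arrays to decide first-order transitions. The comparison rule and the uniform p_trans array are illustrative assumptions, not part of this module:

import numpy as np
from abstochkin.agentstatedata import AgentStateData

data = AgentStateData(p_init=3, max_agents=4, reps=1, fill_state=0)
stream = np.random.default_rng(seed=19)
p_trans = np.full(data.max_agents, 0.25)  # hypothetical per-agent probabilities

rand_nums, p_vals = data.get_vals_o1(r=0, stream=stream, p_vals=p_trans)

# Hypothetical transition rule: an eligible agent transitions when its
# random draw falls below its transition probability. Ineligible agents
# have zeros in both arrays, so they can never satisfy the comparison.
transitioning = rand_nums < p_vals
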

def get_vals_o2(self, other, r: int, stream: numpy.random.Generator, p_vals: numpy.ndarray, state: int = 1):

Get random values in [0, 1) and transition probability values at a given time step for interactions between agents of a given state. Interactions involving agents of other states get a value of zero in both returned arrays.

Notes

Only elements of the asv that have the same state in the previous and current time steps are considered. This ensures that agents that have already transitioned to a different state in the current time step are not reconsidered for a possible transition.
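
A sketch for a second-order interaction between two species: entry (i, j) of each returned array is nonzero only if agent i of the first species and agent j of the second are both eligible. The pairwise probability matrix p_pair and the comparison rule are illustrative assumptions:

import numpy as np
from abstochkin.agentstatedata import AgentStateData

data_a = AgentStateData(p_init=2, max_agents=3, reps=1, fill_state=0)
data_b = AgentStateData(p_init=3, max_agents=4, reps=1, fill_state=0)
stream = np.random.default_rng(seed=7)
p_pair = np.full((data_a.max_agents, data_b.max_agents), 0.1)  # hypothetical

rand_nums, p_vals = data_a.get_vals_o2(data_b, r=0, stream=stream, p_vals=p_pair)

# Hypothetical rule: a pair of eligible agents is a candidate for reacting
# when its random draw falls below its pairwise transition probability.
reacting_pairs = rand_nums < p_vals  # shape (3, 4) boolean mask
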