Dask
DaskBuffer
DaskBuffer (pool:Union[tspace.storage.pool.parquet.ParquetPool,tspace.storage.pool.avro.avro.AvroPool,NoneType]=None, batch_size:int, recipe:configparser.ConfigParser, driver:tspace.config.drivers.Driver, truck:tspace.config.vehicles.Truck, meta:tspace.data.core.ObservationMeta, torque_table_row_names:list[str], query:Optional[tspace.data.core.PoolQuery]=None, logger:Optional[logging.Logger]=None, dict_logger:Optional[dict]=None)
*A Buffer connected to a data array file pool

Args:
    recipe: ConfigParser containing a folder for the data files and the ObservationMeta
    batch_size: the batch size for sampling
    driver: the driver
    truck: the subject of the experiment
    meta: the metadata of the observation, ObservationMeta
    torque_table_row_names: the names of the torque table rows, e.g. ['r0', 'r1', 'r2', 'r3', 'r4', 'r5', 'r6', 'r7', 'r8', 'r9']
    pool: the pool to sample from, default is ParquetPool
    query: the query to sample from the pool, default is PoolQuery
    logger: the logger
    dict_logger: the dictionary logger*
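A minimal construction sketch (not taken from the library's own docs): it assumes `driver`, `truck`, and `meta` have already been built from the tspace configuration, and the recipe path and batch size are illustrative only.

```python
from configparser import ConfigParser

# Hypothetical recipe; the folder and file name are illustrative only.
recipe = ConfigParser()
recipe.read("data_folder/recipe.ini")

# driver, truck and meta are assumed to be existing
# tspace.config.drivers.Driver, tspace.config.vehicles.Truck and
# tspace.data.core.ObservationMeta instances.
buffer = DaskBuffer(
    recipe=recipe,
    batch_size=64,
    driver=driver,
    truck=truck,
    meta=meta,
    torque_table_row_names=[f"r{i}" for i in range(10)],
)
# __post_init__ sets the logger and loads the pool, so the buffer is ready to sample.
```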
DaskBuffer.__post_init__
DaskBuffer.__post_init__ ()
Set the logger and load the pool into the buffer
DaskBuffer.load
DaskBuffer.load ()
Load the pool into the buffer
DaskBuffer.sample
DaskBuffer.sample ()
Sample a batch from the data array file pool (Parquet or Avro) through Dask
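An illustrative sampling loop. Whether `sample()` returns the decoded numpy arrays directly or a DataFrame that still needs `decode_batch_records()` is an assumption here, not confirmed by the docstring above.

```python
# Illustrative number of update steps; the learner call is a placeholder.
for _ in range(10):
    batch = buffer.sample()
    # ... feed the batch to the learner update
```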
DaskBuffer.close
DaskBuffer.close ()
Close the pool; intended for the destructor
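Since `close()` is meant for the destructor, an explicit call is only needed when the buffer's lifetime is managed by hand; a minimal sketch:

```python
try:
    batch = buffer.sample()
finally:
    buffer.close()  # release the underlying pool handle
```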
DaskBuffer.decode_batch_records
DaskBuffer.decode_batch_records (batch:pandas.core.frame.DataFrame)
*Decode the batch records from a Dask DataFrame to numpy arrays

Sampling from the Parquet pool through Dask yields a Dask DataFrame, so no heavy decoding is required; the batch only needs slicing and converting to numpy arrays.

Args:
    batch: the batch of records as a DataFrame

Returns:
    states: the states of the batch
    actions: the actions of the batch
    rewards: the rewards of the batch
    nstates: the next states of the batch*
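A simplified sketch of the "just slicing and converting to numpy" step. The top-level column groups (`state`, `action`, `reward`, `nstate`) are assumed names for illustration; the real pool schema is defined by the ObservationMeta.

```python
import numpy as np
import pandas as pd


def decode_batch_records_sketch(batch: pd.DataFrame) -> tuple[np.ndarray, ...]:
    """Split a sampled batch into per-field numpy arrays.

    Assumes the batch DataFrame has top-level column groups named
    'state', 'action', 'reward' and 'nstate'; the actual layout comes
    from the ObservationMeta of the pool.
    """
    states = batch["state"].to_numpy()
    actions = batch["action"].to_numpy()
    rewards = batch["reward"].to_numpy()
    nstates = batch["nstate"].to_numpy()
    return states, actions, rewards, nstates
```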