Benchmark

Evaluate the performance of an engine on a dataset.

The performance time is measured in seconds, and the error is the mean squared error between the expected output values in the dataset and the output values obtained from the engine.
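As a minimal sketch (not fuzzylite's actual implementation), the mean squared error between expected and obtained outputs can be computed as follows:

```python
import numpy as np

def mean_squared_error(expected: np.ndarray, obtained: np.ndarray) -> float:
    """Mean of the squared differences between expected and obtained outputs."""
    return float(np.mean(np.square(expected - obtained)))

expected = np.array([0.0, 0.5, 1.0])
obtained = np.array([0.1, 0.5, 0.8])
error = mean_squared_error(expected, obtained)  # (0.01 + 0.0 + 0.04) / 3 ≈ 0.0167
```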

Attributes

- `data` (instance attribute): `data = data`
- `engine` (instance attribute): `engine = engine`
- `error` (instance attribute): `error: list[float] = []`
- `name` (instance attribute): `name = name`
- `random` (instance attribute): `random = RandomState(seed=seed)`
- `rows` (instance attribute): `rows = rows`
- `seed` (instance attribute): `seed = seed`
- `shuffle` (instance attribute): `shuffle = shuffle`
- `test_data` (instance attribute): `test_data = view()`
- `time` (instance attribute): `time: list[float] = []`

Functions

__init__

__init__(
    name: str,
    engine: Engine,
    data: ScalarArray,
    *,
    rows: int | float = 1.0,
    shuffle: bool = True,
    seed: int | None = None
) -> None

Constructor.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | name of the benchmark | required |
| `engine` | `Engine` | engine to benchmark | required |
| `data` | `ScalarArray` | data to benchmark the engine on | required |
| `rows` | `int \| float` | number (int) or ratio (float) of rows to use from the data | `1.0` |
| `shuffle` | `bool` | whether to shuffle the data | `True` |
| `seed` | `int \| None` | seed to shuffle the data | `None` |

__repr__

__repr__() -> str

Return the code to construct the benchmark in Python.

Returns:

| Type | Description |
|---|---|
| `str` | code to construct the benchmark in Python |

engine_and_data classmethod

engine_and_data(example: ModuleType) -> tuple[Engine, ScalarArray]

Create the engine and load the dataset for the example.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `example` | `ModuleType` | module to benchmark (e.g., `fuzzylite.examples.terms.arc`) | required |

Returns:

| Type | Description |
|---|---|
| `tuple[Engine, ScalarArray]` | tuple of engine and dataset |

for_example classmethod

for_example(example: ModuleType, rows: int | float = 1.0, shuffle: bool = True, seed: int | None = None) -> Benchmark

Create benchmark for the example.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `example` | `ModuleType` | example to benchmark (e.g., `fuzzylite.examples.terms.arc`) | required |
| `rows` | `int \| float` | number (int) or ratio (float) of rows to use from the data | `1.0` |
| `shuffle` | `bool` | whether to shuffle the data | `True` |
| `seed` | `int \| None` | seed to shuffle the data | `None` |

Returns:

| Type | Description |
|---|---|
| `Benchmark` | a benchmark ready for the example |

measure

measure(*, runs: int = 1) -> None

Measure the performance of the engine on the dataset for a number of runs.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `runs` | `int` | number of runs to evaluate the engine on the test data | `1` |
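A measurement loop of this shape can be sketched with a monotonic clock, recording one time per run. This is illustrative, not the library's implementation; the `evaluate` callable stands in for the engine evaluation:

```python
import time
from typing import Callable

def measure(evaluate: Callable[[], None], *, runs: int = 1) -> list[float]:
    """Time each run of evaluate, in seconds, using a monotonic clock."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        evaluate()
        times.append(time.perf_counter() - start)
    return times

times = measure(lambda: sum(range(10_000)), runs=3)
assert len(times) == 3 and all(t >= 0.0 for t in times)
```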

prepare

prepare() -> None

Prepare the engine and dataset to benchmark.

prepare_data

prepare_data() -> None

Prepare the dataset to benchmark on.

prepare_engine

prepare_engine() -> None

Prepare the engine to benchmark.

reset

reset() -> None

Reset the benchmark.

run

run() -> None

Run the benchmark once (without computing statistics).

summary

summary() -> dict[str, Any]

Summarize the benchmark results.

Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | dictionary of statistics containing the performance time in seconds and the mean squared error |
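A summary over the recorded `time` and `error` lists could aggregate simple statistics. This sketch uses the standard `statistics` module; the keys and the choice of statistics are illustrative, not the library's exact output:

```python
import statistics
from typing import Any

def summarize(time: list[float], error: list[float]) -> dict[str, Any]:
    """Aggregate mean and standard deviation of the recorded measurements."""
    def stats(values: list[float]) -> dict[str, float]:
        return {
            "mean": statistics.fmean(values),
            "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        }
    return {"time": stats(time), "error": stats(error)}

summary = summarize(time=[0.10, 0.12, 0.11], error=[0.01, 0.01, 0.01])
```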

summary_markdown

summary_markdown(*, header: bool = False) -> str

Summarize the benchmark results and format them using markdown.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `header` | `bool` | whether to include the table header in the summary | `False` |
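Rendering a summary as a markdown table row, optionally preceded by a header, can be sketched as follows (the column names and number formatting are illustrative, not the library's exact output):

```python
def summary_markdown(summary: dict[str, float], *, header: bool = False) -> str:
    """Format a flat summary dict as one markdown table row."""
    lines = []
    if header:
        lines.append("| " + " | ".join(summary) + " |")
        lines.append("|" + "---|" * len(summary))
    lines.append("| " + " | ".join(f"{value:.3f}" for value in summary.values()) + " |")
    return "\n".join(lines)

row = summary_markdown({"time": 0.11, "error": 0.016}, header=True)
```

Emitting the header only once lets several benchmarks append their rows to a single markdown table.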