PHPBench 1.1.0

Last modified 2022/09/14 11:14


PHPBench 1.1.0 has been tagged!

PHPBench 1.0 removed many features, most of which were deemed useless. The HTML report was an exception, and it has been re-introduced in 1.1 along with other improvements. Some of the more notable ones:

  • “Safe” parameters: You can now use any serializable class as a parameter. This change involved some internal refactoring, and I have, to the best of my knowledge, preserved the backwards-compatibility (B/C) promise.
  • $include and $include-glob configuration directives to include other configuration files.
  • Support for passing environment variables to the benchmark process.
  • Documented examples: this new section aims to demonstrate common approaches.
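To illustrate, the new directives might be combined in a phpbench.json like this (a hedged sketch: the included file names are hypothetical, and I'm assuming the environment variables are passed via a runner option named runner.env — check the configuration documentation for the exact key):

```json
{
    "$include": "phpbench.local.json",
    "$include-glob": "config/phpbench/*.json",
    "runner.env": {
        "APP_ENV": "bench"
    }
}
```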

In this post I’ll talk about the HTML reports and other features which were built to support them.

HTML Templates

The first task was to be able to render the existing PHPBench reports (aggregate, default, etc.) in HTML.

This has been done using simple, configurable PHP templates. Each template maps 1-to-1 with an object (e.g. a report document, a table, an expression node, a bar chart).
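To give a feel for the idea, a template for a table-like object could look something like the following. This is a minimal sketch, not PHPBench's actual template code; the $rows variable and its shape are assumptions for illustration:

```php
<?php /** @var array<int, array<string, string>> $rows Hypothetical table data exposed to the template */ ?>
<table>
    <?php foreach ($rows as $row): ?>
        <tr>
            <?php foreach ($row as $cell): ?>
                <td><?= htmlspecialchars($cell) ?></td>
            <?php endforeach ?>
        </tr>
    <?php endforeach ?>
</table>
```

Because each template is plain PHP and maps to one object, overriding how a single element renders means swapping a single template rather than rewriting the whole report.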

The 1.0 release introduced an expression language. One of the features of this language is that it evaluates to an AST. This enables the results to be easily formatted:

[Screenshot: HTML report output]

Compare this with the console:

[Screenshot: console report output]

Notice that the same formatting has been applied to both outputs.

Components

The existing reports act upon the entire suite. They can be “combined” by opting to generate multiple reports, but each will still act against the whole.

In 1.1 this limitation has been overcome by introducing a new component report generator.

While the “report generators” of 1.0 and earlier acted on the entire suite, components act on a data frame, and components can include other components. This allows a component (e.g. a section) to partition the data and include other components for each partition.

A (contrived) example configuration:

{
    "generator": "component",
    "partition": ["benchmark_name"],
    "components": [
        {
            "component": "section",
            "partition": ["subject_name"],
            "components": [
                {
                    "component": "text",
                    "text": "This is an example component: {{ first(frame['subject_name']) }}"
                },
                {
                    "component": "table_aggregate",
                    "partition": ["subject_name"],
                    "title": "Subject: {{ first(frame['benchmark_name']) }}",
                    "row": {
                        "name": "first(partition['subject_name'])",
                        "net_time": "sum(partition['result_time_net']) as time"
                    }
                }
            ]
        }
    ]
}

Which then renders:

[Screenshot: rendered component report]

Barcharts

The barchart_aggregate component allows you to configure bar charts in your reports:

[Screenshot: HTML bar chart]

The HTML charts (--output=html) are rendered thanks to plotlyjs. But they also work on the console (--output=console):

[Screenshot: console bar chart]

An example of the hashing benchmark is published in the documentation.
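A sketch of what a barchart_aggregate configuration might look like, following the component pattern shown earlier. The key names here (x_partition, bar_partition, y_expr) are my assumptions, not verified against the component's actual options — consult the report generators documentation for the real schema:

```json
{
    "generator": "component",
    "components": [
        {
            "component": "barchart_aggregate",
            "x_partition": ["subject_name"],
            "bar_partition": ["variant_name"],
            "y_expr": "mode(partition['result_time_avg'])"
        }
    ]
}
```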

Combining them all

Sections can optionally render their partitions in tabs:

[Screenshot: section partitions rendered as tabs]

Summary

HTML reports can be used to provide better visual feedback, and can also be published in a CI build pipeline.

The components allow more complex reports to be generated. New components can be added in the future to provide, for example, grid layouts or pie charts.

Next Steps

For 1.2 I will probably look into improving the Executor(s). Executors are responsible for executing a benchmark and collecting information about it. Currently the default executor generates a PHP script based on a template and executes it; this approach isn’t very flexible.

It would be interesting to be able to configure exactly how the script should be built, and to make it easily possible to customise it without overriding the template.
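As a rough illustration of the kind of script such a template produces (a simplified sketch of my own, not PHPBench's actual generated code — the revolution/iteration structure is the general idea, the details are made up):

```php
<?php
// Simplified sketch of a generated benchmark script: call the
// subject repeatedly ("revolutions") for a number of samples
// ("iterations") and report the timings back to the runner.

$revs = 1000;     // revolutions per sample
$iterations = 5;  // number of samples

$subject = function (): void {
    md5('hello world'); // stand-in for the real benchmark subject
};

$times = [];
for ($i = 0; $i < $iterations; $i++) {
    $start = hrtime(true);
    for ($rev = 0; $rev < $revs; $rev++) {
        $subject();
    }
    $times[] = (hrtime(true) - $start) / 1000; // microseconds
}

echo json_encode(['times' => $times]), PHP_EOL;
```

Making the construction of this script configurable, rather than baking it into a single template, is what would open the door to the executor ideas below.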

Other possibilities:

  • Random execution: distribute the sampling over all the benchmarks instead of running them sequentially.
  • OpCodeCounter: an executor to count opcodes as an additional metric; these results could be combined with regular results via a CompositeExecutor.
  • … create an issue if you have other ideas.

If you want to sponsor me (or a feature) you can do so on GitHub, and you can reach out to me through an issue or on Twitter.