
How Are We Doing? HAWD is a tool that tracks numbers over time for later
comparison and charting, making it easy to follow how things progress.
Think: catching performance regressions by comparing benchmark numbers
over time.

There are two parts to HAWD: the library and the command line tool. Both
use a hawd.conf file and HAWD dataset definition files.

The path to a hawd.conf file can either be supplied explicitly to the
HAWD::State class, or HAWD::State will search the directory tree (from
the current directory upwards) to find it. hawd.conf is a json file which
currently recognizes the following two entries:

    results: path to where results should be stored
    project: path to the project's dataset definition files

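For example, a hawd.conf might look like this (the paths here are purely
illustrative; adjust them to your own setup):

{
    "results": "~/hawd-results",
    "project": "~/src/myproject/hawd"
}
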
Tilde expansion is supported. It is recommended to keep a copy of
hawd.conf in the root of the source dir and have the build system "install"
a copy of it in the build dir with the proper paths filled in. This makes
it easy to run from the build dir and avoids hardcoding too many
paths into hawd.conf.

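With CMake, for instance, the "install" step could be a single
configure_file() call (a sketch; the template file name hawd.conf.in and
the directory layout are assumptions, not part of HAWD). A hawd.conf.in
in the source root:

{
    "results": "@CMAKE_BINARY_DIR@/hawd-results",
    "project": "@CMAKE_SOURCE_DIR@/hawd"
}

and in CMakeLists.txt:

    # Substitute @VAR@ placeholders and write the result into the build dir
    configure_file(${CMAKE_SOURCE_DIR}/hawd.conf.in
                   ${CMAKE_BINARY_DIR}/hawd.conf @ONLY)
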
A dataset definition file is also a json file and must appear in the path
pointed to by the project entry in hawd.conf. The name of the file is
also the name used to store the dataset on disk. Recognized values in the
json file include:

    name: the user-visible name of the dataset
    description: a description of the dataset
    columns: a json object containing value definitions

A value definition is a json object which allows one to define the type,
unit and min/max values. An example of a dataset definition file follows:

{
    "name": "Buffer Creation",
    "description": "Tests how fast buffer creation is",
    "columns": {
        "numBuffers": { "type": "int" },
        "time": { "type": "int", "unit": "ms", "min": 0, "max": 100 },
        "ops": { "type": "float", "unit": "ops/ms" }
    }
}

The hawd library is used wherever data needs to be stored in or fetched from
a dataset. Most often this involves using the Dataset and Dataset::Row classes,
something like this, where the dataset definition file is at
$project/buffer_creation:

    HAWD::State state;
    HAWD::Dataset dataset("buffer_creation", state);
    HAWD::Dataset::Row row = dataset.row();
    row.setValue("numBuffers", count);
    row.setValue("time", bufferDuration);
    row.setValue("ops", opsPerMs);
    dataset.insertRow(row);

That's it! insertRow returns the qint64 key the row was stored under,
so that the row can easily be fetched again later with Dataset::row(qint64 key).
Note that Row objects must always be created by a Dataset object to be used
with that Dataset, due to internal sanity checking.

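For example, the returned key can be used to read a row back (a sketch
based only on the calls shown above; count stands in for a real
measurement and error handling is omitted):

    HAWD::State state;
    HAWD::Dataset dataset("buffer_creation", state);
    HAWD::Dataset::Row row = dataset.row();
    row.setValue("numBuffers", count);
    qint64 key = dataset.insertRow(row);
    // Later: fetch the same row by its key
    HAWD::Dataset::Row sameRow = dataset.row(key);
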
The hawd command line tool allows one to list datasets, check dataset
definitions for errors, print tables of data, annotate rows and more. Run
hawd on its own to see a list of available commands.

//TODO: better documentation of the hawd command line