diff --git a/README.md b/README.md index 184cd03e..cdb6c7ed 100644 --- a/README.md +++ b/README.md @@ -138,7 +138,7 @@ Examples: curl -X DELETE http://pushgateway.example.org:9091/metrics/job/some_job -* Delete all metrics in all groups (requires to enable the admin api`--web.enable-admin-api`): +* Delete all metrics in all groups (requires the admin API to be enabled via the command line flag `--web.enable-admin-api`): curl -X PUT http://pushgateway.example.org:9091/api/v1/admin/wipe @@ -202,10 +202,12 @@ timestamps. If you think you need to push a timestamp, please see [When To Use The Pushgateway](https://prometheus.io/docs/practices/pushing/). -In order to make it easier to alert on pushers that have not run recently, the -Pushgateway will add in a metric `push_time_seconds` with the Unix timestamp -of the last `POST`/`PUT` to each group. This will override any pushed metric by -that name. +In order to make it easier to alert on failed pushers or those that have not +run recently, the Pushgateway will add in the metrics `push_time_seconds` and +`push_failure_time_seconds` with the Unix timestamps of the last successful and +failed `POST`/`PUT` to each group, respectively. These metrics override any +pushed metrics with the same names. A value of zero for either metric implies +that the group has never seen a successful or failed `POST`/`PUT`. ## API @@ -277,20 +279,24 @@ header. (Use the value `application/vnd.google.protobuf; proto=io.prometheus.client.MetricFamily; encoding=delimited` for protocol buffers, otherwise the text format is tried as a fall-back.) -The response code upon success is always 202 (even if the same -grouping key has never been used before, i.e. there is no feedback to -the client if the push has replaced an existing group of metrics or -created a new one). +The response code is either 200 or 400. A 200 response implies a successful +push, either replacing an existing group of metrics or creating a new one. A +400 response is returned if the request is malformed, if the pushed metrics are +inconsistent with metrics pushed to other groups, or if they collide with +metrics of the Pushgateway itself. An explanation is returned in the body of +the response and logged at error level. + +In rare cases, it is possible that the Pushgateway ends up with an inconsistent +set of metrics already pushed. In that case, new pushes are also rejected as +inconsistent, even if the inconsistency was introduced by metrics pushed +earlier. Delete the offending metrics to get out of that situation. _If using the protobuf format, do not send duplicate MetricFamily proto messages (i.e. more than one with the same name) in one push, as they will overwrite each other._ -A successfully finished request means that the pushed metrics are -queued for an update of the storage. Scraping the push gateway may -still yield the old results until the queued update is -processed. Neither is there a guarantee that the pushed metrics are -persisted to disk. (A server crash may cause data loss. Or the push +Note that the Pushgateway doesn't provide any strong guarantees that the pushed +metrics are persisted to disk. (A server crash may cause data loss. Or the push gateway is configured to not persist to disk at all.) A `PUT` request with an empty body effectively deletes all metrics with the @@ -351,12 +357,14 @@ The default port the Pushgateway is listening to is 9091. The path looks like: The Pushgateway exposes the following metrics via the configured `--web.telemetry-path` (default: `/metrics`): - The pushed metrics.
-- For each pushed group, a metric `push_time_seconds` as explained above. +- For each pushed group, the metrics `push_time_seconds` and + `push_failure_time_seconds` as explained above. - The usual metrics provided by the [Prometheus Go client library](https://github.com/prometheus/client_golang), i.e.: - `process_...` - `go_...` - `promhttp_metric_handler_requests_...` -- A number of metrics specific to the Pushgateway, as documented by the example scrape below. +- A number of metrics specific to the Pushgateway, as documented by the example + scrape below. ``` # HELP pushgateway_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which pushgateway was built. @@ -385,7 +393,23 @@ pushgateway_http_requests_total{code="202",handler="push",method="post"} 6 pushgateway_http_requests_total{code="400",handler="push",method="post"} 2 ``` - + +### Alerting on failed pushes + +It is in general a good idea to alert on `push_time_seconds` being much older +than expected. This catches both failed pushes and pushers that are down +completely. + +To detect failed pushes much earlier, alert on `push_failure_time_seconds > +push_time_seconds`. + +Pushes can also fail because they are malformed. In this case, they never reach +any metric group and therefore won't set any `push_failure_time_seconds` +metrics. Those pushes are still counted in +`pushgateway_http_requests_total{code="400",handler="push"}`. You can alert on +the `rate` of this metric, but you have to inspect the logs to identify the +offending pusher. + ## Development The normal binary embeds the web files in the `resources` directory. diff --git a/asset/assets_vfsdata.go b/asset/assets_vfsdata.go index 7a6c604d..83976c68 100644 --- a/asset/assets_vfsdata.go +++ b/asset/assets_vfsdata.go @@ -21,7 +21,7 @@ var Assets = func() http.FileSystem { fs := vfsgen۰FS{ "/": &vfsgen۰DirInfo{ name: "/", - modTime: time.Date(2019, 7, 25, 21, 18, 47, 838503628, time.UTC), + modTime: time.Date(2019, 9, 19, 20, 47, 26, 383590026, time.UTC), }, "/static": &vfsgen۰DirInfo{ name: "static", @@ -408,10 +408,10 @@ var Assets = func() http.FileSystem { }, "/template.html": &vfsgen۰CompressedFileInfo{ name: "template.html", - modTime: time.Date(2019, 7, 25, 21, 18, 47, 822503584, time.UTC), - uncompressedSize: 8560, + modTime: time.Date(2019, 9, 19, 20, 47, 26, 371590056, time.UTC), + uncompressedSize: 8687, - compressedContent: 
[]byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xd4\x5a\xeb\x73\xdb\x36\x12\xff\x2c\xfd\x15\x5b\xd6\xd3\xa4\x1d\x93\x4c\xd2\xf4\xe6\x26\x91\x74\xe3\x38\x8f\x7a\xae\x75\x72\x91\xd3\x4e\x3f\xdd\x40\xc4\x92\x44\x02\x02\x2c\x00\x4a\xd6\xa8\xfa\xdf\x6f\x00\xf0\x29\xeb\xe1\x3e\xef\xee\x4b\x4c\x00\x8b\x7d\xef\x0f\xc0\x2a\x93\xcf\x5e\xbe\xbd\xbc\xf9\xe9\xdd\x2b\xc8\x4d\xc1\x67\xe3\xcd\x26\xfe\x6a\x7c\x29\xcb\xb5\x62\x59\x6e\xe0\xc9\xa3\xc7\x4f\xe1\x26\x47\x78\xa7\x64\x81\x26\xc7\x4a\xc3\x45\x65\x72\xa9\xf4\xf8\x3b\x96\xa0\xd0\x48\xa1\x12\x14\x15\x98\x1c\xe1\xa2\x24\x49\x8e\x50\xaf\x9c\xc3\x0f\xa8\x34\x93\x02\x9e\x44\x8f\xe0\xa1\x25\x08\xea\xa5\xe0\xcb\xe7\xe3\xb5\xac\xa0\x20\x6b\x10\xd2\x40\xa5\x11\x4c\xce\x34\xa4\x8c\x23\xe0\x6d\x82\xa5\x01\x26\x20\x91\x45\xc9\x19\x11\x09\xc2\x8a\x99\xdc\x09\xa9\x59\x44\xe3\x9f\x6a\x06\x72\x61\x08\x13\x40\x20\x91\xe5\x1a\x64\xda\xa7\x02\x62\xc6\xe3\xdc\x98\xf2\x59\x1c\xaf\x56\xab\x88\x38\x0d\x23\xa9\xb2\x98\x7b\x0a\x1d\x7f\x77\x75\xf9\xea\x7a\xfe\x2a\x7c\x12\x3d\x1a\x8f\x3f\x08\x8e\x5a\x83\xc2\x9f\x2b\xa6\x90\xc2\x62\x0d\xa4\x2c\x39\x4b\xc8\x82\x23\x70\xb2\x02\xa9\x80\x64\x0a\x91\x82\x91\x56\xc7\x95\x62\x86\x89\xec\x1c\xb4\x4c\xcd\x8a\x28\x1c\x53\xa6\x8d\x62\x8b\xca\x0c\x9c\xd3\x68\xc4\x34\xf4\x09\xa4\x00\x22\x20\xb8\x98\xc3\xd5\x3c\x80\x17\x17\xf3\xab\xf9\xf9\xf8\xc7\xab\x9b\x6f\xdf\x7e\xb8\x81\x1f\x2f\xde\xbf\xbf\xb8\xbe\xb9\x7a\x35\x87\xb7\xef\xe1\xf2\xed\xf5\xcb\xab\x9b\xab\xb7\xd7\x73\x78\xfb\x1a\x2e\xae\x7f\x82\x7f\x5e\x5d\xbf\x3c\x07\x64\x26\x47\x05\x78\x5b\x2a\xab\xbb\x54\xc0\xac\xdb\x90\x46\xe3\x39\xe2\x40\x78\x2a\xbd\x32\xba\xc4\x84\xa5\x2c\x01\x4e\x44\x56\x91\x0c\x21\x93\x4b\x54\x82\x89\x0c\x4a\x54\x05\xd3\x36\x70\x1a\x88\xa0\x63\xce\x0a\x66\x88\x71\xe3\x3b\xe6\x44\xe3\xaf\xe2\xed\x76\x3c\xb1\xe9\xe3\x98\x4d\x03\x14\xc1\x6c\x0c\x30\xc9\x91\x50\xfb\x01\x30\x29\xd0\x10\xb0\x61\x08\xad\x5f\x97\xd3\xe0\x52\x0a\x83\xc2\x84\x37\xeb\x12\x03\x48\xfc\x68\x1a\x18\xbc\x35\xb1\x65\xf5\x1c\x92\x9c\x28\x8d\x66\x5a\x99\x34\xfc\x7b\xd0\xe7\x23\x48\x81\xd3\x40\xc9\x85\x34\xba\xb7\x57\x48\x26\x28\xde\x9e\x0b\x99\x4a\xce\xe5\xaa\xd9\x63\x98\xe1\x38\xeb\x25\xf0\xbb\x4a\xe7\x19\x31\xb8\x22\xeb\x49\xec\x57\xc7\x9e\x94\x33\xf1\x09\x14\xf2\x69\xa0\x73\xa9\x4c\x52\x19\x60\x89\x14\x01\xe4\x0a\xd3\x69\xb0\xd9\x44\xef\x88\xc9\xdf\x29\x4c\xd9\xed\x76\x1b\x6b\xeb\x95\x24\x4e\xc9\xd2\x52\x45\x2c\x91\xff\x58\x4e\x37\x9b\xe8\x45\xc5\x38\xbd\x12\xa9\x8c\x14\x2e\x99\x75\xe4\x76\x1b\x34\x32\x74\xa2\x58\x69\x40\xab\xe4\x20\xc3\x8f\x3f\x57\xa8\xd6\xe1\xd7\xd1\xd3\xe8\x71\x54\x30\x11\x7d\xd4\xc7\x18\x4f\x62\xcf\x73\x76\x6f\x01\x0b\x29\x8d\x36\x8a\x94\xe1\xd3\xe8\xeb\xe8\x71\x68\xf3\x31\xfe\xa8\xbb\xf9\x3f\x45\x6a\x5a\x89\xc4\x65\xd1\xbd\x39\xf7\xa2\x62\xd6\x25\xd6\xf9\x91\x68\x1d\xd4\x51\x32\x6b\x8e\x3a\x47\x34\x27\x42\xb4\xd7\xe0\x44\xef\x5a\x9c\xe8\xa3\x8a\xfd\x61\xea\x94\x6d\x32\xfe\x65\x22\x5b\x43\x9f\x86\x19\x5f\x97\xb9\xcd\x58\x3d\x74\x41\x6f\xe1\x9e\xde\x98\xc4\xbe\xc6\xed\xe7\x42\xd2\xb5\x9b\x13\x64\x09\x09\x27\x5a\x4f\x03\x41\x96\x0b\xa2\x20\x65\xb7\x48\x43\x23\x4b\xf0\x13\x21\xde\x96\x44\xd0\x50\x17\xcd\x04\x25\xea\x13\x2c\x32\xf7\xb7\x31\x9a\xb2\x96\x8f\xad\x71\xc2\x04\xaa\x30\xe5\x15\xa3\x35\x85\x15\x5a\x19\x23\x45\xed\x1a\x3f\x08\x86\xc2\x43\x23\xb3\x8c\xa3\x0a\x80\x12\x43\xea\x91\xe5\xc8\x39\x29\x35\x36\xd3\x44\x65\x68\xa6\xc1\xe7\x82\x2c\xc3\x1a\x51\x02\x20\x8a\x91\x5a\x57\xa4\xd3\x20\x25\xdc\x6e\x70\xb3\x96\x46\x49\xee\xc5\xec\xec\xe0\x64\x61\x43\x73\xe3\x44\x59\x0b\x59\xe6\xd0\x33\x98\x8d\x47\x13\x5d\x12\xb1\x5f\xc3\xd0\x41\x8d\xcd\xfe\x92\x88\xd6\xc2\xd8\x5b\xd5\x8e\xc9\xc
e\xe6\x85\x22\x82\x36\xb1\xff\x3c\x98\x0d\xc0\x8d\xb4\xdb\x3e\x0b\x43\xb8\x94\x9c\x63\x62\x1c\x7a\xdb\x20\xd9\xac\xd2\xe7\xf6\x48\x28\xf4\xb9\x45\x7a\x90\xee\x1c\xa9\xad\xf1\x67\x85\xd5\xcd\x1e\x0a\x61\xd8\xf2\xb2\x81\x61\x74\xc7\xf2\xa1\x56\x8d\x7b\xa1\xf5\xb3\xb5\xbd\xe2\x3b\x64\x82\x2c\xdb\x58\xd6\x89\xde\xa3\x08\x99\xc1\x02\x48\x62\xd8\x12\x03\x90\x22\xe1\x2c\xf9\x34\x0d\xca\xce\xc2\x48\xaf\x98\x49\xf2\x1b\xf9\x3d\x1a\xc5\x12\xfd\xf0\xcb\xc0\x69\x56\xf8\x61\xc8\x99\x95\x7b\xc7\x6f\xa1\xb5\xbc\xe7\xb3\x7a\xb7\xf3\xd7\xc8\xfa\x9c\xb3\x13\x5a\x9d\x50\x67\x6e\x88\xa9\x5a\x6d\xb4\x1b\xdd\x57\x19\xbf\xf7\x57\xe9\xd2\x27\xe8\xb8\xc3\x5d\xf6\xf6\x04\xd6\xcf\xe2\x38\x63\x26\xaf\x16\x51\x22\x8b\x1e\x14\xc5\x3d\x4b\xe2\x05\x97\x8b\xb8\x20\xda\xa0\x8a\xdf\xbf\xba\x78\xf9\xfd\xab\xa8\xa0\x01\x34\x75\xf2\xef\x05\x27\xe2\x53\x30\xfb\x16\x79\xd9\x4b\x33\x9f\xb1\x56\xe3\xd1\x24\xae\x78\x97\xc4\x94\x2d\xeb\xaa\x6e\x3e\x27\xb1\x20\xee\x63\x7c\xbc\xd4\x07\xf1\xa4\xac\xc9\x97\xcd\xe6\xcc\x16\x2e\x3c\x9b\x42\xb4\xdd\xde\x01\x0c\x92\x24\x52\x51\x5b\x73\x6e\xff\x47\xb9\x08\xbb\xa9\x46\xad\xcd\x46\x11\x91\x21\x44\x3e\xfe\x6f\x94\xac\x4a\x5d\x33\x73\x12\xb2\x4b\x59\x09\x63\x65\x38\x61\x91\x1b\xb6\x04\x03\xad\x89\xa2\x2e\xc5\x77\xe6\x42\x8b\x8f\x16\x7c\xac\x16\x99\x15\x10\x96\x44\x20\x0f\x5b\xee\x0e\x49\x6d\xa0\xf3\x27\xcd\xc6\x62\x11\x3e\x6a\x53\xa5\x46\xb8\x7a\x69\x61\x04\x2c\x8c\x08\x35\x26\x52\x50\xa2\xd6\x6d\x81\xd9\xe0\x0c\x50\xf0\x5e\x70\xf7\x71\xa0\xc8\xfd\x00\xef\xe3\x5d\xe5\xed\xf9\xdf\xc3\x35\x2f\xd5\xe1\x19\xb4\x67\x4a\xf7\xd5\xa2\x43\x48\xe5\xaa\x87\x78\xa3\xda\xed\x45\x17\x8e\x26\xbe\xa3\x7e\xbc\xce\xd8\x39\x9c\x71\xe1\xd6\xe6\x52\x19\xa4\xdf\x59\xc4\xd5\x0d\xdd\x8e\x36\x0b\x42\x33\x84\xcd\x86\xa5\x80\x3f\xbb\x8d\x36\x1f\x82\xed\xd6\x2d\x84\x2b\xe2\x6e\xbd\x9b\x0d\x72\x7b\x39\xef\x88\x98\xd0\xc6\xbe\x3c\x5a\xca\x52\xb1\x82\xa8\xb5\xa7\x6c\x26\x99\x48\xe5\x66\x83\x82\x5a\x5f\x6c\x36\x67\x5c\x6c\xb7\xf6\x10\x76\x77\x51\xe8\xdb\x12\x79\x2d\xc1\x91\x04\x3b\x46\x3b\x06\xb5\x2b\x1b\xc8\xf7\xc6\x1c\xcd\x83\x5b\xed\xfe\x50\xeb\x17\x05\x29\x97\xc4\x84\xee\xe5\x76\x08\xa0\x72\xb9\x7a\x89\xfc\x7b\x49\x09\x7f\xb8\xb9\x87\x47\x9d\xdb\xce\xd8\x76\x7b\xde\x28\xf9\xa0\x36\xf2\xc1\x33\x78\x70\xd2\xcc\x07\xf5\x26\xb0\xfb\xff\x6c\x71\xf0\x0b\x2c\x88\xc6\xbf\x3d\x1d\xca\x7d\x70\xa0\xee\x1e\x9c\x03\x2e\x51\x98\x2f\x83\xd9\x4b\xe4\x68\x10\x1c\xc3\xee\xc4\x75\xf0\x9b\x3f\x71\x60\xe6\x50\x6b\xd4\x1e\x7d\x3b\x75\xd3\x62\x57\x53\x65\xdd\x3d\x80\x23\x5d\xac\x0f\xd7\xbe\xaf\xc7\x92\x28\xf7\x84\xf9\x7c\x17\xa8\x46\x77\x61\x26\xb4\x17\xad\xb6\xec\x0e\x63\x9e\xf7\x51\xc7\x6d\x5f\xd5\xf6\x6a\xca\x3e\xaa\xce\xe1\xcc\x14\xa9\x0b\x4b\x7d\x24\x42\x57\x53\xb6\x30\x0f\xe0\xe1\xe8\x30\x20\x1e\x47\xc4\x5a\xc7\xd6\x2d\x45\x4f\xbf\xfd\x98\xf8\x57\x80\x62\x31\x50\xe5\x7e\xa0\xb8\xb3\xa9\xd1\xf4\x0f\x02\xc6\x2e\x04\x36\x4c\xd6\xe3\x87\x81\xce\xc3\x12\x77\x20\x60\x01\xc9\x14\x69\xf4\x06\x8d\x0f\xe8\x6b\x52\x30\xbe\xb6\x63\x7b\x72\x6f\xb7\xbb\x12\x0e\xf2\xd3\x55\x92\xa0\xd6\xc7\x38\xda\xa7\xfc\x5d\x8e\x9c\x68\x03\x16\x82\x90\x3e\x83\x7a\xf3\x0d\x2b\x50\x1b\x52\x94\xf0\x0b\x18\x56\xe0\x6b\xa9\x0a\x62\xa0\xb5\xab\x57\x82\xbe\x06\x9f\xba\x4c\xaa\x8b\xb0\xab\xc2\x9d\x40\x9d\xae\xc2\x83\xf9\xb6\x53\x86\x47\x6b\x07\x6a\xb5\x0e\xd5\xa5\x37\xc1\xb8\x7e\x51\x13\x77\x37\x70\xff\x86\xda\x28\x56\x22\xad\x47\x0b\xa9\x28\x2a\xa4\xfd\x8c\x31\xfe\x51\x35\x1a\x8d\x26\x46\xb9\xbf\x6e\x72\xe6\x91\x6e\x12\x9b\xbc\x37\xf9\x03\xe1\x15\xb6\x73\x93\xd8\xef\x68\xee\x5d\x2d\xab\x86\xb5\x7f\xa5\x8d\x46\xa3\xb6\xf0\xf7\x45\xd3\x0f\x5c\x3c\x06\x3a\xd0\xfa\xab\x77\x77\x72\x
4a\x79\xca\x13\x27\xaf\x95\x72\x4d\x0a\x3c\x7d\xfc\x76\x94\xbf\xe9\x0c\x8e\xae\x5d\x91\xb8\xa7\xf0\x1b\x34\xce\x43\xfd\x13\x77\x34\x3c\x73\x7d\x8e\xb5\xb6\x0d\xad\x74\x5d\xc7\xe8\x0d\xa9\x32\xec\x8c\xdc\x6c\x96\x96\x27\xf4\xb8\xf7\xb8\x3a\xc5\x76\x39\xb8\xe4\x41\xf5\xbb\x78\x7c\x10\x16\xcc\xe8\xef\xe2\x31\xaf\x0a\xeb\xbf\x5e\xc0\x7e\x4b\xa2\xee\x1c\x1e\xd1\xbf\x2a\x22\x0c\xe3\x3d\x79\x96\xb1\xcf\x9b\xd1\xc4\xe4\xa0\x13\x59\xba\x9e\xdd\x2a\x98\x35\xc4\xe0\xc3\xd3\xed\x6d\xb3\x78\x64\x63\xb0\xc7\xb8\x5e\x94\xea\xf4\x56\x03\x6d\x7a\x01\x3d\xaa\xc1\x9c\x14\x25\x47\x70\x31\xd9\x15\x6a\xc5\xf9\xf5\xba\xde\x8f\x0b\x3d\x25\x63\x5e\x15\x87\xcd\xf2\x34\xf3\xaa\x38\x2e\x65\x12\xbb\x10\xcc\x4e\xc5\xf6\x5b\xa6\x8d\xcc\x14\x29\xfe\xe0\xe8\xbe\xa8\x92\x4f\x68\x7e\x8d\x67\x9d\x89\x1a\xbe\xe0\xf8\x7c\x90\xa4\x1f\xca\x12\xd5\x0b\x59\xd9\x38\xed\x73\xfc\x65\x55\x54\x9c\xd8\xf7\xfe\xbd\x9c\x7f\xff\x88\xdf\x48\x43\x38\xe8\xff\xfb\xb8\x3b\x73\xed\xf7\x3e\x10\xfb\x55\xc3\x56\x70\x2b\xb2\x5b\x6e\x75\x69\x4e\x8b\x1d\x6d\xda\x67\x7c\x77\x28\x37\x5b\xfa\xa3\x9d\x57\x4d\xb3\xd4\x5e\xa6\xbb\xae\xc0\xa0\x45\x30\x34\xb4\xd7\x30\x68\x3e\x4e\x75\x0b\xea\x7e\x0b\x65\xcb\x00\x5c\x7f\x74\x1a\x50\xa6\x4b\x4e\xd6\xcf\x40\x48\x81\xcf\x9b\x0e\x63\xfe\x64\xf6\xbe\x12\xf6\x06\x02\x57\x22\x75\x97\x10\x26\x85\xbf\xf3\x1f\xaf\x1e\x7b\xd1\xf4\x3f\x7e\x0d\xeb\x67\x58\x5c\x5d\x9f\xd2\x34\xdd\xd1\x5e\xf6\x00\x74\xc3\x7c\x36\x37\xc4\x3e\x84\x7c\xd2\xf4\x97\x5c\x72\xbe\x60\xca\xe4\x4d\xb6\xb4\x6b\x71\xc7\xa6\x0d\xd6\x20\x54\xad\x99\xae\x83\xfb\x17\x1b\xd9\xde\x30\x3e\xe1\xfa\x1c\xce\x7c\xfa\xdb\xb7\x45\xdb\x4e\x6e\x5b\x29\xfb\x3c\x32\xa8\xaa\xcd\xc6\x72\x69\x40\x63\xd7\x3d\x9e\xf7\x31\xf7\x0c\x73\xea\xa4\xbb\x5c\x30\xaa\x12\x5e\x73\x92\xe9\xff\xa6\xab\x9c\x02\xff\x6b\x6e\x6a\x6b\x71\x5c\x37\x96\x29\x72\x28\x24\x25\xbc\xee\x12\xb7\x17\x74\x8a\x3c\x74\x0b\xed\xe5\xdc\x93\xa5\x84\x62\x60\x7d\xe3\xde\xf3\xd3\x20\x7c\x1c\x80\x92\xbe\x50\x09\x97\xd9\x9e\xab\xbb\x65\xd5\x3c\x1d\xdd\x62\xce\x28\x45\x31\x0d\x8c\xaa\x70\xcf\x4f\x06\x4e\x50\xe8\xd9\x79\xe5\x42\x5d\x04\xb3\x3d\xbd\x3b\xbf\xd8\x34\xb2\x77\x9a\x78\x7e\xb1\x16\x5c\x37\xea\xbe\x19\x2e\xba\x5f\x10\xeb\x1e\x02\x93\x02\x2e\xa5\x48\x59\x57\x66\xdf\x0c\x02\x71\xec\x07\x8b\x84\xcb\xf6\x31\x4a\x99\x2e\x58\x2b\x63\xf8\xc3\xc2\xa5\xa3\xdb\x69\xfa\xba\x7b\xf7\x1e\xc7\x7c\x61\xe1\x4d\x3f\x1f\xfe\xaa\x00\xc3\x47\xd6\xa0\xbd\x31\xb0\xad\xeb\x35\x4c\xca\x61\x44\xc3\x42\x67\xc1\xcc\x85\xff\x46\xc2\x02\x21\x65\x36\x54\x40\xd7\x82\x14\x2c\x21\x9c\xaf\x23\x9b\x0e\x93\xb8\x3c\x2a\x21\x95\xd2\xb4\xae\x3d\xf1\xae\xdf\xef\x9b\xd9\xa5\x7d\x21\xf0\x61\xdf\x66\x3f\xa7\xfa\xf5\xd0\xeb\x99\x1d\xe8\x93\x51\xd7\x10\x72\xfd\xa0\x87\x6d\x7f\xe8\xae\xc7\x1a\x4f\xee\x6b\x6f\xfb\x0a\x99\xc4\xbe\x80\x26\xb1\xff\xff\x15\xff\x09\x00\x00\xff\xff\x2c\xff\xfe\xe7\x70\x21\x00\x00"), + compressedContent: 
[]byte("\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff\xd4\x5a\xeb\x73\xdb\x36\x12\xff\x2c\xfd\x15\x5b\xd6\xd3\xa4\x1d\x93\x4c\xd2\xf4\xe6\x26\x91\x74\xe3\x38\x8f\x7a\xae\x75\x72\x91\xd2\x4e\x3f\xdd\x40\xc4\x92\x44\x02\x02\x2c\x00\x4a\xd6\xa8\xfa\xdf\x6f\x00\xf0\x29\x4b\xb2\xfb\xbc\xbb\x2f\x31\x01\x2c\xf6\xbd\x3f\x00\xab\x4c\x3e\x7b\xf9\xf6\x72\xf1\xd3\xbb\x57\x90\x9b\x82\xcf\xc6\xdb\x6d\xfc\xd5\xf8\x52\x96\x1b\xc5\xb2\xdc\xc0\x93\x47\x8f\x9f\xc2\x22\x47\x78\xa7\x64\x81\x26\xc7\x4a\xc3\x45\x65\x72\xa9\xf4\xf8\x3b\x96\xa0\xd0\x48\xa1\x12\x14\x15\x98\x1c\xe1\xa2\x24\x49\x8e\x50\xaf\x9c\xc3\x0f\xa8\x34\x93\x02\x9e\x44\x8f\xe0\xa1\x25\x08\xea\xa5\xe0\xcb\xe7\xe3\x8d\xac\xa0\x20\x1b\x10\xd2\x40\xa5\x11\x4c\xce\x34\xa4\x8c\x23\xe0\x4d\x82\xa5\x01\x26\x20\x91\x45\xc9\x19\x11\x09\xc2\x9a\x99\xdc\x09\xa9\x59\x44\xe3\x9f\x6a\x06\x72\x69\x08\x13\x40\x20\x91\xe5\x06\x64\xda\xa7\x02\x62\xc6\xe3\xdc\x98\xf2\x59\x1c\xaf\xd7\xeb\x88\x38\x0d\x23\xa9\xb2\x98\x7b\x0a\x1d\x7f\x77\x75\xf9\xea\x7a\xfe\x2a\x7c\x12\x3d\x1a\x8f\x3f\x08\x8e\x5a\x83\xc2\x9f\x2b\xa6\x90\xc2\x72\x03\xa4\x2c\x39\x4b\xc8\x92\x23\x70\xb2\x06\xa9\x80\x64\x0a\x91\x82\x91\x56\xc7\xb5\x62\x86\x89\xec\x1c\xb4\x4c\xcd\x9a\x28\x1c\x53\xa6\x8d\x62\xcb\xca\x0c\x9c\xd3\x68\xc4\x34\xf4\x09\xa4\x00\x22\x20\xb8\x98\xc3\xd5\x3c\x80\x17\x17\xf3\xab\xf9\xf9\xf8\xc7\xab\xc5\xb7\x6f\x3f\x2c\xe0\xc7\x8b\xf7\xef\x2f\xae\x17\x57\xaf\xe6\xf0\xf6\x3d\x5c\xbe\xbd\x7e\x79\xb5\xb8\x7a\x7b\x3d\x87\xb7\xaf\xe1\xe2\xfa\x27\xf8\xe7\xd5\xf5\xcb\x73\x40\x66\x72\x54\x80\x37\xa5\xb2\xba\x4b\x05\xcc\xba\x0d\x69\x34\x9e\x23\x0e\x84\xa7\xd2\x2b\xa3\x4b\x4c\x58\xca\x12\xe0\x44\x64\x15\xc9\x10\x32\xb9\x42\x25\x98\xc8\xa0\x44\x55\x30\x6d\x03\xa7\x81\x08\x3a\xe6\xac\x60\x86\x18\x37\xbe\x65\x4e\x34\xfe\x2a\xde\xed\xc6\x13\x9b\x3e\x8e\xd9\x34\x40\x11\xcc\xc6\x00\x93\x1c\x09\xb5\x1f\x00\x93\x02\x0d\x01\x1b\x86\xd0\xfa\x75\x35\x0d\x2e\xa5\x30\x28\x4c\xb8\xd8\x94\x18\x40\xe2\x47\xd3\xc0\xe0\x8d\x89\x2d\xab\xe7\x90\xe4\x44\x69\x34\xd3\xca\xa4\xe1\xdf\x83\x3e\x1f\x41\x0a\x9c\x06\x4a\x2e\xa5\xd1\xbd\xbd\x42\x32\x41\xf1\xe6\x5c\xc8\x54\x72\x2e\xd7\xcd\x1e\xc3\x0c\xc7\x59\x2f\x81\xdf\x55\x3a\xcf\x88\xc1\x35\xd9\x4c\x62\xbf\x3a\xf6\xa4\x9c\x89\x4f\xa0\x90\x4f\x03\x9d\x4b\x65\x92\xca\x00\x4b\xa4\x08\x20\x57\x98\x4e\x83\xed\x36\x7a\x47\x4c\xfe\x4e\x61\xca\x6e\x76\xbb\x58\x5b\xaf\x24\x71\x4a\x56\x96\x2a\x62\x89\xfc\xc7\x6a\xba\xdd\x46\x2f\x2a\xc6\xe9\x95\x48\x65\xa4\x70\xc5\xac\x23\x77\xbb\xa0\x91\xa1\x13\xc5\x4a\x03\x5a\x25\x47\x19\x7e\xfc\xb9\x42\xb5\x09\xbf\x8e\x9e\x46\x8f\xa3\x82\x89\xe8\xa3\x3e\xc5\x78\x12\x7b\x9e\xb3\x7b\x0b\x58\x4a\x69\xb4\x51\xa4\x0c\x9f\x46\x5f\x47\x8f\x43\x9b\x8f\xf1\x47\xdd\xcd\xff\x29\x52\xd3\x4a\x24\x2e\x8b\xee\xcd\xb9\x17\x15\xb3\x29\xb1\xce\x8f\x44\xeb\xa0\x8e\x92\xd9\x70\xd4\x39\xa2\xb9\x23\x44\x07\x0d\x4e\xf4\xbe\xc5\x89\x3e\xa9\xd8\x1f\xa6\x4e\xd9\x26\xe3\x5f\x26\xb2\x35\xf4\x69\x98\xf1\x4d\x99\xdb\x8c\xd5\x43\x17\xf4\x16\xee\xe9\x8d\x49\xec\x6b\xdc\x7e\x2e\x25\xdd\xb8\x39\x41\x56\x90\x70\xa2\xf5\x34\x10\x64\xb5\x24\x0a\x52\x76\x83\x34\x34\xb2\x04\x3f\x11\xe2\x4d\x49\x04\x0d\x75\xd1\x4c\x50\xa2\x3e\xc1\x32\x73\x7f\x1b\xa3\x29\x6b\xf9\xd8\x1a\x27\x4c\xa0\x0a\x53\x5e\x31\x5a\x53\x58\xa1\x95\x31\x52\xd4\xae\xf1\x83\x60\x28\x3c\x34\x32\xcb\x38\xaa\x00\x28\x31\xa4\x1e\x59\x8e\x9c\x93\x52\x63\x33\x4d\x54\x86\x66\x1a\x7c\x2e\xc8\x2a\xac\x11\x25\x00\xa2\x18\xa9\x75\x45\x3a\x0d\x52\xc2\xed\x06\x37\x6b\x69\x94\xe4\x5e\xcc\xde\x0e\x4e\x96\x36\x34\x0b\x27\xca\x5a\xc8\x32\x87\x9e\xc1\x6c\x3c\x9a\xe8\x92\x88\xc3\x1a\x86\x0e\x6a\x6c\xf6\x97\x44\xb4\x16\xc6\xde\xaa\x76\x4c\xf6\x3
6\x2f\x15\x11\xb4\x89\xfd\xe7\xc1\x6c\x00\x6e\xa4\xdd\xf6\x59\x18\xc2\xa5\xe4\x1c\x13\xe3\xd0\xdb\x06\xc9\x66\x95\x3e\xb7\x47\x42\xa1\xcf\x2d\xd2\x83\x74\xe7\x48\x6d\x8d\x3f\x2b\xac\x6e\xf6\x50\x08\xc3\x96\x97\x0d\x0c\xa3\x7b\x96\x0f\xb5\x6a\xdc\x0b\xad\x9f\xad\xed\x15\xdf\x23\x13\x64\xd5\xc6\xb2\x4e\xf4\x1e\x45\xc8\x0c\x16\x40\x12\xc3\x56\x18\x80\x14\x09\x67\xc9\xa7\x69\x50\x76\x16\x46\x7a\xcd\x4c\x92\x2f\xe4\xf7\x68\x14\x4b\xf4\xc3\x2f\x03\xa7\x59\xe1\x87\x21\x67\x56\xee\x2d\xbf\x85\xd6\xf2\x9e\xcf\xea\xdd\xce\x5f\x23\xeb\x73\xce\xee\xd0\xea\x0e\x75\xe6\x86\x98\xaa\xd5\x46\xbb\xd1\x7d\x95\xf1\x7b\x7f\x95\x2e\x7d\x82\x8e\x3b\xdc\x66\x6f\x4f\x60\xfd\x2c\x8e\x33\x66\xf2\x6a\x19\x25\xb2\xe8\x41\x51\xdc\xb3\x24\x5e\x72\xb9\x8c\x0b\xa2\x0d\xaa\xf8\xfd\xab\x8b\x97\xdf\xbf\x8a\x0a\x1a\x40\x53\x27\xff\x5e\x72\x22\x3e\x05\xb3\x6f\x91\x97\xbd\x34\xf3\x19\x6b\x35\x1e\x4d\xe2\x8a\x77\x49\x4c\xd9\xaa\xae\xea\xe6\x73\x12\x0b\xe2\x3e\xc6\xa7\x4b\x7d\x10\x4f\xca\x9a\x7c\xd9\x6e\xcf\x6c\xe1\xc2\xb3\x29\x44\xbb\xdd\x2d\xc0\x20\x49\x22\x15\xb5\x35\xe7\xf6\x7f\x94\xcb\xb0\x9b\x6a\xd4\xda\x6e\x15\x11\x19\x42\xe4\xe3\xff\x46\xc9\xaa\xd4\x35\x33\x27\x21\xbb\x94\x95\x30\x56\x86\x13\x16\xb9\x61\x4b\x30\xd0\x9a\x28\xea\x52\x7c\x6f\x2e\xb4\xf8\x68\xc1\xc7\x6a\x91\x59\x01\x61\x49\x04\xf2\xb0\xe5\xee\x90\xd4\x06\x3a\x7f\xd2\x6c\x2c\x96\xe1\xa3\x36\x55\x6a\x84\xab\x97\x96\x46\xc0\xd2\x88\x50\x63\x22\x05\x25\x6a\xd3\x16\x98\x0d\xce\x00\x05\xef\x05\x77\x1f\x07\x8a\xdc\x0f\xf0\x3e\xde\x56\xde\x9e\xff\x3d\x5c\xf3\x52\x1d\x9e\x41\x7b\xa6\x74\x5f\x2d\x3a\x84\x54\xae\x7b\x88\x37\xaa\xdd\x5e\x74\xe1\x68\xe2\x3b\xea\xc7\xeb\x8c\x9d\xc3\x19\x17\x6e\x6d\x2e\x95\x41\xfa\x9d\x45\x5c\xdd\xd0\xed\x69\xb3\x24\x34\x43\xd8\x6e\x59\x0a\xf8\xb3\xdb\x68\xf3\x21\xd8\xed\xdc\x42\xb8\x26\xee\xd6\xbb\xdd\x22\xb7\x97\xf3\x8e\x88\x09\x6d\xec\xcb\xa3\xa5\x2c\x15\x2b\x88\xda\x78\xca\x66\x92\x89\x54\x6e\xb7\x28\xa8\xf5\xc5\x76\x7b\xc6\xc5\x6e\x67\x0f\x61\x77\x17\x85\xbe\x2d\x91\xd7\x12\x1c\x49\xb0\x67\xb4\x63\x50\xbb\xb2\x85\xfc\x7a\x8d\xa5\xee\x8d\xb4\xc7\x4b\x1b\x0b\xf5\xf3\x2a\x49\x50\xeb\xdd\xee\x80\xc9\xb5\xd6\x8c\xf3\xfa\x93\x5a\xf7\xa9\x00\x94\xb4\x39\x41\x38\x2a\x13\xcc\x2c\x27\xb0\xa5\x0f\x29\x61\x1c\xe9\x67\xb5\x66\x43\x9d\x0e\xa7\xe1\x8d\x76\x7f\x3c\x5f\x48\xb9\x24\x26\x74\x0f\xc7\x63\xf8\x98\xcb\xf5\x4b\xe4\xdf\x4b\x4a\xf8\xc3\xed\x3d\x02\xea\x8c\x3f\x63\xbb\xdd\x79\xe3\xa3\x07\xb5\x8f\x1f\x3c\x83\x07\x77\x7a\xf9\x41\xbd\x09\xec\xfe\x3f\x5b\x1c\xfc\x02\x4b\xa2\xf1\x6f\x4f\x87\x72\x1f\x1c\x29\xfb\x07\xe7\x80\x2b\x14\xe6\xcb\x60\xf6\x12\x39\x1a\x04\xc7\x70\x10\xfd\x49\x9c\x3f\x71\x58\xea\x40\x73\xd4\x9e\xbc\x7b\x65\xdb\x42\x67\x53\xe4\xdd\x35\x84\x23\x5d\x6e\x8e\x43\x8f\x87\x83\x92\x28\xf7\x82\xfa\x7c\x1f\x27\x47\xb7\x51\x2e\xb4\xf7\xbc\xb6\xea\x8f\x43\xae\xf7\x51\xc7\xed\x10\x68\xf4\x4a\xda\xbe\xe9\xce\xe1\xcc\x14\xa9\x0b\x4b\x7d\x22\x43\x57\xd2\x16\x17\x8e\xc0\xf1\xe8\x38\x1e\x9f\x06\xe4\x5a\xc7\xd6\x2d\x45\x4f\xbf\xc3\x90\xfc\x57\x60\x72\x31\x50\xe5\x7e\x98\xbc\xb7\xa9\xd1\xf4\x0f\xc2\xe5\x2e\x04\x36\x4c\xd6\xe3\xc7\x71\xd6\x23\x0d\x77\x20\x60\xf1\xd0\x14\x69\xf4\x06\x8d\x0f\xe8\x6b\x52\x30\xbe\xb1\x63\x7b\x71\xd8\xed\xf6\x25\x1c\xe5\xa7\x3d\xcc\x9d\xe2\xb8\xd8\x94\x78\x9b\x23\x6f\xd0\x0d\xe9\x33\xa8\x37\x2f\x58\x81\xda\x90\xa2\x84\x5f\xc0\xb0\x02\x5f\x4b\x55\x10\x03\xad\x5d\xbd\x12\xf4\x35\xf8\xd4\x65\x52\x5d\x84\x5d\x15\xee\x05\xea\xee\x2a\x3c\x9a\x6f\x7b\x65\x78\xb2\x76\xa0\x56\xeb\x58\x5d\x7a\x13\x8c\x6b\x57\x35\x71\x77\x03\xf7\x6f\xa8\x8d\x62\x25\xd2\x7a\xb4\x94\x8a\xa2\x42\xda\xcf\x18\xe3\xdf\x74\xa3\xd1\x68\x62\x94\xfb\xeb\x26\x
67\x1e\xe9\x26\xb1\xc9\x7b\x93\x3f\x10\x5e\x61\x3b\x37\x89\xfd\x8e\xe6\xda\xd7\xb2\x6a\x58\xfb\x47\xe2\x68\x34\x6a\x0b\xff\x50\x34\xfd\xc0\xc5\x63\xa0\x03\xad\xbf\x7a\x57\x37\xa7\x94\xa7\xbc\xe3\xe0\xb7\x52\xae\x49\x81\x77\x9f\xfe\x1d\xe5\x6f\xba\x02\x44\xd7\xae\x48\xdc\x4b\xfc\x0d\x1a\xe7\xa1\xfe\x81\x3f\x1a\x1e\xf9\x3e\xc7\x5a\xdb\x86\x56\xba\xa6\x67\xf4\x86\x54\x19\x76\x46\x6e\xb7\x2b\xcb\x13\x7a\xdc\x7b\x5c\x9d\x62\xfb\x1c\x5c\xf2\xa0\xfa\x5d\x3c\x3e\x08\x0b\x66\xf4\x77\xf1\x98\x57\x85\xf5\x5f\x2f\x60\xbf\x25\x51\xf7\x0e\x8f\xe8\x5f\x15\x11\x86\xf1\x9e\x3c\xcb\xd8\xe7\xcd\x68\x62\x72\xd0\x89\x2c\x5d\xcb\x70\x1d\xcc\x1a\x62\xf0\xe1\xe9\xf6\xb6\x59\x3c\xb2\x31\x38\x60\x5c\x2f\x4a\x75\x7a\xab\x81\x36\xbd\x80\x9e\xd4\x60\x4e\x8a\x92\x23\xb8\x98\xec\x0b\xb5\xe2\xfc\x7a\x5d\xef\xa7\x85\xde\x25\x63\x5e\x15\xc7\xcd\xf2\x34\xf3\xaa\x38\x2d\x65\x12\xbb\x10\xcc\xee\x8a\xed\xb7\x4c\x1b\x99\x29\x52\xfc\xc1\xd1\x7d\x51\x25\x9f\xd0\xfc\x1a\xcf\x3a\x13\x35\x7c\xc1\xf1\xf9\x20\x49\x3f\x94\x25\xaa\x17\xb2\xb2\x71\x3a\xe4\xf8\xcb\xaa\xa8\x38\x31\x6c\x75\x3f\xe7\xdf\x3f\xe2\x0b\x69\x08\x07\xfd\x7f\x1f\x77\x67\xae\xfd\x3e\x04\x62\xbf\x6a\xd8\x0a\x6e\x45\x76\xcb\xad\x2e\xcd\x69\xb1\xa7\x4d\xdb\x45\xe8\x0e\xe5\x66\x4b\x7f\xb4\xf7\xa8\x6a\x96\xda\xcb\x74\xd7\x94\x18\x74\x28\x86\x86\xf6\xfa\x15\xcd\xc7\x5d\xcd\x8a\xba\xdd\x43\xd9\x2a\x00\xd7\x9e\x9d\x06\x94\xe9\x92\x93\xcd\x33\x10\x52\xe0\xf3\xa6\xc1\x99\x3f\x99\xbd\xaf\x84\xbd\x81\xc0\x95\x48\xdd\x25\x84\x49\xe1\xef\xfc\xa7\xab\xc7\x5e\x34\xfd\x6f\x6f\xc3\xfa\x19\x16\x57\xd7\x26\x35\x4d\x73\xb6\x97\x3d\x00\xdd\x30\x9f\xcd\x0d\xb1\x0f\x21\x9f\x34\xfd\x25\x97\x9c\x2f\x98\x32\x79\x93\x2d\xed\x5a\xdc\xb1\x69\x83\x35\x08\x55\x6b\xa6\x6b\x20\xff\xc5\x46\xb6\x37\x8c\x4f\xb8\x39\x87\x33\x9f\xfe\xf6\x6d\xd1\x76\xb3\xdb\x4e\xce\x21\x8f\x0c\xaa\x6a\xbb\xb5\x5c\x1a\xd0\xd8\x77\x8f\xe7\x7d\xca\x3d\xc3\x9c\xba\xd3\x5d\x2e\x18\x55\x09\xaf\x39\xc9\xf4\x7f\xd3\x55\x4e\x81\xff\x35\x37\xb5\xb5\x38\xae\xfb\xda\x14\x39\x14\x92\x12\x5e\x37\xa9\xdb\x0b\x3a\x45\x1e\xba\x85\xf6\x72\xee\xc9\x52\x42\x31\xb0\xbe\x71\xef\xf9\x69\x10\x3e\x6e\x7a\x22\x94\x11\x2e\xb3\x03\x57\x77\xcb\xaa\x79\x3a\xba\xc5\x9c\x51\x8a\x62\x1a\x18\x55\xe1\x81\x5f\x2c\x9c\xa0\xd0\xb3\xf3\xca\x85\xba\x08\x66\x07\x5a\x87\x7e\xb1\xe9\xa3\xef\xf5\x10\xfd\x62\x2d\xb8\xee\x13\x7e\x33\x5c\x74\x3f\x60\xd6\x3d\x04\x26\x05\x5c\x4a\x91\xb2\xae\xcc\xbe\x19\x04\xe2\xd4\xef\x25\x09\x97\xed\x63\x94\x32\x5d\xb0\x56\xc6\xf0\x77\x8d\x4b\x47\xb7\xd7\x73\x76\xf7\xee\x03\x8e\xf9\xc2\xc2\x9b\x7e\x3e\xfc\x51\x03\x86\x8f\xac\x41\x7b\x63\x60\x5b\xd7\x6b\x98\x94\xc3\x88\x86\x85\xce\x82\x99\x0b\xff\x42\xc2\x12\x21\x65\x36\x54\x40\x37\x82\x14\x2c\x21\x9c\x6f\x22\x9b\x0e\x93\xb8\x3c\x29\x21\x95\xd2\xb4\xae\xbd\xe3\x5d\x7f\xd8\x37\xb3\x4b\xfb\x42\xe0\xc3\xbe\xcd\x61\x4e\xf5\xeb\xa1\xd7\x33\x3b\xd2\x27\xa3\xae\x21\xe4\xfa\x41\x0f\xdb\xfe\xd0\x6d\x8f\x35\x9e\x3c\xd4\x5d\xf7\x15\x32\x89\x7d\x01\x4d\x62\xff\xdf\x3b\xfe\x13\x00\x00\xff\xff\xb4\xaf\x76\x6f\xef\x21\x00\x00"), }, } fs["/"].(*vfsgen۰DirInfo).entries = []os.FileInfo{ diff --git a/go.mod b/go.mod index e62d485c..01962024 100644 --- a/go.mod +++ b/go.mod @@ -15,7 +15,7 @@ require ( github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749 // indirect github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd golang.org/x/sys v0.0.0-20190909082730-f460065e899a // indirect - golang.org/x/tools v0.0.0-20190731214159-1e85ed8060aa // indirect + golang.org/x/tools v0.0.0-20190919031856-7460b8e10b7e // indirect gopkg.in/alecthomas/kingpin.v2 v2.2.6 ) diff --git a/go.sum b/go.sum index 
836a09c8..13d2d8ba 100644 --- a/go.sum +++ b/go.sum @@ -25,6 +25,7 @@ github.com/go-logfmt/logfmt v0.4.0 h1:MP4Eh7ZCb31lleYCFuwm0oe4/YGak+5l1vA2NOE80n github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/gogo/protobuf v1.1.1 h1:72R+M5VuhED/KujmZVcIquuo8mBgX4oVda//DQb3PXo= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM= github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= @@ -95,10 +96,12 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90Pveol golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -107,10 +110,14 @@ golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3 h1:4y9KwBHBgBNwDbtu44R5o1fdO golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190909082730-f460065e899a h1:mIzbOulag9/gXacgxKlFVwpCOWSfBT3/pDyyCwGA9as= golang.org/x/sys v0.0.0-20190909082730-f460065e899a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/tools v0.0.0-20190731214159-1e85ed8060aa h1:kwa/4M1dbmhZqOIqYiTtbA6JrvPwo1+jqlub2qDXX90= -golang.org/x/tools v0.0.0-20190731214159-1e85ed8060aa/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI= +golang.org/x/tools v0.0.0-20190919031856-7460b8e10b7e h1:DxffoHYXmce3WTEBU/6/5bBSV7wmPSvT+atzBfv8hJI= +golang.org/x/tools v0.0.0-20190919031856-7460b8e10b7e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 
v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/yaml.v2 v2.2.1 h1:mUhvW9EsL+naU5Q3cakzfE91YhliOondGd6ZrsDBHQE= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= diff --git a/handler/handler_test.go b/handler/handler_test.go index 5eda1a90..34183172 100644 --- a/handler/handler_test.go +++ b/handler/handler_test.go @@ -15,9 +15,11 @@ package handler import ( "bytes" + "errors" "net/http" "net/http/httptest" "testing" + "time" "github.com/go-kit/kit/log" @@ -31,15 +33,26 @@ import ( var logger = log.NewNopLogger() +// MockMetricStore isn't doing any of the validation and sanitation a real +// metric store implementation has to do. Those are tested in the storage +// package. Here we only ensure that the right method calls are performed +// by the code in the handlers. type MockMetricStore struct { lastWriteRequest storage.WriteRequest metricGroups storage.GroupingKeyToMetricGroup writeRequests []storage.WriteRequest + err error // If non-nil, will be sent to Done channel in request. } func (m *MockMetricStore) SubmitWriteRequest(req storage.WriteRequest) { m.writeRequests = append(m.writeRequests, req) m.lastWriteRequest = req + if req.Done != nil { + if m.err != nil { + req.Done <- m.err + } + close(req.Done) + } } func (m *MockMetricStore) GetMetricFamilies() []*dto.MetricFamily { @@ -86,7 +99,9 @@ func TestHealthyReady(t *testing.T) { func TestPush(t *testing.T) { mms := MockMetricStore{} + mmsWithErr := MockMetricStore{err: errors.New("testerror")} handler := Push(&mms, false, false, logger) + handlerWithErr := Push(&mmsWithErr, false, false, logger) handlerBase64 := Push(&mms, false, true, logger) req, err := http.NewRequest("POST", "http://example.org/", &bytes.Buffer{}) if err != nil { @@ -107,7 +122,7 @@ func TestPush(t *testing.T) { mms.lastWriteRequest = storage.WriteRequest{} w = httptest.NewRecorder() handler(w, req, httprouter.Params{httprouter.Param{Key: "job", Value: "testjob"}}) - if expected, got := http.StatusAccepted, w.Code; expected != got { + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } if mms.lastWriteRequest.Timestamp.IsZero() { @@ -148,7 +163,7 @@ func TestPush(t *testing.T) { mms.lastWriteRequest = storage.WriteRequest{} req, err = http.NewRequest( "POST", "http://example.org/", - bytes.NewBufferString("some_metric 3.14\nanother_metric 42\n"), + bytes.NewBufferString("some_metric 3.14\nanother_metric{instance=\"testinstance\",job=\"testjob\"} 42\n"), ) if err != nil { t.Fatal(err) @@ -161,7 +176,7 @@ func TestPush(t *testing.T) { httprouter.Param{Key: "labels", Value: "/instance/testinstance"}, }, ) - if expected, got := http.StatusAccepted, w.Code; expected != got { + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } if mms.lastWriteRequest.Timestamp.IsZero() { @@ -173,95 +188,93 @@ func TestPush(t *testing.T) { if expected, got := "testinstance", mms.lastWriteRequest.Labels["instance"]; expected != got { t.Errorf("Wanted instance %v, got %v.", expected, got) } - if expected, got := `name:"some_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, grouping labels not in request. 
+ if expected, got := `name:"some_metric" type:UNTYPED metric: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - if _, ok := mms.lastWriteRequest.MetricFamilies["push_time_seconds"]; !ok { - t.Errorf("Wanted metric family push_time_seconds missing.") - } - // With base64-encoded job name and instance name and text content. - mms.lastWriteRequest = storage.WriteRequest{} + // With job name and instance name and text content, storage returns error. req, err = http.NewRequest( "POST", "http://example.org/", - bytes.NewBufferString("some_metric 3.14\nanother_metric 42\n"), + bytes.NewBufferString("some_metric 3.14\nanother_metric{instance=\"testinstance\",job=\"testjob\"} 42\n"), ) if err != nil { t.Fatal(err) } w = httptest.NewRecorder() - handlerBase64( + handlerWithErr( w, req, httprouter.Params{ - httprouter.Param{Key: "job", Value: "dGVzdC9qb2I="}, // job="test/job" - httprouter.Param{Key: "labels", Value: "/instance@base64/dGVzdGluc3RhbmNl"}, // instance="testinstance" + httprouter.Param{Key: "job", Value: "testjob"}, + httprouter.Param{Key: "labels", Value: "/instance/testinstance"}, }, ) - if expected, got := http.StatusAccepted, w.Code; expected != got { + if expected, got := http.StatusBadRequest, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } - if mms.lastWriteRequest.Timestamp.IsZero() { - t.Errorf("Write request timestamp not set: %#v", mms.lastWriteRequest) + if mmsWithErr.lastWriteRequest.Timestamp.IsZero() { + t.Errorf("Write request timestamp not set: %#v", mmsWithErr.lastWriteRequest) } - if expected, got := "test/job", mms.lastWriteRequest.Labels["job"]; expected != got { + if expected, got := "testjob", mmsWithErr.lastWriteRequest.Labels["job"]; expected != got { t.Errorf("Wanted job %v, got %v.", expected, got) } - if expected, got := "testinstance", mms.lastWriteRequest.Labels["instance"]; expected != got { + if expected, got := "testinstance", mmsWithErr.lastWriteRequest.Labels["instance"]; expected != got { t.Errorf("Wanted instance %v, got %v.", expected, got) } - if expected, got := `name:"some_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, grouping labels not in request. + if expected, got := `name:"some_metric" type:UNTYPED metric: > `, mmsWithErr.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { + if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mmsWithErr.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - if _, ok := mms.lastWriteRequest.MetricFamilies["push_time_seconds"]; !ok { - t.Errorf("Wanted metric family push_time_seconds missing.") - } - // With job name and no instance name and text content. + // With base64-encoded job name and instance name and text content. 
mms.lastWriteRequest = storage.WriteRequest{} req, err = http.NewRequest( "POST", "http://example.org/", - bytes.NewBufferString("some_metric 3.14\nanother_metric 42\n"), + bytes.NewBufferString("some_metric 3.14\nanother_metric{instance=\"testinstance\",job=\"testjob\"} 42\n"), ) if err != nil { t.Fatal(err) } w = httptest.NewRecorder() - handler( + handlerBase64( w, req, httprouter.Params{ - httprouter.Param{Key: "job", Value: "testjob"}, + httprouter.Param{Key: "job", Value: "dGVzdC9qb2I="}, // job="test/job" + httprouter.Param{Key: "labels", Value: "/instance@base64/dGVzdGluc3RhbmNl"}, // instance="testinstance" }, ) - if expected, got := http.StatusAccepted, w.Code; expected != got { + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } if mms.lastWriteRequest.Timestamp.IsZero() { t.Errorf("Write request timestamp not set: %#v", mms.lastWriteRequest) } - if expected, got := "testjob", mms.lastWriteRequest.Labels["job"]; expected != got { + if expected, got := "test/job", mms.lastWriteRequest.Labels["job"]; expected != got { t.Errorf("Wanted job %v, got %v.", expected, got) } - if expected, got := "", mms.lastWriteRequest.Labels["instance"]; expected != got { + if expected, got := "testinstance", mms.lastWriteRequest.Labels["instance"]; expected != got { t.Errorf("Wanted instance %v, got %v.", expected, got) } - if expected, got := `name:"some_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, grouping labels not in request. + if expected, got := `name:"some_metric" type:UNTYPED metric: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, so the job label is still as in the push, not aligned to the grouping labels. + if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - // With job name and instance name and timestamp specified. + // With job name and no instance name and text content. 
mms.lastWriteRequest = storage.WriteRequest{} req, err = http.NewRequest( "POST", "http://example.org/", - bytes.NewBufferString("some_metric 3.14\nanother_metric 42\n"), + bytes.NewBufferString("some_metric 3.14\nanother_metric{instance=\"testinstance\",job=\"testjob\"} 42\n"), ) if err != nil { t.Fatal(err) @@ -271,24 +284,33 @@ func TestPush(t *testing.T) { w, req, httprouter.Params{ httprouter.Param{Key: "job", Value: "testjob"}, - httprouter.Param{Key: "labels", Value: "/instance/testinstance"}, }, ) - if expected, got := http.StatusBadRequest, w.Code; expected != got { + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } - if !mms.lastWriteRequest.Timestamp.IsZero() { - t.Errorf("Write request timestamp unexpectedly set: %#v", mms.lastWriteRequest) + if mms.lastWriteRequest.Timestamp.IsZero() { + t.Errorf("Write request timestamp not set: %#v", mms.lastWriteRequest) + } + if expected, got := "testjob", mms.lastWriteRequest.Labels["job"]; expected != got { + t.Errorf("Wanted job %v, got %v.", expected, got) + } + if expected, got := "", mms.lastWriteRequest.Labels["instance"]; expected != got { + t.Errorf("Wanted instance %v, got %v.", expected, got) + } + // Note that sanitation hasn't happened yet, grouping labels not in request. + if expected, got := `name:"some_metric" type:UNTYPED metric: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { + t.Errorf("Wanted metric family %v, got %v.", expected, got) + } + if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { + t.Errorf("Wanted metric family %v, got %v.", expected, got) } - // With job name and instance name and text content and job and instance labels. + // With job name and instance name and timestamp specified. mms.lastWriteRequest = storage.WriteRequest{} req, err = http.NewRequest( - "POST", "http://example.org", - bytes.NewBufferString(` -some_metric{job="foo",instance="bar"} 3.14 -another_metric{instance="baz"} 42 -`), + "POST", "http://example.org/", + bytes.NewBufferString("a 1\nb 1 1000\n"), ) if err != nil { t.Fatal(err) @@ -301,25 +323,19 @@ another_metric{instance="baz"} 42 httprouter.Param{Key: "labels", Value: "/instance/testinstance"}, }, ) - if expected, got := http.StatusAccepted, w.Code; expected != got { + // Note that a real storage should reject pushes with timestamps. Here + // we only make sure it gets through. Rejection is tested in the storage + // package. + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } - if mms.lastWriteRequest.Timestamp.IsZero() { - t.Errorf("Write request timestamp not set: %#v", mms.lastWriteRequest) + // Make sure the timestamp from the push didn't make it to the WriteRequest. 
+ if time.Now().Sub(mms.lastWriteRequest.Timestamp) > time.Minute { + t.Errorf("Write request timestamp set to a too low value: %#v", mms.lastWriteRequest) } - if expected, got := "testjob", mms.lastWriteRequest.Labels["job"]; expected != got { - t.Errorf("Wanted job %v, got %v.", expected, got) + if expected, got := int64(1000), mms.lastWriteRequest.MetricFamilies["b"].GetMetric()[0].GetTimestampMs(); expected != got { + t.Errorf("Wanted protobuf timestamp %v, got %v.", expected, got) } - if expected, got := "testinstance", mms.lastWriteRequest.Labels["instance"]; expected != got { - t.Errorf("Wanted instance %v, got %v.", expected, got) - } - if expected, got := `name:"some_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { - t.Errorf("Wanted metric family %v, got %v.", expected, got) - } - if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { - t.Errorf("Wanted metric family %v, got %v.", expected, got) - } - // With job name and instance name and protobuf content. mms.lastWriteRequest = storage.WriteRequest{} buf := &bytes.Buffer{} @@ -368,7 +384,7 @@ another_metric{instance="baz"} 42 httprouter.Param{Key: "labels", Value: "/instance/testinstance"}, }, ) - if expected, got := http.StatusAccepted, w.Code; expected != got { + if expected, got := http.StatusOK, w.Code; expected != got { t.Errorf("Wanted status code %v, got %v.", expected, got) } if mms.lastWriteRequest.Timestamp.IsZero() { @@ -380,10 +396,12 @@ another_metric{instance="baz"} 42 if expected, got := "testinstance", mms.lastWriteRequest.Labels["instance"]; expected != got { t.Errorf("Wanted instance %v, got %v.", expected, got) } - if expected, got := `name:"some_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, grouping labels not in request. + if expected, got := `name:"some_metric" type:UNTYPED metric: > `, mms.lastWriteRequest.MetricFamilies["some_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } - if expected, got := `name:"another_metric" type:UNTYPED metric: label: untyped: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { + // Note that sanitation hasn't happened yet, grouping labels not in request. + if expected, got := `name:"another_metric" type:UNTYPED metric: > `, mms.lastWriteRequest.MetricFamilies["another_metric"].String(); expected != got { t.Errorf("Wanted metric family %v, got %v.", expected, got) } } diff --git a/handler/push.go b/handler/push.go index 57b3b25c..0f98c37e 100644 --- a/handler/push.go +++ b/handler/push.go @@ -19,14 +19,12 @@ import ( "io" "mime" "net/http" - "sort" "strings" "sync" "time" "github.com/go-kit/kit/log" "github.com/go-kit/kit/log/level" - "github.com/golang/protobuf/proto" "github.com/julienschmidt/httprouter" "github.com/matttproud/golang_protobuf_extensions/pbutil" "github.com/prometheus/client_golang/prometheus" @@ -40,8 +38,6 @@ import ( ) const ( - pushMetricName = "push_time_seconds" - pushMetricHelp = "Last Unix time when this group was changed in the Pushgateway." // Base64Suffix is appended to a label name in the request URL path to // mark the following label value as base64 encoded. 
Base64Suffix = "@base64" @@ -84,13 +80,6 @@ func Push( } labels["job"] = job - if replace { - ms.SubmitWriteRequest(storage.WriteRequest{ - Labels: labels, - Timestamp: time.Now(), - }) - } - var metricFamilies map[string]*dto.MetricFamily ctMediatype, ctParams, ctErr := mime.ParseMediaType(r.Header.Get("Content-Type")) if ctErr == nil && ctMediatype == "application/vnd.google.protobuf" && @@ -119,20 +108,28 @@ func Push( level.Debug(logger).Log("msg", "failed to parse text", "err", err.Error()) return } - if timestampsPresent(metricFamilies) { - http.Error(w, "pushed metrics must not have timestamps", http.StatusBadRequest) - level.Debug(logger).Log("msg", "pushed metrics must not have timestamps") - return - } now := time.Now() - addPushTimestamp(metricFamilies, now) - sanitizeLabels(metricFamilies, labels) + errCh := make(chan error, 1) ms.SubmitWriteRequest(storage.WriteRequest{ Labels: labels, Timestamp: now, MetricFamilies: metricFamilies, + Replace: replace, + Done: errCh, }) - w.WriteHeader(http.StatusAccepted) + for err := range errCh { + http.Error( + w, + fmt.Sprintf("pushed metrics are invalid or inconsistent with existing metrics: %v", err), + http.StatusBadRequest, + ) + level.Error(logger).Log( + "msg", "pushed metrics are invalid or inconsistent with existing metrics", + "method", r.Method, + "source", r.RemoteAddr, + "err", err.Error(), + ) + } }) instrumentedHandler := promhttp.InstrumentHandlerRequestSize( @@ -149,61 +146,6 @@ func Push( } } -// sanitizeLabels ensures that all the labels in groupingLabels and the -// `instance` label are present in each MetricFamily in metricFamilies. The -// label values from groupingLabels are set in each MetricFamily, no matter -// what. After that, if the 'instance' label is not present at all in a -// MetricFamily, it will be created (with an empty string as value). -// -// Finally, sanitizeLabels sorts the label pairs of all metrics. -func sanitizeLabels( - metricFamilies map[string]*dto.MetricFamily, - groupingLabels map[string]string, -) { - gLabelsNotYetDone := make(map[string]string, len(groupingLabels)) - - for _, mf := range metricFamilies { - metric: - for _, m := range mf.GetMetric() { - for ln, lv := range groupingLabels { - gLabelsNotYetDone[ln] = lv - } - hasInstanceLabel := false - for _, lp := range m.GetLabel() { - ln := lp.GetName() - if lv, ok := gLabelsNotYetDone[ln]; ok { - lp.Value = proto.String(lv) - delete(gLabelsNotYetDone, ln) - } - if ln == string(model.InstanceLabel) { - hasInstanceLabel = true - } - if len(gLabelsNotYetDone) == 0 && hasInstanceLabel { - sort.Sort(labelPairs(m.Label)) - continue metric - } - } - for ln, lv := range gLabelsNotYetDone { - m.Label = append(m.Label, &dto.LabelPair{ - Name: proto.String(ln), - Value: proto.String(lv), - }) - if ln == string(model.InstanceLabel) { - hasInstanceLabel = true - } - delete(gLabelsNotYetDone, ln) // To prepare map for next metric. - } - if !hasInstanceLabel { - m.Label = append(m.Label, &dto.LabelPair{ - Name: proto.String(string(model.InstanceLabel)), - Value: proto.String(""), - }) - } - sort.Sort(labelPairs(m.Label)) - } - } -} - // decodeBase64 decodes the provided string using the “Base 64 Encoding with URL // and Filename Safe Alphabet” (RFC 4648). Padding characters (i.e. trailing // '=') are ignored. @@ -242,47 +184,3 @@ func splitLabels(labels string) (map[string]string, error) { } return result, nil } - -// Checks if any timestamps have been specified. 
-func timestampsPresent(metricFamilies map[string]*dto.MetricFamily) bool { - for _, mf := range metricFamilies { - for _, m := range mf.GetMetric() { - if m.TimestampMs != nil { - return true - } - } - } - return false -} - -// Add metric to indicate the push time. -func addPushTimestamp(metricFamilies map[string]*dto.MetricFamily, t time.Time) { - metricFamilies[pushMetricName] = &dto.MetricFamily{ - Name: proto.String(pushMetricName), - Help: proto.String(pushMetricHelp), - Type: dto.MetricType_GAUGE.Enum(), - Metric: []*dto.Metric{ - { - Gauge: &dto.Gauge{ - Value: proto.Float64(float64(t.UnixNano()) / 1e9), - }, - }, - }, - } -} - -// labelPairs implements sort.Interface. It provides a sortable version of a -// slice of dto.LabelPair pointers. -type labelPairs []*dto.LabelPair - -func (s labelPairs) Len() int { - return len(s) -} - -func (s labelPairs) Swap(i, j int) { - s[i], s[j] = s[j], s[i] -} - -func (s labelPairs) Less(i, j int) bool { - return s[i].GetName() < s[j].GetName() -} diff --git a/main.go b/main.go index 8f46162a..16243343 100644 --- a/main.go +++ b/main.go @@ -95,8 +95,8 @@ func main() { ms := storage.NewDiskMetricStore(*persistenceFile, *persistenceInterval, prometheus.DefaultGatherer, logger) - // Inject the metric families returned by ms.GetMetricFamilies into the default Gatherer: - prometheus.DefaultGatherer = prometheus.Gatherers{ + // Create a Gatherer combining the DefaultGatherer and the metrics from the metric store. + g := prometheus.Gatherers{ prometheus.DefaultGatherer, prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) { return ms.GetMetricFamilies(), nil }), } @@ -106,7 +106,7 @@ func main() { r.Handler("GET", *routePrefix+"/-/ready", handler.Ready(ms)) r.Handler( "GET", path.Join(*routePrefix, *metricsPath), - promhttp.HandlerFor(prometheus.DefaultGatherer, promhttp.HandlerOpts{ + promhttp.HandlerFor(g, promhttp.HandlerOpts{ ErrorLog: logFunc(level.Error(logger).Log), }), ) diff --git a/resources/template.html b/resources/template.html index fbe57f76..5e6f90da 100644 --- a/resources/template.html +++ b/resources/template.html @@ -68,7 +68,8 @@

{{range $i, $ln := .SortedLabels}} {{$ln}}="{{index $metricGroup.Labels $ln}}" {{end}} - + + {{if not $metricGroup.LastPushSuccess}}Last push failed!{{end}}

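The handler and storage changes in this diff report per-push errors through the new `Done` channel on `storage.WriteRequest`: the consumer sends at most one error and then closes the channel, while the push handler simply ranges over it and turns any received error into a 400 response. The following standalone sketch is not part of the patch; its types and names are illustrative only, and it just shows that channel pattern in isolation:

```
package main

import (
	"errors"
	"fmt"
)

// writeRequest mimics the shape of storage.WriteRequest for this
// illustration only: done carries at most one error and is then closed
// by the consumer.
type writeRequest struct {
	payload string
	done    chan error
}

// process stands in for the metric store's write loop: it validates the
// request, reports a failure on the done channel if necessary, and always
// closes the channel so that the producer's range loop terminates.
func process(wr writeRequest) {
	if wr.payload == "" && wr.done != nil {
		wr.done <- errors.New("empty payload")
	}
	if wr.done != nil {
		close(wr.done)
	}
}

func main() {
	// Producer side, as in the push handler: a channel with capacity one,
	// submitted together with the request and then ranged over. The loop
	// body runs at most once, and only if the consumer reported an error.
	errCh := make(chan error, 1)
	go process(writeRequest{payload: "", done: errCh})
	for err := range errCh {
		fmt.Println("push rejected:", err)
	}
}
```

Buffering the channel with capacity one means the storage side never blocks when reporting the error, mirroring `errCh := make(chan error, 1)` in `handler/push.go`.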
diff --git a/storage/diskmetricstore.go b/storage/diskmetricstore.go index 1566050b..b6c44bca 100644 --- a/storage/diskmetricstore.go +++ b/storage/diskmetricstore.go @@ -15,10 +15,12 @@ package storage import ( "encoding/gob" + "errors" "fmt" "io/ioutil" "os" "path" + "sort" "sync" "time" @@ -32,9 +34,15 @@ import ( ) const ( - writeQueueCapacity = 1000 + pushMetricName = "push_time_seconds" + pushMetricHelp = "Last Unix time when changing this group in the Pushgateway succeeded." + pushFailedMetricName = "push_failure_time_seconds" + pushFailedMetricHelp = "Last Unix time when changing this group in the Pushgateway failed." + writeQueueCapacity = 1000 ) +var errTimestamp = errors.New("pushed metrics must not have timestamps") + // DiskMetricStore is an implementation of MetricStore that persists metrics to // disk. type DiskMetricStore struct { @@ -100,14 +108,40 @@ func (dms *DiskMetricStore) SubmitWriteRequest(req WriteRequest) { dms.writeQueue <- req } +// Shutdown implements the MetricStore interface. +func (dms *DiskMetricStore) Shutdown() error { + close(dms.drain) + return <-dms.done +} + +// Healthy implements the MetricStore interface. +func (dms *DiskMetricStore) Healthy() error { + // By taking the lock we check that there is no deadlock. + dms.lock.Lock() + defer dms.lock.Unlock() + + // A pushgateway that cannot be written to should not be + // considered as healthy. + if len(dms.writeQueue) == cap(dms.writeQueue) { + return fmt.Errorf("write queue is full") + } + + return nil +} + +// Ready implements the MetricStore interface. +func (dms *DiskMetricStore) Ready() error { + return dms.Healthy() +} + // GetMetricFamilies implements the MetricStore interface. func (dms *DiskMetricStore) GetMetricFamilies() []*dto.MetricFamily { - result := []*dto.MetricFamily{} - mfStatByName := map[string]mfStat{} - dms.lock.RLock() defer dms.lock.RUnlock() + result := []*dto.MetricFamily{} + mfStatByName := map[string]mfStat{} + for _, group := range dms.metricGroups { for name, tmf := range group.Metrics { mf := tmf.GetMetricFamily() @@ -151,30 +185,19 @@ func (dms *DiskMetricStore) GetMetricFamilies() []*dto.MetricFamily { return result } -// Shutdown implements the MetricStore interface. -func (dms *DiskMetricStore) Shutdown() error { - close(dms.drain) - return <-dms.done -} - -// Healthy implements the MetricStore interface. -func (dms *DiskMetricStore) Healthy() error { - // By taking the lock we check that there is no deadlock. - dms.lock.Lock() - defer dms.lock.Unlock() - - // A pushgateway that cannot be written to should not be - // considered as healthy. - if len(dms.writeQueue) == cap(dms.writeQueue) { - return fmt.Errorf("write queue is full") +// GetMetricFamiliesMap implements the MetricStore interface. +func (dms *DiskMetricStore) GetMetricFamiliesMap() GroupingKeyToMetricGroup { + dms.lock.RLock() + defer dms.lock.RUnlock() + groupsCopy := make(GroupingKeyToMetricGroup, len(dms.metricGroups)) + for k, g := range dms.metricGroups { + metricsCopy := make(NameToTimestampedMetricFamilyMap, len(g.Metrics)) + groupsCopy[k] = MetricGroup{Labels: g.Labels, Metrics: metricsCopy} + for n, tmf := range g.Metrics { + metricsCopy[n] = tmf + } } - - return nil -} - -// Ready implements the MetricStore interface. 
-func (dms *DiskMetricStore) Ready() error { - return dms.Healthy() + return groupsCopy } func (dms *DiskMetricStore) loop(persistenceInterval time.Duration) { @@ -205,8 +228,15 @@ func (dms *DiskMetricStore) loop(persistenceInterval time.Duration) { for { select { case wr := <-dms.writeQueue: - dms.processWriteRequest(wr) lastWrite = time.Now() + if dms.checkWriteRequest(wr) { + dms.processWriteRequest(wr) + } else { + dms.setPushFailedTimestamp(wr) + } + if wr.Done != nil { + close(wr.Done) + } checkPersist() case lastPersist = <-persistDone: persistScheduled = false @@ -237,20 +267,35 @@ func (dms *DiskMetricStore) processWriteRequest(wr WriteRequest) { key := model.LabelsToSignature(wr.Labels) if wr.MetricFamilies == nil { - // Delete. + // No MetricFamilies means delete request. Delete the whole + // metric group, and we are done here. delete(dms.metricGroups, key) return } - // Update. - for name, mf := range wr.MetricFamilies { - group, ok := dms.metricGroups[key] - if !ok { - group = MetricGroup{ - Labels: wr.Labels, - Metrics: NameToTimestampedMetricFamilyMap{}, + // Otherwise, it's an update. + group, ok := dms.metricGroups[key] + if !ok { + group = MetricGroup{ + Labels: wr.Labels, + Metrics: NameToTimestampedMetricFamilyMap{}, + } + dms.metricGroups[key] = group + } else if wr.Replace { + // For replace, we have to delete all metric families in the + // group except pre-existing push timestamps. + for name := range group.Metrics { + if name != pushMetricName && name != pushFailedMetricName { + delete(group.Metrics, name) } - dms.metricGroups[key] = group } + } + wr.MetricFamilies[pushMetricName] = newPushTimestampGauge(wr.Labels, wr.Timestamp) + // Only add a zero push-failed metric if none is there yet, so that a + // previously added fail timestamp is retained. + if _, ok := group.Metrics[pushFailedMetricName]; !ok { + wr.MetricFamilies[pushFailedMetricName] = newPushFailedTimestampGauge(wr.Labels, time.Time{}) + } + for name, mf := range wr.MetricFamilies { group.Metrics[name] = TimestampedMetricFamily{ Timestamp: wr.Timestamp, GobbableMetricFamily: (*GobbableMetricFamily)(mf), @@ -258,19 +303,82 @@ func (dms *DiskMetricStore) processWriteRequest(wr WriteRequest) { } } -// GetMetricFamiliesMap implements the MetricStore interface. -func (dms *DiskMetricStore) GetMetricFamiliesMap() GroupingKeyToMetricGroup { - dms.lock.RLock() - defer dms.lock.RUnlock() - groupsCopy := make(GroupingKeyToMetricGroup, len(dms.metricGroups)) - for k, g := range dms.metricGroups { - metricsCopy := make(NameToTimestampedMetricFamilyMap, len(g.Metrics)) - groupsCopy[k] = MetricGroup{Labels: g.Labels, Metrics: metricsCopy} - for n, tmf := range g.Metrics { - metricsCopy[n] = tmf +func (dms *DiskMetricStore) setPushFailedTimestamp(wr WriteRequest) { + dms.lock.Lock() + defer dms.lock.Unlock() + + key := model.LabelsToSignature(wr.Labels) + + group, ok := dms.metricGroups[key] + if !ok { + group = MetricGroup{ + Labels: wr.Labels, + Metrics: NameToTimestampedMetricFamilyMap{}, } + dms.metricGroups[key] = group } - return groupsCopy + + group.Metrics[pushFailedMetricName] = TimestampedMetricFamily{ + Timestamp: wr.Timestamp, + GobbableMetricFamily: (*GobbableMetricFamily)(newPushFailedTimestampGauge(wr.Labels, wr.Timestamp)), + } + // Only add a zero push metric if none is there yet, so that a + // previously added push timestamp is retained. 
+ if _, ok := group.Metrics[pushMetricName]; !ok { + group.Metrics[pushMetricName] = TimestampedMetricFamily{ + Timestamp: wr.Timestamp, + GobbableMetricFamily: (*GobbableMetricFamily)(newPushTimestampGauge(wr.Labels, time.Time{})), + } + } +} + +// checkWriteRequest returns whether applying the provided WriteRequest will result in +// a consistent state of metrics. The dms is not modified by the check. However, +// the WriteRequest _will_ be sanitized: the MetricFamilies are ensured to +// contain the grouping Labels after the check. If false is returned, the +// causing error is written to the Done channel of the WriteRequest. +func (dms *DiskMetricStore) checkWriteRequest(wr WriteRequest) bool { + if wr.MetricFamilies == nil { + // Delete request cannot create inconsistencies, and nothing has + // to be sanitized. + return true + } + + var err error + defer func() { + if err != nil && wr.Done != nil { + wr.Done <- err + } + }() + + if timestampsPresent(wr.MetricFamilies) { + err = errTimestamp + return false + } + for _, mf := range wr.MetricFamilies { + sanitizeLabels(mf, wr.Labels) + } + + // Construct a test dms, acting on a copy of the metrics, to test the + // WriteRequest with. + tdms := &DiskMetricStore{ + metricGroups: dms.GetMetricFamiliesMap(), + predefinedHelp: dms.predefinedHelp, + logger: log.NewNopLogger(), + } + tdms.processWriteRequest(wr) + + // Construct a test Gatherer to check if consistent gathering is possible. + tg := prometheus.Gatherers{ + prometheus.DefaultGatherer, + prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) { + return tdms.GetMetricFamilies(), nil + }), + } + if _, err = tg.Gather(); err != nil { + return false + } + return true } func (dms *DiskMetricStore) persist() error { @@ -345,3 +453,110 @@ func extractPredefinedHelpStrings(g prometheus.Gatherer) (map[string]string, err } return result, nil } + +func newPushTimestampGauge(groupingLabels map[string]string, t time.Time) *dto.MetricFamily { + return newTimestampGauge(pushMetricName, pushMetricHelp, groupingLabels, t) +} + +func newPushFailedTimestampGauge(groupingLabels map[string]string, t time.Time) *dto.MetricFamily { + return newTimestampGauge(pushFailedMetricName, pushFailedMetricHelp, groupingLabels, t) +} + +func newTimestampGauge(name, help string, groupingLabels map[string]string, t time.Time) *dto.MetricFamily { + var ts float64 + if !t.IsZero() { + ts = float64(t.UnixNano()) / 1e9 + } + mf := &dto.MetricFamily{ + Name: proto.String(name), + Help: proto.String(help), + Type: dto.MetricType_GAUGE.Enum(), + Metric: []*dto.Metric{ + { + Gauge: &dto.Gauge{ + Value: proto.Float64(ts), + }, + }, + }, + } + sanitizeLabels(mf, groupingLabels) + return mf +} + +// sanitizeLabels ensures that all the labels in groupingLabels and the +// `instance` label are present in the MetricFamily. The label values from +// groupingLabels are set in each Metric, no matter what. After that, if the +// 'instance' label is not present at all in a Metric, it will be created (with +// an empty string as value). +// +// Finally, sanitizeLabels sorts the label pairs of all metrics.
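`checkWriteRequest` validates a push by applying it to a throwaway copy of the store and then asking a combined `prometheus.Gatherers` (the default gatherer plus the copy) whether the result can still be gathered consistently. A standalone sketch of that rejection mechanism with made-up metric families; only the `Gatherers`/`GathererFunc` technique is taken from the code above:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

// fam builds a minimal metric family named demo_metric with the given type.
func fam(t dto.MetricType) *dto.MetricFamily {
	m := &dto.Metric{}
	if t == dto.MetricType_GAUGE {
		m.Gauge = &dto.Gauge{Value: proto.Float64(1)}
	} else {
		m.Counter = &dto.Counter{Value: proto.Float64(1)}
	}
	return &dto.MetricFamily{
		Name:   proto.String("demo_metric"),
		Help:   proto.String("A demo metric."),
		Type:   t.Enum(),
		Metric: []*dto.Metric{m},
	}
}

func main() {
	// Two gatherers expose demo_metric with conflicting types, so gathering
	// them together must fail, which is exactly how an inconsistent push
	// gets rejected by checkWriteRequest.
	tg := prometheus.Gatherers{
		prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) {
			return []*dto.MetricFamily{fam(dto.MetricType_GAUGE)}, nil
		}),
		prometheus.GathererFunc(func() ([]*dto.MetricFamily, error) {
			return []*dto.MetricFamily{fam(dto.MetricType_COUNTER)}, nil
		}),
	}
	if _, err := tg.Gather(); err != nil {
		fmt.Println("rejected:", err)
	}
}
```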
+func sanitizeLabels(mf *dto.MetricFamily, groupingLabels map[string]string) { + gLabelsNotYetDone := make(map[string]string, len(groupingLabels)) + +metric: + for _, m := range mf.GetMetric() { + for ln, lv := range groupingLabels { + gLabelsNotYetDone[ln] = lv + } + hasInstanceLabel := false + for _, lp := range m.GetLabel() { + ln := lp.GetName() + if lv, ok := gLabelsNotYetDone[ln]; ok { + lp.Value = proto.String(lv) + delete(gLabelsNotYetDone, ln) + } + if ln == string(model.InstanceLabel) { + hasInstanceLabel = true + } + if len(gLabelsNotYetDone) == 0 && hasInstanceLabel { + sort.Sort(labelPairs(m.Label)) + continue metric + } + } + for ln, lv := range gLabelsNotYetDone { + m.Label = append(m.Label, &dto.LabelPair{ + Name: proto.String(ln), + Value: proto.String(lv), + }) + if ln == string(model.InstanceLabel) { + hasInstanceLabel = true + } + delete(gLabelsNotYetDone, ln) // To prepare map for next metric. + } + if !hasInstanceLabel { + m.Label = append(m.Label, &dto.LabelPair{ + Name: proto.String(string(model.InstanceLabel)), + Value: proto.String(""), + }) + } + sort.Sort(labelPairs(m.Label)) + } +} + +// Checks if any timestamps have been specified. +func timestampsPresent(metricFamilies map[string]*dto.MetricFamily) bool { + for _, mf := range metricFamilies { + for _, m := range mf.GetMetric() { + if m.TimestampMs != nil { + return true + } + } + } + return false +} + +// labelPairs implements sort.Interface. It provides a sortable version of a +// slice of dto.LabelPair pointers. +type labelPairs []*dto.LabelPair + +func (s labelPairs) Len() int { + return len(s) +} + +func (s labelPairs) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s labelPairs) Less(i, j int) bool { + return s[i].GetName() < s[j].GetName() +} diff --git a/storage/diskmetricstore_test.go b/storage/diskmetricstore_test.go index b93ab0bc..214b14f2 100644 --- a/storage/diskmetricstore_test.go +++ b/storage/diskmetricstore_test.go @@ -33,26 +33,25 @@ import ( var ( logger = log.NewNopLogger() - // Example metric families. + // Example metric families. Keep labels sorted lexicographically! 
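The test fixtures that follow are now kept with lexicographically sorted label pairs because `sanitizeLabels` sorts the labels of everything the store returns, so the reference families presumably have to be in the same order to compare equal in the assertions. A small sketch of that ordering using `sort.Slice` (the production code uses the `labelPairs` helper shown above); the label values are just an example:

```go
package main

import (
	"fmt"
	"sort"

	"github.com/golang/protobuf/proto"
	dto "github.com/prometheus/client_model/go"
)

func main() {
	labels := []*dto.LabelPair{
		{Name: proto.String("job"), Value: proto.String("job1")},
		{Name: proto.String("instance"), Value: proto.String("instance2")},
	}
	// Sort by label name, the same ordering sanitizeLabels applies via labelPairs.
	sort.Slice(labels, func(i, j int) bool {
		return labels[i].GetName() < labels[j].GetName()
	})
	for _, lp := range labels {
		fmt.Printf("%s=%q\n", lp.GetName(), lp.GetValue()) // instance first, then job.
	}
}
```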
mf1a = &dto.MetricFamily{ Name: proto.String("mf1"), Type: dto.MetricType_UNTYPED.Enum(), Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job1"), - }, { Name: proto.String("instance"), Value: proto.String("instance2"), }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, }, Untyped: &dto.Untyped{ Value: proto.Float64(-3e3), }, - TimestampMs: proto.Int64(103948), }, }, } @@ -62,14 +61,14 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job1"), - }, { Name: proto.String("instance"), Value: proto.String("instance2"), }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, }, Untyped: &dto.Untyped{ Value: proto.Float64(42), @@ -83,14 +82,14 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job2"), - }, { Name: proto.String("instance"), Value: proto.String("instance1"), }, + { + Name: proto.String("job"), + Value: proto.String("job2"), + }, }, Untyped: &dto.Untyped{ Value: proto.Float64(42), @@ -104,13 +103,30 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String("instance2"), + }, { Name: proto.String("job"), Value: proto.String("job3"), }, + }, + Untyped: &dto.Untyped{ + Value: proto.Float64(42), + }, + }, + }, + } + mf1e = &dto.MetricFamily{ + Name: proto.String("mf1"), + Type: dto.MetricType_UNTYPED.Enum(), + Metric: []*dto.Metric{ + { + Label: []*dto.LabelPair{ { - Name: proto.String("instance"), - Value: proto.String("instance2"), + Name: proto.String("job"), + Value: proto.String("job1"), }, }, Untyped: &dto.Untyped{ @@ -126,29 +142,65 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String("instance2"), + }, { Name: proto.String("job"), Value: proto.String("job1"), }, + }, + Untyped: &dto.Untyped{ + Value: proto.Float64(-3e3), + }, + }, + { + Label: []*dto.LabelPair{ { Name: proto.String("instance"), - Value: proto.String("instance2"), + Value: proto.String("instance1"), + }, + { + Name: proto.String("job"), + Value: proto.String("job2"), }, }, Untyped: &dto.Untyped{ - Value: proto.Float64(-3e3), + Value: proto.Float64(42), }, - TimestampMs: proto.Int64(103948), }, { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String("instance2"), + }, { Name: proto.String("job"), - Value: proto.String("job2"), + Value: proto.String("job3"), }, + }, + Untyped: &dto.Untyped{ + Value: proto.Float64(42), + }, + }, + }, + } + // mf1be is merged from mf1b and mf1e, with added empty instance label for mf1e. + mf1be = &dto.MetricFamily{ + Name: proto.String("mf1"), + Type: dto.MetricType_UNTYPED.Enum(), + Metric: []*dto.Metric{ + { + Label: []*dto.LabelPair{ { Name: proto.String("instance"), - Value: proto.String("instance1"), + Value: proto.String("instance2"), + }, + { + Name: proto.String("job"), + Value: proto.String("job1"), }, }, Untyped: &dto.Untyped{ @@ -157,18 +209,41 @@ var ( }, { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), - Value: proto.String("job3"), + Value: proto.String("job1"), }, + }, + Untyped: &dto.Untyped{ + Value: proto.Float64(42), + }, + }, + }, + } + // mf1ts is mf1a with a timestamp set. 
+ mf1ts = &dto.MetricFamily{ + Name: proto.String("mf1"), + Type: dto.MetricType_UNTYPED.Enum(), + Metric: []*dto.Metric{ + { + Label: []*dto.LabelPair{ { Name: proto.String("instance"), Value: proto.String("instance2"), }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, }, Untyped: &dto.Untyped{ - Value: proto.Float64(42), + Value: proto.Float64(-3e3), }, + TimestampMs: proto.Int64(103948), }, }, } @@ -180,37 +255,36 @@ var ( { Label: []*dto.LabelPair{ { - Name: proto.String("job"), - Value: proto.String("job1"), + Name: proto.String("basename"), + Value: proto.String("basevalue2"), }, { Name: proto.String("instance"), Value: proto.String("instance2"), }, { - Name: proto.String("labelname"), - Value: proto.String("val2"), + Name: proto.String("job"), + Value: proto.String("job1"), }, { - Name: proto.String("basename"), - Value: proto.String("basevalue2"), + Name: proto.String("labelname"), + Value: proto.String("val2"), }, }, Gauge: &dto.Gauge{ Value: proto.Float64(math.Inf(+1)), }, - TimestampMs: proto.Int64(54321), }, { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job1"), - }, { Name: proto.String("instance"), Value: proto.String("instance2"), }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, { Name: proto.String("labelname"), Value: proto.String("val1"), @@ -228,14 +302,14 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job1"), - }, { Name: proto.String("instance"), Value: proto.String("instance1"), }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, }, Untyped: &dto.Untyped{ Value: proto.Float64(42), @@ -249,14 +323,14 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job3"), - }, { Name: proto.String("instance"), Value: proto.String("instance2"), }, + { + Name: proto.String("job"), + Value: proto.String("job3"), + }, }, Untyped: &dto.Untyped{ Value: proto.Float64(3.4345), @@ -270,14 +344,14 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ - { - Name: proto.String("job"), - Value: proto.String("job5"), - }, { Name: proto.String("instance"), Value: proto.String("instance5"), }, + { + Name: proto.String("job"), + Value: proto.String("job5"), + }, }, Summary: &dto.Summary{ SampleCount: proto.Uint64(0), @@ -293,6 +367,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job1"), @@ -311,6 +389,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job2"), @@ -330,6 +412,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job1"), @@ -341,6 +427,10 @@ var ( }, { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job2"), @@ -360,6 +450,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job1"), @@ -371,6 +465,10 @@ var ( }, { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: 
proto.String("job2"), @@ -382,6 +480,7 @@ var ( }, }, } + // mfgg is the usual go_goroutines gauge but with a different help text. mfgg = &dto.MetricFamily{ Name: proto.String("go_goroutines"), Help: proto.String("Inconsistent doc string, fixed version in mfggFixed."), @@ -389,6 +488,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job1"), @@ -400,6 +503,29 @@ var ( }, }, } + // mfgc is the usual go_goroutines metric but mistyped as counter. + mfgc = &dto.MetricFamily{ + Name: proto.String("go_goroutines"), + Help: proto.String("Number of goroutines that currently exist."), + Type: dto.MetricType_COUNTER.Enum(), + Metric: []*dto.Metric{ + { + Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, + { + Name: proto.String("job"), + Value: proto.String("job1"), + }, + }, + Counter: &dto.Counter{ + Value: proto.Float64(5), + }, + }, + }, + } mfggFixed = &dto.MetricFamily{ Name: proto.String("go_goroutines"), Help: proto.String("Number of goroutines that currently exist."), @@ -407,6 +533,10 @@ var ( Metric: []*dto.Metric{ { Label: []*dto.LabelPair{ + { + Name: proto.String("instance"), + Value: proto.String(""), + }, { Name: proto.String("job"), Value: proto.String("job1"), @@ -420,6 +550,27 @@ var ( } ) +// metricFamiliesMap creates the map needed in the MetricFamilies field of a +// WriteRequest from the provided reference metric families. While doing so, it +// creates deep copies of the metric families so that modifications that might +// happen during processing of the WriteRequest will not affect the reference +// metric families. +func metricFamiliesMap(mfs ...*dto.MetricFamily) map[string]*dto.MetricFamily { + m := map[string]*dto.MetricFamily{} + for _, mf := range mfs { + buf, err := proto.Marshal(mf) + if err != nil { + panic(err) + } + mfCopy := &dto.MetricFamily{} + if err := proto.Unmarshal(buf, mfCopy); err != nil { + panic(err) + } + m[mf.GetName()] = mfCopy + } + return m +} + func addGroup( mg GroupingKeyToMetricGroup, groupingLabels map[string]string, @@ -529,61 +680,106 @@ func TestAddDeletePersistRestore(t *testing.T) { // Submit a single simple metric family. ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + "instance": "instance1", + } + errCh := make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job1", - "instance": "instance1", - }, + Labels: grouping1, Timestamp: ts1, - MetricFamilies: map[string]*dto.MetricFamily{"mf3": mf3}, + MetricFamilies: metricFamiliesMap(mf3), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf3); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp := newPushTimestampGauge(grouping1, ts1) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, time.Time{}) + if err := checkMetricFamilies( + dms, mf3, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Submit two metric families for a different instance. 
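The new `metricFamiliesMap` helper above deep-copies each reference family via a protobuf marshal/unmarshal round trip so that the store's in-place sanitizing cannot alter the fixtures. A minimal sketch of the same round trip with a made-up family; `proto.Clone` would be an alternative, but the marshal/unmarshal form mirrors the helper:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	dto "github.com/prometheus/client_model/go"
)

// deepCopy returns an independent copy of mf via a marshal/unmarshal round trip.
func deepCopy(mf *dto.MetricFamily) *dto.MetricFamily {
	buf, err := proto.Marshal(mf)
	if err != nil {
		panic(err)
	}
	out := &dto.MetricFamily{}
	if err := proto.Unmarshal(buf, out); err != nil {
		panic(err)
	}
	return out
}

func main() {
	orig := &dto.MetricFamily{
		Name: proto.String("demo_metric"),
		Type: dto.MetricType_UNTYPED.Enum(),
		Metric: []*dto.Metric{
			{Untyped: &dto.Untyped{Value: proto.Float64(42)}},
		},
	}
	clone := deepCopy(orig)
	clone.Metric[0].Untyped.Value = proto.Float64(7)
	fmt.Println(orig.Metric[0].Untyped.GetValue()) // Still 42: the copy is independent.
}
```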
ts2 := ts1.Add(time.Second) + grouping2 := map[string]string{ + "job": "job1", + "instance": "instance2", + } + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job1", - "instance": "instance2", - }, + Labels: grouping2, Timestamp: ts2, - MetricFamilies: map[string]*dto.MetricFamily{"mf1": mf1b, "mf2": mf2}, + MetricFamilies: metricFamiliesMap(mf1b, mf2), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1b, mf2, mf3); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping2, ts2).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping2, time.Time{}).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1b, mf2, mf3, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } - + for err := range errCh { + t.Fatal("Unexpected error:", err) + } // Submit a metric family with the same name for the same job/instance again. // Should overwrite the previous metric family for the same job/instance ts3 := ts2.Add(time.Second) + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job1", - "instance": "instance2", - }, + Labels: grouping2, Timestamp: ts3, - MetricFamilies: map[string]*dto.MetricFamily{"mf1": mf1a}, + MetricFamilies: metricFamiliesMap(mf1a), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2, mf3); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp.Metric[1] = newPushTimestampGauge(grouping2, ts3).Metric[0] + if err := checkMetricFamilies( + dms, mf1a, mf2, mf3, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Add a new group by job, with a summary without any observations yet. ts4 := ts3.Add(time.Second) + grouping4 := map[string]string{ + "job": "job5", + } + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job5", - }, + Labels: grouping4, Timestamp: ts4, - MetricFamilies: map[string]*dto.MetricFamily{"mf5": mf5}, + MetricFamilies: metricFamiliesMap(mf5), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2, mf3, mf5); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping4, ts4).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping4, time.Time{}).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1a, mf2, mf3, mf5, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } @@ -594,7 +790,10 @@ func TestAddDeletePersistRestore(t *testing.T) { // Load it again. dms = NewDiskMetricStore(fileName, 100*time.Millisecond, nil, logger) - if err := checkMetricFamilies(dms, mf1a, mf2, mf3, mf5); err != nil { + if err := checkMetricFamilies( + dms, mf1a, mf2, mf3, mf5, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Spot-check timestamp. 
@@ -613,59 +812,92 @@ func TestAddDeletePersistRestore(t *testing.T) { "instance": "instance1", }, }) + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ Labels: map[string]string{ "job": "job5", }, + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping2, ts3) + pushFailedTimestamp = newPushFailedTimestampGauge(grouping2, time.Time{}) + if err := checkMetricFamilies( + dms, mf1a, mf2, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Submit another one. ts5 := ts4.Add(time.Second) + grouping5 := map[string]string{ + "job": "job3", + "instance": "instance2", + } + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job3", - "instance": "instance2", - }, + Labels: grouping5, Timestamp: ts5, - MetricFamilies: map[string]*dto.MetricFamily{"mf4": mf4}, + MetricFamilies: metricFamiliesMap(mf4), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2, mf4); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping5, ts5).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping5, time.Time{}).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1a, mf2, mf4, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Delete a job does not remove anything because there is no suitable // grouping. + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ Labels: map[string]string{ "job": "job1", }, + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2, mf4); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + if err := checkMetricFamilies( + dms, mf1a, mf2, mf4, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Delete another group. + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job3", - "instance": "instance2", - }, + Labels: grouping5, + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf1a, mf2); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping2, ts3) + pushFailedTimestamp = newPushFailedTimestampGauge(grouping2, time.Time{}) + if err := checkMetricFamilies( + dms, mf1a, mf2, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Check that no empty map entry for job3 was left behind. - if _, stillExists := dms.metricGroups[model.LabelsToSignature(map[string]string{ - "job": "job3", - "instance": "instance2", - })]; stillExists { + if _, stillExists := dms.metricGroups[model.LabelsToSignature(grouping5)]; stillExists { t.Error("An instance map for 'job3' still exists.") } @@ -673,18 +905,24 @@ func TestAddDeletePersistRestore(t *testing.T) { // (to check draining). 
for i := 0; i < 10; i++ { dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job3", - "instance": "instance2", - }, - Timestamp: ts4, - MetricFamilies: map[string]*dto.MetricFamily{"mf4": mf4}, + Labels: grouping5, + Timestamp: ts5, + MetricFamilies: metricFamiliesMap(mf4), }) } if err := dms.Shutdown(); err != nil { t.Fatal(err) } - if err := checkMetricFamilies(dms, mf1a, mf2, mf4); err != nil { + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping5, ts5).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping5, time.Time{}).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1a, mf2, mf4, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } } @@ -693,16 +931,26 @@ func TestNoPersistence(t *testing.T) { dms := NewDiskMetricStore("", 100*time.Millisecond, nil, logger) ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + "instance": "instance1", + } + errCh := make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ - Labels: map[string]string{ - "job": "job1", - "instance": "instance1", - }, + Labels: grouping1, Timestamp: ts1, - MetricFamilies: map[string]*dto.MetricFamily{"mf3": mf3}, + MetricFamilies: metricFamiliesMap(mf3), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf3); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp := newPushTimestampGauge(grouping1, ts1) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, time.Time{}) + if err := checkMetricFamilies( + dms, mf3, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } @@ -724,6 +972,340 @@ func TestNoPersistence(t *testing.T) { } } +func TestRejectTimestamps(t *testing.T) { + dms := NewDiskMetricStore("", 100*time.Millisecond, nil, logger) + + ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + "instance": "instance1", + } + errCh := make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts1, + MetricFamilies: metricFamiliesMap(mf1ts), + Done: errCh, + }) + var err error + for err = range errCh { + if err != errTimestamp { + t.Errorf("Expected error %q, got %q.", errTimestamp, err) + } + } + if err == nil { + t.Error("Expected error on pushing metric with timestamp.") + } + pushTimestamp := newPushTimestampGauge(grouping1, time.Time{}) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, ts1) + if err := checkMetricFamilies( + dms, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + if err := dms.Shutdown(); err != nil { + t.Fatal(err) + } +} + +func TestRejectInconsistentPush(t *testing.T) { + dms := NewDiskMetricStore("", 100*time.Millisecond, nil, logger) + + ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + } + errCh := make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts1, + MetricFamilies: metricFamiliesMap(mfgc), + Done: errCh, + }) + var err error + for err = range errCh { + } + if err == nil { + t.Error("Expected error pushing inconsistent go_goroutines metric.") + } + pushTimestamp := newPushTimestampGauge(grouping1, time.Time{}) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, ts1) + if err := checkMetricFamilies( + dms, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + ts2 := ts1.Add(time.Second) + errCh = 
make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts2, + MetricFamilies: metricFamiliesMap(mf1a), + Done: errCh, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping1, ts2) + if err := checkMetricFamilies( + dms, mf1a, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + ts3 := ts2.Add(time.Second) + grouping3 := map[string]string{ + "job": "job1", + "instance": "instance2", + } + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping3, + Timestamp: ts3, + MetricFamilies: metricFamiliesMap(mf1b), + Done: errCh, + }) + err = nil + for err = range errCh { + } + if err == nil { + t.Error("Expected error pushing duplicate mf1 metric.") + } + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping3, time.Time{}).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping3, ts3).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1a, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + if err := dms.Shutdown(); err != nil { + t.Fatal(err) + } +} + +func TestSanitizeLabels(t *testing.T) { + dms := NewDiskMetricStore("", 100*time.Millisecond, nil, logger) + + // Push mf1c with the grouping matching mf1b, mf1b should end up in storage. + ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + "instance": "instance2", + } + errCh := make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts1, + MetricFamilies: metricFamiliesMap(mf1c), + Done: errCh, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp := newPushTimestampGauge(grouping1, ts1) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, time.Time{}) + if err := checkMetricFamilies( + dms, mf1b, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Push mf1e, missing the instance label. Again, mf1b should end up in storage. + ts2 := ts1.Add(1) + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts2, + MetricFamilies: metricFamiliesMap(mf1e), + Done: errCh, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping1, ts2) + if err := checkMetricFamilies( + dms, mf1b, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Push mf1e, missing the instance label, into a grouping without the + // instance label. The result in the storage should have an empty + // instance label. 
+ ts3 := ts2.Add(1) + grouping3 := map[string]string{ + "job": "job1", + } + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping3, + Timestamp: ts3, + MetricFamilies: metricFamiliesMap(mf1e), + Done: errCh, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp.Metric = append( + pushTimestamp.Metric, newPushTimestampGauge(grouping3, ts3).Metric[0], + ) + pushFailedTimestamp.Metric = append( + pushFailedTimestamp.Metric, newPushFailedTimestampGauge(grouping3, time.Time{}).Metric[0], + ) + if err := checkMetricFamilies( + dms, mf1be, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + +} + +func TestReplace(t *testing.T) { + dms := NewDiskMetricStore("", 100*time.Millisecond, nil, logger) + + // First do an invalid push to set pushFailedTimestamp and to later + // verify that it is retained and not replaced. + ts1 := time.Now() + grouping1 := map[string]string{ + "job": "job1", + } + errCh := make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts1, + MetricFamilies: metricFamiliesMap(mf1ts), + Done: errCh, + }) + var err error + for err = range errCh { + if err != errTimestamp { + t.Errorf("Expected error %q, got %q.", errTimestamp, err) + } + } + if err == nil { + t.Error("Expected error on pushing metric with timestamp.") + } + pushTimestamp := newPushTimestampGauge(grouping1, time.Time{}) + pushFailedTimestamp := newPushFailedTimestampGauge(grouping1, ts1) + if err := checkMetricFamilies( + dms, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Now a valid update in replace mode. It doesn't replace anything, but + // it already tests that the push-failed timestamp is retained. + ts2 := ts1.Add(time.Second) + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts2, + MetricFamilies: metricFamiliesMap(mf1a), + Done: errCh, + Replace: true, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping1, ts2) + if err := checkMetricFamilies( + dms, mf1a, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Now push something else in replace mode that should replace mf1. + ts3 := ts2.Add(time.Second) + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts3, + MetricFamilies: metricFamiliesMap(mf2), + Done: errCh, + Replace: true, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping1, ts3) + if err := checkMetricFamilies( + dms, mf2, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Another invalid push in replace mode, which should only update the + // push-failed timestamp. 
+ ts4 := ts3.Add(time.Second) + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts4, + MetricFamilies: metricFamiliesMap(mf1ts), + Done: errCh, + Replace: true, + }) + err = nil + for err = range errCh { + if err != errTimestamp { + t.Errorf("Expected error %q, got %q.", errTimestamp, err) + } + } + if err == nil { + t.Error("Expected error on pushing metric with timestamp.") + } + pushFailedTimestamp = newPushFailedTimestampGauge(grouping1, ts4) + if err := checkMetricFamilies( + dms, mf2, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + // Push an empty map (rather than a nil map) in replace mode. Should + // delete everything except the push timestamps. + ts5 := ts4.Add(time.Second) + errCh = make(chan error, 1) + dms.SubmitWriteRequest(WriteRequest{ + Labels: grouping1, + Timestamp: ts5, + MetricFamilies: metricFamiliesMap(), + Done: errCh, + Replace: true, + }) + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp = newPushTimestampGauge(grouping1, ts5) + if err := checkMetricFamilies( + dms, + pushTimestamp, pushFailedTimestamp, + ); err != nil { + t.Error(err) + } + + if err := dms.Shutdown(); err != nil { + t.Fatal(err) + } +} + func TestGetMetricFamiliesMap(t *testing.T) { tempDir, err := ioutil.TempDir("", "diskmetricstore.TestGetMetricFamiliesMap.") if err != nil { @@ -740,7 +1322,7 @@ func TestGetMetricFamiliesMap(t *testing.T) { } labels2 := map[string]string{ - "job": "job2", + "job": "job1", "instance": "instance2", } @@ -749,34 +1331,52 @@ func TestGetMetricFamiliesMap(t *testing.T) { // Submit a single simple metric family. ts1 := time.Now() + errCh := make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ Labels: labels1, Timestamp: ts1, - MetricFamilies: map[string]*dto.MetricFamily{"mf3": mf3}, + MetricFamilies: metricFamiliesMap(mf3), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. - if err := checkMetricFamilies(dms, mf3); err != nil { + for err := range errCh { + t.Fatal("Unexpected error:", err) + } + pushTimestamp := newPushTimestampGauge(labels1, ts1) + pushFailedTimestamp := newPushFailedTimestampGauge(labels1, time.Time{}) + if err := checkMetricFamilies( + dms, mf3, + pushTimestamp, pushFailedTimestamp, + ); err != nil { t.Error(err) } // Submit two metric families for a different instance. ts2 := ts1.Add(time.Second) + errCh = make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ Labels: labels2, Timestamp: ts2, - MetricFamilies: map[string]*dto.MetricFamily{"mf1": mf1b, "mf2": mf2}, + MetricFamilies: metricFamiliesMap(mf1b, mf2), + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. + for err := range errCh { + t.Fatal("Unexpected error:", err) + } - // expectedMFMap is a multi-layered map that maps the labelset fingerprints to the corresponding metric family string representations. - // This is for test assertion purposes. + // expectedMFMap is a multi-layered map that maps the labelset + // fingerprints to the corresponding metric family string + // representations. This is for test assertion purposes. 
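The `expectedMFMap` assertions that follow are keyed by the same label-set fingerprints the store uses as grouping keys, computed with `model.LabelsToSignature` from `prometheus/common`. A tiny illustration of that call, with example labels:

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
)

func main() {
	// The same function the store uses to derive a grouping key from the
	// grouping labels of a push.
	key := model.LabelsToSignature(map[string]string{
		"job":      "job1",
		"instance": "instance2",
	})
	fmt.Printf("grouping key: %d\n", key) // A stable uint64 for this label set.
}
```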
expectedMFMap := map[uint64]map[string]string{ ls1: { - "mf3": mf3.String(), + "mf3": mf3.String(), + pushMetricName: pushTimestamp.String(), + pushFailedMetricName: pushFailedTimestamp.String(), }, ls2: { - "mf1": mf1b.String(), - "mf2": mf2.String(), + "mf1": mf1b.String(), + "mf2": mf2.String(), + pushMetricName: newPushTimestampGauge(labels2, ts2).String(), + pushFailedMetricName: newPushFailedTimestampGauge(labels2, time.Time{}).String(), }, } @@ -789,6 +1389,7 @@ func TestHelpStringFix(t *testing.T) { dms := NewDiskMetricStore("", 100*time.Millisecond, prometheus.DefaultGatherer, logger) ts1 := time.Now() + errCh := make(chan error, 1) dms.SubmitWriteRequest(WriteRequest{ Labels: map[string]string{ "job": "job1", @@ -807,13 +1408,16 @@ func TestHelpStringFix(t *testing.T) { MetricFamilies: map[string]*dto.MetricFamily{ "mf_help": mfh2, }, + Done: errCh, }) - time.Sleep(20 * time.Millisecond) // Give loop() time to process. + for err := range errCh { + t.Fatal("Unexpected error:", err) + } - // Either we have settle on the mfh1 help string or the mfh2 help string. + // Either we have settled on the mfh1 help string or the mfh2 help string. gotMFs := dms.GetMetricFamilies() - if len(gotMFs) != 2 { - t.Fatalf("expected 2 metric families, got %d", len(gotMFs)) + if len(gotMFs) != 4 { + t.Fatalf("expected 4 metric families, got %d", len(gotMFs)) } gotMFsAsStrings := make([]string, len(gotMFs)) for i, mf := range gotMFs { diff --git a/storage/interface.go b/storage/interface.go index dc31aba4..50508d1b 100644 --- a/storage/interface.go +++ b/storage/interface.go @@ -35,17 +35,17 @@ type MetricStore interface { // so the caller is not allowed to modify the returned MetricFamilies. // If different groups have saved MetricFamilies of the same name, they // are all merged into one MetricFamily by concatenating the contained - // Metrics. Inconsistent help strings or types are logged, and one of - // the versions will "win". Inconsistent and duplicate label sets will - // go undetected. + // Metrics. Inconsistent help strings are logged, and one of the + // versions will "win". Inconsistent types and inconsistent or duplicate + // label sets will go undetected. GetMetricFamilies() []*dto.MetricFamily // GetMetricFamiliesMap returns a map grouping-key -> MetricGroup. The // MetricFamily pointed to by the Metrics map in each MetricGroup is // guaranteed to not be modified by the MetricStore anymore. However, // they may still be read somewhere else, so the caller is not allowed - // to modify it. Otherwise, the returned nested map is a deep copy of - // the internal state of the MetricStore and completely owned by the - // caller. + // to modify it. Otherwise, the returned nested map can be seen as a + // deep copy of the internal state of the MetricStore and completely + // owned by the caller. GetMetricFamiliesMap() GroupingKeyToMetricGroup // Shutdown must only be called after the caller has made sure that // SubmitWriteRequests is not called anymore. (If it is called later, @@ -68,18 +68,38 @@ type MetricStore interface { } // WriteRequest is a request to change the MetricStore, i.e. to process it, a -// write lock has to be acquired. If MetricFamilies is nil, this is a request to -// delete metrics that share the given Labels as a grouping key. Otherwise, this -// is a request to update the MetricStore with the MetricFamilies. The key in -// MetricFamilies is the name of the mapped metric family. 
All metrics in -// MetricFamilies MUST have already set job and other labels that are consistent -// with the Labels fields. The Timestamp field marks the time the request was -// received from the network. It is not related to the timestamp_ms field in the -// Metric proto message. +// write lock has to be acquired. +// +// If MetricFamilies is nil, this is a request to delete metrics that share the +// given Labels as a grouping key. Otherwise, this is a request to update the +// MetricStore with the MetricFamilies. +// +// If Replace is true, the MetricFamilies will completely replace the metrics +// with the same grouping key. Otherwise, only those MetricFamilies with the +// same name as new MetricFamilies will be replaced. +// +// The key in MetricFamilies is the name of the mapped metric family. +// +// When the WriteRequest is processed, the metrics in MetricFamilies will be +// sanitized to have the same job and other labels as those in the Labels +// fields. Also, if there is no instance label, an instance label with an empty +// value will be set. This implies that the MetricFamilies in the WriteRequest +// may be modified by the MetricStore during processing of the WriteRequest! +// +// The Timestamp field marks the time the request was received from the +// network. It is not related to the TimestampMs field in the Metric proto +// message. In fact, WriteRequests containing any Metrics with a TimestampMs set +// are invalid and will be rejected. +// +// The Done channel may be nil. If it is not nil, it will be closed once the +// write request is processed. Any errors occurring during processing are sent to +// the channel before closing it. type WriteRequest struct { Labels map[string]string Timestamp time.Time MetricFamilies map[string]*dto.MetricFamily + Replace bool + Done chan error } // GroupingKeyToMetricGroup is the first level of the metric store, keyed by @@ -107,6 +127,23 @@ func (mg MetricGroup) SortedLabels() []string { return lns } +// LastPushSuccess returns false if the automatically added metric for the +// timestamp of the last failed push has a value larger than the value of the +// automatically added metric for the timestamp of the last successful push. In +// all other cases, it returns true (including the case that one or both of +// those metrics are missing for some reason). +func (mg MetricGroup) LastPushSuccess() bool { + fail := mg.Metrics[pushFailedMetricName].GobbableMetricFamily + if fail == nil { + return true + } + success := mg.Metrics[pushMetricName].GobbableMetricFamily + if success == nil { + return true + } + return (*dto.MetricFamily)(fail).GetMetric()[0].GetGauge().GetValue() <= (*dto.MetricFamily)(success).GetMetric()[0].GetGauge().GetValue() +} + // NameToTimestampedMetricFamilyMap is the second level of the metric store, // keyed by metric name. type NameToTimestampedMetricFamilyMap map[string]TimestampedMetricFamily
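To close, a hedged sketch of how a consumer (for example a UI handler) might use the new `MetricGroup.LastPushSuccess` helper, which simply compares the gauge values of the two automatically added timestamp metrics. The group is hand-assembled here for illustration, whereas in the Pushgateway these metrics are created by the store itself:

```go
package main

import (
	"fmt"
	"time"

	"github.com/golang/protobuf/proto"
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/pushgateway/storage"
)

// tsGauge wraps a Unix-timestamp gauge the way the store keeps it in a group.
func tsGauge(name string, t time.Time) storage.TimestampedMetricFamily {
	mf := &dto.MetricFamily{
		Name: proto.String(name),
		Type: dto.MetricType_GAUGE.Enum(),
		Metric: []*dto.Metric{
			{Gauge: &dto.Gauge{Value: proto.Float64(float64(t.UnixNano()) / 1e9)}},
		},
	}
	return storage.TimestampedMetricFamily{
		Timestamp:            t,
		GobbableMetricFamily: (*storage.GobbableMetricFamily)(mf),
	}
}

func main() {
	now := time.Now()
	mg := storage.MetricGroup{
		Labels: map[string]string{"job": "job1"},
		Metrics: storage.NameToTimestampedMetricFamilyMap{
			"push_time_seconds":         tsGauge("push_time_seconds", now),
			"push_failure_time_seconds": tsGauge("push_failure_time_seconds", now.Add(time.Minute)),
		},
	}
	// The last failure is newer than the last success, so this reports false.
	fmt.Println(mg.LastPushSuccess())
}
```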