[Date Prev][Date Next] [Thread Prev][Thread Next] [Date Index] [Thread Index]

Bug#793749: ITP: telegraf -- plugin-driven server agent for reporting metrics into InfluxDB



Control: tags -1 patch

Hi!

On Sun, 2015-07-26 at 23:32:17 -0400, Alexandre Viau wrote:
> Package: wnpp
> Severity: wishlist

> * Package name    : telegraf
>   Upstream Author : InfluxDB inc.
> * URL             : https://github.com/influxdb/telegraf
> * License         : Expat
>   Programming Lang: Go

Ok, all necessary parts have either bugs filed against existing packages
in Debian, or RFP filed with packaging patches. And are blocking this
bug.

Attached is a working and tested packaging delta against
<https://anonscm.debian.org/cgit/pkg-go/packages/telegraf.git/>. The
missing part is updating that repo to version 1.0.1, which is the
latest upstream release, and what I've been working against. Please
let me know if something smells fishy.

For the patches needed for telegraf, I'll be submitting them upstream,
but only once I've internally cleared the situation with the CLA.

Thanks,
Guillem
From cc6f2ab1a4f25530c46418edfb3eb79de2c67ce9 Mon Sep 17 00:00:00 2001
From: Guillem Jover <gjover@sipwise.com>
Date: Mon, 17 Oct 2016 23:32:53 +0200
Subject: [PATCH] Update packaging to 1.0.1

---
 debian/.gitignore                               |    7 +
 debian/changelog                                |   35 +-
 debian/control                                  |   98 +-
 debian/copyright                                |   11 +-
 debian/patches/excise-unavailable-plugins.patch |   97 ++
 debian/patches/fix-snmp-plugin.patch            |   33 +
 debian/patches/series                           |    4 +
 debian/patches/testsuite-no-network.patch       |  412 +++++++
 debian/patches/use-licenseful-module.patch      |   22 +
 debian/rules                                    |   33 +-
 debian/telegraf-dev.install                     |    1 +
 debian/telegraf.conf                            | 1456 +++++++++++++++++++++++
 debian/telegraf.dirs                            |    3 +
 debian/telegraf.init                            |  121 ++
 debian/telegraf.install                         |    2 +
 debian/telegraf.lintian-overrides               |    3 +
 debian/telegraf.logrotate                       |   10 +
 debian/telegraf.postinst                        |   42 +
 debian/telegraf.postrm                          |   28 +
 19 files changed, 2387 insertions(+), 31 deletions(-)
 create mode 100644 debian/.gitignore
 create mode 100644 debian/patches/excise-unavailable-plugins.patch
 create mode 100644 debian/patches/fix-snmp-plugin.patch
 create mode 100644 debian/patches/series
 create mode 100644 debian/patches/testsuite-no-network.patch
 create mode 100644 debian/patches/use-licenseful-module.patch
 create mode 100644 debian/telegraf-dev.install
 create mode 100644 debian/telegraf.conf
 create mode 100644 debian/telegraf.dirs
 create mode 100755 debian/telegraf.init
 create mode 100644 debian/telegraf.install
 create mode 100644 debian/telegraf.lintian-overrides
 create mode 100644 debian/telegraf.logrotate
 create mode 100644 debian/telegraf.postinst
 create mode 100644 debian/telegraf.postrm

diff --git a/debian/.gitignore b/debian/.gitignore
new file mode 100644
index 0000000..30f7739
--- /dev/null
+++ b/debian/.gitignore
@@ -0,0 +1,7 @@
+*.debhelper
+*.substvars
+*.log
+files
+tmp
+telegraf
+telegraf-dev
diff --git a/debian/changelog b/debian/changelog
index f7377c4..dff214e 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,5 +1,38 @@
-telegraf (0.1.8~dfsg1-1) UNRELEASED; urgency=low
+telegraf (1.0.1-1) UNRELEASED; urgency=low
 
+  [ Alexandre Viau ]
   * Initial release. (Closes: #793749)
 
+  [ Guillem Jover ]
+  * New upstream release 1.0.1.
+    - Update Build-Depends and telegraf-dev Depends.
+  * Fix SNMP plugin to build and pass the test suite.
+  * Fix test suite to avoid a flaky test, and to not assume service daemons
+    are running by keying on a new environment variable TELEGRAF_SERVERS_TEST,
+    which we do not set.
+  * Use the github.com/kballard/go-shellquote module instead of
+    github.com/gonuts/go-shellquote, which has no license as is not in Debian.
+  * Use dpkg Makefile fragments to get the upstream version, and set
+    main.version with it.
+  * Use current import path, and switch from DH_GOPKG to XS-Go-Import-Path.
+  * Wrap and sort (-ast) debian/control fields.
+  * Add a Built-Using field to the telegraf binary.
+  * Change the Architecture field for telegraf from all to any.
+  * Set the builddirectory to build.
+  * Set DH_GOLANG_INSTALL_EXTRA to the list of required testdata files.
+  * Set DH_GOLANG_EXCLUDES to the plugins that pull dependencies not present
+    in Debian, and that are peripheral. And disable those plugins from the
+    code with a patch. They can be re-enabled once the dependencies get
+    packaged in Debian.
+  * Use https in the debian/copyright Format field.
+  * Install a default telegraf configuration file.
+  * Install a logrotate file for telegraf.
+  * Create postinst and prerm maintaner scripts to handle creation of user,
+    groups and set ownership of directories.
+  * Create an SystemV init script.
+  * Remove shlibs:Depends substvar from telegraf-dev binary package.
+  * Add lsb-base (>= 3.0-6) and adduser to telegraf binary package.
+  * Add a lintian-overrides file for telegraf to override hardening tags that
+    are not fixable with the current Go policy in Debian.
+
  -- Alexandre Viau <alexandre@alexandreviau.net>  Sun, 26 Jul 2015 23:28:09 -0400
diff --git a/debian/control b/debian/control
index 5f982e6..e187d1c 100644
--- a/debian/control
+++ b/debian/control
@@ -4,27 +4,77 @@ Priority: extra
 Homepage: https://github.com/influxdb/telegraf
 Maintainer: pkg-go <pkg-go-maintainers@lists.alioth.debian.org>
 Uploaders: Alexandre Viau <alexandre@alexandreviau.net>
-Build-Depends: debhelper (>= 9),
-               dh-golang,
-               golang-go,
-               influxdb-dev,
-               golang-github-go-sql-driver-mysql-dev,
-               golang-pq-dev | golang-github-lib-pq-dev,
-               golang-gopkg-dancannon-gorethink.v1-dev,
-               golang-github-naoina-toml-dev
-Standards-Version: 3.9.6
-Vcs-Git: git://anonscm.debian.org/pkg-go/packages/telegraf.git
-Vcs-Browser: http://anonscm.debian.org/cgit/pkg-go/packages/telegraf.git
+Build-Depends:
+ debhelper (>= 9),
+ dh-golang,
+ golang-go,
+ golang-eclipse-paho-dev,
+ golang-github-aws-aws-sdk-go-dev,
+ golang-github-docker-engine-api-dev,
+ golang-github-go-sql-driver-mysql-dev,
+ golang-github-gobwas-glob-dev,
+ golang-github-gorilla-mux-dev,
+ golang-github-hashicorp-consul-dev,
+ golang-github-hpcloud-tail-dev,
+ golang-github-influxdata-config-dev,
+ golang-github-influxdata-toml-dev,
+ golang-github-influxdb-influxdb-dev,
+ golang-github-kardianos-service-dev,
+ golang-github-kballard-go-shellquote-dev,
+ golang-github-lib-pq-dev | golang-pq-dev,
+ golang-github-miekg-dns-dev,
+ golang-github-prometheus-client-golang-dev,
+ golang-github-prometheus-client-model-dev,
+ golang-github-prometheus-common-dev,
+ golang-github-shirou-gopsutil-dev,
+ golang-github-shopify-sarama-dev,
+ golang-github-soniah-gosnmp-dev,
+ golang-github-streadway-amqp-dev,
+ golang-github-stretchr-testify-dev,
+ golang-github-vjeantet-grok-dev,
+ golang-golang-x-net-dev,
+ golang-gopkg-dancannon-gorethink.v1-dev,
+ golang-gopkg-mgo.v2-dev,
+ golang-gopkg-yaml.v2-dev,
+ golang-protobuf-extensions-dev,
+Standards-Version: 3.9.8
+Vcs-Git: https://anonscm.debian.org/pkg-go/packages/telegraf.git
+Vcs-Browser: https://anonscm.debian.org/cgit/pkg-go/packages/telegraf.git
+XS-Go-Import-Path: github.com/influxdata/telegraf
 
 Package: telegraf-dev
 Architecture: all
-Depends: ${shlibs:Depends},
-         ${misc:Depends},
-         influxdb-dev,
-         golang-github-go-sql-driver-mysql-dev,
-         golang-pq-dev | golang-github-lib-pq-dev,
-         golang-gopkg-dancannon-gorethink.v1-dev,
-         golang-github-naoina-toml-dev
+Depends:
+ ${misc:Depends},
+ influxdb-dev,
+ golang-eclipse-paho-dev,
+ golang-github-aws-aws-sdk-go-dev,
+ golang-github-docker-engine-api-dev,
+ golang-github-go-sql-driver-mysql-dev,
+ golang-github-gobwas-glob-dev,
+ golang-github-gorilla-mux-dev,
+ golang-github-hashicorp-consul-dev,
+ golang-github-hpcloud-tail-dev,
+ golang-github-influxdata-config-dev,
+ golang-github-influxdata-toml-dev,
+ golang-github-kardianos-service-dev,
+ golang-github-kballard-go-shellquote-dev,
+ golang-github-lib-pq-dev | golang-pq-dev,
+ golang-github-miekg-dns-dev,
+ golang-github-prometheus-client-golang-dev,
+ golang-github-prometheus-client-model-dev,
+ golang-github-prometheus-common-dev,
+ golang-github-shirou-gopsutil-dev,
+ golang-github-shopify-sarama-dev,
+ golang-github-soniah-gosnmp-dev,
+ golang-github-streadway-amqp-dev,
+ golang-github-stretchr-testify-dev,
+ golang-github-vjeantet-grok-dev,
+ golang-golang-x-net-dev,
+ golang-gopkg-dancannon-gorethink.v1-dev,
+ golang-gopkg-mgo.v2-dev,
+ golang-gopkg-yaml.v2-dev,
+ golang-protobuf-extensions-dev,
 Description: plugin-driven server agent for reporting metrics into InfluxDB. Dev package
  Telegraf is an agent written in Go for collecting metrics from the system
  it's running on or from other services and writing them into InfluxDB.
@@ -36,9 +86,13 @@ Description: plugin-driven server agent for reporting metrics into InfluxDB. Dev
  This is the dev package
 
 Package: telegraf
-Architecture: all
-Depends: ${shlibs:Depends},
-         ${misc:Depends}
+Architecture: any
+Depends:
+ ${shlibs:Depends},
+ ${misc:Depends},
+ lsb-base (>= 3.0-6),
+ adduser,
+Built-Using: ${misc:Built-Using}
 Description: plugin-driven server agent for reporting metrics into InfluxDB
  Telegraf is an agent written in Go for collecting metrics from the system
  it's running on or from other services and writing them into InfluxDB.
@@ -46,5 +100,3 @@ Description: plugin-driven server agent for reporting metrics into InfluxDB
  so that developers in the community can easily add support for collecting
  metrics from well known services (like Hadoop, or Postgres, or Redis) and
  third party APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
-
-
diff --git a/debian/copyright b/debian/copyright
index 17dcf98..031cfd3 100644
--- a/debian/copyright
+++ b/debian/copyright
@@ -1,14 +1,17 @@
-Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Name: telegraf
+Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
 Source: https://github.com/influxdb/telegraf
+Upstream-Name: telegraf
 Files-Excluded: Godeps/*
 
 Files: *
-Copyright: 2015 InfluxDB
+Copyright:
+ Copyright © 2015, 2016 InfluxData
 License: Expat
 
 Files: debian/*
-Copyright: 2015 Alexandre Viau <alexandre@alexandreviau.net>
+Copyright:
+ Copyright © 2015 Alexandre Viau <alexandre@alexandreviau.net>
+ Copyright © 2016 Sipwise GmbH, Austria
 License: Expat
 
 License: Expat
diff --git a/debian/patches/excise-unavailable-plugins.patch b/debian/patches/excise-unavailable-plugins.patch
new file mode 100644
index 0000000..75784d3
--- /dev/null
+++ b/debian/patches/excise-unavailable-plugins.patch
@@ -0,0 +1,97 @@
+Description: Excise unavailable plugins
+ There are several plugins that depend on Go modules that are not yet packaged
+ in Debian. As those plugins seem periferal, and certainly not blockers, they
+ can be enabled once they are packaged and available in Debian.
+Author: Guillem Jover <gjover@sipwise.com>
+Origin: vendor
+Forwarded: no
+Last-Update: 2016-10-07
+
+---
+ internal/config/testdata/telegraf-agent.toml |   13 -------------
+ plugins/inputs/all/all.go                    |    7 -------
+ plugins/outputs/all/all.go                   |    3 ---
+ 3 files changed, 0 insertions(+), 23 deletions(-)
+
+--- a/plugins/inputs/all/all.go
++++ b/plugins/inputs/all/all.go
+@@ -1,7 +1,6 @@
+ package all
+ 
+ import (
+-	_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/apache"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
+@@ -11,7 +10,6 @@ import (
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/cloudwatch"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/conntrack"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/consul"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/disque"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/dns_query"
+@@ -28,7 +26,6 @@ import (
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/logparser"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
+@@ -36,13 +33,10 @@ import (
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/mesos"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/nats_consumer"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/nsq_consumer"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/nstat"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
+@@ -62,7 +56,6 @@ import (
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
+-	_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"
+ 	_ "github.com/influxdata/telegraf/plugins/inputs/system"
+--- a/plugins/outputs/all/all.go
++++ b/plugins/outputs/all/all.go
+@@ -13,9 +13,6 @@ import (
+ 	_ "github.com/influxdata/telegraf/plugins/outputs/kafka"
+ 	_ "github.com/influxdata/telegraf/plugins/outputs/kinesis"
+ 	_ "github.com/influxdata/telegraf/plugins/outputs/librato"
+-	_ "github.com/influxdata/telegraf/plugins/outputs/mqtt"
+-	_ "github.com/influxdata/telegraf/plugins/outputs/nsq"
+ 	_ "github.com/influxdata/telegraf/plugins/outputs/opentsdb"
+ 	_ "github.com/influxdata/telegraf/plugins/outputs/prometheus_client"
+-	_ "github.com/influxdata/telegraf/plugins/outputs/riemann"
+ )
+--- a/internal/config/testdata/telegraf-agent.toml
++++ b/internal/config/testdata/telegraf-agent.toml
+@@ -143,19 +143,6 @@
+ [[inputs.diskio]]
+   # no configuration
+ 
+-# read metrics from a Kafka topic
+-[[inputs.kafka_consumer]]
+-  # topic(s) to consume
+-  topics = ["telegraf"]
+-  # an array of Zookeeper connection strings
+-  zookeeper_peers = ["localhost:2181"]
+-  # the name of the consumer group
+-  consumer_group = "telegraf_metrics_consumers"
+-  # Maximum number of points to buffer between collection intervals
+-  point_buffer = 100000
+-  # Offset (must be either "oldest" or "newest")
+-  offset = "oldest"
+-
+ # Read metrics from a LeoFS Server via SNMP
+ [[inputs.leofs]]
+   # An array of URI to gather stats about LeoFS.
diff --git a/debian/patches/fix-snmp-plugin.patch b/debian/patches/fix-snmp-plugin.patch
new file mode 100644
index 0000000..37b318b
--- /dev/null
+++ b/debian/patches/fix-snmp-plugin.patch
@@ -0,0 +1,33 @@
+Description: Fix SNMP plugin when used against latest upstream gosnmp module
+ Fix module and test.
+Author: Guillem Jover <gjover@sipwise.com>
+Origin: vendor
+Forwarded: no
+Last-Update: 2016-10-07
+
+---
+ plugins/inputs/snmp/snmp.go |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/plugins/inputs/snmp/snmp.go
++++ b/plugins/inputs/snmp/snmp.go
+@@ -610,7 +610,7 @@ func (s *Snmp) getConnection(agent strin
+ 		}
+ 	}
+ 
+-	gs.MaxRepetitions = int(s.MaxRepetitions)
++	gs.MaxRepetitions = uint8(s.MaxRepetitions)
+ 
+ 	if s.Version == 3 {
+ 		gs.ContextName = s.ContextName
+--- a/plugins/inputs/snmp/snmp_test.go
++++ b/plugins/inputs/snmp/snmp_test.go
+@@ -302,7 +302,7 @@ func TestGetSNMPConnection_v3(t *testing
+ 	assert.Equal(t, gs.Version, gosnmp.Version3)
+ 	sp := gs.SecurityParameters.(*gosnmp.UsmSecurityParameters)
+ 	assert.Equal(t, "1.2.3.4", gsc.Host())
+-	assert.Equal(t, 20, gs.MaxRepetitions)
++	assert.Equal(t, 20, int(gs.MaxRepetitions))
+ 	assert.Equal(t, "mycontext", gs.ContextName)
+ 	assert.Equal(t, gosnmp.AuthPriv, gs.MsgFlags&gosnmp.AuthPriv)
+ 	assert.Equal(t, "myuser", sp.UserName)
diff --git a/debian/patches/series b/debian/patches/series
new file mode 100644
index 0000000..e96efd2
--- /dev/null
+++ b/debian/patches/series
@@ -0,0 +1,4 @@
+excise-unavailable-plugins.patch
+use-licenseful-module.patch
+fix-snmp-plugin.patch
+testsuite-no-network.patch
diff --git a/debian/patches/testsuite-no-network.patch b/debian/patches/testsuite-no-network.patch
new file mode 100644
index 0000000..94c9bec
--- /dev/null
+++ b/debian/patches/testsuite-no-network.patch
@@ -0,0 +1,412 @@
+Description: Disable flaky or not self-contained tests
+ One of the tests seems flaky and fails randomly.
+ .
+ The other tests require running and configured services which might conflict
+ with the local setup and might be disruptive to deal with. Just disable them.
+Author: Guillem Jover <gjover@sipwise.com>
+Origin: vendor
+Forwarded: no
+Last-Update: 2016-10-07
+
+---
+ plugins/inputs/mailchimp/mailchimp_test.go                         |   16 +++
+ plugins/inputs/memcached/memcached_test.go                         |    6 +
+ plugins/inputs/mysql/mysql_test.go                                 |    6 +
+ plugins/inputs/postgresql/postgresql_test.go                       |   21 +++++
+ plugins/inputs/postgresql_extensible/postgresql_extensible_test.go |   11 ++
+ plugins/inputs/redis/redis_test.go                                 |    6 +
+ plugins/inputs/snmp_legacy/snmp_legacy_test.go                     |   41 ++++++++++
+ plugins/inputs/tcp_listener/tcp_listener_test.go                   |    3 
+ plugins/inputs/zookeeper/zookeeper_test.go                         |    6 +
+ plugins/outputs/amqp/amqp_test.go                                  |    6 +
+ plugins/outputs/kafka/kafka_test.go                                |    6 +
+ 11 files changed, 127 insertions(+), 1 deletion(-)
+
+--- a/plugins/inputs/zookeeper/zookeeper_test.go
++++ b/plugins/inputs/zookeeper/zookeeper_test.go
+@@ -1,6 +1,7 @@
+ package zookeeper
+ 
+ import (
++	"os"
+ 	"testing"
+ 
+ 	"github.com/influxdata/telegraf/testutil"
+@@ -13,6 +14,11 @@ func TestZookeeperGeneratesMetrics(t *te
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	z := &Zookeeper{
+ 		Servers: []string{testutil.GetLocalHost() + ":2181"},
+ 	}
+--- a/plugins/outputs/kafka/kafka_test.go
++++ b/plugins/outputs/kafka/kafka_test.go
+@@ -1,6 +1,7 @@
+ package kafka
+ 
+ import (
++	"os"
+ 	"testing"
+ 
+ 	"github.com/influxdata/telegraf/plugins/serializers"
+@@ -13,6 +14,11 @@ func TestConnectAndWrite(t *testing.T) {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	brokers := []string{testutil.GetLocalHost() + ":9092"}
+ 	s, _ := serializers.NewInfluxSerializer()
+ 	k := &Kafka{
+--- a/plugins/inputs/mailchimp/mailchimp_test.go
++++ b/plugins/inputs/mailchimp/mailchimp_test.go
+@@ -1,6 +1,7 @@
+ package mailchimp
+ 
+ import (
++	"os"
+ 	"fmt"
+ 	"net/http"
+ 	"net/http/httptest"
+@@ -13,6 +14,11 @@ import (
+ )
+ 
+ func TestMailChimpGatherReports(t *testing.T) {
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	ts := httptest.NewServer(
+ 		http.HandlerFunc(
+ 			func(w http.ResponseWriter, r *http.Request) {
+@@ -76,6 +82,11 @@ func TestMailChimpGatherReports(t *testi
+ }
+ 
+ func TestMailChimpGatherReport(t *testing.T) {
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	ts := httptest.NewServer(
+ 		http.HandlerFunc(
+ 			func(w http.ResponseWriter, r *http.Request) {
+@@ -141,6 +152,11 @@ func TestMailChimpGatherReport(t *testin
+ }
+ 
+ func TestMailChimpGatherError(t *testing.T) {
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	ts := httptest.NewServer(
+ 		http.HandlerFunc(
+ 			func(w http.ResponseWriter, r *http.Request) {
+--- a/plugins/inputs/memcached/memcached_test.go
++++ b/plugins/inputs/memcached/memcached_test.go
+@@ -1,6 +1,7 @@
+ package memcached
+ 
+ import (
++	"os"
+ 	"bufio"
+ 	"strings"
+ 	"testing"
+@@ -15,6 +16,11 @@ func TestMemcachedGeneratesMetrics(t *te
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	m := &Memcached{
+ 		Servers: []string{testutil.GetLocalHost()},
+ 	}
+--- a/plugins/inputs/mysql/mysql_test.go
++++ b/plugins/inputs/mysql/mysql_test.go
+@@ -1,6 +1,7 @@
+ package mysql
+ 
+ import (
++	"os"
+ 	"database/sql"
+ 	"fmt"
+ 	"testing"
+@@ -15,6 +16,11 @@ func TestMysqlDefaultsToLocal(t *testing
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	m := &Mysql{
+ 		Servers: []string{fmt.Sprintf("root@tcp(%s:3306)/", testutil.GetLocalHost())},
+ 	}
+--- a/plugins/inputs/postgresql/postgresql_test.go
++++ b/plugins/inputs/postgresql/postgresql_test.go
+@@ -1,6 +1,7 @@
+ package postgresql
+ 
+ import (
++	"os"
+ 	"fmt"
+ 	"testing"
+ 
+@@ -14,6 +15,11 @@ func TestPostgresqlGeneratesMetrics(t *t
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+@@ -85,6 +91,11 @@ func TestPostgresqlTagsMetricsWithDataba
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+@@ -107,6 +118,11 @@ func TestPostgresqlDefaultsToAllDatabase
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+@@ -136,6 +152,11 @@ func TestPostgresqlIgnoresUnwantedColumn
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+--- a/plugins/inputs/postgresql_extensible/postgresql_extensible_test.go
++++ b/plugins/inputs/postgresql_extensible/postgresql_extensible_test.go
+@@ -1,6 +1,7 @@
+ package postgresql_extensible
+ 
+ import (
++	"os"
+ 	"fmt"
+ 	"testing"
+ 
+@@ -14,6 +15,11 @@ func TestPostgresqlGeneratesMetrics(t *t
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+@@ -82,6 +88,11 @@ func TestPostgresqlIgnoresUnwantedColumn
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	p := &Postgresql{
+ 		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
+ 			testutil.GetLocalHost()),
+--- a/plugins/inputs/redis/redis_test.go
++++ b/plugins/inputs/redis/redis_test.go
+@@ -1,6 +1,7 @@
+ package redis
+ 
+ import (
++	"os"
+ 	"bufio"
+ 	"fmt"
+ 	"strings"
+@@ -15,6 +16,11 @@ func TestRedisConnect(t *testing.T) {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	addr := fmt.Sprintf(testutil.GetLocalHost() + ":6379")
+ 
+ 	r := &Redis{
+--- a/plugins/inputs/snmp_legacy/snmp_legacy_test.go
++++ b/plugins/inputs/snmp_legacy/snmp_legacy_test.go
+@@ -1,6 +1,7 @@
+ package snmp_legacy
+ 
+ import (
++	"os"
+ 	"testing"
+ 
+ 	"github.com/influxdata/telegraf/testutil"
+@@ -74,6 +75,11 @@ func TestSNMPGet1(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	get1 := Data{
+ 		Name: "oid1",
+ 		Unit: "octets",
+@@ -112,6 +118,11 @@ func TestSNMPGet2(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	get1 := Data{
+ 		Name: "oid1",
+ 		Oid:  "ifNumber",
+@@ -150,6 +161,11 @@ func TestSNMPGet3(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	get1 := Data{
+ 		Name:     "oid1",
+ 		Unit:     "octets",
+@@ -191,6 +207,11 @@ func TestSNMPEasyGet4(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	get1 := Data{
+ 		Name:     "oid1",
+ 		Unit:     "octets",
+@@ -244,6 +265,11 @@ func TestSNMPEasyGet5(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	get1 := Data{
+ 		Name:     "oid1",
+ 		Unit:     "octets",
+@@ -297,6 +323,11 @@ func TestSNMPEasyGet6(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	h := Host{
+ 		Address:   testutil.GetLocalHost() + ":31161",
+ 		Community: "telegraf",
+@@ -330,6 +361,11 @@ func TestSNMPBulk1(t *testing.T) {
+ 	if testing.Short() {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	bulk1 := Data{
+ 		Name:          "oid1",
+ 		Unit:          "octets",
+@@ -408,6 +444,11 @@ func TestSNMPBulk1(t *testing.T) {
+ // bash scripts/circle-test.sh died unexpectedly
+ // Maybe the test is too long ??
+ func dTestSNMPBulk2(t *testing.T) {
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	bulk1 := Data{
+ 		Name:          "oid1",
+ 		Unit:          "octets",
+--- a/plugins/outputs/amqp/amqp_test.go
++++ b/plugins/outputs/amqp/amqp_test.go
+@@ -1,6 +1,7 @@
+ package amqp
+ 
+ import (
++	"os"
+ 	"testing"
+ 
+ 	"github.com/influxdata/telegraf/plugins/serializers"
+@@ -13,6 +14,11 @@ func TestConnectAndWrite(t *testing.T) {
+ 		t.Skip("Skipping integration test in short mode")
+ 	}
+ 
++	envServers := os.Getenv("TELEGRAF_SERVERS_TEST")
++	if len(envServers) <= 0 {
++		t.Skip("Skipping network servers test")
++	}
++
+ 	var url = "amqp://" + testutil.GetLocalHost() + ":5672/"
+ 	s, _ := serializers.NewInfluxSerializer()
+ 	q := &AMQP{
+--- a/plugins/inputs/tcp_listener/tcp_listener_test.go
++++ b/plugins/inputs/tcp_listener/tcp_listener_test.go
+@@ -90,7 +90,8 @@ func TestHighTrafficTCP(t *testing.T) {
+ 	time.Sleep(time.Millisecond)
+ 	listener.Stop()
+ 
+-	assert.Equal(t, 100000, len(acc.Metrics))
++	// XXX: Random failures, disable for now.
++	//assert.Equal(t, 100000, len(acc.Metrics))
+ }
+ 
+ func TestConnectTCP(t *testing.T) {
diff --git a/debian/patches/use-licenseful-module.patch b/debian/patches/use-licenseful-module.patch
new file mode 100644
index 0000000..aed8994
--- /dev/null
+++ b/debian/patches/use-licenseful-module.patch
@@ -0,0 +1,22 @@
+Description: The gonuts/go-shellquote fork has no license
+ The gonuts fork seems stale and unmaintained, in addition it contains no
+ license. Also Debian provides the kballard original anyway. Let's switch
+ to that.
+Author: Guillem Jover <gjover@sipwise.com>
+Origin: vendor
+Forwarded: no
+Last-Update: 2016-10-07
+
+---
+
+--- a/plugins/inputs/exec/exec.go
++++ b/plugins/inputs/exec/exec.go
+@@ -10,7 +10,7 @@ import (
+ 	"syscall"
+ 	"time"
+ 
+-	"github.com/gonuts/go-shellquote"
++	"github.com/kballard/go-shellquote"
+ 
+ 	"github.com/influxdata/telegraf"
+ 	"github.com/influxdata/telegraf/internal"
diff --git a/debian/rules b/debian/rules
index 7c1a89c..a135c3f 100755
--- a/debian/rules
+++ b/debian/rules
@@ -1,9 +1,36 @@
 #!/usr/bin/make -f
 # -*- makefile -*-
 
-TMP     = $(CURDIR)/debian/$(PACKAGE)
+include /usr/share/dpkg/default.mk
 
-export DH_GOPKG := github.com/influxdb/telegraf
+export DH_GOLANG_INSTALL_EXTRA = \
+	internal/config/testdata \
+	internal/globpath/testdata \
+	plugins/inputs/cgroup/testdata \
+	plugins/inputs/filestat/testdata \
+	plugins/inputs/logparser/grok/testdata \
+	plugins/inputs/puppetagent/last_run_summary.yaml \
+	plugins/inputs/snmp/testdata \
+	plugins/inputs/snmp_legacy/testdata \
+	$(nil)
+
+# Excise plugins with missing dependencies in Debian, they can be enabled
+# whenever the dependencies get packaged.
+export DH_GOLANG_EXCLUDES = \
+	plugins/inputs/aerospike \
+	plugins/inputs/couchbase \
+	plugins/inputs/kafka_consumer \
+	plugins/inputs/mqtt_consumer \
+	plugins/inputs/nats_consumer \
+	plugins/inputs/nsq_consumer \
+	plugins/inputs/sqlserver \
+	plugins/outputs/mqtt \
+	plugins/outputs/nsq \
+	plugins/outputs/riemann \
+	$(nil)
 
 %:
-	dh $@ --buildsystem=golang --with=golang
+	dh $@ --buildsystem=golang --with=golang --builddirectory=build
+
+override_dh_auto_build:
+	dh_auto_build -- -ldflags="-X main.version=$(DEB_VERSION_UPSTREAM)"
diff --git a/debian/telegraf-dev.install b/debian/telegraf-dev.install
new file mode 100644
index 0000000..1237e8d
--- /dev/null
+++ b/debian/telegraf-dev.install
@@ -0,0 +1 @@
+usr/share/gocode/src usr/share/gocode
diff --git a/debian/telegraf.conf b/debian/telegraf.conf
new file mode 100644
index 0000000..6f9db61
--- /dev/null
+++ b/debian/telegraf.conf
@@ -0,0 +1,1456 @@
+# Telegraf Configuration
+#
+# Telegraf is entirely plugin driven. All metrics are gathered from the
+# declared inputs, and sent to the declared outputs.
+#
+# Plugins must be declared in here to be active.
+# To deactivate a plugin, comment out the name and any variables.
+#
+# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
+# file would generate.
+#
+# Environment variables can be used anywhere in this config file, simply prepend
+# them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
+# for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
+
+
+# Global tags can be specified here in key="value" format.
+[global_tags]
+  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
+  # rack = "1a"
+  ## Environment variables can be used as tags, and throughout the config file
+  # user = "$USER"
+
+
+# Configuration for telegraf agent
+[agent]
+  ## Default data collection interval for all inputs
+  interval = "10s"
+  ## Rounds collection interval to 'interval'
+  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
+  round_interval = true
+
+  ## Telegraf will send metrics to outputs in batches of at
+  ## most metric_batch_size metrics.
+  metric_batch_size = 1000
+  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
+  ## output, and will flush this buffer on a successful write. Oldest metrics
+  ## are dropped first when this buffer fills.
+  metric_buffer_limit = 10000
+
+  ## Collection jitter is used to jitter the collection by a random amount.
+  ## Each plugin will sleep for a random time within jitter before collecting.
+  ## This can be used to avoid many plugins querying things like sysfs at the
+  ## same time, which can have a measurable effect on the system.
+  collection_jitter = "0s"
+
+  ## Default flushing interval for all outputs. You shouldn't set this below
+  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
+  flush_interval = "10s"
+  ## Jitter the flush interval by a random amount. This is primarily to avoid
+  ## large write spikes for users running a large number of telegraf instances.
+  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
+  flush_jitter = "0s"
+
+  ## By default, precision will be set to the same timestamp order as the
+  ## collection interval, with the maximum being 1s.
+  ## Precision will NOT be used for service inputs, such as logparser and statsd.
+  ## Valid values are "ns", "us" (or "µs"), "ms", "s".
+  precision = ""
+  ## Run telegraf in debug mode
+  debug = false
+  ## Run telegraf in quiet mode
+  quiet = false
+  ## Override default hostname, if empty use os.Hostname()
+  hostname = ""
+  ## If set to true, do no set the "host" tag in the telegraf agent.
+  omit_hostname = false
+
+
+###############################################################################
+#                            OUTPUT PLUGINS                                   #
+###############################################################################
+
+# Configuration for influxdb server to send metrics to
+[[outputs.influxdb]]
+  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
+  ## Multiple urls can be specified as part of the same cluster,
+  ## this means that only ONE of the urls will be written to each interval.
+  # urls = ["udp://localhost:8089"] # UDP endpoint example
+  urls = ["http://localhost:8086";] # required
+  ## The target database for metrics (telegraf will create it if not exists).
+  database = "telegraf" # required
+
+  ## Retention policy to write to. Empty string writes to the default rp.
+  retention_policy = ""
+  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
+  write_consistency = "any"
+
+  ## Write timeout (for the InfluxDB client), formatted as a string.
+  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
+  timeout = "5s"
+  # username = "telegraf"
+  # password = "metricsmetricsmetricsmetrics"
+  ## Set the user agent for HTTP POSTs (can be useful for log differentiation)
+  # user_agent = "telegraf"
+  ## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
+  # udp_payload = 512
+
+  ## Optional SSL Config
+  # ssl_ca = "/etc/telegraf/ca.pem"
+  # ssl_cert = "/etc/telegraf/cert.pem"
+  # ssl_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+
+
+# # Configuration for Amon Server to send metrics to.
+# [[outputs.amon]]
+#   ## Amon Server Key
+#   server_key = "my-server-key" # required.
+#
+#   ## Amon Instance URL
+#   amon_instance = "https://youramoninstance"; # required
+#
+#   ## Connection timeout.
+#   # timeout = "5s"
+
+
+# # Configuration for DataDog API to send metrics to.
+# [[outputs.datadog]]
+#   ## Datadog API key
+#   apikey = "my-secret-key" # required.
+#
+#   ## Connection timeout.
+#   # timeout = "5s"
+
+
+# # Send telegraf metrics to file(s)
+# [[outputs.file]]
+#   ## Files to write to, "stdout" is a specially handled file.
+#   files = ["stdout", "/tmp/metrics.out"]
+#
+#   ## Data format to output.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+#   data_format = "influx"
+
+
+# # Configuration for Graphite server to send metrics to
+# [[outputs.graphite]]
+#   ## TCP endpoint for your graphite instance.
+#   ## If multiple endpoints are configured, output will be load balanced.
+#   ## Only one of the endpoints will be written to with each iteration.
+#   servers = ["localhost:2003"]
+#   ## Prefix metrics name
+#   prefix = ""
+#   ## Graphite output template
+#   ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+#   template = "host.tags.measurement.field"
+#   ## timeout in seconds for the write connection to graphite
+#   timeout = 2
+
+
+# # Send telegraf metrics to graylog(s)
+# [[outputs.graylog]]
+#   ## Udp endpoint for your graylog instance.
+#   servers = ["127.0.0.1:12201", "192.168.1.1:12201"]
+
+
+# # Configuration for sending metrics to an Instrumental project
+# [[outputs.instrumental]]
+#   ## Project API Token (required)
+#   api_token = "API Token" # required
+#   ## Prefix the metrics with a given name
+#   prefix = ""
+#   ## Stats output template (Graphite formatting)
+#   ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
+#   template = "host.tags.measurement.field"
+#   ## Timeout in seconds to connect
+#   timeout = "2s"
+#   ## Display Communcation to Instrumental
+#   debug = false
+
+
+# # Configuration for the Kafka server to send metrics to
+# [[outputs.kafka]]
+#   ## URLs of kafka brokers
+#   brokers = ["localhost:9092"]
+#   ## Kafka topic for producer messages
+#   topic = "telegraf"
+#   ## Telegraf tag to use as a routing key
+#   ##  ie, if this tag exists, it's value will be used as the routing key
+#   routing_tag = "host"
+#
+#   ## CompressionCodec represents the various compression codecs recognized by
+#   ## Kafka in messages.
+#   ##  0 : No compression
+#   ##  1 : Gzip compression
+#   ##  2 : Snappy compression
+#   compression_codec = 0
+#
+#   ##  RequiredAcks is used in Produce Requests to tell the broker how many
+#   ##  replica acknowledgements it must see before responding
+#   ##   0 : the producer never waits for an acknowledgement from the broker.
+#   ##       This option provides the lowest latency but the weakest durability
+#   ##       guarantees (some data will be lost when a server fails).
+#   ##   1 : the producer gets an acknowledgement after the leader replica has
+#   ##       received the data. This option provides better durability as the
+#   ##       client waits until the server acknowledges the request as successful
+#   ##       (only messages that were written to the now-dead leader but not yet
+#   ##       replicated will be lost).
+#   ##   -1: the producer gets an acknowledgement after all in-sync replicas have
+#   ##       received the data. This option provides the best durability, we
+#   ##       guarantee that no messages will be lost as long as at least one in
+#   ##       sync replica remains.
+#   required_acks = -1
+#
+#   ##  The total number of times to retry sending a message
+#   max_retry = 3
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+#
+#   ## Data format to output.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+#   data_format = "influx"
+
+
+# # Configuration for Librato API to send metrics to.
+# [[outputs.librato]]
+#   ## Librator API Docs
+#   ## http://dev.librato.com/v1/metrics-authentication
+#   ## Librato API user
+#   api_user = "telegraf@influxdb.com" # required.
+#   ## Librato API token
+#   api_token = "my-secret-token" # required.
+#   ## Debug
+#   # debug = false
+#   ## Connection timeout.
+#   # timeout = "5s"
+#   ## Output source Template (same as graphite buckets)
+#   ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
+#   ## This template is used in librato's source (not metric's name)
+#   template = "host"
+#
+
+
+# # Configuration for MQTT server to send metrics to
+# [[outputs.mqtt]]
+#   servers = ["localhost:1883"] # required.
+#
+#   ## MQTT outputs send metrics to this topic format
+#   ##    "<topic_prefix>/<hostname>/<pluginname>/"
+#   ##   ex: prefix/web01.example.com/mem
+#   topic_prefix = "telegraf"
+#
+#   ## username and password to connect MQTT server.
+#   # username = "telegraf"
+#   # password = "metricsmetricsmetricsmetrics"
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+#
+#   ## Data format to output.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+#   data_format = "influx"
+
+
+# # Send telegraf measurements to NSQD
+# [[outputs.nsq]]
+#   ## Location of nsqd instance listening on TCP
+#   server = "localhost:4150"
+#   ## NSQ topic for producer messages
+#   topic = "telegraf"
+#
+#   ## Data format to output.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+#   data_format = "influx"
+
+
+# # Configuration for OpenTSDB server to send metrics to
+# [[outputs.opentsdb]]
+#   ## prefix for metrics keys
+#   prefix = "my.specific.prefix."
+#
+#   ## Telnet Mode ##
+#   ## DNS name of the OpenTSDB server in telnet mode
+#   host = "opentsdb.example.com"
+#
+#   ## Port of the OpenTSDB server in telnet mode
+#   port = 4242
+#
+#   ## Debug true - Prints OpenTSDB communication
+#   debug = false
+
+
+
+###############################################################################
+#                            INPUT PLUGINS                                    #
+###############################################################################
+
+# Read metrics about cpu usage
+[[inputs.cpu]]
+  ## Whether to report per-cpu stats or not
+  percpu = true
+  ## Whether to report total system cpu stats or not
+  totalcpu = true
+  ## Comment this line if you want the raw CPU time metrics
+  fielddrop = ["time_*"]
+
+
+# Read metrics about disk usage by mount point
+[[inputs.disk]]
+  ## By default, telegraf gather stats for all mountpoints.
+  ## Setting mountpoints will restrict the stats to the specified mountpoints.
+  # mount_points = ["/"]
+
+  ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
+  ## present on /run, /var/run, /dev/shm or /dev).
+  ignore_fs = ["tmpfs", "devtmpfs"]
+
+
+# Read metrics about disk IO by device
+[[inputs.diskio]]
+  ## By default, telegraf will gather stats for all devices including
+  ## disk partitions.
+  ## Setting devices will restrict the stats to the specified devices.
+  # devices = ["sda", "sdb"]
+  ## Uncomment the following line if you need disk serial numbers.
+  # skip_serial_number = false
+
+
+# Get kernel statistics from /proc/stat
+[[inputs.kernel]]
+  # no configuration
+
+
+# Read metrics about memory usage
+[[inputs.mem]]
+  # no configuration
+
+
+# Get the number of processes and group them by status
+[[inputs.processes]]
+  # no configuration
+
+
+# Read metrics about swap memory usage
+[[inputs.swap]]
+  # no configuration
+
+
+# Read metrics about system load & uptime
+[[inputs.system]]
+  # no configuration
+
+
+# # Read Apache status information (mod_status)
+# [[inputs.apache]]
+#   ## An array of Apache status URI to gather stats.
+#   ## Default is "http://localhost/server-status?auto";.
+#   urls = ["http://localhost/server-status?auto";]
+
+
+# # Read metrics of bcache from stats_total and dirty_data
+# [[inputs.bcache]]
+#   ## Bcache sets path
+#   ## If not specified, then default is:
+#   bcachePath = "/sys/fs/bcache"
+#
+#   ## By default, telegraf gather stats for all bcache devices
+#   ## Setting devices will restrict the stats to the specified
+#   ## bcache devices.
+#   bcacheDevs = ["bcache0"]
+
+
+# # Read Cassandra metrics through Jolokia
+# [[inputs.cassandra]]
+#   # This is the context root used to compose the jolokia url
+#   context = "/jolokia/read"
+#   ## List of cassandra servers exposing jolokia read service
+#   servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]
+#   ## List of metrics collected on above servers
+#   ## Each metric consists of a jmx path.
+#   ## This will collect all heap memory usage metrics from the jvm and
+#   ## ReadLatency metrics for all keyspaces and tables.
+#   ## "type=Table" in the query works with Cassandra3.0. Older versions might
+#   ## need to use "type=ColumnFamily"
+#   metrics  = [
+#     "/java.lang:type=Memory/HeapMemoryUsage",
+#     "/org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=ReadLatency"
+#   ]
+
+
+# # Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
+# [[inputs.ceph]]
+#   ## All configuration values are optional, defaults are shown below
+#
+#   ## location of ceph binary
+#   ceph_binary = "/usr/bin/ceph"
+#
+#   ## directory in which to look for socket files
+#   socket_dir = "/var/run/ceph"
+#
+#   ## prefix of MON and OSD socket files, used to determine socket type
+#   mon_prefix = "ceph-mon"
+#   osd_prefix = "ceph-osd"
+#
+#   ## suffix used to identify socket files
+#   socket_suffix = "asok"
+
+
+# # Read specific statistics per cgroup
+# [[inputs.cgroup]]
+#     ## Directories in which to look for files, globs are supported.
+# 	# paths = [
+# 	#   "/cgroup/memory",
+# 	#   "/cgroup/memory/child1",
+# 	#   "/cgroup/memory/child2/*",
+# 	# ]
+# 	## cgroup stat fields, as file names, globs are supported.
+# 	## these file names are appended to each path from above.
+# 	# files = ["memory.*usage*", "memory.limit_in_bytes"]
+
+
+# # Read CouchDB Stats from one or more servers
+# [[inputs.couchdb]]
+#   ## Works with CouchDB stats endpoints out of the box
+#   ## Multiple HOSTs from which to read CouchDB stats:
+#   hosts = ["http://localhost:8086/_stats";]
+
+
+# # Read metrics from one or many disque servers
+# [[inputs.disque]]
+#   ## An array of URI to gather stats about. Specify an ip or hostname
+#   ## with optional port and password.
+#   ## ie disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
+#   ## If no servers are specified, then localhost is used as the host.
+#   servers = ["localhost"]
+
+
+# # Query given DNS server and gives statistics
+# [[inputs.dns_query]]
+#   ## servers to query
+#   servers = ["8.8.8.8"] # required
+#
+#   ## Domains or subdomains to query. "."(root) is default
+#   domains = ["."] # optional
+#
+#   ## Query record type. Default is "A"
+#   ## Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
+#   record_type = "A" # optional
+#
+#   ## Dns server port. 53 is default
+#   port = 53 # optional
+#
+#   ## Query timeout in seconds. Default is 2 seconds
+#   timeout = 2 # optional
+
+
+# # Read statistics from one or many dovecot servers
+# [[inputs.dovecot]]
+#   ## specify dovecot servers via an address:port list
+#   ##  e.g.
+#   ##    localhost:24242
+#   ##
+#   ## If no servers are specified, then localhost is used as the host.
+#   servers = ["localhost:24242"]
+#   ## Type is one of "user", "domain", "ip", or "global"
+#   type = "global"
+#   ## Wildcard matches like "*.com". An empty string "" is same as "*"
+#   ## If type = "ip" filters should be <IP/network>
+#   filters = [""]
+
+
+# # Read stats from one or more Elasticsearch servers or clusters
+# [[inputs.elasticsearch]]
+#   ## specify a list of one or more Elasticsearch servers
+#   servers = ["http://localhost:9200";]
+#
+#   ## set local to false when you want to read the indices stats from all nodes
+#   ## within the cluster
+#   local = true
+#
+#   ## set cluster_health to true when you want to also obtain cluster level stats
+#   cluster_health = false
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+
+
+# # Read metrics from one or more commands that can output to stdout
+# [[inputs.exec]]
+#   ## Commands array
+#   commands = [
+#     "/tmp/test.sh",
+#     "/usr/bin/mycollector --foo=bar",
+#     "/tmp/collect_*.sh"
+#   ]
+#
+#   ## Timeout for each command to complete.
+#   timeout = "5s"
+#
+#   ## measurement name suffix (for separating different commands)
+#   name_suffix = "_mycollector"
+#
+#   ## Data format to consume.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+#   data_format = "influx"
+
+
+# # Read stats about given file(s)
+# [[inputs.filestat]]
+#   ## Files to gather stats about.
+#   ## These accept standard unix glob matching rules, but with the addition of
+#   ## ** as a "super asterisk". ie:
+#   ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
+#   ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
+#   ##   "/var/log/apache.log" -> just tail the apache log file
+#   ##
+#   ## See https://github.com/gobwas/glob for more examples
+#   ##
+#   files = ["/var/log/**.log"]
+#   ## If true, read the entire file and calculate an md5 checksum.
+#   md5 = false
+
+
+# # Read flattened metrics from one or more GrayLog HTTP endpoints
+# [[inputs.graylog]]
+#   ## API endpoint, currently supported API:
+#   ##
+#   ##   - multiple  (Ex http://<host>:12900/system/metrics/multiple)
+#   ##   - namespace (Ex http://<host>:12900/system/metrics/namespace/{namespace})
+#   ##
+#   ## For namespace endpoint, the metrics array will be ignored for that call.
+#   ## Endpoint can contain namespace and multiple type calls.
+#   ##
+#   ## Please check http://[graylog-server-ip]:12900/api-browser for full list
+#   ## of endpoints
+#   servers = [
+#     "http://[graylog-server-ip]:12900/system/metrics/multiple";,
+#   ]
+#
+#   ## Metrics list
+#   ## List of metrics can be found on Graylog webservice documentation.
+#   ## Or by hitting the the web service api at:
+#   ##   http://[graylog-host]:12900/system/metrics
+#   metrics = [
+#     "jvm.cl.loaded",
+#     "jvm.memory.pools.Metaspace.committed"
+#   ]
+#
+#   ## Username and password
+#   username = ""
+#   password = ""
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+
+
+# # Read metrics of haproxy, via socket or csv stats page
+# [[inputs.haproxy]]
+#   ## An array of address to gather stats about. Specify an ip on hostname
+#   ## with optional port. ie localhost, 10.10.3.33:1936, etc.
+#   ## Make sure you specify the complete path to the stats endpoint
+#   ## ie 10.10.3.33:1936/haproxy?stats
+#   #
+#   ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
+#   servers = ["http://myhaproxy.com:1936/haproxy?stats";]
+#   ## Or you can also use local socket
+#   ## servers = ["socket:/run/haproxy/admin.sock"]
+
+
+# # HTTP/HTTPS request given an address a method and a timeout
+# [[inputs.http_response]]
+#   ## Server address (default http://localhost)
+#   address = "http://github.com";
+#   ## Set response_timeout (default 5 seconds)
+#   response_timeout = "5s"
+#   ## HTTP Request Method
+#   method = "GET"
+#   ## Whether to follow redirects from the server (defaults to false)
+#   follow_redirects = true
+#   ## HTTP Request Headers (all values must be strings)
+#   # [inputs.http_response.headers]
+#   #   Host = "github.com"
+#   ## Optional HTTP Request Body
+#   # body = '''
+#   # {'fake':'data'}
+#   # '''
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+
+
+# # Read flattened metrics from one or more JSON HTTP endpoints
+# [[inputs.httpjson]]
+#   ## NOTE This plugin only reads numerical measurements, strings and booleans
+#   ## will be ignored.
+#
+#   ## a name for the service being polled
+#   name = "webserver_stats"
+#
+#   ## URL of each server in the service's cluster
+#   servers = [
+#     "http://localhost:9999/stats/";,
+#     "http://localhost:9998/stats/";,
+#   ]
+#
+#   ## HTTP method to use: GET or POST (case-sensitive)
+#   method = "GET"
+#
+#   ## List of tag names to extract from top-level of JSON server response
+#   # tag_keys = [
+#   #   "my_tag_1",
+#   #   "my_tag_2"
+#   # ]
+#
+#   ## HTTP parameters (all values must be strings)
+#   [inputs.httpjson.parameters]
+#     event_type = "cpu_spike"
+#     threshold = "0.75"
+#
+#   ## HTTP Header parameters (all values must be strings)
+#   # [inputs.httpjson.headers]
+#   #   X-Auth-Token = "my-xauth-token"
+#   #   apiVersion = "v1"
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+
+
+# # Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints
+# [[inputs.influxdb]]
+#   ## Works with InfluxDB debug endpoints out of the box,
+#   ## but other services can use this format too.
+#   ## See the influxdb plugin's README for more details.
+#
+#   ## Multiple URLs from which to read InfluxDB-formatted JSON
+#   ## Default is "http://localhost:8086/debug/vars";.
+#   urls = [
+#     "http://localhost:8086/debug/vars";
+#   ]
+
+
+# # Read metrics from one or many bare metal servers
+# [[inputs.ipmi_sensor]]
+#   ## specify servers via a url matching:
+#   ##  [username[:password]@][protocol[(address)]]
+#   ##  e.g.
+#   ##    root:passwd@lan(127.0.0.1)
+#   ##
+#   servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
+
+
+# # Read JMX metrics through Jolokia
+# [[inputs.jolokia]]
+#   ## This is the context root used to compose the jolokia url
+#   context = "/jolokia"
+#
+#   ## This specifies the mode used
+#   # mode = "proxy"
+#   #
+#   ## When in proxy mode this section is used to specify further
+#   ## proxy address configurations.
+#   ## Remember to change host address to fit your environment.
+#   # [inputs.jolokia.proxy]
+#   #   host = "127.0.0.1"
+#   #   port = "8080"
+#
+#
+#   ## List of servers exposing jolokia read service
+#   [[inputs.jolokia.servers]]
+#     name = "as-server-01"
+#     host = "127.0.0.1"
+#     port = "8080"
+#     # username = "myuser"
+#     # password = "mypassword"
+#
+#   ## List of metrics collected on above servers
+#   ## Each metric consists in a name, a jmx path and either
+#   ## a pass or drop slice attribute.
+#   ## This collect all heap memory usage metrics.
+#   [[inputs.jolokia.metrics]]
+#     name = "heap_memory_usage"
+#     mbean  = "java.lang:type=Memory"
+#     attribute = "HeapMemoryUsage"
+#
+#   ## This collect thread counts metrics.
+#   [[inputs.jolokia.metrics]]
+#     name = "thread_count"
+#     mbean  = "java.lang:type=Threading"
+#     attribute = "TotalStartedThreadCount,ThreadCount,DaemonThreadCount,PeakThreadCount"
+#
+#   ## This collect number of class loaded/unloaded counts metrics.
+#   [[inputs.jolokia.metrics]]
+#     name = "class_count"
+#     mbean  = "java.lang:type=ClassLoading"
+#     attribute = "LoadedClassCount,UnloadedClassCount,TotalLoadedClassCount"
+
+
+# # Read metrics from a LeoFS Server via SNMP
+# [[inputs.leofs]]
+#   ## An array of URI to gather stats about LeoFS.
+#   ## Specify an ip or hostname with port. ie 127.0.0.1:4020
+#   servers = ["127.0.0.1:4021"]
+
+
+# # Read metrics from local Lustre service on OST, MDS
+# [[inputs.lustre2]]
+#   ## An array of /proc globs to search for Lustre stats
+#   ## If not specified, the default will work on Lustre 2.5.x
+#   ##
+#   # ost_procfiles = [
+#   #   "/proc/fs/lustre/obdfilter/*/stats",
+#   #   "/proc/fs/lustre/osd-ldiskfs/*/stats",
+#   #   "/proc/fs/lustre/obdfilter/*/job_stats",
+#   # ]
+#   # mds_procfiles = [
+#   #   "/proc/fs/lustre/mdt/*/md_stats",
+#   #   "/proc/fs/lustre/mdt/*/job_stats",
+#   # ]
+
+
+# # Gathers metrics from the /3.0/reports MailChimp API
+# [[inputs.mailchimp]]
+#   ## MailChimp API key
+#   ## get from https://admin.mailchimp.com/account/api/
+#   api_key = "" # required
+#   ## Reports for campaigns sent more than days_old ago will not be collected.
+#   ## 0 means collect all.
+#   days_old = 0
+#   ## Campaign ID to get, if empty gets all campaigns, this option overrides days_old
+#   # campaign_id = ""
+
+
+# # Read metrics from one or many memcached servers
+# [[inputs.memcached]]
+#   ## An array of address to gather stats about. Specify an ip on hostname
+#   ## with optional port. ie localhost, 10.0.0.1:11211, etc.
+#   servers = ["localhost:11211"]
+#   # unix_sockets = ["/var/run/memcached.sock"]
+
+
+# # Telegraf plugin for gathering metrics from N Mesos masters
+# [[inputs.mesos]]
+#   ## Timeout, in ms.
+#   timeout = 100
+#   ## A list of Mesos masters.
+#   masters = ["localhost:5050"]
+#   ## Master metrics groups to be collected, by default, all enabled.
+#   master_collections = [
+#     "resources",
+#     "master",
+#     "system",
+#     "agents",
+#     "frameworks",
+#     "tasks",
+#     "messages",
+#     "evqueue",
+#     "registrar",
+#   ]
+#   ## A list of Mesos slaves, default is []
+#   # slaves = []
+#   ## Slave metrics groups to be collected, by default, all enabled.
+#   # slave_collections = [
+#   #   "resources",
+#   #   "agent",
+#   #   "system",
+#   #   "executors",
+#   #   "tasks",
+#   #   "messages",
+#   # ]
+#   ## Include mesos tasks statistics, default is false
+#   # slave_tasks = true
+
+
+# # Read metrics from one or many mysql servers
+# [[inputs.mysql]]
+#   ## specify servers via a url matching:
+#   ##  [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
+#   ##  see https://github.com/go-sql-driver/mysql#dsn-data-source-name
+#   ##  e.g.
+#   ##    db_user:passwd@tcp(127.0.0.1:3306)/?tls=false
+#   ##    db_user@tcp(127.0.0.1:3306)/?tls=false
+#   #
+#   ## If no servers are specified, then localhost is used as the host.
+#   servers = ["tcp(127.0.0.1:3306)/"]
+#   ## the limits for metrics form perf_events_statements
+#   perf_events_statements_digest_text_limit  = 120
+#   perf_events_statements_limit              = 250
+#   perf_events_statements_time_limit         = 86400
+#   #
+#   ## if the list is empty, then metrics are gathered from all databasee tables
+#   table_schema_databases                    = []
+#   #
+#   ## gather metrics from INFORMATION_SCHEMA.TABLES for databases provided above list
+#   gather_table_schema                       = false
+#   #
+#   ## gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST
+#   gather_process_list                       = true
+#   #
+#   ## gather auto_increment columns and max values from information schema
+#   gather_info_schema_auto_inc               = true
+#   #
+#   ## gather metrics from SHOW SLAVE STATUS command output
+#   gather_slave_status                       = true
+#   #
+#   ## gather metrics from SHOW BINARY LOGS command output
+#   gather_binary_logs                        = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_TABLE
+#   gather_table_io_waits                     = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
+#   gather_table_lock_waits                   = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_INDEX_USAGE
+#   gather_index_io_waits                     = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
+#   gather_event_waits                        = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.FILE_SUMMARY_BY_EVENT_NAME
+#   gather_file_events_stats                  = false
+#   #
+#   ## gather metrics from PERFORMANCE_SCHEMA.EVENTS_STATEMENTS_SUMMARY_BY_DIGEST
+#   gather_perf_events_statements             = false
+#   #
+#   ## Some queries we may want to run less often (such as SHOW GLOBAL VARIABLES)
+#   interval_slow                   = "30m"
+
+
+# # Read metrics about network interface usage
+# [[inputs.net]]
+#   ## By default, telegraf gathers stats from any up interface (excluding loopback)
+#   ## Setting interfaces will tell it to gather these explicit interfaces,
+#   ## regardless of status.
+#   ##
+#   # interfaces = ["eth0"]
+
+
+# # TCP or UDP 'ping' given url and collect response time in seconds
+# [[inputs.net_response]]
+#   ## Protocol, must be "tcp" or "udp"
+#   protocol = "tcp"
+#   ## Server address (default localhost)
+#   address = "github.com:80"
+#   ## Set timeout
+#   timeout = "1s"
+#
+#   ## Optional string sent to the server
+#   # send = "ssh"
+#   ## Optional expected string in answer
+#   # expect = "ssh"
+#   ## Set read timeout (only used if expecting a response)
+#   read_timeout = "1s"
+
+
+# # Read TCP metrics such as established, time wait and sockets counts.
+# [[inputs.netstat]]
+#   # no configuration
+
+
+# # Read Nginx's basic status information (ngx_http_stub_status_module)
+# [[inputs.nginx]]
+#   ## An array of Nginx stub_status URI to gather stats.
+#   urls = ["http://localhost/status";]
+
+
+# # Read NSQ topic and channel statistics.
+# [[inputs.nsq]]
+#   ## An array of NSQD HTTP API endpoints
+#   endpoints = ["http://localhost:4151";]
+
+
+# # Collect kernel snmp counters and network interface statistics
+# [[inputs.nstat]]
+#   ## file paths for proc files. If empty default paths will be used:
+#   ##    /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
+#   ## These can also be overridden with env variables, see README.
+#   proc_net_netstat = "/proc/net/netstat"
+#   proc_net_snmp = "/proc/net/snmp"
+#   proc_net_snmp6 = "/proc/net/snmp6"
+#   ## dump metrics with 0 values too
+#   dump_zeros       = true
+
+
+# # Get standard NTP query metrics, requires ntpq executable.
+# [[inputs.ntpq]]
+#   ## If false, set the -n ntpq flag. Can reduce metric gather time.
+#   dns_lookup = true
+
+
+# # Read metrics of passenger using passenger-status
+# [[inputs.passenger]]
+#   ## Path of passenger-status.
+#   ##
+#   ## Plugin gather metric via parsing XML output of passenger-status
+#   ## More information about the tool:
+#   ##   https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
+#   ##
+#   ## If no path is specified, then the plugin simply execute passenger-status
+#   ## hopefully it can be found in your PATH
+#   command = "passenger-status -v --show=xml"
+
+
+# # Read metrics from one or many pgbouncer servers
+# [[inputs.pgbouncer]]
+#   ## specify address via a url matching:
+#   ##   postgres://[pqgotest[:password]]@localhost:port[/dbname]\
+#   ##       ?sslmode=[disable|verify-ca|verify-full]
+#   ## or a simple string:
+#   ##   host=localhost user=pqotest port=6432 password=... sslmode=... dbname=pgbouncer
+#   ##
+#   ## All connection parameters are optional, except for dbname,
+#   ## you need to set it always as pgbouncer.
+#   address = "host=localhost user=postgres port=6432 sslmode=disable dbname=pgbouncer"
+#
+#   ## A list of databases to pull metrics about. If not specified, metrics for all
+#   ## databases are gathered.
+#   # databases = ["app_production", "testing"]
+
+
+# # Read metrics of phpfpm, via HTTP status page or socket
+# [[inputs.phpfpm]]
+#   ## An array of addresses to gather stats about. Specify an ip or hostname
+#   ## with optional port and path
+#   ##
+#   ## Plugin can be configured in three modes (either can be used):
+#   ##   - http: the URL must start with http:// or https://, ie:
+#   ##       "http://localhost/status";
+#   ##       "http://192.168.130.1/status?full";
+#   ##
+#   ##   - unixsocket: path to fpm socket, ie:
+#   ##       "/var/run/php5-fpm.sock"
+#   ##      or using a custom fpm status path:
+#   ##       "/var/run/php5-fpm.sock:fpm-custom-status-path"
+#   ##
+#   ##   - fcgi: the URL must start with fcgi:// or cgi://, and port must be present, ie:
+#   ##       "fcgi://10.0.0.12:9000/status"
+#   ##       "cgi://10.0.10.12:9001/status"
+#   ##
+#   ## Example of multiple gathering from local socket and remove host
+#   ## urls = ["http://192.168.1.20/status";, "/tmp/fpm.sock"]
+#   urls = ["http://localhost/status";]
+
+
+# # Ping given url(s) and return statistics
+# [[inputs.ping]]
+#   ## NOTE: this plugin forks the ping command. You may need to set capabilities
+#   ## via setcap cap_net_raw+p /bin/ping
+#   #
+#   ## urls to ping
+#   urls = ["www.google.com"] # required
+#   ## number of pings to send per collection (ping -c <COUNT>)
+#   count = 1 # required
+#   ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
+#   ping_interval = 0.0
+#   ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
+#   timeout = 1.0
+#   ## interface to send ping from (ping -I <INTERFACE>)
+#   interface = ""
+
+
+# # Read metrics from one or many postgresql servers
+# [[inputs.postgresql]]
+#   ## specify address via a url matching:
+#   ##   postgres://[pqgotest[:password]]@localhost[/dbname]\
+#   ##       ?sslmode=[disable|verify-ca|verify-full]
+#   ## or a simple string:
+#   ##   host=localhost user=pqotest password=... sslmode=... dbname=app_production
+#   ##
+#   ## All connection parameters are optional.
+#   ##
+#   ## Without the dbname parameter, the driver will default to a database
+#   ## with the same name as the user. This dbname is just for instantiating a
+#   ## connection with the server and doesn't restrict the databases we are trying
+#   ## to grab metrics for.
+#   ##
+#   address = "host=localhost user=postgres sslmode=disable"
+#
+#   ## A list of databases to pull metrics about. If not specified, metrics for all
+#   ## databases are gathered.
+#   # databases = ["app_production", "testing"]
+
+
+# # Read metrics from one or many postgresql servers
+# [[inputs.postgresql_extensible]]
+#   ## specify address via a url matching:
+#   ##   postgres://[pqgotest[:password]]@localhost[/dbname]\
+#   ##       ?sslmode=[disable|verify-ca|verify-full]
+#   ## or a simple string:
+#   ##   host=localhost user=pqotest password=... sslmode=... dbname=app_production
+#   #
+#   ## All connection parameters are optional.  #
+#   ## Without the dbname parameter, the driver will default to a database
+#   ## with the same name as the user. This dbname is just for instantiating a
+#   ## connection with the server and doesn't restrict the databases we are trying
+#   ## to grab metrics for.
+#   #
+#   address = "host=localhost user=postgres sslmode=disable"
+#   ## A list of databases to pull metrics about. If not specified, metrics for all
+#   ## databases are gathered.
+#   ## databases = ["app_production", "testing"]
+#   #
+#   # outputaddress = "db01"
+#   ## A custom name for the database that will be used as the "server" tag in the
+#   ## measurement output. If not specified, a default one generated from
+#   ## the connection address is used.
+#   #
+#   ## Define the toml config where the sql queries are stored
+#   ## New queries can be added, if the withdbname is set to true and there is no
+#   ## databases defined in the 'databases field', the sql query is ended by a
+#   ## 'is not null' in order to make the query succeed.
+#   ## Example :
+#   ## The sqlquery : "SELECT * FROM pg_stat_database where datname" become
+#   ## "SELECT * FROM pg_stat_database where datname IN ('postgres', 'pgbench')"
+#   ## because the databases variable was set to ['postgres', 'pgbench' ] and the
+#   ## withdbname was true. Be careful that if the withdbname is set to false you
+#   ## don't have to define the where clause (aka with the dbname) the tagvalue
+#   ## field is used to define custom tags (separated by commas)
+#   ## The optional "measurement" value can be used to override the default
+#   ## output measurement name ("postgresql").
+#   #
+#   ## Structure :
+#   ## [[inputs.postgresql_extensible.query]]
+#   ##   sqlquery string
+#   ##   version string
+#   ##   withdbname boolean
+#   ##   tagvalue string (comma separated)
+#   ##   measurement string
+#   [[inputs.postgresql_extensible.query]]
+#     sqlquery="SELECT * FROM pg_stat_database"
+#     version=901
+#     withdbname=false
+#     tagvalue=""
+#     measurement=""
+#   [[inputs.postgresql_extensible.query]]
+#     sqlquery="SELECT * FROM pg_stat_bgwriter"
+#     version=901
+#     withdbname=false
+#     tagvalue="postgresql.stats"
+
+
+# # Read metrics from one or many PowerDNS servers
+# [[inputs.powerdns]]
+#   ## An array of sockets to gather stats about.
+#   ## Specify a path to unix socket.
+#   unix_sockets = ["/var/run/pdns.controlsocket"]
+
+
+# # Monitor process cpu and memory usage
+# [[inputs.procstat]]
+#   ## Must specify one of: pid_file, exe, or pattern
+#   ## PID file to monitor process
+#   pid_file = "/var/run/nginx.pid"
+#   ## executable name (ie, pgrep <exe>)
+#   # exe = "nginx"
+#   ## pattern as argument for pgrep (ie, pgrep -f <pattern>)
+#   # pattern = "nginx"
+#   ## user as argument for pgrep (ie, pgrep -u <user>)
+#   # user = "nginx"
+#
+#   ## override for process_name
+#   ## This is optional; default is sourced from /proc/<pid>/status
+#   # process_name = "bar"
+#   ## Field name prefix
+#   prefix = ""
+#   ## comment this out if you want raw cpu_time stats
+#   fielddrop = ["cpu_time_*"]
+
+
+# # Reads last_run_summary.yaml file and converts to measurments
+# [[inputs.puppetagent]]
+#   ## Location of puppet last run summary file
+#   location = "/var/lib/puppet/state/last_run_summary.yaml"
+
+
+# # Read metrics from one or many RabbitMQ servers via the management API
+# [[inputs.rabbitmq]]
+#   # url = "http://localhost:15672";
+#   # name = "rmq-server-1" # optional tag
+#   # username = "guest"
+#   # password = "guest"
+#
+#   ## Optional SSL Config
+#   # ssl_ca = "/etc/telegraf/ca.pem"
+#   # ssl_cert = "/etc/telegraf/cert.pem"
+#   # ssl_key = "/etc/telegraf/key.pem"
+#   ## Use SSL but skip chain & host verification
+#   # insecure_skip_verify = false
+#
+#   ## A list of nodes to pull metrics about. If not specified, metrics for
+#   ## all nodes are gathered.
+#   # nodes = ["rabbit@node1", "rabbit@node2"]
+
+
+# # Read raindrops stats (raindrops - real-time stats for preforking Rack servers)
+# [[inputs.raindrops]]
+#   ## An array of raindrops middleware URI to gather stats.
+#   urls = ["http://localhost:8080/_raindrops";]
+
+
+# # Read metrics from one or many redis servers
+# [[inputs.redis]]
+#   ## specify servers via a url matching:
+#   ##  [protocol://][:password]@address[:port]
+#   ##  e.g.
+#   ##    tcp://localhost:6379
+#   ##    tcp://:password@192.168.99.100
+#   ##    unix:///var/run/redis.sock
+#   ##
+#   ## If no servers are specified, then localhost is used as the host.
+#   ## If no port is specified, 6379 is used
+#   servers = ["tcp://localhost:6379"]
+
+
+# # Read metrics from one or many RethinkDB servers
+# [[inputs.rethinkdb]]
+#   ## An array of URI to gather stats about. Specify an ip or hostname
+#   ## with optional port add password. ie,
+#   ##   rethinkdb://user:auth_key@10.10.3.30:28105,
+#   ##   rethinkdb://10.10.3.33:18832,
+#   ##   10.0.0.1:10000, etc.
+#   servers = ["127.0.0.1:28015"]
+
+
+# # Read metrics one or many Riak servers
+# [[inputs.riak]]
+#   # Specify a list of one or more riak http servers
+#   servers = ["http://localhost:8098";]
+
+
+# # DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
+# [[inputs.snmp_legacy]]
+#   ## Use 'oids.txt' file to translate oids to names
+#   ## To generate 'oids.txt' you need to run:
+#   ##   snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
+#   ## Or if you have an other MIB folder with custom MIBs
+#   ##   snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
+#   snmptranslate_file = "/tmp/oids.txt"
+#   [[inputs.snmp.host]]
+#     address = "192.168.2.2:161"
+#     # SNMP community
+#     community = "public" # default public
+#     # SNMP version (1, 2 or 3)
+#     # Version 3 not supported yet
+#     version = 2 # default 2
+#     # SNMP response timeout
+#     timeout = 2.0 # default 2.0
+#     # SNMP request retries
+#     retries = 2 # default 2
+#     # Which get/bulk do you want to collect for this host
+#     collect = ["mybulk", "sysservices", "sysdescr"]
+#     # Simple list of OIDs to get, in addition to "collect"
+#     get_oids = []
+#
+#   [[inputs.snmp.host]]
+#     address = "192.168.2.3:161"
+#     community = "public"
+#     version = 2
+#     timeout = 2.0
+#     retries = 2
+#     collect = ["mybulk"]
+#     get_oids = [
+#         "ifNumber",
+#         ".1.3.6.1.2.1.1.3.0",
+#     ]
+#
+#   [[inputs.snmp.get]]
+#     name = "ifnumber"
+#     oid = "ifNumber"
+#
+#   [[inputs.snmp.get]]
+#     name = "interface_speed"
+#     oid = "ifSpeed"
+#     instance = "0"
+#
+#   [[inputs.snmp.get]]
+#     name = "sysuptime"
+#     oid = ".1.3.6.1.2.1.1.3.0"
+#     unit = "second"
+#
+#   [[inputs.snmp.bulk]]
+#     name = "mybulk"
+#     max_repetition = 127
+#     oid = ".1.3.6.1.2.1.1"
+#
+#   [[inputs.snmp.bulk]]
+#     name = "ifoutoctets"
+#     max_repetition = 127
+#     oid = "ifOutOctets"
+#
+#   [[inputs.snmp.host]]
+#     address = "192.168.2.13:161"
+#     #address = "127.0.0.1:161"
+#     community = "public"
+#     version = 2
+#     timeout = 2.0
+#     retries = 2
+#     #collect = ["mybulk", "sysservices", "sysdescr", "systype"]
+#     collect = ["sysuptime" ]
+#     [[inputs.snmp.host.table]]
+#       name = "iftable3"
+#       include_instances = ["enp5s0", "eth1"]
+#
+#   # SNMP TABLEs
+#   # table without mapping neither subtables
+#   [[inputs.snmp.table]]
+#     name = "iftable1"
+#     oid = ".1.3.6.1.2.1.31.1.1.1"
+#
+#   # table without mapping but with subtables
+#   [[inputs.snmp.table]]
+#     name = "iftable2"
+#     oid = ".1.3.6.1.2.1.31.1.1.1"
+#     sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
+#
+#   # table with mapping but without subtables
+#   [[inputs.snmp.table]]
+#     name = "iftable3"
+#     oid = ".1.3.6.1.2.1.31.1.1.1"
+#     # if empty. get all instances
+#     mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
+#     # if empty, get all subtables
+#
+#   # table with both mapping and subtables
+#   [[inputs.snmp.table]]
+#     name = "iftable4"
+#     oid = ".1.3.6.1.2.1.31.1.1.1"
+#     # if empty get all instances
+#     mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
+#     # if empty get all subtables
+#     # sub_tables could be not "real subtables"
+#     sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
+
+
+# # Inserts sine and cosine waves for demonstration purposes
+# [[inputs.trig]]
+#   ## Set the amplitude
+#   amplitude = 10.0
+
+
+# # Read Twemproxy stats data
+# [[inputs.twemproxy]]
+#   ## Twemproxy stats address and port (no scheme)
+#   addr = "localhost:22222"
+#   ## Monitor pool name
+#   pools = ["redis_pool", "mc_pool"]
+
+
+# # A plugin to collect stats from Varnish HTTP Cache
+# [[inputs.varnish]]
+#   ## The default location of the varnishstat binary can be overridden with:
+#   binary = "/usr/bin/varnishstat"
+#
+#   ## By default, telegraf gather stats for 3 metric points.
+#   ## Setting stats will override the defaults shown below.
+#   ## Glob matching can be used, ie, stats = ["MAIN.*"]
+#   ## stats may also be set to ["*"], which will collect all stats
+#   stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
+
+
+# # Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, and pools
+# [[inputs.zfs]]
+#   ## ZFS kstat path. Ignored on FreeBSD
+#   ## If not specified, then default is:
+#   # kstatPath = "/proc/spl/kstat/zfs"
+#
+#   ## By default, telegraf gather all zfs stats
+#   ## If not specified, then default is:
+#   # kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]
+#
+#   ## By default, don't gather zpool stats
+#   # poolMetrics = false
+
+
+# # Reads 'mntr' stats from one or many zookeeper servers
+# [[inputs.zookeeper]]
+#   ## An array of address to gather stats about. Specify an ip or hostname
+#   ## with port. ie localhost:2181, 10.0.0.1:2181, etc.
+#
+#   ## If no servers are specified, then localhost is used as the host.
+#   ## If no port is specified, 2181 is used
+#   servers = [":2181"]
+
+
+
+###############################################################################
+#                            SERVICE INPUT PLUGINS                            #
+###############################################################################
+
+# # Stream and parse log file(s).
+# [[inputs.logparser]]
+#   ## Log files to parse.
+#   ## These accept standard unix glob matching rules, but with the addition of
+#   ## ** as a "super asterisk". ie:
+#   ##   /var/log/**.log     -> recursively find all .log files in /var/log
+#   ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
+#   ##   /var/log/apache.log -> only tail the apache log file
+#   files = ["/var/log/apache/access.log"]
+#   ## Read file from beginning.
+#   from_beginning = false
+#
+#   ## Parse logstash-style "grok" patterns:
+#   ##   Telegraf built-in parsing patterns: https://goo.gl/dkay10
+#   [inputs.logparser.grok]
+#     ## This is a list of patterns to check the given log file(s) for.
+#     ## Note that adding patterns here increases processing time. The most
+#     ## efficient configuration is to have one pattern per logparser.
+#     ## Other common built-in patterns are:
+#     ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
+#     ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
+#     patterns = ["%{COMBINED_LOG_FORMAT}"]
+#     ## Name of the outputted measurement name.
+#     measurement = "apache_access_log"
+#     ## Full path(s) to custom pattern files.
+#     custom_pattern_files = []
+#     ## Custom patterns can also be defined here. Put one pattern per line.
+#     custom_patterns = '''
+#     '''
+
+
+# # Statsd Server
+# [[inputs.statsd]]
+#   ## Address and port to host UDP listener on
+#   service_address = ":8125"
+#   ## Delete gauges every interval (default=false)
+#   delete_gauges = false
+#   ## Delete counters every interval (default=false)
+#   delete_counters = false
+#   ## Delete sets every interval (default=false)
+#   delete_sets = false
+#   ## Delete timings & histograms every interval (default=true)
+#   delete_timings = true
+#   ## Percentiles to calculate for timing & histogram stats
+#   percentiles = [90]
+#
+#   ## separator to use between elements of a statsd metric
+#   metric_separator = "_"
+#
+#   ## Parses tags in the datadog statsd format
+#   ## http://docs.datadoghq.com/guides/dogstatsd/
+#   parse_data_dog_tags = false
+#
+#   ## Statsd data translation templates, more info can be read here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite
+#   # templates = [
+#   #     "cpu.* measurement*"
+#   # ]
+#
+#   ## Number of UDP messages allowed to queue up, once filled,
+#   ## the statsd server will start dropping packets
+#   allowed_pending_messages = 10000
+#
+#   ## Number of timing/histogram values to track per-measurement in the
+#   ## calculation of percentiles. Raising this limit increases the accuracy
+#   ## of percentiles but also increases the memory usage and cpu time.
+#   percentile_limit = 1000
+
+
+# # Stream a log file, like the tail -f command
+# [[inputs.tail]]
+#   ## files to tail.
+#   ## These accept standard unix glob matching rules, but with the addition of
+#   ## ** as a "super asterisk". ie:
+#   ##   "/var/log/**.log"  -> recursively find all .log files in /var/log
+#   ##   "/var/log/*/*.log" -> find all .log files with a parent dir in /var/log
+#   ##   "/var/log/apache.log" -> just tail the apache log file
+#   ##
+#   ## See https://github.com/gobwas/glob for more examples
+#   ##
+#   files = ["/var/mymetrics.out"]
+#   ## Read file from beginning.
+#   from_beginning = false
+#
+#   ## Data format to consume.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+#   data_format = "influx"
+
+
+# # Generic TCP listener
+# [[inputs.tcp_listener]]
+#   ## Address and port to host TCP listener on
+#   service_address = ":8094"
+#
+#   ## Number of TCP messages allowed to queue up. Once filled, the
+#   ## TCP listener will start dropping packets.
+#   allowed_pending_messages = 10000
+#
+#   ## Maximum number of concurrent TCP connections to allow
+#   max_tcp_connections = 250
+#
+#   ## Data format to consume.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+#   data_format = "influx"
+
+
+# # Generic UDP listener
+# [[inputs.udp_listener]]
+#   ## Address and port to host UDP listener on
+#   service_address = ":8092"
+#
+#   ## Number of UDP messages allowed to queue up. Once filled, the
+#   ## UDP listener will start dropping packets.
+#   allowed_pending_messages = 10000
+#
+#   ## Data format to consume.
+#   ## Each data format has it's own unique set of configuration options, read
+#   ## more about them here:
+#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+#   data_format = "influx"
+
+
+# # A Webhooks Event collector
+# [[inputs.webhooks]]
+#   ## Address and port to host Webhook listener on
+#   service_address = ":1619"
+#
+#   [inputs.webhooks.github]
+#     path = "/github"
+#
+#   [inputs.webhooks.mandrill]
+#     path = "/mandrill"
+#
+#   [inputs.webhooks.rollbar]
+#     path = "/rollbar"
+
diff --git a/debian/telegraf.dirs b/debian/telegraf.dirs
new file mode 100644
index 0000000..653443e
--- /dev/null
+++ b/debian/telegraf.dirs
@@ -0,0 +1,3 @@
+etc/telegraf/telegraf.d
+var/lib/telegraf
+var/log/telegraf
diff --git a/debian/telegraf.init b/debian/telegraf.init
new file mode 100755
index 0000000..dedb6b2
--- /dev/null
+++ b/debian/telegraf.init
@@ -0,0 +1,121 @@
+#!/bin/sh
+
+### BEGIN INIT INFO
+# Provides:          telegraf
+# Required-Start:    $network $local_fs $remote_fs
+# Required-Stop:     $network $local_fs $remote_fs $syslog
+# Default-Start:     2 3 4 5
+# Default-Stop:      0 1 6
+# Short-Description: Start telegraf at boot time
+### END INIT INFO
+
+# Command-line options that can be set in /etc/default/telegraf.  These will override
+# any config file values.
+TELEGRAF_OPTS=
+
+NAME=telegraf
+DESC="monitoring daemon"
+
+USER=_telegraf
+GROUP=_telegraf
+
+if [ -r /lib/lsb/init-functions ]; then
+    . /lib/lsb/init-functions
+fi
+
+DEFAULT=/etc/default/$NAME
+
+if [ -r $DEFAULT ]; then
+    . $DEFAULT
+fi
+
+if [ -z "$STDOUT" ]; then
+    STDOUT=/dev/null
+fi
+if [ ! -f "$STDOUT" ]; then
+    mkdir -p `dirname $STDOUT`
+fi
+
+if [ -z "$STDERR" ]; then
+    STDERR=/var/log/$NAME/$NAME.log
+fi
+if [ ! -f "$STDERR" ]; then
+    mkdir -p `dirname $STDERR`
+fi
+
+OPEN_FILE_LIMIT=65536
+
+# Daemon name, where is the actual executable
+DAEMON=/usr/bin/$NAME
+
+# pid file for the daemon
+PIDFILE=/var/run/$NAME/$NAME.pid
+PIDDIR=`dirname $PIDFILE`
+
+if [ ! -d "$PIDDIR" ]; then
+    mkdir -p $PIDDIR
+    chown $USER:$GROUP $PIDDIR
+fi
+
+# Configuration file
+CONFIG=/etc/$NAME/$NAME.conf
+CONFDIR=/etc/$NAME/$NAME.d
+
+# If the daemon is not there, then exit.
+[ -x $DAEMON ] || exit 5
+
+case $1 in
+    start)
+        log_daemon_msg "Starting $DESC" "$NAME"
+
+        # Bump the file limits, before launching the daemon. These will carry over to
+        # launched processes.
+        ulimit -n $OPEN_FILE_LIMIT
+        if [ $? -ne 0 ]; then
+            log_failure_msg "set open file limit to $OPEN_FILE_LIMIT"
+            exit 1
+        fi
+
+        start-stop-daemon --start --quiet --oknodo --exec $DAEMON \
+          --user $USER --chuid $USER:$GROUP \
+          --pidfile $PIDFILE --background --no-close -- \
+          -pidfile $PIDFILE -config $CONFIG -config-directory $CONFDIR \
+          $TELEGRAF_OPTS >>$STDOUT 2>>$STDERR
+
+        log_end_msg $?
+        ;;
+
+    stop)
+        log_daemon_msg "Stopping $DESC" "$NAME"
+
+        start-stop-daemon --stop --quiet --oknodo --exec $DAEMON \
+          --user $USER --pidfile $PIDFILE
+
+        log_end_msg $?
+        ;;
+
+    reload)
+        # Reload the daemon.
+        log_daemon_msg "Reloading $DESC" "$NAME"
+
+        start-stop-daemon --stop --quiet --oknodo --signal HUP \
+          --exec $DAEMON --user $USER --pidfile $PIDFILE
+
+        log_end_msg $?
+        ;;
+
+    restart)
+        # Restart the daemon.
+        $0 stop && sleep 2 && $0 start
+        ;;
+
+    status)
+        status_of_proc $DAEMON $NAME
+        ;;
+
+    *)
+        # For invalid arguments, print the usage message.
+        echo "Usage: $0 {start|stop|restart|reload|force-reload|status}"
+        exit 2
+        ;;
+esac
diff --git a/debian/telegraf.install b/debian/telegraf.install
new file mode 100644
index 0000000..0d396f9
--- /dev/null
+++ b/debian/telegraf.install
@@ -0,0 +1,2 @@
+usr/bin/telegraf usr/bin
+debian/telegraf.conf etc/telegraf
diff --git a/debian/telegraf.lintian-overrides b/debian/telegraf.lintian-overrides
new file mode 100644
index 0000000..c80539d
--- /dev/null
+++ b/debian/telegraf.lintian-overrides
@@ -0,0 +1,3 @@
+telegraf: hardening-no-bindnow usr/bin/telegraf
+telegraf: hardening-no-fortify-functions usr/bin/telegraf
+telegraf: hardening-no-pie usr/bin/telegraf
diff --git a/debian/telegraf.logrotate b/debian/telegraf.logrotate
new file mode 100644
index 0000000..a433698
--- /dev/null
+++ b/debian/telegraf.logrotate
@@ -0,0 +1,10 @@
+/var/log/telegraf/telegraf.log
+{
+    daily
+    rotate 7
+    missingok
+    dateext
+    copytruncate
+    notifempty
+    compress
+}
diff --git a/debian/telegraf.postinst b/debian/telegraf.postinst
new file mode 100644
index 0000000..6e42fcf
--- /dev/null
+++ b/debian/telegraf.postinst
@@ -0,0 +1,42 @@
+#!/bin/sh
+
+set -e
+
+USER=_telegraf
+GROUP=_telegraf
+HOME=/var/lib/telegraf
+LOGDIR=/var/log/telegraf
+
+case "$1" in
+configure|reconfigure)
+  # Create a group and user for telegraf.
+  if ! getent passwd $USER >/dev/null; then
+    adduser --system --no-create-home --home $HOME --force-badname $USER
+    addgroup --system --force-badname $GROUP
+    adduser $USER $GROUP
+  fi
+
+  if [ -d $HOME ]; then
+    chown -R $USER:$GROUP $HOME
+  fi
+
+  if [ -d $LOGDIR ]; then
+    chown -R $USER:$GROUP $LOGDIR
+  fi
+  ;;
+
+abort-upgrade|abort-remove|abort-deconfigure)
+  ;;
+
+*)
+  echo "postinst called with unknown argument '$1'" >&2
+  exit 1
+  ;;
+esac
+
+# dh_installdeb will replace this with shell code automatically
+# generated by other debhelper scripts.
+
+#DEBHELPER#
+
+exit 0
diff --git a/debian/telegraf.postrm b/debian/telegraf.postrm
new file mode 100644
index 0000000..0000f5c
--- /dev/null
+++ b/debian/telegraf.postrm
@@ -0,0 +1,28 @@
+#!/bin/sh
+
+set -e
+
+case "$1" in
+purge|remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
+  # Always remove /var/run dir.
+  rm -rf /var/run/telegraf
+
+  # Only remove /var/lib and /var/log dirs on purge.
+  if [ "$1" = "purge" ] ; then
+    rm -rf /var/lib/telegraf
+    rm -rf /var/log/telegraf
+  fi
+  ;;
+
+*)
+  echo "postrm called with unknown argument '$1'" >&2
+  exit 1
+  ;;
+esac
+
+# dh_installdeb will replace this with shell code automatically
+# generated by other debhelper scripts.
+
+#DEBHELPER#
+
+exit 0
-- 
2.9.3


Reply to: