
Dokku Maintenance

Logging

Dokku now supports Vector. To use Vector for logging, run the following command to start the Vector container:

dokku logs:vector-start
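
If you want to confirm the container is running, it is simply named vector, so a plain docker command (not a dokku one) works:

docker ps --filter name=vector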

There are multiple ways to have logs processed and pushed to different storage backends; please check the references for more information. In my example, I use MinIO (an S3-compatible storage service).

The configuration is a bit tricky to write by hand because of the URL encoding, so I came up with a Python snippet that generates the dokku command for the config.

from urllib.parse import urlencode

# Options for the Vector aws_s3 sink; nested options use bracket syntax.
config = {
    "endpoint": "https://example-minio-url",
    "bucket": "example-bucket",
    "region": "us-east-1",
    "compression": "gzip",
    "auth[access_key_id]": "aws-access-key",
    "auth[secret_access_key]": "aws-access-secret",
    "encoding[codec]": "text",
    "key_prefix": "%F",
}

# URL-encode the options and embed them into the dokku logs:set command.
query = urlencode(config)
command = 'dokku logs:set --global vector-sink "aws_s3://?%s"' % query

print(command)
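
With the placeholder values above, the snippet prints a command along these lines (the percent-encoding is simply whatever urlencode produces):

dokku logs:set --global vector-sink "aws_s3://?endpoint=https%3A%2F%2Fexample-minio-url&bucket=example-bucket&region=us-east-1&compression=gzip&auth%5Baccess_key_id%5D=aws-access-key&auth%5Bsecret_access_key%5D=aws-access-secret&encoding%5Bcodec%5D=text&key_prefix=%25F"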

After the command runs, you can check the generated config:

cat /var/lib/dokku/data/logs/vector.json

It looks like this:

{
  "sources": {
    "docker-global-source": {
      "type": "docker_logs",
      "include_labels": [
        "com.dokku.app-name"
      ]
    }
  },
  "sinks": {
    "docker-global-sink": {
      "auth": {
        "access_key_id": "aws-access-key",
        "secret_access_key": "aws-access-secret"
      },
      "bucket": "example-bucket",
      "region": "us-east-1",
      "compression": "gzip",
      "encoding": {
        "codec": "text"
      },
      "endpoint": "https://example-minio-url",
      "inputs": [
        "docker-global-source"
      ],
      "key_prefix": "%F",
      "type": "aws_s3"
    }
  }
}

Notice docker-global-source in this JSON file: the inputs value of the sink matches the key in sources. Both are filled in by Dokku, so if you specify inputs in your config command, it will be ignored.

Now check the logs from the Vector container:

docker logs -f vector

If Vector is configured correctly, you should see logs similar to these:

Jul 28 07:48:28.139  INFO vector::config::watcher: Configuration file changed.
Jul 28 07:48:28.147  INFO vector::topology::running: Waiting for sources to finish shutting down. timeout=30
Jul 28 07:48:28.147  INFO vector::topology::running: Removing source. key=docker-null-source
Jul 28 07:48:28.147  INFO vector::topology::running: Waiting for up to 30 seconds for sources to finish shutting down.
Jul 28 07:48:28.147  INFO vector::topology::running: Removing sink. key=docker-null-sink
Jul 28 07:48:28.147  INFO vector::sinks::blackhole: Total events collected events=0 raw_bytes_collected=0
Jul 28 07:48:28.147  INFO vector::sources::docker_logs: Capturing logs from now on. now=2022-07-28T07:48:28.147827782+00:00
Jul 28 07:48:28.148  INFO vector::sources::docker_logs: Listening to docker log events.
Jul 28 07:48:28.198  WARN vector::sinks::util::service: Option `in_flight_limit` has been renamed to `concurrency`. Ignoring `in_flight_limit` and using `concurrency` option.
Jul 28 07:48:28.198  INFO vector::topology::running: Running healthchecks.
Jul 28 07:48:28.199  INFO vector::topology::running: Starting source. key=docker-global-source
Jul 28 07:48:28.199  INFO vector::topology::running: Starting sink. key=docker-global-sink
Jul 28 07:48:28.199  INFO vector: Vector has reloaded. path=[File("/etc/vector/vector.json", None)]
Jul 28 07:48:28.209  INFO vector::internal_events::docker_logs: Started watching for container logs. container_id=xxx
Jul 28 07:48:28.209  INFO vector::internal_events::docker_logs: Started watching for container logs. container_id=xxx
Jul 28 07:48:28.852  INFO vector::topology::builder: Healthcheck: Passed.

If you do not see any logs in your bucket yet, stay calm and wait for them to be flushed from the buffer; see the Buffers and batches section of the Vector documentation.
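
If you just want objects to appear in the bucket sooner while testing, one possibility is to shorten the sink's batch timeout. This is a sketch only: it assumes the aws_s3 sink's batch[timeout_secs] option can be passed through the same bracket syntax the snippet already uses for auth and encoding, which I have not verified.

# Hypothetical addition to the config dict above: ask Vector to flush
# batches after 60 seconds instead of waiting for the default batch
# size/timeout before uploading to the bucket.
config["batch[timeout_secs]"] = "60"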

Before I added "key_prefix": "%F", the following error showed up after I had waited long enough for the logs to be flushed. The reason is that the default value of key_prefix is date=%F/, which produces object names that do not match MinIO's naming requirements.

Jul 28 08:05:24.492 ERROR sink{component_kind="sink" component_id=docker-global-sink component_type=aws_s3 component_name=docker-global-sink}:request{request_id=0}: vector::sinks::util::retries: Non-retriable error; dropping the request. error=Request ID: None Body: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>XMinioInvalidObjectName</Code><Message>Object name contains unsupported characters.</Message><Key>date=2022-07-28//1658995524-50501972-0f75-4f88-b013-6f2e97e1b355.log.gz</Key><BucketName>example-bucket</BucketName><Resource>/example-bucket/date=2022-07-28//1658995524-50501972-0f75-4f88-b013-6f2e97e1b355.log.gz</Resource><RequestId>1705EFA3C99FE394</RequestId><HostId>8a1c3062-386f-42b4-9731-a0f5d9479e42</HostId></Error>
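
For context, key_prefix takes strftime-style specifiers, so %F expands to the current date. A quick Python check (assuming a platform whose strftime supports %F, e.g. glibc on Linux) shows the difference between the default prefix and the override:

from datetime import datetime, timezone

now = datetime.now(timezone.utc)
print(now.strftime("date=%F/"))  # default prefix, e.g. date=2022-07-28/
print(now.strftime("%F"))        # overridden prefix, e.g. 2022-07-28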

Update (2022-08-18): It seems that the region config option is now required, but since we are using MinIO here, it can be set to any value and should be ignored.

References

  • https://vector.dev/docs/reference/configuration/sinks/aws_s3/
  • https://dokku.com/docs/deployment/logs/